Seven "Sweet Spots" Presented at BICA-16


The HaveNWant Architecture continues in the tradition of Minsky's Society of Mind and fixes a major problem with it: the theory was reductionist, breaking intelligence into finer and finer pieces, but never said what those pieces were. His K-Line theory attempted to fix this, but got messy handling retrograde signaling, and in the mid '70s such research was supplanted by Neural Networks. HaveNWant goes back and fixes this by automatically generating retrograde signals, such as backprop and reification, using bidirectional links in a master-slave hierarchy. This paper shows in simulation seven "sweet spots" that perform needed AI functions. Included is a network for animal brains called the Mazlovian Multiplexor, which generates cascadable and scalable actors.
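The core mechanism can be sketched minimally as follows, assuming a toy activation-averaging rule. The names (Atom, feed_up, feed_down) are hypothetical, and this is not the Factal Workbench implementation; it only illustrates how one bidirectional link carries evidence upward and expectations back down.

```python
# Minimal sketch of a bidirectional master-slave link (hypothetical names;
# not the Factal Workbench implementation).
class Atom:
    def __init__(self, name, slaves=None):
        self.name = name
        self.slaves = slaves or []   # atoms below this one in the hierarchy
        self.activation = 0.0

    def feed_up(self):
        """Anterograde: a master's activation is evidence gathered from its slaves."""
        for s in self.slaves:
            s.feed_up()
        if self.slaves:
            self.activation = sum(s.activation for s in self.slaves) / len(self.slaves)
        return self.activation

    def feed_down(self, expectation):
        """Retrograde: push an expectation back down, reifying it in the slaves."""
        self.activation = expectation
        for s in self.slaves:
            s.feed_down(expectation)

# A master over two sensory atoms: the same links carry signals both ways.
dark, wet = Atom("dark-sky"), Atom("wet-ground")
rain = Atom("rain", [dark, wet])
dark.activation = 1.0            # observe a dark sky...
print(rain.feed_up())            # ...evidence flows up to "rain" (0.5)
rain.feed_down(1.0)              # asserting "rain" flows back down...
print(wet.activation)            # ...reifying an expectation of wet ground (1.0)
```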


1. Paper (8 pages, PDF), submitted to the BICA Journal 4/20/16


2. Video of the BICA-16 presentation. Greenwich Village, NYC, 7/16/2016

3. A short video "Trailer" for the paper:



Seven Simulations in the paper:

Video demos of the Factal Workbench in operation follow. (The software runs on a Mac and can be downloaded in source or binary form.)




1. Umbrella is a simple two-atom network illustrating the power of bidirectional logic. Two scenarios show this single network being used deductively and inductively; a sketch follows the note below.

See also the 3-bit Assimilator in Factal Workbench-Early Simulations
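As a toy illustration of how a single link serves both directions, consider the following sketch. The weight value and the function names are assumptions for illustration, not taken from the paper:

```python
# Hypothetical sketch of the Umbrella network: one bidirectional link
# between two atoms, usable in either direction.
weights = {("rain", "umbrella"): 0.9}   # a single learned bidirectional link

def deduce(cause_active):
    """Deduction: given the cause, predict the effect."""
    return cause_active * weights[("rain", "umbrella")]

def induce(effect_active):
    """Induction: given the effect, infer the likely cause over the same link."""
    return effect_active * weights[("rain", "umbrella")]

print(deduce(1.0))   # it is raining -> expect umbrellas (0.9)
print(induce(1.0))   # umbrellas everywhere -> infer rain (0.9)
```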







2. Stepper Motor network demonstrates how a shaft-encoder network of 17 factals can drive a stepper motor and cause it to spin in either direction on command.

The second video shows a similar network simulated by the Factal Workbench.  It has 6 poles instead of 3.
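The drive pattern itself is easy to sketch independently of the factal network: a stepper spins when its poles are energized in cyclic order, and reversing that order reverses the spin. The stand-alone sketch below (not the 17-factal network) shows the idea for both the 3-pole and 6-pole cases:

```python
# Illustrative sketch: cycling pole activations spins a stepper; reversing
# the cycle order reverses the spin. (Not the 17-factal network itself.)
def step_sequence(poles, steps, direction=+1):
    """Yield which pole to energize on each step; direction is +1 or -1."""
    pole = 0
    for _ in range(steps):
        yield pole
        pole = (pole + direction) % poles

print(list(step_sequence(poles=3, steps=6, direction=+1)))  # [0, 1, 2, 0, 1, 2]
print(list(step_sequence(poles=6, steps=6, direction=-1)))  # [0, 5, 4, 3, 2, 1]
```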





3. Shaft Encoder is an assimilator which learns the common sequences associated with "forward", "backward", and "stopped" rotation. Its operation exemplifies how the Previous Atom enters into associations to encode state transitions. The network operates bidirectionally, either reporting shaft rotations sensed at its encoder or generating shaft rotations. This example uses supervised learning, although it could be extended to unsupervised learning by exploiting temporal coherence.
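The role of the Previous Atom can be sketched by pairing each encoder reading with the one before it, so that associations capture transitions rather than positions. The table-based learner below is an illustrative stand-in for the assimilator; all names are hypothetical:

```python
# Sketch: a Previous Atom lets associations encode transitions.
# Supervised: (previous pole, current pole) pairs are labeled by the teacher.
transitions = {}   # (prev, cur) -> "forward" / "backward" / "stopped"

def teach(prev, cur, label):
    transitions[(prev, cur)] = label

def report(prev, cur):
    """Forward use: sense encoder readings, report the rotation."""
    return transitions.get((prev, cur), "unknown")

poles = 3
for p in range(poles):                       # teach one revolution each way
    teach(p, (p + 1) % poles, "forward")
    teach(p, (p - 1) % poles, "backward")
    teach(p, p, "stopped")

print(report(0, 1))   # forward
print(report(1, 0))   # backward
print(report(2, 2))   # stopped
```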







4. Cat-Dog network performs what Piaget calls Assimilation. An Actor Net starts empty, and a model is learned which associates animals with sounds. The resulting model can cooperate communally with other actors: together they can recall full patterns from partial ones, and can be combined with other networks.
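Pattern completion from partial cues can be sketched with simple co-occurrence links. This toy version is an assumption-laden stand-in for the Actor Net, which starts empty and grows its own atoms rather than using a fixed feature set:

```python
# Sketch of assimilation as pattern completion: co-occurrence links learned
# from experience let a partial pattern recall the full one.
from collections import defaultdict

links = defaultdict(float)          # symmetric link strengths between features

def assimilate(pattern):
    """Experience a full pattern; strengthen links among its features."""
    for a in pattern:
        for b in pattern:
            if a != b:
                links[(a, b)] += 1.0

def complete(partial):
    """Recall: activate whatever is most strongly linked to the given cues."""
    scores = defaultdict(float)
    for cue in partial:
        for (a, b), w in links.items():
            if a == cue:
                scores[b] += w
    return max(scores, key=scores.get) if scores else None

assimilate({"cat", "meow"})
assimilate({"dog", "bark"})
print(complete({"meow"}))   # -> cat
print(complete({"dog"}))    # -> bark
```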




5. Morse Code Learned from experience. When a new pattern of da's and di's is experienced, the Write Head adds a new crux to its tree to record it, and thereafter recognizes that pattern. The first video shows two letters being learned, demonstrating exactly how representation trees operate as they grow from new experiences and recognize previously experienced sequences.
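The growth of the tree can be sketched as a trie over da/di elements, where each fork plays the role of a crux. The class and method names below (MorseTree, write, recognize) are hypothetical, not the paper's:

```python
# Sketch of the representation tree: each new da/di pattern grows a branch,
# and replaying a pattern finds the stored leaf.
class MorseTree:
    def __init__(self):
        self.root = {}                 # nested dicts: element -> subtree

    def write(self, pattern, label):
        """Write Head: extend the tree to record a new pattern."""
        node = self.root
        for element in pattern:        # element is "di" or "da"
            node = node.setdefault(element, {})
        node["label"] = label

    def recognize(self, pattern):
        node = self.root
        for element in pattern:
            if element not in node:
                return None            # never experienced
            node = node[element]
        return node.get("label")

tree = MorseTree()
tree.write(["di", "da"], "A")                  # . -
tree.write(["da", "di", "di", "di"], "B")      # - . . .
print(tree.recognize(["di", "da"]))            # A
```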





6. Morse Code Generated when a leaf of the tree is stimulated. The pattern of da's and di's associated with that node is generated. Similar methods may be used for music.
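Generation can be sketched by giving each tree node a parent pointer, so that stimulating a leaf replays its pattern by walking back to the root. Again, the names here are hypothetical:

```python
# Sketch of generation: store a parent pointer per node, so stimulating a
# leaf replays its pattern by walking up to the root.
class Node:
    def __init__(self, element=None, parent=None):
        self.element = element         # "di" or "da" on the link to the parent
        self.parent = parent
        self.children = {}

def learn(root, pattern):
    node = root
    for element in pattern:
        if element not in node.children:
            node.children[element] = Node(element, node)
        node = node.children[element]
    return node                        # the leaf for this pattern

def generate(leaf):
    """Stimulate a leaf: emit its da's and di's by walking up to the root."""
    out = []
    while leaf.parent is not None:
        out.append(leaf.element)
        leaf = leaf.parent
    return list(reversed(out))

root = Node()
leaf_a = learn(root, ["di", "da"])     # the letter A
print(generate(leaf_a))                # ['di', 'da']
```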

         




7. Four Word Language integrates two shaft encoders with a simple word-pair syntax network and a coordination/language model, and illustrates small models being combined into larger simulations. It acts bidirectionally: this video shows rotations being caused by language (a sketch follows the list below).
 
  • Another video (not yet uploaded) generates language from shaft rotations.
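The bidirectional word-pair idea can be sketched as a single table read in either direction. The vocabulary and mapping below are assumptions for illustration, not the paper's actual four-word lexicon:

```python
# Sketch of a word-pair language driving rotations, and the reverse:
# one table, read in both directions.
commands = {
    ("spin", "forward"):  +1,
    ("spin", "backward"): -1,
}
inverse = {v: k for k, v in commands.items()}

def rotate_from_language(words):
    """Language -> rotation: parse a word pair into a direction."""
    return commands.get(tuple(words))

def language_from_rotation(direction):
    """Rotation -> language: the same table, read the other way."""
    return inverse.get(direction)

print(rotate_from_language(["spin", "backward"]))   # -1
print(language_from_rotation(+1))                   # ('spin', 'forward')
```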


  • Other Sweet Spots are shown on the page Two Application Network Examples -- 2014