Brain-Gears.Blogspot.com:
If the brain were a car engine, what kind of gears would it contain? What is the structure of the repeated forms of the cortex that house the algorithms of human cognition?
A constructionist framework for robotic intelligence.
Discovery and simulation of seven sweet-spots.
To organize further development.
How might you build the AI of animal thought as a network of repeated forms? The human brain seems to be made of repeated forms, nerves or minicolumns, just as a motor is made of gears and other parts. This site describes the resulting cognitive architecture, used for constructing robotics controllers, also known as brains. The approach is constructive: a set of atomic forms is defined, and the networks of them that work well are identified. These sweet spots exhibit the learning abilities described by Piaget, such as assimilation.
Micro-Bidirectionality:
What is the best thing to build a brain out of? Minsky would ask this of his grad students, I being one. What are the basic computational functions, as MIT's Tomaso Poggio now calls them, of animal thought? Throughout my 50-year career designing hardware systems, I've gathered a synergistic set of everyday AI algorithms. It's called HaveNWant, a contraction of the opposing forces of having and wanting. In it, bidirectional links connect binary predicates in hierarchical layered networks. Spreading the computation out in space this way can break the Von Neumann Bottleneck. If nanotech can synthesize a suitable kind of molecular computing element, this method could synthesize a human-class AI brain that would fit in your hand.
The concept of "Micro-Bidirectionality" is fundamental to HaveNWant, and governs how each bit of information interacts with its neighbors in the large layered network of the brain. It describes links where, for every output, there is a paired input. This frees each end from knowing anything about the other end except the value it gives back, which greatly simplifies the network. The two function together and are sometimes called co-variables. (I sometimes think of them as complex numbers, where the imaginary co-variable goes backwards.) An example is the pair (voltage, current) in the wire of an electronic circuit. When micro-bidirectional networks are organized as Directed Acyclic Graphs, they can compute many required functions, like Bayesian inference, Q-learning, distributed associative memories, language serialization, subsumption hierarchies, and more. Such networks automatically generate retrograde functionality, like symbol reification or backpropagation.
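The paired input/output idea can be sketched in a few lines of code. This is my own minimal illustration, not the HaveNWant implementation; the class and field names (`Link`, `Predicate`, `fwd`, `bwd`) are invented for the example. It shows how a link carrying a (forward, backward) co-variable pair lets a chain of predicates produce the retrograde path automatically alongside the forward one:

```python
# Hypothetical sketch of a micro-bidirectional link (names invented here,
# not taken from HaveNWant). Each link carries a co-variable pair: a
# forward value and a paired backward value, so neither end needs to know
# anything about the other beyond the value it gives back.

class Link:
    """A bidirectional link holding a (forward, backward) co-variable pair."""
    def __init__(self):
        self.fwd = 0.0   # e.g. evidence flowing up the hierarchy
        self.bwd = 0.0   # e.g. expectation flowing back down

class Predicate:
    """A binary predicate node in a layered directed acyclic graph."""
    def __init__(self, weight):
        self.weight = weight
        self.lower = Link()   # link toward sensation
        self.upper = Link()   # link toward abstraction

    def up(self):
        # Forward step: pass weighted evidence upward.
        self.upper.fwd = self.weight * self.lower.fwd

    def down(self):
        # Retrograde step: the paired return path comes "for free".
        self.lower.bwd = self.weight * self.upper.bwd

# Two predicates stacked in a chain:
a, b = Predicate(0.5), Predicate(0.8)
a.lower.fwd = 1.0                       # sensory evidence enters at the bottom
a.up(); b.lower.fwd = a.upper.fwd; b.up()
b.upper.bwd = 1.0                       # top-down expectation enters at the top
b.down(); a.upper.bwd = b.lower.bwd; a.down()
print(b.upper.fwd)                      # 0.4 (0.5 * 0.8, upward)
print(a.lower.bwd)                      # 0.4 (0.8 * 0.5, downward)
```

The symmetry of `up` and `down` is the point: defining the forward function fixes the retrograde one, which is the sense in which such networks "automatically generate" backward functionality.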
The repeated form is called a Factal, a blend of Fractal and Fact. A Factal contains all of the mechanisms needed to support a particular binary predicate in all required cognitive processes. Factals connect mostly to nearby Factals (as do nerves or gears). The research seeks to build up larger and larger machines with greater and greater capabilities. The hierarchically composable schemata it produces create associations that are the basis for remembering and imagining.
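Hierarchical composition of a repeated form can be sketched as follows. Again this is my own toy illustration under assumed semantics (the `Factal` class and the apple example are invented, not from the published schemata); it shows only the compositional idea, that recalling a composite re-activates its parts:

```python
# Hypothetical sketch: a Factal bundles a predicate with mostly-local
# links to nearby Factals, and activating a composite Factal re-activates
# its components -- the associative basis for remembering and imagining.

class Factal:
    def __init__(self, name, parts=()):
        self.name = name
        self.parts = list(parts)   # mostly-local connections
        self.active = False

    def activate(self):
        # Recalling a schema reenacts its component facts.
        self.active = True
        for p in self.parts:
            p.activate()

red, round_ = Factal("red"), Factal("round")
apple = Factal("apple", parts=[red, round_])
apple.activate()
print([f.name for f in (red, round_) if f.active])  # ['red', 'round']
```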
We hope to grow from a simple animal brain to human-like capability using a multi-layered version of Baars's Global Workspace ("global theater"). In HaveNWant it is called the Reenactment Simulator, the first design of which is the Mazlovian Multiplexor. Learning proceeds by building models separately in small domains and then using them together in global simulation, starting from just a minimal structure, local accretion learning rules, and the experience of living in an environment. Separate learning areas for language recognition and generation couple the thinking of many brains' simulators.
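The "learn locally, simulate globally" loop can be sketched minimally. This is my own construction under stated assumptions: the domain names and transition tables are invented examples standing in for separately learned forward models, and `reenact` stands in for the global simulation step:

```python
# A minimal sketch (my own construction, not the Reenactment Simulator
# itself) of learning small forward models in separate domains, then
# reenacting them together in one global simulation loop.

local_models = {
    "hand":  {"open": "closed", "closed": "open"},   # learned in one domain
    "light": {"off": "on", "on": "off"},             # learned in another
}

def reenact(state, steps):
    """Advance all local models together as one global simulation."""
    history = [dict(state)]
    for _ in range(steps):
        state = {d: local_models[d][s] for d, s in state.items()}
        history.append(dict(state))
    return history

run = reenact({"hand": "open", "light": "off"}, 2)
print(run[-1])   # {'hand': 'open', 'light': 'off'} after two flips
```

Each model was acquired in its own small domain, yet the simulator runs them jointly, which is the sense in which local learning feeds a global reenactment.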
The first phase is to define appropriate mechanisms; the second, to refine those mechanisms with computer simulation; and the third, to locate the sweet spots from which the emergent properties of animal thought arise.
Publications:
3. 2014 The HaveNWant Schemata
How might the HaveNWant Schema be used to do some higher-level tasks? It contains 4 short papers, including one on Language Deserialization (how to implement language in the HaveNWant Reenactment Simulator) and one on Bindings (how to ground symbols in sensations).
6. 2012 The Grandmother Turtle Schemata
8. 2011 IEEE Robotics Group Talk
The first public presentation of these ideas, on Tuesday, February 8. It covers much of the early motivations.
9. 2010 Simple Connectionist Mechanisms of Cognition
A 70-page PDF manuscript that defines facts and how they act in a semantic network to form forward models in a Reenactment Simulator.
10. 2009 Hebbian Arborization
Describes a theoretical method by which nerves may grow to form sparse Hebbian networks. (2 pages)
11. 2008 Earlier Works
Available upon request.
- This is a huge effort, and anybody who wants to learn more or to help is encouraged to do so. Contact me if you'd like to discuss anything.