Home of Brain-Gears.Blogspot.com:

If the brain were a car engine, what kind of gears would it contain? What is the structure of the repeated forms of the cortex that house the algorithms of human cognition?

TLDR:
    My Vision:   July 2014, BICA-14, MIT (15 minutes)
        A constructionist framework for robotic intelligence.
    My Results:  July 2016, BICA-16, NYC (12 minutes)
        Discovery and simulation of seven sweet spots.
    My Plan:     June 2022 (6 pages)
        To organize further development.

MAPS:

Image Gallery
The Map of HaveNWant  (Its 5 Compositional Layers)
Building Machines from Factals   (Cogny Rythms Poster)
Animal Thought:  

How might you build the AI of animal thought as a network of repeated forms? The human brain seems to be made of repeated forms, such as nerves or minicolumns, just as a motor is made of gears and other parts. This site describes the resulting cognitive architecture, used for constructing robotic controllers, also known as brains. The approach is constructive: a set of atomic forms is defined, and the networks of them that work well are identified. These sweet spots have the learning abilities described by Piaget, such as assimilation.

Micro-Bidirectionality:

What is the best thing to build a brain out of? Minsky would ask this of his grad students, and I was one of them. What are the basic computational functions, as MIT's Tomaso Poggio now calls them, of animal thought? Throughout my 50-year career designing hardware systems, I have gathered a synergistic set of everyday AI algorithms. It's called HaveNWant, named as a contraction of the opposing forces of having and wanting. In it, bidirectional links connect binary predicates in hierarchical, layered networks. Spreading the computation out in space this way can break the Von Neumann bottleneck. If nanotech can synthesize the right kind of molecular computing element, this method could synthesize a human-class AI brain that would fit in your hand.


The concept of "Micro-Bidirectionality" is fundamental to HaveNWant, and it governs how each bit of information interacts with its neighbors in the large layered network of the brain. It describes links where, for every output, there is a paired input. This frees each end from knowing anything about the other end except the value it gives back, which greatly simplifies the network. The two values function together and are sometimes called covariables. (I sometimes think of them as complex numbers, where the imaginary covariable flows backwards.) An example is the pair (voltage, current) in the wire of an electronic circuit. When micro-bidirectional networks are organized as directed acyclic graphs, they can compute many of the required functions, such as Bayesian inference, Q-learning, distributed associative memories, language serialization, subsumption hierarchies, and more. Such networks automatically generate retrograde functionality, like symbol reification or backpropagation.
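To make the covariable idea concrete, here is a minimal Python sketch of a micro-bidirectional link and node. It is only an illustration under my own assumptions: the names Link and Node, and the max-based combining rule, are hypothetical and do not come from the Factal Workbench.

```python
# Minimal sketch of a micro-bidirectional link between two binary predicates.
# Each link carries a forward value and a retrograde covariable, so either end
# only ever sees the value the other end hands back.
# All names here (Link, Node) are illustrative, not from the Factal Workbench.

class Link:
    def __init__(self):
        self.forward = 0.0    # value pushed from source toward destination ("have")
        self.backward = 0.0   # covariable returned from destination to source ("want")

class Node:
    """A binary predicate with incoming and outgoing micro-bidirectional links."""
    def __init__(self, inputs, outputs):
        self.inputs = inputs      # links feeding this node
        self.outputs = outputs    # links this node feeds

    def forward_pass(self):
        # Combine incoming evidence; here a simple OR-like max over inputs.
        value = max((l.forward for l in self.inputs), default=0.0)
        for l in self.outputs:
            l.forward = value

    def backward_pass(self):
        # Send demand back along the inputs that supplied the evidence.
        demand = max((l.backward for l in self.outputs), default=0.0)
        for l in self.inputs:
            l.backward = demand

# In a directed acyclic graph of such nodes, sweeping forward_pass in topological
# order and backward_pass in reverse order yields paired forward and retrograde
# flows, the pattern the text says supports Bayesian-style inference, Q-learning,
# and backpropagation-like retrograde functionality.
```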


The repeated form is called a Factal, a blend of Fractal and Fact. A Factal contains all of the mechanisms needed to support a particular binary predicate in all of the required cognitive processes. Factals connect mostly to nearby Factals (as do nerves or gears). The research seeks to build up larger and larger machines with greater and greater capabilities. The hierarchically composable schemata this produces create the associations that are the basis for remembering and imagining.
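As a toy illustration of that composition (again a hedged sketch with hypothetical names, not the actual Factal design), Factals can be pictured as small objects that link mostly to neighbors and get grouped into larger machines:

```python
# Toy sketch of Factals as repeated forms composed into larger machines.
# A Factal bundles one binary predicate with its mostly-local connections;
# larger machines are just collections of Factals wired together.
# Names and structure are illustrative only.

class Factal:
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # mostly-local connections, like nerves or gears
        self.active = False   # the binary predicate this Factal supports

    def connect(self, other):
        self.neighbors.append(other)
        other.neighbors.append(self)

class Machine:
    """A larger assembly built by composing Factals (and sub-machines)."""
    def __init__(self, factals):
        self.factals = factals

    def associate(self, a, b):
        # An association is just a new local link between two Factals,
        # the kind of structure said to underlie remembering and imagining.
        a.connect(b)

# Example: a tiny machine where "red" and "ball" become associated.
red, ball = Factal("red"), Factal("ball")
toy = Machine([red, ball])
toy.associate(red, ball)
```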


We hope to grow from a simple animal brain to human-like capability using a multi-layered version of Baars's Global Theater. In HaveNWant it's called the Reenactment Simulator, the first design of which is the Mazlovian Multiplexor. Learning proceeds by building models separately in small domains and then using them in a global simulation. Everything grows from just a minimal structure, local accretion learning rules, and the experience of living in an environment. Separate learning areas for language recognition and generation couple the thinking of many brains' simulators.
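The learning loop described above can be caricatured as follows. This is a hypothetical sketch under my own simplifying assumptions (dictionary-based domain models, a deterministic choice of next state), not the Mazlovian Multiplexor; it only shows the shape of the idea: accrete small models locally, then chain them in one global re-enactment.

```python
# Hedged caricature of the learning loop: local models are accreted in small
# domains, then replayed together in a shared simulator. None of these names
# come from the HaveNWant papers; they are illustrative.

def accrete(model, observation):
    """Local accretion rule: remember any state transition we have not seen."""
    state, next_state = observation
    model.setdefault(state, set()).add(next_state)
    return model

def reenact(models, start, steps):
    """Global re-enactment: chain the small local models to imagine a future."""
    state, trajectory = start, [start]
    for _ in range(steps):
        candidates = set()
        for model in models:                 # every domain model gets a vote
            candidates |= model.get(state, set())
        if not candidates:
            break
        state = sorted(candidates)[0]        # pick deterministically for the demo
        trajectory.append(state)
    return trajectory

# Two tiny domain models learned separately, then used in one simulation.
motion = {}
for obs in [("home", "street"), ("street", "park")]:
    accrete(motion, obs)
social = {}
accrete(social, ("park", "friends"))

print(reenact([motion, social], "home", 3))   # ['home', 'street', 'park', 'friends']
```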


The first step is to define appropriate mechanisms, the second is to refine these mechanisms with computer simulation, and the third is to locate the sweet spots from which the emergent properties of animal thought arise.

Publications:

The Factal Workbench is the result of generations of refinement, as the design was reformulated over a decade or more. The entries below start with the latest, most concise formulation of the mechanism. Later entries give the big picture that led up to it.

1. 2016 Biologically Inspired Cognitive Architectures 2016

The simulation videos for the paper to be presented at BICA 2016 (July 16, 2016, in New York City).

2. 2015 Download Factal Workbench 2.0

This shows the operation of several HaveNWant machines in a series of short videos made from screen captures of the latest release. Watch the videos of the Factal Workbench in operation to gain an intuitive understanding of how the elements of various small machines operate together.

3. 2014 The HaveNWant Schemata

The HaveNWant Schemata describes how a particular set of one-bit parts can be connected by rules to perform some of the cognitive learning tasks described by Piaget. It is the description of a specific causal mechanism. Read more about the HaveNWant Schemata, and see the presentation made at the Biologically Inspired Cognitive Architectures conference at MIT in November 2014.

4. 2013 Application Areas

How might the HaveNWant Schema be used to do some higher-level tasks? This entry contains 4 short papers, including one on Language Deserialization (how to implement language in the HaveNWant Reenactment Simulator) and one on Bindings (how to ground symbols in sensations).

5. 2013 Yaagil and Yiav

This describes my best guess at the "Equation of Intelligence", presented informally in back rooms at the Artificial General Intelligence Conference (AGI-12) in Oxford, England. It describes how to build control structures out of 1-bit hierarchical bidirectional forms, and it led to the HaveNWant paper. Read more about Yaagil and Yiav, and delve into various aspects of how they might be deployed.

6. 2012 The Grandmother Turtle Schemata

A snapshot description of a cognitive schema that led to Yaagil and Yiav, including how it makes associations and simulates the future. Read the Grandmother Turtle paper.

7. 2011 The Design of a Conscious Machine

This is an earlier paper that was presented at the memorial conference for a long-time friend, Ray Solomonoff, in November 2011 in Melbourne. The paper was also presented at the 24th Australasian Joint Conference on Artificial Intelligence (although it is not in the program). Read and watch the Design of a Conscious Machine.


8. 2011 IEEE Robotics Group Talk

The first public presentation of these ideas, on Tuesday, February 8. It covers much of the early motivation. Read more about the IEEE Robotics Group Talk.

9. 2010 Simple Connectionist Mechanisms of Cognition

A 70-page PDF manuscript that defines facts and how they act in a semantic network to form forward models in a Reenactment Simulator. Read more about Simple Connectionist Mechanisms of Cognition.

10. 2009 Hebbian Arborization

Describes a theoretical method by which nerves may grow to form sparse Hebbian networks. (2 pages) Read more about Hebbian Arborization.
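
For readers who want the underlying rule in code, a generic Hebbian weight update with a pruning step that keeps the network sparse looks roughly like this; it is a textbook-style sketch, not the arborization method from the paper:

```python
import numpy as np

# Generic Hebbian update with pruning, for intuition only; the paper's
# arborization method (how nerves might grow such networks) is not shown here.

def hebbian_step(W, x, y, lr=0.01, prune_below=1e-3):
    """W: weight matrix, x: presynaptic activity, y: postsynaptic activity."""
    W = W + lr * np.outer(y, x)       # "cells that fire together wire together"
    W[np.abs(W) < prune_below] = 0.0  # keep the network sparse
    return W

W = np.zeros((3, 4))
x = np.array([1.0, 0.0, 1.0, 0.0])
y = np.array([0.0, 1.0, 1.0])
W = hebbian_step(W, x, y)
```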

11. 2008 Earlier Works

Available upon request.

  • This is a huge effort, and anybody who wants to learn more or help is encouraged to do so. Contact me if you'd like to discuss anything.