HaveNWant Development Plan:

Human Cognition 

using Bidirectional Binary Predicates

(Printable Version)

The goal is to grow human-like intelligence organically, as a fine-grained schema of reactive atoms. Each atom represents one bit of information and computes its reactions in place. It may be possible to build these machines from tiny nanotech molecular elements, small enough that a brain that seems human could fit in the palm of your hand. We are currently simulating brains built as attentional multiplexers of over a thousand bits; a brain with average human-like intelligence might take about a million elements, and such scaling is well accommodated. HaveNWant has other applications as well, such as an inter-cortical communications protocol and a reactive language model.


Greatest Values:

HaveNWant is a connectionist architecture designed to build a human-class AI from tiny interconnected computational molecules. It computes using bidirectional activation levels, and can unwind many common AI algorithms in space, with specialized elements for Bayesian, Markov, Hamming, Q-learning, back-propagation, and attentional networks. Extreme density and ultra-low power are attained by using analog circuits to form an instantly reconfigurable analog computer. Very large distributed, layered sensory networks like the brain’s can be grown from small seeds, with values added later in particular areas by experience. Individual models learn quickly because they are formed in small domains. They can also build on one another in layers and cooperate to track the state of the world, in a distributed realization of Baars’s Global Theater. HaveNWant offers a fundamentally different approach to parallelism, avoiding the von Neumann bottleneck of today’s AI.


Origin Story

“Intelligence is just a complex mathematical function that gets executed to cause you to think,” Marvin would tell us. “Your job is to figure out what that function is!” My thesis proposal involved one aspect of this: “Structuring Problem Solvers Isomorphic to their Problem Statement.” It was submitted to Mike Dertouzos at the MIT AI Lab in 1970, with all degree requirements complete except the thesis itself. I was newly married, with a world of curiosity to work on, searching for a breakthrough on this project that would be worthy of a doctorate.

Marvin Minsky was working on his “Society of Mind” at the time. He argued that complex human functionality could be decomposed into simpler components that work together. The problem was that the theory was not constructive: it said very little about how to build such a mind. Mike challenged us to find a good set of parts, a good “basis”, to build brains out of. The atoms of this fine-grained connectionist schema should have strictly bounded complexity, so they could be implemented with molecular computing. Our basis might even do common housekeeping, making the whole job of AI easier. Marvin’s K-Lines theory was one take; its atoms were binary predicates, each representing one bit of information about the world. I heard that a guest speaker at MIT described an implementation that combined K-Lines and P-Lines. It used compositional functions, as Tomaso Poggio of MIT now calls them, which were bidirectional. But they could not get it to work.

Back in ’71 I defined a schema with bidirectional elements that would settle to a locally best solution. The schema was constructed iteratively as the problem was digested, but I had only linear results. I spent almost a year searching through Lyapunov functions and the ubiquity of bidirectionality for a great thesis topic, but became stymied and frustrated. Fearing a PhD might be a decade away, I left MIT with just an EE degree to work at Mike’s company, Computek. There I solved many problems a day, not just one a month. As I left, MIT’s psychiatrist asked whether I wouldn’t miss a career in science. Instead I chose a fifty-year career as a computer scientist and engineer in the greater Boston area, designing calculators, CPUs, intelligent terminals, multiprocessors, virtual realities for torpedoes, and the like. But I kept searching for that basis: a good, scalable schema to express the wide, shallow networks of the brain. Over those fifty years I have pursued over a half dozen projects, each building on the last. Finally, more than 50 years later, I’ve found my thesis topic!

To go back: mainstream AI researchers moved away from fine-grained schemata, and by the mid-1980s had embraced large neural networks with back-propagation, using CPUs and graphics processors to operate on large matrices. Those algorithms evolve without any notion of locality. I have been exploring something entirely different. It breaks the von Neumann bottleneck by starting from the opposite direction: beginning with tiny HaveNWant atoms and building upward, conglomerating them, with precise form, into a huge nonlinear analog computer that computes in parallel. It thinks like a brain, moving quickly from state to state. It is a study of how bits of information might self-organize at the hardware level.

HaveNWant is a grandmother-cell schema that holds all world knowledge as one-bit hardware predicates called “factals”. They compute reactions using activation levels of limited accuracy (say, that of an analog circuit, or 8-bit digital). A HaveNWant network thus resembles a gigantic analog computer that settles quickly, in a “blink”. Unlike K-Lines, HaveNWant predicates each carry two scalar activation levels. I originally called them “have” and “want”, but have since found other such pairs. This doubling, or pairing, is natural, almost magical, and seems to lead to simpler networks with many interesting powers. Many algorithms can be unrolled to compute in parallel, each in a different part of the network: the back-propagation and Q-learning of deep neural networks, Bayesian probabilities, hidden Markov models, excluded-middle logic, start/stop logic, and even physical systems with Hamiltonians, all computed by the same HaveNWant building blocks. As a bonus, things like reverse plants and reification are computed automatically, and learning is easily supported. It is designed to build Minsky’s Society, but from the ground up, out of bits.
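
To make the have/want pairing concrete, here is a minimal sketch in Swift (the Factal Workbench’s language). The types `Factal` and `Link` and the additive blending rule are illustrative assumptions, not the actual HaveNWant update rules: each factal carries a `have` level flowing forward and a `want` level flowing backward, and a single link carries both, in opposite directions.

```swift
/// A minimal sketch of a bidirectional binary predicate ("factal").
/// The additive blending below is a placeholder, not the real HaveNWant rule.
struct Factal {
    var have: Double = 0.0   // forward level: evidence the predicate holds
    var want: Double = 0.0   // backward level: desire for it to hold
}

/// A link exchanges the two levels between factals:
/// `have` travels forward while `want` travels backward over the same wire.
struct Link {
    let weight: Double
    func propagate(from src: inout Factal, to dst: inout Factal) {
        dst.have += weight * src.have   // evidence flows forward
        src.want += weight * dst.want   // demand flows back along the same link
    }
}

// Example: evidence at a sensor factal raises `have` downstream,
// while a goal downstream sends `want` back toward the sensor.
var sensor = Factal(have: 1.0, want: 0.0)
var goal   = Factal(have: 0.0, want: 0.8)
let link   = Link(weight: 0.5)
link.propagate(from: &sensor, to: &goal)
print(goal.have, sensor.want)   // 0.5 0.4
```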

This brain will learn the ways of the world first, and then exploit them to achieve goals. It computes with activation levels in a connectionist schema, and it learns by appending to and editing the existing network at its active nodes. We have demonstrated this with a HaveNWant simulator: models are learned in small domains and then linked into a global simulation. These models will be made to track the most sophisticated states while simultaneously learning models for the “next most obvious thing”. This generates a hierarchical, U-shaped command-and-control structure. It allows models to be built efficiently in layered hierarchies, one on top of the other, under strict rules. We presume the bidirectional elements can be connected so that all the modules in a hierarchy operate bidirectionally, as sketched below. This formulation subsumes back-propagation and the return-path calculations of reification; it simplifies and underlies how HaveNWant networks operate.
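
The sketch below is a toy relaxation loop in Swift illustrating that layered, bidirectional operation. The clipped-sum update and the symmetric weights are assumptions for illustration only; the point is that `have` sweeps up the hierarchy while `want` returns down the same links, playing the role of back-propagation’s return path.

```swift
// Toy illustration of layered bidirectional settling.
// Layers hold (have, want) pairs; weights[l][j][i] links unit i of layer l
// to unit j of layer l+1. Weights are assumed symmetric, so one matrix
// carries `have` upward and `want` back down.
struct Layer {
    var have: [Double]
    var want: [Double]
}

func settle(layers: inout [Layer], weights: [[[Double]]], sweeps: Int) {
    guard layers.count > 1 else { return }
    for _ in 0..<sweeps {
        // Upward sweep: each layer's `have` drives the layer above.
        for l in 0..<(layers.count - 1) {
            for j in 0..<layers[l + 1].have.count {
                var sum = 0.0
                for i in 0..<layers[l].have.count {
                    sum += weights[l][j][i] * layers[l].have[i]
                }
                layers[l + 1].have[j] = max(0, min(1, sum))   // clipped analog level
            }
        }
        // Downward sweep: `want` returns along the same links,
        // the return path that subsumes back-propagation.
        for l in stride(from: layers.count - 2, through: 0, by: -1) {
            for i in 0..<layers[l].want.count {
                var sum = 0.0
                for j in 0..<layers[l + 1].want.count {
                    sum += weights[l][j][i] * layers[l + 1].want[j]
                }
                layers[l].want[i] = max(0, min(1, sum))
            }
        }
    }
}
```

Symmetric weights are what let a single link serve both directions here; the real schema presumably refines this with per-element reaction rules.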

The Maslovian Multiplexor is an attentional mechanism I am building, the latest attempt to construct Minsky’s Society using the HaveNWant schema. It implements Maslow’s hierarchy of needs by focusing the brain on the most important thing. It is a distributed command-and-control system, hierarchical and layered, with the world at the bottom, growing models in the middle, and a situation/mood/goal at the top. It grows models by incrementally learning the next most obvious unknown, adding elements to memorize it, and accumulating those models in layers. It enacts Baars’s Global Theater by reconstructing the world’s reality in the brain. We still need an efficient implementation in some nanotech or CRISPR technology, so that a humanoid brain might be built the size of your fist.
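
A rough Swift sketch of the multiplexer’s selection step follows. The `Model` type and the urgency rule are hypothetical, not the mechanism’s actual design: each model reports an urgency (its unmet need), and attention connects the single most urgent model to the fast learner while the rest idle.

```swift
// Hypothetical sketch of the Maslovian Multiplexor's selection step.
// Each model reports an urgency level (its unmet "need");
// attention goes to the single most urgent model.
struct Model {
    let name: String
    var urgency: Double   // assumed: rises when the model's predictions fail
}

func focusOfAttention(_ models: [Model]) -> Model? {
    models.max(by: { $0.urgency < $1.urgency })
}

var cortex = [
    Model(name: "hunger",  urgency: 0.7),
    Model(name: "safety",  urgency: 0.2),
    Model(name: "novelty", urgency: 0.4),
]
if let focus = focusOfAttention(cortex) {
    print("attend to:", focus.name)   // attend to: hunger
}
```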



Development Plan

1. Create a technical advisory board to develop a plan. Reduce risk by including a nanotech expert, simulation support (a programmer), and mindshare outreach (a librarian). Attract a team able to help actualize this program.

2. Establish a seed financing stream, letting me engage a few key people on a fee-for-service basis.

3. Hire the first key person: a full-time technical editor to organize the concepts better, update the website, and co-author papers (such as a comparison with other cognitive architectures) to build mindshare. I played this role for Carvey at Avici, going from saying everything everywhere to finding a crisp way to explain things. The editor would gather, sort, and organize the volumes of material I have generated.

4. Write a HaveNWant requirements document constraining implementations. Determine the preferred implementation (nanotech? networks of EPLD-like devices? CRISPR/cell?). A compelling nanotech implementation would remove a lot of risk.

5. Engage a person or two to help me develop FactalWorkbench.app, which supports the required simulations; multi-perspective 3D viewing aids the debugging of such large networks. Finalize and release it.

6. Develop HaveNWant application areas, create bodies, and produce videos documenting the experiments:

a. Develop the Maslovian Multiplexor: conscious attention from bits, and the first instance of the Proto Brain and the attentional protocol.

b. Build a TiVo remote driver, an interesting easy learning case to explore.

c. Demonstrate dynamic grounding of memes in the Global Theater, and the evolution of verbs and nouns.

d. Emulate other cognitive architectures with HaveNWant.


7. Grow mindshare:

a. Establish a shareware presence.

b. Show that “there’s always a return path”.

c. Join a team to embrace micro-bidirectionality in system design.

d. Show that “bidir bits” make “bidir structs”.

e. Attend or hold a conference and community discussions (Zoom).

f. Publish in a peer-reviewed journal; write another BICA paper.

g. Redesign the web site.

h. Develop a liaison with Olin (an existing study or a lab collaboration).

i. Obtain a seat on a thesis review committee.


  • Potential players:

    • Systems architects

    • Mac OS developers

    • Cognitive scientists and Cognitive Science influencers, to shape their research interests

    • Domain experts wanting a different perspective on AI

      • E.g. nanotech or CRISPR engineers, toy designers

      • What would they perceive differently?

    • Industrial people designing brains for robots, cars, phones, and elevators

      • Incorporate HaveNWant in their products (a source of energy)

      • Orders combined with self-learning (e.g. Microsoft)

      • E.g. ERA

    • Retired engineers, coders, and architects, looking for a new hobby

      • Want to learn about my AI perspective

      • Can operate in a Swift/git environment, or are willing to learn

        • Swift user groups

        • iPhone, Android, or intelligent-app work

    • Engineers for hire (once I have funding)

    • Gaming and Robotics Developers 


  • Support 

    • Recruit Volunteers

    • Develop Partnerships

    • Develop a working development budget

    • Raise Phase One funding


Glossary:

  • accretion schema -- a semantic network that learns by adding new elements.

  • actor -- a subnet that transforms evidence into a situation bidirectionally; it learns its state and inverts it with Q-learning.

  • associative memory -- determines the closest known pattern by Hamming distance. In sparse spaces, it can hallucinate the missing elements of a pattern and predict the future. The memory may be distributed across space. (A minimal sketch appears at the end of this glossary.)

  • brain -- hardware that controls a physical or virtual body.

  • baby brain -- initial network able to grow by adding values and restructuring models.

  • bi-directional factal predicate -- a binary truth about the world with two activation levels: we “have” it present in the world, and we “want” more of it. This “have” and “want” give HaveNWant its name. The two activation levels, one traveling forward and the other backward, can be thought of as a complex number.

  • cognitive architecture -- the rules that govern the reactions of a humanoid brain.

  • composable bidirectional atoms -- an extension of Poggio’s theory of compositional functions to operate with bidirectional functions.

  • CRISPR technology -- edits DNA in living cells, changing their function.

  • experiential learning -- how experiences are transformed into knowledge structures. HaveNWant currently has two ways: an associative memory determines the closest known pattern by Hamming distance, and a serdes tree learns a language that it can repeat.

  • factal -- represents a bit of information the brain knows about the world and uses in its computations. It computes with two activation levels: (a) we “have” evidence its predicate is present in the world, and (b) we “want” it to be true.

  • Factal Workbench -- a macOS application embodying the HaveNWant concepts. It defines a network and simulates its operation in an environment, and is used for exploring and evolving brain structures, expanding the range of responses we can explore.

  • fine-grained schema -- a network whose pieces are small in size and function.

  • hierarchical models --  models which develop in cascaded layers.

  • incremental knowledge -- little subnetworks of atomic form, e.g., a meme.

  • layered networks -- lower levels produce, and are driven by, higher levels. At lower levels the layers occupy separate cortical areas (e.g., V1, V2, …); at higher levels they may be colocated (e.g., association areas).

  • master/slave networks -- networks in which lower levels carry local, fast-changing information, while higher levels carry slower-changing, global command information.

  • Maslovian multiplexor -- an attentional structure that connects the most urgent model in sensory processing cortex to a small, fast learner like the hippocampus.

  • micro-bidirectionality -- all network communications travel over links. Each link exchanges two activation levels: a variable is sent out, and its “co-variable” is received back. It is through the interaction of these two that network conflicts are resolved. Most of the distributed AI algorithms I know, from Bayes, Markov, Hamilton, and more, can readily be computed by HaveNWant networks. An example is an electrical circuit, where, as each node’s voltage radiates out, a current comes back in.

  • micro-organic brains -- formed by a network of many computational atoms that organize themselves to control and learn about the world.

  • model -- incorporates the essence of a phenomenon for consideration in thought.

  • nanotechnology -- affords the promise of building molecular machines whose atoms are precisely controlled and which have bidirectional communication links.

  • organic brains -- formed by a network of many computational predicates organized to control and learn about the world.

  • reactive semantic networks -- subnets which embody information with their connectivity, and react in accordance with that information. They are hierarchically defined and layered, so pixels are below lines, which are below areas and things.

  • serdes tree -- learns language sequences and repeats them.

  • subsumption schema -- a schema in which masters override slaves.

  • scalable systems -- capable of growing to millions or billions of elements.
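
To make the associative-memory entry above concrete, here is a minimal Swift sketch. The recall rule and all names are illustrative assumptions, not the Factal Workbench’s actual code: given a probe with some bits unknown, it finds the stored pattern at minimum Hamming distance over the known bits and fills in (“hallucinates”) the unknowns from that pattern.

```swift
// Sketch of a Hamming associative memory, per the glossary entry above.
// Stored patterns are bit vectors; a probe may have unknown bits (nil).
// Recall: pick the stored pattern closest (by Hamming distance over the
// known bits) and take the unknown bits from it.
func recall(probe: [Bool?], memory: [[Bool]]) -> [Bool]? {
    func distance(_ pattern: [Bool]) -> Int {
        zip(probe, pattern).reduce(0) { d, pair in
            guard let bit = pair.0 else { return d }  // unknown bits don't count
            return d + (bit == pair.1 ? 0 : 1)
        }
    }
    return memory.min(by: { distance($0) < distance($1) })
}

let memory: [[Bool]] = [
    [true,  true,  false, false],
    [false, false, true,  true ],
]
// The probe knows two bits; the other two are hallucinated from the best match.
let probe: [Bool?] = [true, nil, false, nil]
print(recall(probe: probe, memory: memory) ?? [])
// [true, true, false, false]
```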