Conceptors

In biological brains, "higher" cognitive control modules regulate "lower" brain layers in many ways. Examples of such top-down processing pathways include triggering motion commands ("reach for that cup"), setting attentional focus ("look closer... there!"), or predicting the next sensory impressions ("oops - that will hit me"). Little is known about the computational mechanisms that implement such top-down governance functions at the neural level. As a consequence, top-down regulation is rarely implemented in machine learning systems based on artificial neural networks. In particular, today's top-performing pattern recognition systems ("deep learning" architectures) do not exploit top-down regulation pathways.

The most recent research line in the MINDS group addresses such top-down governance mechanisms in modular neural learning architectures. We discovered a computational principle, called conceptors, which allows higher neural modules to control lower ones in a dynamical, online-adaptive fashion. The conceptor mechanism lends itself to numerous purposes:

  • A single neural network can learn a large number of different dynamical patterns (e.g. words, or motions).
  • After some patterns have been learnt by a neural network, it can re-generate not only the learnt "prototypes" but also a large collection of morphed, combined, or abstracted patterns.
  • Patterns learnt by a neural network can be logically combined with the operations AND, OR, and NOT, subject to the rules of Boolean logic (a sketch of these operations follows this list). This reveals a fundamental link between the worlds of "subsymbolic" neural dynamics and "symbolic" cognitive operations.
  • This intimate connection between the worlds of neural dynamics and logical-symbolic operations yields novel algorithms and architectures for lifelong learning, signal filtering, attending to particular signal sources ("party talk" effect), and more.
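In formulas: a conceptor is a soft projection matrix C = R (R + α^-2 I)^-1 computed from the correlation matrix R of the network states excited by a pattern, where α is a so-called aperture parameter; the Boolean operations are matrix formulas on such conceptors. Below is a minimal numpy sketch following the definitions in the technical report listed under Resources (the AND formula strictly requires invertible arguments; pseudo-inverses are used here as a practical workaround):

    import numpy as np

    def conceptor(states, aperture):
        # Conceptor C = R (R + aperture^-2 I)^-1, where R is the correlation
        # matrix of the collected network states (one state per column).
        n, L = states.shape
        R = states @ states.T / L
        return R @ np.linalg.inv(R + aperture ** -2 * np.eye(n))

    def NOT(C):
        # Logical negation: the complement of the pattern's state subspace.
        return np.eye(len(C)) - C

    def AND(C, B):
        # C AND B = (C^-1 + B^-1 - I)^-1; pseudo-inverses keep the sketch
        # usable when C or B is not strictly invertible.
        return np.linalg.inv(np.linalg.pinv(C) + np.linalg.pinv(B) - np.eye(len(C)))

    def OR(C, B):
        # De Morgan's rule: C OR B = NOT (NOT C AND NOT B).
        return NOT(AND(NOT(C), NOT(B)))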

In a nutshell, conceptors enable "full top-down logico-conceptual control" of the nonlinear, pattern-generating dynamics of recurrent neural networks. Thanks to its robustness, simplicity, computational efficiency, and versatility, we see the conceptor mechanism as a key to designing flexibly multifunctional neural learning architectures, which will become crucial for future human-computer interaction systems and robots.
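
Concretely, once patterns have been "loaded" into the recurrent weight matrix of the network, a stored pattern can be re-generated autonomously by inserting its conceptor C into the state update, x(n+1) = C tanh(W x(n) + b), and reading the pattern out through trained output weights. Here is a minimal sketch of this control loop, assuming an already loaded weight matrix W, bias b, and readout W_out (names are illustrative; the loading procedure itself is described in the technical report under Resources):

    import numpy as np

    def regenerate(C, W, b, W_out, x0, steps):
        # Autonomous pattern re-generation: the conceptor C gates every
        # state update, confining the dynamics to the state subspace
        # characteristic of the learnt pattern.
        x = x0.copy()
        ys = []
        for _ in range(steps):
            x = C @ np.tanh(W @ x + b)   # conceptor-controlled update
            ys.append(W_out @ x)         # pattern output at this step
        return np.array(ys)

Morphing between two learnt patterns, as in the motion video described below, then amounts to replacing C with a linear blend (1 - m) C1 + m C2 and sliding m between 0 and 1.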

The figure shows snapshots from a movie (available for download in mp4 and mov format, or for viewing on youtube) generated by a conceptor-controlled human motion pattern learning system. A recurrent neural network learnt to re-generate a variety of motion patterns (walking and jogging, dancing, crawling, getting seated and standing up, boxing, gymnastics). Each pattern was defined by joint angle trajectories with 61 degrees of freedom in total. To generate the video, the trained neural pattern generator was controlled "top-down" by activating and morphing a sequence of conceptors, each of which represented one of the learnt "prototype" patterns. (Credits: the training patterns were distilled from human motion capture sequences obtained from the CMU mocap repository; mocap data processing and visualization were done with the mocap toolbox from the University of Jyväskylä.)

Resources

H. Jaeger (2014): Conceptors: an easy introduction. (arXiv) Short, informal, richly illustrated.

H. Jaeger (2014): Controlling Recurrent Neural Networks by Conceptors. Jacobs University technical report No. 31 (195 pages). (arXiv) (Matlab code) Long, detailed, mathy. The first 20 pages provide a self-contained survey.


The word "conceptor" -- historical note

After naming conceptors "conceptors", I found that this word had already been used as the name of an IBM-internal research project led by Nathaniel Rochester in 1955, in which versions of Hebbian cell assemblies were simulated on early IBM computers. A scientific publication summarizing the findings: Rochester, N., et al., "Tests on a cell assembly theory of the action of the brain, using a large digital computer." IRE Transactions on Information Theory 2(3) (1956): 80-93. The project name "CONCEPTOR" is not mentioned in that paper, and I could otherwise find only very fragmentary notes on the project on the internet. The scientific objectives of the historical IBM project and the research described above are unrelated, except that both happen to be based on recurrent neural networks.