Conceptors for controlling nonlinear (neural) dynamics

Motivation and idea

In biological brains, “higher” cognitive control modules regulate “lower” brain layers in many ways. Examples include

  • attentional focussing,
  • predictive priming,
  • context setting,
  • modulating motor output.

Not much is known about the computational mechanisms that implement such top-down governance functions at the neural level, and such mechanisms are rarely realized in artificial neural learning architectures.

Conceptors present a computational principle, developed at MINDS since 2014, which allows higher neural modules to control lower ones in a dynamical, online-adaptive fashion. The core idea is to constrain the geometry of the lower-level system’s dynamics to exactly those regions in its high-dimensional neural state space which are relevant in the current context. Conceptors can be learnt with very elementary and biologically plausible learning rules.
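
For readers who want to see the mechanism in formulas: a matrix conceptor C can be computed from the correlation matrix R of the network states that arise while the network is driven by a pattern, as C = R (R + α^-2 I)^-1, where α > 0 is an “aperture” parameter; inserting C into the state update then confines the dynamics to the state-space region characteristic of that pattern. The Python/NumPy sketch below is only an illustration of this recipe: the toy reservoir, all sizes and parameters, and the sine driver are my own choices, not the code released with the papers; in particular, the random weight matrix here has not been “loaded” with any patterns, so the autonomous run will not actually regenerate the driver.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 100          # reservoir size (illustrative)
    aperture = 10.0  # aperture parameter, called alpha above

    # Toy random reservoir. In the published setups the recurrent weights are
    # additionally trained ("loaded") with the patterns; this is omitted here.
    W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    W_in = rng.normal(0.0, 1.0, (N, 1))
    b = rng.normal(0.0, 0.2, (N, 1))

    def drive(pattern, washout=100):
        """Run the reservoir on a driving signal, collect states after a washout."""
        x = np.zeros((N, 1))
        states = []
        for n, p in enumerate(pattern):
            x = np.tanh(W @ x + W_in * p + b)
            if n >= washout:
                states.append(x.ravel())
        return np.array(states).T          # shape (N, L)

    # Drive the network with a sample pattern and estimate the state correlation.
    pattern = np.sin(2 * np.pi * np.arange(1000) / 9.83)
    X = drive(pattern)
    R = X @ X.T / X.shape[1]

    # Matrix conceptor: C = R (R + aperture^-2 I)^-1
    C = R @ np.linalg.inv(R + aperture ** -2 * np.eye(N))

    # In conceptor-controlled (autonomous) running, every new state is filtered
    # through C, which confines the trajectory to the pattern's state region.
    x = X[:, [-1]]
    for _ in range(50):
        x = C @ np.tanh(W @ x + b)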

Figure: conceptors at work

Conceptors at work: snapshots from a movie generated from a conceptor-controlled human motion pattern learning system. A single recurrent neural network learnt to re-generate a variety of 61-DoF motion patterns (walking and jogging, dancing, crawling, getting seated and standing up, boxing, gymnastics). The network was controlled top-down by activating and morphing a sequence of 15 conceptors, each of which represented one of the learnt prototype patterns.
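
As an aside on what “morphing” means computationally: in the simplest reading, two conceptor matrices are blended linearly and the blend is inserted into the state update, so the network cross-fades between the corresponding movements. The fragment below (Python/NumPy; the names C_walk, C_jog, W, b and x are hypothetical placeholders, not taken from the published code) is only meant to make that idea concrete.

    import numpy as np

    def morph(C1, C2, mu):
        # Linear blend of two conceptor matrices: mu = 0 selects the first
        # pattern, mu = 1 the second, intermediate values give mixtures.
        return (1.0 - mu) * C1 + mu * C2

    # During autonomous generation the blended conceptor gates the state update,
    # for instance
    #     x = morph(C_walk, C_jog, mu) @ np.tanh(W @ x + b)
    # with mu ramped over time to cross-fade between the two learnt movements.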


Current conceptor research at MINDS

  • Lifelong learning in neural networks. A notorious problem for artificial neural networks is “catastrophic forgetting”: when a network is trained first on task A and then on task B, the adaptations induced by B are apt to destructively overwrite A. Conceptors are being exploited by Xu He for continually training deep neural networks on long task sequences A, B, C, …, surpassing continual learning methods from current research in deep learning (ICLR 2017 paper).

  • Stabilizing low-precision neuromorphic hardware. Recurrent neural networks implemented on analog, spiking, memristor-based microchips severely suffer from neural noise and low-precision parameters (often less than 1 bit). In the context of the European NeuRAM3 project (“Neural computing architectures in advanced monolithic 3D-VLSI nano-technologies”), conceptors are employed to dynamically stabilize neural dynamics in the presence of such perturbations.

  • Dendritic processing. In a collaboration with neuroscientist Walter Senn (Univ. Bern, CH), conceptor-based models of dendritic plasticity for learning context-sensitive neural response profiles are being investigated.

  • Neuro-symbolic integration. Conceptors can be combined into new conceptors by AND, OR, and NOT operations (a minimal sketch of these operations follows below this list). This has led to a collaboration with logician and AI researcher Till Mossakowski (Univ. Magdeburg), in which conceptor-based logic formalisms are being developed that provide an insightful link between the “symbolic-logical” and the “subsymbolic-dynamical” descriptions of cognitive dynamics (first results in an AIST 2018 conference paper by Till Mossakowski and Razvan Diaconescu).
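
To make these Boolean operations concrete, here is a minimal Python/NumPy sketch of the combination rules for matrix conceptors (NOT C = I - C, AND via matrix inverses, OR via de Morgan's law). It assumes, for simplicity, that all matrices involved are invertible; the papers also define the operations for the singular case.

    import numpy as np

    def NOT(C):
        # Complement: the eigenvalues s of C are mapped to 1 - s.
        return np.eye(C.shape[0]) - C

    def AND(C1, C2):
        # Conjunction; assumes C1 and C2 are invertible (the general definition
        # in the papers handles singular conceptors as well).
        I = np.eye(C1.shape[0])
        return np.linalg.inv(np.linalg.inv(C1) + np.linalg.inv(C2) - I)

    def OR(C1, C2):
        # Disjunction via de Morgan's law: C1 OR C2 = NOT(NOT C1 AND NOT C2).
        return NOT(AND(NOT(C1), NOT(C2)))

Applying OR to the conceptors of two learnt patterns, for instance, yields a conceptor that admits the state regions of both patterns.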

Papers and code

Jaeger, H. (2014): Controlling Recurrent Neural Networks by Conceptors. Jacobs University Technical Report No. 31 (200 pages) (arXiv, Matlab code). Long, comprehensive, mathy.

Jaeger, H. (2017): Using Conceptors to Manage Neural Long-Term Memories for Temporal Patterns. Journal of Machine Learning Research 18, 1-43 (pdf, Matlab code, 10 MB, including data).

Historical note

After naming conceptors “conceptors”, I found that this word had already been used as the name of an IBM-internal research project led by Nathaniel Rochester in 1955, in which versions of Hebbian cell assemblies were simulated on early IBM computers. A scientific publication summarizing the findings: Rochester, N., et al., “Tests on a cell assembly theory of the action of the brain, using a large digital computer.” IRE Transactions on Information Theory 2(3) (1956): 80-93. The project name “CONCEPTOR” is not mentioned in this paper, and I could not find further information on the internet. The scientific objectives of this historical IBM project and of MINDS conceptor research are unrelated, except that both happen to be based on recurrent neural networks.
