MINDS - Research Overview

Objective: multi-functional, dynamical learning systems

Artificial machine learning systems have recently seen dramatic performance gains ("deep learning" approaches) and have become key enablers for large-scale information systems ("big data"). However, the human brain still surpasses artificial systems by far in two core qualities:

  • Dynamics: natural brains process broadband sensor streams in real time, and continuously generate rich thought and motor output (including speech). In contrast, today's most celebrated machine learning systems process static input patterns, typically images, and the result of processing is again static, namely classification labels.
  • Multi-functionality: A human brain (paired with a body) can do so many things: see, hear, feel, walk, grasp, dance, flee, fight, think, speak, sing .... But research in machine learning is mostly concerned with optimizing single-purpose algorithms, for instance for pattern recognition or time series prediction.

Research in MINDS strives to advance the field of machine learning toward dynamically embedded, multi-functional learning systems -- paving the way for versatile, dynamical learning systems as the next step after "big" (but static) data.

Lines of research

Dynamical and multi-functional learning systems are naturally designed as integrated architectures made from a multitude of adaptive subsystems. The MINDS group researches novel learning algorithms and integration mechanisms for such modular cognitive architectures. An important criterion is that learning algorithms be "vital": computationally cheap and fast to converge; statistically efficient; and robust against noise and parameter variations. In this spirit, research at MINDS unfolds in three main lines:

Conceptors. The dynamics of neural modules in cognitive architectures need to be externally controlled for many purposes: for modulating motor patterns; for focussing attention; for adjusting sensor processing to changing environmental conditions; for loading or recalling working memory items; and many others. A recent discovery in MINDS has established a general, robust, and simple neural mechanism, called conceptors, which can be invoked to serve all of these functions. Furthermore, conceptors obey a number of mathematical laws which are closely related to conceptual knowledge representation formalisms known from logic-based Artificial Intelligence. This reveals that "subsymbolic" neural dynamics and "symbolic" cognitive processes can be regarded as two sides of the same coin. Read more...
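
For illustration, here is a minimal sketch of how a conceptor matrix can be computed from a collection of reservoir states, following the published formula C = R (R + alpha^-2 I)^-1, where R is the state correlation matrix and alpha the "aperture" parameter. Variable names and sizes are illustrative, not taken from MINDS code:

    import numpy as np

    def conceptor(states, aperture):
        # states:   array of shape (N, L) -- N reservoir neurons, L time steps
        # aperture: positive scalar; larger values let the conceptor pass more of
        #           the state space, smaller values constrain it more tightly
        N, L = states.shape
        R = states @ states.T / L                    # state correlation matrix
        return R @ np.linalg.inv(R + aperture ** -2 * np.eye(N))

    # Inserting C into the reservoir update, x = C @ np.tanh(W @ x + W_in @ u),
    # softly projects the dynamics onto the subspace occupied by the stored pattern.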

Reservoir computing. Recurrent neural networks (RNNs) are neural networks that incorporate cyclic feedback pathways. This makes them universal models of dynamical systems: in principle, every dynamical signal processing functionality (even one that relies on memory) can be trained into an RNN. Biological brains as a whole, and all of their subsystems, are RNNs. However, early learning procedures for RNNs were neither accurate nor robust enough to sustain practical applications. This situation has changed over roughly the past decade. One of the main innovations was the advent of "reservoir computing" techniques, which were pioneered by MINDS researchers in the form of Echo State Networks. Read more...
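
The basic echo state network recipe can be sketched in a few lines: a large, fixed, random recurrent network is driven by the input, and only a linear readout is trained. The sizes and scaling constants below are illustrative only:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200                                          # reservoir size (illustrative)

    # Reservoir and input weights are random and stay fixed; only the readout is trained.
    W = rng.normal(size=(N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1
    W_in = rng.uniform(-0.5, 0.5, size=N)

    def collect_states(u):
        # Drive the reservoir with a scalar input sequence u and record its states.
        x = np.zeros(N)
        states = []
        for u_t in u:
            x = np.tanh(W @ x + W_in * u_t)
            states.append(x.copy())
        return np.array(states)

    def train_readout(states, targets, ridge=1e-6):
        # Linear readout weights by ridge regression; this is the only trained part.
        return np.linalg.solve(states.T @ states + ridge * np.eye(N),
                               states.T @ targets)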

Observable operator models. Stochastic time series, such as speech signals or texts, are traditionally learnt using the formalism of hidden Markov models (HMMs). Learning algorithms for HMMs are computationally expensive. An alternative approach, called observable operator models (OOMs), was developed and explored by MINDS researchers. The OOM formalism is based on modelling stochastic processes by sequences of linear operators and is in some respects similar to the formalism of quantum mechanics. OOMs give rise to a novel class of learning algorithms which are computationally much cheaper than current HMM algorithms, while at the same time yielding more accurate results. Read more...
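
To illustrate the operator view, here is a minimal toy sketch in which the observable operators are derived from a small hidden Markov model, and a sequence probability is computed as a product of linear operators, P(s_1 ... s_n) = 1^T tau[s_n] ... tau[s_1] w0. All numbers are made up for illustration and do not come from any MINDS model:

    import numpy as np

    # Two hidden states, two observable symbols {a, b}.
    # T[i, j] = P(next state i | current state j) (column-stochastic);
    # B[s] holds P(emit s | state j) on its diagonal.
    T = np.array([[0.7, 0.4],
                  [0.3, 0.6]])
    B = {'a': np.diag([0.9, 0.2]),
         'b': np.diag([0.1, 0.8])}

    tau = {s: T @ B[s] for s in B}       # one observable operator per symbol
    w0 = np.array([0.5, 0.5])            # initial state distribution
    ones = np.ones(2)

    def sequence_probability(seq):
        # P(s_1 ... s_n) = 1^T tau[s_n] ... tau[s_1] w0
        w = w0
        for s in seq:
            w = tau[s] @ w
        return ones @ w

    print(sequence_probability('aab'))   # probability of observing the string "aab"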