
Echo State Networks (ESNs)

Recurrent neural networks (RNNs) have great potential for "black box" modeling of nonlinear dynamical systems. For a long time, however, this potential was not exploited because simple and powerful algorithms for training RNNs from data were missing.

The "echo state" approach looks at RNNs from a new angle. Large RNNs are interpreted as "reservoirs" of complex, excitable dynamics. Output units "tap" from this reservoir by linearly combining the desired output signal from the rich variety of excited reservoir signals. This idea leads to training algorithms where only the network-to-output connection weights have to be trained. This can be done with known, highly efficient linear regression algorithms.

Echo state networks have easily learned numerous dynamical systems that were difficult to learn with previous methods. These include (long) periodic sequence generators, multistable switches, tunable frequency generators, frequency measurement devices, controllers for nonlinear plants, long short-term memories, dynamical pattern recognizers and, notably, long-term predictors of chaotic attractors. Today ESNs are widely used in dynamical pattern recognition, control, and time series prediction applications.

The basic idea of having a dynamical "reservoir", from which target dynamics of interest are read out by trainable mechanisms, has been independently explored under the name of Liquid State Machines (LSMs) by Wolfgang Maass et al. Their main research objective is the modeling of biological systems; the LSM approach therefore typically employs reservoir networks (called "liquids") built from more biologically adequate, spiking neuron models.

In the first decade of the new millennium, the field has solidified into a research area of its own, now most often referred to as Reservoir Computing. A web portal jointly maintained by the leading groups in this field gives links to tutorials, publications, groups, projects, and events.

The image shows predictions made by ESNs for (a) the Mackey-Glass attractor, (b) the laser dataset, and (c) the Lorenz attractor. Prediction starts at time 0; the time scale counts ESN updates. Green: analytical solution; blue: ESN prediction. The prediction accuracy per time step is on the order of machine precision.
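One common way to obtain such long-range predictions is to train the ESN for one-step-ahead prediction and then run it in "generative" mode, feeding its own output back as the next input. The following self-contained sketch shows the idea on a Mackey-Glass-like series (Python/NumPy; the crude Euler discretization and all network parameters are illustrative choices made here, not the settings behind the figure above).

import numpy as np

rng = np.random.default_rng(0)

# A Mackey-Glass-like series via a crude Euler discretization (illustrative only):
# dx/dt = 0.2 x(t-17) / (1 + x(t-17)^10) - 0.1 x(t)
def mackey_glass(n_steps, tau=17):
    x = np.full(n_steps + tau, 1.2)
    for t in range(tau, n_steps + tau - 1):
        x[t + 1] = x[t] + 0.2 * x[t - tau] / (1 + x[t - tau] ** 10) - 0.1 * x[t]
    return x[tau:]

data = mackey_glass(4000)
train_len, test_len, washout = 3000, 500, 100

# Fixed random reservoir (hypothetical size and scaling)
n_res = 400
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.95 / max(abs(np.linalg.eigvals(W)))

# Drive the reservoir with the training series, collecting states
x = np.zeros(n_res)
states = np.zeros((train_len, n_res))
for t in range(train_len):
    x = np.tanh(W @ x + W_in @ data[t:t + 1])
    states[t] = x

# Ridge-regression readout mapping reservoir state -> next value
X, Y = states[washout:], data[washout + 1:train_len + 1]
w_out = np.linalg.solve(X.T @ X + 1e-8 * np.eye(n_res), X.T @ Y)

# Generative mode: feed the ESN's own prediction back as the next input
u = data[train_len]
predictions = np.zeros(test_len)
for t in range(test_len):
    x = np.tanh(W @ x + W_in @ np.array([u]))
    u = x @ w_out
    predictions[t] = u

error = np.sqrt(np.mean((predictions - data[train_len + 1:train_len + 1 + test_len]) ** 2))
print("free-run RMSE over", test_len, "steps:", error)

With such a small, untuned reservoir the free-running prediction will of course drift from the true trajectory much sooner than in the published results; the sketch only illustrates the training and feedback mechanism.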


Patent note

The Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) holds international patents for the ESN method.

Starter Papers

A Scholarpedia article for a first impression.

Highlight paper: H. Jaeger and H. Haas, Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication. Science 304, 2 April 2004, pp. 78-80 (preprint pdf) (Matlab code zip)

Extensive survey paper: M. Lukoševičius and H. Jaeger (2009),  Reservoir Computing Approaches to Recurrent Neural Network Training. Computer Science Review 3(3), 127-149 (preprint pdf)

The 2009 PhD thesis by David Verstraeten (Ghent University) provides an extensive and very accessible overview of the state of the art, with particular emphasis on application-relevant insights - indispensable reading for serious end users.

Toolboxes and didactic programming examples

The Reservoir Computing web portal maintained at Ghent University collects ESN / reservoir computing toolboxes for various programming environments. If you just want to start playing with ESNs, a collection of very plain (and hence transparent) demo examples has been provided by Mantas Lukoševičius (versions in Matlab, Python, R, and Oger).