Observable operator models (OOMs) are mathematical models of stochastic systems. They have greater expressive power than Hidden Markov Models (HMMs). HMMs are currently widely used, e.g., in biological sequence modeling, engineering, and speech processing, wherever one wishes to model stochastic systems that have memory or other context effects.
OOMs, although superficially similar to HMMs, spring from a very different mathematical idea. Whereas stochastic time series are usually modeled as trajectories in some state space, with observations corresponding to locations in that space, OOMs conceive of a stochastic trajectory as a sequence of operations: each observation corresponds one-to-one to a mathematical action. Hence the name, "observable operator models". It turns out that every stochastic system can be modeled with linear observable operators. This linearity yields a transparent general theory of stochastic systems, which in turn gives rise to learning algorithms that outperform current HMM learning techniques in both speed and model quality.
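The "observations as linear operators" idea can be made concrete in a few lines. In an OOM, each observable symbol a has an associated matrix tau_a, and the probability of a sequence is obtained by applying the corresponding operators to an initial state vector w0 and summing the result: P(a1 ... an) = 1^T tau_an ... tau_a1 w0. The sketch below (all matrices and numbers are illustrative, not from the papers listed here) builds such operators from a small HMM-style transition/emission pair, which guarantees the probabilities are consistent:

```python
import numpy as np
from itertools import product

# Hypothetical 2-state, 2-symbol example; the concrete numbers are illustrative.
# Column-stochastic transition matrix: T[i, j] = P(next state i | current state j).
T = np.array([[0.7, 0.4],
              [0.3, 0.6]])
# Emission probabilities: E[a, j] = P(symbol a | state j); columns sum to 1.
E = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# One linear operator per observable symbol: tau_a = T @ diag(E[a, :]).
# Note sum_a tau_a = T, which makes all sequence probabilities sum to 1.
tau = [T @ np.diag(E[a]) for a in range(2)]

w0 = np.array([0.5, 0.5])   # initial state distribution
ones = np.ones(2)           # evaluation functional sigma = 1^T

def seq_prob(seq):
    """P(a1 ... an) = 1^T tau_an ... tau_a1 w0 (operators applied in order)."""
    w = w0
    for a in seq:
        w = tau[a] @ w
    return ones @ w

# Sanity check: probabilities of all length-3 sequences sum to 1.
total = sum(seq_prob(s) for s in product([0, 1], repeat=3))
print(round(total, 6))  # 1.0
```

Every HMM induces an OOM this way, but the converse fails: some OOMs (with operators containing negative entries) describe processes no finite HMM can generate, which is the sense in which OOMs are strictly more expressive.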
ICML 04 workshop on predictive state representations
Suggested basic reading: H. Jaeger, Observable operator models for discrete stochastic time series. Neural Computation 12 (6), 2000, 1371-1398 (draft version, pdf)
A 20-page tutorial plus a 20-page description of the "Efficiency Sharpening" learning algorithm: H. Jaeger, M. Zhao, K. Kretzschmar, T. Oberstein, D. Popovici, A. Kolling (2006): Learning observable operator models via the ES algorithm. In: S. Haykin, J. Principe, T. Sejnowski, J. McWhirter (eds.), New Directions in Statistical Signal Processing: from Systems to Brain. MIT Press, Cambridge, MA., 417-464 (draft version, pdf)
A rather comprehensive set of tutorial slides (pdf).