Machine Learning Combo Spring 2015

Machine Learning, Lecture (320643), Lab (320641), Project (320544)

Jacobs University Bremen, Spring 2015, Herbert Jaeger

This is a dual undergraduate and graduate course. Undergraduates take the lecture only; graduate students take the Lab + Project combo in addition.

Class Sessions

Lecture: Mondays 11:15 and Wednesdays 14:15, West Hall 4

Lab and Project: to be agreed with participants

Course topics. This is a continuation of the Fall lecture. While in Fall we concentrated on supervised learning, the emphasis will now be on unsupervised learning methods. Generally speaking, in unsupervised learning scenarios the learner just gets a heap of (unlabelled) data and has to "discover" structure / concepts / interpretations / regularities in the data. A good way to understand the nature of unsupervised learning is to see it as finding a way to compress the information in the input data: compression is possible to the extent that redundancies, that is, regularities, have been detected. A second good way to look at unsupervised learning is to see it as the task of finding compact descriptions of probability distributions. That may sound technical, but here is another good way to look at unsupervised learning: look into a mirror! Humans perform unsupervised learning most of the time (because we are learning all the time, but only rarely is a teacher present who "labels" our current sensory input). Methods in unsupervised learning are typically "statistics-flavored" and are often related to methods from statistical physics. The lecture introduces fundamental concepts and illustrates them with a selection of elementary model formalisms (mixtures of Gaussians, Parzen windows, sampling methods, Hopfield networks, Boltzmann machines and their derivatives).

The basic format of the lab (for graduate students) is two miniprojects, each taking 4-5 weeks. Students will get a challenging dataset and a modelling task. The modelling task can be solved by the elementary methods provided in the lecture -- but only poorly. Students are expected to explore more advanced and powerful methods on their own initiative (hints will be given). The project is largely self-steered and comparable in scope to a BSc thesis. Topics come from ongoing research in my group and are agreed on a case-by-case basis.
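To make the "compact description of a probability distribution" view concrete, here is a minimal sketch (in Python with numpy; my own illustration, not course material) of one of the elementary formalisms named above, a Parzen-window density estimate. The toy dataset and the bandwidth h are arbitrary choices made up for the example.

import numpy as np

# Toy unlabelled data: 500 samples from an unknown 1-D distribution
# (here secretly a mixture of two Gaussians -- the learner is not told this).
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(1.5, 1.0, 300)])

def parzen_density(x, samples, h=0.3):
    """Parzen-window (Gaussian kernel) density estimate at query points x."""
    # Place a Gaussian bump of width h on every training sample and average.
    diffs = (x[:, None] - samples[None, :]) / h
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=1) / h

grid = np.linspace(-5.0, 5.0, 11)
print(np.round(parzen_density(grid, data), 3))  # estimated density on a coarse grid

The estimate is "unsupervised" in exactly the sense described above: nothing is labelled, and the learned object is simply a (non-parametric) description of the data distribution.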

Lecture notes. A set of self-contained lecture notes (LNs) will be supplied. Since this is the first time I am teaching the course in this format, the LNs will grow in instalments throughout the semester. Many parts will be adapted from my former graduate lecture notes on "Algorithmical and Statistical Modelling".

Grading and exams. For the lecture, the course grade is computed from classroom participation (5%), homeworks (35%), midterm (25%) and final exam (35%). The lab grading is based on two miniproject reports (50% each). The project grade is based on the project proposal (30%) and the final report (70%).

Helpful materials:

Slides of a Neural Network Course (23 MB) given at the "Interdisciplinary College" 2008

A condensed primer on measure theory and probability theory, by Manjunath Gandhi 

An online textbook on probability theory (by Rick Durrett)

Hints for writing good miniproject reports, or rather, for avoiding typical blunders

For your exam preparation: solved exams from the somewhat similar old lecture on "Algorithmical and Statistical Modeling": final exam 2010, final exam 2007, final exam 2005, midterm 2010, midterm 2010 probability theory part, midterm 2007

References

The online lecture notes are self-contained, and no further literature is necessary for this course. However, if you want to study some topics in more depth, the following are recommended references.

Bishop, Christopher M.: Pattern Recognition and Machine Learning. Springer Verlag, 2006. Quite thick (730 pages) -- more like a handbook for practitioners.

Michie, D., Spiegelhalter, D.J., Taylor, C.C.: Machine Learning, Neural and Statistical Classification (1994). Freely available online at http://www.amsta.leeds.ac.uk/~charles/statlog/ and at the course resource repository. A transparently written book, concentrating on classification. Good backup reading. Thanks to Mantas for pointing this out!

Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Verlag, 2001. IRC: Q325.75 .H37 2001. I found this book only recently and haven't studied it in detail; it looks extremely well written, combining (statistical) maths with applications and principal methods of machine learning, and is full of illuminating color graphics. May become my favourite.