**Classes**: Thursdays 19:15-20:30, West Hall 8 (new room again!)

**Contents**. In this year's edition of the course, we will approach the world of computers and computing from two sides, devoting roughly half of the course to each approach. The first part will be about **bits**. You know that computers are "digital", which means that they do all their internal magic by handling 0's and 1's -- physically switching transistor states between two voltages. Question: How can a computer display videos, access the web, edit Word documents or carry out complex simulations of folding proteins just by juggling 0's and 1's? The answer will be given in the first part of this lecture, namely by an introduction to **Boolean logic**. Boolean logic is a quite accessible branch of mathematics which is about exactly this: how 0's and 1's can be combined into arbitrarily complex, meaningful compounds, and how physical machinery (a "Boolean circuit") can be designed that "computes" with such compounds. -- You have surely noticed that your computer sometimes takes a very long time to respond. Often this is just a bug: somebody did something careless when coding the program that is taking so long. But some computational tasks are inherently slow -- no fast computer program exists for them. Because it is of great importance for program designers to find out whether they have been careless or whether there is simply no way to speed up the program, an entire branch of theoretical computer science is devoted to analysing the inherent complexity of computational tasks. This is **complexity theory**, and we will cast a brief look at it. It turns out that Boolean logic, again, is a key to understanding some riddles of computational complexity. -- The second part of the lecture is about one of the currently hottest application areas of computers, **machine learning**. We will focus on the task of **image recognition** as a model example of machine learning applications. Question: how can a computer program **learn** to recognize images?
Answer: by using a technique called **neural networks**. Neural networks can be regarded as super-abstract computer simulations of brain tissue, and like a biological brain, an artificial neural network can be "trained" to solve a given task. We will learn about the structure of such image-recognizing networks and the procedures by which they can be trained. Interestingly, artificial neural networks can fall prey to the same pitfalls as human learners: they may learn their material in a shallow way, memorizing it by heart without understanding, and later fail when they cannot transfer what they have mechanically memorized to new situations. Analyzing neural network training, we will understand what it means to understand...
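To give a small foretaste of the first part: "computing with 0's and 1's" can already be played with in a few lines of code. The sketch below (Python is used purely for illustration; the expression and all names are invented, not taken from the lecture notes) enumerates the complete truth table of a small Boolean expression:

```python
from itertools import product

# An invented Boolean expression for illustration: (x AND y) OR (NOT z)
def f(x, y, z):
    return (x and y) or (not z)

# Enumerate all 2^3 assignments of 0/1 to the three variables,
# printing one truth-table row per assignment
for x, y, z in product([0, 1], repeat=3):
    print(x, y, z, "->", int(f(x, y, z)))
```

Eight rows, one per assignment -- exactly the "truth tables" that appear in the Sep 8 session.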
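And a hedged taste of the second part: the perceptron and its learning rule appear in the schedule below. The following minimal sketch (the dataset, learning rate and all names are invented for illustration and will differ from the lecture's notation) trains a single perceptron to compute the Boolean AND function:

```python
# Illustrative perceptron sketch: learn Boolean AND from examples
def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]   # weights, initialised to zero (a common simple choice)
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron learning rule: nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training data: the AND function on {0,1} x {0,1}
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, `predict` reproduces AND on all four inputs -- a toy instance of a network "learning" a task from examples rather than being explicitly programmed for it.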

**Prerequisites / self-check**. This course addresses the general Jacobs student audience and there are no formal prerequisites. That said, you should be aware that computer science is rooted in maths. Mathematical formalism will be kept to the absolute minimum, and any math tools beyond high school mathematics will be carefully introduced. However, some degree of abstract formal thinking will be necessary. Quick self-test: if you can make easygoing sense of the sentence *"a function from a set A to a set B assigns to each element of A an element of B"*, then this course will be fine for you. If in doubt, take a look at the lecture notes (links below).

**Lecture notes.** A fully self-contained set of online lecture notes is available (part 1, part 2).

The **teaching assistant** for this course is Jinbo Zhang (jin dot zhang at jacobsyouknowit... ). He will try to answer any question that you ask him!

**Grading.** The course grade is composed of classroom attendance (10%, checked by sign sheets), homework (30%), and a midterm and a final exam (30% each). An exam missed due to illness can be made up, provided that I am notified by the student on the day of the exam at the latest and that I receive a medical excuse through the registrar.

**Schedule (this will be filled in as we go along, in synchrony with reality)**

Sep 1 | Introduction |

Sep 8 | Boolean variables, formulas and functions. Truth tables. Syntax of Boolean expressions. Exercise sheet 1 |

Sep 15 | Interpretations, semantics of Boolean expressions. Induction arguments in general and Blue Eyes in particular. Exercise sheet 2 |

Sep 22 | Tautologies, contradictions, satisfiability. No exercises this week |

Sep 29 | Conjunctive normal form. Knowledge bases. Solution to exercise sheet 2. Exercise sheet 3 |

Oct 6 | The core of "good old-fashioned AI": proving theorems by checking satisfiability. Exercise sheet 4 |

Oct 13 | A curious look at P =? NP. |

Oct 20 | Midterm (30 min, in class). A taste of deep learning |

Oct 27 | Some applications of ML (engineering flavored tasks) |

Nov 3 | Basics of "modeling" - curse of dimensionality |

Nov 10 | The perceptron and its learning rule. A glimpse at what it means to "walk". |

Nov 17 | Multilayer perceptron: architecture and general function approximation |

Nov 24 | MLP: training by gradient descent |

Dec 1 | Final exam (30 min, in class) |

Dec 20 |