Course

Course number 3236 L 10197
Total teaching load 88.00 UE (teaching units)
Semester WS 2019/20
Course format Lecture (LV / Vorlesung)
Group Termingruppe 0
Organizational units Technische Universität Berlin
Fakultät II
↳     Institut für Mathematik
↳         32363000 FG Funktionalanalysis
Contact persons
Raslan, Mones
Responsible Kutyniok, Gitta
Language German

Dates (3)


Tue. 15.10.19 - 11.02.20, weekly, 08:00 - 10:00
Charlottenburg, MA 043
32363000 FG Funktionalanalysis, Technische Universität Berlin
48.00 UE

Thu. 24.10.19 - 14.11.19, weekly, 10:00 - 12:00
Charlottenburg, MA 004
32363000 FG Funktionalanalysis, Technische Universität Berlin
10.67 UE

Thu. 28.11.19 - 06.02.20, weekly, 10:00 - 12:00
Charlottenburg, MA 004
32363000 FG Funktionalanalysis, Technische Universität Berlin
29.33 UE
[Weekly schedule (calendar view): Mathematics of Deep Learning, Termingruppe 0; Tue. in Charlottenburg, MA 043 and Thu. in Charlottenburg, MA 004; lecturer: Kutyniok, Gitta]

Remarks

A manuscript will be developed during the course.

Content

The tremendous success of deep learning (often also referred to as artificial intelligence) in areas such as the speech recognition systems available on every smartphone these days, self-driving cars, health care, or even the prescreening and selection of job applications can be witnessed every day in the media. In fact, deep neural networks are currently entering almost every area of public life at an unprecedented rate. Yet despite the outstanding success of deep neural networks in real-world applications, deep learning remains to some extent a black box and sometimes even shows erratic behavior (so-called "adversarial examples").
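As a brief illustration (a standard formalization, added here for orientation and not part of the original course description): an adversarial example for a classifier $f$ at an input $x$ is a slightly perturbed input that changes the prediction,

$$
x' = x + \delta, \qquad \|\delta\| \le \varepsilon, \qquad f(x') \neq f(x),
$$

where $\varepsilon > 0$ is so small that $x$ and $x'$ are practically indistinguishable, yet the network's output differs.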

From a mathematical viewpoint, deep neural networks are purely mathematical objects. In fact, several very exciting mathematical approaches and results towards a theoretical understanding of deep learning have recently been derived.
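To make this concrete (a standard definition, added here for orientation and not part of the original course description): a feedforward neural network with $L$ layers, weight matrices $W_\ell$, bias vectors $b_\ell$, and an activation function $\rho$ (e.g. the ReLU $\rho(t) = \max\{0, t\}$, applied componentwise) is simply the function

$$
\Phi(x) = W_L\, \rho\bigl( W_{L-1}\, \rho( \cdots \rho( W_1 x + b_1 ) \cdots ) + b_{L-1} \bigr) + b_L .
$$

Questions of expressivity, learning, and generalization in this course concern precisely such functions $\Phi$.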

Moreover, essentially every area of "applied" mathematics, such as partial differential equations or inverse problems, is currently embracing methods based on deep neural networks, which often immediately lead to state-of-the-art approaches. Thus, it is fair to say that deep learning has already established itself as one general key approach in mathematics, or maybe even the key approach.

This course gives an introduction to this exciting area and surveys several results. It will cover the following topics:

  • Theory of deep learning, in particular, expressivity, learning, generalization, and interpretability (including adversarial examples)
  • Application and theory of deep learning for inverse problems
  • Application and theory of deep learning for partial differential equations