Representation Learning
This reading assignment concludes the second part of the course, which largely covered practical methodology; the first part covered common deep neural network designs and architectures.
From the Deep Learning Book - Chapter 12: Applications, please read these sections:
12.1.4 Model Compression
12.3 Speech Recognition
(This section provides a largely historical perspective on the developments in the field so far.)
12.4–12.4.5 Natural Language Processing
with special emphasis on
12.4.3 (High-Dimensional Outputs) and
12.4.5.1 (Attention).
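If you would like a concrete picture of what the attention mechanism in 12.4.5.1 computes before reading it, here is a minimal NumPy sketch (illustrative code, not taken from the book): a decoder query scores each encoder state, the scores are normalized with a softmax, and the context vector is the resulting weighted average of the encoder states.

```python
# Minimal sketch of soft (content-based) attention over encoder states.
# Illustrative only; names and dimensions are made up for this example.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, encoder_states):
    """query: (d,), encoder_states: (T, d) -> (context (d,), weights (T,))."""
    # Score each encoder state by its similarity to the query (dot product here).
    scores = encoder_states @ query          # shape (T,)
    # Normalize the scores into attention weights that sum to one.
    weights = softmax(scores)                # shape (T,)
    # The context vector is a weighted average of the encoder states.
    return weights @ encoder_states, weights

# Example: 5 encoder states of dimension 8, attended to by one decoder query.
rng = np.random.default_rng(0)
states = rng.normal(size=(5, 8))
query = rng.normal(size=8)
context, weights = attend(query, states)
print(weights.round(3), context.shape)
```

The weights show where the model "looks" in the input sequence for this particular query; learned attention simply replaces the fixed dot-product scoring with a trainable function.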
From the Deep Learning Book - Chapter 15: Representation Learning, please read these sections:
15.1 Greedy Layer-Wise Unsupervised Pretraining
(We covered some of this already in the autoencoder session.)
15.2 Transfer Learning and Domain Adaptation
For those of you interested in zero-shot learning, there is a recent blog post on Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System.
15.3 Semi-Supervised Disentangling of Causal Factors
15.4 Distributed Representation
15.6 Providing Clues to Discover Underlying Causes
with emphasis on 15.1, 15.2, and 15.6. Any sections not listed here are optional reading.
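To make the transfer learning setting of 15.2 concrete, below is a minimal, hypothetical PyTorch sketch (not from the book, and the 10-class target task is made up): a network pretrained on one task is reused as a fixed feature extractor, and only a newly added output layer is trained on the target task.

```python
# Minimal transfer-learning sketch (assumes PyTorch and torchvision are installed).
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet.
# (Newer torchvision versions use the `weights=` argument instead of `pretrained=`.)
model = models.resnet18(pretrained=True)

# Freeze the pretrained feature extractor: its representations are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier with a new head for the (hypothetical) 10-class task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head is trained; everything else transfers unchanged.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative update on a dummy batch.
inputs = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()
```

When the target dataset is larger, one would typically unfreeze some or all of the pretrained layers and fine-tune them with a small learning rate instead of keeping them fixed.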
Finally, here is a demonstration of the state of the art in automatic Christmas carol generation and interpretation. I would say there is still quite some room for improvement.
Happy Holidays! See you all in 2017!