This course helps you understand and practice the most recent advances in Machine Learning enabled by neural networks:
- A lecture & a workshop on Neural Networks & Deep Learning:
Artificial Intelligence takes inspiration from the functioning of the human brain, connecting artificial neurons together to form neural networks.
Based on such architectures, Deep Learning has recently gained impressive momentum thanks to new optimization techniques and the massive amounts of data generated by software, the Internet, and connected devices.
This course teaches you the technical principles of Neural Networks and Deep Learning and lets you practice with Keras, one of the most widely used Deep Learning frameworks (a minimal example is sketched below).
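As a first taste of the workshop, here is a minimal sketch (not taken from the course materials) of a small feed-forward network in Keras; the layer sizes, hyperparameters, and synthetic data are illustrative assumptions.

```python
import numpy as np
from tensorflow import keras

# Synthetic binary-classification data (illustrative): 1000 samples, 20 features.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

# Stack fully connected layers: each neuron combines the outputs of the
# previous layer, mirroring the brain-inspired architecture described above.
model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```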
- TensorFlow is one of the most widely used and complete Deep Learning frameworks, letting you build neural networks at scale. After a short introduction, you will discover
and practice with this powerful tool (see the sketch below).
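To illustrate what working at TensorFlow's lower level looks like, here is a minimal sketch (an illustrative assumption, not the course's exact exercise) of automatic differentiation with tf.GradientTape, the mechanism underlying neural-network training.

```python
import tensorflow as tf

w = tf.Variable(3.0)

# Record operations on the tape so TensorFlow can differentiate them.
with tf.GradientTape() as tape:
    loss = w ** 2  # a toy loss: L(w) = w^2

# dL/dw = 2w, so the gradient at w = 3.0 is 6.0.
grad = tape.gradient(loss, w)
print(grad.numpy())  # 6.0
```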
- Convolutional Neural Networks (CNNs) are a set of recent techniques that let neural networks take images, videos, and even natural language as input, and
capture the relationships the different input elements can have at different scales. They are so powerful that CNNs are now in widespread use across
industry, especially in fields such as Computer Vision and Facial Recognition. After a short introduction lecture, you will quickly start practicing with CNNs and the AI framework Keras (a minimal sketch follows).
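Here is a minimal CNN sketch in Keras (illustrative shapes, random data standing in for real images): convolutional filters scan each image locally, and the stacked convolution/pooling layers capture patterns at increasingly larger scales, as described above.

```python
import numpy as np
from tensorflow import keras

# Synthetic "images" (illustrative): 100 samples of 28x28 grayscale, 10 classes.
X = np.random.rand(100, 28, 28, 1).astype("float32")
y = np.random.randint(0, 10, size=(100,))

model = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(16, kernel_size=3, activation="relu"),  # local features
    keras.layers.MaxPooling2D(),                                # coarser scale
    keras.layers.Conv2D(32, kernel_size=3, activation="relu"),  # larger patterns
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),               # class scores
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)
```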
- Recurrent Neural Networks (RNNs) embed feedback loops inside neural networks by letting the outputs of neurons become inputs of neurons in upstream
layers, functioning as memories. Long Short-Term Memory (LSTM) networks are one particular type of RNN that we will study. These techniques are at the forefront of recent advances in
Natural Language Processing. Practice is also key to understanding how to leverage RNNs in various situations, and you will be using the most popular open-source AI framework,
Google TensorFlow (see the LSTM sketch below).
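Here is a minimal LSTM sketch using TensorFlow's Keras API (illustrative sizes, random data): the recurrent layer carries a hidden state across time steps, acting as the "memory" described above.

```python
import numpy as np
from tensorflow import keras

# Synthetic sequences (illustrative): 100 samples, 12 time steps, 8 features each.
X = np.random.rand(100, 12, 8).astype("float32")
y = np.random.randint(0, 2, size=(100,)).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(12, 8)),
    keras.layers.LSTM(32),                        # hidden state acts as memory
    keras.layers.Dense(1, activation="sigmoid"),  # sequence-level prediction
])

model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=2, verbose=0)
```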
- Auto-encoders are neural networks that learn a representation (encoding) of the data in an unsupervised manner, for instance for dimensionality reduction or for removing the signal's "noise". They also often include a
reconstruction part that generates, from the reduced encoding, a representation as close as possible to the original input. Auto-encoders are increasingly used in NLP tasks and in image and text generation (a minimal sketch follows).
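Here is a minimal auto-encoder sketch in Keras (illustrative sizes, random data): the encoder compresses the input to a low-dimensional code, the decoder reconstructs the input from that code, and training minimizes the reconstruction error against the original input, with no labels required.

```python
import numpy as np
from tensorflow import keras

# Unlabeled data (illustrative): 500 samples with 64 features each.
X = np.random.rand(500, 64).astype("float32")

encoder = keras.Sequential([
    keras.layers.Input(shape=(64,)),
    keras.layers.Dense(8, activation="relu"),      # 8-dimensional encoding
])
decoder = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(64, activation="sigmoid"),  # reconstruct the 64 features
])
autoencoder = keras.Sequential([encoder, decoder])

# Unsupervised training: the input is its own reconstruction target.
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, verbose=0)

codes = encoder.predict(X)  # reduced representation for downstream use
```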