Paper: A Brief Survey of Deep Reinforcement Learning
Authors: Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, Anil Anthony Bharath
Submitted: 19 Aug 2017
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
Here is a list of important Deep Learning-related papers. Some fundamental papers can be found here.
It is an awesome age we live in, where the knowledge you need for tomorrow is available for free to everyone (with a computer and an internet connection). There is more to learn than there is time to learn it in, and we can all become experts in our fields. You must, however, find places and situations to put your knowledge into practice so that it does not wane. I think it is awesome that Nvidia has a learning institute with free courses to help you learn cutting-edge stuff. Learning from a company focused on advancing the field, and which only stands to gain from teaching us more, will keep you on the frontiers of the field.
https://github.com/vahidk/EffectiveTensorflow attempts to demystify TensorFlow and provides guidelines and best practices for using it more effectively.
Two years ago, Karpathy wrote a great article about Recurrent Neural Networks.
Here is the link, enjoy: The Unreasonable Effectiveness of Recurrent Neural Networks
Andrew Ng announced that he has launched five new courses in Deep Learning on Coursera.
Each course takes 2-4 weeks of study, at 3-6 hours per week.
- Neural Networks and Deep Learning
- Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
- Structuring Machine Learning Projects
- Convolutional Neural Networks
- Sequence Models
The courses will earn you a certificate and are described as follows:
In five courses, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects. You will learn about Convolutional networks, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. You will work on case studies from healthcare, autonomous driving, sign language reading, music generation, and natural language processing. You will master not only the theory, but also see how it is applied in industry. You will practice all these ideas in Python and in TensorFlow, which we will teach.
Here is a Quora thread on some of the resources you can use to get Deep Learning skills.
If you do not know which tasks to tackle first, you can try one of these: https://openai.com/requests-for-research/, enter a Kaggle competition, or participate in http://course.fast.ai and do the assignments there.
Fast.ai has released the second part of its free Deep Learning for Coders course.
Find it here: http://course.fast.ai/part2.html
If Part 1 enables you to keep up with the state of the art in deep learning, Part 2 takes you to the bleeding edge of the field.
This is, in my opinion, the best free course for getting to the state of the art in deep learning. The site offers a free seven-week learning experience, taught by two-years-in-a-row Kaggle winner, entrepreneur, and generally nice guy Jeremy Howard and math PhD, data scientist, full-stack developer, and Forbes-featured Rachel Thomas, two amazing people in AI. Their approach to teaching Deep Learning for Coders is that it should be accessible to as many people as possible, not to a select few. So instead of abstract, mathy lectures, they let you get your hands dirty from the first lecture and build your intuition for the field, enabling you to create state-of-the-art deep learning solutions from day one.
After starting the course, I immediately realized that these are very talented educators who are sincere about their goal to make AI accessible to everyone, and to make it benefit others. What I especially like about the course is how they approach the topic pedagogically. Their method is inspired by the book "Making Learning Whole: How Seven Principles of Teaching Can Transform Education" by David Perkins. Perkins compares today's education with learning baseball:
If you learned baseball the way that math is taught, you would first learn about the shape of a parabola, then the materials science behind the stitching in baseballs, and so forth. Twenty years later, after you have completed your PhD and post-doc, you would be taken to your first baseball game and introduced to the rules of baseball. And then ten years later you might get to hit. The way baseball is actually taught is that we take the kid down to the baseball diamond and say, "These people are playing baseball, would you like to play?" And they say, "Yeah! Sure I would." "Perfect, stand here, I'm gonna throw this. Hit it. OK, great, now run. Good, you're playing baseball."
That is why, in the first class of the course, they demonstrate seven lines of code that perform state-of-the-art image classification using deep learning. You can classify any images you want, as long as you structure the data the right way. You may not understand most of the code at first, but as you adapt the task to your needs, you will need to learn more details, and thus you learn.
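"Structuring it the right way" refers to the Keras-style directory layout the course relies on: one subfolder per class, split into separate training and validation sets, so the labels come from folder names. A minimal sketch of that layout (the dataset name and paths here are illustrative, matching the cats-vs-dogs example):

```shell
# One directory per class label, under separate train/ and valid/ splits.
mkdir -p data/dogscats/train/cats data/dogscats/train/dogs
mkdir -p data/dogscats/valid/cats data/dogscats/valid/dogs
# Images then go into the folder matching their label, e.g.:
#   data/dogscats/train/cats/cat.0.jpg
#   data/dogscats/train/dogs/dog.0.jpg
```

With this structure in place, swapping in your own classification problem is just a matter of replacing the class folders and their images.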
The course consists of a two-hour lecture each week, detailed lecture notes, a community-contributed wiki, and Jupyter notebooks, in which you will also do your assignments. (There are also setup instructions for getting a GPU-equipped machine up and running on AWS.) In the first week's assignment you will submit an entry to the Kaggle competition for classifying cat and dog images. By applying what you learn, you will outperform what was the state of the art when the competition launched in 2013.
The course is taught by one of the contributors to Keras and TensorFlow.