The video lectures for Stanford's very popular CS231n (Convolutional Neural Networks for Visual Recognition) course, held in Spring 2017, were released this month. (According to their Twitter page, the cs231n website gets over 10,000 views per day. The reading material on the page is really good at explaining CNNs.)
Here are the video lectures:
These are the assignments for the course:
- Assignment #1: Image Classification, kNN, SVM, Softmax, Neural Network
- Assignment #2: Fully-Connected Nets, Batch Normalization, Dropout, Convolutional Nets
- Assignment #3: Image Captioning with Vanilla RNNs, Image Captioning with LSTMs, Network Visualization, Style Transfer, Generative Adversarial Networks
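To give a flavour of Assignment #1, the kNN classifier it asks for can be sketched in a few lines of NumPy. This is only a hedged illustration, not the official starter code (the function name `knn_predict` and its signature are mine):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test point by majority vote among its k nearest
    training points (Euclidean distance)."""
    preds = []
    for x in X_test:
        # Squared Euclidean distance from x to every training point.
        dists = np.sum((X_train - x) ** 2, axis=1)
        nearest = np.argsort(dists)[:k]            # indices of the k closest
        votes = y_train[nearest]
        preds.append(np.bincount(votes).argmax())  # majority label wins
    return np.array(preds)
```

The real assignment additionally asks you to vectorise the distance computation (no Python loop) and to cross-validate over k.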
Also, make sure to check out last year's student reports. Note: one is about improving the state of the art in detecting the Higgs boson.
This Hacker News thread discusses what kind of maths you will need if you pursue AI/machine learning, and why.
Here is a short summary, and I tend to agree. These were mandatory maths courses when I studied CS:
You need to have a solid foundation in:
Good to know:
- Graph theory or discrete math. (No course on Khan Academy for that, but there is one on Great Courses, which isn't free.)
Here are some books:
I like the following quote, which motivates why you will, for instance, need calculus:
Calculus essentially discusses how things change smoothly and it has a very nice mechanism for talking about smooth changes algebraically.
A system which is at an optimum will, at that exact point, be no longer increasing or decreasing: a metal sheet balanced at the peak of a hill rests flat.
Many problems in ML are optimization problems: given some set of constraints, what choices of unknown parameters minimize error? This can be very hard (NP-hard) in general, but if you design your situation to be “smooth” then you can use calculus and its very nice set of algebraic solutions. – Comment by user Tel
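The quote's point can be made concrete with gradient descent, the workhorse of ML optimization: because the loss is smooth, calculus gives us its slope everywhere, and we just walk downhill. A minimal sketch (the function `gradient_descent` and its parameters are my own illustration):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient of a smooth function.
    At a minimum the derivative is zero -- the 'metal sheet resting flat'
    from the quote above."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimise the smooth error f(x) = (x - 3)^2, whose derivative is 2(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

After enough steps `x_min` sits where the derivative vanishes, at x = 3.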
It could be very motivating for students first starting with calculus, linear algebra and statistics to have an idea of the fields in which they can practically use them later on.
Abstract: A simple way to improve classification performance is to average the predictions of a large ensemble of different classifiers. This is great for winning competitions but requires too much computation at test time for practical applications such as speech recognition. In a widely ignored paper in 2006, Caruana and his collaborators showed that the knowledge in the ensemble could be transferred to a single, efficient model by training the single model to mimic the log probabilities of the ensemble average. This technique works because most of the knowledge in the learned ensemble is in the relative probabilities of extremely improbable wrong answers. For example, the ensemble may give a BMW a probability of one in a billion of being a garbage truck but this is still far greater (in the log domain) than its probability of being a carrot. This “dark knowledge”, which is practically invisible in the class probabilities, defines a similarity metric over the classes that makes it much easier to learn a good classifier. I will describe a new variation of this technique called “distillation” and will show some surprising examples in which good classifiers over all of the classes can be learned from data in which some of the classes are entirely absent, provided the targets come from an ensemble that has been trained on all of the classes. I will also show how this technique can be used to improve a state-of-the-art acoustic model and will discuss its application to learning large sets of specialist models without overfitting. This is joint work with Oriol Vinyals and Jeff Dean.
Lecture notes: http://www.ttic.edu/dl/dark14.pdf
Paper: A Brief Survey of Deep Reinforcement Learning
Authors: Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, Anil Anthony Bharath
Submitted: 19 Aug 2017
Read the PDF
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
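Underneath the deep Q-network mentioned in the abstract sits the classic Q-learning update; the DQN simply replaces the table with a neural network trained on the same bootstrapped target. A minimal tabular sketch (function name and the alpha/gamma values are my own illustration, not from the survey):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step: nudge Q(s, a) toward the target
    r + gamma * max_a' Q(s', a'). Q is a (num_states, num_actions) array."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```

Value-based deep RL methods differ mainly in how this target is estimated and stabilised (replay buffers, target networks), while the policy-based methods the survey covers optimise the policy directly.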
Here is a list of important Deep Learning related papers. Some fundamental Papers can be found here.
Regardless of your approach, running deep learning requires resources. One of the reasons for its current success is that the last two of the three key components have emerged.
The three key components are a universal algorithm, data and compute.
Compute is available, but at a cost.
There are three possibilities I am considering:
– Buy or build your own GPU-powered server, for about $1700.
– Continue on the AWS p2 for about a dollar an hour, backed by an old desktop computer with a decent GPU (it is slightly slower than the p2, and occasionally runs out of memory).
– Follow this tutorial and get an AWS p2 as an EC2 Spot instance, where you bid for “unused” resources. According to the author of that post, the AWS bills are a tenth of those for the normal AWS p2.
The most fun alternative is to build your own machine: you can make it upgradable, and you don't have to worry about forgetting to shut down a running cloud instance.
This free course by Udacity and NVIDIA teaches you how to get your mind around parallel programming with the GPU. You will use the CUDA programming environment (which you also use in deep learning) to run your processing on the GPU, and you will have access to high-end GPU machines. The things you will learn center around image processing.
It is an awesome age we live in, where the knowledge you need for tomorrow is available for free to everyone (with a computer and an internet connection). There is more to learn than there is time to learn it in. We can all become experts in our fields. You must, however, find places and situations to put your knowledge into practice so that it will not wane. I think it is awesome that NVIDIA has a learning institute with free courses to help you learn cutting-edge stuff. Learning from a company focused on advancing the field, one that only stands to gain from teaching us more, will keep you on the frontiers of the field.
You need to put 20% of your learning time into math in order to get great at machine learning. Linear algebra and statistics are two very important topics to cover. Here is a fast.ai course on Computational Linear Algebra, taught in a different way: it is very hands-on, and you will be programming.
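As a taste of the code-first style such a course takes, here is ordinary least squares done directly with NumPy. This example is mine, not from the fast.ai materials:

```python
import numpy as np

# Fit a line y = w0 + w1 * x by least squares.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 + 3.0 * x                            # noiseless data, known answer

A = np.column_stack([np.ones_like(x), x])    # design matrix [1, x]
w, *_ = np.linalg.lstsq(A, y, rcond=None)    # solves min ||A w - y||^2
```

With clean data the recovered weights `w` are exactly the intercept 2 and slope 3 we generated the data with; the point is that one line of linear algebra replaces a page of hand-derived formulas.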