Here you go: https://www.scipy-lectures.org/
Github user andri27-ts has put together material for learning Deep Reinforcement Learning in 60 days. If you find DeepMind's breakthroughs with AlphaGo Zero and OpenAI's Dota 2 bots fascinating and want to learn how they work, the repository offers resources and project suggestions.
Here is a nice collection of Deep Learning resources including tutorials, papers and courses. Enjoy:
Last year's fast.ai course (2017), which taught state-of-the-art deep learning to coders, was amazing. The blog post about the Fast.ai 2018 launch, which is available now, is full of goodies. This year they held the course using PyTorch instead of Keras and wrote their own library to speed up development, and they were the first to add several implementations from papers to the library, such as the learning rate finder (Smith, 2015) and Stochastic Gradient Descent with Restarts (SGDR). With one line of code, you can also get the images that the classifier gets wrong.
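To give a feel for what SGDR does, here is a minimal sketch of its cosine-annealing-with-restarts schedule in plain Python. This is my own toy illustration of the formula from the SGDR paper, not fast.ai's actual implementation; the function and parameter names are made up for this example.

```python
import math

def sgdr_lr(step, cycle_len, lr_max=0.1, lr_min=0.001):
    """Cosine-annealed learning rate that restarts every cycle_len steps."""
    t = step % cycle_len  # position within the current cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / cycle_len))

# At the start of each cycle the rate is lr_max; it decays toward lr_min,
# then "restarts" back up to lr_max at the next cycle boundary.
schedule = [sgdr_lr(s, cycle_len=10) for s in range(30)]
```

The periodic restarts let the optimiser escape sharp minima and settle into broader, better-generalising ones.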
17 of the top 20 participants in a Kaggle competition were students in the preview of the course.
I recommend reading the blog post and taking the course.
There is a new kid on the block in terms of online courses on Deep Learning.
DeepSchool.io is a set of Jupyter notebooks that teach you the basics and the different concepts you need in order to get started and become productive in Deep Learning. There are also videos supporting the notebooks, although not for every notebook yet.
It differs from fast.ai in that the videos are shorter and the notebooks are mostly self-explanatory.
The goal of the project is to make Deep Learning accessible to everyone, to make it practical, and to make learning open source and fun.
These are the topics covered:
- Lesson 0: Introduction to regression.
- Lesson 1: Penalising weights to fit better (scikit-learn intro)
- Lesson 2: Gradient Descent. Using basic optimisation methods.
- Lesson 3: Tensorflow intro: zero hidden layer networks (i.e. normal regression).
- Lesson 4: Tensorflow hidden layer introduction.
- Lesson 5: Using Keras to simplify multi-layer neural nets.
- Lesson 6: Embeddings to deal with categorical data. (Keras)
- Lesson 7: Word2Vec. Embeddings to visualise words. (Tensorflow)
- Lesson 8: Application – Bike Sharing predictions
- Lesson 9: Choosing Number of Layers and more
- Lesson 10: XGBoost – A quick detour from Deep Learning
- Lesson 11: Convolutional Neural Nets (MNIST dataset)
- Lesson 12: CNNs and BatchNormalisation (CIFAR10 dataset)
- Lesson 13: Transfer Learning (Dogs vs Cats dataset)
- Lesson 14: LSTMs – Sentiment analysis.
- Lesson 15: LSTMs – Shakespeare.
- Lesson 16: LSTMs – Trump Tweets.
- Lesson 17: Trump – Stacking and Stateful LSTMs.
- Lesson 18: Fake News Classifier
You can read more here.
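To give a flavour of the early lessons, here is a minimal sketch of the kind of basic optimisation Lesson 2 covers. This is my own toy example, not code from the notebooks: minimising f(x) = (x − 3)² by repeatedly stepping against the derivative.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient to approach a minimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f(x) = (x - 3)**2 has derivative 2 * (x - 3) and its minimum at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The same update rule, applied to millions of weights at once, is what trains the neural networks in the later lessons.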
Here is a good tutorial explaining how convolutions work:
Here is an example of a convolution with half (one) padding and stride 2
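Following the convolution-arithmetic convention that tutorial uses, the output side length is ⌊(i + 2p − k)/s⌋ + 1 for input size i, kernel size k, padding p and stride s. A quick sketch (my own helper function, assuming a 3×3 kernel for the "half padding" case):

```python
def conv_output_size(i, k, p, s):
    """Output side length for input i, kernel k, padding p, stride s."""
    return (i + 2 * p - k) // s + 1

# Half (one) padding means p = 1 for a 3x3 kernel; with stride 2,
# a 5x5 input produces a 3x3 output.
size = conv_output_size(i=5, k=3, p=1, s=2)
```

Plugging in your layer sizes before coding a network saves a lot of shape-mismatch debugging.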
In two days I was able to listen through half of CS231n in my spare time by watching the YouTube videos at higher than normal speed.
Nowadays I always listen to YouTube videos at 2x or 3x speed.
With the normal settings you can set the speed up to 2x. If you want to play the video faster than that, you need to add a plugin or bookmarklet.
You can drag these links to your bookmarks bar and use them as speed buttons to adjust the playback speed of your YouTube videos…
x1 x2 x2.5 x3 x3.25 x3.5 x4
Also, check out this video on how to learn advanced concepts fast:
This Hacker News thread discusses why and what kind of maths you will need if you pursue AI/Machine learning.
Here is a short summary, and I tend to agree. These were mandatory math courses when I studied CS:
You need to have a solid foundation in:
Good to know:
- Graph theory or discrete math. (No course on Khan Academy for that, but there is one on The Great Courses, which isn't free.)
Here are some books:
- “Information Theory, Inference, and Learning Algorithms” by David MacKay.
- “Probability Theory: the Logic of Science” by E. T. Jaynes.
- “The Elements of Statistical Learning” by Hastie, Tibshirani and Friedman is also good.
- “Bayesian Data Analysis” by Andrew Gelman is another great read.
- “Deep Learning” by Ian Goodfellow, Yoshua Bengio and Aaron Courville.
I like the following quote motivating why you will, for instance, need calculus:
Calculus essentially discusses how things change smoothly and it has a very nice mechanism for talking about smooth changes algebraically.
A system which is at an optimum will, at that exact point, be no longer increasing or decreasing: a metal sheet balanced at the peak of a hill rests flat.
Many problems in ML are optimization problems: given some set of constraints, what choice of unknown parameters minimizes error? This can be very hard (NP-hard) in general, but if you design your situation to be “smooth” then you can use calculus and its very nice set of algebraic solutions. – Comment by user Tel
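The quote's "flat at the peak of a hill" picture is just the first-order optimality condition. In one smooth dimension, for example:

```latex
f(x) = (x - a)^2, \qquad f'(x) = 2(x - a) = 0 \;\Longrightarrow\; x = a
```

At the optimum the derivative vanishes, and finding points where gradients vanish (or stepping toward them) is exactly what the calculus-based optimisers in ML do.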
It could be very motivating for students, when they first start with calculus, linear algebra and statistics, to have an idea of the fields in which they can practically use them later on.
It is an awesome age we live in, where the knowledge you need for tomorrow is available for free to everyone (with a computer and an internet connection). There is more to learn than there is time to learn it in. We can all become experts in our fields. You must, however, find places and situations to put your knowledge into practice so that it will not wane away. I think it is awesome that Nvidia has a learning institute with free courses to help you learn cutting-edge stuff. Learning from a company focused on advancing the field, and which only stands to gain from teaching us more, will keep you on the frontiers of the field.
You need to put 20% of your learning time into math in order to get great at machine learning. Linear algebra and statistics are two very important topics to cover. Here is a fast.ai course on Computational Linear Algebra taught in a different way. It is very hands-on. You will be programming.
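In that hands-on spirit, here is a small NumPy sketch of the kind of problem computational linear algebra covers: fitting a line by least squares. This is my own example, not code from the course.

```python
import numpy as np

# Ten points lying exactly on the line y = 2x + 1; we want to recover
# the slope and intercept from the data alone.
x = np.arange(10, dtype=float)
y = 2 * x + 1

# Build the design matrix [x, 1] and solve min ||A @ coef - y||_2.
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # coef ≈ [2.0, 1.0]
```

Linear regression, PCA and even parts of deep learning reduce to matrix factorisations and solves like this one, which is why the course is worth the time.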