Unity Machine Learning Agents. Super awesome, possibly terrifying

Unity has released a new SDK supporting machine learning agents in the Unity game engine. This enables you to:

  • Study complex multi-agent behaviors in realistic competitive and cooperative scenarios. This is a lot safer than doing it with robots.
  • Study multi-agent behavior for industrial robotics, autonomous vehicles and other applications in a realistic environment.
  • Create intelligent agents for your games.

https://blogs.unity3d.com/2017/09/19/introducing-unity-machine-learning-agents/

The benefit of this is that it will be a lot easier to develop and test learning algorithms that can later be used in real life. There is also a potential danger: in the same way that we can test industrial robots, autonomous vehicles and so on before porting them into the real world, we will inevitably see very smart AI agents driving the opponents in realistic war games, and these can also be ported into the real world.

Since deep reinforcement learning already beats top human players in a growing number of games, the more realistic the games get, the scarier it becomes to imagine what would happen if you plugged such an AI into fighter jets or autonomous tanks.
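The SDK also exposes the environments to Python for training, at least in the initial release. The interaction loop looks roughly like the sketch below; the module, attribute and environment names are taken from that release and may have changed, so treat them as assumptions.

```python
# Rough sketch of stepping a Unity ML-Agents environment from Python.
# Based on the 2017-era `unityagents` package; names may have changed since,
# and the environment binary ("3DBall") is a placeholder for your own build.
import numpy as np
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="3DBall")
brain_name = env.brain_names[0]

action_size = 2  # placeholder: use the action size your brain actually reports
info = env.reset(train_mode=True)[brain_name]
for step in range(1000):
    # Random policy as a stand-in for your learning algorithm of choice.
    action = np.random.randn(len(info.agents), action_size)
    info = env.step(action)[brain_name]
    if all(info.local_done):
        info = env.reset(train_mode=True)[brain_name]

env.close()
```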

Pretend that you are teaching yourself from two weeks ago

This blog is not intended to draw crowds. In fact, this is my current visitor tsunami:

The purpose of this blog is to be my personal notebook, a tool for helping me remember “what was that link to that page now again?”. Also, if you want to learn, you have to teach others, and if you have no platform to teach from, you can blog. There has been a barrier for me to post stuff online, and that is that I think it needs to be perfect. One tends to imagine a certain visitor group and what they will think if you write this or that, or if you don’t know something that should be obvious. Lower the bar, I say. Imagine yourself from two weeks or a month ago and explain the stuff to him. He is all ears and would actually benefit from the stuff you have to say. Also, he tends to like the things you like. So my recommendation to you, younger self, is to start putting your thoughts into text, and don’t be afraid of what people will think.

Have a nice day.

Google Brain AMA 2017 TL;DR

The Google Brain team did an AMA (Ask Me Anything) on Reddit. This is the tl;dr:

  • They think PyTorch (made by people at Facebook) is great and that its developers did a good job with it. It is also good that many people make machine learning libraries, since you learn from each other when developing your own.
  • Some of the hurdles in machine learning are making deep networks stable, and the fact that many of the newer breakthroughs such as GANs or deep RL have yet to have their ‘batch normalization’ moment (that one idea that makes everything work without having to fight it). Moving away from supervised learning will also be difficult, as will building systems that solve many problems instead of one.
  • Geoffrey Hinton’s capsules are coming along fine. They have a paper at NIPS on it.
  • They talked about some failures and stuff that hadn’t worked.
  • Their work days involve a lot of reading papers.
  • They recommend using the highest-level API that solves your problem; that way you get best practices for free (see the short sketch after this list).
  • The line between AI engineer and research scientist is blurry.
  • Give researchers access to more computation power and they will accomplish more.
  • PhD scientists go through the same interview pipeline as all devs.
  • Robotics will benefit from the fact that we now have perception.
  • A good way to learn is to read papers and re-implement them. If you want to learn a variety of ML topics, pick papers that cover different topics such as image classification, language modeling, GANs etc. If you want to become an expert in one subfield, pick a bunch of related papers.
  • People are excited about: efficient large-scale optimization, building a theoretical foundation for deep learning, human/AI interaction, bridging the gap between the real world and simulation, imitation learning, generating long structured documents with long-term dependencies in them, and tools.
  • People from many different backgrounds can apply to g.co/brainresidency, provided that you have an interest in AI/ML.
  • Learning tips: the TensorFlow tutorials, Geoff Hinton’s Coursera course, Vincent Vanhoucke’s Udacity course, Kaggle (a great site with lots of ML competitions), and Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville.
  • You should probably use a GAN if you want to generate samples of continuous valued data or if you want to do semi-supervised learning, and you should use a VAE or FVBN if you want to use discrete data or estimate likelihoods.
  • In biology and genomics, they are involved in a variety of research projects, such as predicting diabetic retinopathy status from fundus images, identifying cancerous cells in pathology images, and using deep learning to call genetic variants in next-generation DNA sequencing data. They even have a recently created Genomics team focused on applying TensorFlow, and extending it where necessary, to genomics problems. Other teams around Google and Alphabet, such as Google Accelerated Sciences, Verily Life Sciences, and Calico, also apply deep learning techniques to biological data.
  • They like fast.ai and would complement it with the Deep Learning textbook, The Elements of Statistical Learning, Hugo Larochelle’s online course, the deep learning summer series, and blog posts like distill.pub and Sebastian Ruder’s blog.
  • You’re welcome for TensorFlow.
  • They keep up with what’s happening in the field through: papers published in top ML conferences, Arxiv Sanity, the “My Updates” feature on Google Scholar, research colleagues pointing out and discussing interesting pieces of work, and interesting-sounding work discussed on Hacker News or the subreddit.
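As a concrete illustration of the “use the highest-level API” advice (my own example, not code from the AMA): with something like the Keras Sequential API, weight initialization, the training loop, batching and metrics are all handled for you.

```python
# Minimal example of leaning on a high-level API (Keras) instead of writing
# your own training loop. Purely illustrative; not code from the AMA.
import numpy as np
from tensorflow import keras

# Toy data: 1000 samples, 20 features, binary labels.
x = np.random.randn(1000, 20).astype("float32")
y = (x.sum(axis=1) > 0).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
# compile() and fit() give you sensible defaults for initialization,
# optimization and evaluation -- the "best practices for free" part.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, validation_split=0.2)
```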

Tensorflow Object Detection API

Google has released an open-source framework built on top of TensorFlow, called the TensorFlow Object Detection API, which makes it easy to build and deploy object detection models.

There are several state-of-the-art model types you can build. If you, for instance, build models using the Single Shot MultiBox Detector (SSD) with MobileNets, you get lightweight models that you can run in real time on mobile devices.

The models you get are Single Shot MultiBox Detectors using MobileNets or Inception V2, Region-Based Fully Convolutional Networks (R-FCN) with ResNet 101, and Faster R-CNN with ResNet 101 or Inception ResNet v2.

You also get a Jupyter notebook for trying things out.
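The notebook essentially loads a pre-trained frozen graph and runs it on a few images. Condensed from memory, the inference part looks roughly like the sketch below; treat the model path as a placeholder and double-check the tensor names against the current release.

```python
# Rough sketch of running a pre-trained detection model with the
# TensorFlow Object Detection API (TF 1.x style). The model path is a
# placeholder; tensor names follow the released demo notebook.
import numpy as np
import tensorflow as tf
from PIL import Image

PATH_TO_FROZEN_GRAPH = "ssd_mobilenet_v1_coco/frozen_inference_graph.pb"

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

image = np.expand_dims(np.array(Image.open("test.jpg")), axis=0)

with tf.Session(graph=graph) as sess:
    boxes, scores, classes, num = sess.run(
        [graph.get_tensor_by_name("detection_boxes:0"),
         graph.get_tensor_by_name("detection_scores:0"),
         graph.get_tensor_by_name("detection_classes:0"),
         graph.get_tensor_by_name("num_detections:0")],
        feed_dict={graph.get_tensor_by_name("image_tensor:0"): image})

print(scores[0][:5], classes[0][:5])  # top detections for the first image
```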

 

If all the terms above make no sense, you can read this excellent blog post explaining deep learning for object detection by Joyce Xu.

Why fast.ai switched from Keras and TensorFlow to PyTorch and built their own framework on top of it

In the new fast.ai course they will be using PyTorch instead of TensorFlow, and they have built a framework on top of it to make it even easier to use than Keras.

PyTorch builds its computation graphs dynamically instead of statically, and Jeremy writes that nearly all of the recent top-10 Kaggle competition winners have been using PyTorch.

In part 2 of the fast.ai course the focus was on letting students read and implement recent research papers, and PyTorch made this easier due to its flexibility. It allowed them to try out things you could not do as easily with TensorFlow. It also makes it easier to understand what is going on in the algorithms, since with TensorFlow the computation becomes a black box once you send it to the GPU.
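To illustrate what “dynamic” buys you, here is a small PyTorch module (my own example, not from the fast.ai post) whose forward pass is plain Python control flow; you can drop a print statement or a debugger breakpoint anywhere in it:

```python
# Illustrative dynamic-graph example: the branching below is ordinary Python,
# decided per input at run time, and every intermediate value is inspectable.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.small = nn.Linear(10, 1)
        self.big = nn.Sequential(nn.Linear(10, 50), nn.ReLU(), nn.Linear(50, 1))

    def forward(self, x):
        # Route the batch through a different branch depending on its norm --
        # awkward to express in a static graph, trivial here.
        if x.norm() > 5:
            return self.big(x)
        return self.small(x)

net = TinyNet()
print(net(torch.randn(4, 10)))  # debug/inspect like any other Python code
```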

According to the post, most models train faster on PyTorch than on TensorFlow and are easier to debug, both of which contribute to faster development iterations.

The reason they built a framework on top of PyTorch is that PyTorch comes with fewer defaults than Keras, and they want part 1 of the course to be accessible to students with little or no experience in machine learning. They also wanted to help you avoid common pitfalls (such as not shuffling the data when you should, or vice versa), get you going much faster, and build in many best practices where Keras was lacking. Jeremy writes that:

“We built models that are faster, more accurate, and more complex than those using Keras, yet were written with much less code.” – Jeremy Howard

The approach is to encapsulate all important data choices, such as preprocessing, data augmentation, test/training/validation sets, multi-class/single-class classification, regression and so on, into object-oriented classes.

“Suddenly, we were dramatically more productive, and made far less errors, because everything that could be automated, was automated.” – Jeremy Howard

Jeremy thinks that deep learning will see the same kind of library/framework explosion that front-end developers have gotten used to over the last few years, so the library you learn today will probably be obsolete in a year or two.

99.3% accuracy on dogs vs. cats with 3 lines of code is not bad:
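Reconstructed from memory rather than copied from the screenshot, the three lines look roughly like this in the early fastai library (names have changed in later versions, and the data path and image size are placeholders):

```python
# Approximately the dogs-vs-cats example from the early fastai library.
# Reconstructed from memory; not the exact code in the screenshot.
from fastai.conv_learner import *  # the early course notebooks used star imports

PATH, sz = "data/dogscats/", 224  # placeholder data folder and image size
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(resnet34, sz))
learn = ConvLearner.pretrained(resnet34, data, precompute=True)
learn.fit(0.01, 3)  # learning rate 0.01, 3 epochs
```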

How To Learn Fast

In two days I was able to listen through half of CS231n in my spare time by watching the YouTube videos at higher than normal speed.

Nowadays I always watch YouTube videos at 2x or 3x speed.

With the normal player settings you can set the speed up to 2x. If you want to play the video faster than that, you need a plugin or a bookmarklet.

You can drag these links to your bookmarks bar and use them as speed buttons to adjust the playback speed of your YouTube videos…


x1 x2 x2.5 x3 x3.25 x3.5 x4


Also, check out this video on how to learn advanced concepts fast:

Hardware for DNN Tutorial slides

The Energy-Efficient Multimedia Systems (EEMS) group at MIT has a Tutorial on Hardware Architectures for Deep Neural Networks. Here is the website: http://eyeriss.mit.edu/tutorial.html

All Slides in one PDF.

The slides explain a lot about convolutional neural networks and how they work. They also cover many different topics, ranging from the architectures of the ImageNet winners and how their success correlates with the use of GPUs, to the details of the computation itself: what is computed where, and how.

If you wish to check out each individual topic, the material is also split into separate slide decks:

  • Background of Deep Neural Networks [ slides ]
  • Survey of DNN Development Resources [ slides ]
  • Survey of DNN Hardware [ slides ]
  • DNN Accelerator Architectures [ slides ]
  • Advanced Technology Opportunities [ slides ]
  • Network and Hardware Co-Design [ slides ]
  • Benchmarking Metrics [ slides ]
  • Tutorial Summary [ slides ]
  • References [ slides ]

Want to train a deep neural net for a self driving car?

If you don’t have the time or money to spend on Udacity’s Self-Driving Car Nanodegree, perhaps you still want to try to make a car drive by itself. Since your real car is probably not ideal for training the algorithms, you can use the simulator provided by Udacity, which runs in Unity.

https://github.com/udacity/self-driving-car-sim

Check out this blog post for some tips: Training a deep learning model to steer a car in 99 lines of code.
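The basic recipe in that kind of tutorial is behavioral cloning: record camera images and steering angles while you drive in the simulator, then train a CNN to regress the steering angle from the image. Below is a minimal Keras sketch of the idea; it is my own simplified version, not the referenced 99-line model, and the NVIDIA-style layer sizes, input resolution and data loading are all assumptions.

```python
# Minimal behavioral-cloning sketch: predict a steering angle from a camera frame.
# Simplified illustration, not the model from the linked post. Assumes you have
# already loaded simulator frames into `images` and steering angles into `angles`.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

images = np.zeros((100, 66, 200, 3), dtype="float32")  # placeholder frames
angles = np.zeros((100,), dtype="float32")              # placeholder labels

model = keras.Sequential([
    layers.Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)),  # normalize
    layers.Conv2D(24, 5, strides=2, activation="relu"),
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 5, strides=2, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(10, activation="relu"),
    layers.Dense(1),  # steering angle (regression, so no activation)
])
model.compile(optimizer="adam", loss="mse")
model.fit(images, angles, epochs=5, validation_split=0.2)
```

In the simulator you would then feed each incoming frame through the trained model and send the predicted angle back as the steering command.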

Stanford CS231n 2017 – Convolutional Neural Networks for Visual Recognition

The video lectures for Stanford’s very popular CS231n (Convolutional Neural Networks for Visual Recognition), which was held in spring 2017, were released this month. (According to their Twitter page, the CS231n website gets over 10,000 views per day. The reading material on their page is really good at explaining CNNs.)

Here are the video lectures:

 

These are the assignments for the course:

 

Also make sure to check out last year’s student reports. Note: one is about improving the state of the art in detecting the Higgs boson.

The maths you will need for AI/Machine Learning

This Hacker News thread discusses why and what kind of maths you will need if you pursue AI/Machine learning.

Here is a short summary, and I tend to agree. These were mandatory maths courses when I studied CS:

You need to have a solid foundation in:

  • Linear algebra
  • Calculus
  • Probability and statistics

Good to know:

  • Graph theory or discrete math (no course on Khan Academy for that, but there is one on The Great Courses, which isn’t free)

Here are some books:

I like the following quote motivating why you will, for instance, need calculus:

Calculus essentially discusses how things change smoothly and it has a very nice mechanism for talking about smooth changes algebraically.
A system which is at an optimum will, at that exact point, be no longer increasing or decreasing: a metal sheet balanced at the peak of a hill rests flat.
Many problems in ML are optimization problems: given some set of constraints, what choices of unknown parameters minimizes error? This can be very hard (NP-hard) in general, but if you design your situation to be “smooth” then you can use calculus and its very nice set of algebraic solutions. – Comment by user Tel
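As a tiny concrete example of that idea (my own illustration, not from the thread): gradient descent just keeps nudging a parameter in the direction where the error decreases, and it stops making progress exactly where the derivative is zero.

```python
# Toy illustration of "optimization via calculus": minimize f(w) = (w - 3)^2
# by repeatedly stepping against the derivative. The minimum is where
# f'(w) = 2(w - 3) = 0, i.e. at w = 3.
w = 0.0
learning_rate = 0.1
for step in range(50):
    grad = 2 * (w - 3)          # derivative of (w - 3)^2
    w -= learning_rate * grad   # step "downhill"
print(w)  # converges toward 3, where the derivative is zero
```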


It could be very motivating for students, when they first start with calculus, linear algebra and statistics, to have an idea of the fields in which they can practically use them later on.