Unity Machine Learning Agents. Super awesome, possibly terrifying

Unity has released a new SDK that supports machine learning agents in the Unity game engine. This enables you to:

  • Study complex multi-agent behaviors in realistic competitive and cooperative scenarios. This is a lot safer than doing it with physical robots.
  • Study multi-agent behavior for industrial robotics, autonomous vehicles and other applications in a realistic environment.
  • Create intelligent agents for your games.

https://blogs.unity3d.com/2017/09/19/introducing-unity-machine-learning-agents/

The benefit is that it will be much easier to develop and test learning algorithms that can later be used in real life. There is also a potential danger: in the same way that we can test industrial robots, autonomous vehicles and so on in simulation before porting them to the real world, we will inevitably see very smart AI agents driving opponents in realistic war games, and these can be ported to the real world too.

Deep reinforcement learning already beats the best human players in a growing number of games, so the more realistic the games become, the scarier it is to imagine what would happen if you plugged such an AI into fighter jets or autonomous tanks.

Google Brain AMA 2017 TL;DR

The Google Brain team did an AMA (Ask Me Anything) on Reddit. This is the TL;DR:

  • They think PyTorch (made by people at Facebook) is great and that its authors did a good job with it. It is also a good thing that many people build machine learning libraries, since you learn from each other when developing your own.
  • Some of the hurdles in machine learning are making deep networks stable, and the fact that many of the newer breakthroughs such as GANs or deep RL have yet to have their ‘batch normalization’ moment (that one idea that makes everything work without having to fight it). Moving away from supervised learning will also be difficult. Another challenge is building systems that solve many problems instead of one.
  • Geoffrey Hinton’s capsules are coming along fine. They have a NIPS paper on them.
  • They talked about some failures and stuff that hadn’t worked.
  • Their work days involve a lot of reading papers.
  • They recommend using the highest-level API that solves your problem; that way you get best practices for free.
  •  The line between AI engineer and research scientist is blurry.
  • Give researchers access to more computation power and they will accomplish more.
  • PhD scientists go through the same interview pipeline as all devs
  • Robotics will benefit from the fact that we now have perception
  • A good way to learn is to read papers and re-implement them. If you want to learn a variety of ML topics, pick papers that cover different areas such as image classification, language modeling, GANs, etc. If you want to become an expert in one subfield, pick a bunch of related papers.
  • People are excited about: efficient large-scale optimization, building a theoretical foundation for deep learning, human/AI interaction, bridging the gap between the real world and simulation, imitation learning, generating long structured documents with long-term dependencies, and tools.
  • The Brain Residency (g.co/brainresidency) is open to people from many different backgrounds, provided you have an interest in AI/ML.
  • Learning tips: the TensorFlow tutorials, Geoff Hinton’s Coursera course, Vincent Vanhoucke’s Udacity course, Kaggle (a great site with lots of ML competitions), and Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville.
  • You should probably use a GAN if you want to generate samples of continuous valued data or if you want to do semi-supervised learning, and you should use a VAE or FVBN if you want to use discrete data or estimate likelihoods.
  • In biology and genomics, they are involved in a variety of research projects, such as predicting diabetic retinopathy status from fundus images, identifying cancerous cells in pathology images, and using deep learning to call genetic variants in next-generation DNA sequencing data. They also have a recently created Genomics team focused on applying TensorFlow, and extending it where necessary, to genomics problems. Other teams around Google and Alphabet, such as Google Accelerated Sciences, Verily Life Sciences, and Calico, also apply deep learning techniques to biological data.
  • They like fast.ai and would complement it with the Deep Learning textbook, The Elements of Statistical Learning, Hugo Larochelle’s online course, the Deep Learning Summer School series, and blog posts like distill.pub and Sebastian Ruder’s blog.
  • You are welcome for TensorFlow.
  • They keep up with what’s happening in the field via papers published in top ML conferences, Arxiv Sanity, the “My Updates” feature on Google Scholar, research colleagues pointing out and discussing interesting work, and interesting-sounding work discussed on Hacker News or this subreddit.

Why fast.ai switched from Keras and TensorFlow to PyTorch and built their own framework on top of it

In the new fast.ai course they will be using PyTorch instead of TensorFlow, and they have built a framework on top of it to make it even easier to use than Keras.

PyTorch builds its computation graph dynamically (define-by-run) rather than statically, and Jeremy Howard writes that nearly all of the recent top-10 Kaggle competition winners have been using PyTorch.

In part 2 of the fast.ai course the focus was on getting students to read and implement recent research papers, and PyTorch made this easier due to its flexibility. It allowed them to try things that were not as easy to do with TensorFlow. It also makes it easier to understand what is going on in the algorithms, whereas with TensorFlow the computation becomes a black box once you send it to the GPU.
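Concretely, “dynamic” means the graph is rebuilt by ordinary Python code on every forward pass, so loops, if-statements and debugger breakpoints just work. Here is a minimal sketch of my own (not code from the fast.ai post) to illustrate the point:

```python
import torch
from torch import nn

class DynamicNet(nn.Module):
    """Toy network whose depth is decided at runtime by ordinary Python code."""
    def __init__(self, dim=16):
        super().__init__()
        self.layer = nn.Linear(dim, dim)

    def forward(self, x, n_steps):
        # The graph is rebuilt on every call, so a plain Python loop
        # (or an if-statement, or a breakpoint) behaves exactly as expected.
        for _ in range(n_steps):
            x = torch.relu(self.layer(x))
        return x.sum()

model = DynamicNet()
loss = model(torch.randn(4, 16), n_steps=3)  # depth chosen per call
loss.backward()                              # gradients flow through the loop
```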

Most models train faster on PyTorch than on TensorFlow and are easier to debug, which contributes to faster development iterations.

The reason they built a framework on top of PyTorch is that PyTorch comes with fewer defaults than Keras, and they want course one to be accessible to students with little or no experience in machine learning. They also wanted to help students avoid common pitfalls (such as not shuffling the data when needed, or vice versa), get going much faster, and benefit from built-in best practices that Keras was lacking. Jeremy writes:

“We built models that are faster, more accurate, and more complex than those using Keras, yet were written with much less code.” – Jeremy Howard

The approach is to encapsulate all important data choices, such as preprocessing, data augmentation, test/training/validation sets, multiclass/single-class classification, regression and so on, into object-oriented classes.

“Suddenly, we were dramatically more productive, and made far less errors, because everything that could be automated, was automated.” – Jeremy Howard
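For illustration, encapsulation of this kind might look roughly like the class below. This is a hypothetical sketch of the idea, not the actual fastai API; the names and defaults are mine:

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class ImageData:
    """Illustrative container bundling the data decisions the course automates.

    Not the real fastai API; just the shape of the idea: preprocessing,
    augmentation and shuffling choices are made once and reused everywhere.
    """
    def __init__(self, train_dir, valid_dir, size=224, augment=True):
        base = [transforms.Resize((size, size)), transforms.ToTensor()]
        train_tfms = ([transforms.RandomHorizontalFlip()] if augment else []) + base
        self.train_ds = datasets.ImageFolder(train_dir, transforms.Compose(train_tfms))
        self.valid_ds = datasets.ImageFolder(valid_dir, transforms.Compose(base))

    def loaders(self, batch_size=64):
        # Training data is shuffled, validation data is not -- one of the
        # "common pitfalls" this kind of wrapper removes from the student's plate.
        return (DataLoader(self.train_ds, batch_size, shuffle=True),
                DataLoader(self.valid_ds, batch_size, shuffle=False))
```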

Jeremy thinks that deep learning will see the same kind of library/framework explosion that front-end developers have become used to over the last few years, so the library you learn today will probably be obsolete in a year or two.

99.3% accuracy on dogs vs. cats with three lines of code is not bad:
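The snippet is roughly the following, reconstructed from the early fastai library; exact names and arguments may differ from what the post shows:

```python
# Roughly the three-line dogs-vs-cats example (early, 0.7-era fastai API;
# reconstructed from memory, so treat names and arguments as approximate).
from fastai.conv_learner import *

PATH, sz = 'data/dogscats/', 224   # assumed dataset location and image size
data  = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(resnet34, sz))
learn = ConvLearner.pretrained(resnet34, data, precompute=True)
learn.fit(0.01, 3)                 # three epochs at learning rate 0.01
```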

AI wins against the best professional Dota players

OpenAI developed an AI that beats the best professional Dota 2 players in the world in 1v1 games. It does not use imitation learning or tree search. Instead it learns by playing against a copy of itself, continuously improving. The game is very complicated, and if you coded the AI by hand you would probably end up with quite a poor player; by having the computer teach itself to play, it discovers a lot of tactics on its own.
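OpenAI has not published the training code, but the core self-play idea can be sketched in a few lines. Everything below is a generic illustration (placeholder `agent` and `env` objects, made-up thresholds), not OpenAI's actual setup:

```python
import copy

def self_play_train(agent, env, episodes=10_000, refresh_winrate=0.7):
    """Generic self-play loop: train `agent` against a frozen copy of itself.

    `env.play(a, b)` is assumed to return +1 if `a` wins the 1v1 game, and
    `agent.update(result)` to run whatever RL update you use
    (e.g. a policy-gradient step). Both are placeholders.
    """
    opponent = copy.deepcopy(agent)     # frozen copy of the current self
    wins = games = 0
    for _ in range(episodes):
        result = env.play(agent, opponent)
        agent.update(result)
        wins += result > 0
        games += 1
        if games >= 100 and wins / games > refresh_winrate:
            # The learner now reliably beats its old self, so freeze a new
            # copy and keep training against an opponent of its own level.
            opponent = copy.deepcopy(agent)
            wins = games = 0
```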

Read more at:
https://blog.openai.com/dota-2/

The blog post also shows a number of tactics it learned entirely by itself.

DeepMind and Blizzard release StarCraft II as an AI research environment

AIs learning to play Atari games were very impressive, and beating Go champions was an eye-opener to the world. Now DeepMind, together with Blizzard, has released StarCraft II as an AI research environment.
It will be very interesting to see what happens and to try it out.

I have attempted to create AI scripts for Age of Empires II (which is the best game ever, by the way), and there are quite good scripts for it. They are however limited by the API that the AOE2 scripting engine exposes: the scripts are just looped over and over again, and whenever a rule's condition is met, that rule is executed, as the sketch below illustrates.
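In pseudo-Python, that style of scripted AI boils down to something like this (a generic illustration of condition-action rules, not the real AOE2 scripting language):

```python
# Generic illustration of a condition-action rule script (not the real
# AOE2 scripting API): the engine sweeps the rule list forever and fires
# every rule whose condition currently holds.
rules = [
    (lambda s: s["food"] >= 50 and s["villagers"] < 30, "train-villager"),
    (lambda s: s["enemy_at_gate"],                      "attack-now"),
]

def run_script(state, act):
    # No planning or learning: the same fixed rules are re-evaluated every pass.
    while state["game_running"]:
        for condition, action in rules:
            if condition(state):
                act(action)
```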

In this case, you get half a million anonymized game replays, a machine learning API, and a connection between DeepMind's toolset and Blizzard's API.

It will be very interesting to see how deep learning takes this on.
I imagine we will see pro-like reactions used against human tactics. When you are scripting an AI, for instance for AOE2, you need to take a whole range of tactics into account, and once you know how an AI script behaves you can beat it quite easily. Even though the "new" AI script made for the latest AOE2 HD release is considered very difficult, you can beat it by tower rushing, making it impossible for the computer to gain an economic advantage since the towers keep it from gathering resources. The benefit of the AI is often that it can multitask.

I can imagine that with deep reinforcement learning the computer will generate tactics that counter pro gamers. I guess, however, that it will take a year or two before we see deep learning beat the pros.

I hope to see some very interesting games…

On the other hand, I am not sure it is such a good idea to put AI research effort into developing machine learning for war strategy.

Here is the paper.

Andrew Ng’s deeplearning.ai has released a five-course Deep Learning Specialization on Coursera

Andrew Ng announced that he has launched five new deep learning courses on Coursera.

Each course takes 2-4 weeks of study, at 3-6 hours of study per week.

  1. Neural Networks and Deep Learning
  2. Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
  3. Structuring Machine Learning Projects
  4. Convolutional Neural Networks
  5. Sequence Models

The courses will earn you a certificate and are described as follows:

In five courses, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects. You will learn about Convolutional networks, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. You will work on case studies from healthcare, autonomous driving, sign language reading, music generation, and natural language processing. You will master not only the theory, but also see how it is applied in industry. You will practice all these ideas in Python and in TensorFlow, which we will teach.

Hybrid Reward Architecture breaks the AI and human world record for Ms. Pac-Man

Maluuba, an AI startup acquired by Microsoft, achieved the highest possible score (999,990) in the notoriously hard-to-beat Ms. Pac-Man. It used a divide-and-conquer style of reinforcement learning in which each positive or negative reward-giving element in the game is assigned its own agent, which suggests the move that is best for reaching that particular local goal. A "manager" agent receives these suggestions and decides which move the player should perform in order to achieve the maximum overall reward, roughly as sketched below.
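A simplified sketch of that aggregation step (my own illustration of the idea from the paper; the `q_values` interface is assumed, not Maluuba's code):

```python
import numpy as np

def hra_choose_action(agents, state, actions):
    """Simplified Hybrid-Reward-Architecture-style action selection.

    `agents` is a list of per-reward-source estimators (e.g. one per pellet,
    fruit, or ghost in Ms. Pac-Man); each is assumed to expose a
    q_values(state, actions) method returning one score per action.
    The 'manager' simply sums the heads and picks the best combined move.
    """
    total = np.zeros(len(actions))
    for agent in agents:
        total += agent.q_values(state, actions)   # each head scores every move
    return actions[int(np.argmax(total))]         # best move for the summed reward
```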

http://www.maluuba.com/blog/2017/6/14/hra

 

Read the paper here.