The Energy-Efficient Multimedia Systems (EEMS) group at MIT has a Tutorial on Hardware Architectures for Deep Neural Networks. Here is the website: http://eyeriss.mit.edu/tutorial.html
All Slides in one PDF.
The slides explain convolutional neural networks and how they work in some depth. They also cover a range of topics, from the architectures of the ImageNet winners to how their success correlates with the use of GPUs for processing. The later parts go into more detail on the computation itself: what is computed, where, and how.
If you wish to check out each individual topic, the tutorial is also split into several separate slide decks:
- Background of Deep Neural Networks [ slides ]
- Survey of DNN Development Resources [ slides ]
- Survey of DNN Hardware [ slides ]
- DNN Accelerator Architectures [ slides ]
- Advanced Technology Opportunities [ slides ]
- Network and Hardware Co-Design [ slides ]
- Benchmarking Metrics [ slides ]
- Tutorial Summary [ slides ]
- References [ slides ]
Regardless of your approach, running deep learning requires resources. One reason for its current success is that the last two of its three key components have become widely available. The three key components are a universal algorithm, data, and compute.
Compute is available, but at a cost.
I am considering three possibilities:
– Buy or build your own GPU-powered server, for about $1700.
– Continue on an AWS p2 for about a dollar an hour, supplemented by an old desktop computer with a decent GPU (it is slightly slower than the p2, and occasionally runs out of memory).
– Follow this tutorial and run the AWS p2 as an EC2 Spot Instance, where you bid on “unused” capacity. According to the author of that post, the AWS bills come to about a tenth of those for a normal on-demand AWS p2.
The most fun alternative is to build your own machine: it is upgradable, and you never have to worry about forgetting to shut down a running cloud instance.
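A rough break-even calculation makes the trade-off between the options concrete. This sketch just plugs in the approximate figures from the list above ($1700 build cost, about $1.00/hour on-demand, and spot pricing at roughly a tenth of that); actual AWS prices vary by region and over time.

```python
# Break-even estimate: when does a one-time home build beat hourly cloud rental?
# Figures are the rough ones quoted in this post, not current AWS prices.
BUILD_COST = 1700.0            # USD, one-time (own GPU server)
P2_HOURLY = 1.00               # USD/hour, on-demand p2 (approximate)
SPOT_HOURLY = P2_HOURLY / 10   # spot instance, ~a tenth of on-demand

breakeven_on_demand = BUILD_COST / P2_HOURLY   # hours until the build pays off
breakeven_spot = BUILD_COST / SPOT_HOURLY      # same, against spot pricing

print(f"vs on-demand: {breakeven_on_demand:.0f} h "
      f"(~{breakeven_on_demand / 24:.0f} days of continuous use)")
print(f"vs spot:      {breakeven_spot:.0f} h "
      f"(~{breakeven_spot / (24 * 365):.1f} years of continuous use)")
```

At these assumed prices the build only pays off after about 1700 hours of on-demand use, and far later against spot pricing, so the spot route is hard to beat on cost alone.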
This free course by Udacity and NVIDIA teaches you how to think about and write parallel programs for the GPU. You will use the CUDA programming environment (the same one used in deep learning) to offload your processing onto the GPU, and you will have access to high-end GPU machines. The material centers around image processing.
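The reason image processing is such a good fit for GPU parallelism is that many operations are computed independently per pixel, so each pixel can be handled by its own GPU thread. The course implements such kernels in CUDA C; the sketch below only illustrates the data-parallel pattern in NumPy, with assumed BT.601 luma weights for a grayscale conversion.

```python
import numpy as np

def to_grayscale(rgb):
    # Each output pixel depends only on the corresponding input pixel,
    # so every pixel could be computed by a separate GPU thread.
    # Weights are the common ITU-R BT.601 luma coefficients (an assumption;
    # the course may use different ones).
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

img = np.zeros((2, 2, 3))
img[0, 0] = [1.0, 1.0, 1.0]   # one white pixel in a black image
gray = to_grayscale(img)
print(gray[0, 0])             # close to 1.0: white maps to full luma
```

In CUDA the loop over pixels disappears entirely: the kernel body is the per-pixel computation, and the hardware runs one thread per pixel.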