Regardless of your approach, running deep learning requires resources. One reason for its current success is that the last two of its three key components have become widely available.
The three key components are universal algorithms, data, and compute.
Compute is available, but at a cost.
There are three possibilities I am considering:
– Buy or build your own GPU-powered server, for about $1700.
– Continue on the AWS p2 at about a dollar an hour, supplemented by an old desktop computer with a decent GPU (it is slightly slower than the p2, and occasionally runs out of memory).
– Follow this tutorial and get an AWS p2 as an EC2 Spot instance, where you bid for “unused” resources. According to the author of that post, the AWS bill comes to about a tenth of the bill for a normal on-demand p2.
The most fun alternative is to build your own machine: it is upgradable, and there is no running cloud instance to forget to shut down.
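The trade-off between the options above comes down to a simple break-even calculation. Here is a rough sketch, assuming an on-demand p2 rate of about $0.90/hr (the "about a dollar an hour" above) and a spot rate of roughly a tenth of that; both figures are approximations, not quoted AWS prices:

```python
# Rough break-even sketch for the three options.
SERVER_COST = 1700.0              # one-time cost of a home GPU server
ON_DEMAND_RATE = 0.90             # assumed $/hr for an on-demand AWS p2
SPOT_RATE = ON_DEMAND_RATE / 10   # spot bills reportedly ~1/10th of on-demand

def break_even_hours(hourly_rate):
    """Hours of cloud usage after which the home server pays for itself."""
    return SERVER_COST / hourly_rate

print(f"vs on-demand p2: {break_even_hours(ON_DEMAND_RATE):.0f} hours")
print(f"vs spot p2:      {break_even_hours(SPOT_RATE):.0f} hours")
```

Under these assumptions the server pays for itself after roughly 1900 hours against on-demand pricing, but only after nearly 19,000 hours against spot pricing, which is why the spot route is so tempting despite the risk of being outbid.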