{"id":235,"date":"2017-08-14T18:32:04","date_gmt":"2017-08-14T18:32:04","guid":{"rendered":"http:\/\/bskog.com\/ai\/?p=235"},"modified":"2017-08-14T18:32:04","modified_gmt":"2017-08-14T18:32:04","slug":"ai-wins-agains-the-best-professional-dota-players","status":"publish","type":"post","link":"http:\/\/bskog.com\/ai\/2017\/08\/14\/ai-wins-agains-the-best-professional-dota-players\/","title":{"rendered":"AI wins against the best professional Dota players"},"content":{"rendered":"<p>OpenAI developed an AI that beats the best professional Dota 2 players in the world in 1-on-1 games. It does not learn through imitation learning or tree search. Instead, it learns by playing against a copy of itself, continuously improving. The game is highly complex, and a hand-coded AI would likely be a rather poor player. By having the computer teach itself to play, it discovers many tactics on its own.<\/p>\n<p>Read more at:<br \/>\n<a href=\"https:\/\/blog.openai.com\/dota-2\/\">https:\/\/blog.openai.com\/dota-2\/<\/a><\/p>\n<p>Here are some tactics it learned by itself:<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI developed an AI that beats the best professional Dota 2 players in the world in 1-on-1 games. It does not learn through imitation learning or tree search. Instead, it learns by playing against a copy of itself, continuously improving. 
The game is highly complex, and a hand-coded AI &hellip; <\/p>\n<p class=\"link-more\"><a href=\"http:\/\/bskog.com\/ai\/2017\/08\/14\/ai-wins-agains-the-best-professional-dota-players\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;AI wins against the best professional Dota players&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21,24,29],"tags":[],"class_list":["post-235","post","type-post","status-publish","format-standard","hentry","category-news","category-openai","category-reinforcement-learning"],"_links":{"self":[{"href":"http:\/\/bskog.com\/ai\/wp-json\/wp\/v2\/posts\/235","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/bskog.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/bskog.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/bskog.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/bskog.com\/ai\/wp-json\/wp\/v2\/comments?post=235"}],"version-history":[{"count":0,"href":"http:\/\/bskog.com\/ai\/wp-json\/wp\/v2\/posts\/235\/revisions"}],"wp:attachment":[{"href":"http:\/\/bskog.com\/ai\/wp-json\/wp\/v2\/media?parent=235"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/bskog.com\/ai\/wp-json\/wp\/v2\/categories?post=235"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/bskog.com\/ai\/wp-json\/wp\/v2\/tags?post=235"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}