This is how Tian and Zhu begin. They start with a database of some 250,000 real Go games, using 220,000 of them as a training database and the rest to test the neural network's ability to predict the next moves that were played in real games.
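To make that concrete, here is a minimal sketch of what "predicting the next move" looks like as a supervised learning problem. This is not Tian and Zhu's actual architecture or data pipeline; the encoding (one plane per 19x19 board position), the tiny convolutional network, and the placeholder random data are all assumptions for illustration.

```python
# Illustrative sketch only -- not Tian and Zhu's actual network or data.
# Assumes each position is a 19x19 board plane and the task is to predict
# the human player's next move as one of 361 board points.
import torch
import torch.nn as nn

class MovePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 19 * 19, 361),  # one logit per board point
        )

    def forward(self, boards):
        return self.net(boards)

# Placeholder data standing in for positions drawn from the game database;
# `next_moves` are the moves the human players actually made.
positions = torch.randn(1000, 1, 19, 19)
next_moves = torch.randint(0, 361, (1000,))

# Split into training games and held-out games, mirroring the idea of
# training on 220,000 games and testing on the remainder.
train_x, test_x = positions[:880], positions[880:]
train_y, test_y = next_moves[:880], next_moves[880:]

model = MovePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(train_x), train_y)
    loss.backward()
    optimizer.step()

# Evaluation: how often does the network's top choice match the human move?
with torch.no_grad():
    accuracy = (model(test_x).argmax(dim=1) == test_y).float().mean()
print(f"held-out next-move accuracy: {accuracy:.2%}")
```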
Pointer from Tyler Cowen. Quite a while ago, I thought that computers could master Go, in part by doing something like this.
On another AI note, this article says,
Tenenbaum and colleagues tested the approach by having both humans and the software draw new characters after seeing one handwritten example, and then asking a group of people to judge whether a character was written by a person or a machine. They found that fewer than 25 percent of judges were able to tell the difference.
To me, the program did not sound like the great breakthrough that the media are touting. But I am not an expert in the field of AI, just an opinionated observer.
The results in the first article are not particularly surprising. If you feed a deep learning algorithm 220,000 examples of human-played games, it's going to do okay. This is almost standard stuff at this point.
The second article REALLY IS a breakthrough. Having computers learn from extremely limited examples is one of the holy grails of AI.