I attended a talk and two panels, one of which was moderated by Alex Tabarrok.
1. Susan Athey said that companies like Facebook and Google are learning rapidly by doing many large randomized controlled trials. This gives them a way to leverage their leadership positions. It suggests that “deep learning” might boost economies of scale.
2. Colin Allen suggested that if self-driving cars are programmed to stop for pedestrians, and pedestrians know this, pedestrians could become more reckless and aggressive. Hmmm.
There’s a fine free Coursera course on Machine Learning (by Andrew Ng, now at China’s Baidu) that can teach a lot even if the (quite tough) problem sets aren’t all solved.
I’m quite sure that the large randomized trials on many potential customers will result in temporarily (permanently?) more effective advertising, as well as better search results.
However, for creating Artificial Intelligence to solve problems that humans aren’t solving so well right now, the key need is massive training data: many input variations whose “correct” outputs are known. After training comes held-out input with the “correct” output known, to be compared with what the AI outputs. And even after this, the AI can encounter input outside of its training data, which it may then hugely misinterpret.
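A minimal sketch of that workflow (the toy curve-fitting task and scikit-learn are my own assumptions, not anything from the talk): train on labeled inputs, check against held-out inputs with known answers, then watch what happens on an input unlike anything in the training data.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(1000, 1))             # input variations seen in training
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)  # "correct" outputs for those inputs

    # Train on most of the data, then check against held-out inputs with known answers.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = make_pipeline(PolynomialFeatures(degree=8), LinearRegression())
    model.fit(X_train, y_train)
    print("held-out R^2:", model.score(X_test, y_test))    # high: test inputs resemble training

    # Input far outside the training range: the model still answers, but wildly wrong.
    print("prediction at x=25:", model.predict([[25.0]]))  # true value is sin(25), about -0.13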
In the machine learning course, one learns that the size of the training data tends to matter more than the particular tech and algorithms.
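One toy way to see that (synthetic data, numbers invented for illustration): hold the model fixed and grow only the training set, and the held-out accuracy climbs with data size rather than with any change of algorithm.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic classification data, purely for illustration.
    X, y = make_classification(n_samples=20000, n_features=40, n_informative=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Same simple model throughout; only the amount of training data changes.
    for n in (100, 1000, 5000, 15000):
        model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
        print(f"{n:>6} training examples -> test accuracy {model.score(X_test, y_test):.3f}")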
My own work-hobby is teaching an AI to become an English Tutor, so I’m learning more about IBM’s Watson right now (lots of free info & courses), but it’s going pretty slowly, since I do have a day job, plus my blog-reading addiction. Getting training input for any AI is a huge undertaking – definitely an economy of scale. However, there’s also a big space for open “training data”.
The 60 million games of Go that were used to train the AI Go player are probably somewhat or very much publicly available. Similarly for chess games.
I still haven’t heard of a “game playing” AI that can play various poker and other card games, as well as chess & Go, at a very high level.
Japanese children were filmed deliberately getting in the way of mobile robots (which were trying to serve food?) when their parents weren’t watching. Robots programmed to stop for pedestrians will sometimes be blocked, and those that cannot defend themselves WILL be attacked when authorities are not looking.
Now I’m thinking that if there are cameras that activate “in self-defense”, the non-attacking robot might capture and broadcast pictures of the offenders, including facial recognition and Facebook identification, plus (small?) fines for harassment. Perhaps mere public shaming would be enough to reduce the human anti-robot antics to a low enough level to be easily tolerable.
“Colin Allen suggested that if self-driving cars are programmed to stop for pedestrians, and pedestrians know this, pedestrians could become more reckless and aggressive.”
In turn suggesting that a utilitarian* AI might decide to deliberately take one of these reckless pedestrians out, pour décourager les autres (to discourage the others)…
*(wrt human utility, that is).
I’ve already seen the reckless pedestrian thing in Mountain View.
(2) good! Turning over city streets to cars was a contentious mistake, largely forgotten https://www.researchgate.net/publication/236825193_Street_Rivals_Jaywalking_and_the_Invention_of_the_Motor_Age_Street
What exactly does deep learning have to do with large randomized controlled trials? Deep learning doesn’t mean large datasets. The experiments can (and probably should) be analyzed without any AI or ML techniques.
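For what it’s worth, the analysis of one such trial needs nothing fancier than a classic contingency-table test; a sketch with invented counts (SciPy assumed):

    # A hypothetical A/B test (counts invented for illustration), analyzed with a
    # plain two-sample test for a difference in conversion rates; no ML involved.
    from scipy.stats import chi2_contingency

    table = [[480, 9520],    # control:   480 conversions, 9,520 non-conversions
             [560, 9440]]    # treatment: 560 conversions, 9,440 non-conversions
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"control 4.8% vs treatment 5.6%, p-value = {p_value:.4f}")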