The Journal of Economic Perspectives, which Timothy Taylor has been editing since its inception, has a symposium on robotics. One of the articles is by Gill A. Pratt.
The exponential growth in computing and storage performance has led researchers to explore memory-based methods of solving the perception, planning, and control problems relevant to the development of additional degrees of robot autonomy. Instead of decomposing these tasks into a set of hand-coded algorithms customized for particular circumstances, large numbers of memories of prior experiences can be searched, and a solution based on matching prior experience is used to guide response.
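To make the memory-based idea concrete, here is a minimal sketch of what "matching prior experience to guide response" can look like in practice: store past (situation, action) pairs and answer a new situation by retrieving the closest stored ones. The class, the toy obstacle data, and the nearest-neighbor voting are all illustrative assumptions, not anything from Pratt's article.

```python
import numpy as np

class ExperienceMemory:
    """Illustrative memory-based controller: no hand-coded rules,
    just retrieval over stored prior experiences."""

    def __init__(self):
        self.situations = []   # feature vectors describing past situations
        self.actions = []      # the action taken in each situation

    def record(self, situation, action):
        self.situations.append(np.asarray(situation, dtype=float))
        self.actions.append(action)

    def respond(self, situation, k=3):
        # Guide the response by matching against prior experience:
        # take the most common action among the k nearest stored situations.
        query = np.asarray(situation, dtype=float)
        dists = [np.linalg.norm(query - s) for s in self.situations]
        nearest = np.argsort(dists)[:k]
        votes = [self.actions[i] for i in nearest]
        return max(set(votes), key=votes.count)

# Toy usage: a robot that has previously seen obstacles on the left or right.
memory = ExperienceMemory()
memory.record([1.0, 0.0], "steer_right")   # obstacle on the left
memory.record([0.9, 0.1], "steer_right")
memory.record([0.0, 1.0], "steer_left")    # obstacle on the right
memory.record([0.1, 0.9], "steer_left")
print(memory.respond([0.8, 0.2]))          # matches the "obstacle on left" cases
```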
… human beings communicate externally with one another relatively slowly, at rates on the order of 10 bits per second. Robots, and computers in general, can communicate at rates over one gigabit per second—or roughly 100 million times faster. Based on this tremendous difference in external communication speeds, a combination of wireless and Internet communication can be exploited to share what is learned by every robot with all robots. Human beings take decades to learn enough to add meaningfully to the compendium of common knowledge. However, robots not only stand on the shoulders of each other’s learning, but can start adding to the compendium of robot knowledge almost immediately after their creation.
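The knowledge-sharing point can also be sketched in a few lines: each robot pushes whatever it learns to a shared store, and a newly built robot starts with the entire fleet's accumulated experience rather than learning from scratch. The in-memory dictionary below stands in for whatever wireless or Internet transport a real fleet would use; every name here is an assumption made for illustration.

```python
shared_knowledge = {}   # stand-in for a fleet-wide cloud database

class Robot:
    def __init__(self, name):
        self.name = name
        # A new robot inherits everything the fleet has already learned.
        self.knowledge = dict(shared_knowledge)

    def learn(self, situation, solution):
        self.knowledge[situation] = solution
        shared_knowledge[situation] = solution   # broadcast to the fleet

    def handle(self, situation):
        return self.knowledge.get(situation, "no prior experience")

r1 = Robot("r1")
r1.learn("door_handle_type_A", "rotate_wrist_90_degrees")

r2 = Robot("r2")                        # built after r1 learned
print(r2.handle("door_handle_type_A"))  # already knows r1's solution
```

This is the sense in which robots "stand on the shoulders of each other's learning": the compendium is copied, not re-taught, so a robot contributes to and draws from it almost immediately after its creation.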
He does not predict when it will occur, but he thinks that at some point these sorts of capabilities will result in a rapid increase in robot intelligence.
This sounds like warmed-over neural network machine learning. That said, I hope they start with a housekeeping robot.
Rule 34 would imply otherwise.
That’s what I said 😉
Isn’t a lot (if not most) human perception of a problem, condition, or circumstance based on prior experience from previous perceptions?
And how will a robot know what didn’t work without a human telling it?