December 04, 2017

I, for one, will welcome our new computer overlords...

Still don't think that the Singularity is underway? Well, check out the next thing in AI, as reported by Science Alert:
In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that's capable of generating its own AIs.
More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a 'child' that outperformed all of its human-made counterparts.
The Google researchers automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task.
For this particular child AI, which the researchers called NASNet, the task was recognising objects - people, cars, traffic lights, handbags, backpacks, etc. - in a video in real-time.
AutoML would evaluate NASNet's performance and use that information to improve its child AI, repeating the process thousands of times.
When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call "two of the most respected large-scale academic data sets in computer vision," NASNet outperformed all other computer vision systems.
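The controller/child loop described in the quote can be sketched in miniature. This is a hypothetical toy, not Google's actual system: the real AutoML controller is a recurrent neural network trained with reinforcement learning over a vast space of architectures, while here the "controller" is just a table of preference weights over a tiny invented search space, and the `evaluate` function is a noisy stand-in for actually training and scoring a child network. All names and numbers below are illustrative assumptions.

```python
import random

# Toy search space of child-network design choices (invented for illustration).
SEARCH_SPACE = {
    "filters": [16, 32, 64],
    "kernel": [3, 5, 7],
    "layers": [2, 4, 6],
}

def evaluate(child):
    # Stand-in for training the child network and measuring its accuracy.
    # We simply pretend larger kernels and deeper networks score better,
    # plus noise, so the loop has something to learn.
    score = 0.1 * child["kernel"] + 0.05 * child["layers"] + 0.001 * child["filters"]
    return score + random.gauss(0, 0.05)

def sample(prefs):
    # The controller samples one option per design choice,
    # weighted by its current preferences.
    child = {}
    for name, options in SEARCH_SPACE.items():
        weights = [prefs[(name, opt)] for opt in options]
        child[name] = random.choices(options, weights=weights)[0]
    return child

def search(iterations=500, lr=0.1):
    # Start with uniform preferences over every option.
    prefs = {(n, o): 1.0 for n, opts in SEARCH_SPACE.items() for o in opts}
    baseline = 0.0
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        child = sample(prefs)            # controller proposes a child
        reward = evaluate(child)         # child is "trained" and scored
        baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline
        # Reinforce the sampled choices in proportion to their advantage
        # over the baseline (a crude REINFORCE-style update).
        for name, option in child.items():
            prefs[(name, option)] = max(
                1e-3, prefs[(name, option)] + lr * (reward - baseline)
            )
        if reward > best_score:
            best, best_score = child, reward
    return best, best_score

random.seed(0)
best, score = search()
print(best)
```

Repeating the propose/evaluate/update cycle thousands of times, as AutoML does, gradually shifts the controller's preferences toward design choices that produce better-scoring children.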
Automating the task of designing and training machine learning systems is obviously the next level here, and it could greatly accelerate the development and deployment of systems that outperform anything currently in use. That, in turn, stands to rapidly and significantly improve all of our current machine learning applications, including, e.g., self-driving cars and other autonomous vehicles (or, Autos).

It also adds an extra layer of opacity to the process. Google already depends heavily on effectively black-box algorithms, and already struggles to explain decisions made in an emergent manner by systems too complex to be easily understood by the humans who nominally designed them. This new technology will result in systems we barely understand designing, on their own, systems that humans won't understand at all... and eventually being trusted to manage all the nuts and bolts of our institutions. This raises some pretty obvious concerns, and Science Alert's article doesn't overlook them:
Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what's to prevent the parent from passing down unwanted biases to its child?
What if AutoML creates systems so fast that society can't keep up? It's not very difficult to see how NASNet could be employed in automated surveillance systems in the near future, perhaps sooner than regulations could be put in place to control such systems.
It's at this point that I'd like to say that I, for one, will welcome our new computer overlords. I now return you to the Singularity, already in progress.