Google ‘brain simulator’ uses 16,000 computers to identify a cat

Jun 27, 2012 | Search engine marketing

As part of Google’s ongoing project to make technology more ‘human’, the web giant has created an artificial intelligence system capable of teaching itself the ‘concept of cats’ with no human help. Google scientists created one of the largest neural networks for machine learning, by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own. Unsurprisingly, it found a lot of cat videos, and began inventing the concept of a cat with no human help.


[Image: the network's composite "cat" image (google ai1.jpg)]
The ‘brain’ simulation was exposed to 10 million randomly selected YouTube video thumbnails over the course of three days and, after being presented with a list of 20,000 distinct items, began to recognise pictures of cats using a “deep learning” algorithm.
It did this despite being fed no information on the distinguishing features that might help identify one.
Picking up on the most commonly occurring images featured on YouTube, the system achieved 81.7 percent accuracy in detecting human faces, 76.7 percent accuracy when identifying human body parts and 74.8 percent accuracy when identifying cats.
Most machine vision technology depends on having humans “supervise” the learning process by labelling features. In this case the computer was given no help in identifying features.
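The supervised/unsupervised distinction can be illustrated with a minimal sketch (a toy stand-in using k-means clustering on synthetic points, not Google's algorithm): the program is handed data with no labels at all and groups it purely by similarity, just as the Google network was given videos with no "cat" tags.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "feature" clusters standing in for unlabelled image
# vectors: no labels are supplied, mirroring the unsupervised setup.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
])

def kmeans(points, k, steps=20):
    """Plain k-means: group points by similarity alone, no labels."""
    # Deterministic start: spread the initial centres across the data.
    centers = points[:: len(points) // k][:k].copy()
    for _ in range(steps):
        # Assign each point to its nearest centre.
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned points.
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(k)])
    return labels, centers

labels, centers = kmeans(data, k=2)
# The two synthetic groups end up in separate clusters, discovered
# without any labels being provided.
```

The network described in the article works at vastly larger scale and learns far richer features, but the absence of labelled examples is the common thread.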
The project is being conducted in Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses.
The Google team was led by Stanford University computer scientist Andrew Y. Ng and the Google fellow Jeff Dean. Dr Dean explained the system: “We never told it during the training, ‘This is a cat,’ it basically invented the concept of a cat. We probably have other ones that are side views of cats.”
Dr Ng added: “The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data”.
The Google computer assembled a dream-like digital image of a cat by employing a hierarchy of memory locations to successively cull out general features after being exposed to millions of images (see image above, and the human version below).
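The "hierarchy of memory locations" idea can be sketched as a toy two-stage pipeline (a hand-rolled illustration under simplifying assumptions, not the actual network): a first layer responds to local edges in a small synthetic image, and a second layer combines those edge responses into a higher-level "corner" feature.

```python
import numpy as np

# Toy 8x8 "image": a filled square of bright pixels.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

def convolve2d_valid(x, k):
    """Minimal 'valid'-mode 2-D sliding-window filter."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

# Layer 1: simple edge detectors (vertical and horizontal gradients).
vert_edge = np.array([[-1.0, 1.0]])
horz_edge = vert_edge.T
v = np.abs(convolve2d_valid(img, vert_edge))
h = np.abs(convolve2d_valid(img, horz_edge))

# Layer 2: a crude "corner" unit built from lower-level features:
# it fires only where vertical and horizontal edge responses overlap.
hmin = min(v.shape[0], h.shape[0])
wmin = min(v.shape[1], h.shape[1])
corner = v[:hmin, :wmin] * h[:hmin, :wmin]
```

A deep network learns its filters from data rather than having them hand-written as here, but the principle is the same: each layer builds more abstract features out of the layer below.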
[Image: the human-face version of the composite (google ai2.jpg)]
This loosely mirrors what is thought to take place in the brain’s visual cortex, and how biological brains learn to recognise objects.
The 16,000-processor network is still dwarfed by the capability of the human brain. As the researchers put it: “It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses”.
Although the network is dwarfed by the immense scale of biological brains, the Google research provides new evidence that existing machine learning algorithms improve greatly when the machines are given access to large pools of data.
Google scientists said that the research project had now moved out of the Google X laboratory and was being pursued in the division that houses the company’s search business and related services.
Potential applications include improvements to image search, speech recognition and machine language translation.
This week the researchers will present their results at a conference in Edinburgh, Scotland.
