Major websites all over the world use a system called CAPTCHA to verify that someone is indeed a human and not a bot when entering data or signing in to an account. CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” The squiggly letters and numbers, often set against photographs or textured backgrounds, have been a good way to foil hackers: annoying, but effective.
The days of CAPTCHA as a viable line of defense may, however, be numbered.
Researchers at Vicarious, a Californian artificial intelligence firm funded by Amazon founder Jeffrey P. Bezos and Facebook’s Mark Zuckerberg, have just published a paper
documenting how they were able to defeat CAPTCHA using new artificial-intelligence techniques. Whereas today’s most advanced AI systems use neural networks that require massive amounts of data to learn from (sometimes millions of examples), the researchers said their system needed just five training steps to crack Google’s reCAPTCHA technology. With that, it achieved a 67% success rate per character, reasonably close to the human accuracy rate of 87%. On PayPal and Yahoo CAPTCHAs, the system’s accuracy exceeded 50%.
The CAPTCHA breakthrough came hard on the heels of another major milestone from Google’s DeepMind team, the people who built the world’s best Go-playing system. DeepMind built a new AI system called AlphaGo Zero that taught itself to play the game at a world-beating level with minimal training data, mainly using trial and error — in a fashion similar to how humans learn.
Both playing Go and deciphering CAPTCHAs are still clear examples of what we call narrow AI, which is different from artificial general intelligence (AGI), the stuff of science fiction. Remember R2-D2 of “Star Wars,” Ava from “Ex Machina” and Samantha from “Her”? They could do many things and learned everything they needed on their own.
Narrow AI technologies are systems that can perform only one specific type of task. If you asked AlphaGo Zero to learn to play Monopoly, for example, it could not, even though that is a far less sophisticated game than Go; if you asked the CAPTCHA cracker to learn to understand a spoken phrase, it would not even know where to start.
To date, though, even narrow AI has been difficult to build and perfect. To perform even an elementary task such as determining whether an image shows a cat or a dog, the developer must specify a model of exactly what is being analyzed and supply massive amounts of data with labeled examples of both. The examples are used to train the AI systems, which are modeled on the neural networks of the brain: the connections between layers of artificial neurons are adjusted based on what is observed. To put it simply, you tell an AI system exactly what to learn, and the more data you give it, the more accurate it becomes.
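As a minimal sketch of that supervised recipe (purely illustrative; the synthetic “cat”/“dog” feature clusters and the single-neuron model below are stand-ins of my own, not anything Vicarious or Google built), a tiny logistic-regression “neuron” can learn to separate two labeled classes only because every example comes with its answer attached:

```python
import numpy as np

# Illustrative supervised learning: the labels do the teaching.
# "Cat" and "dog" images are stood in for by 2-D feature vectors
# drawn from two synthetic clusters.
rng = np.random.default_rng(0)

n = 200
cats = rng.normal(loc=[-1.0, -1.0], scale=0.5, size=(n, 2))  # label 0
dogs = rng.normal(loc=[+1.0, +1.0], scale=0.5, size=(n, 2))  # label 1
X = np.vstack([cats, dogs])
y = np.concatenate([np.zeros(n), np.ones(n)])

# A single artificial "neuron": logistic regression trained by
# gradient descent on the labeled examples.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "dog"
    grad_w = X.T @ (p - y) / len(y)          # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                        # adjust the connections
    b -= 0.5 * grad_b

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

Remove the `y` labels and this approach has nothing to learn from, which is exactly the limitation the newer systems sidestep.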
The methods that Vicarious and Google used were different: they allowed the systems to learn largely on their own, albeit in a narrow field. The systems made their own assumptions about what the training model should be and tried different permutations until they got the right results, teaching themselves how to read the letters in a CAPTCHA or how to play a game.
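The trial-and-error idea can be caricatured in a few lines (a deliberately simple analogy of my own, not the actual Vicarious or DeepMind algorithms): a program that is never shown the answer, receives only a score for each guess, and keeps whichever random change scores at least as well:

```python
import random

# Toy trial-and-error learner: it sees only a score, never the target,
# and improves by keeping mutations that do not hurt that score.
TARGET = "CAPTCHA"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def score(guess: str) -> int:
    """Number of matching characters -- the only feedback the learner gets."""
    return sum(g == t for g, t in zip(guess, TARGET))

random.seed(42)
guess = "".join(random.choice(ALPHABET) for _ in TARGET)
attempts = 0
while score(guess) < len(TARGET):
    attempts += 1
    i = random.randrange(len(TARGET))                       # try a random change
    mutated = guess[:i] + random.choice(ALPHABET) + guess[i + 1:]
    if score(mutated) >= score(guess):                      # keep it if no worse
        guess = mutated

print(f"solved '{guess}' after {attempts} trials")
```

The real systems explore vastly richer hypothesis spaces, but the loop is the same in spirit: propose, score, keep what works.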
This blurs the line between narrow AI and AGI and has broader implications — in robotics and in virtually any other field in which machine learning in complex environments may be relevant.
Beyond visual recognition, the Vicarious breakthrough and AlphaGo Zero success are encouraging scientists to think about how AIs can learn to do things from scratch. And this brings us one step closer to coexisting with classes of AIs and robots that can learn to perform new tasks that are slight variants on their previous tasks — and ultimately the AGI of science fiction.
So R2-D2 may be here sooner than we expected.