Artificial Intelligence (AI) has long been a theme of sci-fi blockbusters, but as technology develops in 2017, the stuff of fiction is fast becoming reality. As technology has made leaps and bounds in our lives, the presence of AI is something we are adapting to and incorporating into our everyday existence. A brief history of the different types of AI helps us to understand how we got where we are today, and more importantly, where we are headed.
A Brief History of AI
Narrow AI – Since the 1950s, specific technologies have been used to carry out rule-based tasks as well as, or better than, people. Good examples are the Manchester Electronic Computer that played chess, or the automated voice you speak with when you call your bank.
Machine Learning – Since the 1990s, algorithms have used large amounts of data to ‘train’ machines to properly identify and separate data into subsets that can be used to make predictions. In effect, the large amounts of data allow machines to learn rather than follow defined rules. Apple’s digital assistant, Siri, is one example of this. Machine translation for tasks like web page translation is also a common tool.
Deep Learning – The cutting-edge branch of machine learning technology since the 2000s, deep learning uses algorithm structures modeled on the human brain. Using a system of artificial neural networks, machines are able to learn from past actions and apply that understanding to solve new problems without being specifically programmed to do so.
General AI – This represents the future of AI: complex machines built on a structure similar to the neural networks in our own brains, allowing a machine to independently complete ‘general’ intelligent actions with characteristics similar to human intelligence. These are the machines of science fiction: C-3PO and R2-D2 from Star Wars, Baymax from Big Hero 6 or even Rachael from the 1982 classic, Blade Runner.
Why is Deep Learning at the Forefront of Machine Learning?
Deep learning allows machines to learn independently, much as human beings learn. Our brains contain around 100 billion neurons, each connected to thousands of neighboring neurons in a massive network. A neuron passes information to its neighbors as an electrical charge, forming new pathways that either help or hinder a person in learning new things. With each experience, the synaptic connections between pairs of neurons grow stronger or weaker based on use, through trial and error.
Over nearly two decades, AI has been revolutionized as artificial networks mimic the concept of learning from stimuli in the world around them (like humans) instead of the way conventional computers learn from rules and algorithms. With enough data, machines that have multiple layers of neural connections (like our brains) can make classifications and predictions on their own – with a little trial and error.
This process is called training, and to be effective it requires the artificial neural networks to be stimulated by millions of pieces of real-world input until the system is tuned precisely enough to pass on the correct information almost every time. Today, effective training of these neural networks is not only possible, it is being implemented. We’ve also learned that, after adequate training, these neural network machines can extract patterns, detect valuable trends, and even predict certain future events.
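To make the idea of training concrete, here is a minimal sketch of the trial-and-error loop described above: a tiny two-layer network repeatedly makes a guess, measures its error, and strengthens or weakens its connections. The task (XOR), the network size, and the learning rate are all invented for illustration; real systems like the ones discussed here are vastly larger.

```python
import numpy as np

# A tiny two-layer neural network learning by trial and error.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(size=(2, 8))   # input -> hidden "synapses"
W2 = rng.normal(size=(8, 1))   # hidden -> output "synapses"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Error before any training, for comparison.
loss0 = np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2)

for step in range(5000):
    # Forward pass: the signal flows through the connections.
    hidden = sigmoid(X @ W1)
    out = sigmoid(hidden @ W2)
    # Backward pass: strengthen or weaken each connection
    # according to how much it contributed to the error.
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ err_out
    W1 -= 0.5 * X.T @ err_hid

loss = np.mean((out - y) ** 2)
print(f"error before training: {loss0:.3f}, after: {loss:.3f}")
```

After enough passes over the data, the network's error drops and its outputs approach the correct answers – the same tuning process, at a toy scale, that large neural networks undergo with millions of real-world examples.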
The Machine Translation Breakthrough
All machine translation services used Narrow AI before deep learning was a possibility. Each service followed the same set of simple rules: (1) separate sentences into fragments, (2) look the words up in a dictionary, and (3) apply post-processing rules to correctly arrange the translated fragments into meaningful sentences.
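The three-step pipeline above can be sketched in a few lines. Everything here is invented for illustration – a four-word English-to-French dictionary and a single crude reordering rule – but it shows why rule-based systems are brittle: anything the rules don't anticipate comes out wrong.

```python
# Toy rule-based ("Narrow AI") translation pipeline.
LEXICON = {"the": "le", "red": "rouge", "cat": "chat", "sleeps": "dort"}

def split_into_fragments(sentence):
    # Step 1: separate the sentence into fragments (here, single words).
    return sentence.lower().split()

def look_up(fragments):
    # Step 2: look each word up in the dictionary; unknown words pass through.
    return [LEXICON.get(word, word) for word in fragments]

def post_process(words):
    # Step 3: apply a rearrangement rule. French places many adjectives
    # after the noun, so swap each adjective with the word that follows it.
    adjectives = {"rouge"}
    out = list(words)
    i = 0
    while i < len(out) - 1:
        if out[i] in adjectives:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    return " ".join(out)

result = post_process(look_up(split_into_fragments("The red cat sleeps")))
print(result)  # -> "le chat rouge dort"
```

This works only as long as the input matches the rules exactly – idioms, exceptions, and context all fall outside what steps like these can capture.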
We’ve all seen the results of this technology, and the awkward translations it sometimes produces happen because of the restricted, specific rules of Narrow AI. Languages have many exceptions to their rules, which is partly why Narrow AI can’t understand a joke, spot sarcasm, or grasp cultural context. Deep learning, on the other hand, opens up new opportunities for accurate machine translation, and Google Translate has taken it to a whole new level.
In September 2016, Google Translate announced its switch from a Narrow AI system to a single multilingual system based on artificial neural networks. The system, named Google Neural Machine Translation (GNMT), continuously learns from millions of linguistic examples, allowing a single system to translate between multiple languages and sound far more natural than any Narrow AI translation ever could.
Thanks to deep learning, GNMT’s creators expected the new Google Translate to improve over time, but they were surprised at just how quickly neural machine translation has developed. On its Research Blog, Google calls a recent experiment ‘zero-shot translation’, and it’s more than just a little impressive.
Where will Zero-Shot Translation Lead AI?
Google researchers believe that zero-shot translation was made possible by the AI system inventing its own internal language, or ‘interlingua’, based on contextual concepts and sentence structures instead of relying on word-to-word equivalents. So, words and sentences with shared meanings are internally represented by GNMT in comparable ways no matter their original language.
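A toy picture of that shared space: imagine each word, whatever its language, mapped to a point in space, with shared-meaning words landing near each other. The three-dimensional vectors below are invented purely for illustration (real systems learn representations with hundreds of dimensions), but they show how closeness in the shared space, not a bilingual dictionary, links equivalent words.

```python
import numpy as np

# Invented "interlingua" vectors: same meaning -> nearby points.
EMBEDDINGS = {
    ("en", "cat"):  np.array([0.90, 0.10, 0.00]),
    ("fr", "chat"): np.array([0.88, 0.12, 0.02]),
    ("en", "dog"):  np.array([0.10, 0.90, 0.10]),
}

def cosine(u, v):
    # Cosine similarity: 1.0 means pointing the same way, 0.0 unrelated.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

same = cosine(EMBEDDINGS[("en", "cat")], EMBEDDINGS[("fr", "chat")])
diff = cosine(EMBEDDINGS[("en", "cat")], EMBEDDINGS[("en", "dog")])
print(same > diff)  # the cross-language, shared-meaning pair is closer: True
```

In a representation like this, a system that has learned English-to-French and English-to-Japanese can relate French and Japanese directly through the shared space, without ever having seen that pair – the essence of zero-shot translation.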
This gives us a powerful glimpse of computers in the future being able to generate their own creations to aid themselves in completing tasks they weren’t specifically trained to do. How this translates into every realm of industry, from more efficiently localized ecommerce to greater ease of patent translation, has significant consequences beyond person-to-person translation.
Once machines can learn from and replicate human communication, they will arguably pass the Turing Test. This milestone will bring us another leap closer to a world of general AI, where characters like C-3PO and Rachael are part of everyday life.