If you’ve been – or still are – confused about artificial intelligence and machine learning, you are in good company. There is no bright-line distinction between what is and isn’t AI. Moreover, everyone is slapping the “AI” or “machine learning” label on things where it doesn’t belong – and that includes using the terms interchangeably.
We want to clear things up – at least as much as possible.
Artificial intelligence, simply stated, is the concept of machines being able to perform tasks that seemingly require human intelligence. Machine learning is one current application of AI. It involves giving machines (computers) access to a wealth of data and letting them learn for themselves.
Most – but importantly, not all – experts would say all machine learning is AI, but not all AI is machine learning. The image is less a Venn diagram and more concentric circles. The large outside circle is AI, and the inner circle is machine learning. (Inside of that is “deep learning,” but we’ll save that for another day.)
For a more practical example, consider self-driving cars. “You’d be surprised to hear that some of the self-driving cars that currently describe themselves as using AI, use very little machine learning and are mostly using rule-based systems,” explains Xavier Amatriain, former director of Netflix recommendation algorithms and former VP of engineering at Quora.
It sounds clear-cut, right? It isn’t. In fact, we should note here that there are those who don’t accept that all machine learning is AI. But we are going to try to keep it simple.
Let’s start by explaining what each is.
Machine Learning vs. Artificial Intelligence
Machine learning, as we said, is an application of artificial intelligence. Algorithms are fed data and asked to process it without specific programming. In 1959, Arthur Samuel, a – perhaps the – pioneer in machine learning, defined it as a “field of study that gives computers the ability to learn without being explicitly programmed.”
Machine learning algorithms – like humans – learn from their errors to improve performance. At the heart of machine learning are efficient pattern recognition and self-learning. ML applications automatically learn and improve from experience without being explicitly programmed. They can evaluate data in real-time and change their behavior accordingly.
The results are most accurate when the machine has access to massive amounts of data to refine its algorithm, which is why the internet was crucial to the development of machine learning. (More on this later.)
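That error-driven loop can be sketched in a few lines. Below, a toy one-parameter model repeatedly measures its prediction error on made-up data and nudges its weight to shrink it – a bare-bones version of the gradient-descent idea. The data points and learning rate are invented for illustration; real ML systems do the same thing with millions of parameters and far more data.

```python
# Toy "learning from errors": fit y = w * x by repeatedly shrinking the error.
# The data and learning rate are invented for illustration.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hidden relationship: y = 2x

w = 0.0    # the model's single parameter; it starts out knowing nothing
lr = 0.05  # learning rate: how big each correction step is

for step in range(200):
    for x, y in data:
        error = w * x - y    # how wrong the current prediction is
        w -= lr * error * x  # adjust the weight to reduce that error

print(round(w, 3))  # prints 2.0 -- the model has "learned" the slope
```

No one ever told the program that the answer was 2; it converged there purely by correcting its own mistakes, which is the essence of Samuel’s “without being explicitly programmed.”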
There are two general types of machine learning.
- Supervised: The key to this is well-labeled “training” data, which “teaches” the machine. The labeled data gives information on the parameters of the desired categories and lets the algorithm decide how to classify them. (Example: identifying spam.)
- Unsupervised: In this type of learning, no training data is provided. The algorithm analyzes a set of data for patterns or commonalities, then sorts and categorizes accordingly. (Example: finding similarities among a company’s most valuable customers.)
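The two styles above can be sketched side by side in a few lines of Python. Everything here is invented for illustration – the labeled messages, the spending figures, and the two-segment split: the supervised half learns “spam words” from labeled examples, while the unsupervised half groups unlabeled customer spending into two clusters with a tiny one-dimensional k-means loop.

```python
# Supervised: labeled examples "teach" a naive spam score per word.
labeled = [("win money now", "spam"), ("meeting at noon", "ham"),
           ("win a free prize", "spam"), ("lunch at noon?", "ham")]
spam_words = set()
for text, label in labeled:
    if label == "spam":
        spam_words.update(text.split())  # remember words seen in spam

def classify(text):
    hits = sum(word in spam_words for word in text.split())
    return "spam" if hits >= 2 else "ham"

print(classify("win free money"))  # prints "spam"

# Unsupervised: no labels at all; group spending into 2 clusters (1-D k-means).
spend = [12, 15, 14, 980, 1010, 995]
lo, hi = min(spend), max(spend)  # start the two cluster centers at the extremes
for _ in range(10):              # alternate: assign points, then update means
    a = [s for s in spend if abs(s - lo) <= abs(s - hi)]
    b = [s for s in spend if abs(s - lo) > abs(s - hi)]
    lo, hi = sum(a) / len(a), sum(b) / len(b)

print(sorted(round(c) for c in (lo, hi)))  # two "customer segments" emerge
```

The supervised classifier only works because a human supplied the spam/ham labels; the clustering loop finds the low-spend and high-spend groups with no labels at all, which is exactly the distinction the two bullets draw.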
Unsupervised machine learning is considerably more complex and has therefore been used in fewer applications so far, explains big data consultant Bernard Marr. “But this is where a lot of the excitement over the future of AI stems from. When people talk about computers learning to ‘teach themselves,’ rather than us having to teach them (one of the principles of machine learning), they are often alluding to unsupervised learning processes.”
Gartner calls machine learning one of the hottest concepts in technology. “The capability to transform data into actionable insight is the key to a competitive advantage for any organization,” says Carlton Sapp, research director at Gartner. “However, the ability to autonomously learn and evolve as new data is introduced – without explicitly programming to do so – is the holy grail of business intelligence.”
So where does artificial intelligence come into play? Machine learning is currently the most promising application of AI, especially for businesses.
From the beginning, the connection to human intelligence was clearly articulated. Scientists at the 1956 Dartmouth Artificial Intelligence Conference contended the following: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
They appear to have been right.
AI can be general or narrow. A recent Hacker Noon piece puts it succinctly: “Narrow AI is where we have been. General AI is where we are going.”
Narrow AI (sometimes called “weak” AI, although it’s anything but) exhibits some aspects of human intelligence but is lacking in other areas. A machine that’s great at recognizing images or playing chess or predicting the weather falls into this category.
General AI (sometimes called “strong” AI) has all the characteristics of human intelligence and is able to understand and reason, just like you or me. “General AI has always been elusive. We’ve been saying for decades that it’s just around the corner,” notes Ben Dickson, founder of TechTalks.
But it’s been top of mind (or at least, the imagination) for decades. General AI is what gives rise to so much dystopian cinema and literature. Again, from Hacker Noon:
“The ideal of General AI is that the system would possess the cognitive abilities and general experiential understanding of its environments that we humans possess, coupled with the ability to process this data at much greater speeds than mortals. It follows that the system would then become exponentially greater than humans in the areas of knowledge, cognitive ability, and processing speed–giving rise to a very interesting species-defining moment in which the human species are surpassed by this (now very) strong AI entity.”
Putting it in Context: A Brief History
Artificial intelligence and machine learning have similar origin stories.
Many trace the genesis of AI to a Dartmouth research project in 1956 that explored such topics as problem-solving and symbolic methods. John McCarthy has been credited with coining the term “artificial intelligence” that same year. McCarthy and Arthur Samuel theorized it would be possible to program a machine to learn about the surrounding environment instead of creating preprogrammed representations of the world. In the 1960s, the U.S. Department of Defense recognized the value of training computers to mimic human reasoning. (The Defense Advanced Research Projects Agency (DARPA) used it for street-mapping projects and digital personal assistants long before the private sector did.)
Machine learning also took off in the late 1950s. Marr identifies two important breakthroughs that led to the emergence of machine learning as we know it today.
- The realization in 1959 that rather than teaching computers everything they need to know, “it might be possible to teach them to learn for themselves.” (This is when Arthur Samuel coined “machine learning.”)
- The emergence of the internet, and the dramatic increase in the amount of data available for analysis. (It’s worth noting that the internet was also a DARPA project.)
This made possible all sorts of machine learning research. “Once these innovations were in place, engineers realized that rather than teaching computers and machines how to do everything, it would be far more efficient to code them to think like human beings, and then plug them into the internet to give them access to all of the information in the world,” says Marr.
AI vs. Machine Learning: Cutting Through the Confusion
Artificial intelligence and machine learning are closely related; most AI applications use machine learning – or will soon. It’s no surprise, then, that the terms are used loosely and interchangeably. If it’s still unclear, here’s one way to think about it, courtesy of Wired UK: “You need AI researchers to build the smart machines, but you need machine learning experts to make them truly intelligent.”
And we’ll add this: You need to understand both to compete. If you want that competitive edge – or if you simply want to learn more – we can help. Get in touch.