Advanced computer programs influence, and can even dictate, meaningful parts of our lives. Think of streaming services, credit scores, facial recognition software.
As this technology becomes more sophisticated and more pervasive, it’s important to understand the basic terminology.
People often use “algorithm,” “machine learning” and “artificial intelligence” interchangeably. There is some overlap, but they’re not the same thing.
We decided to call up a few experts to help us get a firm grasp on these concepts, starting with a basic definition of “algorithm.” The following is an edited transcript of the episode.
Melanie Mitchell, Davis professor of complexity at the Santa Fe Institute, offered a simple explanation of a computer algorithm.
“An algorithm is a set of steps for solving a problem or accomplishing a goal,” she said.
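Mitchell’s definition can be made concrete in a few lines of code. Here is a minimal sketch in Python (the language and the toy task are our own illustration, not from the episode): a fixed set of steps for a clear goal, in this case finding the largest number in a list.

```python
def find_largest(numbers):
    """An algorithm: a fixed set of steps for accomplishing a goal."""
    largest = numbers[0]      # Step 1: start with the first number.
    for n in numbers[1:]:     # Step 2: look at each remaining number.
        if n > largest:       # Step 3: if it beats the current best,
            largest = n       #         remember it instead.
    return largest            # Step 4: report the answer.

print(find_largest([3, 41, 12, 9, 74, 15]))  # prints 74
```

Every step was written out by a person in advance, which is exactly what distinguishes a plain algorithm from the machine learning systems described next.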
The next step up is machine learning, which uses algorithms.
“Rather than a person programming in the rules, the system itself has learned,” Mitchell said.
Take speech recognition software, for example, which uses data to learn which sounds combine to become words and sentences. This kind of machine learning is a key component of artificial intelligence.
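The difference Mitchell describes can be sketched in Python (a toy of our own devising, with made-up numbers): in the rule-based version, a programmer writes the relationship directly; in the learning version, the program estimates the same relationship from example data.

```python
# Rule-based: a person programs in the rule (here, price = 2 x size).
def rule_based_price(size):
    return 2.0 * size

# Machine learning: estimate the rule from (size, price) examples
# using a simple least-squares fit of y = slope * x.
def learn_slope(examples):
    num = sum(x * y for x, y in examples)
    den = sum(x * x for x, _ in examples)
    return num / den

data = [(1, 2.0), (2, 4.0), (3, 6.0)]  # observed examples
slope = learn_slope(data)
print(slope)  # the system recovers the rule (2.0) from the data itself
```

Nobody typed the number 2.0 into the learning version; it came out of the data, which is what “the system itself has learned” means in practice.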
“Artificial intelligence is basically capabilities of computers to mimic human cognitive functions,” said Anjana Susarla, who teaches responsible AI at Michigan State University’s Broad College of Business.
She said we should think of “AI” as an umbrella term.
“AI is much broader, all-encompassing, compared to only machine learning or algorithms,” Susarla said.
That’s why you might hear “AI” as a loose description for a range of things that show some level of “intelligence,” from software that examines the photos on your phone and sorts out the ones with cats, to advanced spelunking robots that explore caves.
Here’s another way to think of the differences among these tools: cooking.
Bethany Edmunds, professor and director of computing programs at Northeastern University, uses that analogy.
She says an algorithm is basically a recipe — step-by-step instructions on how to prepare something to solve the problem of “being hungry.”
If you took the machine learning approach, you would show a computer the ingredients you have and what you want for the end result. Let’s say, a cake.
“So maybe it would take every combination of every type of food and put them all together to try and replicate the cake that was provided for it,” she said.
With AI, you would turn the whole problem of being hungry over to the computer program, which might determine or even buy ingredients and choose a recipe or create a new one, just like a human would.
So why do these distinctions matter? Well, for one thing, these tools sometimes produce biased outcomes.
“It’s really important to be able to articulate what those concerns are,” Edmunds said. “So that you can really dissect where the problem is and how we go about solving it.”
Because algorithms, machine learning and AI are pretty much baked into our lives at this point.
Columbia University’s engineering school has a further explanation of artificial intelligence and machine learning, and it lists other tools besides machine learning that can be part of AI. Like deep learning, neural networks, computer vision and natural language processing.
Over at the Massachusetts Institute of Technology, they point out that machine learning and AI are often used interchangeably because these days, most AI includes some amount of machine learning. A piece from MIT’s Sloan School of Management also gets into the different subcategories of machine learning: supervised, unsupervised and reinforcement learning, which works like trial and error with digital “rewards.” For example, you could teach an autonomous vehicle to drive by letting the system know when it made the right decision, like not hitting a pedestrian.
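That reward idea can be sketched in a few lines of Python (our own toy, not MIT’s example): the program tries actions, records the reward each one earns, and settles on the action that scored best.

```python
def reward(action):
    """A hidden environment: 'brake' is the right call here (reward 1.0)."""
    return {"accelerate": -1.0, "coast": 0.0, "brake": 1.0}[action]

def learn_by_trial_and_error(actions, trials=3):
    """Try each action repeatedly, track total reward, keep the best."""
    totals = {a: 0.0 for a in actions}
    for _ in range(trials):
        for a in actions:               # trial...
            totals[a] += reward(a)      # ...and error, scored by reward
    return max(totals, key=totals.get)  # exploit what was learned

best = learn_by_trial_and_error(["accelerate", "coast", "brake"])
print(best)  # prints brake
```

Real reinforcement learning systems handle far messier environments, but the loop is the same: act, get a reward signal, prefer what paid off.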
That piece also points to a 2020 survey from Deloitte, which found that 67% of companies were already using machine learning, and 97% were planning to use it in the future.
IBM has a helpful graphic to explain the relationship among AI, machine learning, neural networks and deep learning, presenting them as Russian nesting dolls with the broad category of AI as the biggest one.
And finally, with so many businesses using these tools, the Federal Trade Commission has a blog laying out some of the consumer risks associated with AI and the agency’s expectations of how companies should deploy it.