What is Machine Learning?

David Ruddock

Machine learning (ML) is the practice of using mathematical algorithms to “teach” a computer system to perform a task autonomously and with continuous improvement in outcome, with the end goal of creating some form of artificial intelligence (AI). Machine learning is a broad topic that encompasses dozens (if not hundreds) of subfields and use cases, ranging from neural networks, generative AI, computer vision, deep learning, and predictive recommendations to speech pattern recognition and business analytics.

Unlike many computing concepts, machine learning doesn’t have a bright-line definition or a specific set of underlying technological requirements — it’s really a term describing a family of methods that act as a means to an end (building AI).

Machine Learning vs AI

Machine learning is a field of — or more accurately, an approach to creating — AI. It is also the only AI implementation in widespread use today, which has led to the two terms being used interchangeably: ML always refers to (a way to build) AI, and AI nearly always refers to AI created using ML. Other implementations of AI could emerge in the future, but for now they remain largely academic theories; machine learning is the only approach that has proven viable in the real world to date.

You can think of the difference between AI and ML as the difference between “transportation” — the concept of moving people or goods over distances — and a “mode of transportation” — the means by which transportation is achieved. Defining “transportation” for someone gives them no better sense of how transportation actually works or what it takes to achieve in practice. It’s more instructive to describe transportation by land, sea, or air — examples of modes of transportation — than to describe transportation itself. Similarly, defining AI doesn’t tell you much about how the concept works in a practical sense. In other words: AI is the “what,” and machine learning is the “how.”

Machine Learning vs Neural Networks and Deep Learning

Neural networks (neural nets) and deep learning both describe technical implementations of machine learning — they’re the particular types of transportation in our “modes of transportation” analogy: the trains, planes, and boats of our AI transit universe. Once you’ve decided you want to use AI to solve a problem, you decide on an AI implementation (machine learning), and then you decide on the framework best suited to the challenge. Most often, this means using a neural net, though other methods definitely exist and are in use.

Neural networks are, in a very basic sense, designed around the functional model of a human brain’s neurons. Many “nodes” are connected in a very large network, and data inputs flow through that network. If the values representing that data (often, many large matrices of numerical values) cause a given neuron in that network to become “activated,” that neuron passes the data on to the next layer of neurons in the network. Each neuron has a particular set of values and thresholds that result in its activation, and the frequency, distribution, and relatedness of all activated neurons in a network together determine the outcome. Neural networks with many layers (conventionally, three or more, though exact definitions vary) are called deep learning networks.
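
To make this concrete, here’s a minimal sketch in Python (all weights invented for illustration; a real network learns them from training data) of a single data point flowing through a small two-layer network: each layer multiplies its input by a weight matrix, and an activation function decides which neurons “fire” and pass values onward.

```python
import numpy as np

def relu(x):
    # Activation function: a neuron "fires" (passes a nonzero value
    # to the next layer) only when its weighted input exceeds zero.
    return np.maximum(0, x)

# Illustrative random weights -- a real network learns these during training.
rng = np.random.default_rng(42)
W1 = rng.normal(size=(4, 8))  # input (4 features) -> hidden layer (8 neurons)
W2 = rng.normal(size=(8, 3))  # hidden layer -> output layer (3 classes)

x = np.array([0.2, -1.3, 0.7, 0.05])  # one data point as a numeric vector

hidden = relu(x @ W1)  # which hidden neurons activate, and how strongly
scores = hidden @ W2   # raw output scores, one per class
print(scores)
```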

Imagine you start with a picture of an eastern bluebird, and you give the numerical representation of this image to a neural network (a process called ingestion). This particular neural network has been trained to identify 1,000 species of birds and, over time, should more accurately and confidently identify the species it has been trained on. If the image activates neurons in the network in a pattern that falls within the learned threshold for “eastern bluebird,” the analysis should yield exactly that result. But if you feed this network an image of a scarlet macaw and it has never seen a big red, blue, green, and yellow parrot, it will not be able to make a determination of species, no matter how many times the network runs — a neural network cannot make a positive identification for which it has no existing basis. A human must manually intervene and label the new data; from there, the neural net can start to develop a numerical “knowledge” of what may constitute a scarlet macaw.
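
That “no existing basis” behavior can be sketched in a few lines. Assuming a hypothetical model that outputs one raw score per known species (the species list and threshold below are invented), a common pattern is to convert scores into probabilities and apply a confidence cutoff, flagging anything below it for human labeling:

```python
import numpy as np

SPECIES = ["eastern bluebird", "american robin", "northern cardinal"]  # toy label set
CONFIDENCE_THRESHOLD = 0.80  # illustrative cutoff; tuned per application in practice

def softmax(scores):
    # Convert raw model scores into probabilities that sum to 1.
    exps = np.exp(scores - np.max(scores))
    return exps / exps.sum()

def classify(raw_scores):
    probs = softmax(np.asarray(raw_scores, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] < CONFIDENCE_THRESHOLD:
        # No activation pattern matched a known species strongly enough.
        return "unknown -- flag for human labeling"
    return SPECIES[best]

print(classify([6.1, 1.2, 0.4]))  # confident -> "eastern bluebird"
print(classify([1.1, 1.0, 0.9]))  # ambiguous -> flagged for review
```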

This leads us to one of neural nets’ biggest weaknesses: while they do possess the ability to “learn,” that ability is premised upon the availability of good training data and, often, good labeling of that data — both of which are frequently in short supply relative to the AI industry’s needs.

Machine Learning Model vs Large Language Model (LLM)

A machine learning algorithm, trained on data, produces a machine learning model (in practice, the two terms are often used interchangeably). Neural networks are the most common family of ML models deployed today; linear regression, decision tree, and cluster models are other common types. One of the most-hyped implementations of the neural network is the Large Language Model, or LLM — which means that LLMs are a type of machine learning model.
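
As a quick illustration of those non-neural model types, here’s a sketch using scikit-learn’s standard estimator API on invented toy data; a linear regression and a decision tree can fit the same inputs, they just generalize in very different ways:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Toy data, invented for illustration: y is roughly 3x plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 3 * X.ravel() + rng.normal(scale=1.0, size=50)

linear = LinearRegression().fit(X, y)                # fits one global line
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)  # learns split rules

print(linear.predict([[5.0]]))  # smooth prediction from the fitted line
print(tree.predict([[5.0]]))    # piecewise-constant prediction from the tree
```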

First, let’s put LLMs in the context of the larger discussion. If AI is the challenge of transportation (e.g., shipping a load of potatoes across the country), ML is the mode of transportation (such as land), and neural networks are the types of vehicles employed (let’s say a train). With LLMs, we’re now getting into the engine room: does your train run on steam, diesel, or electricity? Each has advantages and disadvantages, but at this level of detail the tradeoffs are no longer immediately obvious — this is complicated stuff.

LLMs are neural networks built using the transformer architecture, a particular neural net design. Transformer models are incredibly technical, but the most important thing to know about them — and about LLMs — is that they are extremely good at understanding, and thus predicting, relationships in sequential data. This is especially applicable to the understanding and processing of language, whether speech or text. If you imagine a sentence (“The quick brown fox jumps over the lazy dog”) as a series of complex numerical values, a transformer-based LLM could ingest a query like “what’s that sentence about the fast fox and the dog” and output the well-known pangram in response with a high degree of confidence.
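
At the heart of the transformer architecture is an operation called scaled dot-product attention, which scores how strongly each token in a sequence should “attend to” every other token. Here’s a minimal NumPy sketch, with toy dimensions and random values standing in for real learned embeddings, illustration only:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q @ K.T / sqrt(d_k)) @ V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # relevance of every token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # blend each token's value by relevance

# Toy sequence: 9 tokens ("The quick brown fox..."), each as a 4-dim vector.
rng = np.random.default_rng(1)
tokens = rng.normal(size=(9, 4))

# Self-attention: the sequence attends to itself.
contextualized = scaled_dot_product_attention(tokens, tokens, tokens)
print(contextualized.shape)  # (9, 4): each token now carries whole-sentence context
```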

Common Machine Learning Applications

Now that we have a sense of how the architecture of machine learning can be conceptualized (AI -> Machine learning -> Neural nets -> Types of neural nets), let’s look at some common use cases in that context.

  • Computer vision: Computer vision is probably the most rigorously studied and consistently advancing field in machine learning, having been in use for decades. From simple optical character recognition (OCR, i.e., text recognition) to detecting microfractures in ultra-sensitive industrial equipment, computer vision is one of the GOAT machine learning use cases. More advanced implementations like neural nets are moving computer vision forward fast, too, with emerging uses like medical imaging, scientific analysis, and more.
  • Generative AI: You’d be living under a rock if you hadn’t heard about this one. Machine learning is what powers tools like ChatGPT and MidJourney, as well as tons of other AI products that render voice, video, animation, text, and more. Neural nets and transformer models, specifically, are a huge part of this latest AI revolution.
  • Natural language processing: Whether you’re chatting with Siri, Google, or Alexa, these virtual assistants all use machine learning and deep neural networks to power their speech recognition (and output) capabilities. 
  • Your phone’s keyboard: Both Android and iOS devices use machine learning to power the predictive text suggestions for the keyboard, and increasingly do so with algorithms running directly on the device itself.
  • Gaming: Machine learning has been applied to generative content creation in some video games.
  • Product recommendation: If you’ve ever wondered why Amazon suggests you buy a rubber duck after you order some bubble bath, thank machine learning: neural nets are a great fit for learning how on-site behaviors like past purchases and browsing history can predict interest in a related product or service (see the sketch just after this list).
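
Here’s the sketch promised above: a deliberately tiny recommender, with all numbers invented, that represents each product as a vector of purchase patterns and recommends whatever is most similar (by cosine similarity) to what the customer just bought. Real systems are far more sophisticated, but the core intuition of “similar behavior, similar products” is the same.

```python
import numpy as np

# Invented purchase-pattern vectors: each row is a product, each column
# a customer segment's purchase frequency.
PRODUCTS = {
    "bubble bath": np.array([9.0, 1.0, 0.5]),
    "rubber duck": np.array([8.0, 2.0, 0.4]),
    "motor oil":   np.array([0.2, 0.1, 9.0]),
}

def cosine(a, b):
    # 1.0 means two products are bought in identical patterns.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(just_bought):
    # Score every other product by similarity to the purchased one.
    scores = {
        name: cosine(PRODUCTS[just_bought], vec)
        for name, vec in PRODUCTS.items()
        if name != just_bought
    }
    return max(scores, key=scores.get)

print(recommend("bubble bath"))  # -> "rubber duck"
```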

What’s the Future of Machine Learning?

Machine learning as an academic field is constantly progressing, and it seems likely we’ll reap increased performance, accuracy, and efficiency from ML models as both public and private research continues. The big “promise” of machine learning that remains unrealized — and that may never arrive — is “artificial general intelligence,” or AGI. AGI posits a future in which AI not only learns (i.e., self-corrects), but meets or exceeds the typical cognitive performance of a human being across a wide range of benchmarks. Some machine learning specialists argue that architectures like neural networks are technically infeasible as a path to AGI, and that some fundamentally new technology must emerge. Others, like OpenAI, suggest AGI is a matter of “when,” not “if.” It’s hard to know who to believe.

But in the immediate future, AI’s advancement in the field of computer vision, especially, seems promising. The abundance of visual data in the world — and the relative ease of collecting it — means that many computer vision models will have more data than they can even ingest for the foreseeable future. The major challenge is labeling that data so that it can be ingested in a useful way by the many specialized computer vision models already deployed out in the world (e.g., medical imaging, automated driving, industrial QA, retail, robotics, and factory floors).

David Ruddock

David's tech experience runs deep. His tech agnostic approach and general love for technology fueled the 14 years he spent as a technology journalist, where David worked with major brands like Google, Samsung, Qualcomm, NVIDIA, Verizon, and Amazon, reviewed hundreds of products, and broke dozens of exclusive stories. Now he lends that same passion and expertise to Esper's marketing team.
