Edge AI: How to Execute Your AI Goals on Edge Devices

Amar Balutkar
Jordan Con

At this point, nearly every technology discussion revolves around AI, and it’s no different for devices and device management. Almost all of our customers say they’re thinking carefully about how AI can add value to their business.

We’re thinking about how AI can enhance MDM workflows, but more importantly, how modern MDM can help customers achieve their edge AI objectives by deploying models to their company-owned and managed devices.

One example is device telemetry and anomaly detection. If you’re ingesting large volumes of telemetry data, can you automatically flag devices that are behaving abnormally? With the right AI tools, you can.
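
To make that concrete, here’s a minimal sketch of what device-side anomaly detection can look like: flag readings that deviate sharply from a device’s own recent baseline. The metric, window size, and threshold here are illustrative assumptions, not part of any Esper API.

```python
from collections import deque
import statistics

class TelemetryAnomalyDetector:
    """Flags readings that deviate sharply from a device's recent baseline."""

    def __init__(self, window_size: int = 100, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window_size)
        self.z_threshold = z_threshold

    def is_anomaly(self, value: float) -> bool:
        anomalous = False
        if len(self.readings) >= 30:  # need enough history for a baseline
            mean = statistics.fmean(self.readings)
            stdev = statistics.stdev(self.readings)
            anomalous = stdev > 0 and abs(value - mean) / stdev > self.z_threshold
        if not anomalous:
            self.readings.append(value)  # keep anomalies out of the baseline
        return anomalous

# Example: a battery-temperature spike stands out against a stable baseline.
detector = TelemetryAnomalyDetector()
for reading in [31.0, 31.5, 30.8] * 20 + [58.0]:
    if detector.is_anomaly(reading):
        print(f"Anomalous reading: {reading}")  # flags 58.0
```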

In my role in sales engineering, talking to prospects and customers, I see that nearly all enterprises are looking to bring AI into their products. Initially, these AI initiatives all start in the cloud: someone deploys an AI inference engine into their cloud environment, it works, and the concept is successfully proven. But the reality is that you can’t scale that way. You need to run inference locally at the edge to address constraints like latency and data costs.

An easy example is an edge video camera streaming 4K or even 1080p video continuously for hours a day. That will quickly exhaust your bandwidth and cost a lot of money, and that’s without mentioning the extra time it takes to upload and process the data. When real-time data processing is crucial, every second matters. So the next step is to push the workload to the edge and run your AI models locally on your devices for near-instant data processing.
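
To put rough numbers on that, here’s a quick back-of-the-envelope calculation. The bitrates and per-GB price are illustrative assumptions; plug in your own figures.

```python
# Back-of-the-envelope math for the streaming-camera example.
BITRATE_MBPS = 20      # a typical 4K H.264 stream (1080p is roughly 5 Mbps)
HOURS_PER_DAY = 8
COST_PER_GB = 0.10     # assumed cellular/egress price in USD

bits_per_day = BITRATE_MBPS * 1_000_000 * HOURS_PER_DAY * 3600
gb_per_day = bits_per_day / 8 / 1e9
print(f"{gb_per_day:.0f} GB/day per camera")                      # ~72 GB/day
print(f"${gb_per_day * COST_PER_GB * 30:,.0f}/month per camera")  # ~$216/month
```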

New Challenges Running AI at the Edge

That’s not to say running AI at the edge is perfect. 

How do you protect your model? Security becomes important in a whole new way. What’s the security posture of the device, and what guarantees that nobody is going to steal your AI model? When multiple apps each have their own AI model on the same device, how do you partition the data and ensure only you have access to it? Unfortunately, there isn’t a clear-cut answer to any of these questions. There are a number of countermeasures, like controlled access, encryption, and reliable OTA updates, so it’s more a collection of actions than a single answer.
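
As one example of those countermeasures, here’s a minimal sketch of encrypting a model at rest, so that a copied file is useless without the key. It assumes the Python `cryptography` package; in a real deployment, the key would live in a hardware-backed keystore (such as the Android Keystore), not next to the model as it does here.

```python
from cryptography.fernet import Fernet

def encrypt_model(model_path: str, key: bytes) -> None:
    """Write an encrypted copy of the model alongside the original."""
    with open(model_path, "rb") as f:
        plaintext = f.read()
    with open(model_path + ".enc", "wb") as f:
        f.write(Fernet(key).encrypt(plaintext))

def load_model_bytes(encrypted_path: str, key: bytes) -> bytes:
    # Decrypt into memory only at inference time; never write plaintext back.
    with open(encrypted_path, "rb") as f:
        return Fernet(key).decrypt(f.read())

key = Fernet.generate_key()  # in practice: fetched from a secure keystore
encrypt_model("model.tflite", key)
model_bytes = load_model_bytes("model.tflite.enc", key)
```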

Another challenge is ensuring that your AI models are never static or operating in a silo — that you can constantly update them. There’s an entire feedback loop of pulling in new data and updating your training. Many AI tools and frameworks have been built around this concept, but when you’re deploying at the edge, you need a mechanism to get that data off the device and make it available as part of your feedback loop. That requires new solutions.
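
The device side of that mechanism might look something like the sketch below: batch local inference results and ship them to a collection endpoint for retraining. The URL, payload shape, and batch size are hypothetical, not a real Esper API.

```python
import requests

COLLECTOR_URL = "https://example.com/v1/feedback"  # hypothetical endpoint
BATCH_SIZE = 50

pending: list[dict] = []

def record_result(device_id: str, model_version: str, result: dict) -> None:
    """Queue one inference result; upload once a full batch accumulates."""
    pending.append({
        "device_id": device_id,
        "model_version": model_version,
        "result": result,
    })
    if len(pending) >= BATCH_SIZE:
        flush()

def flush() -> None:
    try:
        resp = requests.post(COLLECTOR_URL, json=pending, timeout=10)
        resp.raise_for_status()
        pending.clear()  # only drop the batch once the server accepted it
    except requests.RequestException:
        pass  # network is down; keep the batch queued and retry later
```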

Large companies build their own tools to solve this problem specifically for their hardware and infrastructure — essentially creating a custom MDM capable of integrating AI workloads. But a medium-sized business, or a company that isn’t tech-first, likely can’t invest that much in building bespoke infrastructure. That’s where platforms like Esper come in. They give you the necessary tooling to deploy your AI workload to the edge — your models, your applications, your files, your parameters. And they give you a mechanism to pull data out, use it to retrain, and then re-deploy the updated model back to your devices, creating that continuous feedback loop.

Operationalizing Edge AI with AI DevOps

The feedback loop mechanisms that we discussed are all about operationalization. And you can really think about that as AI DevOps — the deep integration of AI development with operations.

Operationalization goes beyond building the model. Even once you’ve built the model — and it’s easy to test and validate it in the cloud — you’re not done as a device operator. You still need to figure out how to deploy it to a large fleet of devices that may be in cabs throughout the city, in airports, or in your restaurant franchises around the world. And it’s not a one-time deployment. It’s deploying and re-deploying updated models as fast as your organization needs the model to evolve.
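
The device side of that re-deployment might look like the sketch below: poll an update manifest, verify the download, and swap the model in atomically. The manifest URL and its fields are hypothetical, and a real MDM would typically push updates to devices rather than rely on polling.

```python
import hashlib
import os
import requests

MANIFEST_URL = "https://example.com/v1/models/detector/manifest"  # hypothetical

def check_for_update(current_version: str, model_path: str) -> bool:
    """Fetch the latest model if the manifest advertises a newer version."""
    manifest = requests.get(MANIFEST_URL, timeout=10).json()
    if manifest["version"] == current_version:
        return False

    # Download, verify integrity, then swap atomically so a failed or
    # corrupted download never leaves the device with a half-written model.
    blob = requests.get(manifest["url"], timeout=60).content
    if hashlib.sha256(blob).hexdigest() != manifest["sha256"]:
        return False
    tmp_path = model_path + ".tmp"
    with open(tmp_path, "wb") as f:
        f.write(blob)
    os.replace(tmp_path, model_path)
    return True
```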

Think of it another way — AI models need to evolve with every data point received from the device fleet. That means super-fast feedback loops, where the model gets deployed, solves some problem, pushes the result back to a central system, and that data is used to retrain the next iteration of the model. It’s a never-ending cycle of analysis, processing, and improvement, continuously integrated and delivered across your device fleet.

In the ideal scenario, this feedback loop happens instantaneously, but we’re currently at the opposite end of the spectrum. That’s because operationalization is a real challenge. Right now, it’s nearly impossible for smaller companies to solve this and compete with the bigger ones that built in-house solutions.

And this all takes resources: companies need to reach this state of fast feedback loops cost-effectively. Even if a company could deploy updated models multiple times a day, should it? To answer that question, it needs a lot of operational data to weigh the benefits of deploying faster against the cost of doing so.
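
As a sketch of that tradeoff, consider the rollout cost alone. Every number below is an illustrative assumption.

```python
# Illustrative rollout-cost math; substitute your own fleet and pricing data.
FLEET_SIZE = 10_000
MODEL_DELTA_MB = 50    # assumed size of each over-the-air model update
COST_PER_GB = 0.10     # assumed data cost in USD
DEPLOYS_PER_DAY = 3

gb_per_rollout = FLEET_SIZE * MODEL_DELTA_MB / 1000
monthly_cost = gb_per_rollout * COST_PER_GB * DEPLOYS_PER_DAY * 30
print(f"{gb_per_rollout:.0f} GB per rollout, ~${monthly_cost:,.0f}/month")
# 500 GB per rollout, ~$4,500/month: worth it only if faster model updates
# recover more than that in accuracy or uptime gains.
```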

Any MDM solution targeting edge AI use cases needs to address all of these points to enable AI operationalization on devices.

Amar Balutkar

Amar leads the pre-sales and solution engineering team at Esper. He has 15+ years of experience building and leading engineering teams across a variety of technology domains, ranging from mobile and edge devices to the cloud, with a focus on IoT, security, and AI. He previously worked at QuEST Global and Motorola Mobility.

Jordan Con

Jordan is the Senior Director of Growth & Product Marketing at Esper and has over a decade of experience in B2B SaaS marketing, including about half of it with IoT and edge devices. He is passionate about how cutting-edge technology solves real-world challenges.
