The Vision of DevOps for Devices, Part 5: Operationalizing AI at the Edge

Sudhir Reddy

In this final post, we’re bringing together all of the concepts we discussed in earlier posts — managing by exception, drift detection and remediation, software deployment, and compliance enforcement — to talk about operationalizing Artificial Intelligence and Machine Learning (AI and ML) at the edge. 

More specifically, we will discuss AI on edge devices, the complexities of delivering AI models to the edge, and how Esper is uniquely suited to help make AI dreams a reality for any business with a dedicated device fleet. 

The AI Operationalization Problem: Model Delivery at the Edge

It seems like every tech-first company (and what company isn’t these days?) is looking for a way to leverage AI. This isn’t just an industry buzzword, either; it’s the future, and AI at the edge is no different. In modern businesses, integrating AI into your edge devices brings numerous benefits: enhanced privacy, improved efficiency, scalability, better user experiences, and so on. The hard part, however, is delivering that AI model to the edge in a repeatable, predictable, and scalable way.

The issue here is twofold: edge devices often need to run AI models locally because of latency and bandwidth constraints, yet delivering content to those devices is challenging in its own right. It’s quite the pickle.

For example, let’s say you run a manufacturing facility and rely on a series of cameras to QA the product assembly process. These strategically positioned cameras analyze the product from various angles and can detect anomalies or malfunctions in real time. But how can they do this with precision and accuracy without a round trip to the cloud, which often takes several seconds? With localized AI running on the LAN!
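
To make that concrete, here’s a minimal sketch of what local inference can look like, assuming the inspection model has already been exported to ONNX and runs via ONNX Runtime on a LAN-connected machine. The model file name, input layout, and anomaly threshold are all hypothetical.

```python
# A minimal sketch of on-LAN inference for the camera QA example. The model
# file, input layout (assumed NCHW, single output), and threshold are
# hypothetical stand-ins for your own exported inspection model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("defect_detector.onnx")
input_name = session.get_inputs()[0].name

def inspect_frame(frame: np.ndarray) -> bool:
    """Run one camera frame through the local model; True means anomaly."""
    batch = frame.astype(np.float32)[np.newaxis, ...]  # add batch dimension
    (scores,) = session.run(None, {input_name: batch})
    return float(scores[0]) > 0.9  # hypothetical anomaly threshold
```

Because the frame never leaves the facility, the decision comes back in milliseconds rather than the seconds a cloud round trip can take.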

Another example: say you run a restaurant loyalty program and want to deploy your loyalty model to each location. Integrating AI into self-ordering kiosks (and tying it to the loyalty program) lets you streamline ordering by offering smart suggestions based on the customer’s order history or known allergens, leveraging location-based promos, offering birthday rewards, and more, all automatically.
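
Here’s a toy illustration of that kind of suggestion logic; the menu data model and ranking rule are invented for illustration, standing in for a trained recommendation model shipped to the kiosk like any other content.

```python
# Toy on-kiosk suggestion logic for the loyalty example. The data model and
# ranking rule are invented for illustration; a real deployment would ship a
# trained recommendation model to the kiosk instead.
def suggest(menu: list[dict], history: list[str], allergens: set[str]) -> list[str]:
    """Rank allergen-safe menu items, favoring past orders."""
    safe = [m for m in menu if not (set(m["allergens"]) & allergens)]
    return sorted((m["name"] for m in safe),
                  key=lambda name: -history.count(name))[:3]

print(suggest(
    menu=[{"name": "Veggie Wrap", "allergens": []},
          {"name": "PB Shake", "allergens": ["peanut"]}],
    history=["Veggie Wrap", "Veggie Wrap"],
    allergens={"peanut"},
))  # -> ['Veggie Wrap']
```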

The question in all these scenarios is pretty clear: how do you get there? 

Take the manufacturing scenario: it starts with a large AI model trained and running in the cloud, using machine learning to continuously improve its ability to identify manufacturing anomalies. When it reaches an accuracy level within the company's acceptable parameters, the core parts are distilled into a smaller model that can run on the LAN (or even directly on devices in some cases!) with no need for the cloud.
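
What does that shrinking step look like in practice? One common approach is quantization. Here’s a minimal sketch using ONNX Runtime’s dynamic quantization, assuming the cloud model has first been exported to ONNX; the file names are hypothetical, and distillation or pruning would be separate steps.

```python
# A minimal sketch of one common shrinking step: dynamic INT8 quantization of
# the exported cloud model, using ONNX Runtime's quantization tooling.
# File names are hypothetical.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="anomaly_model_full.onnx",   # the large cloud-trained model
    model_output="anomaly_model_edge.onnx",  # smaller artifact for the edge
    weight_type=QuantType.QInt8,             # store weights as 8-bit integers
)
```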

But therein lies the problem: how do you get that model to the edge network, knowing full well that it will need continuous updates? AI is driven by continuous improvement, and to operationalize AI efficiently, the model running at the edge must keep improving, too.

The AI Model Delivery Solution: Streamlined Deployments

Your distribution model is just as critical as your AI model itself — after all, what good is AI if you can’t reasonably deploy and update it? That’s the first problem you must address when building an AI strategy.

Fortunately, AI models are conceptually not much different from apps; in fact, small models are often packaged inside apps. Like apps, they ship alongside the libraries and data they depend on (on the edge, at least). So the problem is the same one that has plagued edge devices for a long time: content delivery.
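
Because a model is just another content artifact, the device-side update flow can mirror an app update: check a published manifest, download the new artifact, and verify its integrity. A sketch, with an invented manifest URL and format:

```python
# A sketch of a device-side update check that treats the model like any other
# content artifact. The manifest URL and format are invented for illustration.
import hashlib
import json
import urllib.request

MANIFEST_URL = "https://example.com/models/manifest.json"  # hypothetical

def fetch_if_newer(local_version: str) -> bytes | None:
    """Download the model only when the published version is newer."""
    with urllib.request.urlopen(MANIFEST_URL) as resp:
        manifest = json.load(resp)
    # Naive string comparison; real code would parse a semantic version.
    if manifest["version"] <= local_version:
        return None  # already up to date
    with urllib.request.urlopen(manifest["url"]) as resp:
        blob = resp.read()
    # Refuse a corrupted or tampered download.
    if hashlib.sha256(blob).hexdigest() != manifest["sha256"]:
        raise ValueError("model artifact failed checksum verification")
    return blob
```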

You need some of the tools we discussed in earlier posts to effectively manage content like apps and AI models on the edge. More specifically: 

  • Pipelines: AI developers use CI/CD (continuous integration/continuous delivery) pipelines to automate testing, versioning, and packaging for reliable model updates. We use the same philosophy to enable you to deliver those updated AI models to the edge. 
  • Containerization: To avoid device-specific dependency problems, containerize the model and its runtime. This makes your AI portable, allowing it to run on the edge. Small language models (SLMs) are ideal for offline processing and lower latency; instead of running in the cloud, they ship as part of the overall container. 
  • Testing and rollback: Testing goes hand-in-hand with pipelines, but you need a repeatable practice for testing AI model updates before widely rolling out new models. And in the case of failure or other issues, a reliable rollback method is critical (see the sketch after this list). 
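
Here’s what that test-and-rollback gate can look like in miniature. The evaluate() step, file paths, and accuracy floor are hypothetical stand-ins for your own pipeline’s evaluation:

```python
# A sketch of the test-and-rollback gate from the list above: promote a
# candidate model only if it clears an accuracy floor on a holdout set, and
# keep the previous artifact around so a rollback is a simple copy.
import shutil
from pathlib import Path

ACCURACY_FLOOR = 0.95  # hypothetical acceptance threshold

def evaluate(model_path: Path) -> float:
    """Stand-in for your holdout-set evaluation; returns accuracy in [0, 1]."""
    raise NotImplementedError

def promote(candidate: Path, live: Path, previous: Path) -> bool:
    """Gate the candidate; on success, keep the old model for rollback."""
    if evaluate(candidate) < ACCURACY_FLOOR:
        return False                      # reject: fleet keeps current model
    if live.exists():
        shutil.copy2(live, previous)      # preserve the rollback target
    shutil.copy2(candidate, live)         # publish the new version
    return True

def rollback(live: Path, previous: Path) -> None:
    """Restore the last known-good model."""
    shutil.copy2(previous, live)
```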

With the right tooling, delivering AI models and updates becomes a non-issue, freeing your dev team to focus on improving the model and your IT team to focus on more strategic technology choices. 

Operationalizing AI at the Edge Has Never Been Easier

With Esper, your AI goals become realities. We can help with every part of the deployment and update process — from testing to rollout (or rollback). We built our platform to address problems just like this. Our modern DevOps approach means you get everything you need for AI model deployment: 

  • Blueprints: This is what device management and optimization at scale look like. You can use Blueprints for everything from settings tweaks to content management and, of course, your AI/ML models on any of your devices — then enforce that state as aggressively as you want. 
  • Pipelines: CI/CD pipelines are how DevOps engineers streamline app development and updates, and they’re also how we enable scalable app, content, and AI model deployment. With Pipelines, you can test your new AI model on a few devices and then roll it out in stages to the rest of your fleet as needed (see the staged-rollout sketch after this list). Esper Pipelines is how you push updates to your AI model at scale, no matter how big the deployment. 
  • App and content libraries: With the Esper Cloud, you can ensure that you always have the latest apps, AI models, and content on any device in your fleet at any time. When you combine the Esper Cloud with Blueprints, you can easily enforce compliance across your devices, ensuring they’re always running the latest policies, app versions, and AI models. 
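
To show what staged rollout means in practice, here’s an illustration of the pattern. The fleet client and its methods are invented for illustration and are not the Esper API; they stand in for whatever deployment interface your pipeline drives.

```python
# An illustration of the staged-rollout pattern described above. The `fleet`
# client and its methods are hypothetical, NOT the Esper API; they stand in
# for whatever deployment interface your platform exposes.
import time

STAGES = [0.01, 0.10, 0.50, 1.00]  # 1% canary, then 10%, 50%, everyone

def staged_rollout(fleet, model_version: str) -> bool:
    """Push a model to progressively larger slices, halting on failures."""
    for fraction in STAGES:
        fleet.deploy(model_version, fraction=fraction)  # hypothetical call
        time.sleep(fleet.soak_seconds)                  # let devices report in
        if fleet.failure_rate(model_version) > 0.01:    # hypothetical metric
            fleet.rollback(model_version)
            return False
    return True
```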

Operationalizing AI at the edge can be challenging, but it doesn’t have to be. Using Esper, you can deploy and update your AI model with confidence. 

But wait. There’s more:

As the world of AI evolves and improves, we’re expanding our own AI capabilities as well: not only delivering models to the edge, but also building AI-powered features into the Esper platform that will benefit our customers. It’s a bit early to talk about these yet, but stay tuned for some great news on this front!

Sudhir Reddy
Sudhir is Esper's Chief Technology Officer. He's a hands-on technologist who brings a unique blend of business acumen, product innovation, development of large-scale DevOps platforms, and execution capabilities to Esper.