Security could be the use case AI PCs need

AI PCs have struggled to find a perfect use case, but the need for stronger security on local desktops, along with agentic AI, might end that struggle.

Interest in AI PCs is soaring and injecting life into what had been a stagnant endpoint device market, but that enthusiasm has been met with a lack of clearly defined use cases.

The early use cases touted by hardware and software vendors often revolved around unified communications and collaboration. While that's a great way to demonstrate discrete hardware and its positive impact on audio and video quality, it solved a problem that didn't need solving, at least for most users. It was cool, but not "drop everything, we have to have this" cool.

Across the industry, we've gone back and forth trying to decide whether a broad, must-have AI PC use case will emerge -- I wrote as much in 2023. Or perhaps AI will just quietly infiltrate everything we do, which is the position I eventually came around to. These days, I find myself somewhere in the middle. I know it'll be useful -- but I'd still like to see a truly compelling reason for widespread adoption.

No matter where we look, we're not finding what we expected. Broad, everyone's-got-to-have-it use cases for local AI resources simply haven't materialized yet. Some are emerging, such as security and agentic AI, but where AI is already being used on the endpoint, it almost invariably relies on a cloud-based service. Those cloud-based services are incredibly useful, widely adopted and driving tangible benefits just about everywhere you look. But cloud services don't use local AI, so why do we need local AI?

Over the past 12 to 18 months, use cases have emerged in support of local AI, but with mixed usefulness and response. Copilot+ PCs launched with Recall, which was received with a reaction best described as a cross between "just because you can, doesn't mean you should" and "oh, h--- no!" Others have touted the ability to build on open source large language models (LLMs) and disseminate fine-tuned smaller models to end users -- mostly developers, but there are use cases outside of this, too.

The problem is that training your own model is:

  • Costly.
  • Difficult to keep in step with the rapid pace of innovation that cloud-scale LLMs are setting.
  • Prone to becoming out of date quickly.
  • In need of frequent retraining, so the cycle repeats.

So where does that leave us? I'm trying to fight off that "solution looking for a problem" feeling. That sounds harsh, but I used an AI PC for two months in my regular office-worker job, and the only time I tickled the neural processing unit (NPU) meter was when I used Teams.

But not all is lost. In fact, a broad use case is emerging in the form of security, which might very well be the universal use case and justification we've been looking for. It could help anchor AI PC usefulness while other use cases, like agentic AI, evolve alongside AI PC adoption.

Security and agentic AI emerging as AI PC use cases

Before we move on, it's worth defining the AI PC, since I'm often asked, "Isn't my machine with a beefy GPU an AI PC already?" I recently heard someone from Intel define it this way, and I liked it enough to try to paraphrase it here:

An AI PC is one that has dedicated hardware divided up for specific purposes. The CPU is suited for quick and lightweight tasks. The GPU is meant for data-intensive AI operations. And the NPU is an "AI accelerator" for workloads that need to run consistently on the system in a low-power way.

So, a GPU alone can enable AI PC workloads in the same manner that a sledgehammer can drive in a nail. It's just that GPUs are expensive and not needed in all situations. An AI PC and its NPU sit somewhere in the middle between a CPU and a GPU. If you are an AI researcher or work in ways that require a ton of AI resources, an AI PC isn't going to move the needle much. You'll still need GPUs. But for the rest of us, NPUs can be beneficial, and we're starting to see more ways this can happen.
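
To make that CPU/GPU/NPU split a bit more concrete, here's a minimal sketch -- not from this article -- of how an application might steer an inference workload toward an NPU using ONNX Runtime execution providers, falling back to the GPU or CPU when no NPU path is present. The provider names and the "model.onnx" file are illustrative; what's actually available depends on the hardware, drivers and runtime build on a given machine.

    import onnxruntime as ort

    # Preference order: NPU-capable providers first, then GPU, then the CPU fallback.
    # Which of these exist depends on the onnxruntime build and installed drivers.
    preferred = [
        "QNNExecutionProvider",       # Qualcomm NPUs (e.g., Snapdragon X Copilot+ PCs)
        "OpenVINOExecutionProvider",  # Intel NPU/GPU/CPU via OpenVINO
        "DmlExecutionProvider",       # DirectML, typically a GPU path on Windows
        "CPUExecutionProvider",       # always present as a last resort
    ]

    # Keep only the providers this machine actually offers, in preference order.
    available = ort.get_available_providers()
    providers = [p for p in preferred if p in available]

    # "model.onnx" is a placeholder for whatever small, quantized model gets deployed.
    session = ort.InferenceSession("model.onnx", providers=providers)
    print("Inference will run on:", session.get_providers()[0])

The point of the sketch is the fallback chain: the same lightweight workload can land on an NPU when one is there and quietly run on the GPU or CPU when it isn't.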


Security

Consider the audio and video touchups the lowest branch on the AI PC tree -- the next level up is endpoint security. In fact, endpoint security that uses local AI is one of the things that I'll be looking for at RSA Conference this year.

I was disappointed last year when the AI endpoint security angle could be summed up in one word: chatbots. This year, I've already seen emerging uses, like ESET's announcement that it's leveraging Intel NPUs, moving some workloads to the NPU when appropriate to increase speed and reduce the impact on system resources. I'm sure they're not alone in that regard, and I hope to learn as much as I can at RSAC.

Agentic AI

Next on the tree of AI is agentic AI, which is the buzzword of 2025. The thing about agentic AI is that while its eventual usefulness is off the charts, there are many angles that must be considered before using it. If the agents are truly independent of end users -- meaning fully autonomous agents acting on behalf of the organization itself as opposed to end users -- there are security, identity, compliance and trust issues that need to be overcome. This will happen, but it will be slow.

The middle ground for agentic AI could be at the endpoint, where agents work on behalf of the end users themselves to accomplish tasks. An agent could file your expenses, compile TPS reports, build a go-to-market plan based on key inputs and meeting notes, etc.

It's the latter use case that could benefit from local AI. Yes, there will always be cloud-based -- or maybe organizationally centralized -- services that can do this. But offloading some of the more menial things to the endpoint would free up cloud resources for more intensive or big-picture things.

Conclusion

While we wait for the killer app that gives AI its "Excel moment" -- to borrow some phrasing from a recent interview in which Satya Nadella compared AI agents to the way Excel on the PC changed corporate forecasting workflows -- it's nice to see genuinely useful use cases emerging.

I recently had the opportunity to learn more about how ESET is using AI PCs to improve its endpoint security products, so look for a post in the next few days about that. And after RSAC, I'll hopefully have a lot more interesting, tangible uses for AI PCs to share.

This is part 1 of a series on AI PC use cases. Find part 2 here.

Gabe Knuth is the senior end-user computing analyst at Enterprise Strategy Group, now part of Omdia.

Enterprise Strategy Group is part of Omdia. Its analysts have business relationships with technology vendors.
