Intel Announces Copilot Will Work on Laptops - But Only Snapdragon Chips Can Support It


Copilot is deeply embedded in the Microsoft ecosystem and has even earned a dedicated keyboard key for the AI chatbot. But it won't run locally on the new Intel AI PCs.

At the recent Intel AI Summit in Taipei, chipmaker executives told our sister publication Tom's Hardware that none of the current-generation Intel Core Ultra chips meet the minimum requirements to run Copilot offline on a device.

It all boils down to TOPS — trillions of operations per second — a measure of how fast an NPU can perform AI tasks, with higher numbers meaning better performance. An NPU with at least 40 TOPS is required, but at present Intel's Core Ultra chips deliver only about 10 TOPS.

Currently, the only Windows-capable processor that clears this bar is the Qualcomm Snapdragon X Elite, at 45 TOPS. Intel says its next-generation Core Ultra chips will meet the minimum requirement.

At the moment, the processing of Copilot's impressive generative AI capabilities is done in the cloud, where data is sent to Microsoft's servers and a response is returned. In the long term, however, the goal is to have at least part of it on the device itself.

There are multiple reasons to run AI tools like Copilot locally, including privacy protection, security, offline access, and cost.

The challenge is having enough computing power to run elements of the Copilot experience without the user noticing slowdowns or rapid battery drain.

The battery issue is largely solved by offloading much of the inference to the NPU rather than relying on the GPU. According to Todd Lewellen, vice president of Intel's client computing group, Microsoft is insisting that copilots run on NPUs precisely because of the impact on battery life.

You can install a large language model like Meta's Llama 2 on an M2 MacBook Air today, but that kind of enthusiast setup wouldn't be acceptable for a mainstream application like Copilot, or a future version of Siri.

TOPS is the number of trillions of operations a chip can process per second. Specifically, it measures the arithmetic operations a processor can perform each second, which makes it a particularly useful performance yardstick for AI and machine learning tasks.

In most cases, TOPS figures are quoted for the chip as a whole, including the CPU and NPU, but according to Intel, a higher NPU figure matters more for overall performance in LLM tasks.
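To make the metric concrete, here is a minimal sketch of how a TOPS figure is typically derived from a chip's raw throughput. The NPU configuration below (MAC count and clock speed) is hypothetical, chosen only so the result lands near the Snapdragon X Elite's quoted 45 TOPS; real vendor figures depend on precision (INT8 vs. FP16) and the convention of counting a multiply-accumulate as two operations.

```python
def tops(ops_per_cycle: int, clock_hz: float) -> float:
    """Convert raw throughput into TOPS (trillions of operations per second)."""
    return ops_per_cycle * clock_hz / 1e12

# Hypothetical NPU: 16,384 INT8 MAC units running at 1.4 GHz.
# Each MAC counts as 2 operations (multiply + accumulate),
# the convention most vendors use when quoting TOPS.
print(tops(2 * 16_384, 1.4e9))  # ≈ 45.9 TOPS
```

This also shows why the 40-TOPS threshold is hard to hit with a small NPU: at 10 TOPS, a chip performs a quarter of the per-second work, so the same model runs proportionally slower or must be pushed onto the GPU.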

Lewellen told Tom's Hardware that once NPUs reach 40 TOPS, "you can do more things locally."

However, don't expect to run everything in Copilot on your machine. Certain features, such as some image generation and editing, will likely require an internet connection and access to cloud services for some time.

"The general trend is to offload as much of [the AI process] as possible to the endpoint [on the device]," David Feng, vice president of Intel's client computing group, told Tom's Guide at MWC.

There are already a number of AI apps that run services locally, such as image and video editing tools, but they primarily use GPUs. Intel is actively working with developers to encourage better use of NPUs, though that may not happen until next-generation chips arrive.

Meanwhile, Qualcomm has the Windows AI space to itself. Its oft-delayed Snapdragon X Elite chip runs Windows and delivers 45 TOPS from its onboard NPU.

Qualcomm CMO Don McGuire predicted at the time of the announcement that the low TOPS count on Intel chips would be a problem; at MWC in Barcelona in February, he told Tom's Guide that not much can be done with only a few TOPS.

"All three different processor units can support more than 70 TOPS of AI performance," he explained, referring to the Snapdragon X Elite's CPU, GPU, and NPU combined.

He added that Intel's AI PCs can only handle about 10 TOPS, and at that level "there's not a lot you can do as an AI user experience."

Even Apple's top-of-the-line A17 Pro mobile chip and M2 Ultra desktop chip top out around 34 TOPS, but tighter integration of the operating system and hardware lets them make better use of the available processing power and spread the load across the system.

The bottom line is that AI can run on laptops even today. In most cases it relies on the GPU, which impacts battery life and overall system performance. Within the next year or two, however, with the arrival of second-generation AI PCs and new software, local AI will become the norm.

What is certain is that we will be hearing more about TOPS in the coming years as companies seek to leverage NPUs' ability to perform complex calculations and AI processing without significantly impacting battery life.
