
In 1971, Intel Corporation released the first commercially produced microprocessor, the Intel 4004. More than 50 years later, computer technology has come a long way. Processors have reached new performance heights, leveraging multi-core layouts and hybrid designs that pair Performance and Efficiency cores to drive throughput across your system. With the rise of AI technologies, companies like Intel, Qualcomm, and others have added a new type of processing unit: the Neural Processing Unit, or NPU. Naturally, this raises the question: what are these processing units, and what do they do? Let’s find out!
What is an NPU?
A Neural Processing Unit is a specialized accelerator, loosely inspired by the neural networks of the human brain, that is optimized for the math behind machine learning. NPUs are built directly into a processor’s architecture to boost performance, typically handling sustained, frequently used AI workloads. You can find NPUs in devices like Microsoft’s 13” Surface Pro Copilot+ PC with the Qualcomm Snapdragon X Elite processor.
These NPUs operate at a low power profile, delivering efficiency without compromising the overall performance of your system, which makes them well suited to professional and creative tasks.
Technical Aspects
Before NPUs arrived, programs and operations ran strictly on your system’s Central Processing Unit (CPU) or Graphics Processing Unit (GPU). Running AI tasks like Stable Diffusion consumed resources those processors needed for everything else. With an NPU handling the AI work, your CPU and GPU are free to perform at their best, leaving more headroom for other essential tasks and reducing battery drain. A dedicated NPU also keeps these operations on-device while producing relatively little heat.
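To make the offloading idea concrete, here is a minimal, purely illustrative Python sketch of the kind of routing policy an OS or driver stack might apply: sustained AI work goes to the NPU so the CPU and GPU stay free for their own jobs. The workload names and the policy itself are hypothetical, not any vendor’s actual scheduler.

```python
def route(workload: str) -> str:
    """Pick a processing unit for a workload (hypothetical policy)."""
    if workload in {"ai_inference", "image_generation"}:
        return "NPU"   # sustained, power-efficient AI work
    if workload in {"3d_render", "video_encode"}:
        return "GPU"   # massively parallel graphics work
    return "CPU"       # general-purpose tasks

print(route("ai_inference"))  # NPU
print(route("3d_render"))     # GPU
print(route("spreadsheet"))   # CPU
```

In a real system this decision is made by the OS and the AI runtime, not application code, but the division of labor is the same: each unit gets the work it handles most efficiently.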
NPUs also come into play with Microsoft’s Copilot AI assistant on Copilot+ PCs. Using generative AI features based on Microsoft’s Prometheus model, Copilot answers queries by drawing on models including GPT-4, GPT-4o, and DALL-E 3. This lets you fully explore various concepts, from streamlining professional tasks to finding answers you couldn’t quite pinpoint.
A Growing Technology
NPU technology is constantly growing, and we are only scratching the surface of what these processors can do. For example, Copilot+ requires an NPU that delivers at least 40 trillion operations per second (TOPS). Intel’s early Meteor Lake Core Ultra Series 1 hardware operates at 11 TOPS, but Lunar Lake Core Ultra Series 2 systems such as the MSI Prestige 13 EVO AI+ Copilot+ PC provide 48 TOPS, ample power to run the full Copilot+ suite.
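Peak TOPS figures like these come from simple arithmetic: each multiply-accumulate (MAC) unit counts as two operations per clock cycle, so peak throughput is 2 × MAC units × clock speed. The sketch below checks a hypothetical NPU configuration against the 40 TOPS Copilot+ threshold; the MAC count and clock speed are illustrative numbers, not any real chip’s specification.

```python
def tops(mac_units: int, clock_hz: float) -> float:
    """Estimate peak throughput in TOPS.

    Each MAC (multiply-accumulate) unit performs 2 operations
    (one multiply + one add) per cycle, so:
        TOPS = 2 * mac_units * clock_hz / 1e12
    """
    return 2 * mac_units * clock_hz / 1e12

COPILOT_PLUS_MIN_TOPS = 40  # Microsoft's stated Copilot+ requirement

# Hypothetical NPU: 16,384 MAC units at 1.5 GHz (illustrative only)
estimated = tops(mac_units=16_384, clock_hz=1.5e9)
print(f"Estimated peak: {estimated:.1f} TOPS")
print("Meets Copilot+ requirement:", estimated >= COPILOT_PLUS_MIN_TOPS)
```

Note that these are theoretical peaks; real-world throughput also depends on precision (INT8 vs. FP16), memory bandwidth, and how well a given model maps onto the hardware.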
AMD also includes NPUs in its Ryzen Pro 7000 and 8000 Series processors, but these too lack the power needed for full Copilot+ compatibility. Fortunately, AMD’s upcoming Ryzen AI 300 Series processors promise a full 50 TOPS of AI processing power when they are released at a later date.
While we are still in the early phases of NPU technology, the results so far are quite promising and will only become more impressive as time goes on. Developers are already streamlining productivity and reducing hardware strain without compromising system performance. With Microsoft ready to integrate Copilot+ features into other facets of its technologies, it is an incredibly exciting time to see what the future brings.