New weapons unveiled in AI Arm-oury
Cambridge superchip architect Arm has revealed significant additions to its artificial intelligence (AI) technology platform.
They include new machine learning IP, the Arm Cortex-M55 processor and Arm Ethos-U55 NPU, the industry’s first microNPU (Neural Processing Unit) for Cortex-M, designed to deliver a combined 480x leap in ML performance to microcontrollers.
The new IP and its supporting unified toolchain give AI hardware and software developers more ways to innovate, bringing unprecedented levels of on-device ML processing to billions of small, power-constrained IoT and embedded devices.
“Enabling AI everywhere requires device makers and developers to deliver machine learning locally on billions, and ultimately trillions of devices,” said Dipti Vachani, senior VP and general manager, Automotive and IoT Line of Business at Arm.
“With these additions to our AI platform no device is left behind as on-device ML on the tiniest devices will be the new normal, unleashing the potential of AI securely across a vast range of life-changing applications.”
As the IoT intersects with AI advancements and the rollout of 5G, more on-device intelligence means that smaller, cost-sensitive devices can be smarter and more capable while benefiting from greater privacy and reliability due to less reliance on the cloud or internet.
By delivering this intelligence on microcontrollers designed securely from the ground up, Arm is reducing silicon and development costs and speeding up time to market for product manufacturers looking to efficiently enhance digital signal processing and ML capabilities on-device.
Fellow Cambridge company and Arm partner, Audio Analytic, says its sound recognition technology will be at least 50 per cent faster on Arm's new Cortex-M55 processor.
Dr Dominic Binks, VP Technology at Audio Analytic said: “As an Arm partner, we had early access to Cortex-M55. We were impressed to see that our – already lean – sound recognition software (ai3™) will be at least 50 per cent faster on M55-based chips when compared to a Cortex-M4, and that’s before enabling M-Profile Vector Extensions (aka Helium), which then results in a further doubling of performance.
“As a result, we would expect a Helium-powered M55 device to be 4x faster than a Cortex-M4 at a like-for-like clock speed on the workloads we present.
“Machine learning on endpoint devices, irrespective of their size, is critical to AI reaching every part of our lives. The performance enhancements made possible by this new chip and these new vector extensions will deliver a major step change in tinyML.
“There is considerable demand to run advanced AI, like sound recognition, at the edge of the network, principally because cloud infrastructure is expensive and edge-based processing offers privacy benefits for end users.
“Now, thanks to Arm, next-generation consumer and IoT devices can deliver supercharged AI at even lower power and lower cost. The net result is being able to fit more features onto a device or being able to offer AI on an AA battery.
“With Helium, next-generation Cortex-M processors gain the vectorising capabilities of Neon-class processors, along with other new capabilities, which together deliver tangible improvements in performance, and cost, in the microcontroller space.”