23 February 2018 - 14:14 By News Desk

Under the hood of Project Trillium

Imagine you’re 30 metres down, diving above a reef surrounded by amazing-looking creatures and wondering what species the little yellow fish with the silver stripes is, writes Jem Davies, Fellow and General Manager, Machine Learning, Arm (and qualified scuba diver).

You could fumble around for a fish chart, if you have one, but what you really want is an easier and faster solution. Fast forward and technology has provided. Now your waterproof smartphone is enabled by Arm Machine Learning (ML) and Object Detection processors. Your experience is very different.

Your dive mask is relaying information in real time via a vivid heads-up display. An Arm-based chip inside your smartphone is now equipped with an advanced Object Detection processor that picks out the most important scene data, while an operating system tasks a powerful Machine Learning processor with detailed identification of fish, other areas of interest and hazards.

The information you’re receiving is intelligently filtered, so you’re not overwhelmed with data. This is exactly the kind of experience Arm’s Project Trillium and our new ML technologies will enable, and much, much more.

We are launching Project Trillium to kickstart a new wave of invention in the world of artificial intelligence (AI), of which machine learning is a key part. Getting to this point is the result of significant and prolonged investment from Arm to enable the kind of future devices we and our partners see on the horizon.

As we see edge machine learning introduced rapidly into more products, we expect to see a world in which most ‘things’ are equipped with a new level of smartness. Indeed, my answer to the question: ‘Why would you introduce more intelligence into your device?’ is ‘Why wouldn’t you?’

In my opinion, the growth of machine learning represents the biggest inflection point in computing for more than a generation. It will have a massive effect on just about every segment I can think of. People ask me which segments will be affected by ML and I respond that I can’t think of one that won’t be.

Moreover, it will be done at the edge wherever possible, and I say this because I have the laws of physics, the laws of economics and many laws of the land on my side. The world doesn’t have the bandwidth to cope with real-time analysis of all the video being shot today, and the power and cost of transmitting that data to be processed in the cloud are simply prohibitive.

Google realised that if every Android device in the world performed three minutes of voice recognition each day, the company would need twice as much computing power to cope.

The world’s largest computing infrastructure, in other words, would have to double in size. Also, demands for seamless user experiences mean people won’t accept the latency (delay) inherent in performing ML processing in the cloud. And, to be reliable, ML cannot be dependent on a stable Internet connection, especially when it is governing safety-critical operations. In addition to the technical logic, laws and user expectations on privacy and security mean that most people prefer to keep their data on their device. That is backed up by the findings of the AI Today, AI Tomorrow report we sponsored in 2017. Project Trillium will make that possible.



What will Project Trillium introduce that doesn’t exist in the marketplace now?

Project Trillium represents a suite of Arm products that gives device-makers all the hardware and software choices they need. 

It also enables a seamless link into an ecosystem of Arm partners delivering neural network (NN) applications, with support for leading frameworks such as Google TensorFlow, Caffe, the Android NN API and MXNet.
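To make that concrete, here is a minimal sketch of what running a trained network through one of those frameworks looks like, using the TensorFlow 1.x Python API that was current at the time of writing. The model file and the tensor names ("input:0", "logits:0") are hypothetical placeholders, and any Arm-specific acceleration would sit inside the framework, below this code.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API

# Load a frozen, already-trained graph. The file name and the tensor
# names used below are hypothetical placeholders.
with tf.gfile.GFile("fish_classifier_frozen.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

# Run a single inference pass on a dummy 224x224 RGB image.
image = np.random.rand(1, 224, 224, 3).astype(np.float32)
with tf.Session(graph=graph) as sess:
    logits = sess.run("logits:0", feed_dict={"input:0": image})
print("Predicted class:", int(np.argmax(logits)))
```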

The architecture behind the Arm ML processor is purpose-built to be as efficient as possible, and it is completely scalable. In its launch form, it enables the processor to run at almost five trillion operations per second (TOPS) within a mobile power budget of just 1-2 watts, making it capable of the most demanding everyday machine learning tasks.

That performance can go even higher in real-world use. This means devices using the Arm ML processor will be able to perform ML independent of the cloud. 

That’s clearly vital for products such as dive masks but also important for any device, such as an autonomous vehicle, that cannot rely on a stable internet connection.
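As a back-of-the-envelope check on the launch figures above, using only the numbers quoted in this article (almost five trillion operations per second within a 1-2 watt budget):

```python
# Efficiency implied by the launch figures quoted above.
ops_per_second = 5e12          # "almost five trillion operations per second"
for watts in (1.0, 2.0):       # the quoted 1-2 W mobile power budget
    tops_per_watt = ops_per_second / watts / 1e12
    print(f"At {watts:.0f} W: {tops_per_watt:.1f} TOPS per watt")
# Roughly 2.5-5 TOPS per watt: the kind of efficiency that lets this
# class of workload stay on the device rather than go to the cloud.
```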

Today, the technologies within Project Trillium are optimised for the mobile market and smart IP cameras, as that is where device-makers are demanding edge ML performance. But as plans to deploy ML across a diverse range of mainstream markets mature, Arm ML technologies will scale to suit requirements.

We already see ML tasks running on Arm-powered devices such as smart speakers featuring keyword spotting. This will continue and expand rapidly.
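Keyword spotting of this kind is, at its core, a small always-on classifier run over short audio frames. A highly simplified sketch of such a front end, in plain NumPy; the frame sizes and energy gate are illustrative, and the keyword classifier itself is assumed to be a separately trained model:

```python
import numpy as np

def candidate_frames(audio, frame_len=400, hop=160, energy_gate=0.01):
    """Slice 16 kHz mono audio into 25 ms frames (10 ms hop) and keep
    only the frames loud enough to pass on to a keyword model."""
    frames = [audio[i:i + frame_len]
              for i in range(0, len(audio) - frame_len + 1, hop)]
    return [f for f in frames if np.mean(f ** 2) > energy_gate]

# Example: one second of synthetic audio.
signal = 0.2 * np.random.randn(16000).astype(np.float32)
print(len(candidate_frames(signal)), "frames passed the energy gate")
```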

At the high end, ML inference (analysing data using a trained model) is already being performed in connected cars and servers, and we have the ability to scale our technologies to suit those applications too.
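Inference in that sense is simply a forward pass through weights that were fixed during training; no learning happens on the device. A minimal NumPy sketch of the computation being accelerated, where the tiny two-layer network and its "trained" weights are purely illustrative:

```python
import numpy as np

# Stand-in "trained" weights for a tiny two-layer classifier.
W1, b1 = np.random.randn(64, 10), np.zeros(10)
W2, b2 = np.random.randn(10, 3), np.zeros(3)

def infer(x):
    """Forward pass only: the weights are read, never updated."""
    h = np.maximum(x @ W1 + b1, 0.0)      # ReLU hidden layer
    scores = h @ W2 + b2                  # class scores
    e = np.exp(scores - scores.max())     # numerically stable softmax
    return e / e.sum()

probs = infer(np.random.randn(64))
print("Class probabilities:", np.round(probs, 3))
```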

We now have an ML processor architecture that is versatile enough to scale to any device, so it is more about giving markets what they need, when they need it. This gives us, and our ecosystem partners, the speed and agility to react to any opportunity.

As well as the Arm ML processor, we also have its cousin, the Arm Object Detection (OD) processor. It is a second-generation device; the first-generation computer vision processor is already deployed in Hive security cameras.

The OD processor can detect objects from 50x60 pixels upwards and process Full HD video at 60 frames per second in real time. It can also detect a virtually unlimited number of objects per frame, so dealing with the busiest coral reef, or soccer stadium, is no problem.
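Those figures imply a substantial pixel rate. Using only the numbers quoted above:

```python
# Throughput implied by "Full HD at 60 frames per second".
width, height, fps = 1920, 1080, 60
print(f"{width * height * fps / 1e6:.1f} million pixels per second")  # ~124.4

# Smallest detectable object quoted above: 50x60 pixels.
print(f"Minimum object area: {50 * 60} pixels")
```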
