The sustainable development goals of the UN, the imperative of governments to drive towards net zero, the ambitions of nations to effect a green energy transition… all are compromised by humanity’s insatiable appetite for faster, power-hungry computation. Or are they? The emergence of neuromorphic computers – inspired by biology and mimicking the neural systems of the human brain – promises extraordinary performance and energy efficiency.
Not only that, but neuromorphic computing is ideally suited to low-power edge AI applications. So it could unlock all kinds of novel commercial applications, involving everything from smart vision systems to autonomous robots. It was great to share my vision for the future of computing during a techUK online event. As I make clear in the video recording below, there’s much we can achieve by imitating the human brain – the most efficient and powerful computer of all.
Why do we need neuromorphic computers?
Conventional computing technology is based on the so-called von Neumann architecture, in which data processing and data transfer are carried out intensively and continuously. Next-generation computers are expected to operate at the exascale – 10^18 calculations per second. But the downside of the von Neumann architecture is power consumption.
Data computation and transfer are responsible for a large part of this consumption, and the rapid development of machine learning/AI neural network models is adding even more demand. Some AI learning algorithms could draw tens of megawatts of power on an exascale computer. Data-centric computing requires a hardware system revolution: the performance of the computing system, in particular its energy efficiency, sets the fundamental limit of AI/ML capability.
And this is where the advantages of neuromorphic computing come in. It has the potential to deliver high-performance computing while consuming as little as a thousandth of the energy.
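A quick back-of-envelope calculation puts those numbers in perspective. The figures below assume a round 20 MW draw for an exascale machine – the text above only says “tens of megawatts” – and the thousandfold reduction is the neuromorphic target, not a measured result.

```python
# Back-of-envelope energy per operation for an exascale machine.
# Assumption: 20 MW total draw (an illustrative round number).
ops_per_second = 1e18          # exascale: 10^18 calculations per second
power_watts = 20e6             # assumed 20 MW draw

joules_per_op = power_watts / ops_per_second
print(joules_per_op)           # 2e-11 J, i.e. 20 picojoules per operation

# A neuromorphic system hitting the 1/1000th energy target would need:
neuromorphic_joules_per_op = joules_per_op / 1000
print(neuromorphic_joules_per_op)  # roughly 2e-14 J, i.e. 20 femtojoules
```

Twenty picojoules per operation is a whole-system figure including data movement, which is exactly the cost event-driven designs aim to avoid.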
So, what exactly is neuromorphic computing?
As I’ve mentioned, the neuromorphic approach takes inspiration from the human brain. It uses silicon artificial neurons to form a spiking neural network (SNN) that performs event-triggered computation.
There is a key difference between an SNN and other networks, such as the convolutional neural network (CNN): spiking neurons process input information only on receipt of an incoming spike signal. In effect, spiking neural networks attempt to make artificial neurons behave more like real ones.
The network does not work in discrete time steps. Instead, it takes in events over a time series that build up signals within the neurons. These signals accumulate inside each neuron until a threshold is passed, at which point the neuron fires and changes its output state.
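That accumulate-and-fire behaviour can be sketched in a few lines. The following is a toy leaky integrate-and-fire neuron – one common model of a spiking neuron – with weight, leak and threshold values chosen purely for illustration, not taken from any real chip.

```python
# A minimal leaky integrate-and-fire (LIF) neuron: the membrane
# potential accumulates weighted input spikes, decays ("leaks") each
# step, and the neuron fires when the potential crosses a threshold.
def lif_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    potential = 0.0
    output = []
    for spike in input_spikes:
        potential = potential * leak + weight * spike  # integrate with leak
        if potential >= threshold:
            output.append(1)   # fire an output spike...
            potential = 0.0    # ...and reset the membrane potential
        else:
            output.append(0)
    return output

print(lif_neuron([1, 0, 1, 1, 0, 0, 1]))  # → [0, 0, 1, 0, 0, 0, 1]
```

Note that between input spikes the neuron does no work at all – the basis of the energy savings discussed next.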
The benefits of neuromorphic computing
Clearly, a key benefit is substantially better energy efficiency over traditional von Neumann computing architecture. Ultra-low power operation can be achieved thanks to SNNs being effectively in an ‘off’ mode most of the time and only kicking into action when a change, or ‘event’, is detected.
Once in action, it can achieve fast computation without running an energy-consuming fast clock, by triggering a huge number of parallel operations – equivalent to thousands of CPUs running in parallel. It therefore consumes only a fraction of the power a CPU or GPU would need for the same workload.
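As a toy illustration of why event-driven operation saves energy, compare how much work a clocked pipeline and an event-driven one do on the same mostly-idle input. Operation counts here are a crude proxy for energy, not a measurement of any real hardware.

```python
# A mostly-idle sensor stream: most samples carry no new information.
samples = [0, 0, 0, 5, 0, 0, 0, 0, 3, 0]

clocked_ops = len(samples)               # a clocked system works every tick
event_ops = sum(1 for s in samples if s)  # event-driven: only on changes

print(clocked_ops, event_ops)  # → 10 2
```

The sparser the input, the bigger the gap – which is why the savings are largest for always-on sensing workloads.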
This is why neuromorphic computing is well suited to edge AI – implementing low-power AI on end devices without connecting to the cloud. This is especially true for TinyML applications, which tend to focus on battery-operated sensors, IoT devices and so on.
The neuromorphic trends
Next-generation neuromorphic systems are expected to have intrinsic capabilities to learn and deal with complex data just as our brains do. They have the potential to process large amounts of digital information with much lower power consumption than conventional processors.
Success in the field will depend on our increased understanding of the human brain. The goal is to learn as many lessons from biology as possible to achieve low energy consumption and high processing efficiency.
In the medium term, hybrid traditional computers with neuromorphic chips could vastly improve performance over conventional machines. In the longer term, fully neuromorphic computers will be fundamentally different and designed for specific applications, from natural language processing to autonomous driving.
When it comes to design, instead of the conventional approach of partitioning chips into processor and memory, the computer may be built with silicon ‘neurons’ performing both functions.
Building extensive ‘many-to-many’ neuron connectivity will allow an efficient pipeline for signal interaction and facilitate massively parallel operation. The trend is to integrate ever-increasing numbers of electronic neurons, synapses and so on in a single chip.
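The ‘many-to-many’ idea can be sketched as a fan-out table: when a neuron spikes, its event is delivered through weighted synapses to every downstream neuron it connects to, and those updates are independent, so in hardware they can run in parallel. The neuron names, topology and weights below are invented for illustration.

```python
# Toy many-to-many connectivity: presynaptic neuron -> {postsynaptic: weight}.
# Weights are chosen to be exactly representable in binary floating point.
synapses = {
    "n0": {"n1": 0.5, "n2": 0.25, "n3": 0.125},
    "n1": {"n2": 0.5, "n3": 0.75},
    "n2": {"n3": 1.0},
}

def propagate(spiking, potentials):
    """Deliver one spike from each firing neuron to all of its targets.
    Each (pre, post) delivery is independent, so a real chip could
    perform them all in parallel."""
    for pre in spiking:
        for post, weight in synapses.get(pre, {}).items():
            potentials[post] = potentials.get(post, 0.0) + weight
    return potentials

print(propagate(["n0", "n1"], {}))  # → {'n1': 0.5, 'n2': 0.75, 'n3': 0.875}
```

Here two spikes trigger five synaptic updates at once – scale the fan-out to thousands of targets per neuron and the massively parallel character of the architecture becomes clear.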
The design approaches of neuromorphic processor chips follow broadly the following paths:
- FPGA- and ASIC-based digital neuromorphic chips – offering highly optimised computation performance and configurability, tailored to application requirements. For artificial intelligence applications, they can potentially perform both inference and real-time learning
- Analogue neuromorphic chips, including so-called ‘in-memory computing’ – with the potential to achieve the lowest power consumption. These are mainly suited to machine learning inference rather than real-time learning
- Photonic integrated circuit (PIC)-based neuromorphic chips – photonic computation can achieve very high speed at very low power consumption
Neuromorphic computing applications
We expect that neuromorphic computing will generate development opportunities in several technological areas – materials, devices, neuromorphic circuits, new neuromorphic algorithms and software development platforms – all crucial to its success.
As I’ve said, low power edge computing represents another area of high commercial potential. As IoT applications in smart homes, offices, industries and cities proliferate, there is an increasing need for more intelligence on the edge as control is moved from data centres to local devices.
Applications such as autonomous robots, wearable healthcare systems, security and IoT all share the common characteristics of battery-operated, ultra-low power, standalone operation. Please reach out to me by email if you’d like to discuss any aspects of this topic. It’ll be great to hear from you.
Expert authors
Aidong has over 30 years’ experience across diverse industries, including some of the leading semiconductor companies.