Significant Energy Savings Thanks to Neuromorphic Hardware

The Institute for Theoretical Computer Science at TU Graz and Intel Labs have shown experimentally for the first time that a large neural network can process sequences such as sentences while consuming four to sixteen times less energy on neuromorphic hardware than on non-neuromorphic hardware. The research is based on Intel Labs’ Loihi neuromorphic research chip, which draws on insights from neuroscience to build chips that function similarly to the biological brain.

The research was funded by the Human Brain Project (HBP), one of the largest research projects in the world, with more than 500 scientists and engineers across Europe studying the human brain. The results are published in the paper “A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware” (DOI 10.1038/s42256-022-00480-w) in the journal Nature Machine Intelligence.

The human brain as a model

Intelligent machines and computers that can autonomously recognize objects and infer relationships between them are the focus of artificial intelligence (AI) research worldwide. Energy consumption is a major hurdle on the way to broader use of such AI methods, and neuromorphic technology is expected to provide a push in the right direction. Neuromorphic technology is modeled on the human brain, which uses energy extremely efficiently: to process information, its roughly one hundred billion neurons consume only about 20 watts, little more than an average energy-saving light bulb.

In their research, the group focused on algorithms that deal with temporal processes. For example, the system had to answer questions about a previously told story and capture the relationships between objects or people from the context; a small illustration of this type of task follows below. The tested hardware consisted of 32 Loihi chips.
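To make the task concrete, here is a minimal, made-up example of such a story-based question-answering problem; the story, question, and answer are purely illustrative and not taken from the paper:

```python
# Illustrative story-based question answering (all content made up).
story = [
    "Mary picked up the apple.",
    "Mary walked to the kitchen.",
    "Mary put down the apple.",
]
question = "Where is the apple?"
expected_answer = "kitchen"

# To answer correctly, the network must track the relationship between
# the apple and Mary's movements across the whole sequence, i.e.,
# remember earlier sentences while processing later ones.
print(question, "->", expected_answer)
```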

Loihi research chip: up to sixteen times more energy efficient than non-neuromorphic hardware

“Our system is four to sixteen times more energy efficient than other AI models on conventional hardware,” says Philipp Plank, doctoral student at the Institute for Theoretical Computer Science at Graz University of Technology. Plank anticipates further efficiencies as these models migrate to the next generation of Loihi hardware, which will dramatically improve chip-to-chip communication performance.

“Intel’s Loihi research chips promise advances in AI, including reducing its high energy costs,” said Mike Davies, director of Intel’s Neuromorphic Computing Lab. “Our work with Graz University of Technology provides further evidence that neuromorphic technology can improve the energy efficiency of today’s deep learning workloads by rethinking their implementation from a biological perspective.”

Mimicking human short-term memory

In their neuromorphic network, the group emulated a presumed memory mechanism of the brain, as Wolfgang Maass, Philipp Plank’s doctoral advisor at the Institute for Theoretical Computer Science, explains: “Experimental studies have shown that the human brain can store information for a short period of time even without neural activity, namely in so-called ‘internal variables’ of neurons. Simulations suggest that a fatigue mechanism in a subset of neurons is essential for this short-term memory.”

Direct evidence is still lacking because these internal variables cannot yet be measured, but it means the network only has to test which neurons are currently fatigued in order to reconstruct the information it processed earlier. In other words, the previous information is stored in the non-activity of the neurons, and non-activity is precisely what consumes the least energy.
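This kind of “fatigue” corresponds to what computational neuroscience calls spike-frequency adaptation: each spike temporarily raises a neuron’s firing threshold, and that slowly decaying threshold is itself an internal variable that silently carries information. Below is a minimal sketch of a leaky integrate-and-fire neuron with such an adaptive threshold; the model form is standard, but all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def adaptive_lif(input_current, tau_mem=20.0, tau_adapt=200.0,
                 v_thresh=1.0, beta=0.5, dt=1.0):
    """Leaky integrate-and-fire neuron with an adaptive ("fatiguing") threshold.

    Each spike raises the effective threshold by beta; the raise decays
    slowly with time constant tau_adapt. Between spikes, this adaptation
    variable silently stores a trace of recent activity: no spikes, and
    therefore very little energy.
    """
    v, a = 0.0, 0.0                          # membrane potential, fatigue
    spikes, fatigue = [], []
    for i_t in input_current:
        v += dt / tau_mem * (-v + i_t)       # leaky integration of input
        a -= dt / tau_adapt * a              # slow decay of fatigue
        if v >= v_thresh + beta * a:         # fatigued neurons fire less easily
            spikes.append(1)
            v = 0.0                          # reset after a spike
            a += 1.0                         # every spike adds fatigue
        else:
            spikes.append(0)
        fatigue.append(a)
    return np.array(spikes), np.array(fatigue)

# A brief input pulse makes the neuron fire; afterwards it falls silent,
# yet the fatigue variable still "remembers" that the pulse occurred.
current = np.concatenate([np.full(50, 2.0), np.zeros(200)])
spikes, fatigue = adaptive_lif(current)
print("spikes during pulse:      ", int(spikes[:50].sum()))
print("fatigue just after pulse: ", round(float(fatigue[60]), 3))
print("fatigue 200 ms later:     ", round(float(fatigue[-1]), 3))
```

Reading out which neurons are still fatigued (the `fatigue` trace) recovers information about the earlier input without any further spiking, which is exactly the energy-saving effect described above.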

Symbiosis of recurrent and feed-forward networks

To achieve this, the researchers combine two types of deep learning networks. Recurrent neural networks are responsible for the “short-term memory”: many so-called recurrent modules of this type filter the relevant information out of the input signal and store it. A feed-forward network then determines which of the relationships found are important for solving the task at hand. Irrelevant relationships are discarded, and neurons fire only in those modules where relevant information has been found. It is exactly this process that ultimately leads to the energy savings; a sketch of the division of labor follows below.
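The following is a minimal, non-spiking sketch of that division of labor, assuming toy dimensions and random weights throughout; the module sizes, layer widths, and ReLU readout are illustrative assumptions, not the architecture from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def recurrent_module(sequence, n_hidden=64):
    """Toy recurrent module: integrates a sequence step by step and keeps a
    running state that acts as short-term memory for the relations seen."""
    w_in = rng.normal(scale=0.1, size=(n_hidden, sequence.shape[1]))
    w_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
    h = np.zeros(n_hidden)
    for x in sequence:                       # one step per sentence/token
        h = np.tanh(w_in @ x + w_rec @ h)    # state accumulates context
    return h

def feedforward_readout(module_states, n_out=8):
    """Toy feed-forward network: weighs the stored relations and keeps only
    those relevant to the task; irrelevant units simply stay silent."""
    h = np.concatenate(module_states)
    w1 = rng.normal(scale=0.1, size=(32, h.size))
    w2 = rng.normal(scale=0.1, size=(n_out, 32))
    relevance = np.maximum(0.0, w1 @ h)      # ReLU: irrelevant units -> 0
    return w2 @ relevance

# A "story" of 10 sentences, each encoded as a 16-dim vector (made up).
story = rng.normal(size=(10, 16))
memories = [recurrent_module(story) for _ in range(4)]   # several modules
answer_scores = feedforward_readout(memories)
print(answer_scores.shape)                   # scores over candidate answers
```

In the spiking version running on Loihi, this sparsity is what saves energy: modules holding nothing relevant simply do not fire.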

“Recurrent neural structures should offer the greatest benefits for applications running on neuromorphic hardware in the future,” said Davies. “Neuromorphic hardware like Loihi is uniquely suited to enable the fast, sparse, and unpredictable patterns of network activity that we see in the brain and that the most power-efficient AI applications require.”

This research was financially supported by Intel and the European Human Brain Project, which connects neuroscience, medicine, and brain-inspired technologies in the EU. To this end, the project is creating a permanent digital research infrastructure, EBRAINS. This work is anchored in the Fields of Expertise “Human & Biotechnology” and “Information, Communication & Computing”, two of the five Fields of Expertise at Graz University of Technology.

Source of the story:

Materials provided by Graz University of Technology. Originally written by Christoph Pelzl. Note: Content may be edited for style and length.
