Neural networks are a powerful artificial intelligence (AI) method inspired by the structure of the human brain. These networks can be trained to perform remarkable tasks like speech recognition and image classification, but they learn much more slowly and consume far more energy than real brains do. One of the main reasons for these limitations is that the brain handles memory very differently from conventional computer hardware. In real brains, memory is stored locally, in the strengths of the synaptic connections between individual neurons. In contemporary computer chips, memory is stored externally in the form of RAM, which must be accessed every time a calculation is performed. Even for very small simulated networks, the cost of moving information between the central processing unit (the simulated neurons) and the external memory unit (the simulated synapses) adds up remarkably fast.

Recently, a group of researchers at IBM in San Jose, California, debuted a new brain-inspired microchip that stores memory more like neurons do. The chip – dubbed “NorthPole” – contains 256 separate computing units, each with its own memory, that are connected in a way that mimics connections between parts of the human brain. This novel arrangement, in combination with other innovations, makes NorthPole more than 20 times faster and approximately 25 times more energy-efficient than any other chip on the market today.

The gains achieved by NorthPole and other brain-inspired chips are likely to change the way we think about computer architecture, which has changed little since the concept of a central processor and memory unit was first conceived by the mathematician John von Neumann in the mid-1940s. Although NorthPole lacks the memory to train very large language models like those used by ChatGPT, the chip has immediate applications in areas such as autonomous vehicles and event-based cameras, which capture video only when significant changes (such as movement) are detected.

This research was led by Dharmendra S. Modha at the IBM Research Center in San Jose, CA.

Managing Correspondent: Alexandra Hartman

Press Article: ‘Mind-blowing’ IBM chip speeds up AI (Nature News)

Original Journal Article: Neural inference at the frontier of energy, space, and time (Science)

Image Credit: Pexels/Martina Stiftinger/Google DeepMind

