May 16, 2024

Researchers Develop Energy-Efficient Probabilistic Computers for Current AI Applications

Researchers from Tohoku University and the University of California, Santa Barbara, have developed a proof-of-concept for an energy-efficient computer compatible with current artificial intelligence (AI) technologies. The computer utilizes the stochastic behavior of nanoscale spintronics devices and is specifically designed for probabilistic computation problems such as inference and sampling.

As Moore’s Law slows down, there is a growing need for domain-specific hardware. One such example is the probabilistic computer, built from probabilistic bits, or p-bits: inherently stochastic building blocks that fluctuate randomly between states. These computers have the potential to efficiently tackle computationally demanding tasks in machine learning and AI.
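The behavior of a single p-bit can be emulated in a few lines of software. The sketch below uses the commonly cited p-bit relation, in which the output is +1 or -1 with a sigmoidal probability set by the input; the function name and the uniform noise term standing in for the device's physical randomness are illustrative, not taken from the paper.

```python
import numpy as np

def p_bit(input_current: float, rng: np.random.Generator) -> int:
    """Binary stochastic neuron (p-bit): outputs +1 or -1 with a probability
    that is a sigmoidal function of its input, following the commonly cited
    relation m = sgn(tanh(I) + r), where r is uniform noise in [-1, 1]."""
    r = rng.uniform(-1.0, 1.0)  # software stand-in for the device's intrinsic randomness
    return 1 if np.tanh(input_current) + r > 0 else -1

rng = np.random.default_rng(0)
# With zero input a p-bit fluctuates 50/50; a positive bias skews it toward +1.
samples = [p_bit(1.0, rng) for _ in range(10_000)]
print(sum(s == 1 for s in samples) / len(samples))  # roughly 0.88
```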

While quantum computers are best suited to inherently quantum problems, room-temperature probabilistic computers are ideal for algorithms that rely on probabilistic calculations, which are common in machine learning training, optimization, and sampling.

Researchers from Tohoku University and the University of California, Santa Barbara, have demonstrated that robust, fully asynchronous probabilistic computers can be realized efficiently at scale by combining spintronic devices called stochastic magnetic tunnel junctions (sMTJs) with powerful field-programmable gate arrays (FPGAs).
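The division of labor in such a machine can be pictured with a small software emulation: the sMTJs supply each p-bit's randomness, while the FPGA computes the weighted inputs and applies asynchronous updates, one randomly chosen p-bit at a time. The network below, with made-up weights and biases, is a minimal sketch of that update loop, not the researchers' actual design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 3-p-bit network (coupling matrix J and biases h are invented for this sketch).
# In hardware, each p-bit's randomness would come from an sMTJ, and the weighted sums
# below would be evaluated by digital logic on the FPGA.
J = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0,  0.8],
              [-0.5, 0.8, 0.0]])
h = np.array([0.2, -0.1, 0.0])
m = rng.choice([-1, 1], size=3)  # random initial state

for _ in range(50_000):
    i = rng.integers(3)          # asynchronous: one randomly chosen p-bit updates at a time
    I_i = h[i] + J[i] @ m        # synaptic input computed from the current network state
    m[i] = 1 if np.tanh(I_i) + rng.uniform(-1, 1) > 0 else -1

print(m)  # one sample from the network's (approximately Boltzmann) distribution
```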

Previous sMTJ-based probabilistic computers could only implement recurrent neural networks, and a scheme for implementing feedforward neural networks has been eagerly awaited. Feedforward neural networks underpin most modern AI applications, so extending probabilistic computers in this direction is crucial for market success and for expanding AI’s computational capabilities, according to Professor Kerem Camsari, the principal investigator at the University of California, Santa Barbara.

In a recent breakthrough presented at the 2023 International Electron Devices Meeting (IEDM), the researchers made two significant advances. First, building on the Tohoku University team’s prior device-level work on stochastic magnetic tunnel junctions, they demonstrated the fastest p-bits to date at the circuit level using in-plane sMTJs. These p-bits fluctuate roughly every microsecond, approximately three orders of magnitude faster than previously reported.

Second, by enforcing an update order in the computing hardware and leveraging layer-by-layer parallelism, they showcased the basic operation of a Bayesian network as an example of a feedforward stochastic neural network.
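The idea behind layer-by-layer parallelism can be illustrated with a toy feedforward stochastic network: because each layer depends only on the layer before it, an entire layer of p-bits can be sampled at once, and inference reduces to repeated layer-by-layer (ancestral) sampling. The layer sizes, weights, and helper function below are hypothetical, chosen only to show the update order, and are not the network demonstrated in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 3-layer feedforward stochastic network (sizes and weights are illustrative).
# Each layer's p-bits depend only on the previous layer, so a whole layer can be
# sampled in parallel once its parent layer is fixed.
layer_sizes = [2, 3, 1]
weights = [rng.normal(scale=0.5, size=(layer_sizes[k + 1], layer_sizes[k]))
           for k in range(len(layer_sizes) - 1)]

def sample_network(evidence: np.ndarray) -> list[np.ndarray]:
    states = [evidence]                          # clamp the input layer to the evidence
    for W in weights:
        I = W @ states[-1]                       # inputs to the next layer's p-bits
        noise = rng.uniform(-1, 1, size=I.shape)
        states.append(np.where(np.tanh(I) + noise > 0, 1, -1))
    return states

# Estimate P(output = +1 | input) by repeated layer-by-layer sampling.
hits = sum(sample_network(np.array([1, -1]))[-1][0] == 1 for _ in range(20_000))
print(hits / 20_000)
```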

While the current demonstrations are at a small scale, these designs can be scaled up using CMOS-compatible Magnetic RAM (MRAM) technology. This would enable significant advancements in machine learning applications and potentially allow for the efficient hardware realization of deep/convolutional neural networks, according to Professor Shunsuke Fukami, the principal investigator at Tohoku University.
