Fulfilling brain-inspired hyperdimensional computing with in-memory computing
Scientists around the world are inspired by the brain and strive to mimic its abilities in the development of technology. Our research team at IBM Research Europe in Zurich shares this fascination and took inspiration from the cerebral attributes of neuronal circuits like hyperdimensionality to create a novel in-memory hyperdimensional computing system.
The most efficient computer possible already exists. And no, it’s not a Mac or a PC — it’s the human brain. When computers were originally invented, they were designed around a model of the brain; at one time, many even referred to them as electronic brains. Indeed, it is impressive how today’s computers can emulate single brain functions, such as learning and identifying visual objects or recognizing text and speech patterns. However, the evolution of computers has a long way to go to match the remarkable capacity of the human brain, which learns and adapts without needing to be programmed or updated, has intricately connected memory, doesn’t easily crash, and works in real time.
What’s more, computers are energy guzzlers. Our brain, with all its magnificent capabilities, operates on less than 20 watts while attending to a complex thought process. In comparison, a simple task like writing this blog post on a laptop requires about 80 watts. In terms of energy efficiency, our brain outperforms even state-of-the-art supercomputers by several orders of magnitude, consuming only a tiny fraction of their power.
Over the last decade, there have been significant advances in neurophysiology and brain theory, to the extent that we now know more than ever about how the brain works. Neuroscientists have discovered that the mind operates by evaluating the state of thousands of synaptic connections at a time, computing with patterns of neural activity that are not readily associated with numbers. Drawing inspiration from this cerebral functionality, our research team set out to explore ways to move computing away from the conventional digital paradigm we are used to.
Specifically, we focused on hyperdimensional computing (HDC) [1], an emerging computational paradigm that aims to mimic attributes of the human brain’s neuronal circuits such as hyperdimensionality, fully distributed holographic representation, and (pseudo)randomness. What we discovered is that an HDC framework functions exceptionally well within an in-memory computing architecture. In fact, based on the results of our experiments in training and classifying datasets, HDC is a killer application for in-memory computing in many respects.
We believe our research, which is now being featured in the peer-reviewed journal Nature Electronics, will play an essential role in the advancement of next-generation AI hardware.
In contrast to conventional von Neumann systems, which are digital and process vectors only 32 or 64 bits long, we wanted to create a computing paradigm that could potentially function more like a holistic network of neurons. Hence our deep interest in HDC. The essence of HDC is the observation that key aspects of human memory, perception, and cognition can be explained by the mathematical properties of hyperdimensional spaces composed of hyperdimensional vectors, or hypervectors.
Put more concretely, HDC models the neural activity patterns of the brain’s circuits, operating on a rich algebra that defines rules to build, bind, and bundle hypervectors: D-dimensional, (pseudo)random vectors with independent and identically distributed components that act as fully distributed, holographic representations. HDC represents data using bits on the order of thousands and spreads the associated information evenly across all bits, so that every bit is equally significant. In such a computational framework, hypervectors representing different symbolic entities can be combined into new, unique hypervectors that represent composite entities, using well-defined vector-space operations. These vector compositions create a powerful system of computing that can be used to perform, in addition to classical tasks, sophisticated cognitive tasks such as object detection, language and object recognition, voice and video classification, time-series analysis, text categorization, and analytical reasoning.
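To make this algebra concrete, here is a minimal, self-contained NumPy sketch of building, binding, and bundling binary hypervectors. The 10,000-bit dimensionality and the XOR/majority operators are common choices for binary HDC in the literature, not the specific configuration of our hardware:

```python
import numpy as np

D = 10_000                      # hypervectors are thousands of bits long
rng = np.random.default_rng(42)

def random_hv():
    # A (pseudo)random binary hypervector with i.i.d. components
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    # Binding (component-wise XOR): the result is dissimilar to both inputs
    return a ^ b

def bundle(hvs):
    # Bundling (component-wise majority): the result is similar to every input
    return (np.sum(hvs, axis=0, dtype=np.int32) * 2 > len(hvs)).astype(np.uint8)

def similarity(a, b):
    # Normalized Hamming similarity: ~0.5 for unrelated vectors, 1.0 if identical
    return 1.0 - np.count_nonzero(a ^ b) / D

# Compose a record from role-filler pairs, e.g. {color: red, shape: round, size: big}
color, red, shape, round_, size, big = (random_hv() for _ in range(6))
record = bundle([bind(color, red), bind(shape, round_), bind(size, big)])

# XOR is its own inverse, so binding the record with a role hypervector
# recovers a noisy but recognizable copy of that role's filler
print(similarity(bind(record, color), red))  # well above 0.5: "red" is recoverable
print(similarity(bind(record, color), big))  # about 0.5: unrelated
```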
There are many advantages to computing with hypervectors. For one, training in an HDC architecture is transparent, fast, and efficient: object categories are learned in a single pass over the available data, as the sketch below illustrates. This beats other brain-inspired approaches such as neural networks, which require a large number of training iterations. Moreover, this computing paradigm is memory-centric with parallel operations, and it is extremely robust against noise, variations, and faulty components in a computing platform.
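As an illustration of that one-shot behavior, the following sketch bundles a few encoded examples per class into a prototype hypervector in a single pass, then classifies a query by nearest-prototype lookup. Random hypervectors stand in for encoded data here, and the class labels are purely illustrative; a real application would substitute the output of an application-specific encoder:

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(7)

def random_hv():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bundle(hvs):
    # Component-wise majority vote over an odd number of hypervectors
    return (np.sum(hvs, axis=0, dtype=np.int32) * 2 > len(hvs)).astype(np.uint8)

def similarity(a, b):
    # Normalized Hamming similarity (~0.5 for unrelated, 1.0 for identical)
    return 1.0 - np.count_nonzero(a ^ b) / D

# One-shot "training": a single pass bundles each class's encoded examples
# into a prototype hypervector; no iterative weight updates are involved.
examples = {label: [random_hv() for _ in range(5)] for label in ("de", "en", "fr")}
prototypes = {label: bundle(hvs) for label, hvs in examples.items()}

# Inference is an associative-memory lookup: the most similar prototype wins.
query = examples["en"][0]
prediction = max(prototypes, key=lambda lab: similarity(prototypes[lab], query))
print(prediction)  # "en"
```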
Indeed, HDC is the brainiest of approaches. However, it still needs an efficient processor that can fully support it. Hence, the ongoing research effort focuses both on the algorithmic front and on building efficient computing substrates for HDC.
A key attribute of HDC, in terms of hardware realization, is its robustness to the imperfections associated with the computational substrates on which it is implemented. HDC also involves manipulation and comparison of large patterns within memory when used for machine learning tasks such as learning and classification. These two attributes make HDC particularly well-suited for emerging non-von Neumann computing paradigms such as in-memory computing where the physical attributes of nanoscale memory devices are exploited to perform computation in place.
In our research paper, we present a complete in-memory HDC system consisting of two main components: an HD encoder and an associative memory. The core computations are performed in-memory with logical and dot-product operations on memristive devices. Due to the inherent robustness of HDC, it was possible to approximate the mathematical operations associated with HDC to make them suitable for hardware implementation, and to use analog in-memory computing without compromising the accuracy of the output.
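The following toy simulation, which is our own simplification rather than the paper's device model, conveys why that associative-memory search tolerates analog non-idealities: each prototype-query dot product is perturbed with Gaussian read noise, yet because the information is spread evenly across thousands of components, the correct class still wins. The noise level and bit-flip rate are illustrative assumptions:

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(0)

# Class prototypes "stored" in the crossbar (random stand-ins here)
prototypes = {label: rng.integers(0, 2, size=D, dtype=np.uint8) for label in "ABC"}

def noisy_inmemory_search(query, noise_std=0.02):
    # Each prototype-query dot product models one analog in-memory read;
    # Gaussian noise stands in for device-level variability (2% is our guess).
    scores = {}
    for label, proto in prototypes.items():
        ideal = np.dot(proto.astype(np.int32), query.astype(np.int32)) / D
        scores[label] = ideal + rng.normal(0.0, noise_std)
    return max(scores, key=scores.get)  # winner-take-all readout

# Query: a corrupted copy of prototype "B" with 20% of its bits flipped
query = prototypes["B"].copy()
query[rng.random(D) < 0.20] ^= 1
print(noisy_inmemory_search(query))  # "B", despite bit flips and read noise
```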
Using 760,000 phase-change memory devices performing analog in-memory computing, we experimentally demonstrate that such an HDC platform can achieve more than sixfold energy savings compared with optimized digital systems based on CMOS technology. Moreover, this first-of-its-kind prototype system is programmed to support different hypervector representations, dimensionalities, and numbers of input symbols and output classes to accommodate a variety of applications. In testing various in-memory logic operations, our architecture also attained comparable accuracy levels on three different learning tasks: language classification, news classification, and hand-gesture recognition from electromyographic signals.
What distinguishes our work from other similar research is that we perform a complete end-to-end study, including the synthesis of the digital peripheral submodules using 65 nm CMOS technology. Our study clearly shows the efficacy and potential of in-memory computing for this exciting new field.
This work was performed in collaboration with ETH Zürich and was supported in part by the European Research Council under grant no. 682675 and in part by the European Union’s Horizon 2020 Research and Innovation Program through the project MNEMOSENE under grant no. 780215.
Also contributing to the research: Manuel Le Gallo, Abbas Rahimi, Giovanni Cherubini, Luca Benini, Abu Sebastian
References
1. Geethan Karunaratne, Manuel Le Gallo, Giovanni Cherubini, Luca Benini, Abbas Rahimi, and Abu Sebastian, “In-memory hyperdimensional computing,” Nature Electronics (2020). DOI: 10.1038/s41928-020-0410-3