Programming Weights to Analog In-Memory Computing Cores by Direct Minimization of the Matrix-Vector Multiplication Error
Abstract
Accurate programming of non-volatile memory (NVM) devices in analog in-memory computing (AIMC) cores is critical to achieving high matrix-vector multiplication (MVM) accuracy during deep learning inference workloads. In this paper, we propose a novel programming approach that directly minimizes the MVM error by performing stochastic gradient descent optimization with synthetic random input data. The MVM error is significantly reduced compared to conventional unit-cell-by-unit-cell iterative programming. We demonstrate that the optimal hyperparameters in our method are agnostic to the weights being programmed, enabling large-scale deployment across multiple AIMC cores without further fine-tuning. It also eliminates the need for high-resolution analog-to-digital converters (ADCs) to resolve the small unit-cell conductances during programming. We experimentally validate this approach by demonstrating an inference accuracy increase of 1.26% on ResNet-9. The experiments were performed using phase-change memory (PCM)-based AIMC cores fabricated in 14-nm CMOS technology.
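As a rough illustration of the core idea, the following PyTorch sketch optimizes programmable weights by SGD so that the output of a (simulated) noisy analog MVM matches the ideal digital MVM on synthetic random inputs, instead of matching each unit-cell conductance individually. This is not the authors' hardware implementation: `analog_mvm`, the additive-noise model, and all hyperparameters are illustrative assumptions, and on real AIMC hardware the forward pass would be replaced by an actual on-chip MVM read-out.

```python
# Minimal sketch of MVM-error-driven weight programming (illustrative only).
import torch

def analog_mvm(weights, x, noise_std=0.02):
    """Stand-in for the noisy analog MVM performed by an AIMC core.

    Assumes a simple additive Gaussian conductance-noise model; the real
    hardware behavior is more complex (drift, nonlinearity, quantization).
    """
    noisy_w = weights + noise_std * torch.randn_like(weights)
    return x @ noisy_w.T

def program_weights(w_target, steps=500, batch=128, lr=0.1):
    """Program analog weights by directly minimizing the MVM error with SGD."""
    w_prog = w_target.clone().requires_grad_(True)  # programmable conductances
    opt = torch.optim.SGD([w_prog], lr=lr)
    for _ in range(steps):
        x = torch.randn(batch, w_target.shape[1])  # synthetic random inputs
        y_target = x @ w_target.T                  # ideal digital MVM result
        y_analog = analog_mvm(w_prog, x)           # noisy analog MVM result
        loss = torch.mean((y_analog - y_target) ** 2)  # MVM error to minimize
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w_prog.detach()

# Example usage with a random 64x128 target weight matrix.
w = torch.randn(64, 128)
w_programmed = program_weights(w)
```

Because the loss is the MVM output error itself, the optimization only ever needs the core's MVM outputs, which is consistent with the abstract's point that no high-resolution ADC read-out of individual unit-cell conductances is required.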