Precision of synaptic weights programmed in phase-change memory devices for deep learning inference
Abstract
The precision with which conductance states can be programmed and maintained over time is central to the operation of analog resistance-based memory devices, such as phase-change memory (PCM), in in-memory computing applications including deep learning and scientific computing. Iterative programming with closed-loop feedback is the most common approach to programming an array of devices to the desired conductance values stipulated by the application. In this work, we analytically derive the precision associated with this iterative programming scheme and show that it is fundamentally limited by the read noise. The estimated programming noise quantitatively matches that measured experimentally on arrays of more than 1,000 PCM devices incorporating two types of doped GST phase-change materials. We further demonstrate that the conductance-drift-driven divergence of the programmed conductance states depends on the time of feedback in the iterative programming. Moreover, we study the impact of the inaccuracy associated with synaptic weight storage on deep learning inference. We demonstrate significant improvements in accuracy retention on CIFAR-10, CIFAR-100, and PTB benchmarks by tuning the time of feedback when programming weights onto PCM-based DNN inference hardware.
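The closed-loop program-and-verify scheme described above can be illustrated with a minimal simulation sketch. The code below is not the authors' implementation; all parameters (noise magnitudes, tolerance, iteration cap) are hypothetical placeholders chosen for illustration. It shows why the achievable precision is bounded by the read noise: the loop terminates based on a noisy read-out, so the true conductance error inherits the read-noise statistics even when write pulses could otherwise be corrected indefinitely.

```python
import numpy as np

rng = np.random.default_rng(0)

def iterative_program(g_target, read_noise_std=0.5, write_noise_std=1.0,
                      tol=1.0, max_iters=20):
    """Closed-loop (program-and-verify) sketch: apply corrective pulses
    until the noisy read-out falls within tolerance of the target.
    All units and parameter values are hypothetical."""
    g = 0.0  # true device conductance (e.g. in uS), assumed initial state
    for _ in range(max_iters):
        g_read = g + rng.normal(0.0, read_noise_std)  # verify (read) step
        err = g_target - g_read
        if abs(err) < tol:
            break  # loop stops on the *noisy* read, not the true value
        # corrective write pulse, itself stochastic
        g += err + rng.normal(0.0, write_noise_std)
    return g

# Program 1,000 simulated devices to the same target conductance.
targets = np.full(1000, 10.0)
programmed = np.array([iterative_program(t) for t in targets])

# Residual spread of the true conductances is bounded below by the
# read noise, since the stopping criterion is evaluated on noisy reads.
print(programmed.mean(), programmed.std())
```

Sweeping `read_noise_std` in this sketch shows the programmed-conductance spread tracking the read noise rather than shrinking with more iterations, which is the qualitative behavior the analytical derivation in this work formalizes.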