Achieving Service Rate Objectives with Decay Usage Scheduling
Abstract
Decay usage scheduling is a priority- and usage-based approach to CPU allocation in which preference is given to processes that have consumed little CPU in the recent past. Although employed in many systems (e.g., UNIX™), decay usage scheduling has been poorly understood; as a result, it has been difficult to use such schedulers to achieve service rate objectives, such as ensuring that users receive a specific fraction of the CPU. In this paper, we develop an analytic model for decay usage schedulers running compute-bound workloads, such as those found in many engineering and scientific environments; the model is validated against measurements of a UNIX system. We use this model in two ways. First, we study how to parameterize decay usage schedulers to achieve a wide range of service rates. Doing so requires a fine granularity of control (the extent to which service rates can be allocated in small increments) and a large range of control (the ability to have large differences in the service rates). Our results show that, for a fixed representation of process priorities (e.g., eight bits, as used in many UNIX systems), a larger range of control makes the granularity of control coarser, and a finer granularity of control decreases the range of control. A second use of our analytic model is to construct a low-overhead algorithm for achieving service rate objectives. Existing approaches require adding a feedback loop to the scheduler. We avoid this overhead by exploiting the feedback already present in decay usage schedulers. Using both empirical and analytical techniques, we demonstrate that our algorithm is effective and that it provides fairness when the system is over- or under-loaded. © 1993 IEEE
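To make the mechanism concrete, the following is a minimal illustrative sketch of a decay usage scheduler, not the paper's exact model or parameterization. It follows the BSD UNIX convention the abstract alludes to: each process accumulates CPU usage, priority grows with recent usage (lower numeric priority is preferred), and recorded usage is periodically multiplied by a decay factor. The class names, the decay factor of 0.5, the decay interval of 10 quanta, and the divisor `k` are all assumptions chosen for illustration.

```python
class Process:
    """A runnable process with a base priority and decayed CPU usage."""

    def __init__(self, name, base_priority=0):
        self.name = name
        self.base = base_priority
        self.usage = 0.0  # CPU consumed in the recent past (decayed)

    def priority(self, k=2.0):
        # Lower value means higher preference; penalty grows with
        # recent CPU consumption, as in decay usage scheduling.
        return self.base + self.usage / k


def schedule(procs, ticks, decay=0.5, decay_interval=10):
    """Run `ticks` quanta of a decay usage scheduler.

    Each quantum goes to the process with the best (lowest) priority;
    every `decay_interval` quanta, all recorded usages are decayed.
    Returns the number of quanta each process received.
    """
    received = {p.name: 0 for p in procs}
    for t in range(ticks):
        chosen = min(procs, key=lambda p: p.priority())
        chosen.usage += 1.0           # charge one quantum of CPU
        received[chosen.name] += 1
        if (t + 1) % decay_interval == 0:
            for p in procs:
                p.usage *= decay      # forget old usage geometrically
    return received


if __name__ == "__main__":
    # Two compute-bound processes; A's lower base priority should earn
    # it a larger service rate, the kind of differential the paper's
    # model quantifies.
    procs = [Process("A", base_priority=0), Process("B", base_priority=4)]
    print(schedule(procs, 100))
```

Note how the feedback the paper exploits is visible here: a process that runs accumulates usage, which worsens its priority and lets others catch up, so the scheduler self-regulates toward usage-dependent service rates without an explicit extra feedback loop.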