Quick estimation of data compression and de-duplication for large storage systems
Abstract
Many new storage systems provide some form of data reduction. In a recent paper we investigated how compression and de-duplication can be combined in primary storage systems serving active data. In this paper we address the question a customer would ask before upgrading to a new, data-reduction-enabled storage server: how much storage savings would the new system offer for the data I have stored right now? We investigate methods to quickly estimate the storage savings potential of the customary data reduction methods used in large-scale storage systems: compression and full-file de-duplication. We show that the compression ratio achievable on a large storage system can be estimated precisely with, in the worst case, just a couple percent of the work required to compress every file in the system. We also show that a robust heuristic can discover full-file duplicates very quickly, with at most 4% error in the worst case. © 2011 IEEE.
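The sampling idea behind the compression estimate can be illustrated with a minimal Python sketch: compress only a small random sample of the files and extrapolate the corpus-wide ratio. This is an assumption-laden illustration (uniform file sampling, zlib as the compressor, equal-sized in-memory files); the paper's actual estimator and its error bounds are not given in the abstract.

```python
import random
import zlib

def estimate_compression_ratio(files, sample_fraction=0.02, seed=0):
    """Estimate the corpus-wide compressed-to-raw size ratio by
    compressing only a random sample of files (hypothetical helper)."""
    rng = random.Random(seed)
    k = max(1, int(len(files) * sample_fraction))
    sample = rng.sample(files, k)
    raw = sum(len(f) for f in sample)
    comp = sum(len(zlib.compress(f)) for f in sample)
    return comp / raw

# Synthetic corpus: half highly compressible, half incompressible bytes.
data_rng = random.Random(1)
corpus = [b"A" * 4096 for _ in range(500)] + \
         [bytes(data_rng.getrandbits(8) for _ in range(4096))
          for _ in range(500)]

est = estimate_compression_ratio(corpus, sample_fraction=0.02)
true = (sum(len(zlib.compress(f)) for f in corpus)
        / sum(len(f) for f in corpus))
print(f"estimated ratio {est:.3f} vs true {true:.3f}")
```

Here only 2% of the files are compressed, yet the sample ratio tracks the true ratio closely because the per-file ratios cluster around two modes; real corpora need a sampling design that accounts for file-size skew.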
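A common shape for quick full-file duplicate discovery is progressive filtering: bucket files by size, then by a digest of a small prefix, and only then confirm candidates with a full digest. The sketch below follows that generic pattern; the specific heuristic the paper evaluates is not described in the abstract, and the 4-KiB prefix and SHA-256 choices here are assumptions.

```python
import hashlib
from collections import defaultdict

def find_duplicate_groups(files):
    """Group byte-identical files from a {name: bytes} mapping.
    Hypothetical heuristic: size bucket -> prefix digest -> full digest."""
    by_size = defaultdict(list)
    for name, data in files.items():
        by_size[len(data)].append(name)

    groups = defaultdict(list)
    for names in by_size.values():
        if len(names) < 2:
            continue  # a unique size cannot have a duplicate
        for name in names:
            data = files[name]
            head = hashlib.sha256(data[:4096]).hexdigest()
            full = hashlib.sha256(data).hexdigest()
            groups[(len(data), head, full)].append(name)
    return [sorted(g) for g in groups.values() if len(g) > 1]

corpus = {"a": b"x" * 100, "b": b"x" * 100,
          "c": b"y" * 100, "d": b"z" * 50}
dups = find_duplicate_groups(corpus)
print(dups)  # → [['a', 'b']]
```

The savings come from the early filters: most files are eliminated by the size bucket or the cheap prefix digest, so the expensive full read is paid only for plausible duplicates.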