Adaptive synthesis in progressive retrieval of audio-visual data
Abstract
With the advent of pervasive computing, a growing diversity of client devices is gaining access to audio-visual content. The increased variability in client processing power, storage, bandwidth, and server load requires adaptive solutions for image, video, and audio retrieval. Progressive retrieval is one prominent mode of access, in which views at different resolutions are incrementally retrieved and refined over time. In this paper, we present a new framework for adaptively partitioning the synthesis operations in the progressive retrieval of audio-visual signals. In this framework, the server and client cooperate in synthesizing the views so as to best utilize the available processing power and bandwidth. We provide experimental results that demonstrate a significant reduction in latency in the progressive retrieval of images under varying client, server, and network conditions.
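To make the partitioning idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of how a server might choose how many synthesis levels to perform itself before transmitting. It assumes a simple multiresolution model in which synthesizing one more level server-side quadruples the transmitted data (2x per image dimension) but spares the client that work; the function name, the cost model, and all rate parameters are hypothetical.

```python
def best_partition(levels, base_size, bandwidth, server_rate, client_rate):
    """Choose how many synthesis levels the server performs (hypothetical model).

    levels      -- total synthesis levels needed to reach the requested view
    base_size   -- bytes of the coarsest representation
    bandwidth   -- channel throughput, bytes/second
    server_rate -- server synthesis throughput, bytes/second (reflects load)
    client_rate -- client synthesis throughput, bytes/second

    Returns (p, latency): p levels synthesized at the server, the rest
    at the client, minimizing the estimated end-to-end latency.
    """
    best = None
    for p in range(levels + 1):
        size = base_size * 4 ** p  # data transmitted after p server-side levels
        server_work = sum(base_size * 4 ** i for i in range(1, p + 1))
        client_work = sum(base_size * 4 ** i for i in range(p + 1, levels + 1))
        latency = (server_work / server_rate
                   + size / bandwidth
                   + client_work / client_rate)
        if best is None or latency < best[1]:
            best = (p, latency)
    return best
```

Under this toy model, a slow channel with a capable client pushes the split toward client-side synthesis (small coarse transfers), while a fast channel with a weak client pushes it toward server-side synthesis, matching the adaptive behavior the framework targets.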