A sparse deep feature representation for object detection from wearable cameras
Abstract
We propose a novel sparse feature representation for the Faster R-CNN framework and apply it to object detection from wearable cameras. We develop two main ideas, sparse convolution and sparse ROI pooling, to reduce both model complexity and computational cost. Sparse convolution approximates a full kernel by skipping weights in the kernel, while sparse ROI pooling reduces feature dimensionality at the ROI pooling layer by skipping odd-indexed or even-indexed features. We demonstrate the effectiveness of our approach on two challenging body-camera datasets, including realistic police-generated clips. Our approach reduces model size by a factor of more than 10× and speeds up computation by about 2×, with little loss in detection accuracy compared with a VGG16-based baseline detector.
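The sketch below illustrates the two ideas as they are described in the abstract; it is not the authors' implementation. The checkerboard-style kernel mask is an assumed reading of "skipping weights in the kernel", and the SparseRoIPool module simply drops odd- or even-indexed pooled features before the fully connected head. All class names, the sparsity patterns, and the toy shapes are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.ops as ops


class MaskedConv2d(nn.Conv2d):
    """Convolution whose kernel weights are zeroed at fixed positions
    (one plausible reading of 'skipping weights in the kernel')."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        mask = torch.zeros_like(self.weight)
        mask[..., ::2, ::2] = 1.0  # assumed pattern: keep every other weight in each direction
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Apply the fixed mask so the skipped weights never contribute.
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)


class SparseRoIPool(nn.Module):
    """ROI pooling followed by dropping odd- (or even-) indexed features,
    halving what the downstream fully connected layers consume."""
    def __init__(self, output_size=7, spatial_scale=1.0, keep_even=True):
        super().__init__()
        self.pool = ops.RoIPool(output_size, spatial_scale)
        self.keep_even = keep_even

    def forward(self, features, rois):
        pooled = self.pool(features, rois)   # (num_rois, C, 7, 7)
        flat = pooled.flatten(start_dim=1)   # (num_rois, C*7*7)
        start = 0 if self.keep_even else 1
        return flat[:, start::2]             # skip odd- or even-indexed features


# Toy usage on random data.
feats = MaskedConv2d(3, 512, kernel_size=3, padding=1)(torch.randn(1, 3, 600, 800))
rois = torch.tensor([[0.0, 100.0, 100.0, 400.0, 400.0]])  # (batch_idx, x1, y1, x2, y2)
print(SparseRoIPool()(feats, rois).shape)  # torch.Size([1, 12544]), half of 512*7*7
```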