Efficient non-iterative domain adaptation of pedestrian detectors to video scenes

Kyaw Kyaw Htike and David Hogg, “Efficient non-iterative domain adaptation of pedestrian detectors to video scenes”, International Conference on Pattern Recognition (ICPR), IEEE, 2014, Stockholm, Sweden. (Oral presentation; acceptance rate = 14% out of 1409 submissions). DOI: 10.1109/ICPR.2014.123. [ISI- and Scopus-indexed conference proceeding]


Pedestrian detection is an essential step in many important applications of computer vision. Most detectors require manually annotated ground truth for training, the collection of which is labor-intensive and time-consuming. Generally, this training data consists of representative views of pedestrians captured from a variety of scenes. Unsurprisingly, the performance of a detector on a new scene can be improved by tailoring the detector to the specific viewpoint, background and imaging conditions of that scene. Unfortunately, for many applications it is not practical to acquire such scene-specific training data by hand. In this paper, we propose a novel algorithm to automatically adapt and tune a generic pedestrian detector to a specific scene whose data distribution may differ from that of the dataset on which the detector was originally trained. Most state-of-the-art approaches are inefficient, require a manually set number of iterations to converge, and depend on some form of human intervention. Our algorithm is a step towards overcoming these problems and, although simple to implement, exceeds state-of-the-art performance.
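To illustrate the general idea of non-iterative scene adaptation (this is not the paper's algorithm, just a hedged toy sketch): a generic detector is run once over the target scene, its most confident positive and negative responses are harvested as pseudo-labels, and a scene-specific classifier is fitted in a single pass, with no iterative retraining loop. All function names, features and thresholds below are illustrative assumptions.

```python
# Hypothetical sketch of one-pass ("non-iterative") scene adaptation.
# A toy linear score stands in for a generic pedestrian detector, and a
# nearest-centroid classifier stands in for the scene-specific model.

def generic_score(x):
    """Stand-in for a generic detector's confidence score on 2-D features."""
    return x[0] - x[1]

def adapt(samples, hi=1.0, lo=-1.0):
    """Single pass over the target scene: harvest confident positives and
    negatives, then fit a nearest-centroid scene-specific classifier."""
    pos = [x for x in samples if generic_score(x) >= hi]
    neg = [x for x in samples if generic_score(x) <= lo]

    def centroid(pts):
        n = len(pts)
        return tuple(sum(p[i] for p in pts) / n for i in range(len(pts[0])))

    cp, cn = centroid(pos), centroid(neg)

    def scene_classify(x):
        dp = sum((a - b) ** 2 for a, b in zip(x, cp))
        dn = sum((a - b) ** 2 for a, b in zip(x, cn))
        return dp < dn  # True -> pedestrian under the adapted model

    return scene_classify

# Ambiguous samples (score between lo and hi) are never used as pseudo-labels,
# but the adapted classifier can still label them afterwards.
scene = [(2.0, 0.1), (1.8, 0.3), (0.1, 2.0), (0.2, 1.9), (1.0, 0.8)]
clf = adapt(scene)
```

The key point mirrored from the abstract is that adaptation here requires no manually chosen iteration count and no human labeling of the target scene.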