Publisher: Academy & Industry Research Collaboration Center (AIRCC)
Abstract: Object tracking is a promising technology that can be utilized in a wide variety of applications. It remains a challenging problem: tracking may fail when confronted with difficult scenarios such as similar background color, occlusion, illumination variation, or background clutter. A number of challenges remain open, and accuracy can be improved by processing additional sources of information. Depth information, in particular, can be exploited to boost the performance of traditional object tracking algorithms. The main thrust of this paper is therefore to integrate depth data with other features so as to improve tracking performance, disambiguate occlusions, and overcome other challenges such as illumination artifacts. To this end, we build on the basic structure shared by many trackers, which consists of three main components of the reference model: object modeling, object detection and localization, and model updating. Our system introduces major improvements, including a fourth component, occlusion handling, which uses the depth spatiograms of the target and the occluder to localize each of them. The proposed research develops an efficient and robust way to keep tracking the object throughout video sequences in the presence of significant appearance variations and severe occlusions. The proposed method is evaluated on the Princeton RGBD tracking dataset, and the obtained results demonstrate its effectiveness.