Journal: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Print ISSN: 2194-9042
Electronic ISSN: 2194-9050
Publication year: 2003
Volume: XXXIV-5/W10
Publisher: Copernicus Publications
Abstract: In recent years, because cameras have become inexpensive and ever more prevalent, there has been increasing interest in modeling human shape and motion from monocular video streams. This, however, is an inherently difficult task, both because the body is very complex and because, without markers or targets, the data that can be extracted from images is often incomplete, noisy and ambiguous. For example, correspondence-based techniques are error-prone for this kind of application and tend to produce many false matches. In this paper, we discuss the use of bundle-adjustment techniques to address these issues, and, more specifically, we demonstrate our ability to track 3D body motion from monocular video sequences. In earlier work, we developed a robust method for rigid object monocular tracking and modeling. It relies on regularly sampling the 3D model, projecting and tracking the samples in video sequences, and adjusting the motion and shape parameters to minimize a reprojection error. Here, we extend this approach to tracking the whole body represented by an articulated model. We introduce the appropriate degrees of freedom for all the relevant limbs and solve the resulting optimization problem. This scheme does not require a very precise initialization, and we demonstrate its validity using both synthetic data and real sequences of a moving subject captured with a single static video camera.
Keywords: Vision; Bundle Adjustment; Monocular Body Tracking
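
The abstract describes a pipeline of regularly sampling an articulated 3D model, projecting the samples into the image, and adjusting pose parameters to minimize a reprojection error. The following is a minimal sketch of that idea, not the authors' implementation: it uses a toy two-link limb, assumed pinhole intrinsics, and a robust least-squares solver; all names (`forward_kinematics`, limb lengths, the camera matrix `K`) are illustrative assumptions.

```python
# Minimal sketch of articulated pose recovery by reprojection-error minimization.
# Assumptions (not from the paper): a 2-DOF limb, fixed limb lengths, known intrinsics.
import numpy as np
from scipy.optimize import least_squares

K = np.array([[800.0, 0.0, 320.0],      # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def forward_kinematics(theta, n_samples=10):
    """Regularly sample 3D points along a toy 2-link limb articulated by angles theta."""
    shoulder, elbow = theta
    l1, l2 = 0.30, 0.25                           # assumed limb lengths (metres)
    base = np.array([0.0, 0.0, 2.0])              # limb root, 2 m in front of the camera
    d1 = np.array([np.cos(shoulder), np.sin(shoulder), 0.0])
    d2 = np.array([np.cos(shoulder + elbow), np.sin(shoulder + elbow), 0.0])
    joint = base + l1 * d1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    upper = base + t * (joint - base)             # samples on the first segment
    lower = joint + t * (l2 * d2)                 # samples on the second segment
    return np.vstack([upper, lower])

def project(points_3d):
    """Pinhole projection of camera-frame 3D points to pixel coordinates."""
    p = (K @ points_3d.T).T
    return p[:, :2] / p[:, 2:3]

def reprojection_residuals(theta, observed_2d):
    """Residual vector whose squared norm is the reprojection error."""
    return (project(forward_kinematics(theta)) - observed_2d).ravel()

# Synthetic experiment: generate noisy 2D tracks from a "true" pose, then recover the
# pose from a deliberately rough initialization, echoing the coarse-initialization claim.
rng = np.random.default_rng(0)
theta_true = np.array([0.6, -0.8])
observed = project(forward_kinematics(theta_true)) + rng.normal(0.0, 1.0, (20, 2))

theta_init = np.array([0.2, -0.2])                # imprecise initial guess
result = least_squares(reprojection_residuals, theta_init, args=(observed,),
                       loss="huber", f_scale=2.0)  # robust loss against false matches
print("recovered joint angles:", result.x)
```

In the sketch, the robust (Huber) loss stands in for the paper's robustness to false matches; the real system tracks samples of a full body model across a video sequence and optimizes many more degrees of freedom jointly.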