Publisher: The Institute of Image Information and Television Engineers
Abstract: In this paper, we propose a novel method for generating arbitrarily focused disparity images from multiple differently focused images. Under the assumption that scene depth changes stepwise, we derive a reconstruction formula relating the desired arbitrarily focused disparity image to the multiple acquired images; the arbitrarily focused disparity image can then be reconstructed by applying this formula iteratively. We show that our method can reconstruct arbitrarily focused disparity images of natural scenes. The method requires only the point spread functions of the acquired images for reconstruction; it does not require any spatial segmentation.
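To make the abstract's idea of iterative reconstruction from differently focused images concrete, the sketch below shows a generic Landweber-style iteration that estimates a single image consistent with several observations blurred by known point spread functions. This is only an illustrative analogue under assumed Gaussian PSFs, a chosen step size, and a synthetic scene; it is not the authors' actual formula, which additionally models disparity and the stepwise depth assumption.

```python
# Illustrative sketch only: generic iterative reconstruction of an image from
# several differently blurred observations with known PSFs. The PSFs, step
# size, and iteration count are assumptions for demonstration; this is NOT
# the paper's reconstruction formula (no disparity or stepwise-depth model).
import numpy as np
from scipy.signal import fftconvolve


def gaussian_psf(size, sigma):
    """Normalized 2-D Gaussian point spread function (assumed PSF model)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()


def blur(img, psf):
    """Convolve an image with a PSF, keeping the original image size."""
    return fftconvolve(img, psf, mode="same")


def reconstruct(observations, psfs, n_iter=50, step=0.5):
    """Landweber iteration: f <- f + step * sum_k h_k^T (g_k - h_k * f),
    where h_k^T is convolution with the flipped PSF (the adjoint operator)."""
    f = np.mean(observations, axis=0)  # crude initial estimate
    for _ in range(n_iter):
        update = np.zeros_like(f)
        for g, h in zip(observations, psfs):
            residual = g - blur(f, h)                 # mismatch at this focus setting
            update += blur(residual, h[::-1, ::-1])   # back-project via flipped PSF
        f = f + step * update / len(observations)
    return f


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.random((64, 64))                              # synthetic "scene"
    psfs = [gaussian_psf(15, s) for s in (1.0, 2.0, 3.0)]     # assumed, known PSFs
    observations = [blur(scene, h) for h in psfs]             # differently focused images
    estimate = reconstruct(observations, psfs)
    print("RMSE:", np.sqrt(np.mean((estimate - scene) ** 2)))
```

As in the abstract, the only scene-dependent knowledge this kind of iteration consumes is the set of point spread functions of the acquired images; no spatial segmentation of the scene is performed.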