Abstract

This paper proposes a method for reconstructing the dense structure of scenes from visual or depth sensors that provably converges in finite time. We represent the scene as a superlevel set of a function residing in a potentially infinite-dimensional function space, and the observer state is given by the parameters of that function. Preliminary experiments show that the observer exhibits convergence behaviour across a variety of function spaces, both in simulation and on real light-field camera data.