This paper develops a new approach to video denoising in which motion estimation/compensation, temporal filtering, and spatial smoothing are all performed in the wavelet domain. The key to making this possible is a shift-invariant, overcomplete wavelet transform, under which motion between image frames manifests as an equivalent motion of coefficients in the wavelet domain. Our focus is on minimizing spatial blurring: we restrict processing to temporal filtering when motion estimates are reliable, and spatially shrink only insignificant coefficients when the motion is unreliable. Tests on standard video sequences show that our method achieves PSNR comparable to the state of the art in the literature, with considerably better preservation of fine spatial detail.
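
The sketch below is a minimal illustration of this kind of pipeline, not the authors' exact algorithm: it uses PyWavelets' undecimated (shift-invariant) 2-D transform, a single global integer motion vector, and a reliability mask, all of which are assumed inputs introduced here for demonstration.

```python
# A minimal sketch (not the paper's exact method) of wavelet-domain video
# denoising with a shift-invariant transform, using PyWavelets.
import numpy as np
import pywt


def denoise_frame(cur, prev, motion, reliable, wavelet="db2", thresh=10.0):
    """Denoise frame `cur` given the previous (already denoised) frame `prev`.

    cur, prev : 2-D float arrays with even side lengths (single-level SWT).
    motion    : (dy, dx) integer motion of `prev` toward `cur`
                (global here for simplicity; a real system would use a
                per-block or per-pixel motion field).
    reliable  : boolean array, True where the motion estimate is trusted.
    thresh    : assumed shrinkage threshold for unreliable regions.
    """
    # Undecimated (shift-invariant) single-level 2-D wavelet transforms.
    (cA_c, (cH_c, cV_c, cD_c)), = pywt.swt2(cur, wavelet, level=1)
    (cA_p, (cH_p, cV_p, cD_p)), = pywt.swt2(prev, wavelet, level=1)

    dy, dx = motion
    out = []
    for band, (c_cur, c_prev) in enumerate(zip((cA_c, cH_c, cV_c, cD_c),
                                               (cA_p, cH_p, cV_p, cD_p))):
        # Shift-invariance: integer frame motion appears as the same shift of
        # the coefficients, so motion compensation is a coefficient shift.
        c_comp = np.roll(np.roll(c_prev, dy, axis=0), dx, axis=1)

        # Temporal filtering (two-frame average) where motion is reliable.
        temporal = 0.5 * (c_cur + c_comp)

        # Where motion is unreliable: keep the approximation band unchanged,
        # and zero only insignificant detail coefficients (hard threshold),
        # so significant structure is not spatially blurred.
        if band == 0:
            spatial = c_cur
        else:
            spatial = pywt.threshold(c_cur, thresh, mode="hard")

        out.append(np.where(reliable, temporal, spatial))

    cA, cH, cV, cD = out
    return pywt.iswt2([(cA, (cH, cV, cD))], wavelet)
```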