Abstract
We study a framework for the denoising of videos jointly corrupted by random noise and fixed-pattern noise. Our approach is based on motion-compensated 3-D spatiotemporal volumes, i.e., sequences of 2-D square patches extracted along the motion trajectories of the noisy video. The filtering proceeds in two steps: similar 2-D image blocks are first grouped into 3-D data arrays, which we call "groups", and a 3-D transform is applied to each group; the coefficients of the resulting 3-D volume spectrum are then shrunk using an adaptive 3-D threshold array. This array depends on the particular motion trajectory of the volume, on the individual power spectral densities of the random and fixed-pattern noise, and on the noise variances, which are adaptively estimated in the transform domain. Simulation results are obtained using the DST, DCT, and Hadamard transforms.
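To make the shrinkage step concrete, the following is a minimal sketch (not the authors' implementation) of hard-thresholding a 3-D volume spectrum with a per-coefficient threshold array, assuming the volume has already been built by grouping 2-D blocks along a motion trajectory. The names shrink_volume, psd_rnd, psd_fpn, sigma_rnd, and sigma_fpn are hypothetical, and known noise variances are assumed in place of the adaptive transform-domain estimation described above.

```python
import numpy as np
from scipy.fft import dctn, idctn

def shrink_volume(volume, psd_rnd, psd_fpn, sigma_rnd, sigma_fpn, k=2.7):
    """Hard-threshold the 3-D DCT spectrum of a spatiotemporal volume.

    volume   : (T, N, N) stack of 2-D blocks along one motion trajectory
    psd_rnd  : (T, N, N) power spectral density of the random noise
    psd_fpn  : (T, N, N) power spectral density of the fixed-pattern noise
    sigma_*  : noise standard deviations (assumed known in this sketch)
    k        : thresholding multiplier
    """
    spectrum = dctn(volume, norm="ortho")             # 3-D transform of the group
    # Per-coefficient threshold combining both noise components.
    tau = k * np.sqrt(sigma_rnd**2 * psd_rnd + sigma_fpn**2 * psd_fpn)
    spectrum[np.abs(spectrum) < tau] = 0.0            # shrink small coefficients
    return idctn(spectrum, norm="ortho")              # back to the pixel domain
```

A separable DCT is used here purely for illustration; the same shrinkage structure applies with the DST or Hadamard transforms mentioned above.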