Yoram Bresler, Professor of Electrical and Computer Engineering and Bioengineering
University of Illinois at Urbana-Champaign
The sparsity of signals and images in a transform domain or dictionary has been exploited in many applications in signal and image processing, including compression, denoising, and notably compressed sensing, which enables accurate reconstruction from undersampled data. These applications have used sparsifying transforms such as the DCT, wavelets, curvelets, and finite differences, all of which have a fixed, analytical form. Recently, sparse representations adapted directly to the data have become popular, especially in applications such as image denoising and inpainting.
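The idea of transform-domain sparsity can be illustrated with a small sketch (a hypothetical example, not from the talk): a smooth image patch, while dense in the pixel domain, concentrates almost all of its energy in a handful of 2D DCT coefficients.

```python
import numpy as np

# Hypothetical illustration of transform-domain sparsity: a smooth 8x8
# patch is approximately sparse under the 2D DCT -- a few coefficients
# carry nearly all of its energy.

def dct_matrix(N):
    """Orthonormal DCT-II matrix of size N x N."""
    k = np.arange(N)[:, None]
    j = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (j + 0.5) * k / N)
    C[0] /= np.sqrt(2.0)  # scale the DC row for orthonormality
    return C

N = 8
C = dct_matrix(N)

# A smooth synthetic patch: a low-frequency ramp plus gentle curvature.
x = np.linspace(0.0, 1.0, N)
patch = np.outer(x, x) + 0.5 * x[None, :]

coeffs = C @ patch @ C.T  # separable 2D DCT of the patch

energy = np.sort((coeffs ** 2).ravel())[::-1]
frac = energy[:4].sum() / energy.sum()
print(f"energy in 4 largest of 64 DCT coefficients: {frac:.4f}")
```

The same fixed transform sparsifies broad classes of natural images well but is not tailored to any particular dataset, which is what motivates learning the transform instead.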
We describe two contributions to this new framework. First, we present a novel approach that simultaneously learns the dictionary and reconstructs the image from highly undersampled data. Numerical experiments on magnetic resonance images of several anatomies demonstrate dramatic improvements, on the order of 4-18 dB in reconstruction error, and a doubling of the acceptable undersampling factor compared to previous compressed sensing methods. Second, we describe a new formulation for data-driven learning of sparsifying transforms. While there has been extensive research on learning synthesis dictionaries, and some recent work on learning analysis dictionaries, the idea of learning sparsifying transforms has received no attention. We show the superiority of our learned transforms over analytical sparsifying transforms such as the DCT for signal and image representation. We also show promising performance in image denoising using the learned transforms, which compares favorably with approaches based on learned synthesis dictionaries, at orders of magnitude lower computational cost.
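The flavor of transform learning can be sketched as follows. This is a hypothetical, heavily simplified version (restricted to an orthonormal transform W, which admits a closed-form update via the orthogonal Procrustes problem), not the formulation presented in the talk: alternate between sparse coding by hard thresholding and a transform update from an SVD.

```python
import numpy as np

# Hypothetical minimal sketch of data-driven transform learning, simplified
# to an orthonormal transform W. Alternate between:
#   (1) sparse coding: Z keeps the s largest-magnitude entries of W X per column,
#   (2) transform update: orthogonal Procrustes, W = U V^T from SVD(Z X^T).

def hard_threshold(A, s):
    """Keep the s largest-magnitude entries in each column, zero the rest."""
    Z = np.zeros_like(A)
    idx = np.argsort(-np.abs(A), axis=0)[:s]
    np.put_along_axis(Z, idx, np.take_along_axis(A, idx, axis=0), axis=0)
    return Z

def learn_transform(X, s, iters=30, seed=0):
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    W = np.linalg.qr(rng.standard_normal((n, n)))[0]  # random orthonormal init
    resid = []
    for _ in range(iters):
        WX = W @ X
        Z = hard_threshold(WX, s)                     # step (1)
        resid.append(np.linalg.norm(WX - Z) / np.linalg.norm(X))
        U, _, Vt = np.linalg.svd(Z @ X.T)             # step (2), closed form
        W = U @ Vt
    return W, resid

# Synthetic training data that is exactly s-sparse under a hidden
# orthonormal transform B, so a good W should sparsify it well.
rng = np.random.default_rng(1)
n, m, s = 16, 500, 2
B = np.linalg.qr(rng.standard_normal((n, n)))[0]
codes = hard_threshold(rng.standard_normal((n, m)), s)
X = B.T @ codes  # by construction, B X = codes is sparse

W, resid = learn_transform(X, s)
print(f"sparsification residual: {resid[0]:.3f} -> {resid[-1]:.3f}")
```

Each half-step minimizes the sparsification error over one variable with the other fixed, so the residual is monotone non-increasing. Cheap closed-form updates like these, in contrast to the NP-hard sparse coding step of synthesis dictionary learning, are what make transform-based denoising orders of magnitude faster.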