Project B1: Sparsity promoting likelihood penalization for inverse problems


Inverse problems consist in finding causes for observed effects. Typically the observed effects depend continuously on the causes, but not vice versa; that is, inverse problems are ill-posed. To restore stability, a priori information on the unknown and on the data must be incorporated into the reconstruction process. In particular, incorporating information on the sparsity of the unknown with respect to a given frame, or on the noise distribution via log-likelihood data fidelity terms, can have a remarkable effect on the quality of the reconstructions.
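
As a concrete illustration (the notation is introduced here only for this sketch and is not fixed by the project description), such reconstructions are typically obtained as minimizers of a penalized negative log-likelihood functional

    \hat{f}_\alpha \in \operatorname{argmin}_{f} \; \mathcal{S}\bigl(F(f); g^{\mathrm{obs}}\bigr) \;+\; \alpha \sum_{\lambda} \bigl|\langle f, \varphi_\lambda \rangle\bigr|,

where F denotes the forward operator mapping causes to effects, \mathcal{S} the negative log-likelihood of the observed data g^{\mathrm{obs}}, \{\varphi_\lambda\} the given frame (e.g. a wavelet frame), and \alpha > 0 a regularization parameter balancing data fidelity against sparsity of the frame coefficients.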

In this project we study error bounds for sparsity-promoting regularization (also in the case of non-sparse solutions) and the modelling of noise by exponential families.
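
The following minimal sketch illustrates how an l1 penalty promotes sparsity in the simplest setting: a linear forward operator with Gaussian noise, so that the negative log-likelihood reduces to a least-squares data fidelity term. It uses plain iterative soft-thresholding rather than the regularized Newton methods listed below, and all names, dimensions, and parameters are chosen ad hoc for illustration.

    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of t * ||.||_1: componentwise soft thresholding.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, g_obs, alpha, n_iter=500):
        # Iterative soft-thresholding for
        #   min_f 0.5 * ||A f - g_obs||^2 + alpha * ||f||_1.
        # Illustrative only: assumes a linear forward operator A (acting on
        # frame coefficients f) and Gaussian noise.
        f = np.zeros(A.shape[1])
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = A.T @ (A @ f - g_obs)       # gradient of the data fidelity term
            f = soft_threshold(f - grad / L, alpha / L)
        return f

    # Example: recover a sparse coefficient vector from noisy indirect measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200)) / np.sqrt(80)
    f_true = np.zeros(200)
    f_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
    g_obs = A @ f_true + 0.01 * rng.standard_normal(80)
    f_rec = ista(A, g_obs, alpha=0.02)

In this simple setting the soft-thresholding step sets small coefficients exactly to zero, which is the mechanism by which the l1 penalty enforces sparsity of the reconstruction.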

Methods: Besov spaces, wavelets, convex optimization, regularized Newton methods
Applications: biomedical and astronomical imaging