# Sparsity-constrained inversion with tomographic applications

Figure 1: Transversal slices from reconstructions of an 18F-FDG PET scan. Left: "ground truth", standard inversion using 20 minutes of data; middle: standard inversion using 5 seconds of data; right: enhanced sparsity-based inversion using 5 seconds of data.

The trilateral Chinese-Finnish-German research project entitled "Sparsity-constrained inversion with tomographic applications" was funded as part of the "Inverse Problems Initiative" of the DFG and started in 2011. Its aim is to connect experts on Bayesian and deterministic regularization, who have so far formed rather separate scientific communities. The project is coordinated by Professor Samuli Siltanen (Helsinki) and comprises four teams.
The project focuses on similarities and differences between Bayesian and deterministic approaches to sparsity-constrained regularization methods, both in discrete and continuous settings. On the computational side, new, robust and efficient sparsity-promoting inversion algorithms will be developed and applied to bioluminescence and limited-data X-ray tomography.

## Scientific Background and Project-related Publications

Solving high-dimensional inverse problems using sparsity constraints as a priori information has led to enormous advances in various application areas. Total variation (TV) deblurring [2] uses sparsity constraints on the gradient of the unknown quantity and is used successfully in many imaging applications (see figure 1). The notion of compressed sensing [4] summarizes a number of techniques that rely on the idea that high-quality reconstructions can be obtained from a small amount of data if a sparse basis for the unknowns is known a priori.

Traditionally, sparsity constraints introduced as a priori information to inverse problems are formulated in the framework of variational regularization. One popular approach is to use regularization functionals incorporating L1 norms. However, this type of sparsity constraint can also be formulated and examined in the framework of Bayesian statistics. Addressing high-dimensional, ill-posed inverse problems as problems of Bayesian inference has gained growing attention over the years [11]. It allows an easy formulation of a priori information on the solution via a priori probability distributions (priors). Furthermore, a specific inference strategy called the maximum a posteriori (MAP) estimate corresponds to variational regularization (the prior corresponds to the regularization functional). In general, the Bayesian solution to an inverse problem is given by the a posteriori probability distribution (posterior) over the parameter space; the MAP estimate is the point maximizing this distribution.

The analysis of sparse Bayesian inversion is far less developed up to now, and a number of exciting questions still remain to be addressed. In particular, sparse inversion is an interesting topic for studying the relation between regularization theory and Bayesian inference. This relation is well understood for regularization using L2 norms, which corresponds to Bayesian inference with Gaussian priors. While the differences between the two viewpoints are subtle in this Gaussian setting, they become far more pronounced in the context of sparse inversion using L1-type priors, i.e., priors that rely on L1 norms of the unknowns. Selected publications: [14,13,12,1,16,19,7,8,18,10]

## A General Introduction

Inverse problems arise in the context of almost all modern imaging devices. Solving an inverse problem means recovering a quantity of interest from indirect and noisy measurements carried out by the imaging device. In a certain sense, the aim is to "invert" the process of imaging. A well-known example is Computed Tomography (CT). For general introductions to inverse problems, we refer to [5,11]. In the following, consider a typical inverse problem which originates from an imaging application:

$$f = K \, u + \varepsilon \qquad \qquad (1)$$

where $f$ denotes the measured data, $u$ the unknown quantity, $K$ the forward operator modeling the measurement process, and $\varepsilon$ the measurement noise.
The inverse problem of recovering u from f is ill-posed and, therefore, a stable reconstruction requires the incorporation of additional a priori information on the solution. In the framework of variational regularization, this incorporation is accomplished by defining the regularized solution as a minimizer of a functional:

$$\hat{u}_\alpha = \underset{u}{\operatorname{arg\,min}} \; \| f - K \, u \|^{2}_{2} + \alpha \, \mathcal{J}(u) \qquad \qquad (2)$$

In this formulation, $\| f - K \, u \|^{2}_{2}$ measures the misfit between measured data f and the data K u that a solution u would generate (data-fidelity), $\mathcal{J}(u)$ is a functional penalizing unwanted features of the solution u and, thus, represents our a priori constraints on the solution. The parameter $\alpha > 0$ (called regularization parameter) determines the balance between both terms.
Sparsity-constrained inversion refers to approaches in which $\mathcal{J}(u)$ aims to measure the diversity of a certain property of u. Minimizing (2) then leads to a solution $\hat{u}_\alpha$ which explains the data f while being sparse with respect to the property measured by $\mathcal{J}(u)$. A popular example is Total Variation (TV) deblurring [15,2], in which $\mathcal{J}(u)$ measures the L1 norm of the gradient of u:

$$\mathcal{J}(u) = \operatorname{TV}(u) = \int |\nabla u| \, \mathrm{d}x \qquad \qquad (3)$$

Minimizing (2) using (3) leads to solutions with only few jumps, which is therefore referred to as edge-preserving reconstruction. In figure 1, different reconstructions of data from Positron Emission Tomography (PET) scans are shown. For an extremely small portion of the whole data, a TV-based reconstruction enhanced by Bregman iterations is able to preserve the most distinct image features. Another popular example of using sparsity constraints is compressed sensing [3,4], a notion which summarizes a number of techniques relying on the idea that high-quality reconstructions can be obtained from a small amount of data if a sparse basis or frame for the unknowns is known a priori. More generally, one considers functionals of the form $\mathcal{J}(u) = \| \mathcal{L}u\|_p^p$, where the operator $\mathcal{L}$ may be the identity, a differential operator, or a projection onto a basis or frame. Choosing p = 1 promotes sparsity of the solution while $\mathcal{J}(u)$ remains convex, so that (2) still has a unique solution.

The Bayesian approach to inverse problems takes a slightly different perspective on equation (1). The ill-posedness of recovering u from f means that a high degree of uncertainty about u remains after measuring f. In other words, the information gained by the measurements alone is insufficient to determine a solution. By incorporating additional information about u that is known before the measurement (a priori information), it may be possible to reduce the total uncertainty about u sufficiently. The proper mathematical framework to deal with uncertainty and information is statistics, and the field concerned with the proper formulation and embedding of a priori information is called Bayesian statistics. Addressing high-dimensional, ill-posed inverse problems as problems of Bayesian inference has gained growing attention over the years [17,6,11,9].

Formally, if we account for the stochastic nature of the noise term in (1), the equation turns into a relation between random variables, and the conditional probability density of f given u (called the likelihood density) can be computed. For additive Gaussian noise with variance $\sigma^2$, for example, it reads:

$$p(f \,|\, u) \propto \exp\!\left( - \frac{1}{2\sigma^2} \| f - K \, u \|^{2}_{2} \right)$$

Central to Bayesian inference strategies is to consider u as a random variable itself and to encode a priori information about it in its density, $p_{pr}(u)$, which is therefore called the prior. Then, the model can be inverted using Bayes' rule:

$$p(u \,|\, f) = \frac{p(f \,|\, u) \, p_{pr}(u)}{p(f)}$$

The conditional density of u given f is called the posterior, as it represents the complete information about u after the measurement. In Bayesian inference, this density is the complete solution to the inverse problem. There are several ways to exploit the information about u contained in the posterior. The most popular one, called the maximum a posteriori (MAP) estimate, is to infer a point estimate for u by searching for the highest mode of the posterior:

$$\hat{u}_{MAP} = \underset{u}{\operatorname{arg\,max}} \; p(u \,|\, f)$$

Another way to obtain a point estimate, called the conditional mean (CM) estimate, is to compute the expected value of the posterior:

$$\hat{u}_{CM} = \int u \; p(u \,|\, f) \, \mathrm{d}u$$

Practically, computing the MAP estimate is a high-dimensional optimization problem, whereas computing the CM estimate is a high-dimensional integration problem. Apart from point estimates, computing credible regions, conditional covariance or histogram estimates, or extreme value probabilities are other applications of posterior-based inference (see [11] for an overview). A direct link to variational regularization can be established by choosing Gibbs distributions as priors:

$$p_{pr}(u) \propto \exp\!\left( - \lambda \, \mathcal{J}(u) \right)$$
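On a one-dimensional toy problem, both point estimates can be computed by brute force on a grid. The Gaussian likelihood, the L1 (Laplace-type) prior, and all parameter values below are illustrative assumptions chosen for the sketch, not models used in the project.

```python
import numpy as np

# Toy scalar problem f = u + noise, with a Gaussian likelihood and a
# sparsity-promoting L1 (Laplace) prior p_pr(u) ~ exp(-lam * |u|).
sigma, lam, f = 0.5, 4.0, 0.3

u = np.linspace(-2.0, 2.0, 40001)  # grid over the parameter "space"
du = u[1] - u[0]

likelihood = np.exp(-((f - u) ** 2) / (2.0 * sigma**2))
prior = np.exp(-lam * np.abs(u))
posterior = likelihood * prior
posterior /= posterior.sum() * du  # normalize (denominator in Bayes' rule)

u_map = u[np.argmax(posterior)]    # MAP: highest mode of the posterior
u_cm = (u * posterior).sum() * du  # CM: posterior mean

print(u_map, u_cm)  # MAP is exactly zero (sparse); CM is small but nonzero
```

Already in one dimension the two point estimates disagree for this prior: the posterior mode sits exactly at zero while the mean stays positive, the kind of discrepancy studied, e.g., in [14,12].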

Now, after suppressing terms not depending on u, the MAP estimate is given by

$$\hat{u}_{MAP} = \underset{u}{\operatorname{arg\,min}} \; \| f - K \, u \|^{2}_{2} + \alpha \, \mathcal{J}(u),$$

which corresponds to (2); the regularization parameter $\alpha$ is determined by the noise level and the scaling of the prior.
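This correspondence can be checked numerically in a scalar example with $\mathcal{J}(u) = |u|$, assuming a Gaussian likelihood with noise variance $\sigma^2$ and a Gibbs prior $\exp(-\lambda |u|)$; the matching $\alpha = 2\sigma^2\lambda$ follows from taking the negative logarithm of the posterior. All values below are illustrative.

```python
import numpy as np

# Scalar check: with Gaussian likelihood (noise variance sigma^2) and Gibbs
# prior exp(-lam * |u|), the MAP estimate minimizes the variational
# functional |f - u|^2 + alpha |u| with alpha = 2 * sigma^2 * lam.
sigma, lam, f = 0.5, 1.0, 1.2
alpha = 2.0 * sigma**2 * lam

u = np.linspace(-3.0, 3.0, 60001)
neg_log_posterior = (f - u) ** 2 / (2.0 * sigma**2) + lam * np.abs(u)
functional = (f - u) ** 2 + alpha * np.abs(u)

u_map = u[np.argmin(neg_log_posterior)]  # maximizer of the posterior
u_var = u[np.argmin(functional)]         # minimizer of the functional in (2)

print(u_map, u_var)  # identical: both equal the soft-threshold value f - alpha/2
```

The two objectives differ only by the constant factor $2\sigma^2$, so their minimizers coincide, which is exactly the MAP/variational correspondence stated above.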

Using sparsity constraints as a priori information on the solution in the Bayesian approach is becoming increasingly popular as well. One approach to incorporating sparsity in the Bayesian framework is to also rely on L1-type norms and to examine their influence on different inference strategies (see, e.g., [14,13,12]). The analysis of sparse Bayesian inversion is far less developed up to now, and a number of exciting questions still remain to be addressed. In addition, sparsity constraints provide an interesting setting in which to study the relation between variational regularization and Bayesian inference beyond the direct correspondence of MAP estimate and regularized solution. The project described above addresses some of these issues; its main applications are tomographic reconstruction problems.

## References:

1. C. Brune, A. Sawatzky, and M. Burger. Bregman-EM-TV Methods with Application to Optical Nanoscopy. In X.-C. Tai et al., editors, Proceedings of the 2nd International Conference on Scale Space and Variational Methods in Computer Vision, volume 5567 of LNCS, pages 235-246. Springer, April 2009.
2. M. Burger and S. Osher. A guide to the TV zoo. 2012.
3. D. L. Donoho. Compressed sensing. IEEE Trans Inf Theory, 52(4):1289-1306, 2006.
4. Y. C. Eldar and G. Kutyniok, editors. Compressed Sensing - Theory and Applications. Cambridge University Press, New York, 1st edition, May 2012.
5. H. Engl, M. Hanke-Bourgeois, and A. Neubauer. Regularization of Inverse Problems. Mathematics and Its Applications. Springer Netherlands, Berlin, 1996.
6. S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans Pattern Anal Mach Intell, PAMI-6(6):721-741, Nov. 1984.
7. W. Han, J. A. Eichholz, J. Huang, and J. Lu. RTE-based bioluminescence tomography: A theoretical study. Inverse Problems in Science and Engineering, 19(4):435-459, 2011.
8. W. Han, J. Huang, and J. Eichholz. Discrete-ordinate discontinuous Galerkin methods for solving the radiative transfer equation. SIAM Journal on Scientific Computing, 32(2):477-497, 2010.
9. J. Idier. Bayesian Approach to Inverse Problems. Wiley-ISTE, 2008.
10. B. Jin, T. Khan, and P. Maass. A reconstruction algorithm for electrical impedance tomography based on sparsity regularization. International Journal for Numerical Methods in Engineering, 89(3):337-353, 2012.
11. J. P. Kaipio and E. Somersalo. Statistical and Computational Inverse Problems, volume 160 of Applied Mathematical Sciences. Springer New York, 2005.
12. V. Kolehmainen, M. Lassas, K. Niinimäki, and S. Siltanen. Sparsity-promoting Bayesian inversion. Inverse Probl, 28(2):025005 (28pp), 2012.
13. M. Lassas, E. Saksman, and S. Siltanen. Discretization invariant Bayesian inversion and Besov space priors. Inverse Probl Imaging, 3(1):87-122, 2009.
14. M. Lassas and S. Siltanen. Can one use total variation prior for edge-preserving Bayesian inversion? Inverse Probl, 20:1537-1563, 2004.
15. L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D Nonlinear Phenomena, 60:259-268, November 1992.
16. A. Sawatzky, C. Brune, J. Müller, and M. Burger. Total variation processing of images with Poisson statistics. In X. Jiang and N. Petkov, editors, Proceedings of the 13th International Conference on Computer Analysis of Images and Patterns, volume 5702 of LNCS, pages 533-540. Springer, July 2009.
17. A. Tarantola and B. Valette. Inverse Problems = Quest for Information. J Geophys, 50:159-170, 1982.
18. X. Zhang, M. Burger, X. Bresson, and S. Osher. Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. Technical Report 09-03, UCLA CAM-Report, 2009.
19. X. Zhang, M. Burger, and S. Osher. A unified primal-dual algorithm framework based on Bregman iteration. Technical Report 09-99, UCLA CAM-Report, 2009.