Reconstructions

In the previous section about trajectories, we learned how to define trajectories for our X-ray CT setup. Now that we have a sinogram with a trajectory defined, we want to find a solution to our problem. This guide discusses a variety of algorithms to find a solution using elsa.

Phantom setup

We’ll run through the setup quickly. It is similar to the previous ones, but we include it here for completeness.

import numpy as np
import pyelsa as elsa
import matplotlib.pyplot as plt

# define Shepp-Logan phantom
shape = np.array([64] * 3)
phantom = elsa.phantoms.modifiedSheppLogan(shape)
vol_desc = phantom.getDataDescriptor()
phantom = np.asarray(phantom)

# Define geometry
nangles = 720
arc = 360
DSD = shape[0] * 100
DSO = shape[0] * 2
angles = np.linspace(0, arc, nangles, endpoint=False)
sino_desc = elsa.CircleTrajectoryGenerator.trajectoryFromAngles(
    angles, vol_desc, DSD, DSO)

Measurements

Either you already have measurements, or, if you are using the above setup, you can create measurements analytically in elsa. It’s simply:

b = elsa.analyticalSheppLogan(vol_desc, sino_desc)

This will compute measurements according to the given trajectory for the well-known Shepp-Logan phantom. Note that this is not an approximation: the Shepp-Logan phantom is built from a couple of ellipsoids, and we can use this information to compute the exact intersection length of each ray with each ellipsoid.
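To illustrate the principle behind the analytical sinogram (not elsa's actual implementation), here is a minimal NumPy sketch that computes the intersection length of a ray with a single axis-aligned ellipse in 2D. Substituting the ray equation into the ellipse equation yields a quadratic in the ray parameter; the chord length is the distance between its two roots. The function name and signature are made up for this example.

```python
import numpy as np

def ray_ellipse_intersection_length(origin, direction, center, semi_axes):
    """Length of the chord where a ray intersects an axis-aligned ellipse.

    The ray is p(t) = origin + t * direction, with direction of unit length.
    Substituting p(t) into the ellipse equation gives a quadratic in t;
    the chord length is the distance between its two real roots.
    """
    # shift to the ellipse center and scale axes so the ellipse becomes a unit circle
    o = (np.asarray(origin, float) - np.asarray(center, float)) / np.asarray(semi_axes, float)
    d = np.asarray(direction, float) / np.asarray(semi_axes, float)
    # quadratic a t^2 + b t + c = 0
    a = d @ d
    b = 2.0 * (o @ d)
    c = o @ o - 1.0
    disc = b * b - 4.0 * a * c
    if disc <= 0.0:
        return 0.0  # the ray misses (or merely grazes) the ellipse
    sqrt_disc = np.sqrt(disc)
    t1 = (-b - sqrt_disc) / (2.0 * a)
    t2 = (-b + sqrt_disc) / (2.0 * a)
    return t2 - t1  # direction has unit length, so this is the chord length

# a horizontal ray through the center of the unit circle has chord length 2
length = ray_ellipse_intersection_length([-5, 0], [1, 0], [0, 0], [1, 1])
```

Summing such lengths, weighted by each ellipsoid's absorption value, over all ellipsoids of the phantom gives the exact line integral for one ray.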

Iterative reconstruction algorithms

Iterative reconstruction algorithms are usually all solving an optimization problem:

\[\arg\min_{\mathbf{x}} f(\mathbf{x})\]

In this quickstart we look at a couple of powerful algorithms that all solve the above problem in some form or another.

CGLS

The conjugate gradient algorithm is a powerful gradient-based algorithm. It is well known in the X-ray CT community for its quick initial convergence: running just a couple of iterations already yields a reasonable result.

CGLS can find a solution to the following problem:

\[\arg\min_{\mathbf{x}} \frac{1}{2} || \mathbf{A} \mathbf{x} - \mathbf{b} ||_2^2\]

Here, $\mathbf{A}$ is the so-called system matrix. In elsa you can define the system matrix as:

A = elsa.JosephsMethodCUDA(vol_desc, sino_desc)

This is an approximation to the actual forward and backward projection operators. $\mathbf{b}$ is the measurement data. The following code constructs an object that is capable of running the CGLS algorithm, and then runs it for 20 iterations.

cgls = elsa.CGLS(A, b)
sol1 = cgls.solve(20)
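To see what CGLS does under the hood, here is a textbook CGLS iteration in plain NumPy on a small dense system. This is a sketch for intuition, not elsa's implementation; in the real setting $\mathbf{A}$ is the projection operator and is never formed as a dense matrix.

```python
import numpy as np

def cgls(A, b, niters):
    """Textbook CGLS: minimizes 0.5 * ||A x - b||_2^2 by applying
    conjugate gradients implicitly to the normal equations A^T A x = A^T b."""
    x = np.zeros(A.shape[1])
    r = b - A @ x        # residual in measurement space
    s = A.T @ r          # negative gradient of the least-squares objective
    p = s.copy()         # search direction
    gamma = s @ s
    for _ in range(niters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# small consistent test problem: CGLS recovers x_true quickly
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x = cgls(A, b, 10)
```

Note that only products with $\mathbf{A}$ and $\mathbf{A}^T$ are needed, which is exactly why a matrix-free projector such as `JosephsMethodCUDA` can be plugged in.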

APGD

The accelerated proximal gradient descent algorithm is also known as the fast iterative shrinkage-thresholding algorithm (FISTA). It belongs to the class of proximal (or splitting) methods. These algorithms solve the following problem:

\[\arg\min_{\mathbf{x}} g(\mathbf{x}) + h(\mathbf{x})\]

$g$ is a differentiable function, e.g. the least squares term. $h$ is some non-differentiable function. Two common choices are an indicator function or the $\ell_1$-norm. The indicator function of the non-negativity set is a particularly good choice for X-ray CT: since absorption coefficients cannot be negative, it is a realistic assumption.
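Proximal methods only ever access $h$ through its proximal operator, which has a simple closed form for both choices above. A minimal NumPy sketch (the function names here are made up for illustration):

```python
import numpy as np

def prox_nonneg(x):
    """Prox of the non-negativity indicator: projection onto x >= 0."""
    return np.maximum(x, 0.0)

def prox_l1(x, tau):
    """Prox of tau * ||x||_1: element-wise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
projected = prox_nonneg(x)   # negative entries are clipped to zero
shrunk = prox_l1(x, 0.5)     # small entries vanish, large ones shrink by 0.5
```

Each APGD iteration takes a gradient step on $g$ and then applies one of these operators, which is what makes the non-differentiable term tractable.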

For the following use case, we will use the non-negativity constraint using the APGD.

L = elsa.powerIterations(elsa.adjoint(A) * A) * 1.1
h = elsa.IndicatorNonNegativity(vol_desc)
apgd = elsa.APGD(A, b, h, mu=1/L)
sol2 = apgd.solve(20)

Here, we first compute the Lipschitz constant via power iterations. This isn’t strictly necessary, but this way we gain a little more control over the step length. Next, we construct the indicator function. Then, as before, we create the solver object and run 20 iterations of it.
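For intuition, the Lipschitz constant of the gradient of $\frac{1}{2}\|\mathbf{A}\mathbf{x} - \mathbf{b}\|_2^2$ is the largest eigenvalue of $\mathbf{A}^T\mathbf{A}$, and power iteration estimates it using only matrix-vector products. A hedged NumPy sketch of the idea (not elsa's `powerIterations`):

```python
import numpy as np

def power_iteration(M, niters=300, seed=0):
    """Estimate the largest eigenvalue of a symmetric PSD matrix M."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[0])
    for _ in range(niters):
        v = M @ v
        v /= np.linalg.norm(v)   # renormalize to avoid overflow
    return v @ (M @ v)           # Rayleigh quotient at convergence

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
L = power_iteration(A.T @ A)
# the exact value is the squared largest singular value of A
exact = np.linalg.norm(A, 2) ** 2
```

A step size of $1/L$ (with $L$ inflated slightly, as the factor 1.1 does above) then guarantees that the gradient step is a contraction.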