Assignment 3: CS 754, Advanced Image Processing
1. Your task here is to implement the ISTA algorithm for the following three cases:
(a) Consider the image from the homework folder. Add iid Gaussian noise of mean 0 and variance 3 (on a
[0,255] scale) to it, using the ‘randn’ function in MATLAB. Thus y = x + η where η ∼ N (0, 3) (earlier
the variance was mistakenly marked as 4). You should obtain x from y using the fact that patches
from x have a sparse or near-sparse representation in the 2D-DCT basis.
(b) Divide the image shared in the homework folder into patches of size 8 × 8. Let xi be the vectorized
version of the i-th patch. Consider the measurement yi = Φxi where Φ is a 32 × 64 matrix with entries
drawn iid from N(0, 1). Note that xi has a near-sparse representation in the 2D-DCT basis U, which
is computed in MATLAB as ‘kron(dctmtx(8)’,dctmtx(8)’)’. In other words, xi = Uθi where θi is a
near-sparse vector. Your job is to reconstruct each xi given yi and Φ using ISTA. Then you should
reconstruct the image by averaging the overlapping patches. You should choose the α parameter in the
ISTA algorithm judiciously. Choose λ = 1 (for a [0,255] image). Display the reconstructed image in
your report. State the RMSE, given as ||X(:) − X̂(:)||_2 / ||X(:)||_2, where X̂ is the reconstructed image and
X is the true image. [15 points]
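Both parts of this question use the same ISTA iteration. The following is a minimal sketch in Python/NumPy (rather than MATLAB); the objective form ||y − Aθ||² + λ||θ||_1 and the rule that α should be at least the largest eigenvalue of AᵀA are assumptions consistent with the lecture formulation, so verify them against your notes:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft thresholding, the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(y, A, lam, alpha, n_iter=500):
    """Minimize ||y - A @ theta||^2 + lam * ||theta||_1 by ISTA.

    alpha should be at least the largest eigenvalue of A.T @ A,
    e.g. np.linalg.eigvalsh(A.T @ A).max() plus a small margin.
    """
    theta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        theta = soft_threshold(theta + A.T @ (y - A @ theta) / alpha,
                               lam / (2.0 * alpha))
    return theta

# Toy usage: with A = I the minimizer is soft_threshold(y, lam / 2).
y = np.array([3.0, -2.0, 0.1, 0.0])
theta = ista(y, np.eye(4), lam=1.0, alpha=2.0)
```

For part (b) the effective sensing matrix per patch would be A = ΦU with the recovered patch Uθ̂; for part (a) there is no compressive Φ, so A is just the DCT synthesis matrix.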
2. Download the book ‘Statistical Learning with Sparsity: The Lasso and Generalizations’ from
https://web.stanford.edu/~hastie/StatLearnSparsity_files/SLS_corrected_1.4.16.pdf, which is the website of
one of the authors. (The book can be officially downloaded from this online source.) Your task is to trace
through the steps of the proof of Theorem 11.1(b). This theorem essentially derives error bounds on the
minimizer of the following objective function: J(β) = (1/(2N)) ||y − Xβ||² + λ_N ||β||_1, where λ_N is a regularization
parameter, β ∈ R^p is the unknown sparse signal, y = Xβ + w is a measurement vector with N values, w is
a zero-mean i.i.d. Gaussian noise vector whose every element has standard deviation σ, and X ∈ R^(N×p) is a
sensing matrix whose every column is unit normalized. This particular estimator (i.e. the minimizer of J(β)
over β) is called the LASSO in the statistics literature. The theorem also derives a statistical bound on λ_N. Your
task is split up in the following manner:
(a) Define the restricted eigenvalue condition (the answer’s there in the book and you are allowed to read
it, but you also need to understand it).
(b) Starting from equation 11.20 on page 309, explain why G(v̂) ≤ G(0).
(c) Do the algebra to obtain equation 11.21.
(d) Do the algebra in more detail to obtain equation 11.22 (state the exact method of application of Hölder’s
inequality – check the wiki article on it if you want to find out what this inequality states).
(e) Derive equation 11.23.
(f) Assuming Lemma 11.1 is true and now that you have derived equation 11.23, complete the proof for
the final error bound for equation 11.14b.
(g) In which part of the proof does the bound λ_N ≥ 2||X^T w||_∞ / N show up? Explain.
(h) Why is the cone constraint required? You may read the rest of the chapter to find the answer.
(i) Read Example 11.1, which tells you how to put a tail bound on λ_N assuming that the noise vector w is
zero-mean Gaussian with standard deviation σ. Given this, state the advantages of this theorem over
Theorem 3 that we did in class. You may read parts of the rest of the chapter to answer this question.
What are the advantages of Theorem 3 over this particular theorem?
(j) Now read Theorem 1.10 till Corollary 1.2 and the comments on it concerning an estimator called the
‘Dantzig selector’, in the tutorial ‘Introduction to Compressed Sensing’ by Davenport, Duarte, Eldar
and Kutyniok. You can find it here: http://www.ecs.umass.edu/~mduarte/images/IntroCS.pdf
or at https://webee.technion.ac.il/Sites/People/YoninaEldar/files/ddek.pdf. What is the
common thread between the bounds on the ‘Dantzig selector’ and the LASSO? [2 x 8 + 4 + 4 = 24
points]
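The quantity in part (g) is easy to probe numerically. Below is a minimal sketch in Python/NumPy; the unit ℓ2-norm column convention and the problem sizes are assumptions for illustration (the book's own normalization may differ by a factor of √N, so compare carefully with Example 11.1):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, sigma = 500, 200, 1.0

# Sensing matrix whose every column is unit normalized, as in the theorem.
X = rng.standard_normal((N, p))
X /= np.linalg.norm(X, axis=0)

# Zero-mean iid Gaussian noise and the resulting lower bound on lambda_N.
w = sigma * rng.standard_normal(N)
lam_N = 2.0 * np.max(np.abs(X.T @ w)) / N

# With unit-norm columns, each entry of X.T @ w is N(0, sigma^2), so
# ||X^T w||_inf concentrates around sigma * sqrt(2 * log(p)).
print(lam_N)
```

Example 11.1 turns exactly this concentration of ||X^T w||_∞ into an explicit high-probability choice of λ_N, which is what part (i) asks you to read.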
3. In this task, you will use the well-known package l1_ls from https://stanford.edu/~boyd/l1_ls/.
This package is often used for compressed sensing solutions, but here you will use it for the purpose of
tomographic reconstruction. The homework folder contains images of two slices taken from an MR volume
of the brain. Create measurements by parallel-beam tomographic projections at any 18 angles chosen
randomly from a uniform distribution on [0, π). Use the MATLAB function ‘radon’ for this purpose. Now
perform tomographic reconstruction using the following methods:
(a) filtered back-projection using the Ram-Lak filter, as implemented in the ‘iradon’ function in MATLAB,
(b) independent CS-based reconstruction for each slice by solving an optimization problem of the form J(x) = ||y − Ax||² + λ||x||_1,
(c) a coupled CS-based reconstruction that takes into account the similarity of the two slices, using the model given in the
lecture notes on tomography.
For parts (b) and (c), use the aforementioned package from Stanford. For
part (c), make sure you use a different random set of 18 angles for each of the two slices. The tricky part
is the careful creation of the forward model matrix A, or a function handle representing that matrix, as well as
the corresponding adjoint operator A^T. Use the 2D-DCT basis for the image representation. Modify the
objective function from the lecture notes for the case of three similar slices. Carefully define all terms in
the equation, but do not re-implement it. For ease of implementation, use square images: zero-pad
the original images to make them square before taking the radon projections. You can also specify
the output size in the iradon function. You may work with uniformly spaced angles instead of randomly
generated angles, as the former can give better results. [3+7+8+7 = 25 points]
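The "tricky part" above — matching the forward operator and its adjoint — is commonly verified with a dot-product test. Here is a minimal sketch in Python/NumPy, with a random matrix standing in for the radon/backprojection pair you would actually wrap (note that MATLAB's ‘iradon’ includes interpolation and scaling, so in practice the test may only hold approximately):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 64))   # stand-in for the tomography forward model

A  = lambda x: M @ x                # forward projection (would wrap 'radon')
At = lambda y: M.T @ y              # adjoint (would wrap unfiltered backprojection)

# Dot-product test: <A x, y> must equal <x, At y> for random x and y;
# a mismatch means At is not the true adjoint of A.
x, y = rng.standard_normal(64), rng.standard_normal(30)
lhs, rhs = float(A(x) @ y), float(x @ At(y))
```

If this identity fails for the handles you pass to the solver, l1_ls-style iterative methods will silently converge to the wrong answer, so the check is worth running before any reconstruction.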
4. Here is our Google search question again. You know of the applications of tomography in medicine (CT
scanning) and virology/structural biology. Your job is to search for a journal paper from any other field
which requires the use of tomographic reconstruction (examples: seismology, agriculture, gemology). State
the title, venue and year of publication of the paper. State the mathematical problem defined in the paper.
Take care to explain the meaning of all key terms clearly. State the method of optimization that the paper
uses to solve the problem. [16 points]
5. Let Rθ(f) be the Radon transform of the image f(x, y) in the direction given by θ. Derive a formula for the
Radon transform of the scaled image f(ax, ay), where a ≠ 0 is a scalar. [10 points]
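For questions 5 and 6, one common convention (check it against the lecture notes, since sign and normalization conventions vary) writes the Radon transform as a line integral over the line x cos θ + y sin θ = ρ:

```latex
R_\theta(f)(\rho) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
  f(x,y)\,\delta\!\left(x\cos\theta + y\sin\theta - \rho\right)\,dx\,dy
```

With this form, question 5 reduces to a change of variables u = ax, v = ay together with the scaling identity δ(s/a) = |a| δ(s), and question 6 follows from the sifting property of the impulse.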
6. Derive the Radon transform of the unit impulse δ(x, y) and the shifted unit impulse δ(x − x0, y − y0). [10
points]