## Description

1. [30 marks] Segmenting a Brain Magnetic Resonance (MR) Image.

Download the bias-corrupted and noise-corrupted magnitude-MR image of a human brain from:
assignmentSegmentBrain.mat.zip

The mat file also contains a binary (mask) image that separates the set of pixels within the brain from those outside the brain.

Implement the algorithm (covered in class lectures) for segmenting the brain into 3 segments, namely, (i) white matter, (ii) gray matter, and (iii) cerebrospinal fluid, using a modified fuzzy c-means (FCM) algorithm to estimate, and account for, the bias/inhomogeneity field in the brain MR image.

Assume the number of classes K = 3.

Run the segmentation algorithm only on the image data inside the brain.

Manually tune values for (i) the parameter q that controls the fuzziness of the segmentation and (ii) the neighborhood mask (size and values) that gives the weights wij (you may choose the weights based on a Gaussian centered at the middle pixel of the mask; rescale the weights, if needed, so that they sum to 1). You must choose q to be greater than 1.5.

You must choose the initial estimate for the bias field to be a constant image.
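As a sketch, a normalized Gaussian neighborhood mask can be built as follows (the function name and the default size/sigma are assumptions to be tuned, not prescribed values):

```python
import numpy as np

def gaussian_mask(size=5, sigma=1.0):
    """Gaussian neighborhood mask centered on the middle pixel,
    rescaled so the weights sum to 1 (size and sigma are tunable)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()
```

The rescaling step guarantees the weights wij sum to 1, as the problem statement requires; the mask itself can be displayed directly as the required image of wij.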

After finding the optimal estimates of the (i) class means ck for each class k, (ii) memberships unk at each pixel n inside the brain, and (iii) bias-field values bn, construct the following images:

• Construct a bias-removed image A as follows: at each pixel n, the intensity An in the bias-removed image equals the weighted sum An := Σ_k unk ck.

• Construct a residual image R as follows: at each pixel n, the intensity Rn in the residual image equals the difference Rn := Yn − An bn, where Y is the corrupted data image.
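Given the optimal estimates, the two images above can be formed directly; a minimal numpy sketch (array names and shapes are assumptions):

```python
import numpy as np

def bias_removed_and_residual(y, u, c, b):
    """y: observed image (H,W); u: memberships (K,H,W);
    c: class means (K,); b: bias field (H,W).
    Returns A with A_n = sum_k u_nk c_k, and R with R_n = y_n - A_n b_n."""
    A = np.tensordot(c, u, axes=([0], [0]))  # weighted sum over classes
    R = y - A * b
    return A, R
```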

Implement the following functionality as part of the segmentation algorithm:

(a) (3 marks) Code to find the optimal value of the class means, within every iteration.

(b) (3 marks) Code to find the optimal value of the memberships, within every iteration.

(c) (3 marks) Code to find the optimal value of the bias field, within every iteration.
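The three updates (a)-(c) can be sketched as below. This assumes the objective J = Σ_n Σ_k unk^q Σ_j wj (yn − ck bj)², with j ranging over the neighborhood of n; the closed forms should be checked against the derivation from class, and all names are assumptions:

```python
import numpy as np

def conv2_same(img, w):
    """Same-size 2-D correlation with zero padding (small kernels only).
    For a symmetric mask w this equals convolution."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += w[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def fcm_bias_updates(y, u, c, b, w, q, mask):
    """One iteration of the modified-FCM updates (a sketch).
    y: image (H,W); u: memberships (K,H,W); c: class means (K,);
    b: bias field (H,W); w: normalized neighborhood mask; q > 1;
    mask: binary brain mask (H,W)."""
    wb, wb2 = conv2_same(b, w), conv2_same(b ** 2, w)
    uq = (u ** q) * mask
    # (a) class means
    c = np.array([(uq[k] * y * wb).sum() / ((uq[k] * wb2).sum() + 1e-12)
                  for k in range(len(c))])
    # (b) memberships: d_nk = y_n^2 - 2 c_k y_n (w*b)_n + c_k^2 (w*b^2)_n
    d = np.stack([y ** 2 - 2.0 * ck * y * wb + (ck ** 2) * wb2 for ck in c])
    u = np.maximum(d, 1e-12) ** (-1.0 / (q - 1.0))
    u = u / u.sum(axis=0, keepdims=True) * mask
    # (c) bias field: b = (w * (y sum_k u^q c)) / (w * (sum_k u^q c^2))
    uq = (u ** q) * mask
    num = conv2_same(y * np.tensordot(c, uq, axes=([0], [0])), w)
    den = conv2_same(np.tensordot(c ** 2, uq, axes=([0], [0])), w)
    return u, c, num / (den + 1e-12)
```

Iterating these three updates until the objective stops decreasing gives the values needed for the report items below.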

Report the following:


(a) (0 marks) The chosen value for q.

(b) (0 marks) The neighborhood mask wij seen as an image.

(c) (0 marks) The initial estimate for the membership values shown as 3 images, i.e., one image that shows the membership values of all pixels to a particular class. Describe your motivation and algorithm for choosing this initialization.

(d) (0 marks) The initial estimates of the class means. Describe your motivation and algorithm for choosing this initialization.

(e) (6 marks) The value of the objective function at each iteration in the modified-FCM algorithm.

(f) (10 marks) Show the following 5 images in the report: (i) Corrupted image provided, (ii) Optimal class-membership image estimates, (iii) Optimal bias-field image estimate, (iv) Bias-removed image, (v) Residual image.

(g) (0 marks) The optimal estimates for the class means.

• (5 marks) Explain whether the formulation discussed in class leads to a unique solution. If not, (i) propose a scheme (in theory) to ensure a unique solution and (ii) implement it.
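Note that the data term depends on the bias field and class means only through the products ck bn, so scaling b by α and c by 1/α leaves the fit unchanged. One possible scheme (an assumption, not the unique answer) is to fix this scale:

```python
import numpy as np

def normalize_bias(b, c, mask):
    """Resolve the scale ambiguity b -> alpha*b, c -> c/alpha by
    rescaling the bias field to have unit mean inside the brain mask."""
    alpha = b[mask > 0].mean()
    return b / alpha, c * alpha
```

Applying this after each iteration (or once at convergence) pins down one representative from each equivalent family of solutions without changing the objective value.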

2. [25 marks] Segmenting a Brain Magnetic Resonance (MR) Image.

Download the corrupted magnitude-MR image of a human brain from:
assignmentSegmentBrainGmmEmMrf.mat.zip

The mat file also contains a binary (mask) image that separates the set of pixels within the brain from those outside the brain.

Implement the algorithm (covered in class lectures) for segmenting the brain into 3 segments, namely, (i) white matter, (ii) gray matter, and (iii) cerebrospinal fluid, using an expectation-maximization (EM) optimization algorithm that relies on a Gaussian mixture model (GMM) for intensities and a Markov random field (MRF) model on the labels.

Assume the number of classes K = 3.

Run the segmentation algorithm only on the image data inside the brain.

Manually tune the β parameter value underlying the potential function in the MRF model on the label image, to control the smoothness of the labeling (and memberships).

Implement the following functionality as part of the segmentation algorithm:

(a) (2 marks) Code to find the optimal value of the memberships, within every iteration.

(b) (2 marks) Code to find the optimal value of the class means, within every iteration.

(c) (2 marks) Code to find the optimal value of the class standard deviations, within every iteration.

(d) (6 marks) Code to find the optimal labeling, within every iteration, based on a modified iterated-conditional-modes (ICM) optimization that updates all labels at once while ensuring that the posterior probability (computed up to the normalization constant Z; recall that, within any iteration, Z will be a function of β as well as the Gaussians' parameters) increases.
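As a sketch of (d), assuming a Potts-style potential V(xn, xm) = β·1[xn ≠ xm] on 4-neighbor cliques (the potential, variable names, and neighborhood are assumptions to be matched to the class formulation), a modified ICM step that updates all labels at once and accepts the update only if the log posterior (up to log Z) increases could look like:

```python
import numpy as np

NBRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-neighborhood offsets

def disagree_counts(x, K, mask):
    """cnt[k] at pixel n = number of masked 4-neighbors with label != k.
    Uses np.roll; assumes the mask is zero on the image border,
    so wrap-around neighbors are ignored."""
    cnt = np.zeros((K,) + x.shape)
    for d in NBRS:
        xs = np.roll(x, d, axis=(0, 1))
        ms = np.roll(mask, d, axis=(0, 1))
        for k in range(K):
            cnt[k] += ms * (xs != k)
    return cnt

def log_posterior(x, y, mu, sigma, beta, mask):
    """log P(x | y, theta, beta) up to the normalization constant log Z."""
    ll = (-0.5 * ((y - mu[x]) / sigma[x]) ** 2 - np.log(sigma[x])) * mask
    cnt = disagree_counts(x, len(mu), mask)
    # interior cliques are counted twice in cnt, hence the 0.5
    prior = -0.5 * beta * np.take_along_axis(cnt, x[None], axis=0)[0] * mask
    return (ll + prior).sum()

def modified_icm_step(x, y, mu, sigma, beta, mask):
    """Update all labels at once; keep the update only if it raises
    the log posterior, otherwise return the old labeling."""
    cnt = disagree_counts(x, len(mu), mask)
    energy = (0.5 * ((y[None] - mu[:, None, None]) / sigma[:, None, None]) ** 2
              + np.log(sigma)[:, None, None] + beta * cnt)
    x_new = np.where(mask > 0, energy.argmin(axis=0), x)
    if (log_posterior(x_new, y, mu, sigma, beta, mask)
            >= log_posterior(x, y, mu, sigma, beta, mask)):
        return x_new
    return x
```

The two `log_posterior` values computed inside the step are exactly the before/after quantities that report item (d) below asks for.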

Report the following:

(a) (0 marks) The chosen value for β that, in your judgement, gives a smooth and realistic segmentation.


(b) (0 marks) The initial estimate for the label image x. Describe your motivation and algorithm for choosing this initialization.

(c) (0 marks) The initial estimates of the Gaussian parameters θ, i.e., the class means and standard deviations. Describe your motivation and algorithm for choosing this initialization.

(d) (3 marks) Within every iteration, for the modified ICM segmentation, the values of the log posterior probability for the labels, i.e., log P(x|y, θ, β), before and after the ICM update.

(e) (10 marks) Show the following 5 images in the report: (i) Corrupted image provided, (ii) Optimal class-membership image estimates for the chosen β, (iii) Optimal label image estimate for the chosen β, (iv) Optimal class-membership image estimates for β = 0, i.e., NO MRF prior on labels, (v) Optimal label image estimate for β = 0, i.e., NO MRF prior on labels.

(f) (0 marks) The optimal estimates for the class means for the chosen β.

3. [25 marks] Consider that you are solving a tissue segmentation problem in MRI. Suppose you have an expectation-maximization (EM) optimization framework for maximum-likelihood estimation of parameters θ underlying the image intensity model. How can you extend the EM framework if you have prior information on the parameters θ in the form of a probability distribution P(θ)?
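For orientation, a sketch of the standard MAP-EM extension to check your derivation against (Z denotes the hidden labels): the E step keeps its usual form, computing the posterior over Z under the current θ^(t), while the M step maximizes the expected complete-data log-likelihood plus the log prior:

```latex
\theta^{(t+1)} = \arg\max_{\theta} \;
\mathbb{E}_{Z \mid y,\, \theta^{(t)}}\!\left[ \log P(y, Z \mid \theta) \right]
+ \log P(\theta)
```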

• (5 marks) Clearly explain, with mathematical expressions, how the E step changes.

• (5 marks) Clearly explain, with mathematical expressions, how the M step changes.

• (15 marks: 6 + 9) Suppose you were using EM for fitting a Gaussian mixture model (GMM) to the data, and you wanted to design priors on the parameters underlying the GMM, i.e., the mean vectors, covariance matrices, and weights. Then, (i) design appropriate prior models for each of the three aforementioned kinds of parameters and (ii) describe an EM algorithm for performing the parameter update in the M step for each kind of parameter.

One hint: Cholesky decomposition.
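As a sketch, one commonly used conjugate family for the three kinds of GMM parameters (an assumption; other choices are possible) is:

```latex
\mu_k \sim \mathcal{N}(m_0, \Sigma_0), \qquad
\Sigma_k \sim \mathcal{IW}(\Psi, \nu), \qquad
(w_1, \ldots, w_K) \sim \mathrm{Dirichlet}(\alpha_1, \ldots, \alpha_K)
```

Each is conjugate to the corresponding complete-data likelihood term, keeping the M-step updates in closed form; the Cholesky hint points toward parameterizing $\Sigma_k = L_k L_k^{\top}$ so covariance estimates remain positive definite during optimization.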
