MPHY0041 Machine Learning in Medical Imaging


Assessed Coursework Tracking Sheet

Module Code: MPHY0041

Module Title: Machine Learning in Medical Imaging

Date Handed Out: Friday, October 25th 2024

Student ID (Not Name):

Submission Instruction: Before the submission deadline, you should digitally submit your source code and generated figures (a single Jupyter notebook file including your written answers). In case you submit multiple files, all files need to be combined in one single zip file and submitted on the module page at UCL Moodle.

Coursework Deadline: Friday, November 29th 2024 at 16:00 at UCL Moodle

Date Returned to Student:

The Department of Medical Physics and Biomedical Engineering follows the UCL Academic Manual with regards to plagiarism and coursework late submission:

UCL Policy on Plagiarism
UCL Policy on Late Submission of Coursework

If you are unable to submit on time due to extenuating circumstances (EC), please refer to the UCL Policy on Extenuating Circumstances and contact our EC Secretary.

Feedback on:

Mark (%): Please note that the mark is provisional and could be changed when the exam boards meet to moderate marks.

UCL DEPARTMENT OF MEDICAL PHYSICS AND BIOMEDICAL ENGINEERING

Please note: This is an AI Category 1 coursework (i.e., AI technologies cannot be used to solve the questions): https://www.ucl.ac.uk/teaching-learning/generative-ai-hub/using-ai-tools-assessment.

Please submit a single Jupyter notebook file for Exercises 1, 2, and 3. The file should contain code, plots and comments that help the understanding of your answers. You can give your written answers as Markdown within the Jupyter notebook. The provided Jupyter notebook Notebook_MPHY0041_2425_CW1.ipynb contains the individual gap codes/functions for Exercise 2 and the functions provided for Exercise 3. Please use this notebook as the basis for your submission.

1. Load the dataset ‘Dementia_train.csv’. It contains diagnosis (DX), a cognitive score (ADAS13) and two cerebrospinal fluid (CSF) measurements for two proteins: amyloid and tau. There are three diagnostic labels: CN, MCI, and Dementia.

a) Remove MCI subjects from the dataset. Compute means for each of the three measurements (ADAS13, ABETA, TAU) for the ‘CN’ (μ_CN) and the ‘Dementia’ (μ_Dem) groups. In addition, compute the standard deviation (σ) for these three measures across the diagnostic groups. Assume that the data follow a Gaussian distribution N(μ, σ²), with the means and standard deviation as computed above. Compute the decision boundary between the two disease groups for each of the three features (with the prior probabilities π_CN = π_Dem = 0.5). Load the dataset ‘Dementia_test.csv’ that contains the same information for another 400 participants. After removing people with MCI, use the decision boundaries from above to compute accuracy, sensitivity and specificity for separating CN from Dementia for each of the three features. [8]
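For orientation, a minimal sketch of the boundary computation in a), assuming the CSV uses the column names DX, ADAS13, ABETA and TAU and the label strings ‘CN’/‘Dementia’ (names taken from the exercise text):

```python
import pandas as pd

df = pd.read_csv("Dementia_train.csv")
df = df[df["DX"] != "MCI"]                       # a) drop MCI subjects

for feat in ["ADAS13", "ABETA", "TAU"]:
    mu_cn  = df.loc[df["DX"] == "CN", feat].mean()
    mu_dem = df.loc[df["DX"] == "Dementia", feat].mean()
    sigma  = df[feat].std()                      # one sigma across both groups

    # With equal priors and a shared sigma, the two Gaussian densities
    # intersect at the midpoint of the two class means.
    boundary = (mu_cn + mu_dem) / 2
    print(f"{feat}: decision boundary at {boundary:.2f}")
```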

b) Using sklearn functions, train a LinearRegression model to separate CN from Dementia subjects using ABETA and TAU values as inputs. Generate a scatter plot for ABETA and TAU using different colours for the two diagnostic groups. Compute the decision boundary based on the linear regression and add it to the plot. What is the accuracy, sensitivity and specificity of your model on the test data for separating CN from Dementia? [7]
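One possible route for the boundary in b): fit LinearRegression on 0/1-encoded labels and classify at a threshold of 0.5, so the boundary is the line where the predicted score equals 0.5 (the encoding and threshold are assumptions consistent with using regression as a classifier):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = df[["ABETA", "TAU"]].to_numpy()        # df from part a), CN/Dementia only
y = (df["DX"] == "Dementia").astype(int).to_numpy()

reg = LinearRegression().fit(X, y)
b0, (b1, b2) = reg.intercept_, reg.coef_

# Boundary: b0 + b1*ABETA + b2*TAU = 0.5, rearranged for TAU so it can be
# overlaid on the ABETA-vs-TAU scatter plot.
abeta_grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 100)
tau_boundary = (0.5 - b0 - b1 * abeta_grid) / b2
```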
c) The previous analyses ignored the subjects with MCI. Going back to the full dataset, compute means for all three groups for ABETA and TAU as well as the joint variance-covariance matrix Σ. Use these to compute linear decision boundaries between all pairs of classes (with the prior probabilities π_CN = π_MCI = π_Dem = 0.33) without using any models implemented in sklearn. Generate a new scatter plot and add the three decision boundaries. What is the accuracy, sensitivity and specificity for separating CN from Dementia with this method? [10]
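One way to derive the pairwise boundaries in c) by hand is the standard linear discriminant algebra: with a shared covariance Σ and equal priors, the boundary between classes i and j is the line w·x + b = 0 with w = Σ⁻¹(μ_i − μ_j). A sketch under those assumptions (the pooled within-class covariance is one common estimator; df_full stands for the full training data):

```python
import numpy as np

feats = ["ABETA", "TAU"]
classes = ["CN", "MCI", "Dementia"]
mus = {c: df_full.loc[df_full["DX"] == c, feats].mean().to_numpy()
       for c in classes}

# Pooled within-class covariance: deviations from each subject's class mean.
centred = np.vstack([df_full.loc[df_full["DX"] == c, feats].to_numpy() - mus[c]
                     for c in classes])
Sigma = centred.T @ centred / (len(centred) - len(classes))
Si = np.linalg.inv(Sigma)

for ci, cj in [("CN", "MCI"), ("MCI", "Dementia"), ("CN", "Dementia")]:
    w = Si @ (mus[ci] - mus[cj])
    b = -0.5 * (mus[ci] + mus[cj]) @ Si @ (mus[ci] - mus[cj])
    print(f"{ci} vs {cj}: boundary w @ x + b = 0 with w={w}, b={b:.3f}")
```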

2. Here we complete implementations for different machine learning algorithms. The code with gaps can be found in the notebook Notebook_MPHY0041_2425_CW1.ipynb.

a) The function fit_LogReg_IWLS contains a few gaps that need to be filled for the function to work. This function implements Logistic Regression using iterative weighted least squares (IWLS) as introduced in the lectures. Use your function to train a model that separates Healthy controls from PD subjects in the LogReg_data.csv dataset (the DX column indicates PD status, the remaining columns are the features). Use the LogisticRegression implemented in sklearn to train a model on the same data. Make a scatter plot between the coefficients obtained from your implementation and the sklearn model. Comment on the result. (Hint: the operator @ can be used for matrix multiplications; the function np.linalg.pinv() computes the pseudoinverse of a matrix.) [7]
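The core of an IWLS fit, for orientation (a sketch, not the notebook's fit_LogReg_IWLS; it assumes X already carries a leading column of ones for the intercept):

```python
import numpy as np

def fit_logreg_iwls(X, y, n_iter=25):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))      # current class probabilities
        W = np.diag(p * (1 - p))                 # IWLS weights
        # Newton/IWLS step: beta += (X' W X)^+ X' (y - p)
        beta = beta + np.linalg.pinv(X.T @ W @ X) @ X.T @ (y - p)
    return beta
```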

b) The function fit_LogReg_GRAD aims to implement Logistic Regression using gradient descent. However, there are still a few gaps in the code. Complete the computation of the cost (J(β)) as well as the update of the beta coefficients. (Hint: gradient descent aims to minimise the cost; however, Logistic Regression is fitted by maximising the log-likelihood.) Use your function to train a model that separates Healthy controls from PD subjects in the LogReg_data.csv dataset. Run the training for 3000 iterations with α = 0.1. Compare the obtained coefficients to the ones obtained from the IWLS implementation in part a). Comment on the result. [7]
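For the cost and update in b), the usual choices are the negative mean log-likelihood and a step against its gradient; a self-contained sketch under those assumptions (again not the notebook's exact function):

```python
import numpy as np

def fit_logreg_grad(X, y, alpha=0.1, n_iter=3000):
    beta = np.zeros(X.shape[1])
    costs = []
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        # J(beta): negative mean log-likelihood, so minimising J maximises
        # the log-likelihood (this resolves the sign flip in the hint).
        J = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
        costs.append(J)
        grad = X.T @ (p - y) / len(y)            # dJ/dbeta
        beta -= alpha * grad                     # descent step
    return beta, costs
```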

c) The function fit_LogReg_GRAD_momentum aims to implement Logistic Regression using gradient descent with momentum. Extend your solution from (b) and add momentum to the optimization as introduced in the lectures. Use the parameter gamma as the trade-off between momentum and gradient. Train your model on the dataset Syn_Momentum.csv (two inputs X1, X2, and one target y). Run the gradient descent for 100 iterations and compare to the standard gradient descent from (b), also run for 100 iterations (both with α = 0.001). How does the loss evolve over the iterations? Explain your observation. [7]
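One common momentum formulation matching "gamma as the trade-off between momentum and gradient" (an assumption; other conventions scale the raw gradient by alpha inside the velocity instead):

```python
import numpy as np

def fit_logreg_grad_momentum(X, y, alpha=0.001, gamma=0.9, n_iter=100):
    beta = np.zeros(X.shape[1])
    v = np.zeros_like(beta)                      # accumulated velocity
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (p - y) / len(y)
        v = gamma * v + (1 - gamma) * grad       # blend old velocity and gradient
        beta -= alpha * v
    return beta
```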
d) When working with medical data we often encounter biases. This could mean that our target variable (y) is accidentally correlated with another variable (y'). We would like to estimate the model to predict y while ignoring the effects introduced by y'. The trade-off between the objectives can be modified using the parameter δ. Provide a loss function for this scenario (where both y and y' are fitted using a Logistic Regression). Complete the function fit_LogReg_GRAD_competing, which should implement these logistic regressions with gradient descent. Use the variable delta to implement the trade-off. Load the dataset sim_competitive.csv; it contains two input features (x1, x2) and two output features (y1, y2). Apply your function with different values for δ (0, 0.5, 0.75, 1.0). Make a scatter plot of the data and add the decision boundaries produced by the four models. [9]
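One plausible shape for such a combined objective (clearly an assumption for illustration, not the official solution): weight the cross-entropy for the target y1 against the cross-entropy for the nuisance target y2, so that δ = 1 fits y1 only and smaller δ increasingly penalises fitting y2:

```python
import numpy as np

def competing_cost(X, y1, y2, beta, delta):
    # Both targets are scored with the same logistic model; the second term
    # enters with a negative sign, so minimising the cost *unfits* y2.
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    ce1 = -np.mean(y1 * np.log(p) + (1 - y1) * np.log(1 - p))
    ce2 = -np.mean(y2 * np.log(p) + (1 - y2) * np.log(1 - p))
    return delta * ce1 - (1 - delta) * ce2
```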

3. This exercise uses T2-weighted MR images of the prostate and surrounding tissue (information here). The task to be solved is to automatically segment the prostate in these images. The input images are gray-scale images with 128x128 pixels (below left) and the output should be a binary matrix of size 128x128, where a 1 indicates the prostate (below right). The promise1215.zip archive contains three sets of images: training, validation, test. For training, there are 30 MR images paired with their ground truth (i.e., masks). For instance, train/img_02_15.png is the MRI and train/lab_02_15.png is the corresponding ground truth. The function preprocess_img computes a series of filters (raw, Sobel, Gabor, difference of Gaussians, etc.) to be used for the machine learning algorithm. For instance, application to the above image results in the channels shown in Figure 1. Use the function provided in create_training_set to randomly sample 1000 patches of size 21x21 from the 30 training images to generate an initial dataset. The resulting dataset is heavily imbalanced (more background patches than target); the function sub_sample is used to generate a random subset of 1000 patches from the entire training data with an approximate 50-50 distribution.

a) Using sklearn, train an SVC model to segment the prostate. Optimize the kernel choice (e.g., RBF or polynomial with degree 3) and the cost parameter (e.g., C in the range 0.1 to 1000) using an appropriate variant of cross-validation. Measure performance using the Area Under the ROC Curve (roc_auc) and plot the performance of the kernels depending on the C parameter. (Hint: when the SVC seems to take an endless time to train, change your choice of C parameters; large C parameters → little regularization → long training time. E.g., in Colab this took about 30 minutes.) [10]
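A cross-validated search over kernels and C for a) could look like the following sketch (X_train/y_train stand for the sub-sampled patch features and labels, and the 5-fold GridSearchCV is one reasonable choice; both are assumptions):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = [
    {"kernel": ["rbf"],  "C": [0.1, 1, 10, 100, 1000]},
    {"kernel": ["poly"], "degree": [3], "C": [0.1, 1, 10, 100, 1000]},
]
# roc_auc scoring uses SVC's decision_function, so probability=True is not needed.
search = GridSearchCV(SVC(), param_grid, scoring="roc_auc", cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```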

b) Based on your result from a), select the best model parameters and make predictions for the 10 images in the validation dataset. Compute the DICE coefficient and roc_auc for each image (a DICE helper is sketched below, after the figure caption). Display the original image, the ground truth, and your segmentations for any 5 images in your validation set. Provide the average DICE coefficient and roc_auc for the entire validation dataset. (Hint: this can take a few minutes per image.) [8]

Figure 1: Feature channels, numbered from top left to bottom right. (1) raw input image, (2) Scharr filter, (3-6) Gabor filter with frequency 0.2 in four directions, (7-10) Gabor filter with frequency 0.4 in four directions, (11-14) Gabor filter with frequency 0.6 in four directions, (15-18) Gabor filter with frequency 0.8 in four directions, (19) Local Binary Pattern (LBP) features, and (20) difference of Gaussians.
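For the per-image DICE coefficient in b), a small helper (a sketch; pred and truth are assumed to be binary 128x128 masks):

```python
import numpy as np

def dice_coefficient(pred, truth):
    # DICE = 2|A ∩ B| / (|A| + |B|) for binary masks A and B.
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```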

c) Instead of the SVC, train a tree-based ensemble classifier and make predictions for the validation images. Report the average roc_auc and DICE coefficient for the entire validation set. What performs better: the SVC or the tree ensemble? Are tree ensembles or the SVC faster to train and apply? Explain why this is the case. [7]
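A random forest is one qualifying tree ensemble for c) (other choices such as ExtraTrees or gradient boosting would also fit the brief); a sketch using the same assumed variable names as in a):

```python
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)                 # same patch features as for the SVC
scores = rf.predict_proba(X_val)[:, 1]   # continuous scores for roc_auc
```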
d) Use the tree-based ensemble method and explore how the amount of training data (i.e., sub-sample sizes: 500, 1000, 2500, 5000) and the patch dimensions (11x11, 17x17, 21x21, 27x27, 31x31) affect the performance on the validation set. [10]
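The exploration in d) is naturally a nested loop; in this sketch, create_training_set and sub_sample are the coursework helpers, but their exact call signatures and the evaluate_on_validation helper are assumptions for illustration:

```python
from sklearn.ensemble import RandomForestClassifier

results = {}
for n_samples in [500, 1000, 2500, 5000]:
    for patch in [11, 17, 21, 27, 31]:
        X_tr, y_tr = create_training_set(train_images, patch_size=patch)
        X_tr, y_tr = sub_sample(X_tr, y_tr, n=n_samples)  # rebalance to ~50-50
        rf = RandomForestClassifier(n_estimators=200, n_jobs=-1).fit(X_tr, y_tr)
        results[(n_samples, patch)] = evaluate_on_validation(rf, patch)
```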
e) As shown in the lectures, post-process your prediction using morphological operations and filters to achieve a better segmentation result. (Hint: some morphological operations are implemented in skimage.morphology; link.) Report how your post-processing influences your DICE score on the validation data. [5]
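A typical clean-up chain for e) (one option among many; the size thresholds are arbitrary illustration values, and pred_mask stands for a raw 128x128 prediction):

```python
from skimage import morphology

mask = pred_mask.astype(bool)
mask = morphology.remove_small_objects(mask, min_size=64)     # drop small islands
mask = morphology.remove_small_holes(mask, area_threshold=64) # fill small holes
mask = morphology.binary_opening(mask, morphology.disk(2))    # smooth the outline
```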
f) Using your best combination of training data size and patch dimension (from d) and post-processing methods (from e), estimate the performance on unseen samples from the test set. Display the original image, the ground truth, and your segmentations for any 5 images in your test set. Provide the average DICE coefficient for the entire test set. [5]
