BraTS literature reading (Lee)

Posted by qq_37735698 on 2020-11-04

BraTS 18 leaderboard

1. 3D MRI Brain Tumor Segmentation Using Autoencoder Regularization (code)

Things new: 

Due to a limited training dataset size, a variational auto-encoder branch is added to reconstruct the input image itself in order to regularize the shared decoder and impose additional constraints on its layers.

Methods:

Things we learn:

The auto-encoder branch is added to provide additional guidance and regularization to the encoder part, since the training dataset size is limited. We follow the variational auto-encoder (VAE) approach to better cluster/group the features of the encoder endpoint.
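
A minimal sketch of the combined training loss, assuming the 0.1 weights reported in the paper; the function and argument names are illustrative:

```python
import torch

def vae_regularized_loss(l_dice, x, x_recon, mu, logvar, w=0.1):
    """Sketch of L = L_dice + 0.1 * L_L2 + 0.1 * L_KL from the paper."""
    n_vox = x.numel()                                   # total number of image voxels
    l_l2 = torch.mean((x - x_recon) ** 2)               # VAE branch reconstructs the input
    l_kl = torch.sum(mu ** 2 + logvar.exp() - logvar - 1.0) / n_vox  # KL against N(0, I)
    return l_dice + w * l_l2 + w * l_kl
```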

For normalization, we use Group Normalization (GN), which performs better than BatchNorm when the batch size is small.
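
In PyTorch this is a drop-in swap; the group size of 8 follows the paper, while the channel count is illustrative:

```python
import torch.nn as nn

# GroupNorm computes statistics per sample over channel groups,
# so it does not degrade at batch size 1 the way BatchNorm does.
norm = nn.GroupNorm(num_groups=8, num_channels=32)  # replaces nn.BatchNorm3d(32)
```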

We have also experimented with more sophisticated data augmentation techniques, including random histogram matching, affine image transforms, and random image filtering, which did not demonstrate any additional improvements.

We also use spatial dropout with a rate of 0.2 after the initial encoder convolution. We experimented with other placements of the dropout (including a dropout layer after each convolution) but did not find any additional accuracy improvements.
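
In PyTorch, spatial dropout for volumetric feature maps is `nn.Dropout3d`:

```python
import torch.nn as nn

# Spatial dropout zeroes entire feature maps rather than individual voxels;
# the paper places it once, after the initial encoder convolution.
drop = nn.Dropout3d(p=0.2)
```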

We have tried several post-processing techniques to fine-tune the segmentation predictions with CRF [14], but did not find it beneficial (it helped for some images but made the segmentation of others worse).

Increasing the network depth further did not improve the performance, but increasing the network width (the number of features/filters) consistently improved the results.

Result:

Problems:

The model is very large: training takes 2 days on a V100 32GB GPU with batch size 1 (300 epochs).

Note:

The additional VAE branch helped to regularize the shared encoder (in the presence of limited data), which not only improved the performance but also helped to consistently achieve good training accuracy for any random initialization.

 

2. No New-Net (code)

Things new: 

Focuses on the training process, arguing that a well-trained U-Net is hard to beat.

Incorporates additional measures such as region-based training, additional training data, a simple post-processing technique, and a combination of loss functions.

Optimizes the training procedure to maximize performance.

It uses instance normalization [23] and leaky ReLU nonlinearities, and reduces the number of feature maps before upsampling.
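
A sketch of the corresponding conv block, assuming 3x3x3 convolutions and PyTorch's default leaky slope; this is illustrative, not the paper's exact configuration:

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Basic U-Net unit here: Conv3d -> InstanceNorm -> LeakyReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.LeakyReLU(negative_slope=0.01, inplace=True),
    )
```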

Uses a soft Dice loss for training the network.
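
A minimal multi-class soft Dice sketch; the smoothing epsilon and per-class averaging are common choices, not necessarily the paper's exact formulation:

```python
import torch

def soft_dice_loss(probs, target, eps=1e-5):
    """probs and target have shape (batch, classes, D, H, W); target is one-hot."""
    dims = (0, 2, 3, 4)                               # sum over batch and spatial dims
    intersect = (probs * target).sum(dim=dims)
    denom = probs.sum(dim=dims) + target.sum(dim=dims)
    dice = (2.0 * intersect + eps) / (denom + eps)    # per-class soft Dice
    return 1.0 - dice.mean()
```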

Methods:

Things we learn:

With MRI intensity values being non-standardized, normalization is critical to allow data from different institutes, scanners, and acquisition protocols to be processed by a single algorithm.

We normalize each modality of each patient independently by subtracting the mean and dividing by the standard deviation of the brain region. The region outside the brain is set to 0. As opposed to normalizing the entire image including the background, this strategy yields comparable intensity values within the brain region irrespective of the size of the background region around it.
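
A numpy sketch of this scheme, assuming `brain_mask` is a boolean array marking the brain region:

```python
import numpy as np

def normalize_modality(img, brain_mask):
    """Z-score one modality using brain-region statistics; background stays 0."""
    vals = img[brain_mask]
    out = np.zeros_like(img, dtype=np.float32)
    out[brain_mask] = (vals - vals.mean()) / (vals.std() + 1e-8)
    return out
```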

To remove false positives, all enhancing tumor voxels are replaced with necrosis if the total number of predicted enhancing tumor voxels is less than some threshold.
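
A sketch of that rule; the label codes follow the usual BraTS convention and the threshold is a placeholder, not the paper's tuned value:

```python
import numpy as np

ENHANCING, NECROSIS = 4, 1                     # usual BraTS label codes (assumed here)

def suppress_small_enhancing(seg, min_voxels=500):   # hypothetical threshold
    """Relabel enhancing tumor as necrosis when the predicted region is tiny."""
    if (seg == ENHANCING).sum() < min_voxels:
        seg[seg == ENHANCING] = NECROSIS
    return seg
```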

Downsides of the Dice loss.

Result:

Problems:

Class imbalance. Small dataset, so overfitting must be avoided. Could we use a cascaded model, since one ground-truth region is nested inside another?

 

3. Ensembles of Densely-Connected CNNs with Label-Uncertainty for Brain Tumor Segmentation (code)

Things new:

  • Densely connected blocks of dilated convolutions are embedded in a shallow U-net-style structure of down/upsampling and skip connections.
  • Newly designed loss functions which model label noise and uncertainty: a label-uncertainty loss and the focal loss (see the sketch after this list).
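
A minimal sketch of the standard focal loss (Lin et al.) on per-voxel class logits; the paper's label-uncertainty variant is not reproduced here:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0):
    """Down-weights well-classified voxels by (1 - p_t)^gamma."""
    logp = F.log_softmax(logits, dim=1)                      # logits: (N, C)
    logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)  # target: (N,) class ids
    p_t = logp_t.exp()
    loss = -((1.0 - p_t) ** gamma) * logp_t
    return loss.mean()
```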

Methods:

 

Things we learn:

  • Design new loss function for problem
  • The raw values of MRI sequences cannot be compared across scanners and sequences, so a homogenization is necessary across the training examples. In addition, learning in CNNs proceeds best when the inputs are standardized (i.e. zero mean and unit variance). To this end, the nonzero intensities in the training, validation, and testing sets were standardized, this being done across individual volumes rather than across the training set.
  • The results of this skull-stripping vary. Other examples have remnants of the dura or optic nerves. (To combat this effect, we used a cascade of networks to first segment the parenchyma from the poorly skull-stripped images, followed by a second network which identifies the tumor compartments as above.)

Result:

Problems:

  • Use focal loss to handle imbalanced data

 

4. Learning Contextual and Attentive Information for Brain Tumor Segmentation (One-pass Multi-task Networks with Cross-task Guided Attention for Brain Tumor Segmentation)

 

BraTS 17

1. Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation

 

2. Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks

 

3. Brain Tumor Segmentation and Radiomics Survival Prediction: Contribution to the BRATS 2017 Challenge

 
