BraTS literature reading (Lee)
BraTS 18 leaderboard
1. 3D MRI Brain Tumor Segmentation Using Autoencoder Regularization (code)
Things new:
Due to a limited training dataset size, a variational auto-encoder branch is added to reconstruct the input image itself in order to regularize the shared decoder and impose additional constraints on its layers.
Methods:
Things we learn:
Adding the auto-encoder branch provides additional guidance and regularization for the encoder, since the training dataset is small. The paper follows the variational auto-encoder (VAE) approach to better cluster/group the features at the encoder endpoint.
For normalization, the paper uses Group Normalization (GN), which performs better than BatchNorm when the batch size is small.
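For reference, GN computes per-sample statistics over channel groups, so it does not depend on the batch size at all. A minimal numpy sketch (the group count and epsilon here are illustrative defaults, not necessarily the paper's settings):

```python
import numpy as np

def group_norm(x, num_groups=8, eps=1e-5):
    """Normalize (N, C, ...) features within channel groups, per sample.

    Unlike BatchNorm, the statistics never mix samples, which is why
    GN stays stable even at batch size 1.
    """
    n, c = x.shape[0], x.shape[1]
    g = x.reshape(n, num_groups, c // num_groups, -1)
    mean = g.mean(axis=(2, 3), keepdims=True)
    var = g.var(axis=(2, 3), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(x.shape)
```

In a real network each group would also get learnable scale/shift parameters; they are omitted here to show only the normalization itself.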
We have also experimented with more sophisticated data augmentation techniques, including random histogram matching, affine image transforms, and random image filtering, which did not demonstrate any additional improvements.
We also use the spatial dropout with a rate of 0.2 after the initial encoder convolution. We have experimented with other placements of the dropout (including placing dropout layer after each convolution), but did not find any additional accuracy improvements.
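Spatial dropout differs from ordinary dropout in that it zeroes entire feature maps rather than individual voxels; a minimal numpy sketch of that behavior (inverted-dropout scaling assumed):

```python
import numpy as np

def spatial_dropout(x, rate=0.2, rng=None):
    """Zero out whole channels of an (N, C, ...) tensor, not single voxels.

    Kept channels are rescaled by 1 / (1 - rate) so the expected
    activation is unchanged (inverted dropout).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, c = x.shape[0], x.shape[1]
    keep = (rng.random((n, c)) >= rate).astype(x.dtype)
    keep = keep.reshape(n, c, *([1] * (x.ndim - 2)))
    return x * keep / (1.0 - rate)
```

Dropping whole channels makes sense for convolutional features, where neighboring voxels within a feature map are strongly correlated, so voxel-wise dropout would barely regularize.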
We have tried several data post-processing techniques to fine tune the segmentation predictions with CRF [14], but did not find it beneficial (it helped for some images, but made some other image segmentation results worse).
Increasing the network depth further did not improve the performance, but increasing the network width (the number of features/filters) consistently improved the results.
Result:
Problems:
The model is very large: training takes about 2 days on an NVIDIA V100 32 GB with batch size 1 (300 epochs).
Note:
The additional VAE branch helped to regularize the shared encoder (in presence of limited data), which not only improved the performance, but helped to consistently achieve good training accuracy for any random initialization
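The overall objective combines the segmentation loss with the two VAE terms; a minimal numpy sketch of its shape, assuming the 0.1 weights on the L2 and KL terms reported in the paper:

```python
import numpy as np

def vae_regularized_loss(pred, target, recon, image, mu, logvar,
                         w_l2=0.1, w_kl=0.1, eps=1e-8):
    """Soft Dice on the segmentation, plus VAE reconstruction and KL terms."""
    # soft Dice loss on the segmentation output
    dice = 1.0 - (2.0 * (pred * target).sum() + eps) / (
        pred.sum() + target.sum() + eps)
    # L2 reconstruction of the input image by the VAE branch
    l2 = ((recon - image) ** 2).mean()
    # KL divergence of the latent distribution against N(0, I)
    kl = 0.5 * (mu ** 2 + np.exp(logvar) - logvar - 1.0).mean()
    return dice + w_l2 * l2 + w_kl * kl
```

The VAE terms act purely as a regularizer at training time; the branch is discarded at inference.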
2. No New-Net (code)
Things new:
Focuses on the training process, arguing that a well-trained U-Net is hard to beat.
Incorporates additional measures such as region-based training, additional training data, a simple postprocessing technique, and a combination of loss functions.
Optimize the training procedure to maximize its performance.
It uses instance normalization [23] and leaky ReLU nonlinearities and reduces the number of feature maps before upsampling.
Uses a soft Dice loss for training the network.
Methods:
Things we learn:
With MRI intensity values being non-standardized, normalization is critical to allow data from different institutes, scanners, and acquisition protocols to be processed by one single algorithm.
We normalize each modality of each patient independently by subtracting the mean and dividing by the standard deviation of the brain region. The region outside the brain is set to 0. As opposed to normalizing the entire image including the background, this strategy will yield comparative intensity values within the brain region irrespective of the size of the background region around it.
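That per-modality, brain-region-only z-scoring can be sketched in a few lines of numpy (the brain mask is assumed to be given, e.g. the nonzero voxels):

```python
import numpy as np

def normalize_modality(vol, brain_mask):
    """Z-score one MRI modality using brain-region statistics only.

    Voxels outside the brain stay exactly 0, so the amount of
    background around the head cannot skew the intensity statistics.
    """
    brain = vol[brain_mask]
    out = np.zeros_like(vol, dtype=np.float64)
    out[brain_mask] = (brain - brain.mean()) / (brain.std() + 1e-8)
    return out
```

Each of the four modalities (T1, T1ce, T2, FLAIR) would be passed through this independently, per patient.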
To reduce false positives, all enhancing-tumor voxels are replaced with necrosis when the total number of predicted enhancing-tumor voxels is below some threshold.
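A sketch of that postprocessing step, assuming the standard BraTS label convention (enhancing tumor = 4, necrosis = 1); the threshold is the tunable part and its value here is illustrative:

```python
import numpy as np

def suppress_small_enhancing(seg, threshold, et_label=4, necrosis_label=1):
    """If the predicted enhancing tumor is implausibly small, relabel it.

    Low-grade gliomas may contain no enhancing tumor at all, so a few
    stray ET voxels are more likely false positives than real ET.
    """
    seg = seg.copy()
    if (seg == et_label).sum() < threshold:
        seg[seg == et_label] = necrosis_label
    return seg
```

This trades a small Dice penalty on true-ET cases against removing the catastrophic Dice = 0 penalty on cases with no ET at all.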
Downsides of the Dice loss (e.g. it becomes unstable when the target structure is very small or absent).
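A minimal soft Dice loss in numpy makes one downside concrete: when the ground truth is empty, even a handful of false-positive voxels pushes the loss all the way to 1, so the per-voxel gradients become extreme for tiny or absent structures.

```python
import numpy as np

def soft_dice_loss(prob, target, eps=1e-8):
    """1 - soft Dice overlap between predicted probabilities and a binary mask."""
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
```

With an empty target, any predicted foreground gives a loss near 1, while a perfectly empty prediction gives 0; this all-or-nothing behavior is one reason Dice is often combined with cross-entropy.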
Result:
Problems:
Class imbalance. Small dataset, so overfitting must be avoided. Could we use a cascade model, since the ground-truth regions are nested (one GT inside another)?
3. Ensembles of Densely-Connected CNNs with Label-Uncertainty for Brain Tumor Segmentation (code)
Things new:
- Densely connected blocks of dilated convolutions are embedded in a shallow U-net-style structure of down/upsampling and skip connections.
- Newly designed loss functions that model label noise and uncertainty: a label-uncertainty loss and focal loss.
Methods:
Things we learn:
- Design a new loss function tailored to the problem.
- The raw values of MRI sequences cannot be compared across scanners and sequences, so a homogenization is necessary across the training examples. In addition, learning in CNNs proceeds best when the inputs are standardized (i.e. mean zero and unit variance). To this end, the nonzero intensities in the training, validation and testing sets were standardized, done per individual volume rather than across the training set.
- The results of this skull-stripping vary; other examples have remnants of the dura or optic nerves. (To combat this effect, a cascade of networks is used: the first segments the parenchyma from the poorly skull-stripped images, and a second network then identifies the tumor compartments as above.)
Result:
Problems:
- Use focal loss to handle imbalanced data
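A minimal binary focal loss in numpy; γ = 2 and α = 0.25 are the common defaults from the original focal-loss paper, not necessarily the values used in this work:

```python
import numpy as np

def focal_loss(prob, target, gamma=2.0, alpha=0.25, eps=1e-8):
    """Cross-entropy scaled by (1 - p_t)**gamma, down-weighting easy voxels.

    Most brain voxels are easy background, so plain cross-entropy is
    dominated by them; the focal term lets rare tumor voxels drive
    the gradient instead.
    """
    p_t = np.where(target == 1, prob, 1.0 - prob)
    a_t = np.where(target == 1, alpha, 1.0 - alpha)
    return float(-(a_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)).mean())
```

Confidently correct voxels contribute almost nothing, which is exactly the property that helps with extreme class imbalance.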
4. Learning Contextual and Attentive Information for Brain Tumor Segmentation (One-pass Multi-task Networks with Cross-task Guided Attention for Brain Tumor Segmentation)
BraTS 17
1. Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation
2. Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks
3. Brain Tumor Segmentation and Radiomics Survival Prediction: Contribution to the BRATS 2017 Challenge