BraTS literature reading (Lee)
BraTS 18 leaderboard
1. 3D MRI Brain Tumor Segmentation Using Autoencoder Regularization (code)
Things new:
Due to a limited training dataset size, a variational auto-encoder branch is added to reconstruct the input image itself in order to regularize the shared decoder and impose additional constraints on its layers.
Methods:
Things we learn:
Adding the auto-encoder branch provides additional guidance and regularization to the encoder, since the training dataset size is limited. We follow the variational auto-encoder (VAE) approach to better cluster/group the features at the encoder endpoint.
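A minimal sketch of the paper's combined objective, where the VAE branch contributes an L2 reconstruction term and a KL term on top of the Dice loss (the 0.1 weights are the values reported in the paper; the NumPy arrays here are stand-ins for network outputs, and the function names are illustrative):

```python
import numpy as np

def kl_term(mu, log_var):
    """KL divergence between N(mu, sigma^2) and N(0, 1), averaged over the
    latent dimensions (mu, log_var come from the VAE bottleneck)."""
    n = mu.size
    return np.sum(mu**2 + np.exp(log_var) - log_var - 1.0) / n

def l2_term(recon, image):
    """Mean squared reconstruction error of the VAE decoder branch."""
    return np.mean((recon - image) ** 2)

def total_loss(l_dice, recon, image, mu, log_var, w_l2=0.1, w_kl=0.1):
    """L = L_dice + 0.1 * L_L2 + 0.1 * L_KL."""
    return l_dice + w_l2 * l2_term(recon, image) + w_kl * kl_term(mu, log_var)
```

Because the reconstruction and KL terms only involve the shared encoder, they act purely as a regularizer; at inference time the VAE branch is discarded.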
For normalization, we use Group Normalization (GN), which performs better than BatchNorm when the batch size is small.
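Group Normalization can be sketched in NumPy as follows: statistics are computed per sample and per channel group, so (unlike BatchNorm) they do not depend on the batch size. This sketch omits the learnable scale/shift parameters of the full layer:

```python
import numpy as np

def group_norm(x, num_groups=8, eps=1e-5):
    """Group Normalization over an (N, C, ...) tensor.

    Channels are split into `num_groups` groups; mean and variance are
    computed per sample per group over all spatial positions.
    """
    n, c = x.shape[:2]
    assert c % num_groups == 0, "channels must divide evenly into groups"
    g = x.reshape(n, num_groups, c // num_groups, *x.shape[2:])
    axes = tuple(range(2, g.ndim))          # reduce over channels-in-group + space
    mean = g.mean(axis=axes, keepdims=True)
    var = g.var(axis=axes, keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    return g.reshape(x.shape)
```

With batch size 1 (as used in this paper), BatchNorm statistics would be extremely noisy; GN sidesteps this entirely.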
We have also experimented with more sophisticated data augmentation techniques, including random histogram matching, affine image transforms, and random image filtering, which did not demonstrate any additional improvements.
We also use spatial dropout with a rate of 0.2 after the initial encoder convolution. We experimented with other placements of the dropout (including placing a dropout layer after each convolution), but did not find any additional accuracy improvements.
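Spatial dropout zeroes entire feature maps rather than individual voxels, which is more effective for convolutional features whose neighboring voxels are strongly correlated. A NumPy sketch of the training-time behavior (inverted dropout; the function name and fixed seed are illustrative):

```python
import numpy as np

def spatial_dropout(x, rate=0.2, rng=None):
    """Channel-wise dropout for an (N, C, ...) tensor.

    One Bernoulli draw per channel: a dropped channel is zeroed everywhere,
    and surviving channels are scaled by 1 / (1 - rate) so the expected
    activation is unchanged (inverted dropout).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n, c = x.shape[:2]
    keep = rng.random((n, c)) >= rate
    mask = keep.reshape(n, c, *([1] * (x.ndim - 2)))
    return x * mask / (1.0 - rate)
```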
We tried several data post-processing techniques to fine-tune the segmentation predictions with CRF [14], but did not find it beneficial (it helped for some images, but made other images' segmentation results worse).
Increasing the network depth further did not improve the performance, but increasing the network width (the number of features/filters) consistently improved the results.
Result:
Problems:
The model is very large: training takes 2 days on a V100 32 GB GPU with batch size 1 (300 epochs).
Note:
The additional VAE branch helped to regularize the shared encoder (in the presence of limited data), which not only improved performance but also helped to consistently reach good training accuracy for any random initialization.
2. No New-Net (code)
Things new:
Focuses on the training process, arguing that a well-trained U-Net is hard to beat.
Incorporates additional measures such as region-based training, additional training data, a simple post-processing technique, and a combination of loss functions.
Optimizes the training procedure to maximize performance.
It uses instance normalization [23] and leaky ReLU nonlinearities and reduces the number of feature maps before upsampling.
Uses a soft Dice loss for training the network.
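The soft Dice loss can be sketched for a single class as follows (a NumPy stand-in operating on predicted probabilities; the paper uses a multi-class variant averaged over the tumor regions):

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-5):
    """Soft Dice loss for one foreground class.

    probs:  predicted foreground probabilities (same shape as target)
    target: binary ground-truth mask
    Returns 1 - Dice, so 0 means a perfect overlap.
    """
    intersection = np.sum(probs * target)
    denom = np.sum(probs) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)
```

Because the loss is computed over the overlap ratio rather than per-voxel, it is largely insensitive to the huge background/tumor class imbalance, which is why it is popular for BraTS.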
Methods:
Things we learn:
With MRI intensity values being non-standardized, normalization is critical to allow data from different institutions, scanners, and acquisition protocols to be processed by a single algorithm.
We normalize each modality of each patient independently by subtracting the mean and dividing by the standard deviation of the brain region. The region outside the brain is set to 0. As opposed to normalizing the entire image including the background, this strategy yields comparable intensity values within the brain region irrespective of the size of the background region around it.
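The per-modality normalization described above can be sketched as follows (`normalize_modality` and the small epsilon guard against a zero std are illustrative names/choices, not from the paper):

```python
import numpy as np

def normalize_modality(volume, brain_mask):
    """Z-score one MRI modality using only brain voxels.

    Mean and std are computed over the brain region (brain_mask == True);
    the background is then set to 0, so background size does not affect
    the intensity statistics inside the brain.
    """
    brain = volume[brain_mask]
    out = (volume - brain.mean()) / (brain.std() + 1e-8)
    out[~brain_mask] = 0.0
    return out
```

Each of the four BraTS modalities (T1, T1ce, T2, FLAIR) would be normalized independently with this function before being stacked into the network input.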
To reduce false positives, we replace all enhancing-tumor voxels with necrosis if the total number of predicted enhancing-tumor voxels is less than a threshold.
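A sketch of that post-processing step, assuming the usual BraTS label convention (1 = necrosis, 4 = enhancing tumor); the threshold value here is purely illustrative, not the one tuned in the paper:

```python
import numpy as np

ENHANCING, NECROSIS = 4, 1  # assumed BraTS label IDs

def suppress_small_enhancing(seg, threshold=500):
    """Relabel all enhancing-tumor voxels as necrosis when fewer than
    `threshold` of them are predicted.

    Rationale: low-grade gliomas may contain no enhancing tumor at all,
    and a handful of spurious enhancing voxels yields a Dice of 0 for
    that case, so removing tiny predictions improves the mean score.
    """
    seg = seg.copy()
    enh = seg == ENHANCING
    if enh.sum() < threshold:
        seg[enh] = NECROSIS
    return seg
```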
Downsides of the Dice loss.
Result:
Problems:
Class imbalance. Small dataset, so overfitting must be avoided. Could we use a cascade model? The ground-truth regions are nested (one GT region lies inside another).
3. Ensembles of Densely-Connected CNNs with Label-Uncertainty for Brain Tumor Segmentation(code)
Things new:
- Densely connected blocks of dilated convolutions are embedded in a shallow U-net-style structure of down/upsampling and skip connections.
- Newly designed loss functions that model label noise and uncertainty: a label-uncertainty loss and focal loss.
Methods:
Things we learn:
- Design new loss function for problem
- The raw values of MRI sequences cannot be compared across scanners and sequences, so homogenization across the training examples is necessary. In addition, learning in CNNs proceeds best when the inputs are standardized (i.e. zero mean and unit variance). To this end, the nonzero intensities in the training, validation, and testing sets were standardized, across individual volumes rather than across the training set.
- The results of this skull-stripping vary; some examples have remnants of the dura or optic nerves. (To combat this, a cascade of networks was used: the first segments the parenchyma from the poorly skull-stripped images, and a second network then identifies the tumor compartments as above.)
Result:
Problems:
- Uses focal loss to handle imbalanced data.
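The standard binary focal loss (Lin et al.) can be sketched as below; the `gamma`/`alpha` defaults are the commonly used values, not ones taken from this BraTS paper:

```python
import numpy as np

def focal_loss(probs, target, gamma=2.0, alpha=0.25, eps=1e-8):
    """Binary focal loss: (1 - p_t)^gamma down-weights easy, well-classified
    voxels so the rare tumor classes dominate the gradient."""
    probs = np.clip(probs, eps, 1.0 - eps)
    p_t = np.where(target == 1, probs, 1.0 - probs)        # prob of the true class
    alpha_t = np.where(target == 1, alpha, 1.0 - alpha)    # class weighting
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))
```

With gamma = 0 and alpha = 0.5 this reduces to (half the) ordinary cross-entropy; increasing gamma progressively suppresses the contribution of the abundant, easy background voxels.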
4. Learning Contextual and Attentive Information for Brain Tumor Segmentation (One-pass Multi-task Networks with Cross-task Guided Attention for Brain Tumor Segmentation)
BraTS 17
1. Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation
2. Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks
3. Brain Tumor Segmentation and Radiomics Survival Prediction: Contribution to the BRATS 2017 Challenge