nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation

References

  1. Falk, T. et al. U-net: deep learning for cell counting, detection, and morphometry. Nat. Methods 16, 67–70 (2019).

  2. Hollon, T. C. et al. Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Nat. Med. 26, 52–58 (2020).

  3. Aerts, H. J. W. L. et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat. Commun. 5, 4006 (2014).

  4. Nestle, U. et al. Comparison of different methods for delineation of 18F-FDG PET-positive tissue for target volume definition in radiotherapy of patients with non-small cell lung cancer. J. Nucl. Med. 46, 1342–1348 (2005).

  5. De Fauw, J. et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat. Med. 24, 1342–1350 (2018).

  6. Bernard, O. et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Trans. Med. Imaging 37, 2514–2525 (2018).

  7. Nikolov, S. et al. Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy. Preprint at https://arxiv.org/abs/1809.04430 (2018).

  8. Kickingereder, P. et al. Automated quantitative tumour response assessment of MRI in neuro-oncology with artificial neural networks: a multicentre, retrospective study. Lancet Oncol. 20, 728–740 (2019).

  9. Maier-Hein, L. et al. Why rankings of biomedical image analysis competitions should be interpreted with care. Nat. Commun. 9, 5217 (2018).

  10. Litjens, G. et al. A survey on deep learning in medical image analysis. Med. Image Anal. 42, 60–88 (2017).

  11. LeCun, Y. 1.1 deep learning hardware: past, present, and future. In 2019 IEEE International Solid-State Circuits Conference 12–19 (IEEE, 2019).

  12. Hutter, F., Kotthoff, L. & Vanschoren, J. Automated Machine Learning: Methods, Systems, Challenges. (Springer Nature, 2019).

  13. Bergstra, J. & Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 13, 281–305 (2012).

  14. Simpson, A. L. et al. A large annotated medical image dataset for the development and evaluation of segmentation algorithms. Preprint at https://arxiv.org/abs/1902.09063 (2019).

  15. Ronneberger, O., Fischer, P. & Brox, T. U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (eds. Navab, N. et al.) 234–241 (Springer, 2015).

  16. Landman, B. et al. MICCAI multi-atlas labeling beyond the cranial vault—workshop and challenge. https://doi.org/10.7303/syn3193805 (2015).

  17. Litjens, G. et al. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge. Med. Image Anal. 18, 359–373 (2014).

  18. Bilic, P. et al. The liver tumor segmentation benchmark (LiTS). Preprint at https://arxiv.org/abs/1901.04056 (2019).

  19. Carass, A. et al. Longitudinal multiple sclerosis lesion segmentation: resource and challenge. NeuroImage 148, 77–102 (2017).

  20. Kavur, A. E. et al. CHAOS challenge—combined (CT–MR) healthy abdominal organ segmentation. Preprint at https://arxiv.org/abs/2001.06535 (2020).

  21. Heller, N. et al. The KiTS19 challenge data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes. Preprint at https://arxiv.org/abs/1904.00445 (2019).

  22. Lambert, Z., Petitjean, C., Dubray, B. & Ruan, S. SegTHOR: segmentation of thoracic organs at risk in CT images. Preprint at https://arxiv.org/abs/1912.05950 (2019).

  23. Maška, M. et al. A benchmark for comparison of cell tracking algorithms. Bioinformatics 30, 1609–1617 (2014).

  24. Ulman, V. et al. An objective comparison of cell-tracking algorithms. Nat. Methods 14, 1141–1152 (2017).

  25. Heller, N. et al. The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: results of the KiTS19 challenge. Med. Image Anal. 67, 101821 (2021).

  26. Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T. & Ronneberger, O. 3D U-net: learning dense volumetric segmentation from sparse annotation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (eds. Ourselin, S. et al.) 424–432 (Springer, 2016).

  27. Milletari, F., Navab, N. & Ahmadi, S.-A. V-net: fully convolutional neural networks for volumetric medical image segmentation. In International Conference on 3D Vision (3DV) 565–571 (IEEE, 2016).

  28. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 770–778 (IEEE, 2016).

  29. Jégou, S., Drozdzal, M., Vazquez, D., Romero, A. & Bengio, Y. The one hundred layers tiramisu: fully convolutional DenseNets for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops 11–19 (IEEE, 2017).

  30. Huang, G., Liu, Z., van der Maaten, L. & Weinberger, K. Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 4700–4708 (IEEE, 2017).

  31. Oktay, O. et al. Attention U-net: learning where to look for the pancreas. Preprint at https://arxiv.org/abs/1804.03999 (2018).

  32. Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K. & Yuille, A. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848 (2017).

  33. McKinley, R., Meier, R. & Wiest, R. Ensembles of densely-connected CNNs with label-uncertainty for brain tumor segmentation. In International MICCAI Brain Lesion Workshop (eds. Crimi, A. et al.) 456–465 (Springer, 2018).

  34. Heinrich, L., Funke, J., Pape, C., Nunez-Iglesias, J. & Saalfeld, S. Synaptic cleft segmentation in non-isotropic volume electron microscopy of the complete Drosophila brain. In International Conference on Medical Image Computing and Computer-Assisted Intervention (eds. Frangi, A.F. et al.) 317–325 (Springer, 2018).

  35. Nolden, M. et al. The Medical Imaging Interaction Toolkit: challenges and advances. Int. J. Comput. Assist. Radiol. Surg. 8, 607–620 (2013).

  36. Castilla, C., Maška, M., Sorokin, D. V., Meijering, E. & Ortiz-de-Solórzano, C. 3-D quantification of filopodia in motile cancer cells. IEEE Trans. Med. Imaging 38, 862–872 (2018).

  37. Sorokin, D. V. et al. FiloGen: a model-based generator of synthetic 3-D time-lapse sequences of single motile cells with growing and branching filopodia. IEEE Trans. Med. Imaging 37, 2630–2641 (2018).

  38. Menze, B. H. et al. The Multimodal Brain Tumor Image Segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 34, 1993–2024 (2014).

  39. Svoboda, D. & Ulman, V. MitoGen: a framework for generating 3D synthetic time-lapse sequences of cell populations in fluorescence microscopy. IEEE Trans. Med. Imaging 36, 310–321 (2016).

  40. Wu, Z., Shen, C. & van den Hengel, A. Bridging category-level and instance-level semantic image segmentation. Preprint at https://arxiv.org/abs/1605.06885 (2016).

  41. He, K., Zhang, X., Ren, S. & Sun, J. Identity mappings in deep residual networks. In European Conference on Computer Vision (eds. Sebe, N. et al.) 630–645 (Springer, 2016).

  42. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. In 3rd International Conference on Learning Representations (eds. Bengio, Y. & LeCun, Y.) (ICLR, 2015).

  43. Ioffe, S. & Szegedy, C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In Proceedings of Machine Learning Research Vol. 37 (eds. Bach, F. & Blei, D.) 448–456 (PMLR, 2015).

  44. Ulyanov, D., Vedaldi, A. & Lempitsky, V. Instance normalization: the missing ingredient for fast stylization. Preprint at https://arxiv.org/abs/1607.08022 (2016).

  45. Wiesenfarth, M. et al. Methods and open-source toolkit for analyzing and visualizing challenge results. Preprint at https://arxiv.org/abs/1910.05121 (2019).

  46. Hu, J., Shen, L. & Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 7132–7141 (IEEE, 2018).

  47. Wu, Y. & He, K. Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV) (eds. Leal-Taixé, L. & Roth, S.) 3–19 (ECCV, 2018).

  48. Singh, S. & Krishnan, S. Filter response normalization layer: eliminating batch dependence in the training of deep neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 11237–11246 (CVPR, 2020).

  49. Maas, A. L., Hannun, A. Y. & Ng, A. Y. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the International Conference on Machine Learning Vol. 30 (eds. Dasgupta, S. & McAllester, D.) 3 (ICML, 2013).

  50. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2818–2826 (IEEE, 2016).

  51. Drozdzal, M., Vorontsov, E., Chartrand, G., Kadoury, S. & Pal, C. The importance of skip connections in biomedical image segmentation. In Deep Learning and Data Labeling for Medical Applications (eds. Carneiro, G. et al.) 179–187 (Springer, 2016).

  52. Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (eds. Wallach, H. et al.) 8024–8035 (NeurIPS, 2019).

  53. Isensee, F. et al. Batchgenerators—a Python framework for data augmentation. Zenodo https://doi.org/10.5281/zenodo.3632567 (2020).

FAQs

Why is U-Net used for medical image segmentation?

The U-Net architecture produces a prediction for every pixel, so it can localize structures within an image rather than merely assign the image a single label. It also achieves good performance across very different biomedical segmentation applications, even when annotated training data are limited.

What is the U-Net method?

The U-Net architecture stems from the "fully convolutional network" proposed by Long, Shelhamer and Darrell in 2014. The main idea is to supplement a usual contracting network with successive layers in which pooling operations are replaced by upsampling operators, so that the output regains the resolution of the input.

Is U-Net good for segmentation?

Yes. What makes U-Net effective at image segmentation are its skip connections and its decoder network. The contracting path on its own is similar to any CNN; the skip connections, which carry high-resolution encoder features directly to the decoder, and the upsampling decoder are what separate the U-Net architecture from other CNNs (see the sketch below).
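Below is a minimal, illustrative PyTorch sketch of a two-level U-Net. It is not the implementation from the paper; the layer widths, depth and two-class output head are arbitrary choices for this example.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions, each followed by a ReLU nonlinearity.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net: encoder, bottleneck and decoder with skip connections."""

    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)  # 64 upsampled + 64 skip channels in
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)   # 32 upsampled + 32 skip channels in
        self.head = nn.Conv2d(32, num_classes, 1)  # per-pixel class scores

    def forward(self, x):
        s1 = self.enc1(x)                   # skip features, full resolution
        s2 = self.enc2(self.pool(s1))       # skip features, 1/2 resolution
        b = self.bottleneck(self.pool(s2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), s2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), s1], dim=1))  # skip connection
        return self.head(d1)

logits = TinyUNet()(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64]): one prediction per pixel
```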

What is nnU-Net?

nnU-Net is a semantic segmentation method that automatically adapts to a given dataset. It analyzes the provided training cases and automatically configures a matching U-Net-based segmentation pipeline, with no manual architecture or hyperparameter tuning required; a simplified sketch of this self-configuration idea follows.
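To illustrate the self-configuration idea only: the toy function below derives a 3D patch size from a dataset "fingerprint" using made-up heuristics. The function name, the fixed voxel budget and the rules themselves are hypothetical and far simpler than the ones nnU-Net actually applies.

```python
import numpy as np

def configure_patch_size(median_shape, voxel_budget=128 ** 3):
    """Toy stand-in for nnU-Net-style self-configuration (hypothetical rules).

    Starts from the dataset's median image shape and shrinks the largest
    axis until the patch fits a fixed voxel budget, then rounds each axis
    down to a multiple of 8 so that three 2x downsampling steps fit.
    """
    patch = np.array(median_shape, dtype=int)
    while np.prod(patch) > voxel_budget:
        patch[np.argmax(patch)] = patch[np.argmax(patch)] * 3 // 4
    patch = np.maximum(patch // 8 * 8, 8)
    return tuple(int(v) for v in patch)

# Example: a dataset whose median CT volume is 512 x 512 x 160 voxels.
print(configure_patch_size((512, 512, 160)))
```

The real method derives many more settings in this spirit (target spacing, network depth, batch size and so on) from the dataset fingerprint.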

What are the disadvantages of U-Net?

A large number of parameters: U-Net has many parameters, because the concatenating skip connections widen the inputs to the decoder convolutions and the expanding path adds further layers. This can make the model prone to overfitting, especially when working with small datasets.

What is the difference between CNN and U-Net?

A typical classification CNN compresses the image into a feature vector, which is what classification problems need. U-Net also compresses the image into a compact representation, but then a mirrored decoding path converts that representation back into a full-resolution image, which limits distortion by preserving the spatial structure of the input. The snippet below contrasts the two output shapes.
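A minimal sketch of the contrast in output shapes; the layer sizes and 10-class heads are arbitrary, and the one-convolution "segmenter" merely stands in for a full encoder-decoder such as the TinyUNet above.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 64, 64)  # one RGB image

# Classification CNN: global pooling collapses space into a single vector.
classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),                 # 10 scores for the whole image
)
print(classifier(x).shape)             # torch.Size([1, 10])

# Segmentation network: the output keeps the input's spatial dimensions.
segmenter = nn.Conv2d(3, 10, kernel_size=1)
print(segmenter(x).shape)              # torch.Size([1, 10, 64, 64])
```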

What are the advantages of using U-Net?

One advantage is that it can capture both coarse and fine feature information, leading to improved segmentation performance. Additionally, using a parallel U-Net architecture with a residual network can enhance the features of the segmented image through skip connections, further improving accuracy.

When to use U-Net?

U-Net is frequently chosen for its accuracy in image segmentation and has become a popular choice in many medical imaging applications. It combines an encoding path, also called the contracting path, with a decoding path, called the expanding path.

Is U-Net supervised or unsupervised?

U-Net is not a model but rather an architecture. It can be used in both supervised models (such as CNNs trained for segmentation on labeled masks) and unsupervised ones (such as GANs).

Why is U-Net so powerful?

Its encoder-decoder design with skip connections combines semantic and spatial information, and variants extend this strength. UNet++, for example, uses nested skip pathways to improve the aggregation of features from different levels of the network, enabling the model to handle objects and details at multiple scales and making it particularly effective at segmenting objects of different sizes.

What is better than U-Net?

For speed, lighter architectures can beat it: ENet and BoxENet have been reported to achieve computational performance up to 10 to 15 times higher than U-Net, typically at some cost in accuracy.

Can U-Net be used for image classification?

U-Net was developed for the task of semantic segmentation: it classifies objects pixel by pixel rather than assigning a single label to the whole image. Its contracting (encoder) path can, however, be reused as a feature extractor for whole-image classification.

What is U-Net in deep learning?

U-Net is a widely used deep learning architecture that was first introduced in the “U-Net: Convolutional Networks for Biomedical Image Segmentation” paper. The primary purpose of this architecture was to address the challenge of limited annotated data in the medical field.

Why is U-Net used for diffusion?

The network used in the paper Denoising Diffusion Probabilistic Models (DDPM) is based on the U-Net architecture. The U-Net takes an image as input, encodes it to a compressed hidden representation, and then decodes the compressed information back into an image, which fits a diffusion model's need to map an image-shaped input (the noised sample) to an image-shaped output (the predicted noise). A sketch of one training step follows.
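A minimal sketch of a DDPM-style training loss, assuming some image-to-image network net(x, t) (for example a time-conditioned U-Net) and the standard forward process x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps; everything here is illustrative rather than the DDPM reference code.

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal fractions

def ddpm_loss(net, x0):
    """Train net to predict the noise mixed into x0 at a random timestep."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps  # noised image
    return F.mse_loss(net(x_t, t), eps)           # noise-prediction objective

# 'net' must map (noised image, timestep) to a same-shaped noise estimate;
# this identity stand-in is only here so the sketch runs end to end.
net = lambda x, t: x
print(ddpm_loss(net, torch.randn(4, 3, 32, 32)))
```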

What is the difference between FCN and U-Net?

U-Net combines the strengths of traditional FCNs with additional features that make it more effective for image segmentation tasks. The key differences between the two models are the symmetry of the encoder and decoder portions of the network and the skip connections between them, which fuse features by concatenation in U-Net rather than by addition as in the original FCN; the snippet below shows the two fusion styles.
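A small sketch of the fusion difference, assuming both feature maps have already been brought to the same spatial size and channel count:

```python
import torch

decoder_feat = torch.randn(1, 64, 32, 32)  # upsampled decoder features
encoder_feat = torch.randn(1, 64, 32, 32)  # skip features from the encoder

# FCN-style fusion: element-wise addition, channel count unchanged.
fcn_fused = decoder_feat + encoder_feat                      # (1, 64, 32, 32)

# U-Net-style fusion: channel concatenation, later mixed by convolutions.
unet_fused = torch.cat([decoder_feat, encoder_feat], dim=1)  # (1, 128, 32, 32)
```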
