
MaxStyle: Adversarial Style Composition for Robust Medical Image Segmentation

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 (MICCAI 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13435)


Abstract

Convolutional neural networks (CNNs) have achieved remarkable segmentation accuracy on benchmark datasets where training and test sets are from the same domain, yet their performance can degrade significantly on unseen domains, which hinders the deployment of CNNs in many clinical scenarios. Most existing works improve model out-of-domain (OOD) robustness by collecting multi-domain datasets for training, which is expensive and may not always be feasible due to privacy and logistical issues. In this work, we focus on improving model robustness using a single-domain dataset only. We propose a novel data augmentation framework, MaxStyle, which maximizes the effectiveness of style augmentation for model OOD performance. It attaches an auxiliary style-augmented image decoder to a segmentation network for robust feature learning and data augmentation. Importantly, MaxStyle augments data with improved image style diversity and hardness by expanding the style space with noise and searching for the worst-case style composition of latent features via adversarial training. Through extensive experiments on multiple public cardiac and prostate MR datasets, we demonstrate that MaxStyle significantly improves out-of-distribution robustness against unseen corruptions as well as common distribution shifts across multiple, different, unseen sites and unknown image sequences, under both low- and high-training-data settings. The code can be found at https://github.com/cherise215/MaxStyle.
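To make the style-augmentation idea above concrete, the sketch below perturbs the channel-wise feature statistics (the "style") of a CNN feature map by mixing them with another instance's statistics and adding Gaussian noise. This is a minimal, hypothetical sketch, not the authors' implementation (see the linked repository for the actual code): the function name `style_augment` and the noise scale are illustrative choices, and in MaxStyle the mixing and noise variables would additionally be optimized adversarially to maximize the segmentation loss rather than sampled at random.

```python
# Illustrative sketch of noise-perturbed style mixing on CNN feature maps,
# in the spirit of MaxStyle's style augmentation. NOT the authors' code
# (see https://github.com/cherise215/MaxStyle); names and scales are hypothetical.
import torch

def style_augment(feat: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
    """Perturb the per-channel style (mean/std) of a batch of feature maps.

    feat: (B, C, H, W) feature maps from a decoder layer.
    """
    B, C, _, _ = feat.shape
    mu = feat.mean(dim=(2, 3), keepdim=True)            # instance-wise channel mean
    sigma = feat.std(dim=(2, 3), keepdim=True) + 1e-6   # instance-wise channel std
    normed = (feat - mu) / sigma                         # strip the original style

    # Mix styles with a randomly shuffled instance from the same batch (MixStyle-like).
    perm = torch.randperm(B, device=feat.device)
    lam = torch.rand(B, 1, 1, 1, device=feat.device)
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sigma_mix = lam * sigma + (1 - lam) * sigma[perm]

    # Expand the style space with additive Gaussian noise on the mixed statistics.
    mu_mix = mu_mix + noise_std * torch.randn_like(mu_mix)
    sigma_mix = sigma_mix + noise_std * torch.randn_like(sigma_mix)

    return normed * sigma_mix + mu_mix                   # re-stylize the content
```

In the framework described in the abstract, such style-augmented features are passed through the auxiliary image decoder to generate hard, style-diverse training images, with the style variables updated by adversarial training before the segmentation network is trained on the resulting examples.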


Notes

  1. The re-parameterization trick is applied here for ease of follow-up optimization (a minimal sketch follows these notes).

  2. For simplicity, we omit non-learnable parameters such as the sampling operator that chooses instance \(\boldsymbol{x}_j\) from a batch for style mixing.
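For reference, below is a minimal sketch of the re-parameterization trick mentioned in Note 1, assuming Gaussian style noise; the function and variable names are illustrative and not taken from the paper's code base.

```python
# Minimal sketch of the re-parameterization trick (assumed Gaussian noise;
# names are illustrative, not from the MaxStyle code base).
import torch

def sample_style_noise(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """Sample noise ~ N(mu, exp(log_var)) while staying differentiable w.r.t. mu and log_var."""
    eps = torch.randn_like(mu)                   # non-differentiable randomness is isolated in eps
    return mu + torch.exp(0.5 * log_var) * eps   # gradients flow through mu and log_var
```

Because the randomness is confined to eps, gradients of the downstream loss can be back-propagated to the noise parameters, which is what enables the follow-up (adversarial) optimization.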


Acknowledgment

This work was supported by two EPSRC Programme Grants (EP/P001009/1, EP/W01842X/1) and the UKRI Innovate UK Grant (No. 104691).

Author information


Corresponding author

Correspondence to Chen Chen.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1532 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, C., Li, Z., Ouyang, C., Sinclair, M., Bai, W., Rueckert, D. (2022). MaxStyle: Adversarial Style Composition for Robust Medical Image Segmentation. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13435. Springer, Cham. https://doi.org/10.1007/978-3-031-16443-9_15

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-16443-9_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16442-2

  • Online ISBN: 978-3-031-16443-9
