
MIDA - A Multimodal Imaging-Based Model of the Human Head and Neck

The MIDA model is a multimodal imaging-based, detailed anatomical computer model of the human head and neck. The model offers a detailed representation of brain surfaces, meninges, cerebrospinal fluid distribution, eyes, ears, and a number of deep brain structures, as well as several distinct muscles, bones and skull layers, blood vessels, cranial nerves, dental structures, and glands. Organs and tissues of the MIDA model are represented by three-dimensional, highly detailed computer-aided design (CAD) objects in a standardized CAD data format. The individual CAD objects allow meshing at arbitrary resolutions without loss of small features. The MIDA model can be used in any software capable of importing and manipulating CAD data.
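Because the surfaces are distributed as standard CAD objects, they can be inspected programmatically. As a minimal sketch, assuming a surface were exported to ASCII STL (a common triangle-mesh interchange format; the file content below is a toy stand-in, not actual MIDA data), a parser needs only the `facet normal` and `vertex` records:

```python
# Minimal ASCII STL reader (illustrative; MIDA surfaces are assumed to be
# exportable to STL from CAD software -- the sample below is a toy triangle).

def parse_ascii_stl(text):
    """Return (facet_count, sorted unique vertices) from an ASCII STL string."""
    vertices = set()
    facets = 0
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("facet normal"):
            facets += 1
        elif line.startswith("vertex"):
            x, y, z = (float(v) for v in line.split()[1:4])
            vertices.add((x, y, z))
    return facets, sorted(vertices)

# A single-triangle STL, standing in for one exported surface.
SAMPLE = """solid demo
facet normal 0 0 1
  outer loop
    vertex 0 0 0
    vertex 1 0 0
    vertex 0 1 0
  endloop
endfacet
endsolid demo
"""

facets, verts = parse_ascii_stl(SAMPLE)
print(facets, len(verts))  # 1 facet, 3 unique vertices
```

In practice one would use an established mesh library rather than a hand-rolled parser; the point is only that the CAD representation keeps the geometry accessible at full precision, independent of any voxel grid.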

To enhance the visibility of specific tissues, the MIDA model was obtained by integrating different magnetic resonance imaging (MRI) techniques: 1. structural T1- and T2-weighted sequences; 2. a specific T2-weighted sequence with high nerve contrast, optimized to enhance the structures of the ear and eye; 3. magnetic resonance angiography (MRA) to image the vasculature; and 4. diffusion tensor imaging (DTI) to obtain information on tissue anisotropy and fiber orientation. The spatial resolution of the model is 500 μm isotropic.
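The 500 μm isotropic resolution directly determines the size of any rasterized (voxelized) version of the model. A back-of-the-envelope sketch, using hypothetical round-number field-of-view extents (not the model's actual dimensions):

```python
# Sizing a voxelized label map at the stated 500 um isotropic resolution.
# The bounding-box extents below are hypothetical round numbers chosen for
# illustration, not the MIDA model's actual field of view.
RESOLUTION_MM = 0.5  # 500 um isotropic, from the model description
extent_mm = (240.0, 240.0, 350.0)  # hypothetical x, y, z extents

shape = tuple(round(e / RESOLUTION_MM) for e in extent_mm)
n_voxels = shape[0] * shape[1] * shape[2]
# One uint8 label per voxel suffices for the 116 released structures (<= 255).
size_mib = n_voxels / (1024 ** 2)

print(shape, n_voxels, round(size_mib, 1))  # (480, 480, 700) 161280000 153.8
```

Even at these modest extents the label map exceeds 150 MiB, which is why the CAD surface representation, meshable at whatever resolution a given simulation needs, is the distribution format of choice.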

As with the Virtual Family models, the MIDA model can be used for computational modeling studies requiring anatomically correct models of the human body, e.g., for electromagnetic simulations. Of particular interest are computational simulations investigating the safety and efficacy of medical devices in, on, or near the head. For example, the MIDA model was used in case studies involving non-invasive neurostimulation, i.e., transcranial alternating current stimulation (tACS)1, as well as invasive neurostimulation, i.e., deep brain stimulation (DBS)2,3.

The MIDA model includes segmentation of 153 different structures. Please note that the original MRI data will not be released, to protect the privacy of the volunteer who participated in the study, in accordance with Swiss and European law. In addition, the map of 38 atlas-based thalamic nuclei is currently not available for distribution due to third-party license limitations. Except for the aforementioned subset of data, all other data related to the MIDA model will be available free of charge, subject to a license agreement for the protection of privacy. The released MIDA model will include a total of 116 structures (i.e., the 115 structures listed in Table 1 of the manuscript, plus the background).

The determination of absolute anatomical accuracy and precision in the outlining of the anatomical structures had some limitations due to the lack of a segmentation ground truth. Segmentation errors may be due to: i) inadequacy of the images for the visualization of specific details, e.g., the lack of a specific MRI sequence for the basal ganglia or inadequate spatial resolution for the retina/choroid/sclera and small vessels; ii) inaccuracies in the co-registration process used to estimate the alignment between the available image datasets; iii) discrepancies in the definitions of anatomical structures in the available literature, e.g., the division of the brainstem and the outlining of the pons/cerebellum boundary; and iv) the use of a finite number of discrete tissues, while, in reality, tissues show continuous variation and inhomogeneity. Nevertheless, the chosen segmentation strategy integrated both automatic and knowledge-based segmentation. The former helps reduce segmentation time and improves the quality of results in terms of consistency, objectivity, and reproducibility, while the latter minimizes errors resulting from automatic classification by incorporating expert knowledge of the anatomy. Inter-operator variability was assessed and shown to be non-significant, and the segmentation results were reviewed by an expert anatomist. Furthermore, the proposed model is subject-specific and does not take into account inter-individual anatomical variability. Nonetheless, even a single head model featuring a detailed representation of the anatomy is valuable to formulate hypotheses, e.g., on interaction mechanisms, analyze the impact of parameters, or gain additional understanding of physical and physiological processes.
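Inter-operator variability in segmentation is commonly quantified with an overlap metric such as the Dice similarity coefficient; the sketch below shows that computation on toy masks (the manuscript defines the exact assessment protocol actually used, which need not be Dice):

```python
# Dice similarity coefficient between two binary segmentations, each given as
# a set of voxel indices. A standard overlap metric for comparing operators;
# shown here purely as an illustration, with toy 2D masks.

def dice(mask_a, mask_b):
    """Dice coefficient: 2|A & B| / (|A| + |B|), in [0, 1]."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

# Two operators outlining the same (toy) structure on a voxel grid.
operator_1 = {(0, 0), (0, 1), (1, 0), (1, 1)}
operator_2 = {(0, 1), (1, 0), (1, 1), (2, 1)}

print(dice(operator_1, operator_2))  # 0.75 = 2*3 / (4 + 4)
```

A Dice value of 1.0 indicates identical outlines and 0.0 indicates no overlap; for small structures, even minor boundary disagreements between operators can noticeably lower the score, which is one reason expert anatomical review complements the metric.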

[Animation] 3D surface reconstruction of the MIDA head and neck model: the model rotates and zooms in, showing progressively representative anatomical structures including the skin, muscles, skull, vertebrae, vessels, ventricles, and deep brain structures.

[Animation] Adaptive conformal meshes of the MIDA head and neck model: the model rotates and shows progressively representative anatomical structures including the skin, muscles, skull, vertebrae, vessels, ventricles, and deep brain structures.

Contact Information

For inquiries about the MIDA model, please email: MIDAmodel@fda.hhs.gov.


The MIDA model was developed through collaboration with the following institutions:
U.S. Food and Drug Administration (FDA), Silver Spring, MD, U.S.; the Foundation for Research on Information Technologies in Society (IT'IS), Zurich, Switzerland; the Swiss Federal Institute of Technology in Zurich (ETHZ), Zurich, Switzerland; the Institute for Biomedical Engineering (University of Zurich and ETHZ), Zurich, Switzerland; the Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Austria; the Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, U.S.; and the Center for Morphometric Analysis, Department of Psychiatry and Neurology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, U.S. Financial support was provided by the U.S. FDA Critical Path Initiative and by the IT'IS Foundation.

  1. Maria Ida Iacono, Esra Neufeld, Esther Akinnagbe, Kelsey Bower, Johanna Wolf, Ioannis Oikonomidis, Deepika Sharma, Bryn Lloyd, Bertram Wilm, Michael Wyss, Klaas Pruessmann, Andras Jakab, Nikos Makris, Ethan Cohen, Niels Kuster, Wolfgang Kainz, and Leonardo M. Angelone, MIDA: A Multimodal Imaging-based Detailed model of the Anatomy of the human head and neck. PLoS ONE 10(4): e0124126. doi:10.1371/journal.pone.0124126, 2015. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0124126
  2. Maria Ida Iacono, Esra Neufeld, Giorgio Bonmassar, Esther Akinnagbe, Andras Jakab, Ethan Cohen, Niels Kuster, Wolfgang Kainz, Leonardo M. Angelone, A computational model for bipolar deep brain stimulation of the subthalamic nucleus. IEEE Eng Med Biol Soc. pp. 6258-61, 2014.
  3. Esra Neufeld, Ioannis Vogiatzis Oikonomidis, Maria Ida Iacono, Esther Akinnagbe, Leonardo M. Angelone, Wolfgang Kainz, and Niels Kuster, Simulation platform for coupled modeling of EM-induced neuronal dynamics and functionalized anatomical models. 7th International IEEE EMBS Neural Engineering Conference, Montpellier, France, 22-25 Apr 2015.

Disclaimer: The Food and Drug Administration (FDA) and the IT'IS Foundation assume no responsibility whatsoever for use of the MIDA model by other parties and make no guarantees, expressed or implied, about the quality, reliability, or any other characteristic of the models. Furthermore, use of the MIDA model in no way implies endorsement by the FDA or confers any advantage in regulatory decisions. The MIDA model is available free of charge from the IT'IS Foundation website subject to a license agreement for protection of privacy. Any use of the MIDA model is to be cited as stipulated in the license agreement. In addition, any derivative work shall bear a notice that it is sourced from the original MIDA model, and any modified versions shall additionally bear the notice that they have been modified. The MIDA model and any derivative work or modified version thereof shall not be distributed to any third party whatsoever.
