Show simple item record

Fusión de etiquetas basada en múltiples atlas usando ponderaciones locales supervisadas

dc.creator: Cárdenas-Peña, David
dc.creator: Fernández, Eduardo
dc.creator: Ferrández-Vicente, José M.
dc.creator: Castellanos-Domínguez, German
dc.date: 2017-05-02
dc.date.accessioned: 2021-03-18T21:06:52Z
dc.date.available: 2021-03-18T21:06:52Z
dc.identifier: https://revistas.itm.edu.co/index.php/tecnologicas/article/view/724
dc.identifier: 10.22430/22565337.724
dc.identifier.uri: http://test.repositoriodigital.com:8080/handle/123456789/11715
dc.description: The automatic segmentation of structures of interest underpins the morphological analysis of brain magnetic resonance imaging volumes. It demands significant effort due to complicated structure shapes, low contrast between tissues, and inter-subject anatomical variability. One aspect that reduces the accuracy of multi-atlas-based segmentation is the label-fusion assumption of one-to-one correspondences between target and atlas voxels. To improve the performance of brain image segmentation, label fusion approaches include spatial and intensity information by using voxel-wise weighted voting strategies. Although the weights are assessed for a predefined atlas set, they are not very efficient for labeling intricate structures, since most tissue shapes are not uniformly distributed in the images. This paper proposes a methodology for voxel-wise feature extraction based on the linear combination of patch intensities. To the best of our knowledge, this is the first attempt to locally learn such features by maximizing the centered kernel alignment function. Our methodology aims to build discriminative representations, deal with complex structures, and reduce the influence of image artifacts. The result is an enhanced patch-based segmentation of brain images. For validation, the proposed brain image segmentation approach is compared against Bayesian-based and patch-wise label fusion on three different brain image datasets. In terms of the Dice similarity index, our proposal achieves the highest segmentation accuracy (90.3% on average), shows sufficient robustness to artifacts, and provides suitable repeatability of the segmentation results. [en-US]
dc.description: La segmentación automática de estructuras de interés en imágenes de resonancia magnética cerebral requiere esfuerzos significativos, debido a las formas complicadas, el bajo contraste y la variabilidad anatómica. Un aspecto que reduce el desempeño de la segmentación basada en múltiples atlas es la suposición de correspondencias uno a uno entre los vóxeles objetivo y los del atlas. Para mejorar el desempeño de la segmentación, las metodologías de fusión de etiquetas incluyen información espacial y de intensidad a través de estrategias de votación ponderada a nivel de vóxel. Aunque los pesos se calculan para un conjunto de atlas predefinido, estos no son muy eficientes para etiquetar estructuras intrincadas, ya que la mayoría de las formas de los tejidos no se distribuyen uniformemente en las imágenes. Este artículo propone una metodología de extracción de características a nivel de vóxel basada en la combinación lineal de las intensidades de un parche. Hasta el momento, este es el primer intento de extraer características locales maximizando la función de alineamiento de kernel centralizado, buscando construir representaciones discriminativas, superar la complejidad de las estructuras y reducir la influencia de los artefactos. Para validar los resultados, la estrategia de segmentación propuesta se compara contra la segmentación bayesiana y la fusión de etiquetas basada en parches en tres bases de datos diferentes. Respecto del índice de similitud Dice, nuestra propuesta alcanza el más alto acierto (90.3% en promedio), con suficiente robustez ante los artefactos y una repetibilidad apropiada de los resultados. [es-ES]
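The abstract above names three computational ingredients: learning a linear combination of patch intensities by maximizing the centered kernel alignment (CKA) function, fusing atlas labels through voxel-wise weighted voting, and validating with the Dice similarity index. The Python/NumPy sketch below is only an illustrative rendering of those quantities under stated assumptions (a Gaussian kernel on projected patches, a 3x3x3 patch size, random toy data, and hypothetical helper names); it is not the authors' implementation.

```python
# Minimal NumPy sketch of the quantities named in the abstract.
# All shapes, the Gaussian kernel, and the toy data are assumptions
# made for illustration; this is not the paper's implementation.
import numpy as np

def centered_kernel_alignment(K, L):
    """Centered alignment between two kernel matrices (Cortes et al., 2012)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

def gaussian_kernel(X, sigma=1.0):
    """Gaussian kernel over row-wise samples (e.g., projected patch intensities)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weighted_vote(atlas_labels, weights):
    """Fuse the labels proposed by several atlases at one voxel by weighted voting."""
    candidates = np.unique(atlas_labels)
    scores = [weights[atlas_labels == c].sum() for c in candidates]
    return candidates[int(np.argmax(scores))]

def dice_index(seg_a, seg_b):
    """Dice similarity index between two binary segmentations."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy usage: score how well a candidate linear projection A of atlas patches
# aligns with their labels, then fuse labels at a single target voxel.
rng = np.random.default_rng(0)
patches = rng.normal(size=(50, 27))                      # 50 atlas patches of 3x3x3 voxels
labels = rng.integers(0, 2, size=50)                     # binary structure labels
A = rng.normal(size=(27, 5))                             # candidate linear combination
K = gaussian_kernel(patches @ A)                         # kernel on projected features
L = (labels[:, None] == labels[None, :]).astype(float)   # ideal label kernel
print("alignment:", centered_kernel_alignment(K, L))

atlas_labels = np.array([1, 1, 0, 1, 0])                 # labels from 5 registered atlases
weights = np.array([0.9, 0.7, 0.2, 0.8, 0.1])            # voxel-wise weights
print("fused label:", weighted_vote(atlas_labels, weights))
print("dice:", dice_index(np.array([1, 1, 0, 1]), np.array([1, 0, 0, 1])))
```

In practice, the projection would be optimized so that the alignment is maximal over the atlas patches surrounding each target voxel, and the resulting patch similarities would define the voting weights; that optimization step is omitted from this sketch.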
dc.format: application/pdf
dc.language: eng
dc.publisher: Instituto Tecnológico Metropolitano (ITM) [en-US]
dc.relation: https://revistas.itm.edu.co/index.php/tecnologicas/article/view/724/700
dc.rights: https://creativecommons.org/licenses/by/3.0/deed.es_ES [en-US]
dc.source: TecnoLógicas; Vol. 20 No. 39 (2017); 209-225 [en-US]
dc.source: TecnoLógicas; Vol. 20 Núm. 39 (2017); 209-225 [es-ES]
dc.source: 2256-5337
dc.source: 0123-7799
dc.subject: Brain image segmentation [en-US]
dc.subject: label fusion [en-US]
dc.subject: multi-atlas segmentation [en-US]
dc.subject: Segmentación de imágenes cerebrales [es-ES]
dc.subject: fusión de etiquetas [es-ES]
dc.subject: segmentación con múltiples atlas [es-ES]
dc.title: Multi-atlas label fusion by using supervised local weighting for brain image segmentation [en-US]
dc.title: Fusión de etiquetas basada en múltiples atlas usando ponderaciones locales supervisadas [es-ES]
dc.type: info:eu-repo/semantics/article
dc.type: info:eu-repo/semantics/publishedVersion
dc.type: Research Papers [en-US]
dc.type: Artículos de investigación [es-ES]


Files in this item

No files are associated with this item.
