Validation of the practicability of the logical assessment formula for evaluations with inaccurate ground-truth labels: An application study on tumour segmentation for breast cancer
Abstract
The logical assessment formula (LAF) is a recently proposed theory for evaluating predictive models in artificial intelligence applications when only inaccurate ground-truth labels (IAGTLs) are available. However, the practicability of LAF for evaluations with IAGTLs has not yet been validated in real-world practice. In this paper, we applied LAF to two tasks of tumour segmentation for breast cancer (TSfBC) in medical histopathology whole slide image analysis (MHWSIA). Experimental results and analysis show that on the easier TSfBC task, LAF-based evaluations with IAGTLs could not confidently reproduce the behaviour of usual evaluations with accurate ground-truth labels (AGTLs), whereas on the more difficult TSfBC task they reasonably matched that behaviour. These results reflect the potential of LAF applied to MHWSIA for evaluations with IAGTLs. This paper presents the first practical validation of LAF for evaluations with IAGTLs in a real-world application.
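For concreteness, the "usual evaluation with AGTLs" referred to above scores a predicted tumour mask against an accurate reference mask using pixel-wise confusion-matrix metrics such as precision, recall, and F1 (equivalent to the Dice coefficient for binary masks); when only IAGTLs are available, these scores become unreliable, which is the gap LAF is designed to address. The Python sketch below is our illustrative assumption of that baseline computation, not code from the paper; the function name and toy masks are hypothetical.

import numpy as np

def pixelwise_metrics(pred: np.ndarray, ref: np.ndarray) -> dict:
    """Pixel-wise precision, recall, and F1/Dice between a binary
    predicted tumour mask and a binary reference mask (hypothetical helper)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()    # predicted tumour, labelled tumour
    fp = np.logical_and(pred, ~ref).sum()   # predicted tumour, labelled background
    fn = np.logical_and(~pred, ref).sum()   # missed tumour pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0  # equals the Dice coefficient
    return {"precision": precision, "recall": recall, "f1": f1}

# Toy 2x2 example: the reference label is inaccurate (one tumour pixel unmarked).
pred = np.array([[1, 0], [1, 1]])
ref = np.array([[1, 0], [0, 1]])
print(pixelwise_metrics(pred, ref))  # precision ~0.67, recall 1.0, f1 0.8

In this toy example the annotator misses one tumour pixel, so a correct prediction is counted as a false positive and precision drops to about 0.67, illustrating why scores computed against IAGTLs can mislead and why a dedicated theory such as LAF is needed.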
Copyright (c) 2024 Yongquan Yang, Hong Bu
This work is licensed under a Creative Commons Attribution 4.0 International License.