Patient-Specific Modeling Algorithm for Medical Data Based on AUC
Authors: Guilherme Ribeiro, Alexandre Oliveira, Antonio Ferreira, Shyam Visweswaran, Gregory Cooper
Abstract:
Patient-specific models are instance-based learning algorithms that take advantage of the particular features of the patient case at hand to predict an outcome. We introduce two patient-specific algorithms based on the decision tree paradigm that use the area under the ROC curve (AUC) as the metric for selecting an attribute. We apply the patient-specific algorithms to predict outcomes in several datasets, including medical datasets. The AUC-based patient-specific decision path models performed equivalently, in terms of AUC, to the entropy-based patient-specific decision path (PSDP) and CART methods. Our results support patient-specific methods as a promising approach for making clinical predictions.
Keywords: Instance-based approach, area under the ROC curve, patient-specific decision path, clinical predictions.
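The abstract describes the method only at a high level: a decision path is grown lazily for the patient at hand, and at each step the attribute whose split yields the highest AUC on the current training subset is chosen, after which the training data are narrowed to instances matching the patient's value on that attribute. The sketch below is a minimal illustration of that idea, not the authors' implementation (which, judging from reference [19], appears to have used MATLAB). It assumes binary outcomes and discrete attributes, scores instances by the positive rate of their attribute-value group, and uses a hypothetical min_support stopping rule; the function and parameter names (auc_binary, split_auc, patient_specific_path, min_support) are illustrative, not taken from the paper.

```python
# Illustrative sketch only: an AUC-guided, patient-specific decision path.
# Assumes binary outcomes (0/1) and discrete attributes stored as dicts;
# min_support and all names here are hypothetical, not from the paper.
from collections import defaultdict


def auc_binary(y_true, y_score):
    """Pairwise AUC: fraction of positive/negative pairs ranked correctly (ties count 0.5)."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    if not pos or not neg:
        return 0.5
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


def split_auc(rows, labels, attr):
    """AUC when each instance is scored by the positive rate of its attribute-value group."""
    counts = defaultdict(lambda: [0, 0])  # attribute value -> [n_instances, n_positive]
    for row, y in zip(rows, labels):
        counts[row[attr]][0] += 1
        counts[row[attr]][1] += y
    scores = [counts[row[attr]][1] / counts[row[attr]][0] for row in rows]
    return auc_binary(labels, scores)


def patient_specific_path(rows, labels, patient, min_support=10):
    """Greedily grow a decision path using only the attribute values of the patient at hand."""
    path, remaining = [], list(patient)
    while remaining and len(rows) >= min_support:
        # Pick the attribute whose split gives the highest AUC on the current subset.
        best = max(remaining, key=lambda a: split_auc(rows, labels, a))
        keep = [i for i, row in enumerate(rows) if row[best] == patient[best]]
        if not keep:
            break
        path.append((best, patient[best]))
        rows, labels = [rows[i] for i in keep], [labels[i] for i in keep]
        remaining.remove(best)
    # Predicted probability of a positive outcome for this patient.
    return path, (sum(labels) / len(labels) if labels else 0.5)
```

In this sketch, the entropy-based PSDP baseline mentioned in the abstract would correspond to swapping split_auc for an information-gain criterion, which is what makes a head-to-head comparison of the two attribute-selection metrics meaningful.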
Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1338644
References:
[1] Abu-Hanna, A. and P. J. Lucas.: Prognostic models in medicine: AI and statistical approaches. Methods of Information in Medicine, 2001. 40(1): p. 1-5.
[2] Visweswaran, S. and G.F. Cooper.: Patient-specific models for predicting the outcomes of patients with community acquired pneumonia. In AMIA Annu Symp Proc. 2005.
[3] T. M. Mitchell.: Machine Learning. 1st ed., McGraw-Hill, Inc., New York, NY, USA, 1997.
[4] J. H. Friedman.: Lazy decision trees. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI'96), Vol. 1, pp. 717-724, 1996.
[5] Foster J. Provost, Tom Fawcett, and Ron Kohavi.: The Case against Accuracy Estimation for Comparing Induction Algorithms. In Proceedings of the Fifteenth International Conference on Machine Learning (ICML ’98), San Francisco, CA, USA, 445-453, 1998.
[6] Cover, T. and P. Hart.: Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 1967. 13(1): p. 21-27.
[7] Gagliardi, F.: Instance-based classifiers applied to medical databases: diagnosis and knowledge extraction. Artif Intell Med, 2011. 52(3): p. 123-39.
[8] L. Breiman, J. Friedman, R. Olshen, and C. Stone.: Classification and Regression Trees. Wadsworth and Brooks, Monterey, CA, 1984.
[9] Zheng, Z.J. and G.I. Webb.: Lazy learning of Bayesian rules. Machine Learning, 2000. 41(1): p. 53-84.
[10] Visweswaran, S. and G.F. Cooper.: Instance-specific Bayesian model averaging for classification. In Proceedings of the Eighteenth Annual Conference on Neural Information Processing Systems. Vancouver, Canada, 2004.
[11] Visweswaran, S., et al.: Learning patient-specific predictive models from clinical data. J Biomed Inform, 2010. 43(5): p. 669-685.
[12] Visweswaran, S. and G.F. Cooper.: Learning instance-specific predictive models. Journal of Machine Learning Research, 2010. 11(Dec): p. 3333-3369.
[13] Ferreira, A., G.F. Cooper, and S. Visweswaran.: Decision path models for patient-specific modeling of patient outcomes. In Proceedings of the Annual Symposium of the American Medical Informatics Association, 2013: p. 413-421. PMID: 24551347. PMCID: PMC3900188.
[14] David J. Hand and Robert J. Till.: A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems. Machine Learning, 2001. 45(2): p. 171-186.
[15] César Ferri, Peter Flach, and José Hernández-Orallo.: Learning Decision Trees Using the Area Under the ROC Curve. In Proceedings of the 19th International Conference on Machine Learning (ICML '02), Sydney, NSW, Australia, pp. 139-146, July 8-12, 2002.
[16] César Ferri, Peter Flach, and José Hernández-Orallo.: Rocking the ROC Analysis within Decision Trees. Technical Report, December 20, 2001.
[17] Tom Fawcett.: An introduction to ROC analysis. Pattern Recognition Letters (Elsevier), 2006. 27(8): p. 861-874.
[18] Bianca Zadrozny and Charles Elkan.: Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML ’01). San Francisco, CA, USA, 609-616, 2001.
[19] MATLAB: Version 8.4.0 (R2014b). The MathWorks Inc., 2014. Accessed: 02-17-2015.
[20] Fayyad, U. M. and Irani, K. B.: Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 1022-1029, 1993.
[21] Caruana, R.: A non-parametric EM-style algorithm for imputing missing values. In Proceedings of Artificial Intelligence and Statistics (AISTATS), 2001.
[22] Janez Demšar.: Statistical Comparisons of Classifiers over Multiple Data Sets. Journal of Machine Learning Research, 2006. 7: p. 1-30.