Search results for: prediction of publications
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2637

1737 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate over time due to traffic wheel loads and environmental and climate conditions. Pavement deterioration leads to a reduction in serviceability and structural performance. Consequently, proper maintenance and rehabilitation (M&R) actions are necessary to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of M&R and recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 represents a highly deteriorated pavement and 100 a newly constructed one. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, owing to their effectiveness in capturing non-linear relationships and dealing with large amounts of uncertain data.
Typical regression models, which require a pre-defined relationship, can be replaced by ANN, which has been found to be an appropriate tool for predicting the various pavement performance indices as functions of different factors. Accordingly, the objective of the present study is to develop and train an ANN model that predicts PCI values. The model's input consists of the percentage areas of 11 different damage types (alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off), each at three severity levels (low, medium, high). The developed model was trained on 536,000 samples and tested on 134,000 samples, collected and prepared by the National Transport Infrastructure Company. The predicted results showed satisfactory agreement with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into a PMS for PCI determination. Notably, the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
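A minimal sketch of such a PCI regressor, using scikit-learn's MLPRegressor. The study's 536,000 field samples and exact network architecture are not public, so the synthetic inputs (33 features: 11 distress types at 3 severity levels), the linear damage-to-PCI target, and the layer sizes below are illustrative assumptions, not the authors' model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# 11 distress types x 3 severity levels = 33 inputs (percent of pavement area).
X = rng.uniform(0, 20, size=(2000, 33))
# Hypothetical target: PCI falls from 100 as weighted distress area grows.
weights = rng.uniform(0.5, 3.0, size=33)
y = np.clip(100 - X @ weights / 10, 0, 100)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(X[:1600], y[:1600])
r2 = model.score(X[1600:], y[1600:])  # R^2 on held-out samples
```

In practice the trained network would replace the iterative 'Paver' calculation inside the PMS, mapping measured distress percentages directly to a PCI estimate.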

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 137
1736 Validation of Nutritional Assessment Scores in Prediction of Mortality and Duration of Admission in Elderly, Hospitalized Patients: A Cross-Sectional Study

Authors: Christos Lampropoulos, Maria Konsta, Vicky Dradaki, Irini Dri, Konstantina Panouria, Tamta Sirbilatze, Ifigenia Apostolou, Vaggelis Lambas, Christina Kordali, Georgios Mavras

Abstract:

Objectives: Malnutrition in hospitalized patients is associated with increased morbidity and mortality. The purpose of our study was to compare various nutritional scores in order to identify the most suitable one for assessing the nutritional status of elderly, hospitalized patients, and to correlate them with mortality and extension of admission duration due to patients’ critical condition. Methods: The sample population included 150 patients (78 men, 72 women, mean age 80±8.2). Nutritional status was assessed by the Mini Nutritional Assessment (MNA, full and short-form), the Malnutrition Universal Screening Tool (MUST) and the short Nutritional Appetite Questionnaire (sNAQ). Sensitivity, specificity, positive and negative predictive values and ROC curves were assessed after adjustment for the cause of current admission, a known prognostic factor according to previously applied multivariate models. Primary endpoints were mortality (from admission until 6 months afterwards) and duration of hospitalization, compared to national guidelines for closed consolidated medical expenses. Results: Concerning mortality, MNA (short-form and full) and sNAQ had similarly low sensitivity (25.8%, 25.8% and 35.5%, respectively), while MUST had higher sensitivity (48.4%). In contrast, all the questionnaires had high specificity (94%-97.5%). Short-form MNA and sNAQ had the best positive predictive values (72.7% and 78.6%, respectively), whereas all the questionnaires had similar negative predictive values (83.2%-87.5%). MUST had the highest area under the ROC curve (0.83), in contrast to the remaining questionnaires (0.73-0.77). With regard to extension of admission duration, all four scores had relatively low sensitivity (48.7%-56.7%), specificity (68.4%-77.6%), positive predictive value (63.1%-69.6%), negative predictive value (61%-63%) and area under the ROC curve (0.67-0.69). Conclusion: The MUST questionnaire is more advantageous in predicting mortality due to its higher sensitivity and area under the ROC curve.
None of the nutritional scores is suitable for predicting extended hospitalization.
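The diagnostic metrics reported above can be computed from any dichotomized score with a few lines of scikit-learn. The patient-level records are not available, so the simulated outcome and score below are purely illustrative:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(1)
# Hypothetical data for 150 patients: 1 = died within 6 months.
died = rng.integers(0, 2, size=150)
# A continuous risk score, higher in non-survivors by construction.
score = died * rng.normal(2, 1, 150) + rng.normal(0, 1, 150)
predicted = (score > 1).astype(int)  # dichotomize at an assumed cut-off

tn, fp, fn, tp = confusion_matrix(died, predicted).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)   # positive predictive value
npv = tn / (tn + fn)   # negative predictive value
auc = roc_auc_score(died, score)  # area under the ROC curve
```

The same four quantities plus the AUC are exactly what the study tabulates for MNA, MUST and sNAQ.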

Keywords: duration of admission, malnutrition, nutritional assessment scores, prognostic factors for mortality

Procedia PDF Downloads 346
1735 The Significance of Oranyan Festival among the Oyo Yoruba

Authors: Emmanuel Bole Akinpelu

Abstract:

A festival is a social event that takes place every year and showcases the culture and other social activities of a community or town. Oranyan Festival is an annual event organized and celebrated in Oyo town in honor of Oranyan the Great, who is reputed to be the overall head of the kings of the Yoruba. The event is attended by people from all walks of life. The Oyo people are accustomed to celebrating various cultural festivals, such as Ogun, Oya, Sango, Egungun, Obatala and others. Oranyan festival in Oyo, however, is a recent development in honour of Oranyan, who was said to be powerful and an embodiment of a unique cultural tradition. The study examined the significance of the festival to the Oyo Yoruba group. Oyo Yoruba cultural heritage includes Ewi, Ijala, the traditional food ‘Amala and Gbegiri’, Ekun Iyawo (bridal chants), traditional music, traditional dance, the traditional game ‘Ayo Olopon’, Eke (traditional wrestling) and others. Data for this work were gathered through archival sources such as journals and relevant publications on Oyo Yoruba traditional art and culture. The study holds that the festival has influence over the religious, political, economic and other aspects of modern-day traditions. The study also revealed that Oranyan Festival has given people a better understanding of their rich cultural heritage and has promoted unity and peace among all and sundry. In conclusion, the festival promotes the rich cultural heritage of the Oyo Yoruba both within and outside Nigeria and in the world at large.

Keywords: Yoruba Oyo, arts and culture, Oranyan, festival

Procedia PDF Downloads 302
1734 Strategies for Improving Teaching and Learning in Higher Institutions: Case Study of Enugu State University of Science and Technology, Nigeria

Authors: Gertrude Nkechi Okenwa

Abstract:

Higher institutions, especially universities, are saddled with the responsibilities of teaching, learning, research, publications and social services for the production of graduates who are worthy in learning and character, and for the creation of up-to-date knowledge and innovations for the total socio-economic and even political development of a nation. The purpose of the study was therefore to identify the teaching and learning techniques used in the Enugu State University of Science and Technology and to ascertain students’ perception of these techniques. To guide the study, a survey research method was used. The population for the study was made up of second- and final-year students, amounting to one hundred and twenty-six students in the Faculty of Education. A stratified random sampling technique was adopted, and a sample size of sixty (60) students was drawn for the study. The instrument used for data collection was a questionnaire. To analyze the data, mean and standard deviation were used to answer the research questions. The findings revealed that both direct instruction and constructivist techniques are used in the university. On the whole, it was observed that the students perceived constructivist techniques to be more useful and effective than the direct instruction technique. Based on the findings, recommendations were made, including the diversification of teaching techniques.

Keywords: strategies, teaching and learning, constructivist technique, direct instruction technique

Procedia PDF Downloads 541
1733 Effective Leadership in the Engineering, Technology, and Construction Industry

Authors: David W. Farler, Perry Haan

Abstract:

This paper explores the effective leadership styles employed in the engineering, technology, and construction (ETC) industry. Organizations need to understand which character traits are being used and which leadership styles work to promote sustainability and improve the triple bottom line. This paper reviews multiple publications on leadership and on the character traits effective for managers and leaders in the ETC industry. The ETC industry is a trillion-dollar industry, and understanding ways to improve leadership is vital for organizations' successful outcomes. With improvements to management and leadership, organizations could increase profits and cut down on costs. Finding ways to improve motivation can help organizations improve safety and culture and increase employee motivation. From the research, this paper finds that situational, transformational, and transactional leadership are the most effective styles that individuals can use in the ETC industry. The most effective character traits have also been identified in this research. This research contributes to the ways individuals who start in the engineering and technology industry can improve their leadership skills as they are promoted into managerial and leadership roles. Improvement among managerial positions in the ETC industry, such as project and construction managers, is vital for successful outcomes and high-level performance. The study helps fill a gap in the limited research available on improving ETC leadership for all organizations, present and future.

Keywords: construction, effective leadership, engineering, technology

Procedia PDF Downloads 140
1732 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach

Authors: James Ladzekpo

Abstract:

Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and a Multilayer Perceptron Neural Network, and evaluated their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17%, a median area under the receiver operating characteristic curve (AUC) of 68%, and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and the Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
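The feature-selection-then-classification pipeline described here can be sketched in scikit-learn: recursive feature elimination with cross-validation (RFECV) around a linear SVM, followed by a gradient boosting classifier scored on recall. The epigenetic dataset is not available, so a synthetic classification problem stands in, and the parameter choices are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the epigenetic feature matrix.
X, y = make_classification(n_samples=300, n_features=50,
                           n_informative=8, random_state=0)

# RFE needs an estimator exposing coef_, hence a linear-kernel SVM.
selector = RFECV(SVC(kernel="linear"), step=5, cv=3).fit(X, y)
X_sel = selector.transform(X)  # keep only the selected features

# Evaluate gradient boosting on the reduced feature set, using recall
# (the metric the study reports a 92.17% median for).
gbc = GradientBoostingClassifier(random_state=0)
recall = cross_val_score(gbc, X_sel, y, cv=3, scoring="recall")
```

Swapping `gbc` for the other five estimators reproduces the study's model comparison on the same selected features.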

Keywords: diabetes, machine learning, prediction, biomarkers

Procedia PDF Downloads 55
1731 The Prediction of Evolutionary Process of Coloured Vision in Mammals: A System Biology Approach

Authors: Shivani Sharma, Prashant Saxena, Inamul Hasan Madar

Abstract:

Since the time of Darwin, genetic change has been considered the direct indicator of variation in phenotype. However, a few system biology studies in recent years have proposed that epigenetic developmental processes also affect the phenotype, thus shifting the focus from a linear genotype-phenotype (G-P) map to a non-linear G-P map. In this paper, we attempt to explain the evolution of colour vision in mammals by considering the LWS (long-wave sensitive) gene.

Keywords: evolution, phenotypes, epigenetics, LWS gene, G-P map

Procedia PDF Downloads 521
1730 Budget Discipline and National Prosperity: The Nigerian Experience

Authors: Ben-Caleb Egbide, Iyoha Francis, Egharevba Mathew, Oduntan Emmanuel

Abstract:

The prosperity of any nation is determined not just by the availability of resources, but also by the discipline exercised in the management of those resources. This paper examines the functional association between adherence to budgetary estimates, or budget discipline (BDISC), and national prosperity, proxied by Real Gross Domestic Product (RGDP) and the Relative Poverty Index (RPI)/Human Development Index (HDI). Adopting a longitudinal retrospective research strategy, time series data relating to both the endogenous and exogenous variables were extracted from official government publications covering 36 years (1980-2015 in the case of RGDP and RPI) and 26 years (1990-2015 in the case of HDI). Ordinary Least Squares (OLS) as well as cointegration regressions were employed to gauge both the short-term and long-term impact of BDISC on RPI/HDI and RGDP. The results indicated that BDISC is directly related to RGDP but inversely related to RPI. The implication is that adherence to budgetary estimates can enhance economic growth and reduce poverty in the long run. The paper therefore recommends stricter adherence to budgets as a way out of economic underperformance in Nigeria and as a means of promoting human development and national prosperity.

Keywords: budget discipline, human development index, national prosperity, Nigeria

Procedia PDF Downloads 238
1729 Effects of National Policy on Montana Medicaid Coverage and Enrollment

Authors: Ryan J. Trefethen, Vincent H. Smith

Abstract:

This study explores the relationship between national spending on the Medicaid program, and total Medicaid spending and enrollment in Montana, a state that ranks thirty-third in per capita income and thirty-seventh in median household income in the United States. The purpose of the research is to estimate the potential effects that specific changes to national healthcare policy would likely have on funding for the Montana Medicaid Program and enrollees in the program, members of families in poverty whose incomes are low, even though in many cases they have steady jobs. A particular concern is the effect on access to care for children in poverty who tend to be food insecure and, therefore, especially in need of access to health care. The research uses data collected from a variety of government publications, including the Medicaid Financial Management Report, the Medicaid Managed Care Enrollment Report, and the Centers for Medicare and Medicaid Services MSIS State Summaries for fiscal years 2000-2015. These data were examined using econometric analysis, to assess these impacts. The evidence indicates that the changes included in recent congressional legislative initiatives would potentially leave an additional 50,000 to 60,000 Montana residents, five to six percent of the state’s population, in poverty without access to health care. Impacts on children in poverty would potentially be substantial.

Keywords: children, healthcare, medicaid, montana, poverty

Procedia PDF Downloads 254
1728 Applying Semi-Automatic Digital Aerial Survey Technology and Canopy Characters Classification for Surface Vegetation Interpretation of Archaeological Sites

Authors: Yung-Chung Chuang

Abstract:

The cultural layers of archaeological sites are mainly affected by surface land use, land cover, and the root systems of surface vegetation. For this reason, continuous monitoring of land use and land cover change is important for archaeological site protection and management. In actual operation, however, on-site investigation and orthophoto interpretation require a lot of time and manpower, so a good alternative for surface vegetation survey in an automated or semi-automated manner is needed. In this study, we applied semi-automatic digital aerial survey technology and canopy character classification with very high-resolution aerial photographs for surface vegetation interpretation of archaeological sites. The main idea is that different landscape or forest types can easily be distinguished by canopy characters (e.g., specific texture distributions, shadow effects and gap characters) extracted by semi-automatic image classification. A novel methodology to classify the shape of canopy characters using landscape indices and multivariate statistics was also proposed. Non-hierarchical cluster analysis was used to assess the optimal number of canopy character clusters, and canonical discriminant analysis was used to generate the discriminant functions for canopy character classification (seven categories). The forest type and vegetation land cover can therefore be predicted from the corresponding canopy character category. The results showed that the semi-automatic classification could effectively extract the canopy characters of forest and vegetation land cover. For forest type and vegetation type prediction, the average prediction accuracy reached 80.3%-91.7%, depending on the size of the test frame. This indicates that the technology is useful for archaeological site survey and can improve classification efficiency and data update rates.
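The two statistical steps named here, non-hierarchical clustering to choose the number of canopy character groups and canonical discriminant analysis to classify new patches, can be sketched with k-means plus linear discriminant analysis (a common stand-in for canonical discriminant analysis). The landscape-index features below are synthetic; the study's imagery and indices are not available:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
# Hypothetical landscape indices per canopy patch (texture, shadow,
# gap metrics): 700 patches, 6 indices, with latent grouping.
X = rng.normal(size=(700, 6)) + rng.integers(0, 7, 700)[:, None]

# Non-hierarchical clustering: pick k by a cluster-validity index.
best_k = max(range(2, 10), key=lambda k: silhouette_score(
    X, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)))
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)

# Discriminant functions for assigning new patches to canopy categories.
lda = LinearDiscriminantAnalysis().fit(X, labels)
resub_accuracy = lda.score(X, labels)
```

Once fitted, `lda.predict` plays the role of the discriminant functions: any new patch's landscape indices map to one of the canopy character categories.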

Keywords: digital aerial survey, canopy characters classification, archaeological sites, multivariate statistics

Procedia PDF Downloads 142
1727 A Generalized Weighted Loss for Support Vector Classification and Multilayer Perceptron

Authors: Filippo Portera

Abstract:

Standard algorithms usually employ a loss in which each error is the mere absolute difference between the true value and the prediction, in the case of a regression task. In the present work, we present several error weighting schemes that generalize this consolidated routine. We study both a binary classification model for Support Vector Classification and a regression net for the Multilayer Perceptron. The results show that the error is never worse than with the standard procedure and is often better.
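The idea of generalizing a loss by weighting each error can be made concrete for the regression case. The abstract does not specify the weighting schemes, so the error-magnitude weight below is one hypothetical instance, not the authors' scheme:

```python
import numpy as np

def weighted_mse(y_true, y_pred, weight_fn=lambda e: 1.0 + np.abs(e)):
    """Generalized squared loss: each squared error is scaled by a weight
    derived from the error itself. weight_fn is one possible scheme
    (here: larger errors weigh more); the identity weight recovers MSE."""
    err = y_true - y_pred
    return np.mean(weight_fn(err) * err ** 2)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.8, 3.5])
plain = np.mean((y_true - y_pred) ** 2)          # standard MSE
weighted = weighted_mse(y_true, y_pred)          # weighted variant
```

Because the chosen weights are at least 1, the weighted loss upper-bounds the plain MSE here, penalizing large residuals more strongly during training.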

Keywords: loss, binary-classification, MLP, weights, regression

Procedia PDF Downloads 95
1726 Rigorous Literature Review: Open Science Policy

Authors: E. T. Svahn

Abstract:

This article documents how open science policy has been perceived in the scientific literature globally throughout its history. It also identifies the persistent policy needs for enabling safe and effective dissemination of scientific knowledge. This information may be of interest to open science and science policy makers globally, especially in view of the recent adoption of supranational open science policies such as Plan S. The open science policy landscape is in pressing need of assessment regarding its impact on the research community and society at large, as no previous literature review has been conducted on the topic. This study is a rigorous literature review, based on the constructivist grounded theory method, of the full body of scientific open science policy publications. These articles were selected in 2019 and 2020 from major global knowledge databases. Through the analysis of these articles, two key themes emerged that are seen to shape the relationship between science and society: first, policy that enables open science in a safe and effective way, and second, the outcomes that science policy may have for the research community and the wider society. These findings accentuate that open science policies can have a major impact not only on the research process and the availability of knowledge but also on society itself. As an outcome of this study, a theoretical framework is constructed, and the need for further higher-level study of open science policy itself becomes apparent.

Keywords: constructivist grounded theory, open science policy, rigorous literature review, science policy

Procedia PDF Downloads 167
1725 Analysis of Biomarkers in Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients

Authors: Bliss Singhal

Abstract:

Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide, including 1.2 million Americans. Millions of pediatric patients have intractable epilepsy, a condition in which seizures fail to come under control. The occurrence of seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that could impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize the anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework to predict seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram (EEG) signals. Bandpass filtering and independent component analysis proved effective in reducing noise and artifacts in the dataset. The performance of various machine learning algorithms is evaluated on important metrics such as accuracy, precision, specificity, sensitivity, F1 score and MCC. The results show that the deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, long short-term memory (LSTM) outperformed the RNN in accuracy, and the convolutional neural network (CNN) achieved the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices and improving pediatric care.
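The preprocessing step named here, bandpass filtering followed by independent component analysis, can be sketched with scipy and scikit-learn. The EEG below is synthetic (mixed sinusoidal sources), and the 256 Hz sampling rate and 0.5-40 Hz band are assumed values, not taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
fs = 256  # Hz, a common EEG sampling rate (assumption)
t = np.arange(0, 10, 1 / fs)
# Synthetic 4-channel EEG: independent sources mixed linearly.
sources = np.c_[np.sin(8 * 2 * np.pi * t),       # alpha-band tone
                np.sin(13 * 2 * np.pi * t),      # beta-band tone
                np.sign(np.sin(3 * 2 * np.pi * t)),  # artifact-like square wave
                rng.normal(size=t.size)]         # broadband noise
eeg = sources @ rng.normal(size=(4, 4))

# Band-pass 0.5-40 Hz to suppress slow drift and high-frequency noise.
b, a = butter(4, [0.5, 40], btype="band", fs=fs)
filtered = filtfilt(b, a, eeg, axis=0)

# ICA unmixes the channels into independent components; artifact-dominated
# components can then be dropped before feature extraction.
components = FastICA(n_components=4, random_state=0).fit_transform(filtered)
```

The recovered components, rather than the raw channels, would then feed the RNN/LSTM/CNN classifiers the study compares.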

Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels

Procedia PDF Downloads 84
1724 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data

Authors: Gayathri Nagarajan, L. D. Dhinesh Babu

Abstract:

Health care is one of the prominent industries generating voluminous data, creating a need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. Compared with other applications, accuracy and fast processing are of higher importance for health care applications, as they relate directly to human life. Though many machine learning techniques and big data solutions are used for efficient processing and prediction on health care data, different techniques and frameworks have proved effective for different applications, largely depending on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique of gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with traditional frameworks. Unlike other works that focus on a single technique, our work compares six different machine learning techniques with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform specifically for health care big data, and ii) discuss the results from experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions.
The experimental results show that for the other machine learning techniques accuracy is largely dependent on the characteristics of the dataset, whereas gradient boosted trees yield reasonably stable accuracy without depending heavily on dataset characteristics.
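The paper runs gradient boosted trees through Spark's MLlib; as a minimal single-machine stand-in, the same misclassification-error comparison can be sketched in scikit-learn on one benchmark health care dataset (the study uses five, which are not named here). The model roster and parameters below are assumptions for illustration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# One benchmark health-care classification dataset as a stand-in.
X, y = load_breast_cancer(return_X_y=True)

models = {
    "gbt": GradientBoostingClassifier(random_state=0),
    "rf": RandomForestClassifier(random_state=0),
    "logreg": LogisticRegression(max_iter=5000),
}
# Misclassification error rate = 1 - accuracy, the paper's comparison metric.
errors = {name: 1 - cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
```

On Spark the analogous estimator would be `pyspark.ml.classification.GBTClassifier`, with the cross-validated comparison distributed across the cluster; the metric itself is unchanged.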

Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform

Procedia PDF Downloads 240
1723 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section

Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert

Abstract:

Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets since characterization measurements can be expensive and time consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell’s equations. These methods are very accurate but are computationally very intensive and time consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate but require less computational resources and time. Asymptotic techniques can thus be very valuable for the prediction of bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques to predict bistatic RCS through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex realistic aircraft models instead of only looking at a complex slicy structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range, at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz. Fixed bistatic angles of β = 30.8°, 45° and 90° were used. The measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results together with the measured data were used as reference for validation. 
The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences relating to amplitude, lobing structure and null positions were observed between the asymptotic, full-wave and measured data. PO and GO were more accurate at angles close to the specular scattering directions and the accuracy seemed to decrease as the bistatic angle increased. At large bistatic angles PO did not perform well due to the shadow regions not being treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction but these inaccuracies tended to decrease as the electrical size of objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing. Specular scattering was calculated accurately even if targets did not meet the electrically large criteria. It was evident that the bistatic RCS prediction performance of PO and GO depends on incident angle, frequency, target shape and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement of the accuracy of these asymptotic techniques.
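The specular accuracy of physical optics noted above can be illustrated with the classic PO result for a flat conducting plate, whose peak (specular) RCS is sigma = 4*pi*(A)^2 / lambda^2 for plate area A. The 30 cm plate size is an assumption for illustration; the 2-6 GHz band matches the measurements described:

```python
import numpy as np

def flat_plate_rcs_po(a, b, wavelength):
    """Peak specular RCS (m^2) of a perfectly conducting rectangular plate
    of sides a x b, from physical optics: 4*pi*(a*b)^2 / lambda^2."""
    return 4 * np.pi * (a * b) ** 2 / wavelength ** 2

c = 3e8                      # speed of light, m/s
f = 4e9                      # mid-band of the 2-6 GHz measurements
lam = c / f                  # wavelength = 0.075 m
sigma = flat_plate_rcs_po(0.3, 0.3, lam)   # 30 cm x 30 cm plate (assumption)
sigma_dbsm = 10 * np.log10(sigma)          # express in dBsm
```

Away from the specular direction this closed form no longer applies, which is exactly where the study observes PO and GO accuracy degrading relative to MLFMM and measurement.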

Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics

Procedia PDF Downloads 258
1722 Field Prognostic Factors on Discharge Prediction of Traumatic Brain Injuries

Authors: Mohammad Javad Behzadnia, Amir Bahador Boroumand

Abstract:

Introduction: In resource-limited situations, the available resources must be allocated to serve the greatest number of casualties. Traumatic Brain Injury (TBI) is among the injuries that may require transporting the patient to a trauma center as soon as possible. In a mass casualty event, making such decisions with restricted facilities is hard. The Extended Glasgow Outcome Scale (GOSE) has been introduced to assess the global outcome after brain injuries. We therefore aimed to evaluate the prognostic factors associated with GOSE. Materials and Methods: We conducted a multicenter cross-sectional study on 144 patients with TBI admitted to trauma emergency centers. All patients with isolated TBI who were mentally and physically healthy before the trauma entered the study. The patients’ information was evaluated, including demographic characteristics, duration of hospital stay, mechanical ventilation on admission, laboratory measurements, and on-admission vital signs. We recorded the patients’ TBI-related symptoms and brain computed tomography (CT) scan findings. Results: GOSE assessments showed an increasing trend across the on-discharge (7.47 ± 1.30), one-month (7.51 ± 1.30), and three-month (7.58 ± 1.21) evaluations (P < 0.001). On discharge, GOSE was positively correlated with the Glasgow Coma Scale (GCS) (r = 0.729, P < 0.001) and motor GCS (r = 0.812, P < 0.001), and inversely correlated with age (r = −0.261, P = 0.002), hospitalization period (r = −0.678, P < 0.001), pulse rate (r = −0.256, P = 0.002), and white blood cell (WBC) count. Among the imaging signs and trauma-related symptoms examined in univariate analysis, intracranial hemorrhage (ICH), intraventricular hemorrhage (IVH) (P = 0.006), subarachnoid hemorrhage (SAH) (P = 0.06; marginally significant at P < 0.1), subdural hemorrhage (SDH) (P = 0.032), and epidural hemorrhage (EDH) (P = 0.037) remained significantly associated with GOSE at discharge in multivariable analysis.
Conclusion: Our study identified predictive factors that could help decide which casualties should be transported to a trauma center first. According to the current study findings, GCS, pulse rate, and WBC count, and, among imaging signs and trauma-related symptoms, ICH, IVH, SAH, SDH, and EDH are significant independent predictors of GOSE at discharge in TBI patients.
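
The correlations reported above are standard Pearson coefficients. As a minimal sketch, using invented GCS and GOSE values rather than the study's data, the statistic can be computed in plain Python:

```python
import math

def pearson_r(x, y):
    # Pearson correlation: covariance divided by the product of standard deviations
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical mini-sample (illustration only): on-admission GCS vs. on-discharge GOSE
gcs  = [15, 14, 12, 9, 7, 6]
gose = [8, 8, 7, 5, 4, 3]
r = pearson_r(gcs, gose)
print(round(r, 3))
```

On real data, the accompanying P value would come from a t distribution with n − 2 degrees of freedom; a statistical package would normally handle both steps.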

Keywords: field, Glasgow outcome score, prediction, traumatic brain injury

Procedia PDF Downloads 75
1721 Potential Determinants of Research Output: Comparing Economics and Business

Authors: Osiris Jorge Parcero, Néstor Gandelman, Flavia Roldán, Josef Montag

Abstract:

This paper uses cross-country unbalanced panel data on up to 146 countries over the period 1996 to 2015 and is the first study to identify potential determinants of a country’s relative research output in Economics versus Business. More generally, it is also one of the first studies comparing Economics and Business. The results show that better policy-related data availability, higher income inequality, and lower ethnic fractionalization relatively favor economics. The findings are robust to two alternative fixed-effects specifications, three alternative definitions of economics and business, two alternative measures of research output (publications and citations), and the inclusion of meaningful control variables. To the best of our knowledge, this paper is also the first to demonstrate the importance of policy-related data as a driver of economic research. Our regressions show that the availability of this type of data is the single most important factor associated with the prevalence of economics over business as a research domain. Thus, our work has policy implications, as the availability of policy-related data is partially under policy control. Moreover, it has implications for students, professionals, universities, university departments, and research-funding agencies that face choices between profiles oriented toward economics and those oriented toward business. Finally, the conclusions outline potential lines for further research.
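
The fixed-effects specifications mentioned above rest on the within transformation: demeaning each variable by country before running OLS. A minimal sketch, with a hypothetical two-country panel and an invented "data availability" regressor (none of this is the paper's data):

```python
from collections import defaultdict

# Hypothetical panel rows: (country, data_availability_index, economics_share_of_output)
panel = [
    ("A", 1.0, 0.40), ("A", 2.0, 0.55), ("A", 3.0, 0.65),
    ("B", 1.0, 0.20), ("B", 2.0, 0.35), ("B", 3.0, 0.50),
]

# Group observations by country, then demean x and y within each country
groups = defaultdict(list)
for country, x, y in panel:
    groups[country].append((x, y))

num = den = 0.0
for obs in groups.values():
    mx = sum(x for x, _ in obs) / len(obs)
    my = sum(y for _, y in obs) / len(obs)
    for x, y in obs:
        num += (x - mx) * (y - my)   # within covariance
        den += (x - mx) ** 2         # within variance of the regressor

beta = num / den  # within (fixed-effects) slope estimate
print(round(beta, 3))
```

This is numerically identical to including a dummy for each country; real work would add time effects, controls, and clustered standard errors.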

Keywords: research output, publication performance, bibliometrics, economics, business, policy-related data

Procedia PDF Downloads 134
1720 Cardiopulmonary Disease in Bipolar Disorder Patient with History of SJS: Evidence Based Case Report

Authors: Zuhrotun Ulya, Muchammad Syamsulhadi, Debree Septiawan

Abstract:

Patients with bipolar disorder are three times more likely to suffer cardiovascular disorders than the general population, which influences their level of morbidity and rate of mortality. Bipolar disorder also affects the pulmonary system. The choice of long-term monotherapy and other combinative therapies has clinical impacts on patients. This study investigates the case of a woman who has suffered from bipolar disorder for 16 years, has a history of Stevens-Johnson Syndrome, and is at present also suffering from cardiovascular and pulmonary disorders. An analysis of evidence obtained from PubMed, Cochrane, Medline, and ProQuest (publications between 2005 and 2015) suggests that there is a relationship between cardiovascular disorder, drug therapies, Stevens-Johnson Syndrome, and mood stabilizers. Combination therapy with a mood stabilizer is recommended for patients who have no history of side effects from these drugs. Replacement drugs and combinations may be applied, especially for those with bipolar disorders, and the combination of atypical antipsychotics with mood stabilizers is often made. Clinicians, however, should be careful about patients’ physical and metabolic changes, especially in those who have undergone long-term therapy and who have a history of Stevens-Johnson Syndrome (for which clinicians probably prescribed one type of medicine).

Keywords: cardiopulmonary disease, bipolar disorder, SJS, therapy

Procedia PDF Downloads 430
1719 Regional Competitiveness and Innovation in the Tourism Sector: A Systematic Review and Bibliometric Analysis

Authors: Sérgio J. Teixeira, João J. Ferreira

Abstract:

Tourism is frequently identified as one of the sectors with the greatest potential for expansion on a global scale, hence the importance of better understanding the regional factors of competitiveness prevailing in this sector. This study aims to map the scientific publications and the intellectual knowledge they contain, conveying past research trends and identifying potential future lines of research in the fields of regional competitiveness and tourism innovation. It deploys a systematic review of the literature combined with a bibliometric approach based on VOSviewer software, with a particular focus on drafting maps that visualise the underlying intellectual structure. The analysis encapsulates the number of articles published and their annual number of citations for the period between 1900 and 2016, as registered by the Web of Science database. The results demonstrate how the intellectual structure on regional competitiveness divides essentially into three major categories: regional competitiveness, tourism innovation, and tourism clusters. The main contribution of this study thus arises from identifying the main research trends in this field, along with the respective shortcomings and specific needs for future scientific research on regional competitiveness and innovation in tourism.
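
Bibliometric maps of the kind VOSviewer draws are built from co-occurrence counts, for example how often two keywords appear on the same record. A minimal sketch with invented keyword lists (not the study's corpus):

```python
from itertools import combinations
from collections import Counter

# Hypothetical author-keyword lists from a handful of records (illustration only)
records = [
    ["regional competitiveness", "tourism innovation"],
    ["tourism innovation", "tourism clusters"],
    ["regional competitiveness", "tourism clusters"],
    ["regional competitiveness", "tourism innovation"],
]

# Count every unordered keyword pair appearing on the same record
cooc = Counter()
for kws in records:
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1

# Heavily weighted edges sit at the centre of the resulting map
for pair, weight in cooc.most_common():
    print(pair, weight)
```

Mapping software then clusters this weighted network, which is what yields category groupings such as the three reported above.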

Keywords: regional competitiveness, tourism cluster, bibliometric studies, tourism innovation, systematic review

Procedia PDF Downloads 234
1718 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure

Authors: Esra Zengin, Sinan Akkar

Abstract:

Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in the performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match the target spectra is a commonly used technique for the estimation of nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum, such as a scenario-based spectrum derived from ground motion prediction equations, a Uniform Hazard Spectrum (UHS), a Conditional Mean Spectrum (CMS), or a Conditional Spectrum (CS). Different sets of criteria exist among the developed methodologies to select and scale ground motions with the objective of obtaining a robust estimation of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on the minimization of the error between the scaled median and the target spectrum, while the dispersion of the earthquake shaking is preserved along the period interval. The impact of the spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimations, the results are compared with those obtained by the CMS-based scaling methodology.
The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
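
The scaling stage described above minimizes the error between a scaled spectrum and the target. For a single record, the least-squares amplitude scale factor has a closed form; the sketch below uses invented spectral ordinates, not the study's ground-motion sets:

```python
import math

# Target median spectrum and one candidate record's response spectrum,
# sampled over the same period grid (hypothetical ordinates, in g)
target = [0.80, 0.60, 0.40, 0.25, 0.15]
record = [0.50, 0.42, 0.30, 0.16, 0.09]

# Least-squares scale factor minimising sum((s*record - target)^2):
# setting the derivative to zero gives s = sum(record*target) / sum(record^2)
s = sum(r * t for r, t in zip(record, target)) / sum(r * r for r in record)
scaled = [s * r for r in record]

# Residual misfit of the scaled record against the target
rmse = math.sqrt(sum((a - t) ** 2 for a, t in zip(scaled, target)) / len(target))
print(round(s, 3), round(rmse, 4))
```

A full procedure of the kind the abstract describes would optimise over the whole ensemble, matching the target median while preserving the prescribed dispersion across the period interval.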

Keywords: ground motion selection, scaling, uncertainty, fragility curve

Procedia PDF Downloads 583
1717 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market. All other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus researched far less than equities. Bond price prediction is a complex financial time series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and full of noise, which makes it very difficult for traditional statistical time-series models to capture the complexity in series patterns and leads to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using Long Short-Term Memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTM networks have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, thanks to a memory function that traditional neural networks lack.
In this study, a simple LSTM, a stacked LSTM, and a masked LSTM based model are discussed with respect to varying input sequences (three days, seven days, and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with traditional time series models (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that LSTM models provide more accurate results and should be explored more within the asset management industry.
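
The memory mechanism that lets LSTMs learn long-term dependencies can be seen in a single scalar LSTM cell. The sketch below uses toy weights and an invented four-day window of normalised prices; the study's actual models (stacked and masked LSTMs on EMD-decomposed inputs) are far larger:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, w):
    # One LSTM step on scalar state: forget, input, candidate, and output gates,
    # each a weighted sum of the input x, previous hidden state h, and a bias.
    f = sigmoid(w["f"][0] * x + w["f"][1] * h + w["f"][2])
    i = sigmoid(w["i"][0] * x + w["i"][1] * h + w["i"][2])
    g = math.tanh(w["g"][0] * x + w["g"][1] * h + w["g"][2])
    o = sigmoid(w["o"][0] * x + w["o"][1] * h + w["o"][2])
    c = f * c + i * g          # cell state: the long-term memory
    h = o * math.tanh(c)       # hidden state: the short-term output
    return h, c

# Toy weights (input, recurrent, bias) per gate, and a short normalised window
w = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "g", "o")}
h = c = 0.0
for x in [0.1, 0.3, 0.2, 0.4]:  # e.g. a hypothetical 4-day window of scaled prices
    h, c = lstm_step(x, h, c, w)
print(round(h, 4))  # the final hidden state would feed a dense layer to predict the next price
```

In practice, one would use a deep learning framework with vector-valued gates and trained weights; the point here is only how the gated cell state carries information across the window.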

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 136
1716 Measuring Enterprise Growth: Pitfalls and Implications

Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić

Abstract:

Enterprise growth is generally considered a key driver of competitiveness, employment, economic development, and social inclusion. As such, it is perceived by scholars and decision makers to be a highly desirable outcome of entrepreneurship. An extensive academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants, and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal, and related to a variety of factors which reflect the individual, firm, organizational, industry, or environmental determinants of growth. However, the factors that affect growth are not easily captured, the instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, a vast number of measurement constructs assessing growth are used interchangeably. Differences among growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the purpose of this paper is threefold. Firstly, to compare the structure and performance of three growth prediction models based on the main growth measures: revenue, employment, and asset growth. Secondly, to explore the prospects of financial indicators, as exact, visible, standardized, and accessible variables, to serve as determinants of enterprise growth. Finally, to contribute to the understanding of the implications for research results and recommendations for growth caused by different growth measures. The models include a range of financial indicators as lagged determinants of enterprise performance during 2008-2013, extracted from the national register of the financial statements of SMEs in Croatia.
The design and testing stages of the modeling used logistic regression procedures. The findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between a particular predictor and a growth measure is inconsistent: the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power for the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but, unlike them, are accessible, available, exact, and free of perceptual nuances when building the model. The selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
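
The logistic regression procedure at the core of the models can be sketched with plain gradient descent on an invented two-indicator dataset; a real study would use a statistical package and many more financial indicators:

```python
import math

# Hypothetical training set (illustration only):
# (liquidity_ratio, profit_margin) -> firm grew (1) or did not (0)
data = [
    ((2.1, 0.10), 1), ((1.8, 0.07), 1), ((2.5, 0.12), 1),
    ((0.9, 0.01), 0), ((1.1, 0.02), 0), ((0.7, -0.03), 0),
]

w = [0.0, 0.0]
b = 0.0
lr = 0.5

# Stochastic gradient descent on the logistic (cross-entropy) loss
for _ in range(2000):
    for (x1, x2), y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - y                 # gradient of the loss w.r.t. the linear score
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def predict(x1, x2):
    # Predicted probability that an enterprise with these indicators grows
    return 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))

print(round(predict(2.2, 0.11), 2), round(predict(0.8, 0.00), 2))
```

Fitting the same specification against revenue, employment, and asset growth as alternative targets is exactly where different predictor sets can emerge.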

Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises

Procedia PDF Downloads 252
1715 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks

Authors: Wang Yichen, Haruka Yamashita

Abstract:

In recent years, in the field of sports, decision making, such as selecting the members in a game and the strategy of the game based on the analysis of accumulated sports data, is widely attempted. In fact, in the NBA basketball league, where the world's highest-level players gather, teams analyze the data using various statistical techniques in order to win games. However, it is difficult to analyze the game data for each play, such as the ball tracking or the motion of the players, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method suited to real-time game play data is needed. In this research, we propose an analytical model for determining the optimal lineup composition using real-time play data, which is considered to be difficult for all coaches. Because replacing the entire lineup is too complicated, the actual questions for the replacement of players are whether or not the lineup should be changed and whether or not a Small Ball lineup should be adopted. Therefore, we propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, we can accumulate scoring data for each play, which indicates a player's contribution to the game, and the scoring data can be considered as time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, and an NN (Neural Network) model, which can analyze the situation on the court, to build a prediction model of the score. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected the accumulated NBA data from the 2019-2020 season. We then apply the method to the actual basketball play data to verify the reliability of the proposed model.
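
The recurrent part of the proposed model digests per-play scoring data as a time series. A minimal Elman-style sketch, with invented per-play points and toy weights rather than the trained RNN+NN, shows how a recurrence can rank two candidate lineups:

```python
import math

def rnn_score(seq, w_in=0.8, w_rec=0.5):
    # Minimal Elman-style recurrence over a lineup's per-play scoring sequence:
    # the hidden state carries recent scoring "momentum" forward in time.
    h = 0.0
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
    return h  # final hidden state as a crude lineup score

# Hypothetical recent per-play points for two candidate lineups (illustration only)
regular    = [0.0, 2.0, 0.0, 3.0, 0.0]
small_ball = [2.0, 2.0, 3.0, 0.0, 2.0]

best = max(("regular", regular), ("small ball", small_ball),
           key=lambda t: rnn_score(t[1]))
print(best[0])
```

In the actual model, the recurrent output would be combined with a feed-forward network encoding the game situation before the score prediction is made.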

Keywords: recurrent neural network, players lineup, basketball data, decision making model

Procedia PDF Downloads 133
1714 Rethinking Peace Journalism in Pakistan: A Critical Analysis of News Discourse on the Afghan Refugee Repatriation Conflict

Authors: Ayesha Hasan

Abstract:

This study offers unique perspectives on and analyses of peace and conflict journalism through interpretative repertoire, media frames, and critical discourse analyses. Two major English publications in Pakistan, representing both long- and short-form journalism, are investigated to uncover how the Afghan refugee repatriation from Pakistan in 2016-17 has been framed in the Pakistani English media. Peace journalism focuses on concepts such as peace initiatives and peacebuilding, finding common ground, and preventing further conflict. This study applies Jake Lynch’s coding criteria to guide the critical discourse analysis and Lee and Maslog’s Peace Journalism Quotient to examine the extent of peace journalism in each text. The study finds that peace journalism is missing in the Pakistani English press but is represented, to an extent, in long-form print and online coverage. Two new alternative frames are also proposed. This study gives an in-depth understanding of whether and how journalists in Pakistan are covering conflicts and framing stories that can be identified as peace journalism. It represents a significant contribution to the remarkably limited scholarship on peace and conflict journalism in Pakistan and extends Shabbir Hussain’s work on critical pragmatic perspectives on peace journalism in Pakistan.

Keywords: Afghan refugee repatriation, critical discourse analysis, media framing, peace and conflict journalism

Procedia PDF Downloads 201
1713 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction

Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan

Abstract:

Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk is very helpful in reducing the effects of the disease. The present study aimed to collect data related to risk factors of myocardial infarction from patients’ medical records and to develop prediction models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. The data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011, and were collected using a four-section data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential to facilitate the management of patients with a specific disease. Health interventions or lifestyle changes can therefore be guided by these models to improve the health of individuals at risk.
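
The evaluation metrics named above (accuracy, precision/PPV, sensitivity, specificity, NPV) all derive from the confusion matrix. A minimal sketch on invented hold-out labels, not the hospital dataset:

```python
def diagnostic_metrics(y_true, y_pred):
    # Confusion-matrix counts for a binary diagnosis (1 = disease, 0 = healthy)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy":    (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value (precision)
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical hold-out labels vs. one model's predictions (illustration only)
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
metrics = diagnostic_metrics(y_true, y_pred)
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```

Comparing models by sensitivity, as the study does, is the natural choice when missing a true case of myocardial infarction is the costliest error.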

Keywords: decision trees, neural network, myocardial infarction, data mining

Procedia PDF Downloads 429
1712 Sustainability and Clustering: A Bibliometric Assessment

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner, David Gabriel F. Barros

Abstract:

Review studies are useful for the analysis of research problems. Among the types of review documents, we commonly find bibliometric studies. This type of study often helps with the global visualization of a research problem and helps academics worldwide to better understand the context of a research area. In this document, a bibliometric view of clustering techniques applied to sustainability problems is presented. The authors examined which sustainability issues most often use clustering techniques and which sustainability issue is the most impactful in current research. During the bibliometric analysis, we found ten different groups of research in clustering applications for sustainability issues: energy, environmental, non-urban planning, sustainable development, sustainable supply chain, transport, urban planning, water, waste disposal, and others. By analyzing the citations of each group, we discovered that the environmental group can be classified as the most impactful research cluster in the area. After a content analysis of each paper classified in the environmental group, we found that the k-means technique is preferred for solving sustainability problems with clustering methods, since it appeared most often among the documents. The authors conclude that a bibliometric assessment can help indicate a research gap in waste disposal, which was the group with the least amount of publications, as well as the most impactful research on environmental problems.
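
The k-means technique identified as the most used can be sketched with Lloyd's algorithm in plain Python; the 2-D points below are invented stand-ins for document features, not the reviewed corpus:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # Lloyd's algorithm: alternate assignment to the nearest centroid
    # and recomputation of each centroid as its cluster mean.
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        centroids = [
            tuple(sum(coord) / len(coord) for coord in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical 2-D document features with two well-separated themes (illustration only)
points = [(1, 2), (1, 3), (2, 2), (8, 9), (9, 8), (8, 8)]
centroids, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))
```

Library implementations add smarter initialisation (e.g. k-means++) and convergence checks, but the two-step iteration is the whole idea.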

Keywords: bibliometric assessment, clustering, sustainability, territorial partitioning

Procedia PDF Downloads 109
1711 COVID-19 Pandemic and Disruptions in Nigeria’s Domestic Economic Activities: A Pre-post Empirical Investigation

Authors: Amaefule, Leonard Ifeanyi

Abstract:

The study evaluated the disruptions in Nigeria’s domestic economic activities occasioned by the COVID-19 pandemic, using a pre- and post-pandemic investigation approach. Domestic economic activities were measured with the composite manufacturing purchasing managers’ index (PMI) and the composite non-manufacturing PMI. Production and employment level indices were proxies for the composite manufacturing PMI, while business activity and employment level indices were proxies for the non-manufacturing PMI. Data for these indices were sourced from monthly and quarterly publications of the Central Bank of Nigeria for periods covering 15 months before and 15 months after the outbreak of the virus in Nigeria. A test of equality of means was employed to establish the significance of the difference in means between pre- and post-pandemic domestic economic activities. Results from the analysis indicated that a significant negative difference exists in each of the measures of domestic economic activities between the pre- and post-pandemic periods. These findings offer empirical evidence that the COVID-19 pandemic has disrupted domestic economic activities in Nigeria and exerted a negative influence on the measures of the nation’s domestic economic activities. The study thus recommended, among other things, that the Nigerian government focus on policies that would enhance domestic production, employment, and business activities.
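
A test of equality of means of the kind described is typically a two-sample t test. A minimal sketch with invented PMI readings (not the Central Bank of Nigeria figures) computes Welch's t statistic:

```python
import math
import statistics

# Hypothetical composite manufacturing PMI, 15 months before / after the outbreak
pre  = [53.1, 54.0, 52.7, 55.2, 53.8, 52.9, 54.4, 53.5,
        52.6, 54.1, 53.0, 55.0, 52.8, 53.7, 54.2]
post = [42.1, 44.5, 41.3, 46.0, 43.2, 45.1, 44.0, 42.8,
        43.9, 45.5, 41.7, 44.3, 43.1, 42.5, 44.8]

def welch_t(a, b):
    # Welch's t statistic: difference in sample means over its standard error,
    # without assuming equal variances in the two periods.
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(pre, post)
print(round(t, 2))  # a large positive t indicates the post-pandemic mean is lower
```

The P value would then be read from a t distribution with Welch-Satterthwaite degrees of freedom; statistical software reports both in one call.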

Keywords: COVID-19, domestic economic activities, composite manufacturing indices, composite non-manufacturing indices

Procedia PDF Downloads 178
1710 Interoperability Maturity Models for Consideration When Using School Management Systems in South Africa: A Scoping Review

Authors: Keneilwe Maremi, Marlien Herselman, Adele Botha

Abstract:

The main purpose and focus of this paper is to determine which Interoperability Maturity Models to consider when using School Management Systems (SMS). This is important because it informs and helps schools to know which Interoperability Maturity Model is best suited to their SMS. To address this purpose, the paper applies a scoping review to ensure that all aspects are covered. The scoping review includes papers written from 2012 to 2019, and the different types of Interoperability Maturity Models are compared in detail, including the background information, the levels of interoperability, and the areas of consideration in each Maturity Model. The literature was obtained from the IEEE Xplore and Scopus databases, and the Harzing’s and Google Scholar search engines were used. The topic of the paper was used as a search term for the literature, and the term ‘Interoperability Maturity Models’ was used as a keyword. The data were analyzed in terms of the definition of interoperability, Interoperability Maturity Models, and levels of interoperability. The results provide a table showing the focus area of concern for each Maturity Model, based on the scoping review in which only 24 papers out of the 740 publications initially identified in the field were found to be best suited for the paper. This resulted in two Interoperability Maturity Models being put forward for consideration as the most discussed: the Information Systems Interoperability Maturity Model (ISIMM) and the Organizational Interoperability Maturity Model for C2 (OIM).

Keywords: interoperability, interoperability maturity model, school management system, scoping review

Procedia PDF Downloads 209
1709 Artificial Intelligence in Disease Diagnosis

Authors: Shalini Tripathi, Pardeep Kumar

Abstract:

The method of translating observed symptoms into disease names is known as disease diagnosis. The ability to solve clinical problems in a complex manner is critical to a doctor's effectiveness in providing health care, and the accuracy of his or her expertise is crucial to the survival and well-being of his or her patients. Artificial Intelligence (AI) has a huge economic influence depending on how well it is applied. In the medical sector, human brain-simulated intelligence can help not only with classification accuracy but also with reducing the diagnostic time, cost, and pain associated with pathology tests. In light of AI's present and prospective applications in biomedicine, this paper identifies them based on potential benefits and risks, social and ethical consequences, and issues that might be contentious but have not been thoroughly discussed in publications and literature. Current apps, personal tracking tools, genetic tests and editing programmes, customizable models, web environments, virtual reality (VR) technologies, and surgical robotics will all be investigated in this study. While AI holds a lot of potential in medical diagnostics, it is still a very new method, and many clinicians are uncertain about its reliability and specificity, and about how it can be integrated into clinical practice without jeopardising clinical expertise. To validate their effectiveness, more systematic refinement of these implementations will be needed, as well as training for physicians and healthcare facilities on how to effectively incorporate these strategies into clinical practice.

Keywords: artificial intelligence, medical diagnosis, virtual reality, healthcare ethical implications

Procedia PDF Downloads 132
1708 Machine Learning Approach for Predicting Students’ Academic Performance and Study Strategies Based on Their Motivation

Authors: Fidelia A. Orji, Julita Vassileva

Abstract:

This research aims to develop machine learning models for predicting students' academic performance and study strategies that could be generalized to all courses in higher education. The key learning attributes (intrinsic, extrinsic, autonomy, relatedness, competence, and self-esteem) used in building the models were chosen based on prior studies, which revealed that these attributes are essential in students' learning process. Previous studies revealed the individual effects of each of these attributes on students' learning progress; however, few studies have investigated their combined effect in predicting student study strategy and academic performance in order to reduce the dropout rate. To bridge this gap, we used Scikit-learn in Python to build five machine learning models (decision tree, k-nearest neighbours, random forest, linear/logistic regression, and support vector machine) for both regression and classification tasks. The models were trained, evaluated, and tested for accuracy using data on 924 university dentistry students collected by Chilean authors through a quantitative research design. A comparative analysis of the models revealed that tree-based models such as the random forest (with a prediction accuracy of 94.9%) and the decision tree show the best results compared to the linear, support vector, and k-nearest neighbour models. The models built in this research can be used to predict student performance and study strategy so that appropriate interventions can be implemented to improve student learning progress. Thus, incorporating strategies that could improve diverse student learning attributes in the design of online educational systems may increase the likelihood of students continuing with their learning tasks as required. Moreover, the results show that the attributes can be modelled together and used to adapt and personalize the learning process.
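
Tree-based models of the kind that performed best here grow by splitting on the attribute threshold that most reduces class impurity. A minimal sketch of one Gini-based split, on an invented motivation-vs-outcome sample rather than the dentistry dataset:

```python
def gini(labels):
    # Gini impurity of a binary label set: 1 - p^2 - (1-p)^2
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 1.0 - p * p - (1 - p) * (1 - p)

def best_split(xs, ys):
    # Exhaustively choose the threshold minimising the weighted Gini impurity
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left  = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Hypothetical motivation scores (0-10) vs. continued with studies (1) / dropped (0)
motivation = [2, 3, 4, 5, 6, 7, 8, 9]
continued  = [0, 0, 0, 0, 1, 1, 1, 1]
threshold, impurity = best_split(motivation, continued)
print(threshold, round(impurity, 3))
```

A decision tree applies this search recursively over all attributes; a random forest repeats it over bootstrapped samples and random feature subsets, which is what tends to yield the accuracy advantage reported above.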

Keywords: classification models, learning strategy, predictive modeling, regression models, student academic performance, student motivation, supervised machine learning

Procedia PDF Downloads 128