Search results for: logistic regression models
8985 The Effect of Accounting Conservatism on Cost of Capital: A Quantile Regression Approach for MENA Countries
Authors: Maha Zouaoui Khalifa, Hakim Ben Othman, Khaled Hussainey
Abstract:
Prior empirical studies have investigated the economic consequences of accounting conservatism by examining its impact on the cost of equity capital (COEC). However, findings are not conclusive. We assume that the inconsistent results of such an association may be attributed to the regression models used in data analysis. To address this issue, we re-examine the effect of two dimensions of accounting conservatism, unconditional conservatism (U_CONS) and conditional conservatism (C_CONS), on the COEC for a sample of listed firms from Middle Eastern and North African (MENA) countries, applying the quantile regression (QR) approach developed by Koenker and Bassett (1978). While the classical ordinary least squares (OLS) method is widely used in empirical accounting research, it may produce inefficient and biased estimates in the case of departures from normality or long-tailed error distributions. The QR method is more robust than OLS to this kind of problem: it allows the coefficients on the independent variables to shift across the distribution of the dependent variable, whereas OLS estimates only the conditional mean effects of a response variable. We find, as predicted, that U_CONS has a significant positive effect on the COEC, whereas C_CONS has a negative impact. Findings also suggest that the effects of the two dimensions of accounting conservatism differ considerably across COEC quantiles. Comparing results from the QR method with those of OLS, this study sheds more light on the association between accounting conservatism and COEC.
Keywords: unconditional conservatism, conditional conservatism, cost of equity capital, OLS, quantile regression, emerging markets, MENA countries
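The QR-versus-OLS contrast above comes down to the loss function: OLS minimizes squared error (recovering the conditional mean), while quantile regression minimizes the asymmetric pinball (check) loss at quantile q, recovering the conditional q-th quantile. A minimal pure-Python sketch, with illustrative data (not the study's), shows that minimizing the pinball loss over a constant recovers the empirical quantile:

```python
def pinball_loss(residual, q):
    """Check loss used in quantile regression: asymmetric absolute error."""
    return q * residual if residual >= 0 else (q - 1) * residual

def best_constant(data, q, candidates):
    """Grid-search the constant that minimizes total pinball loss at quantile q."""
    return min(candidates, key=lambda c: sum(pinball_loss(y - c, q) for y in data))

data = list(range(1, 10))                    # 1..9, illustrative "COEC" values
candidates = [x / 10 for x in range(0, 101)] # grid 0.0 .. 10.0

# Minimizing pinball loss at q = 0.1, 0.5, 0.9 recovers the 10th percentile,
# the median, and the 90th percentile of the data, respectively.
lower  = best_constant(data, 0.1, candidates)
median = best_constant(data, 0.5, candidates)
upper  = best_constant(data, 0.9, candidates)
```

A full QR fits a regression line under this loss at each chosen q, which is why coefficients can differ across the outcome's distribution.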
Procedia PDF Downloads 355
8984 Public Preferences for Lung Cancer Screening in China: A Discrete Choice Experiment
Authors: Zixuan Zhao, Lingbin Du, Le Wang, Youqing Wang, Yi Yang, Jingjun Chen, Hengjin Dong
Abstract:
Objectives: Few results on public attitudes toward lung cancer screening are available, either in China or abroad. This study aimed to identify preferred lung cancer screening modalities in a Chinese population and to predict uptake rates of different modalities. Materials and Methods: A discrete choice experiment questionnaire was administered to 392 Chinese individuals aged 50–74 years who were at high risk for lung cancer. Each choice set had two lung screening options and an option to opt out, and respondents were asked to choose the one they preferred most. Both mixed logit analysis and stepwise logistic analysis were conducted to explore whether preferences were related to respondent characteristics and to identify which kinds of respondents were more likely to opt out of any screening. Results: On mixed logit analysis, attributes that were predictive of choice at the 1% level of statistical significance included the screening interval, screening venue, and out-of-pocket costs. The preferred screening modality seemed to be screening by low-dose computed tomography (LDCT) plus a blood test once a year in a general hospital at a cost of RMB 50; this could increase the uptake rate by 0.40 compared to the baseline setting. On stepwise logistic regression, those with no endowment insurance were more likely to opt out; those who were older, housewives/househusbands, those with a health check habit, and those with commercial endowment insurance were less likely to opt out of a screening programme. Conclusions: There was considerable variance between the real risk and the self-perceived risk of lung cancer among respondents, and further research is required in this area. Lung cancer screening uptake can be increased by offering various screening modalities, which can help policymakers further design the screening modality.
Keywords: lung cancer, screening, China, discrete choice experiment
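In a discrete choice experiment, the (mixed/conditional) logit model turns each alternative's utility into a choice probability via a softmax, with the opt-out utility conventionally normalized to zero. A minimal sketch; the utility values below are illustrative, not estimates from the study:

```python
import math

def choice_probabilities(utilities):
    """Conditional logit: softmax over the alternatives' utilities."""
    m = max(utilities)                      # subtract max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical total utilities for two screening profiles and the opt-out.
u_option_a = 0.8   # e.g., LDCT + blood test, yearly, general hospital, low cost
u_option_b = 0.2   # a less attractive attribute bundle
u_opt_out  = 0.0   # opt-out normalized to zero

probs = choice_probabilities([u_option_a, u_option_b, u_opt_out])
```

Predicted uptake for a modality is then the sum of choice probabilities of the non-opt-out alternatives, which is how uptake-rate changes like the 0.40 above are derived.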
Procedia PDF Downloads 259
8983 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices
Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu
Abstract:
Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing its complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazards regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (random forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk of developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data are difficult to obtain.
Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction
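A model's ability to "discriminate individuals at risk," as claimed above, is conventionally summarized by the AUC: the probability that a randomly chosen case is scored above a randomly chosen non-case. A brute-force pairwise sketch, with illustrative risk scores (not the study's outputs):

```python
def auc(scores, labels):
    """AUC as the probability a random positive is scored above a random
    negative (ties count half), by brute-force pairwise comparison."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical CKD risk scores; labels mark who actually developed CKD.
scores = [0.9, 0.8, 0.35, 0.6, 0.2, 0.1]
labels = [1,   1,   1,    0,   0,   0]
result = auc(scores, labels)
```

Comparing AUCs between the with-laboratory and non-laboratory feature sets is how the "predicts as accurately" claim would typically be checked.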
Procedia PDF Downloads 105
8982 Delivery System Design of the Local Part to Reduce the Logistic Costs in an Automotive Industry
Authors: Alesandro Romero, Inaki Maulida Hakim
Abstract:
This research was conducted in an automotive company in Indonesia to overcome the problem of high logistics cost, which causes a high number of additional truck deliveries. From the breakdown of the problem, the route with the highest gap value, RE-04, was chosen. The research methodology starts from calculating the ideal condition, building a simulation, calculating the ideal logistics cost, and proposing an improvement. In the calculation of the ideal condition, box arrangement was done on the truck; the average efficiency was 97.4% with three truck deliveries per day. The route simulation uses Tecnomatix Plant Simulation software as a visualization for the company of how the system operates on route RE-04 under the ideal condition. Furthermore, the calculation of the logistics cost of the ideal condition shows savings of Rp 53,011,800.00 per month. The last step is proposing improvements for the area of route RE-04. The route arrangement is done with the savings method, and the sequence of suppliers is determined with the nearest-neighbor heuristic. The results of the proposed improvements are three new route groups, which are expected to decrease logistics cost by Rp 3,966,559.40 per day and increase average truck efficiency by 8.78%.
Keywords: efficiency, logistic cost, milk run, saving method, simulation
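The savings method mentioned above is the Clarke-Wright heuristic: merging stops i and j onto one truck route saves s(i,j) = d(0,i) + d(0,j) - d(i,j) relative to serving each from the depot separately, and merges are applied in descending order of savings. A minimal sketch with a made-up symmetric distance matrix (not the company's data):

```python
from itertools import combinations

# Hypothetical symmetric distances; index 0 is the depot, 1..4 are suppliers.
d = [
    [0, 10, 12, 20, 8],
    [10, 0, 5, 14, 12],
    [12, 5, 0, 9, 15],
    [20, 14, 9, 0, 22],
    [8, 12, 15, 22, 0],
]

def savings(dist):
    """Clarke-Wright savings s(i,j) = d(0,i) + d(0,j) - d(i,j),
    ranked descending; larger savings = more attractive merge."""
    pairs = combinations(range(1, len(dist)), 2)
    s = [(dist[0][i] + dist[0][j] - dist[i][j], i, j) for i, j in pairs]
    return sorted(s, reverse=True)

ranked = savings(d)
```

A full implementation would walk `ranked` and merge route ends subject to truck capacity; the nearest-neighbor heuristic then orders the suppliers within each merged route.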
Procedia PDF Downloads 446
8981 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer
Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo
Abstract:
Crystal size distribution is of great importance in sugar factories. It determines the market value of granulated sugar and also influences the cost of producing sugar crystals. Typically, sugar is produced using a fed-batch vacuum evaporative crystallizer. The crystallization quality is examined through the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, the mean aperture (MA), and the width of the distribution, the coefficient of variation (CV). The lack of real-time measurement of sugar crystal size hinders its feedback control and the eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for the sugar crystallization process are not suitable, as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables that are easy to measure online. This has the potential to provide real-time estimates of crystal size for its effective feedback control. Using 7 input variables, namely initial crystal size (L₀), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial supersaturation (S₀), and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model; it appears the initial crystal size (L₀) does not play a significant role. The goodness of the resulting regression model was evaluated. The coefficient of determination, R², was obtained as 0.994, and the maximum absolute relative error (MARE) was obtained as 4.6%. The high R² (~1.0) and the reasonably low MARE are an indication that the model is able to predict sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during the crystallization process in a fed-batch vacuum evaporative crystallizer.
Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer
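The 128 datasets follow directly from a full 2-level factorial design over the 7 inputs (2⁷ = 128 runs), which is straightforward to generate. The low/high levels below are placeholder values for illustration, not the study's actual ranges:

```python
from itertools import product

# Low/high levels for the 7 inputs (L0, T, P, Ff, Fs, S0, t).
# Numeric levels are illustrative placeholders, not the paper's ranges.
factors = {
    "L0": (0.1, 0.5),    # initial crystal size
    "T":  (60, 75),      # temperature
    "P":  (10, 25),      # vacuum pressure
    "Ff": (1.0, 3.0),    # feed flowrate
    "Fs": (0.5, 2.0),    # steam flowrate
    "S0": (1.05, 1.25),  # initial supersaturation
    "t":  (30, 90),      # crystallization time
}

names = list(factors)
# Full 2-level factorial: every low/high combination across the 7 factors.
design = [dict(zip(names, levels))
          for levels in product(*(factors[n] for n in names))]
```

Each of the 128 rows then yields one simulated crystal size from the existing crystallizer models, giving the dataset on which the regression was fitted.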
Procedia PDF Downloads 208
8980 Constraints on IRS Control: An Alternative Approach to Tax Gap Analysis
Authors: J. T. Manhire
Abstract:
A tax authority wants to take actions it knows will foster the greatest degree of voluntary taxpayer compliance to reduce the “tax gap.” This paper suggests that even if a tax authority could attain a state of complete knowledge, there are constraints on whether and to what extent such actions would result in reducing the macro-level tax gap. These limits are not merely a consequence of finite agency resources; they are inherent in the system itself. To show that this is one possible interpretation of the tax gap data, the paper formulates known results in a different way by analyzing tax compliance as a population with a single covariate. This leads to a standard use of the logistic map to analyze the dynamics of non-compliance growth or decay over a sequence of periods. This formulation gives the same results as the tax gap studies performed over the past fifty years in the U.S., given the published margins of error. Limitations and recommendations for future work are discussed, along with some implications for tax policy.
Keywords: income tax, logistic map, tax compliance, tax law
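The logistic map referenced above is the recurrence x_{n+1} = r·x_n·(1 − x_n). For growth rates 1 < r < 3 the iterated fraction (here, of non-compliance) settles to the fixed point 1 − 1/r regardless of the starting value, while larger r produces cycles and eventually chaos; that structural behavior, not agency resources, is the constraint. A minimal sketch:

```python
def logistic_map(r, x0, steps):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# For r = 2.5 the trajectory converges to the fixed point 1 - 1/r = 0.6,
# independent of the (nonzero, interior) starting fraction.
trajectory = logistic_map(r=2.5, x0=0.2, steps=100)
```

Re-running with, say, r = 3.2 instead yields a persistent 2-cycle rather than a single equilibrium, illustrating why some compliance levels may be unreachable by any policy action.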
Procedia PDF Downloads 120
8979 Effect of Serum Electrolytes on a QTc Interval and Mortality in Patients Admitted to Coronary Care Unit
Authors: Thoetchai Peeraphatdit, Peter A. Brady, Suraj Kapa, Samuel J. Asirvatham, Niyada Naksuk
Abstract:
Background: Serum electrolyte abnormalities are a common cause of an acquired prolonged QT syndrome, especially in the coronary care unit (CCU) setting. Optimal electrolyte ranges among CCU patients have not been sufficiently investigated. Methods: We identified 8,498 consecutive patients admitted to the CCU at Mayo Clinic, Rochester, USA, from 2004 through 2013. The association between first serum electrolytes and baseline corrected QT intervals (QTc), as well as in-hospital mortality, was tested using multivariate linear regression and logistic regression, respectively. Serum potassium 4.0-<4.5 mEq/L, ionized calcium (iCa) 4.6-4.8 mg/dL, and magnesium 2.0-<2.2 mg/dL were used as the reference levels. Results: There was a modest level-dependent relationship between hypokalemia (<4.0 mEq/L), hypocalcemia (<4.4 mg/dL), and a prolonged QTc interval; serum magnesium did not affect the QTc interval. The association between the serum electrolytes and in-hospital mortality included a U-shaped relationship for serum potassium (adjusted odds ratio (OR) 1.53 and OR 1.91 for serum potassium 4.5-<5.0 and ≥5.0 mEq/L, respectively) and an inverted J-shaped relationship for iCa (adjusted OR 2.79 and OR 2.03 for calcium <4.4 and 4.4-<4.6 mg/dL, respectively). For serum magnesium, mortality was greater only among patients with levels ≥2.4 mg/dL (adjusted OR 1.40), compared to the reference level. Findings were similar in sensitivity analyses examining the association between mean serum electrolytes and mean QTc intervals, as well as in-hospital mortality. Conclusions: Serum potassium 4.0-<4.5 mEq/L, iCa ≥4.6 mg/dL, and magnesium <2.4 mg/dL had a neutral effect on QTc intervals and were associated with the lowest in-hospital mortality among CCU patients.
Keywords: calcium, electrocardiography, long-QT syndrome, magnesium, mortality, potassium
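The adjusted ORs above come from multivariable logistic regression, but the underlying quantity is the odds ratio of a 2x2 table, with a 95% CI from the normal approximation on the log scale. A minimal sketch with illustrative counts (not the study's data):

```python
import math

def odds_ratio(exposed_events, exposed_total, control_events, control_total):
    """Crude odds ratio with a 95% CI via the log-OR normal approximation."""
    a = exposed_events
    b = exposed_total - exposed_events
    c = control_events
    d = control_total - control_events
    or_ = (a * d) / (b * c)                      # cross-product ratio
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Illustrative: deaths among patients in a high-potassium band vs the
# reference band (hypothetical counts).
or_est, ci_lo, ci_hi = odds_ratio(30, 200, 20, 250)
```

Adjustment for covariates shifts these estimates, which is why crude and adjusted ORs generally differ.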
Procedia PDF Downloads 394
8978 Incidence of Breast Cancer and Enterococcus Infection: A Retrospective Analysis
Authors: Matthew Cardeiro, Amalia D. Ardeljan, Lexi Frankel, Dianela Prado Escobar, Catalina Molnar, Omar M. Rashid
Abstract:
Introduction: Enterococci comprise part of the natural flora of nearly all animals and are ubiquitous in food manufacturing and probiotics. However, their role in the microbiome remains controversial. The gut microbiome has been shown to play an important role in immunology and cancer, and recent data have suggested a relationship between gut microbiota and breast cancer: the gut microbiome of patients with breast cancer differs from that of healthy patients. Research regarding enterococcus infection and its sequelae is limited, and further research is needed to understand the relationship between infection and cancer. Enterococcus may prevent the development of breast cancer (BC) through complex immunologic and microbiotic adaptations following an enterococcus infection. This study investigated the effect of enterococcus infection on the incidence of BC. Methods: A retrospective study (January 2010 - December 2019) was conducted using a Health Insurance Portability and Accountability Act (HIPAA) compliant national human health insurance database. International Classification of Diseases (ICD) 9th and 10th revision codes, Current Procedural Terminology (CPT) codes, and National Drug Codes were used to identify BC diagnoses and enterococcus infections. Patients were matched for age, sex, Charlson Comorbidity Index (CCI), antibiotic treatment, and region of residence. Chi-squared tests, logistic regression, and odds ratios were implemented to assess significance and estimate relative risk. Results: 671 out of 28,518 (2.35%) patients with a prior enterococcus infection and 1,459 out of 28,518 (5.12%) patients without enterococcus infection subsequently developed BC, and the difference was statistically significant (p<2.2x10⁻¹⁶). Logistic regression also indicated enterococcus infection was associated with a decreased incidence of BC (RR=0.60, 95% CI [0.57, 0.63]). Treatment for enterococcus infection was analyzed and controlled for in both the infected and noninfected populations: 398 out of 11,523 (3.34%) patients with a prior enterococcus infection who were treated with antibiotics were compared to 624 out of 11,523 (5.41%) patients with no history of enterococcus infection (control) who received antibiotic treatment; both groups subsequently developed BC. Results remained statistically significant (p<2.2x10⁻¹⁶), with a relative risk of 0.57 (95% CI [0.54, 0.60]). Conclusion & Discussion: This study shows a statistically significant correlation between enterococcus infection and a decreased incidence of breast cancer. Further exploration is needed to identify and understand not only the role of enterococcus in the microbiome but also the protective mechanism(s) and impact enterococcus infection may have on breast cancer development. Ultimately, further research is needed to understand the complex and intricate relationship between the microbiome, immunology, bacterial infections, and carcinogenesis.
Keywords: breast cancer, enterococcus, immunology, infection, microbiome
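The crude (unadjusted) relative risk can be recomputed directly from the counts reported above; note that it differs from the adjusted RR of 0.60, which additionally accounts for the matching covariates:

```python
def relative_risk(exposed_events, exposed_total, control_events, control_total):
    """Crude relative risk: ratio of the two incidence proportions."""
    risk_exposed = exposed_events / exposed_total
    risk_control = control_events / control_total
    return risk_exposed / risk_control

# Counts as reported in the abstract: BC cases after enterococcus infection
# vs matched controls; both cohorts have 28,518 patients.
rr = relative_risk(671, 28518, 1459, 28518)
```

With equal denominators the crude RR reduces to 671/1459 ≈ 0.46; the gap between this and the reported 0.60 is the effect of adjustment.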
Procedia PDF Downloads 173
8977 Exploring Students’ Visual Conception of Matter and Its Implications to Teaching and Learning Chemistry
Authors: Allen A. Espinosa, Arlyne C. Marasigan, Janir T. Datukan
Abstract:
The study explored how students visualize the states and classifications of matter using scientific models. It also identified students' misconceptions in using scientific models. In general, a high percentage of students were able to use scientific models correctly, and only a few misconceptions were identified. From the results of the study, a teaching framework was formulated in which scientific models should be employed in classroom instruction to visualize abstract concepts in chemistry and support better conceptual understanding.
Keywords: visual conception, scientific models, mental models, states of matter, classification of matter
Procedia PDF Downloads 400
8976 Image Compression Based on Regression SVM and Biorthogonal Wavelets
Authors: Zikiou Nadia, Lahdir Mourad, Ameur Soltane
Abstract:
In this paper, we propose an effective method for image compression based on support vector regression (SVR), with three different kernels, and the biorthogonal 2D discrete wavelet transform. SVM regression can learn dependencies from training data and achieve compression by using fewer training points (support vectors) to represent the original data and eliminate redundancy. A biorthogonal wavelet is used to transform the image, and the acquired coefficients are then trained with SVMs using different kernels (Gaussian, polynomial, and linear). Run-length and arithmetic coders are used to encode the support vectors and their corresponding weights obtained from the SVM regression. The peak signal-to-noise ratio (PSNR) and compression ratios of several test images compressed with our algorithm, with the different kernels, are presented. Compared with the other kernels, the Gaussian kernel achieves better image quality. Experimental results show that the compression performance of our method gains much improvement.
Keywords: image compression, 2D discrete wavelet transform (DWT-2D), support vector regression (SVR), SVM kernels, run-length, arithmetic coding
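The pipeline's final entropy-coding stage uses run-length and arithmetic coders; run-length coding pays off because quantized wavelet detail coefficients are mostly zero. A minimal run-length coder sketch (the coefficient values are illustrative):

```python
def rle_encode(seq):
    """Run-length encode a sequence into (value, count) pairs."""
    runs = []
    for x in seq:
        if runs and runs[-1][0] == x:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([x, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Invert rle_encode: expand (value, count) pairs back to the sequence."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

# Illustrative quantized detail coefficients: long zero runs compress well.
coeffs = [0, 0, 0, 5, 5, 0, 0, 0, 0, -3, 0, 0]
encoded = rle_encode(coeffs)
```

An arithmetic coder would then compress the (value, count) stream further by modeling symbol probabilities; the round-trip here is lossless.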
Procedia PDF Downloads 381
8975 The Impact of Unconditional and Conditional Conservatism on Cost of Equity Capital: A Quantile Regression Approach for MENA Countries
Authors: Khalifa Maha, Ben Othman Hakim, Khaled Hussainey
Abstract:
Prior empirical studies have investigated the economic consequences of accounting conservatism by examining its impact on the cost of equity capital (COEC). However, findings are not conclusive. We assume that the inconsistent results of such an association may be attributed to the regression models used in data analysis. To address this issue, we re-examine the effect of two dimensions of accounting conservatism, unconditional conservatism (U_CONS) and conditional conservatism (C_CONS), on the COEC for a sample of listed firms from Middle Eastern and North African (MENA) countries, applying the quantile regression (QR) approach developed by Koenker and Bassett (1978). While the classical ordinary least squares (OLS) method is widely used in empirical accounting research, it may produce inefficient and biased estimates in the case of departures from normality or long-tailed error distributions. The QR method is more robust than OLS to this kind of problem: it allows the coefficients on the independent variables to shift across the distribution of the dependent variable, whereas OLS estimates only the conditional mean effects of a response variable. We find, as predicted, that U_CONS has a significant positive effect on the COEC, whereas C_CONS has a negative impact. Findings also suggest that the effects of the two dimensions of accounting conservatism differ considerably across COEC quantiles. Comparing results from the QR method with those of OLS, this study sheds more light on the association between accounting conservatism and COEC.
Keywords: unconditional conservatism, conditional conservatism, cost of equity capital, OLS, quantile regression, emerging markets, MENA countries
Procedia PDF Downloads 359
8974 Profitability Assessment of Granite Aggregate Production and the Development of a Profit Assessment Model
Authors: Melodi Mbuyi Mata, Blessing Olamide Taiwo, Afolabi Ayodele David
Abstract:
The purpose of this research is to create empirical models for assessing the profitability of granite aggregate production in quarries in Akure, Ondo State. In addition, an artificial neural network (ANN) model and multivariate prediction models for granite profitability were developed in the study. A formal survey questionnaire was used to collect data. The data extracted from the case study mine include granite marketing operations, royalties, production costs, and mine production information. The following methods were used to achieve the goal of this study: descriptive statistics, MATLAB 2017, and SPSS 16.0 software for analyzing and modeling the data collected from granite traders in the study areas. The prediction accuracy of the ANN and multivariate regression models was compared using the coefficient of determination (R²), root mean square error (RMSE), and mean square error (MSE). The model evaluation indices revealed that the ANN model was the more suitable for predicting generated profit in a typical quarry, owing to the regression model's higher prediction error. More quarries in Nigeria's southwest region and other geopolitical zones should be considered to improve ANN prediction accuracy.
Keywords: national development, granite, profitability assessment, ANN models
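The three comparison metrics named above (R², RMSE, MSE) have short pure-Python definitions; the profit figures below are illustrative, not the study's data:

```python
def mse(actual, predicted):
    """Mean squared error."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error."""
    return mse(actual, predicted) ** 0.5

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# Illustrative observed vs predicted quarry profits (arbitrary units).
actual    = [10.0, 12.0, 15.0, 18.0, 20.0]
predicted = [10.5, 11.5, 15.0, 18.5, 19.5]
```

Comparing two models then amounts to computing these on the same held-out data: the model with lower MSE/RMSE and R² closer to 1 is preferred.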
Procedia PDF Downloads 101
8973 Determining Variables in Mathematics Performance According to Gender in Mexican Elementary School
Authors: Nora Gavira Duron, Cinthya Moreda Gonzalez-Ortega, Reyna Susana Garcia Ruiz
Abstract:
This paper's objective is to analyze mathematics performance in the Learning Evaluation National Plan (PLANEA, for its Spanish initials: Plan Nacional para la Evaluación de los Aprendizajes), applied to Mexican students enrolled in the last elementary-school year during the 2017-2018 academic year. The test was conducted nationwide in 3,573 schools, using a sample of 108,083 students, whose average in mathematics, on a scale of 0 to 100, was 45.6 points. 75% of the sample analyzed did not reach the sufficiency level (60 points); only 2% scored 90 or higher. Performance is analyzed considering whether there are differences by gender, marginalization level, public or private school enrollment, parents' academic background, and whether students live with their parents. Likewise, the impact of these variables (among others) on school performance by gender is evaluated using multivariate logistic (logit) regression analysis. The results show there are no significant differences in mathematics performance by gender in elementary school; nevertheless, the impact exerted by mothers who studied at least high school is of great relevance for students, particularly for girls. Other determining variables are students' resilience, their parents' economic status, and attendance at private schools, strengthened by the mother's education.
Keywords: multivariate regression analysis, academic performance, learning evaluation, mathematics result per gender
Procedia PDF Downloads 146
8972 A Multilevel Analysis of Predictors of Early Antenatal Care Visits among Women of Reproductive Age in Benin: 2017/2018 Benin Demographic and Health Survey
Authors: Ebenezer Kwesi Armah-Ansah, Kenneth Fosu Oteng, Esther Selasi Avinu, Eugene Budu, Edward Kwabena Ameyaw
Abstract:
Background: Maternal mortality, particularly in Benin, is a major public health concern in Sub-Saharan Africa. To provide a positive pregnancy experience and reduce maternal morbidities, all pregnant women must get appropriate and timely prenatal support. However, many pregnant women in developing countries, including Benin, begin antenatal care late. There is a paucity of empirical literature on the prevalence and predictors of early antenatal care (ANC) visits in Benin. As a result, the purpose of this study is to investigate the prevalence and predictors of early ANC visits among women of reproductive age in Benin. Methods: This is a secondary analysis of the 2017/2018 Benin Demographic and Health Survey (BDHS) data. The study involved 6,919 eligible women. Data analysis was conducted using Stata version 14.2 for Mac OS. We adopted multilevel logistic regression to examine the predictors of early ANC visits in Benin. The results are presented as odds ratios (ORs) with 95% confidence intervals (CIs), with p-value <0.05 used to determine significant associations. Results: The prevalence of early ANC visits among pregnant women in Benin was 57.03% [95% CI: 55.41-58.64]. In the final multilevel logistic regression, the odds of an early ANC visit were higher among women aged 30-34 [aOR=1.60, 95% CI=1.17-2.18] compared to those aged 15-19; women with primary education [aOR=1.22, 95% CI=1.06-1.42] compared to non-educated women; women covered by health insurance [aOR=3.03, 95% CI=1.35-6.76] compared to those not covered; women for whom getting the money needed for treatment was not a big problem [aOR=1.31, 95% CI=1.16-1.49]; women for whom distance to the health facility was not a big problem [aOR=1.23, 95% CI=1.08-1.41]; and women whose partners had secondary or higher education [aOR=1.35, 95% CI=1.15-1.57] compared with those whose partners had no education. However, women who had four or more births [aOR=0.60, 95% CI=0.48-0.74] and those in the Atacora Region [aOR=0.50, 95% CI=0.37-0.68] had lower odds of an early ANC visit. Conclusion: This study revealed a relatively high prevalence of early ANC visits among women of reproductive age in Benin. Women's age, the educational status of women and their partners, parity, health insurance coverage, distance to health facilities, and region were all associated with early ANC visits. These factors ought to be taken into account when developing ANC policies and strategies in order to boost early ANC visits among women in Benin. This will significantly reduce maternal and newborn mortality and help achieve the World Health Organization's recommendation that all pregnant women initiate ANC visits within the first three months of pregnancy.
Keywords: antenatal care, Benin, maternal health, pregnancy, DHS, public health
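A quick way to read the adjusted odds ratios reported above: an aOR is conventionally significant at the 5% level when its 95% CI excludes 1. A small helper applied to a few of the reported estimates:

```python
def is_significant(ci_low, ci_high):
    """A 95% CI for an odds ratio that excludes 1 implies p < 0.05."""
    return not (ci_low <= 1.0 <= ci_high)

# (aOR, CI low, CI high) as reported in the abstract.
estimates = {
    "age 30-34":        (1.60, 1.17, 2.18),
    "health insurance": (3.03, 1.35, 6.76),
    "parity >= 4":      (0.60, 0.48, 0.74),
    "Atacora region":   (0.50, 0.37, 0.68),
}

flags = {name: is_significant(lo, hi)
         for name, (aor, lo, hi) in estimates.items()}
```

All four CIs exclude 1, consistent with the abstract's reading; an aOR above 1 raises the odds of an early visit, one below 1 lowers them.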
Procedia PDF Downloads 66
8971 Higher Consumption of White Rice Increases the Risk of Metabolic Syndrome in Adults with Abdominal Obesity
Authors: Zahra Bahadoran, Parvin Mirmiran, Fereidoun Azizi
Abstract:
Background: Higher consumption of white rice has been suggested as a risk factor for the development of metabolic abnormalities. In this study, we investigated the association between consumption of white rice and the 3-year occurrence of metabolic syndrome (MetS) in adults with and without abdominal obesity. Methods: This longitudinal study was conducted within the framework of the Tehran Lipid and Glucose Study on 1,476 adults aged 19-70 years. Dietary intakes were measured at baseline using a validated 168-item semi-quantitative food frequency questionnaire. Biochemical and anthropometric measurements were evaluated both at baseline (2006-2008) and after the 3-year follow-up (2009-2011). MetS and its components were defined according to the diagnostic criteria proposed by NCEP ATP III and the new waist circumference cutoff points for Iranian adults. Multiple logistic regression models were used to estimate the occurrence of MetS in each quartile of white rice consumption. Results: The mean age of participants was 37.8±12.3 y, and mean BMI was 26.0±4.5 kg/m² at baseline. The prevalence of MetS was significantly higher in subjects with abdominal obesity (40.9 vs. 16.2%, P<0.01). There was no significant difference in white rice consumption between the two groups. Mean daily intake of white rice was 93±59, 209±58, 262±60, and 432±224 g/d in the first to fourth quartiles of white rice, respectively. Stratified analysis by categories of waist circumference showed that higher consumption of white rice was more strongly related to the risk of metabolic syndrome in participants with abdominal obesity (OR: 2.34, 95% CI: 1.14-4.41 vs. OR: 0.99, 95% CI: 0.60-1.65). Conclusion: We demonstrated that higher consumption of white rice may be a risk factor for the development of metabolic syndrome in adults with abdominal obesity.
Keywords: white rice, abdominal obesity, metabolic syndrome, food science, triglycerides
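The quartile-based analysis above requires assigning each subject's daily intake to a quartile using cut points computed from the sample itself. A minimal sketch with made-up intakes (not the cohort's data):

```python
def quartile_cutpoints(values):
    """25th, 50th, 75th percentile cut points (simple nearest-rank rule)."""
    s = sorted(values)
    n = len(s)
    return [s[n // 4], s[n // 2], s[(3 * n) // 4]]

def assign_quartile(x, cuts):
    """Return 1-4 for the quartile group that intake x falls into."""
    q1, q2, q3 = cuts
    if x < q1:
        return 1
    if x < q2:
        return 2
    if x < q3:
        return 3
    return 4

# Illustrative daily white rice intakes (g/day).
intakes = [80, 120, 150, 200, 210, 250, 260, 300, 350, 430, 450, 500]
cuts = quartile_cutpoints(intakes)
groups = [assign_quartile(x, cuts) for x in intakes]
```

The logistic regression then uses these group labels (with the first quartile as reference) to estimate an OR per quartile, which is how estimates like OR 2.34 for the top quartile arise.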
Procedia PDF Downloads 446
8970 Evidence Based Approach on Beliefs and Perceptions on Mental Health Disorder and Substance Abuse: The Role of a Social Worker
Authors: Helena Baffoe
Abstract:
The US has developed numerous programs over the past 50 years to enhance the lives of those who suffer from mental health disorders and substance abuse, as well as the effectiveness of their treatments. Despite these advances, there has not been a corresponding improvement in American public attitudes and beliefs about mental health disorders and substance abuse. Highly publicized acts of violence frequently elicit comments that blame the perpetrator's perceived mental health disorder, since such people are thought to be substance abusers. Despite these strong public beliefs and perceptions about mental disorders and substance abuse, concrete empirical evidence supporting this perception is lacking and has not been integrated. Rich data were collected from the Substance Abuse and Mental Health Services Administration (SAMHSA) and analyzed, using logistic regression and an instrumental variable, under the hypothesis that people who are diagnosed with a mental health disorder are likely to be diagnosed with substance abuse. It was found that depressive, anxiety, and trauma/stressor disorders constitute the most common mental disorders in the United States, and the study could not find statistically significant evidence that being diagnosed with these leading mental health disorders necessarily implies that such a patient is diagnosed with substance abuse. Thus, the public has a misconception of mental health and substance abuse issues, and social workers' responsibilities are outlined in order to help ameliorate these attitudes and perceptions.
Keywords: mental health disorder, substance use, empirical evidence, logistic regression
Procedia PDF Downloads 77
8969 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures. Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
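The model-averaging step at the heart of the framework can be illustrated with a small sketch. The three "models" here are synthetic stand-ins (noisy probability outputs with a fixed seed), not the study's actual logistic regression, random forest, and neural network:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
y = rng.integers(0, 2, n)  # 1 = "unhealthy air" day, 0 = "healthy"

def noisy_probs(noise_sd):
    """A stand-in model: probabilities centered on the truth plus independent noise."""
    return np.clip(0.5 + (y - 0.5) * 0.6 + rng.normal(0, noise_sd, n), 0.01, 0.99)

p1, p2, p3 = noisy_probs(0.35), noisy_probs(0.35), noisy_probs(0.35)
combined = (p1 + p2 + p3) / 3  # average the three models' outputs

def accuracy(p):
    return float(np.mean((p > 0.5) == y))

acc1, acc2, acc3 = accuracy(p1), accuracy(p2), accuracy(p3)
acc_combined = accuracy(combined)
print(acc1, acc2, acc3, acc_combined)
```

Averaging cancels much of the models' independent error, which is why the combined model predicts more accurately than any individual model, as the abstract reports.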
Procedia PDF Downloads 125
8968 The Impact of International Financial Reporting Standards (IFRS) Adoption on Performance’s Measure: A Study of UK Companies
Authors: Javad Izadi, Sahar Majioud
Abstract:
This study presents an approach to assessing the choice of performance measures of companies in the United Kingdom after the application of IFRS in 2005. The aim of this study is to investigate the effects of IFRS on the choice of performance evaluation methods for UK companies. We analyse, through an econometric model, the relationship between the dependent variable, firm performance (a nominal variable), and the independent variables. The independent variables are split into two main groups: the first is the group of accounting-based measures: earnings per share, return on assets, and return on equity. The second is the group of market-based measures: market value of property, plant and equipment, research and development, sales growth, market-to-book value, leverage, segment, and size of companies. The regression used is a multinomial logistic regression, performed on a sample of 130 UK listed companies. Our findings show that after IFRS adoption, companies give more importance to some variables, such as return on equity and sales growth, to assess their performance, whereas return on assets and the market-to-book value ratio do not have as much importance as before IFRS in evaluating company performance. Also, some variables no longer have an impact on the performance measures, such as earnings per share. These findings are empirically important for businesses in matters related to IFRS and company performance measurement. Keywords: performance measures, nominal variable, econometric model, evaluation methods
Procedia PDF Downloads 138
8967 Predicting Bridge Pier Scour Depth with SVM
Authors: Arun Goel
Abstract:
Prediction of maximum local scour is necessary for the safe and economical design of bridges. A number of equations have been developed over the years to predict local scour depth using laboratory data, and a few pier equations have also been proposed using field data. Most of these equations are empirical in nature, as indicated by past publications. In this paper, attempts have been made to compute the local depth of scour around bridge piers in dimensional and non-dimensional form by using linear regression, simple regression, and SVM (Poly and Rbf) techniques, along with a few conventional empirical equations. The outcome of this study suggests that SVM (Poly and Rbf) based modeling can be employed as an alternative to linear regression, simple regression, and the conventional empirical equations in predicting the scour depth of bridge piers. The results of the present study, on the basis of the non-dimensional form of bridge pier scour, indicate improved performance of SVM (Poly and Rbf) in comparison to the dimensional form. Keywords: modeling, pier scour, regression, prediction, SVM (Poly and Rbf kernels)
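The kernel-based regression idea can be sketched with a closely related method, kernel ridge regression, using the same Poly and Rbf kernels the abstract names. The data are synthetic, and the scour relation used here (depth growing as flow intensity to the 0.6 power) is only an assumed illustration, not a result from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic pier-scour data: scour depth grows nonlinearly with flow intensity.
x = rng.uniform(0.2, 2.0, (80, 1))
y = 1.5 * x[:, 0] ** 0.6 + rng.normal(0, 0.05, 80)

def rbf_kernel(a, b, gamma=2.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(a, b, degree=3):
    return (a @ b.T + 1.0) ** degree

def kernel_ridge_fit_predict(K_train, y, K_test, lam=1e-2):
    # Regularized solve: alpha = (K + lam*I)^-1 y, then predict via K_test @ alpha.
    alpha = np.linalg.solve(K_train + lam * np.eye(len(y)), y)
    return K_test @ alpha

xt = np.linspace(0.3, 1.9, 20)[:, None]          # held-out test points
yt_true = 1.5 * xt[:, 0] ** 0.6                  # noiseless ground truth

pred_rbf = kernel_ridge_fit_predict(rbf_kernel(x, x), y, rbf_kernel(xt, x))
pred_poly = kernel_ridge_fit_predict(poly_kernel(x, x), y, poly_kernel(xt, x))

rmse_rbf = float(np.sqrt(np.mean((pred_rbf - yt_true) ** 2)))
rmse_poly = float(np.sqrt(np.mean((pred_poly - yt_true) ** 2)))
print(rmse_rbf, rmse_poly)
```

Both kernels recover the nonlinear scour relation closely; the design choice mirrors the paper's comparison of Poly versus Rbf kernels on the same data.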
Procedia PDF Downloads 451
8966 Preserving Privacy in Workflow Delegation Models
Authors: Noha Nagy, Hoda Mokhtar, Mohamed El Sherkawi
Abstract:
The popularity of workflow delegation models and the increasing number of workflow provenance-aware systems motivate the need for stricter delegation models that combine different approaches for enhanced security while respecting workflow privacy. Although modern enterprises seek conformance to workflow constraints to ensure the correctness of their work, these constraints pose a threat to security, because they can be good seeds for attacks on privacy even in secure models. This paper introduces a comprehensive Workflow Delegation Model (WFDM) that utilizes provenance and workflow constraints to prevent malicious delegates from attacking workflow privacy, as well as extending the delegation functionalities. In addition, we argue for the need to exploit workflow constraints to improve workflow security models. Keywords: workflow delegation models, secure workflow, workflow privacy, workflow provenance
Procedia PDF Downloads 331
8965 Effect of Atrial Flutter on Alcoholic Cardiomyopathy
Authors: Ibrahim Ahmed, Richard Amoateng, Akhil Jain, Mohamed Ahmed
Abstract:
Alcoholic cardiomyopathy (ACM) is a type of acquired cardiomyopathy caused by chronic alcohol consumption. ACM is frequently associated with arrhythmias such as atrial flutter. Our aim was to characterize the patient demographics and investigate the effect of atrial flutter (AF) on ACM. This was a retrospective cohort study using the Nationwide Inpatient Sample database to identify admissions in adults with principal and secondary diagnoses of alcoholic cardiomyopathy and atrial flutter from 2019. Multivariate linear and logistic regression models were adjusted for age, gender, race, household income, insurance status, Elixhauser comorbidity score, hospital location, bed size, and teaching status. The primary outcome was all-cause mortality, and secondary outcomes were the length of stay (LOS) and total charge in USD. There was a total of 21,855 admissions with alcoholic cardiomyopathy, of which 1,635 had atrial flutter (AF-ACM). Compared to the Non-AF-ACM cohort, the AF-ACM cohort had fewer females (4.89% vs 14.54%, p<0.001), was older (58.66 vs 56.13 years, p<0.001), had fewer Native Americans (0.61% vs 2.67%, p<0.01), fewer small (19.27% vs 22.45%, p<0.01) and medium-sized (23.24% vs 28.98%, p<0.01) hospitals but more large-sized hospitals (57.49% vs 48.57%, p<0.01), more Medicare-insured (40.37% vs 34.08%, p<0.05) and fewer Medicaid-insured (23.55% vs 33.70%, p<0.001) patients, less hypertension (10.7% vs 15.01%, p<0.05), and more obesity (24.77% vs 16.35%, p<0.001). Compared to the Non-AF-ACM cohort, there was no difference in the AF-ACM cohort mortality rate (6.13% vs 4.20%, p=0.0998), unadjusted mortality OR 1.49 (95% CI 0.92-2.40, p=0.102), or adjusted mortality OR 1.36 (95% CI 0.83-2.24, p=0.221), but there were differences in LOS (+1.23 days, 95% CI 0.34-2.13, p<0.01) and total charge (+$28,860.30, 95% CI 11,883.96-45,836.60, p<0.01).
In patients admitted with ACM, the presence of AF was not associated with a higher all-cause mortality rate or odds of all-cause mortality; however, it was associated with a 1.23-day increase in LOS and a $28,860.30 increase in total hospitalization charge. Native American race, older age, and obesity were risk factors for the presence of AF in ACM. Keywords: alcoholic cardiomyopathy, atrial flutter, cardiomyopathy, arrhythmia
Procedia PDF Downloads 112
8964 Geometric Simplification Method of Building Energy Model Based on Building Performance Simulation
Authors: Yan Lyu, Yiqun Pan, Zhizhong Huang
Abstract:
In the design stage of a new building, an energy model of the building is often required for analysis of its energy-efficiency performance. In practice, a certain degree of geometric simplification should be done in establishing building energy models, since the detailed geometric features of a real building are hard to describe perfectly in most energy simulation engines, such as ESP-r, eQuest or EnergyPlus. A detailed description is not necessary when extremely high accuracy is not demanded. Therefore, this paper analyzed the relationship between the error of the simulation result from building energy models and the geometric simplification of the models. The following two parameters are selected as indices to characterize the geometric features of a building in energy simulation: the southward projected area and the total side surface area of the building. Based on this parameterization method, the simplification from an arbitrary columnar building to a typical-shape (cuboid) building can be made for energy modeling. The results of this study indicate that this simplification leads to an error of less than 7% for buildings with a ratio of southward projection length to total bottom perimeter of 0.25-0.35, which covers most situations. Keywords: building energy model, simulation, geometric simplification, design, regression
Procedia PDF Downloads 180
8963 Prediction of Bariatric Surgery Publications by Using Different Machine Learning Algorithms
Authors: Senol Dogan, Gunay Karli
Abstract:
Identification of relevant publications based on a Medline query is time-consuming and error-prone. An AI-based process has the potential to solve this problem without any manual work. To the best of our knowledge, our study is the first to investigate the ability of machine learning to identify relevant articles accurately. Five different machine learning algorithms were tested using 23 predictors based on several metadata fields attached to publications. We find that the Boosted model is the best-performing algorithm, with an overall accuracy of 96%. In addition, the specificity and sensitivity of the algorithm are 97% and 93%, respectively. As a result of this work, we conclude that the same procedure can be applied to cancer gene-expression big data. Keywords: prediction of publications, machine learning, algorithms, bariatric surgery, comparison of algorithms, boosted, tree, logistic regression, ANN model
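The accuracy, sensitivity, and specificity figures reported above are all derived from the confusion matrix of a binary classifier. A minimal sketch, using toy labels rather than the study's publication data:

```python
def confusion_metrics(y_true, y_pred):
    """Compute accuracy, sensitivity, and specificity from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # true positive rate: relevant articles found
        "specificity": tn / (tn + fp),  # true negative rate: irrelevant articles rejected
    }

# Toy example: 10 relevant and 10 irrelevant publications, one miss of each kind.
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 9 + [0] + [0] * 9 + [1]
m = confusion_metrics(y_true, y_pred)
print(m)  # {'accuracy': 0.9, 'sensitivity': 0.9, 'specificity': 0.9}
```

In the screening setting the abstract describes, sensitivity (not missing relevant articles) and specificity (not admitting irrelevant ones) matter separately, which is why both are reported alongside overall accuracy.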
Procedia PDF Downloads 209
8962 Learning Dynamic Representations of Nodes in Temporally Variant Graphs
Authors: Sandra Mitrovic, Gaurav Singh
Abstract:
In many industries, including telecommunications, churn prediction has been a topic of active research. A lot of attention has been drawn to devising the most informative features, and this area of research has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no studies including network features have yet proposed a generic way of representing network information. Instead, ad-hoc and dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in an observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic and time-evolving network. To account for this, we propose an approach that constructs the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them, and finally compressing them using an auto-encoder-like method in order to retain reasonably long and informative feature vectors. We test the proposed method on a churn prediction task in the telco domain. To predict churners at timestamp ts+1, we construct training and testing datasets consisting of feature vectors from time intervals [t1, ts-1] and [t2, ts], respectively, and use traditional supervised classification models like SVM and logistic regression. Observed results show the effectiveness of the proposed approach as compared to ad-hoc feature-selection-based approaches and static node2vec. Keywords: churn prediction, dynamic networks, node2vec, auto-encoders
Procedia PDF Downloads 314
8961 Statistical Model of Water Quality in Estero El Macho, Machala-El Oro
Authors: Rafael Zhindon Almeida
Abstract:
Surface water quality is an important concern, and statistical models support the evaluation and prediction of water quality conditions. The objective of this study is to develop a statistical model that can accurately predict the water quality of the El Macho estuary in the city of Machala, El Oro province. The methodology involves a thorough review of theoretical foundations to improve the understanding of statistical modeling for water quality analysis. The research design is correlational, using a multivariate statistical model involving multiple linear regression and principal component analysis. The results indicate that water quality parameters such as fecal coliforms, biochemical oxygen demand, chemical oxygen demand, iron, and dissolved oxygen exceed the allowable limits. The water of the El Macho estuary is determined to be below the required water quality criteria. The multiple linear regression model, based on chemical oxygen demand and total dissolved solids, explains 99.9% of the variance of the dependent variable. In addition, principal component analysis shows that the model has an explanatory power of 86.242%. The study successfully developed a statistical model to evaluate the water quality of the El Macho estuary. The estuary did not meet the water quality criteria, with several parameters exceeding the allowable limits. The multiple linear regression model and principal component analysis provide valuable information on the relationships between the various water quality parameters. The findings of the study emphasize the need for immediate action to improve the water quality of the El Macho estuary to ensure the preservation and protection of this valuable natural resource. Keywords: statistical modeling, water quality, multiple linear regression, principal components, statistical models
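The two techniques named above — multiple linear regression summarized by an R², and principal component analysis summarized by explained-variance ratios — can be sketched together. The variables (COD, TDS, dissolved oxygen) and coefficients below are illustrative assumptions, not the estuary's measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120

# Synthetic water-quality records: a quality index driven mainly by COD and TDS.
cod = rng.normal(50, 10, n)        # chemical oxygen demand
tds = rng.normal(300, 40, n)       # total dissolved solids
do_ = rng.normal(5, 1, n)          # dissolved oxygen, unrelated here
quality = 0.8 * cod + 0.1 * tds + rng.normal(0, 0.5, n)

# Multiple linear regression via least squares, with R^2.
X = np.column_stack([np.ones(n), cod, tds])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
resid = quality - X @ beta
r2 = float(1 - resid.var() / quality.var())

# PCA on the standardized predictor matrix: explained-variance ratios.
M = np.column_stack([cod, tds, do_])
Z = (M - M.mean(axis=0)) / M.std(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Z.T))[::-1]   # descending eigenvalues
evr = eigvals / eigvals.sum()
print(round(r2, 3), np.round(evr, 3))
```

With COD and TDS driving nearly all of the variation, the regression R² approaches 1, echoing the 99.9% figure the abstract reports for its own two-predictor model; the explained-variance ratios play the role of the 86.242% PCA figure.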
Procedia PDF Downloads 98
8960 Survival and Hazard Maximum Likelihood Estimator with Covariate Based on Right Censored Data of Weibull Distribution
Authors: Al Omari Mohammed Ahmed
Abstract:
This paper focuses on maximum likelihood estimation with covariates, where the covariates are incorporated into the Weibull model. Under this regression model, the covariate parameters, shape parameter, survival function, and hazard rate of the Weibull regression distribution are estimated via maximum likelihood from right-censored data. The mean square error (MSE) and absolute bias are used to compare the performance of the Weibull regression distribution. For the simulation comparison, the study used various sample sizes and several specific values of the Weibull shape parameter. Keywords: Weibull regression distribution, maximum likelihood estimator, survival function, hazard rate, right censoring
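A sketch of the maximum likelihood machinery for right-censored Weibull data, without the covariate term for brevity: for a fixed shape k, the scale MLE has the closed form λ̂ = (Σᵢ tᵢᵏ / d)^(1/k), where the sum runs over all observation times and d is the number of observed (uncensored) events, so the shape can be found by a one-dimensional profile-likelihood search. The data below are simulated, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(4)

true_shape, true_scale = 1.8, 10.0
n = 2000
t = true_scale * rng.weibull(true_shape, n)   # latent survival times
c = rng.uniform(5, 25, n)                     # independent censoring times
obs = np.minimum(t, c)
event = (t <= c).astype(float)                # 1 = event observed, 0 = right-censored

def profile_loglik(k, obs, event):
    """Log-likelihood at shape k, with scale profiled out in closed form."""
    d = event.sum()
    scale = (np.sum(obs ** k) / d) ** (1 / k)
    z = (obs / scale) ** k
    # Events contribute log f(t); censored observations contribute log S(t) = -z.
    return np.sum(event * (np.log(k) + (k - 1) * np.log(obs / scale) - np.log(scale)) - z)

ks = np.linspace(0.5, 4.0, 700)
lls = np.array([profile_loglik(k, obs, event) for k in ks])
k_hat = float(ks[np.argmax(lls)])
scale_hat = float((np.sum(obs ** k_hat) / event.sum()) ** (1 / k_hat))
print(round(k_hat, 2), round(scale_hat, 2))
```

The recovered shape and scale land close to the true values (1.8 and 10.0) despite roughly a quarter of the observations being censored, illustrating why the censored likelihood, rather than a naive fit to the observed times, is needed.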
Procedia PDF Downloads 440
8959 Analysis of the Savings Behaviour of Rice Farmers in Tiaong, Quezon, Philippines
Authors: Angelika Kris D. Dalangin, Cesar B. Quicoy
Abstract:
Rice farming is a major source of livelihood and employment in the Philippines, but it requires a substantial amount of capital. Capital may come from income (farm, non-farm, and off-farm), savings, and credit. However, rice farmers suffer from a lack of capital due to high input costs and low productivity. Capital insufficiency, coupled with low productivity, hinders them from meeting their basic household and production needs. Hence, they resort to borrowing money, mostly from informal lenders who charge very high interest rates. As another source of capital, savings can help rice farmers meet their basic needs for both the household and the farm. However, information is inadequate on whether farmers save, as well as why they do not depend on savings to augment their lack of capital. Thus, it is worth analyzing how rice farmers save. The study revealed, using actual savings, defined as the difference between household income and expenditure, that about three-fourths (72%) of the farmers interviewed are savers. However, when asked whether they are savers or not, more than half of them considered themselves non-savers. This gap shows that many farmers think they do not have savings at all; hence they continue to borrow money and do not depend on savings to augment their lack of capital. The study also identified the forms of savings, saving motives, and savings utilization among rice farmers. Results revealed that, over the past 12 months, most farmers saved cash at home for liquidity purposes, while others deposited cash in banks and/or saved in the form of livestock. Among farmers' most important reasons for saving are daily household expenses, building a house, emergencies, retirement, and the next production cycle. Furthermore, the study assessed the factors affecting the rice farmers’ savings behaviour using logistic regression.
Results showed that the significant factors were the presence of non-farm income, per capita net farm income, and per capita household expenses. The presence of non-farm income and per capita net farm income positively affect the farmers’ savings behaviour, while per capita household expenses have a negative effect. The effects of per capita net farm income and household expenses are, however, negligible, given the very small marginal change they imply in the probability that a farmer is a saver. Generally, income and expenditure proved to be significant factors affecting the savings behaviour of the rice farmers. However, most farmers could not save regularly due to low farm income and high household and farm expenditures. Thus, it is highly recommended that the government develop programs or implement policies that will create more jobs for the farmers and their family members. In addition, programs and policies should be implemented to increase farm productivity and income. Keywords: agricultural economics, agricultural finance, binary logistic regression, logit, Philippines, Quezon, rice farmers, savings, savings behaviour
Procedia PDF Downloads 228
8958 Effects of Polyvictimization in Suicidal Ideation among Children and Adolescents in Chile
Authors: Oscar E. Cariceo
Abstract:
In Chile, there is a lack of evidence about the impact of polyvictimization on the emergence of suicidal thoughts among children and young people. Thus, this study aims to explore the association between the episodes of polyvictimization suffered by Chilean children and young people and the manifestation of signs related to suicidal tendencies. To achieve this purpose, secondary data from the First Polyvictimization Survey on Children and Adolescents of 2017 were analyzed, and a binomial logistic regression model was applied to establish the probability that young people are experiencing suicidal ideation episodes. The main findings show that girls between the ages of 13 and 15 years, who are in seventh grade and second in subsidized schools, are more likely to express suicidal ideas, and this likelihood increases if they have suffered different types of victimization, particularly physical violence, psychological aggression, and sexual abuse. Keywords: Chile, polyvictimization, suicidal ideation, youth
Procedia PDF Downloads 178
8957 A Method to Saturation Modeling of Synchronous Machines in d-q Axes
Authors: Mohamed Arbi Khlifi, Badr M. Alshammari
Abstract:
This paper discusses general methods for saturation modeling in the steady-state, two-axis (d-q) frame models of synchronous machines. In particular, the important role of the magnetic coupling between the d and q axes (the cross-magnetizing phenomenon) is demonstrated. For that purpose, distinct methods of saturation modeling of damper synchronous machines with cross-saturation are identified, and detailed model synthesis in d-q axes is presented. A number of models are given in their final developed form. The procedure and the novel models are verified by a critical application to prove the validity of the method, and the equivalence between all developed models is reported. Advantages of some of the models over existing ones and their applicability are discussed. Keywords: cross-magnetizing, model synthesis, synchronous machine, saturated modeling, state-space vectors
Procedia PDF Downloads 454
8956 Impact of Social Transfers on Energy Poverty in Turkey
Authors: Julide Yildirim, Nadir Ocal
Abstract:
Even though there are many studies investigating the extent and determinants of poverty, there is a paucity of research on energy poverty in Turkey. The aim of this paper is threefold: first, to investigate the extent of energy poverty in Turkey by using Household Budget Survey datasets covering the 2005-2016 period; second, to examine the risk factors for energy poverty; and finally, to assess the impact of social assistance program participation on energy poverty. The existing literature employs alternative methods to measure energy poverty. In this study, energy poverty is measured by the expenditure approach, where people are considered energy poor if they spend more than 10 per cent of their income to meet their energy requirements. Empirical results indicate that the energy poverty rate is around 20 per cent during the period under consideration. Since Household Budget Survey panel data are not available for the 2005-2016 period, a pseudo panel has been constructed. The panel logistic regression method is utilized to determine the risk factors for energy poverty. The empirical results demonstrate that work status and education level have a statistically significant impact on the likelihood of energy poverty. In the final part of the paper, the impact of social transfers on energy poverty is examined by utilizing a panel biprobit model, where social transfer participation and energy poverty incidence are jointly modeled. The empirical findings indicate that social transfer program participation reduces energy poverty. The negative association between energy poverty and social transfer program participation is more pronounced in urban areas compared with rural areas. Keywords: energy poverty, social transfers, panel data models, Turkey
Procedia PDF Downloads 141