Search results for: transition regression model
Paper Count: 19634

19214 Choosing between the Regression Correlation, the Rank Correlation, and the Correlation Curve

Authors: Roger L. Goodwin

Abstract:

This paper presents a rank correlation curve. The traditional correlation coefficient is valid for continuous variables and, through Spearman's rank statistics, for integer-valued variables; since the correlation coefficient is already established in rank statistics, the same calculation can be extended to the correlation curve. This paper examines two survey questions whose responses are non-continuous variables. We show weak to moderate correlation, with one question exerting a clear negative effect on the other; a review of the qualitative literature can explain which question and why. The rank correlation curve shows which collection of responses has a positive slope and which has a negative slope, information that is unavailable from the flat, "first-glance" correlation statistics.
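
As a minimal illustration of the flat statistics the paper starts from, the sketch below computes both the Pearson and Spearman (rank) correlations for two hypothetical ordinal survey questions; the data and the 1-5 scale are invented for illustration and are not the paper's survey data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical ordinal (Likert 1-5) responses to two survey questions.
q1 = np.array([1, 2, 2, 3, 4, 4, 5, 5, 3, 2])
q2 = np.array([4, 4, 3, 3, 2, 3, 1, 2, 2, 4])

r, p_r = pearsonr(q1, q2)        # flat "first-glance" correlation
rho, p_rho = spearmanr(q1, q2)   # rank (Spearman) correlation
print(f"Pearson r = {r:.3f} (p = {p_r:.3f})")
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
```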

Keywords: Bayesian estimation, regression model, rank statistics, correlation, correlation curve

Procedia PDF Downloads 440
19213 Estimating the Life-Distribution Parameters of Weibull-Life PV Systems Utilizing Non-Parametric Analysis

Authors: Saleem Z. Ramadan

Abstract:

In this paper, a model is proposed to determine the life-distribution parameters of the useful-life region of PV systems, utilizing a combination of non-parametric and linear regression analysis of the failure data of these systems. Results showed that this method is dependable for analyzing failure-time data for such reliable systems when data are scarce.
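
A standard way to combine a non-parametric step with linear regression for Weibull-distributed lifetimes is median-rank regression: rank-based estimates of the empirical CDF, followed by a linear fit of the linearised Weibull CDF. The sketch below shows that generic technique on invented failure times; it illustrates the idea rather than the authors' exact procedure.

```python
import numpy as np

# Hypothetical failure times (years) for a small PV-system sample.
t = np.sort(np.array([3.1, 5.4, 6.8, 8.9, 11.2, 14.0]))
n = len(t)
ranks = np.arange(1, n + 1)
F = (ranks - 0.3) / (n + 0.4)      # Bernard's median-rank (non-parametric) CDF estimate

# The Weibull CDF linearises as ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta).
x = np.log(t)
y = np.log(-np.log(1.0 - F))
beta, c = np.polyfit(x, y, 1)      # slope = shape parameter beta
eta = np.exp(-c / beta)            # scale parameter eta
print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f} years")
```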

Keywords: masking, bathtub model, reliability, non-parametric analysis, useful life

Procedia PDF Downloads 541
19212 Analysis of Effect of Microfinance on the Profit Level of Small and Medium Scale Enterprises in Lagos State, Nigeria

Authors: Saheed Olakunle Sanusi, Israel Ajibade Adedeji

Abstract:

The study analysed the effect of microfinance on the profit level of small and medium scale enterprises in Lagos. The data for the study were obtained by simple random sampling; a total of one hundred and fifty (150) small and medium scale enterprises (SMEs) were sampled, seventy-five (75) each of microfinance users and non-users. Data were analysed using descriptive statistics, a logit model, a t-test and ordinary least squares (OLS) regression. The mean profit of the enterprises using microfinance is ₦16.8m, while that of the non-users is ₦5.9m; the difference is statistically significant. The logit model specified for the determinants of access to microfinance showed that three of the specified variables, namely the educational status of the enterprise head, credit utilisation and volume of business investment, are significant at P < 0.01. Enterprises with many years of experience, highly educated heads and a high volume of business investment have greater potential access to microfinance. The OLS regression model indicated that three parameters, namely the number of school years, the volume of business investment and (dummy) participation in microfinance, were significant at P < 0.05; these variables are therefore significant determinants of the impact of microfinance on profit level in the study area. The study concludes and recommends that, to raise the profit of small and medium scale enterprises, the full benefit of access to microfinance can be enhanced through investment in social infrastructure and human capital development. Concerted efforts should also be made to encourage non-users of microfinance among SMEs to adopt it in order to boost their profit.
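
For concreteness, a logit model of access to microfinance like the one described can be fit with statsmodels, as in the sketch below; the data are synthetic and the variable names are illustrative stand-ins for the survey variables, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic SME data; names and data-generating process are hypothetical.
rng = np.random.default_rng(0)
n = 150
df = pd.DataFrame({
    "educ_years": rng.integers(6, 17, n).astype(float),
    "credit_utilisation": rng.uniform(0, 1, n),
    "log_investment": rng.normal(12, 1, n),
})
lin = -8 + 0.3 * df["educ_years"] + 2 * df["credit_utilisation"] + 0.3 * df["log_investment"]
df["uses_microfinance"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = sm.add_constant(df[["educ_years", "credit_utilisation", "log_investment"]])
res = sm.Logit(df["uses_microfinance"], X).fit(disp=0)
print(res.params.round(3))   # coefficients of the access-to-microfinance logit
```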

Keywords: credit utilisation, logit model, microfinance, small and medium enterprises

Procedia PDF Downloads 183
19211 Electronic/Optoelectronic Property Tuning in Two-Dimensional Transition Metal Dichalcogenides via High Pressure

Authors: Juan Xia, Jiaxu Yan, Ze Xiang Shen

Abstract:

The tuneable interlayer interactions in two-dimensional (2D) transition metal dichalcogenides (TMDs) offer an exciting platform for exploring new physics and applications via material variety, thickness, stacking sequence, electromagnetic field, and stress/strain. Compared with these five tuning methods, high pressure is a clean and powerful tool for inducing dramatic changes in the lattice parameters and physical properties of 2D TMD materials. For instance, high pressure can strengthen the van der Waals interactions along the c-axis and shorten the covalent bonds in the atomic plane, leading to the typical first-order structural transition (2Hc to 2Ha for MoS₂) or to metallization. In particular, in the case of WTe₂, its unique symmetry endows significant anisotropy and correspondingly unexpected properties, including giant magnetoresistance, pressure-induced superconductivity and Weyl semimetal states. Upon increasing pressure, the Raman peaks of WTe₂ at ~120 cm⁻¹ are gradually red-shifted and totally suppressed above 10 GPa, attributed to a possible structural instability of the orthorhombic Td phase under high pressure and a phase transition to a new monoclinic T' phase with inversion symmetry. Distinct electronic structures near the Fermi level between the Td and T' phases may pave a feasible way to achieve Weyl-state tuning in one material without doping.

Keywords: 2D TMDs, electronic property, high pressure, first-principles calculations

Procedia PDF Downloads 210
19210 [Keynote Talk]: The Challenges and Solutions for Developing Mobile Apps in a Small University

Authors: Greg Turner, Bin Lu, Cheer-Sun Yang

Abstract:

As computing technology advances, smartphone applications can assist student learning in a pervasive way. For example, a mobile app for PA Common Trees, Pests, and Pathogens, used in the field as a reference tool, allows middle school students to learn about trees and associated pests/pathogens without carrying a textbook. In the past, some studies have examined the Mobile Application Software Development Life Cycle (MADLC), including traditional models such as the waterfall model and more recent agile methods; others study issues related to the software development process. Very little research addresses the simultaneous development of three heterogeneous mobile systems in a small university where the availability of developers is an issue. In this paper, we propose to use a hybrid of the waterfall model and the agile model, known in practice as the Relay Race Methodology (RRM), to reflect the concept of racing and relaying in scheduling. Based on the development project, we observe that the modeling of the transition between any two phases is manifested naturally. Thus, we claim that the RRM model can provide a de facto rather than a de jure basis for the core concept of the MADLC. In this paper, the background of the project is introduced first; the challenges are then pointed out, followed by our solutions. Finally, lessons learned and future work are presented.

Keywords: agile methods, mobile apps, software process model, waterfall model

Procedia PDF Downloads 389
19209 Prediction on Housing Price Based on Deep Learning

Authors: Li Yu, Chenlu Jiao, Hongrun Xin, Yan Wang, Kaiyang Wang

Abstract:

In order to study the impact of various factors on the housing price, we propose different prediction models based on deep learning, applied to existing real-estate data, to more accurately predict the housing price or its future trend. Considering that the factors affecting the housing price vary widely, the proposed prediction models fall into two categories. The first is based on multiple characteristic factors of the real estate: we built a Convolutional Neural Network (CNN) prediction model and a Long Short-Term Memory (LSTM) neural network prediction model based on deep learning, and implemented a logistic regression model for comparison among the three. The second is a time-series model: based on deep learning, we propose an LSTM-1 model built purely on the time series, and implement and compare the LSTM model and the Auto-Regressive Moving Average (ARMA) model. In this paper, a comprehensive study of second-hand housing prices in Beijing was conducted from three aspects: data crawling and analysis, housing price prediction, and result comparison. Ultimately, the best model was identified, which is of great significance for the evaluation and prediction of housing prices in the real estate industry.
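
A minimal version of the pure time-series branch (an LSTM trained on lagged prices) can be sketched with Keras as below; the series, window length, and architecture are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
import tensorflow as tf

# Hypothetical monthly price index; a real run would use the crawled Beijing data.
prices = np.cumsum(np.random.default_rng(0).normal(size=200)) + 100.0

def make_windows(series, w=12):
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    return X[..., None], series[w:]          # (samples, timesteps, 1), targets

X, y = make_windows(prices)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(12, 1)),
    tf.keras.layers.Dense(1),                # next-month price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=16, verbose=0)
print(model.predict(X[-1:], verbose=0)[0, 0])   # one-step-ahead forecast
```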

Keywords: deep learning, convolutional neural network, LSTM, housing prediction

Procedia PDF Downloads 285
19208 Estimation of Transition and Emission Probabilities

Authors: Aakansha Gupta, Neha Vadnere, Tapasvi Soni, M. Anbarsi

Abstract:

Protein secondary structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine and biotechnology. Some aspects of protein function and genome analysis can be predicted by secondary structure prediction, which is used to help annotate sequences, classify proteins, identify domains, and recognize functional motifs. In this paper, we represent protein secondary structure as a mathematical model. To extract and predict the protein secondary structure from the primary structure, we require a set of parameters. Any constants appearing in the model are specified by these parameters, which also provide a mechanism for efficient and accurate use of data. There are many algorithms for estimating these model parameters, the most popular of which is the Expectation Maximization (EM) algorithm. The model parameters are estimated from protein datasets such as RS126 using a Bayesian probabilistic method (the data set being categorical). This work can be extended to compare the efficiency of the EM algorithm with other algorithms for estimating the model parameters, which will in turn lead to an efficient component for protein secondary structure prediction. Further, this paper provides scope for using these parameters to predict the secondary structure of proteins with machine learning techniques such as neural networks and fuzzy logic. The ultimate objective is to obtain accuracy greater than previously achieved.
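
When the structure labels are observed, the transition and emission probabilities have a closed-form maximum-likelihood estimate obtained by counting; the EM (Baum-Welch) algorithm discussed here generalises this to hidden states. The toy sketch below shows the supervised counting step on invented state/observation sequences (not RS126 data).

```python
import numpy as np

# Toy labelled data: states are secondary-structure classes (H, E, C);
# observations are residue categories. Counting gives the supervised MLE.
states = "HHHEECCCHH"
obs    = "ababbccaab"
S, O = sorted(set(states)), sorted(set(obs))
s_idx = {s: i for i, s in enumerate(S)}
o_idx = {o: i for i, o in enumerate(O)}

A = np.zeros((len(S), len(S)))           # transition counts
B = np.zeros((len(S), len(O)))           # emission counts
for s1, s2 in zip(states, states[1:]):
    A[s_idx[s1], s_idx[s2]] += 1
for s, o in zip(states, obs):
    B[s_idx[s], o_idx[o]] += 1

A /= A.sum(axis=1, keepdims=True)        # row-normalise to probabilities
B /= B.sum(axis=1, keepdims=True)
print("transition probabilities:\n", A.round(2))
print("emission probabilities:\n", B.round(2))
```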

Keywords: model parameters, expectation maximization algorithm, protein secondary structure prediction, bioinformatics

Procedia PDF Downloads 452
19207 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression

Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin

Abstract:

This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry; opponents, on the other hand, argue that official disclosure of simulated results only creates unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. Residential property sales data from 2013 to 2016 are used, collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The results also show that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential, indicating that home buyers are more concerned about the frequency than the intensity of flooding. Another contribution of this study is methodological. Classic hedonic price analysis with OLS regression suffers from two spatial problems: an endogeneity problem caused by omitted spatially related variables, and a heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. This study deals with the endogeneity and heterogeneity problems together by combining a spatial fixed-effect model with geographically weighted regression (GWR). A series of studies applying GWR indicates that the hedonic price of certain environmental assets varies spatially; since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially related variables might bias their results. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses, since the effect of flood prevention might vary dramatically by location.
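
The core of GWR is an ordinary weighted least-squares fit repeated at every location, with kernel weights that decay with distance. The sketch below implements that idea directly on synthetic data; it is a by-hand illustration of plain GWR, not the enhanced fixed-effect GWR estimator of the paper.

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Minimal GWR: a weighted least-squares fit at every location, with
    Gaussian kernel weights that decay with distance (illustrative sketch)."""
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])            # add intercept
    betas = np.zeros((n, Xc.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-(d / bandwidth) ** 2)            # Gaussian kernel
        XtW = Xc.T * w                               # same as Xc.T @ diag(w)
        betas[i] = np.linalg.solve(XtW @ Xc, XtW @ y)
    return betas

# Synthetic data: price vs. a flood-potential dummy, with a spatially varying effect.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, (200, 2))
flood = rng.binomial(1, 0.3, 200).astype(float)
price = 50 - (2 + coords[:, 0]) * flood + rng.normal(0, 1, 200)

betas = gwr_coefficients(coords, flood[:, None], price, bandwidth=2.0)
print("local flood coefficients span:",
      betas[:, 1].min().round(2), "to", betas[:, 1].max().round(2))
```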

Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression

Procedia PDF Downloads 274
19206 A Novel Approach towards Test Case Prioritization Technique

Authors: Kamna Solanki, Yudhvir Singh, Sandeep Dalal

Abstract:

Software testing is a time- and cost-intensive process. Scrutiny of the code and rigorous testing are required to identify and rectify putative bugs. The process of bug identification and consequent correction is continuous in nature, and some bugs are often removed after the software has been launched in the market. This process of validating the altered software during the maintenance phase is termed regression testing. Regression testing is ubiquitously subject to resource constraints; therefore, selecting an appropriate set of test cases from the entire gamut of test cases is a critical issue for regression test planning. This paper presents a novel method for designing a prioritization process that optimizes the fault detection rate and performance of the regression test under predefined constraints. The proposed test case prioritization method, m-ACO, alters the food-source selection criteria of natural ants and is essentially a modified version of Ant Colony Optimization (ACO). The m-ACO approach has been coded in Perl, and results are validated on three examples by computing the Average Percentage of Faults Detected (APFD) metric.
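
The APFD metric used for validation has a simple closed form: with n test cases and m faults, APFD = 1 - (TF₁ + ... + TFₘ)/(n·m) + 1/(2n), where TFᵢ is the position in the ordering of the first test that reveals fault i. A direct implementation on a toy fault matrix (not the paper's examples):

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for a test-case ordering.
    fault_matrix[t][f] is True if test t reveals fault f."""
    n = len(order)                       # number of test cases
    m = len(fault_matrix[0])             # number of faults
    first = []
    for f in range(m):
        pos = next(i + 1 for i, t in enumerate(order) if fault_matrix[t][f])
        first.append(pos)                # position of first revealing test
    return 1 - sum(first) / (n * m) + 1 / (2 * n)

# Toy example: 4 tests, 3 faults.
faults = [
    [True,  False, False],   # test 0
    [False, True,  False],   # test 1
    [False, False, True],    # test 2
    [True,  True,  False],   # test 3
]
print(apfd([3, 2, 0, 1], faults))  # prioritised order -> 0.792
print(apfd([0, 1, 2, 3], faults))  # original order    -> 0.625
```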

Keywords: regression testing, software testing, test case prioritization, test suite optimization

Procedia PDF Downloads 312
19205 Ultrahigh Thermal Stability of Dielectric Permittivity in 0.6Bi(Mg₁/₂Ti₁/₂)O₃-0.4Ba₀.₈Ca₀.₂(Ti₀.₈₇₅Nb₀.₁₂₅)O₃

Authors: Kaiyuan Chen, Senentxu Lanceros-Méndez, Laijun Liu, Qi Zhang

Abstract:

0.6Bi(Mg₁/₂Ti₁/₂)O₃-0.4Ba₀.₈Ca₀.₂(Nb₀.₁₂₅Ti₀.₈₇₅)O₃ (0.6BMT-0.4BCNT) ceramics with a pseudo-cubic structure and re-entrant dipole-glass behavior have been investigated via X-ray diffraction and dielectric permittivity-temperature spectra. The ceramics show excellent dielectric-temperature stability, with small variations of the dielectric permittivity (±5%, 420-802 K) and the dielectric loss tangent (tanδ < 2.5%, 441-647 K) over a wide temperature range. Three dielectric anomalies are observed from 290 K to 1050 K. The low-temperature, weakly coupled re-entrant relaxor behavior is described using the Vogel-Fulcher law and the new glass model. The mid- and high-temperature dielectric anomalies are characterized by isothermal impedance and electrical modulus measurements. The activation energies of both dielectric relaxation and conductivity follow the Arrhenius law in the temperature ranges of 633-753 K and 833-973 K, respectively. The ultrahigh thermal stability of the dielectric permittivity is attributed to the weak coupling of polar clusters, the formation of a diffuse phase transition (DPT), and a local phase transition of the calcium-containing perovskite.
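
Because ln f is linear in 1/T under the Arrhenius law f = f₀ exp(-Eₐ/(k_B T)), an activation energy like those reported here can in principle be extracted with a single linear fit, as the sketch below does on invented relaxation-frequency data spanning the 633-753 K range (the numbers are illustrative, not the paper's measurements).

```python
import numpy as np

kB = 8.617e-5                     # Boltzmann constant in eV/K

# Invented relaxation-peak frequencies at several temperatures (633-753 K).
T = np.array([633.0, 663.0, 693.0, 723.0, 753.0])
f = np.array([1.2e2, 8.5e2, 4.6e3, 2.1e4, 8.0e4])   # Hz

# Arrhenius: f = f0 * exp(-Ea / (kB * T))  =>  ln f is linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(f), 1)
Ea = -slope * kB
print(f"activation energy Ea = {Ea:.2f} eV, prefactor f0 = {np.exp(intercept):.2e} Hz")
```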

Keywords: permittivity, relaxor, electronic ceramics, activation energy

Procedia PDF Downloads 75
19204 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance

Authors: Rajinder Singh, Ram Valluru

Abstract:

The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses of the various approaches revolve around stability of the recent loss development pattern, sufficiency and reliability of loss development data, and agreement or disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years with more development experience; further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach: parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset in which various cohorts followed a sigmoidal trend, but levels varied substantially depending on the economic and operational conditions during a development period spanning many years. The proposed approach makes it possible to incorporate such exogenous factors indirectly and produces more stable loss forecasts for reserving purposes than the traditional CL and BF methods.
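
The parametrisation step can be illustrated by fitting a sigmoidal curve to one cohort's cumulative losses and reading off the asymptote as the ultimate loss. The three-parameter logistic form and the cohort data below are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_dev(t, ult, k, t0):
    # Cumulative losses approach an ultimate level `ult` as development age t grows.
    return ult / (1.0 + np.exp(-k * (t - t0)))

# Invented cumulative reported losses for one accident-year cohort.
age = np.arange(1, 13, dtype=float)              # development periods
loss = np.array([5, 12, 30, 62, 95, 130, 152, 168, 176, 181, 183, 184.0])

p0 = [loss[-1] * 1.1, 0.5, 6.0]                  # rough starting values
(ult, k, t0), _ = curve_fit(logistic_dev, age, loss, p0=p0)
print(f"ultimate loss estimate: {ult:.1f} (ever-to-date: {loss[-1]:.0f})")
```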

Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility

Procedia PDF Downloads 106
19203 Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution

Authors: Haiyan Wu, Ying Liu, Shaoyun Shi

Abstract:

Authorship attribution extracts features to identify the authors of anonymous documents. Many previous works on authorship attribution focus on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features by regression or other transparent machine learning methods gives a portrait of an author's writing style, but such methods do not capture syntactic (e.g., dependency relationships) or semantic (e.g., topic) information. In recent years, some researchers have modeled syntactic trees or latent semantic information with neural networks; however, few works combine the two. Besides, predictions from neural networks are difficult to explain, which is vital in authorship attribution tasks. In this paper, we utilize not only statistical style and content features but also syntactic and semantic features. Unlike an end-to-end neural model, our method proceeds in two steps: feature selection and prediction. An attentive n-gram network is utilized to select useful features, and logistic regression is applied to give predictions and an understandable representation of writing style. Experiments show that our extracted features improve on state-of-the-art methods on three benchmark datasets.
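
A stripped-down version of the two-step idea (select surface features, then an interpretable linear classifier) can be assembled from standard components, as below; the toy corpus is invented and the character n-gram TF-IDF step merely stands in for the attentive n-gram network.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus; a real study would use benchmark authorship datasets.
docs = ["the ship sailed at dawn", "at dawn the ship sailed again",
        "the colour of the harbour", "the harbour's colour faded"]
authors = ["A", "A", "B", "B"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(max_iter=1000),                     # interpretable weights
)
clf.fit(docs, authors)
print(clf.predict(["the ship left the harbour"]))
```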

Keywords: authorship attribution, attention mechanism, syntactic feature, feature extraction

Procedia PDF Downloads 115
19202 Examining the Cognitive Abilities and Financial Literacy Among Street Entrepreneurs: Evidence From North-East, India

Authors: Aayushi Lyngwa, Bimal Kishore Sahoo

Abstract:

The study discusses the relationship between the cognitive ability and educational attainment of tribal street entrepreneurs and their financial literacy. It is driven by the objectives of examining the effect of cognitive ability on financial ability, on the one hand, and determining their effect on financial literacy, on the other. A field experiment was conducted on 203 tribal street vendors in the north-eastern Indian state of Mizoram. Each respondent was scored on a set of questions, yielding a math score (cognitive ability) and financial and debt scores (financial ability). Categories for each of these variables (math category, financial category and debt category) were then generated for the regression model. Since the dependent variable is ordinal, an ordered logit regression model was applied. The study shows that street vendors' cognitive and financial abilities are highly correlated, and it therefore confirms that cognitive ability positively affects the financial literacy of street vendors through increased educational attainment. It is also found that regular street vendors are more likely to have better cognitive abilities than temporary street vendors. Additionally, street vendors with greater cognitive and financial abilities earned higher monthly profits and practiced bookkeeping. The study focuses on a population that is economically and socially marginalized in the Indian economy. Its findings contribute to the understanding of financial literacy in an understudied area and provide policy implications for inclusive financial systems in an economy of tribal street vendors.
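
An ordered logit of a categorical literacy outcome on score-based regressors can be fit with statsmodels' OrderedModel, as sketched below on synthetic data; all variable names and the data-generating process are hypothetical, and OrderedModel's exact interface may vary slightly across statsmodels versions.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic vendor data; names and effect sizes are invented.
rng = np.random.default_rng(2)
n = 203
df = pd.DataFrame({
    "math_score": rng.integers(0, 11, n).astype(float),
    "educ_years": rng.integers(0, 13, n).astype(float),
    "regular_vendor": rng.binomial(1, 0.5, n).astype(float),
})
latent = (0.3 * df["math_score"] + 0.2 * df["educ_years"]
          + 0.5 * df["regular_vendor"] + rng.logistic(size=n))
fin_literacy = np.digitize(latent, bins=[2.0, 4.0])    # 3 ordered categories: 0 < 1 < 2

model = OrderedModel(fin_literacy,
                     df[["math_score", "educ_years", "regular_vendor"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=0)
print(res.params.round(3))
```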

Keywords: financial literacy, education, street entrepreneurs, tribals, cognitive ability, financial ability, ordered logit regression

Procedia PDF Downloads 77
19201 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study deals with using Multiplicative Weight Update within artificial intelligence and machine learning to create models that can diagnose skin cancer using microscopic images of cancer samples. The multiplicative weight update method combines the predictions of multiple models to try to acquire more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC Archive to look for patterns that label unseen scans as either benign or malignant. The multiplicative weight update algorithm weights each model according to the precision and accuracy of its successive guesses, and the weighted guesses are then combined to obtain the final predictions. The research hypothesis for this study stated that there would be a significant difference in the accuracy of the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%; the Multiplicative Weight Update algorithm achieved an accuracy of 72.27%. The conclusion drawn was that there is a significant difference in accuracy between the three models and the Multiplicative Weight Update system, and that a CNN model would be a better option for this problem than a Multiplicative Weight Update system. A possible explanation is that Multiplicative Weight Update is not effective in a binary setting with only two possible classifications; in a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient, as it draws on the strengths of multiple models to classify images into many categories rather than only two, as shown in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
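
The classic multiplicative-weight-update (weighted-majority) rule referenced here multiplies an expert's weight by (1 - η) whenever it errs and predicts by weighted vote. A minimal sketch over three synthetic "experts" standing in for the SVMC, CNN, and logistic regression outputs (accuracies and data invented):

```python
import numpy as np

def weighted_majority(expert_preds, labels, eta=0.3):
    """Multiplicative weight update over binary experts: penalise a mistaken
    expert's weight by (1 - eta); predict by weighted majority vote."""
    n_experts, n_samples = expert_preds.shape
    w = np.ones(n_experts)
    combined = np.zeros(n_samples, dtype=int)
    for t in range(n_samples):
        votes = expert_preds[:, t]
        combined[t] = int(w @ votes >= w.sum() / 2)
        w[votes != labels[t]] *= (1.0 - eta)       # update after the label is revealed
    return combined, w

# Three synthetic "experts" with different accuracies.
rng = np.random.default_rng(3)
labels = rng.binomial(1, 0.5, 200)
experts = np.vstack([np.where(rng.random(200) < acc, labels, 1 - labels)
                     for acc in (0.85, 0.78, 0.72)])
pred, weights = weighted_majority(experts, labels)
print("ensemble accuracy:", (pred == labels).mean(), "final weights:", weights.round(3))
```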

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 52
19200 Transition Metal Carbodiimide vs. Spinel Matrices for Photocatalytic Water Oxidation

Authors: Karla Lienau, Rafael Müller, René Moré, Debora Ressnig, Dan Cook, Richard Walton, Greta R. Patzke

Abstract:

The increasing demand for renewable energy sources and storable fuels underscores the high potential of artificial photosynthesis. The four electron transfer process of water oxidation remains the bottleneck of water splitting, so that special emphasis is placed on the development of economic, stable and efficient water oxidation catalysts (WOCs). Our investigations introduced cobalt carbodiimide CoNCN and its transition metal analogues as WOC types, and further studies are focused on the interaction of different transition metals in the convenient all-nitrogen/carbon matrix. This provides further insights into the nature of the ‘true catalyst’ for cobalt centers in this non-oxide environment. Water oxidation activity is evaluated with complementary methods, namely photocatalytically using a Ru-dye sensitized standard setup as well as electrocatalytically, via immobilization of the WOCs on glassy carbon electrodes. To further explore the tuning potential of transition metal combinations, complementary investigations were carried out in oxidic spinel WOC matrices with more versatile host options than the carbodiimide framework. The influence of the preparative history on the WOC performance was evaluated with different synthetic methods (e.g. hydrothermally or microwave assisted). Moreover, the growth mechanism of nanoscale Co3O4-spinel as a benchmark WOC was investigated with in-situ PXRD techniques.

Keywords: carbodiimide, photocatalysis, spinels, water oxidation

Procedia PDF Downloads 267
19199 The Synthesis, Structure and Catalytic Activity of Iron(II) Complex with New N2O2 Donor Schiff Base Ligand

Authors: Neslihan Beyazit, Sahin Bayraktar, Cahit Demetgul

Abstract:

Transition metal ions have an important role in biochemistry and biomimetic systems and may provide the basis of models for the active sites of biological targets. The presence of copper(II), iron(II) and zinc(II) is crucial in many biological processes. Tetradentate N2O2 donor Schiff base ligands are well known to form stable transition metal complexes, and these complexes also have applications in clinical and analytical fields. In this study, we present the salient structural features and the details of the catecholase activity of the Fe(II) complex of a new Schiff base ligand. A new asymmetrical N2O2 donor Schiff base ligand and its Fe(II) complex were synthesized by condensation of 4-nitro-1,2-phenylenediamine with 6-formyl-7-hydroxy-5-methoxy-2-methylbenzopyran-4-one and by using an appropriate Fe(II) salt, respectively. The Schiff base ligand and its metal complex were characterized using FT-IR, 1H NMR, 13C NMR, UV-Vis, elemental analysis and magnetic susceptibility. In order to determine the kinetic parameters of the catechol oxidase-like activity of the Schiff base Fe(II) complex, the oxidation of 3,5-di-tert-butylcatechol (3,5-DTBC) was measured at 25 °C by monitoring the increase of the absorption band at 390-400 nm of the product 3,5-di-tert-butyl-o-quinone (3,5-DTBQ). The compatibility of the catalytic reaction with Michaelis-Menten kinetics was also investigated by the method of initial rates, monitoring the growth of the 390-400 nm band of 3,5-DTBQ as a function of time. Kinetic studies showed that the Fe(II) complex of the new N2O2 donor Schiff base ligand is capable of acting as a model compound simulating the catecholase properties of type-3 copper proteins.
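
Initial-rate data like these are typically fit to the Michaelis-Menten form v₀ = Vmax[S]/(Km + [S]); the sketch below does so with scipy on invented rate data (concentrations, rates and starting guesses are illustrative, not the paper's measurements).

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

# Invented initial rates of 3,5-DTBQ formation vs. substrate concentration.
S = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])           # [3,5-DTBC], mM
v0 = np.array([0.9, 1.6, 2.5, 3.4, 4.1, 4.5]) * 1e-3    # initial rate, a.u./s

(Vmax, Km), _ = curve_fit(michaelis_menten, S, v0, p0=[5e-3, 0.3])
print(f"Vmax = {Vmax:.2e}, Km = {Km:.3f} mM")
# kcat would follow as Vmax / [catalyst] once the complex concentration is known.
```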

Keywords: catecholase activity, Michaelis-Menten kinetics, Schiff base, transition metals

Procedia PDF Downloads 371
19198 Phase Transition in Iron Storage Protein Ferritin

Authors: Navneet Kaur, S. D. Tiwari

Abstract:

Ferritin is a protein present in the blood of mammals that regulates the body's supply of iron. It has an antiferromagnetic iron core, 7-8 nm in size, encapsulated inside a protein cage whose shell is about 2-3 nm thick. This protein shell reduces the interaction among particles and makes ferritin a model superparamagnet. The core is composed mainly of the mineral ferrihydrite, and the molecular formula of the ferritin core is (FeOOH)₈[FeOOPO₃H₂]. In this study, we discuss the phase transition of ferritin. We characterized ferritin using an X-ray diffractometer, transmission electron micrographs, a thermogravimetric analyzer and a vibrating sample magnetometer. The ferritin core is found to be amorphous in nature, with an average particle size of 8 nm. The thermogravimetric and differential thermogravimetric analysis curves show mass loss at different temperatures, at which we heated the ferritin. The core starts decomposing above 390 °C, and at 1020 °C it is finally converted to the alpha phase of iron oxide. The magnetization behavior of the final sample clearly shows that the iron oxyhydroxide core is completely converted to alpha iron oxide.

Keywords: antiferromagnetic, ferritin, phase transition, superparamagnetic

Procedia PDF Downloads 94
19197 Educational Data Mining: The Case of the Department of Mathematics and Computing in the Period 2009-2018

Authors: Mário Ernesto Sitoe, Orlando Zacarias

Abstract:

University education is influenced by several factors, ranging from the adoption of strategies to strengthen the whole process to the improvement of the students' own academic performance. This work uses data mining techniques to develop a predictive model to identify students with a tendency toward evasion (dropout) or retention. To this end, a database of real student data from the Department of University Admission (DAU) and the Department of Mathematics and Informatics (DMI) was used, comprising 388 undergraduate students admitted in the years 2009 to 2014. The Weka tool was used for model building with three different techniques, namely K-nearest neighbors, random forest, and logistic regression. To allow training on multiple train-test splits, cross-validation with a varying number of folds was employed. To reduce bias and variance and improve the performance of the models, the ensemble methods of bagging and stacking were used. After comparing the results obtained by the three classifiers, logistic regression using bagging with seven folds obtained the best performance, with results above 90% on all evaluated metrics: accuracy, true positive rate, and precision. Retention is the most common tendency.
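
The winning configuration (logistic regression wrapped in bagging, evaluated with seven folds) can be reproduced in outline with scikit-learn rather than Weka, as sketched below; the synthetic dataset merely stands in for the 388-student records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the student records (features + dropout/retention label).
X, y = make_classification(n_samples=388, n_features=10, random_state=0)

bagged_lr = BaggingClassifier(LogisticRegression(max_iter=1000),
                              n_estimators=25, random_state=0)
scores = cross_val_score(bagged_lr, X, y, cv=7, scoring="accuracy")  # seven folds
print(f"mean 7-fold accuracy: {scores.mean():.3f}")
```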

Keywords: evasion and retention, cross-validation, bagging, stacking

Procedia PDF Downloads 60
19196 Investigating the Influence of the Ferro Alloys Consumption on the Slab Product Standard Cost with Different Grades Using Regression Analysis (A Case Study of Iran's Iron and Steel Industry)

Authors: Iman Fakhrian, Ali Salehi Manzari

Abstract:

Consistent profitability is one of the most important priorities of manufacturing companies, and cost management is one of the fundamental factors for increasing profitability. Isfahan's Mobarakeh Steel Company is one of the largest producers of slab product grades in the Middle East. Raw material costs constitute about 70% of the company's expenditures, and the ferro alloys make a remarkable contribution to the raw material costs. This research aims to determine which ferro alloys have a significant effect on the variability of the standard cost of the slab product grades. The data used in this study were collected from the standard costing system of Isfahan's Mobarakeh Steel Company in 2022. The results of the regression analysis show that expense items 03020, 03045, 03125, 03130 and 03150 play a dominant role in the variability of the standard cost of the slab product grades; in other words, these ferro alloys have a noticeable and significant role in that variability.

Keywords: consistent profitability, ferro alloys, slab product grades, regression analysis

Procedia PDF Downloads 47
19195 Energy Transition in the Netherlands - the Best Way to Motivate Citizens

Authors: Nayden Takev, Remy van Leeuwen, Shiva Chotoe, Hani Alers, Xiao Peng

Abstract:

Citizens, businesses, and public authorities around the world are becoming aware of the impact they have on the environment, and climate change is an urgent reason for everyone to act and move to sustainable energy solutions. After the Paris Climate Agreement, every country devised a way to cut carbon emissions; the Netherlands formulated the National Climate Agreement. "The government's central goal with the National Climate Agreement is to reduce greenhouse gas emissions in the Netherlands by 49% compared to 1990 levels. At a European level, the government is advocating a 55% reduction of greenhouse gas emissions by 2030." [5]. A survey by the CBS makes it apparent that citizens are not putting as much effort into the transition to sustainable energy as the government would like. Analysis of the data made clear that citizens lack the motivation to switch to sustainable energy because they do not believe it is urgent at this point and it is too expensive for them [2]. This needs to change: citizens need to be aware of their impact on the climate and of the advantages the transition will bring them. For example, the implementation of smart home displays [4] for real-time energy measurement gives citizens an overview of their energy usage, making them aware of the impact they have. Researchers have also found that citizens must be included in the decision-making aimed at changing their behaviour [4, 3, 1]. In the future, the government will need to include citizens when it creates campaigns or strategies or introduces new policies [7, 6]; by including and informing citizens about the policies, it becomes more attractive for them to choose sustainable energy. However, is all of this enough to motivate citizens towards the energy transition, or are there other, better ways to do it?

Keywords: awareness, energy transition, Netherlands, citizens

Procedia PDF Downloads 44
19194 Measuring Learning Independence and Transition through the First Year in Architecture

Authors: Duaa Al Maani, Andrew Roberts

Abstract:

Students in higher education are expected to learn actively and independently. Whilst some work has been done to understand students' perceptions of the learning transition with regard to independent learning, to the authors' best knowledge relatively little has been published on independent learning in studio-based subjects such as architecture. Another major issue in independent learning research is inconsistency in terminology; there appears to be a paucity of research on its definition, challenges, and tools within the UK university sector. It is not always clear how independent learning works in practice, or what challenges students face in becoming independent learners. Accordingly, this paper seeks to address these problems by analyzing previous and current literature on independent learning and by measuring students' independence at the very beginning of their first academic year and comparing it with their level of learning independence at the end of the same year. Eighty-seven students enrolled in 2017/2018 at Cardiff University completed the Autonomous Learning Questionnaire in order to measure their level of learning independence. Students' initial responses were very positive and showed a high level of learning independence; interestingly, these responses decreased significantly by the end of the year. Time management was the most obvious challenge facing students transitioning into higher education, and contrary to expectations, we found no effect of student maturity on their level of independence. Moreover, we found no significant differences by gender, but we did find differences among nationalities.

Keywords: autonomous learning, first year, learning independence, transition

Procedia PDF Downloads 123
19193 Developing and Evaluating Clinical Risk Prediction Models for Coronary Artery Bypass Graft Surgery

Authors: Mohammadreza Mohebbi, Masoumeh Sanagou

Abstract:

The ability to predict clinical outcomes is of great importance to physicians and clinicians. A number of different methods have been used in an effort to accurately predict these outcomes, including the development of scoring systems based on multivariate statistical modelling and models involving classification and regression trees. The process usually consists of two consecutive phases, namely model development and external validation. The model development phase consists of building a multivariate model and evaluating its predictive performance by examining calibration and discrimination, together with internal validation. External validation tests the predictive performance of a model by assessing its calibration and discrimination in different but plausibly related patients. A motivating example focusing on prediction modeling with a sample of patients who underwent coronary artery bypass graft (CABG) surgery is used for illustration, and a set of primary considerations for evaluating prediction model studies, using specific quality indicators as criteria to help stakeholders judge the quality of a prediction model study, is proposed.
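
In code, the discrimination and calibration checks described here typically reduce to a c-statistic (AUC) and a comparison of observed versus predicted risk across probability bins on held-out patients. A sketch on synthetic data (not the CABG cohort):

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for outcome data: development vs. validation split.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
p_val = model.predict_proba(X_val)[:, 1]

print("discrimination (c-statistic / AUC):", round(roc_auc_score(y_val, p_val), 3))
obs, pred = calibration_curve(y_val, p_val, n_bins=5)       # calibration by risk bin
print("observed vs. mean predicted risk per bin:\n", np.c_[obs, pred].round(2))
```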

Keywords: clinical prediction models, clinical decision rule, prognosis, external validation, model calibration, biostatistics

Procedia PDF Downloads 276
19192 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning

Authors: Saahith M. S., Sivakami R.

Abstract:

In the realm of football analytics, particularly focusing on predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, conducting data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis within this study will involve delving into feature significance using methodologies like Select Best and Recursive Feature Elimination (RFE) to pinpoint pertinent attributes for predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), will be explored to develop predictive models. The evaluation of each model's performance utilizing metrics such as Mean Squared Error (MSE) and R-squared will be executed to gauge their efficacy in predicting player performance. Furthermore, this investigation will encompass a top player analysis to recognize the top-performing players based on the anticipated overall performance scores. Nationality analysis will entail scrutinizing the player distribution based on nationality and investigating potential correlations between nationality and player performance. Positional analysis will concentrate on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis will evaluate the influence of age on player performance and identify any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
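
Of the feature-selection methods named, Recursive Feature Elimination is straightforward to demonstrate: it repeatedly refits a model and drops the weakest features according to its importances. A sketch with synthetic stand-ins for the player attributes:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

# Synthetic stand-in for player attributes (pace, passing, ...) and overall rating.
X, y = make_regression(n_samples=300, n_features=12, n_informative=5, random_state=0)

selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=5)
selector.fit(X, y)
print("selected feature indices:",
      [i for i, kept in enumerate(selector.support_) if kept])
```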

Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis

Procedia PDF Downloads 21
19191 A Predictive Machine Learning Model of the Survival of Female-led and Co-Led Small and Medium Enterprises in the UK

Authors: Mais Khader, Xingjie Wei

Abstract:

This research sheds light on female entrepreneurs by providing new insights into survival predictions for companies led by females in the UK. The study builds a predictive machine learning model of the survival of female-led and co-led small and medium enterprises (SMEs) in the UK over the period 2000-2020. The predictive models utilise a combination of financial and non-financial features related to both the companies and their directors, and these features were studied in terms of their contribution to the resultant predictive model. Five machine learning models are used: decision tree, AdaBoost, naïve Bayes, logistic regression and SVM. The AdaBoost model had the highest performance of the five, with an accuracy of 73% and an AUC of 80%. The results show that company size, management experience, financial performance, industry, region, and the percentage of females in management have high feature importance in predicting companies' survival.

Keywords: company survival, entrepreneurship, females, machine learning, SMEs

Procedia PDF Downloads 72
19190 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue

Authors: Rachel Y. Zhang, Christopher K. Anderson

Abstract:

A critical aspect of revenue management is a firm's ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexities of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including K-nearest-neighbors, support vector machine, regression tree, and artificial neural network algorithms. The out-of-sample performance of the above approaches to forecasting hotel demand is illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A built only on hotel A's data) with models using market-level data (prices, review scores, location, chain scale, etc., for all hotels within the market). The resulting models are valuable for hotel revenue prediction given the basic characteristics of a hotel property and can be applied in performance evaluation of an existing hotel. The findings unveil the features that play key roles in a hotel's revenue performance, which has considerable potential usefulness in both revenue prediction and evaluation.
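
As an example of the model family compared here, the sketch below fits a K-nearest-neighbors demand regressor on synthetic market-level features; the price, review-score, and lead-time variables are invented stand-ins for the proprietary Las Vegas data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

# Synthetic market-level records: [nightly price, review score, booking lead days].
rng = np.random.default_rng(4)
X = np.column_stack([rng.uniform(80, 300, 500),
                     rng.uniform(3, 5, 500),
                     rng.integers(0, 60, 500)])
y = 400 - 1.2 * X[:, 0] + 40 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 20, 500)  # rooms sold

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsRegressor(n_neighbors=10).fit(X_tr, y_tr)  # features should be scaled in practice
print("out-of-sample R^2:", round(knn.score(X_te, y_te), 3))
```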

Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine

Procedia PDF Downloads 114
19189 Using Linear Logistic Regression to Evaluation the Patient and System Delay and Effective Factors in Mortality of Patients with Acute Myocardial Infarction

Authors: Firouz Amani, Adalat Hoseinian, Sajjad Hakimian

Abstract:

Background: Mortality due to Myocardial Infarction (MI) often occurs during the first hours after the onset of symptoms, so receiving the necessary treatment through a timely hospital visit can decrease the mortality rate. The aim of this study was to investigate the impact of effective factors on the mortality of MI patients using linear logistic regression. Materials and Methods: In this case-control study, all patients with acute MI who were referred to the Ardabil city hospital were studied. All patients who died were considered the case group (n=27), and 27 matched surviving MI patients were selected as the control group. Data were collected for all patients in both groups using the same checklist and then analyzed with SPSS version 24 using statistical methods. We used the linear logistic regression model to determine the effective factors on the mortality of MI patients. Results: The mean age of patients in the case group was significantly higher than in the control group (75.1±11.7 vs. 63.1±11.6, p=0.001). The history of non-cardiac diseases in the case group, at 44.4%, was significantly higher than in the control group, at 7.4% (p=0.002). The proportion of performed PCIs in the case group, at 40.7%, was significantly lower than in the control group, at 74.1% (p=0.013). The mean time from hospital admission to PCI in the case group, 110.9 min, was significantly longer than in the control group, 56 min (p=0.001). The mean delay from onset of symptoms to hospital admission (patient delay) and the mean delay from hospital admission to treatment (system delay) were similar between the two groups. Using the logistic regression model, we found that a history of non-cardiac diseases (OR=283) and the number of performed PCIs (OR=24.5) had a significant impact on the mortality of MI patients compared to the other factors. Conclusion: The results of this study showed that, of all studied factors, the number of performed PCIs, history of non-cardiac illness and the interval between onset of symptoms and PCI have a significant relation with the mortality of MI patients; the other factors were not meaningful. Further studies with larger samples that investigate other factors, such as smoking and weather, are recommended.

Keywords: acute MI, mortality, heart failure, arrhythmia

Procedia PDF Downloads 109
19188 Mott Transition in the VO2/LSCO Heterojunction

Authors: Yi Hu, Chun-Chi Lin, Shau-En Yeh, Shin Lee

Abstract:

In this study, p-n heterojunctions of La0.5Sr0.5CoO3 (LSCO) and W-doped VO2 thin films were fabricated by the radio frequency (r.f.) magnetron sputtering technique and a sol-gel process, respectively. The thicknesses of the VO2 and LSCO thin films are about 40 nm and 400 nm, respectively. A good crystalline match between the LSCO and VO2 films was observed by SEM. The built-in voltages of the junction are about 1.1 V and 2.3 V for the sample in the metallic and insulating states, respectively, in agreement with the values obtained from the difference in the work functions of LSCO and VO2. The sample undergoes a current-induced MIT under an applied field when heated to 40 and 50 °C. The band structure of the heterojunction is proposed based on the results of the analysis.

Keywords: heterojunction, Mott transition, switching, VO2

Procedia PDF Downloads 564
19187 Use of Front-Face Fluorescence Spectroscopy and Multiway Analysis for the Prediction of Olive Oil Quality Features

Authors: Omar Dib, Rita Yaacoub, Luc Eveleigh, Nathalie Locquet, Hussein Dib, Ali Bassal, Christophe B. Y. Cordella

Abstract:

The potential of front-face fluorescence coupled with chemometric techniques, namely parallel factor analysis (PARAFAC) and multiple linear regression (MLR), as a rapid analysis tool to characterize Lebanese virgin olive oils was investigated. Fluorescence fingerprints were acquired directly on 102 Lebanese virgin olive oil samples in the range of 280-540 nm in excitation and 280-700 nm in emission. A PARAFAC model with seven components was considered optimal, with a residual of 99.64% and a core consistency value of 78.65. The model revealed seven main fluorescence profiles in olive oil, mainly associated with tocopherols, polyphenols, chlorophyllic compounds and oxidation/hydrolysis products. Twenty-three MLR models based on the PARAFAC scores were generated, the majority of which showed a good correlation coefficient (R > 0.7 for 12 predicted variables) and thus satisfactory prediction performance. Acid value, peroxide value, and Delta K had the models with the highest predictive power, with R values of 0.89, 0.84 and 0.81, respectively. Among fatty acids, linoleic and oleic acids were also highly predicted, with R values of 0.8 and 0.76, respectively. The factors contributing to the models' construction were related to common fluorophores found in olive oil, mainly chlorophyll, polyphenols, and oxidation products. This study demonstrates the interest of front-face fluorescence as a promising tool for the quality control of Lebanese virgin olive oils.
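
The PARAFAC step decomposes the samples × excitation × emission data cube into trilinear components whose sample-mode scores then feed the MLR models. A sketch using the tensorly library on a synthetic tensor of matching shape (all dimensions and values invented; tensorly is an assumed tool, not necessarily the software used by the authors):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Synthetic EEM-like tensor: 102 samples x excitation x emission channels.
rng = np.random.default_rng(5)
true = [rng.random((102, 7)), rng.random((27, 7)), rng.random((43, 7))]
X = tl.cp_to_tensor((np.ones(7), true)) + 0.01 * rng.random((102, 27, 43))

weights, factors = parafac(tl.tensor(X), rank=7, n_iter_max=200)
sample_scores = tl.to_numpy(factors[0])      # (102, 7): regressors for the MLR step

# MLR step: regress a (here synthetic) quality attribute on the PARAFAC scores.
quality = sample_scores @ rng.random(7) + 0.05 * rng.normal(size=102)
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(102), sample_scores]),
                           quality, rcond=None)
print(coef.round(3))
```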

Keywords: front-face fluorescence, Lebanese virgin olive oils, multiple linear regression, PARAFAC analysis

Procedia PDF Downloads 433
19186 Analytical Modelling of Surface Roughness during Compacted Graphite Iron Milling Using Ceramic Inserts

Authors: Ş. Karabulut, A. Güllü, A. Güldaş, R. Gürbüz

Abstract:

This study investigates the effects of the lead angle and chip thickness variation on surface roughness during the machining of compacted graphite iron using ceramic cutting tools under dry cutting conditions. Analytical models were developed for predicting the surface roughness of the specimens after the face milling process. Experimental data were collected and imported into an artificial neural network model: a multilayer perceptron with the back-propagation algorithm, employing the input parameters of lead angle, cutting speed and feed rate in connection with chip thickness. Furthermore, analysis of variance was employed to determine the effects of the cutting parameters on surface roughness. The artificial neural network and regression analysis were used to predict surface roughness; the predicted values were compared with the collected experimental data, and the corresponding percentage error was computed. The analysis revealed that the lead angle is the dominant factor affecting surface roughness, and the experimental results indicated an improvement in the surface roughness value as the lead angle decreases from 88° to 45°.

Keywords: CGI, milling, surface roughness, ANN, regression, modeling, analysis

Procedia PDF Downloads 433
19185 Estimation of Dynamic Characteristics of a Middle Rise Steel Reinforced Concrete Building Using Long-Term Earthquake Observation Records

Authors: Fumiya Sugino, Naohiro Nakamura, Yuji Miyazu

Abstract:

In the earthquake-resistant design of buildings, the evaluation of vibration characteristics is important. In recent years, with the increase in super-high-rise buildings, evaluating the response of higher modes, and not only the first mode, has become important; however, knowledge of vibration characteristics in buildings is mostly limited to the first mode, and knowledge of higher modes is still insufficient. In this paper, the characteristics of the first and second modes of an SRC building were studied from earthquake observation records by applying a frequency filter to an ARX model. First, we studied the change of the eigenfrequency and damping ratio during the 3.11 earthquake. The eigenfrequency gradually decreases from the onset of the earthquake and becomes almost stable after about 150 seconds, by which point the decreasing rates of the 1st and 2nd eigenfrequencies are both about 0.7. Although the damping ratio carries a larger error than the eigenfrequency, both the 1st and 2nd damping ratios are 3 to 5%. There is a strong correlation between the 1st and 2nd eigenfrequencies, with regression line y = 3.17x; for the damping ratios, the regression line is y = 0.90x, so the 1st and 2nd damping ratios are of approximately the same magnitude. Next, we studied the eigenfrequency and damping ratio for earthquakes from 1998 to 2014, connected in order of occurrence. Both the 1st and 2nd eigenfrequencies slowly declined from immediately after completion of the building and tended to stabilize after several years, although they declined greatly after the 3.11 earthquake; the decreasing rates of both eigenfrequencies until about 7 years later are about 0.8. Both the 1st and 2nd damping ratios are about 1 to 6%; after the 3.11 earthquake, the 1st increased by about 1% and the 2nd by less than 1%. Over the long term, the regression line between the 1st and 2nd eigenfrequencies is again y = 3.17x, and for the damping ratios it is y = 1.01x, so the 1st and 2nd damping ratios remain of approximately the same magnitude, with the decline timing and rate of the two eigenfrequencies also in close agreement.
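
An ARX model is a linear regression of the current output on lagged outputs and inputs, so it can be fit by ordinary least squares; the modal frequency and damping then follow from the poles of the estimated AR polynomial. A minimal sketch on a simulated record (system, model orders, and data are invented for illustration):

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j] + e[t]."""
    k = max(na, nb)
    Phi = np.array([np.r_[y[t-na:t][::-1], u[t-nb:t][::-1]]
                    for t in range(k, len(y))])
    theta, *_ = np.linalg.lstsq(Phi, y[k:], rcond=None)
    return theta[:na], theta[na:]            # AR coefficients, input coefficients

# Simulated ground motion u and roof response y (lightly damped 2nd-order system).
rng = np.random.default_rng(6)
u = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = 1.6 * y[t-1] - 0.7 * y[t-2] + 0.5 * u[t-1]

a, b = fit_arx(y, u)
print("AR coefficients:", a.round(3))        # ~ [1.6, -0.7]
poles = np.roots(np.r_[1.0, -a])             # eigenfrequency/damping follow from pole angle/radius
print("pole modulus and angle:", np.abs(poles[0]).round(3), np.angle(poles[0]).round(3))
```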

Keywords: eigenfrequency, damping ratio, ARX model, earthquake observation records

Procedia PDF Downloads 200