Search results for: multi regression analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30974


30464 Examining How Teachers’ Backgrounds and Perceptions of Technology Use Influence Students’ Achievements

Authors: Zhidong Zhang, Amanda Resendez

Abstract:

This study examines how teachers’ perspectives on educational technology use in their classes influence their students’ achievement. The authors hypothesized that teachers’ perspectives can directly or indirectly influence students’ learning, performance, and achievements. In this study, a questionnaire entitled Teacher’s Perspective on Educational Technology was delivered to 63 teachers, and the mathematics and reading achievement records of 1,268 students were collected. The questionnaire consists of four parts: a) demographic variables, b) attitudes on technology integration, c) outside factors affecting technology integration, and d) technology use in the classroom. Kruskal-Wallis and hierarchical regression analysis techniques were used to examine: 1) the relationship between the demographic variables and teachers’ perspectives on educational technology, and 2) how the demographic variables were causally related to students’ mathematics and reading achievements. The study found that teacher demographics were significantly related to the teachers’ perspective on educational technology at p < 0.05 and p < 0.01, respectively. These teacher demographic variables included the school district, age, gender, the grade currently taught, teaching experience, and proficiency with new technology. Further, these variables significantly predicted students’ mathematics and reading achievements at p < 0.05 and p < 0.01, respectively. Values of R² ranged between 0.176 and 0.467, meaning that up to 46.7% of the variance in a given analysis is explained by the model.
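
A minimal sketch of the blockwise (hierarchical) regression step described above, written with statsmodels; the file name, column names, and block composition are hypothetical placeholders, not variables from the study.

```python
# Hierarchical (blockwise) OLS regression: predictor blocks are added step by
# step and the change in R-squared is inspected. Names below are illustrative.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("teacher_survey.csv")  # hypothetical merged teacher/student file

blocks = [
    ["school_district", "age", "gender"],       # block 1: demographics
    ["grade_taught", "teaching_experience"],    # block 2: teaching background
    ["tech_proficiency", "attitude_score"],     # block 3: technology perception
]

predictors, prev_r2 = [], 0.0
for step, block in enumerate(blocks, start=1):
    predictors += block
    # dummy-code categorical predictors before fitting
    X = sm.add_constant(pd.get_dummies(df[predictors], drop_first=True))
    fit = sm.OLS(df["math_achievement"], X).fit()
    print(f"Step {step}: R2 = {fit.rsquared:.3f} (delta = {fit.rsquared - prev_r2:.3f})")
    prev_r2 = fit.rsquared
```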

Keywords: teacher's perception of technology use, mathematics achievement, reading achievement, Kruskal-Wallis test, hierarchical regression analysis

Procedia PDF Downloads 117
30463 The Effects of a Mathematics Remedial Program on Mathematics Success and Achievement among Beginning Mathematics Major Students: A Regression Discontinuity Analysis

Authors: Kuixi Du, Thomas J. Lipscomb

Abstract:

Proficiency in Mathematics is fundamental to success in the STEM disciplines. In the US, beginning college students who are placed in remedial/developmental Mathematics courses frequently struggle to achieve academic success. Mathematics remediation in college has therefore become an important concern, and providing Mathematics remediation is a prevalent way to help students who may not be fully prepared for college-level courses. Programs vary, however, and the effectiveness of a particular remedial Mathematics program must be empirically demonstrated. The purpose of this study was to apply the sharp regression discontinuity (RD) technique to determine the effectiveness of the Jack Leaps Summer (JLS) Mathematics remediation program in supporting improved Mathematics learning outcomes among newly admitted Mathematics majors at South Dakota State University. The researchers studied the newly admitted Fall 2019 cohort of Mathematics majors (n=423). The results indicated that students whose pretest score was below the cut-off point and who were assigned to the JLS program achieved significantly higher scores on the post-test (Math 101 final score). Based on these results, there is evidence that the JLS program is effective in meeting its primary objective.
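
A minimal sketch of a sharp regression discontinuity estimate of this kind, using statsmodels; the cut-off, simulated data, and effect size are illustrative assumptions, not the study's values.

```python
# Sharp RD: assignment to the remedial program is fully determined by whether
# the pretest score falls below a cut-off. All numbers here are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 423
pretest = rng.uniform(0, 100, n)
cutoff = 60.0
treated = (pretest < cutoff).astype(int)                # assigned to the program
final = 50 + 0.4 * pretest + 8.0 * treated + rng.normal(0, 5, n)

df = pd.DataFrame({"final": final, "treated": treated,
                   "centered": pretest - cutoff})

# Linear specification with separate slopes on each side of the cut-off
rd = smf.ols("final ~ treated + centered + treated:centered", data=df).fit()
print(rd.params["treated"])   # estimated jump at the cut-off (program effect)
```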

Keywords: causal inference, mathematics remedial program evaluation, quasi-experimental research design, regression discontinuity design, cohort studies

Procedia PDF Downloads 76
30462 Empirical Research on Rate of Return, Interest Rate and Mudarabah Deposit

Authors: Inten Meutia, Emylia Yuniarti

Abstract:

The objective of this study is to analyze the effects of the interest rate and the rate of return of Islamic banks on the amount of mudarabah deposits in Islamic banks. In analyzing the effect of the rate of return in Islamic banks and interest rate risk in conventional banks, the 1-month Islamic deposit rate of return and the 1-month fixed deposit interest rate relative to total Islamic deposits are considered. Using data covering the period from January 2010 to September 2013, the study applies regression analysis to examine the effect of each variable and an independent-samples t-test to analyze the mean difference between the rate of return and the rate of interest. Regression analysis shows that the rate of return has a significant negative influence on mudarabah deposits, while the interest rate has a negative but non-significant influence. The result of the independent t-test shows that the interest rate is not different from the rate of return in Islamic banks. This supports the hypothesis that the rate of return in Islamic banking mimics the rate of interest in conventional banks. The results of the study have important implications for the risk management practices of Islamic banks in Indonesia.
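
A rough sketch of the two analyses described (a deposit regression and an independent-samples t-test on the two rates), using pandas, statsmodels, and SciPy; the file and column names are hypothetical.

```python
# Regression of mudarabah deposits on rate of return and interest rate, plus an
# independent-samples t-test comparing the two rates. Column names are made up.
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("monthly_rates.csv")  # hypothetical monthly data, 2010-2013

X = sm.add_constant(df[["islamic_return", "conventional_rate"]])
ols = sm.OLS(df["mudarabah_deposits"], X).fit()
print(ols.summary())

t, p = stats.ttest_ind(df["islamic_return"], df["conventional_rate"])
print(f"t = {t:.3f}, p = {p:.3f}")  # no significant difference supports the mimicry hypothesis
```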

Keywords: conventional bank, interest rate, Islamic bank, rate of return

Procedia PDF Downloads 493
30461 Agile Software Effort Estimation Using Regression Techniques

Authors: Mikiyas Adugna

Abstract:

Effort estimation is among the core activities carried out in software development processes, and an accurate estimation model leads to project success. Agile effort estimation is a complex task because of the dynamic nature of software development, and researchers are still conducting studies on agile effort estimation to enhance prediction accuracy. For these reasons, we investigated and propose a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, with 80% used for training and 20% for testing. Following the train-test split, the two regression algorithms (Elastic Net and LASSO) are trained in two phases. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, grid search (used to tune and select the optimum parameters) and 5-fold cross-validation are applied to obtain the final trained model. Finally, the final trained model is evaluated using the testing set. The experimental work is applied to an agile story-point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the models they were compared against. Of the two proposed algorithms, LASSO regression achieved better predictive performance, with PRED(8%) and PRED(25%) results of 100.0, MMRE of 0.0491, MMER of 0.0551, MdMRE of 0.0593, MdMER of 0.063, and MSE of 0.0007. The result implies that the LASSO-trained model is the most acceptable and offers higher estimation performance than models reported in the literature.
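
A condensed sketch of the two-phase training pipeline described above (default-parameter fit, then grid search with 5-fold cross-validation), using scikit-learn; the dataset file, column names, and parameter grids are assumptions for illustration.

```python
# Phase 1: fit LASSO and Elastic Net with defaults; Phase 2: tune with grid
# search and 5-fold CV, then evaluate on the held-out 20% test split.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.metrics import mean_squared_error

df = pd.read_csv("story_points.csv")            # hypothetical agile dataset
X, y = df.drop(columns=["effort"]), df["effort"]

X = MinMaxScaler().fit_transform(X)             # normalize the dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

for name, est, grid in [
    ("LASSO", Lasso(max_iter=10000), {"alpha": [0.001, 0.01, 0.1, 1.0]}),
    ("ElasticNet", ElasticNet(max_iter=10000),
     {"alpha": [0.001, 0.01, 0.1, 1.0], "l1_ratio": [0.2, 0.5, 0.8]}),
]:
    default_mse = mean_squared_error(y_te, est.fit(X_tr, y_tr).predict(X_te))
    search = GridSearchCV(est, grid, cv=5,
                          scoring="neg_mean_squared_error").fit(X_tr, y_tr)
    tuned_mse = mean_squared_error(y_te, search.best_estimator_.predict(X_te))
    print(f"{name}: default MSE={default_mse:.4f}, tuned MSE={tuned_mse:.4f}")
```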

Keywords: agile software development, effort estimation, elastic net regression, LASSO

Procedia PDF Downloads 43
30460 Farmers' Perception of the Effects of Climate Change on Rice Production in Nasarawa State, Nigeria

Authors: P. O. Fatoki, R. S. Olaleye, B. O. Adeniji

Abstract:

The study investigated farmers’ perception of the effects of climate change on rice production in Nasarawa State, Nigeria. A multi-stage sampling technique was used to select a total of 248 rice farmers from the study area. Data for the study were collected through the use of an interview schedule and analysed using both descriptive and inferential statistics. Results showed that the majority (71.8%) of the respondents were married and the mean age of the respondents was 44.54 years. The results also showed that the most adopted strategies for mitigating the effects of climate change on rice production were changing planting and harvesting dates (67.7%), movement to another site (63.7%), and increasing or reducing land size (58.5%). Chi-square analysis revealed significant relationships between the roles of extension agents in mitigating climate change effects on rice production and farmers’ perception: dissemination of information (χ² = 2.16, p < 0.05) and use of demonstration methods (χ² = 2.15, p < 0.05). Poisson regression analysis revealed that educational status, farm size, experience, and yield had significant relationships with the perception of the effects of climate change at the 0.01 significance level, while household size was significant at 0.05. It is recommended that some of the adaptive strategies and practices for mitigating the effects of climate change on rice production be improved, and that extension outfits be strengthened to ensure adequate dissemination of relevant information on climate change with a view to mitigating its effects on rice production.
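
A minimal sketch of the Poisson regression step, using statsmodels; the data file and variable names are hypothetical stand-ins for the survey variables.

```python
# Poisson regression of a perception score (treated as a count) on farmer
# characteristics, in the spirit of the analysis above. Names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rice_farmers.csv")  # hypothetical survey data, n = 248

model = smf.poisson(
    "perception_score ~ education + farm_size + experience + yield_t_ha + household_size",
    data=df,
).fit()
print(model.summary())  # coefficients significant at 0.01 or 0.05 mirror the reported pattern
```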

Keywords: perception, rice farmers, climate change, mitigation, adaptive strategies

Procedia PDF Downloads 335
30459 Using Cooperation without Communication in a Multi-Agent Unpredictable Dynamic Real-Time Environment

Authors: Abbas Khosravi

Abstract:

This paper discusses the use of cooperation without communication in a multi-agent, unpredictable, dynamic real-time environment. The architecture of the Persian Gulf agent consists of three layers: fixed rule, low level, and high level layers, allowing for cooperation without direct communication. A scenario is presented to each agent in the form of a file, specifying each player's role and actions in the game. The scenario helps in cases of miscommunication, improving team performance. Cooperation without communication enhances reliability and coordination among agents, leading to better results in challenging situations.

Keywords: multi-agent systems, communication, RoboCup, software engineering

Procedia PDF Downloads 19
30458 Factors Relating to Motivation to Change Behaviors in Individuals Who Are Overweight

Authors: Teresa Wills, Geraldine Mccarthy, Nicola Cornally

Abstract:

Background: Obesity is an emerging healthcare epidemic affecting virtually all age and socio-economic groups and is one of the most serious and prevalent diseases of the 21st century. It is a public health challenge because of its prevalence, associated costs, and health effects. The increasing prevalence of obesity has created a social perception that overweight body sizes are healthy and normal. This normalization of obesity within our society and the acceptance of higher body weights have left individuals unaware of the reality of their weight status and the gravity of this situation, thus impeding recognition of obesity. Given the escalating global health problem of obesity and its co-morbidities, the need to re-appraise its management is more compelling than ever. It is widely accepted that the causes of obesity are complex and multi-factorial. Engagement of individuals in weight management programmes is difficult if they do not perceive they have a problem with their weight. Recognition of the problem is a key component of obesity management, and identifying the main predictors of behaviour is key to designing health behaviour interventions. Aim: The aim of the research was to determine factors relating to motivation to change behaviours in individuals who perceive themselves to be overweight. Method: The research design was quantitative, correlational and cross-sectional, guided by the Health Belief Model. Data were collected online using a multi-section, multi-item questionnaire developed from a review of the theoretical and empirical research. A sample of 202 men and women who perceived themselves to be overweight participated in the research. Descriptive and inferential statistical analyses were employed to describe relationships between variables. Findings: Following multivariate regression analysis, perceived barriers to weight loss and perceived benefits of weight loss were significant predictors of motivation to change behaviour. The significant perceived barriers to weight loss were psychological barriers to weight loss (p < 0.019) and environmental barriers to physical activity (p < 0.032). The greatest predictor of motivation to change behaviour was the perceived benefits of weight loss (p < 0.001). Perceived susceptibility to obesity and perceived severity of obesity did not emerge as significant predictors in this model. The total variance explained by the model was 33.5%. Conclusion: Perceived barriers to weight loss and perceived benefits of weight loss are important determinants of motivation to change behaviour. These findings have important implications for health professionals in informing their practice and for the development of intervention programmes to prevent and control obesity.

Keywords: motivation to change behaviours, obesity, predictors of behavior, interventions, overweight

Procedia PDF Downloads 399
30457 The Impact of a Staff Well-Being Service for a Multi-Site Research Study

Authors: Ruth Elvish, Alex Turner, Jen Wells

Abstract:

Over recent years there has been an increasing interest in the topic of well-being at work, and staff support is an area of continued growth. The present qualitative study explored the impact of a staff well-being service that was specifically attached to a five-year multi-site research programme (the Neighbourhoods and Dementia Study, funded by the ESRC/NIHR). The well-being service was led by a clinical psychologist, who offered 1:1 sessions for staff and co-researchers with dementia. To our knowledge, this service was the first of its kind. Methodology: Interviews were undertaken with staff who had used the service and who opted to take part in the study (n=7). Thematic analysis was used as the method of analysis. Findings: Themes included: triggers, mechanisms of change, impact/outcomes, and unique aspects of a dedicated staff well-being service. Conclusions: The study highlights stressors that are pertinent amongst staff within academic settings, and shows the ways in which a dedicated staff well-being service can impact on both professional and personal lives. Positive change was seen in work performance, self-esteem, relationships, and coping. This exploratory study suggests that this well-being service model should be further trialled and evaluated.

Keywords: academic, service, staff, support, well-being

Procedia PDF Downloads 183
30456 Examination of Relationship between Internet Addiction and Cyber Bullying in Adolescents

Authors: Adem Peker, Yüksel Eroğlu, İsmail Ay

Abstract:

As information and communication technologies have become embedded in the everyday life of adolescents, both their possible benefits and their risks to adolescents are being identified. Information and communication technologies provide opportunities for adolescents to connect with peers and to access information. However, as with other social connections, users of information and communication devices have the potential to meet and interact with others in harmful ways. One emerging example of such interaction is cyber bullying. Cyber bullying occurs when someone uses information and communication technologies to harass or embarrass another person. It can take the form of malicious text messages and e-mails, spreading rumours, and excluding people from online groups. Cyber bullying has been linked to psychological problems for cyber bullies and victims. Therefore, it is important to determine how internet addiction contributes to cyber bullying. Building on this question, this study takes a closer look at the relationship between internet addiction and cyber bullying. For this purpose, based on a descriptive relational model, it was hypothesized that loss of control, excessive desire to stay online, and negativity in social relationships, which are dimensions of internet addiction, would be associated positively with cyber bullying and victimization. Participants were 383 high school students (176 girls and 207 boys; mean age, 15.7 years). Internet addiction was measured using the Internet Addiction Scale, and the Cyber Victim and Bullying Scale was utilized to measure cyber bullying and victimization. The scales were administered to the students in groups in their classrooms. Stepwise regression analyses were utilized to examine the relationships between the dimensions of internet addiction and cyber bullying and victimization; before applying stepwise regression analysis, the assumptions of regression were verified. According to the stepwise regression analysis, cyber bullying was predicted by loss of control (β=.26, p<.001) and negativity in social relationships (β=.13, p<.001). These variables accounted for 9% of the total variance, with loss of control explaining the higher percentage (8%). On the other hand, cyber victimization was predicted by loss of control (β=.19, p<.001) and negativity in social relationships (β=.12, p<.001). These variables altogether accounted for 8% of the variance in cyber victimization, with loss of control the best predictor (7% of the total variance). The results of this study demonstrated that, as expected, loss of control and negativity in social relationships positively predicted cyber bullying and victimization. However, excessive desire to stay online did not emerge as a significant predictor of either cyber bullying or victimization. Consequently, this study enhances our understanding of the predictors of cyber bullying and victimization, since the results suggest that internet addiction is related to cyber bullying and victimization.
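
A small sketch of a forward stepwise regression over the three internet-addiction dimensions, using statsmodels; the file and column names are hypothetical, and adjusted R² is used as the entry criterion rather than the exact criterion of the study.

```python
# Forward stepwise OLS: at each step, add the internet-addiction dimension that
# most improves adjusted R-squared. Column names are illustrative placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cyberbullying_survey.csv")   # hypothetical data, n = 383
candidates = ["loss_of_control", "desire_to_stay_online", "negativity_in_relationships"]
target = "cyberbullying_score"

selected, best_adj_r2, improved = [], -float("inf"), True
while improved and candidates:
    improved = False
    for var in list(candidates):
        X = sm.add_constant(df[selected + [var]])
        adj = sm.OLS(df[target], X).fit().rsquared_adj
        if adj > best_adj_r2:                  # keep track of the best addition so far
            best_adj_r2, best_var, improved = adj, var, True
    if improved:
        selected.append(best_var)
        candidates.remove(best_var)
        print(f"added {best_var}: adjusted R2 = {best_adj_r2:.3f}")
```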

Keywords: cyber bullying, internet addiction, adolescents, regression

Procedia PDF Downloads 296
30455 Robustified Asymmetric Logistic Regression Model for Global Fish Stock Assessment

Authors: Osamu Komori, Shinto Eguchi, Hiroshi Okamura, Momoko Ichinokawa

Abstract:

The long time-series data on population assessments are essential for global ecosystem assessment because the temporal change of biomass in such a database properly reflects the status of the global ecosystem. However, the available assessment data usually have limited sample sizes, and the ratio of populations with low abundance of biomass (collapsed) to those with high abundance (non-collapsed) is highly imbalanced. To allow for the imbalance and uncertainty involved in the ecological data, we propose a binary regression model with mixed effects for inferring ecosystem status through an asymmetric logistic model. In the estimation equation, we observe that the weights for the non-collapsed populations are relatively reduced, which in turn puts more importance on the small number of observations of collapsed populations. Moreover, we extend the asymmetric logistic regression model using propensity scores to allow for the sample biases observed in the labeled and unlabeled datasets. This robustifies the estimation procedure and improves the model fitting.

Keywords: double robust estimation, ecological binary data, mixed effect logistic regression model, propensity score

Procedia PDF Downloads 250
30454 Product Development of a Standard Multi-Layer Sweet (Khanom-Chan) Recipe for a Healthy Thai Dessert

Authors: Tidarat Sanphom

Abstract:

The aim of this research is to develop a standard multi-layer sweet (Khanom-Chan) recipe for a healthy Thai dessert. The first objective was to establish a standard recipe for the multi-layer sweet. It was found that the appropriate recipe consisted of 56 grams of rice starch, 172 grams of tapioca starch, 98 grams of arrowroot flour, 16 grams of mung bean flour, 774 grams of coconut milk, 374 grams of fine sugar, 47 grams of pandan leaf juice, and 5 grams of oil. The researcher then studied the ratio of rice-berry flour to rice starch in the multi-layer sweet at levels of 30:70 and 50:50, as well as 100 percent rice-berry flour. Sensory evaluation found that the 30:70 ratio of rice-berry flour to rice starch received a good score. When sugar was reduced by 20, 40, and 60 percent in the rice-berry version, the 20 percent reduction received a good score. The calculated total calories and calories from fat in the multi-layer sweet with rice-berry flour and a 20 percent sugar reduction were 250.04 kcal and 65.16 kcal, respectively.

Keywords: multi-layer sweet (Khanom-Chan), rice-berry flour, pandan leaf juice, dessert

Procedia PDF Downloads 411
30453 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria of the recordings, the functional form of the model, and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuous interest in procedures that will facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability of variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The set of candidate predictors considered includes magnitude, Rrup, and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models’ ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
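
A brief sketch of using the LASSO to rank candidate predictors with scikit-learn; the record file, target transformation, and column names are illustrative assumptions.

```python
# LASSO with cross-validated penalty: coefficient magnitudes (on standardized
# predictors) give a ranking, and the regularization path shows the order in
# which predictors enter the model. Data and names are illustrative.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV, lasso_path

df = pd.read_csv("nga_records.csv")            # hypothetical extract of the record set
predictors = ["magnitude", "rrup", "vs30"]
X = StandardScaler().fit_transform(df[predictors])
y = np.log(df["sa_max_direction"])             # log spectral acceleration as target

lasso = LassoCV(cv=10).fit(X, y)
ranking = sorted(zip(predictors, np.abs(lasso.coef_)), key=lambda t: -t[1])
print("selected alpha:", lasso.alpha_)
for name, weight in ranking:
    print(f"{name}: |coef| = {weight:.3f}")

alphas, coefs, _ = lasso_path(X, y)            # full path for variable-entry order
print("path computed over", len(alphas), "penalty values")
```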

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 316
30452 Seismic Evaluation of Multi-Plastic Hinge Design Approach on RC Shear Wall-Moment Frame Systems against Near-Field Earthquakes

Authors: Mohsen Tehranizadeh, Mahboobe Forghani

Abstract:

The impact of higher modes on the seismic response of dual structural systems consisting of concrete moment-resisting frames and RC shear walls is investigated against near-field earthquakes in this paper. A 20-story reinforced concrete shear wall-special moment frame structure is designed in accordance with ASCE 7 requirements, and the nonlinear model of the structure is built on the OpenSees platform. Nonlinear time-history dynamic analyses with three near-field records are performed on it. In order to further understand the structural collapse behavior in the near field, the response of the structure at the moment of collapse, especially the formation of plastic hinges, is explored. The results revealed that, due to the amplification of moment at the top of the wall by higher modes, a plastic hinge can form in the upper part of the wall even when the wall is designed and detailed for plastic hinging at the base only (according to the ACI code). On the other hand, shear forces in excess of capacity design values can develop due to the contribution of the higher modes of vibration to the dynamic response under near-field motions, which can cause brittle shear or sliding failure modes. Past investigations on shear walls clearly show that the dual-hinge design concept is effective at reducing the effects of the second mode of response. An advantage of the concept is that, when combined with capacity design, it can result in relaxation of special reinforcing detailing in large portions of the wall. In this study, to investigate the implications of the multi-hinge design approach, four models with varied arrangements of plastic hinges at the base and along the height of the shear wall are considered. Results based on time-history analysis showed that the dual or multi plastic hinge approach can be useful in order to control the high moment and shear demands arising from higher mode effects.

Keywords: higher mode effect, near-field earthquake, nonlinear time history analysis, multi plastic hinge design

Procedia PDF Downloads 412
30451 Singular Perturbed Vector Field Method Applied to the Problem of Thermal Explosion of Polydisperse Fuel Spray

Authors: Ophir Nave

Abstract:

In our research, we present the concept of the singularly perturbed vector field (SPVF) method and its application to thermal explosion in diesel spray combustion. Given a system of governing equations that contains hidden multi-scale variables, the SPVF method transfers and decomposes the system into fast and slow singularly perturbed subsystems (SPS). The SPVF method enables us to understand the complex system and simplify the calculations. Powerful analytical, numerical, and asymptotic methods (e.g., the method of integral (invariant) manifold (MIM), the homotopy analysis method (HAM), etc.) can then be applied to each subsystem. We compare the results obtained by the method of integral invariant manifold and by SPVF applied to a spray droplet combustion model. The research deals with the development of an innovative method for extracting fast and slow variables in physical-mathematical models. The method that we developed is called the singularly perturbed vector field method; it is based on a numerical algorithm of global quasi-linearization applied to a given physical model. The SPVF method has been applied successfully to combustion processes, and our results were compared to experimental results. The SPVF is a general numerical and asymptotic method that reveals the hierarchy (multi-scale structure) of a given system.

Keywords: polydisperse spray, model reduction, asymptotic analysis, multi-scale systems

Procedia PDF Downloads 207
30450 A Stochastic Model to Predict Earthquake Ground Motion Duration Recorded in Soft Soils Based on Nonlinear Regression

Authors: Issam Aouari, Abdelmalek Abdelhamid

Abstract:

For seismologists, the characterization of seismic demand should include both the amplitude and the duration of strong shaking in the system. The duration of ground shaking is one of the key parameters in the earthquake-resistant design of structures. This paper proposes a nonlinear statistical model to estimate earthquake ground motion duration in soft soils using multiple seismicity indicators. Three definitions of ground motion duration proposed in the literature have been applied, and a comparative study was used to select the most significant definition for predicting the duration. A stochastic model is presented for the McCann and Shah method using nonlinear regression analysis based on a data set of moment magnitude, source-to-site distance, and site conditions. The data set is taken from the PEER strong motion databank and contains shallow earthquakes from different regions of the world: America, Turkey, London, China, Italy, Chile, Mexico, etc. Main emphasis is placed on soft site conditions. The predictive relationship has been developed based on 600 records and three input indicators, and the results have been compared with other published models. It has been found that the proposed model can predict earthquake ground motion duration in soft soils for different regions and site conditions.
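
A minimal sketch of fitting a nonlinear duration model by least squares with SciPy; the functional form, file, and column names are assumptions for illustration, not the relationship developed in the paper.

```python
# Nonlinear regression of ground-motion duration on magnitude and distance,
# fitted with nonlinear least squares. The functional form is assumed.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

df = pd.read_csv("soft_soil_records.csv")      # hypothetical subset of PEER records

def duration_model(X, a, b, c, d):
    m, r = X
    return a * np.exp(b * m) + c * r + d       # duration grows with magnitude and distance

X = (df["magnitude"].to_numpy(), df["rrup"].to_numpy())
y = df["duration"].to_numpy()

params, _ = curve_fit(duration_model, X, y, p0=[1.0, 0.5, 0.1, 1.0], maxfev=10000)
print(dict(zip("abcd", params)))
residuals = y - duration_model(X, *params)
print("RMSE:", np.sqrt(np.mean(residuals ** 2)))
```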

Keywords: duration, earthquake, prediction, regression, soft soil

Procedia PDF Downloads 138
30449 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression, partial least squares regression, extra tree regression, random forest regression, extreme gradient boosting, and principal component analysis-neural network, are employed to predict glucose concentration. The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²)> 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. 
Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
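
A simplified sketch of the repeated random-split comparison of regressors described above, using scikit-learn with a subset of the listed models; the spectra file, column layout, and hyperparameters are assumptions.

```python
# Ten random 80/20 splits comparing several regressors on NIR spectra.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.metrics import r2_score

df = pd.read_csv("nir_spectra.csv")            # rows: samples, last column: glucose (mg/dl)
X, y = df.iloc[:, :-1].to_numpy(), df.iloc[:, -1].to_numpy()

models = {
    "SVMR": SVR(kernel="rbf", C=100),
    "PLSR": PLSRegression(n_components=10),
    "ETR": ExtraTreesRegressor(n_estimators=200, random_state=0),
    "RFR": RandomForestRegressor(n_estimators=200, random_state=0),
}

scores = {name: [] for name in models}
for repeat in range(10):                       # repeated splits to gauge generalization
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=repeat)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        scores[name].append(r2_score(y_te, model.predict(X_te).ravel()))

for name, vals in scores.items():
    print(f"{name}: mean R2 = {np.mean(vals):.4f}")
```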

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 70
30448 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling

Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal

Abstract:

Datasets or collections are becoming important assets by themselves, and they can now be accepted as a primary intellectual output of research. The quality and usage of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper aims to present a collection of program educational objectives mapped to student outcomes, collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of these data is a notoriously tedious, time-consuming process; in addition, it requires experts in the area, who are mostly not available. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, some features have been selected, and preliminary exploratory data analysis has been performed so as to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors and Back-Propagation Multi-Label Learning). The techniques have been compared to each other using five well-known measurements (Accuracy, Hamming Loss, Micro-F, Macro-F, and Macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods experimented with, while the Classifier Chains method showed the worst performance. To recap, the benchmark has achieved promising results by utilizing preliminary exploratory data analysis performed on the collection, proposing new trends for research and providing a baseline for future studies.
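
A small analogue of the benchmarking step, comparing Binary Relevance and Classifier Chains with scikit-learn on synthetic multi-label data standing in for the PEO-to-outcome collection; the dataset and base classifier are assumptions.

```python
# Binary Relevance vs. Classifier Chains, evaluated with Hamming loss and
# micro/macro F1 on a synthetic multi-label problem.
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier, ClassifierChain
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import hamming_loss, f1_score

X, Y = make_multilabel_classification(n_samples=500, n_features=50,
                                      n_classes=7, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

methods = {
    "Binary Relevance": MultiOutputClassifier(LogisticRegression(max_iter=1000)),
    "Classifier Chains": ClassifierChain(LogisticRegression(max_iter=1000), random_state=0),
}
for name, clf in methods.items():
    pred = clf.fit(X_tr, Y_tr).predict(X_te)
    print(f"{name}: Hamming loss={hamming_loss(Y_te, pred):.3f}, "
          f"micro-F1={f1_score(Y_te, pred, average='micro'):.3f}, "
          f"macro-F1={f1_score(Y_te, pred, average='macro'):.3f}")
```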

Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-label classification, text mining

Procedia PDF Downloads 151
30447 Exploring Factors Related to Unplanned Readmission of Elderly Patients in Taiwan

Authors: Hui-Yen Lee, Hsiu-Yun Wei, Guey-Jen Lin, Pi-Yueh Lee

Abstract:

Background: Unplanned hospital readmissions increase healthcare costs and have been considered a marker of poor healthcare performance. The elderly face a higher risk of unplanned readmission due to elderly-specific characteristics such as deteriorating body functions and the relatively high incidence of complications after treatment of acute diseases. Purpose: The aim of this study was to explore the factors related to unplanned readmission of the elderly within 14 days of discharge at our hospital in southern Taiwan. Methods: We retrospectively reviewed the medical records of patients aged ≥65 years who had been re-admitted between January 2018 and December 2018. The Charlson Comorbidity score was calculated using the previously established method. Factors affecting the rate of unplanned readmission within 14 days of discharge were screened and analyzed using the chi-squared test and logistic regression analysis. Results: This study enrolled 829 subjects aged 65 years or more. There were 318 cases of unplanned readmission within 14 days, while 511 cases were not readmitted. In 2018, the rate of unplanned 14-day readmissions among elderly patients was 38.4%. The majority of readmitted patients were female (166 cases, 52.2%), with an average age of 77.6 ± 7.90 years (range 65-98). The average Charlson Comorbidity score was 4.42 ± 2.76. Using logistic regression analysis, we found that gastric or peptic ulcer (OR = 1.917, p < 0.002), diabetes (OR = 0.722, p < 0.043), hemiplegia (OR = 2.292, p < 0.015), metastatic solid tumor (OR = 2.204, p < 0.025), hypertension (OR = 0.696, p < 0.044), and skin ulcer/cellulitis (OR = 2.747, p < 0.022) were significantly associated with the risk of 14-day readmission. Conclusion: The results of the present study may assist healthcare teams in understanding the factors that may affect unplanned readmission in the elderly. We recommend that these teams adopt an efficient approach in their medical practice, provide timely health education for the elderly, and deliver integrative healthcare for chronic diseases in order to reduce unplanned readmissions.
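
A minimal sketch of the logistic regression step with odds ratios, using statsmodels; the file and column names are hypothetical placeholders for the chart-review variables.

```python
# Logistic regression for 14-day readmission, reporting odds ratios and p-values.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("readmission_2018.csv")       # hypothetical chart-review extract

model = smf.logit(
    "readmit_14d ~ peptic_ulcer + diabetes + hemiplegia + metastatic_tumor"
    " + hypertension + cellulitis + charlson_score",
    data=df,
).fit()

odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "p": model.pvalues,
})
print(odds_ratios)   # OR > 1 indicates higher odds of readmission, OR < 1 lower odds
```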

Keywords: unplanned readmission, elderly, Charlson comorbidity score, logistic regression analysis

Procedia PDF Downloads 121
30446 An Analysis of Classification of Imbalanced Datasets by Using Synthetic Minority Over-Sampling Technique

Authors: Ghada A. Alfattni

Abstract:

Analysing unbalanced datasets is one of the challenges that practitioners in the machine learning field face. Many studies have been carried out to determine the effectiveness of the synthetic minority over-sampling technique (SMOTE) in addressing this issue. The aim of this study was therefore to compare the effectiveness of SMOTE across different models on unbalanced datasets. Three classification models (Logistic Regression, Support Vector Machine and Nearest Neighbour) were tested with multiple datasets; the same datasets were then oversampled using SMOTE and applied again to the three models to compare the differences in performance. The results of the experiments show that the highest number of nearest neighbours gives lower error rates.
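
A short sketch of the SMOTE comparison described above, using imbalanced-learn and scikit-learn on synthetic imbalanced data; the dataset, class imbalance, and metric choice are illustrative assumptions.

```python
# Compare classifiers on an imbalanced dataset before and after SMOTE
# oversampling of the training set.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import balanced_accuracy_score

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_sm, y_sm = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample minority class

models = {"LogReg": LogisticRegression(max_iter=1000),
          "SVM": SVC(),
          "kNN": KNeighborsClassifier(n_neighbors=5)}
for name, clf in models.items():
    raw = balanced_accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    smote = balanced_accuracy_score(y_te, clf.fit(X_sm, y_sm).predict(X_te))
    print(f"{name}: without SMOTE={raw:.3f}, with SMOTE={smote:.3f}")
```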

Keywords: imbalanced datasets, SMOTE, machine learning, logistic regression, support vector machine, nearest neighbour

Procedia PDF Downloads 324
30445 Parameter Selection for Computationally Efficient Use of the BFVrns Fully Homomorphic Encryption Scheme

Authors: Cavidan Yakupoglu, Kurt Rohloff

Abstract:

In this study, we aim to provide a novel parameter selection model for the BFVrns scheme, which is one of the prominent FHE schemes. Parameter selection in lattice-based FHE schemes is a practical challenge for experts and non-experts alike. Towards a solution to this problem, we introduce a hybrid, principles-based approach that combines theoretical and experimental analyses. To begin, we use regression analysis to examine the effect of the parameters on performance and security. The fact that the FHE parameters induce different behaviors in performance, security, and the Ciphertext Expansion Factor (CEF) makes the process of parameter selection more challenging. To address this issue, we use a multi-objective optimization algorithm to select the optimum parameter set for performance, CEF, and security at the same time. As a result of this optimization, we obtain an improved parameter set with better performance at a given security level, while ensuring correctness and security against lattice attacks by providing at least 128-bit security. Our result enables, on average, an approximately 5x smaller CEF and mostly better performance in comparison to the parameter sets given in [1]. This approach can be considered a semi-automated parameter selection. These studies are conducted using the PALISADE homomorphic encryption library, which is a well-known HE library.

Keywords: lattice cryptography, fully homomorphic encryption, parameter selection, LWE, RLWE

Procedia PDF Downloads 131
30444 Solving Fuzzy Multi-Objective Linear Programming Problems with Fuzzy Decision Variables

Authors: Mahnaz Hosseinzadeh, Aliyeh Kazemi

Abstract:

In this paper, a method is proposed for solving Fuzzy Multi-Objective Linear Programming problems (FMOLPP) with fuzzy right hand side and fuzzy decision variables. To illustrate the proposed method, it is applied to the problem of selecting suppliers for an automotive parts producer company in Iran in order to find the number of optimal orders allocated to each supplier considering the conflicting objectives. Finally, the obtained results are discussed.

Keywords: fuzzy multi-objective linear programming problems, triangular fuzzy numbers, fuzzy ranking, supplier selection problem

Procedia PDF Downloads 366
30443 Research on the Conservation Strategy of Territorial Landscape Based on Characteristics: The Case of Fujian, China

Authors: Tingting Huang, Sha Li, Geoffrey Griffiths, Martin Lukac, Jianning Zhu

Abstract:

Territorial landscapes have experienced a gradual loss of their typical characteristics during long-term human activities. In order to protect the integrity of regional landscapes, it is necessary to characterize, evaluate and protect them in a graded manner. The study takes Fujian, China, as an example and classifies the landscape characters of the site at the regional, middle, and detailed scales. A multi-scale approach combining parametric and holistic methods is used to classify and partition the landscape character types (LCTs) and landscape character areas (LCAs) at different scales, and a multi-element landscape assessment approach is adopted to explore conservation strategies for landscape character. Firstly, multiple fields and elements of geography, nature and humanities were selected as the basis of assessment according to the scales. Secondly, the study takes a parametric approach to the classification and partitioning of landscape character, applying Principal Component Analysis and two-stage cluster analysis (K-means and GMM) in MATLAB to obtain LCTs, combined with the Canny edge detection algorithm to obtain landscape character contours; the LCTs and LCAs are then corrected through field survey and manual identification. Finally, the study adopts the Landscape Sensitivity Assessment method to perform landscape character conservation analysis and formulates five strategies for different LCAs: conservation, enhancement, restoration, creation, and combination. This multi-scale identification approach can efficiently integrate multiple types of landscape character elements, reduce the difficulty of broad-scale operations in the process of landscape character conservation, and provide a basis for landscape character conservation strategies. Grounded in the natural background and the restoration of regional characteristics, the results of the landscape character assessment are scientific and objective and can provide a strong reference for regional and national scale territorial spatial planning.
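
A rough Python analogue of the parametric classification step (PCA followed by two-stage K-means/GMM clustering); the paper works in MATLAB, and the file, descriptor columns, and number of clusters here are assumptions.

```python
# Two-stage clustering of landscape descriptors: PCA, K-means to seed the
# cluster centers, then a Gaussian mixture for the final character types.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

df = pd.read_csv("landscape_cells.csv")        # hypothetical grid of descriptor values
features = ["elevation", "slope", "landcover_code", "soil_type", "settlement_density"]

X = StandardScaler().fit_transform(df[features])
X_pca = PCA(n_components=3).fit_transform(X)   # keep the leading components

k = 8                                          # assumed number of landscape character types
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_pca)
gmm = GaussianMixture(n_components=k, means_init=kmeans.cluster_centers_,
                      random_state=0).fit(X_pca)
df["LCT"] = gmm.predict(X_pca)                 # landscape character type per grid cell
```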

Keywords: parameterization, multi-scale, landscape character identification, landscape character assessment

Procedia PDF Downloads 76
30442 Multi-Actors’ Scenario for Measuring Metropolitan Governance and Spatial Planning: A Case Study of Bangalore, India

Authors: H. S. Kumara

Abstract:

The rapid process of urbanization and the growing number of metropolitan cities and their regions call for better governance in India. This article argues that spatial planning really matters for measuring governance at the metropolitan scale. The study explores metropolitan governance and spatial planning and their interrelationship, along with the issues, concepts and evolution of spatial planning in India, and critically examines the multi-actors’ scenario for measuring metropolitan governance by means of spatial planning, in the context of reviewing various master plans and multi-actor viewpoints on the role of spatial planning in relation to zoning regulations, master plan implementation and effective service delivery. This paper argues and concludes that the spatial planning of Bangalore directly impacts the measurement of metropolitan governance.

Keywords: metropolitan governance, spatial planning, service delivery, multi-actors’, opinion survey, master plan

Procedia PDF Downloads 576
30441 A Combined AHP-GP Model for Selecting Knowledge Management Tool

Authors: Ahmad Sarfaraz, Raiyad Herwies

Abstract:

In this paper, a multi-criteria decision-making analysis is used to help an organization select the best KM tool that fits and serves its needs. The AHP model, based on a previous study, is used to highlight and identify the main criteria and sub-criteria that are incorporated in the selection process. Different KM tool alternatives with different criteria are compared and weighted accurately to be incorporated in the GP model. The main goal is to combine the GP model with the AHP model to ensure that the selection of the KM tool considers the resource constraints. Two important issues are discussed in this paper: how different factors can be taken into consideration in forming the AHP model, and how to incorporate the AHP results into the GP model for better results.
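
A minimal sketch of computing AHP criteria weights from a pairwise comparison matrix via the principal eigenvector, with a consistency check; the matrix values and criteria are made up for illustration, and the resulting weights would then feed the GP model.

```python
# AHP priority weights from a pairwise comparison matrix (Saaty 1-9 scale),
# with a consistency ratio check. All values below are hypothetical.
import numpy as np

A = np.array([[1.0, 3.0, 5.0, 7.0],
              [1/3, 1.0, 3.0, 5.0],
              [1/5, 1/3, 1.0, 3.0],
              [1/7, 1/5, 1/3, 1.0]])            # four hypothetical KM-tool criteria

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                        # normalized criteria weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)            # consistency index
cr = ci / 0.90                                  # Saaty's random index for n = 4 is 0.90
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```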

Keywords: knowledge management, analytical hierarchy process, goal programming, multi-criteria decision making

Procedia PDF Downloads 364
30440 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling

Authors: Ali Ben Abbes, ImedRiadh Farah, Vincent Barra

Abstract:

Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas has been a subject of intensive research, and timely and accurate data on the spatio-temporal changes of urban areas are therefore required. The data extracted from multi-temporal satellite images are usually non-stationary; in fact, the changes evolve in time and space. This paper proposes a methodology for change detection in urban areas by combining a non-stationary decomposition method and stochastic modeling. We consider as input to our methodology a sequence of satellite images I1, I2, … In at different periods (t = 1, 2, ..., n). Firstly, preprocessing of the multi-temporal satellite images is applied (e.g., radiometric, atmospheric and geometric corrections). The systematic study of global urban expansion in our methodology can be approached in two ways: the first considers the urban area as a single object as opposed to non-urban areas (e.g., vegetation, bare soil and water), with the objective of extracting the urban mask; the second aims to obtain more detailed knowledge of the urban area, distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos, Madrid, in Spain, derived from Landsat for the period from January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of our method.

Keywords: multi-temporal satellite image, urban growth, non-stationary, stochastic model

Procedia PDF Downloads 414
30439 SiamMask++: More Accurate Object Tracking through Layer Wise Aggregation in Visual Object Tracking

Authors: Hyunbin Choi, Jihyeon Noh, Changwon Lim

Abstract:

In this paper, we propose SiamMask++, an architecture that performs layer-wise aggregation and depth-wise cross-correlation and introduces a multi-RPN module and a multi-MASK module to improve EAO (Expected Average Overlap), a representative performance evaluation metric for the Visual Object Tracking (VOT) challenge. The proposed architecture, SiamMask++, has two versions: bi_SiamMask++, which runs in real time (56 fps) on systems equipped with GPUs (Titan XP), and rf_SiamMask++, which adds mask refinement modules for EAO improvements. Tests are performed on VOT2016, VOT2018 and VOT2019, representative Visual Object Tracking datasets labeled with rotated bounding boxes. SiamMask++ performs better than SiamMask on all three datasets tested. In particular, on the VOT2018 dataset, SiamMask++ achieves 62.6% accuracy, 26.2% robustness and 39.8% EAO; compared to SiamMask, this is an improvement of 4.18%, 37.17% and 23.99%, respectively. In addition, we present an in-depth experimental analysis of how much the features extracted from the backbone and the multi modules affect the performance of our model on the VOT task.

Keywords: visual object tracking, video, deep learning, layer wise aggregation, Siamese network

Procedia PDF Downloads 130
30438 Multi-Robotic Partial Disassembly Line Balancing with Robotic Efficiency Difference via HNSGA-II

Authors: Tao Yin, Zeqiang Zhang, Wei Liang, Yanqing Zeng, Yu Zhang

Abstract:

To accelerate the remanufacturing process of electronic waste products, this study designs a partial disassembly line with the multi-robotic station to effectively dispose of excessive wastes. The multi-robotic partial disassembly line is a technical upgrade to the existing manual disassembly line. Balancing optimization can make the disassembly line smoother and more efficient. For partial disassembly line balancing with the multi-robotic station (PDLBMRS), a mixed-integer programming model (MIPM) considering the robotic efficiency differences is established to minimize cycle time, energy consumption and hazard index and to calculate their optimal global values. Besides, an enhanced NSGA-II algorithm (HNSGA-II) is proposed to optimize PDLBMRS efficiently. Finally, MIPM and HNSGA-II are applied to an actual mixed disassembly case of two types of computers, the comparison of the results solved by GUROBI and HNSGA-II verifies the correctness of the model and excellent performance of the algorithm, and the obtained Pareto solution set provides multiple options for decision-makers.

Keywords: waste disposal, disassembly line balancing, multi-robot station, robotic efficiency difference, HNSGA-II

Procedia PDF Downloads 214
30437 A Modified NSGA-II Algorithm for Solving Multi-Objective Flexible Job Shop Scheduling Problem

Authors: Aydin Teymourifar, Gurkan Ozturk, Ozan Bahadir

Abstract:

NSGA-II is one of the most well-known and most widely used evolutionary algorithms. In addition to its newer versions, such as NSGA-III, there are several modified variants of this algorithm in the literature. In this paper, a hybrid NSGA-II algorithm is suggested for solving the multi-objective flexible job shop scheduling problem. For a better search, new neighborhood-based crossover and mutation operators are defined. To create new generations, the neighbors of the individuals selected by tournament selection are constructed. Also, at the end of each iteration, before sorting, neighbors of a certain number of good solutions are derived, except for solutions protected by elitism. The neighbors are generated using a constraint-based neural network that uses various constructs. The non-dominated sorting and crowding distance operators are the same as in the classic NSGA-II. A comparison based on several multi-objective benchmarks from the literature shows the efficiency of the algorithm.

Keywords: flexible job shop scheduling problem, multi-objective optimization, NSGA-II algorithm, neighborhood structures

Procedia PDF Downloads 215
30436 Analysis of Bridge-Pile Foundation System in Multi-layered Non-Linear Soil Strata Using Energy-Based Method

Authors: Arvan Prakash Ankitha, Madasamy Arockiasamy

Abstract:

The increasing demand for adopting pile foundations in bridges has pointed towards the need to constantly improve the existing analytical techniques for a better understanding of the behavior of such foundation systems. This study presents a simplified approach using the energy-based method to assess the displacement responses of piles subjected to general loading conditions: axial load, lateral load, and bending moment. The governing differential equations and the boundary conditions for a bridge pile embedded in multi-layered soil strata subjected to these general loading conditions are obtained using Hamilton’s principle, employing variational principles and the minimization of energies. The soil non-linearity is incorporated through simple constitutive relationships that account for the degradation of soil moduli with increasing strain values. A simple power law based on published literature is used, where the soil is assumed to be nonlinear-elastic and perfectly plastic. A Tresca yield surface is assumed to develop the variation of soil stiffness with different strain levels, which defines the non-linearity of the soil strata. This numerical technique has been applied to a pile foundation in a two-layered soil stratum for a pier supporting the bridge and solved using the software MATLAB R2019a. The analysis yields the bridge pile displacements at any depth along the length of the pile. The results of the analysis are in good agreement with published field data and with three-dimensional finite element analysis results obtained using the software ANSYS 2019R3. The methodology can be extended to study the response of multi-strata soil supporting group piles underneath the bridge piers.

Keywords: pile foundations, deep foundations, multilayer soil strata, energy based method

Procedia PDF Downloads 117
30435 Deep Learning-Based Automated Structure Deterioration Detection for Building Structures: A Technological Advancement for Ensuring Structural Integrity

Authors: Kavita Bodke

Abstract:

Structural health monitoring (SHM) is experiencing growth, necessitating the development of distinct methodologies to address its expanding scope effectively. In this study, we developed automatic structural damage identification, which addresses three distinct types of deterioration affecting a building’s structural integrity. The first pertains to the presence of fractures within the structure, the second relates to dampness within the structure, and the third involves corrosion inside the structure. This study employs image classification techniques to discern between intact and impaired structures within structural data. The aim of this research is automatic damage detection that yields the probability of each damage class being present in one image; based on these probabilities, we can determine which class is more likely, or which form of deterioration affects the structure more, than the others. Photographs captured by a mobile camera serve as the input for the image classification system. Image classification was employed in our study to perform both multi-class and multi-label classification, with the objective of categorizing structural data based on the presence of cracks, moisture, and corrosion. In the context of multi-class image classification, our study employed three distinct methodologies: Random Forest, Multilayer Perceptron, and CNN. For the task of multi-label image classification, the models employed were ResNet, Xception, and Inception.

Keywords: SHM, CNN, deep learning, multi-class classification, multi-label classification

Procedia PDF Downloads 14