Search results for: universal testing machine
5581 Reactivation of Hydrated Cement and Recycled Concrete Powder by Thermal Treatment for Partial Replacement of Virgin Cement
Authors: Gustave Semugaza, Anne Zora Gierth, Tommy Mielke, Marianela Escobar Castillo, Nat Doru C. Lupascu
Abstract:
The generation of Construction and Demolition Waste (CDW) has increased enormously worldwide due to the growing volume of construction, renovation, and demolition of built structures. Several studies have investigated the use of CDW materials in the production of new concrete and reported lower mechanical properties in the resulting concrete. Many other researchers considered the possibility of using Hydrated Cement Powder (HCP) to replace a part of Ordinary Portland Cement (OPC), but only very few investigated the use of Recycled Concrete Powder (RCP) from CDW. The partial replacement of OPC for making new concrete aims to decrease the CO₂ emissions associated with OPC production. However, the RCP and HCP need treatment before they can produce new concrete with the required mechanical properties. The thermal treatment method has proven to improve HCP properties before use. Previous research has stated that, for using HCP in concrete, optimum results are achievable by heating HCP between 400°C and 800°C. The optimum heating temperature depends on the type of cement used to make the Hydrated Cement Specimens (HCS), the crushing and heating method of HCP, and the curing method of the Rehydrated Cement Specimens (RCS). This research assessed the quality of the recycled materials using different techniques such as X-ray Diffraction (XRD), Differential Scanning Calorimetry (DSC) and Thermogravimetry (TG), Scanning Electron Microscopy (SEM), and X-ray Fluorescence (XRF). These recycled materials were thermally pretreated at different temperatures from 200°C to 1000°C. Additionally, the research investigated to what extent the thermally treated recycled cement could partially replace OPC and whether the new concrete produced would achieve the required mechanical properties. The mechanical properties were evaluated on the RCS, obtained by mixing the Dehydrated Cement Powder and Recycled Powder (DCP and DRP) with water (w/c = 0.6 and w/c = 0.45).
Compressive strength was measured with a compressive testing machine, and flexural strength was assessed with the three-point bending test.
Keywords: hydrated cement powder, dehydrated cement powder, recycled concrete powder, thermal treatment, reactivation, mechanical performance
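The two strength measures named above follow standard formulas; as a rough illustration (the specimen dimensions and loads below are hypothetical, not taken from the study), compressive strength is load over cross-sectional area and three-point flexural strength is 3FL/(2bd²):

```python
def compressive_strength(force_n: float, side_mm: float) -> float:
    """Compressive strength of a cubic specimen in MPa: sigma = F / A."""
    return force_n / side_mm ** 2  # N/mm^2 is numerically equal to MPa

def flexural_strength(force_n: float, span_mm: float,
                      width_mm: float, depth_mm: float) -> float:
    """Three-point bending flexural strength: sigma_f = 3 F L / (2 b d^2)."""
    return 3 * force_n * span_mm / (2 * width_mm * depth_mm ** 2)

# Hypothetical example: 90 kN failure load on a 50 mm cube
print(compressive_strength(90_000, 50))                  # 36.0 MPa
# Hypothetical prism: 3 kN over a 100 mm span, 40 x 40 mm cross-section
print(round(flexural_strength(3_000, 100, 40, 40), 3))   # 7.031 MPa
```

The water-to-cement ratios reported in the abstract do not enter these formulas; they affect the measured failure loads themselves.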
Procedia PDF Downloads 156
5580 Fraud Detection in Credit Cards with Machine Learning
Authors: Anjali Chouksey, Riya Nimje, Jahanvi Saraf
Abstract:
Online transactions have increased dramatically in this new ‘social-distancing’ era, and with them, fraud in online payments has also increased significantly. Fraud is a significant problem in various industries such as insurance and banking. These frauds include leaking sensitive information related to the credit card, which can easily be misused. With the government also pushing online transactions, e-commerce is booming, but due to increasing fraud in online payments, e-commerce industries are suffering a great loss of trust from their customers. These companies find credit card fraud to be a big problem. People have started using online payment options and have thus become easy targets of credit card fraud. In this research paper, we discuss machine learning algorithms. We have used a decision tree, XGBoost, k-nearest neighbour, logistic regression, random forest, and SVM on a dataset of online credit card transactions. We test all these algorithms for detecting fraud cases using the confusion matrix and F1 score, and calculate the accuracy score for each model to identify which algorithm is best suited to detecting fraud.
Keywords: machine learning, fraud detection, artificial intelligence, decision tree, k nearest neighbour, random forest, XGBOOST, logistic regression, support vector machine
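The evaluation metrics mentioned above can be computed directly from the prediction counts; a minimal sketch (the toy labels are illustrative, not from the paper's dataset):

```python
def confusion_counts(y_true, y_pred):
    """Counts for the positive (fraud) class: TP, TN, FP, FN."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def f1_and_accuracy(y_true, y_pred):
    """F1 score and accuracy derived from the confusion counts."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return f1, (tp + tn) / len(y_true)

# Toy labels: 1 = fraud, 0 = legitimate
f1, acc = f1_and_accuracy([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
print(round(f1, 3), round(acc, 3))  # 0.667 0.667
```

For heavily imbalanced fraud data, F1 on the fraud class is usually more informative than raw accuracy, which is why the paper reports both.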
Procedia PDF Downloads 149
5579 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: - Resource fragmentation due to the virtual machine boundary. - Poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation, and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B.
Aggregation and embedding techniques are complex since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
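The a1-b1/a2-b2 example above can be sketched as a simple pairing rule; this is a hypothetical illustration of the idea, not the paper's formal method:

```python
def embed_pairs(instances_a, instances_b):
    """Co-locate the i-th replica of A with the i-th replica of B so each
    pair communicates over localhost, with no load balancer in between."""
    if len(instances_a) != len(instances_b):
        raise ValueError("this simple embedding assumes equal replica counts")
    return {f"m{i + 1}": pair
            for i, pair in enumerate(zip(instances_a, instances_b))}

plan = embed_pairs(["a1", "a2"], ["b1", "b2"])
print(plan)  # {'m1': ('a1', 'b1'), 'm2': ('a2', 'b2')}
```

A real optimizer would also weigh resource usage, costs, and failure tolerance, as the paper describes, rather than pairing replicas positionally.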
Procedia PDF Downloads 204
5578 Grating Scale Thermal Expansion Error Compensation for Large Machine Tools Based on Multiple Temperature Detection
Authors: Wenlong Feng, Zhenchun Du, Jianguo Yang
Abstract:
To decrease the grating scale thermal expansion error, a novel method based on multiple temperature detection is proposed. Several temperature sensors are installed on the grating scale and their temperatures are recorded. The temperature of every point on the grating scale is calculated by interpolating between adjacent sensors. According to the thermal expansion principle, the grating scale thermal expansion error model can be established by integrating the variation of temperature over position. A novel compensation method is proposed in this paper. By applying the established error model, the grating scale thermal expansion error is decreased by 90% compared with no compensation. The residual positioning error of the grating scale is less than 15 μm over 10 m, and the accuracy of the machine tool is significantly improved.
Keywords: thermal expansion error of grating scale, error compensation, machine tools, integral method
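The integral error model described above can be sketched numerically: with piecewise-linear interpolation between sensors, each segment's contribution to the integral reduces to the trapezoidal rule. The expansion coefficient and sensor readings below are hypothetical:

```python
def thermal_expansion_error(pos_mm, temp_c, ref_temp_c=20.0, alpha=11.5e-6):
    """Expansion error (mm) accumulated along the scale: the integral of
    alpha * dT(x) dx, with dT piecewise-linear between adjacent sensors,
    evaluated segment by segment with the trapezoidal rule."""
    error = 0.0
    for i in range(len(pos_mm) - 1):
        mean_dt = (temp_c[i] + temp_c[i + 1]) / 2 - ref_temp_c  # trapezoid
        error += alpha * mean_dt * (pos_mm[i + 1] - pos_mm[i])
    return error

# Two sensors spanning 1 m of scale, both reading 25 C (5 C above reference):
print(round(thermal_expansion_error([0, 1000], [25.0, 25.0]), 6))  # 0.0575 mm
```

The compensation then subtracts this modeled error from the commanded position; the value of alpha would in practice come from the scale material's datasheet.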
Procedia PDF Downloads 368
5577 Modeling of Digital and Settlement Consolidation of Soil under Oedometer
Authors: Yu-Lin Shen, Ming-Kuen Chang
Abstract:
In addition to a considerable amount of machinery and equipment, intricate transmission pipelines exist in petrochemical plants. Long-term corrosion may lead to pipeline thinning and rupture, causing serious safety concerns. With the advances in non-destructive testing technology, more rapid and long-range ultrasonic detection techniques are often used for pipeline inspection. Electromagnetic Acoustic Transducer (EMAT) testing requires no couplant; it is a non-contact ultrasonic technique suitable for inspecting lines at elevated temperature or with roughened surfaces. In this study, we prepared artificial defects in a pipeline for EMAT testing to survey the relationship between defect location and sizing and the EMAT signal. It was found that the signal amplitude of EMAT exhibited greater attenuation with larger defect depth and length. In addition, with a bigger flat hole diameter, greater amplitude attenuation was obtained. In summary, the signal amplitude attenuation of EMAT was affected by the defect depth, the defect length, and the hole diameter.
Keywords: EMAT, artificial defect, NDT, ultrasonic testing
Procedia PDF Downloads 333
5576 Seismic Assessment of a Pre-Cast Recycled Concrete Block Arch System
Authors: Amaia Martinez Martinez, Martin Turek, Carlos Ventura, Jay Drew
Abstract:
This study aims to assess the seismic performance of arch and dome structural systems made from easy-to-assemble precast blocks of recycled concrete. These systems have been developed by Lock Block Ltd. of Vancouver, Canada, as an extension of their currently used retaining wall system. The seismic behavior of these structures is characterized by a combination of experimental static and dynamic testing and analytical modeling. For the experimental testing, several tilt tests as well as a program of shake table testing were undertaken using small-scale arch models. A suite of earthquakes with different characteristics from important past events was chosen and scaled appropriately for the dynamic testing. Shake table tests applying the ground motions in one direction only (the weak direction of the arch) and in all three directions were conducted and compared. The models were tested with increasing intensity until collapse occurred, which determines the failure level for each earthquake. Since the failure intensity varied with the type of earthquake, a sensitivity analysis of the different parameters was performed, with impulses found to be the dominant factor. For all cases, the arches exhibited the typical four-hinge failure mechanism, which was also shown in the analytical model. Experimental testing was also performed with the arches reinforced using a steel band placed over the structures and anchored at both ends of the arch. The models were tested with different pretension levels. The bands were instrumented with strain gauges to measure the force produced by the shaking. These forces were used to develop engineering guidelines for the design of the reinforcement needed for these systems. In addition, an analytical discrete element model was created using 3DEC software. The blocks were modeled as rigid blocks, with all properties assigned to the joints, including the contribution of the interlocking shear key between blocks.
The model is calibrated to the experimental static tests and validated against the results obtained from the dynamic tests. The model can then be used to scale up the results to the full-scale structure and to extend them to different configurations and boundary conditions.
Keywords: arch, discrete element model, seismic assessment, shake-table testing
Procedia PDF Downloads 207
5575 Regression Model Evaluation on Depth Camera Data for Gaze Estimation
Authors: James Purnama, Riri Fitri Sari
Abstract:
We investigate the machine learning algorithm selection problem in the context of depth-image-based eye gaze estimation, with respect to the essential difficulty of reducing the number of required training samples and the duration of training. Statistics-based prediction accuracy is increasingly used to assess and evaluate prediction or estimation in gaze estimation. This article evaluates Root Mean Squared Error (RMSE) and R-Squared statistical analysis to assess machine learning methods on depth camera data for gaze estimation. Four machine learning methods were evaluated: Random Forest Regression, Regression Tree, Support Vector Machine (SVM), and Linear Regression. The experiment results show that Random Forest Regression has the lowest RMSE and the highest R-Squared, which means that it is the best among the other methods.
Keywords: gaze estimation, gaze tracking, eye tracking, kinect, regression model, orange python
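The two statistics used above have straightforward definitions; a minimal sketch (the gaze values are illustrative only):

```python
def rmse(y_true, y_pred):
    """Root Mean Squared Error."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical gaze-angle predictions against ground truth (degrees)
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
print(round(rmse(y_true, y_pred), 4), round(r_squared(y_true, y_pred), 4))  # 0.1581 0.98
```

Note the two metrics move together here: lower RMSE implies higher R-Squared for a fixed target variance, which is consistent with Random Forest Regression winning on both.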
Procedia PDF Downloads 539
5574 Comparing and Contrasting Western and Eastern Ways of War: Building a Universal Strategic Theory
Authors: Adam Kok Wey Leong
Abstract:
The comparison between the Western and Eastern ways of war has raised contemporary debates on the validity of these arguments. The Western way of war is popularly propounded by Victor Davis Hanson as originating from Greek hoplite tactics, direct military maneuvers, democratic principles, and social freedom and cohesion, which have continued to yield military success for the Western powers for centuries. On the other hand, the Eastern way of war has been deemed to rely on indirect tactics, deception, and ruses. This often-accepted notion of a divide between Western and Eastern styles does not hold up in view of the available classical strategic texts from both sides from the same period that propose similar principles of warfare. This paper analyses the similarities between a classical strategic text on war from the Eastern perspective, namely Sun Tzu's Art of War, and a temporally similar strategic text from the West, Sextus Iulius Frontinus's Stratagematon, and deduces answers to the core research question: does the hypothesis of the existence of distinctive Western and Eastern ways of warfare stand? The main thesis advanced by this research is that ways of warfare share universal principles that transcend cultural and spatial boundaries. Warfare is a human endeavour, and the same moral actions guide humans from different geo-cultural spheres in warfare's objectives, which are winning over an enemy in the most economical way and serving as a means to an end.
Keywords: ways of warfare, strategic culture, strategy, Sun Tzu, frontinus
Procedia PDF Downloads 479
5573 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms
Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager
Abstract:
This study aims to construct a predictive model proficient in foreseeing the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangular), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN), to facilitate this predictive model. Moreover, this research goes beyond the predictive aspect by delving into an inverse analysis using genetic algorithms. The intent is to unveil the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research lies in the establishment of a comprehensive database that accounts for the array of input parameters mentioned earlier. This database, enriched with this diversity of input variables, serves as a bedrock for the creation of machine learning and genetic algorithm-based models. These models are meticulously trained not only to predict but also to elucidate the mechanical and thermal conduct of composite materials. Remarkably, the coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, boasting scores ranging between 0.97 and 0.99. This achievement marks a significant breakthrough, demonstrating the potential of this innovative approach in the field of materials engineering.
Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties
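The inverse analysis via genetic algorithms described above can be illustrated with a minimal sketch; the forward model below is a toy stand-in for the real homogenization computation, and all parameter values (population size, mutation scale, bounds) are hypothetical:

```python
import random

def forward_model(vf, contrast):
    """Toy stand-in for the homogenized effective property of a composite
    with inclusion volume fraction `vf` and phase contrast `contrast`."""
    return 1.0 + vf * (contrast - 1.0) / (1.0 + vf)

def ga_inverse(target, contrast, pop_size=30, generations=60, seed=0):
    """Genetic algorithm recovering the volume fraction (0.05-0.4) whose
    predicted response matches an observed `target` response."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.05, 0.4) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda v: (forward_model(v, contrast) - target) ** 2)
        parents = pop[:pop_size // 2]                 # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 + rng.gauss(0, 0.01)  # crossover + mutation
            children.append(min(0.4, max(0.05, child)))
        pop = parents + children
    return pop[0]

observed = forward_model(0.30, 50)  # pretend 0.30 is the unknown truth
print(ga_inverse(observed, 50))
```

With the fixed seed, the recovered volume fraction lands close to the true 0.30; in the paper the same loop would wrap a trained ML surrogate rather than this one-line formula.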
Procedia PDF Downloads 54
5572 Health Sector Budgetary Allocations and Their Implications on Health Service Delivery and Universal Health Coverage in Uganda
Authors: Richard Ssempala, Francis Kintu, Christine K. Tashobya
Abstract:
Funding for health remains a key constraint facing many developing countries, Uganda included. The health sector's share of Uganda's national budget has stagnated between 8.2% and 10% over the years. Using data collected from different government documents, we sought to establish the implications of the budget allocation over the period FY2010/11-2018/19 on health service delivery in Uganda, to inform policymakers, specifically Members of Parliament, who are critical in making sectoral allocations, about the steps they can take to change the terrain of health financing in Uganda. Findings revealed that the contribution of public funding to the health sector is low (15.7%), with private sources (42.6%) and donors contributing much more, and with the bulk of private funds being out-of-pocket payments. The study further revealed that low budget allocation has been manifested in inadequate and poorly motivated health workers and essential drug stock-outs, which ultimately contribute to poor access to services, catastrophic health expenditures, and high morbidity rates. We recommend a substantial and sustained increase in the government health budget, optimizing the available resources by addressing wastage, prioritizing health promotion and prevention, and finally, institutionalizing the National Health Insurance Scheme.
Keywords: budget allocations, universal health coverage, health service delivery, Uganda
Procedia PDF Downloads 193
5571 Comparing Deep Architectures for Selecting Optimal Machine Translation
Authors: Despoina Mouratidis, Katia Lida Kermanidis
Abstract:
Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information like part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. This framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested with this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known basic approaches, such as Random Forest (RF) and Support Vector Machine (SVM).
Better accuracy results are obtained when LSTM layers are used in our schema. In terms of balance across classes, better results are obtained when dense layers are used, because the model correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results in Italian as compared to Greek, taking into account that the linguistic features employed are language-independent.
Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification
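One way to build the similarity-based vector representation described above is from embedding cosine similarities; this is a hedged sketch of the idea, not the authors' exact feature set:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def pairwise_features(emb_smt, emb_nmt, emb_ref):
    """Similarity features for the pairwise classifier: each MT output
    against the reference, and the two MT outputs against each other."""
    return [cosine(emb_smt, emb_ref),
            cosine(emb_nmt, emb_ref),
            cosine(emb_smt, emb_nmt)]

# Toy 3-dimensional sentence embeddings (real ones would be learned)
print(pairwise_features([1, 0, 0], [0, 1, 0], [1, 1, 0]))
```

The resulting feature vector would then be concatenated with the string-like language-independent features before being fed to the dense, CNN, or LSTM classifier.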
Procedia PDF Downloads 133
5570 Machine Learning Prediction of Diabetes Prevalence in the U.S. Using Demographic, Physical, and Lifestyle Indicators: A Study Based on NHANES 2009-2018
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
To develop a machine learning model to predict diabetes (DM) prevalence in the U.S. population using demographic characteristics, physical indicators, and lifestyle habits, and to analyze how these factors contribute to the likelihood of diabetes. We analyzed data from 23,546 participants aged 20 and older, who were non-pregnant, from the 2009-2018 National Health and Nutrition Examination Survey (NHANES). The dataset included key demographic (age, sex, ethnicity), physical (BMI, leg length, total cholesterol [TCHOL], fasting plasma glucose), and lifestyle indicators (smoking habits). A weighted sample was used to account for NHANES survey design features such as stratification and clustering. A classification machine learning model was trained to predict diabetes status. The target variable was binary (diabetes or non-diabetes) based on fasting plasma glucose measurements. The following models were evaluated: Logistic Regression (baseline), Random Forest Classifier, Gradient Boosting Machine (GBM), Support Vector Machine (SVM). Model performance was assessed using accuracy, F1-score, AUC-ROC, and precision-recall metrics. Feature importance was analyzed using SHAP values to interpret the contributions of variables such as age, BMI, ethnicity, and smoking status. The Gradient Boosting Machine (GBM) model outperformed other classifiers with an AUC-ROC score of 0.85. Feature importance analysis revealed the following key predictors: Age: The most significant predictor, with diabetes prevalence increasing with age, peaking around the 60s for males and 70s for females. BMI: Higher BMI was strongly associated with a higher risk of diabetes. Ethnicity: Black participants had the highest predicted prevalence of diabetes (14.6%), followed by Mexican-Americans (13.5%) and Whites (10.6%). TCHOL: Diabetics had lower total cholesterol levels, particularly among White participants (mean decline of 23.6 mg/dL). 
Smoking: Smoking showed a slight increase in diabetes risk among Whites (0.2%) but had a limited effect in other ethnic groups. Using machine learning models, we identified key demographic, physical, and lifestyle predictors of diabetes in the U.S. population. The results confirm that diabetes prevalence varies significantly across age, BMI, and ethnic groups, with lifestyle factors such as smoking contributing differently by ethnicity. These findings provide a basis for more targeted public health interventions and resource allocation for diabetes management.
Keywords: diabetes, NHANES, random forest, gradient boosting machine, support vector machine
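Because NHANES uses survey weights, prevalence estimates weight each respondent rather than counting cases; a minimal sketch of the weighted calculation (toy flags and weights, not NHANES data):

```python
def weighted_prevalence(diabetes_flags, survey_weights):
    """Survey-weighted prevalence: weighted share of positive cases, used to
    respect the NHANES sampling design rather than raw case counts."""
    total = sum(survey_weights)
    cases = sum(w for d, w in zip(diabetes_flags, survey_weights) if d == 1)
    return cases / total

# Toy example: two cases, but the first carries twice the survey weight
print(weighted_prevalence([1, 0, 0, 1], [2.0, 1.0, 1.0, 1.0]))  # 0.6
```

A full NHANES analysis would additionally account for stratification and clustering when computing variances, as the abstract notes.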
Procedia PDF Downloads 12
5569 Machine Learning for Targeting of Conditional Cash Transfers: Improving the Effectiveness of Proxy Means Tests to Identify Future School Dropouts and the Poor
Authors: Cristian Crespo
Abstract:
Conditional cash transfers (CCTs) have been targeted towards the poor. Thus, their targeting assessments check whether these schemes have been allocated to low-income households or individuals. However, CCTs have more than one goal and target group. An additional goal of CCTs is to increase school enrolment. Hence, students at risk of dropping out of school also are a target group. This paper analyses whether one of the most common targeting mechanisms of CCTs, a proxy means test (PMT), is suitable to identify the poor and future school dropouts. The PMT is compared with alternative approaches that use the outputs of a predictive model of school dropout. This model was built using machine learning algorithms and rich administrative datasets from Chile. The paper shows that using machine learning outputs in conjunction with the PMT increases targeting effectiveness by identifying more students who are either poor or future dropouts. This joint targeting approach increases effectiveness in different scenarios except when the social valuation of the two target groups largely differs. In these cases, the most likely optimal approach is to solely adopt the targeting mechanism designed to find the highly valued group.
Keywords: conditional cash transfers, machine learning, poverty, proxy means tests, school dropout prediction, targeting
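The joint targeting approach described above amounts to taking the union of the two eligibility rules; a hedged sketch with hypothetical names and cutoffs:

```python
def joint_targeting(pmt_score, dropout_risk, pmt_cutoff, risk_cutoff):
    """Union targeting: flag anyone whose PMT score marks them as poor
    (lower score = poorer) OR whose predicted dropout risk exceeds the
    cutoff from the machine learning model."""
    poor = {person for person, s in pmt_score.items() if s <= pmt_cutoff}
    at_risk = {person for person, r in dropout_risk.items() if r >= risk_cutoff}
    return poor | at_risk

pmt = {"ana": 30, "ben": 70, "carla": 55}
risk = {"ana": 0.1, "ben": 0.8, "carla": 0.2}
print(sorted(joint_targeting(pmt, risk, pmt_cutoff=40, risk_cutoff=0.5)))
# ['ana', 'ben'] -- ana via the PMT, ben via the dropout model
```

When the social valuation of the two groups differs greatly, the paper's finding corresponds to dropping one of the two sets from the union.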
Procedia PDF Downloads 205
5568 A Nonlinear Feature Selection Method for Hyperspectral Image Classification
Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo
Abstract:
For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, given the difficulty of collecting training samples. Hence, many feature selection methods have been developed, such as F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with different bandwidths for different features. Moreover, it considers both the within-class separability and the between-class separability. A genetic algorithm was applied to tune these bandwidths such that the within-class separability is minimized and the between-class separability is maximized simultaneously. This indicates that the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate the classes very well. These optimal bandwidths also show the importance of bands for hyperspectral image classification. The reciprocals of these bandwidths can be viewed as weights of bands: the smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of samples were randomly selected to form the training dataset.
All non-background samples were used to form the testing dataset. The support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies achieved by the proposed method, F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracies increase dramatically using only the first few features. The classification accuracies for feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84164) is close to the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, PAVIA and Salinas A, similar results were obtained. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can first apply the proposed method to determine a suitable feature subset for a specific purpose, and then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.
Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine
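The band-ranking rule described in the abstract (reciprocal bandwidths as weights) can be sketched directly; the bandwidth values below are illustrative, not tuned ones:

```python
def rank_bands(bandwidths):
    """Order bands by descending reciprocal bandwidth: a smaller tuned
    bandwidth means a larger weight, hence a more important band."""
    return sorted(bandwidths, key=lambda b: 1.0 / bandwidths[b], reverse=True)

# Hypothetical GA-tuned bandwidths for three bands
print(rank_bands({"band1": 2.0, "band2": 0.5, "band3": 1.0}))
# ['band2', 'band3', 'band1']
```

Taking prefixes of this ranking yields the nested feature subsets (10, 20, 50, 110 features, and so on) whose accuracies the abstract reports.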
Procedia PDF Downloads 265
5567 Comparison of Cardiovascular and Metabolic Responses Following In-Water and On-Land Jump in Postmenopausal Women
Authors: Kuei-Yu Chien, Nai-Wen Kan, Wan-Chun Wu, Guo-Dong Ma, Shu-Chen Chen
Abstract:
Purpose: The purpose of this study was to investigate the responses of systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate (HR), rating of perceived exertion (RPE), and lactate following continued high-intensity interval exercise in water and on land. The results can serve as a reference for exercise program design by health care and fitness professionals. Method: A total of 20 volunteer postmenopausal women were included in this study. The inclusion criteria were: duration of menopause > 1 year; and a sedentary lifestyle, defined as engaging in moderate-intensity exercise less than three times per week, or less than 20 minutes per day. Participants visited the experimental site three times. On the first visit, body composition was measured and participants filled out the questionnaire. Participants were randomly assigned to the exercise environment (water or land) on the second and third visits. Water exercise testing was performed in water at trochanter level. In the continued jump test, each set consisted of 10 seconds of maximal voluntary jumping, performed for two sets, with one minute of dynamic rest (walking or running) at 50% heart rate reserve within each set. SBP, DBP, HR, RPE of the whole body/thigh (RPEW/RPET), and lactate were measured before and after testing. HR, RPEW, and RPET were monitored at 1, 2, and 10 min after exercise testing. SBP and DBP were measured at 10 and 30 min after exercise testing. Results: The SBP and DBP responses after exercise testing in water were higher than those on land. Lactate levels after exercise testing in water were lower than those on land. RPET responses were lower than those on land at 1 and 2 minutes post-exercise. Heart rate recovery in water was faster than on land at 5 minutes post-exercise. Conclusion: This study showed that water interval jump exercise induces higher cardiovascular responses with lower RPE responses and lactate levels than on-land jump exercise in postmenopausal women.
Fatigue is one of the major barriers to exercise adherence. Jump exercise can enhance cardiorespiratory fitness, lower-extremity power, strength, and bone mass, offering several health benefits to middle-aged and older adults. This study showed that water interval jumping can feel more relaxed, without straining to reach the same intensity as land-based cardiorespiratory exercise.
Keywords: interval exercise, power, recovery, fatigue
Procedia PDF Downloads 410
5566 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications
Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon
Abstract:
The focus in the automotive industry is to reduce human operator and machine interaction, so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time, as well as defects in the parts that sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process described as Automated Fibre Placement (AFP). AFP is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand-stitch method which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding, and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent already impregnated onto the tape, through the application of heat. The stitching method is used as a baseline against which the new splicing methods are compared. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must endure a load of 50 N in tension, applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment, and splicing parameters; these were then tested in tension using a tensile testing machine.
Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique, examining the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the baseline of a stitched bond. The addition of an adhesive was found to be the best splicing method, achieving a maximum load of over 500 N, compared to the 26 N achieved by a stitched splice and 94 N by the binding method.
Keywords: analysis, automated fibre placement, high speed, splicing
Procedia PDF Downloads 156
5565 Design of Fuzzy Logic Based Global Power System Stabilizer for Dynamic Stability Enhancement in Multi-Machine Power System
Authors: N. P. Patidar, J. Earnest, Laxmikant Nagar, Akshay Sharma
Abstract:
This paper describes the application of a fuzzy power system stabilizer with a new input signal in a multi-machine power system. Instead of conventional input pairs such as speed deviation (∆ω) and its derivative, i.e., acceleration (∆ω̇), or the speed deviation and accelerating power deviation of each machine, this paper uses the deviation of active power through the tie line connecting two areas as one of the inputs to the fuzzy logic controller, in conjunction with the speed deviation. Fuzzy logic has the advantages of a simple concept, easy implementation, and computational efficiency. The advantage of this input is that the same signal can be fed to the fuzzy logic controller connected to each machine. The simulated system comprises two fully symmetrical areas coupled by two 230 kV lines. Each area is equipped with two identical generators rated 20 kV/900 MVA, and area 1 exports 413 MW to area 2. The effectiveness of the proposed control scheme has been assessed through small-signal and transient stability assessments, and the scheme has been compared with a conventional PSS. Digital simulation is used to demonstrate the performance of the fuzzy logic controller.
Keywords: power system stabilizer (PSS), small signal stability, inter-area oscillation, fuzzy logic controller, membership function, rule base
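The two-input fuzzy inference idea can be sketched in a few lines. The following is a minimal, illustrative Mamdani-style controller in Python, not the authors' implementation: the triangular membership functions, the 3×3 rule table, and the normalization of the two inputs (speed deviation and tie-line power deviation) to [-1, 1] are all assumptions made for the sketch.

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function peaking at b, zero outside [a, c].
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_pss(d_omega, d_ptie):
    """Toy fuzzy stabilizer: inputs are speed deviation and tie-line power
    deviation (both normalized to [-1, 1]); output is a normalized
    stabilizing signal."""
    # Membership degrees for Negative / Zero / Positive on each input.
    m_w = [tri(d_omega, -2, -1, 0), tri(d_omega, -1, 0, 1), tri(d_omega, 0, 1, 2)]
    m_p = [tri(d_ptie, -2, -1, 0), tri(d_ptie, -1, 0, 1), tri(d_ptie, 0, 1, 2)]
    # Illustrative rule base: one output singleton per (omega, ptie) pair.
    out = np.array([[-1.0, -1.0, 0.0],
                    [-1.0,  0.0, 1.0],
                    [ 0.0,  1.0, 1.0]])
    w = np.outer(m_w, m_p)                     # rule firing strengths
    return float((w * out).sum() / w.sum())    # weighted-average defuzzification

print(fuzzy_pss(0.0, 0.0))   # balanced inputs -> 0.0
print(fuzzy_pss(1.0, 1.0))   # strongly positive inputs -> 1.0
```

A production stabilizer would also include washout and gain blocks around this inference core; the sketch shows only the fuzzification, rule evaluation, and defuzzification steps.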
Procedia PDF Downloads 533
5564 Using Machine Learning to Classify Human Fetal Health and Analyze Feature Importance
Authors: Yash Bingi, Yiqiao Yin
Abstract:
Reduction of child mortality is an ongoing struggle and a commonly used indicator of progress in the medical field. Under-5 deaths number around 5 million worldwide, many of them preventable. In light of this issue, cardiotocograms (CTGs) have emerged as a leading tool to determine fetal health. By emitting ultrasound pulses and reading the responses, CTGs help healthcare professionals assess the overall health of the fetus and the risk of child mortality. However, interpreting CTG results is time-consuming and inefficient, especially in underdeveloped areas where an expert obstetrician is hard to come by. Using a support vector machine (SVM) and oversampling, this paper proposes a model that classifies fetal health with an accuracy of 99.59%. To further explain the CTG measurements, an algorithm based on Randomized Input Sampling for Explanation (RISE) of black-box models was created, called Feature Alteration for explanation of Black Box Models (FAB), and its findings were compared to Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). This allows doctors and medical professionals to classify fetal health with high accuracy and determine which features were most influential in the process.
Keywords: machine learning, fetal health, gradient boosting, support vector machine, Shapley values, local interpretable model agnostic explanations
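As a rough sketch of the classification pipeline (not the paper's code, and not its CTG dataset), the snippet below pairs simple random oversampling of the minority class with an RBF-kernel SVM from scikit-learn; the synthetic "healthy"/"pathological" feature matrix and the class imbalance ratio are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic imbalanced stand-in for CTG features: 400 "healthy" vs 40 "pathological".
healthy = rng.normal(0.0, 1.0, size=(400, 5))
patho = rng.normal(2.5, 1.0, size=(40, 5))
X = np.vstack([healthy, patho])
y = np.array([0] * 400 + [1] * 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

# Random oversampling: duplicate minority-class rows until the classes balance.
minority = X_tr[y_tr == 1]
extra = minority[rng.integers(0, len(minority), size=(y_tr == 0).sum() - len(minority))]
X_bal = np.vstack([X_tr, extra])
y_bal = np.concatenate([y_tr, np.ones(len(extra), dtype=int)])

clf = SVC(kernel="rbf", gamma="scale").fit(X_bal, y_bal)
acc = clf.score(X_te, y_te)
print(round(acc, 3))
```

Oversampling is applied only to the training split so the held-out test set stays representative of the original class balance.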
Procedia PDF Downloads 144
5563 Towards Automatic Calibration of In-Line Machine Processes
Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales
Abstract:
In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel, and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) calibration, to find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, then two winding units wind an outer layer around the core, followed by a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) calibration, to find the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel), and MPART (rule induction with a continuous-valued output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die 2). Modeling the error behavior with explicative rules is used to help improve the overall process model.
Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for both the trained models and the calibration process. The learning step is the slowest part (max. 5 minutes for this data), but it can be done offline just once. The calibration step is much faster and, in under one minute, achieved a precision error of less than 1×10⁻³ for both outputs. To summarize, two processes have been modeled and calibrated in the present work. Fast processing times and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve overall process understanding. This is relevant for the quick, optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
Keywords: data model, machine learning, industrial winding, calibration
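The Gaussian-sampling calibration loop described above can be sketched as follows. The `process` function here is a made-up surrogate standing in for the trained winding-process model, and the input means and standard deviations are assumed; only the sample-compare-keep-best logic reflects the described procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

def process(u):
    # Stand-in surrogate for the trained winding-process model:
    # maps two inputs to a predicted tension/friction value.
    return 3.0 * u[0] - 1.5 * u[1] + 0.5 * u[0] * u[1]

target = 2.0                        # desired process output
mean, std = np.zeros(2), np.ones(2) # per-input Gaussian parameters

best_u, best_err = None, np.inf
for _ in range(5000):
    u = rng.normal(mean, std)       # sample a candidate input vector
    err = abs(process(u) - target)  # distance of model output from target
    if err < best_err:
        best_u, best_err = u, err

print(best_err)
```

With more samples (or by re-centering the Gaussian on the current best candidate, one possible heuristic), the residual error can be driven down toward the 1×10⁻³ level reported above.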
Procedia PDF Downloads 242
5562 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was determined in advance by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems have typically not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It must be examined how such products of machine learning models can and should be protected by IP law, and for the purposes of this paper patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network methods and deep learning, but this approach can be described more easily by reference to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of the evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes, such as those of China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes.
Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. Under the current legal situation in most patent law regimes, the inventor of a patent application must be a natural person or a group of persons. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing part of the inventive concept. However, when machine learning or the AI algorithm has contributed a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches include identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
Procedia PDF Downloads 109
5561 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population
Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath
Abstract:
Gastric cancer is predominantly caused by demographic and diet factors as compared to other cancer types. The aim of the study is to predict Early Gastric Cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases were selected who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. Supervised machine learning algorithms (Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Multilayer Perceptron, and Random Forest) were used to analyze the dataset using Python Jupyter Notebook Version 3. The classification results were evaluated using the metrics: minimum false positives, Brier score, accuracy, precision, recall, F1 score, and the Receiver Operating Characteristic (ROC) curve. With respect to accuracy (%) and Brier score, data analysis showed: Naive Bayes - 88, 0.11; Random Forest - 83, 0.16; SVM - 77, 0.22; Logistic Regression - 75, 0.25; and Multilayer Perceptron - 72, 0.27. The Naive Bayes algorithm outperforms the others, with very low false positive rates and Brier score and good accuracy. Naive Bayes classification showed very satisfactory results in predicting EGC using only diet and lifestyle factors, which will be very helpful for physicians to educate patients and the public; thereby, mortality from gastric cancer can be reduced or avoided through this knowledge-mining work.
Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics
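A hedged illustration of the best-performing step, a Gaussian Naive Bayes classifier evaluated with accuracy and Brier score, is sketched below. The study's 11 real diet/lifestyle features are not reproduced here, so the 160-control/80-case feature matrix is a synthetic stand-in; only the case/control counts mirror the abstract.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, brier_score_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical stand-in for the 11 risk factors: 160 controls, 80 cases.
controls = rng.normal(0.0, 1.0, size=(160, 11))
cases = rng.normal(1.2, 1.0, size=(80, 11))
X = np.vstack([controls, cases])
y = np.array([0] * 160 + [1] * 80)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)
nb = GaussianNB().fit(X_tr, y_tr)

acc = accuracy_score(y_te, nb.predict(X_te))
# Brier score: mean squared gap between predicted probability and outcome
# (lower is better, 0 is perfect).
brier = brier_score_loss(y_te, nb.predict_proba(X_te)[:, 1])
print(round(acc, 2), round(brier, 2))
```

The same loop can be repeated over the other four classifiers to reproduce a comparison table like the one in the abstract.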
Procedia PDF Downloads 164
5560 Hybrid Approach for Country’s Performance Evaluation
Authors: C. Slim
Abstract:
This paper presents an integrated model, which hybridizes data envelopment analysis (DEA) and support vector machines (SVM), to classify countries according to their efficiency and performance. The model takes into account multi-dimensional indicators, decision-making hierarchy, and relativity of measurement. Starting from a set of performance indicators as exhaustive as possible, a process of successive aggregations has been developed to attain an overall evaluation of a country's competitiveness.
Keywords: artificial neural networks (ANN), support vector machine (SVM), data envelopment analysis (DEA), aggregations, indicators of performance
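The DEA half of the hybrid can be illustrated with a minimal input-oriented CCR model solved as a linear program; the abstract does not give its formulation, so this is a textbook sketch, and the three "countries" with a single input and output are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr(X, Y):
    """Input-oriented CCR efficiency for each DMU (row), given
    inputs X (n x m) and outputs Y (n x s)."""
    n = len(X)
    eff = []
    for o in range(n):
        # Variables: [theta, lambda_1 .. lambda_n]; objective: minimize theta.
        c = np.r_[1.0, np.zeros(n)]
        # sum_j lambda_j * X[j] <= theta * X[o]   (input constraints)
        A_in = np.hstack([-X[[o]].T, X.T])
        # sum_j lambda_j * Y[j] >= Y[o]           (output constraints)
        A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        eff.append(res.x[0])
    return np.array(eff)

# Three toy "countries": one input (resource use), one output (performance index).
X = np.array([[2.0], [4.0], [8.0]])
Y = np.array([[2.0], [2.0], [2.0]])
eff = dea_ccr(X, Y)
print(eff)  # -> approximately [1.0, 0.5, 0.25]
```

The resulting efficiency scores (1 for the frontier country, below 1 otherwise) would then feed the SVM classification stage of the hybrid.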
Procedia PDF Downloads 340
5559 Alternator Fault Detection Using Wigner-Ville Distribution
Authors: Amin Ranjbar, Amir Arsalan Jalili Zolfaghari, Amir Abolfazl Suratgar, Mehrdad Khajavi
Abstract:
This paper describes a two-stage learning-based fault detection procedure for alternators. The procedure considers three machine conditions, namely a shortened brush, a high-impedance relay, and healthy operation of the alternator. The fault detection algorithm uses the Wigner-Ville distribution as a feature extractor together with an appropriate feature classifier. In this work, an ANN (artificial neural network) and an SVM (support vector machine) were compared to determine the more suitable classifier, evaluated by the mean squared error criterion. The modules work together to detect possible faulty conditions of operating machines. To test the method's performance, a signal database was prepared by imposing different conditions on a laboratory setup. The results indicate that the method achieves satisfactory performance.
Keywords: alternator, artificial neural network, support vector machine, time-frequency analysis, Wigner-Ville distribution
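A minimal discrete pseudo Wigner-Ville distribution, the feature extractor named above, can be sketched with NumPy; this is a generic textbook construction, not the authors' code. Note the characteristic factor of two: for an analytic tone at bin f, the bilinear kernel x[n+τ]·x*[n−τ] oscillates at 2f, so the energy concentrates at bin 2f of the FFT.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of an analytic signal x.
    Returns an (N, N) array: rows are frequency bins, columns are time."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)                 # lags that stay in bounds
        tau = np.arange(-tau_max, tau_max + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[:, n] = np.fft.fft(kernel).real           # Hermitian kernel -> real
    return W

# Analytic tone at bin 8 of 64: the WVD concentrates at bin 2 * 8 = 16.
n = np.arange(64)
x = np.exp(2j * np.pi * 8 * n / 64)
W = wigner_ville(x)
print(int(np.argmax(W.sum(axis=1))))  # -> 16
```

In a fault-detection pipeline, statistics of `W` (band energies, ridge locations) would form the feature vector handed to the ANN or SVM classifier.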
Procedia PDF Downloads 374
5558 Determination of Water Pollution and Water Quality with Decision Trees
Authors: Çiğdem Bakır, Mecit Yüzkat
Abstract:
With the increasing emphasis on water quality worldwide, the search for, and the market for, new and intelligent monitoring systems has grown. The current method is a laboratory process in which samples are taken from bodies of water and tests are carried out in laboratories. This method is time-consuming, wasteful of manpower, and uneconomical. To solve this problem, we used machine learning methods to detect water pollution. We created decision trees with the Orange3 software and tried to determine all the factors that cause water pollution. An automatic prediction model based on water quality was developed by taking many model inputs, such as water temperature, pH, transparency, conductivity, dissolved oxygen, and ammonia nitrogen, with machine learning methods. The proposed approach consists of three stages: preprocessing of the data, feature detection, and classification. We evaluated the study with different accuracy metrics and present the results comparatively. The decision tree achieved approximately 98% accuracy.
Keywords: decision tree, water quality, water pollution, machine learning
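A compact stand-in for the decision-tree stage might look like the following with scikit-learn (the Orange3 workflow and the real measurements are not shown in the abstract): the synthetic readings of the six listed inputs and the labeling rule are invented for the sketch.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 600
# Hypothetical readings for the six inputs named in the abstract.
X = np.column_stack([
    rng.uniform(5, 30, n),      # water temperature (deg C)
    rng.uniform(5.5, 9.0, n),   # pH
    rng.uniform(0.1, 3.0, n),   # transparency (m)
    rng.uniform(50, 1500, n),   # conductivity (uS/cm)
    rng.uniform(2, 12, n),      # dissolved oxygen (mg/L)
    rng.uniform(0.0, 2.0, n),   # ammonia nitrogen (mg/L)
])
# Invented labeling rule: polluted when oxygen is low or ammonia is high.
y = ((X[:, 4] < 4) | (X[:, 5] > 1.2)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)
tree = DecisionTreeClassifier(max_depth=4, random_state=7).fit(X_tr, y_tr)
acc_tree = tree.score(X_te, y_te)
print(round(acc_tree, 2))
```

Because the labels here follow simple thresholds on two features, a shallow tree recovers them almost exactly, which mirrors why threshold-style water-quality rules suit decision trees well.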
Procedia PDF Downloads 83
5557 Graded Orientation of the Linear Polymers
Authors: Levan Nadareishvili, Roland Bakuradze, Barbara Kilosanidze, Nona Topuridze, Liana Sharashidze, Ineza Pavlenishvili
Abstract:
Some regularities of the formation of a new structural state of thermoplastic polymers, the gradually oriented (stretched) state (GOS), are discussed. The transition into GOS is realized by graded oriented stretching: by the action of an inhomogeneous mechanical field on isotropic linear polymers, or by zonal stretching implemented on a standard tensile-testing machine using a specially designed zone stretching device (ZSD). Both technical approaches (especially the zonal stretching method) make it possible to control such quantitative parameters of gradually oriented polymers as the range of change in relative elongation/orientation degree, the length of this change, and its profile (linear, hyperbolic, parabolic, logarithmic, etc.). The uniaxial graded stretching method should be considered an effective technological solution for creating polymer materials with a predetermined gradient of physical properties.
Keywords: controlled graded stretching, gradually oriented state, linear polymers, zone stretching device
Procedia PDF Downloads 437
5556 Chemical Modifications of Three Underutilized Vegetable Fibres for Improved Composite Value Addition and Dye Absorption Performance
Authors: Abayomi O. Adetuyi, Jamiu M. Jabar, Samuel O. Afolabi
Abstract:
Vegetable fibres are a class of low-density, biodegradable, non-abrasive fibres that are abundantly available, possess specific properties, and are obtained from plants. Depending on the part of the plant from which they are obtained, they are classified into three categories: fruit, bast, and leaf fibres. Ever since the fourth/fifth millennium B.C., attention has focused on the commonest and most highly utilized cotton fibre, obtained from the fruit of cotton plants (Gossypium spp.), for the production of the cotton fabric used in every home today. The present study therefore focused on the ability of three underutilized vegetable (fruit) fibres, namely coir fibre (Cocos nucifera), palm kernel fibre, and empty fruit bunch fibre (Elaeis guineensis), chemically modified for better value addition as composites in polyurethane foam and for dye adsorption. These fibres were sourced from their parent plants, identified, cleansed with 2% hot detergent solution (1:100), rinsed in distilled water, and oven-dried to constant weight before being chemically modified through alkali bleaching, mercerization, and acetylation. Alkali bleaching involved treating 0.5 g of each fibre material with 100 mL of 2% H2O2 in 25% NaOH solution under reflux for 2 h. Mercerization and acetylation involved treating the fibres with 5% sodium hydroxide (NaOH) solution for 2 h and with 10% acetic acid-acetic anhydride 1:1 (v/v) (CH3COOH/(CH3CO)2O) solution with conc. H2SO4 as catalyst for 1 h, respectively. All fibres were subsequently washed thoroughly with distilled water and oven-dried at 105 °C for 1 h. The modified fibres were incorporated as composites into polyurethane foam and used in an adsorption study of indigo dye. The first two treatments led to fibre weight reduction, while the acidified acetic anhydride treatment gave a weight increment.
All the treated fibres were found to be less hydrophilic, with better mechanical properties, higher thermal stability, and better adsorption surfaces/capacities than the untreated ones. These findings were confirmed by gravimetric analysis, an Instron universal testing machine, a thermogravimetric analyser, and scanning electron microscopy (SEM), respectively. The morphology of the modified fibres showed smoother surfaces than that of the unmodified fibres. The empty fruit bunch fibre and the coconut coir fibre are better than the palm kernel fibre as reinforcers for composites or as adsorbents for wastewater treatment. Acetylation and alkali bleaching improve the potential of the fibres more than mercerization. Conclusively, vegetable fibres, especially empty fruit bunch fibre and coconut coir fibre, which are cheap, abundant, and underutilized, can replace the very costly powdered activated carbon in wastewater treatment and serve as reinforcers in foam.
Keywords: chemical modification, industrial application, value addition, vegetable fibre
Procedia PDF Downloads 333
5555 Delamination of Scale in a Fe Carbon Steel Surface by Effect of Interface Roughness and Oxide Scale Thickness
Authors: J. M. Lee, W. R. Noh, C. Y. Kim, M. G. Lee
Abstract:
Delamination of the oxide scale has often been observed at the interface between Fe carbon steel and the oxide scale. Among the several mechanisms of this delamination behavior, the normal tensile stress at the substrate-scale interface has been described as one of the main factors. The stress distribution at the interface is also known to be affected by the thermal expansion mismatch between substrate and oxide scale, creep behavior during cooling, and the geometry of the interface. In this study, stress states near the interface in a Fe carbon steel with oxide scale have been investigated using FE simulations. The thermal and mechanical properties of the oxide scales were taken from the literature, while those of the Fe carbon steel were measured using a tensile testing machine. In particular, the normal and shear stress components developed at the interface during bending are investigated. Preliminary numerical sensitivity analyses are provided to explain the effects of the interface geometry and oxide thickness on the delamination behavior.
Keywords: oxide scale, delamination, FE analysis, roughness, thickness, stress state
Procedia PDF Downloads 344
5554 Performance of Visual Inspection Using Acetic Acid for Cervical Cancer Screening as Compared to HPV DNA Testing in Ethiopia: A Comparative Cross-Sectional Study
Authors: Agajie Likie Bogale, Tilahun Teklehaymanot, Getnet Mitike Kassie, Girmay Medhin, Jemal Haidar Ali, Nega Berhe Belay
Abstract:
Objectives: The aim of this study is to evaluate the performance of visual inspection with acetic acid (VIA) compared with HPV DNA testing among women living with HIV in Ethiopia. Methods: A comparative cross-sectional study was conducted to address the aforementioned objective. Data were collected from January to October 2021 to compare the performance of these two screening modalities. Trained clinicians collected cervical specimens and immediately applied acetic acid for visual inspection. The HPV DNA testing was done using the Abbott m2000rt/SP by trained laboratory professionals in accredited laboratories. A total of 578 HIV-positive women aged 25-49 years were included. Results: Test positivity was 8.9% using VIA and 23.3% using the HPV DNA test. The sensitivity and specificity of the VIA test were 19.2% and 95.1%, respectively, while the positive and negative predictive values of the VIA test were 54.4% and 79.4%, respectively. The strength of agreement between the two screening methods was poor (k=0.184), and the area under the curve was 0.572. The prevalence of high-risk HPV16 was 3.8%, and that of mixed HPV16 and other high-risk HPV types was 1.9%. Other high-risk HPV types were predominant in this study (15.7%). Conclusion: The high positivity using HPV DNA testing compared with VIA, together with the low sensitivity of VIA, indicates that implementing HPV DNA testing as the primary screening strategy is likely to reduce cervical cancer cases and deaths of women in the country.
Keywords: cervical cancer screening, HPV DNA, VIA, Ethiopia
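The reported sensitivity, specificity, predictive values, and kappa all derive from a single 2×2 table of the index test against the reference standard. The helper below shows the arithmetic; the counts passed in are hypothetical, not the study's data.

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values, and Cohen's kappa
    for a screening test judged against a reference standard."""
    total = tp + fp + fn + tn
    sens = tp / (tp + fn)   # true positives among reference-positives
    spec = tn / (tn + fp)   # true negatives among reference-negatives
    ppv = tp / (tp + fp)    # reference-positives among test-positives
    npv = tn / (tn + fn)    # reference-negatives among test-negatives
    # Cohen's kappa: observed agreement corrected for chance agreement.
    po = (tp + tn) / total
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

# Hypothetical 2x2 counts (illustration only, not the study's table).
sens, spec, ppv, npv, kappa = screening_metrics(tp=10, fp=5, fn=20, tn=100)
print(round(sens, 3), round(spec, 3), round(kappa, 3))  # -> 0.333 0.952 0.348
```

The pattern in the abstract (high specificity, low sensitivity, kappa near 0.18) is exactly what such a table produces when the index test misses most reference-positive cases.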
Procedia PDF Downloads 144
5553 Advancing Urban Sustainability through Data-Driven Machine Learning Solutions
Authors: Nasim Eslamirad, Mahdi Rasoulinezhad, Francesco De Luca, Sadok Ben Yahia, Kimmo Sakari Lylykangas, Francesco Pilla
Abstract:
With ongoing urbanization, cities face increasing environmental challenges impacting human well-being. To tackle these issues, data-driven approaches in urban analysis have gained prominence, leveraging urban data to promote sustainability. Integrating machine learning techniques enables researchers to analyze and predict complex environmental phenomena such as Urban Heat Island (UHI) occurrences in urban areas. This paper demonstrates the implementation of a data-driven approach with interpretable machine learning algorithms and interpretability techniques to conduct comprehensive data analyses for sustainable urban design. The framework and algorithms are demonstrated for Tallinn, Estonia, to develop sustainable urban strategies that mitigate urban heat waves. Geospatial data, preprocessed and labeled with UHI levels, are used to train various ML models, with logistic regression emerging as the best-performing model based on evaluation metrics. From this model, a mathematical equation is derived that classifies an area as affected or unaffected by UHI, providing insight into UHI occurrences based on buildings and urban features. The derived formula highlights the importance of building volume, height, area, and shape length in creating an urban environment with UHI impact. The data-driven approach and derived equation inform mitigation strategies and sustainable urban development in Tallinn and offer valuable guidance for other locations with varying climates.
Keywords: data-driven approach, machine learning transparent models, interpretable machine learning models, urban heat island effect
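A sketch of the modeling step, logistic regression over four building descriptors whose fitted coefficients play the role of the "derived equation", is shown below. The Tallinn geospatial features, their value ranges, and the labeling rule are all assumptions invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500
# Hypothetical building descriptors: volume, height, footprint area, shape length.
volume = rng.uniform(1e3, 5e4, n)
height = rng.uniform(3, 60, n)
area = rng.uniform(50, 2000, n)
shape_len = rng.uniform(30, 400, n)
X = np.column_stack([volume, height, area, shape_len])

# Invented labeling rule: dense, tall built form drives UHI occurrence.
score = 0.5 * (volume / 5e4) + 0.3 * (height / 60) + 0.2 * (area / 2000)
y = (score > 0.5).astype(int)

# Standardize so the fitted coefficients are comparable across features.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
model = LogisticRegression(max_iter=1000).fit(Xs, y)
acc = model.score(Xs, y)

# The fitted model is the "mathematical equation": a linear score over the
# four descriptors passed through the logistic function.
print(np.round(model.coef_[0], 2), round(acc, 2))
```

Because logistic regression is linear in its inputs, the coefficient vector can be read off directly as a closed-form UHI score, which is what makes the model "interpretable" in the sense the abstract intends.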
Procedia PDF Downloads 41
5552 Automatic Lead Qualification with Opinion Mining in Customer Relationship Management Projects
Authors: Victor Radich, Tania Basso, Regina Moraes
Abstract:
Lead qualification is one of the main procedures in Customer Relationship Management (CRM) projects. Its main goal is to identify potential consumers who have the ideal characteristics to establish a profitable and long-term relationship with a certain organization. Social networks can be an important source of data for identifying and qualifying leads, since interest in specific products or services can be identified from users' expressed feelings of (dis)satisfaction. In this context, this work proposes the use of machine learning techniques and sentiment analysis as an extra step in the lead qualification process in order to improve it. In addition to machine learning models, sentiment analysis or opinion mining can be used to understand the evaluation that a user makes of a particular service, product, or brand. The results obtained so far show that it is possible to extract data from social networks and combine these techniques for a more complete classification.
Keywords: lead qualification, sentiment analysis, opinion mining, machine learning, CRM, lead scoring
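As a minimal illustration of the opinion-mining step, a lexicon-based scorer can flag posts whose expressed sentiment clears a threshold. This is a deliberately simple stand-in for a trained sentiment model; the word lists, threshold, and example posts are all invented for the sketch.

```python
# Tiny sentiment lexicons (illustrative only).
POSITIVE = {"love", "great", "excellent", "recommend", "interested", "want"}
NEGATIVE = {"hate", "terrible", "disappointed", "cancel", "refund", "broken"}

def sentiment_score(post):
    # Net count of positive minus negative lexicon hits.
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def qualify_leads(posts, threshold=1):
    # A user becomes a lead when expressed sentiment clears the threshold.
    return [p for p in posts if sentiment_score(p) >= threshold]

posts = [
    "I love this product and would recommend it",
    "terrible support, I want a refund",
    "might be interested in the premium plan",
]
print(qualify_leads(posts))
```

In a full pipeline, this scorer would be replaced by a classifier trained on labeled social-network posts, with the sentiment score feeding into the overall lead-scoring model.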
Procedia PDF Downloads 89