Search results for: support vector machines

7604 Characteristics of Double-Stator Inner-Rotor Axial Flux Permanent Magnet Machine with Rotor Eccentricity

Authors: Dawoon Choi, Jian Li, Yunhyun Cho

Abstract:

Axial flux permanent magnet (AFPM) machines have been widely used in various applications because of their compact structure, high efficiency, and high torque density. This paper addresses one of the most important issues in the design of AFPM machines: predicting the electromagnetic forces between the permanent magnets and the stator. The magnitude of this force affects many characteristics, including machine size, noise, vibration, and the quality of the output power. In theory, the force is cancelled by symmetry when the rotor sits exactly in the middle of the air gap, but in a real machine some deviation is inevitable because of manufacturing tolerances. The problem becomes more serious in high-power applications such as large-scale wind generators, where the attractive force between the rotor and stator disks is very large. This paper presents the characteristics of a double-stator inner-rotor AFPM machine with rotor eccentricity. The unbalanced air-gap and inclined air-gap conditions caused by rotor offset and tilt are each studied from electromagnetic and mechanical points of view. The output voltage and cogging torque under these abnormal air-gap conditions are first calculated using combined analytical and numerical methods, followed by a structural analysis of the resulting mechanical stress, deformation, and bending forces on the bearings. The results and conclusions given in this paper are instructive for the successful development of AFPM machines.

Keywords: axial flux permanent magnet machine, inclined air gap, unbalanced air gap, rotor eccentricity

Procedia PDF Downloads 187
7603 Automatic Verification Technology of Virtual Machine Software Patch on IaaS Cloud

Authors: Yoji Yamato

Abstract:

In this paper, we propose an automatic verification technology for software patches in user virtual environments on IaaS cloud, aimed at decreasing the cost of patch verification. IaaS services have become widespread, and many users can customize virtual machines on IaaS clouds as if they were their own private servers. For patches to the OS or middleware installed on those virtual machines, users need to apply and verify the patches by themselves, which increases their operation costs. Our proposed method replicates user virtual environments, extracts verification test cases for those environments from a test-case database, distributes patches to the virtual machines in the replicated environments, and runs the test cases automatically on the replicas. We have implemented the proposed method on OpenStack using Jenkins and confirmed its feasibility. Using the implementation, we confirmed that our proposed idea of a two-tier abstraction of software functions and test cases reduces test-case creation effort. We also evaluated the performance of automatic verification in terms of environment replication, test-case extraction, and test-case execution.

Keywords: OpenStack, cloud computing, automatic verification, jenkins

Procedia PDF Downloads 455
7602 Two-Stage Flowshop Scheduling with Unsystematic Breakdowns

Authors: Fawaz Abdulmalek

Abstract:

The two-stage flowshop assembly scheduling problem is considered in this paper. There are multiple parallel machines at stage one and an assembly machine at stage two. Jobs are sequenced through the flowshop using the Johnson rule and two extensions of it. A simulation model of the two-stage flowshop is constructed in which both machines at stage one are subject to random failures. Three simulation experiments, each consisting of five scenarios, are conducted to test the effect of the three job-ranking rules on the makespan. The Johnson Largest heuristic outperformed both the Johnson rule and the Johnson Smallest heuristic in two of the experiments across all scenarios.
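
As background for the job-ranking rules compared above, the sketch below implements classic Johnson's rule for a plain two-machine flowshop together with a makespan calculation. It is a minimal illustration only: the job data, function names, and the two-machine simplification are assumptions and do not reproduce the authors' simulation model with parallel machines and random breakdowns.

```python
# Illustrative Johnson's rule for a two-machine flowshop plus a makespan
# computation; all job data below are hypothetical.

def johnson_sequence(jobs):
    """jobs: list of (job_id, p1, p2) processing times on stages 1 and 2."""
    set_a = sorted((j for j in jobs if j[1] < j[2]), key=lambda j: j[1])   # shorter on stage 1, ascending
    set_b = sorted((j for j in jobs if j[1] >= j[2]), key=lambda j: j[2], reverse=True)
    return set_a + set_b

def makespan(sequence):
    """Completion time of the last job on stage 2 for a given sequence."""
    t1 = t2 = 0
    for _, p1, p2 in sequence:
        t1 += p1                 # stage 1 finishes this job
        t2 = max(t2, t1) + p2    # stage 2 starts when both job and machine are ready
    return t2

jobs = [("J1", 4, 6), ("J2", 7, 3), ("J3", 2, 8), ("J4", 5, 5)]
seq = johnson_sequence(jobs)
print([j[0] for j in seq], makespan(seq))
```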

Keywords: flowshop scheduling, random failures, johnson rule, simulation

Procedia PDF Downloads 306
7601 Development of Gamma Configuration Stirling Engine Using Polymeric and Metallic Additive Manufacturing for Education

Authors: J. Otegui, M. Agirre, M. A. Cestau, H. Erauskin

Abstract:

The increasing accessibility of mid-priced additive manufacturing (AM) systems offers a chance to incorporate this technology into engineering instruction. Furthermore, AM facilitates the creation of manufacturing designs, enhancing the efficiency of various machines. One example of such machines is the Stirling cycle engine: a complex piece of thermodynamic machinery that reveals many aspects of mechanical engineering expertise upon closer inspection. In this publication, the application of Stirling engines fabricated via additive manufacturing techniques is showcased for the purposes of instructive design and product enhancement. The performance of the Stirling engine's conventional displacer and piston is contrasted, and the outcomes of using this instructional tool in teaching are demonstrated.

Keywords: 3D printing, additive manufacturing, mechanical design, stirling engine

Procedia PDF Downloads 17
7600 Biodegradable Drinking Straws Made From Naturally Dried and Fallen Coconut Leaves: Impact on Rural Circular Economy and Environmental Sustainability

Authors: Saji Varghese

Abstract:

Naturally dried and fallen coconut leaves are found in abundance in India and other coconut-growing regions of the world. These fallen leaves are usually burnt by farmers in landfills and open kitchens, leading to CO2 and particulate emissions. The innovation of biodegradable drinking straws made from naturally dried and fallen coconut leaves by this researcher and his team has opened up opportunities to create value from this agri-waste, leading to (i) prevention of the burning of these discarded leaves, (ii) income-generating opportunities for women in rural areas of coconut-growing regions, and (iii) an alternative to single-use plastic straws. The team has developed five special-purpose machines, which have been deployed in three villages on a pilot basis, where 36 women are employed. The women are trained in the use of these machines, and the straws, which are in good demand, are sold globally. The present paper analyses the prospective impact of this innovation on the incomes of women working at the straw production centres and the consequent impact on their standard of living. The paper also analyses the impact of this innovation on the reduction of CO2 and particulate emissions and makes a case for support from governmental and non-governmental organizations in coconut-growing regions to set up straw production centres, boosting the rural circular economy, reducing the carbon footprint, and eliminating plastic pollution.

Keywords: drinking straws, coconut leaves, circular economy, sustainability

Procedia PDF Downloads 105
7599 A Data-Mining Model for Protection of FACTS-Based Transmission Line

Authors: Ashok Kalagura

Abstract:

This paper presents a data-mining model for fault-zone identification in a flexible AC transmission system (FACTS)-based transmission line that includes a thyristor-controlled series compensator (TCSC) and a unified power-flow controller (UPFC), using ensemble decision trees. Given the randomness of the decision trees stacked inside the random forests model, it provides an effective decision for fault-zone identification. Half-cycle post-fault current and voltage samples from fault inception are used as the input vector, against a binary target output that distinguishes faults occurring after the TCSC/UPFC from faults occurring before it. The algorithm is tested on simulated fault data with wide variations in the operating parameters of the power system network, including noisy environments, providing a reliability measure of 99% with a fast response time (3/4 of a cycle from fault inception). The results of the presented approach using the RF model indicate reliable identification of the fault zone in FACTS-based transmission lines.
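
The following is a hedged sketch, in Python with scikit-learn, of the general fault-zone classification idea described above: a random-forest classifier trained on stacked half-cycle post-fault voltage and current samples. The synthetic data, feature dimensions, and parameter choices are illustrative assumptions, not the simulated FACTS fault records used in the paper.

```python
# Hedged sketch: random-forest fault-zone classifier on synthetic placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 24))          # e.g. stacked V/I samples over half a post-fault cycle
y = rng.integers(0, 2, size=1000)        # 1: fault after TCSC/UPFC, 0: fault before (assumed coding)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```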

Keywords: distance relaying, fault-zone identification, random forests, RFs, support vector machine, SVM, thyristor-controlled series compensator, TCSC, unified power-flow controller, UPFC

Procedia PDF Downloads 402
7598 Machine Learning Approach for Predicting Students’ Academic Performance and Study Strategies Based on Their Motivation

Authors: Fidelia A. Orji, Julita Vassileva

Abstract:

This research aims to develop machine learning models for predicting students' academic performance and study strategies, which could be generalized to all courses in higher education. The key learning attributes (intrinsic, extrinsic, autonomy, relatedness, competence, and self-esteem) used in building the models were chosen based on prior studies, which revealed that these attributes are essential in students' learning process. Previous studies revealed the individual effect of each of these attributes on students' learning progress; however, few studies have investigated their combined effect in predicting student study strategy and academic performance in order to reduce the dropout rate. To bridge this gap, we used scikit-learn in Python to build five machine learning models (decision tree, k-nearest neighbour, random forest, linear/logistic regression, and support vector machine) for both regression and classification tasks. The models were trained, evaluated, and tested for accuracy using data from 924 university dentistry students collected by Chilean authors through a quantitative research design. A comparative analysis of the models revealed that tree-based models such as the random forest (with a prediction accuracy of 94.9%) and the decision tree give the best results compared to the linear, support vector, and k-nearest neighbour models. The models built in this research can be used to predict student performance and study strategy so that appropriate interventions can be implemented to improve student learning progress. Thus, incorporating strategies that improve diverse student learning attributes into the design of online educational systems may increase the likelihood of students continuing with their learning tasks as required. Moreover, the results show that the attributes can be modelled together and used to adapt and personalize the learning process.
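
Since the abstract names scikit-learn and the five model families explicitly, a minimal sketch of such a comparison is shown below. The synthetic feature matrix stands in for the six motivation attributes and the binary target is a placeholder; none of the values reproduce the Chilean dentistry-student dataset.

```python
# Hedged sketch: comparing the five scikit-learn classifiers named in the abstract
# on synthetic placeholder data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(924, 6))            # intrinsic, extrinsic, autonomy, relatedness, competence, self-esteem
y = rng.integers(0, 2, size=924)         # e.g. a binary study-strategy label (assumed)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbour": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```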

Keywords: classification models, learning strategy, predictive modeling, regression models, student academic performance, student motivation, supervised machine learning

Procedia PDF Downloads 94
7597 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on an encoding scheme (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) built on low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, the deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNN) carry richer information but lack geometric invariance. In scene classification, scattered objects differ in size, category, layout, and number, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the different CNNs at multiple scales, it is found that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the representations are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear support vector machine, which is a simple yet efficient way to classify the scene categories. Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
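
A minimal sketch of the Fisher Vector encoding and linear SVM stage is given below, assuming a diagonal-covariance GMM and random stand-ins for the multi-scale CNN activations. The scale-wise normalization and the two object-/scene-centric networks are omitted; all dimensions and names are illustrative.

```python
# Hedged sketch: Fisher Vector encoding of local descriptors followed by a linear SVM.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def fisher_vector(descriptors, gmm):
    """First- and second-order Fisher statistics for one image (diagonal GMM)."""
    n, _ = descriptors.shape
    q = gmm.predict_proba(descriptors)                 # (n, K) posteriors
    mu, sigma, w = gmm.means_, np.sqrt(gmm.covariances_), gmm.weights_
    parts = []
    for k in range(gmm.n_components):
        diff = (descriptors - mu[k]) / sigma[k]
        g_mu = (q[:, k, None] * diff).sum(0) / (n * np.sqrt(w[k]))
        g_sig = (q[:, k, None] * (diff ** 2 - 1)).sum(0) / (n * np.sqrt(2 * w[k]))
        parts.extend([g_mu, g_sig])
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))             # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)           # L2 normalisation

rng = np.random.default_rng(2)
train_descs = [rng.normal(size=(200, 64)) for _ in range(40)]   # per-image local "deep" features (placeholders)
labels = rng.integers(0, 4, size=40)                            # scene categories (assumed)

gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(np.vstack(train_descs))
X = np.array([fisher_vector(d, gmm) for d in train_descs])
clf = LinearSVC().fit(X, labels)
```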

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 301
7596 0.13-μm CMOS Vector Modulator for Wireless Backhaul System

Authors: J. S. Kim, N. P. Hong

Abstract:

In this paper, a CMOS vector modulator designed for a wireless backhaul system based on 802.11ac is presented. A polyphase filter and sign-select switches yield two orthogonal signal paths. Two variable-gain amplifiers with a strongly reduced phase shift of only ±5° are used to weight these paths. The modulator has a phase control range of 360° and a gain range of -10 dB to 10 dB. The current drawn from a 1.2 V supply amounts to 20.4 mA. Using a 0.13 μm technology, the chip die area amounts to 1.47 × 0.75 mm².

Keywords: CMOS, phase shifter, backhaul, 802.11ac

Procedia PDF Downloads 355
7595 Utilization of Logging Residue to Reduce Soil Disturbance of Timber Harvesting

Authors: Juang R. Matangaran, Qi Adlan

Abstract:

Industrial plantation forests in Indonesia were developed in 1983, and since then, several companies have successfully planted a total concession area of approximately 10 million hectares. Currently, these plantation forests are in their annual harvesting period. In the timber harvesting process, a considerable part of each tree generally becomes logging residue. Tree parts such as branches, twigs, defective stem sections, and leaves are the unused portions left on the ground after harvesting. The use of heavy machines in timber harvesting areas has caused damage to the forest soil. The negative impacts of such machines include loss of topsoil, soil erosion, and soil compaction. Compaction of forest soil reduces water infiltration, increases runoff, and makes root penetration difficult. In this study, we used logging residue to cover the passages travelled by skidding machines in order to observe the reduction in soil compaction. The bulk density of the soil was measured and analyzed after several passes of the skidding machines on the skid trail. The objective of the research was to analyze the effect of logging residue on reducing soil compaction. The research took place in one of the industrial plantation forest areas of South Sumatra, Indonesia. The results showed that the percentage increase in soil compaction on bare soil was larger than on soil covered by logging residue. The maximum soil compaction occurred after 4 to 5 passes on bare soil and after 7 to 8 passes on soil covered by logging residue. Covering the trail with logging residue reduced soil compaction by 45% to 60%. Logging residue was thus effective in decreasing the soil disturbance of timber harvesting in the plantation forest area.

Keywords: bulk density, logging residue, plantation forest, soil compaction, timber harvesting

Procedia PDF Downloads 373
7594 Job Shop Scheduling: Classification, Constraints and Objective Functions

Authors: Majid Abdolrazzagh-Nezhad, Salwani Abdullah

Abstract:

The job-shop scheduling problem (JSSP) is an important decision problem facing those involved in the fields of industry, economics, and management. It is a combinatorial optimization problem known to be NP-hard. JSSPs deal with a set of machines and a set of jobs with various predetermined routes through the machines, where the objective is to assemble a schedule of jobs that minimizes certain criteria such as makespan, maximum lateness, and total weighted tardiness. Over the past several decades, interest in meta-heuristic approaches to JSSPs has increased due to the ability of these approaches to generate solutions that are better than those generated by heuristics alone. This article provides the classification, constraints, and objective functions imposed on JSSPs that are available in the literature.

Keywords: job-shop scheduling, classification, constraints, objective functions

Procedia PDF Downloads 411
7593 Distributed Cost-Based Scheduling in Cloud Computing Environment

Authors: Rupali, Anil Kumar Jaiswal

Abstract:

Cloud computing can be defined as one of the prominent technologies that lets users change, configure, and access services online. It can be seen as a computing paradigm that saves users both cost and time, and its practical use can be found in various fields such as education, health, and banking. Cloud computing is an internet-dependent technology, so it is a major responsibility of cloud service providers (CSPs) to take care of the data stored by users at data centers. Scheduling in a cloud computing environment plays a vital role: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. This work analyzes job scheduling for the cloud computing environment; CloudSim 3.0.3 is utilized to recreate the task calculations and the distributed scheduling methods. By exploring this issue, we find that the proposed approach works with minimum time and lower cost. In this work, two load-balancing techniques, 'Throttled stack adjustment policy' and 'Active VM load balancing policy', are employed with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate VM cost, data-center cost, response time, and data processing time. The proposed techniques are compared with the Round Robin scheduling policy.

Keywords: physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model

Procedia PDF Downloads 135
7592 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint

Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar

Abstract:

Because landslides can seriously harm both the environment and society, frequency ratio (FR) and analytical hierarchy process (AHP) methods have been developed based on past landslide failure points to produce landslide susceptibility maps. However, it is still difficult to select the most efficient method and to correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at a 12.5 m spatial scale. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region consists of places with high or very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The areas with the highest landslide risk include the western part of Amhara Saint Town, the northern part, and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the top leading factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly between the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. It also suggests that different places should take different safeguards to reduce or prevent serious damage from landslide events.

Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine

Procedia PDF Downloads 37
7591 Difference between 'HDR Ir-192 and Co-60 Sources' for High Dose Rate Brachytherapy Machine

Authors: Md Serajul Islam

Abstract:

High dose rate (HDR) brachytherapy is used for cancer patients; in our country, it is currently used only for the treatment of cervical and breast cancer. The air kerma rate in air at a reference distance of less than a meter from the source is the recommended quantity for the specification of the gamma-ray source Ir-192 in brachytherapy. The absorbed dose for the patient is directly proportional to the air kerma rate. Therefore, the air kerma rate should be determined before the first use of the source on patients by a qualified medical physicist who is independent of the source manufacturer. The air kerma rate is then applied in the calculation of the dose delivered to patients in their planning systems. In practice, high dose rate (HDR) Ir-192 afterloader machines are mostly used in brachytherapy treatment, and HDR Co-60 machines are increasingly coming into operation as well. The essential advantage of Co-60 sources is their longer half-life compared to Ir-192, which makes HDR Co-60 afterloading machines quite interesting for developing countries. This work describes the dosimetry of HDR afterloading machines according to the protocols of IAEA-TECDOC-1274 (2002) for the nuclides Ir-192 and Co-60. We used three different measurement methods (with a ring chamber, with a solid phantom and in free air, and with a well chamber), depending on each of the protocols. We show that the standard deviations of the measured air kerma rate for the Co-60 source are generally larger than those for the Ir-192 source. The measurements with the well chamber had the lowest deviation from the certificate value. In all protocols and methods, the deviations were at most about 1% for the Ir-192 source and 2.5% for the Co-60 source.

Keywords: Ir-192 source, cancer, patients, cheap treatment cost

Procedia PDF Downloads 201
7590 A Clustering Algorithm for Massive Texts

Authors: Ming Liu, Chong Wu, Bingquan Liu, Lei Chen

Abstract:

Internet users face a massive amount of textual data every day, and organizing texts into categories can help them dig out useful information from large-scale text collections. Clustering is one of the most promising tools for categorizing texts because of its unsupervised nature. Unfortunately, most traditional clustering algorithms lose their high quality on large-scale text collections, mainly because of the high-dimensional vectors generated from texts. To cluster large-scale text collections effectively and efficiently, this paper proposes a vector reconstruction based clustering algorithm in which only the features that can represent a cluster are preserved in that cluster's representative vector. The algorithm alternately repeats two sub-processes until it converges. One is the partial tuning sub-process, in which feature weights are fine-tuned iteratively; to accelerate clustering, an intersection-based similarity measurement and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other is the overall tuning sub-process, in which features are reallocated among the different clusters and the features that are useless for representing a cluster are removed from its representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that our algorithm obtains high quality on both small-scale and large-scale text collections.

Keywords: vector reconstruction, large-scale text clustering, partial tuning sub-process, overall tuning sub-process

Procedia PDF Downloads 404
7589 Early Installation Effect on the Machines’ Generated Vibration

Authors: Maitham Al-Safwani

Abstract:

Motor vibration issues have been analyzed in several studies, and it is generally accepted that vibration issues result from poor equipment installation. We had a water injection pump tested in the factory, where it exceeded the vibration limit. Once the pump was brought to the site, its half-size shim plates were replaced with full-size shim plates, which drastically reduced the vibration. In this study, vibration data were recorded for several similar motors running at the same and at different speeds. The vibration values were recorded for two and a half hours, and the readings were analyzed to determine when they became consistent. This was further supported by recording the audible noise produced by some machines, seeking a relationship between changes in machine noise and machine abnormalities such as vibration.

Keywords: vibration, noise, installation, machine

Procedia PDF Downloads 148
7588 Forecasting Stock Prices Based on the Residual Income Valuation Model: Evidence from a Time-Series Approach

Authors: Chen-Yin Kuo, Yung-Hsin Lee

Abstract:

Previous studies applying the residual income valuation (RIV) model generally use panel data and single-equation models to forecast stock prices. In contrast, this paper uses Taiwanese longitudinal data to estimate multi-equation time-series models, such as the vector autoregressive (VAR) model and the vector error correction model (VECM), and conducts out-of-sample forecasting. Further, this work assesses their forecasting performance with two instruments. Consistent with extant research, the major finding is that the VECM outperforms the other three models in forecasting for three stock sectors over all horizons, implying that an error correction term containing long-run information contributes to improved forecasting accuracy. Moreover, the composite pattern shows that at longer horizons the VECM produces the greater reduction in errors and performs substantially better than the VAR.
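
A hedged sketch of the VECM-based out-of-sample forecasting step, using the statsmodels library, is shown below. The two synthetic cointegrated series stand in for the Taiwanese stock-sector data, and the lag order, cointegration rank, and error metric are illustrative assumptions.

```python
# Hedged sketch: fitting a VECM and producing out-of-sample forecasts with statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(3)
common = np.cumsum(rng.normal(size=300))                      # shared stochastic trend
data = pd.DataFrame({
    "price": common + rng.normal(scale=0.5, size=300),
    "fundamental": common + rng.normal(scale=0.5, size=300),  # e.g. a residual-income proxy (assumed)
})

train, test = data.iloc[:-12], data.iloc[-12:]
model = VECM(train, k_ar_diff=2, coint_rank=1, deterministic="ci")
res = model.fit()
forecast = res.predict(steps=12)                              # out-of-sample forecasts, shape (12, 2)
mae = np.abs(forecast[:, 0] - test["price"].to_numpy()).mean()
print("12-step MAE for the price series:", round(mae, 3))
```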

Keywords: residual income valuation model, vector error correction model, out of sample forecasting, forecasting accuracy

Procedia PDF Downloads 287
7587 Using Baculovirus Expression Vector System to Express Envelop Proteins of Chikungunya Virus in Insect Cells and Mammalian Cells

Authors: Tania Tzong, Chao-Yi Teng, Tzong-Yuan Wu

Abstract:

Chikungunya virus (CHIKV), transmitted to humans by Aedes mosquitoes, has spread from Africa to Southeast Asia, South America, and southern Europe. However, little is known about the antigenic targets for immunity, and there are no licensed vaccines or specific antiviral treatments for the disease caused by CHIKV. Baculovirus has been recognized as a novel vaccine vector with attractive characteristics as a vaccine delivery vehicle, an approach that can provide a safe and efficacious CHIKV vaccine. In this study, the bi-cistronic recombinant baculoviruses vAc-CMV-CHIKV26S-Rhir-EGFP and vAc-CMV-pH-CHIKV26S-Lir-EGFP were produced. Both recombinant baculoviruses express the EGFP reporter gene in insect cells to facilitate isolation and purification of the recombinant virus. Examination of vAc-CMV-CHIKV26S-Rhir-EGFP and vAc-CMV-pH-CHIKV26S-Lir-EGFP showed that these recombinant baculoviruses could induce syncytium formation in insect cells. Unexpectedly, the immunofluorescence assay revealed the expression of the E1 and E2 CHIKV structural proteins in insect cells infected with vAc-CMV-CHIKV26S-Rhir-EGFP, which may imply that the CMV promoter can drive transcription of CHIKV 26S in insect cells. E1 and E2 were also expressed in mammalian cells transduced with vAc-CMV-CHIKV26S-Rhir-EGFP and vAc-CMV-pH-CHIKV26S-Lir-EGFP, and their expression in insect and mammalian cells was validated by Western blot analysis. The construct with dual tandem promoters (polyhedrin and CMV) showed higher expression of the E1 and E2 CHIKV structural proteins than the construct with the CMV promoter only. Most of the E1 and E2 proteins expressed in mammalian cells were glycosylated. In the future, the CHIKV structural proteins expressed in mammalian cells are expected to assemble into virus-like particles, which could be used as a vaccine against chikungunya virus.

Keywords: chikungunya virus, virus-like particle, vaccines, baculovirus expression vector system

Procedia PDF Downloads 398
7586 Hearing Conservation Program for Vector Control Workers: Short-Term Outcomes from a Cluster-Randomized Controlled Trial

Authors: Rama Krishna Supramanian, Marzuki Isahak, Noran Naqiah Hairi

Abstract:

Noise-induced hearing loss (NIHL) is one of the most frequently recorded occupational diseases, despite being preventable. A Hearing Conservation Program (HCP) is designed to protect workers' hearing and prevent them from developing hearing impairment due to occupational noise exposure. However, there is still a lack of evidence regarding the effectiveness of such programs. The purpose of this study was to determine the effectiveness of a Hearing Conservation Program (HCP) in preventing or reducing audiometric threshold changes among vector control workers. This study adopted a cluster-randomized controlled trial design, with district health offices as the unit of randomization. Nine district health offices were randomly selected, and 183 vector control workers were randomized to the intervention or control group. The intervention included a safety and health policy, noise exposure assessment, noise control, distribution of appropriate hearing protection devices, a training and education program, and audiometric testing. The control group only underwent audiometric testing. The audiometric threshold changes observed in the intervention group showed improvement in the hearing threshold level for all frequencies except 500 Hz and 8000 Hz in the left ear. The changes ranged from 1.4 dB to 5.2 dB, with the largest improvement at the higher frequencies, mainly 4000 Hz and 6000 Hz. Meanwhile, for the right ear, the mean hearing threshold level remained similar at 4000 Hz and 6000 Hz after 3 months of intervention. The Hearing Conservation Program (HCP) is effective in preserving the hearing of vector control workers involved in fogging activity as well as in increasing their knowledge, attitude, and practice regarding noise-induced hearing loss (NIHL).

Keywords: adult, hearing conservation program, noise-induced hearing loss, vector control worker

Procedia PDF Downloads 126
7585 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms can support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of mean signals extracted from ICA components corresponding to 15 well-known networks to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and the number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied independent component analysis (ICA) with the GIFT toolbox, using the Infomax approach with 21 components. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum, and working memory networks. These findings, in line with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
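
The snippet below is a hedged, scikit-learn analogue of the feature-selection step described above (the study itself used R). Random values stand in for the 15 network mean signals, and the RFE step uses a linear-kernel SVM because scikit-learn's RFE requires linear coefficients, unlike the RBF setup reported in the abstract.

```python
# Hedged sketch: Gini-based RF ranking and RFE-SVM ranking on placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(37, 15))            # 37 subjects x 15 resting-state networks (synthetic)
y = rng.integers(0, 2, size=37)          # 0: control, 1: early MS (assumed coding)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
gini_rank = np.argsort(rf.feature_importances_)[::-1]       # Gini-based ranking, best first

rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X, y)
svm_rank = np.argsort(rfe.ranking_)                          # rank 1 = most predictive network

print("top network (RF):", gini_rank[0], "| top network (RFE-SVM):", svm_rank[0])
```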

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 217
7584 Stator Short-Circuits Fault Diagnosis in Induction Motors Using Extended Park’s Vector Approach through the Discrete Wavelet Transform

Authors: K. Yahia, A. Ghoggal, A. Titaouine, S. E. Zouzou, F. Benchabane

Abstract:

This paper deals with the problem of stator fault diagnosis in induction motors. By using the discrete wavelet transform (DWT) to analyse the current Park's vector modulus (CPVM), inter-turn short-circuit faults can be diagnosed. The method is based on the decomposition of the CPVM signal, from which the wavelet approximation and detail coefficients are extracted. Evaluating the energy of a detail coefficient in a known bandwidth permits the definition of a fault severity factor (FSF). The method has been tested through the simulation of an induction motor using a mathematical model based on the winding-function approach. Simulation as well as experimental results show the effectiveness of the method.
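
A minimal sketch of the signal-processing chain is given below: the current Park's vector modulus (CPVM) is formed from the three stator currents and decomposed with a discrete wavelet transform using PyWavelets, and the energy of one detail band is used as a crude severity indicator. The synthetic currents, the chosen mother wavelet and level, and the severity-factor formula are illustrative assumptions, not the authors' exact definitions.

```python
# Hedged sketch: CPVM construction and DWT decomposition on synthetic stator currents.
import numpy as np
import pywt

fs, f = 10_000, 50                                  # sampling and supply frequency (Hz), assumed
t = np.arange(0, 1, 1 / fs)
ia = np.sin(2 * np.pi * f * t)
ib = np.sin(2 * np.pi * f * t - 2 * np.pi / 3)
ic = np.sin(2 * np.pi * f * t + 2 * np.pi / 3)

# Park's vector components and modulus
i_d = np.sqrt(2 / 3) * ia - ib / np.sqrt(6) - ic / np.sqrt(6)
i_q = ib / np.sqrt(2) - ic / np.sqrt(2)
cpvm = np.hypot(i_d, i_q)

coeffs = pywt.wavedec(cpvm, "db8", level=6)         # [A6, D6, D5, ..., D1]
d6_energy = np.sum(coeffs[1] ** 2)                  # energy of one known-bandwidth detail
fsf = d6_energy / np.sum(cpvm ** 2)                 # illustrative severity indicator, not the paper's FSF
print("detail-band energy ratio:", fsf)
```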

Keywords: Induction Motors (IMs), Inter-turn Short-Circuits Diagnosis, Discrete Wavelet Transform (DWT), Current Park’s Vector Modulus (CPVM)

Procedia PDF Downloads 531
7583 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death worldwide, so its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain together with changes in the ST segment and T wave of the ECG occurs shortly before the start of myocardial infarction. In this study, a technique that detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting with chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. Using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST/T-derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters with a grid search and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters to obtain the optimal classification performance. Applying the developed classification technique to real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based only on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia, and a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination thresholds and numbers of ECG segments, the probability of detection and the probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that increasing the number of ECG segments provides higher performance for the GMM-based classification. Moreover, a comparison between the performances of the SVM- and GMM-based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
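
The sketch below illustrates, under stated assumptions, the two detection routes described above: a per-patient RBF-SVM tuned by grid search with cross-validation, and a GMM fitted on ischemic-state features whose log-likelihood is compared against a range of thresholds in a Neyman-Pearson fashion. The ST/T features here are random placeholders, and the hyperparameter grids and component counts are illustrative.

```python
# Hedged sketch: grid-searched RBF-SVM plus GMM log-likelihood thresholding on placeholder features.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
X = rng.normal(size=(150, 8))            # ST/T-derived features per ECG segment (synthetic)
y = rng.integers(0, 2, size=150)         # 0: pre-inflation, 1: balloon occlusion (assumed coding)

grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                    cv=10).fit(X, y)
print("best SVM params:", grid.best_params_)

gmm = GaussianMixture(n_components=3, random_state=0).fit(X[y == 1])   # model of one state's features
log_lik = gmm.score_samples(X)                                          # per-segment log-likelihood
for thr in np.linspace(log_lik.min(), log_lik.max(), 5):                # sweep thresholds to trace an ROC
    flagged = log_lik < thr
    print(f"threshold {thr:.1f}: {flagged.mean():.2%} of segments flagged")
```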

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 119
7582 Determining Current and Future Training Needs of Ontario Workers Supporting Persons with Developmental Disabilities

Authors: Erin C. Rodenburg, Jennifer McWhirter, Andrew Papadopoulos

Abstract:

Support workers for adults with developmental disabilities promote the care and wellbeing of a historically underserved population. Poor employment training and low work satisfaction for these disability support workers are linked to low productivity, poor quality of care, turnover, and intention to leave employment. Therefore, to improve the lives of those within disability support homes, both clients and caregivers, it is vital to determine where improvements to training and support for those providing direct care can be made. The current study aims to explore disability support workers' perceptions of the training received in their employment at residential homes, how it prepared them for their role, and where there is room for improvement, with the aim of developing recommendations for an improved training experience. Responses were collected from 85 disability support workers across 40 Ontario group homes. Findings suggest most disability support workers within the 40 support homes feel adequately trained for their responsibilities. For those who did not feel adequately trained, the main issues expressed were a lack of standardization in training, a need for more continuous training, and a move away from trial and error in performing tasks to support clients with developmental disabilities.

Keywords: developmental disabilities, disability workers, support homes, training

Procedia PDF Downloads 157
7581 Effect of Key Parameters on Performances of an Adsorption Solar Cooling Machine

Authors: Allouache Nadia

Abstract:

Solid adsorption cooling machines have been extensively studied recently. They constitute very attractive solutions for recovering large amounts of industrial waste heat at medium temperature and for using renewable energy sources such as solar energy. The technology of these machines can be developed through experimental studies and through mathematical modelling; the latter saves time and money because it is more flexible for simulating the variation of different parameters. Adsorption cooling machines consist essentially of an evaporator, a condenser, and a reactor (the subject of this work) containing a porous medium, which in our case is activated carbon reacting by adsorption with ammonia. The principle can be described as follows: when the adsorbent (at temperature T) is in exclusive contact with the vapour of the adsorbate (at pressure P), an amount of adsorbate is trapped inside the micro-pores in an almost liquid state. This adsorbed mass m is a function of T and P according to a divariant equilibrium m = f(T, P). Moreover, at constant pressure, m decreases as T increases, and at constant adsorbed mass, P increases with T. This makes it possible to imagine an ideal refrigerating cycle consisting of a period of heating/desorption/condensation followed by a period of cooling/adsorption/evaporation. The effects of key parameters on the machine's performance are analysed and discussed.

Keywords: activated carbon-ammoniac pair, effect of key parameters, numerical modeling, solar cooling machine

Procedia PDF Downloads 231
7580 Embedded Visual Perception for Autonomous Agricultural Machines Using Lightweight Convolutional Neural Networks

Authors: René A. Sørensen, Søren Skovsen, Peter Christiansen, Henrik Karstoft

Abstract:

Autonomous agricultural machines act in stochastic surroundings and therefore must be able to perceive their surroundings in real time. This perception can be achieved using image sensors combined with advanced machine learning, in particular deep learning. Deep convolutional neural networks excel at labeling and perceiving color images, and since the cost of high-quality RGB cameras is low, the hardware cost of good perception depends heavily on memory and computation power. This paper investigates the possibility of designing lightweight convolutional neural networks for semantic segmentation (pixel-wise classification) with reduced hardware requirements, to allow for embedded usage in autonomous agricultural machines. Using compression techniques, a lightweight convolutional neural network is designed to perform real-time semantic segmentation on an embedded platform. The network is trained on two large datasets, ImageNet and Pascal Context, to recognize up to 400 individual classes. The 400 classes are remapped into agricultural superclasses (e.g., human, animal, sky, road, field, shelterbelt, and obstacle), and the ability to provide accurate real-time perception of agricultural surroundings is studied. The network is applied to the case of autonomous grass mowing using the NVIDIA Tegra X1 embedded platform. Feeding case-specific images to the network results in a fully segmented map of the superclasses in the image. As the network is still being designed and optimized, only a qualitative analysis of the method is complete at the abstract submission deadline. Following this deadline, the finalized design will be quantitatively evaluated on 20 annotated grass mowing images. Lightweight convolutional neural networks for semantic segmentation can be implemented on an embedded platform and show competitive performance with regard to accuracy and speed. It is feasible to provide cost-efficient perceptive capabilities related to semantic segmentation for autonomous agricultural machines.

Keywords: autonomous agricultural machines, deep learning, safety, visual perception

Procedia PDF Downloads 363
7579 Radar Fault Diagnosis Strategy Based on Deep Learning

Authors: Bin Feng, Zhulin Zong

Abstract:

Radar systems are critical in modern military, aviation, and maritime operations, and their proper functioning is essential for the success of these operations. However, due to their complexity and sensitivity, radar systems are susceptible to various faults that can significantly affect their performance. Traditional radar fault diagnosis strategies rely on expert knowledge and rule-based approaches, which are often limited in effectiveness and require considerable time and resources. Deep learning has recently emerged as a promising approach for fault diagnosis due to its ability to learn features and patterns from large amounts of data automatically. In this paper, we propose a radar fault diagnosis strategy based on deep learning that can accurately identify and classify faults in radar systems. Our approach uses convolutional neural networks (CNN) to extract features from radar signals and to classify the faults from those features. The proposed strategy is trained and validated on a dataset of measured radar signals with various types of faults, and the results show that it achieves high accuracy in fault diagnosis. To further evaluate its effectiveness, we compare it with traditional rule-based approaches and other machine learning-based methods, including decision trees, support vector machines (SVMs), and random forests. The results demonstrate that our deep learning-based approach outperforms the traditional approaches in terms of accuracy and efficiency. Finally, we discuss the potential applications and limitations of the proposed strategy, as well as future research directions. Our study highlights the importance and potential of deep learning for radar fault diagnosis and suggests that it can be a valuable tool for improving the performance and reliability of radar systems. In summary, this paper presents a radar fault diagnosis strategy based on deep learning that achieves high accuracy and efficiency in identifying and classifying faults in radar systems. The proposed strategy has significant potential for practical applications and can pave the way for further research.
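
As an illustration of the kind of CNN-based classifier discussed above, a minimal 1-D convolutional network in PyTorch is sketched below. The architecture sizes, the four fault classes, and the random signals are assumptions for the example and do not describe the authors' network.

```python
# Hedged sketch: a small 1-D CNN that maps a radar return to one of several fault classes.
import torch
import torch.nn as nn

class RadarFaultCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):                 # x: (batch, 1, signal_length)
        return self.head(self.features(x))

model = RadarFaultCNN()
signals = torch.randn(8, 1, 1024)         # a batch of synthetic radar returns
labels = torch.randint(0, 4, (8,))        # hypothetical fault-class labels
loss = nn.CrossEntropyLoss()(model(signals), labels)
loss.backward()                           # one illustrative training step (optimizer omitted)
```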

Keywords: radar system, fault diagnosis, deep learning, radar fault

Procedia PDF Downloads 52
7578 Prediction of Formation Pressure Using Artificial Intelligence Techniques

Authors: Abdulmalek Ahmed

Abstract:

Formation pressure is a key factor affecting the economics and efficiency of drilling operations. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature calculate the formation pressure from different parameters: some use only drilling parameters, while others predict the formation pressure from log data. All of these models require an assumed trend, normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and those who have used only one or at most two AI methods. The objective of this research is to predict the pore pressure from both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity, and delta sonic time. Real field data are used to predict the formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM), and functional networks (FN). All AI tools are compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity: it estimates pore pressure without the need for different trends, in contrast to other models, which require either a normal or an abnormal pressure trend. Moreover, a comparison of the AI tools with each other indicates that SVM has the advantage of fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%). Finally, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
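
As a hedged illustration of one of the AI routes mentioned above, the sketch below fits a support-vector regressor to the seven listed inputs and scores it with a correlation coefficient and an average absolute percentage error (AAPE). All data values are synthetic placeholders rather than the real field data used in the study.

```python
# Hedged sketch: SVM regression from drilling/log parameters to a pore-pressure proxy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 7))        # WOB, RPM, ROP, mud weight, bulk density, porosity, delta sonic time
y = 9 + X @ rng.normal(size=7) * 0.3 + rng.normal(scale=0.1, size=500)   # synthetic pore-pressure proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(C=10, epsilon=0.05)).fit(X_tr, y_tr)
pred = model.predict(X_te)

r = np.corrcoef(y_te, pred)[0, 1]                        # correlation coefficient
aape = np.mean(np.abs((y_te - pred) / y_te)) * 100       # average absolute percentage error
print(f"R = {r:.3f}, AAPE = {aape:.2f}%")
```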

Keywords: Artificial Intelligence (AI), Formation pressure, Artificial Neural Networks (ANN), Fuzzy Logic (FL), Support Vector Machine (SVM), Functional Networks (FN), Radial Basis Function (RBF)

Procedia PDF Downloads 126
7577 Stress and Social Support as Predictors of Quality of Life: A Case among Flood Victims in Malaysia

Authors: Najib Ahmad Marzuki, Che Su Mustaffa, Johana Johari, Nur Haffiza Rahaman

Abstract:

The purpose of this paper is to examine the effects of stress and social support on, and their relationship with, the quality of life among flood victims in Malaysia. A total of 764 respondents took part in the survey via random sampling. The Depression, Anxiety, and Stress Scales were utilized to measure stress, while the Multidimensional Scale of Perceived Social Support was used to measure the quality of life. The findings indicate significant correlations between the study variables: a significant negative relationship between stress and quality of life, and significant positive correlations of support from family and support from friends with quality of life. Stress and support from family were found to be significant predictors influencing the quality of life among flood victims.

Keywords: stress, social support, quality of life, flood victims

Procedia PDF Downloads 513
7576 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms

Authors: Seulki Lee, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used technique in SPC is the control chart, whose main goal is to detect any assignable changes that affect the quality of the output. Most conventional control charts, such as Hotelling's T2 chart, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, in modern, complicated manufacturing systems, control chart techniques that can efficiently handle nonnormal processes are required. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms and multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based charts and k-nearest neighbors-based charts, have proven their improved performance in nonnormal situations compared to the T2 chart. Besides nonnormality, time-varying operation is also quite common in real manufacturing because of factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drift. Traditional control charts cannot accommodate future condition changes of the process because they are formulated from data recorded in the early stage of the process. In the present paper, we propose an SVDD-based control chart that is capable of adaptively monitoring time-varying and nonnormal processes. We reformulate the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations, and we define an updating region for an efficient model-updating structure of the control chart. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals. The effectiveness and applicability of the proposed chart were demonstrated through experiments with simulated data and real data from the metal frame process in mobile device manufacturing.
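
A minimal sketch of the time-adaptive idea is given below, approximating SVDD with scikit-learn's OneClassSVM (equivalent to SVDD for an RBF kernel) and refitting it with exponentially decaying sample weights so that recent observations dominate the control region. The forgetting factor, weighting scheme, and synthetic drifting data are assumptions and differ from the authors' updating-region formulation.

```python
# Hedged sketch: an SVDD-like one-class boundary weighted toward recent samples.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
drift = np.linspace(0, 2, 500)[:, None]
X = rng.normal(size=(500, 3)) + drift                # slowly drifting in-control process (synthetic)

lam = 0.99                                           # forgetting factor (assumed)
weights = lam ** np.arange(len(X) - 1, -1, -1)       # newest sample gets weight 1

chart = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5)
chart.fit(X, sample_weight=weights)

new_obs = np.array([[2.2, 2.1, 1.9], [6.0, 6.0, 6.0]])
print(chart.predict(new_obs))                        # +1: in control, -1: out-of-control signal
```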

Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process

Procedia PDF Downloads 274
7575 Factory Virtual Environment Development for Augmented and Virtual Reality

Authors: Michal Gregor, Jiri Polcar, Petr Horejsi, Michal Simon

Abstract:

Machine visualization is an area of interest with fast and progressive development. We present a method of machine visualization that is applicable in real industrial conditions according to current needs and demands. Real factory data were obtained in a newly built research plant. The methods described in this paper were validated on a case study. Input data were processed and the virtual environment was created. The environment contains information about dimensions, structure, disposition, and function. The hardware was enhanced with modular machines, prototypes, and accessories, and new functionalities and machines were added to the virtual environment. The user is able to interact with objects such as testing and cutting machines and can operate and move them. The proposed design consists of an environment with two degrees of freedom of movement. Users interact with items in the virtual world that are embedded into the real surroundings. This paper describes the development of the virtual environment. We compared and tested various options for factory layout virtualization and visualization, analyzed the possibility of using a 3D scanner in the layout acquisition process, and analyzed various virtual reality visualization methods such as stereoscopic (CAVE) projection, head-mounted displays (HMD), and augmented reality (AR) projection provided by see-through glasses.

Keywords: augmented reality, spatial scanner, virtual environment, virtual reality

Procedia PDF Downloads 377