Search results for: probabilistic classification vector machines
3352 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model
Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier
Abstract:
Human motion recognition has received increasing attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification avoids the misclassification that can happen when recognizing similar motions. Two experiments are conducted. In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods evaluated on the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model
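As a rough illustration of the two-direction idea (a sketch, not the authors' implementation; the parameterization and the score-combination rule are assumptions), a discrete HMM classifier can score each observation sequence with a forward-trained and a backward-trained model per class and pick the class with the highest combined log-likelihood:

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Forward algorithm in log space for a discrete HMM.
    pi: (S,) initial probabilities, A: (S, S) transitions, B: (S, V) emissions."""
    alpha = np.log(pi) + np.log(B)[:, obs[0]]
    for o in obs[1:]:
        # alpha_j(t) = log B[j, o_t] + logsum_i( alpha_i(t-1) + log A[i, j] )
        alpha = np.log(B)[:, o] + np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0)
    return np.logaddexp.reduce(alpha)

def classify(obs, models):
    """models maps class label -> ((pi, A, B) forward model, (pi, A, B) backward model).
    The sequence is scored in both directions and the two scores are summed."""
    obs = np.asarray(obs)
    scores = {label: log_likelihood(obs, *fwd) + log_likelihood(obs[::-1], *bwd)
              for label, (fwd, bwd) in models.items()}
    return max(scores, key=scores.get)
```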
Procedia PDF Downloads 207
3351 Using Machine Learning Techniques for Autism Spectrum Disorder Analysis and Detection in Children
Authors: Norah Mohammed Alshahrani, Abdulaziz Almaleh
Abstract:
Autism Spectrum Disorder (ASD) is a condition related to issues with brain development that affects how a person perceives and communicates with others, resulting in difficulties with social interaction and communication, and its prevalence is constantly growing. Early recognition of ASD allows children to lead safe and healthy lives and helps doctors with accurate diagnoses and management of the condition. Therefore, it is crucial to develop a method that will achieve good results with high accuracy for the measurement of ASD in children. In this paper, ASD datasets of toddlers and children have been analyzed. We employed the following machine learning techniques to explore ASD: Random Forest (RF), Decision Tree (DT), Naïve Bayes (NB) and Support Vector Machine (SVM). Feature selection was then used to provide fewer attributes from the ASD datasets while preserving model performance. As a result, we found that the best result was provided by the Support Vector Machine (SVM), achieving an accuracy of 0.98 on the toddler dataset and 0.99 on the children dataset.
Keywords: autism spectrum disorder, machine learning, feature selection, support vector machine
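A minimal sketch of the described pipeline (feature selection followed by an SVM) with scikit-learn; the synthetic stand-in data and the number of selected features are assumptions, since the ASD datasets are not reproduced here:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic stand-in for the toddler/children ASD screening data.
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)

# Keep the k most discriminative attributes, then classify with an SVM.
model = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),  # k=10 is illustrative
    ("svm", SVC(kernel="rbf")),
])
print(cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())
```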
Procedia PDF Downloads 150
3350 Hybrid Approach for Country’s Performance Evaluation
Authors: C. Slim
Abstract:
This paper presents an integrated model, which hybridizes data envelopment analysis (DEA) and the support vector machine (SVM), to classify countries according to their efficiency and performance. This model takes into account aspects of multi-dimensional indicators, decision-making hierarchy and relativity of measurement. Starting from a set of performance indicators as exhaustive as possible, a process of successive aggregations has been developed to attain an overall evaluation of a country’s competitiveness.
Keywords: Artificial Neural Networks (ANN), Support Vector Machine (SVM), Data Envelopment Analysis (DEA), aggregations, indicators of performance
Procedia PDF Downloads 338
3349 Difference between 'HDR Ir-192 and Co-60 Sources' for High Dose Rate Brachytherapy Machine
Authors: Md Serajul Islam
Abstract:
High Dose Rate (HDR) brachytherapy is used for cancer patients. In our country, HDR is currently used only for the treatment of cervical and breast cancer. The air kerma rate in air at a reference distance of less than a meter from the source is the recommended quantity for the specification of the gamma-ray source Ir-192 in brachytherapy. The absorbed dose for the patients is directly proportional to the air kerma rate. Therefore, the air kerma rate should be determined before the first use of the source on patients by a qualified medical physicist who is independent of the source manufacturer. The air kerma rate will then be applied in the calculation of the dose delivered to patients in their planning systems. In practice, HDR Ir-192 afterloader machines are mostly used in brachytherapy treatment. Currently, HDR Co-60 sources are increasingly coming into operation as well. The essential advantage of Co-60 sources is their longer half-life compared to Ir-192. The use of HDR Co-60 afterloading machines is therefore quite attractive for developing countries. This work describes the dosimetry of HDR afterloading machines according to the IAEA-TECDOC-1274 (2002) protocols with the nuclides Ir-192 and Co-60. We used three different measurement methods (with a ring chamber in a solid phantom and in free air, and with a well chamber) depending on the protocol. We have shown that the standard deviations of the measured air kerma rate for the Co-60 source are generally larger than those of the Ir-192 source. The measurements with the well chamber had the lowest deviation from the certificate value. Across all protocols and methods, the deviations remained within a maximum of about 1% for Ir-192 and 2.5% for Co-60 sources, respectively.
Keywords: Ir-192 source, cancer, patients, cheap treatment cost
Procedia PDF Downloads 236
3348 Early Installation Effect on the Machines’ Generated Vibration
Authors: Maitham Al-Safwani
Abstract:
Several studies have analyzed motor vibration issues. It is generally accepted that vibration issues result from poor equipment installation. We had a water injection pump tested in the factory, and the pump exceeded the vibration limit. Once the pump was brought to the site, its half-size shim plates were replaced with full-size shim plates, which drastically reduced the vibration. In this study, vibration data were recorded for several similar motors run at the same and different speeds. The vibration values were recorded for two and a half hours, and the readings were analyzed to determine when they became consistent. This was further supported by recording the audible noise produced by some machines, seeking a relationship between changes in machine noise and machine abnormalities, such as vibration.
Keywords: vibration, noise, installation, machine
Procedia PDF Downloads 182
3347 Evaluation of Robust Feature Descriptors for Texture Classification
Authors: Jia-Hong Lee, Mei-Yi Wu, Hsien-Tsung Kuo
Abstract:
Texture is an important characteristic in real and synthetic scenes. Texture analysis plays a critical role in inspecting surfaces and provides important techniques in a variety of applications. Although several descriptors have been presented to extract texture features, the development of object recognition is still a difficult task due to the complex aspects of texture. Recently, many robust and scale-invariant image features such as SIFT, SURF and ORB have been successfully used in image retrieval and object recognition. In this paper, we compare the performance of these feature descriptors for texture classification using k-means clustering. Different classifiers, including k-NN, Naive Bayes, Back-Propagation Neural Network (BPNN), Decision Tree and KStar, were applied to three texture image sets: UIUCTex, KTH-TIPS and Brodatz. Experimental results reveal that SIFT achieves the best average accuracy on UIUCTex and KTH-TIPS, while SURF is advantaged on the Brodatz texture set. The BPNN works best on test-set classification among all the classifiers used.
Keywords: texture classification, texture descriptor, SIFT, SURF, ORB
Procedia PDF Downloads 369
3346 Neuro-Fuzzy Based Model for Phrase Level Emotion Understanding
Authors: Vadivel Ayyasamy
Abstract:
The present approach deals with the identification of emotions and the classification of emotional patterns at the phrase level with respect to positive and negative orientation. The proposed approach considers emotion-triggered terms, their co-occurring terms and the associated sentences for recognizing emotions, and uses part-of-speech tagging and emotion actifiers for classification. Here, sentence patterns are broken into phrases, and a neuro-fuzzy model is used for classification, which results in 16 patterns of emotional phrases. Suitable intensities are assigned to capture the degree of emotional content that exists in the semantics of the patterns. These emotional phrases are assigned weights that support deciding the positive or negative orientation of emotions. The approach uses web documents for experimental purposes, and the proposed classification approach performs well and achieves good F-scores.
Keywords: emotions, sentences, phrases, classification, patterns, fuzzy, positive orientation, negative orientation
Procedia PDF Downloads 378
3345 Comparison of Different Methods to Produce Fuzzy Tolerance Relations for Rainfall Data Classification in the Region of Central Greece
Authors: N. Samarinas, C. Evangelides, C. Vrekos
Abstract:
The aim of this paper is the comparison of three different methods for producing fuzzy tolerance relations for rainfall data classification: the correlation coefficient, cosine amplitude and max-min methods. The data were obtained from seven rainfall stations in the region of central Greece and refer to 20-year time series of average monthly rainfall. Each of the three methods was used to express these data as a fuzzy relation, and each resulting fuzzy tolerance relation was transformed into an equivalence relation by max-min composition. From the equivalence relation, the rainfall stations were categorized and classified according to the degree of confidence. The classification shows the similarities among the rainfall stations. Stations with high similarity can be used interchangeably in water resource management scenarios or to augment data from one to another. Due to the complexity of the calculations, it is important to find out which of the methods is computationally simpler and needs fewer compositions in order to give reliable results.
Keywords: classification, fuzzy logic, tolerance relations, rainfall data
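The cosine amplitude relation and the max-min composition step can be sketched in a few lines of NumPy; this is a generic illustration, not the authors' code (for a finite relation the composition loop terminates):

```python
import numpy as np

def cosine_amplitude(X):
    """Fuzzy tolerance relation from data rows X (stations x monthly averages)."""
    num = np.abs(X @ X.T)
    norms = np.sum(X**2, axis=1)
    return num / np.sqrt(np.outer(norms, norms))

def max_min_compose(R, S):
    """(R o S)_ij = max_k min(R_ik, S_kj)."""
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

def to_equivalence(R):
    """Repeat max-min composition until the tolerance relation becomes transitive."""
    while True:
        R2 = np.maximum(R, max_min_compose(R, R))
        if np.allclose(R2, R):
            return R2
        R = R2

# Stations i and j fall in the same class at confidence level lam when R[i, j] >= lam.
```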
Procedia PDF Downloads 314
3344 [Keynote Talk]: sEMG Interface Design for Locomotion Identification
Authors: Rohit Gupta, Ravinder Agarwal
Abstract:
The surface electromyographic (sEMG) signal has the potential to identify human activities and intentions. This potential is further exploited to control artificial limbs using the sEMG signal from the residual limbs of amputees. This paper deals with the development of a multichannel, cost-efficient sEMG signal interface for research applications, along with an evaluation of the proposed class-dependent statistical approach to feature selection. The sEMG signal acquisition interface was developed using the ADS1298 from Texas Instruments, a front-end interface integrated circuit for ECG applications. The sEMG signal was recorded from two lower-limb muscles for three locomotion modes: plane walking (PW), stair ascending (SA) and stair descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 pre-existing feature vectors. To make the study more extensive, the performance of five different types of classifiers is compared. The outcome of the current work proves the suitability of the proposed feature selection algorithm for locomotion recognition compared to the other existing feature vectors. The SVM classifier outperformed the other compared classifiers, with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, as it accounts for 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface along with the proposed feature selection algorithm.
Keywords: classifiers, feature selection, locomotion, sEMG
Procedia PDF Downloads 293
3343 0.13-μm CMOS Vector Modulator for Wireless Backhaul System
Authors: J. S. Kim, N. P. Hong
Abstract:
In this paper, a CMOS vector modulator designed for a wireless backhaul system based on 802.11ac is presented. A polyphase filter and sign-select switches yield two orthogonal signal paths. Two variable gain amplifiers with a strongly reduced phase shift of only ±5° are used to weight these paths. The modulator has a phase control range of 360° and a gain range of -10 dB to 10 dB. The current drawn from a 1.2 V supply amounts to 20.4 mA. In a 0.13 μm technology, the chip die area amounts to 1.47 × 0.75 mm².
Keywords: CMOS, phase shifter, backhaul, 802.11ac
Procedia PDF Downloads 386
3342 Probabilistic Study of Impact Threat to Civil Aircraft and Realistic Impact Energy
Authors: Ye Zhang, Chuanjun Liu
Abstract:
In-service aircraft are exposed to different types of threats, e.g., bird strikes, ground vehicle impacts, runway debris, or even lightning strikes. To satisfy the aircraft damage tolerance design requirements, the designer has to understand the threat level for different types of aircraft structures, either metallic or composite. For composite structures, exposure to low-velocity impacts may produce very serious internal damage, such as delaminations and matrix cracks, without leaving a visible mark on the impacted surface. This internal damage can cause a significant reduction in the load-carrying capacity of structures. The semi-probabilistic method provides a practical and proper approximation to establish the impact-threat-based energy cut-off level for the damage tolerance evaluation of aircraft components. Thus, the probabilistic distribution of impact threat and the realistic impact energy cut-off levels are the essential establishments required for the certification of aircraft composite structures. A new survey of impact threat to in-service civil aircraft has recently been carried out based on field records concerning around 500 civil aircraft (mainly single-aisle) and more than 4.8 million flight hours. In total, 1,006 damage cases caused by low-velocity impact events were screened out from more than 8,000 records, including impact dents, scratches, corrosion, delaminations, cracks, etc. The dependency of the impact threat on the location of the aircraft structures and the structural configuration was analyzed. Although the survey mainly focused on metallic structures, the resulting low-energy impact data are believed to be representative of civil aircraft in general, since the service environments and the maintenance operations are independent of the materials of the structures. The probability of impact damage occurrence (Po) and the probability of impact energy exceedance (Pe) are the two key parameters for describing the statistical distribution of impact threat. With the impact damage events from the survey, Po can be estimated as 2.1×10⁻⁴ per flight hour. Concerning the calculation of Pe, a numerical model was developed using the commercial FEA software ABAQUS to estimate the impact energy backward from the visible damage characteristics. The relationship between the visible dent depth and the impact energy was established and validated by drop-weight impact experiments. Based on the survey results, Pe was calculated and assumed to follow a log-linear relationship versus the impact energy. For the product of the two aforementioned probabilities, it is reasonable and conservative to assume Pa = Po × Pe = 10⁻⁵, which indicates that low-velocity impact events are about as likely as Limit Load events. Combining the assumed Pa with the probabilities Po and Pe obtained from the field survey, the cut-off level of realistic impact energy was estimated as 34 J. In summary, a new survey of civil aircraft field records was recently conducted to investigate the probabilistic distribution of impact threat. Based on the data, the two probabilities Po and Pe were obtained. Considering a conservative assumption of Pa, the cut-off energy level for the realistic impact energy has been determined, which provides potential applicability in the damage tolerance certification of future civil aircraft.
Keywords: composite structure, damage tolerance, impact threat, probabilistic
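The cut-off logic reduces to a short calculation; in the sketch below the log-linear coefficients c0 and c1 are placeholders, not the survey's fitted values:

```python
import math

Po = 2.1e-4          # probability of impact damage occurrence per flight hour (from the survey)
Pa = 1e-5            # assumed combined probability, comparable to Limit Load events
Pe_target = Pa / Po  # required probability of impact-energy exceedance

# Assumed log-linear exceedance model: log10 Pe(E) = c0 - c1 * E, with E in joules.
c0, c1 = 0.0, 0.05   # placeholder coefficients, not the fitted survey values
E_cutoff = (c0 - math.log10(Pe_target)) / c1
print(f"Pe target = {Pe_target:.3g}, energy cutoff = {E_cutoff:.1f} J")
```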
Procedia PDF Downloads 308
3341 Efficient Schemes of Classifiers for Remote Sensing Satellite Imageries of Land Use Pattern Classifications
Authors: S. S. Patil, Sachidanand Kini
Abstract:
Classification of land use patterns is demanding due to the complexity and variability of remote sensing imagery data. This research exploits remote sensing to mine significant spatially variable factors, such as land cover and land use, from satellite images of remote arid areas in Karnataka State, India. Diverse unsupervised and supervised classification techniques, consisting of maximum likelihood, Mahalanobis distance and minimum distance, are applied to the raw satellite images of Bellary District, Karnataka State, India. The accuracy of the results is evaluated by visual comparison with standard maps and ground truths. The maximum likelihood technique gave the finest results, while both the minimum distance and Mahalanobis distance methods overvalued agricultural land areas. Despite the loss of a few features due to the low resolution of the satellite images, high-quality agreement was found between the parameters extracted automatically from the developed maps and field observations.
Keywords: Mahalanobis distance, minimum distance, supervised, unsupervised, user classification accuracy, producer's classification accuracy, maximum likelihood, kappa coefficient
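Both supervised distance classifiers named above reduce to a few lines per pixel vector; a generic NumPy sketch in which the class statistics are assumed to come from training regions of interest:

```python
import numpy as np

def fit_class_stats(samples_by_class):
    """samples_by_class: {label: (n_pixels, n_bands) training array}."""
    return {label: (X.mean(axis=0), np.cov(X, rowvar=False))
            for label, X in samples_by_class.items()}

def classify_pixel(x, stats, method="mahalanobis"):
    """Assign the pixel vector x to the class with the smallest distance.
    (Maximum likelihood would additionally weight by log det(cov) and priors.)"""
    best, best_d = None, np.inf
    for label, (mu, cov) in stats.items():
        d = x - mu
        if method == "mahalanobis":
            dist = d @ np.linalg.inv(cov) @ d   # squared Mahalanobis distance
        else:
            dist = d @ d                        # minimum distance to class mean
        if dist < best_d:
            best, best_d = label, dist
    return best
```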
Procedia PDF Downloads 183
3340 A Clustering Algorithm for Massive Texts
Authors: Ming Liu, Chong Wu, Bingquan Liu, Lei Chen
Abstract:
Internet users face a massive amount of textual data every day. Organizing texts into categories can help users dig useful information out of large-scale text collections. Clustering is, in fact, one of the most promising tools for categorizing texts due to its unsupervised nature. Unfortunately, most traditional clustering algorithms lose their effectiveness on large-scale text collections. This is mainly attributable to the high-dimensional vectors generated from texts. To cluster large-scale text collections effectively and efficiently, this paper proposes a vector-reconstruction-based clustering algorithm. Only the features that can represent the cluster are preserved in the cluster’s representative vector. The algorithm alternately repeats two sub-processes until it converges. One is the partial tuning sub-process, where feature weights are fine-tuned iteratively. To accelerate clustering, an intersection-based similarity measure and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other is the overall tuning sub-process, where features are reallocated among different clusters. In this sub-process, features that are useless for representing a cluster are removed from the cluster’s representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that our algorithm achieves high quality on both small-scale and large-scale text collections.
Keywords: vector reconstruction, large-scale text clustering, partial tuning sub-process, overall tuning sub-process
Procedia PDF Downloads 435
3339 A Hybrid Feature Selection and Deep Learning Algorithm for Cancer Disease Classification
Authors: Niousha Bagheri Khulenjani, Mohammad Saniee Abadeh
Abstract:
Learning from very big datasets is a significant problem for most present data mining and machine learning algorithms. MicroRNA (miRNA) data form one of the important big genomic, non-coding datasets representing genome sequences. In this paper, a hybrid method for the classification of miRNA data is proposed. Due to the variety of cancers and the high number of genes, analyzing the miRNA dataset has been a challenging problem for researchers. The number of features is high relative to the number of samples, and the data suffer from class imbalance. A feature selection method has been used to select the features with the greatest ability to distinguish classes and to eliminate obscure features. Afterward, a Convolutional Neural Network (CNN) classifier is utilized for the classification of cancer types, employing a Genetic Algorithm to find optimized hyper-parameters of the CNN. To speed up the CNN classification process, a Graphics Processing Unit (GPU) is recommended for computing the mathematical operations in parallel. The proposed method is tested on a real-world dataset with 8,129 patients, 29 different types of tumors and 1,046 miRNA biomarkers, taken from The Cancer Genome Atlas (TCGA) database.
Keywords: cancer classification, feature selection, deep learning, genetic algorithm
Procedia PDF Downloads 111
3338 A Robust Spatial Feature Extraction Method for Facial Expression Recognition
Authors: H. G. C. P. Dinesh, G. Tharshini, M. P. B. Ekanayake, G. M. R. I. Godaliyadda
Abstract:
This paper presents a new spatial feature extraction method based on principal component analysis (PCA) and Fisher discriminant analysis (FDA) for facial expression recognition. It not only extracts reliable features for classification, but also reduces the feature-space dimensions of the pattern samples. In this method, each grayscale image is first considered in its entirety as the measurement matrix. Then, the principal components (PCs) of the row vectors of this matrix and the variance of these row vectors along the PCs are estimated. This ensures the preservation of the spatial information of the facial image. Afterwards, by incorporating the spectral information of the eigen-filters derived from the PCs, a feature vector is constructed for a given image. Finally, FDA is used to define a set of bases in a reduced-dimension subspace such that optimal clustering is achieved. FDA defines an inter-class scatter matrix and an intra-class scatter matrix to enhance the compactness of each cluster while maximizing the distance between cluster marginal points. To match the test image with the training set, a cosine-similarity-based Bayesian classification was used. The proposed method was tested on the Cohn-Kanade and JAFFE databases. It was observed that the proposed method, which incorporates spatial information to construct an optimal feature space, outperforms the standard PCA- and FDA-based methods.
Keywords: facial expression recognition, principal component analysis (PCA), Fisher discriminant analysis (FDA), eigen-filter, cosine similarity, Bayesian classifier, F-measure
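A rough scikit-learn analogue of the PCA-then-FDA reduction with cosine-similarity matching (the paper's eigen-filter construction and the Bayesian step are not reproduced; this sketch only shows the reduced-subspace matching idea):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit(X_train, y_train, n_pcs=50):
    """X_train: (n_images, n_pixels) flattened grayscale faces."""
    pca = PCA(n_components=n_pcs).fit(X_train)                     # dimensionality reduction
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X_train), y_train)
    Z = lda.transform(pca.transform(X_train))                      # discriminant subspace
    centroids = {c: Z[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    return pca, lda, centroids

def predict(x, pca, lda, centroids):
    z = lda.transform(pca.transform(x.reshape(1, -1)))[0]
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(centroids, key=lambda c: cos(z, centroids[c]))      # cosine-similarity match
```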
Procedia PDF Downloads 425
3337 Forecasting Stock Prices Based on the Residual Income Valuation Model: Evidence from a Time-Series Approach
Authors: Chen-Yin Kuo, Yung-Hsin Lee
Abstract:
Previous studies applying the residual income valuation (RIV) model generally use panel data and single-equation models to forecast stock prices. Unlike these, this paper uses Taiwanese longitudinal data to estimate multi-equation time-series models such as the Vector Autoregression (VAR) and the Vector Error Correction Model (VECM), and conducts out-of-sample forecasting. Further, this work assesses their forecasting performance with two instruments. Consistent with extant research, the major finding is that the VECM outperforms the other three models in forecasting for three stock sectors over all horizons. This implies that an error correction term containing long-run information contributes to improved forecasting accuracy. Moreover, the composite pattern shows that, at longer horizons, the VECM produces the greater reduction in errors and performs substantially better than the VAR.
Keywords: residual income valuation model, vector error correction model, out-of-sample forecasting, forecasting accuracy
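A minimal statsmodels sketch of VECM-based out-of-sample forecasting; the lag order, cointegration rank, deterministic term, and the synthetic two-variable system are illustrative assumptions, not the settings estimated in the paper:

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

# Stand-in for a (T x k) cointegrated system, e.g., stock price and RIV fundamentals.
rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(size=200))                  # shared stochastic trend
data = np.column_stack([common + rng.normal(size=200),
                        0.8 * common + rng.normal(size=200)])

train, test = data[:180], data[180:]
model = VECM(train, k_ar_diff=2, coint_rank=1, deterministic="ci")  # illustrative settings
res = model.fit()
forecast = res.predict(steps=len(test))                   # out-of-sample forecast
rmse = np.sqrt(((forecast - test) ** 2).mean(axis=0))     # per-variable forecast error
print(rmse)
```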
Procedia PDF Downloads 316
3336 Effect of Key Parameters on Performances of an Adsorption Solar Cooling Machine
Authors: Allouache Nadia
Abstract:
Solid adsorption cooling machines have been extensively studied recently. They constitute very attractive solutions for recovering significant amounts of medium-temperature industrial waste heat and for using renewable energy sources such as solar energy. The technology of these machines can be developed through experimental studies and through mathematical modeling. The latter method saves time and money because it is more flexible for simulating the variation of different parameters. Adsorption cooling machines consist essentially of an evaporator, a condenser and a reactor (the object of this work) containing a porous medium, which in our case is activated carbon reacting by adsorption with ammonia. The principle can be described as follows: when the adsorbent (at temperature T) is in exclusive contact with vapour of the adsorbate (at pressure P), an amount of adsorbate is trapped inside the micro-pores in an almost liquid state. This adsorbed mass m is a function of T and P according to a divariant equilibrium m = f(T, P). Moreover, at constant pressure, m decreases as T increases, and at constant adsorbed mass, P increases with T. This makes it possible to imagine an ideal refrigerating cycle consisting of a heating/desorption/condensation period followed by a cooling/adsorption/evaporation period. The effects of key parameters on the machine's performance are analysed and discussed.
Keywords: activated carbon-ammonia pair, effect of key parameters, numerical modeling, solar cooling machine
Procedia PDF Downloads 254
3335 Using Baculovirus Expression Vector System to Express Envelop Proteins of Chikungunya Virus in Insect Cells and Mammalian Cells
Authors: Tania Tzong, Chao-Yi Teng, Tzong-Yuan Wu
Abstract:
Currently, the Chikungunya virus (CHIKV), transmitted to humans by Aedes mosquitoes, has spread from Africa to Southeast Asia, South America and southern Europe. However, little is known about the antigenic targets for immunity, and there are no licensed vaccines or specific antiviral treatments for the disease caused by CHIKV. Baculovirus has been recognized as a novel vaccine vector with the attractive characteristic features of an optional vaccine delivery vehicle. This approach provides the safety and efficacy of a CHIKV vaccine. In this study, the bi-cistronic recombinant baculoviruses vAc-CMV-CHIKV26S-Rhir-EGFP and vAc-CMV-pH-CHIKV26S-Lir-EGFP were produced. Both recombinant baculoviruses can express the EGFP reporter gene in insect cells to facilitate the isolation and purification of the recombinant virus. Examination of vAc-CMV-CHIKV26S-Rhir-EGFP and vAc-CMV-pH-CHIKV26S-Lir-EGFP showed that these recombinant baculoviruses could induce syncytium formation in insect cells. Unexpectedly, immunofluorescence assays revealed the expression of the E1 and E2 CHIKV structural proteins in insect cells infected with vAc-CMV-CHIKV26S-Rhir-EGFP. This result may imply that the CMV promoter can drive the transcription of CHIKV 26S in insect cells. There is also E1 and E2 expression in mammalian cells transduced with vAc-CMV-CHIKV26S-Rhir-EGFP and vAc-CMV-pH-CHIKV26S-Lir-EGFP. The expression of the E1 and E2 proteins in insect and mammalian cells was validated again by Western blot analysis. The vector construct with dual tandem promoters, i.e., the polyhedrin and CMV promoters, gives higher expression of the E1 and E2 CHIKV structural proteins than the construct with the CMV promoter only. Most of the E1 and E2 proteins expressed in mammalian cells were glycosylated. In the future, the structural proteins of CHIKV expressed in mammalian cells are expected to form virus-like particles, so they could be used as a vaccine against the Chikungunya virus.
Keywords: chikungunya virus, virus-like particle, vaccines, baculovirus expression vector system
Procedia PDF Downloads 423
3334 Reliability Assessment Using Full Probabilistic Modelling for Carbonation and Chloride Exposures, Including Initiation and Propagation Periods
Authors: Frank Papworth, Inam Khan
Abstract:
The fib Model Code 2020 has four approaches for design life verification. Historically, 'deemed-to-satisfy' provisions have been the principal approach, but these offer limited options for materials and covers. The use of the equation in fib's Model Code for Service Life Design to predict the time to corrosion initiation has become increasingly popular to justify further options, but in some cases the analysis approaches are incorrect. Even when the equations are computed using full probabilistic analysis, there are common mistakes. This paper reviews the work of recent fib commissions on implementing the service life model to assess the reliability of durability designs, including initiation and propagation periods. The paper goes on to consider the assessment of deemed-to-satisfy requirements in national codes and the influence of various options, including different steel types, various cement systems, and the quality of concrete and cover, on the reliability achieved. As modelling is based on achieving an agreed target reliability, consideration is given to how a project might determine the appropriate target reliability.
Keywords: chlorides, marine, exposure, design life, reliability, modelling
Procedia PDF Downloads 235
3333 Hearing Conservation Program for Vector Control Workers: Short-Term Outcomes from a Cluster-Randomized Controlled Trial
Authors: Rama Krishna Supramanian, Marzuki Isahak, Noran Naqiah Hairi
Abstract:
Noise-induced hearing loss (NIHL) is one of the most frequently recorded occupational diseases, despite being preventable. A Hearing Conservation Program (HCP) is designed to protect workers' hearing and prevent them from developing hearing impairment due to occupational noise exposure. However, there is still a lack of evidence regarding the effectiveness of such programs. The purpose of this study was to determine the effectiveness of an HCP in preventing or reducing audiometric threshold changes among vector control workers. The study adopts a cluster-randomized controlled trial design, with district health offices as the unit of randomization. Nine district health offices were randomly selected, and 183 vector control workers were randomized to the intervention or control group. The intervention included a safety and health policy, noise exposure assessment, noise control, distribution of appropriate hearing protection devices, a training and education program, and audiometric testing. The control group only underwent audiometric testing. The audiometric threshold changes observed in the intervention group showed improvement in the hearing threshold level for all frequencies except 500 Hz and 8000 Hz for the left ear. The hearing threshold changes range from 1.4 dB to 5.2 dB, with the largest improvement at higher frequencies, mainly 4000 Hz and 6000 Hz. Meanwhile, for the right ear, the mean hearing threshold level remained similar at 4000 Hz and 6000 Hz after three months of intervention. The HCP is effective in preserving the hearing of vector control workers involved in fogging activity, as well as in increasing their knowledge, attitude and practice regarding NIHL.
Keywords: adult, hearing conservation program, noise-induced hearing loss, vector control worker
Procedia PDF Downloads 166
3332 Composite Approach to Extremism and Terrorism Web Content Classification
Authors: Kolade Olawande Owoeye, George Weir
Abstract:
Terrorism and extremism activities on the internet are becoming among the most significant threats to national security because of their potential dangers. In response to this challenge, law enforcement and security authorities are actively implementing comprehensive measures to counter the use of the internet for terrorism. These measures require intelligence gathering via the internet, including real-time monitoring of potential websites used for recruitment and information dissemination, among other operations by extremist groups. However, with billions of active webpages, real-time monitoring of all webpages becomes almost impossible. To narrow down the search domain, efficient webpage classification techniques are needed. This research proposes a new approach, tagged the SentiPosit-based method, which combines features of the Posit-based method and the SentiStrength-based method for the classification of terrorism and extremism webpages. The experiment was carried out on 7,500 webpages obtained through the TENE web crawler by the International Cyber Crime Research Centre (ICCRC). The webpages were manually grouped into three classes, 'pro-extremist', 'anti-extremist' and 'neutral', with 2,500 webpages in each category. A supervised learning algorithm was then applied to the classified dataset in order to build the model. The results obtained were compared with existing classification methods using prediction accuracy and runtime. It was observed that our proposed hybrid approach produced better classification accuracy than existing approaches within a reasonable runtime.
Keywords: sentiposit, classification, extremism, terrorism
Procedia PDF Downloads 278
3331 Classification of Hyperspectral Image Using Mathematical Morphological Operator-Based Distance Metric
Authors: Geetika Barman, B. S. Daya Sagar
Abstract:
In this article, we propose a pixel-wise classification of hyperspectral images using mathematical-morphology-based distance metrics called the 'dilation distance' and the 'erosion distance'. This method involves measuring the spatial distance between the spectral features of a hyperspectral image across the bands. The key concept of the proposed approach is that the dilation distance is the maximum distance a pixel can be moved without changing its classification, whereas the erosion distance is the maximum distance a pixel can be moved before changing its classification. The spectral signature of the hyperspectral image carries unique class information and a unique shape for each class. This article demonstrates how easily the dilation and erosion distances can measure spatial distance compared with other approaches. This property is used to calculate the spatial distance between hyperspectral image feature vectors across the bands. The dissimilarity matrix is then constructed using both measures extracted from the feature spaces. The measured distance metric is used to distinguish between the spectral features of the various classes and to precisely separate each class. This is illustrated using both toy data and real datasets. Furthermore, we investigated the role of flat vs. non-flat structuring elements in capturing the spatial features of each class in the hyperspectral image. For validation, we compared the proposed approach to other existing methods and demonstrated empirically that the morphology-based distance metric classification provides competitive results and outperforms some of them.
Keywords: dilation distance, erosion distance, hyperspectral image classification, mathematical morphology
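One simple way to realize a dilation distance between two binary sets is to count how many unit dilations of one set are needed before it covers the other; the toy SciPy sketch below reflects this reading of the metric and is not the authors' exact definition:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilation_distance(a, b, max_steps=1000):
    """Number of unit dilations of binary set `a` needed until it covers `b`."""
    cur = a.copy()
    for steps in range(max_steps + 1):
        if (cur | ~b).all():          # b is a subset of the dilated a
            return steps
        cur = binary_dilation(cur)    # grow by one structuring-element step
    return np.inf

a = np.zeros((32, 32), bool); a[10:14, 10:14] = True
b = np.zeros((32, 32), bool); b[20:22, 20:22] = True
print(dilation_distance(a, b))        # grows with the spatial separation of the sets
```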
Procedia PDF Downloads 86
3330 Seismic Hazard Assessment of Tehran
Authors: Dorna Kargar, Mehrasa Masih
Abstract:
Due to its special geological and geographical conditions, Iran has always been exposed to various natural hazards. The earthquake is a natural hazard of random nature that can cause significant financial damage and casualties. It is a serious threat, especially in areas with active faults. Therefore, considering the population density in some parts of the country, locating and zoning high-risk areas is necessary and significant. In the present study, a seismic hazard assessment via probabilistic and deterministic methods has been carried out for Tehran, the capital of Iran, which is located in the Alborz-Azerbaijan seismotectonic province. The seismicity study covers a range of 200 km from the north of Tehran (X = 35.74° and Y = 51.37° in the LAT-LONG coordinate system) to identify the seismic sources and seismicity parameters of the study region. Geological maps at a scale of 1:250,000 are used to identify the seismic sources. In this study, we used the Kijko-Sellevoll method (1992) to estimate the seismicity parameters. The maximum likelihood estimation of the earthquake hazard parameters (maximum regional magnitude Mmax, activity rate λ, and the Gutenberg-Richter parameter b) from incomplete data files is extended to the case of uncertain magnitude values. By combining the seismicity and seismotectonic studies of the site, the acceleration that may occur with a given probability of exceedance during the useful life of the structure is calculated with probabilistic and deterministic methods. Applying the results of the seismicity and seismotectonic studies and proper weights in the attenuation relationships used, the maximum horizontal and vertical accelerations for return periods of 50, 475, 950 and 2,475 years are calculated. The horizontal peak ground acceleration on the seismic bedrock for the 50-, 475-, 950- and 2,475-year return periods is 0.12g, 0.30g, 0.37g and 0.50g, and the vertical peak ground acceleration is 0.08g, 0.21g, 0.27g and 0.36g, respectively.
Keywords: peak ground acceleration, probabilistic and deterministic, seismic hazard assessment, seismicity parameters
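For readers unfamiliar with how the seismicity parameters feed the hazard numbers, the sketch below combines a Gutenberg-Richter recurrence with a Poisson exceedance model; the a and b values are placeholders, not those estimated for Tehran:

```python
import math

# Gutenberg-Richter recurrence: log10 N(>= M) = a - b * M  (placeholder a, b values).
a_val, b_val = 4.0, 1.0
annual_rate = lambda M: 10 ** (a_val - b_val * M)   # events per year of magnitude >= M

def prob_exceedance(rate, years):
    """Poisson probability of at least one such event within `years`."""
    return 1.0 - math.exp(-rate * years)

# A 475-year return period corresponds to ~10% exceedance in a 50-year design life:
print(prob_exceedance(1 / 475, 50))            # ~0.10
print(prob_exceedance(annual_rate(6.5), 50))   # exceedance for an example magnitude
```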
Procedia PDF Downloads 69
3329 Crop Recommendation System Using Machine Learning
Authors: Prathik Ranka, Sridhar K, Vasanth Daniel, Mithun Shankar
Abstract:
With growing global food needs and climate uncertainties, informed crop choices are critical for increasing agricultural productivity. Here we propose a machine learning-based crop recommendation system to help farmers choose the most appropriate crops for their geographical regions and soil properties. We deploy algorithms like Decision Trees, Random Forests and Support Vector Machines on a broad dataset consisting of climatic factors, soil characteristics and historical crop yields to predict the best choice of crops. The approach first preprocesses the data by handling missing values, since such models cannot be trained reliably on incomplete data, and by normalizing the features to get the best results from the machine learning models. Model effectiveness is measured through performance metrics like accuracy, precision and recall. The resulting app provides a farmer-friendly dashboard through which farmers can enter their local conditions and receive individualized crop suggestions.
Keywords: crop recommendation, precision agriculture, crop, machine learning
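A compact scikit-learn sketch of such a preprocessing-plus-classification pipeline; the file name, column names, and Random Forest settings are illustrative assumptions, since the paper's dataset is not reproduced:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical schema: soil/climate features and the recommended crop label.
df = pd.read_csv("crop_data.csv")  # assumed file with columns as below
X = df[["N", "P", "K", "temperature", "humidity", "ph", "rainfall"]]
y = df["crop"]

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # handle missing values
    ("scale", StandardScaler()),                    # normalize features
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))  # accuracy/precision/recall
```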
Procedia PDF Downloads 14
3328 A Mixture Vine Copula Structures Model for Dependence Wind Speed among Wind Farms and Its Application in Reactive Power Optimization
Authors: Yibin Qiu, Yubo Ouyang, Shihan Li, Guorui Zhang, Qi Li, Weirong Chen
Abstract:
This paper explores the impact of high-dimensional dependencies of wind speed among wind farms on probabilistic optimal power flow. To obtain the reactive power optimization faster and more accurately, a mixture vine copula structure model combining K-means clustering, C-vine copulas and D-vine copulas is proposed in this paper, through which a more accurate correlation model can be obtained. Moreover, a Modified Backtracking Search Algorithm (MBSA) and the three-point estimate method are applied to the probabilistic optimal power flow. The validity of the mixture vine copula structure model and the MBSA is tested on the IEEE 30-node system with measured data from three adjacent wind farms in a certain area, and the results indicate the effectiveness of these methods.
Keywords: mixture vine copula structure model, three-point estimate method, the probability integral transform, modified backtracking search algorithm, reactive power optimization
Procedia PDF Downloads 248
3327 Classification of Red, Green and Blue Values from Face Images Using k-NN Classifier to Predict the Skin or Non-Skin
Authors: Kemal Polat
Abstract:
In this study, we estimate whether a pixel is skin by using the RGB values obtained from the camera and a k-nearest neighbor (k-NN) classifier. The dataset used in this study has an unbalanced distribution and a linearly non-separable structure; the problem can also be called a big data problem. The Skin dataset was taken from the UCI machine learning repository. As the classifier, we used the k-NN method to handle this big data problem, with the k value set to 1. To train and test the k-NN classifier, a 50-50% training-testing partition was used. As performance metrics, the TP rate, FP rate, precision, recall, F-measure and AUC values were used to evaluate the performance of the k-NN classifier. The results obtained are as follows: 0.999, 0.001, 0.999, 0.999, 0.999 and 1.00. As can be seen from these results, the proposed method can be used to predict whether an image pixel is skin or not.
Keywords: k-NN classifier, skin or non-skin classification, RGB values, classification
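The described setup maps almost directly onto scikit-learn; the sketch below assumes a local copy of the UCI Skin Segmentation file (the file name and column order are assumptions):

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Assumed UCI "Skin Segmentation" layout: three color channels plus a label column.
data = np.loadtxt("Skin_NonSkin.txt")                # assumed local copy of the dataset
X, y = data[:, :3], (data[:, 3] == 1).astype(int)    # assume label 1 = skin

# 50-50% training-testing partition, k = 1, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

p, r, f, _ = precision_recall_fscore_support(y_te, y_pred, average="binary")
print(p, r, f, roc_auc_score(y_te, y_pred))          # precision, recall, F-measure, AUC
```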
Procedia PDF Downloads 248
3326 The Influence of Design Complexity of a Building Structure on the Expected Performance
Authors: Ormal Lishi
Abstract:
This research presents a computationally efficient probabilistic method to assess the performance of compartmentation walls with similar Fire Resistance Levels (FRL) but varying complexity. Specifically, a masonry brick wall and a light-steel-framed (LSF) wall with comparable insulation performance are analyzed. A Monte Carlo technique employing Latin Hypercube Sampling (LHS) is utilized to quantify uncertainties and determine the probability of failure for both walls exposed to standard and parametric fires, following the ISO 834 and Eurocode guidelines. The results show that the probability of failure for the brick masonry wall under standard fire exposure is estimated at 4.8%, while that of the LSF wall is 7.6%. These probabilities decrease to 0.4% and 4.8%, respectively, when the walls are subjected to parametric fires. Notably, the more complex LSF wall exhibits higher variability in the predicted time to failure for specific criteria compared to the less complex brick wall, especially at higher temperatures. The proposed approach highlights the need for Probabilistic Risk Assessment (PRA) to accurately evaluate the reliability and safety levels of complex designs.
Keywords: design complexity, probability of failure, Monte Carlo analysis, compartmentation walls, insulation
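The LHS-based failure-probability estimate can be sketched generically with SciPy's qmc module; the limit-state function and parameter ranges below are placeholders, not the walls' thermal models:

```python
import numpy as np
from scipy.stats import qmc

def failure_prob(limit_state, bounds, n=10_000, seed=0):
    """Estimate P(limit_state(x) < 0) with Latin Hypercube Sampling."""
    sampler = qmc.LatinHypercube(d=len(bounds), seed=seed)
    unit = sampler.random(n)                 # n stratified points in [0, 1)^d
    lo, hi = np.array(bounds).T
    X = qmc.scale(unit, lo, hi)              # map to the parameter ranges
    g = np.apply_along_axis(limit_state, 1, X)
    return (g < 0).mean()

# Placeholder limit state: failure when resistance time minus demand time < 0.
g = lambda x: x[0] - x[1]
print(failure_prob(g, bounds=[(55.0, 75.0), (40.0, 70.0)]))
```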
Procedia PDF Downloads 64
3325 Neural Network Based Decision Trees Using Machine Learning for Alzheimer's Diagnosis
Authors: P. S. Jagadeesh Kumar, Tracy Lin Huan, S. Meenakshi Sundaram
Abstract:
Alzheimer's disease is a prevalent ailment for which no effective prevention or therapy has been established to date. With a probable explosion in the number of patients in the upcoming years, there is a great deal of interest in early detection of the disorder, which will conceivably lead to improved treatment outcomes. Complex changes in the brain are an observable sign of the disease and a unique indication of its genetic signature. Machine learning, alongside deep learning and decision trees, reinforces the ability to absorb characteristics from multi-dimensional data and thus simplifies the automatic classification of Alzheimer's disease. Extensive testing was designed and realized to train and assess the prospects of Alzheimer's disease classification built on machine learning advances. It was observed that decision trees trained with a deep neural network produced excellent results, parallel to related pattern classification approaches.
Keywords: Alzheimer's diagnosis, decision trees, deep neural network, machine learning, pattern classification
Procedia PDF Downloads 297
3324 Stator Short-Circuits Fault Diagnosis in Induction Motors Using Extended Park’s Vector Approach through the Discrete Wavelet Transform
Authors: K. Yahia, A. Ghoggal, A. Titaouine, S. E. Zouzou, F. Benchabane
Abstract:
This paper deals with the problem of stator fault diagnosis in induction motors. By using the discrete wavelet transform (DWT) for the analysis of the current Park's vector modulus (CPVM), inter-turn short-circuit faults can be diagnosed. The method is based on the decomposition of the CPVM signal, from which the wavelet approximation and detail coefficients are extracted. The energy evaluation of a detail of known bandwidth permits the definition of a fault severity factor (FSF). The method has been tested through the simulation of an induction motor using a mathematical model based on the winding-function approach. Simulation, as well as experimental, results show the effectiveness of the method.
Keywords: induction motors (IMs), inter-turn short-circuit diagnosis, discrete wavelet transform (DWT), current Park's vector modulus (CPVM)
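A sketch of the CPVM-plus-DWT signal chain with PyWavelets; the wavelet family, decomposition level, and the detail band chosen for the severity factor are assumptions, not the paper's exact settings:

```python
import numpy as np
import pywt

def park_vector_modulus(ia, ib, ic):
    """Concordia/Park transform of the three stator phase currents."""
    i_alpha = np.sqrt(2 / 3) * ia - np.sqrt(1 / 6) * (ib + ic)
    i_beta = np.sqrt(1 / 2) * (ib - ic)
    return np.hypot(i_alpha, i_beta)

def fault_severity_factor(cpvm, wavelet="db10", level=6, band=3):
    """Energy of one detail band relative to total energy (illustrative FSF)."""
    coeffs = pywt.wavedec(cpvm, wavelet, level=level)  # [cA_L, cD_L, ..., cD_1]
    energies = [np.sum(c ** 2) for c in coeffs]
    return energies[band] / sum(energies)
```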
Procedia PDF Downloads 563
3323 Factory Virtual Environment Development for Augmented and Virtual Reality
Authors: Michal Gregor, Jiri Polcar, Petr Horejsi, Michal Simon
Abstract:
Machine visualization is an area of interest with fast and progressive development. We present a method of machine visualization which is applicable in real industrial conditions according to current needs and demands. Real factory data were obtained in a newly built research plant. The methods described in this paper were validated on a case study. The input data were processed, and the virtual environment was created. The environment contains information about dimensions, structure, disposition and function. The hardware was enhanced by modular machines, prototypes and accessories. We added new functionalities and machines to the virtual environment. The user is able to interact with objects such as testing and cutting machines, and he/she can operate and move them. The proposed design consists of an environment with two degrees of freedom of movement. Users are in touch with items in the virtual world which are embedded into the real surroundings. This paper describes the development of the virtual environment. We compared and tested various options for factory layout virtualization and visualization. We analyzed the possibilities of using a 3D scanner in the layout acquisition process, and we also analyzed various virtual reality hardware visualization methods, such as stereoscopic (CAVE) projection, Head-Mounted Displays (HMD) and augmented reality (AR) projection provided by see-through glasses.
Keywords: augmented reality, spatial scanner, virtual environment, virtual reality
Procedia PDF Downloads 407