Search results for: sampling algorithms
4570 Face Recognition Using Body-Worn Camera: Dataset and Baseline Algorithms
Authors: Ali Almadan, Anoop Krishnan, Ajita Rattani
Abstract:
Facial recognition is a widely adopted technology in surveillance, border control, healthcare, banking services, and lately, mobile user authentication, with Apple introducing “Face ID” with the iPhone X. A lot of research has been conducted in the area of face recognition on datasets captured by surveillance cameras, DSLR cameras, and mobile devices. Recently, face recognition technology has also been deployed on body-worn cameras to keep officers safe, enable situational awareness, and provide evidence for trial. However, limited academic research has been conducted on this topic so far, and no publicly available dataset with a sufficient sample size exists. This paper aims to advance research in the area of face recognition using body-worn cameras. To this aim, the contribution of this work is two-fold: (1) collection of a dataset consisting of a total of 136,939 facial images of 102 subjects captured using body-worn cameras in indoor and daylight conditions, and (2) evaluation of various deep-learning architectures for face identification on the collected dataset. Experimental results suggest a maximum True Positive Rate (TPR) of 99.86% at a False Positive Rate (FPR) of 0.000, obtained by a SphereFace-based deep learning architecture in the daylight condition. The collected dataset and the baseline algorithms will promote further research and development. A download link for the dataset and the algorithms is available by contacting the authors.
Keywords: face recognition, body-worn cameras, deep learning, person identification
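As a quick illustration of the reported metric, the sketch below computes the TPR at a fixed FPR from face-matcher similarity scores; the score distributions are synthetic placeholders, not the paper's data.

```python
# Minimal sketch (not the authors' code): TPR at a fixed FPR from toy
# genuine/impostor similarity scores of a face matcher.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)    # scores for same-identity pairs (toy)
impostor = rng.normal(0.3, 0.1, 1000)   # scores for different-identity pairs (toy)

scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])

fpr, tpr, _ = roc_curve(labels, scores)
target_fpr = 0.001
print("TPR @ FPR<=0.1%:", tpr[fpr <= target_fpr].max())
```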
Procedia PDF Downloads 163
4569 Evaluation of a Risk Assessment Method for Fiber Emissions from Sprayed Asbestos-Containing Materials
Authors: Yukinori Fuse, Masato Kawaguchi
Abstract:
A quantitative risk assessment method was developed for fiber emissions from sprayed asbestos-containing materials (ACMs). In Japan, these risk assessments have relied on the subjective judgment of skilled engineers rather than quantitative measures, and such judgments may vary from one person to another. The closed sampling method presented here therefore aims to avoid this variability between assessments. The method was used to assess emissions from ACM sprayed in eleven buildings, and the results were compared with the subjective judgments of a skilled engineer; an approximate correlation was found between the two approaches. Despite the remaining uncertainties, the closed sampling method is useful for public health protection. We firmly believe that this method may find application in the management and renovation decisions of buildings using friable and sprayed ACM.
Keywords: asbestos, renovation, risk assessment, maintenance
Procedia PDF Downloads 379
4568 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles
Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi
Abstract:
Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine, and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses information on around 40,000 vehicles' specifications and operational environmental conditions, such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms linear regression (LR), K-nearest neighbor (KNN), and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements. The performance of the algorithms is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data and 4.2% on operational data.
Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing
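A minimal sketch of the evaluation protocol described, nested cross-validation over LR, KNN, and an ANN scored by mean relative prediction error; the synthetic data and hyperparameter grids are assumptions, not the study's.

```python
# Hedged sketch: nested CV comparison of three regressors, as in the abstract.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_predict
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
y = y - y.min() + 10.0  # shift positive so relative error is meaningful

models = {
    "LR": (LinearRegression(), {}),
    "KNN": (KNeighborsRegressor(), {"n_neighbors": [3, 5, 10]}),
    "ANN": (MLPRegressor(max_iter=2000, random_state=0),
            {"hidden_layer_sizes": [(32,), (64, 32)]}),
}

outer = KFold(n_splits=5, shuffle=True, random_state=0)
for name, (est, grid) in models.items():
    inner = GridSearchCV(est, grid, cv=3)            # inner loop tunes hyperparameters
    pred = cross_val_predict(inner, X, y, cv=outer)  # outer loop estimates error
    mre = np.mean(np.abs(pred - y) / y) * 100
    print(f"{name}: mean relative prediction error = {mre:.1f}%")
```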
Procedia PDF Downloads 180
4567 Endometrial Biopsy Curettage vs Endometrial Aspiration: Better Modality in Female Genital Tuberculosis
Authors: Rupali Bhatia, Deepthi Nair, Geetika Khanna, Seema Singhal
Abstract:
Introduction: Genital tract tuberculosis is a chronic disease (caused by reactivation of organisms from systemic distribution of Mycobacterium tuberculosis) that often presents with low-grade symptoms and non-specific complaints. Patients with genital tuberculosis are usually young women seeking workup and treatment for infertility. Infertility is the commonest presentation, due to involvement of the fallopian tubes and endometrium and to ovarian damage with poor ovarian volume and reserve. The diagnosis of genital tuberculosis is difficult because it is a silent invader of the genital tract. Since tissue cannot be obtained from the fallopian tubes, the diagnosis is made by isolation of bacilli from endometrial tissue obtained by endometrial biopsy curettage and/or aspiration. Problems are associated with the sampling technique as well as the diagnostic modality, due to the lack of adequate sample volumes and the segregation of the sample for various diagnostic tests, resulting in a non-uniform distribution of microorganisms. Moreover, the lack of an efficient sampling technique universally applicable to all specific diagnostic tests contributes to the diagnostic challenges. Endometrial sampling plays a key role in the accurate diagnosis of female genital tuberculosis. It may be done by two methods, namely endometrial curettage and endometrial aspiration. Both have their own limitations: curettage picks up a strip of the endometrium from one of the walls of the uterine cavity, including the tubal ostial areas, whereas aspiration obtains total tissue with exfoliated cells present in the secretory fluid of the endometrial cavity. Further, the sparse and uneven distribution of the bacilli remains a major factor contributing to the limitations of both techniques. The sample obtained by either technique is subjected to histopathological examination, AFB staining, culture, and PCR. Aim: Comparison of the sampling techniques, namely endometrial biopsy curettage and endometrial aspiration, using the different laboratory methods of histopathology, cytology, microbiology, and molecular biology. Method: In a hospital-based observational study, 75 Indian females suspected of genital tuberculosis were selected on the basis of inclusion criteria. The women underwent endometrial tissue sampling using Novak's biopsy curette and Karman's cannula. One part of the specimen obtained was sent in formalin solution for histopathological testing, and another part was sent in normal saline for acid-fast bacilli smear, culture, and polymerase chain reaction. The results so obtained were correlated using the coefficient of correlation and the chi-square test. Result: Concordance of results showed moderate agreement between the two sampling techniques. Among HPE, AFB, and PCR, the maximum sensitivity was observed for PCR, though the specificity was not as high as that of the other techniques. Conclusion: Statistically, no significant difference was observed between the results obtained by the two sampling techniques. Therefore, one may use either endometrial aspiration (EA) or endometrial biopsy curettage (EB) to obtain endometrial samples and avoid multiple sampling, as both techniques are equally efficient in diagnosing genital tuberculosis by HPE, AFB, culture, or PCR.
Keywords: acid fast bacilli (AFB), histopathology examination (HPE), polymerase chain reaction (PCR), endometrial biopsy curettage
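To make the concordance analysis concrete, here is a hedged sketch, on invented per-patient results, of the kind of agreement statistics the abstract mentions (Cohen's kappa for agreement, chi-square for association); the actual study used 75 patients and its own data.

```python
# Illustrative sketch with hypothetical detection results (1 = TB detected).
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

biopsy = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])    # invented results
aspirate = np.array([1, 0, 1, 0, 0, 0, 1, 0, 0, 1])  # invented results

print("Cohen's kappa:", cohen_kappa_score(biopsy, aspirate))

# 2x2 contingency table of the two techniques' outcomes
table = np.array([[np.sum((biopsy == 1) & (aspirate == 1)),
                   np.sum((biopsy == 1) & (aspirate == 0))],
                  [np.sum((biopsy == 0) & (aspirate == 1)),
                   np.sum((biopsy == 0) & (aspirate == 0))]])
chi2, p, _, _ = chi2_contingency(table)
print("chi-square p-value:", p)
```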
Procedia PDF Downloads 328
4566 Genomic Sequence Representation Learning: An Analysis of K-Mer Vector Embedding Dimensionality
Authors: James Jr. Mashiyane, Risuna Nkolele, Stephanie J. Müller, Gciniwe S. Dlamini, Rebone L. Meraba, Darlington S. Mapiye
Abstract:
When performing language tasks in natural language processing (NLP), the dimensionality of word embeddings is chosen either ad hoc or by optimizing the Pairwise Inner Product (PIP) loss. The PIP loss is a metric that measures the dissimilarity between word embeddings, and it is obtained through matrix perturbation theory by utilizing the unitary invariance of word embeddings. Unlike in natural language processing, in genomics, and especially in genome sequence processing, there is no notion of a “word”; rather, there are sequence substrings of length k called k-mers. K-mer sizes matter, and they vary depending on the goal of the task at hand. The dimensionality of word embeddings in NLP has been studied using matrix perturbation theory and the PIP loss. In this paper, the sufficiency and reliability of applying word-embedding algorithms to various genomic sequence datasets are investigated to understand the relationship between the k-mer size and the embedding dimension. This is done by studying the scaling capability of three embedding algorithms, namely Latent Semantic Analysis (LSA), Word2Vec, and Global Vectors (GloVe), with respect to the k-mer size. Utilising the PIP loss as a metric to train embeddings on different datasets, we also show that Word2Vec outperforms LSA and GloVe in computing accurate embeddings as both the k-mer size and the vocabulary increase. Finally, the shortcomings of natural language processing embedding algorithms in performing genomic tasks are discussed.
Keywords: word embeddings, k-mer embedding, dimensionality reduction
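An illustrative sketch of the k-mer analogue of word tokenization, overlapping k-mers feeding a Word2Vec model via gensim; the toy sequences, k, and embedding dimension are placeholders, not the paper's settings.

```python
# Sketch: tokenize DNA sequences into overlapping k-mers, train Word2Vec.
from gensim.models import Word2Vec

def kmers(seq, k):
    """Split a sequence into overlapping substrings of length k."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

sequences = ["ATGCGTACGTTAGC", "GGCATTACGATCGA"]  # toy sequences
k = 4
corpus = [kmers(s, k) for s in sequences]

model = Word2Vec(corpus, vector_size=64, window=5, min_count=1, sg=1)
print(model.wv["ATGC"][:5])  # first entries of one k-mer's embedding vector
```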
Procedia PDF Downloads 140
4565 Application of Adaptive Neural Network Algorithms for Determination of Salt Composition of Waters Using Laser Spectroscopy
Authors: Tatiana A. Dolenko, Sergey A. Burikov, Alexander O. Efitorov, Sergey A. Dolenko
Abstract:
In this study, a comparative analysis of approaches associated with the use of neural network algorithms for the effective solution of a complex inverse problem – identifying and determining the individual concentrations of inorganic salts in multicomponent aqueous solutions from their Raman scattering spectra – is performed. It is shown that the application of artificial neural networks determines the concentration of each salt with an average accuracy no worse than 0.025 M. The results of a comparative analysis of input data compression methods are presented. It is demonstrated that the use of uniform aggregation of input features decreases the error of determination of the individual component concentrations by 16-18% on average.
Keywords: inverse problems, multi-component solutions, neural networks, Raman spectroscopy
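The input compression step named above, uniform aggregation of input features, amounts to averaging adjacent spectral channels into coarse bins before the network; a small sketch on random stand-in spectra:

```python
# Sketch of uniform aggregation of spectral channels (toy data, not real spectra).
import numpy as np

def aggregate_uniform(spectra, bin_size):
    """Average each run of bin_size adjacent channels into one feature."""
    n, m = spectra.shape
    m_trim = (m // bin_size) * bin_size          # drop channels that don't fill a bin
    return spectra[:, :m_trim].reshape(n, -1, bin_size).mean(axis=2)

spectra = np.random.rand(100, 1024)   # 100 Raman spectra, 1024 channels (toy)
compressed = aggregate_uniform(spectra, 8)
print(compressed.shape)               # (100, 128)
```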
Procedia PDF Downloads 529
4564 Documents Emotions Classification Model Based on TF-IDF Weighting Measure
Authors: Amr Mansour Mohsen, Hesham Ahmed Hassan, Amira M. Idrees
Abstract:
Emotion classification of text documents is applied to reveal whether a document expresses a particular emotion of its writer. While different supervised methods have previously been used for emotion classification of documents, this research presents a novel model that supports the classification algorithms with the TF-IDF measure for more accurate results. Different experiments have been applied to demonstrate the applicability of the proposed model. The model succeeds in raising the accuracy, according to the adopted metrics (precision, recall, and F-measure), by refining the lexicon, integrating lexicons from different perspectives, and applying the TF-IDF weighting measure over the classifying features. The proposed model has also been compared with other research to demonstrate its competence in improving accuracy.
Keywords: emotion detection, TF-IDF, WEKA tool, classification algorithms
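A minimal sketch of TF-IDF weighting feeding a supervised classifier; the study itself used the WEKA tool, so scikit-learn and the toy texts below are stand-ins.

```python
# Sketch: TF-IDF features + a classifier, on invented two-emotion toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["I am so happy today", "This is terrifying",
         "What a joyful day", "I feel scared"]
labels = ["joy", "fear", "joy", "fear"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["such a happy moment"]))  # expected: ['joy']
```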
Procedia PDF Downloads 484
4563 Electroencephalogram Based Alzheimer Disease Classification Using Machine and Deep Learning Methods
Authors: Carlos Roncero-Parra, Alfonso Parreño-Torres, Jorge Mateo Sotos, Alejandro L. Borja
Abstract:
In this research, different methods based on machine/deep learning algorithms are presented for the classification and diagnosis of patients with mental disorders such as Alzheimer's disease. For this purpose, the signals obtained from 32 unipolar electrodes recorded by non-invasive EEG were examined, and their basic properties were obtained. More specifically, different well-known machine learning based classifiers have been used, i.e., support vector machine (SVM), Bayesian linear discriminant analysis (BLDA), decision tree (DT), Gaussian Naïve Bayes (GNB), K-nearest neighbor (KNN), and convolutional neural network (CNN). A total of 668 patients from five different hospitals were studied in the period from 2011 to 2021. The best accuracy obtained was around 93% in both ADM and ADA classifications. It can be concluded that such a classification will enable the training of algorithms that can be used to identify and classify different mental disorders with high accuracy.
Keywords: Alzheimer's disease, machine learning, deep learning, EEG
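A hedged sketch of the comparison protocol, several standard classifiers cross-validated on feature vectors; the synthetic features below stand in for properties extracted from the 32-electrode EEG recordings.

```python
# Sketch: cross-validated comparison of standard classifiers on toy features.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=668, n_features=32, random_state=0)

for name, clf in [("SVM", SVC()), ("GNB", GaussianNB()),
                  ("DT", DecisionTreeClassifier()), ("KNN", KNeighborsClassifier())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```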
Procedia PDF Downloads 129
4562 Vibroacoustic Modulation with Chirp Signal
Authors: Dong Liu
Abstract:
By sending a high-frequency probe wave and a low-frequency pump wave into a specimen, the vibroacoustic modulation (VAM) method evaluates a defect's severity according to the modulation index of the received signal. Many studies have experimentally demonstrated the high sensitivity of the modulation index to tiny contact-type defects. However, it has also been found that the modulation index is strongly affected by the frequencies of the probe and pump waves. Therefore, the chirp signal has been introduced into the VAM method, since it can assess multiple frequencies in a relatively short time and thereby enhance the robustness of the method. Consequently, the signal processing needs to be modified accordingly. Various studies have utilized different algorithms, or combinations of algorithms, to process VAM signals under chirp excitation. These signal processing methods were compared and used to process VAM signals acquired from steel samples.
Keywords: vibroacoustic modulation, nonlinear acoustic modulation, nonlinear acoustic NDT&E, signal processing, structural health monitoring
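A toy sketch of the excitation described, a low-frequency pump plus a swept (chirp) probe, with simple amplitude modulation standing in for the defect nonlinearity; all parameter values are invented, not the paper's.

```python
# Toy VAM excitation: pump + chirp probe, with synthetic nonlinear mixing.
import numpy as np
from scipy.signal import chirp

fs = 200_000                                      # sampling rate, Hz
t = np.arange(0, 0.2, 1 / fs)
pump = np.sin(2 * np.pi * 150 * t)                # low-frequency pump wave
probe = chirp(t, f0=10_000, f1=30_000, t1=t[-1])  # probe swept over 10-30 kHz
received = probe * (1 + 0.05 * pump)              # stand-in for defect modulation

# Spectrum of the received signal shows sidebands around the probe band
spectrum = np.abs(np.fft.rfft(received * np.hanning(len(received))))
freqs = np.fft.rfftfreq(len(received), 1 / fs)
print("spectral peak at", freqs[np.argmax(spectrum)], "Hz")
```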
Procedia PDF Downloads 99
4561 Business and Psychological Principles Integrated into Automated Capital Investment Systems through Mathematical Algorithms
Authors: Cristian Pauna
Abstract:
With 2020 only a few steps away, investment in financial markets is a common activity nowadays. In the electronic trading environment, automated investment software has become a major part of the business intelligence system of any modern financial company. Investment decisions today are assisted and/or made automatically by computers using mathematical algorithms. The complexity of these algorithms requires computer assistance in the investment process. This paper will present several investment strategies that can be automated with algorithmic trading for the Deutscher Aktienindex DAX30. It was found that, based on several price-action mathematical models used for high-frequency trading, some investment strategies can be optimized and improved for automated investment with good results. This paper will present the way to automate these investment decisions. Automated signals will be built using all of these strategies. Three major types of investment strategies were found in this study. The types are separated by the target length and by the exit strategy used. The exit decisions will also be automated, and the paper will present the specifics of each investment type. A comparative study is also included in order to reveal the differences between the strategies. Based on these results, the profit and the capital exposure will be compared and analyzed in order to qualify the investment methodologies presented and to compare them with any other investment system. In conclusion, some major investment strategies will be revealed and compared in order to be considered for inclusion in any automated investment system.
Keywords: algorithmic trading, automated investment systems, limit conditions, trading principles, trading strategies
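The paper's price-action models are not given in the abstract, so the sketch below uses a generic moving-average crossover purely to illustrate what an automated entry/exit signal looks like; it is not the authors' strategy, and the prices are random.

```python
# Generic illustration only: a crossover-based automated long/flat signal.
import numpy as np

def signals(prices, fast=10, slow=30):
    """Return 1 (long) or 0 (flat) per bar from a moving-average crossover."""
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    fast_ma = fast_ma[len(fast_ma) - len(slow_ma):]   # align both to the last bar
    return (fast_ma > slow_ma).astype(int)

prices = np.cumsum(np.random.default_rng(1).standard_normal(300)) + 100
pos = signals(prices)
print("bars long:", pos.sum(), "of", len(pos))
```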
Procedia PDF Downloads 194
4560 Intellectual Property in Digital Environment
Authors: Balamurugan L.
Abstract:
Artificial intelligence (AI) and its applications in Intellectual Property Rights (IPR) have been growing significantly in recent years. In the last couple of years, AI tools for patent research and patent analytics have become well established in terms of the accuracy of references and the representation of identified patent insights. However, AI tools for patent prosecution and patent litigation are still at a nascent stage, and there may be significant potential if this market is explored further. Our research is primarily focused on identifying potential whitespaces and schematic algorithms to automate the patent prosecution and patent litigation processes of intellectual property. The schematic algorithms may assist leading AI tool developers in exploring such opportunities in the field of intellectual property. Our research is also focused on identifying the pitfalls of AI, for example, information security and its impact on IPR, and potential remediations to sustain IPR in the digital environment.
Keywords: artificial intelligence, patent analytics, patent drafting, patent litigation, patent prosecution, patent research
Procedia PDF Downloads 67
4559 EEG-Based Screening Tool for School Student's Brain Disorders Using Machine Learning Algorithms
Authors: Abdelrahman A. Ramzy, Bassel S. Abdallah, Mohamed E. Bahgat, Sarah M. Abdelkader, Sherif H. ElGohary
Abstract:
Attention-Deficit/Hyperactivity Disorder (ADHD), epilepsy, and autism affect millions of children worldwide, many of whom remain undiagnosed despite the fact that all of these disorders are detectable in early childhood. Late diagnosis can cause severe problems due to delayed treatment and to the general misconceptions and lack of awareness surrounding these disorders. Moreover, electroencephalography (EEG) has played a vital role in the assessment of neural function in children. Therefore, quantitative EEG measurement will be utilized as a tool for evaluating patients who may have ADHD, epilepsy, or autism. We propose a screening tool that uses EEG signals and machine learning algorithms to detect these disorders at an early age in an automated manner. As a first step, the proposed classifiers were applied to epilepsy and provided an accuracy of approximately 97% using SVM, Naïve Bayes, and decision tree, and 98% using KNN, which gives hope for the work yet to be conducted.
Keywords: ADHD, autism, epilepsy, EEG, SVM
Procedia PDF Downloads 192
4558 An Algorithm to Depreciate the Energy Utilization Using a Bio-Inspired Method in Wireless Sensor Network
Authors: Navdeep Singh Randhawa, Shally Sharma
Abstract:
The Wireless Sensor Network is an autonomous technology emerging at a fast pace in the current scenario. This technology faces a number of challenges, and energy management is one of them, since it has a huge impact on the network lifetime. To conserve energy, different types of routing protocols have flourished. Classical routing protocols are no longer adequate in complicated environments. Hence, in the field of routing, intelligent algorithms based on natural systems are a turning point for Wireless Sensor Networks. These nature-based algorithms are quite efficient at handling the challenges of WSNs, as they are capable of achieving locally and globally optimal solutions in complex environments. The main aim of this paper is therefore to develop a routing algorithm based on a swarm intelligence technique to enhance the performance of the Wireless Sensor Network.
Keywords: wireless sensor network, routing, swarm intelligence, MPRSO
Procedia PDF Downloads 353
4557 Hybrid Hierarchical Clustering Approach for Community Detection in Social Network
Authors: Radhia Toujani, Jalel Akaichi
Abstract:
Social networks generally present a hierarchy of communities. To determine these communities and the relationships between them, detection algorithms must be applied. Most of the existing algorithms proposed for hierarchical community identification are based on either agglomerative clustering or divisive clustering. In this paper, we present a hybrid hierarchical clustering approach for community detection based on both bottom-up and top-down clustering. Our approach provides a more relevant community structure than hierarchical methods that consider only divisive or agglomerative clustering to identify communities. Moreover, we performed comparative experiments to enhance the quality of the clustering results and to show the effectiveness of our algorithm.
Keywords: agglomerative hierarchical clustering, community structure, divisive hierarchical clustering, hybrid hierarchical clustering, opinion mining, social network, social network analysis
Procedia PDF Downloads 366
4556 Internet of Things: Route Search Optimization Applying Ant Colony Algorithm and Theory of Computer Science
Authors: Tushar Bhardwaj
Abstract:
The Internet of Things (IoT) possesses a dynamic network in which nodes (mobile devices) are added and removed constantly and randomly; hence, the traffic distribution in the network is quite variable and irregular. A basic but very important task in any network is route searching. There are many conventional route searching algorithms, such as link-state and distance-vector algorithms, but they are restricted to static point-to-point network topologies. In this paper, we propose a model that uses the Ant Colony Algorithm for route searching. It is dynamic in nature and has a positive feedback mechanism well suited to route searching. We have also embedded the concept of Non-Deterministic Finite Automata (NDFA) minimization to reduce the network and increase performance. Results show that the Ant Colony Algorithm gives the shortest path from the source to the destination node, and NDFA minimization reduces the broadcasting storm effectively.
Keywords: routing, ant colony algorithm, NDFA, IoT
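A compact sketch of ant-colony route search on a toy weighted graph, showing the positive-feedback (pheromone) mechanism referred to above; the graph, ant count, and evaporation rate are invented.

```python
# Sketch of ant colony route search: ants walk probabilistically, deposit
# pheromone on good paths (positive feedback), pheromone evaporates over time.
import random

graph = {                       # node -> {neighbor: distance}
    "A": {"B": 2, "C": 5},
    "B": {"C": 2, "D": 4},
    "C": {"D": 1},
    "D": {},
}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}

def walk(src, dst):
    path, node = [src], src
    while node != dst:
        nbrs = list(graph[node])
        # prefer edges with more pheromone and shorter distance
        weights = [pheromone[(node, n)] / graph[node][n] for n in nbrs]
        node = random.choices(nbrs, weights)[0]
        path.append(node)
    return path

def length(path):
    return sum(graph[u][v] for u, v in zip(path, path[1:]))

random.seed(0)
best = None
for _ in range(200):                            # 200 ants
    p = walk("A", "D")
    for edge in zip(p, p[1:]):                  # deposit pheromone
        pheromone[edge] += 1.0 / length(p)
    for edge in pheromone:                      # evaporation
        pheromone[edge] *= 0.99
    if best is None or length(p) < length(best):
        best = p
print("best path:", best, "length:", length(best))  # A-B-C-D, length 5
```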
Procedia PDF Downloads 444
4555 Wally Feelings Test: Validity and Reliability Study
Authors: Gökhan Kayili, Ramazan Ari
Abstract:
This research aimed to adapt the Wally Feelings Test for Turkish children and to perform reliability and validity analyses of the test. The sample was composed of 699 three- to five-year-old Turkish preschoolers attending public and private nursery schools. The schools were selected with a simple random sampling method, considering different socioeconomic conditions and different central districts in Konya. In order to determine the reliability of the Wally Feelings Test, internal consistency (KR-20), split-half reliability, and test-retest reliability analyses were performed. During the validation process, construct validity, content/scope validity, and concurrent/criterion validity were assessed. The examination of validity and reliability shows that the Wally Feelings Test is a valid and reliable instrument for evaluating three- to five-year-old Turkish children's emotion understanding skills.
Keywords: reliability, validity, wally feelings test, social sciences
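For reference, the KR-20 internal-consistency coefficient mentioned above can be computed directly from dichotomous item responses; a sketch on invented data (rows = children, columns = test items):

```python
# Sketch: Kuder-Richardson 20 on invented 0/1 item responses.
import numpy as np

def kr20(X):
    k = X.shape[1]                       # number of items
    p = X.mean(axis=0)                   # proportion answering each item correctly
    q = 1 - p
    var_total = X.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / var_total)

rng = np.random.default_rng(0)
ability = rng.normal(size=(50, 1))       # shared trait makes items correlate
X = (ability + rng.normal(size=(50, 10)) > 0).astype(int)
print("KR-20:", round(kr20(X), 3))
```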
Procedia PDF Downloads 540
4554 Preparedness for Microbial Forensics Evidence Collection on Best Practice
Authors: Victor Ananth Paramananth, Rashid Muniginin, Mahaya Abd Rahman, Siti Afifah Ismail
Abstract:
Safety issues, scene protection, and appropriate evidence collection must be handled at any biocrime scene. In any bio-incident or biocrime event, one or more scenes must be cordoned off for investigation. Evidence collection is critical in determining the type of microbe or toxin, its lethality, and its source. Consequently, a proper sampling method is required from the start of the investigation. Since evidence at a crime scene may be in liquid, viscous, or powder form, crime scene officers have difficulty determining which tools to use for sampling. The most significant challenges for the crime scene officer are deciding where to obtain samples, the best sampling method, and the sample sizes needed. To maximize sample collection, appropriate tools for the sampling methods are necessary. This study aims to assist the crime scene officer in collecting liquid, viscous, and powder biological samples in sufficient quantity while preserving sample quality. In this research, observational tests of sample collection using liquid, viscous, and powder samples were performed under UV light to assess quantity and quality. The density of the light emission varies with the collection method and sample type. The best tools for collecting sufficient amounts of liquid, viscous, and powdered samples can be identified by observing the UV light. Instead of active microorganisms, an invisible powder was used to assess the sufficiency of sample collection during a crime scene investigation using various collection tools. The liquid, powdered, and viscous samples collected using different tools were analyzed using Fourier transform infrared - attenuated total reflection (FTIR-ATR) spectroscopy. FTIR spectroscopy is commonly used for the rapid discrimination, classification, and identification of intact microbial cells. The liquid, viscous, and powdered samples collected using various tools were successfully observed using UV light. Furthermore, FTIR-ATR analysis showed that the collected samples were sufficient in quantity while preserving their quality.
Keywords: biological sample, crime scene, collection tool, UV light, forensic
Procedia PDF Downloads 196
4553 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different uncertainty representation approaches result in different outputs. Some approaches might result in a better estimation of the system response than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first order error analysis. In the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable; it has a fixed functional form and known coefficients, and this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter that might be aleatory, but for which sufficient data might not be available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties. Each of the parameters of the random variable is an unknown element of a known interval, and this uncertainty is reducible. From the study, it is observed that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive. That is why the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
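A minimal sketch of double-loop sampling-based propagation for the distributional p-box case (type iii): the outer loop samples the epistemic interval of a distribution parameter, and the inner loop propagates the aleatory variability. The model and interval bounds below are invented.

```python
# Double-loop (nested) sampling for a p-box: epistemic outer, aleatory inner.
import numpy as np

def model(x):
    return x ** 2 + 1.0                     # placeholder system response

rng = np.random.default_rng(0)
mu_interval = (0.5, 1.5)                    # epistemic: poorly known mean
sigma = 0.2                                 # aleatory: known spread

stats = []
for _ in range(100):                        # outer (epistemic) loop
    mu = rng.uniform(*mu_interval)
    x = rng.normal(mu, sigma, size=1000)    # inner (aleatory) loop
    y = model(x)
    stats.append((y.mean(), np.percentile(y, 95)))

means, p95s = zip(*stats)
print("mean response range: [%.3f, %.3f]" % (min(means), max(means)))
print("95th percentile range: [%.3f, %.3f]" % (min(p95s), max(p95s)))
```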
Procedia PDF Downloads 241
4552 Unravelling the Impact of Job Resources: Alleviating Job-Related Anxiety to Foster Employee Creativity Within the Oil and Gas Industry
Authors: Nana Kojo Ayimadu Baafi, Kwesi Amponsah-Tawiah
Abstract:
The study investigated the relationship between job-related anxiety and employee creativity. It further explored the role of job resources in moderating the relationship between job-related anxiety and employee creativity within the oil and gas industry. The study utilized a cross-sectional survey design. A non-probability sampling technique, specifically convenience sampling, was used to sample 1,200 participants from multiple companies within the oil and gas industry. The collected data were analyzed using regression analysis, with the PROCESS macro for the moderation analysis. The study empirically demonstrated a significant negative relationship between job-related anxiety and employee creativity. It also showed that job resources moderated the relationship between job-related anxiety and creativity. This study addresses gaps in previous studies by highlighting the significance of job resources in how job-related anxiety affects employee creativity.
Keywords: employee creativity, job-related anxiety, job resources, human resources
Procedia PDF Downloads 50
4551 A Hybrid Classical-Quantum Algorithm for Boundary Integral Equations of Scattering Theory
Authors: Damir Latypov
Abstract:
A hybrid classical-quantum algorithm to solve boundary integral equations (BIE) arising in problems of electromagnetic and acoustic scattering is proposed. The quantum speed-up is due to a Quantum Linear System Algorithm (QLSA). The original QLSA of Harrow et al. provides an exponential speed-up over the best-known classical algorithms, but only in the case of sparse systems. Due to the non-local nature of integral operators, however, the matrices arising from the discretization of BIEs are dense. A QLSA for dense matrices was introduced in 2017. Its runtime as a function of the system's size N is bounded by O(√Npolylog(N)). The runtime of the best-known classical algorithm for an arbitrary dense matrix scales as O(N².³⁷³). Instead of the exponential speed-up available for sparse matrices, here we have only a polynomial one. Nevertheless, the sufficiently high power of this polynomial, ~4.7, should make the QLSA an appealing alternative. Unfortunately for the QLSA, the asymptotic separability of the Green's function leads to high compressibility of the BIE matrices. Classical fast algorithms such as the Multilevel Fast Multipole Method (MLFMM) take advantage of this fact and reduce the runtime to O(Nlog(N)), i.e., the QLSA is only quadratically faster than the MLFMM. To be truly impactful for computational electromagnetics and acoustics engineers, the QLSA must provide a more substantial advantage than that. We propose a computational scheme that combines elements of the classical fast algorithms with the QLSA to achieve the required performance.
Keywords: quantum linear system algorithm, boundary integral equations, dense matrices, electromagnetic scattering theory
Procedia PDF Downloads 156
4550 Comparison between Continuous Genetic Algorithms and Particle Swarm Optimization for Distribution Network Reconfiguration
Authors: Linh Nguyen Tung, Anh Truong Viet, Nghien Nguyen Ba, Chuong Trinh Trong
Abstract:
This paper proposes a reconfiguration methodology based on a continuous genetic algorithm (CGA) and particle swarm optimization (PSO) for minimizing active power loss and voltage deviation. Both algorithms are adapted using graph theory to generate feasible individuals, and a modified crossover is used for the continuous variables of the CGA. To demonstrate the performance and effectiveness of the proposed methods, a comparative analysis of the CGA and PSO for network reconfiguration on 33-node and 119-bus radial distribution systems is presented. The simulation results show that both CGA and PSO can be used for distribution network reconfiguration, and that the CGA outperformed PSO, with a significantly higher success rate in finding the optimal distribution network configuration.
Keywords: distribution network reconfiguration, particle swarm optimization, continuous genetic algorithm, power loss reduction, voltage deviation
Procedia PDF Downloads 190
4549 Low Cost Surface Electromyographic Signal Amplifier Based on Arduino Microcontroller
Authors: Igor Luiz Bernardes de Moura, Luan Carlos de Sena Monteiro Ozelim, Fabiano Araujo Soares
Abstract:
The development of a low-cost acquisition system for S-EMG signals that is reliable, comfortable for the user, and highly mobile is a relevant proposition in the modern biomedical engineering scenario. In this study, the sampling capacity of the Arduino's Atmel ATmega328 microcontroller, with its 10-bit A/D converter, and its capability to reconstruct a surface electromyography signal are analyzed. An electronic circuit to capture the signal through two differential channels was designed, and signals from the biceps brachii of a healthy 21-year-old man were acquired to test the system prototype. ARV, MDF, MNF, and RMS estimators were used to compare the acquired signals with physiological values. The Arduino was configured with a sampling frequency of 1.5 kHz for each channel, and the tests with the designed circuit yielded an SNR of 20.57 dB.
Keywords: electromyography, Arduino, low-cost, Atmel ATmega328 microcontroller
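For reference, the four estimators named above can be computed offline from the sampled signal; a sketch on a synthetic 1.5 kHz stand-in signal (ARV and RMS are amplitude-domain, MNF and MDF are spectral):

```python
# Sketch: ARV, RMS, MNF, MDF on a toy signal sampled at 1.5 kHz.
import numpy as np

fs = 1500
x = np.random.default_rng(0).standard_normal(3 * fs)  # toy 3 s "EMG" signal

arv = np.mean(np.abs(x))                   # average rectified value
rms = np.sqrt(np.mean(x ** 2))             # root mean square

psd = np.abs(np.fft.rfft(x)) ** 2          # periodogram power spectrum
f = np.fft.rfftfreq(len(x), 1 / fs)
mnf = np.sum(f * psd) / np.sum(psd)        # mean frequency
cum = np.cumsum(psd)
mdf = f[np.searchsorted(cum, cum[-1] / 2)] # median frequency (half total power)

print(f"ARV={arv:.3f}  RMS={rms:.3f}  MNF={mnf:.1f} Hz  MDF={mdf:.1f} Hz")
```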
Procedia PDF Downloads 368
4548 Determining the Performance of Data Mining Algorithms in Determining the Influential Factors and Prediction of Ischemic Stroke: A Comparative Study in the Southeast of Iran
Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard
Abstract:
Ischemic stroke is one of the most common causes of disability and mortality; it is the fourth leading cause of death in the world, and the third according to some sources. Only one-third of patients with ischemic stroke fully recover, one-third end up with permanent disability, and one-third face death. Thus, the use of predictive models for stroke has a vital role in reducing the complications and costs related to this disease. The aim of this study was therefore to identify the influential factors and predict ischemic stroke with the help of DM methods. The present study was a descriptive-analytic study. The population was 213 cases from among patients referred to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist with confirmed validity and reliability. This study used decision tree data mining algorithms for modeling. Data analysis was performed using SPSS 19 and SPSS Modeler 14.2. The comparison of algorithms showed that the CHAID algorithm, with 95.7% accuracy, has the best performance. Moreover, based on the model created, factors such as anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most influential factors in stroke. Decision tree algorithms, especially the CHAID algorithm, have acceptable precision and predictive ability to determine the factors affecting ischemic stroke. Thus, predictive models created with this algorithm can play a significant role in decreasing the mortality and disability caused by ischemic stroke.
Keywords: data mining, ischemic stroke, decision tree, Bayesian network
Procedia PDF Downloads 176
4547 Implementation and Performance Analysis of Data Encryption Standard and RSA Algorithm with Image Steganography and Audio Steganography
Authors: S. C. Sharma, Ankit Gambhir, Rajeev Arya
Abstract:
In today’s era, data security is an important concern and one of the most demanding issues, because it is essential for people using online banking, e-shopping, reservations, etc. The two major techniques used for secure communication are cryptography and steganography. Cryptographic algorithms scramble the data so that an intruder will not be able to retrieve it; steganography, however, hides the data in some cover file so that the presence of communication is concealed. This paper presents the implementation of the Rivest-Shamir-Adleman (RSA) algorithm with image and audio steganography and of the Data Encryption Standard (DES) algorithm with image and audio steganography. The coding for both algorithms has been done using MATLAB, and it is observed that the combined techniques performed better than the individual techniques. The risk of unauthorized access is alleviated to a certain extent by using these techniques. These techniques could be used in banks, intelligence agencies such as RAW, etc., where highly confidential data is transferred. Finally, comparisons of the two techniques are also given in tabular form.
Keywords: audio steganography, data security, DES, image steganography, intruder, RSA, steganography
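The steganography half of such a scheme can be illustrated with a generic least-significant-bit embed/extract pair; the authors' implementation is in MATLAB, so the Python below is a hedged stand-in, and the "ciphertext" payload is a placeholder for DES/RSA output.

```python
# Generic LSB image steganography sketch (not necessarily the authors' method):
# hide bytes in the least significant bits of 8-bit pixel values.
import numpy as np

def embed(cover, data):
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    flat = cover.flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(cover.shape)

def extract(stego, n_bytes):
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
secret = b"ciphertext"                     # would come from DES/RSA encryption
stego = embed(cover, secret)
print(extract(stego, len(secret)))         # b'ciphertext'
```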
Procedia PDF Downloads 291
4546 Participation, Network, Women's Competency, and Government Policy Affecting on Community Development
Authors: Nopsarun Vannasirikul
Abstract:
The purposes of this research were to study the current situation of community development, women's potential, women's participation, networks, and government policy, as well as to study the factors influencing women's potential, women's participation, networks, and government policy that affect community development. The population included women aged 18 years and over living in communities in the Bangkok area. This study used a mixed-methods design combining quantitative and qualitative methods. A simple random sampling method was utilized to obtain a sample of 400 respondents from 50 districts of Bangkok, with data collected by questionnaire. In addition, a purposive sampling method was utilized to obtain 12 informants for in-depth interviews to gain insight for the qualitative component.
Keywords: community development, participation, network, women's right, management
Procedia PDF Downloads 174
4545 Heart Failure Identification and Progression by Classifying Cardiac Patients
Authors: Muhammad Saqlain, Nazar Abbas Saqib, Muazzam A. Khan
Abstract:
Heart failure (HF) has become a major health problem in our society. The prevalence of HF increases with patient age, and it is a major cause of the high mortality rate in adults. Successful identification of HF and its progression can help reduce the individual and social burden of this syndrome. In this study, we use a real data set of cardiac patients to propose a classification model for the identification and progression of HF. The data set was divided into three age groups, namely young, adult, and old, and each age group was further classified into four classes according to the patient's current physical condition. Contemporary data mining classification algorithms were applied to each individual class of every age group to identify HF. Decision tree (DT) gives the highest accuracy of 90% and outperforms all other algorithms. Our model accurately diagnoses different stages of HF for each age group, and it can be very useful for the early prediction of HF.
Keywords: decision tree, heart failure, data mining, classification model
Procedia PDF Downloads 402
4544 An Analysis of Classification of Imbalanced Datasets by Using Synthetic Minority Over-Sampling Technique
Authors: Ghada A. Alfattni
Abstract:
Analysing unbalanced datasets is one of the challenges that practitioners in the machine learning field face. Many studies have been carried out to determine the effectiveness of the synthetic minority over-sampling technique (SMOTE) in addressing this issue. The aim of this study was therefore to compare the effectiveness of SMOTE across different models on unbalanced datasets. Three classification models (logistic regression, support vector machine, and nearest neighbour) were tested on multiple datasets; the same datasets were then oversampled using SMOTE and fed to the three models again to compare the differences in performance. The experimental results show that a higher number of nearest neighbours gives lower error rates.
Keywords: imbalanced datasets, SMOTE, machine learning, logistic regression, support vector machine, nearest neighbour
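A minimal sketch of this experimental setup using the imbalanced-learn package: fit a model on the imbalanced training split, then on a SMOTE-oversampled version, and compare; the dataset, model choice, and k_neighbors value are illustrative, not the study's.

```python
# Sketch: before/after SMOTE comparison on a synthetic imbalanced dataset.
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("original:", balanced_accuracy_score(y_te, clf.predict(X_te)))

X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X_tr, y_tr)
print("class counts after SMOTE:", Counter(y_res))
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
print("with SMOTE:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```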
Procedia PDF Downloads 352
4543 Using LTE-Sim in New Handover Decision Algorithm for 2-Tier Macrocell-Femtocell LTE Network
Authors: Umar D. M., Aminu A. M., Izaddeen K. Y.
Abstract:
Deployments of mini macrocell base stations, also referred to as femtocells, improve the quality of service of indoor and outdoor users. Nevertheless, mobility management remains a key issue with regard to their deployment. This paper is oriented towards this issue, with an in-depth focus on the most important aspect of mobility management: handover. In handover management, making a handover decision in the LTE two-tier macrocell-femtocell network is a crucial research area. Decision algorithms in this research are classified and comparatively analyzed according to received signal strength, user equipment speed, cost function, and interference. However, it was observed that most of the discussed decision algorithms fail to consider cell selection with a hybrid access policy in a single-macrocell, multiple-femtocell scenario; another observation was that the majority of these algorithms do not incorporate a user equipment residence parameter. Not including this parameter boosts the number of unnecessary handover occurrences. To deal with these issues, a sophisticated handover decision algorithm is proposed. The proposed algorithm considers the user's velocity, received signal strength, and residence time, as well as the femtocell base station's access policy. Simulation results have shown that the proposed algorithm reduces the number of unnecessary handovers when compared to a conventional received-signal-strength-based handover decision algorithm.
Keywords: user equipment, radio signal service, long term evolution, mobility management, handoff
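Purely as a hypothetical illustration (the paper's exact rules and thresholds are not given in the abstract), a rule-based decision combining the four inputs named above might look like this:

```python
# Hypothetical handover rule, NOT the authors' algorithm: all thresholds invented.
def handover_to_femtocell(rss_femto_dbm, rss_macro_dbm, speed_kmh,
                          residence_time_s, access_allowed):
    if not access_allowed:        # hybrid/closed access policy check
        return False
    if speed_kmh > 30:            # fast users stay on the macrocell
        return False
    if residence_time_s < 20:     # too short a stay: handover not worthwhile
        return False
    return rss_femto_dbm > rss_macro_dbm + 3  # 3 dB hysteresis margin

print(handover_to_femtocell(-70, -85, speed_kmh=5,
                            residence_time_s=60, access_allowed=True))
```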
Procedia PDF Downloads 125
4542 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm
Authors: Xiang Jianhong, Wang Cong, Wang Linyu
Abstract:
With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A compressed sensing (CS)-based WBAN system can avoid sampling a large amount of redundant information and reduce the complexity and computing time of data processing, but the existing algorithms have poor signal compression and reconstruction performance. In this paper, a joint block multi-orthogonal least squares (JBMOLS) algorithm is proposed. We apply the FECG signal to the joint block sparse model (JBSM), and a comparative study of sparse transformations and measurement matrices is carried out. An FECG signal compression and transmission mode based on the Rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) required for accurate reconstruction in this transmission mode is increased by nearly 10%, and the runtime is reduced by about 30%.
Keywords: telemedicine, fetal ECG, compressed sensing, joint sparse reconstruction, block sparse signal
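To show the shape of the CS pipeline (Bernoulli measurements of a sparse signal, greedy recovery), here is a sketch using standard orthogonal matching pursuit as a stand-in for the paper's JBMOLS; signal size, measurement count, and sparsity are toy values.

```python
# Compressed-sensing sketch: Bernoulli sensing matrix + OMP recovery.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                     # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)      # Bernoulli matrix
y = phi @ x                                                  # compressed samples

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(phi, y)
x_hat = omp.coef_
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```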
Procedia PDF Downloads 129
4541 The Impact of Life Satisfaction on Substance Abuse: Delinquency as a Mediator
Authors: Mahadzirah Mohamad, Morliyati Mohammad, Nor Azman Mat Ali, Zainudin Awang
Abstract:
Globally, youth substance abuse has been identified as a problem that causes substantial damage not only to individuals but also to families and communities. In addition, substance-abusing youths become unproductive resources that play lesser roles in the nation's development. The increasing trend of substance abuse among youths has raised a lot of concern among various quarters in Malaysia. It has also been reported that Malay youths are the majority group involved in substance abuse. However, life satisfaction has been found to be an important mitigating factor against substance abuse. The objectives of the study were twofold: firstly, to ascertain the effect of life satisfaction on substance abuse among Malay youth, and secondly, to identify the role of delinquency in the relationship between life satisfaction and substance abuse. This study adopted a cross-sectional research design. Self-administered questionnaires were distributed to 500 Malay youths at youth programmes using a two-step sampling technique: area sampling and systematic sampling. The research hypotheses were tested using structural equation modelling. The findings of the study revealed that there is no significant relationship between life satisfaction and substance abuse. There is a significant inverse relationship between life satisfaction and delinquency, and delinquency has a significant positive influence on substance abuse. Bootstrapping analysis showed that delinquency plays a full mediating role in the relationship between life satisfaction and substance abuse. This study suggests that life satisfaction has no direct effect on youth substance abuse; in order to reduce substance abuse, efforts should be undertaken to reduce delinquent behaviour by increasing youth life satisfaction.
Keywords: delinquency, life satisfaction, substance abuse, youth
Procedia PDF Downloads 353