Search results for: RLS identification algorithm
4593 The Influence of Superordinate Identity and Group Size on Group Decision Making through Discussion
Authors: Lin Peng, Jin Zhang, Yuanyuan Miao, Quanquan Zheng
Abstract:
Group discussion and group decision-making have long been topics of research interest. Traditional research on group decision making typically focuses on the strategies or functional models used to combine members' preferences into an optimal consensus. In this research, we explore the natural process of group decision making through discussion and examine two influential factors: the common superordinate identity shared by the group and the size of the group. We manipulated the social identity of the groups to be either a shared superordinate identity or different subgroup identities, and we manipulated group size to be either large (6-8 persons) or small (3 persons). Using experimental methods, we found that members of a superordinate-identity group tend to modify their own opinions more through discussion than those identifying only with their subgroups. Members of superordinate-identity groups also formed a stronger identification with the group decision, the outcome of the group discussion, than their subgroup peers. We also found more opinion modification in larger groups than in smaller groups. Evaluations of decisions before and after discussion, as well as of the group decisions themselves, are strongly linked to group identity: members of a superordinate group feel more confident about and satisfied with both the results and the decision-making process. Members' opinions are more similar and homogeneous in smaller groups than in larger ones. This research has implications for further research and for applied practice in organizations.
Keywords: group decision making, group size, identification, modification, superordinate identity
Procedia PDF Downloads 307
4592 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations
Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh
Abstract:
Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor considering both quantitative and qualitative image evaluation parameters for clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR), and residual lung error (LE) were measured from the phantom data, while signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and tumor SUV were measured from the clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with an increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced approximately the same noise level as OSEM at an increased acquisition duration (5 min/bp). For the β-value of 400 at 2 min/bp, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at 5 min/bp. In both phantom and clinical data, an increase in the β-value translated into a decrease in SUV. The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, because noise is reduced more strongly than SUV at the highest β-value. In comparisons of BSREM reconstructions with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked above OSEM on all qualitative features. Conclusions: The BSREM algorithm supports a greater number of iterations and leads to higher quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten the acquisition time.
Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy
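A minimal sketch of the relative difference penalty that the β-value scales in BSREM, assuming NumPy; the γ value and the horizontal-neighbour-only summation are illustrative assumptions, not the scanner's implementation:

```python
import numpy as np

def relative_difference_penalty(image, beta=400.0, gamma=2.0):
    """Sum of the relative-difference penalty over horizontal neighbour pairs.

    beta scales the overall noise-penalisation strength (the 'β-value' above);
    gamma controls edge preservation. Both values here are illustrative only.
    """
    f = image.astype(float)
    fj, fk = f[:, :-1], f[:, 1:]                       # neighbouring pixel pairs
    diff = fj - fk
    denom = fj + fk + gamma * np.abs(diff) + 1e-12     # avoid division by zero
    return beta * np.sum(diff ** 2 / denom)

# A noisier image yields a larger penalty, which BSREM trades off against data fit.
smooth = np.full((8, 8), 10.0)
noisy = smooth + np.random.default_rng(0).normal(0.0, 2.0, (8, 8))
print(relative_difference_penalty(smooth), relative_difference_penalty(noisy))
```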
Procedia PDF Downloads 97
4591 Building Scalable and Accurate Hybrid Kernel Mapping Recommender
Authors: Hina Iqbal, Mustansar Ali Ghazanfar, Sandor Szedmak
Abstract:
Recommender systems use artificial intelligence techniques to filter obscure information and can predict whether a user will like a specified item. Kernel Mapping Recommender (KMR) systems have been proposed as accurate, state-of-the-art algorithms that address recommender-system design objectives such as the long tail, cold start, and sparsity. The aim of this research is to propose a hybrid framework that can efficiently integrate different versions of the KMR algorithm, namely item-based and user-based KMR. We have proposed various heuristic algorithms that integrate the different versions of KMR into a unified framework, resulting in improved accuracy and the elimination of problems associated with conventional recommender systems. We tested our system on a publicly available movies dataset and benchmarked it against KMR. The results (in terms of accuracy, precision, recall, F1 measure, and ROC metrics) reveal that the proposed algorithm is quite accurate, especially under cold-start and sparse scenarios.
Keywords: Kernel Mapping Recommender Systems, hybrid recommender systems, cold start, sparsity, long tail
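A minimal sketch of one possible integration heuristic, assuming the item-based and user-based KMR predictors are already available as score arrays; the convex-blend-by-validation-RMSE strategy shown here is an illustrative assumption, not the paper's specific heuristics:

```python
import numpy as np

def blend_predictions(user_scores, item_scores, truth, weights=np.linspace(0, 1, 11)):
    """Pick the convex combination of user- and item-based KMR scores that
    minimises RMSE on a held-out validation set (the KMR predictors themselves
    are assumed to be given)."""
    best_w, best_rmse = None, np.inf
    for w in weights:
        rmse = np.sqrt(np.mean((w * user_scores + (1 - w) * item_scores - truth) ** 2))
        if rmse < best_rmse:
            best_w, best_rmse = w, rmse
    return best_w, best_rmse

# Toy held-out ratings and predictions standing in for real KMR outputs.
truth = np.array([4.0, 3.0, 5.0, 2.0])
user_based = np.array([3.5, 3.2, 4.6, 2.4])
item_based = np.array([4.4, 2.7, 4.9, 1.8])
print(blend_predictions(user_based, item_based, truth))
```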
Procedia PDF Downloads 340
4590 A Fast Calculation Approach for Position Identification in a Distance Space
Authors: Dawei Cai, Yuya Tokuda
Abstract:
The market for location-based services (LBS) is expanding, and the acquisition of physical location is the fundamental basis of LBS. GPS, the de facto standard for outdoor localization, does not work well in indoor environments because walls and ceilings block the signals. Many techniques have been developed to achieve highly accurate localization in indoor environments. The triangulation approach is often used to identify a location, but heavy and complex computation is required to calculate the position from the distances between the object and several source points. This computation is also time- and power-consuming, which is unfavorable for a mobile device that needs a long operating life on battery. To provide a low-power approach for mobile devices, this paper presents a fast calculation approach that identifies the location of an object without solving simultaneous quadratic equations online. In our approach, we divide location identification into two parts, one offline and the other online. In the offline mode, we perform a mapping process that maps the location area to a distance space and derive a simple formula that can then be used online to identify the location of the object with very light computation. The characteristic of the approach is a good tradeoff between accuracy and computational load. Therefore, this approach can be used in smartphones and other mobile devices that need a long working time. To show the performance, some simulation results are also provided in the paper.
Keywords: indoor localization, location based service, triangulation, fast calculation, mobile device
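A minimal sketch of the offline/online split described above, assuming NumPy; the grid-lookup realisation, the anchor coordinates, and the grid spacing are assumptions for illustration (the paper's actual offline formula is not shown here):

```python
import numpy as np

# Offline: map a grid of candidate positions into "distance space".
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # source points (assumed)
xs, ys = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
grid = np.column_stack([xs.ravel(), ys.ravel()])
dist_table = np.linalg.norm(grid[:, None, :] - anchors[None, :, :], axis=2)

# Online: a cheap nearest-neighbour lookup replaces solving quadratic equations.
def locate(measured_distances):
    idx = np.argmin(np.sum((dist_table - measured_distances) ** 2, axis=1))
    return grid[idx]

true_pos = np.array([3.2, 7.1])
measured = np.linalg.norm(true_pos - anchors, axis=1)
print(locate(measured))   # approximately (3.2, 7.1), limited by the grid spacing
```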
Procedia PDF Downloads 174
4589 Modified Gold Screen Printed Electrode with Ruthenium Complex for Selective Detection of Porcine DNA
Authors: Siti Aishah Hasbullah
Abstract:
Studies on the identification of pork content in food have grown rapidly to meet the Halal food standard in Malaysia. Mitochondrial DNA (mtDNA) approaches are thought to provide the most precise markers for the identification of pig species, because mtDNA genes are present in thousands of copies per cell and show large variability. The standard method commonly used for DNA detection is based on the polymerase chain reaction (PCR) combined with gel electrophoresis, but it has major drawbacks: it is laborious, time-consuming, and toxic to handle. Therefore, the need for a simple and fast DNA assay is vital and has triggered us to develop DNA biosensors for porcine DNA detection. The aim of this project is to develop an electrochemical DNA biosensor based on a ruthenium(II) complex, [Ru(bpy)2(p-PIP)]2+, as the DNA hybridization label. The interaction of DNA with [Ru(bpy)2(p-HPIP)]2+ was studied by electrochemical transduction using a gold screen-printed electrode (GSPE) modified with gold nanoparticles (AuNPs) and succinimide acrylic microspheres. The electrochemical detection of the redox-active ruthenium(II) complex was measured by cyclic voltammetry (CV) and differential pulse voltammetry (DPV). The results indicate that the interaction of [Ru(bpy)2(PIP)]2+ with the hybridized complementary DNA gives a higher response than with single-stranded and mismatched complementary DNA. Under optimized conditions, this porcine DNA biosensor incorporating the modified GSPE shows a good linear range towards porcine DNA.
Keywords: gold, screen printed electrode, ruthenium, porcine DNA
Procedia PDF Downloads 309
4588 Magnetic Resonance Imaging in Children with Brain Tumors
Authors: J. R. Ashrapov, G. A. Alihodzhaeva, D. E. Abdullaev, N. R. Kadirbekov
Abstract:
The diagnosis of brain tumors is challenging, as several central nervous system diseases present with the same symptoms. Modern diagnostic techniques such as CT and MRI help to significantly improve surgical planning and, in the postoperative period, allow timely identification of postoperative complications in neurosurgery. Purpose: To study the MRI characteristics and localization of brain tumors in children and to detect complications in the postoperative period. Materials and methods: A retrospective study of the treatment of 62 children with brain tumors aged 2 to 5 years was performed. Results: Brain MRI of the 62 patients revealed a brain tumor in 52 (83.8%) cases. The distribution of brain tumors found on MRI was: 15 (24.1%) glioblastomas, 21 (33.8%) astrocytomas, 7 (11.2%) medulloblastomas, and 9 (14.5%) tumors of other origin (craniopharyngiomas, chordomas of the skull base). MRI revealed the following characteristic features: a heterogeneous MRI signal, hyper- and hypointense in T1 and T2 modes, with varying degrees of perifocal swelling and involvement of the brain vessels in the process. The main objectives of the postoperative MRI study are the identification of early or late postoperative complications, evaluation of the radicality of surgery, and identification of continued tumor growth (within 3-4 weeks). MRI was performed in the following cases: 1. suspicion of a hematoma (3 days or more); 2. suspicion of continued tumor growth (within 3-4 weeks). Conclusions: Magnetic resonance tomography is a highly informative method for the diagnosis of brain tumors in children. MRI also helps to determine the effectiveness and tactics of treatment and to guide follow-up in the postoperative period.
Keywords: brain tumors, children, MRI, treatment
Procedia PDF Downloads 145
4587 Trusting the Eyes: The Changing Landscape of Eyewitness Testimony
Authors: Manveen Singh
Abstract:
Since the very advent of law enforcement, eyewitness testimony has played a pivotal role in identifying, arresting, and convicting suspects. Relying heavily on the accuracy of human memory, nothing seems to carry more weight with the judiciary than the testimony of an actual witness. The acceptance of eyewitness testimony as a substantive piece of evidence lies embedded in the assumption that the human mind is adept at recording and storing events. Research, though, has proven otherwise. Having carried out extensive study in the field of eyewitness testimony over the past 40 years, psychologists have concluded that human memory is fragile and needs to be treated carefully. The question that arises, then, is how reliable is eyewitness testimony? The credibility of eyewitness testimony, simply put, depends on several factors, leaving it reliable at times and not so much at others. This is further substantiated by the fact that, according to scientific research, over 75 percent of all eyewitness testimonies may stand in error, with quite a few of these cases resulting in life sentences. Although the advancement of scientific techniques, especially DNA testing, has helped overturn many of these eyewitness-testimony-based convictions, eyewitness identifications continue to form the backbone of most police investigations and courtroom decisions to date. What, then, is the solution to this long-standing concern regarding the accuracy of eyewitness accounts? The present paper analyzes the linkage between human memory and eyewitness identification, examines the various factors governing the credibility of eyewitness testimonies, and elaborates upon some best practices developed over the years to help reduce mistaken identifications. In the process, it traces the changing landscape of eyewitness testimony amidst the evolution of DNA and trace evidence.
Keywords: DNA, eyewitness, identification, testimony, evidence
Procedia PDF Downloads 328
4586 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis
Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel
Abstract:
Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require isolating the pathogen from a clinical specimen by culturing it on the appropriate media (this first culturing stage lasts 24 h). Chosen colonies are then grown on media containing antibiotic(s), using micro-diffusion discs (a second culturing step of another 24 h), in order to determine their susceptibility. Other methods, such as genotyping, the E-test, and automated systems, have also been developed for testing antimicrobial susceptibility. Most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable unprecedented biochemical information to be measured from cells at the molecular level. Moreover, new bioinformatics analyses combined with IR spectroscopy form a powerful technique that enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a few minutes. The UTI E. coli bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories at Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, are promising and show that by using infrared spectroscopy together with multivariate analysis it is possible to classify the tested bacteria as sensitive or resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing FTIR microscopy as a rapid and reliable method for identifying antibiotic susceptibility.
Keywords: antibiotics, E.coli, FTIR, multivariate analysis, susceptibility, UTI
Procedia PDF Downloads 173
4585 A Review on Applications of Evolutionary Algorithms to Reservoir Operation for Hydropower Production
Authors: Nkechi Neboh, Josiah Adeyemo, Abimbola Enitan, Oludayo Olugbara
Abstract:
Evolutionary algorithms are techniques extensively used in the planning and management of water resources and systems. They are useful for finding optimal solutions to water resources problems, given the complexities involved in the analysis. River basin management is an essential area that involves the management of upstream flow, river inflow and outflow, and the downstream aspects of a reservoir. Water is a scarce resource needed by humans and the environment for survival, and its management involves many complexities. Management of this scarce resource is necessary for its proper distribution to competing users in a river basin, which presents many complexities involving numerous constraints and conflicting objectives. Evolutionary algorithms, which are population-based search algorithms, are very useful for solving this kind of complex problem: they are easy to use, fast, and robust, among other advantages. Many applications of evolutionary algorithms are discussed, and the different methodologies involved in the modeling and simulation of water management problems in river basins are explained. It was found in this work that different evolutionary algorithms are suitable for different problems. Therefore, appropriate algorithms are suggested for different methodologies and applications based on the results of the previous studies reviewed. It is concluded that evolutionary algorithms, with their wide applications in water resources management, are viable and easy-to-use algorithms for most of the applications. The results suggest that evolutionary algorithms, applied in the right application areas, can provide superior solutions for river basin management, especially in reservoir operation, irrigation planning and management, streamflow forecasting, and real-time applications. Future directions for this work are suggested. This study will assist decision makers and stakeholders in choosing the best evolutionary algorithm for varied optimization issues in water resources management.
Keywords: evolutionary algorithm, multi-objective, reservoir operation, river basin management
Procedia PDF Downloads 491
4584 Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network
Authors: Jia Xin Low, Keng Wah Choo
Abstract:
This paper presents an automatic normal and abnormal heart sound classification model developed based on a deep learning algorithm. MITHSDB heart sound datasets obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heartbeat, and each sub-segment is converted to form a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to provide classification features for the supervised machine learning algorithm; instead, the features are determined automatically through training from the time series provided. The results show that the prediction model provides reasonable and comparable classification accuracy despite its simple implementation. This approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g., remote monitoring applications of the PCG signal.
Keywords: convolutional neural network, discrete wavelet transform, deep learning, heart sound classification
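A minimal sketch of the per-beat segmentation and square-matrix conversion step, assuming NumPy; the 32x32 matrix size, the linear resampling, and the placeholder signal/onsets are assumptions for illustration (the beat onsets would in practice come from the simultaneously recorded ECG):

```python
import numpy as np

def beats_to_matrices(pcg, beat_onsets, side=32):
    """Cut the PCG into per-beat segments and reshape each into a square
    intensity matrix (side x side) that a 2-D CNN can consume."""
    matrices = []
    for start, end in zip(beat_onsets[:-1], beat_onsets[1:]):
        seg = pcg[start:end]
        # resample the variable-length beat to side*side samples
        resampled = np.interp(np.linspace(0, len(seg) - 1, side * side),
                              np.arange(len(seg)), seg)
        matrices.append(resampled.reshape(side, side))
    return np.stack(matrices)                 # shape: (n_beats, side, side)

pcg = np.random.default_rng(1).normal(size=8000)   # placeholder PCG signal
onsets = np.arange(0, 8000, 800)                    # placeholder beat onsets
print(beats_to_matrices(pcg, onsets).shape)         # (9, 32, 32)
```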
Procedia PDF Downloads 349
4583 Genetic Diversity Analysis in Embelia Ribes by RAPD Markers
Authors: Sabitha Rani A., Nagamani V.
Abstract:
Embelia ribes Burm.f (family Myrsinaceae), commonly known as Vidanga or Baibirang, is one of the important medicinal plants of India. The seed extract is reported to have antidiabetic, antitumour, analgesic, anti-inflammatory, antispermatogenic, and free-radical-scavenging activities and is widely used in more than 75 commercial Ayurvedic formulations. Among the 100 different species of Embelia, E. ribes is considered a major source of embelin, a bioactive compound. Because of high demand and low availability, the seeds of E. ribes are substituted with many cheaper alternatives. Therefore, the present RAPD-PCR study was undertaken to develop molecular markers for the identification of E. ribes. A total of 13 different seed samples of Embelia were collected from different agro-climatic regions of India. The seeds of E. ribes were collected from Kalpetta, Kerala, and three seed samples were collected from traders in Odisha, Madhya Pradesh, and Maharashtra. The other nine seed samples were obtained from local traders, who had collected them from different regions of India. Genomic DNA was isolated from the different seed samples of E. ribes, and RAPD-PCR was performed on the 13 seed samples using 47 random primers. Out of all the primers, only 22 produced clear and highly reproducible banding patterns. These 22 selected RAPD primers generated a total of 280 alleles, with an average of 12 alleles per primer pair. In the present study, we identified three RAPD-PCR markers, i.e., OPF5_480 bp, OPH11_520 bp, and OPH4_530 bp, which can be used for genetic fingerprinting of E. ribes. This methodology can be employed for the identification of genuine E. ribes and for distinguishing it from substitutes and adulterants.
Keywords: Embelia ribes, RAPD-PCR, primers, genetic analysis
Procedia PDF Downloads 298
4582 Registration of Multi-Temporal Unmanned Aerial Vehicle Images for Facility Monitoring
Authors: Dongyeob Han, Jungwon Huh, Quang Huy Tran, Choonghyun Kang
Abstract:
Unmanned Aerial Vehicles (UAVs) have been used for surveillance, monitoring, inspection, and mapping. In this paper, we present a systematic approach for the automatic registration of UAV images for monitoring facilities such as buildings, greenhouses, and civil structures. A two-step process is applied: 1) an image matching technique based on SURF (Speeded-Up Robust Features) and RANSAC (Random Sample Consensus), and 2) bundle adjustment of the multi-temporal images. Image matching to find corresponding points is one of the most important steps for the precise registration of multi-temporal images. We used the SURF algorithm to find matching points quickly and effectively. The RANSAC algorithm was used both in finding matching points between images and in the bundle adjustment process. Experimental results on UAV images showed that our approach has sufficient accuracy to be applied to the change detection of facilities.
Keywords: building, image matching, temperature, unmanned aerial vehicle
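A minimal sketch of the matching-plus-RANSAC step, assuming OpenCV; ORB stands in here for SURF (which requires the non-free opencv-contrib build), and the image paths and the homography model are illustrative assumptions rather than the paper's exact registration pipeline:

```python
import cv2
import numpy as np

def register_pair(img1_path, img2_path):
    """Match two multi-temporal UAV frames and estimate a homography with RANSAC."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    detector = cv2.ORB_create(4000)                       # SURF substitute
    k1, d1 = detector.detectAndCompute(img1, None)
    k2, d2 = detector.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects mismatched point pairs while fitting the transform
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, int(inliers.sum())
```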
Procedia PDF Downloads 292
4581 Optimal Design of Tuned Inerter Damper-Based System for the Control of Wind-Induced Vibration in Tall Buildings through Cultural Algorithm
Authors: Luis Lara-Valencia, Mateo Ramirez-Acevedo, Daniel Caicedo, Jose Brito, Yosef Farbiarz
Abstract:
Controlling wind-induced vibrations, as well as aerodynamic forces, is an essential part of the structural design of tall buildings in order to guarantee the serviceability limit state of the structure. This paper presents a numerical investigation of the optimal design parameters of a Tuned Inerter Damper (TID)-based system for the control of wind-induced vibration in tall buildings. The control system is based on the conventional TID, with the main difference that its location is changed from the ground level to the last two story levels of the structural system. The TID tuning procedure is based on an evolutionary cultural algorithm in which the optimal design variables, defined as the frequency and damping ratios, are searched according to the optimization criterion of minimizing the root mean square (RMS) response of displacements at the nth story of the structure. A Monte Carlo simulation was used to represent the dynamic action of the wind in the time domain, in which a time series derived from the Davenport spectrum using eleven harmonic functions with randomly chosen phase angles was reproduced. This methodology was applied to a case study derived from a 37-story prestressed concrete building 144 m in height, in which the wind action governs over the seismic action. The results showed that the optimally tuned TID is effective in reducing the RMS response of displacements by up to 25%, which demonstrates the feasibility of the system for the control of wind-induced vibrations in tall buildings.
Keywords: evolutionary cultural algorithm, Monte Carlo simulation, tuned inerter damper, wind-induced vibrations
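A minimal sketch of the Monte Carlo wind-history generation described above, assuming NumPy; the harmonic amplitudes and frequency band are placeholder assumptions, not values actually fitted to the Davenport spectrum:

```python
import numpy as np

def wind_time_series(mean_speed=30.0, duration=600.0, dt=0.1, n_harmonics=11, seed=0):
    """Fluctuating wind speed built from eleven harmonics with random phase angles."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    freqs = np.linspace(0.05, 1.0, n_harmonics)            # Hz, assumed band
    amps = 2.0 / np.sqrt(np.arange(1, n_harmonics + 1))    # decaying amplitudes (assumed)
    phases = rng.uniform(0.0, 2.0 * np.pi, n_harmonics)    # randomly chosen phase angles
    fluctuation = sum(a * np.cos(2 * np.pi * f * t + p)
                      for a, f, p in zip(amps, freqs, phases))
    return t, mean_speed + fluctuation

t, v = wind_time_series()
print(v.mean(), v.std())   # one realisation; repeated draws give the Monte Carlo ensemble
```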
Procedia PDF Downloads 135
4580 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm
Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali
Abstract:
Optimization problems in water resources are complicated because of the variety of decision-making criteria and objective functions, so it is sometimes impossible to resolve them through regular optimization methods, or doing so is time- and money-consuming. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements; the dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural, and reservoir-related data, as well as the geometric characteristics of the reservoir. The Dez dam water resources system was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the plan. As a metaheuristic method, a genetic algorithm was applied in order to derive utilization rule curves (intersecting the reservoir volume). MATLAB software was used to solve the model. Rule curves were first obtained through the genetic algorithm; then the significance of using rule curves and of reducing the number of decision variables in the system was determined through system simulation and by comparing the results with the optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the increasing number of variables; a lot of time is therefore required to find an optimal answer, and in some cases no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a model to reduce the number of variables. Water reservoir programming studies were performed based on the basic information, general hypotheses, and standards, applying a monthly simulation technique over a statistical period of 30 years. Results indicated that the application of rule curves prevents extreme shortages and decreases the monthly shortages.
Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir
Procedia PDF Downloads 265
4579 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm
Authors: Frodouard Minani
Abstract:
Over the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense, the military, and disaster-hit areas. A wireless sensor network consists of a base station (BS) and a number of wireless sensors that monitor temperature, pressure, or motion under different environmental conditions. The key parameter in designing a protocol for wireless sensor networks is energy efficiency, since energy is the scarcest resource of sensor nodes and determines their lifetime. Maximizing sensor node lifetime is therefore an important issue in the design of applications and protocols for wireless sensor networks, and clustering of sensor nodes is an effective topology-control approach for helping to achieve this goal. In this paper, the researcher presents an energy-efficient protocol to prolong the network lifetime based on an energy-efficient clustering algorithm. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower energy consumption and improve the lifetime of wireless sensor networks. Minimizing energy dissipation and maximizing network lifetime are important matters in the design of applications and protocols for wireless sensor networks. The proposed system maximizes the lifetime of the wireless sensor network by choosing the farthest cluster head (CH) instead of the closest CH and by forming clusters considering parameter metrics such as node density, residual energy, and the distance between clusters (inter-cluster distance). In this paper, comparisons between the proposed protocol and comparative protocols in different scenarios have been made, and the simulation results show that the proposed protocol performs well over the other comparative protocols in various scenarios.
Keywords: base station, clustering algorithm, energy efficient, sensors, wireless sensor networks
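A minimal sketch of a cluster-head scoring rule that combines the three metrics named above, assuming NumPy; the weights, the neighbourhood radius, and the node/base-station data are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def select_cluster_head(nodes, base_station, w=(0.4, 0.3, 0.3)):
    """Score candidates by residual energy, local node density, and distance
    from the base station (favouring the farthest CH, as proposed above).
    `nodes` is a list of dicts with 'pos' and 'energy'."""
    pos = np.array([n["pos"] for n in nodes], dtype=float)
    energy = np.array([n["energy"] for n in nodes], dtype=float)
    d_bs = np.linalg.norm(pos - np.asarray(base_station, dtype=float), axis=1)
    # density: number of neighbours within an assumed 10 m radius
    density = np.array([(np.linalg.norm(pos - p, axis=1) < 10.0).sum() - 1 for p in pos])
    norm = lambda x: (x - x.min()) / (np.ptp(x) + 1e-12)
    score = w[0] * norm(energy) + w[1] * norm(density) + w[2] * norm(d_bs)
    return int(np.argmax(score))

nodes = [{"pos": (x, y), "energy": e} for x, y, e in [(2, 3, 0.9), (8, 8, 0.5), (1, 9, 0.7)]]
print(select_cluster_head(nodes, base_station=(0, 0)))
```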
Procedia PDF Downloads 144
4578 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting useful knowledge from a variety of databases; they provide supervised learning in the form of classification, designing models that describe the important data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application datasets. The inherent nature of a dataset greatly masks its quality analysis and leaves us with few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Machine learning relies on feature selection as a pre-processing step, which allows a subset of features to be selected by reducing the feature space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of a given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is employed, and an external classifier is used for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. A sample dataset has been used to demonstrate the proposed idea effectively. The proposed method improved the average accuracy across different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
Keywords: data mining, genetic algorithm, KNN algorithms, wrapper based feature selection
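A minimal sketch of a wrapper-based genetic search with a KNN fitness function, assuming scikit-learn and NumPy; the dataset, population size, crossover/mutation details, and generation count are illustrative assumptions, and the occurrence-counting step mentioned above is not shown:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Chromosomes are 0/1 feature masks; fitness is KNN cross-validation accuracy.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(20, X.shape[1]))

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()

for _ in range(10):                                        # a few generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                # keep the fittest half
    cut = rng.integers(1, X.shape[1], size=10)             # one-point crossover points
    children = np.where(np.arange(X.shape[1]) < cut[:, None],
                        parents, np.roll(parents, 1, axis=0))
    flip = rng.random(children.shape) < 0.05               # mutation
    children = np.where(flip, 1 - children, children)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print(best.sum(), "features selected")
```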
Procedia PDF Downloads 316
4577 Human Identification Using Local Roughness Patterns in Heartbeat Signal
Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori
Abstract:
Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential in human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, the ECG has a real-time vitality characteristic that signifies live signs, ensuring that only a legitimate, living individual can be identified. However, the detection accuracy of current ECG-based methods is not sufficient because of the high variability of an individual's heartbeats at different instances in time. These variations may occur due to muscle flexure, changes in mental or emotional state, and changes in sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification based on extracting the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, a pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Binary weights are then multiplied with the pattern to obtain the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of the individual subjects in the database. One advantage of the proposed feature is that it does not depend on the accuracy of QRS-complex detection, unlike conventional methods. Supervised recognition methods using minimum-distance-to-mean and Bayesian classifiers are then designed to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the National Metrology Institute of Germany (PTB) database showed that the proposed method is promising compared to a conventional interval- and amplitude-feature-based method.
Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification
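A minimal sketch of the local-roughness (1-D local binary pattern) histogram described above, assuming NumPy; the window radius and the synthetic test signal are illustrative assumptions:

```python
import numpy as np

def local_roughness_histogram(sig, radius=4):
    """At each sample, compare the 2*radius neighbours with the centre value,
    weight the resulting bits by powers of two, and histogram the codes.
    The normalised histogram is the feature vector."""
    n_bits = 2 * radius
    codes = []
    for i in range(radius, len(sig) - radius):
        neighbours = np.concatenate([sig[i - radius:i], sig[i + 1:i + 1 + radius]])
        bits = (neighbours >= sig[i]).astype(int)
        codes.append(int(np.dot(bits, 2 ** np.arange(n_bits))))
    hist, _ = np.histogram(codes, bins=2 ** n_bits, range=(0, 2 ** n_bits))
    return hist / hist.sum()

ecg = np.sin(np.linspace(0, 20 * np.pi, 2000)) \
      + 0.05 * np.random.default_rng(2).normal(size=2000)   # stand-in heartbeat signal
print(local_roughness_histogram(ecg).shape)                  # (256,) for radius=4
```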
Procedia PDF Downloads 404
4576 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder
Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh
Abstract:
In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for identifying activities in the human brain remains a big challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem: finding features that give distinctive pictures for different activities and similar pictures for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set. Further, a larger number of features results in high computational complexity, while fewer features compromise performance. In this paper, a novel idea for selecting an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to a deep autoencoder network, which encodes the input features into a small set of codes. To avoid the vanishing-gradient problem and the need for dataset normalization, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are used in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layer. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than the other two in terms of classification accuracy.
Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization
Procedia PDF Downloads 114
4575 MIM and Experimental Studies of the Thermal Drift in an Ultra-High Precision Instrument for Dimensional Metrology
Authors: Kamélia Bouderbala, Hichem Nouira, Etienne Videcoq, Manuel Girault, Daniel Petit
Abstract:
Thermal drifts caused by the power dissipated in the mechanical guiding systems constitute the main limit to enhancing the accuracy of an ultra-high-precision cylindricity measuring machine. For this reason, a high-precision compact prototype has been designed to simulate the behaviour of the instrument. It allows in situ calibration of four capacitive displacement probes by comparison with four laser interferometers. The set-up includes three heating wires for simulating the power dissipated by the mechanical guiding systems, four additional heating wires located between each laser interferometer head and its respective holder, 19 platinum resistance thermometers (Pt100) to observe the temperature evolution inside the set-up, and four Pt100 sensors to monitor the ambient temperature. A Reduced Model (RM), based on the Modal Identification Method (MIM), was developed and optimized by comparison with the experimental results. Thereafter, time-dependent tests were performed under several conditions to measure the temperature variation at 19 fixed positions in the system and compare it to the calculated RM results. The RM results show good agreement with the experiment and reproduce the temperature variations well, demonstrating the value of the proposed RM for evaluating the thermal behaviour of the system.
Keywords: modal identification method (MIM), thermal behavior and drift, dimensional metrology, measurement
Procedia PDF Downloads 396
4574 Kannada HandWritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhatan and Minimum Distance Classifiers
Authors: C. V. Aravinda, H. N. Prakash
Abstract:
In this paper, we convey the fusion and state of the art pertaining to South Indian Language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized so that text identification can be performed correctly. The second step involves extracting relevant and informative features, and the third step implements the classification decision. The three stages involved are thus data acquisition and preprocessing, feature extraction, and classification. Here we concentrate on two techniques for obtaining features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. It is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the mid pixel of the window is on, the directions of the two edge fragments (i.e., connected sequences of pixels) emerging from this mid pixel are measured and stored as pairs, and a joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue, partly because different approaches use different varieties of features. Therefore, our study focuses on handwriting recognition based on feature selection to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy.
Keywords: word segmentation and recognition, character recognition, optical character recognition, hand written character recognition, South Indian languages
Procedia PDF Downloads 494
4573 Identifying Dynamic Structural Parameters of Soil-Structure System Based on Data Recorded during Strong Earthquakes
Authors: Vahidreza Mahmoudabadi, Omid Bahar, Mohammad Kazem Jafari
Abstract:
In many applied engineering problems, structural analysis is usually conducted by assuming a rigid bed, whereas including the effect of bed flexibility can significantly affect the structural response. This article focuses on investigating and evaluating the effects of considering the soil-structure system on the dynamic characteristics of a steel structure, with respect to both elastic and inelastic behavior. The accelerations recorded on different floors of the structure during Taiwan's strong Chi-Chi earthquake served as our evaluation criteria. The structure in question is an eight-story steel bending-frame structure designed using a direct displacement-based method that ensures weak-beam/strong-column behavior. The results indicated that the different identification methods, i.e., the inverse Fourier transform or transfer functions, are capable of determining some of the dynamic parameters of the structure precisely, rather than evaluating all of them at once (mode frequencies, mode shapes, structural damping, structural rigidity, etc.). Response evaluation based on the input and output data showed that the first mode of the structure is not significantly affected, even when the soil-structure interaction effect is considered, but that the upper modes are changed. It was also found that the response transfer functions of the different stories in which plastic hinges occurred in the structural components provide similar results.
Keywords: bending steel frame structure, dynamic characteristics, displacement-based design, soil-structure system, system identification
Procedia PDF Downloads 503
4572 Risk Management in Industrial Supervision Projects
Authors: Érick Aragão Ribeiro, George André Pereira Thé, José Marques Soares
Abstract:
Several problems in industrial supervision software development projects may lead to delays or to the cancellation of projects. These problems can be avoided or contained by using methods for the identification, analysis, and control of risks, which give an overview of the possible problems that can occur in a project and of their immediate solutions. We therefore propose a risk management method applied to the teaching and development of industrial supervision software. The method, developed through a literature review and previous projects, can be divided into management phases and has basic features that were validated with experimental research carried out by mechatronics engineering students and professionals. Management is conducted through the stages of identification, analysis, planning, monitoring, control, and communication of risks. Programmers prioritize risks by considering the severity and the probability of occurrence of each risk. The outputs of the method indicate which risks have occurred or are about to happen. The first results indicate which risks occur at different stages of the project and which risks have a high probability of occurring. The results show the efficiency of the proposed method compared with other methods, demonstrating the improvement of software quality and guiding developers in their decisions. This new way of developing supervision software helps students identify design problems, evaluate the software developed, and propose effective solutions. We conclude that risk management optimizes the development of industrial process control software and yields a higher-quality product.
Keywords: supervision software, risk management, industrial supervision, project management
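A minimal sketch of the prioritization step described above, where each risk is ranked by the product of its severity ("gravity") and its probability of occurrence; the 1-5 scale and the example risks are assumptions for illustration:

```python
risks = [
    {"name": "unclear requirements", "severity": 4, "probability": 3},
    {"name": "PLC driver incompatibility", "severity": 5, "probability": 2},
    {"name": "schedule slip", "severity": 3, "probability": 4},
]

# Priority = severity x probability; higher values are handled first.
for r in risks:
    r["priority"] = r["severity"] * r["probability"]

for r in sorted(risks, key=lambda r: r["priority"], reverse=True):
    print(f'{r["name"]}: priority {r["priority"]}')
```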
Procedia PDF Downloads 356
4571 A Spiral Dynamic Optimised Hybrid Fuzzy Logic Controller for a Unicycle Mobile Robot on Irregular Terrains
Authors: Abdullah M. Almeshal, Mohammad R. Alenezi, Talal H. Alzanki
Abstract:
This paper presents a hybrid fuzzy logic control strategy for a unicycle trajectory-following robot on irregular terrains. In the literature, researchers have presented path-tracking controllers for mobile robots on frictionless surfaces. In this work, the robot is simulated driving on irregular terrains with the contrasting frictional profiles of peat and rough gravel. A hybrid fuzzy logic controller is utilised to stabilise the robot, drive it precisely along the predefined trajectory, and overcome the frictional effects. The controller gains and scaling factors were optimised using a spiral dynamics optimisation algorithm to minimise the mean square error of the linear and angular velocities of the unicycle robot. The robot was simulated on various frictional surfaces and terrains, and the controller was able to stabilise it with a superior performance, as shown by the simulation results.
Keywords: fuzzy logic control, mobile robot, trajectory tracking, spiral dynamic algorithm
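A minimal sketch of a two-dimensional spiral dynamics search, assuming NumPy; the point count, rotation angle, contraction rate, and the toy objective (standing in for the closed-loop mean-square velocity error) are illustrative assumptions:

```python
import numpy as np

def spiral_optimise(objective, bounds, n_points=25, n_iter=100, r=0.95,
                    theta=np.pi / 4, seed=0):
    """Every search point spirals towards the current best x* via
    x <- x* + r * R(theta) @ (x - x*), shrinking the spiral each step."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pts = rng.uniform(lo, hi, size=(n_points, 2))
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    best = pts[np.argmin([objective(p) for p in pts])]
    for _ in range(n_iter):
        pts = best + r * (pts - best) @ R.T
        pts = np.clip(pts, lo, hi)
        cand = pts[np.argmin([objective(p) for p in pts])]
        if objective(cand) < objective(best):
            best = cand
    return best

# Toy objective standing in for the tracking-error MSE of the fuzzy controller;
# the two decision variables stand in for a controller gain and a scaling factor.
print(spiral_optimise(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2,
                      bounds=[(-5, 5), (-5, 5)]))
```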
Procedia PDF Downloads 495
4570 The Development of the Psychosomatic Nursing Model from an Evidence-Based Action Research on Proactive Mental Health Care for Medical Inpatients
Authors: Chia-Yi Wu, Jung-Chen Chang, Wen-Yu Hu, Ming-Been Lee
Abstract:
In nearly all physical health conditions, suicide risk is increased compared to that of healthy people, even after adjustment for age, gender, mental health, and substance use diagnoses. In order to highlight the importance of suicide risk assessment for inpatients and of early identification of and engagement with inpatients' mental health problems, a study was designed to develop a comprehensive psychosomatic nursing engagement (PSNE) model with standardized operating procedures describing how nurses communicate with, assess, and engage inpatients with emotional distress. The purpose of the study was to promote the gatekeeping role of clinical nurses in performing brief assessments and interventions to detect depression and anxiety symptoms among inpatients, particularly in non-psychiatric wards. The study will be carried out in 2019 in a 2000-bed university hospital in northern Taiwan. We will select a ward for the trial and develop feasible procedures and an in-service training course through which nurses can offer mental health care, both validated through professional consensus meetings. The significance of the study includes the following three points: (1) The study targets an important but under-researched area, the PSNE model, in the cultural context of Taiwan, where hospital services are highly accessible but mental health and suicide risk assessments are rarely provided by non-psychiatric healthcare personnel. (2) PSNE could be an efficient and cost-effective means of identifying suicide risk at an early stage, preventing inpatient suicide or reducing future suicide risk through early treatment of mental illness among hospitalized patients, a high-risk group whose suicide mortality is more than three times higher. (3) Utilizing a brief tool and its established app (the Five-item Brief Symptom Rating Scale, BSRS-5), we will develop a standardized PSNE procedure and referral steps in collaboration with the medical teams across the study hospital. New technological tools nested within nursing assessment and intervention will concurrently be developed to facilitate better care quality. The major outcome measurements will include tools for the early identification of common mental distress and suicide risk, i.e., the BSRS-5, the revised BSRS-5, and the 9-item Concise Mental Health Checklist (CMHC-9). The main purpose of using the CMHC-9 in clinical suicide risk assessment is to provide care and build a therapeutic relationship with the client, so it will also be used in nursing training to highlight the skills of supportive care. Through early identification of inpatients' depressive symptoms or other mental health care needs such as insomnia, anxiety, or suicide risk, the majority of nursing clinicians would be able to engage in critical interventions that alleviate inpatients' suffering from mental health problems, given feasible nursing input.
Keywords: mental health care, clinical outcome improvement, clinical nurses, suicide prevention, psychosomatic nursing
Procedia PDF Downloads 109
4569 High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform
Authors: Hanaa M. Abdelgawad, Mona Safar, Ayman M. Wahba
Abstract:
Real-time image and video processing is a requirement in many computer vision applications, e.g., video surveillance, traffic management, and medical imaging, and such video applications demand high computational power. The optimal solution is therefore the collaboration of the CPU with hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Canny edge detection is one of the common blocks in the pre-processing phase of image and video processing pipelines. Our approach offloads the Canny edge detection algorithm from the processing system (PS) to the programmable logic (PL), taking advantage of the High Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration: CPU utilization drops and the frame rate reaches 60 fps for a 1080p full-HD input video stream.
Keywords: high level synthesis, canny edge detection, hardware accelerators, computer vision
Procedia PDF Downloads 478
4568 Resistivity Tomography Optimization Based on Parallel Electrode Linear Back Projection Algorithm
Authors: Yiwei Huang, Chunyu Zhao, Jingjing Ding
Abstract:
Electrical resistivity tomography (ERT) has been widely used in medicine and geology, for example in imaging lung impedance and analyzing soil impedance. Linear Back Projection (LBP) is the core algorithm of electrical resistivity tomography, but traditional LBP cannot make full use of the information in the electric field. In this paper, a Parallel Electrode Linear Back Projection imaging method for electrical resistivity tomography is proposed. By changing the connection mode of the electrodes, it generates an electric field distribution that is not linearly related to that of traditional LBP, captures new information, and improves the imaging accuracy without increasing the number of electrodes. The simulation results show that the accuracy of the image obtained by the inverse operation of the Parallel Electrode Linear Back Projection can be improved by about 20%.
Keywords: electrical resistivity tomography, finite element simulation, image optimization, parallel electrode linear back projection
Procedia PDF Downloads 153
4567 Facility Anomaly Detection with Gaussian Mixture Model
Authors: Sunghoon Park, Hank Kim, Jinwon An, Sungzoon Cho
Abstract:
The Internet of Things makes it possible to collect data from facilities, which are then used to monitor them and even predict malfunctions in advance. Conventional quality control methods focus on setting a normal range for a sensor value, defined between a lower control limit and an upper control limit, and declaring anything falling outside it an anomaly. However, interactions among sensor values are ignored, leading to suboptimal performance. We propose a multivariate approach that takes many sensor values into account at the same time. In particular, a Gaussian Mixture Model is used, trained to maximize the likelihood value with the Expectation-Maximization algorithm. The number of Gaussian component distributions is determined by the Bayesian Information Criterion, and the negative log-likelihood value is used as an anomaly score. The actual usage scenario is as follows: for each instance of sensor values from a facility, an anomaly score is computed; if it is larger than a threshold, an alarm goes off and a human expert intervenes and checks the system. Real-world data from a building energy system was used to test the model.
Keywords: facility anomaly detection, gaussian mixture model, anomaly score, expectation maximization algorithm
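A minimal sketch of the pipeline described above, assuming scikit-learn and NumPy; the synthetic data, the maximum component count, and the 99th-percentile alarm threshold are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_anomaly_model(X_train, max_components=10):
    """Fit GMMs with EM and pick the component count by BIC."""
    models = [GaussianMixture(n_components=k, random_state=0).fit(X_train)
              for k in range(1, max_components + 1)]
    return min(models, key=lambda m: m.bic(X_train))

def anomaly_scores(model, X):
    """Negative log-likelihood per instance; higher means more anomalous."""
    return -model.score_samples(X)

# Toy multivariate sensor data standing in for real facility readings.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(1000, 4))
model = fit_anomaly_model(normal)
threshold = np.percentile(anomaly_scores(model, normal), 99)   # assumed alarm level
new_reading = np.array([[5.0, 5.0, 5.0, 5.0]])
print(anomaly_scores(model, new_reading) > threshold)          # alarm: True
```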
Procedia PDF Downloads 272
4566 A Near-Optimal Domain Independent Approach for Detecting Approximate Duplicates
Authors: Abdelaziz Fellah, Allaoua Maamir
Abstract:
We propose a domain-independent merging-cluster filter approach, complemented with a set of algorithms, for identifying approximate duplicate entities efficiently and accurately within a single data source and across multiple data sources. The near-optimal merging-cluster filter (MCF) approach is based on the well-tuned Monge-Elkan algorithm and is extended with an affine variant of the Smith-Waterman similarity measure. We then present constant-, variable-, and function-threshold algorithms that work conceptually in a divide-merge filtering fashion to detect near duplicates as hierarchical clusters along with their corresponding representatives. The algorithms take a recursive refinement approach in the spirit of filtering, merging, and updating cluster representatives to detect approximate duplicates at each level of the cluster tree. Experiments show the high effectiveness and accuracy of the MCF approach in detecting approximate duplicates, outperforming the seminal Monge-Elkan algorithm on several real-world benchmarks and generated datasets.
Keywords: data mining, data cleaning, approximate duplicates, near-duplicates detection, data mining applications and discovery
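A minimal sketch of the Monge-Elkan token-level similarity on which the MCF approach builds, assuming only the Python standard library; SequenceMatcher is a stand-in for the affine Smith-Waterman internal measure used in the paper, and the 0.8 duplicate cut-off is an illustrative assumption:

```python
from difflib import SequenceMatcher

def monge_elkan(a, b):
    """Each token of `a` is matched to its best-scoring token in `b`
    and the maxima are averaged."""
    tokens_a, tokens_b = a.lower().split(), b.lower().split()
    if not tokens_a or not tokens_b:
        return 0.0
    inner = lambda s, t: SequenceMatcher(None, s, t).ratio()
    return sum(max(inner(ta, tb) for tb in tokens_b) for ta in tokens_a) / len(tokens_a)

# Record pairs scoring above the threshold would be clustered as approximate duplicates.
print(monge_elkan("Jon A. Smith", "Smith, John"))
```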
Procedia PDF Downloads 387
4565 Implicit and Explicit Mechanisms of Emotional Contagion
Authors: Andres Pinilla Palacios, Ricardo Tamayo
Abstract:
Emotional contagion is characterized as an automatic tendency to synchronize behaviors, which facilitates emotional convergence among humans. It might thus play a pivotal role in understanding the dynamics of key social interactions. However, little research has investigated its potential mechanisms. We suggest two complementary but independent processes that may underlie emotional contagion: the efficient contagion hypothesis, based on fast and implicit bottom-up processes, modulated by familiarity and the spread of activation in the emotional associative networks of memory; and the emotional contrast hypothesis, based on slow and explicit top-down processes guided by deliberate appraisal and hypothesis testing. In order to assess these two hypotheses, an experiment with 39 participants was conducted. In the first phase, participants were induced (between groups) into an emotional state (positive, neutral, or negative) using a standardized video taken from the FilmStim database. In the second phase, participants classified and rated (within subject) the emotional state of 15 faces (5 for each emotional state) taken from the POFA database. In the third phase, all participants were returned to a baseline emotional state using the same neutral video used in the first phase. In a fourth phase, participants classified and rated a new set of 15 faces. The accuracy of the identification and rating of emotions was partially explained by the efficient contagion hypothesis, while the speed with which these judgments were made was partially explained by the emotional contrast hypothesis. However, the results are ambiguous, so a follow-up experiment is proposed in which emotional expressions and activation of the sympathetic system will be measured using EMG and EDA, respectively.
Keywords: electromyography, emotional contagion, emotional valence, identification of emotions, imitation
Procedia PDF Downloads 316
4564 Water Body Detection and Estimation from Landsat Satellite Images Using Deep Learning
Authors: M. Devaki, K. B. Jayanthi
Abstract:
The identification of water bodies from satellite images has recently received a great deal of attention. Different methods have been developed to distinguish water bodies in satellite images that vary in time and space, and urban water identification arises in numerous applications that demand a high degree of certainty. There has been a sharp rise in the use of satellite images to map natural resources, including urban water bodies and forests, during the past several years, because water and forest resources depend on each other so heavily that ongoing monitoring of both is essential to their sustainable management. The relevant elements of satellite pictures have been extracted using a variety of techniques, including machine learning. In this work, a convolutional neural network (CNN) architecture is created that can classify a superpixel from input data of a complex metropolitan scene into one of two classes: containing water or not. The deep learning technique of CNNs has advanced tremendously in a variety of vision-related tasks; a CNN can improve classification performance by learning the spectral-spatial regularities of the input data and extracting deep features hierarchically from the raw images. The water body area is then estimated using the resolution of the satellite image. Experimental results demonstrate that the suggested method outperformed conventional approaches in terms of water extraction accuracy from remote-sensing images, with an average overall accuracy of 97%.
Keywords: water body, deep learning, satellite images, convolutional neural network
Procedia PDF Downloads 89