Search results for: signal classification
3183 Cyclostationary Gaussian Linearization for Analyzing Nonlinear System Response Under Sinusoidal Signal and White Noise Excitation
Authors: R. J. Chang
Abstract:
A cyclostationary Gaussian linearization method is formulated for investigating the time-averaged response of a nonlinear system under sinusoidal signal and white noise excitation. The quantitative measures of cyclostationary mean, variance, spectrum of mean amplitude, and mean power spectral density of noise are analyzed. The qualitative response behaviors of stochastic jump and bifurcation are investigated. The validity of the present approach in predicting the quantitative and qualitative statistical responses is supported by Monte Carlo simulations. The present analysis can be derived directly by solving nonlinear algebraic equations, without imposing restrictive analytical conditions. The analytical solution gives reliable quantitative and qualitative predictions of the mean and noise response for the Duffing system subjected to both sinusoidal signal and white noise excitation.
Keywords: cyclostationary, Duffing system, Gaussian linearization, sinusoidal, white noise
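To make the Monte Carlo check mentioned in the abstract concrete, the sketch below integrates a Duffing oscillator under combined sinusoidal and white-noise excitation with a simple Euler-Maruyama scheme and tracks the ensemble (cyclostationary) mean; all parameter values are assumed for illustration, not taken from the paper.

```python
import numpy as np

# Euler-Maruyama simulation of a Duffing oscillator
#   x'' + c*x' + k*x + e*x^3 = A*sin(w*t) + sqrt(D)*xi(t)
# used only as a Monte Carlo reference for the time-averaged response.
c, k, e = 0.2, 1.0, 1.0           # damping, linear and cubic stiffness (assumed)
A, w, D = 0.3, 1.2, 0.05          # forcing amplitude/frequency, noise intensity (assumed)
dt, n_steps, n_paths = 1e-3, 50_000, 200

rng = np.random.default_rng(0)
x = np.zeros(n_paths)
v = np.zeros(n_paths)
x_mean = np.zeros(n_steps)

for i in range(n_steps):
    t = i * dt
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)    # Wiener increments
    a = -c * v - k * x - e * x**3 + A * np.sin(w * t)
    x = x + v * dt
    v = v + a * dt + np.sqrt(D) * dW
    x_mean[i] = x.mean()                          # ensemble (cyclostationary) mean

# amplitude of the ensemble mean over the last forcing period
last_period = int(2 * np.pi / (w * dt))
print("late-time mean amplitude ~", np.abs(x_mean[-last_period:]).max())
```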
Procedia PDF Downloads 489
3182 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers
Authors: C. V. Aravinda, H. N. Prakash
Abstract:
In this paper, we present a fusion approach and the state of the art pertaining to South Indian language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized to perform the text identification correctly. The second step involves extracting relevant and informative features. The third step implements the classification decision. The three stages involved are data acquisition and preprocessing, feature extraction, and classification. Here we concentrate on two techniques to obtain features: feature extraction and feature selection. Edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. The edge-hinge distribution is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the center pixel of the window is on, the directions of the two edge fragments (i.e., connected sequences of pixels) emerging from this center pixel are measured and stored as pairs. A joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue because different approaches use different varieties of features. Therefore, our study focuses on handwriting recognition based on feature selection to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy.
Keywords: word segmentation and recognition, character recognition, optical character recognition, handwritten character recognition, South Indian languages
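A rough sketch of the edge-hinge idea described above, assuming an edge-detected binary image stored as a NumPy array; the window size, the direction quantization, and the way the two fragments are picked are simplifications, not the authors' exact procedure.

```python
import numpy as np

def edge_hinge_histogram(edges: np.ndarray, half: int = 3, n_dirs: int = 12) -> np.ndarray:
    """Rough joint distribution of direction pairs around 'on' edge pixels (simplified)."""
    hist = np.zeros((n_dirs, n_dirs))
    h, w = edges.shape
    for r, c in zip(*np.nonzero(edges)):
        if r < half or c < half or r >= h - half or c >= w - half:
            continue                                    # skip windows that leave the image
        win = edges[r - half:r + half + 1, c - half:c + half + 1].copy()
        win[half, half] = 0                             # ignore the centre pixel itself
        ys, xs = np.nonzero(win)
        if len(ys) < 2:
            continue
        ang = np.arctan2(ys - half, xs - half)          # directions of edge pixels from the centre
        bins = ((ang + np.pi) / (2 * np.pi) * n_dirs).astype(int) % n_dirs
        d1, d2 = bins.min(), bins.max()                 # crude stand-in for the two emerging fragments
        hist[d1, d2] += 1
    total = hist.sum()
    return hist / total if total else hist              # joint probability distribution of the pairs
```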
Procedia PDF Downloads 494
3181 Decision Making System for Clinical Datasets
Authors: P. Bharathiraja
Abstract:
A computer-aided decision making system is used to enhance the diagnosis and prognosis of diseases and to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning in decision making systems. A fuzzy rule based inference technique can be used for classification in order to incorporate human reasoning in the decision making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease data set from the University of California at Irvine (UCI) Machine Learning Repository. The data were split into training and testing data using a hold-out approach. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule based classification approaches. Furthermore, it is observed that the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and also resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making.
Keywords: decision making, data mining, normalization, fuzzy rule, classification
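A minimal sketch of the preprocessing chain the abstract describes (mean/mode imputation, min-max normalization, equal-width partitioning into candidate fuzzy intervals); the column handling, bin count, and file name are assumptions.

```python
import pandas as pd

def preprocess(df: pd.DataFrame, n_bins: int = 5) -> pd.DataFrame:
    out = df.copy()
    for col in list(out.columns):
        if out[col].dtype.kind in "if":                     # numeric: impute with the mean
            out[col] = out[col].fillna(out[col].mean())
            lo, hi = out[col].min(), out[col].max()
            span = hi - lo
            out[col] = (out[col] - lo) / (span if span else 1.0)   # min-max normalization to [0, 1]
            # equal-width partitioning into candidate fuzzy intervals
            out[col + "_bin"] = pd.cut(out[col], bins=n_bins, labels=False)
        else:                                               # categorical: impute with the mode
            out[col] = out[col].fillna(out[col].mode().iloc[0])
    return out

# hypothetical usage with the UCI heart disease attributes
# df = pd.read_csv("heart.csv"); train = preprocess(df)
```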
Procedia PDF Downloads 517
3180 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation
Authors: Somayeh Komeylian
Abstract:
The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, the antenna array output modeling involves numerous parameters, including noise samples, signal waveform, signal directions, signal number, and signal-to-noise ratio (SNR), and thereby the methods of DoA estimation rely heavily on the generalization characteristic for establishing a large number of training data sets. Hence, we have represented two different optimization models of the DoA estimation: (1) the implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) the optimization method of the deep neural network (DNN) radial basis function (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for the three classes. However, the accuracy and robustness of the DoA estimation are still highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effect, and background radiation, and thereby the method may fail to provide high precision for the DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for the DoA estimation to overcome the limitations of the non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model have confirmed the better performance of the DoA estimation compared with the LS-SVM algorithm. Consequently, we have evaluated the performance of the two aforementioned optimization methods for the DoA estimation using the mean squared error (MSE).
Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, MSE
Procedia PDF Downloads 100
3179 Dual-Channel Reliable Breast Ultrasound Image Classification Based on Explainable Attribution and Uncertainty Quantification
Authors: Haonan Hu, Shuge Lei, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Jijun Tang
Abstract:
This paper focuses on the classification task of breast ultrasound images and conducts research on the reliability measurement of classification results. A dual-channel evaluation framework was developed based on the proposed inference reliability and predictive reliability scores. For the inference reliability evaluation, human-aligned and doctor-agreed inference rationales based on the improved feature attribution algorithm SP-RISA are applied. Uncertainty quantification is used to evaluate the predictive reliability via test-time enhancement. The effectiveness of this reliability evaluation framework has been verified on the breast ultrasound clinical dataset YBUS, and its robustness is verified on the public dataset BUSI. The expected calibration errors on both datasets are significantly lower than those of traditional evaluation methods, which proves the effectiveness of the proposed reliability measurement.
Keywords: medical imaging, ultrasound imaging, XAI, uncertainty measurement, trustworthy AI
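Because the framework is judged by its expected calibration error, a minimal ECE computation is sketched below; the bin count and variable names are assumptions, not the paper's implementation.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins: int = 10) -> float:
    """ECE: weighted average gap between confidence and accuracy over confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # bin weight times |accuracy - mean confidence| inside the bin
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```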
Procedia PDF Downloads 101
3178 A Multi-Output Network with U-Net Enhanced Class Activation Map and Robust Classification Performance for Medical Imaging Analysis
Authors: Jaiden Xuan Schraut, Leon Liu, Yiqiao Yin
Abstract:
Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, and they raise concerns over the trust and reliability of a model whose results cannot be explained. In order to gain local insight into cancerous regions, separate tasks such as image segmentation need to be implemented to aid doctors in treating patients, which doubles the training time and cost and renders the diagnosis system inefficient and difficult for the public to accept. To tackle this issue and drive AI-first medical solutions further, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional convolutional neural network (CNN) module for an auxiliary classification output. Class activation maps (CAMs) are a method of providing insight into the feature maps that lead to a convolutional neural network's classification; in the case of lung diseases, the region of interest is enhanced by U-Net-assisted CAM visualization. Therefore, our proposed model combines image segmentation models and classifiers to crop out only the lung region of a chest X-ray's class activation map, providing a visualization that improves explainability while generating classification results simultaneously, which builds trust in AI-led diagnosis systems. The proposed U-Net model achieves 97.61% accuracy and a Dice coefficient of 0.97 on testing data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs.
Keywords: multi-output network model, U-Net, class activation map, image classification, medical imaging analysis
Procedia PDF Downloads 202
3177 LEDs Based Indoor Positioning by Distances Derivation from Lambertian Illumination Model
Authors: Yan-Ren Chen, Jenn-Kaie Lain
Abstract:
This paper proposes a novel indoor positioning algorithm based on visible light communications, implemented with light-emitting diode fixtures. In the proposed positioning algorithm, the distances between the light-emitting diode fixtures and the mobile terminal are derived under the assumption of an ideal Lambertian optical radiation model, and the trilateration positioning method is then applied to obtain the coordinates of the mobile terminal. The proposed positioning algorithm obtains distance information directly from the optical signal model, and therefore the statistical distribution of received signal strength at different positions in the interior space does not need to be pre-established. Simulation results show that the proposed indoor positioning algorithm can provide accurate location coordinate estimation.
Keywords: indoor positioning, received signal strength, trilateration, visible light communications
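A rough sketch of the two steps described above: inverting an ideal Lambertian channel model to turn received power into a distance, then trilaterating from three LED fixtures; all constants (Lambertian order, detector area, fixture height) are placeholders rather than values from the paper.

```python
import numpy as np

def distance_from_rss(p_rx, p_tx=1.0, m=1, area=1e-4, h=2.5):
    """Invert the ideal Lambertian model for a receiver facing straight up under overhead fixtures:
    P_rx = P_tx*(m+1)*A/(2*pi) * h**(m+1) / d**(m+3), solved for d."""
    c = p_tx * (m + 1) * area / (2 * np.pi)
    return (c * h ** (m + 1) / p_rx) ** (1.0 / (m + 3))

def trilaterate(anchors, dists, h=2.5):
    """Least-squares 2-D position from three (or more) fixture positions and slant distances."""
    r = np.sqrt(np.maximum(np.asarray(dists, dtype=float) ** 2 - h ** 2, 0.0))  # horizontal ranges
    x0, y0 = anchors[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], r[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r[0] ** 2 - ri ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    sol, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return sol  # estimated (x, y) of the mobile terminal

# hypothetical fixture layout (metres) and received powers
fixtures = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
d = [distance_from_rss(p) for p in (2.0e-6, 1.1e-6, 1.5e-6)]
print(trilaterate(fixtures, d))
```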
Procedia PDF Downloads 411
3176 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing
Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca de Marchi
Abstract:
This document presents the approach of using compressed sensing for signal encoding and information transfer within a guided wave sensor network comprised of specially designed frequency steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes, and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a certain direction, according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on compressed sensing matching pursuit and quadrature amplitude modulation (QAM). After the signal is encoded into binary characters, the information is transmitted between the nodes in the network. The message reaches the last node, where it is finally decoded and processed to be used for damage detection and localization purposes. The main aim of the investigation is to determine the location of detected damage using the reconstructed signals. The study demonstrates that the special steerable capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.
Keywords: data compression, ultrasonic communication, guided waves, FEM analysis
Procedia PDF Downloads 124
3175 Fault Prognostic and Prediction Based on the Importance Degree of Test Point
Authors: Junfeng Yan, Wenkui Hou
Abstract:
Prognostics and Health Management (PHM) is a technology to monitor equipment status and predict impending faults. It is used to predict potential faults, provide fault information, and track trends of system degradation by capturing characteristic signals, so how to detect characteristic signals is very important. The selection of test points plays a key role in detecting characteristic signals. Traditionally, a dependency model is used to select the test point containing the most detection information. However, for large, complicated systems, the dependency model is sometimes not easy to build, and the greater difficulty is how to calculate the matrix. Based on this premise, the paper provides a highly effective method to select test points without a dependency model. The signal flow model is a diagnosis model based on failure modes, which focuses on the system's failure modes and the dependency relationship between the test points and faults. In the signal flow model, fault information can flow from the beginning to the end. According to the signal flow model, we can find the location and structure information of every test point and module. We break the signal flow model up into serial and parallel parts to obtain the final relationship function between the system's testability or prediction metrics and the test points. Further, through partial derivative operations, we can obtain every test point's importance degree in determining the testability metrics, such as undetected rate, false alarm rate, and untrusted rate. This makes it possible to install test points according to the real requirements and also provides a solid foundation for Prognostics and Health Management. Practical engineering applications show that the method is very efficient.
Keywords: false alarm rate, importance degree, signal flow model, undetected rate, untrusted rate
Procedia PDF Downloads 377
3174 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes and that the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources, and explainability of classification results. However, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which are important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
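A minimal sketch of the k-mer representation step described above, counting overlapping k-mers per genome and stacking the counts into a feature matrix for a downstream classifier; the value of k and the toy sequences are illustrative, and the full 4^k vocabulary is only practical for small k.

```python
from collections import Counter
from itertools import product
import numpy as np

def kmer_counts(seq: str, k: int) -> Counter:
    """Count overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_matrix(seqs, k: int) -> np.ndarray:
    """Rows = isolates, columns = all 4**k possible k-mers (feasible only for small k)."""
    vocab = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = [kmer_counts(s, k) for s in seqs]
    return np.array([[c[km] for km in vocab] for c in counts], dtype=float)

# toy example, not real MTB genomes
X = kmer_matrix(["ACGTACGTGG", "TTGCACGTAA"], k=3)
print(X.shape)   # (2, 64)
```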
Procedia PDF Downloads 167
3173 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes and that the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources, and explainability of classification results. However, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which are important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
Procedia PDF Downloads 159
3172 Surface Hole Defect Detection of Rolled Sheets Based on Pixel Classification Approach
Authors: Samira Taleb, Sakina Aoun, Slimane Ziani, Zoheir Mentouri, Adel Boudiaf
Abstract:
Rolling is a pressure treatment technique that modifies the shape of steel ingots or billets between rotating rollers. During this process, defects may form on the surface of the rolled sheets and are likely to affect the performance and quality of the finished product. In our study, we developed a method for detecting surface hole defects using a pixel classification approach. This work includes several steps. First, we performed image preprocessing to delimit areas with and without hole defects on the sheet image. Then, we computed the histograms of each area to generate the gray-level membership intervals of the pixels that characterize each area. As we noticed an intersection between the gray-level intervals of the two areas, we finally performed a learning step, based on a series of detection tests, to refine the membership intervals of each area and to choose the defect detection criterion in order to optimize the recognition of the surface hole.
Keywords: classification, defect, surface, detection, hole
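A small sketch of the pixel-classification rule the abstract describes: gray-level membership intervals, learned from histograms of defect and defect-free regions, applied pixel by pixel; the interval bounds and detection criterion shown are placeholders that the learning step would refine.

```python
import numpy as np

# placeholder gray-level membership intervals (to be refined by the learning step)
HOLE_INTERVAL = (0, 60)        # dark pixels typical of a surface hole (assumed)
SOUND_INTERVAL = (61, 255)     # defect-free sheet surface (assumed)

def classify_pixels(gray: np.ndarray) -> np.ndarray:
    """Return a binary mask: True where the gray level falls inside the hole interval."""
    lo, hi = HOLE_INTERVAL
    return (gray >= lo) & (gray <= hi)

def detect_hole(gray: np.ndarray, min_pixels: int = 50) -> bool:
    """Simple detection criterion: enough candidate pixels to declare a hole defect."""
    return int(classify_pixels(gray).sum()) >= min_pixels
```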
Procedia PDF Downloads 15
3171 Classification of EEG Signals Based on Dynamic Connectivity Analysis
Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović
Abstract:
In this article, the classification of target letters is performed using data from the EEG P300 speller paradigm. Neural networks trained with the results of dynamic connectivity analysis between different brain regions are used for classification. Dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods, such as the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding window analysis with a wide window size, and the high susceptibility to noise encountered in constant sliding window analysis with a narrow window size. It overcomes these shortcomings by dynamically adjusting the window size using the RICI rule. This method extracts information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed based on the same analysis method. As far as we know, through this research, we have shown for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients
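For reference, a brief sketch of the connectivity measure itself, the imaginary part of the complex Pearson correlation coefficient between two analytic EEG signals; the RICI-based adaptive window selection is omitted and a fixed window is used here purely for illustration.

```python
import numpy as np
from scipy.signal import hilbert

def imag_cpcc(x: np.ndarray, y: np.ndarray) -> float:
    """Imaginary part of the complex Pearson correlation of two EEG channels."""
    ax, ay = hilbert(x), hilbert(y)              # analytic signals
    ax = ax - ax.mean()
    ay = ay - ay.mean()
    c = np.mean(ax * np.conj(ay)) / (np.std(ax) * np.std(ay))
    return float(np.imag(c))

def windowed_connectivity(x, y, win=128, step=32):
    """Fixed-window example; the paper instead adapts the window size with the RICI rule."""
    return [imag_cpcc(x[i:i + win], y[i:i + win])
            for i in range(0, len(x) - win + 1, step)]
```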
Procedia PDF Downloads 214
3170 Accuracy Analysis of the American Society of Anesthesiologists Classification Using ChatGPT
Authors: Jae Ni Jang, Young Uk Kim
Abstract:
Background: Chat Generative Pre-training Transformer-3 (ChatGPT; OpenAI, San Francisco, California) is an artificial intelligence chatbot based on a large language model designed to generate human-like text. As the usage of ChatGPT is increasing among less knowledgeable patients, medical students, and anesthesia and pain medicine residents or trainees, we aimed to evaluate the accuracy of ChatGPT-3 responses to questions about the American Society of Anesthesiologists (ASA) classification based on patients' underlying diseases and to assess the quality of the generated responses. Methods: A total of 47 questions were submitted to ChatGPT using textual prompts. The questions were designed for ChatGPT-3 to provide answers regarding ASA classification in response to common underlying diseases frequently observed in adult patients. In addition, we created 18 questions regarding the ASA classification for pediatric patients and pregnant women. The accuracy of ChatGPT's responses was evaluated by cross-referencing with Miller's Anesthesia, Morgan & Mikhail's Clinical Anesthesiology, and the American Society of Anesthesiologists' ASA Physical Status Classification System (2020). Results: Out of the 47 questions pertaining to adults, ChatGPT-3 provided correct answers for only 23, resulting in an accuracy rate of 48.9%. Furthermore, the responses provided by ChatGPT-3 regarding children and pregnant women were mostly inaccurate, as indicated by a 28% accuracy rate (5 out of 18). Conclusions: ChatGPT provided correct responses to questions relevant to the daily clinical routine of anesthesiologists in approximately half of the cases, while the remaining responses contained errors. Therefore, caution is advised when using ChatGPT to retrieve anesthesia-related information. Although ChatGPT may not yet be suitable for clinical settings, we anticipate significant improvements in ChatGPT and other large language models in the near future. Regular assessments of ChatGPT's ASA classification accuracy are essential due to the evolving nature of ChatGPT as an artificial intelligence entity. This is especially important because ChatGPT has a clinically unacceptable rate of error and hallucination, particularly for pediatric patients and pregnant women. The methodology established in this study may be used to continue evaluating ChatGPT.
Keywords: American Society of Anesthesiologists, artificial intelligence, Chat Generative Pre-training Transformer-3, ChatGPT
Procedia PDF Downloads 47
3169 The Impact on the Composition of Survey Refusals' Demographic Profile When Implementing Different Classifications
Authors: Eva Tsouparopoulou, Maria Symeonaki
Abstract:
The internationally documented decline in survey response rates over the last two decades is mainly attributed to refusals. In fieldwork, a refusal may be obtained not only from the respondent himself/herself, but also from other sources on the respondent's behalf, such as other household members, apartment building residents or administrator(s), and neighborhood residents. In this paper, we investigate how the composition of the demographic profile of survey refusals changes when different classifications are implemented, and the classification issues arising from that. The analysis is based on the 2002-2018 European Social Survey (ESS) datasets for Belgium, Germany, and the United Kingdom. For these three countries, the number of selected sample units coded as a type of refusal in all nine rounds under investigation was large enough to meet the purposes of the analysis. The results indicate the existence of four different possible classifications that can be implemented, and the significance of choosing the one that strengthens the contrasts between the demographic profiles of the different types of respondents. Since the foundation of quantitative social research lies in the triptych of definition, classification, and measurement, this study aims to identify the multiplicity of the definition of survey refusals as a methodological tool for the continually growing research on non-response.
Keywords: non-response, refusals, European Social Survey, classification
Procedia PDF Downloads 85
3168 Space Vector Pulse Width Modulation Based Design and Simulation of a Three-Phase Voltage Source Converter Systems
Authors: Farhan Beg
Abstract:
A space vector based pulse width modulation control technique for the three-phase PWM converter is proposed in this paper. The proposed control scheme is based on a synchronous reference frame model. High performance and efficiency are obtained with regard to the DC bus voltage and the power factor of the PWM rectifier, thus leading to low losses. MATLAB/Simulink is used as the simulation platform, and a Simulink model is presented in the paper. The results show that the proposed model demonstrates better performance and properties compared with the traditional SPWM method and that the method drastically improves the dynamic performance of the closed loop. For the pulse width modulation, a sine signal is the reference waveform and a triangle waveform is the carrier. When the value of the sine signal is larger than that of the triangle signal, the pulse output goes high; when the triangle signal is higher than the sine signal, the pulse output goes low. The SPWM output is changed by varying the modulation index and the carrier frequency used in this system to produce more pulses per period. When more pulses are produced, the output voltage has lower harmonic content and the resolution increases.
Keywords: power factor, SVPWM, PWM rectifier, SPWM
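A short sketch of the sine-triangle comparison described above, generating a PWM pulse train from a sinusoidal reference and a triangular carrier; the modulation index, frequencies, and sample rate are arbitrary choices, not values from the paper.

```python
import numpy as np
from scipy.signal import sawtooth

fs = 100_000                       # sample rate in Hz (assumed)
t = np.arange(0, 0.04, 1 / fs)     # two 50 Hz fundamental periods
m_index = 0.8                      # modulation index (assumed)
f_ref, f_carrier = 50, 2_000       # reference and carrier frequencies in Hz (assumed)

reference = m_index * np.sin(2 * np.pi * f_ref * t)        # sine reference waveform
carrier = sawtooth(2 * np.pi * f_carrier * t, width=0.5)   # triangle carrier waveform

# the pulse goes high whenever the sine reference exceeds the triangle carrier
pwm = (reference > carrier).astype(float)
print("duty ratio over the window:", pwm.mean())
```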
Procedia PDF Downloads 335
3167 Sleep Apnea Hypopnea Syndrome Diagnosis Using Advanced ANN Techniques
Authors: Sachin Singh, Thomas Penzel, Dinesh Nandan
Abstract:
Accurate diagnosis of sleep apnea hypopnea syndrome is a difficult problem for a human expert because of variability among persons and unwanted noise. This paper proposes the diagnosis of Sleep Apnea Hypopnea Syndrome (SAHS) using airflow, ECG, pulse, and SaO2 signals. The features of each of these signal types are extracted using statistical methods and ANN learning methods. These extracted features are used to approximate the patient's Apnea Hypopnea Index (AHI) using sample signals in the model. Advanced signal processing is also applied to the snore sound signal to locate snore events, and the SaO2 signal is used to confirm whether a detected snore event is true or noise. Finally, the Apnea Hypopnea Index (AHI) is calculated from the true snore events detected. Experimental results show that the sensitivity can reach up to 96% and the specificity up to 96% for AHI greater than or equal to 5.
Keywords: neural network, AHI, statistical methods, autoregressive models
Procedia PDF Downloads 119
3166 Disease Level Assessment in Wheat Plots Using a Residual Deep Learning Algorithm
Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell
Abstract:
The assessment of disease levels in crop fields is an important and time-consuming task that generally relies on the expert knowledge of trained individuals. Image classification in agriculture problems has historically been based on classical machine learning strategies that make use of hand-engineered features on top of a classification algorithm. This approach tends not to produce results with high accuracy and generalization to the classes classified by the system when the nature of the elements has significant variability. The advent of deep convolutional neural networks has revolutionized the field of machine learning, especially in computer vision tasks. These networks have great learning capacity and have been applied successfully to image classification and object detection tasks in recent years. The objective of this work was to propose a new method based on deep convolutional neural networks for the task of disease level monitoring. Common RGB images of winter wheat were obtained during a growing season. Five categories of disease level presence were produced, in collaboration with agronomists, for the algorithm classification. Disease level assessments performed by experts provided ground truth data for the disease score of the same winter wheat plots where the RGB images were acquired. The system had an overall accuracy of 84% on the discrimination of the disease level classes.
Keywords: crop disease assessment, deep learning, precision agriculture, residual neural networks
Procedia PDF Downloads 331
3165 BER Analysis of Energy Detection Spectrum Sensing in Cognitive Radio Using GNU Radio
Authors: B. Siva Kumar Reddy, B. Lakshmi
Abstract:
Cognitive radio is an emerging technology that enables effective usage of the spectrum. Energy detector based sensing is the most widely used spectrum sensing strategy. It is also highly generic, as the receiver does not require any information about the primary user's signals, channel data, or even the type of modulation. This paper presents the implementation of energy detection sensing for an AM (amplitude modulated) signal at 710 kHz, an FM (frequency modulated) signal at 103.45 MHz (local station frequency), a Wi-Fi signal at 2.4 GHz, and WiMAX signals at 6 GHz. The OFDM/OFDMA based WiMAX physical layer with convolutional channel coding is implemented using a USRP N210 (Universal Software Radio Peripheral) and GNU Radio based software defined radio (SDR). Test results demonstrate the increase in BER (bit error rate) with channel noise, and BER performance is analyzed for different Eb/N0 (energy per bit to noise power spectral density ratio) values.
Keywords: BER, cognitive radio, GNU Radio, OFDM, SDR, WiMAX
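A minimal sketch of the energy detector itself, independent of the GNU Radio/USRP implementation in the paper: average the squared magnitude of the received samples and compare it against a noise-based threshold; the noise variance, threshold factor, and toy signal are assumptions.

```python
import numpy as np

def energy_detect(x: np.ndarray, noise_var: float, threshold_factor: float = 1.5) -> bool:
    """Declare the band occupied if the average sample energy exceeds a noise-based threshold."""
    test_statistic = np.mean(np.abs(x) ** 2)
    return bool(test_statistic > threshold_factor * noise_var)

# toy check with a complex baseband tone buried in noise
rng = np.random.default_rng(1)
n, noise_var = 4096, 1.0
noise = np.sqrt(noise_var / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
tone = np.exp(2j * np.pi * 0.1 * np.arange(n))  # unit-power primary-user signal

print(energy_detect(noise, noise_var))          # expected: False (band idle)
print(energy_detect(noise + tone, noise_var))   # expected: True (primary user present)
```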
Procedia PDF Downloads 500
3164 Design and Implementation of a 94 GHz CMOS Double-Balanced Up-Conversion Mixer for 94 GHz Imaging Radar Sensors
Authors: Yo-Sheng Lin, Run-Chi Liu, Chien-Chu Ji, Chih-Chung Chen, Chien-Chin Wang
Abstract:
A W-band double-balanced mixer for direct up-conversion using standard 90 nm CMOS technology is reported. The mixer comprises an enhanced double-balanced Gilbert cell with PMOS negative resistance compensation for conversion gain (CG) enhancement and current injection for power consumption reduction and linearity improvement, a Marchand balun for converting the single LO input signal to differential signal, another Marchand balun for converting the differential RF output signal to single signal, and an output buffer amplifier for loading effect suppression, power consumption reduction and CG enhancement. The mixer consumes low power of 6.9 mW and achieves LO-port input reflection coefficient of -17.8~ -38.7 dB and RF-port input reflection coefficient of -16.8~ -27.9 dB for frequencies of 90~100 GHz. The mixer achieves maximum CG of 3.6 dB at 95 GHz, and CG of 2.1±1.5 dB for frequencies of 91.9~99.4 GHz. That is, the corresponding 3 dB CG bandwidth is 7.5 GHz. In addition, the mixer achieves LO-RF isolation of 36.8 dB at 94 GHz. To the authors' knowledge, the CG, LO-RF isolation and power dissipation results are the best data ever reported for a 94 GHz CMOS/BiCMOS up-conversion mixer.
Keywords: CMOS, W-band, up-conversion mixer, conversion gain, negative resistance compensation, output buffer amplifier
Procedia PDF Downloads 531
3163 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder
Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh
Abstract:
In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for the identification of activities in the human brain is still a big challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities and similar pictures for the same activity is very difficult, especially as the number of activities grows. The accuracy of a classifier depends on the quality of this feature set. Further, a larger number of features results in high computational complexity, while a smaller number of features compromises performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem and to normalize the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layer. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than the other two in terms of classification accuracy.
Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization
Procedia PDF Downloads 114
3162 Detection of Cytotoxicity of Green Synthesized Silver, Gold, and Silver/Gold Bimetallic on Baby Hamster Kidney-21 Cells Using MTT Assay
Authors: Naila Sher, Mushtaq Ahmed, Nadia Mushtaq, Rahmat Ali Khan
Abstract:
In cancer therapy, nanoparticles (NPs) may be applied by injection into the veins of humans. This brings them into contact with white blood cells (WBCs) and red blood cells (RBCs) in the bloodstream before they reach their main target, the cancer cells. Here, the possible effects of silver, gold, and silver/gold bimetallic NPs (Ag, Au, and Ag/Au BNPs) on baby hamster kidney-21 (BHK-21) cells were studied by MTT assay. The Ag, Au, and Ag/Au BNPs (bimetallic nanoparticles) were synthesized using Hippeastrum hybridum (HH) extract. These NPs were characterized by UV-visible spectroscopy, FT-IR, XRD, EDX, and SEM analysis. XRD analysis confirmed the crystal structure, with average sizes of 13.3, 10.72, and 8.34 nm for the Ag, Au, and Ag/Au BNPs, respectively. SEM showed that the Ag, Au, and Ag/Au BNPs had irregular morphologies, with calculated sizes of 40, 30, and 20 nm, respectively. EDX spectrometry confirmed the presence of an elemental Ag signal of 22.75% for the Ag NPs, an Au signal of 48.08% for the Au NPs, and Ag and Au signals of 12% and 38.26%, respectively, for the Ag/Au BNPs. The BHK-21 cells were incubated in the presence of doxorubicin, plant extract, Ag, Au, and Ag/Au BNPs. The cytotoxic effects could be observed in a dose-dependent manner; doxorubicin and Ag/Au BNPs were more toxic than the plant extract, Ag, and Au NPs. It is demonstrated that the NPs interact with BHK-21 cells and significantly reduce cell viability in a concentration-dependent manner. However, to reduce the potential threats of NPs, further studies are recommended.
Keywords: Hippeastrum hybridum, nanoparticle, BHK-21 cells
Procedia PDF Downloads 133
3161 Population Dynamics and Land Use/Land Cover Change on the Chilalo-Galama Mountain Range, Ethiopia
Authors: Yusuf Jundi Sado
Abstract:
Changes in land use are mostly attributed to human actions and result in negative impacts on biodiversity and ecosystem functions. This study aims to analyze the dynamics of land use and land cover changes on the Chilalo-Galama Mountain Range, Ethiopia, for sustainable natural resource planning and management. This study used Landsat 5 Thematic Mapper (TM) data for 1986 and 2001 and Landsat 8 (OLI) data for 2017. Additionally, data from the Central Statistics Agency on human population growth were analyzed. The Semi-Automatic Classification Plugin (SCP) in QGIS 3.2.3 software was used for image classification. Global positioning system, field observations, and focus group discussions were used for ground verification. Land use/land cover (LU/LC) change analysis was performed using maximum likelihood supervised classification, and changes were calculated for the 1986-2001, 2001-2017, and 1986-2017 periods. The results show that agricultural land increased from 27.85% (1986) to 44.43% and 51.32% in 2001 and 2017, respectively, with overall accuracies of 92% (1986), 90.36% (2001), and 88% (2017). On the other hand, forests decreased from 8.51% (1986) to 7.64% (2001) and 4.46% (2017), and grassland decreased from 37.47% (1986) to 15.22% and 15.01% in 2001 and 2017, respectively. The change matrix indicates that, for the years 1986-2017, the largest gain in agricultural land was obtained from grassland. The matrix also shows that shrubland gained land from agricultural land, afro-alpine, and forest land. Population dynamics is found to be one of the major driving forces for the LU/LC changes in the study area.
Keywords: Landsat, LU/LC change, Semi-Automatic Classification Plugin, population dynamics, Ethiopia
Procedia PDF Downloads 85
3160 Indoor Localization Algorithm and Appropriate Implementation Using Wireless Sensor Networks
Authors: Adeniran K. Ademuwagun, Alastair Allen
Abstract:
The dependence between RSS and distance in an enclosed environment is an important consideration because it can influence the reliability of any localization algorithm founded on RSS. Several algorithms effectively reduce the variance of RSS to improve localization accuracy and performance. Our proposed algorithm essentially avoids this pitfall and consequently shows high adaptability in the face of erratic radio signals. Using three anchors in close proximity to each other, we are able to establish that RSS can be used as a reliable indicator for localization with an acceptable degree of accuracy. Inherent in this concept is the ability of each prospective anchor to validate (guarantee) the position or the proximity of the other two anchors involved in the localization, and vice versa. This procedure ensures that the uncertainties of radio signals due to multipath effects in enclosed environments are minimized. A major driver of this idea is the implicit topological relationship among sensors due to raw radio signal strength. The algorithm is an area based algorithm; however, it does not trade accuracy for precision (i.e., the size of the returned area).
Keywords: anchor nodes, centroid algorithm, communication graph, radio signal strength
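One way to turn the three-anchor idea into coordinates is an RSS-weighted centroid, where stronger (closer) anchors pull the estimate toward themselves; this is a generic sketch in the spirit of the centroid algorithm named in the keywords, not necessarily the authors' exact procedure, and the weighting exponent, anchor layout, and readings are assumptions.

```python
import numpy as np

def weighted_centroid(anchor_xy: np.ndarray, rssi_dbm: np.ndarray, exponent: float = 2.0):
    """Estimate position as an RSS-weighted centroid of the anchors."""
    weights = (10 ** (rssi_dbm / 10.0)) ** (1.0 / exponent)  # stronger RSS -> larger weight
    weights = weights / weights.sum()
    return weights @ np.asarray(anchor_xy, dtype=float)

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])   # hypothetical anchor layout (m)
rssi = np.array([-52.0, -60.0, -57.0])                      # hypothetical readings (dBm)
print(weighted_centroid(anchors, rssi))                     # estimated (x, y)
```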
Procedia PDF Downloads 508
3159 Clinical Feature Analysis and Prediction on Recurrence in Cervical Cancer
Authors: Ravinder Bahl, Jamini Sharma
Abstract:
The paper demonstrates an analysis of cervical cancer based on a probabilistic model. It involves a technique for classification and prediction by recognizing the typical and diagnostically most important test features relating to cervical cancer. The main contributions of the research include predicting the probability of recurrence in no-recurrence (first-time detection) cases. A combination of conventional statistical and machine learning tools is applied for the analysis. An experimental study with real data demonstrates the feasibility and potential of the proposed approach.
Keywords: cervical cancer, recurrence, no recurrence, probabilistic, classification, prediction, machine learning
Procedia PDF Downloads 360
3158 Portable System for the Acquisition and Processing of Electrocardiographic Signals to Obtain Different Metrics of Heart Rate Variability
Authors: Daniel F. Bohorquez, Luis M. Agudelo, Henry H. León
Abstract:
Heart rate variability (HRV) is defined as the temporal variation between heartbeats or RR intervals (the distance between R waves in an electrocardiographic signal). This distance is currently a recognized biomarker. With the analysis of this distance, it is possible to assess the sympathetic and parasympathetic nervous systems, which are responsible for the regulation of the cardiac muscle. The analysis allows health specialists and researchers to diagnose various pathologies based on this variation. For the acquisition and analysis of HRV taken from a cardiac electrical signal, electronic equipment and analysis software that work independently are currently used. This complicates and delays the process of interpretation and diagnosis. With this delay, the health condition of patients can be put at greater risk, which can lead to untimely treatment. This document presents a single portable device capable of acquiring electrocardiographic signals and calculating a total of 19 HRV metrics. This reduces the time required, resulting in timelier intervention. The device has an electrocardiographic signal acquisition card attached to a microcontroller capable of transmitting the cardiac signal wirelessly to a mobile device. In addition, a mobile application was designed to analyze the cardiac waveform. The device calculates the RR intervals and the different metrics. The application allows a user to visualize the cardiac signal and the 19 metrics in real time. The information is exported to a cloud database for remote analysis. The study was performed under controlled conditions in the simulated hospital of the Universidad de la Sabana, Colombia. A total of 60 signals were acquired and analyzed. The device was compared against two reference systems. The results show a strong level of correlation (r > 0.95, p < 0.05) between the 19 metrics compared. Therefore, the use of the portable system, evaluated in clinical scenarios controlled by medical specialists and researchers, is recommended for the evaluation of the condition of the cardiac system.
Keywords: biological signal analysis, heart rate variability (HRV), HRV metrics, mobile app, portable device
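A small sketch of how a few common time-domain HRV metrics can be derived from detected R-peaks; the peak-detection settings and the chosen metrics (SDNN, RMSSD, pNN50) are illustrative and do not cover the 19 metrics computed by the device.

```python
import numpy as np
from scipy.signal import find_peaks

def hrv_time_domain(ecg: np.ndarray, fs: float) -> dict:
    """Detect R-peaks, build RR intervals (ms), and compute basic HRV metrics."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=np.std(ecg))
    rr = np.diff(peaks) / fs * 1000.0            # RR intervals in milliseconds
    diff_rr = np.diff(rr)
    return {
        "mean_hr_bpm": 60_000.0 / rr.mean(),
        "sdnn_ms": rr.std(ddof=1),               # overall RR variability
        "rmssd_ms": np.sqrt(np.mean(diff_rr ** 2)),
        "pnn50_pct": 100.0 * np.mean(np.abs(diff_rr) > 50.0),
    }
```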
Procedia PDF Downloads 184
3157 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data
Authors: M. Kharrat, G. Moreau, Z. Aboura
Abstract:
The latest advances in the weaving industry, combined with increasingly sophisticated means of materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the applied loading. The AE technique is a well-established tool for discriminating between damage mechanisms. Suitable sensors are used during the mechanical test to monitor the structural health of the material. Relevant AE features are then extracted from the recorded signals, followed by data analysis using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation setup was used in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, digital image correlation, acoustic emission (AE), and micro-tomography. In this study, a multi-variable AE data analysis approach was developed for discriminating between the different signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database. The latter is used to construct a supervised classifier that can be employed for automatic recognition of the AE signals. Several materials with different ingredients were tested under various loadings in order to feed and enrich the learning database. The methodology presented in this work was useful for refining the damage threshold of the new-generation materials. The damage mechanisms around this threshold were highlighted, and the obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or increasing the AE detection threshold. The approach was validated on different material configurations. For the same material and the same type of loading, the identified classes are reproducible and little disturbed. The supervised classifier constructed from the learning database was able to predict the labels of the classified signals.
Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition
Procedia PDF Downloads 155
3156 Quantitative Analysis of Multiprocessor Architectures for Radar Signal Processing
Authors: Deepak Kumar, Debasish Deb, Reena Mamgain
Abstract:
Radar signal processing requires high number-crunching capability, which is most often achieved using a multiprocessor platform. Though a multiprocessor platform provides the capability to meet real-time computational challenges, the architecture itself, along with the mapping of the algorithm onto the architecture, plays a vital role in using the platform efficiently. Towards this, along with standard performance metrics, a few additional metrics are defined which help in evaluating the multiprocessor platform together with the algorithm mapping. A generic multiprocessor architecture cannot suit all processing requirements; depending on the system requirements and the type of algorithms used, the most suitable architecture for the given problem is decided. In the paper, we study different architectures and quantify the different performance metrics, which enables comparison of the architectures on their merits. We also carried out case studies of different architectures and their efficiency depending on whether parallelism is exploited in the algorithm, in the data, or in both.
Keywords: radar signal processing, multiprocessor architecture, efficiency, load imbalance, buffer requirement, pipeline, parallel, hybrid, cluster of processors (COPs)
Procedia PDF Downloads 412
3155 RF Propagation Analysis in Outdoor Environments Using RSSI Measurements Applied in ZigBee Sensor Networks
Authors: Teles de Sales Bezerra, Saulo Aislan da Silva Eleuterio, José Anderson Rodrigues de Souza, Jeronimo Silva Rocha
Abstract:
Radio frequency propagation is a constant concern in the application of wireless sensor networks (WSN), since the behavior of the environment determines the quality of signal reception. The objective of this paper is to analyze the behavior of a WSN in an agricultural environment where environmental variables are present, and to correlate the captured received signal strength indicator (RSSI) values with a propagation model.
Keywords: propagation, WSN, agriculture, quality
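A short sketch of the kind of propagation model the measured RSSI values can be correlated against, here the common log-distance path-loss form; the reference power, path-loss exponent, and sample measurements are placeholders to be fitted to the field data.

```python
import numpy as np

def rssi_log_distance(d, rssi_d0=-40.0, n=2.7, d0=1.0):
    """Log-distance model: RSSI(d) = RSSI(d0) - 10*n*log10(d/d0)."""
    return rssi_d0 - 10.0 * n * np.log10(np.asarray(d, dtype=float) / d0)

def fit_path_loss_exponent(d_meas, rssi_meas, rssi_d0=-40.0, d0=1.0):
    """Least-squares estimate of the path-loss exponent n from field measurements."""
    x = -10.0 * np.log10(np.asarray(d_meas, dtype=float) / d0)
    y = np.asarray(rssi_meas, dtype=float) - rssi_d0
    return float(np.sum(x * y) / np.sum(x * x))

# hypothetical agricultural-field measurements (metres, dBm)
print(fit_path_loss_exponent([2, 5, 10, 20], [-48, -59, -67, -75]))
```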
Procedia PDF Downloads 755
3154 A Machine Learning Approach for Classification of Directional Valve Leakage in the Hydraulic Final Test
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Due to increasing cost pressure in global markets, artificial intelligence is becoming a technology that is decisive for competition. Predictive quality enables machinery and plant manufacturers to ensure product quality by using data-driven forecasts via machine learning models as a decision-making basis for test results. The use of cross-process Bosch production data along the value chain of hydraulic valves is a promising approach to classifying the quality characteristics of workpieces.
Keywords: predictive quality, hydraulics, machine learning, classification, supervised learning
Procedia PDF Downloads 230