Search results for: preposition error detection
4415 Performance Assessment of GSO Satellites before and after Enhancing the Pointing Effect
Authors: Amr Emam, Joseph Victor, Mohamed Abd Elghany
Abstract:
The paper presents the effect of orbit inclination on the pointing error of the satellite antenna and, consequently, on its footprint on Earth for a typical Ku-band payload system. The performance assessment is examined both theoretically and by means of practical measurements, also taking into account all additional sources of pointing error, such as East-West station keeping, orbit eccentricity, and actual attitude control performance. The implementation and computation of the sinusoidal biases in satellite roll and pitch used to compensate the pointing error of the satellite antenna coverage are studied and evaluated before and after the pointing corrections are performed. A method for evaluating the performance of the implemented biases is introduced by measuring the satellite received level from a tracking 11 m antenna and a fixed 4.8 m transmitting antenna before and after the implementation of the pointing corrections.
Keywords: satellite, inclined orbit, pointing errors, coverage optimization
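As a rough illustration of the compensation idea, the sketch below computes sinusoidal roll/pitch biases from the inclination-driven motion of the sub-satellite point; the mapping of the latitude/longitude excursions onto roll and pitch, and all amplitudes and signs, are assumptions for illustration, not the paper's values.

```python
import numpy as np

SIDEREAL_DAY = 86164.0                           # s
w = 2 * np.pi / SIDEREAL_DAY                     # orbital rate, rad/s

def pointing_biases(incl_deg, t):
    """Assumed first-order compensation for a slightly inclined GSO."""
    i = np.radians(incl_deg)
    roll_bias = i * np.sin(w * t)                # tracks N-S (latitude) excursion
    pitch_bias = (i**2 / 4) * np.sin(2 * w * t)  # small E-W (longitude) ripple
    return np.degrees(roll_bias), np.degrees(pitch_bias)

t = np.linspace(0.0, SIDEREAL_DAY, 5)
print(pointing_biases(0.5, t))                   # biases over one day, i = 0.5 deg
```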
4414 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection
Authors: Muhammad Ali
Abstract:
Cyber-attacks and anomalies on Internet of Things (IoT) infrastructure are an emerging concern in the domain of data-driven intrusion detection. Rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data-type probing, malicious operation, DDoS, scanning, spying, and wrong setup are attacks and anomalies that can cause an IoT system to fail. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction. IoT devices expose a wide variety of new cyber-security attack vectors in network traffic, so for IoT development, and mainly for smart IoT applications, intelligent processing and analysis of data are necessary. Our approach is to secure such systems: we train several machine learning models and compare their accuracy in predicting attacks and anomalies on IoT systems, using ANOVA-based feature selection to obtain leaner prediction models for evaluating network traffic and helping to protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, DT, and RF, with the most satisfactory test accuracy obtained with fast detection. The evaluated ML metrics include precision, recall, F1 score, FPR, NPV, GM, MCC, and AUC-ROC. The Random Forest algorithm achieved the best results with the least prediction time, with an accuracy of 99.98%.
Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection
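A minimal sketch of the described pipeline, assuming scikit-learn and synthetic data in place of the (unspecified) IoT traffic set:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labelled IoT traffic (attack vs. normal)
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# ANOVA F-test feature selection, then a Random Forest classifier
selector = SelectKBest(score_func=f_classif, k=10).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr_sel, y_tr)
pred = clf.predict(X_te_sel)
print("accuracy:", accuracy_score(y_te, pred), "F1:", f1_score(y_te, pred))
```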
4413 Modern Imputation Technique for Missing Data in Linear Functional Relationship Model
Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, Rahmatullah Imon
Abstract:
The missing value problem is common in statistics and has been of interest for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, using three performance indicators: the mean absolute error (MAE), root mean square error (RMSE), and estimated bias (EB). In this study, we applied these methods to impute missing values in the LFRM. Results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set.
Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators
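An illustrative sketch of the imputation-and-evaluation loop, with scikit-learn's IterativeImputer standing in for the paper's EM/EMB algorithms:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
x = rng.normal(10, 2, size=(200, 1))
y = 1.5 * x + rng.normal(0, 0.5, size=(200, 1))      # linear relationship
data = np.hstack([x, y])
true = data.copy()

mask = rng.random(data.shape) < 0.1                  # 10% missing at random
data[mask] = np.nan

imputed = IterativeImputer(max_iter=20, random_state=0).fit_transform(data)
err = imputed[mask] - true[mask]
print("MAE :", np.mean(np.abs(err)))                 # mean absolute error
print("RMSE:", np.sqrt(np.mean(err**2)))             # root mean square error
```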
4412 The Modeling and Effectiveness Evaluation for Vessel Evasion to Acoustic Homing Torpedo
Authors: Li Minghui, Min Shaorong, Zhang Jun
Abstract:
This paper studies the operational efficiency of a surface warship's motorized evasion of an acoustic homing torpedo. It develops, in turn, a trajectory model, a self-guidance detection model, a vessel evasion model, and an anti-torpedo error model in three-dimensional space, making up for the deficiency of previous research that analyzed confrontation models two-dimensionally. Then, using the Monte Carlo method, the confrontation process of the evasion is simulated in MATLAB. Finally, the main factors that determine the vessel's survival probability are analyzed quantitatively. The results show that the evasion relative bearing and speed significantly affect the vessel's survival probability. Thus, choosing an appropriate evasion relative bearing and speed according to the torpedo's alarming range and alarming relative bearing, improving the alarming range and positioning accuracy, and reducing the response time against the torpedo will significantly improve the vessel's survival probability.
Keywords: acoustic homing torpedo, vessel evasion, Monte Carlo method, torpedo defense, vessel's survival probability
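A toy sketch of such a Monte Carlo evaluation loop; the kinematic model and all speeds, ranges, and endurance values are illustrative placeholders, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def survives(evasion_bearing_deg, v_ship=15.0, v_torp=45.0,
             alarm_range=10_000.0, endurance_s=300.0,
             kill_radius=50.0, dt=1.0):
    ship = np.array([0.0, 0.0])
    torp = np.array([alarm_range, 0.0])          # torpedo inbound from +x
    course = np.radians(evasion_bearing_deg)
    for _ in range(int(endurance_s / dt)):
        ship += v_ship * dt * np.array([np.cos(course), np.sin(course)])
        aim = ship - torp
        torp += v_torp * dt * aim / np.linalg.norm(aim)  # pure-pursuit torpedo
        if np.linalg.norm(ship - torp) < kill_radius:
            return False                         # torpedo closed within kill radius
    return True                                  # torpedo endurance expired

runs = 2000
hits = sum(survives(rng.uniform(90.0, 180.0)) for _ in range(runs))
print("estimated survival probability:", hits / runs)
```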
4411 An Adaptive Cooperative Scheme for Reliability of Transmission Using STBC and CDD in Wireless Communications
Authors: Hyun-Jun Shin, Jae-Jeong Kim, Hyoung-Kyu Song
Abstract:
In broadcasting and cellular systems, a cooperative scheme is proposed for improving bit error rate performance. At present, the coverage of a broadcasting system coexists with the coverage of a cellular system, so each user in a cellular coverage area is frequently also within a broadcasting coverage area. The proposed cooperative scheme is derived from these shared areas. The users receive signals from both the broadcasting base station and the cellular base station. The proposed scheme selects a cellular base station with a worse channel to achieve better bit error rate performance in cooperation. The performance of the proposed scheme is evaluated in a fading channel.
Keywords: cooperative communication, diversity, STBC, CDD, channel condition, broadcasting system, cellular system
4410 High Accuracy Analytic Approximation for Special Functions Applied to Bessel Functions J₀(x) and Its Zeros
Authors: Fernando Maass, Pablo Martin, Jorge Olivares
Abstract:
The Bessel function J₀(x) is very important in electrodynamics and physics, as are its zeros. In this work, a method to obtain high-accuracy approximations is presented through an application to that function. In most applications of this function, the values of the zeros are very important. Analytic approximations for this function have been obtained that are valid for all positive values of the variable x and have high accuracy both for the function and for its zeros. The approximation is determined by the simultaneous use of the power series and the asymptotic expansion. The structure of the approximation is a combination of two rational functions with elementary functions such as trigonometric functions and fractional powers. Here, as in the Padé method, rational functions are used, but now they are combined with elementary functions such as fractional powers, hyperbolic or trigonometric functions, and others. The reason for this is that the power series of the exact function is used together with the asymptotic expansion, which usually includes fractional powers, trigonometric functions, and other types of elementary functions. The approximation must be a bridge between both expansions, and this cannot be accomplished using rational functions alone. In the simplest approximation, using 4 parameters, the maximum absolute error is less than 0.006, at x ∼ 4.9. In this case, the maximum relative error for the zeros is less than 0.003, which occurs for the second zero, and that value decreases rapidly for the other zeros. The same kind of behaviour holds for the relative error of the maxima and minima of the function. Approximations with higher accuracy and more parameters will also be shown. All the approximations are valid for any positive value of x, and they can be calculated easily.
Keywords: analytic approximations, asymptotic approximations, Bessel functions, quasirational approximations
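The two expansions the approximation must bridge can be sketched as follows; the paper's actual quasirational approximant is not reproduced here:

```python
from math import factorial

import numpy as np
from scipy.special import j0

def j0_series(x, terms=20):
    # power series of J0 around x = 0
    return sum((-1)**k * (x / 2)**(2 * k) / factorial(k)**2 for k in range(terms))

def j0_asymptotic(x):
    # leading asymptotic form for large x
    return np.sqrt(2 / (np.pi * x)) * np.cos(x - np.pi / 4)

# The asymptotic form is poor at small x and accurate at large x,
# which is why an approximant must interpolate between both regimes.
for x in (0.5, 2.0, 5.0, 10.0):
    print(f"x={x:5.1f}  exact={j0(x):+.6f}  "
          f"series={j0_series(x):+.6f}  asymptotic={j0_asymptotic(x):+.6f}")
```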
4409 Multi-Criteria Evaluation of IDS Architectures in Cloud Computing
Authors: Elmahdi Khalil, Saad Enniari, Mostapha Zbakh
Abstract:
Cloud computing promises to increase innovation and the velocity with which applications are deployed, all while helping any enterprise meet most IT service needs at a lower total cost of ownership and a higher return on investment. As the march of the cloud continues, it brings both new opportunities and new security challenges. To take advantage of those opportunities while minimizing risks, we think that Intrusion Detection Systems (IDS) integrated in the cloud are one of the best existing solutions in the field today. The concept of intrusion detection has long been known and was first proposed by the well-known researcher Anderson in the 1980s; IDSs have been evolving ever since. Although several efforts have been made in the area of intrusion detection systems for cloud computing environments, many attacks still prevail. Therefore, the work presented in this paper proposes a multi-criteria analysis and a comparative study of several IDS architectures designed to work in cloud computing environments. To achieve this objective, we first survey the state of the art of consistent IDS architectures designed to work in a cloud environment. In a second step, we establish the criteria that will be useful for evaluating the architectures. Then, using the multi-criteria decision analysis approach MACBETH (Measuring Attractiveness by a Categorical Based Evaluation Technique), we evaluate the criteria and assign each one an appropriate weight according to its importance in the field of IDS architectures in cloud computing. The last step is to evaluate the architectures against the criteria and collect the results of the model constructed in the previous steps.
Keywords: cloud computing, cloud security, intrusion detection/prevention system, multi-criteria decision analysis
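A minimal sketch of the final weighted-scoring step; the criteria, weights, and scores are illustrative placeholders (in the paper the weights come from MACBETH pairwise judgements):

```python
# Illustrative criteria weights, normalised to sum to 1
weights = {"detection_rate": 0.35, "scalability": 0.25,
           "overhead": 0.20, "deployment_cost": 0.20}

# Illustrative per-architecture criterion scores on a 0-10 scale
architectures = {
    "host_based_ids":    {"detection_rate": 7, "scalability": 4,
                          "overhead": 5, "deployment_cost": 6},
    "network_based_ids": {"detection_rate": 6, "scalability": 7,
                          "overhead": 6, "deployment_cost": 5},
    "distributed_ids":   {"detection_rate": 8, "scalability": 8,
                          "overhead": 4, "deployment_cost": 3},
}

# Weighted sum gives the overall attractiveness of each architecture
for name, scores in architectures.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: {total:.2f}")
```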
4408 Strategy in Practice: Strategy Development, Strategic Error and Project Delivery
Authors: Nipun Agarwal, David Paul, Fareed Un Din
Abstract:
Strategy development and implementation are the key to an organization's success in today's competitive marketplace. Many organizations develop excellent strategy but are unable to implement it in order to succeed. The difference between strategic goals and their implementation is called strategic error. Strategic error occurs when an organization does not have structures in place to implement its strategy. Strategy implementation happens through projects, and having a project management method that provides certainty and agility will help an organization become more competitive in implementing strategy. Numerous project management methods exist in theory and practice. In the past, however, projects mainly used the Waterfall method, which provides certainty in terms of budget, delivery date, and resourcing. It is now common practice to utilise Agile-based methods. However, Agile-based methods do not provide specific deadlines and budgets; instead, they provide agility in product design and project delivery, which is useful to companies. The Waterfall and Agile methods are, in some forms, opposites of each other. Executive management prefer agility in delivering projects as the competitive landscape changes frequently. However, they also appreciate the certainty of projects with quantifiable budgets, deadlines, and resources, which is harder for an Agile-based method to provide. This paper attempts to develop a hybrid project management method that merges the Waterfall and Agile methods to provide the positives of both approaches.
Keywords: strategy, project management, strategy implementation, agile
4407 Experimenting with Error Performance of Systems Employing Pulse Shaping Filters on a Software-Defined-Radio Platform
Authors: Chia-Yu Yao
Abstract:
This paper presents experimental results on testing the symbol-error-rate (SER) performance of quadrature amplitude modulation (QAM) systems employing symmetric pulse-shaping square-root (SR) filters designed either by minimizing the roughness function or by minimizing the peak-to-average power ratio (PAR). The device used in the experiments is the 'bladeRF' software-defined-radio platform. PAR is a well-known measurement, whereas the roughness function is a concept for measuring jitter-induced interference. The experimental results show that the system employing minimum-roughness pulse-shaping SR filters outperforms the system employing minimum-PAR pulse-shaping SR filters in terms of SER performance.
Keywords: pulse-shaping filters, FIR filters, jittering, QAM
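An offline sketch of the SER measurement itself, for 4-QAM over AWGN; on the bladeRF, the transmit samples would additionally pass through the SR pulse-shaping filter under test and its matched receive filter, omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr_db = 100_000, 10

# 4-QAM symbols with unit average power
syms = (rng.integers(0, 2, n) * 2 - 1) + 1j * (rng.integers(0, 2, n) * 2 - 1)
syms /= np.sqrt(2)

# Complex AWGN at the chosen Es/N0
noise_var = 10 ** (-snr_db / 10)
rx = syms + np.sqrt(noise_var / 2) * (rng.standard_normal(n)
                                      + 1j * rng.standard_normal(n))

# Minimum-distance (quadrant) detection, then count symbol errors
detected = (np.sign(rx.real) + 1j * np.sign(rx.imag)) / np.sqrt(2)
ser = np.mean(detected != syms)
print(f"SER at {snr_db} dB: {ser:.4e}")
```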
4406 R-Killer: An Email-Based Ransomware Protection Tool
Authors: B. Lokuketagoda, M. Weerakoon, U. Madushan, A. N. Senaratne, K. Y. Abeywardena
Abstract:
Ransomware has become a common threat in the past few years, and recent threat reports show continued growth in ransomware infections. Researchers have identified different variants of ransomware families since 2015. Users' lack of knowledge about the threat is a major concern, and ransomware detection methodologies are still maturing across the industry. Email is the easiest method of delivering ransomware to its victims: uninformed users tend to click on links and attachments without much consideration, assuming the emails are genuine. As a solution, this paper introduces the R-Killer ransomware detection tool, which can be integrated with existing email services. The Core Detection Engine (CDE) discussed in the paper focuses on separating suspicious samples from emails and handling them until a decision is made regarding the suspicious mail. It is capable of preventing the execution of identified ransomware processes. In addition, the sandboxing and URL-analyzing system can communicate with public threat intelligence services to gather known threat intelligence. R-Killer also has its own mechanism, the Proactive Monitoring System (PMS), which can monitor the processes created by downloaded email attachments and identify potential ransomware activities. R-Killer is capable of gathering threat intelligence without exposing the user's data to public threat intelligence services, hence protecting the confidentiality of user data.
Keywords: ransomware, deep learning, recurrent neural networks, email, core detection engine
4405 A Less Complexity Deep Learning Method for Drones Detection
Authors: Mohamad Kassab, Amal El Fallah Seghrouchni, Frederic Barbaresco, Raed Abu Zitar
Abstract:
Detecting objects such as drones is a challenging task, as their relative size and maneuvering capabilities deceive machine learning models and cause them to misclassify drones as birds or other objects. In this work, we investigate applying several deep learning techniques to benchmark real data sets of flying drones. A deep learning paradigm is proposed for the purpose of mitigating the complexity of those systems. The proposed paradigm is a hybrid between the AdderNet deep learning paradigm and the Single Shot Detector (SSD) paradigm. The goal was to minimize the number of multiplication operations in the filtering layers within the proposed system and hence reduce complexity. Some standard machine learning techniques, such as SVM, are also tested and compared to the other deep learning systems. The data sets used for training and testing were either complete or filtered to remove the images with small objects. The data were RGB or IR. Comparisons were made between all these types, and conclusions were presented.
Keywords: drones detection, deep learning, birds versus drones, precision of detection, AdderNet
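The AdderNet idea can be sketched in a few lines: the filter response is a negative L1 distance between kernel and patch, so the layer needs no multiplications in its core operation (the SSD detection head built on such layers is not reproduced):

```python
import numpy as np

def adder_conv2d_single(img, kernel):
    """Adder-style 2D filtering of a single-channel image (no multiplications
    in the similarity measure: output = -sum|patch - kernel|)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + kh, j:j + kw]
            out[i, j] = -np.abs(patch - kernel).sum()  # L1 similarity
    return out

img = np.random.default_rng(0).random((8, 8))
print(adder_conv2d_single(img, np.ones((3, 3)) / 9).shape)  # (6, 6)
```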
4404 Dynamic Background Updating for Lightweight Moving Object Detection
Authors: Kelemewerk Destalem, Joongjae Cho, Jaeseong Lee, Ju H. Park, Joonhyuk Yoo
Abstract:
Background subtraction and temporal difference are often used for moving object detection in video. Both approaches are computationally simple and easy to deploy in real-time image processing. However, while background subtraction is highly sensitive to dynamic background and illumination changes, the temporal difference approach is poor at extracting the relevant pixels of a moving object and at detecting stopped or slowly moving objects in the scene. In this paper, we propose a moving object detection scheme based on adaptive background subtraction and temporal difference, exploiting dynamic background updates. The proposed technique consists of histogram equalization and a linear combination of background subtraction and temporal difference, followed by novel frame-based and pixel-based background updating techniques. Finally, morphological operations are applied to the output images. Experimental results show that the proposed algorithm overcomes the drawbacks of both the background subtraction and temporal difference methods and provides better performance than either method alone.
Keywords: background subtraction, background updating, real time, lightweight algorithm, temporal difference
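A simplified sketch of the per-frame core, assuming a running-average background model in place of the paper's frame- and pixel-based update rules:

```python
import numpy as np

ALPHA_BG, W_BG, W_TD, THRESH = 0.05, 0.6, 0.4, 25.0

def detect(frame, prev_frame, background):
    bg_diff = np.abs(frame - background)          # background subtraction
    td_diff = np.abs(frame - prev_frame)          # temporal difference
    combined = W_BG * bg_diff + W_TD * td_diff    # linear combination
    mask = combined > THRESH                      # moving-object mask
    # update the background only where no motion was detected
    background = np.where(mask, background,
                          (1 - ALPHA_BG) * background + ALPHA_BG * frame)
    return mask, background

frames = np.random.default_rng(0).random((3, 120, 160)) * 255
bg = frames[0].copy()
mask, bg = detect(frames[2], frames[1], bg)
print(mask.sum(), "foreground pixels flagged")
```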
4403 Design and Performance Analysis of Advanced B-Spline Algorithm for Image Resolution Enhancement
Authors: M. Z. Kurian, M. V. Chidananda Murthy, H. S. Guruprasad
Abstract:
An approach to super-resolving a low-resolution (LR) image is presented in this paper, which is very useful in multimedia communication, medical image enhancement, and satellite image enhancement for obtaining a clear view of the information in the image. The proposed Advanced B-Spline method generates a high-resolution (HR) image from a single LR image and tries to retain the higher-frequency components, such as edges, in the image. The method uses the B-Spline technique and crispening. This work is evaluated qualitatively and quantitatively using the Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR). The method is also suitable for real-time applications. Different combinations of decimation and super-resolution algorithms are tested in the presence of different noise types and noise factors.
Keywords: advanced B-spline, image super-resolution, mean square error (MSE), peak signal to noise ratio (PSNR), resolution down converter
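A sketch of the evaluation path, with a generic cubic-spline upscale and unsharp-mask crispening standing in for the Advanced B-Spline method itself:

```python
import numpy as np
from scipy import ndimage

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak**2 / mse), mse

hr = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
lr = hr[::2, ::2]                                  # 2x decimation
up = ndimage.zoom(lr, 2, order=3)                  # cubic B-spline upscale
# simple crispening: boost the high-frequency residual (unsharp mask)
crisp = np.clip(up + 0.5 * (up - ndimage.gaussian_filter(up, 1.0)), 0, 255)

print("PSNR/MSE (spline):   ", psnr(hr, up))
print("PSNR/MSE (crispened):", psnr(hr, crisp))
```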
4402 Statistical Time-Series and Neural Architecture of Malaria Patients Records in Lagos, Nigeria
Authors: Akinbo Razak Yinka, Adesanya Kehinde Kazeem, Oladokun Oluwagbenga Peter
Abstract:
Time series data are sequences of observations collected over a period of time. Such data can be used to predict health outcomes, such as disease progression, mortality, and hospitalization. Statistical approaches are based on mathematical models that capture the patterns and trends of the data, such as autocorrelation, seasonality, and noise, while neural methods are based on artificial neural networks, computational models that mimic the structure and function of biological neurons. This paper compares parametric and non-parametric time series models of patients treated for malaria in Maternal and Child Health Centres in Lagos State, Nigeria. The forecast methods considered were linear regression, integrated moving average, ARIMA, and SARIMA modeling for the parametric approach, and a Multilayer Perceptron (MLP) and a Long Short-Term Memory (LSTM) network for the non-parametric models. The performance of each method is evaluated using the Mean Absolute Error (MAE), R-squared (R²), and Root Mean Square Error (RMSE) as criteria to determine the accuracy of each model. The study revealed that the best performance in terms of error was found in the MLP, followed by the LSTM and ARIMA models. In addition, the bootstrap aggregating technique was used to make robust forecasts when there are uncertainties in the data.
Keywords: ARIMA, bootstrap aggregation, MLP, LSTM, SARIMA, time-series analysis
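A compact sketch of such a comparison on a synthetic monthly series standing in for the case counts, scored with MAE and RMSE:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series: trend + annual seasonality + noise
rng = np.random.default_rng(0)
t = np.arange(120)
y = 50 + 0.2 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 120)
train, test = y[:108], y[108:]

# Parametric forecast
arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=12)

# Non-parametric forecast: MLP on lagged windows, recursive prediction
lags = 12
X = np.array([y[i:i + lags] for i in range(108 - lags)])
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(X, train[lags:])
hist, mlp_fc = list(train[-lags:]), []
for _ in range(12):
    nxt = mlp.predict(np.array(hist[-lags:])[None])[0]
    mlp_fc.append(nxt)
    hist.append(nxt)

for name, fc in (("ARIMA", arima_fc), ("MLP", np.array(mlp_fc))):
    print(name, "MAE:", mean_absolute_error(test, fc),
          "RMSE:", np.sqrt(mean_squared_error(test, fc)))
```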
4401 Financial Statement Fraud: The Need for a Paradigm Shift to Forensic Accounting
Authors: Ifedapo Francis Awolowo
Abstract:
The unrelenting series of embarrassing audit failures should stimulate a paradigm shift in accounting. In this age of information revolution, there is a need for constant improvement of the products or services one offers to the market in order to stay relevant. This study explores the perceptions of external auditors, forensic accountants, and accounting academics on whether a paradigm shift to forensic accounting can reduce financial statement fraud. Through a neo-empiricist/inductive analytical approach, the findings reveal that a paradigm shift to forensic accounting might be the right step in the right direction to increase the chances of fraud prevention and detection in financial statements. This research has implications for accounting education and the need to incorporate forensic accounting into the present-day accounting curriculum. Accounting professional bodies, accounting standard setters, and accounting firms all have roles to play in incorporating forensic accounting education into the accounting curriculum. In particular, there is a need to amend ISA 240 to make the prevention and detection of fraud the responsibility of both those charged with the management and governance of companies and statutory auditors.
Keywords: financial statement fraud, forensic accounting, fraud prevention and detection, auditing, audit expectation gap, corporate governance
4400 Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing
Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson
Abstract:
Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG 16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using the facial landmarks, the eye region is segmented from the aligned face and fed into the VGG 16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of the pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angle that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively, with an FPR of 5.26%, 5.55%, and 0%, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation
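The stage-2 geometry can be sketched directly; the coordinates below are illustrative pixel values:

```python
import numpy as np

def deviation(pupil, inner_corner, outer_corner):
    """Distance and angle of the pupil centre relative to the midpoint of
    the eye-corner landmarks (angle measured against the horizontal axis)."""
    eye_centre = (np.asarray(inner_corner) + np.asarray(outer_corner)) / 2
    d = np.asarray(pupil) - eye_centre
    distance = np.linalg.norm(d)
    angle_deg = np.degrees(np.arctan2(d[1], d[0]))
    return distance, angle_deg

# pupil shifted nasally and slightly upward relative to the eye centre
print(deviation(pupil=(105, 58), inner_corner=(80, 60), outer_corner=(120, 60)))
```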
4399 Modified Gold Screen Printed Electrode with Ruthenium Complex for Selective Detection of Porcine DNA
Authors: Siti Aishah Hasbullah
Abstract:
Studies on the identification of pork content in food have grown rapidly to meet the Halal food standard in Malaysia. The mitochondrial DNA (mtDNA) approach to identifying pig species is thought to be the most precise because mtDNA genes are present in thousands of copies per cell and mtDNA is highly variable. The standard method commonly used for DNA detection is based on the polymerase chain reaction (PCR) combined with gel electrophoresis, but it has major drawbacks: it is laborious, time-consuming, and toxic to handle. The need for a simple and fast DNA assay is therefore vital and has prompted us to develop DNA biosensors for porcine DNA detection. The aim of this project is thus to develop an electrochemical DNA biosensor based on the ruthenium(II) complex [Ru(bpy)2(p-PIP)]2+ as a DNA hybridization label. The interaction of DNA with [Ru(bpy)2(p-HPIP)]2+ is studied by electrochemical transduction using a Gold Screen-Printed Electrode (GSPE) modified with gold nanoparticles (AuNPs) and succinimide acrylic microspheres. The electrochemical detection by the redox-active ruthenium(II) complex was measured by cyclic voltammetry (CV) and differential pulse voltammetry (DPV). The results indicate that the interaction of [Ru(bpy)2(PIP)]2+ with the hybridized complementary DNA gives a higher response than with single-stranded and mismatched complementary DNA. Under optimized conditions, this porcine DNA biosensor incorporating the modified GSPE shows a good linear range towards porcine DNA.
Keywords: gold, screen printed electrode, ruthenium, porcine DNA
4398 Surface-Enhanced Raman Detection in Chip-Based Chromatography via a Droplet Interface
Authors: Renata Gerhardt, Detlev Belder
Abstract:
Raman spectroscopy has attracted much attention as a structurally descriptive and label-free detection method. It is particularly suited for chemical analysis, as it is non-destructive and molecules can be identified via the fingerprint region of the spectra. In this work, we investigate possibilities for integrating Raman spectroscopy as a detection method for chip-based chromatography, making use of a droplet interface. A demanding task in lab-on-a-chip applications is the specific and sensitive detection of low-concentration analytes in small volumes. Fluorescence detection is frequently utilized but is restricted to fluorescent molecules and provides no structural information. Another often-applied technique is mass spectrometry, which enables the identification of molecules based on their mass-to-charge ratio; additionally, the obtained fragmentation pattern gives insight into the chemical structure. However, it is only applicable as an end-of-the-line detection method because analytes are destroyed during measurement. In contrast to mass spectrometry, Raman spectroscopy can be applied on-chip, and substances can be processed further downstream after detection. A major drawback of Raman spectroscopy is the inherent weakness of the Raman signal, which is due to the small cross-sections associated with the scattering process. Enhancement techniques, such as surface-enhanced Raman spectroscopy (SERS), are employed to overcome the poor sensitivity, even allowing detection at the single-molecule level. In SERS measurements, Raman signal intensity is improved by several orders of magnitude if the analyte is in close proximity to nanostructured metal surfaces or nanoparticles. The main gain of lab-on-a-chip technology is the building-block-like ability to seamlessly integrate different functionalities, such as synthesis, separation, derivatization, and detection, on a single device. We intend to utilize this powerful toolbox to realize Raman detection in chip-based chromatography. By interfacing on-chip separations with a droplet generator, the separated analytes are encapsulated into numerous discrete containers. These droplets can then be injected with a silver nanoparticle solution and investigated via Raman spectroscopy. Droplet microfluidics is a sub-discipline of microfluidics that operates with segmented rather than continuous flow. Segmented flow is created by merging two immiscible phases (usually an aqueous phase and oil), thus forming small discrete volumes of one phase in the carrier phase. The study surveys different chip designs for coupling chip-based chromatography with droplet microfluidics. With regard to maintaining a sufficient flow rate for chromatographic separation and ensuring stable eluent flow over the column, different flow rates of the eluent and oil phase are tested. Furthermore, the detection of analytes in droplets with surface-enhanced Raman spectroscopy is examined. The compartmentalization of separated compounds preserves the analytical resolution, since the continuous phase restricts dispersion between the droplets. The droplets are ideal vessels for the insertion of silver colloids, thus making use of the surface-enhancement effect and improving the sensitivity of the detection. The long-term goal of this work is the first realization of coupling chip-based chromatography with droplet microfluidics to employ surface-enhanced Raman spectroscopy as a means of detection.
Keywords: chip-based separation, chip LC, droplets, Raman spectroscopy, SERS
4397 Rapid and Sensitive Detection: Biosensors as an Innovative Analytical Tools
Authors: Sylwia Baluta, Joanna Cabaj, Karol Malecha
Abstract:
The evolution of biosensors has been driven by the need for faster and more versatile analytical methods for application in important areas including clinical diagnostics, food analysis, and environmental monitoring, with minimal sample pretreatment. Rapid and sensitive detection of neurotransmitters is extremely important in modern medicine. These compounds occur mainly in the brain and central nervous system of mammals, and any change in their concentration may lead to diseases such as Parkinson's or schizophrenia. Classical techniques of chemical analysis, despite many advantages, do not provide immediate results or allow automation of measurements.
Keywords: adrenaline, biosensor, dopamine, laccase, tyrosinase
4396 Constructions of Linear and Robust Codes Based on Wavelet Decompositions
Authors: Alla Levina, Sergey Taranov
Abstract:
The classical approach to providing noise immunity and integrity of information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient encoding and decoding algorithms, but they concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors with a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. Wavelet transforms are applied in various fields of science; applications include signal denoising, data compression, and spectral analysis of signal components. This article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling-function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first class is based on the multiplicative inverse in a finite field, and in the second construction the redundancy part is the cube of the information part. The paper also investigates the characteristics of the proposed robust and linear codes.
Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability
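A minimal sketch of the second robust-code construction, using a small prime field for illustration (the paper works over fields matched to the wavelet-based linear code):

```python
P = 251                                   # illustrative prime modulus for GF(p)

def encode(info: int) -> tuple[int, int]:
    assert 0 <= info < P
    return info, pow(info, 3, P)          # (information, redundancy = x^3 mod p)

def check(word: tuple[int, int]) -> bool:
    info, red = word
    return pow(info, 3, P) == red

w = encode(123)
print(w, check(w))                        # valid codeword passes the check
corrupted = (w[0] ^ 1, w[1])              # flip a bit in the information part
print(corrupted, check(corrupted))        # detected with high probability
```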
4395 Detecting Indigenous Languages: A System for Maya Text Profiling and Machine Learning Classification Techniques
Authors: Alejandro Molina-Villegas, Silvia Fernández-Sabido, Eduardo Mendoza-Vargas, Fátima Miranda-Pestaña
Abstract:
The automatic detection of indigenous languages in digital texts is essential to promote their inclusion in digital media. Underrepresented languages, such as Maya, are often excluded from language detection tools like Google’s language-detection library, LANGDETECT. This study addresses these limitations by developing a hybrid language detection solution that accurately distinguishes Maya (YUA) from Spanish (ES). Two strategies are employed: the first focuses on creating a profile for the Maya language within the LANGDETECT library, while the second involves training a Naive Bayes classification model with two categories, YUA and ES. The process includes comprehensive data preprocessing steps, such as cleaning, normalization, tokenization, and n-gram counting, applied to text samples collected from various sources, including articles from La Jornada Maya, a major newspaper in Mexico and the only media outlet that includes a Maya section. After the training phase, a portion of the data is used to create the YUA profile within LANGDETECT, which achieves an accuracy rate above 95% in identifying the Maya language during testing. Additionally, the Naive Bayes classifier, trained and tested on the same database, achieves an accuracy close to 98% in distinguishing between Maya and Spanish, with further validation through F1 score, recall, and logarithmic scoring, without signs of overfitting. This strategy, which combines the LANGDETECT profile with a Naive Bayes model, highlights an adaptable framework that can be extended to other underrepresented languages in future research. This fills a gap in Natural Language Processing and supports the preservation and revitalization of these languages.
Keywords: indigenous languages, language detection, Maya language, Naive Bayes classifier, natural language processing, low-resource languages
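A minimal sketch of the second strategy, character n-gram Naive Bayes with classes YUA and ES; the few phrases below are illustrative stand-ins for the La Jornada Maya corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "le kaajo' jach ki'imak u yóol",       # Maya (YUA) samples
    "bix a beel in wíits'in",
    "yaan u bin tu najil xook",
    "la comunidad está muy contenta",       # Spanish (ES) samples
    "cómo estás hermano mío",
    "va a ir a la escuela",
]
labels = ["YUA", "YUA", "YUA", "ES", "ES", "ES"]

# Character n-gram counts feed a multinomial Naive Bayes classifier
clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    MultinomialNB(),
)
clf.fit(train_texts, labels)
print(clf.predict(["ki'imak in wóol", "estoy muy contento"]))
```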
4394 The Impact of Recurring Events in Fake News Detection
Authors: Ali Raza, Shafiq Ur Rehman Khan, Raja Sher Afgun Usmani, Asif Raza, Basit Umair
Abstract:
Detection of fake news and missing information is gaining attention, especially after the advancement of social media and online news platforms. Social media platforms are the main and speediest source of fake news propagation, whereas online news websites contribute to fake news dissemination. In this study, we propose a framework to detect fake news using the temporal features of text, and we consider user feedback to identify whether the news is fake or not. In recent studies, temporal features in text documents have gained valuable consideration in Natural Language Processing, while user feedback has been used only to classify the textual data as fake or true. This research article examines the impact of recurring and non-recurring events on fake and true news. We use two models, BERT and Bi-LSTM, for the investigation; BERT yields the better results, and we find that 70% of true news items concern recurring events while the remaining 30% are non-recurring.
Keywords: natural language processing, fake news detection, machine learning, Bi-LSTM
4393 Evaluating the Diagnostic Accuracy of the ctDNA Methylation for Liver Cancer
Authors: Maomao Cao
Abstract:
Objective: To test the performance of ctDNA methylation for the detection of liver cancer. Methods: A total of 1233 individuals were recruited in 2017. Fifteen male and fifteen female samples (including 10 cases of liver cancer) were randomly selected for the present study. cfDNA was extracted with the MagPure Circulating DNA Maxi Kit, and its concentration was measured with the Qubit™ dsDNA HS Assay Kit. A pre-constructed predictive model was used to analyze the methylation data and assign a predictive score to each cfDNA sample. Individuals with a predictive score greater than or equal to 80 were classified as having liver cancer; CT examinations were considered the gold standard. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for the diagnosis of liver cancer were calculated. Results: Nine patients were classified as having liver cancer according to the prediction model (with high sensitivity and a threshold of 80 points), with scores of 99.2, 91.9, 96.6, 92.4, 91.3, 92.5, 96.8, 91.1, and 92.2, respectively. The sensitivity, specificity, positive predictive value, and negative predictive value of ctDNA methylation for the diagnosis of liver cancer were 0.70, 0.90, 0.78, and 0.86, respectively. Conclusions: ctDNA methylation could be an acceptable diagnostic modality for the detection of liver cancer.
Keywords: liver cancer, ctDNA methylation, detection, diagnostic performance
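The reported values are mutually consistent: with 10 cancer cases among the 30 samples and 9 model-positive calls, the confusion counts below (inferred, not stated in the abstract) reproduce them:

```python
# Inferred counts: TP + FP = 9 model positives, TP + FN = 10 true cases
TP, FN, TN, FP = 7, 3, 18, 2

sensitivity = TP / (TP + FN)              # 7/10  = 0.70
specificity = TN / (TN + FP)              # 18/20 = 0.90
ppv = TP / (TP + FP)                      # 7/9  ~ 0.78
npv = TN / (TN + FN)                      # 18/21 ~ 0.86

print(sensitivity, specificity, round(ppv, 2), round(npv, 2))
```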
4392 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method
Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson
Abstract:
Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are impacted. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which give further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they allow faults to be more clearly discriminated from environmental disturbances.
Keywords: fault detection, ground robot, inverse simulation, rover
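The InvSim idea can be sketched on a scalar discrete-time model: inverting the model over the measured trajectory recovers the input that should have been applied, and an actuator step fault appears directly in the input residual. The plant and all values below are toy assumptions, not the rover model:

```python
import numpy as np

# Toy plant: x[k+1] = a*x[k] + b*u[k]
a, b, n = 0.9, 0.5, 60
u_cmd = np.ones(n)                        # commanded input
u_act = u_cmd.copy()
u_act[30:] *= 0.4                         # actuator step fault at k = 30

x = np.zeros(n + 1)
for k in range(n):                        # forward simulation (true plant)
    x[k + 1] = a * x[k] + b * u_act[k]

u_inv = (x[1:] - a * x[:-1]) / b          # inverse simulation over trajectory
residual = u_cmd - u_inv                  # input residual
print("max |residual| before fault:", np.abs(residual[:30]).max())
print("max |residual| after  fault:", np.abs(residual[30:]).max())
```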
4391 A Visual Inspection System for Automotive Sheet Metal Chassis Parts Produced with Cold-Forming Method
Authors: İmren Öztürk Yılmaz, Abdullah Yasin Bilici, Yasin Atalay Candemir
Abstract:
The system consists of four main elements: a motion system, an image acquisition system, image processing software, and a control interface. The parts coming off the production line enter the image processing system via the conveyor belt at the end of the line. A 3D scan of the produced part is performed with the laser scanning system integrated at the system entry side. With the 3D scanning method, the position and angle at which each part enters the system are determined, and from the data obtained, parameters such as the part origin and conveyor speed are calculated by the designed software, which informs the robot of the position where it will pick up the part. The robot then takes the produced part from the belt conveyor and presents it to high-resolution cameras for quality control. Measurement is carried out with a maximum error of 20 microns, as determined by experiments.
Keywords: quality control, industry 4.0, image processing, automated fault detection, digital visual inspection
4390 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus
Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo
Abstract:
The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of buildings by using the collected data to monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the GAM (Generalised Additive Model) for anomaly detection in the power consumption pattern of Air Handling Units (AHU). There is ample research on the use of GAMs for predicting power consumption at the office-building and nationwide level; however, there is limited illustration of their anomaly detection capabilities, prescriptive analytics case studies, and their integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical AHU power consumption and cooling load data of a building on an education campus in Singapore, collected between January 2018 and August 2019, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data and without explicit intervention from domain experts. Notwithstanding this, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of the GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning
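A sketch of the detection rule using the pygam library; the features (cooling load, hour of day) and the synthetic data are assumptions standing in for the digital-twin dataset:

```python
import numpy as np
from pygam import LinearGAM, s

# Synthetic stand-in for historical AHU data
rng = np.random.default_rng(0)
n = 1000
load = rng.uniform(50, 300, n)                       # cooling load (kW)
hour = rng.integers(0, 24, n)                        # hour of day
power = (5 + 0.08 * load + 2 * np.sin(2 * np.pi * hour / 24)
         + rng.normal(0, 0.5, n))                    # AHU power (kW)

# Fit smooth terms for each feature, then flag points outside the
# 95% prediction interval as anomalous
X = np.column_stack([load, hour])
gam = LinearGAM(s(0) + s(1)).fit(X, power)
intervals = gam.prediction_intervals(X, width=0.95)  # (n, 2) lower/upper
anomalous = (power < intervals[:, 0]) | (power > intervals[:, 1])
print("flagged:", anomalous.sum(), "of", n)          # ~5% expected on clean data
```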
4389 Detection of Helicobacter Pylori by PCR and ELISA Methods in Patients with Hyperlipidemia
Authors: Simin Khodabakhshi, Hossein Rassi
Abstract:
Hyperlipidemia refers to any of several acquired or genetic disorders that result in a high level of lipids circulating in the blood. Helicobacter pylori infection is a contributing factor in the progression of hyperlipidemia, with serum lipid changes. The aim of this study was to detect Helicobacter pylori by PCR and serological methods in patients with hyperlipidemia. In this case-control study, 174 patients with hyperlipidemia and 174 healthy controls were studied, and demographic, physical, and biochemical parameters were recorded for all samples. The DNA extracted from blood specimens was amplified with H. pylori cagA-specific primers. The results show that cagA positivity was detected in 79% of the hyperlipidemia group and 56% of the control group by the ELISA test, and in 49% of the hyperlipidemia group and 24% of the control group by the PCR test. The prevalence of H. pylori infection was significantly higher in hyperlipidemia patients than in controls. In addition, patients with hyperlipidemia had significantly higher values for triglycerides, total cholesterol, LDL-C, waist-to-hip ratio, body mass index, and diastolic and systolic blood pressure, and lower levels of HDL-C, than control participants (all p < 0.0001). Our results showed that ELISA provides rapid and cost-effective detection, and considering the high prevalence of cytotoxigenic H. pylori strains, cagA is suggested as a promising target for PCR and ELISA tests to detect infection with toxigenic strains. In general, it can be concluded that molecular analysis of H. pylori cagA together with clinical parameters is important in the early detection of hyperlipidemia and atherosclerosis associated with H. pylori infection by PCR and ELISA tests.
Keywords: Helicobacter pylori, hyperlipidemia, PCR, ELISA
4388 Performance Degradation for the GLR Test-Statistics for Spatial Signal Detection
Authors: Olesya Bolkhovskaya, Alexander Maltsev
Abstract:
Antenna arrays are widely used in modern radio systems, sonar, and communications. Solving the problem of detecting a useful signal against a background of noise is based on the GLRT method. A large number of problem variants arise, depending on the a priori information that is known. In this work, in contrast to the majority of problems already solved, only the difference in the spatial properties of the signal and noise is used for detection. We analyze the influence of the degree of signal non-coherence and noise inhomogeneity on the performance characteristics of different GLRT statistics. The signal and noise are described by means of spatial covariance matrices C for cases with different amounts of known information. The partially coherent signal is simulated as a plane wave with a random angle of incidence with respect to the normal. Background noise is simulated as a random process with a uniform distribution function in each element. The results of the investigation of the degradation of the performance characteristics for the different cases are presented in this work.
Keywords: GLRT, Neyman-Pearson criterion, test statistics, degradation, spatial processing, multielement antenna array
4387 Protein Remote Homology Detection by Using Profile-Based Matrix Transformation Approaches
Authors: Bin Liu
Abstract:
As one of the most important tasks in protein sequence analysis, protein remote homology detection has been studied for decades. Currently, profile-based methods show state-of-the-art performance. The Position-Specific Frequency Matrix (PSFM) is a widely used profile; however, the profiles contain noise introduced by amino acids with low frequencies. In this study, we propose a method to remove this noise from the PSFM by discarding the amino acids with low frequencies, yielding the Top Frequency Profile (TFP). Three matrix transformation methods, Autocross Covariance (ACC) transformation, tri-gram, and K-separated bigram (KSB), are performed on these profiles to convert them into fixed-length feature vectors. Combined with Support Vector Machines (SVMs), the predictors are constructed. Evaluated on two benchmark datasets, the experimental results show that the proposed methods outperform other state-of-the-art predictors.
Keywords: protein remote homology detection, protein fold recognition, top frequency profile, support vector machines
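The TFP idea can be sketched per profile column: keep only the top-k most frequent residues and renormalise; k and the random profile below are illustrative:

```python
import numpy as np

def top_frequency_profile(psfm, k=5):
    """Zero out all but the k most frequent amino acids in each column of a
    20 x L PSFM, then renormalise each column to sum to 1."""
    tfp = np.zeros_like(psfm)
    for j in range(psfm.shape[1]):                 # per sequence position
        top = np.argsort(psfm[:, j])[-k:]          # indices of top-k residues
        tfp[top, j] = psfm[top, j]
        tfp[:, j] /= tfp[:, j].sum()
    return tfp

rng = np.random.default_rng(0)
psfm = rng.random((20, 50))
psfm /= psfm.sum(axis=0)                           # columns sum to 1
print(top_frequency_profile(psfm).shape)           # (20, 50)
```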
4386 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue
Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov
Abstract:
The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capacity of an apron feeder delivering coal from a lining return port to a conveyor when mining high coal seams with release to a conveyor, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modeling was carried out, with validation in laboratory conditions and calculation of the relative errors. A method for calculating the capacity of an apron feeder based on a machine vision system, together with a simplified three-dimensional model of the examined measuring area, is offered. The proposed method allows the volume of rock mass moved by an apron feeder to be measured using machine vision. This approach solves the problem of monitoring the volume of coal produced by a feeder while working high coal with longwall complexes releasing to a conveyor, with accuracy sufficient for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical functions such as addition, subtraction, multiplication, and division. This fact simplifies software development and expands the variety of microcontrollers and microcomputers suitable for performing the task of calculating feeder capacity. A feature of the obstacle detection problem is that obstacles distort the laser grid, which simplifies their detection. The paper presents algorithms for video camera image processing and for controlling an autonomous vehicle model based on the obstacle detection machine vision system. A sample fragment of obstacle detection at the moment of laser grid distortion is demonstrated.
Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport
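The capacity arithmetic the paper builds on can be sketched with the four basic operations it mentions; the height map, density, belt speed, and geometry below are all illustrative assumptions:

```python
import numpy as np

# A machine-vision height map over the measuring area gives a bulk volume;
# with belt speed and material density this converts to productivity in kg/s.
cell_area = 0.01 * 0.01                   # m^2 per height-map cell
density = 900.0                           # assumed bulk density of coal, kg/m^3
belt_speed = 0.8                          # m/s
strip_length = 0.5                        # length of the measured strip, m

heights = np.random.default_rng(0).uniform(0.05, 0.25, (50, 100))  # m
volume = heights.sum() * cell_area        # m^3 of material on the strip
mass_per_metre = volume * density / strip_length
capacity_kg_s = mass_per_metre * belt_speed
print(f"feeder capacity: {capacity_kg_s:.1f} kg/s")
```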