Search results for: fault detection and classification
4260 R-Killer: An Email-Based Ransomware Protection Tool
Authors: B. Lokuketagoda, M. Weerakoon, U. Madushan, A. N. Senaratne, K. Y. Abeywardena
Abstract:
Ransomware has become a common threat in the past few years, and recent threat reports show continued growth in Ransomware infections. Researchers have identified different variants of Ransomware families since 2015. Users' lack of knowledge about the threat is a major concern. Ransomware detection methodologies are still maturing across the industry. Email is the easiest method of delivering Ransomware to its victims. Uninformed users tend to click on links and attachments without much consideration, assuming the emails are genuine. As a solution, this paper introduces the R-Killer Ransomware detection tool. The tool can be integrated with existing email services. The Core Detection Engine (CDE) discussed in the paper focuses on separating suspicious samples from emails and handling them until a decision is made regarding the suspicious mail. It is capable of preventing the execution of identified ransomware processes. In addition, the Sandboxing and URL Analyzing System can communicate with public threat intelligence services to gather known threat intelligence. R-Killer has its own mechanism, developed in its Proactive Monitoring System (PMS), which can monitor the processes created by downloaded email attachments and identify potential Ransomware activities. R-Killer is capable of gathering threat intelligence without exposing the user's data to public threat intelligence services, hence protecting the confidentiality of user data.
Keywords: ransomware, deep learning, recurrent neural networks, email, core detection engine
Procedia PDF Downloads 215
4259 A Less Complexity Deep Learning Method for Drones Detection
Authors: Mohamad Kassab, Amal El Fallah Seghrouchni, Frederic Barbaresco, Raed Abu Zitar
Abstract:
Detecting objects such as drones is a challenging task, as their relative size and maneuvering capabilities deceive machine learning models and cause them to misclassify drones as birds or other objects. In this work, we investigate applying several deep learning techniques to benchmark real data sets of flying drones. A deep learning paradigm is proposed for the purpose of mitigating the complexity of those systems. The proposed paradigm is a hybrid between the AdderNet deep learning paradigm and the Single Shot Detector (SSD) paradigm. The goal is to minimize the number of multiplication operations in the filtering layers of the proposed system and, hence, reduce complexity. Standard machine learning techniques, such as SVM, are also tested and compared to the deep learning systems. The data sets used for training and testing were either complete or filtered to remove images with small objects. The data were either RGB or IR. Comparisons were made between all these types, and conclusions were presented.
Keywords: drones detection, deep learning, birds versus drones, precision of detection, AdderNet
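The central idea borrowed from AdderNet is to replace the multiply-accumulate operation of a convolutional filter with an L1 distance between the input patch and the filter weights, so the filtering layers need no multiplications. The minimal NumPy sketch below illustrates that adder-style filtering on a toy input; it is only an illustration of the principle, not the authors' detector, and the SSD detection head is omitted.

import numpy as np

def adder_conv2d(x, w):
    # AdderNet-style filtering: each output value is the negative sum of
    # absolute differences between an input patch and the filter weights,
    # replacing the usual multiply-accumulate of a convolution.
    c_in, h, width = x.shape
    c_out, _, k, _ = w.shape
    out_h, out_w = h - k + 1, width - k + 1
    out = np.zeros((c_out, out_h, out_w))
    for co in range(c_out):
        for i in range(out_h):
            for j in range(out_w):
                patch = x[:, i:i + k, j:j + k]
                out[co, i, j] = -np.abs(patch - w[co]).sum()
    return out

# Toy usage: a 3-channel 8x8 input and 4 filters of size 3x3.
x = np.random.rand(3, 8, 8)
w = np.random.rand(4, 3, 3, 3)
print(adder_conv2d(x, w).shape)  # (4, 6, 6)
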
Procedia PDF Downloads 182
4258 FACTS Based Stabilization for Smart Grid Applications
Authors: Adel. M. Sharaf, Foad H. Gandoman
Abstract:
Nowadays, Photovoltaic (PV) Farms/Parks and large PV-Smart Grid Interface Schemes are emerging and commonly utilized in Renewable Energy distributed generation. However, PV-hybrid DC-AC Schemes using interface power electronic converters usually have a negative impact on power quality and on the stabilization of the modern electrical network under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS-based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages, as well as to limit switching/fault inrush current conditions. FACTS devices are also used in smart grid-Battery Interface and Storage Schemes with PV-Battery Storage hybrid systems as an elegant alternative for renewable energy utilization with backup battery storage for electric utility energy and demand-side management, providing the needed energy and power capacity under heavy load conditions. The paper presents a robust PV-Li-Ion Battery Storage Interface Scheme for a Distribution/Utilization Low Voltage Interface using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme are done using the MATLAB/Simulink software environment for a Low Voltage Distribution/Utilization system feeding a hybrid linear, motorized-inrush, and nonlinear type load from a DC-AC Interface VSC 6-pulse inverter fed from the PV Park/Farm with a back-up Li-Ion Storage Battery.
Keywords: AC FACTS, smart grid, stabilization, PV-battery storage, switched filter-compensation (SFC)
Procedia PDF Downloads 412
4257 A Comparative Analysis of Classification Models with Wrapper-Based Feature Selection for Predicting Student Academic Performance
Authors: Abdullah Al Farwan, Ya Zhang
Abstract:
In today’s educational arena, it is critical to understand educational data and be able to evaluate important aspects, particularly data on student achievement. Educational Data Mining (EDM) is a research area that focuses on uncovering patterns and information in data from educational institutions. Teachers, if they are able to predict their students' class performance, can use this information to improve their teaching. Such predictions have evolved into valuable knowledge that can be used for a wide range of objectives; for example, they can feed a strategic plan for delivering high-quality education. Based on previous data, this paper recommends employing data mining techniques to forecast students' final grades. In this study, five data mining methods, Decision Tree, JRip, Naive Bayes, Multi-layer Perceptron, and Random Forest, with wrapper feature selection, were used on two datasets relating to Portuguese language and mathematics classes. The results showed the effectiveness of using data mining learning methodologies in predicting student academic success. The classification accuracy achieved with the selected algorithms lies in the range of roughly 70-94%. Among all the selected classification algorithms, the lowest accuracy is achieved by the Multi-layer Perceptron algorithm, which is close to 70.45%, and the highest accuracy is achieved by the Random Forest algorithm, which is close to 94.10%. This proposed work can assist educational administrators in identifying poor-performing students at an early stage and perhaps in implementing motivational interventions to improve their academic success and prevent educational dropout.
Keywords: classification algorithms, decision tree, feature selection, multi-layer perceptron, Naïve Bayes, random forest, students’ academic performance
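Wrapper-based feature selection evaluates candidate attribute subsets by the cross-validated accuracy of the classifier itself rather than by a filter statistic. The sketch below shows one way to set this up with scikit-learn's SequentialFeatureSelector wrapped around a Random Forest; the placeholder data merely stands in for the Portuguese-language and mathematics student datasets used in the study, and the number of selected features is an assumed value.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# X: student attributes, y: pass/fail label derived from the final grade.
# Synthetic placeholder data stands in for the real student datasets.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=400) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# Wrapper selection: greedily add features, scoring each candidate subset
# by the cross-validated accuracy of the classifier itself.
selector = SequentialFeatureSelector(clf, n_features_to_select=10,
                                     direction="forward", cv=5)
X_sel = selector.fit_transform(X, y)
acc = cross_val_score(clf, X_sel, y, cv=5).mean()
print("selected features:", np.flatnonzero(selector.get_support()))
print("cross-validated accuracy: %.3f" % acc)
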
Procedia PDF Downloads 166
4256 Achieving Product Robustness through Variation Simulation: An Industrial Case Study
Authors: Narendra Akhadkar, Philippe Delcambre
Abstract:
In power protection and control products, assembly process variation due to individual parts manufactured from single- or multi-cavity tooling is a major problem. The dimensional and geometrical variations on the individual parts, in the form of manufacturing tolerances and assembly tolerances, are sources of clearance in the kinematic joints, polarization effects in the joints, and tolerance stack-up. All these variations adversely affect product quality, functionality, cost, and time-to-market. Variation simulation analysis may be used in the early product design stage to predict such uncertainties. Usually, variations exist in both manufacturing processes and materials. In tolerance analysis, the effect of the dimensional and geometrical variations of the individual parts on the functional characteristics (conditions) of the final assembled product is studied. A functional characteristic of the product may be affected by a set of interrelated dimensions (functional parameters) that usually form a geometrical closure in a 3D chain. In power protection and control products, the prerequisite is that when a fault occurs in the electrical network, the product must react quickly and break the circuit to clear the fault. Usually, the response time is in milliseconds. Any failure in clearing the fault may result in severe damage to the equipment or network, and human safety is at stake. In this article, we have investigated two important functional characteristics that are associated with the robust performance of the product. The experimental data obtained at the Schneider Electric Laboratory demonstrate the very good prediction capabilities of the variation simulation performed using CETOL (tolerance analysis software) in an industrial context. In particular, this study allows design engineers to better understand the critical parts in the product that need to be manufactured with good, capable tolerances. Conversely, some parts are not critical for the functional characteristics (conditions) of the product and may allow some reduction of the manufacturing cost while still ensuring robust performance. Capable tolerancing is one of the most important aspects of product and manufacturing process design. In the case of the miniature circuit breaker (MCB), the product's quality and its robustness are mainly impacted by two aspects: (1) allocation of design tolerances between the components of a mechanical assembly and (2) manufacturing tolerances in the intermediate machining steps of component fabrication.
Keywords: geometrical variation, product robustness, tolerance analysis, variation simulation
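A functional characteristic defined over a chain of interrelated dimensions can be checked statistically by sampling each dimension within its tolerance and propagating the samples through the chain. The following sketch is a generic one-dimensional Monte Carlo stack-up with a hypothetical gap condition and made-up nominals and tolerances; it illustrates the principle behind variation simulation rather than the CETOL analysis performed in the study.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # number of virtual assemblies

# Hypothetical 1D tolerance chain: gap = A - (B + C + D).
# Nominals and +/- tolerances (mm); tolerances treated as +/-3 sigma.
nominals   = {"A": 20.00, "B": 9.95, "C": 5.00, "D": 4.90}
tolerances = {"A": 0.05, "B": 0.04, "C": 0.03, "D": 0.03}

samples = {k: rng.normal(nominals[k], tolerances[k] / 3, n) for k in nominals}
gap = samples["A"] - (samples["B"] + samples["C"] + samples["D"])

# Functional condition: the gap must stay positive for the mechanism to move freely.
print("mean gap = %.4f mm, std = %.4f mm" % (gap.mean(), gap.std()))
print("fraction of non-conforming assemblies = %.5f" % (gap <= 0).mean())
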
Procedia PDF Downloads 164
4255 A Case Study on the Condition Monitoring of a Critical Machine in a Tyre Manufacturing Plant
Authors: Ramachandra C. G., Amarnath. M., Prashanth Pai M., Nagesh S. N.
Abstract:
A machine's performance level drops over time due to the wear and tear of its components. The early detection of an emergent fault is therefore vital to maintaining uninterrupted production in a plant. Maintenance is an activity that helps to keep the machine's performance at an anticipated level, thereby ensuring the availability of the machine to perform its intended function. At present, a number of modern maintenance techniques are available, such as preventive maintenance, predictive maintenance, condition-based maintenance, total productive maintenance, etc. Condition-based maintenance, or condition monitoring, is one such modern maintenance technique in which the machine's condition or health is checked by measuring certain parameters such as sound level, temperature, velocity, displacement, vibration, etc. It can recognize most of the factors restraining the usefulness and efficacy of the total manufacturing unit. This research work is conducted on a batch-off mill in a tire production unit located in the Southern Karnataka region. The health of the mill is assessed using vibration amplitude as the measured parameter. Most commonly, the vibration level is assessed at various points on the machine bearings. The normal or standard level is fixed using reference materials such as manuals or catalogs supplied by the manufacturers and also by referring to vibration standards. The Rio-Vibro meter is placed at different locations on the batch-off mill to record the vibration data. The data collected are analyzed to identify the malfunctioning components in the batch-off mill, and corrective measures are suggested.
Keywords: availability, displacement, vibration, Rio-Vibro, condition monitoring
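In vibration-based condition monitoring, the signal measured at each point is typically reduced to a severity value, most commonly an RMS level, and compared against the normal level fixed from manuals, catalogs, or vibration standards. The short sketch below shows that comparison on simulated velocity readings; the 4.5 mm/s alarm limit is an assumed placeholder, not a value taken from the study or from any particular standard.

import numpy as np

def rms(signal):
    # Root-mean-square value of a vibration signal.
    return np.sqrt(np.mean(np.square(signal)))

# Simulated velocity readings (mm/s) at one measurement point on a bearing.
fs = 1000                      # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
velocity = 3.0 * np.sin(2 * np.pi * 25 * t) + 0.5 * np.random.randn(t.size)

level = rms(velocity)
ALARM_LIMIT = 4.5              # assumed alarm limit (mm/s) for illustration only
status = "acceptable" if level < ALARM_LIMIT else "needs corrective action"
print("RMS vibration velocity = %.2f mm/s -> %s" % (level, status))
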
Procedia PDF Downloads 91
4254 Dynamic Background Updating for Lightweight Moving Object Detection
Authors: Kelemewerk Destalem, Joongjae Cho, Jaeseong Lee, Ju H. Park, Joonhyuk Yoo
Abstract:
Background subtraction and temporal difference are often used for moving object detection in video. Both approaches are computationally simple and easy to deploy in real-time image processing. However, while background subtraction is highly sensitive to dynamic backgrounds and illumination changes, the temporal difference approach is poor at extracting the relevant pixels of a moving object and at detecting stopped or slowly moving objects in the scene. In this paper, we propose a moving object detection scheme based on adaptive background subtraction and temporal difference exploiting dynamic background updates. The proposed technique consists of histogram equalization and a linear combination of background subtraction and temporal difference, followed by novel frame-based and pixel-based background updating techniques. Finally, morphological operations are applied to the output images. Experimental results show that the proposed algorithm can overcome the drawbacks of both the background subtraction and temporal difference methods and can provide better performance than either method alone.
Keywords: background subtraction, background updating, real time, lightweight algorithm, temporal difference
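The core of the scheme is a per-pixel linear combination of the background-subtraction and temporal-difference maps, thresholded into a foreground mask, with the background model updated only where no motion is detected. The NumPy sketch below illustrates that combination and a pixel-based update on toy frames; the mixing weight, threshold, and learning rate are assumed values, and the histogram equalization, frame-based update, and morphological post-processing described in the abstract are omitted.

import numpy as np

def detect_moving(frame, prev_frame, background, alpha=0.6, tau=25, rho=0.05):
    # Combine background subtraction and temporal difference, then update
    # the background model; a simplified sketch of the idea in the abstract.
    frame = frame.astype(np.float32)
    bg_diff = np.abs(frame - background)                 # background subtraction
    td_diff = np.abs(frame - prev_frame)                 # temporal difference
    combined = alpha * bg_diff + (1 - alpha) * td_diff   # linear combination
    mask = combined > tau                                # foreground mask

    # Pixel-based background update: only background pixels are blended in.
    background = np.where(mask, background,
                          (1 - rho) * background + rho * frame)
    return mask, background

# Toy usage with random 120x160 grayscale frames.
prev = np.random.randint(0, 255, (120, 160)).astype(np.float32)
bg = prev.copy()
cur = prev.copy()
cur[40:60, 50:80] += 80       # a synthetic moving blob
mask, bg = detect_moving(cur, prev, bg)
print("foreground pixels:", int(mask.sum()))
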
Procedia PDF Downloads 342
4253 Financial Statement Fraud: The Need for a Paradigm Shift to Forensic Accounting
Authors: Ifedapo Francis Awolowo
Abstract:
The unrelenting series of embarrassing audit failures should stimulate a paradigm shift in accounting. In this age of information revolution, there is a need for constant improvement of the products or services one offers to the market in order to remain relevant. This study explores the perceptions of external auditors, forensic accountants, and accounting academics on whether a paradigm shift to forensic accounting can reduce financial statement fraud. Through a neo-empiricist/inductive analytical approach, the findings reveal that a paradigm shift to forensic accounting might be a step in the right direction to increase the chances of fraud prevention and detection in financial statements. This research has implications for accounting education, namely the need to incorporate forensic accounting into the present-day accounting curriculum. Accounting professional bodies, accounting standard setters, and accounting firms all have roles to play in incorporating forensic accounting education into the accounting curriculum. In particular, there is a need to alter ISA 240 to make the prevention and detection of fraud the responsibility of both those charged with the management and governance of companies and statutory auditors.
Keywords: financial statement fraud, forensic accounting, fraud prevention and detection, auditing, audit expectation gap, corporate governance
Procedia PDF Downloads 366
4252 Modified Gold Screen Printed Electrode with Ruthenium Complex for Selective Detection of Porcine DNA
Authors: Siti Aishah Hasbullah
Abstract:
Studies on the identification of pork content in food have grown rapidly to meet the Halal food standard in Malaysia. Mitochondrial DNA (mtDNA) approaches are thought to provide the most precise markers for identifying pig species, because mtDNA genes are present in thousands of copies per cell and mtDNA exhibits large variability. The standard method commonly used for DNA detection is based on the polymerase chain reaction (PCR) combined with gel electrophoresis, but it has major drawbacks: it is laborious, it needs a long time, and it is toxic to handle. Therefore, the need for a simple and fast DNA assay is vital and has triggered us to develop DNA biosensors for porcine DNA detection. The aim of this project is to develop an electrochemical DNA biosensor based on the ruthenium(II) complex [Ru(bpy)2(p-PIP)]2+ as a DNA hybridization label. The interaction of DNA and [Ru(bpy)2(p-HPIP)]2+ will be studied by electrochemical transduction using a Gold Screen-Printed Electrode (GSPE) modified with gold nanoparticles (AuNPs) and succinimide acrylic microspheres. The electrochemical detection by the redox-active ruthenium(II) complex was measured by cyclic voltammetry (CV) and differential pulse voltammetry (DPV). The results indicate that the interaction of [Ru(bpy)2(PIP)]2+ with the hybridized complementary DNA gives a higher response compared to single-stranded and mismatched complementary DNA. Under optimized conditions, this porcine DNA biosensor incorporating the modified GSPE shows a good linear range towards porcine DNA.
Keywords: gold, screen printed electrode, ruthenium, porcine DNA
Procedia PDF Downloads 309
4251 Intelligent Recognition of Diabetes Disease via FCM Based Attribute Weighting
Authors: Kemal Polat
Abstract:
In this paper, an attribute weighting method called fuzzy C-means clustering based attribute weighting (FCMAW) has been used for the classification of a diabetes disease dataset. The aims of this study are to reduce the variance within the attributes of the diabetes dataset and to improve the classification accuracy of classifier algorithms by transforming non-linearly separable datasets into linearly separable ones. The Pima Indians Diabetes dataset has two classes, comprising normal subjects (500 instances) and diabetes subjects (268 instances). Fuzzy C-means clustering is an improved version of the K-means clustering method and is one of the most widely used clustering methods in data mining and machine learning applications. In this study, as the first stage, the fuzzy C-means clustering process was used to find the centers of the attributes in the Pima Indians diabetes dataset, and the dataset was then weighted according to the ratios of the attribute means to their centers. Secondly, after the weighting process, classifier algorithms including the support vector machine (SVM) and k-NN (k-nearest neighbor) classifiers were used to classify the weighted Pima Indians diabetes dataset. Experimental results show that the proposed attribute weighting method (FCMAW) obtains very promising results in the classification of the Pima Indians diabetes dataset.
Keywords: fuzzy C-means clustering, fuzzy C-means clustering based attribute weighting, Pima Indians diabetes, SVM
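The weighting step rescales each attribute by the ratio of its mean to a cluster center found by fuzzy C-means, pulling the attribute values toward a more separable range before SVM or k-NN classification. The sketch below is one plausible reading of that procedure: it runs a minimal one-dimensional fuzzy C-means per attribute and takes the mean of the two fuzzy centers as the attribute's center, which is an assumption since the abstract does not spell this out; the synthetic data only stands in for the Pima Indians dataset.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fuzzy_cmeans_1d(x, c=2, m=2.0, n_iter=100, seed=0):
    # Minimal 1-D fuzzy C-means; returns the cluster centers.
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers

def fcm_attribute_weighting(X):
    # Weight each attribute by the ratio of its mean to its (fuzzy) center.
    # The "center of an attribute" is taken here as the mean of its two fuzzy
    # cluster centers -- an assumption for illustration purposes.
    Xw = X.astype(float).copy()
    for j in range(X.shape[1]):
        centers = fuzzy_cmeans_1d(X[:, j])
        Xw[:, j] *= X[:, j].mean() / centers.mean()
    return Xw

# Toy stand-in for the Pima Indians diabetes data (768 samples, 8 attributes).
rng = np.random.default_rng(1)
X = rng.gamma(shape=2.0, scale=20.0, size=(768, 8))
y = (X[:, 1] + X[:, 5] > np.median(X[:, 1] + X[:, 5])).astype(int)

knn = KNeighborsClassifier(n_neighbors=5)
print("raw      acc: %.3f" % cross_val_score(knn, X, y, cv=5).mean())
print("weighted acc: %.3f" % cross_val_score(knn, fcm_attribute_weighting(X), y, cv=5).mean())
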
Procedia PDF Downloads 413
4250 Surface-Enhanced Raman Detection in Chip-Based Chromatography via a Droplet Interface
Authors: Renata Gerhardt, Detlev Belder
Abstract:
Raman spectroscopy has attracted much attention as a structurally descriptive and label-free detection method. It is particularly suited for chemical analysis, as it is non-destructive and molecules can be identified via the fingerprint region of the spectra. In this work, possibilities are investigated for integrating Raman spectroscopy as a detection method for chip-based chromatography, making use of a droplet interface. A demanding task in lab-on-a-chip applications is the specific and sensitive detection of low-concentration analytes in small volumes. Fluorescence detection is frequently utilized but is restricted to fluorescent molecules. Furthermore, no structural information is provided. Another often applied technique is mass spectrometry, which enables the identification of molecules based on their mass-to-charge ratio. Additionally, the obtained fragmentation pattern gives insight into the chemical structure. However, it is only applicable as an end-of-line detection method because analytes are destroyed during measurement. In contrast to mass spectrometry, Raman spectroscopy can be applied on-chip, and substances can be processed further downstream after detection. A major drawback of Raman spectroscopy is the inherent weakness of the Raman signal, which is due to the small cross-sections associated with the scattering process. Enhancement techniques, such as surface-enhanced Raman spectroscopy (SERS), are employed to overcome the poor sensitivity, even allowing detection at the single-molecule level. In SERS measurements, the Raman signal intensity is improved by several orders of magnitude if the analyte is in close proximity to nanostructured metal surfaces or nanoparticles. The main gain of lab-on-a-chip technology is the building-block-like ability to seamlessly integrate different functionalities, such as synthesis, separation, derivatization, and detection, on a single device. We intend to utilize this powerful toolbox to realize Raman detection in chip-based chromatography. By interfacing on-chip separations with a droplet generator, the separated analytes are encapsulated into numerous discrete containers. These droplets can then be injected with a silver nanoparticle solution and investigated via Raman spectroscopy. Droplet microfluidics is a sub-discipline of microfluidics which operates with segmented rather than continuous flow. Segmented flow is created by merging two immiscible phases (usually an aqueous phase and oil), thus forming small discrete volumes of one phase in the carrier phase. The study surveys different chip designs to realize the coupling of chip-based chromatography with droplet microfluidics. With regard to maintaining a sufficient flow rate for chromatographic separation and ensuring stable eluent flow over the column, different flow rates of the eluent and oil phase are tested. Furthermore, the detection of analytes in droplets with surface-enhanced Raman spectroscopy is examined. The compartmentalization of separated compounds preserves the analytical resolution, since the continuous phase restricts dispersion between the droplets. The droplets are ideal vessels for the insertion of silver colloids, thus making use of the surface enhancement effect and improving the sensitivity of the detection.
The long-term goal of this work is the first realization of coupling chip-based chromatography with droplet microfluidics to employ surface-enhanced Raman spectroscopy as a means of detection.
Keywords: chip-based separation, chip LC, droplets, Raman spectroscopy, SERS
Procedia PDF Downloads 245
4249 Rapid and Sensitive Detection: Biosensors as Innovative Analytical Tools
Authors: Sylwia Baluta, Joanna Cabaj, Karol Malecha
Abstract:
The evolution of biosensors has been driven by the need for faster and more versatile analytical methods for application in important areas including clinical diagnostics, food analysis, and environmental monitoring, with minimal sample pretreatment. Rapid and sensitive neurotransmitter detection is extremely important in modern medicine. These compounds mainly occur in the brain and central nervous system of mammals. Any change in neurotransmitter concentrations may lead to many diseases, such as Parkinson's disease or schizophrenia. Classical techniques of chemical analysis, despite many advantages, do not provide immediate results or allow the automation of measurements.
Keywords: adrenaline, biosensor, dopamine, laccase, tyrosinase
Procedia PDF Downloads 142
4248 The Impact of Recurring Events in Fake News Detection
Authors: Ali Raza, Shafiq Ur Rehman Khan, Raja Sher Afgun Usmani, Asif Raza, Basit Umair
Abstract:
The detection of fake news and missing information is gaining popularity, especially after the advancement of social media and online news platforms. Social media platforms are the main and fastest source of fake news propagation, whereas online news websites contribute to fake news dissemination. In this study, we propose a framework to detect fake news using the temporal features of text and consider user feedback to identify whether the news is fake or not. In recent studies, temporal features in text documents have gained valuable consideration in Natural Language Processing, along with user feedback, but these studies only try to classify the textual data as fake or true. This research article examines the impact of recurring and non-recurring events on fake and true news. We use two models, BERT and Bi-LSTM, for the investigation; BERT yields better results, and we conclude that 70% of true news items are recurring while the remaining 30% are non-recurring.
Keywords: natural language processing, fake news detection, machine learning, Bi-LSTM
Procedia PDF Downloads 23
4247 Evaluating the Diagnostic Accuracy of the ctDNA Methylation for Liver Cancer
Authors: Maomao Cao
Abstract:
Objective: To test the performance of ctDNA methylation for the detection of liver cancer. Methods: A total of 1233 individuals were recruited in 2017. Fifteen male and fifteen female samples (including 10 cases of liver cancer) were randomly selected for the present study. CfDNA was extracted with the MagPure Circulating DNA Maxi Kit. The concentration of cfDNA was obtained with the Qubit™ dsDNA HS Assay Kit. A pre-constructed predictive model was used to analyze the methylation data and to give a predictive score for each cfDNA sample. Individuals with a predictive score greater than or equal to 80 were classified as having liver cancer. CT tests were considered the gold standard. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for the diagnosis of liver cancer were calculated. Results: Nine patients were diagnosed with liver cancer according to the prediction model (with high sensitivity and a threshold of 80 points), with scores of 99.2, 91.9, 96.6, 92.4, 91.3, 92.5, 96.8, 91.1, and 92.2, respectively. The sensitivity, specificity, positive predictive value, and negative predictive value of ctDNA methylation for the diagnosis of liver cancer were 0.70, 0.90, 0.78, and 0.86, respectively. Conclusions: ctDNA methylation could be an acceptable diagnostic modality for the detection of liver cancer.
Keywords: liver cancer, ctDNA methylation, detection, diagnostic performance
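The reported figures follow directly from the 2x2 confusion table implied by the abstract: with 10 cancer cases, 20 controls, and 9 model-positive samples, 7 true positives and 2 false positives reproduce the stated sensitivity, specificity, PPV, and NPV. The counts in the sketch below are inferred for illustration, not taken from the paper.

def diagnostic_metrics(tp, fp, fn, tn):
    # Standard 2x2 diagnostic-accuracy measures.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Counts consistent with the reported figures: 10 cancer cases and 20 controls,
# 9 positive predictions (7 true positives, 2 false positives).
sens, spec, ppv, npv = diagnostic_metrics(tp=7, fp=2, fn=3, tn=18)
print("sensitivity=%.2f specificity=%.2f PPV=%.2f NPV=%.2f" % (sens, spec, ppv, npv))
# -> sensitivity=0.70 specificity=0.90 PPV=0.78 NPV=0.86
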
Procedia PDF Downloads 151
4246 Slice Bispectrogram Analysis-Based Classification of Environmental Sounds Using Convolutional Neural Network
Authors: Katsumi Hirata
Abstract:
Certain systems can function well only if they recognize the sound environment as humans do. In this research, we focus on sound classification by adopting a convolutional neural network and aim to develop a method that automatically classifies various environmental sounds. Although the neural network is a powerful technique, its performance depends on the type of input data. Therefore, we propose an approach via a slice bispectrogram, which is a third-order spectrogram obtained as a slice of the amplitude of the short-time bispectrum. This paper explains the slice bispectrogram and discusses the effectiveness of the derived method by evaluating the experimental results on the ESC-50 sound dataset. As a result, the proposed scheme gives high accuracy and stability. Furthermore, a relationship between the accuracy and the non-Gaussianity of the sound signals was confirmed.
Keywords: environmental sound, bispectrum, spectrogram, slice bispectrogram, convolutional neural network
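A short-time bispectrum of a frame is B(f1, f2) = X(f1) X(f2) X*(f1 + f2); stacking the amplitude of one slice of it over successive frames gives a slice bispectrogram that can be fed to a CNN like an ordinary spectrogram. The sketch below computes the diagonal slice B(f, f) frame by frame; the diagonal is one common choice and an assumption here, since the abstract does not state which slice the authors use.

import numpy as np

def slice_bispectrogram(x, frame_len=256, hop=128):
    # Amplitude of the diagonal slice of the short-time bispectrum,
    # B(f, f) = X(f) * X(f) * conj(X(2f)), computed frame by frame.
    frames = []
    for start in range(0, len(x) - frame_len + 1, hop):
        seg = x[start:start + frame_len] * np.hanning(frame_len)
        X = np.fft.rfft(seg)
        n = (len(X) - 1) // 2                 # keep f so that 2f stays in range
        slice_amp = np.abs(X[:n] * X[:n] * np.conj(X[0:2 * n:2]))
        frames.append(slice_amp)
    return np.array(frames).T                 # (frequency, time)

# Toy usage: 1 s of a noisy 440 Hz tone sampled at 8 kHz.
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(fs)
print(slice_bispectrogram(x).shape)
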
Procedia PDF Downloads 126
4245 Assessment of the Landscaped Biodiversity in the National Park of Tlemcen (Algeria) Using Per-Object Analysis of Landsat Imagery
Authors: Bencherif Kada
Abstract:
In forest management practice, landscape and Mediterranean forest are never treated as linked objects. But sustainable forestry requires the valorization of the forest landscape, and this aim involves assessing the spatial distribution of biodiversity by mapping forest landscaped units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classifications, the landscaped biodiversity of the National Park of Tlemcen (Algeria). The methodology used is based on ground data and on the basic processing units of object-oriented classification, namely segments, so-called image-objects, representing relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper Plus (ETM+) imagery is performed on image objects, not on pixels. The advantages of object-oriented classification are that it makes full use of meaningful statistics and texture calculations, uncorrelated shape information (e.g., length-to-width ratio, direction, and area of an object), and topological features (neighbor, super-object, etc.), as well as the close relation between real-world objects and image objects. The results show that per-object classification using the k-nearest neighbors method is more efficient than per-pixel classification. It permits simplifying the content of the image while preserving spectrally and spatially homogeneous types of land cover such as Aleppo pine stands, cork oak groves, mixed groves of cork oak, holm oak, and zen oak, mixed groves of holm oak and thuja, water bodies, dense and open shrublands of oaks, vegetable crops or orchards, herbaceous plants, and bare soils. Texture attributes seem to provide no useful information, while the spatial attributes of shape and compactness seem to perform well for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscaped sub-units are individualized while conserving the spatial information. Dense stands that are continuously dominant over a large area were formed into a single class, as were dense, fragmented stands with clear stands. Low shrubland formations and high wooded shrublands are well individualized, though the former show some confusion with enclaves. Overall, a visual evaluation of the classification shows that it reflects the actual spatial state of the study area at the landscape level.
Keywords: forest, oaks, remote sensing, diversity, shrublands
Procedia PDF Downloads 124
4244 Audio Information Retrieval in Mobile Environment with Fast Audio Classifier
Authors: Bruno T. Gomes, José A. Menezes, Giordano Cabral
Abstract:
With the popularity of smartphones, mobile apps have emerged to meet diverse needs; however, the resources at their disposal are limited, either by the hardware, due to low computing power, or by the software, which does not have the same robustness as in a desktop environment. For example, automatic audio classification (AC) tasks, a subarea of musical information retrieval (MIR), require fast processing and a good success rate. However, the mobile platform has limited computing power, and the best AC tools are only available for desktop. To solve these problems, the fast classifier adapts the most widespread MIR technologies to mobile environments, seeking a balance in terms of speed and robustness. In the end, we found that it is possible to enjoy the best of MIR in mobile environments. This paper presents the results obtained and the difficulties encountered.
Keywords: audio classification, audio extraction, mobile environment, musical information retrieval
Procedia PDF Downloads 545
4243 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus
Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo
Abstract:
The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of a building by using the collected data to help monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the GAM (Generalised Additive Model) for anomaly detection in the power consumption pattern of Air Handling Units (AHU). There is ample research on the use of GAM for the prediction of power consumption at the office-building and nation-wide level. However, there is limited illustration of its anomaly detection capabilities, of prescriptive analytics case studies, and of its integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical data of the AHU power consumption and cooling load of a building between Jan 2018 and Aug 2019 from an education campus in Singapore to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning
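The anomaly detection step amounts to fitting a GAM on historical consumption, deriving a prediction band, and flagging readings whose deviation from the band's upper or lower bound is positive. The sketch below shows this with the pygam package; the predictors, synthetic data, and 95% band width are assumptions standing in for the campus AHU data and the model configuration used in the paper.

import numpy as np
from pygam import LinearGAM, s

# Assumed inputs: hour-of-day and cooling load as predictors of AHU power (kW).
rng = np.random.default_rng(0)
hours = rng.uniform(0, 24, 2000)
cooling_load = 50 + 30 * np.sin(np.pi * hours / 24) + rng.normal(0, 5, 2000)
power = 0.8 * cooling_load + 2 * np.cos(np.pi * hours / 12) + rng.normal(0, 3, 2000)
X, y = np.column_stack([hours, cooling_load]), power

# Fit smooth terms for each predictor and derive a 95% prediction band.
gam = LinearGAM(s(0) + s(1)).fit(X, y)
lower, upper = gam.prediction_intervals(X, width=0.95).T

# Flag readings outside the band as anomalies; the deviation magnitude can
# drive rule-based follow-up actions for the facilities manager.
deviation = np.maximum(y - upper, lower - y)
anomalies = deviation > 0
print("flagged %d of %d readings" % (anomalies.sum(), len(y)))
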
Procedia PDF Downloads 154
4242 Detection of Helicobacter Pylori by PCR and ELISA Methods in Patients with Hyperlipidemia
Authors: Simin Khodabakhshi, Hossein Rassi
Abstract:
Hyperlipidemia refers to any of several acquired or genetic disorders that result in a high level of lipids circulating in the blood. Helicobacter pylori infection is a contributing factor in the progression of hyperlipidemia, with serum lipid changes. The aim of this study was to detect Helicobacter pylori by PCR and serological methods in patients with hyperlipidemia. In this case-control study, 174 patients with hyperlipidemia and 174 healthy controls were studied. Demographic, physical, and biochemical measurements were also performed on all samples. The DNA extracted from blood specimens was amplified with H. pylori cagA-specific primers. The results show that H. pylori cagA positivity was detected in 79% of the hyperlipidemia group and 56% of the control group by the ELISA test, and in 49% of the hyperlipidemia group and 24% of the control group by the PCR test. The prevalence of H. pylori infection was significantly higher in the hyperlipidemia group compared to controls. In addition, patients with hyperlipidemia had significantly higher values for triglycerides, total cholesterol, LDL-C, waist-to-hip ratio, body mass index, and diastolic and systolic blood pressure, and lower levels of HDL-C, than control participants (all p < 0.0001). Our results show that ELISA is a rapid and cost-effective detection method and, considering the high prevalence of cytotoxigenic H. pylori strains, cagA is suggested as a promising target for PCR and ELISA tests for the detection of infection with toxigenic strains. In general, it can be concluded that molecular analysis of H. pylori cagA and clinical parameters are important for the early detection of hyperlipidemia and atherosclerosis associated with H. pylori infection by PCR and ELISA tests.
Keywords: Helicobacter pylori, hyperlipidemia, PCR, ELISA
Procedia PDF Downloads 199
4241 Performance Degradation for the GLR Test-Statistics for Spatial Signal Detection
Authors: Olesya Bolkhovskaya, Alexander Maltsev
Abstract:
Antenna arrays are widely used in modern radio systems, in sonar, and in communications. Solving the problem of detecting a useful signal against a background of noise is based on the GLRT method. There is a large number of such problems, depending on the a priori information that is known. In this work, in contrast to the majority of already solved problems, only the differing spatial properties of the signal and noise are used for detection. We analyze the influence of the degree of signal non-coherence and of noise inhomogeneity on the performance characteristics of different GLRT statistics. The signal and noise are described by means of spatial covariance matrices C for different amounts of known information. The partially coherent signal is simulated as a plane wave with a random angle of incidence relative to the normal. The background noise is simulated as a random process with a uniform distribution function in each element. The results of the investigation of the degradation of the performance characteristics for the different cases are presented in this work.
Keywords: GLRT, Neyman-Pearson criterion, test statistics, degradation, spatial processing, multielement antenna array
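For reference, the generic form of the generalized likelihood ratio test that these statistics derive from is sketched below in LaTeX; the second equation gives the well-known quadratic-form reduction for fully known zero-mean Gaussian models and is included only as an illustration, since the paper's specific statistics depend on which covariance terms are actually known.

% Generic GLR test: maximize each hypothesis' likelihood over its unknown parameters.
\[
  \Lambda(\mathbf{x}) \;=\;
  \frac{\max_{\theta \in \Theta_1} p(\mathbf{x} \mid \theta, H_1)}
       {\max_{\theta \in \Theta_0} p(\mathbf{x} \mid \theta, H_0)}
  \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta
\]
% With fully known zero-mean Gaussian models H_i : x ~ CN(0, C_i), the
% log-likelihood ratio reduces to a quadratic form in the array snapshot x:
\[
  \ln \Lambda(\mathbf{x}) \;=\;
  \mathbf{x}^{H}\left(\mathbf{C}_0^{-1} - \mathbf{C}_1^{-1}\right)\mathbf{x}
  \;+\; \ln\frac{\det \mathbf{C}_0}{\det \mathbf{C}_1}.
\]
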
Procedia PDF Downloads 385
4240 Development of a Classification Model for Value-Added and Non-Value-Added Operations in Retail Logistics: Insights from a Supermarket Case Study
Authors: Helena Macedo, Larissa Tomaz, Levi Guimarães, Luís Cerqueira-Pinto, José Dinis-Carvalho
Abstract:
In the context of retail logistics, the pursuit of operational efficiency and cost optimization involves a rigorous distinction between value-added and non-value-added activities. In today's competitive market, optimizing efficiency and reducing operational costs are paramount for retail businesses. This research paper focuses on the development of a classification model adapted to the retail sector, specifically examining internal logistics processes. Based on a comprehensive analysis conducted in a retail supermarket located in the north of Portugal, which covered various aspects of internal retail logistics, this study questions the concept of value and the definition of waste traditionally applied in a manufacturing context and proposes a new way to assess activities in the context of internal logistics. The study combines quantitative data analysis with qualitative evaluations. The proposed classification model offers a systematic approach to categorizing operations within the retail logistics chain, providing actionable insights for decision-makers to streamline processes, enhance productivity, and allocate resources more effectively. This model not only contributes to academic discourse but also serves as a practical tool for retail businesses, aiding in the enhancement of their internal logistics dynamics.
Keywords: lean retail, lean logistics, retail logistics, value-added and non-value-added
Procedia PDF Downloads 66
4239 Protein Remote Homology Detection by Using Profile-Based Matrix Transformation Approaches
Authors: Bin Liu
Abstract:
As one of the most important tasks in protein sequence analysis, protein remote homology detection has been studied for decades. Currently, profile-based methods show state-of-the-art performance. The Position-Specific Frequency Matrix (PSFM) is a widely used profile. However, the profiles contain noise introduced by the amino acids with low frequencies. In this study, we propose a method, called the Top Frequency Profile (TFP), to remove the noise in the PSFM by removing the amino acids with low frequencies. Three new matrix transformation methods, including the Autocross Covariance (ACC) transformation, Tri-gram, and K-separated bigram (KSB), are performed on these profiles to convert them into fixed-length feature vectors. Combined with Support Vector Machines (SVMs), the predictors are constructed. Evaluated on two benchmark datasets, the experimental results show that the proposed methods outperform other state-of-the-art predictors.
Keywords: protein remote homology detection, protein fold recognition, top frequency profile, support vector machines
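The pipeline described above first denoises the profile and then converts the variable-length matrix into a fixed-length vector that an SVM can consume. The sketch below shows one plausible reading of those two steps: keeping only the k most frequent amino acids per position (k is an assumed parameter, since the abstract does not give a cutoff) and an autocross covariance transformation over a small number of sequence lags; the random profile merely stands in for a real PSFM.

import numpy as np

def top_frequency_profile(psfm, k=5):
    # Keep only the k most frequent amino acids at each position of the
    # Position-Specific Frequency Matrix and zero out the rest.
    tfp = np.zeros_like(psfm)
    for pos in range(psfm.shape[0]):
        top = np.argsort(psfm[pos])[-k:]
        tfp[pos, top] = psfm[pos, top]
    return tfp

def autocross_covariance(profile, max_lag=2):
    # ACC transformation: correlate every pair of amino-acid columns at
    # several sequence lags, yielding a fixed-length vector independent
    # of the protein length.
    L, A = profile.shape
    mean = profile.mean(axis=0)
    feats = []
    for lag in range(1, max_lag + 1):
        for i in range(A):
            for j in range(A):
                cov = np.mean((profile[:L - lag, i] - mean[i]) *
                              (profile[lag:, j] - mean[j]))
                feats.append(cov)
    return np.array(feats)

# Toy profile: a protein of length 60 over the 20 amino acids.
psfm = np.random.dirichlet(np.ones(20), size=60)
vec = autocross_covariance(top_frequency_profile(psfm))
print(vec.shape)   # (2 * 20 * 20,) = (800,) features, ready for an SVM
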
Procedia PDF Downloads 125
4238 Multi Biometric Personal Identification System Based on Hybrid Intelligence Method
Authors: Laheeb M. Ibrahim, Ibrahim A. Salih
Abstract:
Biometrics is a technology that has been widely used in many official and commercial identification applications. The increased concern for security in recent years (especially during the last decades) has essentially resulted in more attention being given to biometric-based verification techniques. Here, a novel fusion approach combining palmprint and dental traits is suggested. These authentication traits have been employed in a range of biometric applications and can identify a person both postmortem (PM) and antemortem (AM). Besides improving accuracy, the fusion of biometrics has several advantages, such as deterring spoofing activities and reducing enrolment failure. In this paper, unimodal biometric systems were first built using the palmprint and dental traits, applying to each a classification based on an artificial neural network and on a hybrid technique that combines swarm intelligence and a neural network; an attempt was then made to combine the palmprint and dental biometrics. Principally, the fusion of palmprint and dental biometrics and their potential application as biometric identifiers have been explored. To address this issue, investigations have been carried out on the relative performance of several statistical data fusion techniques for integrating the information in both unimodal and multimodal biometrics. The results of the multimodal approach have also been compared with each of the two single-trait authentication approaches. This paper studies feature-level and decision-level fusion in multimodal biometrics. To determine the accuracy in terms of the genuine acceptance rate (GAR), parallel decision fusion including the AND, OR, and majority voting rules has been used. The backpropagation method used for classification gave GAR results of 92%, 99%, and 97%, respectively, while the GAR using the hybrid technique for classification was 95%, 99%, and 98%, respectively. To determine the accuracy of the multibiometric system, feature-level fusion has been used with the same preceding classification methods; the results were 98% and 99%, respectively, while different methods used to determine the GAR at the feature level yielded 98%.
Keywords: back propagation neural network BP ANN, multibiometric system, parallel system decision-fusion, particle swarm optimization PSO
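Parallel decision-level fusion combines the accept/reject outputs of the individual matchers with simple Boolean rules such as AND, OR, and majority voting, as mentioned above. The tiny sketch below illustrates those rules for a two-matcher (palmprint + dental) system; with only two matchers, majority voting coincides with AND, and the function is shown purely for illustration, not as the authors' implementation.

def fuse_decisions(palm_accept, dental_accept, rule="majority"):
    # Parallel decision-level fusion of two biometric matchers.
    # Each input is a boolean accept/reject decision for the same identity claim.
    votes = [palm_accept, dental_accept]
    if rule == "AND":          # strictest: both matchers must accept
        return all(votes)
    if rule == "OR":           # most permissive: either matcher may accept
        return any(votes)
    if rule == "majority":     # with two matchers this degenerates to AND
        return sum(votes) >= (len(votes) // 2 + 1)
    raise ValueError("unknown fusion rule")

# Example: palmprint matcher accepts, dental matcher rejects.
for rule in ("AND", "OR", "majority"):
    print(rule, "->", fuse_decisions(True, False, rule))
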
Procedia PDF Downloads 533
4237 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue
Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov
Abstract:
The paper considers an approach to a machine vision operating system combined with the use of a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capability of an apron feeder delivering coal from a lining return port to a conveyor, in the technology of mining high coal with release onto a conveyor, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modeling was carried out, with validation in laboratory conditions and calculation of the relative errors. A method for calculating the capability of an apron feeder based on a machine vision system, together with a simplified technology for three-dimensional modeling of the examined measuring area with machine vision, was offered. The proposed method allows the volume of rock mass moved by an apron feeder to be measured using machine vision. This approach solves the issue of controlling the volume of coal produced by a feeder while working off high coal by longwall complexes with release onto a conveyor, with an accuracy suitable for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical functions such as addition, subtraction, multiplication, and division. This simplifies software development and expands the variety of microcontrollers and microcomputers suitable for performing the task of calculating feeder capability. A feature of the obstacle detection problem is the need to correct distortions of the laser grid, which simplifies obstacle detection. The paper presents algorithms for video camera image processing and for controlling an autonomous vehicle model based on obstacle detection machine vision systems. A sample fragment of obstacle detection at the moment of distortion of the laser grid is demonstrated.
Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport
Procedia PDF Downloads 114
4236 Local Boundary Analysis for Generative Theory of Tonal Music: From the Aspect of Classic Music Melody Analysis
Authors: Po-Chun Wang, Yan-Ru Lai, Sophia I. C. Lin, Alvin W. Y. Su
Abstract:
The Generative Theory of Tonal Music (GTTM) provides systematic approaches to recognizing the local boundaries of music. Its rules have been implemented in some automated melody segmentation algorithms. Besides, there are also deep learning methods with GTTM features applied to boundary detection tasks. However, these studies may face constraints such as a lack of, or inconsistent, label data. The GTTM database is currently the most widely used database of this kind, and it includes manually labeled GTTM rules and local boundaries. Even so, we found some problems with these labels. They sometimes show discrepancies with the GTTM rules. In addition, since the database was labeled at different times by multiple musicians, the labels are not within the same scope in some cases. Therefore, in this paper, we examine this database with musicians from the perspective of classical music and relabel the scores. The relabeled database - GTTM Database v2.0 - will be released for academic research usage. Although the experimental and statistical results show that the relabeled database is more consistent, the improvement in boundary detection is not substantial. It seems that we will need more clues than the GTTM rules for boundary detection in the future.
Keywords: dataset, GTTM, local boundary, neural network
Procedia PDF Downloads 146
4235 Sentiment Analysis of Ensemble-Based Classifiers for E-Mail Data
Authors: Muthukumarasamy Govindarajan
Abstract:
The detection of unwanted, unsolicited emails, called spam, is an interesting area of research. It is necessary to evaluate the performance of any new spam classifier using standard data sets. Recently, ensemble-based classifiers have gained popularity in this domain. In this research work, an efficient email filtering approach based on ensemble methods is addressed for developing an accurate and sensitive spam classifier. The proposed approach employs Naive Bayes (NB), Support Vector Machine (SVM), and Genetic Algorithm (GA) as base classifiers along with different ensemble methods. The experimental results show that the ensemble classifier performed with greater accuracy than the individual classifiers, and the hybrid model results were found to be better than those of the combined models for the e-mail dataset. The proposed ensemble-based classifiers turn out to be good in terms of classification accuracy, which is considered an important criterion for building a robust spam classifier.
Keywords: accuracy, arcing, bagging, genetic algorithm, Naive Bayes, sentiment mining, support vector machine
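An ensemble spam filter of the kind described here trains several base classifiers on the same feature representation and combines their predictions, for example by bagging one of them and majority voting across all of them. The scikit-learn sketch below bags a Naive Bayes model and votes it against a linear SVM over TF-IDF features; the toy corpus and the omission of the genetic-algorithm component are simplifying assumptions, not the authors' setup.

from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Tiny placeholder corpus; a real evaluation would use a standard spam corpus.
emails = ["win a free prize now", "meeting agenda attached",
          "cheap pills online", "project status update",
          "claim your reward today", "lunch tomorrow?"] * 20
labels = [1, 0, 1, 0, 1, 0] * 20   # 1 = spam, 0 = ham

# Bagged Naive Bayes and a linear SVM combined by majority (hard) voting.
nb_bagged = BaggingClassifier(MultinomialNB(), n_estimators=10, random_state=0)
svm = LinearSVC(random_state=0)
ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier([("nb", nb_bagged), ("svm", svm)], voting="hard"),
)
print("accuracy: %.3f" % cross_val_score(ensemble, emails, labels, cv=5).mean())
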
Procedia PDF Downloads 142
4234 Development of an Electrochemical Aptasensor for the Detection of Human Osteopontin Protein
Authors: Sofia G. Meirinho, Luis G. Dias, António M. Peres, Lígia R. Rodrigues
Abstract:
The emerging development of electrochemical aptasensors has enabled the easy and fast detection of protein biomarkers in standard and real samples. Biomarkers are produced by body organs or tumours and provide a measure of antigens on cell surfaces. When detected in high amounts in blood, they can be suggestive of tumour activity. These biomarkers are most often used to evaluate treatment effects or to assess the potential for metastatic disease in patients with established disease. Osteopontin (OPN) is a protein found in all body fluids and constitutes a possible biomarker because its overexpression has been related to breast cancer evolution and metastasis. Currently, biomarkers are commonly used for the development of diagnostic methods, allowing the detection of the disease in its initial stages. A previously described RNA aptamer was used in the current work to develop a simple and sensitive electrochemical aptasensor with high affinity for human OPN. The RNA aptamer was biotinylated and immobilized on a gold electrode by avidin-biotin interaction. The electrochemical signal generated from the aptamer-target molecule interaction was monitored by cyclic voltammetry in the presence of [Fe(CN)6]3-/4- as a redox probe. The observed signal showed a current decrease due to the binding of OPN. The preliminary results showed that this aptasensor enables the detection of OPN in standard solutions, showing good selectivity towards the target in the presence of other interfering proteins such as bovine OPN and bovine serum albumin. The results gathered in the current work suggest that the proposed electrochemical aptasensor is a simple and sensitive detection tool for human OPN and so may have future applications in cancer disease monitoring.
Keywords: osteopontin, aptamer, aptasensor, screen-printed electrode, cyclic voltammetry
Procedia PDF Downloads 431
4233 Computer-Aided Exudate Diagnosis for the Screening of Diabetic Retinopathy
Authors: Shu-Min Tsao, Chung-Ming Lo, Shao-Chun Chen
Abstract:
Most diabetes patients tend to suffer from retinal complications of the disease. Therefore, early detection and early treatment are important. In clinical examinations, the color fundus image is the most convenient and available examination method. According to the exudates that appear in the retinal image, the status of the retina can be confirmed. However, the routine screening of diabetic retinopathy from color fundus images imposes time-consuming tasks on physicians. This study thus proposed a computer-aided exudate diagnosis system for the screening of diabetic retinopathy. After removing the vessels and optic disc from the retinal image, six quantitative features, including region number, region area, and gray-scale values, were extracted from the remaining regions for classification. As a result, all six features were evaluated to be statistically significant (p-value < 0.001). The accuracy of classifying the retinal images into normal and diabetic retinopathy reached 82%. Based on this system, the clinical workload could be reduced. The examination procedure may also be improved to be more efficient.
Keywords: computer-aided diagnosis, diabetic retinopathy, exudate, image processing
Procedia PDF Downloads 271
4232 Development of a Semiconductor Material Based on Functionalized Graphene: Application to the Detection of Nitrogen Oxides (NOₓ)
Authors: Djamil Guettiche, Ahmed Mekki, Tighilt Fatma-Zohra, Rachid Mahmoud
Abstract:
The aim of this study was to synthesize and characterize conducting polymer composites of polypyrrole and graphene, including pristine and surface-treated graphene (PPy/GO, PPy/rGO, and PPy/rGO-ArCOOH), for use as sensitive elements in a homemade chemiresistive module for the on-line detection of nitrogen oxide vapors. The chemiresistive module was prepared, characterized, and evaluated for performance. Structural and morphological characterizations of the composite were carried out using FTIR, Raman spectroscopy, and XRD analyses. After exposure to NO and NO₂ gases in both static and dynamic modes, the sensitivity, selectivity, limit of detection, and response time of the sensor were determined at ambient temperature. The resulting sensor showed high sensitivity, selectivity, and reversibility, with a low limit of detection of 1 ppm. A composite of polypyrrole and graphene functionalized with an aryl 4-carboxybenzene diazonium salt was synthesized and characterized using FTIR, scanning electron microscopy, transmission electron microscopy, UV-visible spectroscopy, and X-ray diffraction. The PPy-rGOArCOOH composite exhibited a good electrical resistance response to NO₂ at room temperature and showed enhanced NO₂-sensing properties compared to PPy-rGO thin films. The selectivity and stability of the NO₂ sensor based on the PPy/rGO-ArCOOH nanocomposite were also investigated.
Keywords: conducting polymers, surface-treated graphene, diazonium salt, polypyrrole, nitrogen oxide sensing
Procedia PDF Downloads 78
4231 Mapping Forest Biodiversity Using Remote Sensing and Field Data in the National Park of Tlemcen (Algeria)
Authors: Bencherif Kada
Abstract:
In forest management practice, landscape and Mediterranean forest are never treated as linked objects. But sustainable forestry requires the valorization of the forest landscape, and this aim involves assessing the spatial distribution of biodiversity by mapping forest landscaped units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classifications, the landscaped biodiversity of the National Park of Tlemcen (Algeria). The methodology used is based on ground data and on the basic processing units of object-oriented classification, namely segments, so-called image-objects, representing relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper Plus (ETM+) imagery is performed on image objects, not on pixels. The advantages of object-oriented classification are that it makes full use of meaningful statistics and texture calculations, uncorrelated shape information (e.g., length-to-width ratio, direction, and area of an object), and topological features (neighbor, super-object, etc.), as well as the close relation between real-world objects and image objects. The results show that per-object classification using the k-nearest neighbors method is more efficient than per-pixel classification. It permits simplifying the content of the image while preserving spectrally and spatially homogeneous types of land cover such as Aleppo pine stands, cork oak groves, mixed groves of cork oak, holm oak, and zen oak, mixed groves of holm oak and thuja, water bodies, dense and open shrublands of oaks, vegetable crops or orchards, herbaceous plants, and bare soils. Texture attributes seem to provide no useful information, while the spatial attributes of shape and compactness seem to perform well for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscaped sub-units are individualized while conserving the spatial information. Dense stands that are continuously dominant over a large area were formed into a single class, as were dense, fragmented stands with clear stands. Low shrubland formations and high wooded shrublands are well individualized, though the former show some confusion with enclaves. Overall, a visual evaluation of the classification shows that it reflects the actual spatial state of the study area at the landscape level.
Keywords: forest, oaks, remote sensing, biodiversity, shrublands
Procedia PDF Downloads 30