Search results for: fault detection and classification
2583 Pre-Industrial Local Architecture According to Natural Properties
Authors: Selin Küçük
Abstract:
Pre-industrial architecture is the integration of natural and subsequent properties through intelligence and experience. Since different settlements were relatively industrialized or non-industrialized at any given time, the term 'pre-industrial' does not refer to a definite period. Natural properties, the existing conditions and materials of the local environment, are climate, geomorphology and local materials. Subsequent properties, which are all anthropological factors, are the culture of societies, the requirements of people and the construction techniques that people use. After industrialization, technology took the place of technique, cultural effects were manipulated, requirements changed and local/natural properties almost disappeared from architecture. Technology is universal and global and spreads easily; technique, by contrast, is time- and experience-dependent and requires a considerable cultural background. This research concerns construction techniques that follow from the natural properties of a region and the classification of these techniques. Understanding local architecture is only possible by investigating its background, which is hard to reach. Architectural techniques change, both positively and negatively, over time. Archaeological layers of a region sometimes give more accurate information about the transformation of its architecture. However, the natural properties of a region are the most helpful elements for perceiving its construction techniques. Many international sources from different cultures address local architecture by mentioning natural properties separately, but no literature treats the subject systematically. This research aims to develop a clear perspective on how local architecture comes about by categorizing archetypes according to natural properties. The ultimate goal is to generate a clear classification of local architecture, independent of subsequent (anthropological) properties, that could serve worldwide as a handbook. Since local architecture is the most sustainable architecture with respect to its economic, ecological and sociological properties, there should be extensive information about its construction techniques to learn from. Constructing the same buildings all over the world is one of the main criticisms of the modern architectural system; while this criticism continues, identical buildings without identity keep increasing. In the post-industrial era, technology has widely taken the place of technique, cultural effects are manipulated, requirements have changed and natural local properties have almost disappeared from architecture. This study does not ask architects to use local techniques; rather, it traces the progress of pre-industrial architectural evolution, which is healthier, cheaper and more natural. Migration from rural areas to developing/developed cities should be prevented so that culture and construction techniques can be preserved. Since big cities have psychological, emotional and sociological impacts on people, rural settlers can be persuaded not to migrate by providing new buildings designed according to natural properties and by maintaining their settlements. Improving rural conditions would remove the economic and sociological gulf between cities and the countryside.
The desired result is that, if there is no deformation (the adaptation of other traditional buildings caused by immigration) or assimilation in a climatic region, very similar solutions should be found in the same climatic regions of the world even if there is no relationship (trade, communication, etc.) among them.
Keywords: climate zones, geomorphology, local architecture, local materials
Procedia PDF Downloads 430
2582 Identification of Breast Anomalies Based on Deep Convolutional Neural Networks and K-Nearest Neighbors
Authors: Ayyaz Hussain, Tariq Sadad
Abstract:
Breast cancer (BC) is one of the most widespread ailments among females globally. Early diagnosis of BC can decrease the mortality rate, and accurate identification of benign tumors can avoid unnecessary biopsies and further treatment of patients under investigation. However, due to variations in images, it is a tough job to separate cancerous cases from normal and benign ones. Machine learning techniques are widely employed in BC pattern classification and prognosis. In this research, a deep convolutional neural network (DCNN), the AlexNet architecture, is employed to obtain more discriminative features from breast tissues. To achieve higher accuracy, K-nearest neighbor (KNN) classifiers are employed as a substitute for the softmax layer in deep learning. The proposed model is tested on a widely used breast image database, the MIAS dataset, and achieved 99% accuracy.
Keywords: breast cancer, DCNN, KNN, mammography
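A minimal sketch of the idea described in this abstract—a pretrained AlexNet used as a fixed feature extractor with a K-nearest-neighbour classifier in place of the softmax stage—is shown below. The layer cut-off, the value of k and the dummy data are illustrative assumptions, not the authors' exact pipeline or the MIAS preprocessing.

```python
# Sketch: AlexNet as a fixed feature extractor with a KNN classifier on top.
import torch
import torchvision.models as models
from sklearn.neighbors import KNeighborsClassifier

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.eval()

# Keep the classifier up to (but not including) its final fully connected layer,
# so each image is mapped to a 4096-dimensional feature vector.
feature_extractor = torch.nn.Sequential(
    alexnet.features,
    alexnet.avgpool,
    torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:-1],
)

def extract_features(images: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return feature_extractor(images)

# Dummy stand-ins for preprocessed mammogram patches (replace with real MIAS data).
train_images = torch.randn(8, 3, 224, 224)
train_labels = [0, 1, 0, 1, 0, 1, 0, 1]
test_images = torch.randn(4, 3, 224, 224)
test_labels = [0, 1, 0, 1]

knn = KNeighborsClassifier(n_neighbors=5)          # k is an illustrative choice
knn.fit(extract_features(train_images).numpy(), train_labels)
print("accuracy:", knn.score(extract_features(test_images).numpy(), test_labels))
```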
Procedia PDF Downloads 136
2581 Characterization of Volatiles Botrytis cinerea in Blueberry Using Solid Phase Micro Extraction, Gas Chromatography Mass Spectrometry
Authors: Ahmed Auda, Manjree Agarwala, Giles Hardya, Yonglin Rena
Abstract:
Botrytis cinerea is a major pest of many plants. It can attack a wide range of plant parts, including buds, flowers, leaves, stems, and fruit. However, B. cinerea can be confused with other diseases that cause the same damage. There are many species of Botrytis, and each has more than one strain. Botrytis may infect the foliage of nursery stock stored through winter in damp conditions, and there are no known resistant plants. Botrytis must have nutrients or a food source before it infests the plant; nutrients leaking from wounded plant parts or dying tissue, such as old flower petals, provide the required nutrients. From this food source the fungus becomes more aggressive and invades healthy tissue, and dark to light brown rot forms in the diseased tissue. High humidity conditions support the growth of this fungus. We suppose that selection pressure can act on the morphological and neurophysiological filter properties of the receiver and on both the biochemical and the physiological regulation of the signal. Communication is implied when signal and receiver evolve toward increasingly specific matching. On the other hand, receivers respond to portions of a body odor bouquet which is released to the environment not as an (intentional) signal but as an unavoidable consequence of metabolic activity or tissue damage. Each year Botrytis species can cause considerable economic losses to plant crops. Even with the application of strict quarantine and control measures, these fungi can still find their way into crops and cause the imposition of onerous restrictions on exports. Blueberry fruit mould caused by fungal infection usually results in major losses during post-harvest storage; therefore, the management of infection in the early stages of disease development is necessary to minimize losses. The overall purpose of this study is to develop sensitive, cheap, quick and robust diagnostic techniques for the detection of B. cinerea in blueberry. The specific aim was to investigate the performance of volatile organic compounds (VOCs) in the detection and discrimination of blueberry fruits infected by fungal pathogens, with an emphasis on Botrytis, in the early post-harvest storage stage.
Keywords: Botrytis cinerea, blueberry, GC/MS, VOCs
Procedia PDF Downloads 241
2580 Parallel Hybrid Honeypot and IDS Architecture to Detect Network Attacks
Authors: Hafiz Gulfam Ahmad, Chuangdong Li, Zeeshan Ahmad
Abstract:
In this paper, we propose a parallel IDS and honeypot based approach to detect and analyze known and unknown attack taxonomies, improving IDS performance and protecting the network from intruders. The main theme of our approach is to record and analyze intruder activities by using both low- and high-interaction honeypots. Our architecture aims to achieve the required goals by combining signature-based IDS and honeypots and by generating new signatures. The paper describes the basic components, design and implementation of this approach and also demonstrates its effectiveness in reducing the probability of network attacks.
Keywords: network security, intrusion detection, honeypot, snort, nmap
Procedia PDF Downloads 568
2579 User Requirements Analysis for the Development of Assistive Navigation Mobile Apps for Blind and Visually Impaired People
Authors: Paraskevi Theodorou, Apostolos Meliones
Abstract:
In the context of the development process of two assistive navigation mobile apps for blind and visually impaired people (BVI) an extensive qualitative analysis of the requirements of potential users has been conducted. The analysis was based on interviews with BVIs and aimed to elicit not only their needs with respect to autonomous navigation but also their preferences on specific features of the apps under development. The elicited requirements were structured into four main categories, namely, requirements concerning the capabilities, functionality and usability of the apps, as well as compatibility requirements with respect to other apps and services. The main categories were then further divided into nine sub-categories. This classification, along with its content, aims to become a useful tool for the researcher or the developer who is involved in the development of digital services for BVI.Keywords: accessibility, assistive mobile apps, blind and visually impaired people, user requirements analysis
Procedia PDF Downloads 124
2578 A Deep Reinforcement Learning-Based Secure Framework against Adversarial Attacks in Power System
Authors: Arshia Aflaki, Hadis Karimipour, Anik Islam
Abstract:
Generative Adversarial Attacks (GAAs) threaten critical sectors, ranging from fingerprint recognition to industrial control systems. Existing Deep Learning (DL) algorithms are not robust enough against this kind of cyber-attack. As one of the most critical industries in the world, the power grid is no exception. In this study, a Deep Reinforcement Learning-based (DRL) framework that assists the DL model in improving its robustness against generative adversarial attacks is proposed. Real-world smart grid stability data, as an IIoT dataset, are used to test our method, which improves the classification accuracy of a deep learning model from around 57 percent to 96 percent.
Keywords: generative adversarial attack, deep reinforcement learning, deep learning, IIoT, generative adversarial networks, power system
Procedia PDF Downloads 39
2577 Reliability Analysis of Heat Exchanger Cycle Using Non-Parametric Method
Authors: Apurv Kulkarni, Shreyas Badave, B. Rajiv
Abstract:
Non-parametric reliability techniques are useful for assessing the reliability of systems for which failure rates are not available. This is useful when detecting the malfunctioning of any component is the key purpose during ongoing operation of the system. The main purpose of the heat exchanger cycle discussed in this paper is to provide hot water at a constant temperature for long periods of time. In such a cycle, certain components play a crucial role, and this paper presents an effective way to predict their malfunctioning by determining system reliability. The method discussed in the paper is feasible, and this is demonstrated with the help of various test cases.
Keywords: heat exchanger cycle, k-statistics, PID controller, system reliability
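To illustrate the non-parametric idea (estimating reliability directly from observed failure data, without assuming a failure-rate model), the sketch below computes an empirical reliability curve from hypothetical time-to-failure observations of a cycle component. The data and the median-rank plotting position are assumptions for illustration, not values from the paper.

```python
# Sketch: empirical (non-parametric) reliability estimate from failure times.
import numpy as np

# Hypothetical times to failure (hours) of a heat-exchanger-cycle component.
failure_times = np.array([310.0, 450.0, 520.0, 610.0, 700.0, 820.0, 950.0])

t = np.sort(failure_times)
n = len(t)
ranks = np.arange(1, n + 1)

# Median-rank approximation of the failure probability at each observed time,
# a common non-parametric choice when failure rates are unknown.
F = (ranks - 0.3) / (n + 0.4)
R = 1.0 - F  # reliability estimate

for ti, ri in zip(t, R):
    print(f"t = {ti:6.1f} h   R(t) ~ {ri:.3f}")
```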
Procedia PDF Downloads 390
2576 Detection and Identification of Antibiotic Resistant Bacteria Using Infra-Red-Microscopy and Advanced Multivariate Analysis
Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel
Abstract:
Antimicrobial drugs have an important role in controlling illness associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global health-care problem. Rapid determination of antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing like disk diffusion are time-consuming and other method including E-test, genotyping are relatively expensive. Fourier transform infrared (FTIR) microscopy is rapid, safe, and low cost method that was widely and successfully used in different studies for the identification of various biological samples including bacteria. The new modern infrared (IR) spectrometers with high spectral resolution enable measuring unprecedented biochemical information from cells at the molecular level. Moreover, the development of new bioinformatics analyses combined with IR spectroscopy becomes a powerful technique, which enables the detection of structural changes associated with resistivity. The main goal of this study is to evaluate the potential of the FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics in time span of few minutes. The bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), are obtained from the bacteriology laboratories in Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 550 E.coli samples, were promising and showed that by using infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with success rate higher than 85% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing the FTIR microscopy technique as a rapid and reliable method for identification antibiotic susceptibility.Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility
Procedia PDF Downloads 265
2575 Analytical Study of Data Mining Techniques for Software Quality Assurance
Authors: Mariam Bibi, Rubab Mehboob, Mehreen Sirshar
Abstract:
Satisfying customer requirements is the ultimate goal of producing or developing any product, and the quality of the product is decided on the basis of the level of customer satisfaction. Different techniques reported in this survey enhance product quality through software defect prediction and by locating missing software requirements. Some mining techniques were proposed to assess individual performance indicators in collaborative environments in order to reduce errors at the individual level. The basic intention is to produce a product with zero or few defects, thereby producing the best product in terms of quality. The survey analyses techniques such as genetic algorithms, artificial neural networks, classification and clustering techniques, and decision trees. The analysis shows that these techniques contribute greatly to the improvement and enhancement of product quality.
Keywords: data mining, defect prediction, missing requirements, software quality
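To make the survey's scope concrete, the sketch below shows the kind of defect-prediction experiment the reviewed techniques address: a decision tree trained on static code metrics to flag defect-prone modules. The metric names and data are invented for illustration only and are not taken from any study in the survey.

```python
# Sketch: decision-tree defect prediction from static code metrics (illustrative data).
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Each row: [lines of code, cyclomatic complexity, number of past changes]
X = [[120, 4, 2], [850, 22, 15], [300, 9, 4], [1200, 35, 30],
     [90, 3, 1], [640, 18, 11], [210, 6, 2], [990, 28, 19]]
y = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = defect-prone module

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```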
Procedia PDF Downloads 468
2574 GynApp: A Mobile Application for the Organization and Control of Gynecological Studies
Authors: Betzabet García-Mendoza, Rocío Abascal-Mena
Abstract:
Breast and cervical cancer are among the leading causes of death of women in Mexico. The mortality rate for these diseases is alarming; even though there have been many campaigns to make people aware of the importance of undergoing gynecological studies for timely prevention and detection, these have not been enough. This paper presents a mobile application for organizing and controlling gynecological studies in order to help and encourage women to take care of their bodies and health. The process of analyzing and designing the mobile application is presented, along with all the steps carried out by following a user-centered design methodology.
Keywords: breast cancer, cervical cancer, gynecological mobile application, paper prototyping, storyboard, women health
Procedia PDF Downloads 310
2573 Application of Zeolite Nanoparticles in Biomedical Optics
Authors: Vladimir Hovhannisyan, Chen Yuan Dong
Abstract:
Recently nanoparticles (NPs) have been introduced in biomedicine as effective agents for cancer-targeted drug delivery and noninvasive tissue imaging. The most important requirements to these agents are their non-toxicity, biocompatibility and stability. In view of these criteria, the zeolite (ZL) nanoparticles (NPs) may be considered as perfect candidates for biomedical applications. ZLs are crystalline aluminosilicates consisting of oxygen-sharing SiO4 and AlO4 tetrahedral groups united by common vertices in three-dimensional framework and containing pores with diameters from 0.3 to 1.2 nm. Generally, the behavior and physical properties of ZLs are studied by SEM, X-ray spectroscopy, and AFM, whereas optical spectroscopic and microscopic approaches are not effective enough, because of strong scattering in common ZL bulk materials and powders. The light scattering can be reduced by using of ZL NPs. ZL NPs have large external surface area, high dispersibility in both aqueous and organic solutions, high photo- and thermal stability, and exceptional ability to adsorb various molecules and atoms in their nanopores. In this report, using multiphoton microscopy and nonlinear spectroscopy, we investigate nonlinear optical properties of clinoptilolite type of ZL micro- and nanoparticles with average diameters of 2200 nm and 240 nm, correspondingly. Multiphoton imaging is achieved using a laser scanning microscope system (LSM 510 META, Zeiss, Germany) coupled to a femtosecond titanium:sapphire laser (repetition rate- 80 MHz, pulse duration-120 fs, radiation wavelength- 720-820 nm) (Tsunami, Spectra-Physics, CA). Two Zeiss, Plan-Neofluar objectives (air immersion 20×∕NA 0.5 and water immersion 40×∕NA 1.2) are used for imaging. For the detection of the nonlinear response, we use two detection channels with 380-400 nm and 435-700 nm spectral bandwidths. We demonstrate that ZL micro- and nanoparticles can produce nonlinear optical response under the near-infrared femtosecond laser excitation. The interaction of hypericine, chlorin e6 and other dyes with ZL NPs and their photodynamic activity is investigated. Particularly, multiphoton imaging shows that individual ZL NPs particles adsorb Zn-tetraporphyrin molecules, but do not adsorb fluorescein molecules. In addition, nonlinear spectral properties of ZL NPs in native biotissues are studied. Nonlinear microscopy and spectroscopy may open new perspectives in the research and application of ZL NP in biomedicine, and the results may help to introduce novel approaches into the clinical environment.Keywords: multiphoton microscopy, nanoparticles, nonlinear optics, zeolite
Procedia PDF Downloads 417
2572 Identifying Promoters and Their Types Based on a Two-Layer Approach
Authors: Bin Liu
Abstract:
A prokaryotic promoter, consisting of two short DNA sequences located at the -35 and -10 positions, is responsible for controlling the initiation of gene expression. Different types of promoters have different functions, and their consensus sequences are similar. In addition, consensus sequences may differ within the same type of promoter, which poses difficulties for promoter identification. Unfortunately, existing computational methods treat promoter identification as a binary classification task and can only identify whether a query sequence belongs to a specific promoter type. It is therefore desirable to develop computational methods for effectively identifying both promoters and their types. Here, a two-layer predictor is proposed to address this problem. The first layer is designed to predict whether a given sequence is a promoter, and the second layer predicts the type of any sequence judged to be a promoter. We also analyze the importance of features and sequence conservation in two respects: promoter identification and promoter type identification. To the best of our knowledge, this is the first computational predictor to detect both promoters and their types.
Keywords: promoter, promoter type, random forest, sequence information
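A minimal sketch of the two-layer scheme is given below: a first random-forest layer decides promoter vs. non-promoter, and a second layer, trained only on promoters, assigns the promoter type. The k-mer featurisation, the toy sequences and the type labels are placeholders; the paper's actual feature set is not specified here.

```python
# Sketch: two-layer promoter predictor (layer 1: promoter/non-promoter,
# layer 2: promoter type), using simple k-mer counts as features.
from itertools import product
from sklearn.ensemble import RandomForestClassifier

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]

def kmer_features(seq: str) -> list:
    return [sum(seq[i:i + 3] == k for i in range(len(seq) - 2)) for k in KMERS]

# Placeholder training data: (sequence, is_promoter, promoter_type or None)
data = [
    ("TTGACAATTAATCATCGGCTCGTATAAT", 1, "sigma70"),
    ("TTGCCAGTTAATCCGGCTCGTACAAT",   1, "sigma70"),
    ("CTGGCACGTCTTGAGCGATTGTGTAGG",  1, "sigma24"),
    ("CTGGCACATCTTGAACGATAGTGTAGA",  1, "sigma24"),
    ("ACGGATCCGATCGATCGGCTAGCTAGC",  0, None),
    ("GGGCCCGGGTTTAAACCCGGGAAATTT",  0, None),
]

X = [kmer_features(s) for s, _, _ in data]
layer1 = RandomForestClassifier(n_estimators=50, random_state=0)
layer1.fit(X, [label for _, label, _ in data])

promoters = [(s, t) for s, label, t in data if label == 1]
layer2 = RandomForestClassifier(n_estimators=50, random_state=0)
layer2.fit([kmer_features(s) for s, _ in promoters], [t for _, t in promoters])

query = "TTGACTATTAATCATCGGCTCGTATACT"
if layer1.predict([kmer_features(query)])[0] == 1:
    print("promoter, predicted type:", layer2.predict([kmer_features(query)])[0])
else:
    print("not a promoter")
```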
Procedia PDF Downloads 184
2571 Assessment of Taiwan Railway Occurrences Investigations Using Causal Factor Analysis System and Bayesian Network Modeling Method
Authors: Lee Yan Nian
Abstract:
Safety investigation is different from an administrative investigation in that the former is conducted by an independent agency and the purpose of such investigation is to prevent accidents in the future and not to apportion blame or determine liability. Before October 2018, Taiwan railway occurrences were investigated by local supervisory authority. Characteristics of this kind of investigation are that enforcement actions, such as administrative penalty, are usually imposed on those persons or units involved in occurrence. On October 21, 2018, due to a Taiwan Railway accident, which caused 18 fatalities and injured another 267, establishing an agency to independently investigate this catastrophic railway accident was quickly decided. The Taiwan Transportation Safety Board (TTSB) was then established on August 1, 2019 to take charge of investigating major aviation, marine, railway and highway occurrences. The objective of this study is to assess the effectiveness of safety investigations conducted by the TTSB. In this study, the major railway occurrence investigation reports published by the TTSB are used for modeling and analysis. According to the classification of railway occurrences investigated by the TTSB, accident types of Taiwan railway occurrences can be categorized into: derailment, fire, Signal Passed at Danger and others. A Causal Factor Analysis System (CFAS) developed by the TTSB is used to identify the influencing causal factors and their causal relationships in the investigation reports. All terminologies used in the CFAS are equivalent to the Human Factors Analysis and Classification System (HFACS) terminologies, except for “Technical Events” which was added to classify causal factors resulting from mechanical failure. Accordingly, the Bayesian network structure of each occurrence category is established based on the identified causal factors in the CFAS. In the Bayesian networks, the prior probabilities of identified causal factors are obtained from the number of times in the investigation reports. Conditional Probability Table of each parent node is determined from domain experts’ experience and judgement. The resulting networks are quantitatively assessed under different scenarios to evaluate their forward predictions and backward diagnostic capabilities. Finally, the established Bayesian network of derailment is assessed using investigation reports of the same accident which was investigated by the TTSB and the local supervisory authority respectively. Based on the assessment results, findings of the administrative investigation is more closely tied to errors of front line personnel than to organizational related factors. Safety investigation can identify not only unsafe acts of individual but also in-depth causal factors of organizational influences. The results show that the proposed methodology can identify differences between safety investigation and administrative investigation. Therefore, effective intervention strategies in associated areas can be better addressed for safety improvement and future accident prevention through safety investigation.Keywords: administrative investigation, bayesian network, causal factor analysis system, safety investigation
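To illustrate how such a network is used quantitatively, the sketch below builds a toy two-node fragment (an organizational-influence factor conditioning an unsafe act) and evaluates one forward prediction and one backward diagnosis by enumeration. The structure, prior and CPT values are invented for illustration and are not taken from the TTSB reports.

```python
# Sketch: forward prediction and backward diagnosis on a toy two-node Bayesian network.
p_org = 0.30                                  # prior: organizational influence present
p_act_given_org = {True: 0.70, False: 0.10}   # CPT: P(unsafe act | org influence)

# Forward prediction: marginal probability of an unsafe act.
p_act = p_org * p_act_given_org[True] + (1 - p_org) * p_act_given_org[False]
print(f"P(unsafe act) = {p_act:.3f}")

# Backward diagnosis: probability of organizational influence given an unsafe act (Bayes' rule).
p_org_given_act = p_org * p_act_given_org[True] / p_act
print(f"P(org influence | unsafe act) = {p_org_given_act:.3f}")
```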
Procedia PDF Downloads 123
2570 Oviposition Responses of the Malaria Mosquito Anopheles gambiae sensu stricto to Hay Infusion Volatiles in Laboratory Bioassays and Investigation of Volatile Detection Methods
Authors: Lynda K. Eneh, Okal N. Mike, Anna-Karin Borg-Karlson, Ulrike Fillinger, Jenny M. Lindh
Abstract:
The responses of individual gravid Anopheles gambiae sensu stricto (s.s.) to hay infusion volatiles were evaluated under laboratory conditions. Such infusions have long been known to be effective baits for monitoring mosquitoes that vector arboviral and filarial diseases but have previously not been tested for malaria vectors. Hay infusions were prepared by adding sun-dried Bermuda grass to lake water and leaving the mixture in a covered bucket for three days. The proportions of eggs laid by gravid An. gambiae s.s. in diluted (10%) and concentrated infusions ( ≥ 25%) was compared to that laid in lake water in two-choice egg-count bioassays. Furthermore, with the aim to develop a method that can be used to collect volatiles that influence the egg-laying behavior of malaria mosquitoes, different volatile trapping methods were investigated. Two different polymer-traps eluted using two different desorption methods and three parameters were investigated. Porapak®-Q traps and solvent desorption was compared to Tenax®-TA traps and thermal desorption. The parameters investigated were: collection time (1h vs. 20h), addition of salt (0.15 g/ml sodium chloride (NaCl) vs. no NaCl), and stirring the infusion (0 vs. 300 rpm). Sample analysis was with gas chromatography-mass spectrometry (GC-MS). An. gambiae s.s was ten times less likely to lay eggs in concentrated hay infusion than in lake water. The volatiles were best characterized by thermally desorbed Tenax traps, collected for 20 hours from infusion aliquots with sodium chloride added. Ten volatiles identified from headspace and previously indicated as putative oviposition semiochemicals for An. gambiae s.s. or confirmed semiochemicals for other mosquito species were tested in egg-count bioassays. Six of these (3-methylbutanol, phenol, 4-methylphenol, nonanal, indole and 3-methylindole), when added to lake water, were avoided for egg-laying when lake water was offered as the alternative in dual-choice egg count bioassays. These compounds likely contribute to the unfavorable oviposition responses towards hay infusions. This difference in oviposition response of different mosquito species should be considered when designing control measures.Keywords: Anopheles gambiae, oviposition behaviour, egg-count cage bioassays, hay infusions, volatile detection, semiochemicals
Procedia PDF Downloads 350
2569 Barriers and Facilitators for Telehealth Use during Cervical Cancer Screening and Care: A Literature Review
Authors: Reuben Mugisha, Stella Bakibinga
Abstract:
The cervical cancer burden is a global threat, but more so in low income settings where more than 85% of mortality cases occur due to lack of sufficient screening programs. There is consequently a lack of early detection of cancer and precancerous cells among women. Studies show that 3% to 35% of deaths could have been avoided through early screening depending on prognosis, disease progression, environmental and lifestyle factors. In this study, a systematic literature review is undertaken to understand potential barriers and facilitators as documented in previous studies that focus on the application of telehealth in cervical cancer screening programs for early detection of cancer and precancerous cells. The study informs future studies especially those from low income settings about lessons learned from previous studies and how to be best prepared while planning to implement telehealth for cervical cancer screening. It further identifies the knowledge gaps in the research area and makes recommendations. Using a specified selection criterion, 15 different articles are analyzed based on the study’s aim, theory or conceptual framework used, method applied, study findings and conclusion. Results are then tabulated and presented thematically to better inform readers about emerging facts on barriers and facilitators to telehealth implementation as documented in the reviewed articles, and how they consequently lead to evidence informed conclusions that are relevant to telehealth implementation for cervical cancer screening. Preliminary findings of this study underscore that use of low cost mobile colposcope is an appealing option in cervical cancer screening, particularly when coupled with onsite treatment of suspicious lesions. These tools relay cervical images to the online databases for storage and retrieval, they permit integration of connected devices at the point of care to rapidly collect clinical data for further analysis of the prevalence of cervical dysplasia and cervical cancer. Results however reveal the need for population sensitization prior to use of mobile colposcopies among patients, standardization of mobile colposcopy programs across screening partners, sufficient logistics and good connectivity, experienced experts to review image cases at the point-of-care as important facilitators to the implementation of mobile colposcope as a telehealth cervical cancer screening mechanism.Keywords: cervical cancer screening, digital technology, hand-held colposcopy, knowledge-sharing
Procedia PDF Downloads 221
2568 Automatic Identification and Classification of Contaminated Biodegradable Plastics using Machine Learning Algorithms and Hyperspectral Imaging Technology
Authors: Nutcha Taneepanichskul, Helen C. Hailes, Mark Miodownik
Abstract:
Plastic waste has emerged as a critical global environmental challenge, primarily driven by the prevalent use of conventional plastics derived from petrochemical refining and manufacturing processes in modern packaging. While these plastics serve vital functions, their persistence in the environment post-disposal poses significant threats to ecosystems. Addressing this issue necessitates several approaches, one of which involves the development of biodegradable plastics designed to degrade under controlled conditions, such as industrial composting facilities. It is imperative to note that compostable plastics are engineered for degradation within specific environments and are not suited for uncontrolled settings, including natural landscapes and aquatic ecosystems. The full benefits of compostable packaging are realized when it is subjected to industrial composting, preventing environmental contamination and waste stream pollution. Therefore, effective sorting technologies are essential to increase composting rates for these materials and to diminish the risk of contaminating recycling streams. In this study, hyperspectral imaging technology (HSI) is leveraged, coupled with advanced machine learning algorithms, to accurately identify various types of plastics, encompassing conventional variants such as polyethylene terephthalate (PET), polypropylene (PP), low-density polyethylene (LDPE) and high-density polyethylene (HDPE), and biodegradable alternatives such as polybutylene adipate terephthalate (PBAT), polylactic acid (PLA) and polyhydroxyalkanoates (PHA). The dataset is partitioned into three subsets: a training dataset comprising uncontaminated conventional and biodegradable plastics, a validation dataset encompassing contaminated plastics of both types, and a testing dataset featuring real-world packaging items in both pristine and contaminated states. Five distinct machine learning algorithms, namely Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Logistic Regression and Decision Tree, were developed and evaluated for their classification performance. Remarkably, the Logistic Regression and CNN models exhibited the most promising outcomes, achieving a perfect accuracy rate of 100% on the training and validation datasets, while the testing dataset yielded an accuracy exceeding 80%. The successful implementation of this sorting technology within recycling and composting facilities holds the potential to significantly elevate recycling and composting rates. As a result, the envisioned circular economy for plastics can be established, thereby offering a viable solution to mitigate plastic pollution.
Keywords: biodegradable plastics, sorting technology, hyperspectral imaging technology, machine learning algorithms
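The sketch below illustrates the classification step on hyperspectral data in the simplest possible form: each sample is a reflectance spectrum, and logistic regression (one of the five algorithms listed) is trained to separate conventional from biodegradable polymers. The synthetic spectra stand in for real HSI pixels and carry no relation to the study's measurements.

```python
# Sketch: logistic regression on synthetic reflectance spectra, standing in for HSI pixels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_bands = 200  # number of spectral bands per pixel

def synthetic_spectrum(center):
    """Toy spectrum: one Gaussian absorption feature plus noise."""
    bands = np.arange(n_bands)
    return np.exp(-((bands - center) ** 2) / 400.0) + 0.05 * rng.standard_normal(n_bands)

# Class 0: "conventional" plastic (feature near band 80); class 1: "biodegradable" (near band 130).
X = np.array([synthetic_spectrum(80) for _ in range(100)] +
             [synthetic_spectrum(130) for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```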
Procedia PDF Downloads 80
2567 The Role of Synthetic Data in Aerial Object Detection
Authors: Ava Dodd, Jonathan Adams
Abstract:
The purpose of this study is to explore the characteristics of developing a machine learning application using synthetic data. The study is structured to develop the application for the purpose of deploying the computer vision model. The findings discuss the realities of attempting to develop a computer vision model for practical purpose, and detail the processes, tools, and techniques that were used to meet accuracy requirements. The research reveals that synthetic data represents another variable that can be adjusted to improve the performance of a computer vision model. Further, a suite of tools and tuning recommendations are provided.Keywords: computer vision, machine learning, synthetic data, YOLOv4
Procedia PDF Downloads 225
2566 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, encoding methods such as one-hot encoding or k-mers have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
Keywords: DNA encoding, machine learning, Fourier transform, Fourier transformation
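Two of the encodings compared above, one-hot and k-mer counting, can be written in a few lines, and the Fourier variant simply applies an FFT to a numeric mapping of the bases. The base-to-number mapping and the value of k in the sketch are illustrative choices, not necessarily those used in the study.

```python
# Sketch: three ways to turn a DNA sequence into numbers (one-hot, k-mer counts, Fourier).
import numpy as np
from itertools import product

seq = "ATGCGTACGTTAGC"

# 1) One-hot encoding: one 4-dimensional indicator vector per base.
base_index = {"A": 0, "C": 1, "G": 2, "T": 3}
one_hot = np.zeros((len(seq), 4))
for i, b in enumerate(seq):
    one_hot[i, base_index[b]] = 1

# 2) k-mer counts (k = 3): frequency of every trinucleotide.
k = 3
kmers = ["".join(p) for p in product("ACGT", repeat=k)]
kmer_counts = np.array([sum(seq[i:i + k] == km for i in range(len(seq) - k + 1))
                        for km in kmers])

# 3) Fourier encoding: FFT magnitude of an assumed numeric mapping of the bases.
numeric = np.array([{"A": 1.0, "C": 2.0, "G": 3.0, "T": 4.0}[b] for b in seq])
fourier = np.abs(np.fft.fft(numeric))

print(one_hot.shape, kmer_counts.shape, fourier.shape)
```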
Procedia PDF Downloads 25
2565 Frequency Domain Decomposition, Stochastic Subspace Identification and Continuous Wavelet Transform for Operational Modal Analysis of Three Story Steel Frame
Authors: Ardalan Sabamehr, Ashutosh Bagchi
Abstract:
Recently, Structural Health Monitoring (SHM) based on the vibration of structures has attracted the attention of researchers in different fields such as civil, aeronautical and mechanical engineering. Operational Modal Analysis (OMA) has been developed to identify the modal properties of infrastructure such as bridges and buildings. Frequency Domain Decomposition (FDD), Stochastic Subspace Identification (SSI) and the Continuous Wavelet Transform (CWT) are the three most common methods in output-only modal identification. FDD, SSI and CWT operate in the frequency domain, the time domain and the time-frequency plane, respectively, so FDD and SSI are not able to display time and frequency at the same time. Moreover, FDD and SSI have some difficulties in noisy environments and in finding closely spaced modes. The CWT technique, which is currently being developed, works on the time-frequency plane and offers reasonable performance under such conditions. Another advantage of the wavelet transform over other current techniques is that it can also be applied to non-stationary signals. The aim of this paper is to compare the three most common modal identification techniques for finding the modal properties (natural frequency, mode shape and damping ratio) of a three-story steel frame, built in the Concordia University laboratory, using ambient vibration. The frame is made of galvanized steel, 60 cm long, 27 cm wide and 133 cm high, with no bracing along either the long or the short span. Three uniaxial wired accelerometers (MicroStrain, 100 mV/g accuracy) were attached to the middle of each floor; a gateway receives the data and sends them to the PC using the Node Commander software. Real-time monitoring was performed for 20 seconds with a 512 Hz sampling rate, and the test was repeated five times in each direction using hand shaking and an impact hammer. CWT is able to detect the instantaneous frequency by use of a ridge detection method. In this paper, a partial derivative ridge detection technique has been applied to the local maxima of the time-frequency plane to detect the instantaneous frequency. The extracted results from all three methods have been compared, and the comparison demonstrates that CWT has the better performance in terms of accuracy in a noisy environment. The modal parameters, namely natural frequency, damping ratio and mode shapes, are identified by all three methods.
Keywords: ambient vibration, frequency domain decomposition, stochastic subspace identification, continuous wavelet transform
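A compressed sketch of the ridge idea, finding the instantaneous frequency as the scale of maximum wavelet energy at each time instant, is shown below using PyWavelets. The test signal, wavelet choice and scale range are assumptions, and the simple argmax ridge stands in for the paper's partial-derivative ridge detection.

```python
# Sketch: continuous wavelet transform of a vibration-like signal and a simple
# ridge estimate (frequency of maximum |CWT| at each time step).
import numpy as np
import pywt

fs = 512.0                                   # sampling rate used in the test (Hz)
t = np.arange(0, 20.0, 1.0 / fs)
# Toy "ambient vibration": two modes at 4 Hz and 11 Hz plus noise.
signal = (np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)
          + 0.2 * np.random.default_rng(0).standard_normal(t.size))

scales = np.arange(4, 128)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)

# Ridge: for each time instant, the frequency where wavelet energy peaks.
ridge_idx = np.argmax(np.abs(coeffs), axis=0)
instantaneous_freq = freqs[ridge_idx]
print("median instantaneous frequency ~ %.2f Hz" % np.median(instantaneous_freq))
```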
Procedia PDF Downloads 296
2564 Nanorods Based Dielectrophoresis for Protein Concentration and Immunoassay
Authors: Zhen Cao, Yu Zhu, Junxue Fu
Abstract:
Immunoassay, i.e., antigen-antibody reaction, is crucial for disease diagnostics. To achieve the adequate signal of the antigen protein detection, a large amount of sample and long incubation time is needed. However, the amount of protein is usually small at the early stage, which makes it difficult to detect. Unlike cells and DNAs, no valid chemical method exists for protein amplification. Thus, an alternative way to improve the signal is through particle manipulation techniques to concentrate proteins, among which dielectrophoresis (DEP) is an effective one. DEP is a technique that concentrates particles to the designated region through a force created by the gradient in a non-uniform electric field. Since DEP force is proportional to the cube of particle size and square of electric field gradient, it is relatively easy to capture larger particles such as cells. For smaller ones like proteins, a super high gradient is then required. In this work, three-dimensional Ag/SiO2 nanorods arrays, fabricated by an easy physical vapor deposition technique called as oblique angle deposition, have been integrated with a DEP device and created the field gradient as high as of 2.6×10²⁴ V²/m³. The nanorods based DEP device is able to enrich bovine serum albumin (BSA) protein by 1800-fold and the rate has reached 180-fold/s when only applying 5 V electric potential. Based on the above nanorods integrated DEP platform, an immunoassay of mouse immunoglobulin G (IgG) proteins has been performed. Briefly, specific antibodies are immobilized onto nanorods, then IgG proteins are concentrated and captured, and finally, the signal from fluorescence-labelled antibodies are detected. The limit of detection (LoD) is measured as 275.3 fg/mL (~1.8 fM), which is a 20,000-fold enhancement compared with identical assays performed on blank glass plates. Further, prostate-specific antigen (PSA), which is a cancer biomarker for diagnosis of prostate cancer after radical prostatectomy, is also quantified with a LoD as low as 2.6 pg/mL. The time to signal saturation has been significantly reduced to one minute. In summary, together with an easy nanorod fabrication and integration method, this nanorods based DEP platform has demonstrated highly sensitive immunoassay performance and thus poses great potentials in applications for early point-of-care diagnostics.Keywords: dielectrophoresis, immunoassay, oblique angle deposition, protein concentration
Procedia PDF Downloads 103
2563 Threat Analysis: A Technical Review on Risk Assessment and Management of National Testing Service (NTS)
Authors: Beenish Urooj, Ubaid Ullah, Sidra Riasat
Abstract:
The National Testing Service-Pakistan (NTS) is an agency in Pakistan that conducts student success appraisal examinations. In this research paper, we present a security model for the NTS organization. The security model depicts security countermeasures for better defense against certain types of breaches and system malware. We provide a security roadmap, which will help the organization pursue its further goals of maintaining security standards and policies. We also cover multiple aspects of securing the environment of the organization, introducing the processes, architecture, data classification, auditing approaches, survey responses, data handling, and training and awareness of risk for the company. The primary contribution is the Risk Survey, based on the maturity model, meant to assess and examine employee training and knowledge of risks in the company's activities.
Keywords: NTS, risk assessment, threat factors, security, services
Procedia PDF Downloads 70
2562 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals
Authors: Christine F. Boos, Fernando M. Azevedo
Abstract:
Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation and its visual analysis is very often applied for clinical confirmation of epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the types of epileptic syndrome, determine epileptiform zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation the analysis is made using long term EEG recordings with at least 24 hours long and acquired by a minimum of 24 electrodes in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG or a minimum of 8,640 screens per long term EEG recording. Analyzing thousands of EEG screens in search patterns that have a maximum duration of 200 ms is a very time consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges and a large number of methodologies used neural networks for the pattern classification. One of the differences between all of these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced in the network. Five types of input stimuli have been commonly found in literature: raw EEG signal, morphological descriptors (i.e. parameters related to the signal’s morphology), Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The performance of using raw signal varied between 43 and 84% efficiency. The results of FFT spectrum and STFT spectrograms were quite similar with average efficiency being 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81% while the descriptors presented efficiency values between 62 and 93%. After simulations we could observe that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.Keywords: Artificial neural network, electroencephalogram signal, pattern recognition, signal processing
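For concreteness, the sketch below builds three of the input representations compared above for a single EEG epoch: the raw signal, its FFT magnitude spectrum, and an STFT spectrogram. The epoch length, sampling rate and placeholder signal are illustrative assumptions, not the study's recording parameters.

```python
# Sketch: three of the compared network input representations for one EEG epoch.
import numpy as np
from scipy.signal import stft

fs = 256                                 # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)             # one 10-second EEG "screen" segment
rng = np.random.default_rng(0)
epoch = rng.standard_normal(t.size)      # placeholder for a real EEG channel

raw_input = epoch                        # 1) raw signal fed directly to the network
fft_input = np.abs(np.fft.rfft(epoch))   # 2) FFT magnitude spectrum
f, seg_times, Zxx = stft(epoch, fs=fs, nperseg=128)
stft_input = np.abs(Zxx)                 # 3) STFT spectrogram (time-frequency image)

print(raw_input.shape, fft_input.shape, stft_input.shape)
```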
Procedia PDF Downloads 528
2561 Machine Learning Approach for Lateralization of Temporal Lobe Epilepsy
Authors: Samira-Sadat JamaliDinan, Haidar Almohri, Mohammad-Reza Nazem-Zadeh
Abstract:
Lateralization of temporal lobe epilepsy (TLE) is very important for positive surgical outcomes. We propose a machine learning framework to ultimately identify the epileptogenic hemisphere for temporal lobe epilepsy (TLE) cases using magnetoencephalography (MEG) coherence source imaging (CSI) and diffusion tensor imaging (DTI). Unlike most studies that use classification algorithms, we propose an effective clustering approach to distinguish between normal and TLE cases. We apply the famous Minkowski weighted K-Means (MWK-Means) technique as the clustering framework. To overcome the problem of poor initialization of K-Means, we use particle swarm optimization (PSO) to effectively select the initial centroids of clusters prior to applying MWK-Means. We demonstrate that compared to K-means and MWK-means independently, this approach is able to improve the result of a benchmark data set.Keywords: temporal lobe epilepsy, machine learning, clustering, magnetoencephalography
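The sketch below conveys the initialization idea in miniature: a tiny particle-swarm search selects candidate centroids by minimizing the within-cluster error, and those centroids seed K-means. Plain K-means is used here rather than the Minkowski-weighted variant (which adds per-feature weights), and the swarm parameters and data are illustrative.

```python
# Sketch: PSO-style search for K-means initial centroids (toy version).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
k, n_particles, n_iter = 3, 20, 30
rng = np.random.default_rng(0)

def sse(centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.sum(np.min(d, axis=1) ** 2)

# Each particle is one candidate set of k centroids.
pos = X[rng.choice(len(X), size=(n_particles, k))]
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([sse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

# Seed K-means with the PSO-selected centroids instead of random initialization.
labels = KMeans(n_clusters=k, init=gbest, n_init=1).fit_predict(X)
print("cluster sizes:", np.bincount(labels), " final SSE:", round(sse(gbest), 1))
```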
Procedia PDF Downloads 156
2560 Continuous Improvement Programme as a Strategy for Technological Innovation in Developing Nations. Nigeria as a Case Study
Authors: Sefiu Adebowale Adewumi
Abstract:
A continuous improvement programme (CIP) adopts an approach of improving organizational performance in small incremental steps over time. In this approach, it is not the size of each step that is important, but the likelihood that the improvements will be ongoing. Many companies in developing nations are now complementing continuous improvement with innovation, which is the successful exploitation of new ideas. The focus areas of CIP in the organizations studied were related to the size of the organizations and to their generic classification. Product quality was the prevalent focus in the manufacturing industry, while manpower training and retraining and marketing strategy were emphasized for improvement in the service, transport and supply industries. However, a focus on innovation in raw materials, processes and methods is needed, because these are the critical factors that influence product quality in the manufacturing industries.
Keywords: continuous improvement programme, developing countries, generic classifications, technological innovation
Procedia PDF Downloads 189
2559 Compression Strength of Treated Fine-Grained Soils with Epoxy or Cement
Authors: M. Mlhem
Abstract:
Geotechnical engineers face many problematic soils during construction and have the choice of replacing these soils with more appropriate ones or attempting to improve the engineering properties of the soil through a suitable stabilization technique. In most cases, improving soils is more environmentally friendly, easier and more economical than other solutions. Soil stabilization is applied by introducing a cementing agent or by injecting a substance to fill the pore volume. Chemical stabilizers are divided into two groups: traditional agents, such as cement or lime, and non-traditional agents, such as polymers. This paper studies the effect of epoxy additives on the compression strength of four types of soil and then compares it with the effect of cement on the compression strength of the same soils. Overall, the epoxy additives are more effective in increasing the strength of the different soils regardless of their classification. On the other hand, there was no clear relation between the studied parameters (liquid limit, percentage passing the No. 200 sieve, unit weight) and the strength of the samples across the different soil types.
Keywords: additives, clay, compression strength, epoxy, stabilization
Procedia PDF Downloads 128
2558 Detection of Aflatoxin B1 Producing Aspergillus flavus Genes from Maize Feed Using Loop-Mediated Isothermal Amplification (LAMP) Technique
Authors: Sontana Mimapan, Phattarawadee Wattanasuntorn, Phanom Saijit
Abstract:
Aflatoxin contamination of maize, one of several agricultural crops grown for livestock feeding, is still a problem throughout the world, mainly under hot and humid weather conditions such as those in Thailand. In this study, Aspergillus flavus (A. flavus), the key fungus for aflatoxin production, especially aflatoxin B1 (AFB1), was isolated from naturally infected maize and identified and characterized according to colony morphology and PCR using the ITS, beta-tubulin and calmodulin genes. The strains were analysed for the presence of four aflatoxigenic biosynthesis genes, Ver1, Omt1, Nor1 and aflR, in relation to their capability to produce AFB1. Aflatoxin production was then confirmed using an immunoaffinity column technique. Loop-mediated isothermal amplification (LAMP) was applied as an innovative technique for rapid detection of the target nucleic acid. The reaction conditions were optimized at 65 °C for 60 min, and calcein fluorescent reagent was added before amplification. The LAMP results showed clear differences between positive and negative reactions in end-point analysis under daylight and UV light by the naked eye. In daylight, the samples with AFB1-producing A. flavus genes developed a yellow to green color, but those without the genes retained the orange color. When excited with UV light, the positive samples became visible by bright green fluorescence. LAMP reactions remained positive after the addition of purified target DNA down to dilutions of 10⁻⁶. The reaction products were then confirmed and visualized with 1% agarose gel electrophoresis. In this regard, 50 maize samples were collected from dairy farms and tested for the presence of the four aflatoxigenic biosynthesis genes using the LAMP technique. The results were positive in 18 samples (36%) and negative in 32 samples (64%). All of the samples were rechecked by PCR, and the results were the same as for LAMP, indicating 100% specificity. Additionally, when compared with the immunoaffinity column-based aflatoxin analysis, there was a significant correlation between the LAMP results and the aflatoxin analysis (r = 0.83, P < 0.05), which suggested that positive maize samples were likely to be high-risk feed. In conclusion, the LAMP assay developed in this study provides a simple and rapid approach for detecting AFB1-producing A. flavus genes in maize and appears to be a promising tool for predicting potential aflatoxigenic risk in livestock feed.
Keywords: Aflatoxin B1, Aspergillus flavus genes, maize, loop-mediated isothermal amplification
Procedia PDF Downloads 240
2557 Challenges and Opportunities: One Stop Processing for the Automation of Indonesian Large-Scale Topographic Base Map Using Airborne LiDAR Data
Authors: Elyta Widyaningrum
Abstract:
LiDAR data acquisition has been recognized as one of the fastest solutions for providing the basis data for topographic base mapping in Indonesia. The challenge of accelerating the provision of large-scale topographic base maps as a basis for development planning creates an opportunity to implement an automated scheme in the map production process. One-stop processing will also help accelerate map provision, particularly in conforming with the Indonesian fundamental spatial data catalog derived from ISO 19110 and with geospatial database integration. Thus, automated LiDAR classification, DTM generation and feature extraction will be conducted in one GIS software environment to form all layers of the topographic base maps. The quality of the automated topographic base map will be assessed and analyzed based on its completeness, correctness, contiguity, consistency and possible customization.
Keywords: automation, GIS environment, LiDAR processing, map quality
Procedia PDF Downloads 368
2556 Human Errors in IT Services, HFACS Model in Root Cause Categorization
Authors: Kari Saarelainen, Marko Jantti
Abstract:
Trending the root causes of IT service incidents and problems is an important part of proactive problem management and service improvement. Human-error-related root causes are an important category in IT service management as well, although their proportion among root causes is smaller than in other industries. The research problem in this study is: how should root causes of incidents related to human errors be categorized in an ITSM organization to effectively support service improvement? Categorization based on IT service management processes and categorization based on the Human Factors Analysis and Classification System (HFACS) taxonomy were studied in a case study. HFACS is widely used for human error root cause categorization across many industries. Combining these two categorization models in a two-dimensional matrix was found effective, yet impractical for daily work.
Keywords: IT service management, ITIL, incident, problem, HFACS, Swiss cheese model
Procedia PDF Downloads 490
2555 Function Approximation with Radial Basis Function Neural Networks via FIR Filter
Authors: Kyu Chul Lee, Sung Hyun Yoo, Choon Ki Ahn, Myo Taeg Lim
Abstract:
Recent experimental evidence has shown that, because of its fast convergence and good accuracy, neural network training via the extended Kalman filter (EKF) method is widely applied. However, under uncertainty in the system dynamics or modeling error, the performance of the method is unreliable. In order to overcome this problem, this paper proposes a new finite impulse response (FIR) filter based learning algorithm to train radial basis function neural networks (RBFN) for nonlinear function approximation. Compared to the EKF training method, the proposed FIR filter training method is more robust to such conditions. Furthermore, the number of centers is also considered, since it affects the performance of the approximation.
Keywords: extended Kalman filter, classification problem, radial basis function networks (RBFN), finite impulse response (FIR) filter
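As background for the training problem, the sketch below builds a small radial-basis-function network for 1-D function approximation. The output weights are fitted here by ordinary least squares rather than by the EKF or the proposed FIR-filter algorithm, so it only illustrates the network structure and how the number of centers enters the model; the target function, widths and center count are assumptions.

```python
# Sketch: RBF network for 1-D function approximation (least-squares output weights;
# the paper's FIR-filter training rule is not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.sinc(x) + 0.05 * rng.standard_normal(x.size)   # noisy target function

n_centers = 15                                         # the number of centers affects accuracy
centers = np.linspace(-3, 3, n_centers)
width = 0.5

def design_matrix(x):
    # Gaussian radial basis functions evaluated at every (sample, center) pair.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

weights, *_ = np.linalg.lstsq(design_matrix(x), y, rcond=None)

x_test = np.linspace(-3, 3, 50)
y_hat = design_matrix(x_test) @ weights
print("RMS approximation error:", np.sqrt(np.mean((y_hat - np.sinc(x_test)) ** 2)))
```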
Procedia PDF Downloads 457
2554 Using Machine Learning to Monitor the Condition of the Cutting Edge during Milling Hardened Steel
Authors: Pawel Twardowski, Maciej Tabaszewski, Jakub Czyżycki
Abstract:
The main goal of this work was to use machine learning to predict cutting-edge wear. The research was carried out while milling hardened steel with sintered carbide cutters at various cutting speeds. During the tests, cutting-edge wear was measured and vibration acceleration signals were recorded. Appropriate measures were determined from the vibration signals and served as input data in the machine-learning process. Two approaches were used in this work. The first involved a two-state classification of the cutting edge: suitable or unfit for further work. In the second approach, the state of the cutting edge was predicted from the vibration signals. The obtained results show that appropriate use of machine learning algorithms gives excellent results for monitoring the cutting edge during the process.
Keywords: milling of hardened steel, tool wear, vibrations, machine learning
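A minimal sketch of the two-state classification approach is given below: statistical measures are extracted from vibration acceleration segments and a classifier labels the cutting edge as usable or worn. The features, the classifier choice and the synthetic signals are illustrative assumptions; the abstract does not specify which measures or algorithms were used.

```python
# Sketch: two-state tool-condition classification from vibration features (illustrative data).
import numpy as np
from scipy.stats import kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def features(segment):
    # Simple measures from one vibration acceleration segment.
    return [np.sqrt(np.mean(segment ** 2)),   # RMS
            np.max(np.abs(segment)),          # peak value
            kurtosis(segment)]                # kurtosis

# Synthetic segments: "worn" edges (label 1) produce stronger, more impulsive vibration.
X, y = [], []
for label in (0, 1):
    for _ in range(60):
        amp = 1.0 if label == 0 else 2.5
        seg = amp * rng.standard_normal(1024) + 0.1 * label * rng.standard_normal(1024) ** 3
        X.append(features(seg))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```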
Procedia PDF Downloads 60