Search results for: method detection limit
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22180

21070 A Comparative Study of Malware Detection Techniques Using Machine Learning Methods

Authors: Cristina Vatamanu, Doina Cosovan, Dragos Gavrilut, Henri Luchian

Abstract:

In the past few years, the amount of malicious software increased exponentially and, therefore, machine learning algorithms became instrumental in identifying clean and malware files through semi-automated classification. When working with very large datasets, the major challenge is to reach both a very high malware detection rate and a very low false positive rate. Another challenge is to minimize the time needed for the machine learning algorithm to do so. This paper presents a comparative study between different machine learning techniques such as linear classifiers, ensembles, decision trees, and various hybrids thereof. The training dataset consists of approximately 2 million clean files and 200,000 infected files, which is a realistic quantitative mixture. The paper investigates the above-mentioned methods with respect to both their performance (detection rate and false positive rate) and their practicability.
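
The two competing metrics named above can be made concrete with a small evaluation sketch. The snippet below is a minimal illustration, not the authors' implementation: it trains an ensemble classifier on an imbalanced clean/malware feature set and reports detection rate and false positive rate; the features and labels are random placeholders.

```python
# Minimal sketch: detection rate vs. false positive rate on an imbalanced
# clean/malware dataset (illustrative only, not the authors' setup).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def detection_and_fp_rate(y_true, y_pred):
    """Detection rate = TP / (TP + FN); false positive rate = FP / (FP + TN)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return tp / (tp + fn), fp / (fp + tn)

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 20))            # placeholder feature vectors
y = (rng.random(10000) < 0.1).astype(int)   # minority malware class (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
dr, fpr = detection_and_fp_rate(y_te, clf.predict(X_te))
print(f"detection rate = {dr:.3f}, false positive rate = {fpr:.4f}")
```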

Keywords: ensembles, false positives, feature selection, one side class algorithm

Procedia PDF Downloads 292
21069 Static Study of Piezoelectric Bimorph Beams with Delamination Zone

Authors: Zemirline Adel, Ouali Mohammed, Mahieddine Ali

Abstract:

The First Order Shear Deformation Theory (FOSDT) is taken into consideration to study the static behavior of a bimorph beam with a delamination zone between the upper and the lower layer. The effects of the boundary conditions and of the length of the delamination zone are presented in this paper, with an application to the PVDF piezoelectric material. The Finite Element Method (FEM) is used to discretize the beam. In the axial direction, a displacement field with an inverse effect between the upper and the lower layer is observed in the debonded zone.

Keywords: static, piezoelectricity, beam, delamination

Procedia PDF Downloads 418
21068 OFDM Radar for Detecting a Rayleigh Fluctuating Target in Gaussian Noise

Authors: Mahboobeh Eghtesad, Reza Mohseni

Abstract:

We develop methods for detecting a target for orthogonal frequency division multiplexing (OFDM) based radars. As a preliminary step, we introduce the target and Gaussian noise models in discrete-time form. Then, resorting to the matched filter (MF), we derive a detector for two different scenarios: a non-fluctuating target and a Rayleigh fluctuating target. It will be shown that an MF is not suitable for Rayleigh fluctuating targets. In this paper, we propose a reduced-complexity method based on the fast Fourier transform (FFT) for such a situation. The proposed method has better detection performance.
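
As a rough illustration of the reduced-complexity idea, and not the exact detector derived in the paper, the sketch below forms a Doppler test statistic with an FFT over consecutive OFDM symbols and compares it to a threshold; the signal model and threshold value are placeholder assumptions.

```python
# Illustrative sketch of an FFT-based test statistic across OFDM symbols
# (placeholder signal model; not the detector derived in the paper).
import numpy as np

rng = np.random.default_rng(1)
n_symbols, snr_lin = 128, 2.0

# Received samples on one subcarrier over consecutive OFDM symbols:
# a Rayleigh-fluctuating target echo at a Doppler bin, plus Gaussian noise.
doppler_bin = 17
target = (np.sqrt(snr_lin / 2) * (rng.normal() + 1j * rng.normal())
          * np.exp(2j * np.pi * doppler_bin * np.arange(n_symbols) / n_symbols))
noise = (rng.normal(size=n_symbols) + 1j * rng.normal(size=n_symbols)) / np.sqrt(2)
r = target + noise

# Test statistic: maximum FFT-bin power (energy concentrated at the Doppler bin).
spectrum = np.abs(np.fft.fft(r)) ** 2 / n_symbols
statistic = spectrum.max()
threshold = 10.0  # would be set from the desired false alarm rate (CFAR); assumed here
print("detected" if statistic > threshold else "no detection",
      "| peak at bin", int(spectrum.argmax()))
```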

Keywords: constant false alarm rate (CFAR), matched filter (MF), fast Fourier transform (FFT), OFDM radars, Rayleigh fluctuating target

Procedia PDF Downloads 363
21067 Thermomechanical Processing of a CuZnAl Shape-Memory Alloy

Authors: Pedro Henrique Alves Martins, Paulo Guilherme Ferreira De Siqueira, Franco De Castro Bubani, Maria Teresa Paulino Aguilar, Paulo Roberto Cetlin

Abstract:

Cu-based shape-memory alloys (CuZnAl, CuAlNi, CuAlBe, etc.) are promising engineering materials for several unconventional devices, such as sensors, actuators, and mechanical vibration dampers. Brittleness is one of the factors that limit the commercial use of these alloys, as it makes thermomechanical processing difficult. In this work, a method for the hot extrusion of a 75.50% Cu, 16.74% Zn, 7.76% Al (weight %) alloy is presented. The effects of the thermomechanical processing on the microstructure and the pseudoelastic behavior of the alloy are assessed by optical metallography, compression and hardness tests. Results show that hot extrusion is a suitable method to obtain severe cross-section reductions in the CuZnAl shape-memory alloy studied. The alloy maintained its pseudoelastic effect after the extrusion, and the modifications in the mechanical behavior caused by precipitation during hot extrusion can be minimized by a suitable precipitate dissolution heat treatment.

Keywords: hot extrusion, pseudoelastic, shape-memory alloy, thermomechanical processing

Procedia PDF Downloads 374
21066 A Tool to Measure Efficiency and Trust Towards eXplainable Artificial Intelligence in Conflict Detection Tasks

Authors: Raphael Tuor, Denis Lalanne

Abstract:

The ATM research community is missing suitable tools to design, test, and validate new UI prototypes. Important stakes underline the implementation of both DSS and XAI methods in current systems. ML-based DSS are gaining in relevance as ATFM becomes increasingly complex. However, these systems only prove useful if a human can understand them, and thus new XAI methods are needed. The human-machine dyad should work as a team, and each side should understand the other. We present xSky, a configurable benchmark tool that allows us to compare different versions of an ATC interface in conflict detection tasks. Our main contributions to the ATC research community are (1) a conflict detection task simulator (xSky) that allows testing the applicability of visual prototypes on scenarios of varying difficulty and outputs relevant operational metrics, and (2) a theoretical approach to the explanations of AI-driven trajectory predictions. xSky addresses several issues that were identified within available research tools. Researchers can configure the dimensions affecting scenario difficulty with a simple CSV file. Both the content and appearance of the XAI elements can be customized in a few steps. As a proof of concept, we implemented an XAI prototype inspired by the maritime field.

Keywords: air traffic control, air traffic simulation, conflict detection, explainable artificial intelligence, explainability, human-automation collaboration, human factors, information visualization, interpretability, trajectory prediction

Procedia PDF Downloads 160
21065 Height of Highway Embankment for Tolerable Residual Settlement of Loose Cohesionless Subsoil Overlain by Stronger Soil

Authors: Sharifullah Ahmed

Abstract:

The residual settlement of cohesionless or non-plastic subsoil of different strengths, underlying a highway embankment and overlain by a stronger soil layer, is studied. A parametric study is carried out for different embankment heights and different ESAL factors. The sum of the elastic settlements of the cohesionless subsoil due to axle-induced stress and due to the self-weight of the pavement layers is termed the residual settlement. The values of the residual settlement (Sr) for different heights of the road embankment (He) are obtained and presented as design charts for different SPT values (N60) and ESAL factors. For rigid pavement, and for flexible pavement in the approach to a bridge or culvert, the tolerable residual settlement is 0.100 m. This limit is taken as 0.200 m for flexible pavement in general highway sections away from bridge or culvert approaches. A simplified guideline is developed for the design of highway embankments underlain by very loose to loose cohesionless subsoil overlain by a stronger soil layer, for a limiting value of the residual settlement. In the current research study, the range of the ESAL factor is 1-10 and the range of the SPT value (N60) is 1-10. It is found that ground improvement is not required if the overlying stronger layer is at least 1.5 m thick for general flexible-pavement road sections away from bridge or culvert approaches, and at least 4.0 m thick for rigid pavement or for flexible pavement in bridge or culvert approaches. Tables and charts are included in the prepared guideline to obtain the minimum allowable height of the highway embankment that limits the residual settlement to the mentioned tolerable value. Allowable values of the embankment height (He) are obtained corresponding to the tolerable or limiting level of the residual settlement of the loose subsoil for different SPT values, thicknesses of the stronger layer (d), and ESAL factors. The developed guideline may be used to assess the necessity of ground improvement in the case of cohesionless subsoil underlying a highway embankment and overlain by a stronger subsoil layer, with the residual settlement as the limiting criterion. Ground improvement is only required if the residual settlement of the subsoil exceeds the tolerable limit.
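
The check described above can be illustrated schematically (the paper's charts come from a full parametric study; this sketch only sums two elastic settlement contributions, with the common 2:1 load-spread approximation for axle stress, and compares the result with the tolerable limit). All soil, geometry, and load values below are placeholder assumptions, not values from the study.

```python
# Schematic check of residual settlement (Sr) against a tolerable limit.
# All numbers are placeholder assumptions, not values from the study.
def stress_2to1(q, B, L, z):
    """Axle-induced vertical stress at depth z via the 2:1 load-spread approximation."""
    return q * B * L / ((B + z) * (L + z))

def layer_settlement(delta_sigma, thickness, E):
    """One-dimensional elastic compression of a layer with modulus E."""
    return delta_sigma * thickness / E

q_axle = 550.0      # kPa, tyre contact pressure (assumed)
B = L = 0.25        # m, contact patch (assumed)
He = 2.0            # m, embankment height
d = 1.5             # m, stronger overlying layer thickness
h_loose = 3.0       # m, loose cohesionless layer thickness
E_loose = 8000.0    # kPa, modulus of the loose layer (assumed, low SPT value)
gamma_pav = 23.0    # kN/m3, pavement unit weight (assumed)
t_pav = 0.6         # m, pavement thickness (assumed)

z_top = He + d                                  # depth to top of the loose layer
dsig_axle = stress_2to1(q_axle, B, L, z_top + h_loose / 2)
dsig_self = gamma_pav * t_pav                   # stress added by pavement self-weight
Sr = layer_settlement(dsig_axle + dsig_self, h_loose, E_loose)

tolerable = 0.200   # m, general flexible-pavement section (0.100 m at bridge approaches)
print(f"Sr = {Sr:.3f} m ->", "OK, no ground improvement needed" if Sr <= tolerable
      else "exceeds limit, ground improvement required")
```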

Keywords: axle pressure, equivalent single axle load, ground improvement, highway embankment, tolerable residual settlement

Procedia PDF Downloads 129
21064 Data Quality Enhancement with String Length Distribution

Authors: Qi Xiu, Hiromu Hota, Yohsuke Ishii, Takuya Oda

Abstract:

Recently, collectable manufacturing data have been increasing rapidly. On the other hand, mega recalls are becoming a serious social problem. Under such circumstances, there is an increasing need to prevent mega recalls through defect analysis, such as root cause analysis and abnormality detection, utilizing manufacturing data. However, the time needed to classify strings in manufacturing data by traditional methods is too long to meet the requirement of quick defect analysis. Therefore, we present the String Length Distribution Classification (SLDC) method to correctly classify strings in a short time. This method learns character features, especially string length distributions, from Product IDs and Machine IDs in BOMs and asset lists. By applying the proposal to strings in actual manufacturing data, we verified that the classification time of strings can be reduced by 80%. As a result, it can be estimated that the requirement of quick defect analysis can be fulfilled.
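
A minimal sketch of the underlying idea, not the SLDC implementation itself: characterize each known string field by its length distribution, then assign an unknown column of strings to the closest learned distribution. The reference strings below are made-up examples.

```python
# Minimal sketch: classify a column of strings by comparing its length
# distribution to reference distributions learned from known fields
# (illustrative of the idea only, not the SLDC implementation).
import numpy as np

def length_histogram(strings, max_len=32):
    hist = np.zeros(max_len + 1)
    for s in strings:
        hist[min(len(s), max_len)] += 1
    return hist / hist.sum()

# Reference fields (e.g. Product ID, Machine ID from a BOM or asset list) -- assumed examples.
references = {
    "product_id": length_histogram(["P-10023", "P-20488", "P-99100"]),
    "machine_id": length_histogram(["M01", "M17", "M42"]),
}

def classify(strings):
    h = length_histogram(strings)
    # Smallest L1 distance between length distributions wins.
    return min(references, key=lambda k: np.abs(references[k] - h).sum())

print(classify(["P-55555", "P-10230"]))   # -> product_id
```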

Keywords: string classification, data quality, feature selection, probability distribution, string length

Procedia PDF Downloads 319
21063 Detection of Pharmaceutical Personal Protective Equipment in Video Stream

Authors: Michael Leontiev, Danil Zhilikov, Dmitry Lobanov, Lenar Klimov, Vyacheslav Chertan, Daniel Bobrov, Vladislav Maslov, Vasilii Vologdin, Ksenia Balabaeva

Abstract:

Pharmaceutical manufacturing is a complex process, where each stage requires a high level of safety and sterility. Personal Protective Equipment (PPE) is used for this purpose. Despite all the measures of control, the human factor (improper PPE wearing) causes numerous losses to human health and material property. This research proposes a solid computer vision system for ensuring safety in pharmaceutical laboratories. For this, we have tested a wide range of state-of-the-art object detection methods. Combining previously obtained results in this sphere with our own approach to the problem, we have reached a high accuracy (mAP@0.5) ranging from 0.77 up to 0.98 in detecting all the elements of a common set of PPE used in pharmaceutical laboratories. Our system is a step towards safe medicine production.
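
For reference, mAP@0.5 counts a predicted box as correct when its intersection-over-union (IoU) with a ground-truth box of the same class is at least 0.5. The sketch below shows that matching step only; it is an illustrative helper, not the authors' evaluation pipeline.

```python
# Illustrative IoU matching at the 0.5 threshold used in mAP@0.5
# (not the authors' evaluation pipeline).
def iou(box_a, box_b):
    """Boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_detections(predictions, ground_truth, threshold=0.5):
    """Greedily match each predicted box to an unused ground-truth box of the same class."""
    used, tp, fp = set(), 0, 0
    for cls, box in predictions:                     # assumed sorted by confidence
        best_j, best_iou = None, threshold
        for j, (gt_cls, gt_box) in enumerate(ground_truth):
            if j in used or gt_cls != cls:
                continue
            o = iou(box, gt_box)
            if o >= best_iou:
                best_j, best_iou = j, o
        if best_j is None:
            fp += 1
        else:
            tp += 1
            used.add(best_j)
    return tp, fp, len(ground_truth) - len(used)     # true/false positives, missed PPE items

preds = [("gloves", (10, 10, 50, 50)), ("mask", (60, 60, 90, 90))]
gts   = [("gloves", (12, 8, 52, 48))]
print(match_detections(preds, gts))                  # -> (1, 1, 0)
```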

Keywords: sterility and safety in pharmaceutical development, personal protective equipment, computer vision, object detection, monitoring in pharmaceutical development, PPE

Procedia PDF Downloads 89
21062 Computer-Aided Classification of Liver Lesions Using Contrasting Features Difference

Authors: Hussein Alahmer, Amr Ahmed

Abstract:

Liver cancer is one of the common diseases that cause death. Early detection is important for diagnosis and for reducing mortality. Improvements in medical imaging and image processing techniques have significantly enhanced the interpretation of medical images. Computer-Aided Diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease and hence reduce the liver cancer death rate. This paper presents an automated CAD system consisting of three stages: firstly, automatic liver segmentation and lesion detection; secondly, feature extraction; and finally, classification of liver lesions into benign and malignant using the novel contrasting feature-difference approach. Several types of intensity and texture features are extracted from both the lesion area and its surrounding normal liver tissue. The difference between the features of the two areas is then used as the new lesion descriptor. Machine learning classifiers are then trained on the new descriptors to automatically classify liver lesions as benign or malignant. The experimental results show promising improvements. Moreover, the proposed approach can overcome the problems of varying ranges of intensity and texture between patients, demographics, and imaging devices and settings.
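
The core of the contrasting feature-difference idea can be sketched as follows; the feature set and intensities below are placeholders, not the paper's exact descriptors.

```python
# Sketch of the contrasting feature-difference descriptor: extract the same
# features from the lesion region and its surrounding liver tissue, and use
# their difference as the lesion descriptor (illustrative placeholders only).
import numpy as np

def region_features(pixels):
    """Simple intensity statistics of a region given as a 1-D array of intensities."""
    return np.array([pixels.mean(), pixels.std(), np.percentile(pixels, 90),
                     np.mean(np.abs(np.diff(np.sort(pixels))))])

def lesion_descriptor(lesion_pixels, surrounding_pixels):
    # The difference cancels patient- and scanner-dependent intensity offsets.
    return region_features(lesion_pixels) - region_features(surrounding_pixels)

rng = np.random.default_rng(2)
lesion = rng.normal(140, 25, 500)          # placeholder intensities
normal_tissue = rng.normal(110, 10, 500)
descriptor = lesion_descriptor(lesion, normal_tissue)
print(descriptor)  # fed to a trained classifier (e.g. SVM) for benign/malignant labelling
```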

Keywords: CAD system, difference of feature, fuzzy c means, lesion detection, liver segmentation

Procedia PDF Downloads 326
21061 Evaluation of Antimicrobial Susceptibility Profile of Urinary Tract Infections in Massoud Medical Laboratory: 2018-2021

Authors: Ali Ghorbanipour

Abstract:

The aim of this study is to investigate the drug resistance pattern and the value of the MIC (minimum inhibitory concentration) method in order to reduce the impact of infectious diseases and slow the development of resistance. Method: The study was conducted on clinical specimens collected between 2018 and 2021. Identification of isolates and antibiotic susceptibility testing were performed using conventional biochemical tests. Antibiotic resistance was determined using Kirby-Bauer disk diffusion and MIC by E-test methods, compared with the microdilution plate ELISA method. Results were interpreted according to CLSI. Results: Out of 249,600 different clinical specimens, 18,720 different pathogenic bacteria were detected, an overall detection ratio of 7.7%. The pathogenic bacteria comprised Gram-negative bacteria (70%, n=13,000) and Gram-positive bacteria (30%, n=5,720). Medically relevant Gram-negative bacteria included a multitude of species such as E. coli, Klebsiella spp., Pseudomonas aeruginosa, Acinetobacter spp., and Enterobacter spp., and the Gram-positive bacteria Staphylococcus spp., Enterococcus spp., and Streptococcus spp. were isolated. Conclusion: Our results highlight that the resistance ratio among Gram-negative and Gram-positive bacteria in different infections is high. This suggests constant screening and follow-up programs for the detection of antibiotic resistance, as well as the value of MIC drug susceptibility reporting, which provides a new way to use resistant antibiotics in combination with other antibiotics, or an accurate weighting of the antibiotics that inhibit or kill bacteria. Evaluation of wrong medication in the expansion of resistance and of the side effects of antibiotic overuse are further goals. Ali Ghorbanipour is presently working as a supervisor at the microbiology department of Massoud medical laboratory, Iran. Earlier, he worked as head of the department of pulmonary infection at Firoozgar Hospital, Iran. He received his master's degree in 2012 from Fergusson College. His primary research objective is a biologic wound dressing. To his credit, he has published 10 articles in various international congresses by presenting posters.

Keywords: antimicrobial profile, MIC & MBC Method, microplate antimicrobial assay, E-test

Procedia PDF Downloads 134
21060 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR

Authors: Ionut Vintu, Stefan Laible, Ruth Schulz

Abstract:

Agricultural robotics has been developing steadily over recent years, with the goal of reducing and even eliminating pesticides used in crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots, capable of driving in fields and performing crop-handling tasks, is for robots to robustly detect the rows of plants. Recent work towards autonomous driving between plant rows offers big robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants. This approach lacks flexibility and scalability when it comes to the height of plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and is more versatile. The main application is in tree nurseries. Here, plant height can range from a few centimeters to a few meters. Moreover, trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated pose of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and deal with exception handling, such as row gaps, which are falsely detected as an end of a row. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing the field coverage and the number of damaged plants, the method that uses a local map around the robot proved to perform best, with 68% covered rows and 25% damaged plants. This method is further used and combined with a graph-based localization algorithm, which uses the local map features to estimate the robot’s position inside the greater field. Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% damaged plants. Future work will focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges that the algorithm needs to overcome are fields where the height of the plants is too small for the plants to be detected and fields where it is hard to distinguish between individual plants when they are overlapping. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU, and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
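
One simple way to realize the row-detection step, sketched here as a generic illustration rather than one of the four methods compared in the paper, is to project the LiDAR point cloud onto the ground plane and look for peaks in the point density across the expected row direction.

```python
# Sketch: find plant rows as density peaks of ground-projected LiDAR points
# (a generic illustration, not one of the four methods evaluated in the paper).
import numpy as np

def detect_rows(points_xyz, bin_width=0.05, min_points=20):
    """points_xyz: (N, 3) array in a frame where rows run along the x axis."""
    y = points_xyz[:, 1]                               # across-row coordinate
    bins = np.arange(y.min(), y.max() + bin_width, bin_width)
    hist, edges = np.histogram(y, bins=bins)
    # Local maxima above a minimum point count are taken as row centre lines.
    return [0.5 * (edges[i] + edges[i + 1])
            for i in range(1, len(hist) - 1)
            if hist[i] >= min_points and hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]

# Synthetic cloud: two rows about 0.8 m apart plus scattered ground returns.
rng = np.random.default_rng(3)
row_pts = np.concatenate([rng.normal(c, 0.03, 400) for c in (0.0, 0.8)])
cloud = np.column_stack([rng.uniform(0, 5, 800), row_pts, rng.uniform(0, 1, 800)])
print(detect_rows(cloud))   # approximately [0.0, 0.8]
```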

Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection

Procedia PDF Downloads 140
21059 Application of Liquid Chromatographic Method for the in vitro Determination of Gastric and Intestinal Stability of Pure Andrographolide in the Extract of Andrographis paniculata

Authors: Vijay R. Patil, Sathiyanarayanan Lohidasan, K. R. Mahadik

Abstract:

The gastrointestinal stability of andrographolide was evaluated in vitro in simulated gastric (SGF) and intestinal (SIF) fluids using a validated HPLC-PDA method. The method was validated using a 5 μm Thermo Hypersil GOLD C18 column (250 mm × 4.0 mm) and a mobile phase consisting of water:acetonitrile, 70:30 (v/v), delivered isocratically at a flow rate of 1 mL/min with UV detection at 228 nm. Andrographolide in pure form and in the extract of Andrographis paniculata was incubated at 37°C in an incubator shaker in USP simulated gastric and intestinal fluids with and without enzymes. A systematic protocol as per FDA guidance was followed for the stability study, and samples were assayed at 0, 15, 30 and 60 min for the gastric study and at 0, 15, 30, 60 min, 1, 2 and 3 h for the intestinal stability study. In addition, the stability study was performed up to 24 h to observe the degradation pattern in SGF and SIF (with and without enzyme). The developed method was found to be accurate, precise and robust. Andrographolide was found to be stable in SGF (pH ∼ 1.2) for 1 h and in SIF (pH 6.8) for up to 3 h. The relative difference (RD) between the amount of drug added and the amount found at all time points was < 3%. The present study suggests that drug loss in the gastrointestinal tract may take place by membrane permeation rather than by a degradation process.

Keywords: andrographolide, Andrographis paniculata, in vitro, stability, gastric, intestinal, HPLC-PDA

Procedia PDF Downloads 243
21058 DISGAN: Efficient Generative Adversarial Network-Based Method for Cyber-Intrusion Detection

Authors: Hongyu Chen, Li Jiang

Abstract:

Ubiquitous anomalies endanger the security of our systems constantly. They may bring irreversible damage to the system and cause leakage of privacy. Thus, it is of vital importance to promptly detect these anomalies. Traditional supervised methods such as Decision Trees and Support Vector Machines (SVM) are used to classify normality and abnormality. However, in some cases, abnormal statuses are far rarer than normal statuses, which leads to decision bias in these methods. The generative adversarial network (GAN) has been proposed to handle this case. With its strong generative ability, it only needs to learn the distribution of the normal status and identifies the abnormal status through the gap between it and the learned distribution. Nevertheless, existing GAN-based models are not suitable for processing data with discrete values, leading to immense degradation of detection performance. To cope with discrete features, in this paper, we propose an efficient GAN-based model with a specifically designed loss function. Experimental results show that our model outperforms state-of-the-art models on discrete datasets and remarkably reduces the overhead.

Keywords: GAN, discrete feature, Wasserstein distance, multiple intermediate layers

Procedia PDF Downloads 129
21057 Localization of Radioactive Sources with a Mobile Radiation Detection System using Profit Functions

Authors: Luís Miguel Cabeça Marques, Alberto Manuel Martinho Vale, José Pedro Miragaia Trancoso Vaz, Ana Sofia Baptista Fernandes, Rui Alexandre de Barros Coito, Tiago Miguel Prates da Costa

Abstract:

The detection and localization of hidden radioactive sources are of significant importance in countering the illicit traffic of Special Nuclear Materials and other radioactive sources and materials. Radiation portal monitors are commonly used at airports, seaports, and international land borders for inspecting cargo and vehicles. However, this equipment can be expensive and is not available at all checkpoints. Consequently, the localization of SNM and other radioactive sources often relies on handheld equipment, which can be time-consuming. The current study presents the advantages of real-time analysis of gamma-ray count rate data from a mobile radiation detection system, based on simulated data and field tests. The incorporation of profit functions and decision criteria to optimize the detection system's path significantly enhances the radiation field information and reduces survey time during cargo inspection. For source position estimation, a maximum likelihood estimation algorithm is employed, and confidence intervals are derived using the Fisher information. The study also explores the impact of uncertainties, baselines, and thresholds on the performance of the profit function. The proposed detection system, utilizing a plastic scintillator with silicon photomultiplier sensors, boasts several benefits, including cost-effectiveness, high geometric efficiency, compactness, and lightweight design. This versatility allows for seamless integration into any mobile platform, be it air, land, maritime, or hybrid, and it can also serve as a handheld device. Furthermore, the integration of the detection system into drones, particularly multirotors, and its affordability enable the automation of the source search and a substantial reduction in survey time, particularly when deploying a fleet of drones. While the primary focus is on inspecting maritime container cargo, the methodologies explored in this research can be applied to the inspection of other infrastructures, such as nuclear facilities or vehicles.
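
The position-estimation step can be illustrated with a small maximum-likelihood sketch: count rates measured at several points along the detector path are modelled as Poisson with an inverse-square-law mean plus background, the source position is found by maximizing the likelihood, and the Fisher information gives an approximate confidence interval. All geometry and rate values below are placeholders, not data from the field tests.

```python
# Sketch: maximum-likelihood localization of a point source from gamma-ray
# count rates measured along a path, with a Fisher-information confidence
# interval (placeholder geometry and rates; not the system's actual data).
import numpy as np
from scipy.optimize import minimize

background = 50.0      # counts per measurement (assumed)
strength = 4.0e4       # source term such that mean = strength / r^2 (assumed)
detector_x = np.linspace(0.0, 20.0, 41)      # detector positions along the path (m)
standoff = 3.0                               # lateral distance path-to-cargo (m)
true_source_x = 12.5

def mean_counts(src_x):
    r2 = (detector_x - src_x) ** 2 + standoff ** 2
    return background + strength / r2

rng = np.random.default_rng(4)
counts = rng.poisson(mean_counts(true_source_x))

def neg_log_likelihood(params):
    mu = background + strength / ((detector_x - params[0]) ** 2 + standoff ** 2)
    return np.sum(mu - counts * np.log(mu))   # Poisson NLL up to a constant

fit = minimize(neg_log_likelihood, x0=[10.0], method="Nelder-Mead")
x_hat = fit.x[0]

# Observed Fisher information via a numerical second derivative at the MLE.
h = 1e-3
fisher = (neg_log_likelihood([x_hat + h]) - 2 * neg_log_likelihood([x_hat])
          + neg_log_likelihood([x_hat - h])) / h ** 2
sigma = 1.0 / np.sqrt(fisher)
print(f"estimated source position: {x_hat:.2f} m  (95% CI ~ +/- {1.96 * sigma:.2f} m)")
```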

Keywords: plastic scintillators, profit functions, path planning, gamma-ray detection, source localization, mobile radiation detection system, security scenario

Procedia PDF Downloads 118
21056 Grating Scale Thermal Expansion Error Compensation for Large Machine Tools Based on Multiple Temperature Detection

Authors: Wenlong Feng, Zhenchun Du, Jianguo Yang

Abstract:

To decrease the grating scale thermal expansion error, a novel method based on multiple temperature detection is proposed. Several temperature sensors are installed on the grating scale, and the temperatures at these sensors are recorded. The temperatures of every point on the grating scale are calculated by interpolating between adjacent sensors. According to the thermal expansion principle, the grating scale thermal expansion error model can be established by integrating over the variations of position and temperature. A novel compensation method is proposed in this paper. By applying the established error model, the grating scale thermal expansion error is decreased by 90% compared with no compensation. The residual positioning error of the grating scale is less than 15 μm per 10 m, and the accuracy of the machine tool is significantly improved.
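
A minimal sketch of such an error model is given below, assuming a constant linear expansion coefficient and simple linear interpolation between sensors (not the exact model in the paper): the temperature field along the scale is interpolated from the sensor readings and the thermal strain is integrated up to each measured position.

```python
# Sketch: grating scale thermal expansion error from several temperature sensors.
# Temperature between sensors is linearly interpolated and the thermal strain
# alpha * (T(x) - T_ref) is integrated up to the measured position.
# (Assumed coefficient and readings; illustrative of the approach only.)
import numpy as np

alpha = 11.5e-6          # 1/K, thermal expansion coefficient of the scale (assumed)
T_ref = 20.0             # reference temperature, deg C
sensor_pos = np.array([0.0, 2.5, 5.0, 7.5, 10.0])       # m along the scale
sensor_T   = np.array([21.2, 22.0, 23.1, 22.6, 21.8])   # measured temperatures, deg C

def expansion_error(position, n=2000):
    """Thermal expansion error of the scale reading at a given position (metres)."""
    x = np.linspace(0.0, position, n)
    T = np.interp(x, sensor_pos, sensor_T)               # temperature field along the scale
    dx = x[1] - x[0]
    return float(np.sum(alpha * (T - T_ref) * dx))       # integral of the thermal strain

for pos in (2.0, 5.0, 10.0):
    print(f"x = {pos:5.1f} m : error = {expansion_error(pos) * 1e6:7.1f} um")
```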

Keywords: thermal expansion error of grating scale, error compensation, machine tools, integral method

Procedia PDF Downloads 368
21055 Segmentation of Piecewise Polynomial Regression Model by Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

The piecewise polynomial regression model is a very flexible model for modeling data. When a piecewise polynomial regression model is fitted to data, its parameters are generally unknown. This paper studies the parameter estimation problem of the piecewise polynomial regression model. The method used to estimate the parameters of the piecewise polynomial regression model is the Bayesian method. Unfortunately, the Bayes estimator cannot be found analytically. A reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain whose limiting distribution is the posterior distribution of the piecewise polynomial regression model parameters. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise polynomial regression model.

Keywords: piecewise regression, bayesian, reversible jump MCMC, segmentation

Procedia PDF Downloads 373
21054 Application of Artificial Neural Network in Assessing Fill Slope Stability

Authors: An-Jui. Li, Kelvin Lim, Chien-Kuo Chiu, Benson Hsiung

Abstract:

This paper details the utilization of artificial intelligence (AI) in the field of slope stability whereby quick and convenient solutions can be obtained using the developed tool. The AI tool used in this study is the artificial neural network (ANN), while the slope stability analysis methods are the finite element limit analysis methods. The developed tool allows for the prompt prediction of the safety factors of fill slopes and their corresponding probability of failure (depending on the degree of variation of the soil parameters), which can give the practicing engineer a reasonable basis in their decision making. In fact, the successful use of the Extreme Learning Machine (ELM) algorithm shows that slope stability analysis is no longer confined to the conventional methods of modeling, which at times may be tedious and repetitive during the preliminary design stage where the focus is more on cost saving options rather than detailed design. Therefore, similar ANN-based tools can be further developed to assist engineers in this aspect.
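
Since the paper relies on the Extreme Learning Machine, a compact sketch of that algorithm is shown below: a single hidden layer with random, fixed weights and an output layer solved by least squares. The soil-parameter inputs and safety-factor targets here are synthetic placeholders, not data from the study.

```python
# Compact Extreme Learning Machine (ELM) sketch: random fixed hidden layer,
# least-squares output weights. Inputs (soil parameters) and targets (safety
# factors) below are random placeholders, not data from the study.
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights by least squares
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Placeholder inputs: [unit weight, friction angle, cohesion, slope height] -> safety factor
rng = np.random.default_rng(1)
X = rng.uniform([16, 20, 0, 3], [20, 40, 20, 10], size=(200, 4))
y = 0.05 * X[:, 1] + 0.02 * X[:, 2] - 0.04 * X[:, 3] + rng.normal(0, 0.02, 200)  # synthetic
model = ELM(n_hidden=40).fit(X, y)
print("predicted FS:", model.predict(X[:3]).round(2))
```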

Keywords: landslide, limit analysis, artificial neural network, soil properties

Procedia PDF Downloads 209
21053 Traverse Surveying Table Simple and Sure

Authors: Hamid Fallah

Abstract:

Establishing surveying stations is the first thing a surveyor learns; stations are used for control and implementation in projects such as buildings, roads, tunnels, monitoring, and anything else related to the preparation of maps. In this article, the method of calculation through the traverse table is presented, and by checking several examples of errors made in this table by several publishers of surveying books, we also verify the results of several software packages in a simple way. Surveyors measure angles and lengths when creating surveying stations, so the most important task of a surveyor is to be able to correctly remove the errors of angles and lengths from the calculations and to determine whether the amount of error is within the permissible limit before removing it.
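
The core of a traverse table computation can be sketched as follows: compute latitudes and departures from the measured azimuths and distances, check the linear misclosure, and distribute it with the compass (Bowditch) rule. The azimuths and distances below are placeholder field measurements, not an example from the article.

```python
# Sketch of a closed-traverse check: latitudes/departures from azimuths and
# distances, linear misclosure, and a Bowditch (compass-rule) adjustment.
# Azimuths/distances are placeholder field measurements.
import math

legs = [  # (azimuth in degrees, horizontal distance in metres)
    (45.0, 100.00),
    (135.0, 100.05),
    (225.0, 99.95),
    (315.0, 100.02),
]

lats = [d * math.cos(math.radians(az)) for az, d in legs]   # north components
deps = [d * math.sin(math.radians(az)) for az, d in legs]   # east components
mis_n, mis_e = sum(lats), sum(deps)                         # should both be zero when closed
perimeter = sum(d for _, d in legs)
linear_misclosure = math.hypot(mis_n, mis_e)
print(f"misclosure = {linear_misclosure:.3f} m, precision 1:{perimeter / linear_misclosure:.0f}")

# Compass-rule correction: distribute the misclosure in proportion to leg length.
adj = [(lat - mis_n * d / perimeter, dep - mis_e * d / perimeter)
       for (az, d), lat, dep in zip(legs, lats, deps)]
print("adjusted latitude/departure sums:",
      round(sum(a for a, _ in adj), 6), round(sum(b for _, b in adj), 6))
```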

Keywords: UTM, localization, scale factor, cartesian, traverse

Procedia PDF Downloads 83
21052 TiO₂ Nanotube Array Based Selective Vapor Sensors for Breath Analysis

Authors: Arnab Hazra

Abstract:

Breath analysis is a quick, noninvasive and inexpensive technique for disease diagnosis that can be used on people of all ages without any risk. Only a limited number of volatile organic compounds (VOCs) can be associated with the occurrence of specific diseases. These VOCs can be considered disease markers or breath markers. Selective detection of a specific concentration of a breath marker in exhaled human breath is required to detect a particular disease. For example, acetone (C₃H₆O), ethanol (C₂H₅OH), ethane (C₂H₆), etc. are breath markers, and abnormal concentrations of these VOCs in exhaled human breath indicate diseases such as diabetes mellitus, renal failure, and breast cancer, respectively. Nanomaterial-based vapor sensors are inexpensive, small, and potential candidates for the detection of breath markers. In practical measurement, selectivity is the most crucial issue, where trace detection of a breath marker is needed to identify it accurately in the presence of several interfering vapors and gases. The current article concerns a novel technique for selective and low-ppb-level detection of breath markers at very low temperature based on TiO₂ nanotube array based vapor sensor devices. A highly ordered and oriented TiO₂ nanotube array was synthesized by electrochemical anodization of high-purity titanium (Ti) foil. 0.5 wt% NH₄F, ethylene glycol and 10 vol% H₂O were used as the electrolyte, and anodization was carried out for 90 min with a 40 V DC potential. An Au/TiO₂ nanotube/Ti sandwich-type sensor device was fabricated for the selective detection of VOCs in the low concentration range. Initially, the sensor was characterized, and the resistive and capacitive changes of the sensor were recorded within the valid concentration range for individual breath markers (or organic vapors). The sensor resistance decreased and the sensor capacitance increased with increasing vapor concentration. The ratio of the resistive slope (mR) to the capacitive slope (mC) provides a concentration-independent constant term (M) for a particular vapor. For the detection of an unknown vapor, the ratio of the resistive change to the capacitive change at any concentration equals the previously calculated constant term (M). After successful identification of the target vapor, the concentration is calculated from the straight-line behavior of the resistance as a function of concentration. The current technique is suitable for the detection of a particular vapor in a mixture of other interfering vapors.
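
The identification step described above reduces to a simple calibration lookup. The sketch below, with assumed calibration slopes rather than the article's measured values, identifies the vapor from the ratio of resistive to capacitive response slopes and then reads the concentration from the resistive calibration line.

```python
# Sketch of the selective-detection scheme: identify the vapor from the ratio
# M = mR / mC of resistive and capacitive calibration slopes, then obtain the
# concentration from the resistive calibration line. Slopes are assumed values.
calibration = {          # vapor: (mR, mC), slopes vs. concentration (assumed numbers)
    "acetone": (-0.80, 0.50),
    "ethanol": (-0.30, 0.60),
}

def identify(delta_R, delta_C, tolerance=0.15):
    """Match the measured response ratio against the stored slope ratios."""
    measured_ratio = delta_R / delta_C
    for vapor, (mR, mC) in calibration.items():
        if abs(measured_ratio - mR / mC) <= tolerance * abs(mR / mC):
            concentration = delta_R / mR          # from the resistive calibration line
            return vapor, concentration
    return None, None

vapor, conc = identify(delta_R=-4.0, delta_C=2.5)
print(vapor, conc)   # -> acetone, 5.0 (in the calibration's concentration units)
```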

Keywords: breath marker, vapor sensors, selective detection, TiO₂ nanotube array

Procedia PDF Downloads 156
21051 Zinc Nanoparticles Modified Electrode as an Insulin Sensor

Authors: Radka Gorejova, Ivana Sisolakova, Jana Shepa, Frederika Chovancova, Renata Orinakova

Abstract:

Diabetes mellitus (DM) is a serious metabolic disease characterized by chronic hyperglycemia. Often, the symptoms are not sufficiently observable at early stages, and so hyperglycemia causes pathological and functional changes before DM is diagnosed. Therefore, the development of an electrochemical sensor that is fast, accurate, and instrumentally undemanding is currently needed. Screen-printed carbon electrodes (SPCEs) can be considered the most suitable matrix material for insulin sensors because of the small size of the working electrode. It reduces the analyte volume to only 50 µl for each measurement. The surface of the bare SPCE was modified with a combination of chitosan, multi-walled carbon nanotubes (MWCNTs), and zinc nanoparticles (ZnNPs) to obtain better electrocatalytic activity towards insulin oxidation. ZnNPs were electrochemically deposited on the chitosan-MWCNTs/SPCE surface using the pulse deposition method. Thereafter, insulin was determined on the prepared electrode using chronoamperometry and electrochemical impedance spectroscopy (EIS). The chronoamperometric measurement was performed by adding a constant amount of insulin (2 μl, concentration 2 μM) in 0.1 M NaOH and PBS, and the current response of the system was monitored after each gradual increase in concentration. Subsequently, the limit of detection (LOD) of the prepared electrode was determined via the Randles-Ševčík equation. The LOD was 0.47 µM. The prepared electrodes were also studied as impedimetric sensors for insulin determination; therefore, various insulin concentrations were determined via EIS. Based on the performed measurements, the ZnNPs/chitosan-MWCNTs/SPCE can be considered a potential candidate for a novel electrochemical sensor for insulin determination. Acknowledgments: This work has been supported by the Visegrad Fund project number 22020140, project VEGA 1/0095/21 of the Slovak Scientific Grant Agency, and project APVV-PP-COVID-20-0036 of the Slovak Research and Development Agency.

Keywords: zinc nanoparticles, insulin, chronoamperometry, electrochemical impedance spectroscopy

Procedia PDF Downloads 122
21050 Socratic Style of Teaching: An Analysis of Dialectical Method

Authors: Muhammad Jawwad, Riffat Iqbal

Abstract:

The Socratic method, also known as the dialectical method or the elenctic method, has significant relevance in the contemporary educational system. It can be incorporated into modern-day educational systems both theoretically and practically. Being interactive and dialogue-based in nature, this teaching approach fosters critical thinking and innovation. The pragmatic value of the dialectical method is discussed in this article, and the limitations of the Socratic method are also highlighted. The interactive method of Socrates can be used in many subjects for students of different grades. The limitations and delimitations of the method are also discussed with a view to its proper implementation. This article attempts to elaborate and analyze the teaching method of Socrates with all its presuppositions and epistemological character.

Keywords: Socratic method, dialectical method, knowledge, teaching, virtue

Procedia PDF Downloads 135
21049 Fault Detection and Isolation in Sensors and Actuators of Wind Turbines

Authors: Shahrokh Barati, Reza Ramezani

Abstract:

Due to countries' growing attention to renewable energy production, the demand for energy from renewable sources has gone up; among the renewable energy sources, wind energy has shown the fastest growth in recent years. In this regard, in order to increase the availability of wind turbines, the use of a Fault Detection and Isolation (FDI) system is necessary. Wind turbines exhibit various faults, such as sensor faults, actuator faults, network connection faults, mechanical faults, and faults in the generator subsystem. Although sensors and actuators account for a large number of faults in wind turbines, they have been discussed less in the literature. Therefore, in this work, we focus our attention on designing a sensor and actuator fault detection and isolation algorithm and a fault-tolerant control system (FTCS) for wind turbines. The aim of this research is to propose a comprehensive fault detection and isolation system for the sensors and actuators of a wind turbine based on data-driven approaches. To achieve this goal, the features of the measurable signals in a real wind turbine are extracted under all conditions. The next step is feature selection among the extracted features. Features that lead to maximum separation are selected, classifiers are implemented in parallel, and the results of the classifiers are fused together. In order to maximize the reliability of the decision on a fault, the property of fault repeatability is used.

Keywords: FDI, wind turbines, sensors and actuators faults, renewable energy

Procedia PDF Downloads 401
21048 Estimating Knowledge Flow Patterns of Business Method Patents with a Hidden Markov Model

Authors: Yoonjung An, Yongtae Park

Abstract:

Knowledge flows are a critical source of faster technological progress and stronger economic growth. Knowledge flows have been accelerated dramatically by the establishment of the patent system, in which each patent is required by law to disclose sufficient technical information for the invention to be recreated. Patent analysis has thus been widely used to help investigate technological knowledge flows. However, the existing research is limited in terms of both subject and approach. In particular, in most of the previous studies, business method (BM) patents were not covered, although they are important drivers of knowledge flows like other patents. In addition, these studies usually focus on the static analysis of knowledge flows. Some use approaches that incorporate the time dimension, yet they still fail to trace the true dynamic process of knowledge flows. Therefore, we investigate dynamic patterns of knowledge flows driven by BM patents using a Hidden Markov Model (HMM). An HMM is a popular statistical tool for modeling a wide range of time series data, with no general theoretical limit in regard to statistical pattern classification. Accordingly, it enables characterizing knowledge patterns that may differ by patent, sector, country and so on. We run the model on sets of backward citations and forward citations to compare the patterns of knowledge utilization and knowledge dissemination.
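
A minimal sketch of fitting an HMM to patent-citation time series is given below, using the hmmlearn package as one possible tool; the citation-count sequences are random placeholders, not patent data, and the model setup is not the one used in the study.

```python
# Sketch: estimating hidden knowledge-flow states from yearly citation counts
# of BM patents with a Gaussian HMM (hmmlearn used here as one possible tool;
# the sequences are random placeholders, not patent data).
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(5)
# One row per patent-year observation: [backward citations, forward citations].
sequences = [rng.poisson([3, 5], size=(10, 2)).astype(float) for _ in range(20)]
X = np.concatenate(sequences)
lengths = [len(s) for s in sequences]            # one length per patent's time series

model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100, random_state=0)
model.fit(X, lengths)
states = model.predict(sequences[0])             # most likely state path for one patent
print("state path:", states)
print("transition matrix:\n", model.transmat_.round(2))
```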

Keywords: business method patents, dynamic pattern, Hidden-Markov Model, knowledge flow

Procedia PDF Downloads 329
21047 Ecological Risk Assessment of Informal E-Waste Processing in Alaba International Market, Lagos, Nigeria

Authors: A. A. Adebayo, O. Osibanjo

Abstract:

Informal electronic waste (e-waste) processing is a crude method of recycling which is on the increase in Nigeria. The release of hazardous substances such as heavy metals (HMs) into the environment during informal e-waste processing has been a major concern. However, there is insufficient information on environmental contamination from e-waste recycling and the associated ecological risk in Alaba International Market, a major electronics market in Lagos, Nigeria. The aims of this study were to determine the levels of HMs in soil resulting from e-waste recycling and to assess the associated ecological risks in Alaba International Market. Samples of soil (334) were randomly collected seasonally for three years from fourteen selected e-waste activity points and two control sites. The samples were digested using standard methods and the HMs analysed by inductively coupled plasma optical emission spectrometry. Ecological risk was estimated using the ecological risk index (ER), potential ecological risk index (RI), index of geoaccumulation (Igeo), contamination factor (Cf) and degree of contamination (Cdeg). The concentration ranges of HMs (mg/kg) in soil were: 16.7-11200.0 (Pb); 14.3-22600.0 (Cu); 1.90-6280.0 (Ni); 39.5-4570.0 (Zn); 0.79-12300.0 (Sn); 0.02-138.0 (Cd); 12.7-1710.0 (Ba); 0.18-131.0 (Cr); 0.07-28.0 (V), while As was below the detection limit. Concentration ranges in control soils were 1.36-9.70 (Pb), 2.06-7.60 (Cu), 1.25-5.11 (Ni), 3.62-15.9 (Zn), BDL-0.56 (Sn), BDL-0.01 (Cd), 14.6-47.6 (Ba), 0.21-12.2 (Cr) and 0.22-22.2 (V). The trend in the ecological risk index was in the order Cu > Pb > Ni > Zn > Cr > Cd > Ba > V. The potential ecological risk indices with respect to informal e-waste activities were: burning > dismantling > disposal > stockpiling. The geoaccumulation indices revealed that soils were extremely polluted with Cd, Cu, Pb, Zn and Ni. The contamination factor indicated that 93% of the studied areas have a very high contamination status for Pb, Cu, Ba, Sn and Co, while Cr and Cd were in the moderately contaminated status. The degree of contamination decreased in the order Sn > Cu > Pb >> Zn > Ba > Co > Ni > V > Cr > Cd. Heavy metal contamination of the Alaba International Market environment resulting from informal e-waste processing was established. Proper management of e-waste and remediation of the market environment are recommended to minimize the ecological risks.
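
The indices named above follow standard definitions: contamination factor Cf = C_sample / C_background, degree of contamination Cdeg = sum of Cf, geoaccumulation index Igeo = log2(C_sample / (1.5 B)), ecological risk ER = Tr x Cf with the metal's toxic-response factor Tr, and RI = sum of ER. The sketch below computes them for one sampling point; the sample and background concentrations are placeholder values, not the study's measurements.

```python
# Sketch: standard pollution/risk indices for one sampling point.
# Concentrations are placeholders; background values and toxic-response
# factors (Hakanson) must be chosen to match the study area.
import math

toxic_response = {"Pb": 5, "Cu": 5, "Ni": 5, "Zn": 1, "Cr": 2, "Cd": 30}   # Hakanson Tr
background =     {"Pb": 9.7, "Cu": 7.6, "Ni": 5.1, "Zn": 15.9, "Cr": 12.2, "Cd": 0.01}
sample =         {"Pb": 1200., "Cu": 2500., "Ni": 310., "Zn": 900., "Cr": 45., "Cd": 8.}

cf   = {m: sample[m] / background[m] for m in sample}               # contamination factor
igeo = {m: math.log2(sample[m] / (1.5 * background[m])) for m in sample}
er   = {m: toxic_response[m] * cf[m] for m in sample}               # ecological risk index
c_deg, ri = sum(cf.values()), sum(er.values())                      # degree of contamination, RI

for m in sample:
    print(f"{m}: Cf={cf[m]:8.1f}  Igeo={igeo[m]:5.1f}  ER={er[m]:9.1f}")
print(f"Cdeg = {c_deg:.1f}, RI = {ri:.1f}")
```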

Keywords: Alaba international market, ecological risk, electronic waste, heavy metal contamination

Procedia PDF Downloads 198
21046 Optimization of Hate Speech and Abusive Language Detection on Indonesian-language Twitter using Genetic Algorithms

Authors: Rikson Gultom

Abstract:

Hate speech and abusive language on social media are difficult to detect; usually, they are detected only after they have gone viral in cyberspace, which is of course too late for prevention. An early detection system with fairly good accuracy is needed to reduce the conflicts in society caused by social media posts that attack individuals, groups, and governments in Indonesia. The purpose of this study is to find an early detection model for the social medium Twitter using machine learning, choosing the model with the highest accuracy among the several machine learning methods studied. In this study, the Support Vector Machine (SVM), Naïve Bayes (NB), and Random Forest Decision Tree (RFDT) methods were compared with the Support Vector Machine with genetic algorithm (SVM-GA), Naïve Bayes with genetic algorithm (NB-GA), and Random Forest Decision Tree with genetic algorithm (RFDT-GA). The study produced a comparison table for the accuracy of the hate speech and abusive language detection models, presented it in the form of a graph of the accuracy of the six algorithms developed on the Indonesian-language Twitter dataset, and concluded with the best model with the highest accuracy.
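
To make the genetic-algorithm optimization concrete, the sketch below evolves SVM hyperparameters against cross-validated accuracy on a synthetic placeholder dataset; it illustrates the general GA-plus-classifier idea, not the study's exact encoding or its Indonesian-language Twitter data.

```python
# Sketch: a small genetic algorithm tuning SVM hyperparameters (C, gamma) with
# cross-validated accuracy as fitness. Data and encoding are placeholders, not
# the Indonesian-language Twitter dataset or the study's exact GA design.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=30, random_state=0)

def fitness(individual):
    C, gamma = 10.0 ** individual              # chromosome stores log10(C), log10(gamma)
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

pop = rng.uniform([-2, -4], [3, 0], size=(12, 2))           # initial population
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]                   # selection: keep the best half
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        child = np.where(rng.random(2) < 0.5, a, b)          # uniform crossover
        child += rng.normal(0, 0.2, 2)                       # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("best C, gamma:", (10.0 ** best).round(4), "accuracy:", round(fitness(best), 3))
```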

Keywords: abusive language, hate speech, machine learning, optimization, social media

Procedia PDF Downloads 129
21045 The Journey of a Malicious HTTP Request

Authors: M. Mansouri, P. Jaklitsch, E. Teiniker

Abstract:

SQL injection on web applications is a very popular kind of attack. There are mechanisms such as intrusion detection systems in order to detect this attack. These strategies often rely on techniques implemented at high layers of the application but do not consider the low level of system calls. The problem of only considering the high level perspective is that an attacker can circumvent the detection tools using certain techniques such as URL encoding. One technique currently used for detecting low-level attacks on privileged processes is the tracing of system calls. System calls act as a single gate to the Operating System (OS) kernel; they allow catching the critical data at an appropriate level of detail. Our basic assumption is that any type of application, be it a system service, utility program or Web application, “speaks” the language of system calls when having a conversation with the OS kernel. At this level we can see the actual attack while it is happening. We conduct an experiment in order to demonstrate the suitability of system call analysis for detecting SQL injection. We are able to detect the attack. Therefore we conclude that system calls are not only powerful in detecting low-level attacks but that they also enable us to detect high-level attacks such as SQL injection.

Keywords: Linux system calls, web attack detection, interception, SQL

Procedia PDF Downloads 359
21044 Ultra-High Frequency Passive Radar Coverage for Cars Detection in Semi-Urban Scenarios

Authors: Pedro Gómez-del-Hoyo, Jose-Luis Bárcena-Humanes, Nerea del-Rey-Maestre, María-Pilar Jarabo-Amores, David Mata-Moya

Abstract:

A study of achievable coverages using passive radar systems in terrestrial traffic monitoring applications is presented. The study includes the estimation of the bistatic radar cross section of different commercial vehicle models, which yields challengingly low values that make detection really difficult. A semi-urban scenario is selected to evaluate the impact of the excess propagation losses generated by an irregular relief. A bistatic passive radar exploiting UHF frequencies radiated by digital video broadcasting transmitters is assumed. A general method of coverage estimation using electromagnetic simulators in combination with the estimated average bistatic radar cross section of cars is applied. In order to reduce the computational cost, a hybrid solution is implemented, assuming free space for the target-receiver path but estimating the excess propagation losses for the transmitter-target one.
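
For context, coverage estimation of this kind rests on the bistatic radar equation, Pr = Pt Gt Gr lambda^2 sigma_b / ((4 pi)^3 Rt^2 Rr^2). The sketch below evaluates the received power and resulting SNR at a few target positions, applying an extra transmitter-target loss term where an electromagnetic simulator would predict excess propagation losses; every system parameter, the bistatic RCS and the loss value are placeholder assumptions, not figures from the study.

```python
# Sketch: bistatic received power / SNR for a few target positions using the
# bistatic radar equation. System parameters, bistatic RCS and the excess
# propagation loss term are placeholder assumptions.
import numpy as np

c = 3.0e8
f = 650e6                      # UHF DVB-T carrier frequency (assumed)
lam = c / f
Pt, Gt, Gr = 2.0e3, 10.0, 5.0  # transmitter power (W) and antenna gains (linear, assumed)
sigma_b = 0.5                  # bistatic RCS of a car, m^2 (assumed low value)
k, T0, B, F = 1.38e-23, 290.0, 8.0e6, 10 ** (4 / 10)   # noise terms (assumed)

tx = np.array([0.0, 0.0])      # transmitter position (m)
rx = np.array([5000.0, 0.0])   # receiver position (m)

def snr_db(target_xy, excess_loss_db=0.0):
    Rt = np.linalg.norm(target_xy - tx)            # transmitter-target range
    Rr = np.linalg.norm(target_xy - rx)            # target-receiver range (free space)
    Pr = Pt * Gt * Gr * lam**2 * sigma_b / ((4 * np.pi)**3 * Rt**2 * Rr**2)
    Pr *= 10 ** (-excess_loss_db / 10)             # excess losses from the EM simulator
    noise = k * T0 * B * F
    return 10 * np.log10(Pr / noise)

for x in (1000.0, 3000.0, 6000.0):
    print(f"target at x = {x:6.0f} m : SNR = {snr_db(np.array([x, 1000.0]), 6.0):6.1f} dB")
```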

Keywords: bistatic radar cross section, passive radar, propagation losses, radar coverage

Procedia PDF Downloads 337
21043 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach

Authors: Gong Zhilin, Jing Yang, Jian Yin

Abstract:

The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The projected model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance the quality of the data through alleviation of missing data, noisy data, and null values. The pre-processed data are class-imbalanced in nature, and therefore they are handled effectively with the K-means clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted, such as improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation), and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal features are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA). This SI-AOA model is a conceptual improvement of the standard Arithmetic Optimization Algorithm. The deep learning models used are Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the extracted optimal features. The outcomes from the LSTM and CNN enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the Self-improved Arithmetic Optimization Algorithm (SI-AOA).
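
The imbalance-handling and feature steps of such a pipeline can be sketched as follows, using the ordinary SMOTE oversampler and plain statistical features as stand-ins; the paper's K-means-based SMOTE, improved PCA and LSTM/CNN/QDNN detector are not reproduced here.

```python
# Sketch of the pre-processing / imbalance / feature stages only: SMOTE
# oversampling and simple statistical features. Stand-ins for the paper's
# K-means-based SMOTE, improved PCA and LSTM/CNN/QDNN detector.
import numpy as np
from imblearn.over_sampling import SMOTE
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))                     # placeholder transaction features
y = (rng.random(5000) < 0.02).astype(int)           # ~2% fraudulent transactions (placeholder)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
print("class counts after SMOTE:", np.bincount(y_bal))

def statistical_features(rows):
    """Per-transaction statistics: mean, median, std, skewness, kurtosis."""
    return np.column_stack([rows.mean(axis=1), np.median(rows, axis=1),
                            rows.std(axis=1), skew(rows, axis=1), kurtosis(rows, axis=1)])

features = statistical_features(X_bal)
print("feature matrix shape:", features.shape)       # -> (n_balanced_samples, 5)
```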

Keywords: credit card, data mining, fraud detection, money transactions

Procedia PDF Downloads 131
21042 Determination of Nanomolar Mercury (II) by Using Multi-Walled Carbon Nanotubes Modified Carbon Zinc/Aluminum Layered Double Hydroxide – 3 (4-Methoxyphenyl) Propionate Nanocomposite Paste Electrode

Authors: Illyas Md Isa, Sharifah Norain Mohd Sharif, Norhayati Hashima

Abstract:

A mercury(II) sensor was developed by using a multi-walled carbon nanotube (MWCNT) paste electrode modified with Zn/Al layered double hydroxide-3-(4-methoxyphenyl)propionate nanocomposite (Zn/Al-HMPP). The optimum conditions by cyclic voltammetry were observed at an electrode composition of 2.5% (w/w) Zn/Al-HMPP/MWCNTs, 0.4 M potassium chloride, pH 4.0, and a scan rate of 100 mVs-1. The sensor exhibited wide linear ranges from 1x10-3 M to 1x10-7 M Hg2+ and from 1x10-7 M to 1x10-9 M Hg2+, with a detection limit of 1x10-10 M Hg2+. The high sensitivity of the proposed electrode towards Hg(II) was confirmed by double potential-step chronocoulometry, which indicated the following values: diffusion coefficient 1.5445 x 10-9 cm2 s-1, surface charge 524.5 µC s-½ and surface coverage 4.41 x 10-2 mol cm-2. The presence of a 25-fold concentration of most metal ions had no influence on the anodic peak current. With characteristics such as high sensitivity, selectivity and repeatability, the electrode was proposed as an appropriate alternative for the determination of mercury(II).

Keywords: cyclic voltammetry, mercury(II), modified carbon paste electrode, nanocomposite

Procedia PDF Downloads 310
21041 Diversity Indices as a Tool for Evaluating Quality of Water Ways

Authors: Khadra Ahmed, Khaled Kheireldin

Abstract:

In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of a signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of the phase congruency and the robustness of the CSLBP operator on flat images, as well as to blur and illumination changes, lead the proposed descriptor to be more robust and less sensitive to light variations. The proposed descriptor is formed by extracting the phase congruency and the CSLBP values of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low-resolution DaimlerChrysler datasets to evaluate the detection performance of the pedestrian detection system based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor has better detection performance than a set of state-of-the-art feature extraction methodologies.

Keywords: planktons, diversity indices, water quality index, water ways

Procedia PDF Downloads 519