Search results for: edge detection method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21725

20615 An Auxiliary Technique for Coronary Heart Disease Prediction by Analyzing Electrocardiogram Based on ResNet and Bi-Long Short-Term Memory

Authors: Yang Zhang, Jian He

Abstract:

Heart disease is one of the leading causes of death in the world, and coronary heart disease (CHD) is one of the major heart diseases. The electrocardiogram (ECG) is widely used in the detection of heart diseases, but the traditional manual method of predicting CHD from an ECG requires considerable professional knowledge from doctors. This paper introduces a sliding window and the continuous wavelet transform (CWT) to transform ECG signals into images, and then ResNet and Bi-LSTM are combined to build the ECG feature extraction network (named ECGNet). Finally, an auxiliary system for coronary heart disease prediction was developed based on a modified ResNet18 and Bi-LSTM, and the public ECG dataset of CHD from MIMIC-III was used to train and test the system. The experimental results show that the accuracy of the method is 83% and the F1-score is 83%. Compared with the available methods for CHD prediction based on ECG, such as kNN, decision trees, VGGNet, etc., this method not only improves the prediction accuracy but also avoids the degradation phenomenon of deep networks.
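
As a rough illustration of the preprocessing step described above, the sketch below (Python, assuming the pywt and numpy libraries) turns a 1-D ECG trace into a stack of scalogram images with a sliding window and a continuous wavelet transform; the window length, overlap and wavelet are hypothetical choices, not the authors' exact settings. A ResNet-style CNN could then extract per-window features, which a bidirectional LSTM aggregates across windows.

```python
import numpy as np
import pywt

def ecg_to_scalograms(ecg, fs=250, win_s=2.0, overlap=0.5, wavelet="morl", n_scales=64):
    """Slide a window over an ECG trace and CWT-transform each segment
    into a 2-D scalogram (scale x time) suitable for an image network."""
    win = int(win_s * fs)
    step = int(win * (1.0 - overlap))
    scales = np.arange(1, n_scales + 1)
    images = []
    for start in range(0, len(ecg) - win + 1, step):
        segment = ecg[start:start + win]
        coeffs, _ = pywt.cwt(segment, scales, wavelet, sampling_period=1.0 / fs)
        images.append(np.abs(coeffs))          # one scalogram per window
    return np.stack(images)                    # shape: (n_windows, n_scales, win)
```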

Keywords: Bi-LSTM, CHD, ECG, ResNet, sliding window

Procedia PDF Downloads 91
20614 Automatic Post Stroke Detection from Computed Tomography Images

Authors: C. Gopi Jinimole, A. Harsha

Abstract:

For detecting strokes, Computed Tomography (CT) scans are preferred for imaging the abnormalities or infarction in the brain. Because of problems with the window settings used to evaluate brain CT images, they perform poorly in early-stage infarction detection. This paper presents an automatic estimation method for the window settings of CT images for proper contrast of the hyper infarction present in the brain. In the proposed work, the window width is estimated automatically for each slice, and the window centre is changed to a new value of 31 HU, which is the average of the HU values of the grey matter and white matter in the brain. The automatic window width estimation is based on the average of the median of statistical central moments. Thus, with the newly suggested window centre and the estimated window width, the hyper infarction or post-stroke regions in CT brain images are properly detected. The proposed approach assists radiologists in evaluating CT images for early quantitative signs of delayed stroke, so that severe hemorrhage in the future can be prevented by providing timely medication to the patients.
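
For readers unfamiliar with CT windowing, a minimal sketch of the idea (not the authors' exact estimator) is shown below: each slice, given in Hounsfield units, is mapped to display grey levels using a window centre of 31 HU and a per-slice window width; the width statistic here is only a placeholder for the paper's "average of the median of statistical central moments".

```python
import numpy as np

def apply_window(slice_hu, wc=31.0, ww=None):
    """Map a CT slice in Hounsfield units to the [0, 1] display range
    using window centre wc and window width ww (estimated per slice)."""
    if ww is None:
        # placeholder width estimate; the paper derives it from the
        # average of the median of statistical central moments
        ww = 4.0 * np.std(slice_hu[slice_hu > -100])   # ignore air/background
    lo, hi = wc - ww / 2.0, wc + ww / 2.0
    return np.clip((slice_hu - lo) / (hi - lo), 0.0, 1.0)
```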

Keywords: computed tomography (CT), hyper infarction or post stroke region, Hounsfield Unit (HU), window centre (WC), window width (WW)

Procedia PDF Downloads 203
20613 Tool Development for Assessing Antineoplastic Drugs Surface Contamination in Healthcare Services and Other Workplaces

Authors: Benoit Atge, Alice Dhersin, Oscar Da Silva Cacao, Beatrice Martinez, Dominique Ducint, Catherine Verdun-Esquer, Isabelle Baldi, Mathieu Molimard, Antoine Villa, Mireille Canal-Raffin

Abstract:

Introduction: Healthcare workers' exposure to antineoplastic drugs (AD) is a burning issue for occupational medicine practitioners. Biological monitoring of occupational exposure (BMOE) is an essential tool for assessing AD contamination of healthcare workers. In addition to BMOE, surface sampling is a useful tool for understanding how workers get contaminated, identifying sources of environmental contamination, verifying the effectiveness of surface decontamination procedures and monitoring these surfaces. The objective of this work was to develop a complete tool including a kit for surface sampling and a quantification analytical method for the detection of AD traces. The development was based on the three following criteria: the capacity of the kit to sample in every professional environment (healthcare services, veterinary practices, etc.), the detection of very low AD traces with a validated analytical method, and the ease of use of the sampling kit regardless of the person in charge of sampling. Material and method: The AD most used in terms of quantity and frequency were identified by an analysis of the literature and of the consumption of different hospitals, veterinary services, and home care settings. The kind of adsorbent device, the surface moistening solution and the mix of solvents for the extraction of AD from the adsorbent device were tested for maximal yield. AD quantification was achieved by an ultra-high-performance liquid chromatography method coupled with tandem mass spectrometry (UHPLC-MS/MS). Results: Because of their high frequency of use and their good representation of the diverse activities across healthcare, 15 AD (cyclophosphamide, ifosfamide, doxorubicin, daunorubicin, epirubicin, 5-FU, dacarbazine, etoposide, pemetrexed, vincristine, cytarabine, methotrexate, paclitaxel, gemcitabine, mitomycin C) were selected. The analytical method was optimized and adapted to obtain high sensitivity with very low limits of quantification (25 to 5000 ng/mL), equivalent to or lower than those previously published (for 13/15 AD). The sampling kit is easy to use and provided with didactic support (an online video and a paper protocol). It showed its effectiveness without inter-individual variation (n=5/person; n=5 persons; p=0.85; ANOVA) regardless of the person in charge of sampling. Conclusion: This validated tool (sampling kit + analytical method) is very sensitive, easy to use and very didactic for controlling the chemical risk posed by AD. Moreover, BMOE permits focused prevention. Used routinely, this tool is suitable for any occupational health intervention.

Keywords: surface contamination, sampling kit, analytical method, sensitivity

Procedia PDF Downloads 133
20612 Evaluation of Uniformity for Gafchromic Sheets for Film Dosimetry

Authors: Fayzan Ahmed, Saad Bin Saeed, Abdul Qadir Jangda

Abstract:

Gafchromic™ sheets are extensively used for the QA of intensity-modulated radiation therapy and other in-vivo dosimetry. Intra-sheet non-uniformity of the scanner as well as of the film causes undesirable fluctuations which are reflected in dosimetry. The aim of this study is to define a systematic and robust method to investigate the intra-sheet uniformity of unexposed Gafchromic sheets and of the region of interest (ROI) of the scanner. Sheets of lot No. A05151201 were scanned before and after the expiry period with an EPSON™ XL10000 scanner in transmission mode, landscape orientation and 72 dpi resolution. An ROI of 8 x 10 inches, equal to the sheet dimensions, in the centre of the scanner was used to acquire images with full transmission, blocked transmission and with the sheets in place. 500 virtual grids, created in MATLAB®, were imported as macros into ImageJ (1.49m, Wayne Rasband) to analyze the images. In order to remove edge effects, the outer 86 grids were excluded from the analysis. The standard deviations of the blocked transmission and full transmission are 0.38% and 0.66%, respectively, confirming the high uniformity of the scanner. Expired and non-expired sheets have standard deviations of 2.18% and 1.29%, showing that uniformity decreases after expiry. The results are promising and indicate a good potential of this method to be used as a uniformity check for the scanner and for unexposed Gafchromic sheets.
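
A minimal sketch of the grid-based uniformity analysis is given below (Python/NumPy standing in for the MATLAB/ImageJ workflow). The 20 x 25 grid layout is an assumption that yields 500 cells, and dropping a one-cell border removes exactly the 86 outer cells mentioned above; the relative standard deviation of the cell means is used as the uniformity index.

```python
import numpy as np

def intra_sheet_uniformity(image, rows=20, cols=25, border=1):
    """Divide a scanned sheet into a rows x cols grid (20 x 25 = 500 cells),
    drop the outer ring of cells (86 cells for border=1) to avoid edge
    effects, and return the relative standard deviation (%) of the
    cell-mean pixel values as a uniformity index."""
    h, w = image.shape
    means = []
    for r in range(border, rows - border):
        for c in range(border, cols - border):
            cell = image[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            means.append(cell.mean())
    means = np.asarray(means)
    return 100.0 * means.std() / means.mean()
```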

Keywords: IMRT, film dosimetry, virtual grids, uniformity

Procedia PDF Downloads 495
20611 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89

Authors: A. Chatel, I. S. Torreguitart, T. Verstraete

Abstract:

The paper deals with a single-point optimization of the LS89 turbine using adjoint optimization and defining the design variables within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, which combines parametric effectiveness with a differential evolutionary algorithm in order to create an optimal parameterization. In this manuscript, we show that by doing the parameterization at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered out and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy and avoids the limited arithmetic precision and truncation error of finite differences. Then, the parametric effectiveness is computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which allows introducing new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed. However, this assumption can hinder the creation of better parameterizations that would allow producing more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed; the trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization where the parameterization is created using the optimal knot positions results in a more efficient strategy to reach a better optimum than the multilevel optimization where the position of the knots is arbitrarily assumed.
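
The sketch below illustrates, in a highly simplified form, the knot-placement idea described above: a differential evolution search (here SciPy's differential_evolution) chooses where to insert new knots so that parametric effectiveness, i.e. the fraction of the adjoint-driven shape change that the enlarged design space can reproduce, is maximized. The parametric_effectiveness function is only a toy surrogate using hat-function design modes; the paper's actual computation involves the CAD kernel, grid sensitivities and algorithmic differentiation.

```python
import numpy as np
from scipy.optimize import differential_evolution

def parametric_effectiveness(knots, s, target):
    """Toy surrogate: fraction of the adjoint-driven shape change `target`
    (sampled at parameter values `s`) that can be reproduced by hat-function
    design modes centred at `knots`."""
    modes = np.maximum(0.0, 1.0 - np.abs(s[:, None] - knots[None, :]) / 0.1)
    coeff, *_ = np.linalg.lstsq(modes, target, rcond=None)
    captured = modes @ coeff
    return np.linalg.norm(captured) / (np.linalg.norm(target) + 1e-12)

def best_knot_positions(n_new, s, target, seed=0):
    """Place n_new knots in (0, 1) to maximize parametric effectiveness."""
    bounds = [(0.0, 1.0)] * n_new
    res = differential_evolution(
        lambda k: -parametric_effectiveness(np.sort(k), s, target),
        bounds, seed=seed, tol=1e-6)
    return np.sort(res.x)

# usage sketch: s = np.linspace(0, 1, 200); target = sampled adjoint-driven
# displacement along the blade; best_knot_positions(3, s, target)
```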

Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness

Procedia PDF Downloads 113
20610 Efficiency on the Enteric Viral Removal in Four Potable Water Treatment Plants in Northeastern Colombia

Authors: Raquel Amanda Villamizar Gallardo, Oscar Orlando Ortíz Rodríguez

Abstract:

Enteric viruses are cosmopolitan agents present in several environments, including water. These viruses can cause different diseases, including gastroenteritis, hepatitis, conjunctivitis and respiratory problems, among others. Although in Colombia there are no regulations concerning routine viral analysis of drinking water, an enhanced understanding of viral pollution and resistance to treatments is desired in order to assure pure water to the population. Viral detection is often complex due to the need for specialized and time-consuming procedures. In addition, viruses are highly diluted in water, which is a drawback from the analytical point of view. To this end, a fast and selective method for detecting enteric viruses (i.e. Hepatitis A and Rotavirus) was applied. Micro-magnetic particles were functionalized with monoclonal anti-Hepatitis and anti-Rotavirus antibodies and used to capture, concentrate and separate whole viral particles in raw water and drinking water samples from four treatment plants identified as CAR-01, MON-02, POR-03 and TON-04 and located in Northeastern Colombia. Viruses were detected molecularly by using one-step RT-PCR (SuperScript III). Each plant was analyzed at the entry and exit points in order to determine the initial presence and eventual reduction of Hepatitis A and Rotavirus after disinfection. The results revealed the presence of both enteric viruses in 100% of the raw water analyzed in all plants. This represents a potential health hazard, especially for those people who use this water for agricultural purposes. However, in the drinking water analysis, only CAR-01 was positive for enteric viruses, with Rotavirus detected. In conclusion, the results confirm Rotavirus as the best indicator to evaluate the efficacy of potable water treatment plants in eliminating viruses. The CAR potable water plant should improve its disinfection process in order to remove enteric viruses efficiently.

Keywords: drinking water, hepatitis A, rotavirus, virus removal

Procedia PDF Downloads 233
20609 Numerical Analysis and Influence of the Parameters on Slope Stability

Authors: Fahim Kahlouche, Alaoua Bouaicha, Sihem Chaîbeddra, Sid-Ali Rafa, Abdelhamid Benouali

Abstract:

Designing a structure sometimes requires building it on rough or sloping ground. Besides the problem of slope stability, the behavior of the foundations bearing the structure is influenced by the destabilizing effect of the ground's slope. This article focuses on the analysis of a slope subjected to loading by introducing, on the one hand, the different factors influencing the slope's behavior and, on the other hand, the influence of this slope on the foundation's behavior. This study concerns elastoplastic modelling using FLAC 2D. This software is based on the finite difference method, which is one of the oldest methods for the numerical solution of systems of differential equations with initial and boundary conditions; it was developed for geotechnical simulation. The aim of this simulation is to demonstrate the notable effect of the shear modulus « G », the cohesion « C », the slope inclination angle « β », and the distance between the foundation and the head of the slope on the stability of the slope as well as on the stability of the foundation. In our simulation, the slope consists of homogeneous ground. The foundation is considered rigid; therefore, loading is applied as vertical forces on the nodes that represent the contact between the foundation and the ground.

Keywords: slope, shallow foundation, numeric method, FLAC 2D

Procedia PDF Downloads 290
20608 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection

Authors: Muhammad Ali

Abstract:

Cyber-attacks and anomaly detection on Internet of Things (IoT) infrastructure are an emerging concern in the domain of data-driven intrusion detection. Rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data type probing, malicious operation, DDoS, scanning, spying, and wrong setup are attacks and anomalies that can cause an IoT system to fail. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction. IoT devices expose a wide variety of new cyber security attack vectors in network traffic. For further IoT development, and mainly for smart IoT applications, intelligent processing and analysis of data are necessary, so our approach is to secure them. We train and compare several machine learning models to accurately predict attacks and anomalies on IoT systems, considering IoT applications, using ANOVA-based feature selection so that fewer features are fed to the prediction models that evaluate network traffic and help protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, DT, and RF, compared for the most satisfactory test accuracy with fast detection. The evaluated ML metrics include precision, recall, F1 score, FPR, NPV, G.M., MCC, and AUC-ROC. The Random Forest algorithm achieved the best results with less prediction time, with an accuracy of 99.98%.
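
A compact sketch of the ANOVA-based feature selection plus classification pipeline described above is shown below, using scikit-learn; the synthetic dataset, the number of selected features and the train/test split are illustrative assumptions standing in for labelled IoT traffic records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# stand-in for labelled IoT network-traffic records (features X, attack label y)
X, y = make_classification(n_samples=5000, n_features=30, n_informative=12,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)

# ANOVA F-test keeps the most discriminative features before classification
model = make_pipeline(SelectKBest(score_func=f_classif, k=10),
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```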

Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection

Procedia PDF Downloads 126
20607 Domain Adaptation Saves Lives - Drowning Detection in Swimming Pool Scene Based on YOLOV8 Improved by Gaussian Poisson Generative Adversarial Network Augmentation

Authors: Simiao Ren, En Wei

Abstract:

Drowning is a significant safety issue worldwide, and a robust computer vision-based alert system can easily prevent such tragedies in swimming pools. However, due to the domain shift caused by the visual gap (potentially due to lighting, indoor scene changes, pool floor color, etc.) between the training swimming pool and the test swimming pool, the robustness of such algorithms has been questionable. The annotation cost of labeling each new swimming pool is too high for mass adoption of such a technique. To address this issue, we propose a domain-aware data augmentation pipeline based on the Gaussian Poisson Generative Adversarial Network (GP-GAN). Combined with YOLOv8, we demonstrate that such a domain adaptation technique can significantly improve model performance (from 0.24 mAP to 0.82 mAP) on new test scenes. As the augmentation method only requires background imagery from the new domain (no annotation needed), we believe this is a promising, practical route for preventing swimming pool drowning.
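
A rough sketch of the augment-then-train loop is given below. The blending function is a stand-in for the GP-GAN step (approximated here by OpenCV's seamlessClone), and the dataset path and training settings for the public ultralytics YOLOv8 API are illustrative, not the authors' configuration.

```python
import cv2
from ultralytics import YOLO

def blend_into_background(swimmer_rgb, swimmer_mask, background_rgb, center):
    """Stand-in for GP-GAN blending: paste an annotated swimmer crop into a
    background image from the new (unlabelled) pool domain. Poisson-style
    blending via OpenCV approximates the Gaussian-Poisson GAN step."""
    return cv2.seamlessClone(swimmer_rgb, background_rgb,
                             swimmer_mask, center, cv2.NORMAL_CLONE)

# after writing the blended images and copied labels into a YOLO dataset,
# fine-tune a YOLOv8 detector on the augmented data (illustrative settings)
model = YOLO("yolov8n.pt")
model.train(data="pool_augmented.yaml", epochs=50, imgsz=640)
```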

Keywords: computer vision, deep learning, YOLOv8, detection, swimming pool, drowning, domain adaptation, generative adversarial network, GAN, GP-GAN

Procedia PDF Downloads 101
20606 Design of Parity-Preserving Reversible Logic Signed Array Multipliers

Authors: Mojtaba Valinataj

Abstract:

Reversible logic, as a new favorable design domain, can be used in various fields, especially the creation of quantum computers, because of its speed and negligible power consumption. However, its susceptibility to a variety of environmental effects may lead to incorrect results. In this paper, because of the importance of the multiplication operation in various computing systems, some novel reversible logic array multipliers are proposed with error detection capability by incorporating parity-preserving gates. The new designs are presented for the two main parts of array multipliers, partial product generation and multi-operand addition, by exploiting new arrangements of existing gates, which results in two signed parity-preserving array multipliers. The experimental results reveal that the best proposed 4×4 multiplier in this paper achieves 12%, 24%, and 26% enhancements in the number of constant inputs, number of required gates, and quantum cost, respectively, compared to previous designs. Moreover, the best proposed design is generalized for n×n multipliers with general formulations to estimate the main reversible logic criteria as functions of the multiplier size.

Keywords: array multipliers, Baugh-Wooley method, error detection, parity-preserving gates, quantum computers, reversible logic

Procedia PDF Downloads 260
20605 A Method for Quantifying Arsenolipids in Sea Water by HPLC-High Resolution Mass Spectrometry

Authors: Muslim Khan, Kenneth B. Jensen, Kevin A. Francesconi

Abstract:

Trace amounts (ca. 1 µg/L, 13 nM) of arsenic are present in sea water, mostly as the oxyanion arsenate. In contrast, arsenic is present in marine biota (animals and algae) at very high levels (up to 100,000 µg/kg), a significant portion of which is present as lipid-soluble compounds collectively termed arsenolipids. The complex nature of sea water presents an analytical challenge to detect trace compounds and monitor their environmental path. We developed a simple method using liquid-liquid extraction combined with HPLC-high resolution mass spectrometry capable of detecting traces of arsenolipids, removing > 99% of the sample matrix while recovering > 80% of the six target arsenolipids, with a limit of detection of 0.003 µg/L.

Keywords: arsenolipids, sea water, HPLC-high resolution mass spectrometry

Procedia PDF Downloads 372
20604 Structure and Optical Properties of Potassium Doped Zinc Oxide

Authors: Lila A. Alkhattaby, Norah A. Alsayegh, Mohammad S. Ansari, Mohammad O. Ansari

Abstract:

In this work, we synthesized potassium (K)-doped zinc oxide (ZnO) using the sol-gel method. The structural properties were characterized by X-ray diffraction (XRD) and energy-dispersive spectroscopy. The X-ray diffraction studies confirm the nanosize of the particles, and the favored orientations along the (100), (002), (101), (102), (110), (103), (200), and (112) planes confirm the hexagonal wurtzite structure of the ZnO NPs. The optical properties were studied using UV-Vis spectroscopy. The band gap decreases from 4.05 eV to 3.88 eV, with the lowest band gap at 10% doping concentration. The photoluminescence (PL) spectroscopy results show two main peaks: a sharp peak at ≈ 384 nm in the UV region and a broad peak around 479 nm in the visible region. The highest intensity of the band-edge luminescence was for the 2% doping concentration because of the combined effect of a decreased probability of nonradiative recombination and better crystallinity.

Keywords: K doped ZnO, photoluminescence spectroscopy, UV-Vis spectroscopy, x-ray spectroscopy

Procedia PDF Downloads 240
20603 Development of Zinc Oxide Coated Carbon Nanoparticles from Pineapple Leaves Using Sol-Gel Method for Optimal Adsorption of Copper Ion and Reuse in Latent Fingerprint

Authors: Bienvenu Gael Fouda Mbanga, Zikhona Tywabi-Ngeva, Kriveshini Pillay

Abstract:

This work highlights a new method for preparing a nitrogen-doped carbon nanoparticle/zinc oxide nanoparticle nanocomposite (N-CNPs/ZnONPsNC) by the sol-gel method to remove copper ions (Cu²⁺) from wastewater and for applying the metal-loaded adsorbent in latent fingerprint applications. The N-CNPs/ZnONPsNC proved to be an effective sorbent, with optimum Cu²⁺ sorption at pH 8 and a 0.05 g dose. The Langmuir isotherm was found to best fit the process, with a maximum adsorption capacity of 285.71 mg/g, which is higher than most values found in other research on Cu²⁺ removal. Adsorption was spontaneous and endothermic at 25 °C. In addition, the Cu²⁺-loaded N-CNPs/ZnONPsNC was found to be sensitive and selective for latent fingerprint (LFP) recognition on a range of porous surfaces. As a result, it is an effective reagent for latent fingerprint detection in forensic research.
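
For reference, the Langmuir model fitted above has the standard form shown below, where q_e is the equilibrium uptake, C_e the equilibrium Cu²⁺ concentration, q_max the monolayer capacity (reported above as 285.71 mg/g) and K_L the Langmuir constant; the second expression is the usual linearized form used for fitting.

```latex
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e},
\qquad
\frac{C_e}{q_e} = \frac{1}{q_{\max} K_L} + \frac{C_e}{q_{\max}}
```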

Keywords: latent fingerprint, nanocomposite, adsorption, copper ions, metal loaded adsorption, adsorbent

Procedia PDF Downloads 85
20602 Multi-Criteria Evaluation of IDS Architectures in Cloud Computing

Authors: Elmahdi Khalil, Saad Enniari, Mostapha Zbakh

Abstract:

Cloud computing promises to increase innovation and the velocity with which applications are deployed, all while helping any enterprise meet most IT service needs at a lower total cost of ownership and a higher return on investment. As the march of the cloud continues, it brings both new opportunities and new security challenges. To take advantage of those opportunities while minimizing risks, we think that Intrusion Detection Systems (IDS) integrated into the cloud are one of the best existing solutions in the field nowadays. The concept of intrusion detection has been known for a long time; it was first proposed by Anderson in the 1980s, and IDSs have been evolving ever since. Although several efforts have been made in the area of intrusion detection systems for cloud computing environments, many attacks still prevail. Therefore, the work presented in this paper proposes a multi-criteria analysis and a comparative study between several IDS architectures designed to work in cloud computing environments. To achieve this objective, we first survey the state of the art of several consistent IDS architectures designed to work in a cloud environment. In a second step, we establish the criteria that will be useful for the evaluation of the architectures. Then, using the multi-criteria decision analysis approach MACBETH (Measuring Attractiveness by a Categorical Based Evaluation Technique), we evaluate the criteria and assign to each one the appropriate weight according to its importance in the field of IDS architectures in cloud computing. The last step is to evaluate the architectures against the criteria and to collect the results of the model constructed in the previous steps.

Keywords: cloud computing, cloud security, intrusion detection/prevention system, multi-criteria decision analysis

Procedia PDF Downloads 474
20601 “BUM629” Special Hybrid Reinforcement Materials for Mega Structures

Authors: Gautam, Arjun, V. R. Sharma

Abstract:

In civil construction, steel and concrete play different roles but serve the same purpose in the design of structures that support or resist loads. Concrete has been used in construction for a long time. Being brittle and weak in tension, concrete is always reinforced with steel bars for use in beams, columns, etc. The paper deals with the idea of specially designed 3D materials, which we named "BUM629", to be placed/anchored in the structural member and later embedded in concrete, so as to resist the development of cracks due to shear failure, buckling, tension and compressive loads in concrete. Cutting-edge technology was used to draft, analyse and design "BUM629". The results show that "BUM629" performs very well in mechanical applications. Its material properties are designed according to the needs of the structure; materials such as mild steel and magnesium alloy are applied. "BUM629" is divided into two parts: one is applied at the middle section of the concrete member, where bending moments are maximum, and the second part lies parallel to the vertical bars near the clear cover. The material is therefore designed and applied as reinforcement for civil structures. "BUM629" is analysed and designed for use in mega structures like skyscrapers, dams and bridges.

Keywords: BUM629, magnesium alloy, cutting edge technology, mechanical application, draft, analysis and design, mega structures

Procedia PDF Downloads 384
20600 Oxalate Method for Assessing the Electrochemical Surface Area for Ni-Based Nanoelectrodes Used in Formaldehyde Sensing Applications

Authors: S. Trafela, X. Xua, K. Zuzek Rozmana

Abstract:

In this study, we used an accurate and precise method to measure the electrochemically active surface areas (Aecsa) of nickel electrodes. The calculated Aecsa is important for the evaluation of an electrocatalyst's activity in the electrochemical reactions of different organic compounds. The method involves the electrochemical formation of Ni(OH)₂ and NiOOH in the presence of adsorbed oxalate in alkaline media. The studies were carried out using cyclic voltammetry with polycrystalline nickel as a reference material and with electrodeposited nickel nanowires and homogeneous and heterogeneous nickel films. From the cyclic voltammograms, the charge (Q) values for the formation of the Ni(OH)₂ and NiOOH surface oxides were calculated under various conditions. At sufficiently fast potential scan rates (200 mV s⁻¹), the adsorbed oxalate limits the growth of the surface hydroxides to a monolayer. Although the Ni(OH)₂/NiOOH oxidation peak overlaps with the oxygen evolution reaction, in the reverse scan the NiOOH/Ni(OH)₂ reduction peak is well separated from other electrochemical processes and can be easily integrated. The values of these integrals were used to correlate the experimentally measured charge density with the electrochemically active surface layer. The Aecsa of the nickel nanowires and the homogeneous and heterogeneous nickel films were calculated to be Aecsa-NiNWs = 4.2066 ± 0.0472 cm², Aecsa-homNi = 1.7175 ± 0.0503 cm² and Aecsa-hetNi = 2.1862 ± 0.0154 cm². These valuable results were expanded and used in electrochemical studies of formaldehyde oxidation. As mentioned, the nickel nanowires and the heterogeneous and homogeneous nickel films were used as simple and efficient sensors for formaldehyde detection. For this purpose, the electrodeposited nickel electrodes were modified in a 0.1 mol L⁻¹ solution of KOH in order to obtain electrochemical activity towards formaldehyde. The electrochemical behavior of formaldehyde oxidation in 0.1 mol L⁻¹ NaOH solution at the surface of the modified nickel nanowires and homogeneous and heterogeneous nickel films was investigated by means of electrochemical techniques such as cyclic voltammetry and chronoamperometry. From investigations of the effect of different formaldehyde concentrations (from 0.001 to 0.1 mol L⁻¹) on the electrochemical signal (current), we derived the catalytic mechanism of formaldehyde oxidation, the detection limit and the sensitivity of the nickel electrodes. The results indicated that the nickel electrodes participate directly in the electrocatalytic oxidation of formaldehyde. In the overall reaction, formaldehyde in alkaline aqueous solution exists predominantly in the form of CH₂(OH)O⁻, which is oxidized to CH₂(O)O⁻. Taking into account the determined Aecsa values, we have been able to calculate the sensitivities: 7 mA mol L⁻¹ cm⁻² for the nickel nanowires, 3.5 mA mol L⁻¹ cm⁻² for the heterogeneous nickel film and 2 mA mol L⁻¹ cm⁻² for the homogeneous nickel film. The detection limit was 0.2 mM for the nickel nanowires, 0.5 mM for the porous Ni film and 0.8 mM for the homogeneous Ni film. All of these results make nickel electrodes suitable for further applications.
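
A minimal sketch of the charge-integration step is shown below (Python/NumPy). The reduction peak of the cyclic voltammogram is integrated to obtain the charge Q, which is then divided by a reference monolayer charge density; the default q_ref value here is an assumed placeholder, not a constant taken from the paper.

```python
import numpy as np

def ecsa_from_cv(potential_V, current_A, scan_rate_V_s, q_ref_C_cm2=2.57e-4):
    """Integrate the (baseline-corrected) NiOOH -> Ni(OH)2 reduction peak of a
    cyclic voltammogram to obtain the charge Q, then divide by an assumed
    monolayer reference charge density q_ref to estimate the ECSA in cm^2."""
    # Q = (1 / scan rate) * integral of |i| dE over the reduction peak
    dE = np.abs(np.diff(potential_V))
    i_mid = 0.5 * (np.abs(current_A[1:]) + np.abs(current_A[:-1]))
    charge_C = np.sum(i_mid * dE) / scan_rate_V_s
    return charge_C / q_ref_C_cm2
```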

Keywords: electrochemically active surface areas, nickel electrodes, formaldehyde, electrocatalytic oxidation

Procedia PDF Downloads 162
20599 An Entropy Based Novel Algorithm for Internal Attack Detection in Wireless Sensor Network

Authors: Muhammad R. Ahmed, Mohammed Aseeri

Abstract:

Wireless Sensor Networks (WSNs) consist of low-cost, multifunctional, resource-constrained nodes that communicate over short distances through wireless links. A WSN is an open medium underpinned by application-driven technology for information gathering and processing, and it can be used for many different applications, ranging from military implementation on the battlefield and environmental monitoring to the health sector and emergency response and surveillance. Given its nature and application scenarios, the security of WSNs has drawn great attention. WSNs are known to be vulnerable to a variety of attacks because of the construction of their nodes and their distributed network infrastructure. In order to ensure their functionality, especially in malicious environments, security mechanisms are essential. Malicious or internal attackers have gained prominence and pose the most challenging attacks to WSNs. Many works have been done to secure WSNs from internal attacks, but most of them rely on either a training data set or a predefined threshold. Finding internal attacks without a fixed security infrastructure is a challenge for a WSN. In this paper, we present an internal attack detection method based on a maximum entropy model. The final experimental work showed that the proposed algorithm works well at the designed level.
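
As an illustration of the general idea (not the authors' exact maximum entropy model), the sketch below computes the Shannon entropy of a discretized traffic feature over successive windows and flags windows whose entropy deviates strongly from the overall baseline; window size, bin count and threshold are assumptions.

```python
import numpy as np

def shannon_entropy(values, bins=16):
    """Shannon entropy (bits) of a discretized feature sample."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def flag_anomalous_windows(feature, window=100, z_thresh=3.0):
    """Flag successive windows whose entropy deviates from the baseline."""
    ent = np.array([shannon_entropy(feature[i:i + window])
                    for i in range(0, len(feature) - window + 1, window)])
    z = (ent - ent.mean()) / (ent.std() + 1e-12)
    return np.where(np.abs(z) > z_thresh)[0]       # indices of suspect windows
```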

Keywords: internal attack, wireless sensor network, network security, entropy

Procedia PDF Downloads 456
20598 Using Electro-Biogrouting to Stabilize of Soft Soil

Authors: Hamed A. Keykha, Hadi Miri

Abstract:

This paper describes a new method of soil stabilisation, electro-biogrouting (EBM), for the improvement of soft soil with low hydraulic conductivity. This method uses an applied voltage gradient across the soil to drive ions and bacterial cells through the soil matrix, resulting in CaCO₃ precipitation and an increase of the soil shear strength in the process. EBM was used effectively with two injection methods: bacteria injection and injection of the products of bacteria. The bacterial cells, calcium ions and urea were moved across the soil by electromigration and electroosmotic flow. The products of bacteria (CO₃²⁻) were moved by electromigration. The results showed that the undrained shear strength of the soil increased from 6 to 65 and 70 kPa for the first and second injection methods, respectively. The injected carbonate solution and calcium could flow effectively in the clay soil compared to the injected bacterial cells. The detection of the CaCO₃ percentage and its corresponding water content across the specimen showed that the increase in undrained shear strength relates to the deposition of calcite crystals between soil particles.

Keywords: Sporosarcina pasteurii, electrophoresis, electromigration, electroosmosis, biocement

Procedia PDF Downloads 529
20597 Electrochemical APEX for Genotyping MYH7 Gene: A Low Cost Strategy for Minisequencing of Disease Causing Mutations

Authors: Ahmed M. Debela, Mayreli Ortiz , Ciara K. O´Sullivan

Abstract:

The completion of the Human Genome Project (HGP) has paved the way for mapping the diversity in the overall genome sequence, which helps to understand the genetic causes of inherited diseases and susceptibility to drugs or environmental toxins. Arrayed primer extension (APEX) is a microarray-based minisequencing strategy for screening disease-causing mutations. It is derived from Sanger DNA sequencing and uses fluorescently labelled dideoxynucleotides (ddNTPs) for the termination of a growing DNA strand from a primer whose 3′-end is designed immediately upstream of a site where a single nucleotide polymorphism (SNP) occurs. The use of DNA polymerase gives APEX very high accuracy and specificity, which in turn makes it a method of choice for multiplex SNP detection. Coupling the high specificity of this method with the high sensitivity, low cost and compatibility with miniaturization of electrochemical techniques would offer an excellent platform for the detection of mutations as well as the sequencing of DNA templates. We are developing an electrochemical APEX for the analysis of SNPs found in the MYH7 gene in a group of cardiomyopathy patients. The ddNTPs were labelled with four different redox-active compounds with four distinct potentials. Thiolated oligonucleotide probes were immobilised on gold and glassy carbon substrates, followed by hybridisation with complementary target DNA just adjacent to the base to be extended by the polymerase. Electrochemical interrogation was performed after the incorporation of the redox-labelled dideoxynucleotide. The work involved the synthesis and characterisation of the redox-labelled ddNTPs, the optimisation and characterisation of surface functionalisation strategies and the nucleotide incorporation assays.

Keywords: array based primer extension, labelled ddNTPs, electrochemical, mutations

Procedia PDF Downloads 246
20596 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing

Authors: Tolulope Aremu

Abstract:

The key process steps used to produce liquid detergent products, such as formulation, mixing, filling, and packaging, can introduce defects that might compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are very time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing defect characterization within the liquid detergent manufacturing process using machine learning algorithms. Performance testing of various machine learning models (Support Vector Machines, Decision Trees, Random Forests, and Convolutional Neural Networks) was carried out on the detection and classification of defects such as wrong viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, which greatly improve detection accuracy while minimizing false positives. Equipped with a rich dataset of defect types and production parameters consisting of more than 100,000 samples, our study further includes information from real-time sensor data, imaging technologies, and historic production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, the CNNs reach 98% and 96% accuracy in detecting packaging anomalies and bottle filling inconsistencies, respectively, after fine-tuning the model with real-time imaging data, with a reduction in false positives of about 30%. The optimized SVM model for formulation defects gave 94% accuracy in detecting viscosity and color variations. These performance metrics correspond to a giant leap in defect detection accuracy compared to the roughly 80% level achieved up to now by rule-based systems. Moreover, the optimized models can hasten defect characterization, reducing detection time from an average of 3 minutes with manual inspection to below 15 seconds with real-time data processing. This time reduction is combined with a 25% reduction in production downtime because of proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine-learning-driven monitoring drives predictive maintenance and corrective measures for a 20% improvement in overall production efficiency. Therefore, the optimization of machine learning algorithms for defect characterization offers liquid detergent companies scalability and efficiency, improved operational performance and higher levels of product quality. In general, this method could be applied in several sectors of the fast-moving consumer goods industry, which would lead to an improved quality control process.
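
A condensed sketch of one of the optimization steps mentioned above, hyperparameter tuning of an SVM for formulation-defect classification with scikit-learn, is given below; the synthetic data stands in for the sensor and production records, and the parameter grid is illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# stand-in for formulation features (viscosity, colour readings, ...) and labels
X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# grid search over SVM hyperparameters, scored by F1 to limit false positives
pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10, 100],
                           "svc__gamma": ["scale", 0.01, 0.001]},
                    cv=5, scoring="f1")
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```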

Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods

Procedia PDF Downloads 21
20595 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms

Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee

Abstract:

Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites to prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by means of detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting the changes in static or dynamic behavior of isotropic structures has been developed in the last two decades. These methods, based on analytical approaches, are limited in their capabilities in dealing with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristics techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA) methods, and neural networks (NN), and have promisingly applied these methods to the field of structural identification. Among them, GAs attract our attention because they do not require a considerable amount of data in advance in dealing with complex problems and can make a global solution search possible as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of Glass Fiber-reinforced Polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. The finite element model is used to study free vibrations of laminated composite plates for fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only first mode shapes in a structure for the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
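
The inverse-identification loop can be summarized by the sketch below: a genetic algorithm proposes candidate damage parameters of the bivariate Gaussian stiffness-degradation model, a finite element analysis (ABAQUS in the paper, here a toy surrogate function) returns the first-mode data, and the fitness is the mismatch with the measured response. All names, the surrogate and the GA settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fe_first_mode(damage_params):
    """Toy surrogate for the ABAQUS free-vibration analysis: returns a fake
    first natural frequency and mode shape for candidate bivariate Gaussian
    damage parameters (centre x, centre y, severity)."""
    cx, cy, sev = damage_params
    freq = 120.0 * (1.0 - 0.3 * sev)                              # Hz, made up
    shape = np.sin(np.linspace(0, np.pi, 10)) * (1 - 0.2 * sev * cx * cy)
    return freq, shape

def fitness(params, meas_freq, meas_shape):
    freq, shape = fe_first_mode(params)
    return (freq - meas_freq) ** 2 + np.sum((shape - meas_shape) ** 2)

def genetic_search(meas_freq, meas_shape, pop=40, gens=60, lo=0.0, hi=1.0):
    """Simple GA: rank selection, averaging crossover, Gaussian mutation."""
    population = rng.uniform(lo, hi, size=(pop, 3))
    for _ in range(gens):
        scores = np.array([fitness(p, meas_freq, meas_shape) for p in population])
        parents = population[np.argsort(scores)[:pop // 2]]       # selection
        n_child = pop - len(parents)
        children = (parents[rng.integers(0, len(parents), n_child)] +
                    parents[rng.integers(0, len(parents), n_child)]) / 2.0
        children += rng.normal(0.0, 0.05, children.shape)         # mutation
        population = np.vstack([parents, np.clip(children, lo, hi)])
    scores = np.array([fitness(p, meas_freq, meas_shape) for p in population])
    return population[scores.argmin()]            # best damage parameters

# usage sketch: f, m = fe_first_mode([0.5, 0.5, 0.3]); best = genetic_search(f, m)
```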

Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences

Procedia PDF Downloads 277
20594 Neuron Imaging in Lateral Geniculate Nucleus

Authors: Sandy Bao, Yankang Bao

Abstract:

The understanding of the information that is being processed in the brain, especially in the lateral geniculate nucleus (LGN), has proven challenging for modern neuroscience and for researchers focused on how neurons process signals and images. In this paper, we propose a method to process the different colors within different layers of the LGN, that is, green information in layers 4 and 6 and red and blue information in layers 3 and 5, based on the surface dimension of the layers. We take into consideration the images in the LGN and the visual cortex: the edge-detected information from the visual cortex is returned to the layers of the LGN and combined with the image in the LGN to form a new image, which provides an improved image that is clearer and sharper, making it easier to identify objects in the image. A Matrix Laboratory (MATLAB) simulation is performed, and the results show that the clarity of the output image improves significantly.

Keywords: lateral geniculate nucleus, matrix laboratory, neuroscience, visual cortex

Procedia PDF Downloads 280
20593 Detection of Intravenous Infiltration Using Impedance Parameters in Patients in a Long-Term Care Hospital

Authors: Ihn Sook Jeong, Eun Joo Lee, Jae Hyung Kim, Gun Ho Kim, Young Jun Hwang

Abstract:

This study investigated intravenous (IV) infiltration using bioelectrical impedance in 27 hospitalized patients in a long-term care hospital. The impedance parameters showed significant differences before and after infiltration, as follows. First, the resistance (R) after infiltration significantly decreased compared to the initial resistance. This indicates that the IV solution flowing from the vein due to infiltration accumulates in the extracellular fluid (ECF). Second, the relative resistance at 50 kHz was 0.94 ± 0.07 in the 9 subjects without infiltration and 0.75 ± 0.12 in the 18 subjects with infiltration. Third, the magnitude of the reactance (Xc) decreased after infiltration. This is because the IV solution and blood components released from the vein tend to aggregate at the cell membrane (which acts analogously to a parallel circuit), thereby increasing the capacitance (Cm) of the cell membrane and reducing the magnitude of the reactance. Finally, the data points plotted in the R-Xc graph were distributed in the upper right before infiltration but in the lower left after infiltration. This indicates that the infiltration caused accumulation of fluid or blood components in the epidermal and subcutaneous tissues, resulting in reduced resistance and reactance, thereby lowering the integrity of the cell membrane. Our findings suggest that bioelectrical impedance is an effective method for the detection of infiltration in a noninvasive and quantitative manner.

Keywords: intravenous infiltration, impedance, parameters, resistance, reactance

Procedia PDF Downloads 183
20592 Financial Statement Fraud: The Need for a Paradigm Shift to Forensic Accounting

Authors: Ifedapo Francis Awolowo

Abstract:

The unrelenting series of embarrassing audit failures should stimulate a paradigm shift in accounting. In this age of information revolution, there is a need for constant improvement of the products or services one offers to the market in order to remain relevant. This study explores the perceptions of external auditors, forensic accountants and accounting academics on whether a paradigm shift to forensic accounting can reduce financial statement fraud. Through a neo-empiricist/inductive analytical approach, the findings reveal that a paradigm shift to forensic accounting might be the right step in the right direction in order to increase the chances of fraud prevention and detection in financial statements. This research has implications for accounting education and the need to incorporate forensic accounting into present-day accounting curricula. Accounting professional bodies, accounting standard setters and accounting firms all have roles to play in incorporating forensic accounting education into the accounting curriculum. In particular, there is a need to alter ISA 240 to make the prevention and detection of fraud the responsibility of both those charged with the management and governance of companies and the statutory auditors.

Keywords: financial statement fraud, forensic accounting, fraud prevention and detection, auditing, audit expectation gap, corporate governance

Procedia PDF Downloads 368
20591 Mesoporous Carbon Ceramic SiO2/C Prepared by Sol-Gel Method and Modified with Cobalt Phthalocyanine and Used as an Electrochemical Sensor for Nitrite

Authors: Abdur Rahim, Lauro Tatsuo Kubota, Yoshitaka Gushikem

Abstract:

Carbon ceramic mesoporous SiO2/50wt%C (SBET = 170 m²g⁻¹), where C is graphite, was prepared by the sol-gel method. Scanning electron microscopy images and the respective element mapping showed that, within the magnification used, no phase segregation was detectable. The material presented an electric conductivity of 0.49 S cm⁻¹. It was used to support cobalt phthalocyanine, prepared in situ, to assure a homogeneous dispersion of the electroactive complex in the pores of the matrix. The surface density of cobalt phthalocyanine on the matrix surfaces was 0.015 mol cm⁻². A pressed disk made with SiO2/50wt%C/CoPc was used to fabricate an electrode, which was tested as a sensor for nitrite determination by electrochemical techniques. A linear response range between 0.039 and 0.42 mmol L⁻¹ and a correlation coefficient r = 0.9996 were obtained. The electrode was chemically very stable and presented very high sensitivity for this analyte, with a limit of detection LOD = 1.087 x 10⁻⁶ mol L⁻¹.

Keywords: SiO2/C/CoPc, sol-gel method, electrochemical sensor, nitrite oxidation, carbon ceramic material, cobalt phthalocyanine

Procedia PDF Downloads 317
20590 Rapid and Sensitive Detection: Biosensors as an Innovative Analytical Tools

Authors: Sylwia Baluta, Joanna Cabaj, Karol Malecha

Abstract:

The evolution of biosensors has been driven by the need for faster and more versatile analytical methods, with minimal sample pretreatment, for application in important areas including clinical diagnostics, food analysis and environmental monitoring. Rapid and sensitive detection of neurotransmitters is extremely important in modern medicine. These compounds mainly occur in the brain and central nervous system of mammals. Any change in neurotransmitter concentrations may lead to many diseases, such as Parkinson's disease or schizophrenia. Classical techniques of chemical analysis, despite many advantages, do not permit immediate results or the automation of measurements.

Keywords: adrenaline, biosensor, dopamine, laccase, tyrosinase

Procedia PDF Downloads 143
20589 Multiscale Connected Component Labelling and Applications to Scientific Microscopy Image Processing

Authors: Yayun Hsu, Henry Horng-Shing Lu

Abstract:

In this paper, a new method is proposed for extending connected component labeling from the processing of binary images to multi-scale modeling of images. By using adaptive thresholds based on multi-scale attributes, this approach minimizes the possibility of missing important components with weak intensities. In addition, the computational cost of this approach remains similar to that of the typical approach to component labeling. The methodology is then applied to grain boundary detection and Drosophila Brainbow neuron segmentation. These applications demonstrate the feasibility of the proposed approach in the analysis of challenging microscopy images for scientific discovery.
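
A minimal sketch of the underlying idea, labelling connected components at several adaptive thresholds so that weak but persistent components are not lost to a single global threshold, is given below using scipy.ndimage; the threshold schedule and size filter are simplifications of the multi-scale attribute scheme described above.

```python
import numpy as np
from scipy import ndimage

def multiscale_components(image, n_scales=4, min_size=20):
    """Label connected components at several adaptive thresholds so that weak
    but persistent components are not missed by one global threshold."""
    labels_per_scale = []
    for k in range(1, n_scales + 1):
        thresh = image.mean() + (k - n_scales / 2.0) * image.std() / n_scales
        binary = image > thresh
        labels, n = ndimage.label(binary)
        # drop tiny components that are likely noise
        sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
        keep = np.isin(labels, 1 + np.where(sizes >= min_size)[0])
        labels_per_scale.append(ndimage.label(keep)[0])
    return labels_per_scale
```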

Keywords: microscopic image processing, scientific data mining, multi-scale modeling, data mining

Procedia PDF Downloads 435
20588 Detecting Indigenous Languages: A System for Maya Text Profiling and Machine Learning Classification Techniques

Authors: Alejandro Molina-Villegas, Silvia Fernández-Sabido, Eduardo Mendoza-Vargas, Fátima Miranda-Pestaña

Abstract:

The automatic detection of indigenous languages in digital texts is essential to promote their inclusion in digital media. Underrepresented languages, such as Maya, are often excluded from language detection tools like Google’s language-detection library, LANGDETECT. This study addresses these limitations by developing a hybrid language detection solution that accurately distinguishes Maya (YUA) from Spanish (ES). Two strategies are employed: the first focuses on creating a profile for the Maya language within the LANGDETECT library, while the second involves training a Naive Bayes classification model with two categories, YUA and ES. The process includes comprehensive data preprocessing steps, such as cleaning, normalization, tokenization, and n-gram counting, applied to text samples collected from various sources, including articles from La Jornada Maya, a major newspaper in Mexico and the only media outlet that includes a Maya section. After the training phase, a portion of the data is used to create the YUA profile within LANGDETECT, which achieves an accuracy rate above 95% in identifying the Maya language during testing. Additionally, the Naive Bayes classifier, trained and tested on the same database, achieves an accuracy close to 98% in distinguishing between Maya and Spanish, with further validation through F1 score, recall, and logarithmic scoring, without signs of overfitting. This strategy, which combines the LANGDETECT profile with a Naive Bayes model, highlights an adaptable framework that can be extended to other underrepresented languages in future research. This fills a gap in Natural Language Processing and supports the preservation and revitalization of these languages.
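
A compact sketch of the Naive Bayes branch of the system, character n-gram counts feeding a multinomial model to separate YUA from ES, is shown below with scikit-learn; the tiny inline corpus and the char_wb n-gram setting are purely illustrative, the real training data coming from sources such as La Jornada Maya.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# toy examples only; the real corpus is preprocessed news text in YUA and ES
texts = ["Bix a beel", "Ma'alob k'iin", "Buenos días", "¿Cómo estás?"]
labels = ["YUA", "YUA", "ES", "ES"]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4), lowercase=True),
    MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["Ma'alob áak'ab", "Buenas noches"]))   # expected: YUA, ES
```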

Keywords: indigenous languages, language detection, Maya language, Naive Bayes classifier, natural language processing, low-resource languages

Procedia PDF Downloads 18
20587 Automatic Segmentation of the Clean Speech Signal

Authors: M. A. Ben Messaoud, A. Bouzid, N. Ellouze

Abstract:

Speech segmentation is the detection of change points for partitioning an input speech signal into regions, each of which corresponds to only one speaker. In this paper, we apply two features based on the multi-scale product (MP) of the clean speech, namely the spectral centroid of the MP and the zero-crossing rate of the MP. We focus on multi-scale product analysis as an important tool for segmentation extraction. The multi-scale product is based on making the product of the speech wavelet transform coefficients at three successive dyadic scales. We have evaluated our method on the Keele database. Experimental results show the effectiveness of our method, which presents good performance. They show that the two simple features can find word boundaries and extract the segments of the clean speech.
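
A short sketch of the multi-scale product computation is given below using the undecimated (stationary) wavelet transform from PyWavelets, so that the detail coefficients at three successive dyadic scales stay time-aligned before being multiplied; the db2 wavelet is an assumed choice, not necessarily the authors'. The spectral centroid and zero-crossing rate would then be computed frame by frame on the resulting MP signal.

```python
import numpy as np
import pywt

def multiscale_product(signal, wavelet="db2", levels=3):
    """Product of the stationary wavelet detail coefficients at three
    successive dyadic scales; the SWT keeps every scale time-aligned."""
    pad = (-len(signal)) % (2 ** levels)             # SWT needs len % 2^L == 0
    x = np.pad(signal, (0, pad))
    coeffs = pywt.swt(x, wavelet, level=levels)      # [(cA, cD), ...] per level
    details = [cD for _, cD in coeffs]
    mp = np.prod(details, axis=0)
    return mp[:len(signal)]
```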

Keywords: multiscale product, spectral centroid, speech segmentation, zero crossings rate

Procedia PDF Downloads 501
20586 The Impact of Recurring Events in Fake News Detection

Authors: Ali Raza, Shafiq Ur Rehman Khan, Raja Sher Afgun Usmani, Asif Raza, Basit Umair

Abstract:

Detection of fake news and missing information is gaining popularity, especially after the advancement of social media and online news platforms. Social media platforms are the main and speediest source of fake news propagation, whereas online news websites contribute to fake news dissemination. In this study, we propose a framework to detect fake news using the temporal features of text and consider user feedback to identify whether the news is fake or not. In recent studies, the temporal features of text documents have gained valuable consideration in Natural Language Processing, together with user feedback, but only to classify the textual data as fake or true. This research article indicates the impact of recurring and non-recurring events on fake and true news. We use two models, BERT and Bi-LSTM, to investigate; BERT gives better results, and 70% of the true news items are recurring while the remaining 30% are non-recurring.
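
A brief sketch of fine-tuning a BERT-style classifier for the fake/true decision is shown below using the Hugging Face transformers and datasets libraries; the model name, the two-row toy dataset and the training settings are illustrative assumptions, not the authors' configuration.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# toy stand-in for the labelled news corpus (0 = true, 1 = fake)
data = Dataset.from_dict({"text": ["Annual budget announced, as every year ...",
                                   "Shocking miracle cure discovered ..."],
                          "label": [0, 1]})

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
data = data.map(lambda b: tok(b["text"], truncation=True, padding="max_length",
                              max_length=128), batched=True)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=2)
trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="out", num_train_epochs=1,
                                         per_device_train_batch_size=8),
                  train_dataset=data)
trainer.train()
```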

Keywords: natural language processing, fake news detection, machine learning, Bi-LSTM

Procedia PDF Downloads 25