Search results for: method detection limit
21100 Object-Based Image Analysis for Gully-Affected Area Detection in the Hilly Loess Plateau Region of China Using Unmanned Aerial Vehicle
Authors: Hu Ding, Kai Liu, Guoan Tang
Abstract:
The Chinese Loess Plateau suffers from serious gully erosion induced by natural and human causes. Detecting gully features, including the gully-affected area and its two-dimensional parameters (length, width, area, etc.), is a significant task not only for researchers but also for policy-makers. This study addresses gully-affected area detection in three catchments of the Chinese Loess Plateau, located in Changwu, Ansai, and Suide, using an unmanned aerial vehicle (UAV). The methodology comprises a sequence of UAV data generation, image segmentation, feature calculation and selection, and random forest classification. Two experiments were conducted to investigate the influences of the segmentation strategy and of feature selection. Results showed that the vertical and horizontal root-mean-square errors were below 0.5 and 0.2 m, respectively, which is ideal for the Loess Plateau region. The segmentation strategy adopted in this paper, which incorporates topographic information, together with an optimal parameter combination, improves the segmentation results. Moreover, the overall extraction accuracies achieved in Changwu, Ansai, and Suide were 84.62%, 86.46%, and 93.06%, respectively, indicating that the proposed method for detecting the gully-affected area is more objective and effective than traditional methods. This study demonstrates that UAVs can bridge the gap between field measurement and satellite-based remote sensing, striking a balance between resolution and efficiency for catchment-scale gully erosion research.
Keywords: unmanned aerial vehicle (UAV), object-based image analysis, gully erosion, gully-affected area, Loess Plateau, random forest
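A minimal sketch of the final classification step described in this abstract, assuming per-object features (slope, texture, spectral statistics, etc.) have already been computed from the segmented image objects; the feature set, labels, and data below are placeholders, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' code): classifying segmented image objects
# as gully-affected / unaffected with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                        # placeholder per-object features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```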
Procedia PDF Downloads 218
21099 New Off-Line SPE-GC-MS/MS Method for Determination of Mineral Oil Saturated Hydrocarbons/Mineral Oil Hydrocarbons in Animal Feed, Foods, Infant Formula and Vegetable Oils
Authors: Ovanes Chakoyan
Abstract:
Mineral oil hydrocarbons (MOH), which consist of mineral oil saturated hydrocarbons (MOSH) and mineral oil aromatic hydrocarbons (MOAH), are present in various products such as vegetable oils, animal feed, foods, and infant formula. Contamination of foods with mineral oil hydrocarbons is of concern particularly because MOAH exhibit carcinogenic, mutagenic, and hormone-disruptive effects. Identifying toxic substances among the many thousands of compounds comprising mineral oils in food samples is a difficult analytical challenge. A method based on an off-line solid phase extraction approach coupled with gas chromatography-triple quadrupole mass spectrometry (GC-MS/MS) was developed for the determination of MOSH/MOAH in various products such as vegetable oils, animal feed, foods, and infant formula. A glass solid phase extraction cartridge was loaded with 7 g of activated silica gel impregnated with 10% silver nitrate for the removal of olefins and lipids. The MOSH and MOAH fractions were eluted with hexane and with hexane:dichloromethane:toluene, respectively. Each eluate was concentrated to 50 µl in toluene and injected in splitless mode into the GC-MS/MS. The accuracy of the method was estimated by measuring the recovery of oil samples spiked at 2.0, 15.0, and 30.0 mg kg⁻¹; recoveries varied from 85 to 105%. The method was applied to different types of samples (sunflower meal, chocolate chips, Santa milk chocolate, biscuits, infant milk, cornflakes, refined sunflower oil, crude sunflower oil), detecting MOSH up to 56 mg/kg and MOAH up to 5 mg/kg. The limit of quantification (LOQ) of the proposed method was estimated at 0.5 mg/kg and 0.3 mg/kg for MOSH and MOAH, respectively.
Keywords: MOSH, MOAH, GC-MS/MS, foods, solid phase extraction
Procedia PDF Downloads 91
21098 A Blockchain-Based Protection Strategy against Social Network Phishing
Authors: Francesco Buccafurri, Celeste Romolo
Abstract:
Nowadays, phishing is the most frequent starting point of cyber-attack vectors. Phishing is carried out both via email and via social network messages. While a wide scientific literature addresses the problem of countering email spam and phishing, no specific countermeasure has so far been proposed for phishing embedded in private messages on social network platforms. Unfortunately, the problem is severe. This paper proposes an approach against social network phishing based on a non-invasive, collaborative information-sharing scheme which leverages blockchain. The detection method works by filtering candidate messages, distilling them by means of a distance-preserving hash function, and publishing the hashes on a public blockchain through a trusted smart contract (thus avoiding denial-of-service attacks). Phishing detection exploits social information embedded in social network profiles to identify similar messages belonging to disjoint contexts. The main contribution of the paper is to introduce a new approach to the problem of social network phishing, which, despite its severity, has received little attention from both research and industry.
Keywords: phishing, social networks, information sharing, blockchain
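The abstract names a "distance-preserving hash function" but not a specific one; the sketch below uses SimHash purely as one plausible choice, and a plain dictionary stands in for hashes published via a smart contract.

```python
# Illustrative sketch only: SimHash as an example of a similarity-preserving hash;
# 'ledger' is a stand-in for hashes published on a blockchain via a smart contract.
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    v = [0] * bits
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

ledger = {}  # message id -> published hash

def publish(msg_id: str, text: str) -> None:
    ledger[msg_id] = simhash(text)

def similar_to_published(text: str, threshold: int = 10):
    h = simhash(text)
    return [mid for mid, ph in ledger.items() if hamming(h, ph) <= threshold]

publish("m1", "your account is locked click this link to verify now")
print(similar_to_published("account locked click link verify your details now"))
```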
Procedia PDF Downloads 330
21097 A Monolithic Arbitrary Lagrangian-Eulerian Finite Element Strategy for Partly Submerged Solid in Incompressible Fluid with Mortar Method for Modeling the Contact Surface
Authors: Suman Dutta, Manish Agrawal, C. S. Jog
Abstract:
Accurate computation of hydrodynamic forces on floating structures and their deformation finds application in ocean and naval engineering and in wave energy harvesting. This manuscript presents a monolithic finite element strategy for fluid-structure interaction involving hyper-elastic solids partly submerged in an incompressible fluid. A velocity-based Arbitrary Lagrangian-Eulerian (ALE) formulation has been used for the fluid, and a displacement-based Lagrangian approach has been used for the solid. The flexibility of the ALE technique permits us to treat the free surface of the fluid as a Lagrangian entity. At the interface, the continuity of displacement, velocity, and traction is enforced using the mortar method. In the mortar method, the constraints are enforced in a weak sense using the Lagrange multiplier method. In the literature, the mortar method has been shown to be robust in solving various contact mechanics problems. The time-stepping strategy used in this work reduces to the generalized trapezoidal rule in the Eulerian setting. In the Lagrangian limit, in the absence of external load, the algorithm conserves the linear and angular momentum and the total energy of the system. The use of monolithic coupling with an energy-conserving time-stepping strategy gives an unconditionally stable algorithm and allows the user to take large time steps. All the governing equations and boundary conditions have been mapped to the reference configuration. The use of the exact tangent stiffness matrix ensures that the algorithm converges quadratically within each time step. The robustness and good performance of the proposed method are demonstrated by solving benchmark problems from the literature.
Keywords: ALE, floating body, fluid-structure interaction, monolithic, mortar method
Procedia PDF Downloads 275
21096 Multivariate Statistical Process Monitoring of Base Metal Flotation Plant Using Dissimilarity Scale-Based Singular Spectrum Analysis
Authors: Syamala Krishnannair
Abstract:
A multivariate statistical process monitoring methodology using dissimilarity scale-based singular spectrum analysis (SSA) is proposed for the detection and diagnosis of process faults in the base metal flotation plant. Process faults are detected based on the multi-level decomposition of process signals by SSA using the dissimilarity structure of the process data and the subsequent monitoring of the multiscale signals using the unified monitoring index which combines T² with SPE. Contribution plots are used to identify the root causes of the process faults. The overall results indicated that the proposed technique outperformed the conventional multivariate techniques in the detection and diagnosis of the process faults in the flotation plant.
Keywords: fault detection, fault diagnosis, process monitoring, dissimilarity scale
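For readers unfamiliar with the two statistics combined in the unified index, the sketch below computes the standard PCA-based Hotelling's T² and SPE (Q) statistics on placeholder data; the paper's dissimilarity-scale SSA decomposition and the way it fuses the two indices are not reproduced here.

```python
# Minimal sketch of Hotelling's T^2 and SPE (Q) from a PCA model of normal-operation data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))             # placeholder training data (normal operation)
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 2                                     # retained principal components
P = Vt[:k].T                              # loadings
lam = (s[:k] ** 2) / (X.shape[0] - 1)     # component variances

def t2_spe(x):
    xc = x - mean
    t = xc @ P                            # scores
    t2 = np.sum(t ** 2 / lam)             # Hotelling's T^2
    resid = xc - t @ P.T
    spe = resid @ resid                   # squared prediction error (Q)
    return t2, spe

print(t2_spe(rng.normal(size=5)))
```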
Procedia PDF Downloads 209
21095 A Comprehensive Characterization of Cell-free RNA in Spent Blastocyst Medium and Quality Prediction for Blastocyst
Authors: Huajuan Shi
Abstract:
Background: Biopsy of the preimplantation embryo may increase the potential risk to, and concern about, embryo viability. Clinically discarded spent embryo medium (SEM) has therefore attracted the attention of researchers, sparking interest in noninvasive embryo screening. However, one of the major restrictions is the extremely low quantity of cf-RNA, which is difficult to amplify efficiently and without bias using traditional methods. Hence, there is an urgent need for an efficient, low-bias amplification method that can comprehensively and accurately capture cf-RNA information and thereby reveal the true state of SEM cf-RNA. Result: In this study, we established an agarose PCR amplification system, which improved the amplification sensitivity and efficiency by ~90-fold and 9.29%, respectively. We applied agarose to sequencing library preparation (named AG-seq) to quantify and characterize cf-RNA in SEM. The number of detected cf-RNAs (3533 vs 598) and the coverage of the 3' end were significantly increased, and the noise in low-abundance gene detection was reduced. An increasing percentage of 5' end adenine and of alternative splicing (AS) events in short fragments (< 400 bp) was discovered by AG-seq. Further, the profiles and characteristics of cf-RNA in spent cleavage medium (SCM) and spent blastocyst medium (SBM) indicated that 4-mer end motifs of cf-RNA fragments could remarkably differentiate different embryo development stages. Significance: This study established an efficient and low-cost SEM amplification and library preparation method. Moreover, we successfully described the characteristics of SEM cf-RNA of the preimplantation embryo using AG-seq, including abundance features and fragment lengths. AG-seq facilitates the study of cf-RNA as a noninvasive embryo screening biomarker and opens up potential clinical utilities of trace samples.
Keywords: cell-free RNA, agarose, spent embryo medium, RNA sequencing, non-invasive detection
Procedia PDF Downloads 64
21094 Applicability of Fuzzy Logic for Intrusion Detection in Mobile Adhoc Networks
Authors: Ruchi Makani, B. V. R. Reddy
Abstract:
Mobile Adhoc Networks (MANETs) are gaining popularity due to their potential for providing low-cost mobile connectivity solutions to real-world communication problems. Integrating Intrusion Detection Systems (IDS) in MANETs is a tedious task because of their distinctive features, such as dynamic topology, decentralized authority, and a highly constrained, resource-limited environment. IDS primarily use automated soft-computing techniques to monitor the inflow/outflow of traffic packets in a given network to detect intrusion. The use of machine learning techniques in IDS enables the system to make decisions on intrusion while continuously learning about its dynamic environment. Selecting an appropriate IDS model is essential to meet these application challenges. Thus, this paper focuses on fuzzy-logic-based machine learning IDS techniques for MANETs and presents their applicability for achieving effectiveness in identifying intrusions. Further, the selection of appropriate protocol attributes and the generation of fuzzy rules, both of which play a significant role in the accuracy of a fuzzy-logic-based IDS, are discussed. This paper also presents the critical attributes of a MANET routing protocol and their applicability in fuzzy-logic-based IDS.
Keywords: AODV, mobile adhoc networks, intrusion detection, anomaly detection, fuzzy logic, fuzzy membership function, fuzzy inference system
Procedia PDF Downloads 179
21093 Determination of Vinpocetine in Tablets with the Vinpocetine-Selective Electrode and Possibilities of Application in Pharmaceutical Analysis
Authors: Faisal A. Salih
Abstract:
Vinpocetine (Vin) is an ethyl ester of apovincamic acid and a semisynthetic derivative of vincamine, an alkaloid from the periwinkle plant Vinca minor. It was found that this compound stimulates cerebral metabolism: it increases the uptake of glucose and oxygen, as well as the consumption of these substances by brain tissue. Vinpocetine enhances blood flow in the brain and has vasodilating, antihypertensive, and antiplatelet effects. Vinpocetine seems to improve the human ability to acquire new memories and restore memories that have been lost. This drug has been clinically used for the treatment of cerebrovascular disorders such as stroke and dementia memory disorders, as well as in ophthalmology and otorhinolaryngology. It has no side effects, and no toxicity has been reported when vinpocetine is used for a long time. For the quantitative determination of Vin in dosage forms, HPLC methods are generally used. A promising alternative is potentiometry with a Vin-selective electrode, which does not require expensive equipment and materials. Another advantage of the potentiometric method is that tablets and injection solutions can be used directly without separation from matrix components, which reduces both analysis time and cost. In this study, it was found that, with a good choice of plasticizer, an electrode with the following membrane composition: PVC (32.8 wt.%), ortho-nitrophenyl octyl ether (66.6 wt.%), tetrakis-4-chlorophenyl borate (0.6 wt.%) exhibits excellent analytical performance: a lower detection limit (LDL) of 1.2•10⁻⁷ M, a linear response range (LRR) of 1∙10⁻³–3.9∙10⁻⁶ M, and a slope of the electrode function of 56.2±0.2 mV/decade. The Vin masses per average tablet weight, determined by direct potentiometry (DP) and potentiometric titration (PT) for two different sets of 10 tablets, were (100.35±0.2–100.36±0.1) mg for the two sets of blister packs. The Vin mass in individual tablets, determined using DP, was (9.87±0.02–10.16±0.02) mg, with an RSD of 0.13–0.35%. The procedure has very good reproducibility, and excellent compliance with the declared amounts was observed.
Keywords: vinpocetine, potentiometry, ion selective electrode, pharmaceutical analysis
Procedia PDF Downloads 77
21092 Developing Ergonomic Prototype Testing Method for Manual Material Handling
Authors: Yusuf Nugroho Doyo Yekti, Budi Praptono, Fransiskus Tatas Dwi Atmaji
Abstract:
There is no ergonomic prototype testing method for manual material handling yet. This study has been carried out to demonstrate a comprehensive ergonomic assessment. The ergonomic assessment is important to improve the safety of products and to ensure their usefulness. The prototype testing is conducted by involving a few intended users and ordinary people. In this study, four operators participated in several tests. In addition, 30 ordinary people joined the usability test; none of them had ever performed material handling activities or used a material handling device. The methods used in the tests are Rapid Entire Body Assessment (REBA), Recommended Weight Limit (RWL), and Cardiovascular Load (%CVL), in addition to a usability test and a questionnaire. The proposed testing methods cover comprehensive ergonomic aspects, i.e., the physical, mental, and emotional aspects of the human operator.
Keywords: ergonomic, manual material handling, prototype testing, assessment
Procedia PDF Downloads 518
21091 Damage Detection in a Cantilever Beam under Different Excitation and Temperature Conditions
Authors: A. Kyprianou, A. Tjirkallis
Abstract:
Condition monitoring of structures in service is very important, as it provides information about the risk of damage development. One of the essential constituents of structural condition monitoring is the damage detection methodology. In the context of condition monitoring of in-service structures, a damage detection methodology analyses data obtained from the structure while it is in operation. Usually, this means that the data could be affected by operational and environmental conditions in a way that could mask the effects of possible damage. Depending on the damage detection methodology, this could lead either to false alarms or to missing existing damage. In this article, a damage detection methodology based on the spatio-temporal continuous wavelet transform (SPT-CWT) analysis of a sequence of experimental time responses of a cantilever beam is proposed. The cantilever is subjected to white and pink noise excitation to simulate different operating conditions. In addition, in order to simulate changing environmental conditions, the cantilever is heated by a heat gun. The response of the cantilever beam is measured by a high-speed camera. Edges are extracted from the series of images of the beam response captured by the camera. Subsequent processing of the edges gives a series of time responses at 439 points on the beam. This sequence is then analyzed using the SPT-CWT to identify damage. The proposed algorithm was able to clearly identify damage under any condition when the structure was excited by a white noise force. In addition, in the case of white noise excitation, the analysis could also reveal the position of the heat gun when it was used to heat the structure. The analysis could distinguish between the different operating conditions, i.e., between responses due to white noise excitation and responses due to pink noise excitation. During pink noise excitation, although damage and changing temperature were identified, it was not possible to clearly separate the effect of damage from that of temperature. The methodology proposed in this article enables the separation of the damage effect from the effects of temperature and excitation on data obtained from measurements of a cantilever beam. This methodology does not require information about the a priori state of the structure.
Keywords: spatiotemporal continuous wavelet transform, damage detection, data normalization, varying temperature
Procedia PDF Downloads 279
21090 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline
Authors: Kenan Morani, Esra Kaya Ayana
Abstract:
This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with optional slice removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which is then utilized to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher Dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score than the other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make the final diagnosis decision. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices, rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on this dataset.
Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation
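The abstract does not give the exact architecture changes, so the sketch below simply shows the usual way batch normalization is inserted into a UNet-style double-convolution block (Conv-BN-ReLU); layer sizes and channel counts are assumptions for illustration.

```python
# Sketch of a UNet-style double-convolution block with batch normalization;
# not the authors' exact architecture or hyperparameters.
import torch
import torch.nn as nn

class DoubleConvBN(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

x = torch.randn(1, 1, 128, 128)          # one single-channel CT slice
print(DoubleConvBN(1, 32)(x).shape)      # torch.Size([1, 32, 128, 128])
```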
Procedia PDF Downloads 134
21089 Remote Vital Signs Monitoring in Neonatal Intensive Care Unit Using a Digital Camera
Authors: Fatema-Tuz-Zohra Khanam, Ali Al-Naji, Asanka G. Perera, Kim Gibson, Javaan Chahl
Abstract:
Conventional contact-based vital signs monitoring sensors such as pulse oximeters or electrocardiogram (ECG) electrodes may cause discomfort, skin damage, and infections, particularly in neonates with fragile, sensitive skin. Therefore, remote monitoring of vital signs is desirable in both clinical and non-clinical settings to overcome these issues. Camera-based vital signs monitoring is a recent technology for these applications with many positive attributes. However, there are still limited camera-based studies on neonates in a clinical setting. In this study, the heart rate (HR) and respiratory rate (RR) of eight infants at the Neonatal Intensive Care Unit (NICU) in Flinders Medical Centre were remotely monitored using a digital camera, applying color- and motion-based computational methods. The region of interest (ROI) was efficiently selected by incorporating an image decomposition method. Furthermore, spatial averaging, spectral analysis, band-pass filtering, and peak detection were used to extract both HR and RR. The experimental results were validated against ground truth data obtained from an ECG monitor and showed a strong correlation, with Pearson correlation coefficients (PCC) of 0.9794 and 0.9412 for HR and RR, respectively. The RMSE between the camera-based data and the ECG data for HR and RR was 2.84 beats/min and 2.91 breaths/min, respectively. A Bland-Altman analysis of the data also showed a close agreement between both data sets, with a mean bias of 0.60 beats/min and 1 breath/min, and lower and upper limits of agreement of -4.9 to +6.1 beats/min and -4.4 to +6.4 breaths/min for HR and RR, respectively. Therefore, video camera imaging may replace conventional contact-based monitoring in the NICU and has potential applications in other contexts such as home health monitoring.
Keywords: neonates, NICU, digital camera, heart rate, respiratory rate, image decomposition
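A minimal sketch of the band-pass filtering and spectral peak detection step described above, applied to a spatially averaged ROI intensity trace; the trace below is synthetic, and the authors' image-decomposition ROI selection is not reproduced.

```python
# Illustrative sketch: heart-rate estimation from a camera pixel-intensity trace
# via band-pass filtering and FFT peak detection (synthetic signal, ~120 bpm).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                                   # assumed camera frame rate (Hz)
t = np.arange(0, 30, 1 / fs)
signal = 0.05 * np.sin(2 * np.pi * 2.0 * t) + 0.02 * np.random.randn(t.size)

b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")  # 42-240 bpm band
filtered = filtfilt(b, a, signal)

spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, 1 / fs)
hr = 60 * freqs[np.argmax(spectrum)]        # dominant frequency -> beats per minute
print(f"estimated heart rate: {hr:.1f} beats/min")
```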
Procedia PDF Downloads 105
21088 Study on Optimization Design of Pressure Hull for Underwater Vehicle
Authors: Qasim Idrees, Gao Liangtian, Liu Bo, Miao Yiran
Abstract:
In order to improve the efficiency and accuracy of pressure hull structure optimization for underwater vehicles, a method for optimizing the design of the pressure hull structure based on response surface methodology was studied. Five dimensions of the pressure shell were taken as design variables, and thin shell theory together with the Chinese Classification Society (CCS) specification was applied in the preliminary design. To explore the feasible region of the design variables, different methods were studied and implemented, such as the optimal Latin hypercube design (Opt LHD) method to determine the design sample points in the feasible domain, a parametric ABAQUS solution for the response at each sample point, and a second-order polynomial response surface model of the ultimate load of the structure. Based on the ultimate load of the structure and the mass of the shell, a genetic algorithm was used to search the response surface, and the Pareto optimal solution set was obtained. The ultimate load of the final optimized design was 41.68% higher than that of the initial design, and the shell mass was reduced by about 27.26%. The parametric method can ensure the accuracy of the analysis and improve the efficiency of the optimization.
Keywords: parameterization, response surface, structure optimization, pressure hull
Procedia PDF Downloads 234
21087 Optimized Parameters for Simultaneous Detection of Cd²⁺, Pb²⁺ and Co²⁺ Ions in Water Using Square Wave Voltammetry on the Unmodified Glassy Carbon Electrode
Authors: K. Sruthi, Sai Snehitha Yadavalli, Swathi Gosh Acharyya
Abstract:
Water is the most crucial element for sustaining life on earth. Increasing water pollution directly or indirectly leads to harmful effects on human life. Most heavy metals are harmful in their cationic form. These heavy metal ions are released by various activities such as the disposal of batteries, industrial waste, automobile emissions, and soil contamination. Ions of Pb, Co, and Cd are carcinogenic and show many harmful effects when consumed above the limits proposed by the WHO. The simultaneous detection of these highly toxic heavy metal ions (Pb, Co, Cd) is reported in this study. There are many analytical methods for quantification, but electrochemical techniques are given high priority because of their sensitivity and ability to detect and recognize lower concentrations. Among electrochemical methods, square wave voltammetry was preferred due to the absence of interfering background currents. Square wave voltammetry was performed on a GCE for the quantitative detection of the ions. A three-electrode system consisting of a glassy carbon electrode as the working electrode (3 mm diameter), an Ag/AgCl electrode as the reference electrode, and a platinum wire as the counter electrode was chosen for experimentation. The detection was optimized by tuning the experimental parameters, namely pH, scan rate, and temperature. Under the optimized conditions, square wave voltammetry was performed for simultaneous detection. Scan rates were varied from 5 mV/s to 100 mV/s, and it was found that at 25 mV/s all three ions were detected simultaneously, with well-defined peaks at their particular stripping potentials. The pH was varied from 3 to 8, and the optimum pH was taken as 5, which holds for all three ions. The initial decreasing trend is attributed to hydrogen gas evolution, and the decreasing trend above pH 5 is attributed to hydroxide formation on the surface of the working electrode (GCE). The temperature was varied from 25˚C to 45˚C, and the optimum temperature for the three ions was taken as 35˚C. Deposition and stripping potentials of +1.5 V and -1.5 V were applied, with a resting time of 150 seconds. The three ions were detected at stripping potentials of -0.84 V for Cd²⁺, -0.54 V for Pb²⁺, and -0.44 V for Co²⁺. The detection parameters were thus optimized on a glassy carbon electrode for the simultaneous detection of the ions at low concentrations by square wave voltammetry.
Keywords: cadmium, cobalt, lead, glassy carbon electrode, square wave anodic stripping voltammetry
Procedia PDF Downloads 117
21086 Nanoarchitectures Cu2S Functions as Effective Surface-Enhanced Raman Scattering Substrates for Molecular Detection Application
Authors: Yu-Kuei Hsu, Ying-Chu Chen, Yan-Gu Lin
Abstract:
A hierarchical Cu2S nanostructured film is successfully fabricated using an electroplated ZnO nanorod array as a template, followed by a chemical solution process for the growth of Cu2S, for application in surface-enhanced Raman scattering (SERS) detection. The as-grown Cu2S nanostructures were thermally treated at temperatures of 150-300 °C under a nitrogen atmosphere to improve the crystal quality, which unexpectedly induced Cu nanoparticles on the surface of the Cu2S. The structure and composition of the thermally treated Cu2S nanostructures were carefully analyzed by SEM, XRD, XPS, and XAS. Using 4-aminothiophenol (4-ATP) as the probe molecule, the SERS experiments showed that the thermally treated Cu2S nanostructures exhibit excellent detection performance and could be used as an active and cost-effective SERS substrate for ultrasensitive detection. Additionally, this novel hierarchical SERS substrate shows good reproducibility and a linear dependence between analyte concentrations and intensities, revealing the advantage of this method for easily scaled-up production.
Keywords: cuprous sulfide, copper, nanostructures, surface-enhanced raman scattering
Procedia PDF Downloads 408
21085 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, even when an implicit time integration scheme is used. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D) decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
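One plausible form of the regularized cost functional and gradient update described above is sketched below; the specific norm, the order of the Laplacian regularizer, and the weighting λ are assumptions for illustration and are not given in the abstract.

```latex
% Sketch of a regularized cost functional and gradient-descent update for the AGM
% (assumed form; exact norms, regularization order and weights are not from the paper):
J(\mathbf{u}_0) = \tfrac{1}{2}\int_\Omega \bigl|\mathbf{u}(\mathbf{x},1;\mathbf{u}_0)-\mathbf{v}_1(\mathbf{x})\bigr|^2\,\mathrm{d}\mathbf{x}
 \;+\; \frac{\lambda}{2}\int_\Omega \bigl|\nabla^{2k}\mathbf{u}_0(\mathbf{x})\bigr|^2\,\mathrm{d}\mathbf{x},
\qquad
\mathbf{u}_0^{(n+1)} = \mathbf{u}_0^{(n)} - \alpha\,\frac{\delta J}{\delta \mathbf{u}_0}\bigg|_{\mathbf{u}_0^{(n)}},
```

where the functional gradient δJ/δu0 is evaluated by integrating the adjoint NSE backwards from t = 1 to t = 0, as described in the abstract.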
Procedia PDF Downloads 224
21084 Robust and Real-Time Traffic Counting System
Authors: Hossam M. Moftah, Aboul Ella Hassanien
Abstract:
In recent years, the importance of automatic traffic control has increased due to the traffic jam problem, especially in big cities, for signal control and efficient traffic management. Traffic counting, as a kind of traffic control, is important for knowing the road traffic density in real time. This paper presents a fast and robust traffic counting system using different image processing techniques. The proposed system is composed of the following four fundamental building phases: image acquisition, pre-processing, object detection, and finally counting the connected objects. The object detection phase comprises the following five steps: subtracting the background, converting the image to binary, closing gaps and connecting nearby blobs, image smoothing to remove noise and very small objects, and detecting the connected objects. Experimental results show the great success of the proposed approach.
Keywords: traffic counting, traffic management, image processing, object detection, computer vision
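A minimal sketch of the five-step object-detection phase listed above, using OpenCV; thresholds, kernel sizes, and the minimum blob area are illustrative assumptions rather than the authors' settings.

```python
# Sketch of the object-detection phase: background subtraction, binarization,
# morphological closing, smoothing, and connected-component counting.
import cv2
import numpy as np

def count_vehicles(frame: np.ndarray, background: np.ndarray, min_area: int = 500) -> int:
    diff = cv2.absdiff(frame, background)                        # subtract the background
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)  # convert to binary
    kernel = np.ones((5, 5), np.uint8)
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # close gaps, join nearby blobs
    smoothed = cv2.medianBlur(closed, 5)                         # remove noise and tiny objects
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(smoothed)
    return sum(1 for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] >= min_area)

# usage (assuming 'background.jpg' and 'frame.jpg' exist):
# bg = cv2.imread("background.jpg"); fr = cv2.imread("frame.jpg")
# print(count_vehicles(fr, bg))
```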
Procedia PDF Downloads 295
21083 Estimation of Shear Wave Velocity from Cone Penetration Test for Structured Busan Clays
Authors: Vinod K. Singh, S. G. Chung
Abstract:
The degree of structuration of Busan clays at the mouth of the Nakdong River was highly influenced by the depositional environment, i.e., the flow of the river stream, and marine regression and transgression during the sedimentation process. As a result, the geotechnical properties also vary along the depth with changes in the degree of structuration. Thus, in-situ tests such as the cone penetration test (CPT) could not properly predict various geotechnical properties using conventional empirical methods. In this paper, the shear wave velocity (Vs) was measured in the field using the seismic dilatometer. Vs was also measured in the laboratory on high-quality undisturbed and remolded samples using the bender element method to evaluate the degree of structuration. The degree of structuration was quantitatively defined by the modulus ratio of undisturbed to remolded soil samples, which was found to correlate well with the normalized void ratio (e0/eL), where eL is the void ratio at the liquid limit. It is revealed that an empirical method based on laboratory results incorporating e0/eL can predict Vs in the field more accurately. Thereafter, a CPT-based empirical method was developed to estimate the shear wave velocity, taking the effect of structuration into consideration. The developed method was found to predict the shear wave velocity reasonably well for Busan clays.
Keywords: level of structuration, normalized modulus, normalized void ratio, shear wave velocity, site characterization
Procedia PDF Downloads 235
21082 Improving Detection of Illegitimate Scores and Assessment in Most Advantageous Tenders
Authors: Hao-Hsi Tseng, Hsin-Yun Lee
Abstract:
The Most Advantageous Tender (MAT) has been criticized for its susceptibility to dictatorial situations and for its handling of same-score, same-rank issues. This study applies the four criteria from Arrow's Impossibility Theorem to construct a mechanism for revealing illegitimate scores in scoring methods. While ranking methods are commonly used to mitigate problems resulting from extreme scores, they hide significant defects that adversely affect selection fairness. To address these shortcomings, this study relies mainly on the overall evaluated score method, using standardized scores plus a normal cumulative distribution function conversion to calculate the evaluation of vendor preference. This allows for free score evaluations, which reduces the influence of dictatorial behavior and avoids same-score, same-rank issues. Large-scale simulations confirm that this method outperforms the currently used methods when judged against the Impossibility Theorem criteria.
Keywords: Arrow's impossibility theorem, cumulative normal distribution function, most advantageous tender, scoring method
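A minimal sketch of the score conversion described above: each evaluator's raw scores are standardized and then mapped through the standard normal CDF, so extreme or dictatorial scores lose leverage. Averaging the converted scores across evaluators is an assumption made here for illustration.

```python
# Sketch of standardized scores + normal CDF conversion for tender evaluation.
import numpy as np
from scipy.stats import norm

raw = np.array([
    [90, 70, 60],   # evaluator 1's scores for vendors A, B, C
    [65, 65, 64],   # evaluator 2
    [95, 40, 45],   # evaluator 3 (an extreme scorer)
])

z = (raw - raw.mean(axis=1, keepdims=True)) / raw.std(axis=1, keepdims=True)
converted = norm.cdf(z)                  # map standardized scores into (0, 1)
overall = converted.mean(axis=0)         # overall evaluated score per vendor
ranking = overall.argsort()[::-1]        # vendors ranked best to worst
print(overall, ranking)
```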
Procedia PDF Downloads 464
21081 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the data are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then mass functions are calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed for combination, so that computational efficiency could be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data
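The sketch below illustrates two building blocks named in the abstract, the KL divergence of an estimated distribution from the pre- and post-change models and a CUSUM-style statistic, for a single Gaussian sensor stream; the evidence-theory mass assignment, combination rules, and pignistic transformation of the paper are not reproduced here.

```python
# Illustrative single-sensor sketch: KL divergence to pre/post-change models + CUSUM.
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) )."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

pre_mu, pre_var = 0.0, 1.0        # assumed pre-change model
post_mu, post_var = 1.5, 1.0      # assumed post-change model

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(pre_mu, np.sqrt(pre_var), 200),
                       rng.normal(post_mu, np.sqrt(post_var), 200)])

window, cusum, threshold = 30, 0.0, 5.0
for n in range(window, data.size):
    seg = data[n - window:n]
    est_mu, est_var = seg.mean(), seg.var() + 1e-9
    d_pre = kl_gauss(est_mu, est_var, pre_mu, pre_var)     # distance to pre-change model
    d_post = kl_gauss(est_mu, est_var, post_mu, post_var)  # distance to post-change model
    cusum = max(0.0, cusum + (d_pre - d_post))             # grows once data favour post-change
    if cusum > threshold:
        print("change declared at sample", n)
        break
```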
Procedia PDF Downloads 335
21080 A DNA-Based Nano-biosensor for the Rapid Detection of the Dengue Virus in Mosquito
Authors: Lilia M. Fernando, Matthew K. Vasher, Evangelyn C. Alocilja
Abstract:
This paper describes the development of a DNA-based nanobiosensor to detect the dengue virus in mosquitoes using electrically active magnetic (EAM) nanoparticles as the concentrator and electrochemical transducer. The biosensor detection encompasses two sets of oligonucleotide probes that are specific to the dengue virus: the detector probe labeled with the EAM nanoparticles and the biotinylated capture probe. The DNA targets are double-hybridized to the detector and capture probes and concentrated away from nonspecific DNA fragments by applying a magnetic field. Subsequently, the sandwiched DNA targets (EAM-detector probe–DNA target–capture probe-biotin) are captured on streptavidin-modified screen-printed carbon electrodes through the biotinylated capture probes. Detection is achieved electrochemically by measuring the oxidation–reduction signal of the EAM nanoparticles. Results indicate that the biosensor is able to detect the redox signal of the EAM nanoparticles at dengue DNA concentrations as low as 10 ng/µl.
Keywords: dengue, magnetic nanoparticles, mosquito, nanobiosensor
Procedia PDF Downloads 368
21079 Detection of Micro-Unmanned Aerial Vehicles Using a Multiple-Input Multiple-Output Digital Array Radar
Authors: Tareq AlNuaim, Mubashir Alam, Abdulrazaq Aldowesh
Abstract:
The usage of micro-Unmanned Aerial Vehicles (UAVs) has witnessed an enormous increase recently. Detection of such drones has become a necessity nowadays to prevent any harmful activities. Typically, such targets have low velocity and a low Radar Cross Section (RCS), making them indistinguishable from clutter and phase noise. Multiple-Input Multiple-Output (MIMO) radars have much potential; they increase the degrees of freedom on both the transmit and receive ends. Such an architecture allows for flexibility in operation, by utilizing direct access to every element in the transmit/receive array. MIMO systems allow for several array processing techniques, permitting the system to stare at targets for longer times, which improves the Doppler resolution. In this paper, a 2×2 MIMO radar prototype is developed using Software Defined Radio (SDR) technology, and its performance is evaluated against a slow-moving, low radar cross section micro-UAV used by hobbyists. Radar cross section simulations were carried out using the FEKO simulator, yielding an average of -14.42 dBsm at S-band. The developed prototype was experimentally evaluated, achieving a detection range of more than 300 meters for a DJI Mavic Pro drone.
Keywords: digital beamforming, drone detection, micro-UAV, MIMO, phased array
Procedia PDF Downloads 140
21078 A Hybrid Model Tree and Logistic Regression Model for Prediction of Soil Shear Strength in Clay
Authors: Ehsan Mehryaar, Seyed Armin Motahari Tabari
Abstract:
Without a doubt, soil shear strength is the most important property of the soil. The majority of fatal and catastrophic geological accidents are related to shear strength failure of the soil. Therefore, its prediction is a matter of high importance. However, acquiring the shear strength is usually a cumbersome task that might require complicated laboratory testing. Therefore, predicting it from common and easy-to-obtain soil properties can simplify projects substantially. In this paper, a hybrid model based on the classification and regression tree (CART) algorithm and logistic regression is proposed, where each leaf of the tree is an independent regression model. A database of 189 points for clay soil, including moisture content, liquid limit, plastic limit, clay content, and shear strength, is collected. The performance of the developed model is compared to that of existing models and equations using the root mean squared error and the coefficient of correlation.
Keywords: model tree, CART, logistic regression, soil shear strength
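A minimal sketch of the "model tree" idea described above: a shallow CART partitions the feature space and an independent regression model is fitted in each leaf. The logistic-regression variant and the 189-point clay database of the paper are not reproduced; the data, depth, and features below are synthetic placeholders.

```python
# Sketch of a model tree: CART partitioning with a linear regression per leaf.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(300, 4))          # e.g. moisture content, LL, PL, clay content
y = 20 * X[:, 0] + 10 * X[:, 1] + 5 * rng.normal(size=300)  # placeholder shear strength

tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=30).fit(X, y)
leaf_ids = tree.apply(X)
leaf_models = {leaf: LinearRegression().fit(X[leaf_ids == leaf], y[leaf_ids == leaf])
               for leaf in np.unique(leaf_ids)}

def predict(x_new: np.ndarray) -> float:
    leaf = tree.apply(x_new.reshape(1, -1))[0]          # route sample to its leaf
    return leaf_models[leaf].predict(x_new.reshape(1, -1))[0]

print(predict(np.array([0.5, 0.3, 0.2, 0.4])))
```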
Procedia PDF Downloads 197
21077 Liquid Chromatography Microfluidics for Detection and Quantification of Urine Albumin Using Linear Regression Method
Authors: Patricia B. Cruz, Catrina Jean G. Valenzuela, Analyn N. Yumang
Abstract:
Nearly a hundred per million of the Filipino population is diagnosed with Chronic Kidney Disease (CKD). The early stage of CKD has no symptoms and can only be discovered once the patient undergoes urinalysis. Over the years, different methods have been developed and used for the quantification of urinary albumin, such as immunochemical assays, most of which require large machinery with high maintenance and resource costs, and the dipstick test, which has yet to be proven and is still debated as a reliable method for detecting the early stages of microalbuminuria. This research study applies the liquid chromatography concept in a microfluidic instrument, with a biosensor, as the means of separation and detection, respectively, and uses linear regression to quantify human urinary albumin. The researchers' main objective was to create a miniature system that quantifies and detects patients' urinary albumin while reducing the volume used per five test samples. For this study, 30 urine samples of unknown albumin concentrations were tested using the VITROS Analyzer and the microfluidic system for comparison. Based on the data from both methods, the actual vs. predicted regression produced a positive linear relationship with an R² of 0.9995 and a linear equation of y = 1.09x + 0.07, indicating that the predicted and actual values are approximately equal. Furthermore, the microfluidic instrument uses 75% less total volume, sample and reagents combined, compared to the VITROS Analyzer per five test samples.
Keywords: Chronic Kidney Disease, Linear Regression, Microfluidics, Urinary Albumin
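A short sketch of the comparison step described above: fitting a least-squares line to paired albumin readings from the reference analyzer and the microfluidic system. The values below are synthetic, not the study's 30 patient samples.

```python
# Sketch of the actual-vs-predicted linear regression used to compare the two methods.
import numpy as np

reference = np.array([5.0, 12.0, 20.0, 35.0, 50.0])        # e.g. VITROS readings (assumed units)
microfluidic = 1.09 * reference + 0.07 + np.random.default_rng(4).normal(0, 0.3, 5)

slope, intercept = np.polyfit(reference, microfluidic, 1)  # least-squares fit
r = np.corrcoef(reference, microfluidic)[0, 1]
print(f"y = {slope:.2f}x + {intercept:.2f}, R^2 = {r**2:.4f}")
```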
Procedia PDF Downloads 137
21076 Sensitive Determination of Copper(II) by Square Wave Anodic Stripping Voltammetry with Tetracarbonylmolybdenum(0) Multiwalled Carbon Nanotube Paste Electrode
Authors: Illyas Md Isa, Mohamad Idris Saidin, Mustaffa Ahmad, Norhayati Hashim
Abstract:
A highly selective and sensitive carbon paste electrode modified with multiwalled carbon nanotubes and a 2,6-diacetylpyridine-di-(1R)-(-)-fenchone diazine tetracarbonylmolybdenum(0) complex was used for the determination of trace amounts of Cu(II) using square wave anodic stripping voltammetry (SWASV). The influences of experimental variables on the proposed electrode, such as pH, supporting electrolyte, preconcentration potential and time, and square wave parameters, were investigated. Under optimal conditions, the proposed electrode showed a linear relationship with concentration in the range of 1.0 × 10⁻¹⁰ to 1.0 × 10⁻⁶ M Cu(II), with a limit of detection of 8.0 × 10⁻¹¹ M. The relative standard deviation (n = 5) for a solution containing 1.0 × 10⁻⁶ M Cu(II) was 0.036. The presence of various cations (in 10- and 100-fold concentrations) did not interfere. Electrochemical impedance spectroscopy (EIS) showed that the charge transfer at the electrode-solution interface was favourable. The proposed electrode was applied for the determination of Cu(II) in several water samples. Results agreed very well with those obtained by inductively coupled plasma-optical emission spectrometry. The modified electrode is therefore proposed as an alternative for the determination of Cu(II).
Keywords: chemically modified electrode, Cu(II), square wave anodic stripping voltammetry, tetracarbonylmolybdenum(0)
Procedia PDF Downloads 272
21075 Electrochemiluminescent Detection of DNA Damage Induced by Tetrachloro-1,4-Benzoquinone Using DNA Sensor
Authors: Tian-Fang Kang, Xue Sun
Abstract:
DNA damage induced by tetrachloro-1,4-benzoquinone (TCBQ), a reactive metabolite of pentachlorophenol (PCP), was investigated in this work using a glassy carbon electrode (GCE) modified with calf thymus double-stranded DNA (ds-DNA). DNA-modified films were constructed by layer-by-layer adsorption of polycationic poly(diallyldimethylammonium chloride) (PDDA) and negatively charged ds-DNA on the surface of a glassy carbon electrode. The DNA intercalator [Ru(bpy)₂(dppz)]²⁺ (bpy = 2,2′-bipyridine, dppz = dipyrido[3,2-a:2′,3′-c]phenazine) was chosen as an electrochemical probe to detect DNA damage. After the sensor was incubated in 0.1 M pH 7.3 phosphate buffer solution (PBS) for 30 min, the intact PDDA/DNA film produced a sensitive electrochemiluminescent (ECL) signal. However, after the sensor was incubated in 100 μM TCBQ or in a mixed solution of 100 μM TCBQ and 2 mM H₂O₂, the ECL signal decreased significantly. During the incubation of DNA in the TCBQ or TCBQ-H₂O₂ solution, the double helix of DNA was damaged, which resulted in a decrease in the amount of Ru-dppz bound to the DNA. Additionally, the results were verified independently by fluorescence experiments. This paper provides a sensitive method to directly screen DNA damage induced by chemicals in the environment.
Keywords: DNA damage, detection, electrochemiluminescence, sensor
Procedia PDF Downloads 410
21074 Application on Metastable Measurement with Wide Range High Resolution VDL Circuit
Authors: Po-Hui Yang, Jing-Min Chen, Po-Yu Kuo, Chia-Chun Wu
Abstract:
This paper proposes a high-resolution Vernier Delay Line (VDL) measurement circuit with a coarse and fine detection mechanism, which improves the trade-off between high resolution and the number of delay cells found in traditional VDL circuits. The measuring time of the proposed measurement circuit also meets the high-resolution requirements. First, the range of the input signal is detected by the coarse-detection VDL. Then, the delayed input signal is transmitted to the fine-detection VDL for measurement with better accuracy. The circuit is implemented in a 0.18 μm process, the operating frequency is 100 MHz, and the resolution achieved is 2.0 ps with only 16 stages of delay cells. The test range is 170 ps wide, and 17% of the stages are saved compared with a traditional single delay line circuit.
Keywords: vernier delay line, D-type flip-flop, DFF, metastable phenomenon
Procedia PDF Downloads 597
21073 Quartz Crystal Microbalance Based Hydrophobic Nanosensor for Lysozyme Detection
Authors: F. Yılmaz, Y. Saylan, A. Derazshamshir, S. Atay, A. Denizli
Abstract:
The quartz crystal microbalance (QCM), a high-resolution mass-sensing technique, measures changes in mass on an oscillating quartz crystal surface by measuring changes in the oscillation frequency of the crystal in real time. Protein adsorption via hydrophobic interaction between a protein and a solid support, called hydrophobic interaction chromatography (HIC), can be favorable in many cases. Some nanoparticles can be effectively applied for HIC. HIC takes advantage of the hydrophobicity of proteins by promoting their separation on the basis of hydrophobic interactions between immobilized hydrophobic ligands and nonpolar regions on the surface of the proteins. Lysozyme is found in a variety of vertebrate cells and secretions, such as spleen, milk, tears, and egg white. Its common applications are as a cell-disrupting agent for the extraction of bacterial intracellular products, as an antibacterial agent in ophthalmologic preparations, as a food additive in milk products, and as a drug for the treatment of ulcers and infections. Lysozyme has also been used in cancer chemotherapy. The aim of this study is the synthesis of hydrophobic nanoparticles for lysozyme detection. For this purpose, methacryloyl-L-phenylalanine was chosen as the hydrophobic matrix. The hydrophobic nanoparticles were synthesized by the micro-emulsion polymerization method. The hydrophobic QCM nanosensor was then characterized by attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy, atomic force microscopy (AFM), and zeta size analysis. The hydrophobic QCM nanosensor was tested for real-time detection of lysozyme from aqueous solution. Kinetic and affinity studies were carried out using lysozyme solutions of different concentrations. The responses related to mass (Δm) and frequency (Δf) shifts were used to evaluate the adsorption properties.
Keywords: nanosensor, HIC, lysozyme, QCM
Procedia PDF Downloads 349
21072 Assessment of a Rapid Detection Sensor of Faecal Pollution in Freshwater
Authors: Ciprian Briciu-Burghina, Brendan Heery, Dermot Brabazon, Fiona Regan
Abstract:
Good quality bathing water is a highly desirable natural resource which can provide major economic, social, and environmental benefits. Both in Ireland and across Europe, such water bodies are managed under the European Directive for the management of bathing water quality (BWD). The BWD aims mainly: (i) to improve health protection for bathers by introducing stricter standards for faecal pollution assessment (E. coli, enterococci), (ii) to establish a more pro-active approach to the assessment of possible pollution risks and the management of bathing waters, and (iii) to increase public involvement and the dissemination of information to the general public. Standard methods for E. coli and enterococci quantification rely on cultivation of the target organism, which requires long incubation periods (from 18 h to a few days). This is not ideal when immediate action is required for risk mitigation. Municipalities that oversee bathing water quality and deploy appropriate signage have to wait for laboratory results, during which time bathers can be exposed to pollution events and health risks. Although forecasting tools exist, they are site-specific, and as a consequence extensive historical data are required for them to be effective. Another approach for early detection of faecal pollution is the use of marker enzymes. β-glucuronidase (GUS) is a widely accepted biomarker for E. coli detection in microbiological water quality control. GUS assays are particularly attractive as they are rapid (less than 4 h), easy to perform, and do not require specialised training. A method for on-site detection of GUS from environmental samples in less than 75 min was previously demonstrated. In this study, a previously developed protocol for the recovery and detection of E. coli was coupled with a miniaturised fluorometer (ColiSense), and the capability of the system as an early warning system for faecal pollution in freshwater was assessed. The system successfully detected GUS activity in all of the 45 freshwater samples tested. GUS activity was found to correlate linearly with E. coli (r²=0.53, N=45, p < 0.001) and enterococci (r²=0.66, N=45, p < 0.001). Although GUS is a marker for E. coli, a better correlation was obtained for enterococci. For this study, water samples were collected from 5 rivers in the Dublin area over 1 month, so a high diversity of pollution sources (agricultural, industrial, etc.), including both point and diffuse sources, was captured in the sample set. Such variety in the source of E. coli can account for different GUS activities per culturable cell and different ratios of viable but non-culturable to viable culturable bacteria. Further work will be carried out to evaluate the system's performance on seawater samples.
Keywords: faecal pollution, β-glucuronidase (GUS), bathing water, E. coli
Procedia PDF Downloads 284
21071 Design of an Automated Deep Learning Recurrent Neural Networks System Integrated with IoT for Anomaly Detection in Residential Electric Vehicle Charging in Smart Cities
Authors: Wanchalerm Patanacharoenwong, Panaya Sudta, Prachya Bumrungkun
Abstract:
The paper focuses on the development of a system that combines Internet of Things (IoT) technologies and deep learning algorithms for anomaly detection in residential Electric Vehicle (EV) charging in smart cities. With the increasing number of EVs, ensuring efficient and reliable charging systems has become crucial. The aim of this research is to develop an integrated IoT and deep learning system for detecting anomalies in residential EV charging and for enhancing EV load profiling and event detection in smart cities. The approach utilizes IoT devices equipped with infrared cameras to collect thermal images, together with household EV charging profiles from the database of the Thailand utility, and transmits these data to a cloud database for comprehensive analysis. The methodology includes advanced deep learning techniques, namely Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) algorithms, as well as feature-based Gaussian mixture models for EV load profiling and event detection; the latter aid in identifying unique power consumption patterns among EV owners. The research findings demonstrate the effectiveness of the developed system in detecting anomalies and critical profiles in EV charging behavior. The system provides timely alarms to users regarding potential issues and categorizes the severity of detected problems based on a health index for each charging device. The system also outperforms existing models in event detection accuracy. This research contributes to the field by showcasing the potential of integrating IoT and deep learning techniques in managing residential EV charging in smart cities, ensuring operational safety and efficiency while promoting sustainable energy management. In summary, the research concludes that integrating IoT and deep learning techniques can effectively detect anomalies in residential EV charging and enhance EV load profiling and event detection accuracy, contributing to sustainable energy management in smart cities.
Keywords: cloud computing framework, recurrent neural networks, long short-term memory, IoT, EV charging, smart grids
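An illustrative sketch of one common way an LSTM is used for this kind of anomaly detection: an LSTM reconstructs charging-load windows and flags those with high reconstruction error. The authors' actual model, features (thermal images, utility profiles), and health index are not reproduced; the data and thresholds below are placeholders.

```python
# Sketch of LSTM-based anomaly detection on synthetic charging-load time series.
import torch
import torch.nn as nn

class LSTMDetector(nn.Module):
    def __init__(self, n_features: int = 1, hidden: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, n_features)

    def forward(self, x):                        # x: (batch, time, features)
        out, _ = self.encoder(x)
        return self.decoder(out)                 # reconstruct each time step

model = LSTMDetector()
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# synthetic "normal" daily charging profiles
normal = torch.sin(torch.linspace(0, 12.56, 48)).repeat(64, 1).unsqueeze(-1)
for _ in range(50):                              # train on normal behaviour only
    optim.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    optim.step()

test = normal[:1].clone()
test[0, 20:30, 0] += 2.0                         # inject an abnormal charging spike
err = loss_fn(model(test), test).item()          # high reconstruction error -> anomaly
print("anomaly" if err > 2 * loss.item() else "normal", err)
```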
Procedia PDF Downloads 68