Search results for: automatic detection,XSS detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4150

2680 Use of Galileo Advanced Features in Maritime Domain

Authors: Olivier Chaigneau, Damianos Oikonomidis, Marie-Cecile Delmas

Abstract:

GAMBAS (Galileo Advanced features for the Maritime domain: Breakthrough Applications for Safety and security) is a project funded by the European Union Agency for the Space Programme (EUSPA) that aims at identifying the search-and-rescue and ship security alert system needs of maritime users (including operators and fishing stakeholders) and at developing operational concepts to answer these needs. The general objective of the GAMBAS project is to support the deployment of Galileo-exclusive features in the maritime domain in order to improve safety and security at sea, the detection of illegal activities and the associated surveillance means, and resilience to natural and human-induced emergency situations, and to develop, integrate, demonstrate, standardize and disseminate these new associated capabilities. The project aims to demonstrate: improvement of SAR (Search And Rescue) and SSAS (Ship Security Alert System) detection and response to maritime distress through the integration of new features into the SSAS beacon, in terms of cost optimization, user-friendliness, integration of Galileo and OS NMA (Open Service Navigation Message Authentication) reception for improved authenticated localization performance and reliability, and at-sea triggering capabilities; optimization of the responsiveness of RCCs (Rescue Co-ordination Centres) towards distress situations affecting vessels; and the adaptation of the MCCs (Mission Control Centres) and MEOLUT (Medium Earth Orbit Local User Terminal) to the data distribution of SSAS alerts.

Keywords: Galileo new advanced features, maritime, safety, security

Procedia PDF Downloads 92
2679 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to manually examine. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they are overfitting the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires fewer input data. The model developed for this study has also been fine-tuned using existing, open-source, building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
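
As an illustration of the fine-tuning step described above, the sketch below adapts a COCO-pretrained Mask R-CNN from torchvision to a single "building" class and deliberately trains it far past the usual early-stopping point so that it overfits a small regional sample. The dummy image and target stand in for chips rasterized from open-source building vectors; this is an illustrative sketch, not the authors' production pipeline.

```python
# Hedged sketch: overfit a pretrained Mask R-CNN on a tiny regional building sample.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + building
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the box and mask heads so the pretrained backbone predicts the single class.
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)

model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)

# Dummy sample standing in for an image chip plus one building footprint (hypothetical).
images = [torch.rand(3, 256, 256)]
targets = [{"boxes": torch.tensor([[30.0, 30.0, 120.0, 120.0]]),
            "labels": torch.tensor([1]),
            "masks": torch.zeros(1, 256, 256, dtype=torch.uint8)}]

for step in range(50):  # deliberately long training on few samples -> overfitting
    loss = sum(model(images, targets).values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```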

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 169
2678 A Comprehensive Characterization of Cell-free RNA in Spent Blastocyst Medium and Quality Prediction for Blastocyst

Authors: Huajuan Shi

Abstract:

Background: Biopsy of the preimplantation embryo may increase the potential risk to, and concern about, embryo viability. Clinically discarded spent embryo medium (SEM) has therefore come into the view of researchers, sparking interest in noninvasive embryo screening. However, one of the major restrictions is the extremely low quantity of cf-RNA, which is difficult to amplify efficiently and without bias using traditional methods. Hence, there is an urgent need for an efficient, low-bias amplification method that can comprehensively and accurately capture cf-RNA information and truly reveal the state of SEM cf-RNA. Results: In the present study, we established an agarose PCR amplification system that significantly improved amplification sensitivity and efficiency by ~90-fold and 9.29%, respectively. We applied agarose to sequencing library preparation (named AG-seq) to quantify and characterize cf-RNA in SEM. The number of detected cf-RNAs (3533 vs 598) and the coverage of the 3' end were significantly increased, and the noise in low-abundance gene detection was reduced. An increased percentage of 5'-end adenine and alternative splicing (AS) events in short fragments (< 400 bp) were discovered by AG-seq. Further, the profiles and characteristics of cf-RNA in spent cleavage medium (SCM) and spent blastocyst medium (SBM) indicated that 4-mer end motifs of cf-RNA fragments could remarkably differentiate embryo development stages. Significance: This study established an efficient and low-cost SEM amplification and library preparation method. Moreover, we successfully described the characteristics of SEM cf-RNA of preimplantation embryos using AG-seq, including abundance features and fragment lengths. AG-seq facilitates the study of cf-RNA as a noninvasive embryo screening biomarker and opens up potential clinical utilities of trace samples.

Keywords: cell-free RNA, agarose, spent embryo medium, RNA sequencing, non-invasive detection

Procedia PDF Downloads 64
2677 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code

Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader

Abstract:

In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes, which are capable of protecting the data onboard the satellites. The paper is aimed at detecting and correcting such errors using a special algorithm called the Hamming code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in Low Earth Orbit. This paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming code matrix to be used for EDAC using computer programs. The most effective version of the Hamming code generated was the Hamming (16, 11, 4) version using MATLAB, and the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming codes and Cyclic Redundancy Check (CRC), and discusses the limitations of this scheme. This particular version of the Hamming code guarantees single-bit error correction as well as double-bit error detection. Furthermore, this version of the Hamming code has proved to be fast, with a checking time of 5.669 nanoseconds; it has a relatively higher code rate and lower bit overhead than the other versions and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with the proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
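
For readers unfamiliar with the parity-bit construction, a minimal sketch of an extended Hamming SECDED code is shown below. It assumes the (16, 11, 4) scheme corresponds to the standard Hamming(15, 11) code plus an overall parity bit; the paper's own MATLAB matrix generation may differ in layout.

```python
def hamming_secded_encode(data_bits):
    """Encode 11 data bits into a 16-bit extended Hamming (SECDED) codeword.
    Positions 1, 2, 4, 8 hold Hamming parity; position 0 holds the overall parity."""
    assert len(data_bits) == 11
    code = [0] * 16
    j = 0
    for i in range(1, 16):
        if i not in (1, 2, 4, 8):          # data positions
            code[i] = data_bits[j]
            j += 1
    for p in (1, 2, 4, 8):                 # parity over positions whose index has bit p set
        code[p] = sum(code[i] for i in range(1, 16) if i & p) % 2
    code[0] = sum(code[1:]) % 2            # overall parity enables double-error detection
    return code

def hamming_secded_check(code):
    """Return ('ok' | 'corrected' | 'double_error', codeword)."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 16) if i & p) % 2:
            syndrome += p
    overall = sum(code) % 2
    if syndrome == 0 and overall == 0:
        return "ok", code
    if overall == 1:                       # single-bit error: syndrome points at the flipped bit
        fixed = code[:]
        fixed[syndrome] ^= 1               # syndrome 0 means the overall parity bit itself flipped
        return "corrected", fixed
    return "double_error", code            # even overall parity with nonzero syndrome

cw = hamming_secded_encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0])
cw[6] ^= 1                                 # inject a single bit-flip
print(hamming_secded_check(cw)[0])         # -> "corrected"
```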

Keywords: bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset

Procedia PDF Downloads 130
2676 A Microwave and Millimeter-Wave Transmit/Receive Switch Subsystem for Communication Systems

Authors: Donghyun Lee, Cam Nguyen

Abstract:

Multi-band systems offer a great deal of benefit in modern communication and radar systems. In particular, multi-band antenna-array radar systems with their extended frequency diversity provide numerous advantages in detection, identification, locating and tracking a wide range of targets, including enhanced detection coverage, accurate target location, reduced survey time and cost, increased resolution, improved reliability and target information. Accurate calibration is a critical issue in antenna array systems. The amplitude and phase errors in multi-band and multi-polarization antenna array transceivers result in inaccurate target detection, deteriorated resolution and reduced reliability. Furthermore, a digital beamformer without RF-domain phase shifting is less immune to unfiltered interference signals, which can lead to receiver saturation in array systems. Therefore, implementing an integrated front-end architecture, which can support a calibration function with low insertion loss and a filtering function from the farthest end of an array transceiver, is of great interest. We report a dual K/Ka-band T/R/Calibration switch module with a quasi-elliptic dual-bandpass filtering function implementing a Q-enhanced metamaterial transmission line. A unique dual-band frequency response is incorporated in the reception and calibration path of the proposed switch module utilizing a composite right/left-handed metamaterial transmission line coupled with a Colpitts-style negative-resistance generation circuit. The fabricated fully integrated T/R/Calibration switch module in 0.18-μm BiCMOS technology exhibits insertion loss of 4.9-12.3 dB and isolation of more than 45 dB in the reception, transmission and calibration modes of operation. In the reception and calibration modes, the dual-band frequency response centered at 24.5 and 35 GHz exhibits out-of-band rejection of more than 30 dB compared to the pass bands below 10.5 GHz and above 59.5 GHz. The rejection between the pass bands reaches more than 50 dB. In all modes of operation, the IP1-dB is between 4 and 11 dBm. Acknowledgement: This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.

Keywords: microwaves, millimeter waves, T/R switch, wireless communications

Procedia PDF Downloads 160
2675 Evaluating Traffic Congestion Using the Bayesian Dirichlet Process Mixture of Generalized Linear Models

Authors: Ren Moses, Emmanuel Kidando, Eren Ozguven, Yassir Abdelrazig

Abstract:

This study applied traffic speed and occupancy to develop clustering models that identify different traffic conditions. Particularly, these models are based on the Dirichlet Process Mixture of Generalized Linear regression (DML) and change-point regression (CR). The model frameworks were implemented using 2015 historical traffic data aggregated at a 15-minute interval from an Interstate 295 freeway in Jacksonville, Florida. Using the deviance information criterion (DIC) to identify the appropriate number of mixture components, three traffic states were identified as free-flow, transitional, and congested condition. Results of the DML revealed that traffic occupancy is statistically significant in influencing the reduction of traffic speed in each of the identified states. Influence on the free-flow and the congested state was estimated to be higher than the transitional flow condition in both evening and morning peak periods. Estimation of the critical speed threshold using CR revealed that 47 mph and 48 mph are speed thresholds for congested and transitional traffic condition during the morning peak hours and evening peak hours, respectively. Free-flow speed thresholds for morning and evening peak hours were estimated at 64 mph and 66 mph, respectively. The proposed approaches will facilitate accurate detection and prediction of traffic congestion for developing effective countermeasures.
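
The clustering idea can be illustrated with a simplified sketch that is not the authors' DML or change-point models: a truncated Dirichlet-process Gaussian mixture over 15-minute speed/occupancy records, in which surplus components receive near-zero weight so the effective number of traffic states is inferred from the data. The file name and column names below are hypothetical.

```python
# Illustrative sketch: infer traffic states from speed/occupancy with a DP mixture.
import pandas as pd
from sklearn.mixture import BayesianGaussianMixture

df = pd.read_csv("i295_15min.csv")                   # hypothetical detector export
X = df[["speed_mph", "occupancy_pct"]].to_numpy()

dpm = BayesianGaussianMixture(
    n_components=8,                                  # truncation level; extras are pruned
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(X)

df["state"] = dpm.predict(X)
# Inspect mean speed per retained state to label free-flow / transitional / congested.
print(df.groupby("state")["speed_mph"].agg(["mean", "count"]))
```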

Keywords: traffic congestion, multistate speed distribution, traffic occupancy, Dirichlet process mixtures of generalized linear model, Bayesian change-point detection

Procedia PDF Downloads 294
2674 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today, websites contain very interesting applications, but there are only a few methodologies to analyze user navigation through a website and to determine whether the website is being put to correct use. Web logs are typically only examined when a major attack or malfunction occurs, yet they contain a lot of interesting information about user activity in the system. Analyzing web logs has become a challenge due to the huge log volume, and finding interesting patterns is not easy because of the size and distribution of the logs and the importance of minor details in each entry. Web logs thus hold very important data about users and the site that has not been put to good use. Retrieving interesting information from the logs gives an idea of what users need, allows users to be grouped according to their various needs, and helps improve the site so that it becomes effective and efficient. The model we built is able to detect attacks or malfunctioning of the system and to perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this solution, which is fully automated; expert knowledge is only used in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files, a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, while the Web Sessions file lists the indices of each web session. Then, DBSCAN and EM algorithms are used iteratively and recursively to obtain the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance and the silhouette coefficient as parameters, these algorithms self-evaluate in order to feed better parameter values into the next run. If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Associative Rule Learning Module; if it outputs confidence and support values of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used in anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
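
A minimal sketch of the session-clustering step is shown below, assuming sessions are represented as lists of URL indices (as in the Indexed URLs / Web Sessions files described above); it is not the full pipeline. Sessions are vectorized as URL-frequency vectors, clustered with DBSCAN, and scored with the silhouette coefficient; the toy sessions are placeholders.

```python
# Hedged sketch: vectorize web sessions, cluster with DBSCAN, score with silhouette.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

def sessions_to_matrix(sessions, n_urls):
    """Turn each web session (list of URL indices) into a URL-frequency vector."""
    X = np.zeros((len(sessions), n_urls))
    for row, session in zip(X, sessions):
        for url_idx in session:
            row[url_idx] += 1
    return X

# Toy sessions standing in for the Web Sessions / Indexed URLs files.
sessions = [[0, 1, 2], [0, 1, 2, 2], [5, 6], [5, 6, 7], [0, 2], [5, 7]]
X = sessions_to_matrix(sessions, n_urls=8)

labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(X)
mask = labels != -1                        # silhouette is undefined for noise points
if len(set(labels[mask])) > 1:
    print("silhouette:", silhouette_score(X[mask], labels[mask]))
print("cluster labels:", labels)
```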

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 288
2673 Design and Simulation of a Radiation Spectrometer Using Scintillation Detectors

Authors: Waleed K. Saib, Abdulsalam M. Alhawsawi, Essam Banoqitah

Abstract:

The idea of this research is to design a radiation spectrometer using an LSO scintillation detector coupled to a C-series SiPM (silicon photomultiplier). The device can be used to detect gamma and X-ray radiation and is also designed to estimate the activity of the source contamination. The SiPM detects light in the visible range above the threshold and reads it out as counts. Three gamma sources were used for these experiments, Cs-137, Am-241 and Co-60, with various activities. These sources were applied in four experiments: operating the SiPM as a spectrometer, energy resolution, pile-up settings, and efficiency. The SiPM was connected to an MCA to perform as a spectrometer. Cerium-doped lutetium silicate (Lu₂SiO₅), with a light yield of 26,000 photons/MeV, was coupled with the SiPM. As a result, all the main features of Cs-137, Am-241 and Co-60 were identified in the MCA. The experiment shows how photon energy and probability of interaction are inversely related: total attenuation decreases as photon energy increases. An analytical calculation was made to obtain the FWHM resolution for each gamma source. The FWHM resolution for Am-241 (59 keV) is 28.75%, for Cs-137 (662 keV) is 7.85%, for Co-60 (1173 keV) is 4.46% and for Co-60 (1332 keV) is 3.70%. Moreover, the experiment shows that the dead time and the number of counts decreased when pile-up rejection was disabled, and the FWHM decreased when pile-up rejection was enabled. The efficiencies were calculated at four different distances from the detector: 2, 4, 8 and 16 cm. The detection efficiency was observed to decline exponentially with increasing distance from the detector face. Conclusively, the SiPM board operated with an LSO scintillator crystal as a spectrometer, and the SiPM energy resolution for the three gamma sources used compares decently with that of other PMTs.
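
Since energy resolution is conventionally reported as FWHM divided by the peak energy, the reported percentages imply the absolute FWHM values computed below; this back-calculation is an illustration, not additional data from the paper.

```python
# Back-calculate the absolute FWHM (keV) implied by the reported resolutions,
# using resolution(%) = 100 * FWHM / E_peak.
peaks_keV = {"Am-241": 59, "Cs-137": 662, "Co-60 (1173)": 1173, "Co-60 (1332)": 1332}
resolution_pct = {"Am-241": 28.75, "Cs-137": 7.85, "Co-60 (1173)": 4.46, "Co-60 (1332)": 3.70}

for source, energy in peaks_keV.items():
    fwhm = resolution_pct[source] / 100 * energy
    print(f"{source}: FWHM ~ {fwhm:.1f} keV")
# Am-241 ~ 17.0 keV, Cs-137 ~ 52.0 keV, Co-60 ~ 52.3 keV and 49.3 keV
```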

Keywords: PMT, radiation, radiation detection, scintillation detectors, silicon photomultiplier, spectrometer

Procedia PDF Downloads 155
2672 Model-Based Fault Diagnosis in Carbon Fiber Reinforced Composites Using Particle Filtering

Authors: Hong Yu, Ion Matei

Abstract:

Carbon fiber reinforced composites (CFRP) used as aircraft structures are subject to lightning strikes, putting structural integrity at risk. Indirect damage may occur after a lightning strike, where the internal structure can be damaged by the excessive heat induced by the lightning current while the surface of the structure remains intact. Three damage modes may be observed after a lightning strike: fiber breakage, inter-ply delamination and intra-ply cracks. The assessment of internal damage states in composites is challenging due to the complicated microstructure, inherent uncertainties, and the existence of multiple damage modes. In this work, a model-based approach is adopted to diagnose faults in carbon composites after lightning strikes. A resistor network model is implemented to relate the overall electrical and thermal conduction behavior under a simulated lightning current waveform to the intrinsic temperature-dependent material properties, microstructure and degradation of the materials. A fault detection and identification (FDI) module utilizes the physics-based model and a particle filtering algorithm to identify the damage mode as well as to calculate the probability of structural failure. Extensive simulation results are provided to substantiate the proposed fault diagnosis methodology for both single-fault and multiple-fault cases. The approach is also demonstrated on transient resistance data collected from an IM7/epoxy laminate under a simulated lightning strike.
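
The idea of weighting candidate damage states against measurements can be pictured with a generic bootstrap particle filter; the sketch below estimates a hidden damage parameter from noisy, resistance-like observations and is not the authors' resistor-network model. The measurement model and noise levels are assumptions for illustration.

```python
# Generic bootstrap particle filter: estimate a slowly drifting damage severity x
# from noisy scalar observations y (a stand-in for transient resistance data).
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 2000, 50

true_x = 0.2
particles = rng.uniform(0.0, 1.0, n_particles)       # prior over damage severity
weights = np.full(n_particles, 1.0 / n_particles)

def measure(x, noise=0.0):
    """Toy measurement model: observed resistance grows with damage severity."""
    return 1.0 + 4.0 * x + noise

for _ in range(n_steps):
    y = measure(true_x, rng.normal(0.0, 0.05))        # synthetic observation
    particles += rng.normal(0.0, 0.01, n_particles)   # process noise (damage drift)
    # Re-weight particles by the likelihood of the new observation.
    likelihood = np.exp(-0.5 * ((y - measure(particles)) / 0.05) ** 2)
    weights *= likelihood
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

print("estimated damage severity:", np.sum(weights * particles))
```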

Keywords: carbon composite, fault detection, fault identification, particle filter

Procedia PDF Downloads 195
2671 Advanced Magnetic Resonance Imaging in Differentiation of Neurocysticercosis and Tuberculoma

Authors: Rajendra N. Ghosh, Paramjeet Singh, Niranjan Khandelwal, Sameer Vyas, Pratibha Singhi, Naveen Sankhyan

Abstract:

Background: Tuberculoma and neurocysticercosis (NCC) are the two most common intracranial infections in developing countries. They often mimic each other on neuroimaging and, in the absence of typical imaging features, cause significant diagnostic dilemmas. Differentiation is extremely important to avoid empirical exposure to antitubercular medications or nonspecific treatment causing disease progression. Purpose: Better characterization and differentiation of CNS tuberculoma and NCC by using morphological and multiple advanced functional MRI techniques. Material and Methods: A total of fifty untreated patients (20 tuberculoma and 30 NCC) were evaluated using conventional and advanced sequences such as CISS, SWI, DWI, DTI, magnetization transfer (MT), T2 relaxometry (T2R), perfusion and spectroscopy. rCBV, ADC, FA, T2R and MTR values and metabolite ratios were calculated from the lesion and from normal parenchyma. Diagnosis was confirmed by typical biochemical, histopathological and imaging features. Results: CISS was the most useful sequence for scolex detection (90% on CISS vs 73% on routine sequences). SWI showed higher scolex detection ability. Mean values of ADC, FA and T2R from the core and rCBV from the wall of the lesion were significantly different between tuberculoma and NCC (p < 0.05). Mean values of rCBV, ADC, T2R and FA for tuberculoma and NCC were (3.36 vs 1.3), (1.09×10⁻³ vs 1.4×10⁻³), (0.13×10⁻³ vs 0.09×10⁻³) and (88.65 ms vs 272.3 ms), respectively. Tuberculomas showed a high lipid peak, more choline and lower creatine, with a Ch/Cr ratio > 1. The T2R value was the most significant parameter for differentiation, and cut-off values for each significant parameter have been proposed. Conclusion: Quantitative MRI in combination with conventional sequences can better characterize and differentiate similar-appearing tuberculoma and NCC and may be incorporated into routine protocols, potentially avoiding brain biopsy and empirical therapy.

Keywords: advanced functional MRI, differentiation, neurocysticercosis, tuberculoma

Procedia PDF Downloads 567
2670 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data; Impact of Image format

Authors: Maryam Fallahpoor, Biswajeet Pradhan

Abstract:

Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and, thereby, reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
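
A minimal sketch of the preprocessing described above is given below, assuming the input is already loaded as a 3D HU array with known voxel spacing (file I/O omitted); it is not the authors' exact code.

```python
# Hedged sketch: clip Hounsfield units, resample to 1 mm isotropic voxels,
# resize to 128 x 128 x 60, then zero-center and normalize.
import numpy as np
from scipy import ndimage

def preprocess_ct(volume_hu: np.ndarray, spacing_mm: tuple) -> np.ndarray:
    """volume_hu: 3D array in HU (slices, rows, cols); spacing_mm: (z, y, x) voxel spacing."""
    vol = np.clip(volume_hu, -1000, 400)                       # intensity window
    zoom_iso = [s / 1.0 for s in spacing_mm]                   # resample to 1 mm voxels
    vol = ndimage.zoom(vol, zoom_iso, order=1)
    target = (60, 128, 128)                                    # slices, height, width
    zoom_to_target = [t / s for t, s in zip(target, vol.shape)]
    vol = ndimage.zoom(vol, zoom_to_target, order=1)
    vol = (vol - vol.mean()) / (vol.std() + 1e-8)              # zero-center and normalize
    return vol.astype(np.float32)
```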

Keywords: deep learning, COVID-19 detection, NIFTI format, DICOM format

Procedia PDF Downloads 88
2669 New Derivatives 7-(diethylamino)quinolin-2-(1H)-one Based Chalcone Colorimetric Probes for Detection of Bisulfite Anion in Cationic Micellar Media

Authors: Guillermo E. Quintero, Edwin G. Perez, Oriel Sanchez, Christian Espinosa-Bustos, Denis Fuentealba, Margarita E. Aliaga

Abstract:

Bisulfite ion (HSO3-) has been used as a preservative in food, drinks, and medication. However, it is well known that HSO3- can cause health problems such as asthma and allergic reactions in people. Because of this, the development of analytical methods for detecting this ion has gained great interest. In line with the above, the current use of colorimetric and/or fluorescent probes as a detection technique has acquired great relevance due to their high sensitivity and accuracy. In this context, 2-quinolinone derivatives have been found to possess promising activity as antiviral agents, sensitizers in solar cells, antifungals, antioxidants, and sensors. In particular, 7-(diethylamino)-2-quinolinone derivatives have attracted attention in recent years since their suitable photophysical properties make them promising fluorescent probes. In addition, there is evidence that photophysical properties and reactivity can be affected by the study medium, such as micellar media. Based on the above background, chalcone-based 7-(diethylamino)-2-quinolinone derivatives should be able to be incorporated into a cationic micellar environment (cetyltrimethylammonium bromide, CTAB). Furthermore, the supramolecular control induced by the micellar environment should increase the reactivity of these derivatives towards nucleophilic analytes such as HSO3- (Michael-type addition reaction), leading to the generation of new colorimetric and/or fluorescent probes. In the present study, two 7-(diethylamino)-2-quinolinone-based chalcone derivatives, DQD1-2, were synthesized according to the method reported in the literature. These derivatives were structurally characterized by 1H and 13C NMR and HRMS-ESI. In addition, UV-Vis and fluorescence studies determined absorption bands near 450 nm, emission bands near 600 nm, fluorescence quantum yields near 0.01, and fluorescence lifetimes of 5 ps. These photophysical properties were improved in the presence of a cationic micellar medium using CTAB, thanks to the formation of adducts presenting association constants of the order of 2.5×10⁵ M⁻¹, which increased the quantum yields to 0.12 and gave two fluorescence lifetimes near 120 and 400 ps for DQD1 and DQD2. Besides, thanks to the presence of the micellar medium, the reactivity of these derivatives with nucleophilic analytes, such as HSO3-, was increased. This was established through kinetic studies, which demonstrated an increase in the bimolecular rate constants in the presence of the micellar medium. Finally, probe DQD1 was chosen as the best sensor since it detected HSO3- with excellent results.

Keywords: bisulfite detection, cationic micelle, colorimetric probes, quinolinone derivatives

Procedia PDF Downloads 94
2668 Multiaxial Fatigue Analysis of a High Performance Nickel-Based Superalloy

Authors: P. Selva, B. Lorraina, J. Alexis, A. Seror, A. Longuet, C. Mary, F. Denard

Abstract:

Over the past four decades, the fatigue behavior of nickel-based alloys has been widely studied. However, in recent years, significant advances in the fabrication process leading to grain size reduction have been made in order to improve fatigue properties of aircraft turbine discs. Indeed, a change in particle size affects the initiation mode of fatigue cracks as well as the fatigue life of the material. The present study aims to investigate the fatigue behavior of a newly developed nickel-based superalloy under biaxial-planar loading. Low Cycle Fatigue (LCF) tests are performed at different stress ratios so as to study the influence of the multiaxial stress state on the fatigue life of the material. Full-field displacement and strain measurements as well as crack initiation detection are obtained using Digital Image Correlation (DIC) techniques. The aim of this presentation is first to provide an in-depth description of both the experimental set-up and protocol: the multiaxial testing machine, the specific design of the cruciform specimen and performances of the DIC code are introduced. Second, results for sixteen specimens related to different load ratios are presented. Crack detection, strain amplitude and number of cycles to crack initiation vs. triaxial stress ratio for each loading case are given. Third, from fractographic investigations by scanning electron microscopy it is found that the mechanism of fatigue crack initiation does not depend on the triaxial stress ratio and that most fatigue cracks initiate from subsurface carbides.

Keywords: cruciform specimen, multiaxial fatigue, nickel-based superalloy

Procedia PDF Downloads 295
2667 An Intelligent Transportation System for Safety and Integrated Management of Railway Crossings

Authors: M. Magrini, D. Moroni, G. Palazzese, G. Pieri, D. Azzarelli, A. Spada, L. Fanucci, O. Salvetti

Abstract:

Railway crossings are complex entities whose optimal management cannot be addressed without the help of an intelligent transportation system integrating information on both train and vehicular flows. In this paper, we propose an integrated system named SIMPLE (Railway Safety and Infrastructure for Mobility applied at level crossings) that, while providing unparalleled safety in railway level crossings, collects data on rail and road traffic and provides value-added services to citizens and commuters. Such services include, for example, alerts to drivers via variable message signs and suggestions for alternative routes, towards more sustainable, eco-friendly and efficient urban mobility. To achieve these goals, SIMPLE is organized as a System of Systems (SoS), with a modular architecture whose components range from specially-designed radar sensors for obstacle detection to smart ETSI M2M-compliant camera networks for urban traffic monitoring. Computational units for performing forecasts according to adaptive models of train and vehicular traffic are also included. The proposed system has been tested and validated during an extensive trial held in the mid-sized Italian town of Montecatini, a paradigmatic case where the rail network is inextricably linked with the fabric of the city. Results of the tests are reported and discussed.

Keywords: Intelligent Transportation Systems (ITS), railway, railroad crossing, smart camera networks, radar obstacle detection, real-time traffic optimization, IoT, ETSI M2M, transport safety

Procedia PDF Downloads 497
2666 Automatic Flood Prediction Using Rainfall Runoff Model in Moravian-Silesian Region

Authors: B. Sir, M. Podhoranyi, S. Kuchar, T. Kocyan

Abstract:

Rainfall-runoff models play an important role in hydrological predictions. However, the model is only one part of the process of creating a flood prediction. The aim of this paper is to show the process of successful prediction for the flood event of May 15–18, 2014. The prediction was performed by the rainfall-runoff model HEC-HMS, one of the models computed within the Floreon+ system. The paper briefly evaluates the results of automatic hydrologic prediction on the river Olše catchment and its gages Český Těšín and Věřňovice.

Keywords: flood, HEC-HMS, prediction, rainfall, runoff

Procedia PDF Downloads 395
2665 Liquid Chromatography Microfluidics for Detection and Quantification of Urine Albumin Using Linear Regression Method

Authors: Patricia B. Cruz, Catrina Jean G. Valenzuela, Analyn N. Yumang

Abstract:

Nearly a hundred per million of the Filipino population are diagnosed with Chronic Kidney Disease (CKD). The early stage of CKD has no symptoms and can only be discovered once the patient undergoes urinalysis. Over the years, different methods have been discovered and used for the quantification of urinary albumin, such as immunochemical assays, most of which require large machinery with a high cost in maintenance and resources, and the dipstick test, which has yet to be proven and is still debated as a reliable method for detecting the early stages of microalbuminuria. This research study involves the use of the liquid chromatography concept in a microfluidic instrument with a biosensor, as the means of separation and detection respectively, and linear regression to quantify human urinary albumin. The researchers' main objective was to create a miniature system that quantifies and detects patients' urinary albumin while reducing the volume used per five test samples. For this study, 30 urine samples of unknown albumin concentrations were tested using the VITROS Analyzer and the microfluidic system for comparison. Based on the data from both methods, the actual vs. predicted regression showed a positive linear relationship with an R² of 0.9995 and a linear equation of y = 1.09x + 0.07, indicating that the predicted values and actual values are approximately equal. Furthermore, the microfluidic instrument uses 75% less total volume, sample and reagents combined, compared to the VITROS Analyzer per five test samples.
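
The method-comparison step described above amounts to an ordinary least-squares fit between the microfluidic readings and the reference analyzer values; the sketch below uses illustrative placeholder data rather than the study's 30 samples (which reported y = 1.09x + 0.07 with R² = 0.9995).

```python
# Ordinary least-squares comparison of predicted (microfluidic) vs. actual (reference) albumin.
import numpy as np

actual = np.array([5.0, 10.0, 20.0, 40.0, 80.0])      # reference analyzer values, hypothetical
predicted = np.array([5.6, 11.0, 21.9, 43.7, 87.3])   # microfluidic system values, hypothetical

slope, intercept = np.polyfit(actual, predicted, 1)
residuals = predicted - (slope * actual + intercept)
r_squared = 1 - residuals.var() / predicted.var()

print(f"y = {slope:.2f}x + {intercept:.2f}, R^2 = {r_squared:.4f}")
```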

Keywords: Chronic Kidney Disease, Linear Regression, Microfluidics, Urinary Albumin

Procedia PDF Downloads 136
2664 Determination of Neighbor Node in Consideration of the Imaging Range of Cameras in Automatic Human Tracking System

Authors: Kozo Tanigawa, Tappei Yotsumoto, Kenichi Takahashi, Takao Kawamura, Kazunori Sugahara

Abstract:

An automatic human tracking system using mobile agent technology is realized, in which a mobile agent moves in accordance with the migration of a target person. In this paper, we propose a method for determining the neighbor node in consideration of the imaging range of the cameras.

Keywords: human tracking, mobile agent, Pan/Tilt/Zoom, neighbor relation

Procedia PDF Downloads 516
2663 The Fabrication of Stress Sensing Based on Artificial Antibodies to Cortisol by Molecular Imprinted Polymer

Authors: Supannika Klangphukhiew, Roongnapa Srichana, Rina Patramanon

Abstract:

Cortisol has been used as a well-known commercial stress biomarker. A homeostatic response to psychological stress is indicated by an increased level of cortisol produced by the hypothalamus-pituitary-adrenal (HPA) axis, and chronic psychological stress leading to a high level of cortisol is related to several health problems. In this study, a cortisol biosensor that mimics natural receptors was fabricated. The artificial antibodies were prepared using the molecular imprinted polymer technique, which can imitate the performance of a natural anti-cortisol antibody with high stability. Cortisol-molecular imprinted polymer (cortisol-MIP) was obtained using a multi-step swelling and polymerization protocol with cortisol as the target molecule, combining methacrylic acid:acrylamide (2:1) with bisacryloyl-1,2-dihydroxy-1,2-ethylenediamine and ethylenedioxy-N-methylamphetamine as cross-linkers. The cortisol-MIP was integrated into the sensor by coating it on a disposable screen-printed carbon electrode (SPCE) for portable electrochemical analysis. The physical properties of the cortisol-MIP were characterized by electron microscopy. The binding characteristics were evaluated via changes in the covalent patterns in FTIR spectra, which were related to the voltammetric response. The performance of the cortisol-MIP-modified SPCE was investigated in terms of detection range and selectivity, with a detection limit of 1.28 ng/ml. The disposable cortisol biosensor represents an application of the MIP technique to recognize steroids according to their structures, with a feasibility and cost-effectiveness that can be developed for point-of-care use.

Keywords: stress biomarker, cortisol, molecular imprinted polymer, screen-printed carbon electrode

Procedia PDF Downloads 273
2662 Use of a Chagas Urine Nanoparticle Test (Chunap) to Correlate with Parasitemia Levels in T. cruzi/HIV Co-Infected Patients

Authors: Yagahira E. Castro-Sesquen, Robert H. Gilman, Carolina Mejia, Daniel E. Clark, Jeong Choi, Melissa J. Reimer-Mcatee, Rocio Castro, Jorge Flores, Edward Valencia-Ayala, Faustino Torrico, Ricardo Castillo-Neyra, Lance Liotta, Caryn Bern, Alessandra Luchini

Abstract:

Early diagnosis of reactivation of Chagas disease in HIV patients could be lifesaving; however, in Latin America the diagnosis is performed by detection of parasitemia by microscopy, which lacks sensitivity. The objective was to evaluate whether levels of T. cruzi antigens in urine, determined by Chunap (Chagas urine nanoparticle test), are correlated with parasitemia levels in T. cruzi/HIV co-infected patients. T. cruzi antigens in the urine of HIV patients (N=55: 31 T. cruzi infected and 24 T. cruzi serology negative) were concentrated using hydrogel particles and quantified by Western blot and a calibration curve. The percentage of Chagas-positive patients determined by Chunap compared to blood microscopy, qPCR, and ELISA was 100% (6/6), 95% (18/19) and 74% (23/31), respectively. Chunap specificity was 91.7%. Linear regression analysis demonstrated a direct relationship between parasitemia levels (determined by qPCR) and urine T. cruzi antigen concentrations (p<0.001). A cut-off of > 105 pg was chosen to determine patients with reactivation of Chagas disease (6/6). Urine antigen concentration was significantly higher among patients with CD4+ lymphocyte counts below 200/mL (p=0.045). Chunap shows potential for early detection of reactivation and, with appropriate adaptation, can be used for monitoring Chagas disease status in T. cruzi/HIV co-infected patients.

Keywords: antigenuria, Chagas disease, Chunap, nanoparticles, parasitemia, poly N-isopropylacrylamide (NIPAm)/trypan blue particles (polyNIPAm/TB), reactivation of Chagas disease.

Procedia PDF Downloads 377
2661 Development of a Direct Immunoassay for Human Ferritin Using Diffraction-Based Sensing Method

Authors: Joel Ballesteros, Harriet Jane Caleja, Florian Del Mundo, Cherrie Pascual

Abstract:

Diffraction-based sensing was utilized in the quantification of human ferritin in blood serum to provide an alternative to the label-based immunoassays currently used in clinical diagnostics and research. The diffraction intensity was measured by the diffractive optics technology or dotLab™ system. Two methods were evaluated in this study: a direct immunoassay and a direct sandwich immunoassay. In the direct immunoassay, human ferritin was captured by human ferritin antibodies immobilized on an avidin-coated sensor, while the direct sandwich immunoassay had an additional step for the binding of a detector human ferritin antibody on the analyte complex. Both methods were repeatable, with coefficient of variation values below 15%. The direct sandwich immunoassay had a linear response from 10 to 500 ng/mL, which is wider than the 100-500 ng/mL of the direct immunoassay. The direct sandwich immunoassay also has a higher calibration sensitivity, with a value of 0.002 Diffractive Intensity (ng mL⁻¹)⁻¹ compared to the 0.004 Diffractive Intensity (ng mL⁻¹)⁻¹ of the direct immunoassay. The limit of detection (LOD) and limit of quantification (LOQ) values of the direct immunoassay were found to be 29 ng/mL and 98 ng/mL, respectively, while the direct sandwich immunoassay has an LOD of 2.5 ng/mL and an LOQ of 8.2 ng/mL. In terms of accuracy, the direct immunoassay had a percent recovery of 88.8-93.0% in PBS, while the direct sandwich immunoassay had 94.1-97.2%. Based on the results, the direct sandwich immunoassay is the better diffraction-based immunoassay in terms of accuracy, LOD, LOQ, linear range, and sensitivity. The direct sandwich immunoassay was utilized in the determination of human ferritin in blood serum, and the results were validated by Chemiluminescent Magnetic Immunoassay (CMIA). The calculated Pearson correlation coefficient was 0.995 and the p-values of the paired-sample t-test were less than 0.5, which shows that the results of the direct sandwich immunoassay were comparable to those of CMIA, and that it could be utilized as an alternative analytical method.

Keywords: biosensor, diffraction, ferritin, immunoassay

Procedia PDF Downloads 354
2660 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection

Authors: Mahshid Arabi

Abstract:

With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but simultaneously, vulnerabilities and security threats have significantly increased. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model to counteract threats and information misuse. This study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, interviews with 30 information security experts were conducted to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine learning-based intrusion detection systems, and network security protocols. AES and RSA encryption algorithms were used for data protection, and machine learning models such as Random Forest and Neural Networks were utilized for intrusion detection. Statistical analyses were performed using SPSS software. To evaluate the effectiveness of the proposed model, T-Test and ANOVA statistical tests were employed, and results were measured using accuracy, sensitivity, and specificity indicators of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings of this study indicate that the comprehensive proposed model reduced cyber-attacks by an average of 85%. Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on the obtained results, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as an effective tool in reducing security risks. This research demonstrates that effective and up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and continuous training of human resources.
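
As an illustration of the machine learning-based intrusion detection component described above, the sketch below trains a Random Forest classifier and reports accuracy, sensitivity and specificity; it is not the study's exact pipeline, and synthetic data stand in for real labeled network traffic.

```python
# Hedged sketch: Random Forest intrusion detector with accuracy/sensitivity/specificity.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced "flow feature" data (hypothetical stand-in for real traffic).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8, 0.2], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_train, y_train)
pred = clf.predict(X_test)

tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print("accuracy   :", accuracy_score(y_test, pred))
print("sensitivity:", tp / (tp + fn))   # recall on the attack class
print("specificity:", tn / (tn + fp))   # recall on the benign class
```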

Keywords: data protection, digital technologies, information security, modern management

Procedia PDF Downloads 30
2659 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data

Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann

Abstract:

Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials is generally dependent on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is, therefore, crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach for a wide frequency range and under different temperature conditions. For this sake, dynamic measurements are carried out on a single lap joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRF) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee an on-line detection of the damage, i.e., delamination in the viscoelastic bonding of the described specimen during frequency-monitored end-of-life testing. For this purpose, an inverse technique, which determines the damage location and size based on the modal frequency shift and on the change of the mode shapes, is presented. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters and a subsequent experimental validation achieved through dynamic measurements of specimens with different, pre-generated crack scenarios and comparison with the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in the ability to adequately identify the material damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually linked to limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup, and outlines, therefore, the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
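
The parameter-identification step can be pictured as a least-squares fit of a generalized Maxwell (Prony-series) modulus to measured frequency data. The sketch below is a simplified stand-in for the FE-based FRF matching described above: it assumes the measurement has already been reduced to a complex modulus over frequency, and it fits synthetic "data" in place of real FRFs.

```python
# Hedged sketch: fit generalized Maxwell parameters by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

def maxwell_modulus(params, omega):
    """E*(w) = E_inf + sum_i E_i * (i w tau_i) / (1 + i w tau_i)."""
    e_inf, *branches = params
    e = np.full_like(omega, e_inf, dtype=complex)
    for e_i, tau_i in zip(branches[0::2], branches[1::2]):
        iwt = 1j * omega * tau_i
        e += e_i * iwt / (1.0 + iwt)
    return e

def residuals(params, omega, e_measured):
    diff = maxwell_modulus(params, omega) - e_measured
    return np.concatenate([diff.real, diff.imag])   # least_squares needs real residuals

# omega and e_measured would come from the shaker/FRF measurements; here they are synthetic.
omega = np.logspace(1, 4, 50)
e_measured = maxwell_modulus([1e6, 5e5, 1e-3, 2e5, 1e-2], omega)

x0 = [5e5, 1e5, 1e-3, 1e5, 1e-2]                    # initial guess: E_inf, (E_i, tau_i) pairs
fit = least_squares(residuals, x0, args=(omega, e_measured), bounds=(0, np.inf))
print("identified parameters:", fit.x)
```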

Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers

Procedia PDF Downloads 205
2658 Hardware-in-the-Loop Test for Automatic Voltage Regulator of Synchronous Condenser

Authors: Ha Thi Nguyen, Guangya Yang, Arne Hejde Nielsen, Peter Højgaard Jensen

Abstract:

The automatic voltage regulator (AVR) plays an important role in the volt/var control of synchronous condensers (SC) in power systems. Testing AVR performance under steady-state and dynamic conditions in a real grid is expensive, inefficient, and hard to achieve. To address this issue, we implement a hardware-in-the-loop (HiL) test for the AVR of an SC to test its steady-state and dynamic performance under different operating conditions. The startup procedure of the system and voltage set point changes are studied to evaluate the AVR hardware response. Overexcitation, underexcitation, and AVR set point loss are tested to compare the performance of the SC with the AVR hardware and that of the simulation. The comparative results demonstrate how the AVR will work in a real system. The results show that the HiL test is an effective approach for testing devices before deployment and is able to parameterize the controller with lower cost, higher efficiency, and more flexibility.

Keywords: automatic voltage regulator, hardware-in-the-loop, synchronous condenser, real time digital simulator

Procedia PDF Downloads 251
2657 Contactless Electromagnetic Detection of Stress Fluctuations in Steel Elements

Authors: M. A. García, J. Vinolas, A. Hernando

Abstract:

Steel is nowadays one of the most important structural materials because of its outstanding mechanical properties. Therefore, in order to pursue a sustainable economic model and to optimize the use of extensive resources, new methods to monitor and prevent failure of steel-based facilities are required. The classical mechanical tests, as for instance building testing, are invasive and destructive. Moreover, for facilities where the steel element is embedded (as in reinforced concrete), these techniques are simply not applicable. Hence, non-invasive monitoring techniques that prevent failure without altering the structural properties of the elements are required. Among them, electromagnetic methods are particularly suitable for non-invasive inspection of the mechanical state of steel-based elements. Magnetoelastic coupling effects induce a modification of the electromagnetic properties of an element upon applied stress. Since most steels are ferromagnetic because of their large Fe content, it is possible to inspect their structure and state in a non-invasive way. We present here a distinct electromagnetic method for the contactless evaluation of internal stress in steel-based elements. In particular, this method relies on measuring the magnetic induction between two coils with the steel specimen placed between them. We found that the stress-induced alteration of the electromagnetic properties of the steel specimen changed this induction, allowing us to detect stress well below half of the elastic limit of the material. Hence, it represents an outstanding non-invasive method to prevent failure in steel-based facilities. We describe the theoretical model, present experimental results to validate it, and finally show a practical application for the detection of stress and inhomogeneities in train railways.

Keywords: magnetoelastic, magnetic induction, mechanical stress, steel

Procedia PDF Downloads 50
2656 Flexible Ethylene-Propylene Copolymer Nanofibers Decorated with Ag Nanoparticles as Effective 3D Surface-Enhanced Raman Scattering Substrates

Authors: Yi Li, Rui Lu, Lianjun Wang

Abstract:

With the rapid development of the chemical industry, the consumption of volatile organic compounds (VOCs) has increased extensively. In the process of VOC production and application, plenty of them have been released into the environment. As a result, this has led to pollution problems not only in soil and groundwater but also risks to human beings. Thus, it is important to develop a sensitive and cost-effective analytical method for trace VOC detection in the environment. Surface-enhanced Raman spectroscopy (SERS), as one of the most sensitive optical analytical techniques, with rapid response, pinpoint accuracy and noninvasive detection, has been widely used for ultratrace analysis. Based on the plasmon resonance on the nanoscale metallic surface, SERS technology can even detect single molecules due to abundant nanogaps (i.e. 'hot spots') on the nanosubstrate. In this work, self-supported flexible silver nitrate (AgNO3)/ethylene-propylene copolymer (EPM) hybrid nanofibers were fabricated by electrospinning. After an in-situ chemical reduction using ice-cold sodium borohydride as the reduction agent, numerous silver nanoparticles were formed on the nanofiber surface. By adjusting the reduction time and AgNO3 content, the morphology and dimension of the silver nanoparticles could be controlled. According to the principles of solid-phase extraction, hydrophobic substances are more likely to partition into the hydrophobic EPM membrane in an aqueous environment, while water and other polar components are excluded from the analytes. By the enrichment of the EPM fibers, the number of hydrophobic molecules located on the 'hot spots' generated from criss-crossed nanofibers is greatly increased, which further enhances the SERS signal intensity. The as-prepared Ag/EPM hybrid nanofibers were first employed to detect a common SERS probe molecule (p-aminothiophenol), with the detection limit down to 10-12 M, which demonstrated an excellent SERS performance. To further study the application of the fabricated substrate for monitoring hydrophobic substances in water, several typical VOCs, such as benzene, toluene and p-xylene, were selected as model compounds. The results showed that the characteristic peaks of these target analytes in the mixed aqueous solution could be distinguished even at a concentration of 10-6 M after a multi-peak Gaussian fitting process, including C-H bending (850 cm-1) and C-C ring stretching (1581 cm-1, 1600 cm-1) of benzene, C-H bending (844 cm-1, 1151 cm-1), C-C ring stretching (1001 cm-1) and CH3 bending vibration (1377 cm-1) of toluene, and C-H bending (829 cm-1) and C-C stretching (1614 cm-1) of p-xylene. The SERS substrate has remarkable advantages which combine the enrichment capacity of EPM and the Raman enhancement of Ag nanoparticles. Meanwhile, the huge specific surface area resulting from electrospinning is beneficial for increasing the number of adsorption sites and promoting 'hot spot' formation. In summary, this work provides powerful potential for rapid, on-site and accurate detection of trace VOCs using a portable Raman spectrometer.

Keywords: electrospinning, ethylene-propylene copolymer, silver nanoparticles, SERS, VOCs

Procedia PDF Downloads 160
2655 Design of Bacterial Pathogens Identification System Based on Scattering of Laser Beam Light and Classification of Binned Plots

Authors: Mubashir Hussain, Mu Lv, Xiaohan Dong, Zhiyang Li, Bin Liu, Nongyue He

Abstract:

Detection and classification of microbes have a vast range of applications in biomedical engineering, especially in the detection, characterization, and quantification of bacterial contaminants. For the identification of pathogens, different techniques are emerging in the field of biomedical engineering. The latest technology uses light scattering, capable of identifying different pathogens without any need for biochemical processing. In the Bacterial Pathogens Identification System (BPIS), a laser beam passes through the sample and light scatters off it. An assembly of photodetectors surrounds the sample at different angles to detect the scattered light. The algorithm of the system consists of two parts: (a) library files and (b) a comparator. Library files contain data of known species of bacterial microbes in the form of binned plots, while the comparator compares data of an unknown sample with the library files. Using the collected data of an unknown bacterial species, the highest voltage values are stored in the form of peaks and arranged in 3D histograms to find the frequency of occurrence. The resulting data are compared with the library files of known bacterial species; if the sample data match any library file of a known bacterial species, the sample is identified as that microbe. An experiment was performed to identify three different bacterial species: Enterococcus faecalis, Pseudomonas aeruginosa, and Escherichia coli. By applying the algorithm using library files of the given samples, the results were promising. This system is potentially applicable to several biomedical areas, especially those related to cell morphology.
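
A minimal sketch of the library/comparator idea is shown below, assuming peak photodetector voltages are binned into histograms per known species and an unknown sample is matched to the closest library entry; the voltage data are synthetic placeholders and the distance metric is an assumption, not the paper's exact comparator.

```python
# Hedged sketch: build "binned plots" from peak voltages and match an unknown sample.
import numpy as np

rng = np.random.default_rng(1)
bins = np.linspace(0.0, 5.0, 21)                      # voltage bins (V)

def binned_plot(peak_voltages):
    hist, _ = np.histogram(peak_voltages, bins=bins, density=True)
    return hist

# Library files: peak-voltage distributions for known species (synthetic).
library = {
    "Enterococcus faecalis":  binned_plot(rng.normal(1.0, 0.20, 500)),
    "Pseudomonas aeruginosa": binned_plot(rng.normal(2.5, 0.30, 500)),
    "Escherichia coli":       binned_plot(rng.normal(3.8, 0.25, 500)),
}

# Comparator: identify the unknown sample as the library entry whose binned plot
# is closest in Euclidean distance.
unknown = binned_plot(rng.normal(2.5, 0.30, 400))
match = min(library, key=lambda name: np.linalg.norm(library[name] - unknown))
print("identified as:", match)
```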

Keywords: microbial identification, laser scattering, peak identification, binned plots classification

Procedia PDF Downloads 150
2654 Ultrafiltration Process Intensification for Municipal Wastewater Reuse: Water Quality, Optimization of Operating Conditions and Fouling Management

Authors: J. Yang, M. Monnot, T. Eljaddi, L. Simonian, L. Ercolei, P. Moulin

Abstract:

The application of membrane technology to wastewater treatment has expanded rapidly under increasingly stringent legislation and environmental protection requirements. At the same time, water resources are becoming precious, and water reuse has gained popularity. Ultrafiltration (UF), in particular, is a very promising technology for water reuse as it can retain organic matter, suspended solids, colloids, and microorganisms. Nevertheless, few studies in the literature deal with the operating optimization of UF as a tertiary treatment for water reuse at semi-industrial scale. This study therefore aims to assess permeate water quality and to optimize operating parameters (maximizing productivity and minimizing irreversible fouling) through the operation of a UF pilot plant under real conditions. A fully automatic semi-industrial UF pilot plant with periodic classic backwashes (CB) and air backwashes (AB) was set up to filter the secondary effluent of an urban wastewater treatment plant (WWTP) in France. In this plant, the secondary treatment consists of a conventional activated sludge process followed by a sedimentation tank. The UF process was thus defined as a tertiary treatment and was operated under constant flux. A combination of CB and chlorinated AB was used for better fouling management. A 200 kDa hollow fiber membrane was used in the UF module, with an initial permeability (for WWTP outlet water) of 600 L·m⁻²·h⁻¹·bar⁻¹ and a total filtration surface of 9 m². Fifteen filtration conditions with different fluxes, filtration times, and air backwash frequencies were each operated for more than 40 hours to observe their hydraulic filtration performance. By comparison, the most sustainable condition was a flux of 60 L·h⁻¹·m⁻², a filtration time of 60 min, and a backwash frequency of 1 AB every 3 CBs. This optimized condition stands out from the others with a water recovery rate above 92%, better irreversible fouling control, stable permeability variation, efficient backwash reversibility (80% for CB and 150% for AB), and no chemical washing required over 40 h of filtration. For all tested conditions, the permeate water quality met the water reuse guidelines of the World Health Organization (WHO), French standards, and the regulation of the European Parliament adopted in May 2020 setting minimum requirements for water reuse in agriculture. In the permeate, the total suspended solids, biochemical oxygen demand, and turbidity were reduced to < 2 mg·L⁻¹, ≤ 10 mg·L⁻¹, and < 0.5 NTU, respectively; Escherichia coli and Enterococci showed > 5 log removal, and the other required microbiological parameters were below detection limits. Additionally, because of the COVID-19 pandemic, coronavirus SARS-CoV-2 was measured in the raw wastewater of the WWTP, the UF feed, and the UF permeate in November 2020. The raw wastewater tested positive above the detection limit but below the quantification limit, while, interestingly, the UF feed and UF permeate tested negative for SARS-CoV-2 by PCR assay. In summary, this work confirms the great interest in UF as an intensified tertiary treatment for water reuse and gives operational indications for future industrial-scale production of reclaimed water.
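
The following back-of-the-envelope Python sketch shows how a net water recovery rate for one filtration sequence could be estimated from the flux, filtration time, and backwash frequency reported above. The backwash volumes and the interpretation of "1 AB every 3 CBs" as a sequence of three CB cycles followed by one AB cycle are assumptions for illustration only, not the pilot plant's actual data.

```python
# Back-of-the-envelope sketch of net water recovery over one UF sequence
# (filtration cycles interrupted by backwashes). Backwash volumes and the
# sequence layout are assumed values for illustration, not pilot data.
MEMBRANE_AREA_M2 = 9.0     # total filtration surface from the pilot description
FLUX_LMH = 60.0            # L/(m^2.h), optimized condition
FILTRATION_MIN = 60.0      # filtration time per cycle, optimized condition
CB_VOLUME_L = 30.0         # assumed volume of a classic backwash
AB_VOLUME_L = 60.0         # assumed volume of an air backwash
CYCLES_PER_SEQUENCE = 4    # assumed: 3 CB-terminated cycles then 1 AB-terminated cycle

permeate_per_cycle = FLUX_LMH * MEMBRANE_AREA_M2 * (FILTRATION_MIN / 60.0)
produced = permeate_per_cycle * CYCLES_PER_SEQUENCE
consumed = 3 * CB_VOLUME_L + 1 * AB_VOLUME_L

recovery = (produced - consumed) / produced
print(f"net water recovery over the sequence: {recovery:.1%}")
```

With these assumed backwash volumes the sketch yields a recovery on the order of 90%, in the same range as the > 92% reported for the optimized condition; the actual figure depends on the real backwash volumes.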

Keywords: semi-industrial UF pilot plant, water reuse, fouling management, coronavirus

Procedia PDF Downloads 114
2653 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing

Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca de Marchi

Abstract:

This paper presents an approach that uses compressed sensing for signal encoding and information transfer within a guided-wave sensor network comprised of specially designed frequency steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a chosen direction, according to the frequency components of its actuation signal, which enlarges the monitored area. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on compressed sensing matching pursuit and quadrature amplitude modulation (QAM). Once the signal has been encoded into binary form, the information is transmitted between the nodes of the network. The message reaches the last node, where it is finally decoded and processed for damage detection and localization purposes. The main aim of the investigation is to determine the location of the detected damage using the reconstructed signals. The study demonstrates that the steering capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information toward a chosen area in a specific direction of the investigated structure.
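
A compact Python sketch of the two encoding stages described above is given below: a greedy matching-pursuit approximation of a recorded waveform over a dictionary, followed by mapping the quantized coefficient bits to 16-QAM symbols. The random dictionary, the 8-bit quantization, and the constellation mapping are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of compressed-sensing-style encoding: matching pursuit over a
# dictionary, then 8-bit quantization and 16-QAM mapping of the coefficients.
# Dictionary, quantization, and constellation are assumptions for illustration.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=8):
    """Greedy selection of atoms; returns (indices, coefficients)."""
    residual = signal.copy()
    indices, coeffs = [], []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        c = correlations[k]
        residual = residual - c * dictionary[:, k]
        indices.append(k)
        coeffs.append(c)
    return np.array(indices), np.array(coeffs)

def bits_to_16qam(bits):
    """Map groups of 4 bits to 16-QAM symbols (simple toy mapping)."""
    levels = np.array([-3, -1, 1, 3], dtype=float)
    bits = bits.reshape(-1, 4)
    i = levels[bits[:, 0] * 2 + bits[:, 1]]
    q = levels[bits[:, 2] * 2 + bits[:, 3]]
    return i + 1j * q

rng = np.random.default_rng(1)
dictionary = rng.standard_normal((256, 64))
dictionary /= np.linalg.norm(dictionary, axis=0)        # unit-norm atoms
signal = dictionary[:, [3, 17, 40]] @ np.array([1.0, -0.6, 0.3])

idx, coef = matching_pursuit(signal, dictionary)
# Crude 8-bit quantization of coefficients before modulation (assumption)
quantized = np.round((coef - coef.min()) / (np.ptp(coef) + 1e-12) * 255).astype(np.uint8)
bits = np.unpackbits(quantized)
symbols = bits_to_16qam(bits)
print(f"{len(idx)} atoms -> {bits.size} bits -> {symbols.size} QAM symbols")
```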

Keywords: data compression, ultrasonic communication, guided waves, FEM analysis

Procedia PDF Downloads 124
2652 From Modeling of Data Structures towards Automatic Programs Generating

Authors: Valentin P. Velikov

Abstract:

Automatic program generation saves time and human resources and yields syntactically clear and logically correct modules. Fourth-generation programming languages are concerned with modeling the data and the processes of the subject area, as well as with obtaining a framework of the corresponding information system. The application can be separated into an interface and business logic. This means that, for interactive generation of the needed system, either an already existing toolkit is used or a new one is created.
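
The following toy Python sketch illustrates the data-model-driven generation idea described above: a declarative entity description is turned into a class skeleton by a small generator. The schema format and the shape of the generated code are assumptions used only to illustrate the 4GL-style generation principle, not the author's toolkit.

```python
# Toy sketch of data-model-driven program generation: a declarative entity
# description is turned into a Python class skeleton by a small generator.
# The schema format and generated code shape are illustrative assumptions.
from textwrap import indent

ENTITY = {
    "name": "Customer",
    "fields": [("customer_id", "int"), ("name", "str"), ("email", "str")],
}

def generate_class(entity):
    """Emit a Python data-holder class from the declarative entity model."""
    args = ", ".join(f"{f}: {t}" for f, t in entity["fields"])
    body = "\n".join(f"self.{f} = {f}" for f, _ in entity["fields"])
    return (
        f"class {entity['name']}:\n"
        + indent(f"def __init__(self, {args}):\n" + indent(body, "    "), "    ")
        + "\n"
    )

source = generate_class(ENTITY)
print(source)              # inspect the generated module text
namespace = {}
exec(source, namespace)    # load the generated code as a usable class
customer = namespace["Customer"](1, "Ada", "ada@example.com")
print(customer.name)
```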

Keywords: computer science, graphical user interface, user dialog interface, dialog frames, data modeling, subject area modeling

Procedia PDF Downloads 305
2651 Array Type Miniaturized Ultrasonic Sensors for Detecting Sinkhole in the City

Authors: Won Young Choi, Kwan Kyu Park

Abstract:

Road depressions occurring in urban areas differ, in both cause and generation mechanism, from sinkholes formed in limestone areas. The main causes of sinkholes in city centers are soil loss due to the failure of old buried underground utilities and groundwater discharge caused by large underground excavation works. Sinkholes in urban areas are mostly detected using ground-penetrating radar (GPR). However, since GPR is based on electromagnetic waves, it is challenging to implement a compact system and to detect water-saturated conditions. Although many ultrasonic underground detection studies have been conducted, near-surface detection (tens of centimeters to several meters) has so far relied on bulky systems using geophones as receivers. The goal of this work is to fabricate a miniaturized sinkhole detection system based on low-cost ultrasonic transducers with a 40 kHz resonant frequency, high transmission pressure, and high receiving sensitivity. Motivated by biomedical ultrasonic imaging methods, we detect air layers below ground surfaces such as asphalt through the pulse-echo method. To improve image quality, a multi-channel linear array system is implemented, and images are acquired by the classical synthetic aperture imaging method. We present a successful feasibility test of a multi-channel sinkhole detector based on ultrasonic transducers, and we analyze images obtained by both single-channel pulse-echo imaging and synthetic aperture imaging.
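
A minimal delay-and-sum synthetic aperture sketch of the pulse-echo imaging described above is given below: for each image pixel, the recorded echo of every element is delayed by its round-trip time and summed. The array geometry, the effective sound speed, and the synthetic echo data are assumptions for illustration, not the hardware or measurements of this work.

```python
# Minimal delay-and-sum synthetic aperture sketch for pulse-echo imaging:
# each pixel accumulates the echoes of all elements delayed by the
# round-trip time. Geometry, sound speed, and data are assumptions.
import numpy as np

C = 400.0                   # assumed effective sound speed in the ground (m/s)
FS = 400_000                # sampling rate (Hz): 40 kHz transducers oversampled 10x
N_ELEM = 8
PITCH = 0.02                # assumed element spacing (m)
elem_x = (np.arange(N_ELEM) - (N_ELEM - 1) / 2) * PITCH

# Synthetic monostatic data: each element fires and records its own echo
# from a point scatterer (an assumed air void) at (x = 0.0 m, z = 0.3 m).
target = np.array([0.0, 0.3])
n_samples = 2048
t = np.arange(n_samples) / FS
rf = np.zeros((N_ELEM, n_samples))
for i, x in enumerate(elem_x):
    dist = np.hypot(x - target[0], target[1])
    delay = 2 * dist / C
    rf[i] = np.exp(-((t - delay) * 40_000) ** 2)   # Gaussian echo envelope

# Delay-and-sum reconstruction on a small image grid below the array
xs = np.linspace(-0.08, 0.08, 81)
zs = np.linspace(0.05, 0.5, 90)
image = np.zeros((zs.size, xs.size))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        acc = 0.0
        for i, ex in enumerate(elem_x):
            tof = 2 * np.hypot(x - ex, z) / C      # round-trip time of flight
            s = int(round(tof * FS))
            if s < n_samples:
                acc += rf[i, s]
        image[iz, ix] = abs(acc)

peak_iz, peak_ix = np.unravel_index(np.argmax(image), image.shape)
print(f"brightest pixel at x = {xs[peak_ix]:.3f} m, z = {zs[peak_iz]:.3f} m")
```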

Keywords: road depression, sinkhole, synthetic aperture imaging, ultrasonic transducer

Procedia PDF Downloads 144