Search results for: aerial imaging and detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4900

3850 Diversity Indices as a Tool for Evaluating Quality of Water Ways

Authors: Khadra Ahmed, Khaled Kheireldin

Abstract:

In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of the signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of the phase congruency and the robustness of the CSLBP operator on flat image regions, as well as to blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The proposed descriptor is formed by extracting the phase congruency and the CSLBP value of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low-resolution DaimlerChrysler datasets to evaluate the detection performance of the pedestrian detection system based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor has better detection performance than a set of state-of-the-art feature extraction methodologies.
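
As a rough illustration of the texture half of the descriptor, the sketch below computes CSLBP codes and cell-wise histograms and feeds them to a linear SVM; the phase-congruency component is omitted (it needs a dedicated implementation), and the window/cell sizes and threshold are assumed values, not taken from the paper.

import numpy as np

def cslbp(patch, threshold=0.01):
    # Center-Symmetric LBP: compare the 4 opposite pixel pairs of the 8-neighbourhood;
    # each comparison contributes one bit, giving 16 possible codes.
    # The patch is assumed to be a grayscale image normalized to [0, 1].
    p = patch.astype(np.float32)
    h, w = p.shape
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        a = p[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        b = p[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        codes |= ((a - b) > threshold).astype(np.uint8) << bit
    return codes

def cslbp_histogram(image, cell=16):
    # Concatenate normalized 16-bin CSLBP histograms over non-overlapping cells.
    feats = []
    for y in range(0, image.shape[0] - cell + 1, cell):
        for x in range(0, image.shape[1] - cell + 1, cell):
            codes = cslbp(image[y:y + cell, x:x + cell])
            hist = np.bincount(codes.ravel(), minlength=16).astype(np.float32)
            feats.append(hist / (hist.sum() + 1e-6))
    return np.concatenate(feats)

# Hypothetical usage: descriptors of 64x128 pedestrian windows fed to a linear SVM.
# from sklearn.svm import LinearSVC
# X = np.stack([cslbp_histogram(win) for win in windows]); clf = LinearSVC().fit(X, labels)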

Keywords: planktons, diversity indices, water quality index, water ways

Procedia PDF Downloads 518
3849 Smart Defect Detection in XLPE Cables Using Convolutional Neural Networks

Authors: Tesfaye Mengistu

Abstract:

Power cables play a crucial role in the transmission and distribution of electrical energy. As the electricity generation, transmission, distribution, and storage systems become smarter, there is a growing emphasis on incorporating intelligent approaches to ensure the reliability of power cables. Various types of electrical cables are employed for transmitting and distributing electrical energy, with cross-linked polyethylene (XLPE) cables being widely utilized due to their exceptional electrical and mechanical properties. However, insulation defects can occur in XLPE cables due to subpar manufacturing techniques during production and cable joint installation. To address this issue, experts have proposed different methods for monitoring XLPE cables. Some suggest the use of interdigital capacitive (IDC) technology for online monitoring, while others propose employing continuous wave (CW) terahertz (THz) imaging systems to detect internal defects in XLPE plates used for power cable insulation. In this study, we have developed models that employ a custom dataset collected locally to classify the physical safety status of individual power cables. Our models aim to replace physical inspections with computer vision and image processing techniques to classify defective power cables from non-defective ones. The implementation of our project utilized the Python programming language along with the TensorFlow package and a convolutional neural network (CNN). The CNN-based algorithm was specifically chosen for power cable defect classification. The results of our project demonstrate the effectiveness of CNNs in accurately classifying power cable defects. We recommend the utilization of similar or additional datasets to further enhance and refine our models. Additionally, we believe that our models could be used to develop methodologies for detecting power cable defects from live video feeds. We firmly believe that our work makes a significant contribution to the field of power cable inspection and maintenance. Our models offer a more efficient and cost-effective approach to detecting power cable defects, thereby improving the reliability and safety of power grids.
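
A minimal TensorFlow/Keras sketch of such a CNN-based defect classifier is shown below; the directory layout, image size, and network depth are assumptions, since the custom dataset and exact architecture are not described in the abstract.

import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical directory layout: cable_images/{defective,non_defective}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cable_images", validation_split=0.2, subset="training",
    seed=42, image_size=(128, 128), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "cable_images", validation_split=0.2, subset="validation",
    seed=42, image_size=(128, 128), batch_size=32)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # defective vs. non-defective
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)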

Keywords: artificial intelligence, computer vision, defect detection, convolutional neural network

Procedia PDF Downloads 112
3848 Sensor Registration in Multi-Static Sonar Fusion Detection

Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin

Abstract:

In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors of each sonar, including distance error and angle error, this paper uses an offline estimation method for error registration. Suppose several sonars on different platforms work together to detect a target. The target position detected by each sonar is expressed in that sonar’s own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate sensor biases. The RTQC method takes the average value of the sonars’ data as the observation value, while the LS method applies least-squares processing to each sonar’s data to obtain the observation value. A MATLAB simulation of the underwater acoustic environment is carried out, and the simulation results show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of the RTQC method is rapid, but the distribution of targets has a serious impact on its performance. The LS method is not affected by target distribution, but an increase in random noise slows down its convergence rate. The LS method is an improvement of the RTQC method and is widely used in two-dimensional registration. The improved method can be used for underwater multi-target detection registration.
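
A minimal sketch of the RTQC-style registration idea, under simplifying assumptions (two sonars, a simulated straight-line target track, assumed bias and noise values): each sonar's polar measurements are converted to a common frame, the averaged track serves as the pseudo-observation, and per-sonar range/bearing biases are taken as the mean residuals against it. None of the geometry or numbers comes from the paper.

import numpy as np

rng = np.random.default_rng(0)
sonar_pos = np.array([[0.0, 0.0], [800.0, 0.0]])   # assumed platform positions (m)
true_range_bias = np.array([15.0, -10.0])          # simulated systematic errors (m)
true_angle_bias = np.radians([1.0, -0.5])          # simulated systematic errors (rad)

# Simulated target track observed by both sonars (range/bearing with bias + noise).
targets = np.column_stack([np.linspace(200, 600, 50), np.linspace(300, 500, 50)])
meas = []
for k, p in enumerate(sonar_pos):
    d = targets - p
    r = np.hypot(d[:, 0], d[:, 1]) + true_range_bias[k] + rng.normal(0, 2.0, len(d))
    a = np.arctan2(d[:, 1], d[:, 0]) + true_angle_bias[k] + rng.normal(0, 0.002, len(d))
    meas.append((r, a))

def to_xy(k, rb, ab):
    # Convert sonar k's measurements to the common frame after removing bias estimates.
    r, a = meas[k][0] - rb, meas[k][1] - ab
    return sonar_pos[k] + np.column_stack([r * np.cos(a), r * np.sin(a)])

# RTQC-style registration: the averaged track is the pseudo-truth; biases are the mean
# range/bearing residuals of each sonar against it, refined over a few passes. Biases are
# only determined relative to the fused reference, which is enough to prevent track splitting.
est_rb, est_ab = np.zeros(2), np.zeros(2)
for _ in range(5):
    ref = (to_xy(0, est_rb[0], est_ab[0]) + to_xy(1, est_rb[1], est_ab[1])) / 2.0
    for k, p in enumerate(sonar_pos):
        d = ref - p
        est_rb[k] = np.mean(meas[k][0] - np.hypot(d[:, 0], d[:, 1]))
        est_ab[k] = np.mean(meas[k][1] - np.arctan2(d[:, 1], d[:, 0]))

split_before = np.mean(np.linalg.norm(to_xy(0, 0, 0) - to_xy(1, 0, 0), axis=1))
split_after = np.mean(np.linalg.norm(to_xy(0, est_rb[0], est_ab[0]) - to_xy(1, est_rb[1], est_ab[1]), axis=1))
print(f"track separation before: {split_before:.1f} m, after registration: {split_after:.1f} m")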

Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem

Procedia PDF Downloads 169
3847 Vehicular Speed Detection Camera System Using Video Stream

Authors: C. A. Anser Pasha

Abstract:

In this paper, a new Vehicular Speed Detection Camera System (SDCS) is presented as an alternative to traditional radars with the same or even better accuracy. The real-time measurement and analysis of various traffic parameters, such as speed and number of vehicles, are increasingly required in traffic control and management. Image processing techniques are now considered an attractive and flexible method for automatic analysis and data collection in traffic engineering. Various algorithms based on image processing techniques have been applied to detect and track multiple vehicles. The SDCS processing can be divided into three successive phases. The first phase is object detection, which uses a hybrid algorithm combining an adaptive background subtraction technique with a three-frame differencing algorithm, rectifying the major drawback of using adaptive background subtraction alone. The second phase is object tracking, which consists of three successive operations: object segmentation, object labeling, and object center extraction. The tracking operation takes into consideration the different possible scenarios of the moving object, such as simple tracking, the object leaving the scene, the object entering the scene, the object being crossed by another object, and one object leaving while another enters the scene. The third phase is the speed calculation phase, in which speed is computed from the number of frames the object takes to cross the scene.
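
The frame-count-to-speed step can be expressed in a few lines; the sketch below assumes a calibrated zone of known length and a known frame rate (both values are illustrative, not from the paper).

def vehicle_speed_kmh(n_frames, fps, zone_length_m):
    # Speed estimate from the number of frames an object needs to cross a
    # calibrated zone of known length (hypothetical calibration values).
    travel_time_s = n_frames / float(fps)
    return (zone_length_m / travel_time_s) * 3.6

# Example: a vehicle crossing a 20 m zone in 30 frames of a 25 fps stream.
print(vehicle_speed_kmh(n_frames=30, fps=25.0, zone_length_m=20.0))  # -> 60.0 km/h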

Keywords: radar, image processing, detection, tracking, segmentation

Procedia PDF Downloads 467
3846 Gaussian Probability Density for Forest Fire Detection Using Satellite Imagery

Authors: S. Benkraouda, Z. Djelloul-Khedda, B. Yagoubi

Abstract:

We present a method for early detection of forest fires from a thermal infrared satellite image, using the image matrix of the probability of belonging. The principle of the method is to compare a theoretical mathematical model to an experimental model. We considered each line of the image matrix as a realization of a non-stationary random process. Since the distribution of pixels in the satellite image is statistically dependent, we divided these lines into small stationary and ergodic intervals in order to characterize the image by an adequate mathematical model. A standard deviation was chosen to generate random variables, so that each interval behaves naturally like white Gaussian noise. The latter was selected as the mathematical model representing the large majority of pixels, which can be considered as the image background. Before modeling the image, we applied a few pre-processing steps; then the parameters of the theoretical Gaussian model were extracted from the modeled image, and these parameters were used to calculate the probability of each interval of the modeled image belonging to the theoretical Gaussian model. High-intensity pixels are regarded as foreign elements to this model, so they have a low probability, while pixels that belong to the image background have a high probability. Finally, we present the inverse of the matrix of probabilities of these intervals for better fire detection.
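
One simple way to realize the "probability of belonging" idea is sketched below: a global Gaussian background model is fitted to the pre-processed thermal image and each short row interval receives a tail probability under that model, so hot (fire-like) intervals score low. The interval length and the exact statistic are assumptions for illustration; the paper's precise formulation may differ.

import numpy as np
from scipy.stats import norm

def interval_background_probability(image, interval_len=32):
    # Fit a Gaussian background model to the whole image, then score each short
    # row interval by the two-sided tail probability of its mean under that model.
    mu, sigma = image.mean(), image.std()
    rows, cols = image.shape
    n_int = cols // interval_len
    prob = np.zeros((rows, n_int))
    for i in range(rows):
        for j in range(n_int):
            seg = image[i, j * interval_len:(j + 1) * interval_len]
            z = abs(seg.mean() - mu) / (sigma / np.sqrt(interval_len) + 1e-9)
            prob[i, j] = 2 * norm.sf(z)
    return prob

# fire_map = 1 - interval_background_probability(thermal_band)  # hypothetical thermal image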

Keywords: forest fire, forest fire detection, satellite image, normal distribution, theoretical gaussian model, thermal infrared matrix image

Procedia PDF Downloads 142
3845 Training a Neural Network to Segment, Detect and Recognize Numbers

Authors: Abhisek Dash

Abstract:

This study used three neural networks, one for number segmentation, one for number detection, and one for number recognition, all of which are coupled to one another. All networks were trained on the MNIST dataset and were convolutional. It was assumed that the images had a lighter background and darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a 7x7 window over that pixel as the focus, the eight-neighborhood of the focus was checked for further dark pixels. The segmentation network was then trained to move in those directions which had dark pixels; to this end, its 16 outputs were arranged as “go east”, “don’t go east”, “go south east”, “don’t go south east”, “go south”, “don’t go south”, and so on with respect to the focus window. The focus window was resized into a 28x28 image and the network was trained to consider those neighborhoods which had dark pixels. The neighborhoods which had dark pixels were pushed into a queue in a particular order. The neighborhoods were then popped one at a time, stitched to the existing partial image of the number, and the network was trained on which neighborhoods to consider when the new partial image was presented. The above process was repeated until the image was fully covered by the 7x7 neighborhoods and there were no more uncovered dark pixels. During testing, the network scans and looks for the first dark pixel. From there on, the network predicts which neighborhoods to consider and segments the image. After this step, the group of neighborhoods is passed into the detection network. The detection network took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground truth of the bounds of a number was known during training, the detection network was trained to output "number not found" until the bounds were met, and "number found" afterwards. The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognition of numbers from 0 to 9. This network was activated only when the detection network voted in favor of a number being detected. The above methodology could segment connected and overlapping numbers. Additionally, the recognition unit was only invoked when a number was detected, which minimized false positives. It also eliminated the need for rules of thumb, as segmentation is learned. The strategy can also be extended to other characters as well.
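
The recognition stage is the most standard of the three networks; a minimal Keras version of such a 10-class MNIST CNN is sketched below (layer sizes are assumptions; the segmentation and detection networks require the custom training procedure described above).

import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

recognizer = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),   # digits 0-9
])
recognizer.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
recognizer.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))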

Keywords: convolutional neural networks, OCR, text detection, text segmentation

Procedia PDF Downloads 161
3844 Ultra Wideband Breast Cancer Detection by Using SAR for Indicating the Tumor Location

Authors: Wittawat Wasusathien, Samran Santalunai, Thanaset Thosdeekoraphat, Chanchai Thongsopa

Abstract:

This paper presents breast cancer detection by observing the specific absorption rate (SAR) intensity to identify the tumor location; the tumor is located in an (x, y, z) coordinate system. We examined frequencies between 4 and 8 GHz to find the most appropriate frequency. Results are simulated in the 4-8 GHz range; the model comprises a normal breast of 50 mm radius, a tumor of 5 mm diameter, and an ultra wideband (UWB) bowtie antenna. The models are created and simulated in CST Microwave Studio. For this simulation, we moved the antenna to five locations around the breast; the tumor can be detected when an antenna is close to the tumor location, and the coordinates of maximum SAR approximate the tumor location. To check reliability, we placed the tumor at three random locations with the same tumor size and ran the simulation again, varying the antenna over the same five positions; the tumor position was again detectable from the antenna nearest to it via the maximum SAR value, and the tumor could be detected with precision at all frequencies between 4 and 8 GHz.

Keywords: specific absorption rate (SAR), ultra wideband (UWB), coordinates, cancer detection

Procedia PDF Downloads 403
3843 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration has been widely used in the petroleum exploration industry to reveal subsurface images and to detect rock and fluid properties since the early 1980s. The seismic technology involves the construction of a velocity model through interpretive model construction, seismic tomography, or full waveform inversion, and the application of the reverse-time propagation of acquired seismic data and the original wavelet used in the acquisition. The methodology has matured from 2D, simple media to present-day capability to handle full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the “bent-ray” method. Its seismic counterpart is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure. It has the capability to fully reconstruct the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to real-time imaging as normally required in day-to-day medical operations. However, with the improvement of computing technology, such a computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry; they can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. Such flexibility makes RTM convenient to apply to USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data are collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging to produce an accurate 3D acoustic image, based on the Phase-Shift-Plus-Interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, with PSPI wavefield extrapolation and a piecewise constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images will produce a full image of the organ, with a much-reduced noise level compared with individual partial images.

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 74
3842 Topology Optimization Design of Transmission Structure in Flapping-Wing Micro Aerial Vehicle via 3D Printing

Authors: Zuyong Chen, Jianghao Wu, Yanlai Zhang

Abstract:

The flapping-wing micro aerial vehicle (FMAV) is a new type of aircraft that mimics the flying behavior of small birds or insects. Compared to traditional fixed-wing or rotor-type aircraft, an FMAV only needs to control the motion of its flapping wings, changing the size and direction of lift to control the flight attitude. Therefore, its transmission system should be designed to be very compact. Lightweight design can effectively extend its endurance time, while engineering experience alone can hardly meet the FMAV requirements for structural strength and mass simultaneously. Current research still lacks guidance on considering the nonlinear factors of 3D-printing materials when carrying out topology optimization, especially for the tiny FMAV transmission system. The coupling of nonlinear material properties and nonlinear contact behaviors of the FMAV transmission system is a great challenge to the reliability of the topology optimization result. In this paper, topology optimization design of the transmission system of an FMAV manufactured by 3D printing was carried out based on the FEA solver package Altair OptiStruct. Firstly, the isotropic constitutive behavior of the ultraviolet (UV) curable resin used to fabricate the structure of the FMAV was evaluated and confirmed through a tensile test. Secondly, a numerical computation model describing the mechanical behavior of the FMAV transmission structure was established and verified by experiments. Then a topology optimization modeling method considering nonlinear factors was presented, and the optimization results were verified by dynamic simulation and experiments. Finally, detailed discussions of different load states and constraints were carried out to explore the leading factors affecting the optimization results. The contributions of this article, helpful for guiding the lightweight design of FMAVs, are summarized as follows. First, a dynamic simulation modeling method used to obtain the load states is presented. Second, a verification method for optimized results considering nonlinear factors is introduced. Third, optimization based on a single load state can achieve a better weight reduction effect and improve the computational efficiency compared with taking multiple states into account. Fourth, the chosen basis improves the ability to resist bending deformation. Fifth, a constraint on displacement helps to improve the structural stiffness of the optimized result. The results and engineering guidance in this paper may shed light on the structural optimization and lightweight design of future advanced FMAVs.

Keywords: flapping-wing micro aerial vehicle, 3d printing, topology optimization, finite element analysis, experiment

Procedia PDF Downloads 169
3841 One-Step Synthesis of Fluorescent Carbon Dots in a Green Way as Effective Fluorescent Probes for Detection of Iron Ions and pH Value

Authors: Mostafa Ghasemi, Andrew Urquhart

Abstract:

In this study, fluorescent carbon dots (CDs) were synthesized in a green way using a one-step hydrothermal method. Carbon dots are carbon-based nanomaterials with a size of less than 10 nm, a unique structure, and excellent properties such as low toxicity, good biocompatibility, tunable fluorescence, excellent photostability, and easy functionalization. These properties make them good candidates for use in different fields such as biological sensing, photocatalysis, photodynamic therapy, and drug delivery. Fourier transform infrared (FTIR) spectra confirmed the OH/NH groups on the surface of the as-synthesized CDs, and UV-vis spectra showed an excellent fluorescence quenching effect of Fe(III) ions on the as-synthesized CDs, with highly selective detection compared with other metal ions. The probe showed a linear response to Fe(III) ions over the concentration range 0-2.0 mM, and the limit of detection was calculated to be about 0.50 μM. In addition, the CDs also showed good sensitivity to pH in the range from 2 to 14, indicating great potential as a pH sensor.
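
As a rough illustration of how a detection limit like the 0.50 μM figure can be obtained from a linear calibration, the sketch below fits a line to hypothetical quenching data and applies the common 3.3·σ/slope convention; both the data points and the convention are assumptions, since the abstract does not give them.

import numpy as np

# Hypothetical calibration data: quenching response vs. Fe(III) concentration (mM)
conc = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0])
signal = np.array([0.00, 0.11, 0.21, 0.33, 0.44, 0.64, 0.86])

slope, intercept = np.polyfit(conc, signal, 1)
residual_sd = np.std(signal - (slope * conc + intercept), ddof=2)

# Common 3.3*sigma/slope convention for the limit of detection (an assumption here;
# the paper does not state which convention was used).
lod_mM = 3.3 * residual_sd / slope
print(f"slope = {slope:.3f} per mM, LOD ≈ {lod_mM * 1000:.2f} µM")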

Keywords: carbon dots, fluorescence, pH sensing, metal ions sensor

Procedia PDF Downloads 75
3840 Alternator Fault Detection Using Wigner-Ville Distribution

Authors: Amin Ranjbar, Amir Arsalan Jalili Zolfaghari, Amir Abolfazl Suratgar, Mehrdad Khajavi

Abstract:

This paper describes a two-stage learning-based fault detection procedure for alternators. The procedure considers three machine condition states, namely a shortened brush, a high-impedance relay, and a healthy condition of the alternator. The fault detection algorithm uses the Wigner-Ville distribution as a feature extractor together with an appropriate feature classifier. In this work, an ANN (Artificial Neural Network) and an SVM (support vector machine) were compared to determine the more suitable one, evaluated by the mean-squared-error criterion. The modules work together to detect possible faulty conditions of the machine during operation. To test the method's performance, a signal database was prepared by creating different conditions on a laboratory setup. Therefore, it seems that satisfactory results are achieved by implementing this method.

Keywords: alternator, artificial neural network, support vector machine, time-frequency analysis, Wigner-Ville distribution

Procedia PDF Downloads 374
3839 Architectural Adaptation for Road Humps Detection in Adverse Light Scenario

Authors: Padmini S. Navalgund, Manasi Naik, Ujwala Patil

Abstract:

A road hump is a semi-cylindrical elevation built across the road at specific locations. The vehicle needs to maneuver over the hump by reducing its speed to avoid damage and pass over the road hump safely. Road humps on road surfaces, if identified in advance, help to maintain the security and stability of vehicles, especially in adverse visibility conditions, viz. night scenarios. We have proposed a deep learning architecture adaptation by implementing the Mish activation function and developing a new classification loss function called "Effective Focal Loss" for Indian road hump detection in adverse light scenarios. We captured images comprising marked and unmarked road humps from two different types of cameras across South India to build a heterogeneous dataset. A heterogeneous dataset enabled the algorithm to train and improve the accuracy of detection. The images were pre-processed and annotated for two classes, viz. marked hump and unmarked hump. The dataset from these images was used to train the single-stage object detection algorithm. We utilised an algorithm to synthetically generate reduced-visibility road hump scenarios. We observed that our proposed framework effectively detected marked and unmarked humps in the images in both clear and adverse light environments. This architectural adaptation sets up an option for early detection of Indian road humps in reduced visibility conditions, thereby enhancing autonomous driving technology to handle a wider range of real-world scenarios.
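
For reference, the standard binary focal loss that such a classification loss builds on can be written as below (a TensorFlow sketch; the exact form of the paper's "Effective Focal Loss" is not given in the abstract, so this is only the baseline it modifies, with assumed default values for gamma and alpha).

import tensorflow as tf

def binary_focal_loss(gamma=2.0, alpha=0.25):
    # Standard focal loss for binary/objectness targets: down-weights easy examples
    # so training focuses on hard ones (the baseline behind the paper's loss).
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        pt = tf.where(tf.equal(y_true, 1.0), y_pred, 1.0 - y_pred)
        w = tf.where(tf.equal(y_true, 1.0), alpha, 1.0 - alpha)
        return -tf.reduce_mean(w * tf.pow(1.0 - pt, gamma) * tf.math.log(pt))
    return loss

# e.g. model.compile(optimizer="adam", loss=binary_focal_loss(), metrics=["accuracy"])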

Keywords: Indian road hump, reduced visibility condition, low light condition, adverse light condition, marked hump, unmarked hump, YOLOv9

Procedia PDF Downloads 24
3838 Synthesis and Characterization of CNPs Coated Cerium Oxide Nanorods for Cd2+ Ion Adsorption from Industrial Waste Water and Reusable for Latent Fingerprint Detection

Authors: Bienvenu Gael Fouda Mbanga

Abstract:

This study reports a new approach for the preparation of a carbon nanoparticle-coated cerium oxide nanorod (CNPs/CeONRs) nanocomposite and the reuse of the spent adsorbent, the Cd2+-CNPs/CeONRs nanocomposite, for latent fingerprint (LFP) detection after removing Cd2+ ions from aqueous solution. The CNPs/CeONRs nanocomposite was prepared by using CNPs and CeONRs with adsorption processes. The prepared nanocomposite was then characterized by using UV-visible spectroscopy (UV-vis), Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), energy-dispersive X-ray spectroscopy (EDS), zeta potential, and X-ray photoelectron spectroscopy (XPS). The average size of the CNPs was 7.84 nm. The synthesized CNPs/CeONRs nanocomposite proved to be a good adsorbent for Cd2+ removal from water, with an optimum pH of 8 and a dosage of 0.5 g/L. The results were best described by the Langmuir model, which indicated a linear fit (R2 = 0.8539-0.9969). The adsorption capacity of the CNPs/CeONRs nanocomposite showed the best removal of Cd2+ ions, with qm = 32.28-59.92 mg/g, when compared to previous reports. The adsorption followed pseudo-second-order kinetics and intra-particle diffusion processes. The ∆G and ∆H values indicated spontaneity at high temperature (40 °C) and the endothermic nature of the adsorption process. The CNPs/CeONRs nanocomposite therefore showed potential as an effective adsorbent. Furthermore, the metal loaded on the adsorbent, Cd2+-CNPs/CeONRs, proved to be sensitive and selective for LFP detection on various porous substrates. Hence, the Cd2+-CNPs/CeONRs nanocomposite can be reused as a good fingerprint labelling agent in LFP detection so as to avoid secondary environmental pollution from disposal of the spent adsorbent.
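
A short sketch of how Langmuir parameters such as qm and KL can be obtained by non-linear fitting is given below; the equilibrium data points are hypothetical placeholders, not the paper's measurements.

import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):
    # Langmuir isotherm: qe = qm*KL*Ce / (1 + KL*Ce)
    return qm * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g)
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([10.5, 19.8, 28.4, 38.1, 45.9, 52.3])

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=(50.0, 0.05))
ss_res = np.sum((qe - langmuir(Ce, qm, KL)) ** 2)
ss_tot = np.sum((qe - qe.mean()) ** 2)
print(f"qm = {qm:.1f} mg/g, KL = {KL:.3f} L/mg, R2 = {1 - ss_res / ss_tot:.4f}")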

Keywords: Cd2+-CNPs/CeONRs nanocomposite, cadmium adsorption, isotherm, kinetics, thermodynamics, reusable for latent fingerprint detection

Procedia PDF Downloads 121
3837 Probing Neuron Mechanics with a Micropipette Force Sensor

Authors: Madeleine Anthonisen, M. Hussain Sangji, G. Monserratt Lopez-Ayon, Margaret Magdesian, Peter Grutter

Abstract:

Advances in micromanipulation techniques and real-time particle tracking with nanometer resolution have enabled biological force measurements at scales relevant to neuron mechanics. An approach to precisely control and maneuver neurite-tethered polystyrene beads is presented. Analogous to an Atomic Force Microscope (AFM), this multi-purpose platform is a force sensor with image acquisition and manipulation capabilities. A mechanical probe composed of a micropipette with its tip fixed to a functionalized bead is used to incite the formation of a neurite in a sample of rat hippocampal neurons while simultaneously measuring the tension in said neurite as the sample is pulled away from the beaded tip. With optical imaging methods, a force resolution of 12 pN is achieved. Moreover, the advantages of this technique over alternatives such as AFM, namely the ease of manipulation, which ultimately allows higher-throughput investigation of the mechanical properties of neurons, are demonstrated.

Keywords: axonal growth, axonal guidance, force probe, pipette micromanipulation, neurite tension, neuron mechanics

Procedia PDF Downloads 367
3836 Automatic Vowel and Consonant's Target Formant Frequency Detection

Authors: Othmane Bouferroum, Malika Boudraa

Abstract:

In this study, a dual exponential model for CV formant transition is derived from locus theory of speech perception. Then, an algorithm for automatic vowel and consonant’s target formant frequency detection is developed and tested on real speech. The results show that vowels and consonants are detected through transitions rather than their small stable portions. Also, vowel reduction is clearly observed in our data. These results are confirmed by the observations made in perceptual experiments in the literature.

Keywords: acoustic invariance, coarticulation, formant transition, locus equation

Procedia PDF Downloads 270
3835 Best-Performing Color Space for Land-Sea Segmentation Using Wavelet Transform Color-Texture Features and Fusion of over Segmentation

Authors: Seynabou Toure, Oumar Diop, Kidiyo Kpalma, Amadou S. Maiga

Abstract:

Color and texture are the two most determinant elements for the perception and recognition of objects in an image. For this reason, color and texture analysis find a large field of application, for example in image classification and segmentation. However, the pioneering work in texture analysis was conducted on grayscale images, thus discarding color information. Many grey-level texture descriptors have been proposed and successfully used in numerous domains for image classification: face recognition, industrial inspection, food science, and medical imaging, among others. Taking color into account in the definition of these descriptors makes it possible to better characterize images. Color texture is thus the subject of recent work, and the analysis of color texture images is increasingly attracting interest in the scientific community. In optical remote sensing systems, sensors separately measure different parts of the electromagnetic spectrum: the visible bands and even those that are invisible to the human eye. The amounts of light reflected by the earth in these spectral bands are then transformed into grayscale images. The primary natural colors Red (R), Green (G), and Blue (B) are then used in mixtures of different spectral bands in order to produce RGB images. Thus, good color texture discrimination can be achieved using RGB under controlled illumination conditions. Some previous works investigated the effect of using different color spaces for color texture classification. However, the selection of the best-performing color space for land-sea segmentation is an open question. Its resolution may bring considerable improvements in certain applications like coastline detection, where the detection result is strongly dependent on the performance of the land-sea segmentation. The aim of this paper is to present the results of a study conducted on different color spaces in order to identify the best-performing color space for land-sea segmentation. In this sense, an experimental analysis is carried out using five different color spaces (RGB, XYZ, Lab, HSV, YCbCr). For each color space, the Haar wavelet decomposition is used to extract different color texture features. These color texture features are then used for Fusion of Over Segmentation (FOOS) based classification; this allows segmentation of the land part from the sea one. By analyzing the different results of this study, the HSV color space is found to give the best classification performance when using color and texture features, which is perfectly coherent with the results presented in the literature.
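
A minimal sketch of the color-texture feature step is shown below, assuming HSV conversion and Haar wavelet sub-band energies as the features; the paper's exact feature set and the FOOS classification stage are not reproduced here.

import numpy as np
import pywt
from skimage import color

def haar_color_texture_features(rgb_image, levels=2):
    # Energy of the Haar wavelet detail sub-bands per HSV channel: a simple
    # color-texture descriptor in the spirit of the paper (exact features may differ).
    hsv = color.rgb2hsv(rgb_image)
    feats = []
    for ch in range(3):
        coeffs = pywt.wavedec2(hsv[..., ch], "haar", level=levels)
        for detail in coeffs[1:]:            # (cH, cV, cD) per decomposition level
            feats.extend(float(np.mean(d ** 2)) for d in detail)
    return np.array(feats)

# tile = ...  # load an RGB image tile (e.g. with skimage.io.imread); path is hypothetical
# print(haar_color_texture_features(tile).shape)   # 3 channels x levels x 3 sub-bands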

Keywords: classification, coastline, color, sea-land segmentation

Procedia PDF Downloads 247
3834 Sensor Fault-Tolerant Model Predictive Control for Linear Parameter Varying Systems

Authors: Yushuai Wang, Feng Xu, Junbo Tan, Xueqian Wang, Bin Liang

Abstract:

In this paper, a sensor fault-tolerant control (FTC) scheme using robust model predictive control (RMPC) and set-theoretic fault detection and isolation (FDI) is extended to linear parameter varying (LPV) systems. First, a group of set-valued observers is designed for passive fault detection (FD), and the observer gains are obtained by minimizing the size of the invariant set of the state estimation-error dynamics. Second, an input set for fault isolation (FI) is designed offline through set theory for actively isolating faults after FD. Third, an RMPC controller based on state estimation for LPV systems is designed to control the system in the presence of disturbance and measurement noise and to tolerate faults. Besides, an FTC algorithm is proposed to keep the plant operating in the corresponding mode when a fault occurs. Finally, a numerical example is used to show the effectiveness of the proposed results.

Keywords: fault detection, linear parameter varying, model predictive control, set theory

Procedia PDF Downloads 252
3833 Real Time Detection of Application Layer DDos Attack Using Log Based Collaborative Intrusion Detection System

Authors: Farheen Tabassum, Shoab Ahmed Khan

Abstract:

The brutality of attacks on networks and critical infrastructures has been on the climb over recent years and appears likely to continue. A Distributed Denial of Service attack is the most prevalent and easiest attack on the availability of a service, due to the easy availability of large botnets at a cheap price and the general lack of protection against these attacks. An application layer DDoS attack is a DDoS attack that targets a web server, application server, or database server. These types of attacks are much more sophisticated and challenging, as they get around most conventional network security devices because the attack traffic often impersonates normal traffic and cannot be recognized by network layer anomalies. Conventional techniques of single-hosted security systems are becoming gradually less effective in the face of such complicated and synchronized multi-front attacks. In order to protect against such attacks and intrusions, cooperation among all network devices is essential. To overcome this issue, a collaborative intrusion detection system (CIDS) is proposed, in which multiple network devices share valuable information to identify attacks, as a single device might not be capable of sensing any malevolent action on its own. This helps in taking decisions after analyzing the information collected from different sources. This novel attack detection technique helps to detect seemingly benign packets that target the availability of the critical infrastructure, and the proposed solution methodology shall enable incident response teams to detect and react to DDoS attacks at the earliest stage to ensure that the uptime of the service remains unaffected. Experimental evaluation shows that the proposed collaborative detection approach is much more effective and efficient than previous approaches.

Keywords: Distributed Denial-of-Service (DDoS), Collaborative Intrusion Detection System (CIDS), Slowloris, OSSIM (Open Source Security Information Management tool), OSSEC HIDS

Procedia PDF Downloads 354
3832 Fault Detection and Diagnosis of Broken Bar Problem in Induction Motors Based on Wavelet Analysis and EMD Method: Case Study of Mobarakeh Steel Company in Iran

Authors: M. Ahmadi, M. Kafil, H. Ebrahimi

Abstract:

Nowadays, induction motors have a significant role in industries. Condition monitoring (CM) of this equipment has gained remarkable importance during recent years due to huge production losses, substantial imposed costs, and increases in vulnerability, risk, and uncertainty levels. Motor current signature analysis (MCSA) is one of the most important techniques in CM. This method can be used for rotor broken bar detection. Signal processing methods such as the Fast Fourier Transform (FFT), the wavelet transform, and Empirical Mode Decomposition (EMD) are used for analyzing MCSA output data. In this study, these signal processing methods are used for broken bar problem detection in induction motors of the Mobarakeh Steel Company. Based on the wavelet transform method, an index for fault detection, CF, is introduced, which is the ratio of the maximum to the mean of the wavelet transform coefficients. We find that, in the broken bar condition, the CF factor is greater than in the healthy condition. Based on the EMD method, the energy of the intrinsic mode functions (IMFs) is calculated, and it is found that when motor bars become broken, the energy of the IMFs increases.
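
A compact sketch of the two indicators is given below, assuming the CF index is the max-to-mean ratio of the wavelet coefficients (one reading of the abstract) and using the PyEMD package for the IMF energies; the wavelet family, decomposition level, and all other settings are assumptions.

import numpy as np
import pywt

def cf_index(current, wavelet="db4", level=5):
    # CF fault index from the stator current: max-to-mean statistic of the
    # wavelet decomposition coefficients (interpretation of the abstract's index).
    coeffs = np.concatenate([np.abs(c) for c in pywt.wavedec(current, wavelet, level=level)])
    return coeffs.max() / (coeffs.mean() + 1e-12)

def imf_energies(current):
    # Energy of each intrinsic mode function from EMD (requires the PyEMD package).
    from PyEMD import EMD
    imfs = EMD().emd(np.asarray(current, dtype=float))
    return np.array([np.sum(imf ** 2) for imf in imfs])

# Hypothetical use: compare cf_index / imf_energies between healthy and broken-bar currents;
# both indicators are reported to increase in the broken-bar condition.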

Keywords: broken bar, condition monitoring, diagnostics, empirical mode decomposition, fourier transform, wavelet transform

Procedia PDF Downloads 150
3831 Pyramidal Lucas-Kanade Optical Flow Based Moving Object Detection in Dynamic Scenes

Authors: Hyojin Lim, Cuong Nguyen Khac, Yeongyu Choi, Ho-Youl Jung

Abstract:

In this paper, we propose a simple moving object detection method based on motion vectors obtained from pyramidal Lucas-Kanade optical flow. The proposed method detects moving objects such as pedestrians, other vehicles, and some obstacles in front of the host vehicle, and it can provide a warning to the driver. Motion vectors are obtained by using pyramidal Lucas-Kanade optical flow, and outliers are eliminated by comparing the amplitude of each vector with a pre-defined threshold value. The background model is obtained by calculating the mean and the variance of the amplitude of recent motion vectors in a rectangular local region called a cell. The model is applied as the reference to classify motion vectors of moving objects and those of the background. Motion vectors are clustered into rectangular regions by using the unsupervised K-means clustering algorithm. A labeling method is applied to merge groups that are close to each other, based on the distance between the center points of the rectangles. Through simulations tested on four kinds of scenarios, such as a motorbike, a vehicle, and pedestrians approaching the host vehicle, we show that the proposed method is simple but efficient for moving object detection in parking lots.
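
The motion-vector extraction step maps directly onto OpenCV's pyramidal Lucas-Kanade implementation; the sketch below tracks corner features between frames and keeps vectors whose amplitude exceeds a threshold. The file name, feature counts, and threshold are assumed values, and the per-cell background model and K-means clustering described above are only indicated in comments.

import cv2
import numpy as np

cap = cv2.VideoCapture("front_camera.mp4")          # hypothetical video file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)

lk_params = dict(winSize=(21, 21), maxLevel=3,      # 3 pyramid levels
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

while True:
    ok, frame = cap.read()
    if not ok or p0 is None:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None, **lk_params)
    good_new, good_old = p1[status == 1], p0[status == 1]
    flow = good_new - good_old
    amplitude = np.linalg.norm(flow, axis=1)
    movers = good_new[amplitude > 2.0]               # amplitude threshold is an assumed value
    # 'movers' would next be compared against the per-cell background model and
    # clustered with K-means into rectangular object regions, as described in the abstract.
    prev_gray = gray
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)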

Keywords: moving object detection, dynamic scene, optical flow, pyramidal optical flow

Procedia PDF Downloads 349
3830 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult for children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, making it difficult to cover a wide range of risk factors and making the process time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilizes various deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values to be identified for the model. This approach proved to be effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention for these conditions. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieves a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.
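
The hyperparameter-search stage can be reproduced generically with scikit-learn; the sketch below runs GridSearchCV over an MLPClassifier on placeholder fused features (the real feature vectors, labels, and search grid are not public, so all values here are assumptions).

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# X: fused feature vectors (e.g. CNN/YOLO handwriting scores plus other inputs),
# y: risk labels. Synthetic placeholders are used here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, size=200)

param_grid = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],
    "alpha": [1e-4, 1e-3],
    "learning_rate_init": [1e-3, 1e-2],
}
search = GridSearchCV(MLPClassifier(max_iter=1000, random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)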

Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 114
3829 Application of Compressed Sensing and Different Sampling Trajectories for Data Reduction of Small Animal Magnetic Resonance Image

Authors: Matheus Madureira Matos, Alexandre Rodrigues Farias

Abstract:

Magnetic Resonance Imaging (MRI) is a vital imaging technique used in both clinical and pre-clinical areas to obtain detailed anatomical and functional information. However, MRI scans can be expensive, time-consuming, and often require the use of anesthetics to keep animals still during the imaging process. Anesthetics are commonly administered to animals undergoing MRI scans to ensure they remain still during the imaging process. However, prolonged or repeated exposure to anesthetics can have adverse effects on animals, including physiological alterations and potential toxicity. Minimizing the duration and frequency of anesthesia is, therefore, crucial for the well-being of research animals. In recent years, various sampling trajectories have been investigated to reduce the number of MRI measurements, leading to shorter scanning times and minimizing the duration of animal exposure to the effects of anesthetics. Compressed sensing (CS) and sampling trajectories, such as Cartesian, spiral, and radial, have emerged as powerful tools to reduce MRI data while preserving diagnostic quality. This work aims to apply CS and Cartesian, spiral, and radial sampling trajectories for the reconstruction of MRI of the abdomen of mice sub-sampled at levels below that defined by the Nyquist theorem. The methodology of this work consists of using a fully sampled reference MRI of a female C57BL/6 mouse model acquired experimentally in a 4.7 Tesla MRI scanner for small animals using spin echo pulse sequences. The image is down-sampled along Cartesian, radial, and spiral sampling paths and then reconstructed by CS. The quality of the reconstructed images is objectively assessed by three quality assessment techniques: RMSE (root mean square error), PSNR (peak signal-to-noise ratio), and SSIM (structural similarity index measure). The utilization of optimized sampling trajectories and the CS technique has demonstrated the potential for a significant reduction of up to 70% in image data acquisition. This result translates into shorter scan times, minimizing the duration and frequency of anesthesia administration and reducing the potential risks associated with it.
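
For illustration, the sketch below undersamples a 2D image along a random Cartesian line pattern and reports the three quality metrics; it uses a plain zero-filled inverse FFT in place of the actual CS reconstruction, and the sampling fractions are assumed values.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def cartesian_undersample(image, keep_fraction=0.3, center_fraction=0.08, seed=0):
    # Zero-filled reconstruction from randomly kept Cartesian k-space lines
    # (a CS solver would replace the plain inverse FFT used here).
    k = np.fft.fftshift(np.fft.fft2(image))
    rng = np.random.default_rng(seed)
    rows = image.shape[0]
    mask = rng.random(rows) < keep_fraction
    c = rows // 2
    half = int(rows * center_fraction / 2)
    mask[c - half:c + half] = True            # always keep the low-frequency centre
    k_us = k * mask[:, None]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_us)))

def report(reference, recon):
    rmse = np.sqrt(np.mean((reference - recon) ** 2))
    rng_val = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, recon, data_range=rng_val)
    ssim = structural_similarity(reference, recon, data_range=rng_val)
    print(f"RMSE={rmse:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")

# ref = ...  # hypothetical fully sampled reference slice as a 2D float array
# report(ref, cartesian_undersample(ref))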

Keywords: compressed sensing, magnetic resonance, sampling trajectories, small animals

Procedia PDF Downloads 73
3828 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection

Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine

Abstract:

Atrial fibrillation (AF) has been considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients situate AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (AF Termination Challenge Database, MIT-BIH AF, Normal Sinus Rhythm RR Interval Database, and MIT-BIH Normal Sinus Rhythm Databases) were used for assessment. All time series were segmented into 1-min windows of RR intervals, and then four specific features were calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and the Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find important features to discriminate between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection method holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
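
As a rough sketch of the PCA-plus-LVQ pipeline, the code below reduces per-window RR features with scikit-learn's PCA and classifies them with a minimal LVQ1 implementation (scikit-learn has no built-in LVQ); the prototype counts, learning rate, and epochs are assumptions.

import numpy as np
from sklearn.decomposition import PCA

def train_lvq1(X, y, prototypes_per_class=2, lr=0.05, epochs=30, seed=0):
    # Minimal LVQ1: the nearest prototype moves toward same-class samples and away from others.
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), prototypes_per_class, replace=False)
        protos.append(X[idx]); labels.extend([c] * prototypes_per_class)
    P, L = np.vstack(protos).astype(float), np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.sum((P - X[i]) ** 2, axis=1))
            step = lr if L[j] == y[i] else -lr
            P[j] += step * (X[i] - P[j])
    return P, L

def predict_lvq(P, L, X):
    d = np.sum((X[:, None, :] - P[None, :, :]) ** 2, axis=2)
    return L[np.argmin(d, axis=1)]

# Hypothetical use on the four RR-interval features extracted per 1-min window:
# Xp = PCA(n_components=2).fit_transform(rr_features)
# P, L = train_lvq1(Xp, labels); predictions = predict_lvq(P, L, Xp)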

Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine

Procedia PDF Downloads 267
3827 Survey of Intrusion Detection Systems and Their Assessment of the Internet of Things

Authors: James Kaweesa

Abstract:

The Internet of Things (IoT) has become a critical component of modern technology, enabling the connection of numerous devices to the internet. The interconnected nature of IoT devices, along with their heterogeneous and resource-constrained nature, makes them vulnerable to various types of attacks, such as malware, denial-of-service attacks, and network scanning. Intrusion Detection Systems (IDSs) are a key mechanism for protecting IoT networks from attacks by identifying and alerting administrators to suspicious activities. In this review, the paper will discuss the different types of IDSs available for IoT systems and evaluate their effectiveness in detecting and preventing attacks. It will also examine the various evaluation methods used to assess the performance of IDSs and the challenges associated with evaluating them in IoT environments. The review will highlight the need for effective and efficient IDSs that can cope with the unique characteristics of IoT networks, including their heterogeneity, dynamic topology, and resource constraints. The paper will conclude by indicating where further research is needed to develop IDSs that can address these challenges and effectively protect IoT systems from cyber threats.

Keywords: cyber-threats, iot, intrusion detection system, networks

Procedia PDF Downloads 80
3826 An in Situ DNA Content Detection Enabled by Organic Long-Persistent Luminescence Materials with Tunable Afterglow-Time in Water and Air

Authors: Desissa Yadeta Muleta

Abstract:

Purely organic long-persistent luminescence materials (OLPLMs) have been developed as emerging organic materials due to their simple production process, low preparation cost, and better biocompatibility. Notably, OLPLMs with afterglow-time-tunable long-persistent luminescence (LPL) characteristics enable higher-level protection applications and have great prospects in biological applications. The realization of these advanced performances depends on our ability to gradually tune the LPL duration under ambient conditions; however, the strategies to achieve this are few due to the lack of unambiguous mechanisms. Here, we propose a two-step strategy to gradually tune the LPL duration of OLPLMs over a wide range of seconds in water and air, by using derivatives as the guest and introducing a third-party material into the host-immobilized host-guest doping system. Based on this strategy, we develop an analysis method for deoxyribonucleic acid (DNA) content detection without DNA separation in aqueous samples, which circumvents the influence of chromophores, fluorophores, and other interferents in vivo, enabling a certain degree of in situ detection that is difficult to achieve using today’s methods. This work will expedite the development of afterglow-time-tunable OLPLMs and expand new horizons for their applications in data protection, bio-detection, and bio-sensing.

Keywords: deoxyribonucleic acid, long persistent luminescent materials, water, air

Procedia PDF Downloads 76
3825 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks

Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone

Abstract:

Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is carried out manually by epileptologists, and this process is usually very long and error-prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
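
To make the sliding-window feature-extraction idea concrete, the sketch below computes a few common per-window EEG features (variance, line length, band powers) and passes them to a scikit-learn MLP; the window length, feature set, and network size are assumptions, not the Training Builder configuration.

import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

def window_features(eeg, fs, win_s=2.0, step_s=1.0):
    # Sliding-window features from one EEG channel: variance, line length, and
    # band powers (a small subset of what a feature-extraction tool could produce).
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(eeg) - win + 1, step):
        seg = eeg[start:start + win]
        f, pxx = welch(seg, fs=fs, nperseg=min(256, win))
        bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]       # delta..beta bands
        powers = [np.sum(pxx[(f >= lo) & (f < hi)]) for lo, hi in bands]
        feats.append([seg.var(), np.sum(np.abs(np.diff(seg)))] + powers)
    return np.array(feats)

# X = window_features(eeg_channel, fs=256.0)   # hypothetical single-channel signal
# clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, window_labels)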

Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing

Procedia PDF Downloads 188
3824 Highly Specific DNA-Aptamer-Based Electrochemical Biosensor for Mercury (II) and Lead (II) Ions Detection in Water Samples

Authors: H. Abu-Ali, A. Nabok, T. Smith

Abstract:

Aptamers are single-stranded DNA or RNA nucleotide sequences designed in vitro using a selection process known as SELEX (systematic evolution of ligands by exponential enrichment); they have been developed for the selective detection of many toxic materials. In this work, we have developed an electrochemical biosensor for highly selective and sensitive detection of Hg2+ and Pb2+ using a specific aptamer probe (SAP) labelled with ferrocene (or methylene blue) at its 5′ end and with a thiol group at its 3′ terminus. The SAP has a specific coil structure with G-G binding nucleotides matching Pb2+ and T-T binding nucleotides matching Hg2+ ions, respectively. The aptamers were immobilized onto the surface of screen-printed gold electrodes via the SH groups; then the cyclic voltammograms were recorded in binding buffer with the addition of the above metal salts at different concentrations. The resulting anodic current values increase upon binding of heavy metal ions to the aptamers, due to the presence of the electrochemically active probe, i.e., the ferrocene or methylene blue group. The correlation between the anodic current values and the concentrations of Hg2+ and Pb2+ ions has been established in this work. To the best of our knowledge, this is the first example of using specific DNA aptamers for the electrochemical detection of heavy metals. Each 0.1 μM increase in concentration results in an increase in the anodic current value in a simple DC electrochemical test, i.e., cyclic voltammetry, thus providing an easy way of determining Hg2+ and Pb2+ concentrations.

Keywords: aptamer-based biosensor, DNA, electrochemical, highly specific

Procedia PDF Downloads 159
3823 Detection of Cardiac Arrhythmia Using Principal Component Analysis and XGBoost Model

Authors: Sujay Kotwale, Ramasubba Reddy M.

Abstract:

Electrocardiogram (ECG) is a non-invasive technique used to study and analyze various heart diseases. Cardiac arrhythmia is a serious heart disease which leads to the death of patients when left untreated. Early detection of cardiac arrhythmia would help doctors provide proper treatment. In the past, various algorithms and machine learning (ML) models were used for early detection of cardiac arrhythmia, but few of them achieved good results. In order to improve the performance, this paper implements principal component analysis (PCA) along with an XGBoost model. PCA was applied to the raw ECG signals to suppress redundant information and extract significant features. The obtained significant ECG features were fed into the XGBoost model, and the performance of the model was evaluated. To validate the proposed technique, raw ECG signals obtained from the standard MIT-BIH database were employed for the analysis. The results show that the performance of the proposed method is superior to several state-of-the-art techniques.
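
A minimal sketch of the PCA-plus-XGBoost pipeline is shown below; the file names, beat representation, and all hyperparameters are assumptions, not the paper's configuration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# X: beat-wise ECG samples (e.g. fixed-length windows around each R peak) from MIT-BIH,
# y: integer arrhythmia class labels. The .npy files are hypothetical placeholders.
X = np.load("ecg_beats.npy")
y = np.load("beat_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

pca = PCA(n_components=0.95)          # keep components explaining 95% of the variance
X_train_p = pca.fit_transform(X_train)
X_test_p = pca.transform(X_test)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
clf.fit(X_train_p, y_train)
print("test accuracy:", clf.score(X_test_p, y_test))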

Keywords: cardiac arrhythmia, electrocardiogram, principal component analysis, XGBoost

Procedia PDF Downloads 119
3822 Monocular 3D Person Tracking via Demographic Classification and Projective Image Processing

Authors: McClain Thiel

Abstract:

Object detection and localization have historically required two or more sensors due to the loss of information in going from 3D to 2D space; however, most surveillance systems currently in use in the real world have only one sensor per location. Generally, this consists of a single low-resolution camera positioned above the area under observation (mall, jewelry store, traffic camera). This is not sufficient for robust 3D tracking for applications such as security or, of more recent relevance, contact tracing. This paper proposes a lightweight system for 3D person tracking that requires no additional hardware, based on compressed object-detection convolutional nets, facial landmark detection, and projective geometry. The approach involves classifying the target into a demographic category, making assumptions about the relative locations of facial landmarks from the demographic information, and from there using simple projective geometry and known constants to find the target's location in 3D space. Preliminary testing, although severely lacking, suggests reasonable success in 3D tracking under ideal conditions.
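
The projective-geometry step reduces to the pinhole relation Z = f·D/d; the sketch below estimates depth from an assumed physical landmark spacing (e.g. a demographic-average interpupillary distance) and back-projects the pixel into camera coordinates. All numeric values are assumptions for illustration.

import numpy as np

def estimate_depth_m(pixel_landmark_a, pixel_landmark_b, assumed_real_dist_m, focal_px):
    # Pinhole-camera depth estimate: Z = f * D_real / d_pixels, where D_real is the assumed
    # physical distance between two facial landmarks (taken from the predicted demographic
    # group) and d_pixels is the measured distance between them in the image.
    d_px = np.linalg.norm(np.asarray(pixel_landmark_a, float) - np.asarray(pixel_landmark_b, float))
    return focal_px * assumed_real_dist_m / d_px

def back_project(pixel_xy, depth_m, focal_px, principal_point):
    # Recover (X, Y, Z) in camera coordinates from a pixel and its estimated depth.
    u, v = pixel_xy
    cx, cy = principal_point
    return np.array([(u - cx) * depth_m / focal_px, (v - cy) * depth_m / focal_px, depth_m])

# Example with assumed values: eyes ~40 px apart, adult IPD ~0.063 m, focal length 900 px.
Z = estimate_depth_m((300, 240), (340, 242), assumed_real_dist_m=0.063, focal_px=900.0)
print(Z, back_project((320, 241), Z, 900.0, principal_point=(320, 240)))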

Keywords: monocular distancing, computer vision, facial analysis, 3D localization

Procedia PDF Downloads 139
3821 New Ethanol Method for Soft Tissue Imaging in Micro-CT

Authors: Matej Patzelt, Jan Dudak, Frantisek Krejci, Jan Zemlicka, Vladimir Musil, Jitka Riedlova, Viktor Sykora, Jana Mrzilkova, Petr Zach

Abstract:

Introduction: Micro-CT is widely used for the examination of bone structures and teeth. On the other hand, visualization of soft tissues is still limited. The goal of our study was to create a new fixation method for soft tissue imaging in micro-CT. Methodology: We used the organs of 18 mice - heart, lungs, kidneys, liver, and brain - which we fixed in ethanol after meticulous preparation. We fixed the organs in different concentrations of ethanol and for different periods of time. We used three types of ethanol concentration - 97%, 50%, and ascending ethanol concentrations (25%, 50%, 75%, 97%, each for 12 hours). Fixed organs were scanned after 72 hours, 168 hours, and 336 hours of fixation. We scanned all specimens in the micro-CT MARS (Medipix All Resolution System). Results: The ethanol method provided contrast enhancement in all studied organs in all used types of fixation. Fixation in 97% ethanol provided very fast fixation, and the contrast among the tissues was visible already after 72 hours of fixation. Fixation for 168 and 336 hours gave better details, especially in lung tissue, where alveoli were visualized. On the other hand, this type of fixation caused the organs to petrify. Fixation in 50% ethanol provided the best results at 336 hours of fixation; details were visualized better than in 97% ethanol, and the samples were not as hard as with fixation in 97% ethanol. The best results were obtained with fixation in ascending ethanol concentrations. All organs were visualized in great detail; the best-visualized organ was the heart, where trabeculae and valves were visible. In this type of fixation, the organs stayed soft the whole time. Conclusion: The new ethanol method is a great option for soft tissue fixation as well as a method for enhancing contrast among tissues in organs. The best results were obtained with fixation of the organs in ascending ethanol concentrations; the best-visualized organ was the heart.

Keywords: x-ray imaging, small animals, ethanol, ex-vivo

Procedia PDF Downloads 321