Search results for: signal classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3693

2223 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube

Authors: Dan Kanmegne

Abstract:

Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.) and is acknowledged to have great potential for carbon sequestration; it can therefore be integrated into carbon emission reduction mechanisms. In sub-Saharan Africa in particular, the main constraint is the lack of information, at the country level, on both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system. This study describes and quantifies "what is where?" as a prerequisite to quantifying carbon stocks in the different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic practice as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data also fulfil the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful information to support decision-making from this large amount of data, satellite data need to be organized so as to ensure fast processing, quick accessibility, and ease of use. A recent solution is the data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels that enables efficient access and analysis. A data cube for Burkina Faso has been set up through a cooperation project between the international service provider WASCAL and Germany, providing an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach, in its initial stage, is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018, to stratify the country based on vegetation. Fifteen strata were identified, and four sample locations per stratum were randomly assigned to define the sampling units. For safety reasons, the northern part of the country will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing the different agroforestry systems, together with qualitative interviews. A multi-temporal supervised image classification will then be performed with a random forest algorithm, with the field data used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics; (ii) characteristics of the different systems (main species, management, area, etc.); and (iii) an assessment report of the Burkina Faso data cube.
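
As a minimal illustration of the stratification step described above, the sketch below clusters per-pixel NDVI trajectories into fifteen strata with k-means. The array shapes, the random placeholder data, and the scikit-learn choice are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative only: ndvi_stack holds one NDVI layer per epoch (t, rows, cols),
# e.g. annual composites from 2010-2018 read out of the data cube.
t, rows, cols = 9, 500, 500
ndvi_stack = np.random.rand(t, rows, cols).astype(np.float32)  # placeholder data

# Each pixel becomes one sample whose features are its NDVI trajectory.
X = ndvi_stack.reshape(t, -1).T          # shape: (rows*cols, t)

# Unsupervised stratification into the 15 vegetation strata named in the abstract.
kmeans = KMeans(n_clusters=15, n_init=10, random_state=0).fit(X)
strata = kmeans.labels_.reshape(rows, cols)
```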

Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification

Procedia PDF Downloads 145
2222 Comparative Study of sLASER and PRESS Techniques in Magnetic Resonance Spectroscopy of Normal Brain

Authors: Shin Ku Kim, Yun Ah Oh, Eun Hee Seo, Chang Min Dae, Yun Jung Bae

Abstract:

Objectives: The commonly used PRESS technique in magnetic resonance spectroscopy (MRS) has the limitation of incomplete water suppression. The recently developed sLASER technique is known for its improved effectiveness in suppressing the water signal; however, no prior study has compared the two sequences in the normal human brain. In this study, we aimed to compare the performance of both techniques in brain MRS. Materials and methods: From January 2023 to July 2023, thirty healthy participants (mean age 38 years; 17 male, 13 female) without underlying neurological disease were enrolled. All participants underwent single-voxel MRS using both the PRESS and sLASER techniques on 3T MRI. Two regions of interest were placed, in the left medial thalamus and the left parietal white matter (WM), by a single reader. SpectroView Analysis (SW5, Philips, Netherlands) provided automatic measurements, including the signal-to-noise ratio (SNR) and peak height of water, the N-acetylaspartate (NAA)-water/choline (Cho)-water/creatine (Cr)-water ratios, and the NAA-Cr/Cho-Cr ratios. Measurements from the PRESS and sLASER techniques were compared using paired t-tests and Bland-Altman methods, and variability was assessed using coefficients of variation (CV). Results: The SNR and peak height of water were significantly lower with sLASER than with PRESS (left medial thalamus, sLASER SNR/peak height 2092±475/328±85 vs. PRESS 2811±549/440±105; left parietal WM, 5422±1016/872±196 vs. 7152±1305/1150±278; all P<0.001). Accordingly, the NAA-water/Cho-water/Cr-water ratios and NAA-Cr/Cho-Cr ratios were significantly higher with sLASER than with PRESS (all P<0.001). The variabilities of the NAA-water/Cho-water/Cr-water ratios and the Cho-Cr ratio in the left medial thalamus were lower with sLASER than with PRESS (CV, sLASER vs. PRESS: 19.9 vs. 58.1, 19.8 vs. 54.7, 20.5 vs. 43.9, and 11.5 vs. 16.2). Conclusion: The sLASER technique demonstrated enhanced water suppression, resulting in higher metabolite-to-water ratios and reduced variability in brain metabolite measurements. Therefore, sLASER could offer a more precise and stable method for identifying brain metabolites.
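
A minimal sketch of the statistics reported above (paired t-test and coefficient of variation), using hypothetical paired SNR values drawn with the means and standard deviations quoted in the abstract; the study's actual analysis pipeline is not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (one value per participant, n = 30),
# e.g. water SNR in the left medial thalamus under each sequence.
rng = np.random.default_rng(0)
press = rng.normal(2811, 549, 30)
slaser = rng.normal(2092, 475, 30)

# Paired t-test, as used in the abstract to compare the two sequences.
t_stat, p_value = stats.ttest_rel(press, slaser)

# Coefficient of variation (%) quantifies the variability of each technique.
cv = lambda x: 100 * np.std(x, ddof=1) / np.mean(x)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, "
      f"CV PRESS = {cv(press):.1f}%, CV sLASER = {cv(slaser):.1f}%")
```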

Keywords: magnetic resonance spectroscopy, brain, sLASER, PRESS

Procedia PDF Downloads 46
2221 High-Frequency Acoustic Microscopy Imaging of Pellet/Cladding Interface in Nuclear Fuel Rods

Authors: H. Saikouk, D. Laux, Emmanuel Le Clézio, B. Lacroix, K. Audic, R. Largenton, E. Federici, G. Despaux

Abstract:

Pressurized Water Reactor (PWR) fuel rods are made of ceramic pellets (e.g., UO2 or (U,Pu)O2) assembled in a zirconium cladding tube. By design, an initial gap exists between these two elements. During irradiation, both undergo transformations that progressively lead to the closure of this gap. A local, non-destructive examination of the pellet/cladding interface would help identify the zones where the two materials are in contact, particularly at high burnups, when a strong chemical bonding occurs under nominal operating conditions in PWR fuel rods. The evolution of the pellet/cladding bonding during irradiation is also an area of interest. In this context, the Institute of Electronics and Systems (IES, UMR CNRS 5214), in collaboration with the French Alternative Energies and Atomic Energy Commission (CEA), is developing a high-frequency acoustic microscope adapted to the inspection and imaging of the pellet/cladding interface with high resolution. Because the geometrical, chemical, and mechanical nature of the contact interface is neither axially nor radially homogeneous, 2D images of this interface need to be acquired by the ultrasonic system with high-performance signal processing and by means of controlled displacement of the sample rod along both its axis and its circumference. Modeling the multi-layer system (water, cladding, fuel, etc.) is necessary in the present study to account for all the parameters that influence the resolution of the acquired images. The first prototype of this microscope and the first results of the visualization of the inner face of the cladding will be presented in a poster, highlighting the potential of the system, whose final objective is to be integrated into the existing MEGAFOX bench dedicated to the non-destructive examination of irradiated fuel rods at the LECA-STAR facility in CEA Cadarache.

Keywords: high-frequency acoustic microscopy, multi-layer model, non-destructive testing, nuclear fuel rod, pellet/cladding interface, signal processing

Procedia PDF Downloads 191
2220 TRAC: A New Software-Based Track Circuit for Traffic Regulation

Authors: Jérôme de Reffye, Marc Antoni

Abstract:

Following the development of the ERTMS system, we believe it is worthwhile to develop another software-based track circuit system that would fit secondary railway lines, with an easy implementation and a low sensitivity to rail-wheel impedance variations. We call this track circuit 'Track Railway by Automatic Circuits' (TRAC). To be implemented internationally, this system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent of the axle counters used in Germany ('Counting Axles', in French 'compteur d'essieux'). This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with a space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal, and the set of frequencies assigned to the track sections is a set of orthogonal functions in a Hilbert space. Thus the failure probability of track section separation can be precisely calculated on the basis of the signal-to-noise ratio (SNR). The SNR is a function of the level of traction current conducted by the rails, which is why we developed a very powerful algorithm to reject noise and jamming and obtain an SNR compatible with the precision required for the track circuit and with SIL 4; the SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) train localization is precisely defined by a calibration system; the operation bypasses the GSM-R radio system of ERTMS, the track circuit is naturally protected against radio-type jammers, and after calibration the track circuit is autonomous; ii) a mathematical topology adapted to train localization, following the train through a linear time filtering of the received signal; track sections are numerically defined and can be modified with a software update. The system was numerically simulated, and the results were beyond our expectations: we achieved a precision of one meter, and the rail-ground and rail-wheel impedance sensitivity analyses gave excellent results. The results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain the results. This track circuit already meets Level 3 of the ERTMS system and will be much cheaper to implement and to operate. Traffic regulation is based on variable-length track sections: as traffic grows, the maximum speed is reduced and the track section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.
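
To illustrate the orthogonal-frequency principle behind TRAC, the sketch below assigns each track section a carrier that is orthogonal to the others over the integration window and detects the active section by correlation. The sampling rate, frequencies, and noise level are hypothetical; the actual TRAC algorithm is far more elaborate.

```python
import numpy as np

fs = 10_000                            # sampling rate (Hz), hypothetical
T = 0.1                                # integration window (s)
t = np.arange(0, T, 1 / fs)
section_freqs = [100, 200, 300, 400]   # one carrier per track section (Hz), hypothetical;
                                       # multiples of 1/T, hence orthogonal over the window

# Received signal: section 2's carrier buried in traction-current noise.
rx = np.sin(2 * np.pi * section_freqs[2] * t) + 2.0 * np.random.randn(t.size)

# Correlate against each section's reference; orthogonality over the window
# means only the matching carrier yields a large inner product.
scores = [abs(np.dot(rx, np.sin(2 * np.pi * f * t))) for f in section_freqs]
print("detected section:", int(np.argmax(scores)))
```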

Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling

Procedia PDF Downloads 331
2219 Fault Diagnosis of Manufacturing Systems Using AntTreeStoch with Parameter Optimization by ACO

Authors: Ouahab Kadri, Leila Hayet Mouss

Abstract:

In this paper, we present three diagnostic modules for complex and dynamic systems. These modules are based on three ant colony algorithms: AntTreeStoch, Lumer & Faieta, and Binary Ant Colony. We chose these algorithms for their simplicity and their wide application range. However, they cannot be used in their basic forms, as they have several limitations; to use them in a diagnostic system, we propose three variants. We tested these variants on datasets from two industrial systems, a clinkering system and a pasteurization system.

Keywords: ant colony algorithms, complex and dynamic systems, diagnosis, classification, optimization

Procedia PDF Downloads 298
2218 Vertical and Horizontal Distribution Patterns of Major and Trace Elements: Surface and Subsurface Sediments of the Endorheic Lake Acigol Basin, Denizli, Turkey

Authors: M. Budakoglu, M. Karaman

Abstract:

Although Lake Acıgöl is located in an area with limited influence from urban and industrial pollution sources, there is nevertheless a need to understand all potential lithological and anthropogenic sources of priority contaminants in this closed basin. This study discusses the vertical and horizontal distribution patterns of major and trace elements in recent lake sediments to better understand their geochemical relation to the lithological units of the Lake Acıgöl basin. It also provides reliable background levels for the region through detailed data on the surface lithological units. The detailed results for surface, subsurface, and shallow-core sediments from this relatively unperturbed ecosystem highlight its importance as a conservation area, despite the large-scale industrial salt production activity. While the P2O5/TiO2 versus MgO/CaO classification diagram indicates a magmatic and sedimentary origin for the lake sediments, the log(SiO2/Al2O3) versus log(Na2O/K2O) classification diagram expresses lithological assemblages of shale, iron-shale, wacke, and arkose. The TiO2 vs. SiO2 and P2O5/TiO2 vs. MgO/CaO plots also support the origin of the primary magma source. The average compositions of the 20 different lithological units were used as a proxy for the geochemical background in the study area. As expected from weathered rock materials, there is a large variation in the major element content of all analyzed lake samples. The A-CN-K and A-CNK-FM ternary diagrams were used to deduce weathering trends; according to these diagrams, the surface and subsurface sediments display an intense weathering history. Most of the sediment samples plot around UCC and TTG, suggesting a low to moderate weathering history for the provenance. The sediments plot in a region clearly suggesting Al2O3, CaO, Na2O, and K2O contents similar to those of the lithological samples.

Keywords: Lake Acıgöl, recent lake sediment, geochemical speciation of major and trace elements, heavy metals, Denizli, Turkey

Procedia PDF Downloads 411
2217 Estimating the Traffic Impacts of Green Light Optimal Speed Advisory Systems Using Microsimulation

Authors: C. B. Masera, M. Imprialou, L. Budd, C. Morton

Abstract:

Even though signalised intersections are necessary for urban road traffic management, they can act as bottlenecks and disrupt traffic operations. Interrupted traffic flow causes congestion, delays, stop-and-go conditions (i.e., excessive acceleration/deceleration) and longer journey times. Vehicle and infrastructure connectivity offers the potential to provide improved new services with additional driver-assistance functions. This paper focuses on one application of vehicle-to-infrastructure communication, namely Green Light Optimal Speed Advisory (GLOSA). To assess the effectiveness of GLOSA in the urban road network, an integrated microscopic traffic simulation framework is built in the VISSIM software. Vehicle movements and vehicle-infrastructure communications are simulated through the External Driver Model interface. A control algorithm is developed that recommends an optimal speed, continuously updated at every time step, for all vehicles approaching a signal-controlled point. This algorithm allows vehicles to pass a traffic signal without stopping, or to minimise stopping times at a red phase. The study is performed with connected vehicles at a 100% penetration rate; conventional vehicles are also simulated in the same network as a reference. A straight road segment composed of two opposite directions with two traffic lights per lane is studied. The simulation is run under traffic volumes of 150 and 200 vehicles per hour per lane to identify how different traffic densities influence the benefits of GLOSA. The results indicate that traffic flow is improved by the application of GLOSA: vehicles passed through the traffic lights more smoothly, and waiting times were reduced by up to 28 seconds. Average delays decreased for the entire network by 86.46% and 83.84% under traffic densities of 150 and 200 vehicles per hour per lane, respectively.
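
A minimal sketch of one common form of GLOSA logic, choosing a speed that brings the vehicle to the stop line inside the next green window. The speed envelope and the fallback behaviour are assumptions; the paper's actual control algorithm (implemented through VISSIM's External Driver Model) is not reproduced here.

```python
def glosa_advice(dist_m, t_green_start, t_green_end, v_min=5.0, v_max=13.9):
    """Advise a speed (m/s) that puts the vehicle at the stop line inside the
    green window [t_green_start, t_green_end] (seconds from now).
    The speed envelope is hypothetical (13.9 m/s ~ 50 km/h)."""
    v_hi = dist_m / max(t_green_start, 1e-6)  # arrive just as green begins
    v_lo = dist_m / t_green_end               # arrive just before green ends
    v = min(v_hi, v_max)                      # prefer the fastest feasible arrival
    if v < max(v_lo, v_min):                  # window unreachable within the envelope
        return None                           # caller falls back to a stop profile
    return max(v, v_min)

# Example: 200 m from the stop line, green from t+20 s to t+35 s.
print(glosa_advice(200.0, 20.0, 35.0))  # -> 10.0 m/s, arriving as the phase turns green
```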

Keywords: connected vehicles, GLOSA, intelligent transport systems, vehicle-to-infrastructure communication

Procedia PDF Downloads 171
2216 A Comprehensive Framework for Fraud Prevention and Customer Feedback Classification in E-Commerce

Authors: Samhita Mummadi, Sree Divya Nagalli, Harshini Vemuri, Saketh Charan Nakka, Sumesh K. J.

Abstract:

One of the most significant challenges of today's digital era is the alarming increase in fraudulent activities on online platforms. The appeal of online shopping (avoiding long queues in shopping malls, the availability of a wide variety of products, and home delivery of goods) has driven the rapid growth of vast online shopping platforms, and with it a major increase in fraudulent activity. This loop of online shopping and transactions has opened the door for fraudulent users to commit fraud. For instance, consider a store that orders thousands of products all at once; the massive number of items purchased and the associated transactions may turn out to be fraudulent, leading to a huge loss for the seller. Scenarios like these underscore the urgent need for machine learning approaches to combat fraud in online shopping. By leveraging robust algorithms, namely KNN, Decision Trees, and Random Forest, which are highly effective at generating accurate results, this research endeavors to discern patterns indicative of fraudulent behavior within transactional data. The primary motive and main focus is a comprehensive solution that empowers e-commerce administrators to detect and prevent fraud in a timely manner. In addition, sentiment analysis is harnessed in the model so that the e-commerce administrator can respond to customers' concerns, feedback, and comments, improving the user experience. The ultimate objective of this study is to harden online shopping platforms against fraud and ensure a safer shopping experience. The model reports an accuracy of 84%. The findings and observations noted during this work lay the groundwork for future advancements in the development of more resilient and adaptive fraud detection systems, which will become crucial as technologies continue to evolve.
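
A minimal sketch of the Random Forest branch of the approach, using scikit-learn on hypothetical transactional features; the feature set, labels, and parameters are illustrative only, not the paper's dataset or configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical transactional features: order size, amount, account age (days),
# orders in the last 24 h. Labels: 1 = fraudulent, 0 = legitimate.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=5000) > 2.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

# Random Forest, one of the three algorithms named in the abstract; balanced
# class weights address the imbalanced-classification issue in the keywords.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```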

Keywords: behavior analysis, feature selection, fraudulent pattern recognition, imbalanced classification, transactional anomalies

Procedia PDF Downloads 27
2215 The Secrecy Capacity of the Semi-Deterministic Wiretap Channel with Three State Information

Authors: Mustafa El-Halabi

Abstract:

A general model of the wiretap channel with states is considered, where the legitimate receiver's and the wiretapper's observations depend on three states S1, S2 and S3. State S1 is non-causally known to the encoder, S2 is known to the receiver, and S3 remains unknown. A secure coding scheme based on structured binning is proposed, and it is shown to achieve the secrecy capacity when the signal at the legitimate receiver is a deterministic function of the input.

Keywords: physical layer security, interference, side information, secrecy capacity

Procedia PDF Downloads 389
2214 Spatial Patterns of Urban Expansion in Kuwait City between 1989 and 2001

Authors: Saad Algharib, Jay Lee

Abstract:

Urbanization is a complex phenomenon that occurs as a city develops from one form to another; in other words, it is the process by which land use/land cover changes from rural to urban. Since oil exploration began, Kuwait City has been growing rapidly due to urbanization and population growth, both by natural increase and by inward immigration. The main objective of this study is to detect changes in urban land use/land cover and to examine the changing spatial patterns of urban growth in and around Kuwait City between 1989 and 2001. In addition, this study evaluates the spatial patterns of the detected changes and how they relate to the spatial configuration of the city. Remote sensing and geographic information systems have become very useful and important tools in urban studies, because their integration allows analysts and planners to detect, monitor, and analyze urban growth in a region effectively. Moreover, planners and users can predict future trends of urban growth with remotely sensed and GIS data, because such data can be updated effectively to the required precision levels. In order to identify the new urban areas between 1989 and 2001, the study uses satellite images of the study area and remote sensing techniques to classify them. An unsupervised classification method was applied to derive land use and land cover data layers. After the unsupervised classification, a GIS overlay function was applied to the classified images to detect the locations and patterns of the new urban areas that developed during the study period. GIS was also utilized to evaluate the distribution of the spatial patterns; for example, Moran's index was computed for all data inputs to examine the urban growth distribution. Furthermore, this study assesses whether the spatial patterns and processes of these changes take place in a random fashion or with certain identifiable trends. The results indicate that urban land expanded by 10 percentage points during the study period, from 32.4% in 1989 to 42.4% in 2001. They also reveal that the largest increase in urban area occurred between the major highways beyond the fourth ring road from the center of Kuwait City, and that the spatial distribution of urban growth was clustered.
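
A minimal sketch of the Moran's I statistic used to test whether the detected urban growth is clustered or random, computed here on a raster with rook-contiguity weights; the grids are toy data, not the Kuwait City classification.

```python
import numpy as np

def morans_i_grid(x):
    """Global Moran's I on a 2-D raster with rook-contiguity weights
    (w_ij = 1 for horizontal/vertical neighbours). x can be, e.g.,
    a binary new-urban-area grid."""
    z = x - x.mean()
    # Horizontal and vertical neighbour pairs, counted in both directions.
    num = 2 * np.sum(z[:, :-1] * z[:, 1:]) + 2 * np.sum(z[:-1, :] * z[1:, :])
    W = 2 * z[:, 1:].size + 2 * z[1:, :].size   # total weight (directed pairs)
    return (x.size / W) * num / np.sum(z ** 2)

# Toy example: a clustered block of new urban pixels yields I near +1,
# while a random scatter yields I near 0.
grid = np.zeros((50, 50)); grid[10:25, 10:25] = 1
print(f"Moran's I (clustered): {morans_i_grid(grid):.2f}")
print(f"Moran's I (random):    {morans_i_grid((np.random.rand(50, 50) < 0.1).astype(float)):.2f}")
```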

Keywords: geographic information systems, remote sensing, urbanization, urban growth

Procedia PDF Downloads 171
2213 Ultrasound Therapy: Amplitude Modulation Technique for Tissue Ablation by Acoustic Cavitation

Authors: Fares A. Mayia, Mahmoud A. Yamany, Mushabbab A. Asiri

Abstract:

In recent years, non-invasive focused ultrasound (FU) has been utilized to generate bubbles (cavities) that ablate target tissue by mechanical fractionation. Intensities >10 kW/cm² are required to generate the inertial cavities. The generation, rapid growth, and collapse of these inertial cavities cause tissue fractionation, a process called histotripsy. The ability to fractionate tissue from outside the body has many clinical applications, including the destruction of tumor masses. The fractionation process leaves a void at the treated site, where all the affected tissue is liquefied into sub-micron particles that are eventually absorbed by the body. Histotripsy is a promising non-invasive treatment modality. This paper presents a technique for generating inertial cavities at lower intensities (<1 kW/cm²). The technique (patent pending) is based on amplitude modulation (AM), whereby a low-frequency signal modulates the amplitude of a higher-frequency FU wave. The cavitation threshold is lower at low frequencies: the intensity required to generate cavitation in water at 10 kHz is two orders of magnitude lower than the intensity at 1 MHz. The amplitude modulation technique can operate in both continuous-wave (CW) and pulsed-wave (PW) modes, and the percentage modulation (modulation index) can be varied from 0% (thermal effect) to 100% (cavitation effect), thus allowing a range of ablative effects from hyperthermia to histotripsy. Furthermore, changing the frequency of the modulating signal allows the size of the generated cavities to be controlled. Results from in vitro work demonstrate the efficacy of the new technique in fractionating soft tissue and solid calcium carbonate (chalk) material. The technique, when combined with MR or ultrasound imaging, will provide a precise treatment modality for ablating diseased tissue without affecting the surrounding healthy tissue.
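
A minimal sketch of the amplitude-modulated drive signal described above: a low-frequency modulating signal riding on the high-frequency FU carrier, with the modulation index m sweeping the effect from thermal (m = 0) to cavitational (m = 1). All frequencies and the sampling rate are hypothetical.

```python
import numpy as np

fs = 50e6                       # sampling rate (Hz), hypothetical
t = np.arange(0, 2e-3, 1 / fs)  # 2 ms of signal
f_c = 1e6                       # FU carrier frequency (1 MHz)
f_mod = 10e3                    # low-frequency modulating signal (10 kHz)
m = 1.0                         # modulation index: 0 = pure carrier (thermal),
                                # 1 = 100 % AM (cavitation regime)

# Amplitude-modulated focused-ultrasound drive signal: a low-frequency
# envelope multiplying the high-frequency carrier, as described in the abstract.
envelope = 1 + m * np.sin(2 * np.pi * f_mod * t)
drive = envelope * np.sin(2 * np.pi * f_c * t)
```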

Keywords: focused ultrasound therapy, histotripsy, inertial cavitation, mechanical tissue ablation

Procedia PDF Downloads 319
2212 Normalized Compression Distance Based Scene Alteration Analysis of a Video

Authors: Lakshay Kharbanda, Aabhas Chauhan

Abstract:

In this paper, an application of Normalized Compression Distance (NCD) to detect notable scene alterations in videos is presented. Several research groups have developed methods for image classification using NCD, a computable approximation to the Normalized Information Distance (NID), by studying the degree of similarity between images. The timeframes where significant aberrations between the frames of a video occur are identified by applying a threshold on the NCD values, computed with two compressors, LZMA and BZIP2, and by defining scene alterations using Pixel Difference Percentage metrics.
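
A minimal sketch of the NCD computation at the core of the method, using Python's built-in bz2 and lzma modules (the BZIP2 and LZMA compressors named in the abstract); the frame bytes are placeholders.

```python
import bz2
import lzma

def ncd(x: bytes, y: bytes, compress=bz2.compress) -> float:
    """Normalized Compression Distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the compressed length. Values near 0 mean similar inputs."""
    cx, cy, cxy = len(compress(x)), len(compress(y)), len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical usage on two consecutive video frames serialized as bytes:
# a jump in NCD between frames signals a scene alteration.
frame_a = b"\x10" * 5000
frame_b = b"\x10" * 4000 + b"\xff" * 1000
print(f"bz2 NCD:  {ncd(frame_a, frame_b):.3f}")
print(f"lzma NCD: {ncd(frame_a, frame_b, compress=lzma.compress):.3f}")
```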

Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error

Procedia PDF Downloads 340
2211 Recognition of Tifinagh Characters with Missing Parts Using Neural Network

Authors: El Mahdi Barrah, Said Safi, Abdessamad Malaoui

Abstract:

In this paper, we present an algorithm for reconstructing Tifinagh characters from incomplete 2D scans. The algorithm is based on the correlation between the lost block and its neighbors. The proposed system contains three main parts: pre-processing, feature extraction, and recognition. In the first step, we construct a database of Tifinagh characters. In the second step, we apply a shape analysis algorithm. In the classification stage, we use a neural network. The simulation results demonstrate that the proposed method gives good results.

Keywords: Tifinagh character recognition, neural networks, local cost computation, ANN

Procedia PDF Downloads 334
2210 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan, V. Kepuska, I. Kostnaic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using "basis vectors" that are learned from the audio data itself. The basis vectors are shown to provide higher data compression and better signal-to-noise enhancement than the Gabor and gammatone "seed atoms" that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using "envelope samples" of windowed segments of the audio data. The envelope samples are extracted by identifying audio data segments that are locally coherent with Gabor or gammatone seed atoms, found by matching pursuit, and are formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as for early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors; this display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the best denoising basis vectors.
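
A minimal sketch of the matching pursuit step used to find audio segments locally coherent with the seed atoms; the Gabor-like dictionary of windowed sinusoids and all sizes are illustrative assumptions, not the paper's atom set.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit: at each step pick the (unit-norm) atom most
    correlated with the residual and subtract its projection.
    Returns the coefficients and the residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual

# Hypothetical Gabor-like dictionary: windowed sinusoids at several frequencies.
n = 256
t = np.arange(n)
window = np.hanning(n)
atoms = np.stack([window * np.sin(2 * np.pi * f * t / n) for f in range(1, 33)], axis=1)
atoms /= np.linalg.norm(atoms, axis=0)          # unit-norm atoms

sig = 3 * atoms[:, 4] + 0.5 * np.random.randn(n)
coeffs, res = matching_pursuit(sig, atoms, n_atoms=5)
print(f"dominant atom: {np.argmax(np.abs(coeffs))}, residual energy: {np.sum(res**2):.2f}")
```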

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 252
2209 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics

Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere

Abstract:

Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often display non-linearity and non-stationarity, exhibit strong fluctuations at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively recent technique, part of a more general signal processing method called the Hilbert-Huang transform. It is particularly suitable for non-linear and non-stationary signals and consists in decomposing a signal, in an auto-adaptive way, into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of EMD are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, its main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap. To overcome this problem, J. Gilles proposed an alternative entitled "Empirical Wavelet Transform" (EWT), which builds a bank of filters from a segmentation of the original signal's Fourier spectrum. The method is based on the ideas used in the construction of both Littlewood-Paley and Meyer's wavelets: the heart of the method lies in segmenting the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is tied to the Fourier spectrum, the frequency resolution provided by EWT is higher than that of EMD and therefore overcomes the mode-mixing problem. On the other hand, while EWT can detect the frequencies involved in the fluctuations of the original time series, it does not associate the detected frequencies with specific modes of variability, as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on coupling EMD and EWT by using the spectral density content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained by EMD, EWT and EAWD on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
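
A minimal sketch of the EMD stage, assuming the open-source PyEMD package (installed as EMD-signal); the synthetic signal stands in for a geophysical series such as the ozone record, and is not the study's data.

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

# Synthetic non-stationary signal: two oscillating components plus a trend.
t = np.linspace(0, 10, 2000)
s = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 0.5 * t) + 0.1 * t

# EMD decomposes the signal, auto-adaptively, into a sum of IMFs ordered
# from the fastest oscillating mode to the slowest (plus a residual trend).
imfs = EMD().emd(s)
print("number of IMFs (incl. residual):", imfs.shape[0])
print("max reconstruction error:", np.max(np.abs(imfs.sum(axis=0) - s)))
```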

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet

Procedia PDF Downloads 137
2208 Classification of Sturm-Liouville Problems at Infinity

Authors: Kishor J. Shinde

Abstract:

We determine the values of k and p for which the Sturm-Liouville differential operator τu = -(d^2 u)/(dx^2) + kx^p u is in the limit point case or the limit circle case at infinity. In particular, it is shown that τ is in the limit point case (i) for p=2 and all k, (ii) for all p and k=0, (iii) for all p and k>0, (iv) for 0≤p≤2 and k<0, and (v) for p<0 and k<0; τ is in the limit circle case for p>2 and k<0.
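
For readability, the operator and the classification can be restated in display form; this is a direct transcription of the conditions above, not an extension of them.

```latex
\tau u = -\frac{d^{2}u}{dx^{2}} + k\,x^{p}\,u .

\text{Limit point at } \infty:\quad
(p=2,\ \forall k),\ (k=0,\ \forall p),\ (k>0,\ \forall p),\
(k<0,\ 0 \le p \le 2),\ (k<0,\ p<0).

\text{Limit circle at } \infty:\quad (k<0,\ p>2).
```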

Keywords: limit point case, limit circle case, Sturm-Liouville, infinity

Procedia PDF Downloads 367
2207 Contribution to Improving the DFIG Control Using a Multi-Level Inverter

Authors: Imane El Karaoui, Mohammed Maaroufi, Hamid Chaikhy

Abstract:

The Doubly Fed Induction Generator (DFIG) is one of the most reliable wind generators. A major problem in wind power generation is producing a sinusoidal signal with very low THD at variable speed, a difficulty caused by the two-level inverters commonly used. This paper presents a multi-level inverter whose objective is to reduce the THD and the dimensions of the output filter. The work proposes a three-level NPC-type inverter; simulation results are presented, demonstrating the efficiency of the proposed inverter.
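
For reference, the total harmonic distortion that the multi-level topology is intended to reduce is the standard textbook quantity (not a formula from this paper):

```latex
\mathrm{THD} = \frac{\sqrt{\sum_{n=2}^{\infty} V_{n}^{2}}}{V_{1}},
```

where V1 is the RMS amplitude of the fundamental and Vn that of the n-th harmonic; adding inverter levels yields a stepped output waveform closer to a sinusoid, lowering the harmonic content and hence the THD.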

Keywords: DFIG, multilevel inverter, NPC inverter, THD, induction machine

Procedia PDF Downloads 249
2206 Rice Area Determination Using Landsat-Based Indices and Land Surface Temperature Values

Authors: Burçin Saltık, Levent Genç

Abstract:

This study aimed to establish a procedure for identifying rice cultivation areas within the Thrace and Marmara regions of Turkey using remote sensing and GIS. Landsat 8 (OLI-TIRS) images acquired during the 2013 production season (Path/Row 181/32) were used. Four different seasonal images were generated using the original bands and different transformation techniques. All images were classified individually using supervised classification techniques, and Land Use Land Cover (LULC) maps were generated with 8 classes. The area (ha, %) of each class was calculated. In addition, district-based rice distribution maps were developed, and their results were compared with the actual rice cultivation area records of the Turkish Statistical Institute (TurkSTAT; TSI). Accuracy assessments were conducted, and the most accurate map was selected based on the accuracy assessment and its coherence with the TSI records. Additionally, rice pixels on slopes over 4° were considered misclassified and were eliminated using a slope map and GIS tools. Finally, randomized rice zones were selected to obtain the maximum-minimum value ranges of the NDVI, LSWI, and LST images for each date (May, June, July, August, and September separately), to test whether these ranges can be used for rice area determination via the raster calculator tool of ArcGIS. The most accurate classification for rice determination was obtained from the seasonal LSWI LULC map, after considering the TSI data and accuracy assessment results and eliminating the misclassified pixels. According to the results, 83,151.5 ha of rice area exists within the study area; however, this figure is 12,702.3 ha higher than the TSI records. The use of the maximum-minimum ranges of NDVI, LSWI, and LST over rice areas was tested in the Meric district: the value ranges obtained from the July imagery gave the closest results to the TSI records, with a difference of only 206.4 ha. Such differences are expected given the relatively low resolution of the images; thus, images with higher spectral, spatial, temporal, and radiometric resolutions may provide more reliable results.
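
A minimal sketch of the two spectral indices used above, with the standard Landsat 8 OLI band assignments (band 4 = red, band 5 = NIR, band 6 = SWIR1); the arrays and thresholds are placeholders, not the study's calibrated per-date ranges.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); Landsat 8 OLI bands 5 and 4."""
    return (nir - red) / (nir + red + 1e-10)

def lswi(nir, swir1):
    """LSWI = (NIR - SWIR1) / (NIR + SWIR1); Landsat 8 OLI bands 5 and 6."""
    return (nir - swir1) / (nir + swir1 + 1e-10)

# Placeholder reflectance arrays standing in for the Landsat 8 scene
# (Path/Row 181/32); in practice these are read from the GeoTIFF bands.
b4, b5, b6 = (np.random.rand(100, 100) for _ in range(3))

# Flooded rice fields keep LSWI relatively high while NDVI rises with canopy
# growth; per-date min-max ranges over known rice zones supply the
# raster-calculator thresholds described in the abstract.
ndvi_img, lswi_img = ndvi(b5, b4), lswi(b5, b6)
rice_mask = (ndvi_img > 0.3) & (lswi_img > 0.1)   # illustrative thresholds only
```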

Keywords: Landsat 8 (OLI-TIRS), LST, LSWI, LULC, NDVI, rice

Procedia PDF Downloads 228
2205 Low-Power Digital Filters Design Using a Bypassing Technique

Authors: Thiago Brito Bezerra

Abstract:

This paper presents a novel approach to reducing the power consumption of digital filters, based on dynamic bypassing of partial products in their multipliers. The bypassing elements incorporated into the multiplier hardware eliminate redundant signal transitions that appear within the carry-save adders when a partial product is zero. This technique reduces power consumption by around 20%. The circuit was implemented in the AMS 0.18 µm technology. The bypassing technique applied to the circuits is outlined.

Keywords: digital filter, low-power, bypassing technique, low-pass filter

Procedia PDF Downloads 382
2204 Pulse Generator with Constant Pulse Width

Authors: Rozita Borhan, Hanif Che Lah, Wee Leong Son

Abstract:

This paper presents a method to produce a stable and accurate constant output pulse width regardless of the amplitude, period, and pulse width variations of the input signal source. The generated pulse is used in numerous applications as the reference input to other circuits in the system. It is therefore crucial to produce a clean and constant pulse width to ensure the system works as accurately as expected.

Keywords: amplitude, constant pulse width, frequency divider, pulse generator

Procedia PDF Downloads 394
2203 Hybrid GNN-Based Machine Learning Forecasting Model for Industrial IoT Applications

Authors: Atish Bagchi, Siva Chandrasekaran

Abstract:

Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although a vast literature exists on time-series data processing using machine learning, the challenges faced by industries that lead to unplanned downtimes are: current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN)-based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and the consequential financial losses. Method: The data stored within a process control system, e.g., Industrial IoT or a Data Historian, is generally sampled during acquisition from the sensor (source) and when persisted in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model that combines the expressive and extrapolation capability of GNNs, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly, or through a Process Data Historian, SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time series from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant's SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors.
Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. The model can interface with a plant's process control system in real time to perform forecasting and classification tasks, helping asset management engineers operate their machines more efficiently and reduce unplanned downtimes. A series of trials is planned for this model in other manufacturing industries.

Keywords: GNN, entropy, anomaly detection, industrial time series, AI, IoT, Industry 4.0, machine learning

Procedia PDF Downloads 150
2202 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique

Authors: Ahmet Karagoz, Irfan Karagoz

Abstract:

Synthetic Aperture Radar (SAR) is a radar mechanism that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, day or night. In this study, SAR images of military vehicles with different azimuth and depression angles are first pre-processed. The main purpose here is to reduce the strong speckle noise found in SAR images; for this, the adaptive Wiener filter, the mean filter, and the median filter are used to reduce the amount of speckle noise without causing loss of data. During the image segmentation phase, pixel values are thresholded so that the target vehicle region is separated from regions containing unnecessary information: the brightest 20% of pixels are set to 255 and the remaining pixels to 0. In addition, a segmentation comparison is performed using appropriate parameters of the statistical region merging algorithm. In the feature extraction step, the feature vectors of the vehicles are obtained using banks of Gabor filters whose orientation, frequency, and angle parameters are varied to extract the important, distinctive features of the images. Finally, the images are classified by the sparse representation method, using l₁-norm minimization. A joint dictionary is formed by stacking, side by side, the feature vectors generated from the training images of the military vehicle types, and this dictionary is arranged in matrix form. To classify a vehicle, each test image is converted to vector form and the l₁-norm analysis of the sparse representation method is applied against the dictionary matrix. As a result, correct recognition is performed by matching the target images of military vehicles with the test images via the sparse representation method; a 97% classification success rate is obtained on SAR images of the different military vehicle types.
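
A minimal sketch of sparse representation classification (SRC) with l₁-norm minimization, here solved with scikit-learn's Lasso as a stand-in l₁ solver; the dictionary, labels, and noise are synthetic, not the paper's Gabor feature database.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, alpha=0.01):
    """Sparse representation classification: solve an l1-regularized
    least-squares problem y ~ D x, then assign the class whose atoms
    reconstruct y with the smallest residual. D holds one (unit-norm)
    training feature vector per column."""
    x = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000).fit(D, y).coef_
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)      # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)

# Hypothetical feature vectors: 3 vehicle classes, 20 training images each.
rng = np.random.default_rng(0)
D = rng.normal(size=(128, 60)); D /= np.linalg.norm(D, axis=0)
labels = np.repeat([0, 1, 2], 20)
y = D[:, 25] + 0.05 * rng.normal(size=128)       # noisy test image of class 1
print("predicted class:", src_classify(D, labels, y))
```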

Keywords: automatic target recognition, sparse representation, image classification, SAR images

Procedia PDF Downloads 366
2201 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis

Authors: H. Jung, N. Kim, B. Kang, J. Choe

Abstract:

History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to the uncertainties of the initial reservoir models; it is therefore important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to extract the main geological characteristics of the models: through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models whose well oil production rates (WOPR) are the most similar or the most dissimilar to the true values (10% each); the other 80% of the models are then classified by the trained SVM, and we select the models on the low-WOPR-error side. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models that have a channel trend similar to the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. The newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results, as it fails to find the correct geological features of the true model; history matching with the regenerated ensemble, however, offers reliable characterization by identifying the proper channel trend, and it gives dependable predictions of future performance with reduced uncertainties. In summary, we propose a novel classification scheme that integrates PCA, MDS, and SVM for regenerating reservoir models; the scheme can easily sort out reliable models that share the reference channel trend in a lower-dimensional space.
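
A minimal sketch of the PCA → MDS → SVM chain described above, using scikit-learn; the permeability ensemble and the WOPR-based labels are placeholders, not the study's reservoir models.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

# Hypothetical ensemble: 100 channel models, each flattened to 2,500 permeability values.
rng = np.random.default_rng(1)
perm = rng.lognormal(mean=3.0, sigma=1.0, size=(100, 2500))

# Step 1: PCA keeps the principal components with large eigenvalues.
scores = PCA(n_components=10, random_state=0).fit_transform(np.log(perm))

# Step 2: MDS projects the models onto a 2-D plane from Euclidean distances.
plane = MDS(n_components=2, random_state=0).fit_transform(scores)

# Step 3: train an SVM on the 20 models with the most similar/dissimilar WOPR
# (label 1 = similar to the truth, 0 = dissimilar), then classify the rest.
train_idx = np.arange(20)                 # placeholder for the WOPR-ranked models
y_train = np.repeat([1, 0], 10)
svm = SVC(kernel="rbf").fit(plane[train_idx], y_train)
selected = np.where(svm.predict(plane) == 1)[0]   # candidates for regeneration
```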

Keywords: history matching, principal component analysis, reservoir modelling, support vector machine

Procedia PDF Downloads 160
2200 Real-Time Visualization Using GPU-Accelerated Filtering of LiDAR Data

Authors: Sašo Pečnik, Borut Žalik

Abstract:

This paper presents a technique for real-time visualization and filtering of classified LiDAR point clouds. The visualization can display filtered information organized in layers according to the classification attribute stored within the LiDAR data sets. We explain the data structure and data management used, which enable real-time presentation of layered LiDAR data. Real-time visualization is achieved with level-of-detail (LOD) optimization based on the distance from the observer, without loss of quality. The filtering process is done in two steps, entirely executed on the GPU and implemented using programmable shaders.

Keywords: filtering, graphics, level-of-details, LiDAR, real-time visualization

Procedia PDF Downloads 308
2199 The Impact of Insider Trading on Open Market Share Repurchase: A Study in Indian Context

Authors: Sarthak Kumar Jena, Chandra Sekhar Mishra, Prabina Rajib

Abstract:

Purpose: This paper aims to derive an undervaluation signal from insider trading in Indian companies, where ownership is complex and concentrated, investor protection is weak, and insider trading rules and regulations are less stringent than in developed countries. The study examines the relationship between insider trading and short-term and long-term abnormal returns, as well as the relationship between insider trading and actual share repurchases by the firm. Methodology: A sample of 78 companies over the period 2008-2013 is analyzed, the sample size being limited by the availability of insider trading data in the Indian context. For the preliminary analysis, t-tests and Wilcoxon rank-sum tests are used to test for differences in insider trading before and after the share repurchase announcement. A Tobit model is used to determine whether insider trading influences share repurchase decisions. Market-model and buy-and-hold returns are calculated for the year preceding and the year following the share repurchase announcement. Findings: The paper finds that insider trading around share repurchases is higher than in control firms, and that there is a positive and significant difference in insider buying between the year preceding the buyback announcement and the year following it. Insider buying before the announcement has a positive influence on share repurchase decisions. We find that insider buying has a positive and significant relationship with the announcement return, whereas insider selling has a negative and significant relationship with it. Actual share repurchase and program completion also depend on insider trading before the repurchase. Research limitation: The study is constrained by the small sample size, so the results should be interpreted with this limitation in mind. Originality: To the best of our knowledge, this is the first India-based study to extend the insider trading literature to share repurchase events and to examine insider buying as an undervaluation signal.

Keywords: insider trading, buyback, open market share repurchase, signalling

Procedia PDF Downloads 199
2198 Active Features Determination: A Unified Framework

Authors: Meenal Badki

Abstract:

We address the problem of active feature determination, where the objective is to determine the set of examples on which additional data (such as lab tests) need to be gathered, given a large number of examples with only some features (such as demographics) and some examples with all features (such as a complete Electronic Health Record). We note that certain features may be more costly, unique, or laborious to gather. We propose a general active learning approach that is independent of classifiers and similarity metrics. It allows us to identify the examples that differ most from the fully observed data set and to acquire all the features for those that match. Our comprehensive evaluation, driven by four authentic clinical tasks, shows the efficacy of this approach.

Keywords: feature determination, classification, active learning, sample-efficiency

Procedia PDF Downloads 75
2197 Use of Fractal Geometry in Machine Learning

Authors: Fuad M. Alkoot

Abstract:

The main component of a machine learning system is the classifier. Classifiers are mathematical models that can perform classification tasks for a specific application area. Additionally, classifiers are often combined, using any of the available methods, to reduce the classification error rate. The benefits gained from combining multiple classifier designs have motivated the development of diverse approaches to multiple classifiers. We aim to investigate the use of fractal geometry to develop an improved classifier combiner. Initially, we experiment with measuring the fractal dimension of data and use the results in the development of a combiner strategy.
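
A minimal sketch of a box-counting estimate of the fractal dimension, the quantity the combiner strategy would build on; the grid scales and point sets are illustrative assumptions.

```python
import numpy as np

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a 2-D point set:
    count occupied boxes N(s) at several grid scales s and fit
    log N(s) ~ D * log s."""
    pts = (points - points.min(axis=0)) / np.ptp(points, axis=0)  # normalize to [0,1]^2
    counts = []
    for s in scales:
        boxes = np.unique(np.floor(pts * s).clip(max=s - 1), axis=0)
        counts.append(len(boxes))
    D, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return D

# Points on a line should give D ~ 1; a filled square gives D approaching 2.
line = np.column_stack([np.linspace(0, 1, 2000)] * 2)
square = np.random.rand(2000, 2)
print(f"line: {box_counting_dimension(line):.2f}, "
      f"square: {box_counting_dimension(square):.2f}")
```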

Keywords: fractal geometry, machine learning, classifier, fractal dimension

Procedia PDF Downloads 216
2196 GATA3-AS1 lncRNA as a Predictive Biomarker for Neoadjuvant Chemotherapy Response in Locally Advanced Luminal B Breast Cancer: An RNA ISH Study

Authors: Tania Vasquez Mata, Luis A. Herrera, Cristian Arriaga Canon

Abstract:

Background: Locally advanced breast cancer of the luminal B phenotype poses challenges due to its variable response to neoadjuvant chemotherapy. A predictive biomarker is needed to identify patients who will not respond to treatment, allowing for alternative therapies. This study aims to validate the use of the lncRNA GATA3-AS1 as a predictive biomarker using RNA in situ hybridization. Research aim: To determine whether GATA3-AS1 can serve as a biomarker of resistance to neoadjuvant chemotherapy in patients with locally advanced luminal B breast cancer. Methodology: The study utilizes RNA in situ hybridization with predesigned probes for GATA3-AS1 on formalin-fixed, paraffin-embedded (FFPE) tissue sections. The samples underwent pretreatment and protease treatment to enable probe penetration; chromogenic detection and signal evaluation were performed using specific criteria. Findings: Patients who did not respond to neoadjuvant chemotherapy showed a 3+ score for GATA3-AS1, while those who had a complete response had a 1+ score. Theoretical importance: This study demonstrates the potential clinical utility of GATA3-AS1 as a biomarker of resistance to neoadjuvant chemotherapy; identifying non-responders early on can help avoid unnecessary treatment and open alternative therapy options. Data collection and analysis procedures: Tissue samples from patients with locally advanced luminal B breast cancer were collected and processed using RNA in situ hybridization; signal evaluation was conducted under a microscope, and scoring was based on specific criteria. Question addressed: Can GATA3-AS1 serve as a predictive biomarker of neoadjuvant chemotherapy response in locally advanced luminal B breast cancer? Conclusion: The lncRNA GATA3-AS1 can be used as a biomarker of resistance to neoadjuvant chemotherapy in patients with locally advanced luminal B breast cancer; its identification by RNA in situ hybridization of tissue from the initial biopsy can aid treatment decision-making.

Keywords: biomarkers, breast neoplasms, genetics, neoadjuvant therapy, tumor

Procedia PDF Downloads 57
2195 Arabic Handwriting Recognition Using Local Approach

Authors: Mohammed Arif, Abdessalam Kifouche

Abstract:

Optical character recognition (OCR) plays a major role today: it can solve many serious problems and simplify human activities. OCR dates back to the 1970s, and many solutions have been proposed since, but unfortunately most of them supported only Latin scripts. This work proposes a system for off-line Arabic handwriting recognition, based on a structural segmentation method and using support vector machines (SVM) in the classification phase. We present a state of the art of character segmentation methods, followed by an overview of the OCR area, and we address the normalization problems we encountered. After a comparison between Arabic handwritten characters and the segmentation methods, we introduce our contribution, a segmentation algorithm.

Keywords: OCR, segmentation, Arabic characters, PAW, post-processing, SVM

Procedia PDF Downloads 71
2194 Quantitative Comparisons of Different Approaches for Rotor Identification

Authors: Elizabeth M. Annoni, Elena G. Tolkacheva

Abstract:

Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia and a known prognostic marker for stroke, heart failure and death. Reentrant mechanisms of rotor formation, which are stable electrical sources of cardiac excitation, are believed to cause AF. No existing commercial mapping system has been demonstrated to consistently and accurately predict rotor locations outside of the pulmonary veins in patients with persistent AF. There is a clear need for robust spatio-temporal techniques that can consistently identify rotors using unique characteristics of the electrical recordings at the pivot point and that can be applied to clinical intracardiac mapping. Recently, we have developed four new signal analysis approaches, Shannon entropy (SE), kurtosis (Kt), multi-scale frequency (MSF), and multi-scale entropy (MSE), to identify the pivot points of rotors. These techniques utilize cardiac signal characteristics other than local activation to uncover the intrinsic complexity of the electrical activity in the rotors, which is not taken into account by current mapping methods. We validated these techniques using high-resolution optical mapping experiments, in which direct visualization and identification of rotors in ex-vivo Langendorff-perfused hearts were possible. Episodes of ventricular tachycardia (VT) were induced using burst pacing, and two examples of rotors were used, showing 3-sec episodes of a single stationary rotor and of figure-8 reentry with one stationary and one meandering rotor. Movies were captured at a rate of 600 frames per second for 3 sec with 64x64-pixel resolution. These optical mapping movies were used to evaluate the performance and robustness of the SE, Kt, MSF and MSE techniques with respect to the following clinical limitations: shorter recording durations, lower spatial resolution, and the presence of meandering rotors. To quantitatively compare the results, the SE, Kt, MSF and MSE techniques were compared to the "true" rotor(s) identified using the phase map. Accuracy was calculated for each approach as the duration of the time series and the spatial resolution were reduced: the time series duration was decreased from its original length of 3 sec down to 2, 1, and 0.5 sec, and the spatial resolution was decreased from 64x64 pixels to 32x32, 16x16, and 8x8 pixels by uniformly removing pixels from the optical mapping video. Our results demonstrate that Kt, MSF and MSE were able to accurately identify the pivot point of the rotor under all three clinical limitations. The MSE approach demonstrated the best overall performance, while Kt was the best at identifying the pivot point of the meandering rotor. Artifacts mildly affected the performance of the Kt, MSF and MSE techniques but had a strong negative impact on the performance of SE. These results motivate further validation of the SE, Kt, MSF and MSE techniques using intra-atrial electrograms from paroxysmal and persistent AF patients, to see whether these approaches can identify pivot points in a clinical setting. More accurate rotor localization could significantly increase the efficacy of catheter ablation to treat AF, resulting in a higher success rate for single procedures.
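
A minimal sketch of two of the four metrics (SE and Kt), computed per pixel over an optical-mapping movie; the data are random placeholders, and the histogram-based entropy estimator is an assumption, since the paper's exact formulation is not reproduced here.

```python
import numpy as np
from scipy.stats import kurtosis

def shannon_entropy(x, bins=32):
    """Shannon entropy of a single pixel's signal, estimated from the
    histogram of its amplitude distribution."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

# Hypothetical optical-mapping movie: (frames, rows, cols) at 600 fps, 3 s of data.
rng = np.random.default_rng(0)
movie = rng.normal(size=(1800, 64, 64))

# Per-pixel SE and Kt maps; in the study, pivot points exhibit distinctive
# values of these metrics compared with the surrounding tissue.
flat = movie.reshape(movie.shape[0], -1)
se_map = np.apply_along_axis(shannon_entropy, 0, flat).reshape(64, 64)
kt_map = kurtosis(flat, axis=0).reshape(64, 64)
```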

Keywords: atrial fibrillation, optical mapping, signal processing, rotors

Procedia PDF Downloads 324