Search results for: per-residue decomposition
495 Localization of Pyrolysis and Burning of Ground Forest Fires
Authors: Pavel A. Strizhak, Geniy V. Kuznetsov, Ivan S. Voytkov, Dmitri V. Antonov
Abstract:
This paper presents the results of experiments carried out at a specialized test site to establish the macroscopic patterns of heat and mass transfer processes when localizing model combustion sources of ground forest fires with barrier lines in the form of a wetted layer of material placed in front of the zone of flame burning and thermal decomposition. The experiments were performed using needles, leaves, twigs, and mixtures thereof. The dimensions of the model combustion source and the ranges of heat release correspond well to the real conditions of ground forest fires. The main attention is paid to a complex analysis of the effect of the dispersion of the water aerosol (concentration and size of droplets) used to form the barrier line. It is shown that effective conditions for the localization and subsequent suppression of flame combustion and thermal decomposition of forest fuel can be achieved by creating a group of barrier lines with different wetting widths and depths of the material. Relative indicators of the effectiveness of single and combined barrier lines were established, taking into account all the main characteristics of the processes of suppressing burning and thermal decomposition of forest combustible materials. We predicted the necessary and sufficient parameters of barrier lines (water volume, width and depth of the wetted layer of material, specific irrigation density) for combustion sources with different dimensions, corresponding to real fire extinguishing practice.
Keywords: forest fire, barrier water lines, pyrolysis front, flame front
Procedia PDF Downloads 132
494 Speech Intelligibility Improvement Using Variable Level Decomposition DWT
Authors: Samba Raju, Chiluveru, Manoj Tripathy
Abstract:
Intelligibility is an essential characteristic of a speech signal; it reflects how well the information carried by the signal can be understood. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves speech intelligibility. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The proposed algorithm chooses a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the speech intelligibility measure (STOI), and the results obtained are compared with universal Discrete Wavelet Transform (DWT) thresholding and Minimum Mean Square Error (MMSE) methods. The experimental results revealed that the proposed scheme outperformed the competing methods.
Keywords: discrete wavelet transform, speech intelligibility, STOI, standard deviation
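As a rough illustration of the idea (not the authors' algorithm), the following Python sketch performs frame-wise wavelet denoising with a decomposition level that varies per frame; the level-selection rule, wavelet choice, and universal threshold are assumptions standing in for the paper's variance-subtraction criterion.

```python
# Illustrative sketch: per-frame variable-level DWT denoising with PyWavelets.
import numpy as np
import pywt

def denoise_frame(frame, noise_floor, wavelet="db8"):
    # Deeper decomposition for noise-dominant (low-energy) frames,
    # shallower for signal-dominant ones (assumed rule, not the paper's).
    level = 5 if np.mean(frame**2) < 4 * noise_floor else 2
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(frame)))            # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(frame)]

def denoise(signal, frame_len=512):
    frames = [signal[i:i + frame_len] for i in range(0, len(signal), frame_len)]
    frames = [f for f in frames if len(f) > 32]
    noise_floor = min(np.mean(f**2) for f in frames)          # quietest frame energy
    return np.concatenate([denoise_frame(f, noise_floor) for f in frames])

if __name__ == "__main__":
    t = np.linspace(0, 1, 2048)
    clean = np.concatenate([np.zeros(1024), np.sin(2 * np.pi * 8 * t), np.zeros(1024)])
    noisy = clean + 0.4 * np.random.randn(clean.size)
    out = denoise(noisy)
    print("noisy RMSE:", np.std(noisy - clean), "denoised RMSE:", np.std(out - clean[:out.size]))
```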
Procedia PDF Downloads 147
493 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principals in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space” where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences; two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder
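For readers unfamiliar with the transform step, the sketch below (illustrative only, with a random unit-norm dictionary rather than the learned Gabor/autoencoder dictionary) shows how greedy matching pursuit produces the sparse weight vector described above.

```python
# Illustrative greedy matching pursuit over a unit-norm dictionary.
import numpy as np

def matching_pursuit(x, D, n_atoms=30):
    """Return sparse weights w with x ~ D @ w, selecting one atom per pass."""
    residual = x.astype(float).copy()
    w = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual                 # inner product with every atom
        k = np.argmax(np.abs(corr))           # best-matching atom index
        w[k] += corr[k]                       # accumulate its weight
        residual -= corr[k] * D[:, k]         # peel that contribution off
    return w, residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((256, 1024))
    D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
    x = 3.0 * D[:, 5] - 1.5 * D[:, 77] + 0.01 * rng.standard_normal(256)
    w, r = matching_pursuit(x, D, n_atoms=10)
    print(np.argsort(-np.abs(w))[:2], np.linalg.norm(r))
```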
Procedia PDF Downloads 289
492 Body Farming in India and Asia
Authors: Yogesh Kumar, Adarsh Kumar
Abstract:
A body farm is a research facility where research is done on forensic investigation and medico-legal disciplines such as forensic entomology, forensic pathology, forensic anthropology, forensic archaeology, and related areas of forensic veterinary science. All the research is done to collect data on the rate of decomposition (animal and human) and on forensically important insects to assist in crime detection. The data collected are used by forensic pathologists, forensic experts, and other specialists for the investigation of crime cases and for further research. The research work covers the different conditions of a dead body, such as fresh, bloated, decayed, dry, and skeletonized, and data on local insects, which depend on the climatic conditions of the local areas of that country. Therefore, there is a present need to collect appropriate data under managed conditions with a proper set-up in every country. Hence, it is the duty of the scientific community of every country to establish or propose such facilities for justice and social management. Body farms are also used for the training of police, military, investigative dogs, and other agencies. At present, only four countries, viz. the U.S., Australia, Canada, and the Netherlands, have body farms and related facilities in an organised manner. There is no body farm in Asia either. In India, we have been trying to establish a body farm in the A&N Islands, which are near Singapore, Malaysia, and some other Asian countries. In view of the above, it becomes imperative to discuss the matter with Asian countries so as to collect decomposition data in a proper manner by establishing body farms. We can also share data, knowledge, and expertise, and collaborate with one another to make such facilities better and maintain good scientific relations to promote science and explore ways of investigation at the world level.
Keywords: body farm, rate of decomposition, forensically important flies, time since death
Procedia PDF Downloads 86
491 A Spatial Information Network Traffic Prediction Method Based on Hybrid Model
Authors: Jingling Li, Yi Zhang, Wei Liang, Tao Cui, Jun Li
Abstract:
Compared with terrestrial networks, the traffic of a spatial information network exhibits both self-similarity and short-range correlation characteristics. By studying its traffic prediction methods, the resource utilization of a spatial information network can be improved, and such methods can provide an important basis for traffic planning. In this paper, considering the accuracy and complexity of the algorithm, the spatial information network traffic is decomposed into an approximate component with long-range correlation and detail components with short-range correlation, and a time series hybrid prediction model based on wavelet decomposition is proposed to predict the spatial network traffic. Firstly, the original traffic data are decomposed into approximate and detail components using a wavelet decomposition algorithm. According to the tailing-off and cut-off characteristics of the autocorrelation and partial autocorrelation functions of each component, the corresponding model (AR/MA/ARMA) of each detail component can be established directly, while the approximate component is modeled with an ARIMA model after smoothing. Finally, the prediction results of the multiple models are combined to obtain the prediction of the original data. The method not only considers the self-similarity of a spatial information network, but also takes into account the short-range correlation caused by bursty network traffic; it is verified using the measured data of a backbone network released by the MAWI working group in 2018. Compared with typical time series models, the predictions of the hybrid model are closer to the real traffic data and have a smaller relative root mean square error, which makes it more suitable for a spatial information network.
Keywords: spatial information network, traffic prediction, wavelet decomposition, time series model
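A minimal sketch of the hybrid idea, assuming PyWavelets and statsmodels and a fixed illustrative ARIMA order (not the per-component model selection used in the paper): the series is split into an approximation and detail components, each component is forecast separately, and the component forecasts are summed.

```python
# Illustrative wavelet + ARIMA hybrid forecast on a toy traffic series.
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

def wavelet_components(x, wavelet="db4", level=3):
    """Split x into one approximation and `level` detail components of the
    same length, by zeroing the other coefficient bands before reconstruction."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(keep, wavelet)[: len(x)])
    return comps                                  # comps[0] = approximation

def hybrid_forecast(x, steps=5):
    forecasts = []
    for comp in wavelet_components(x):
        model = ARIMA(comp, order=(2, 0, 1)).fit()   # illustrative fixed order
        forecasts.append(np.asarray(model.forecast(steps)))
    return np.sum(forecasts, axis=0)                 # recombine component forecasts

if __name__ == "__main__":
    t = np.arange(512)
    traffic = 10 + np.sin(2 * np.pi * t / 50) + 0.3 * np.random.randn(512)
    print(hybrid_forecast(traffic, steps=5))
```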
Procedia PDF Downloads 144
490 Comparison of Soils of Hungarian Dry and Humid Oak Forests Based on Changes in Nutrient Content
Authors: István Fekete, Imre Berki, Áron Béni, Katalin Juhos, Marianna Makádi, Zsolt Kotroczó
Abstract:
The average annual precipitation significantly influences the moisture content of soils and, through this, the decomposition of organic substances in the soils, the leaching of nutrients from the soils, and soil pH. Climate change, together with the lengthening of the vegetation period and the increasing CO₂ level, can increase the amount of biomass that is formed. Degradation processes, which accelerate as the temperature increases and slow down as the climate dries, and changes in the degree of leaching can cancel out or strengthen each other's effects. In the course of our research, we looked for oak forests with climate-zonal soils where the geological, geographical and ecological background conditions are as similar as possible, apart from the different annual precipitation averages and the differences that can arise from them. We examined five dry and five humid Hungarian oak forest soils. Climate change affects the soils of drier and wetter forests differently. The aim of our research was to compare the carbon, nitrogen and other nutrient contents, as well as the pH, of the soils of humid and dry forests, and thereby show the effects of the drier climate on the tested soil parameters. For the examined forest soils, we found significant differences between the soils of dry and humid forests: in the annual average precipitation values (p ≥ 0.0001, dry forest soils: 564±5.2 mm; humid forest soils: 716±3.8 mm), in pH (p = 0.0004, dry forest soils: 5.49±0.16; humid forest soils: 5.36±0.21), in C content (p = 0.0054, dry forest soils: 6.92%±0.59; humid forest soils: 3.09%±0.24), in N content (p = 0.0022, dry forest soils: 0.44%±0.047; humid forest soils: 0.23%±0.013), in K content (p = 0.0017, dry forest soils: 5684±732 mg/kg; humid forest soils: 2169±196 mg/kg), and in Ca content (p = 0.0096, dry forest soils: 8207±2118 mg/kg; humid forest soils: 957±320 mg/kg). No significant difference was found for Mg. In a wetter environment, especially if the moisture content of the soil is also optimal for the decomposing organisms during the growing season, the decomposition of organic residues accelerates, and the leaching processes in the soil are also intensified. The different intensity of the leaching processes is well reflected in the quantitative differences in Ca and K and, in connection with these, in the difference in pH values. The differences in C and N content can be explained by differences in the intensity of the decomposition processes. In addition to warming, drying is expected in a significant part of Hungary due to climate change. Thus, the comparison of the soils of dry and humid forests allows us to predict subsequent changes in the examined parameters.
Keywords: soil nutrients, precipitation difference, climate change, organic matter decomposition, leaching
Procedia PDF Downloads 73
489 Vibration Propagation in Structures Through Structural Intensity Analysis
Authors: Takhchi Jamal, Ouisse Morvan, Sadoulet-Reboul Emeline, Bouhaddi Noureddine, Gagliardini Laurent, Bornet Frederic, Lakrad Faouzi
Abstract:
Structural intensity is a technique that can be used to indicate both the magnitude and direction of power flow through a structure, from the excitation source to the dissipation sink. However, current analysis is limited to the low-frequency range. At medium and high frequencies, a rotational component appears in the field, masking the energy flow and making it difficult or impossible to interpret. The objective of this work is to implement a methodology to filter out the rotational components of the structural intensity field in order to fully understand the energy flow in complex structures. The approach is based on the Helmholtz decomposition, which decomposes the structural intensity field into rotational, irrotational, and harmonic components. Only the irrotational component is needed to describe the net power flow from a source to a dissipative zone in the structure. The methodology has been applied to academic structures, and it allows a good analysis of the energy transfer paths.
Keywords: structural intensity, power flow, Helmholtz decomposition, irrotational intensity
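The decomposition itself can be illustrated with a small FFT-based sketch (independent of the paper's implementation) that splits a periodic 2-D vector field into irrotational, rotational (solenoidal) and mean/harmonic parts; the assumed periodicity on the grid is what makes the Fourier projection valid.

```python
# Illustrative FFT-based Helmholtz decomposition of a periodic 2-D vector field.
import numpy as np

def helmholtz(fx, fy):
    ny, nx = fx.shape
    kx = np.fft.fftfreq(nx)[np.newaxis, :]
    ky = np.fft.fftfreq(ny)[:, np.newaxis]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                   # avoid 0/0 at the mean component
    fxh, fyh = np.fft.fft2(fx), np.fft.fft2(fy)
    proj = (kx * fxh + ky * fyh) / k2                # (k . F_hat) / |k|^2
    irr_xh, irr_yh = kx * proj, ky * proj            # projection onto k -> curl-free part
    irr_xh[0, 0] = irr_yh[0, 0] = 0.0
    irr_x = np.real(np.fft.ifft2(irr_xh))
    irr_y = np.real(np.fft.ifft2(irr_yh))
    mean_x, mean_y = fx.mean(), fy.mean()            # constant (harmonic) part
    return (irr_x, irr_y), (fx - mean_x - irr_x, fy - mean_y - irr_y), (mean_x, mean_y)

if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64] * (2 * np.pi / 64)
    fx = np.cos(x) - np.sin(y)                       # gradient part plus rotational part
    fy = np.cos(y) + np.sin(x)
    (ix, iy), (rx, ry), _ = helmholtz(fx, fy)
    print(np.allclose(ix, np.cos(x), atol=1e-6), np.allclose(rx, -np.sin(y), atol=1e-6))
```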
Procedia PDF Downloads 176
488 Inverse Prediction of Thermal Parameters of an Annular Hyperbolic Fin Subjected to Thermal Stresses
Authors: Ashis Mallick, Rajeev Ranjan
Abstract:
The closed-form solution for thermal stresses in an annular fin with a hyperbolic profile is derived using the Adomian decomposition method (ADM). A conductive-convective fin with variable thermal conductivity is considered in the analysis. The nonlinear heat transfer equation is efficiently solved by ADM, considering an insulated convective boundary condition at the tip of the fin. The constant of integration in the solution is estimated using the minimum decomposition error method. The solution for the temperature field is represented in polynomial form for convenient use in the thermo-elasticity equation. The non-dimensional thermal stress fields are obtained using the ADM solution of the temperature field coupled with the thermo-elasticity solution. The influence of the various thermal parameters on the temperature and stress fields is presented. In order to show the accuracy of the ADM solution, the present results are compared with results available in the literature. The stress fields in the fin with a hyperbolic profile are compared with those of a uniform-thickness profile. The results show that the hyperbolic fin profile is the better choice for enhancing heat transfer. Moreover, lower thermal stresses develop in the hyperbolic profile than in the rectangular profile. Next, a Nelder-Mead based simplex search method is employed for the inverse estimation of unknown non-dimensional thermal parameters from a given stress field. Owing to the correlated nature of the unknowns, the best combinations of the model parameters satisfying the predefined stress field are estimated. The stress fields calculated using the inverse parameters are in very good agreement with the stress fields obtained from the forward solution. The estimated parameters are suitable for efficient and cost-effective fin design.
Keywords: Adomian decomposition, inverse analysis, hyperbolic fin, variable thermal conductivity
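The inverse step can be illustrated with a hedged sketch using SciPy's Nelder-Mead simplex search; the quadratic forward model below is only a stand-in for the ADM temperature/stress solution, and the parameter names are assumptions.

```python
# Illustrative inverse estimation of two thermal parameters with Nelder-Mead.
import numpy as np
from scipy.optimize import minimize

def forward_stress(params, xi):
    a, b = params                        # placeholder thermal parameters
    return a * (1 - xi**2) + b * xi      # placeholder closed-form stress profile

xi = np.linspace(0.0, 1.0, 50)
true_params = np.array([0.8, 0.15])
target = forward_stress(true_params, xi)             # "measured" stress field

def objective(p):
    return np.sum((forward_stress(p, xi) - target) ** 2)

res = minimize(objective, x0=[0.3, 0.3], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-10})
print(res.x, res.fun)                    # recovered parameters match true_params
```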
Procedia PDF Downloads 327
487 Examining Microbial Decomposition, Carbon Cycling and Storage in Cefni Coastal Salt Marsh, Anglesey Island, Wales, United Kingdom
Authors: Dasat G. S., Christopher F. Tim, J. Dun C.
Abstract:
Salt marshes are known to sequester carbon dioxide from the atmosphere into the soil, but natural and anthropogenic activities could trigger the release of large quantities of centuries of buried carbon dioxide, methane and nitrous oxide (CO₂, CH₄ and N₂O), which are the major greenhouse gases (GHGs) implicated in climate change. Therefore, this study investigated the biogeochemical activities by collecting soil samples from the low, mid and high zones of the Cefni salt marsh, within the Malltraeth estuary on the island of Anglesey, north Wales, United Kingdom, for a set of laboratory-based experiments using standard operating protocols to quantify the soil organic matter contents, the rate of microbial decomposition and carbon storage at the Carbon Capture Laboratory of Bangor University, Wales. The results reveal that the mid zone had soil water and soil organic matter (SOM) contents of 56.23% and 9.98%, respectively, higher than the low and high zones. Phenol oxidase activity (1193.53 µmol dicq g⁻¹ h⁻¹) was highest in the low zone in comparison with the high and mid zones (867.60 and 608.74 µmol dicq g⁻¹ h⁻¹, respectively). Soil phenolic concentration was found to be highest in the mid zone (53.25 µg g⁻¹) when compared with the high (15.66 µg g⁻¹) and low (4.18 µg g⁻¹) zones. Activities of hydrolase enzymes showed a similar trend for the high and low zones and much lower activities in the mid zone. CO₂ flux from the mid zone (6.79 µg g⁻¹ h⁻¹) was significantly greater than from the high (-2.29 µg g⁻¹ h⁻¹) and low (1.30 µg g⁻¹ h⁻¹) zones. Since salt marshes provide essential ecosystem services, their degradation or alteration in whatever form could compromise such services and could convert them from net sinks into net sources, with consequential effects on the global environment.
Keywords: saltmarsh, decomposition, carbon cycling, enzymes
Procedia PDF Downloads 81
486 Application of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Multipoint Optimal Minimum Entropy Deconvolution in Railway Bearings Fault Diagnosis
Authors: Yao Cheng, Weihua Zhang
Abstract:
Although the measured vibration signal contains rich information on machine health conditions, white noise interference and the discrete harmonics coming from blades, shafts and gear mesh make the fault diagnosis of rolling element bearings difficult. In order to overcome the interference of useless signals, a new fault diagnosis method combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Multipoint Optimal Minimum Entropy Deconvolution (MOMED) is proposed for the fault diagnosis of high-speed train bearings. Firstly, the CEEMDAN technique is applied to adaptively decompose the raw vibration signal into a series of finite intrinsic mode functions (IMFs) and a residue. Compared with Ensemble Empirical Mode Decomposition (EEMD), CEEMDAN provides an exact reconstruction of the original signal and a better spectral separation of the modes, which improves the accuracy of fault diagnosis. An effective sensitivity index based on the Pearson correlation coefficients between the IMFs and the raw signal is adopted to select the sensitive IMFs that contain bearing fault information. The composite signal of the sensitive IMFs is used for further fault identification. Next, in order to identify the fault information precisely, MOMED is utilized to enhance the periodic impulses in the composite signal. As a non-iterative method, MOMED has better deconvolution performance than classical deconvolution methods such as Minimum Entropy Deconvolution (MED) and Maximum Correlated Kurtosis Deconvolution (MCKD). Third, envelope spectrum analysis is applied to detect the existence of a bearing fault. Simulated bearing fault signals with white noise and discrete harmonic interference are used to validate the effectiveness of the proposed method. Finally, the advantages of the proposed method are further demonstrated on high-speed train bearing fault datasets measured from a test rig. The analysis results indicate that the proposed method has strong practicability.
Keywords: bearing, complete ensemble empirical mode decomposition with adaptive noise, fault diagnosis, multipoint optimal minimum entropy deconvolution
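A sketch of the IMF-selection and envelope-spectrum stages only (MOMED omitted), assuming the PyEMD package provides the CEEMDAN implementation; the correlation threshold, trial count, and the simulated fault signal are illustrative assumptions.

```python
# Illustrative CEEMDAN + Pearson-based IMF selection + envelope spectrum.
import numpy as np
from scipy.signal import hilbert
from PyEMD import CEEMDAN            # assumes the PyEMD (EMD-signal) package

fs, f_fault = 12000, 105.0           # sampling rate and assumed fault frequency (Hz)
t = np.arange(0, 0.25, 1 / fs)
impulses = (np.sin(2 * np.pi * 3000 * t) *
            (np.sin(2 * np.pi * f_fault * t) > 0.99))   # crude periodic impulses
signal = impulses + 0.5 * np.random.randn(t.size)

imfs = CEEMDAN(trials=20).ceemdan(signal)               # adaptive decomposition into IMFs
corr = np.array([np.corrcoef(imf, signal)[0, 1] for imf in imfs])
sensitive = imfs[np.abs(corr) > 0.3].sum(axis=0)        # keep fault-related IMFs only

envelope = np.abs(hilbert(sensitive))                    # envelope spectrum analysis
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("dominant envelope frequency ~", freqs[np.argmax(spectrum)], "Hz")
```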
Procedia PDF Downloads 372
485 The Continuously Supported Infinity Rail Subjected to a Moving Complex Bogie System
Authors: Vladimir Stojanović, Marko D. Petković
Abstract:
The vibration of a complex bogie system that moves along a high-order shear-deformable beam on a viscoelastic foundation is studied. The complex bogie system is modeled as elastically connected rigid bars on identical supports. Elastic coupling between the bars is introduced to simulate a rigid or flexible (transversal and/or rotational) connection. The identical supports are modeled as a spring and dashpot attached to the bar on one side and interacting with the beam through a concentrated mass on the other side. It is assumed that the masses and the beam are always in contact. A new analytically determined critical velocity of the system is presented. The case in which the complex bogie system exceeds the minimum phase velocity of waves in the beam, when the vibration of the system may become unstable, is analyzed. The effect of the elastic coupling between bars on the stability of the system is also analyzed. The instability regions for the complex bogie system are found by applying the principle of the argument and the D-decomposition method.
Keywords: Reddy-Bickford beam, D-decomposition method, principle of argument, critical velocity
Procedia PDF Downloads 305
484 Evolution of Microstructure through Phase Separation via Spinodal Decomposition in Spinel Ferrite Thin Films
Authors: Nipa Debnath, Harinarayan Das, Takahiko Kawaguchi, Naonori Sakamoto, Kazuo Shinozaki, Hisao Suzuki, Naoki Wakiya
Abstract:
Spinel ferrite magnetic thin films have recently drawn considerable attention due to their interesting magnetic and electrical properties combined with enhanced chemical and thermal stability. Spinel ferrite magnetic films can be implemented in magnetic data storage, sensors, spin filters and microwave devices. It is well established that the structural, magnetic and transport properties of magnetic thin films depend on their microstructure. Spinodal decomposition (SD) is a phase separation process whereby a material system spontaneously separates into two phases with distinct compositions. A periodic microstructure is the characteristic feature of SD; thus, SD can be exploited to control the microstructure at the nanoscale level. In bulk spinel ferrites with the general formula MₓFe₃₋ₓO₄ (M = Co, Mn, Ni, Zn), phase separation via SD has been reported only for cobalt ferrite (CFO); however, long post-annealing is required for spinodal decomposition to occur. We have found that SD occurs in CFO thin films without any post-deposition annealing process if a magnetic field is applied during thin film growth. Dynamic Aurora pulsed laser deposition (PLD) is a specially designed PLD system through which an in-situ magnetic field (up to 2000 G) can be applied during thin film growth. The in-situ magnetic field suppresses the recombination of ions in the plume. In addition, the intensity of the ion peaks in the spectra of the plume also increases when a magnetic field is applied. As a result, ions with high kinetic energy strike the substrate; thus, ion impingement occurs under the magnetic field during thin film growth. The driving force of SD is the ion impingement towards the substrate induced by the in-situ magnetic field. In this study, we report the occurrence of phase separation through SD and the evolution of microstructure after phase separation in spinel ferrite thin films. The surface morphology of the phase-separated films shows a checkerboard-like domain structure, and the cross-sectional microstructure reveals columnar-type phase separation. Here, the decomposition wave propagates in the lateral direction, which has been confirmed from the lateral composition modulations in the spinodally decomposed films. Large magnetic anisotropy has been found in spinodally decomposed nickel ferrite (NFO) thin films. This approach confirms that the magnetic field is also an important thermodynamic parameter for inducing phase separation by enhancing up-hill diffusion in thin films. This deposition technique could be a more efficient alternative for the fabrication of self-organized, phase-separated thin films and could be employed to control the microstructure at the nanoscale level.
Keywords: Dynamic Aurora PLD, magnetic anisotropy, spinodal decomposition, spinel ferrite thin film
Procedia PDF Downloads 366
483 One Dimensional Unsteady Boundary Layer Flow in an Inclined Wavy Wall of a Nanofluid with Convective Boundary Condition
Authors: Abdulhakeem Yusuf, Yomi Monday Aiyesimi, Mohammed Jiya
Abstract:
The failure of ordinary heat transfer fluids to meet today's industrial cooling requirements has resulted in the development of high-thermal-conductivity fluids, to which nanofluids belong. In this work, the problem of unsteady, one-dimensional laminar flow of an incompressible fluid between parallel walls is considered, with one wall assumed to be wavy. The model is presented in a rectangular coordinate system and incorporates the effects of thermophoresis and Brownian motion. Local similarity solutions are also obtained, which depend on the Soret number, Dufour number, Biot number, Lewis number, and heat generation parameter. The analytical solution is obtained in closed form via the Adomian decomposition method. It was found that the method is in good agreement with the numerical method, and it is also established that the heat generation parameter has to be kept low so that heat energy is easily evacuated from the system.
Keywords: Adomian decomposition method, Biot number, Dufour number, nanofluid
Procedia PDF Downloads 329
482 A Multi Sensor Monochrome Video Fusion Using Image Quality Assessment
Authors: M. Prema Kumar, P. Rajesh Kumar
Abstract:
The increasing interest in image fusion (combining images of two or more modalities, such as infrared and visible light radiation) has led to a need for accurate and reliable image assessment methods. This paper gives a novel approach to merging the information content from several videos taken of the same scene in order to build a combined video that contains the finest information coming from the different source videos. This process is known as video fusion, and it helps to provide an image of superior quality (where quality denotes a measure specific to the particular application) compared to the source images. In this technique, different sensors, whose redundant information can be reduced, are used for the various cameras that capture the required images. An image fusion technique based on multi-resolution singular value decomposition (MSVD) has been used. Image fusion by MSVD is very similar to that by wavelets: the idea behind MSVD is to replace the FIR filters in the wavelet transform with singular value decomposition (SVD). It is computationally very simple and is well suited for real-time applications such as remote sensing and astronomy.
Keywords: multi sensor image fusion, MSVD, image processing, monochrome video
Procedia PDF Downloads 570
481 Fast and Accurate Model to Detect Ictal Waveforms in Electroencephalogram Signals
Authors: Piyush Swami, Bijaya Ketan Panigrahi, Sneh Anand, Manvir Bhatia, Tapan Gandhi
Abstract:
Visual inspection of electroencephalogram (EEG) signals to detect epileptic activity is a very challenging and time-consuming task, even for an expert neurophysiologist. The problem is most acute in under-developed and developing countries due to the shortage of skilled neurophysiologists. In the past, notable research efforts have gone into trying to automate the seizure detection process. However, high false alarm rates and the complexity of the models developed so far have greatly limited their practical implementation. In this paper, we present a novel scheme for epileptic seizure detection using the empirical mode decomposition technique. The intrinsic mode functions obtained are used to calculate standard deviations. This is followed by a probability-density-based classifier to discriminate between non-ictal and ictal patterns in EEG signals. The model presented here demonstrated very high classification rates ( > 97%) without compromising statistical performance. The computation time for each testing phase was also very low ( < 0.029 s), which makes this model ideal for practical applications.
Keywords: electroencephalogram (EEG), epilepsy, ictal patterns, empirical mode decomposition
Procedia PDF Downloads 404
480 Multidimensional Study on the Deprivations Faced by Women in India
Authors: Ramya Rachel S.
Abstract:
For women in a developing country like India, poverty is an ever-clinging problem which has rooted itself without any trace of absolute abolition. Poverty is a deprivation of many essential needs and must be measured accordingly. Therefore, it is important to study the dimensions of education, health, and standard of living to understand the true nature of the impoverished. This study measured deprivation in these dimensions using the Alkire-Foster methodology to estimate the Multidimensional Poverty Index. The study utilized the individual data for women aged 15 to 49 from the National Family Health Survey (NFHS) for the year 2015-16. Findings reveal that women in India still face extreme levels of deprivation in various dimensions. More than one-third of the total women aged 15 to 24 in India were multidimensionally poor. A dimensional breakdown of the levels of multidimensional poverty indicates that the dimension of education is the largest contributor to poverty. Decomposition of multidimensional poverty among various demographic sub-groups reveals that the level of multidimensional poverty increases with age. The results point out that deprivations were higher among widowed and married women, and among women who lived alone. There was also a huge rural-urban divide with respect to poverty. The basic needs of these women must be targeted and met so that they are lifted out of all forms of poverty.
Keywords: deprivations, multidimensional poverty, sub-group decomposition, women
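The Alkire-Foster counting method behind the MPI can be shown with a compact numeric toy example (made-up data, equal nested weights, poverty cutoff k = 1/3), not the NFHS indicator set itself: the MPI is the product of the headcount ratio H and the average intensity A among the poor.

```python
# Toy Alkire-Foster MPI computation.
import numpy as np

# rows = women, columns = deprivation indicators (1 = deprived)
# three dimensions: education (2 indicators), health (2), living standard (2)
g0 = np.array([
    [1, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 1],
    [0, 0, 0, 0, 0, 0],
])
weights = np.repeat(1 / 3 / 2, 6)          # equal weight per dimension, split over indicators
k = 1 / 3                                  # poverty cutoff on the weighted score

score = g0 @ weights                       # weighted deprivation score per woman
poor = score >= k
H = poor.mean()                            # multidimensional headcount ratio
A = score[poor].mean() if poor.any() else 0.0   # average intensity among the poor
MPI = H * A
print(f"H={H:.3f}, A={A:.3f}, MPI={MPI:.3f}")
```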
Procedia PDF Downloads 135
479 Labview-Based System for Fiber Links Events Detection
Authors: Bo Liu, Qingshan Kong, Weiqing Huang
Abstract:
With the rapid development of modern communication, real-time diagnosis of fiber-optic quality and faults has attracted wide attention. In this paper, a LabVIEW-based system is proposed for fiber-optic fault detection. A wavelet threshold denoising method combined with Empirical Mode Decomposition (EMD) is applied to denoise the optical time domain reflectometer (OTDR) signal. Then a method based on the Gabor representation is used to detect events. Experimental measurements show that the signal-to-noise ratio (SNR) of the OTDR signal is improved by 1.34 dB on average, compared with using the wavelet threshold denoising method alone. The proposed system scores highly in event detection capability and accuracy. The maximum detectable fiber length of the proposed LabVIEW-based system is 65 km.
Keywords: empirical mode decomposition, events detection, Gabor transform, optical time domain reflectometer, wavelet threshold denoising
Procedia PDF Downloads 122
478 A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error
Authors: Qianhua He, Weili Zhou, Aiwu Chen
Abstract:
A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. Firstly, the cross-correlation between the clean speech spectrum and the noise spectrum is analyzed, and an estimation method is proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum is learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error is adapted according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach is applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech is re-synthesised via the inverse Fourier transform using the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms conventional methods in terms of subjective and objective measures.
Keywords: speech denoising, sparse representation, k-singular value decomposition, orthogonal matching pursuit
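A minimal sketch of the sparse-coding stage only: orthogonal matching pursuit over a given dictionary, stopped when the residue error drops below an adapted threshold. The K-SVD dictionary learning and the cross-correlation-based threshold rule of the paper are not reproduced; the stopping level below is an assumed placeholder.

```python
# Illustrative OMP with an adaptive stopping residue error.
import numpy as np

def omp(x, D, stop_residue, max_atoms=50):
    residual = x.astype(float).copy()
    support, coef = [], np.zeros(0)
    while np.linalg.norm(residual) > stop_residue and len(support) < max_atoms:
        k = np.argmax(np.abs(D.T @ residual))           # most correlated atom
        if k in support:
            break
        support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef             # re-fit on the whole support
    w = np.zeros(D.shape[1])
    w[support] = coef
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    D = rng.standard_normal((128, 512))
    D /= np.linalg.norm(D, axis=0)
    x = 2.0 * D[:, 10] - 1.0 * D[:, 200] + 0.05 * rng.standard_normal(128)
    w = omp(x, D, stop_residue=0.1 * np.linalg.norm(x))  # "adapted" stopping level
    print(np.nonzero(w)[0], np.linalg.norm(x - D @ w))
```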
Procedia PDF Downloads 499
477 Multi-Objective Four-Dimensional Traveling Salesman Problem in an IoT-Based Transport System
Authors: Arindam Roy, Madhushree Das, Apurba Manna, Samir Maity
Abstract:
In this research paper, an algorithmic approach is developed to solve a novel multi-objective four-dimensional traveling salesman problem (MO4DTSP) in which different paths with various numbers of conveyances are available for travel between two cities. NSGA-II and decomposition algorithms are modified to solve the MO4DTSP in an IoT-based transport system. Such an IoT-based transport system can be widely observed, analyzed, and controlled through an extensive distribution of traffic networks consisting of various types of sensors and actuators. Due to urbanization, most cities are connected using an intelligent traffic management system. Practically, for a traveler, multiple routes and vehicles are available to travel between any two cities. Thus, the classical TSP is reformulated as multi-route and multi-vehicle, i.e., 4DTSP. The proposed MO4DTSP is designed with traveling cost, time, and customer satisfaction as objectives. In reality, customer satisfaction is an important parameter that depends on travel cost and time, and this is reflected in the present model.
Keywords: multi-objective four-dimensional traveling salesman problem (MO4DTSP), decomposition, NSGA-II, IoT-based transport system, customer satisfaction
Procedia PDF Downloads 109
476 Energy Related Carbon Dioxide Emissions in Pakistan: A Decomposition Analysis Using LMDI
Authors: Arsalan Khan, Faisal Jamil
Abstract:
The unprecedented increase in anthropogenic gases in recent decades has led to climatic changes worldwide. CO2 emissions are the most important factor responsible for greenhouse gas concentrations. This study decomposes the changes in overall CO2 emissions in Pakistan for the period 1990-2012 using the Log Mean Divisia Index (LMDI). The LMDI makes it possible to decompose the changes in CO2 emissions into five factors, namely the activity effect, structural effect, intensity effect, fuel-mix effect, and emission-factor effect. This paper confirms an upward trend in the overall emission level of the country during the period. The study finds that the activity, structural and intensity effects are the three major factors responsible for the changes in overall CO2 emissions in Pakistan, with the activity effect as the largest contributor to the overall change in the emission level. The structural effect also adds to CO2 emissions, which indicates that economic activity is shifting towards more energy-intensive sectors. However, the intensity effect has a negative sign, representing energy efficiency gains, which indicates a good relationship between the economy and the environment. The findings suggest that policy makers should encourage diversification of output towards more energy-efficient sub-sectors of the economy.
Keywords: energy consumption, CO2 emissions, decomposition analysis, LMDI, intensity effect
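The additive LMDI identity can be illustrated with a worked toy example (two sectors, made-up numbers, the five factors named above); the factor effects sum exactly to the total emission change, which is the property the decomposition relies on.

```python
# Toy additive LMDI decomposition: C = activity x structure x intensity x fuel-mix x emission-factor.
import numpy as np

def logmean(a, b):
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

# factors[k][i] = value of factor k in sector i; year 0 and year T (illustrative values)
factors_0 = np.array([[100.0, 100.0],   # activity
                      [0.6,   0.4],     # structure (sector shares)
                      [0.5,   0.8],     # energy intensity
                      [1.0,   1.0],     # fuel mix
                      [2.0,   2.5]])    # emission factor
factors_T = np.array([[130.0, 130.0],
                      [0.55,  0.45],
                      [0.45,  0.85],
                      [1.0,   1.0],
                      [1.9,   2.6]])

C0 = factors_0.prod(axis=0)             # sectoral emissions, year 0
CT = factors_T.prod(axis=0)             # sectoral emissions, year T
L = np.array([logmean(ct, c0) for ct, c0 in zip(CT, C0)])

effects = {name: float(np.sum(L * np.log(factors_T[k] / factors_0[k])))
           for k, name in enumerate(["activity", "structure", "intensity",
                                     "fuel mix", "emission factor"])}
print(effects)
print("sum of effects =", sum(effects.values()), " total change =", CT.sum() - C0.sum())
```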
Procedia PDF Downloads 398
475 Novel Recommender Systems Using Hybrid CF and Social Network Information
Authors: Kyoung-Jae Kim
Abstract:
Collaborative Filtering (CF) is a popular technique for personalization in the e-commerce domain to reduce information overload. In general, CF recommends a list of items based on the preferences of other similar users derived from the user-item matrix, and uses them to predict the focal user's preference for particular items. Many real-world recommender systems use CF techniques because of their excellent accuracy and robustness. However, CF has some limitations, including sparsity problems and the high dimensionality of the user-item matrix. In addition, traditional CF does not consider the emotional interaction between users. In this study, we propose recommender systems that use social network information and singular value decomposition (SVD) to alleviate some of these limitations. The purpose of this study is to reduce the dimensionality of the data set using SVD and to improve the performance of CF by using emotional information from the focal user's social network data. We test the usability of the hybrid CF, SVD and social network information model using real-world data. The experimental results show that the proposed model outperforms conventional CF models.
Keywords: recommender systems, collaborative filtering, social network information, singular value decomposition
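The SVD step can be sketched as follows with made-up ratings; the social-network weighting described above is not included, and the mean-filling of unrated entries is an assumed simplification.

```python
# Bare-bones low-rank SVD reconstruction of a small user-item rating matrix.
import numpy as np

R = np.array([[5, 4, 0, 1],      # 0 = unrated
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

mask = R > 0
filled = np.where(mask, R, R.sum() / mask.sum())      # fill gaps with the global mean
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
k = 2                                                  # retained latent dimensions
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]          # low-rank preference estimate

print(np.round(R_hat, 2))
print("prediction for user 0, item 2:", round(R_hat[0, 2], 2))
```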
Procedia PDF Downloads 289
474 Content-Based Mammograms Retrieval Based on Breast Density Criteria Using Bidimensional Empirical Mode Decomposition
Authors: Sourour Khouaja, Hejer Jlassi, Nadia Feddaoui, Kamel Hamrouni
Abstract:
Most medical images, and especially mammograms, are now stored in large databases. Retrieving a desired image is considered of great importance in order to find diagnoses of previous similar cases. Our method is implemented to assist radiologists in retrieving mammographic images containing breasts with a density aspect similar to that seen on the query mammogram. This is becoming a challenge given the importance of density criteria in cancer provision and their effect on segmentation issues. We used the Bidimensional Empirical Mode Decomposition (BEMD) to characterize the content of images and the Euclidean distance to measure similarity between images. Through experiments on the MIAS mammography image database, we confirm that the results are promising. The performance was evaluated using precision and recall curves comparing query and retrieved images. Computing recall and precision proved the effectiveness of applying CBIR to large mammographic image databases. We found a precision of 91.2% for mammography, with a recall of 86.8%.
Keywords: BEMD, breast density, content-based, image retrieval, mammography
Procedia PDF Downloads 231
473 Clustering Color Space, Time Interest Points for Moving Objects
Authors: Insaf Bellamine, Hamid Tairi
Abstract:
Detecting moving objects in video sequences is an essential step for video analysis. This paper mainly contributes to Color Space-Time Interest Point (CSTIP) extraction and detection. We propose a new method for the detection of moving objects, composed of two main steps. First, we apply the Color Space-Time Interest Point detection algorithm to both components of a color structure-texture image decomposition, which is based on a partial differential equation (PDE): a color geometric structure component and a color texture component. A descriptor is associated with each of these points. In the second stage, we address the problem of grouping the CSTIP points into clusters. Experiments and comparisons with other motion detection methods on challenging sequences show the performance of the proposed method and its utility for video analysis. Experimental results are obtained from very different types of videos, namely sports videos and animation movies.
Keywords: Color Space-Time Interest Points (CSTIP), Color Structure-Texture Image Decomposition, Motion Detection, clustering
Procedia PDF Downloads 376
472 System Identification in Presence of Outliers
Authors: Chao Yu, Qing-Guo Wang, Dan Zhang
Abstract:
The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices, and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented to solve the resulting problem while keeping the solution matrix structure, and it can greatly reduce the computational cost compared with the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method can detect outliers exactly when there is no or little noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise while retaining the saliency of outliers, and the so-filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Use of the recovered “clean” data from the proposed method can give much better parameter estimation than that based on the raw data.
Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising
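One common way to compute such a low-rank plus sparse split is principal component pursuit via an ADMM / augmented-Lagrangian iteration, sketched below as an alternative to the paper's SDP formulation; the penalty parameters and the synthetic spike "outliers" are illustrative assumptions.

```python
# Illustrative robust PCA (low-rank + sparse) via ADMM.
import numpy as np

def rpca(M, lam=None, mu=None, n_iter=200):
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(M).sum()
    S, Y = np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0)) @ Vt    # singular-value thresholding
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)   # soft-threshold the sparse part
        Y = Y + mu * (M - L - S)                               # dual update
    return L, S

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))
sparse = np.zeros((60, 40))
idx = rng.choice(60 * 40, 30, replace=False)
sparse.flat[idx] = 10 * rng.standard_normal(30)                # gross outliers
L, S = rpca(low_rank + sparse)
print("outliers recovered:", int(np.sum(np.abs(S.flat[idx]) > 1)), "of 30")
```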
Procedia PDF Downloads 306
471 Comparative Analysis of Enzyme Activities Concerned in Decomposition of Toluene
Authors: Ayuko Itsuki, Sachiyo Aburatani
Abstract:
In recent years, pollution of the environment by toxic substances has become a serious problem. While there are many methods of environmental clean-up, methods based on microorganisms are considered reasonable and safe for the environment. Compost is known to catabolize malodorous substances during its production process; however, the mechanism of this catabolizing system is not yet known. In the catabolization process, organic matter is turned into inorganic matter by the enzymes released from the many microorganisms that live in compost. In other words, the cooperation of activated enzymes in the compost decomposes malodorous substances. Thus, clarifying the interactions among enzymes is important for revealing the system by which malodorous substances are catabolized in compost. In this study, we utilized a statistical method to infer the interactions among enzymes. We developed a method which combines partial correlation with cross-correlation to estimate the relevance between enzymes, especially from time series data with few variables. Because cross-correlation is used, we can estimate not only the associative structure but also the reaction pathway. We applied the developed method to measured enzyme data and estimated the interactions among the enzymes in the decomposition mechanism of toluene.
Keywords: enzyme activities, comparative analysis, compost, toluene
Procedia PDF Downloads 272
470 The Classification of Parkinson Tremor and Essential Tremor Based on Frequency Alteration of Different Activities
Authors: Chusak Thanawattano, Roongroj Bhidayasiri
Abstract:
This paper proposes a novel feature set for classifying Parkinson tremor and essential tremor. Ten essential tremor (ET) and ten Parkinson's disease (PD) subjects were asked to perform kinetic, postural and resting tests. The empirical mode decomposition (EMD) is used to decompose the collected tremor signal into a set of intrinsic mode functions (IMFs). The IMFs are used for reconstructing representative signals. The feature set is composed of the peak frequencies of the IMFs and the reconstructed signals. Hypothesizing that the dominant frequency components of subjects with PD and ET change in different directions for different tests, the differences of the peak frequencies of the IMFs and reconstructed signals for pairwise tests (kinetic-resting, kinetic-postural and postural-resting) are considered as potential features. Sets of features are used to train and test classifiers including the quadratic discriminant classifier (QLC) and the support vector machine (SVM). The best accuracy, the best sensitivity and the best specificity are 90%, 87.5%, and 92.86%, respectively.
Keywords: tremor, Parkinson, essential tremor, empirical mode decomposition, quadratic discriminant, support vector machine, peak frequency, auto-regressive, spectrum estimation
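An illustrative pipeline for the feature and classifier stage, using synthetic tremor-like signals in place of the EMD-derived IMFs and scikit-learn's SVM; the simulated frequency shifts between tests and the cross-validation setup are assumptions, not the paper's protocol.

```python
# Illustrative pairwise peak-frequency-difference features with an SVM classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 100.0

def peak_freq(x):
    spec = np.abs(np.fft.rfft(x - x.mean()))
    return np.fft.rfftfreq(x.size, 1 / fs)[np.argmax(spec)]

def simulate(base_f, shift, n=300):
    t = np.arange(n) / fs
    kinetic = np.sin(2 * np.pi * base_f * t) + 0.3 * np.random.randn(n)
    resting = np.sin(2 * np.pi * (base_f + shift) * t) + 0.3 * np.random.randn(n)
    postural = np.sin(2 * np.pi * (base_f + shift / 2) * t) + 0.3 * np.random.randn(n)
    return [peak_freq(kinetic) - peak_freq(resting),     # pairwise peak-frequency
            peak_freq(kinetic) - peak_freq(postural),    # differences as features
            peak_freq(postural) - peak_freq(resting)]

X = [simulate(5.0, +2.0) for _ in range(10)] + [simulate(6.0, -2.0) for _ in range(10)]
y = [0] * 10 + [1] * 10                                  # 0 = PD-like, 1 = ET-like
print(cross_val_score(SVC(kernel="rbf"), np.array(X), np.array(y), cv=5).mean())
```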
Procedia PDF Downloads 442
469 A TFETI Domain Decomposition Solver for von Mises Elastoplasticity Model with Combination of Linear Isotropic-Kinematic Hardening
Authors: Martin Cermak, Stanislav Sysala
Abstract:
In this paper we present an efficient parallel implementation of elastoplastic problems based on the TFETI (Total Finite Element Tearing and Interconnecting) domain decomposition method. This approach allows us to use a parallel solution, compute this nonlinear problem on supercomputers, decrease the solution time and solve problems with millions of DOFs. In our approach we consider an associated elastoplastic model with the von Mises plastic criterion and a combination of linear isotropic and kinematic hardening laws. This model is discretized by the implicit Euler method in time and by the finite element method in space. We consider a system of nonlinear equations with a strongly semismooth and strongly monotone operator. The semismooth Newton method is applied to solve this nonlinear system. The corresponding linearized problems arising in the Newton iterations are solved in parallel by the above-mentioned TFETI method. The implementation of this problem is realized in our in-house MatSol package developed in MATLAB.
Keywords: isotropic-kinematic hardening, TFETI, domain decomposition, parallel solution
Procedia PDF Downloads 420
468 Implementation of Integer Sub-Decomposition Method on Elliptic Curves with J-Invariant 1728
Authors: Siti Noor Farwina Anwar, Hailiza Kamarulhaili
Abstract:
In this paper, we present the idea of implementing the Integer Sub-Decomposition (ISD) method on elliptic curves with j-invariant 1728. The ISD method was proposed in 2013 to compute scalar multiplication on elliptic curves, which remains the most expensive operation in Elliptic Curve Cryptography (ECC). However, the original ISD method only works over the integer number field and solves integer scalar multiplication. By extending the method to a complex quadratic field, we are able to solve complex multiplication and implement the ISD method on elliptic curves with j-invariant 1728. The curve with j-invariant 1728 has a unique discriminant of the imaginary quadratic field. This unique discriminant of the quadratic field yields a unique, efficiently computable endomorphism, which is later able to speed up computations on this curve. However, the ISD method needs three endomorphisms to be accomplished. Hence, we choose all three endomorphisms to be from the same imaginary quadratic field as the curve itself, where the first endomorphism is the unique endomorphism yielded from the discriminant of the imaginary quadratic field.
Keywords: efficiently computable endomorphism, elliptic scalar multiplication, j-invariant 1728, quadratic field
Procedia PDF Downloads 197
467 Investigating Methanol Interaction on Hexagonal Ceria-BTC Microrods
Authors: Jamshid Hussain, Kuen Song Lin
Abstract:
For prospective applications, chemists and materials scientists are particularly interested in creating 3D micro/nanocomposite structures with particular shapes and unique characteristics. Ceria has recently been produced with a variety of morphologies, including one-dimensional structures (nanoparticles, nanorods, nanowires, and nanotubes). It is anticipated that this material can be used in different fields, such as catalysis, methanol decomposition, carbon monoxide oxidation, optical materials, and environmental protection. Distinct three-dimensional hydrated ceria-BTC (CeO₂-1,3,5-benzenetricarboxylic acid) microstructures were successfully synthesized via a hydrothermal route in an aqueous solution. FE-SEM and XRD patterns reveal that the ceria-BTC framework diameter and length are approximately 1.45–2.4 and 5.5–6.5 µm, respectively, at 130 °C and pH 2 for 72 h. It was demonstrated that the reaction conditions affected the 3D ceria-BTC architecture. Fourier transform infrared spectroscopy confirmed that the hexagonal ceria-BTC microrods comprise organic linkers, which are transformed into hierarchical ceria microrods in the presence of air at 400 °C. The Ce-O bonding of the hierarchical ceria microrod (HCM) species has a bond distance and coordination number of 2.44 and 6.89, respectively, which attenuates the EXAFS spectra. Compared to the ceria powder, the HCMs produced more oxygen vacancies and Ce³⁺, as shown by the XPS and XANES/EXAFS analyses.
Keywords: hierarchical ceria microrod, three-dimensional ceria, methanol decomposition, reaction mechanism, XANES/EXAFS
Procedia PDF Downloads 7
466 Using the Combination of Food Waste and Animal Waste as a Reliable Energy Source in Rural Guatemala
Authors: Jina Lee
Abstract:
Methane gas is a common byproduct of any process of rot and degradation of organic matter. When decomposition occurs, this gas is emitted directly into the atmosphere. Methane is the simplest alkane hydrocarbon that exists. Its chemical formula is CH₄, meaning that four atoms of hydrogen and one of carbon are linked by covalent bonds. Methane is found in nature as a gas at normal temperatures and pressures. In addition, it is colorless and odorless, despite being produced by the rotting of plants. It is a non-toxic gas, and the only real danger is that of burns if it were to ignite. There are several ways to generate methane gas in homes, and the amount of methane gas generated by the decomposition of organic matter varies depending on the type of matter in question. An experiment was designed to measure the efficiency, defined as the relationship between the amount of raw material and the amount of gas generated, of three different mixtures of organic matter: 1. household food scraps; 2. animal waste (excrement); 3. an equal-parts mixture of food scraps and animal waste. The results allowed us to conclude which of the three mixtures gives the highest efficiency of methane gas generation and which would be the most suitable for household methane generation systems, in order to occupy less space while generating an equal amount of gas.
Keywords: alternative energy source, energy conversion, methane gas conversion system, waste management
Procedia PDF Downloads 162