Search results for: algebraic decomposition
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 725

605 Analysis of Nonlinear and Non-Stationary Signal to Extract the Features Using Hilbert Huang Transform

Authors: A. N. Paithane, D. S. Bormane, S. D. Shirbahadurkar

Abstract:

Emotion recognition is an important research topic in the field of human-computer interfaces. A novel technique for Feature Extraction (FE) is presented here, together with a new method for human emotion recognition based on the Hilbert-Huang Transform (HHT). The HHT is well suited to analyzing nonlinear and non-stationary signals. Each signal is decomposed into Intrinsic Mode Functions (IMFs) using Empirical Mode Decomposition (EMD), and these functions are used to extract features through fission and fusion processes. The adopted decomposition technique adaptively decomposes the signals, and in this perspective we report the potential usefulness of EMD-based techniques. The algorithm was evaluated on the manually annotated Augsburg University Database.
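As an illustration of the feature-extraction step described above, the following minimal Python sketch computes per-IMF Hilbert features (mean and spread of instantaneous amplitude and frequency). It assumes the IMFs have already been produced by an EMD routine (an external library, not part of this abstract); the function name and the toy "IMFs" are illustrative only.

```python
import numpy as np
from scipy.signal import hilbert

def hht_features(imfs, fs):
    """Per-IMF Hilbert features: mean/std of instantaneous amplitude and frequency.

    imfs : 2-D array, shape (n_imfs, n_samples), produced by any EMD routine.
    fs   : sampling frequency in Hz.
    """
    features = []
    for imf in imfs:
        analytic = hilbert(imf)                          # analytic signal of the IMF
        amplitude = np.abs(analytic)                     # instantaneous amplitude envelope
        phase = np.unwrap(np.angle(analytic))            # unwrapped instantaneous phase
        inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency (Hz)
        features.extend([amplitude.mean(), amplitude.std(),
                         inst_freq.mean(), inst_freq.std()])
    return np.asarray(features)

# toy usage: two synthetic "IMFs" standing in for an EMD output of an ECG segment
fs = 250.0
t = np.arange(0, 2, 1 / fs)
imfs = np.vstack([np.sin(2 * np.pi * 8 * t), 0.5 * np.sin(2 * np.pi * 1 * t)])
print(hht_features(imfs, fs))
```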

Keywords: intrinsic mode function (IMF), Hilbert-Huang transform (HHT), empirical mode decomposition (EMD), emotion detection, electrocardiogram (ECG)

Procedia PDF Downloads 543
604 A Heuristic Based Decomposition Approach for a Hierarchical Production Planning Problem

Authors: Nusrat T. Chowdhury, M. F. Baki, A. Azab

Abstract:

The production planning problem is concerned with specifying the optimal quantities to produce in order to meet demand over a prespecified planning horizon with the least possible expenditure. Making the right decisions in production planning directly affects the performance and productivity of a manufacturing firm and, in turn, its ability to compete in the market. Therefore, developing and improving solution procedures for production planning problems is very significant. In this paper, we develop a Dantzig-Wolfe decomposition of a multi-item hierarchical production planning problem with a capacity constraint and present a column generation approach to solve the problem. The original Mixed Integer Linear Programming model of the problem is decomposed item by item into a master problem and a number of subproblems. The capacity constraint is the linking constraint between the master problem and the subproblems. The subproblems are solved using dynamic programming. We also propose a multi-step iterative capacity allocation heuristic procedure to handle any infeasibility that arises while solving the problem. We compare the computational performance of the developed solution approach against the state-of-the-art heuristic procedure available in the literature. The results show that the proposed heuristic-based decomposition approach improves solution quality by 20% compared with the literature.
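Since the abstract states that the subproblems are solved by dynamic programming, the following sketch shows a classic Wagner-Whitin-style recursion for a single-item uncapacitated lot-sizing subproblem. It is an assumption about the subproblem's general form, not the paper's exact pricing problem (which would also incorporate the capacity-linking dual prices).

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Single-item uncapacitated lot-sizing by dynamic programming.

    demand[t]       : demand in period t
    setup_cost[t]   : fixed setup cost if production occurs in period t
    holding_cost[t] : cost to carry one unit from period t to t+1
    Returns (optimal cost, sorted list of production periods).
    """
    T = len(demand)
    INF = float("inf")
    cost = [INF] * (T + 1)        # cost[t] = min cost of covering periods 0..t-1
    cost[0] = 0.0
    prev = [0] * (T + 1)
    for t in range(1, T + 1):
        for j in range(t):        # last setup occurs in period j
            c = cost[j] + setup_cost[j]
            # holding cost of serving periods j..t-1 from the batch produced in j
            for k in range(j + 1, t):
                c += demand[k] * sum(holding_cost[j:k])
            if c < cost[t]:
                cost[t], prev[t] = c, j
    # recover the production plan
    plan, t = [], T
    while t > 0:
        plan.append(prev[t])
        t = prev[t]
    return cost[T], sorted(plan)

print(wagner_whitin([20, 50, 10, 50, 50], [100] * 5, [1] * 5))
```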

Keywords: inventory, multi-level capacitated lot-sizing, emission control, setup carryover

Procedia PDF Downloads 103
603 Tempo-Spatial Pattern of Progress and Disparity in Child Health in Uttar Pradesh, India

Authors: Gudakesh Yadav

Abstract:

Uttar Pradesh is one of the poorest-performing states of India in terms of child health. Using data from three rounds of the NFHS and two rounds of the DLHS, this paper examines tempo-spatial change in child health and care practices in Uttar Pradesh and its regions. Rate-ratio, CI, multivariate, and decomposition analyses have been used for the study. Findings demonstrate that child health care practices have improved over time in all regions of the state; however, the western and southern regions registered the lowest progress in child immunization. There has been no decline in the prevalence of diarrhoea and ARI over the period, and both remain critically high in the western and southern regions. These regions also performed poorly in providing ORS and in treating diarrhoea and ARI. Public health services are the least preferred for diarrhoea and ARI treatment. Results from the decomposition analysis reveal that rural residence, mother's illiteracy and wealth contributed the most to the low utilization of child health care services consistently over the period. The study calls for targeted interventions for vulnerable children to accelerate the utilization of child health care services. Poorly performing regions should be targeted and routinely monitored on weak child health indicators.

Keywords: Acute Respiratory Infection (ARI), decomposition, diarrhea, inequality, immunization

Procedia PDF Downloads 274
602 A Study on the Solutions of the 2-Dimensional and Fourth-Order Partial Differential Equations

Authors: O. Acan, Y. Keskin

Abstract:

In this study, we carry out a comparative study of the reduced differential transform method, the Adomian decomposition method, the variational iteration method and the homotopy analysis method. These methods are used in many fields of engineering. The comparison is carried out on a class of 2-dimensional, fourth-order partial differential equations, the Kuramoto–Sivashinsky equations. Three numerical examples are also presented to validate and demonstrate the efficiency of the four methods. Furthermore, it is shown that the reduced differential transform method has an advantage over the other methods: it is effective and simple and can be applied to nonlinear problems arising in engineering.
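One building block of the Adomian decomposition method named above is the computation of Adomian polynomials for the nonlinear term. The sympy sketch below generates them from the standard formula A_n = (1/n!) dⁿ/dλⁿ N(Σ u_k λᵏ) evaluated at λ = 0; the example nonlinearity N(u) = u²/2 is illustrative only, and the sketch does not reproduce the paper's full method comparison.

```python
import sympy as sp

def adomian_polynomials(N, n_terms):
    """Adomian polynomials A_0..A_{n_terms-1} for a nonlinearity N(u).

    A_n = (1/n!) * d^n/dlam^n [ N(sum_k u_k lam^k) ] at lam = 0.
    """
    lam = sp.symbols("lambda")
    u = sp.symbols(f"u0:{n_terms}")                    # u0, u1, ..., u_{n-1}
    series = sum(u[k] * lam**k for k in range(n_terms))
    polys = []
    for n in range(n_terms):
        A_n = sp.diff(N(series), lam, n).subs(lam, 0) / sp.factorial(n)
        polys.append(sp.expand(A_n))
    return polys

# example: a quadratic nonlinearity of the kind handled by ADM, N(u) = u**2 / 2
for i, A in enumerate(adomian_polynomials(lambda v: v**2 / 2, 4)):
    print(f"A_{i} =", A)
```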

Keywords: reduced differential transform method, Adomian decomposition method, variational iteration method, homotopy analysis method

Procedia PDF Downloads 404
601 A Study on Ideals and Prime Ideals of Sub-Distributive Semirings and Its Applications to Symmetric Fuzzy Numbers

Authors: Rosy Joseph

Abstract:

From an algebraic point of view, semirings provide the most natural generalization of group theory and ring theory. In the absence of additive inverses in a semiring, one has to impose a weaker condition, the additive cancellation law, to study interesting structural properties. In many practical situations, fuzzy numbers are used to model imprecise observations derived from uncertain measurements or linguistic assessments. In this connection, a special class of fuzzy numbers whose shape is symmetric with respect to a vertical line, called symmetric fuzzy numbers, is suitable: for α ∈ (0, 1] the α-cuts have a constant mid-point, the upper end of the interval is a non-increasing function of α, and the lower end is the mirror image of this function. Based on this description, arithmetic operations and a ranking technique for ordering symmetric fuzzy numbers were worked out in detail, and it was observed that the class of symmetric fuzzy numbers forms a commutative semigroup with the cancellation property under addition and a multiplicative monoid satisfying the sub-distributive property. In this paper, we introduce the algebraic structure of a sub-distributive semiring and discuss its properties, viz., ideals and prime ideals of sub-distributive semirings, the sub-distributive ring of differences, etc., in detail. Symmetric fuzzy numbers are used as an illustration.
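A minimal sketch of the arithmetic described above, under the assumption that a symmetric fuzzy number is stored as a constant midpoint m together with a non-increasing spread function s(α), so that the α-cut is [m - s(α), m + s(α)]. Addition adds midpoints and spreads, which illustrates the commutative additive structure with cancellation; the class and example spreads are illustrative, not the paper's notation.

```python
class SymmetricFuzzyNumber:
    """Symmetric fuzzy number with alpha-cut [m - s(a), m + s(a)].

    m : constant midpoint of every alpha-cut
    s : non-increasing spread function of alpha in (0, 1]
    """
    def __init__(self, midpoint, spread):
        self.m = midpoint
        self.s = spread                       # callable: alpha -> spread

    def alpha_cut(self, alpha):
        half = self.s(alpha)
        return (self.m - half, self.m + half)

    def __add__(self, other):                 # midpoints and spreads both add
        return SymmetricFuzzyNumber(self.m + other.m,
                                    lambda a: self.s(a) + other.s(a))

    def scale(self, c):                       # multiplication by a positive scalar
        return SymmetricFuzzyNumber(c * self.m, lambda a: c * self.s(a))

# "about 3" and "about 5" with triangular-style spreads
x = SymmetricFuzzyNumber(3.0, lambda a: 1.0 - a)
y = SymmetricFuzzyNumber(5.0, lambda a: 2.0 * (1.0 - a))
z = x + y
print(z.alpha_cut(0.5))   # (6.5, 9.5): midpoint 8, spread 1.5 at alpha = 0.5
```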

Keywords: semirings, subdistributive ring of difference, subdistributive semiring, symmetric fuzzy numbers

Procedia PDF Downloads 177
600 A Discovery of the Dual Sequential Pattern of Prime Numbers in P x P: Applications in a Formal Proof of the Twin-Prime Conjecture

Authors: Yingxu Wang

Abstract:

This work presents basic research on the recursive structures and dual sequential patterns of primes towards a formal proof of the Twin-Prime Conjecture (TPC). A rigorous methodology of Twin-Prime Decomposition (TPD) is developed in MATLAB to inductively verify potential twins in the dual sequences of primes. The key finding of this study is that the dual sequences of twin primes are not only symmetric but also infinite in the unique base-6 cycle, except that a limited subset of potential pairs is eliminated by the lack of dual primality. Both theory and experiments support the formal proof that the infinity of twin primes stated in TPC holds in the P x P space.
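The base-6 observation can be illustrated with a short sketch (not the paper's MATLAB TPD code): every prime greater than 3 has the form 6k ± 1, so twin-prime candidates other than (3, 5) are the pairs (6k - 1, 6k + 1), and a candidate pair is eliminated exactly when it lacks dual primality.

```python
from sympy import isprime

def twin_pairs_base6(k_max):
    """Candidate twin pairs (6k - 1, 6k + 1); keep those where both are prime."""
    twins, eliminated = [], 0
    for k in range(1, k_max + 1):
        p, q = 6 * k - 1, 6 * k + 1
        if isprime(p) and isprime(q):
            twins.append((p, q))
        else:
            eliminated += 1      # candidate pair lacking dual primality
    return twins, eliminated

twins, eliminated = twin_pairs_base6(50)
print(twins[:6])   # [(5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61)]
print(len(twins), "twin pairs,", eliminated, "candidates eliminated up to 6*50 + 1")
```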

Keywords: number theory, primes, twin-prime conjecture, dual primes (P x P), twin prime decomposition, formal proof, algorithm

Procedia PDF Downloads 27
599 Carbon Sequestration and Carbon Stock Potential of Major Forest Types in the Foot Hills of Nilgiri Biosphere Reserve, India

Authors: B. Palanikumaran, N. Kanagaraj, M. Sangareswari, V. Sailaja, Kapil Sihag

Abstract:

The present study aimed to estimate the carbon sequestration potential of the major forest types present in the foothills of the Nilgiri Biosphere Reserve. The total biomass carbon stock was estimated as 14.61 t C ha⁻¹ in tropical thorn forest, 75.16 t C ha⁻¹ in tropical dry deciduous forest and 187.52 t C ha⁻¹ in tropical moist deciduous forest. The corresponding density and basal area were 173, 349 and 391 stems ha⁻¹ and 6.21, 31.09 and 67.34 m² ha⁻¹, respectively. The soil carbon stock of the different forest ecosystems was also estimated: tropical moist deciduous forest (71.74 t C ha⁻¹) accounted for more soil carbon than tropical dry deciduous forest (31.80 t C ha⁻¹) and tropical thorn forest (3.99 t C ha⁻¹). Tropical moist deciduous forest had the highest annual leaf litter production of 12.77 t ha⁻¹ yr⁻¹, followed by tropical dry deciduous forest with 6.44 t ha⁻¹ yr⁻¹ and tropical thorn forest with 3.42 t ha⁻¹ yr⁻¹. The leaf litter carbon stocks of tropical thorn, tropical dry deciduous and tropical moist deciduous forest were 1.02, 2.28 and 5.42 t C ha⁻¹ yr⁻¹, respectively. Decomposition at the soil surface followed the order tropical dry deciduous forest (77.66 percent) > tropical thorn forest (69.49 percent) > tropical moist deciduous forest (63.17 percent). Decomposition was higher at the soil subsurface, with the highest value observed in tropical dry deciduous forest (80.52 percent), followed by tropical moist deciduous forest (77.65 percent) and tropical thorn forest (72.10 percent). Among the three forest types, tropical moist deciduous forest supported the highest bacterial (59.67 × 10⁵ cfu g⁻¹ soil), actinomycete (74.87 × 10⁴ cfu g⁻¹ soil) and fungal (112.60 × 10³ cfu g⁻¹ soil) populations. Overall, the study indicates that tropical moist deciduous forest has the potential to store the highest carbon content as biomass, at 264.68 t C ha⁻¹, and to support the largest microbial populations.

Keywords: basal area, carbon sequestration, carbon stock, Nilgiri biosphere reserve

Procedia PDF Downloads 138
598 Thermal Stability and Insulation of a Cement Mixture Using Graphene Oxide Nanosheets

Authors: Nasser A. M. Habib

Abstract:

The impressive physical properties of graphene derivatives, including their thermal properties, have made them an attractive addition to advanced construction nanomaterials. In this study, we investigated the impact of incorporating low amounts of graphene oxide (GO) into cement mixture nanocomposites on their heat storage and thermal stability. The composites were analyzed using Fourier transform infrared spectroscopy, thermogravimetric analysis, and field emission scanning electron microscopy. Results showed that at a concentration of 1.2 wt%, GO significantly improved the specific heat by 30%, reduced the thermal conductivity by 15%, and reduced thermal decomposition to only 3%. These findings suggest that the cement mixture can withstand high temperatures and may be suitable for applications requiring thermal stability and insulation.

Keywords: cement mixture composite, graphene oxide, thermal decomposition, thermal conductivity

Procedia PDF Downloads 25
597 A Discovery on the Symmetrical Pattern of Mirror Primes in P²: Applications in the Formal Proof of the Goldbach Conjecture

Authors: Yingxu Wang

Abstract:

The base-6 structure and properties of mirror primes are discovered in this work towards a proof of the Goldbach Conjecture. The paper reveals a fundamental pattern in pairs of mirror primes adjacent to any even number nₑ > 2 at symmetrical distances on both sides, determined by a methodology of Mirror Prime Decomposition (MPD). MPD leads to a formal proof of the Goldbach Conjecture, arguing that the conjecture holds because any pivot even number nₑ > 2 is the sum of at least one adjacent pair of mirror primes divided by 2. This work has not only revealed the analytic pattern of base-6 primes but also formally established the validity of the Goldbach Conjecture.
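A small illustration of the mirror-prime idea (not the paper's MPD proof): for an even pivot nₑ, mirror primes are pairs (nₑ - d, nₑ + d) that are both prime, and each such pair sums to 2nₑ.

```python
from sympy import isprime

def mirror_prime_pairs(n_e):
    """All prime pairs symmetric about the pivot n_e: (n_e - d, n_e + d)."""
    assert n_e > 2 and n_e % 2 == 0, "pivot is assumed to be an even number > 2"
    pairs = []
    for d in range(0, n_e - 1):
        p, q = n_e - d, n_e + d
        if isprime(p) and isprime(q):
            pairs.append((p, q))          # p + q == 2 * n_e
    return pairs

for n_e in (4, 10, 100):
    pairs = mirror_prime_pairs(n_e)
    print(n_e, "->", pairs[:4], "... each pair sums to", 2 * n_e)
```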

Keywords: number theory, primes, mirror primes, double recursive patterns, Goldbach conjecture, formal proof, mirror-prime decomposition, applications

Procedia PDF Downloads 17
596 A Quantum Leap: Developing Quantum Semi-Structured Complex Numbers to Solve the “Division by Zero” Problem

Authors: Peter Jean-Paul, Shanaz Wahid

Abstract:

The problem of division by zero can be stated as: what is the value of 0 × 1/0? This expression has been considered undefined by mathematicians because it can have two equally valid solutions, either 0 or 1. Recently, the semi-structured complex number set was introduced to address division by zero. However, whilst the number set had some merits, it was considered to have a poor theoretical foundation and did not provide a satisfactory solution to division by zero; moreover, the set lacked consistency in simple algebraic calculations, producing contradictory results when dividing by zero. To overcome these issues, this research starts by treating the expression 0 × 1/0 as a quantum mechanical system that produces two entangled results, 0 and 1. Dirac notation (a tool from quantum mechanics) is then used to redefine the unstructured unit p in semi-structured complex numbers so that p represents the superposition of the two results (0 and 1) and collapses into a single value when used in algebraic expressions. In the process, this paper describes a new number set, Quantum Semi-structured Complex Numbers, that provides a valid solution to the problem of division by zero. This research shows that the new set (1) forms a field, (2) produces consistent results when solving division-by-zero problems, and (3) can be used to accurately describe systems whose mathematical descriptions involve division by zero. The work thus provides a firm foundation for Quantum Semi-structured Complex Numbers and supports their practical use.

Keywords: division by zero, semi-structured complex numbers, quantum mechanics, Hilbert space, Euclidean space

Procedia PDF Downloads 128
595 Enhancement of Pulsed Eddy Current Response Based on Power Spectral Density after Continuous Wavelet Transform Decomposition

Authors: A. Benyahia, M. Zergoug, M. Amir, M. Fodil

Abstract:

The main objective of this work is to enhance the Pulsed Eddy Current (PEC) response from aluminum structures using signal processing. Cracks and metal loss in different structures cause changes in the measured PEC response. In this paper, time-frequency analysis is used to represent the PEC response, which generates a large quantity of data and reduces measurement noise. Power Spectral Density (PSD) after Wavelet Decomposition (PSD-WD) is proposed for defect detection. The experimental results demonstrate that surface cracks can be extracted satisfactorily by the proposed method, and the validity of the method is discussed.
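A hedged sketch of the processing chain described above, using a hand-rolled Mexican hat (Ricker) wavelet for the continuous wavelet transform and scipy's Welch estimator for the PSD; the synthetic PEC-like pulse and the chosen scales are illustrative only and do not reproduce the paper's experimental setup.

```python
import numpy as np
from scipy.signal import welch

def ricker(points, a):
    """Mexican hat (Ricker) mother wavelet sampled on `points` samples, width a."""
    t = np.arange(points) - (points - 1) / 2.0
    norm = 2.0 / (np.sqrt(3.0 * a) * np.pi**0.25)
    return norm * (1.0 - (t / a) ** 2) * np.exp(-t**2 / (2.0 * a**2))

def cwt_psd(signal, widths, fs):
    """CWT with Mexican hat wavelets, then Welch PSD of each scale's coefficients."""
    psds = []
    for a in widths:
        wavelet = ricker(min(10 * int(a), len(signal)), a)
        coeffs = np.convolve(signal, wavelet, mode="same")   # one CWT scale
        f, pxx = welch(coeffs, fs=fs, nperseg=256)
        psds.append(pxx)
    return f, np.array(psds)                                 # shape (n_scales, n_freqs)

# toy PEC-like response: decaying pulse plus a small "defect" echo and noise
fs = 10_000.0
t = np.arange(0, 0.05, 1 / fs)
pec = (np.exp(-t / 0.004)
       + 0.2 * np.exp(-((t - 0.02) / 0.001) ** 2)
       + 0.05 * np.random.randn(t.size))
f, psds = cwt_psd(pec, widths=[4, 8, 16, 32], fs=fs)
print(psds.shape)   # one PSD per wavelet scale; peaks indicate defect-related energy
```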

Keywords: DT, pulsed eddy current, continuous wavelet transform, Mexican hat mother wavelet, defect detection, power spectral density

Procedia PDF Downloads 205
594 Complete Ensemble Empirical Mode Decomposition with Adaptive Noise Temporal Convolutional Network for Remaining Useful Life Prediction of Lithium Ion Batteries

Authors: Jing Zhao, Dayong Liu, Shihao Wang, Xinghua Zhu, Delong Li

Abstract:

Unmanned Underwater Vehicles generally operate in the deep sea, which has its own unique working conditions. Lithium-ion power batteries must therefore offer the stability and endurance required of an underwater vehicle's power source, and it is essential to accurately forecast how long lithium-ion batteries will last in order to maintain system reliability and safety. To model and forecast lithium battery Remaining Useful Life (RUL), this research proposes a model based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and a Temporal Convolutional Net (CEEMDAN-TCN). Two datasets, NASA and CALCE, which differ in their capacity-data fluctuations, are used to verify the model and examine the experimental results in order to demonstrate the generalizability of the approach. The experiments demonstrate the strong universality of the network structure and its ability to achieve good fitting results on the test set for various battery dataset types. The evaluation metrics reveal that the prediction performance of CEEMDAN-TCN is 25% to 35% better than that of a single neural network, proving that feature expansion and modal decomposition can both enhance the model's generalizability and be extremely useful in industrial settings.
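A minimal PyTorch sketch of the TCN side of the model: one dilated causal convolution block with a residual connection, stacked with increasing dilation and followed by a pooling head that outputs a single RUL value per input window. The layer sizes are illustrative, and the CEEMDAN preprocessing is assumed to be supplied by an external EMD library (not shown).

```python
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    """One residual block of a temporal convolutional network (dilated causal conv)."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # left padding keeps causality
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                                # x: (batch, channels, time)
        y = self.relu(self.conv1(nn.functional.pad(x, (self.pad, 0))))
        y = self.relu(self.conv2(nn.functional.pad(y, (self.pad, 0))))
        return self.relu(x + y)                          # residual connection

# stack blocks with increasing dilation and map to one RUL value per sequence
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=1),                     # embed capacity series (or IMFs)
    TCNBlock(16, dilation=1), TCNBlock(16, dilation=2), TCNBlock(16, dilation=4),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 1),
)
capacity_window = torch.randn(8, 1, 64)                  # batch of 64-cycle capacity windows
print(model(capacity_window).shape)                      # torch.Size([8, 1])
```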

Keywords: lithium-ion battery, remaining useful life, complete EEMD with adaptive noise, temporal convolutional net

Procedia PDF Downloads 109
593 Preparation and Characterization of Organic Silver Precursors for Conductive Ink

Authors: Wendong Yang, Changhai Wang, Valeria Arrighi

Abstract:

A low ink sintering temperature is desired for flexible electronics, as it would widen the application of conductive inks to temperature-sensitive substrates, where the selection of the silver precursor is critical. In this paper, four organic silver precursors, silver carbonate, silver oxalate, silver tartrate and silver itaconate, were first synthesized using an ion-exchange method. Various characterization methods were employed to investigate their physical phase, chemical composition, morphology and thermal decomposition behavior. Silver oxalate was found to have the most suitable thermal properties, showing the lowest decomposition temperature. An ink was then formulated by complexing the as-prepared silver oxalate with ethylenediamine in organic solvents. Results show that a favorable conductive film, with a uniform surface structure consisting of silver nanoparticles and few voids, could be produced from the ink at a sintering temperature of 150 °C.

Keywords: conductive ink, electrical property, film, organic silver

Procedia PDF Downloads 301
592 Forecasting Amman Stock Market Data Using a Hybrid Method

Authors: Ahmad Awajan, Sadam Al Wadi

Abstract:

In this study, a hybrid method based on Empirical Mode Decomposition and Holt-Winters (EMD-HW) is used to forecast Amman stock market data. First, the data are decomposed by the EMD method into Intrinsic Mode Functions (IMFs) and a residual component. Then, all components are forecasted by the HW technique. Finally, the forecasts are aggregated to obtain the forecast of the stock market data. Empirical results showed that EMD-HW outperforms the individual forecasting models. The strength of EMD-HW lies in its ability to forecast non-stationary and nonlinear time series without the need for any transformation method. Moreover, EMD-HW achieves relatively high accuracy compared with eight existing forecasting methods, based on five forecast error measures.
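The Holt-Winters stage can be sketched as follows, assuming the IMFs and residual have already been obtained from an EMD routine (not shown) and using statsmodels' ExponentialSmoothing; the two synthetic components stand in for a real decomposition of the stock series.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def emd_hw_forecast(components, horizon):
    """Forecast each EMD component with Holt-Winters and aggregate the results.

    components : list of 1-D arrays (IMFs plus residual) summing to the original series.
    horizon    : number of steps ahead to forecast.
    """
    total = np.zeros(horizon)
    for comp in components:
        model = ExponentialSmoothing(comp, trend="add", seasonal=None).fit()
        total += model.forecast(horizon)
    return total

# toy example: pretend the series was split into a "fast" IMF and a "slow" residual
t = np.arange(300, dtype=float)
fast = np.sin(2 * np.pi * t / 20)            # oscillatory IMF-like component
slow = 0.01 * t + 5.0                        # trend-like residual
print(emd_hw_forecast([fast, slow], horizon=10))
```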

Keywords: Holt-Winter method, empirical mode decomposition, forecasting, time series

Procedia PDF Downloads 98
591 A Geometric Interpolation Scheme in Overset Meshes for the Piecewise Linear Interface Calculation Volume of Fluid Method in Multiphase Flows

Authors: Yanni Chang, Dezhi Dai, Albert Y. Tong

Abstract:

Piecewise linear interface calculation (PLIC) schemes are widely used in the volume-of-fluid (VOF) method to capture interfaces in numerical simulations of multiphase flows. Dynamic overset meshes can be especially useful in applications involving component motions and complex geometric shapes. In most overset interpolation schemes for continuous flow variables, the value of an acceptor cell is evaluated as a weighted average of its donors, with the weighting factors obtained by various algebraic methods. Unlike continuous flow variables, however, the VOF field is a step function near the interfaces, varying rapidly from zero to unity. In the present study, the VOF value of an acceptor cell is evaluated geometrically, transferring the fraction field between the meshes precisely using interfaces reconstructed from the corresponding donor elements. A geometric interpolation scheme of the VOF field in overset meshes for the PLIC-VOF method is thus proposed. It has been tested successfully in quadrilateral/hexahedral overset meshes by employing several VOF advection tests with imposed solenoidal velocity fields. The proposed algorithm is shown to yield higher accuracy in mass conservation and interface reconstruction than three other algebraic schemes.

Keywords: interpolation scheme, multiphase flows, overset meshes, PLIC-VOF method

Procedia PDF Downloads 139
590 Annular Hyperbolic Profile Fins with Variable Thermal Conductivity Using Laplace Adomian Transform and Double Decomposition Methods

Authors: Yinwei Lin, Cha'o-Kuang Chen

Abstract:

In this article, the Laplace Adomian transform method (LADM) and the double decomposition method (DDM) are used to solve annular hyperbolic profile fins with variable thermal conductivity. When the thermal conductivity parameter ε is relatively large, the numerical solution obtained with DDM becomes incorrect; moreover, when more than seven terms are used, the DDM solution becomes very complicated. The present method, in contrast, remains easy to compute beyond seven terms and yields more precise numerical solutions. For relatively large ε, LADM also achieves better accuracy than DDM.

Keywords: fins, thermal conductivity, Laplace transform, Adomian, nonlinear

Procedia PDF Downloads 305
589 Urban-Rural Inequality in Mexico after Nafta: A Quantile Regression Analysis

Authors: Rene Valdiviezo-Issa

Abstract:

In this paper, we use Mexico's Household Income and Expenditure (ENIGH) survey to explain the behaviour of the urban-rural expenditure gap since Mexico's incorporation into the North American Free Trade Agreement (NAFTA) in 1994, comparing it with the latest available survey, conducted in 2014. We use real trimestral expenditure per capita (RTEPC) as the measure of welfare, and quantile regressions together with a quantile regression decomposition to describe the gap between the urban and rural distributions of log RTEPC. We find that the decrease in the difference between the urban and rural distributions of log RTEPC, i.e. in inequality, is driven by a deprivation of the urban areas in very specific characteristics rather than by an improvement of the rural areas. The decomposition shows that the gap is primarily brought about by differences in returns to covariates between the urban and rural areas.
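A minimal sketch of the decomposition idea with statsmodels' QuantReg: fit conditional quantiles separately for urban and rural households and build a counterfactual distribution that combines urban covariates with rural returns. The simulated data and variable names are illustrative stand-ins, not the ENIGH variables, and a full Machado-Mata-style decomposition would repeat this over a grid of quantiles.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

def simulate(schooling_mean, returns):             # toy stand-in for ENIGH microdata
    X = pd.DataFrame({"const": 1.0, "schooling": rng.normal(schooling_mean, 2, n)})
    y = X @ returns + rng.normal(0, 0.5, n)        # log expenditure per capita
    return X, y

urban_X, urban_y = simulate(10.0, np.array([7.0, 0.10]))
rural_X, rural_y = simulate(6.0, np.array([6.5, 0.06]))

tau = 0.5                                          # median; repeat over a quantile grid
b_rural = sm.QuantReg(rural_y, rural_X).fit(q=tau).params.values

gap = np.quantile(urban_y, tau) - np.quantile(rural_y, tau)
counterfactual = urban_X.values @ b_rural          # urban covariates, rural returns
covariates_part = np.quantile(counterfactual, tau) - np.quantile(rural_y, tau)
returns_part = gap - covariates_part               # part attributed to returns to covariates
print(f"gap={gap:.3f}  covariates={covariates_part:.3f}  returns={returns_part:.3f}")
```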

Keywords: quantile regression, urban-rural inequality, inequality in Mexico, income decomposition

Procedia PDF Downloads 255
588 Characterization of Enhanced Thermostable Polyhydroxyalkanoates

Authors: Ahmad Idi

Abstract:

The biosynthesis and properties of polyhydroxyalkanoates (PHA) are determined by the bacterial strain and the culture conditions. This study therefore elucidates the structure and properties of the PHA produced by a newly isolated strain of the photosynthetic bacterium Rhodobacter sphaeroides ADZ101 grown under optimized culture conditions. The properties of the accumulated PHA were determined via FTIR, NMR, TGA, and GC-MS analyses. The results showed that acetate and ammonium chloride gave the highest PHA accumulation at a ratio of 32.5 mM and neutral pH. The structural analyses showed that the polymer comprises both short- and medium-chain-length monomers (C5, C13, C14 and C18), as well as novel PHA monomers. The thermal analysis revealed maximum decomposition temperatures of 395 °C and 454 °C, indicating two major decomposition reactions. Thus, the bacterial strain, the optimized culture conditions and the abundance of novel monomers enhanced the thermostability of the accumulated PHA.

Keywords: bioplastic, polyhydroxyalkanoates, Rhodobacter sphaeroides ADZ101, thermostable PHA

Procedia PDF Downloads 116
587 Preparation and Characterization of Titania-Coated Glass Fibrous Filters Using Aqueous Peroxotitanium Acid Solution

Authors: Ueda Honoka, Yasuo Hasegawa, Fumihiro Nishimura, Jae-Ho Kim, Susumu Yonezawa

Abstract:

An aqueous peroxotitanium acid solution prepared from TiO₂ fluorinated by F₂ gas was used for TiO₂ coating of glass fibrous filters in this study. The coating of TiO₂ on the surface of the glass fibers was carried out at 120 °C for 15 min to 24 h with the aqueous peroxotitanium acid solution in a hydrothermal synthesis autoclave reactor. The morphology of the TiO₂ coating layer depended largely on the reaction time, as shown by scanning electron microscopy and energy dispersive X-ray spectroscopy: with increasing reaction time, the TiO₂ layer expanded uniformly over the glass. Moreover, surface fluorination of the glass fibers promoted the formation of the TiO₂ layer. The photocatalytic activity of the prepared titania-coated glass fibrous filters was investigated by both the degradation of methylene blue (MB) and the decomposition of gaseous acetaldehyde. The MB decomposition ratio of the fluorinated samples was about 95% after 30 min of UV irradiation, much higher than that of the untreated samples (70%). The decomposition ratio of gaseous acetaldehyde with the fluorinated samples (50%) was also higher than that of the untreated samples (30%). Consequently, photocatalytic activity is enhanced by surface fluorination.

Keywords: aqueous peroxotitanium acid solution, titania-coated glass fibrous filters, photocatalytic activity, surface fluorination

Procedia PDF Downloads 57
586 Empirical Mode Decomposition Based Denoising by Customized Thresholding

Authors: Wahiba Mohguen, Raïs El’hadi Bekka

Abstract:

This paper presents a denoising method called EMD-Custom, based on Empirical Mode Decomposition (EMD) and a modified Customized Thresholding Function (Custom). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). All noisy IMFs are then thresholded by the proposed thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared with EMD-based signal denoising using soft and hard thresholding. The results, evaluated in terms of SNR in dB and Mean Square Error (MSE), showed the superior performance of the proposed EMD-Custom denoising over the traditional approaches.
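The thresholding step can be sketched as below. The hard and soft rules are the standard ones, while the "customized" rule shown is only an illustrative smooth compromise between them, not the paper's exact Custom function; each IMF receives a robust universal threshold before reconstruction.

```python
import numpy as np

def hard_threshold(x, thr):
    return np.where(np.abs(x) > thr, x, 0.0)

def soft_threshold(x, thr):
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def custom_threshold(x, thr, alpha=3.0):
    """A smooth compromise between hard and soft rules (illustrative only, not the
    paper's exact Custom function): shrinks coefficients near the threshold,
    approaches the hard rule for large coefficients."""
    shrink = np.abs(x) - thr / (1.0 + np.exp(alpha * (np.abs(x) - thr)))
    return np.where(np.abs(x) > thr, np.sign(x) * shrink, 0.0)

def denoise(imfs, rule=soft_threshold):
    """Threshold each IMF with a universal threshold, then reconstruct the signal."""
    out = np.zeros_like(imfs[0])
    for imf in imfs:
        sigma = np.median(np.abs(imf)) / 0.6745          # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(len(imf)))    # universal threshold
        out += rule(imf, thr)
    return out

# usage: denoised = denoise(imfs, rule=custom_threshold)   # imfs from any EMD routine
```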

Keywords: customized thresholding, ECG signal, EMD, hard thresholding, soft-thresholding

Procedia PDF Downloads 280
585 Solutions of Fractional Reaction-Diffusion Equations Used to Model the Growth and Spreading of Biological Species

Authors: Kamel Al-Khaled

Abstract:

Reaction-diffusion equations are commonly used in population biology to model the spread of biological species. In this paper, we propose a fractional reaction-diffusion equation in which the classical second-derivative diffusion term is replaced by a fractional derivative of order less than two. Based on the symbolic computation system Mathematica, the Adomian decomposition method developed for fractional differential equations is directly extended to derive explicit and numerical solutions of space-fractional reaction-diffusion equations. The fractional derivative is described in the Caputo sense. Finally, since fractional reaction-diffusion equations have recently appeared as models in fields such as cell biology, chemistry, physics, and finance, the results reported here are applied to several numerical examples.

Keywords: fractional partial differential equations, reaction-diffusion equations, Adomian decomposition, biological species

Procedia PDF Downloads 340
584 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System

Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii

Abstract:

Dairy farm experts and farmers today recognize the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production, manage the feeding system, indicate health abnormalities and even help plan healthy calving times and processes. Traditionally, BCS is assessed by animal experts or trained technicians through visual observation, focusing on the pin bones, pin, thurl and hook areas, tail head shape, hook angles and the short and long ribs. Since the traditional technique is manual and subjective, it can lead to differing scores and is not cost-effective. This paper therefore proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks or anatomical points are automatically extracted from the cow image region using image processing techniques; specifically, 23 anatomical points in the regions of the ribs, hook bones, pin bone, thurl and tail head are extracted using block-region-based vertical and horizontal histogram methods. According to animal experts, body condition scores depend mainly on the shape structure of these regions. The second module therefore investigates algebraic and geometric properties of the extracted anatomical points: a second-order polynomial regression is applied to a subset of the anatomical points to produce regression coefficients that form part of the feature vector, and the angles at the thurl, pin, tail head and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are trained using a Markov classification process to assign a BCS to each individual cow, and the assigned scores are revised using multiple regression to produce the final BCS. To confirm the validity of the proposed method, a monitoring video camera was set up at the milking rotary parlor to take top-view images of cows; the proposed method extracts the key anatomical points and corresponding feature vectors for each cow, and the multiple regression calculator and Markov chain classification process then produce the estimated body condition score. Experimental results on 100 dairy cows from a self-collected dataset and a public benchmark dataset are very promising, with an accuracy of 98%.

Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression

Procedia PDF Downloads 130
583 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan, V. Kepuska, I. Kostnaic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using "basis vectors" that are learned from the audio data itself. The basis vectors are shown to provide higher data compression and better signal-to-noise enhancement than the Gabor and gammatone "seed atoms" used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using "envelope samples" of windowed segments of the audio data. The envelope samples are extracted by performing atomic decomposition of the audio data with Gabor or gammatone seed atoms via matching pursuit, which identifies segments of the audio data that are locally coherent with the seed atoms; the envelope samples are then formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms, for speech signals and for early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for both speech and music. Envelope samples are displayed as images by folding the time series into column vectors; this display method is used to compare the output of the SAE with the envelope samples that produced it, and the basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the basis vectors with the highest denoising capability.
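The seed-atom decomposition step can be sketched with a small numpy matching-pursuit routine over a Gabor dictionary; the atom parameters are illustrative and the SAE training stage is not shown.

```python
import numpy as np

def gabor_atom(n, freq, width, center):
    t = np.arange(n)
    atom = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return atom / np.linalg.norm(atom)

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy atomic decomposition: repeatedly pick the atom most correlated
    with the residual and subtract its contribution."""
    residual = signal.copy()
    decomposition = []                       # list of (atom index, coefficient)
    for _ in range(n_iter):
        corr = dictionary @ residual         # correlations with all (unit-norm) atoms
        k = int(np.argmax(np.abs(corr)))
        decomposition.append((k, corr[k]))
        residual = residual - corr[k] * dictionary[k]
    return decomposition, residual

n = 512
dictionary = np.array([gabor_atom(n, f, w, c)
                       for f in (0.01, 0.05, 0.1)
                       for w in (16, 64)
                       for c in range(0, n, 64)])
signal = 3.0 * dictionary[5] + 0.1 * np.random.randn(n)
atoms, residual = matching_pursuit(signal, dictionary, n_iter=5)
print(atoms[0], np.linalg.norm(residual))    # the dominant atom is typically index 5
```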

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 226
582 The Impact of Trait and Mathematical Anxiety on Oscillatory Brain Activity during Lexical and Numerical Error-Recognition Tasks

Authors: Alexander N. Savostyanov, Tatyana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Yulia V. Kovas

Abstract:

The present study compared spectral-power indexes and the cortical topography of brain activity in a sample characterized by different levels of trait and mathematical anxiety. Fifty-two healthy Russian speakers (age 17-32; 30 males) participated in the study. Participants solved an error-recognition task under three conditions: a lexical condition (simple sentences in Russian) and two numerical conditions (simple arithmetic and complicated algebraic problems). Trait and mathematical anxiety were measured using self-report questionnaires, and EEG activity was recorded during task execution. Event-related spectral perturbations (ERSP) were used to analyze spectral-power changes in brain activity, and sLORETA was applied to localize the sources of brain activity. For the lexical condition, sLORETA revealed increased activation in frontal and left temporal cortical areas after task onset, mainly in the alpha/beta frequency ranges. For the arithmetic and algebraic conditions, additional activation in the delta/theta band was observed in the right parietal cortex. The ERSP plots revealed alpha/beta desynchronization within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. The amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three (lexical, arithmetic and algebraic) conditions. The level of trait anxiety was positively correlated with the amplitude of alpha/beta desynchronization, whereas the level of mathematical anxiety was negatively correlated with the amplitudes of theta synchronization and alpha/beta desynchronization. Overall, trait anxiety was related to an increase in brain activation during task execution, whereas mathematical anxiety was associated with increased inhibition-related activity. We gratefully acknowledge support from grant №11.G34.31.0043 from the Government of the Russian Federation.

Keywords: anxiety, EEG, lexical and numerical error-recognition tasks, alpha/beta desynchronization

Procedia PDF Downloads 501
581 Analysis of the Significance of Multimedia Channels Using Sparse PCA and Regularized SVD

Authors: Kourosh Modarresi

Abstract:

The abundance of media channels and devices gives users a variety of options for extracting, discovering, and exploring information in the digital world. Since a typical user often follows a long and complicated path before taking any significant action (such as purchasing goods and services), it is critical to know how much each node (media channel) in the user's path has contributed to the final action. In this work, the significance of each media channel is computed using statistical analysis and machine learning techniques. More specifically, Regularized Singular Value Decomposition and Sparse Principal Component Analysis have been used to compute the significance of each channel toward the final action. The results of this work are a considerable improvement over existing approaches.

Keywords: multimedia attribution, sparse principal component, regularization, singular value decomposition, feature significance, machine learning, linear systems, variable shrinkage

Procedia PDF Downloads 280
580 Variable Tree Structure QR Decomposition-M Algorithm (QRD-M) in Multiple Input Multiple Output-Orthogonal Frequency Division Multiplexing (MIMO-OFDM) Systems

Authors: Jae-Hyun Ro, Jong-Kwang Kim, Chang-Hee Kang, Hyoung-Kyu Song

Abstract:

In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, the QR decomposition-M algorithm (QRD-M) achieves suboptimal error performance. However, the QRD-M still has high complexity due to the many calculations at each layer of its tree structure. To reduce this complexity, the proposed QRD-M modifies the existing tree structure by eliminating unnecessary candidates at almost all layers: candidates whose accumulated squared Euclidean distances are larger than a calculated threshold are discarded. The simulation results show that the proposed QRD-M has the same bit error rate (BER) performance as the conventional QRD-M with lower complexity.
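A minimal numpy sketch of the conventional QRD-M breadth-first search for a small MIMO system with QPSK; the proposed modification would additionally drop, at each layer, candidates whose accumulated squared Euclidean distance exceeds the calculated threshold. The system size and noise level are illustrative.

```python
import numpy as np

def qrd_m_detect(H, y, constellation, M=4):
    """Breadth-first QRD-M detection: keep the best M candidates at each layer."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y                      # rotated receive vector, z = R x + noise
    n_tx = H.shape[1]
    # candidates: (accumulated squared Euclidean distance, partial symbol list)
    candidates = [(0.0, [])]
    for layer in range(n_tx - 1, -1, -1):   # detect from the last layer upward
        expanded = []
        for metric, symbols in candidates:
            for s in constellation:
                trial = [s] + symbols       # symbols for layers layer..n_tx-1
                interference = sum(R[layer, layer + i] * trial[i]
                                   for i in range(len(trial)))
                expanded.append((metric + abs(z[layer] - interference) ** 2, trial))
        expanded.sort(key=lambda c: c[0])
        candidates = expanded[:M]           # survivor selection (threshold pruning
                                            # would additionally drop large metrics here)
    return np.array(candidates[0][1])

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
x = np.random.choice(qpsk, 4)
y = H @ x + 0.05 * (np.random.randn(4) + 1j * np.random.randn(4))
print(np.allclose(qrd_m_detect(H, y, qpsk, M=4), x))   # True at this high SNR (usually)
```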

Keywords: complexity, MIMO-OFDM, QRD-M, squared Euclidean distance

Procedia PDF Downloads 305
579 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work we use the discrete proper orthogonal decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams are widespread in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nanotubes); detailed knowledge of the properties of coupled vibrations and buckling in these structures is therefore of great interest to the research community. Due to the geometric complexity of the overall structure, and of the cross-sections in particular, it is necessary to use computational mechanics to simulate the dynamics numerically. With numerical techniques it is not necessary to oversimplify a model in order to solve the equations of motion, and computational dynamics methods produce databases of controlled resolution in time and space that contain information on the properties of the coupled dynamics. Extracting the system's dynamic properties and the strength of coupling among the various fields of the motion from these databases requires processing techniques. The time proper orthogonal decomposition transform is a powerful tool for processing such databases and is used here to study the coupled dynamics of thin-walled basic structures, which are ideal as a basis for a systematic study of coupled dynamics in structures of complex geometry.
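A minimal snapshot-POD sketch in numpy showing how mode shapes, modal energies and temporal coefficients can be extracted from a simulation database; the synthetic two-mode "database" stands in for finite element results.

```python
import numpy as np

def pod(snapshots):
    """Proper orthogonal decomposition of a snapshot matrix (n_dof x n_snapshots).

    Returns spatial modes, relative modal energies, and temporal coefficients.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    fluctuations = snapshots - mean                       # remove the mean field
    modes, sigma, vt = np.linalg.svd(fluctuations, full_matrices=False)
    energy = sigma**2 / np.sum(sigma**2)                  # relative energy per mode
    coeffs = np.diag(sigma) @ vt                          # temporal coefficients
    return modes, energy, coeffs

# synthetic "database": two coupled fields (bending- and torsion-like) plus noise
x = np.linspace(0, 1, 200)[:, None]
t = np.linspace(0, 10, 400)[None, :]
data = np.sin(np.pi * x) * np.cos(2 * t) + 0.3 * np.sin(2 * np.pi * x) * np.sin(5 * t)
data += 0.01 * np.random.randn(*data.shape)
modes, energy, coeffs = pod(data)
print(np.round(energy[:4], 3))   # the first two modes carry nearly all the energy
```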

Keywords: coupled dynamics, geometric complexity, proper orthogonal decomposition (POD), thin walled beams

Procedia PDF Downloads 392
578 Hydrogen Production Through Thermocatalytic Decomposition of Methane Over Biochar

Authors: Seyed Mohamad Rasool Mirkarimi, David Chiaramonti, Samir Bensaid

Abstract:

Catalytic methane decomposition (CMD) is a one-step process for hydrogen production in which the carbon in the methane molecule is sequestered in the form of stable and higher-value carbon materials. Metallic catalysts and carbon-based catalysts are the two major types of catalysts used for the CMD process. Although carbon-based catalysts have lower activity than metallic ones, they are less expensive and offer high thermal stability and strong resistance to chemical impurities such as sulfur. They may also require less costly separation methods, as some carbon-based catalysts contain no active metal component. Since the regeneration of metallic catalysts requires burning off the carbon on their surfaces, which emits CO/CO2, using carbon-based catalysts can be recommended in some cases because regeneration can be completely avoided and the catalyst can be used directly in other processes. This work focuses on the effect of biochar, as a carbon-based catalyst, on the conversion of methane into hydrogen and carbon. Biochar produced from the pyrolysis of poplar wood and activated biochar are used as catalysts. To observe the impact of the carbon-based catalysts on methane conversion, methane cracking was performed in the absence and presence of the catalysts for gas streams with different methane concentrations. The results show that methane conversion in the absence of a catalyst at 900 °C is negligible, whereas significant conversion is observed in the presence of biochar and activated biochar. Comparing the results for char and activated char shows that the enhancement in the BET surface area of the catalyst obtained through activation leads to more than 10 vol.% methane conversion.

Keywords: hydrogen production, catalytic methane decomposition, biochar, activated biochar, carbon-based catalysts

Procedia PDF Downloads 49
577 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the years for both static and dynamic environments (denoted SWTA and DWTA, respectively). Because the problem must be solved within a relevant computational time, WTA has suffered from limited solution efficiency, and as a result SWTA and DWTA problems have been solved only for limited battlefield situations. In this paper, the general situation under continuous time is considered as the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy. Although the TWTA optimization model works inefficiently for large instances, the decomposed opt-opt algorithm, based on linearization and decomposition, extracts efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long when solved by the optimization model, several greedy-based algorithms are proposed; these show lower performance values than the decomposed opt-opt algorithm but require very short computation times. Hence, this paper proposes an improved method by applying decomposition to TWTA, from which more practical and effective methods can be developed for using TWTA on the battlefield.
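For orientation, the sketch below implements a plain greedy assignment rule of the kind the abstract refers to (not the paper's MIP or decomposed opt-opt algorithm): each launcher is assigned to the target whose expected surviving value it reduces the most, and the time dimension of TWTA is ignored.

```python
import numpy as np

def greedy_wta(values, kill_prob):
    """Greedy static weapon-target assignment.

    values    : target values, shape (n_targets,)
    kill_prob : kill probabilities, shape (n_weapons, n_targets)
    Returns the assigned target index per weapon and the total surviving target value.
    """
    survival = values.astype(float).copy()          # expected surviving value per target
    assignment = []
    for w in range(kill_prob.shape[0]):
        reduction = survival * kill_prob[w]         # marginal value destroyed per target
        t = int(np.argmax(reduction))
        assignment.append(t)
        survival[t] *= (1.0 - kill_prob[w, t])      # update expected survival
    return assignment, survival.sum()

values = np.array([10.0, 6.0, 4.0])
kill_prob = np.array([[0.8, 0.5, 0.3],
                      [0.6, 0.7, 0.4],
                      [0.5, 0.5, 0.9]])
print(greedy_wta(values, kill_prob))
```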

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 310
576 Singular Value Decomposition Based Optimisation of Design Parameters of a Gearbox

Authors: Mehmet Bozca

Abstract:

Singular value decomposition based optimisation of the geometric design parameters of a 5-speed gearbox is studied. During the optimisation, a four-degree-of-freedom torsional vibration model of the pinion gear-wheel gear system is obtained, and the minimum singular value of the transfer matrix is considered as the objective function. The computational cost of the associated singular value problems is quite low, because only the largest and smallest singular values (µmax and µmin) need to be computed, which can be achieved with selective eigenvalue solvers; the other singular values are not needed. The design parameters are optimised under several constraints, including bending stress, contact stress and a constant distance between gear centres. Thus, by optimising geometric parameters of the gearbox such as the module, number of teeth and face width, it is possible to obtain a lightweight gearbox structure. It is concluded that the optimised geometric design parameters satisfy all constraints.
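The objective evaluation can be sketched as follows: assemble the receptance (transfer) matrix of a small torsional model and read off only its largest and smallest singular values. The 4-DOF inertias, stiffnesses and damping below are illustrative placeholders, not the paper's gear-mesh model; for large systems a selective solver such as scipy.sparse.linalg.svds would replace the full SVD.

```python
import numpy as np

def extreme_singular_values(M, C, K, omega):
    """Largest and smallest singular values of the receptance (transfer) matrix
    H(w) = (K - w^2 M + j w C)^-1 at excitation frequency omega (rad/s)."""
    H = np.linalg.inv(K - omega**2 * M + 1j * omega * C)
    sigma = np.linalg.svd(H, compute_uv=False)     # sorted in descending order
    return sigma[0], sigma[-1]                     # sigma_max, sigma_min

# illustrative 4-DOF torsional chain (inertias, shaft/mesh stiffnesses, light damping)
J = np.diag([0.02, 0.015, 0.03, 0.025])                       # kg m^2
k = np.array([5e4, 8e4, 6e4])                                  # N m / rad
K = np.diag([k[0], k[0] + k[1], k[1] + k[2], k[2]])
for i, ki in enumerate(k):
    K[i, i + 1] = K[i + 1, i] = -ki
C = 1e-4 * K                                                   # proportional damping
s_max, s_min = extreme_singular_values(J, C, K, omega=2 * np.pi * 50)
print(s_max, s_min)    # candidate objective values for the design optimisation
```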

Keywords: singular value, optimisation, gearbox, torsional vibration

Procedia PDF Downloads 328