Search results for: linear decomposition methods
18268 Hazardous Effects of Metal Ions on the Thermal Stability of Hydroxylammonium Nitrate
Authors: Shweta Hoyani, Charlie Oommen
Abstract:
HAN-based liquid propellants are perceived as potential substitutes for hydrazine in space propulsion. Storage stability over a long service life in orbit is one of the key concerns for HAN-based monopropellants because of their reactivity with metallic and non-metallic impurities that can be entrained from the surfaces of fuel tanks and tubing. This reactivity directly affects the handling, performance and storability of the liquid propellant. Gaseous products resulting from decomposition of the propellant can lead to deleterious pressure build-up in storage vessels, while the partial loss of an energetic component can change the ignition and combustion behavior and alter the performance of the thruster. In this context, the effect of the most plausible metal contaminants (iron, copper, chromium, nickel, manganese, molybdenum, zinc, titanium and cadmium) on the thermal decomposition mechanism of HAN has been investigated. Studies involving different concentrations of metal ions and HAN at different preheat temperatures have been carried out. The effect of metal ions on the decomposition behavior of HAN has been studied earlier in the context of HAN-based gun propellants; the current investigation, however, pertains to the decomposition mechanism of HAN used as a monopropellant for space propulsion. Decomposition onset temperature, rate of weight loss and heat of reaction were studied using DTA-TGA, and the total pressure rise and rate of pressure rise during decomposition were evaluated using an in-house-built constant-volume batch reactor. In addition, the reaction mechanism and product profile were studied using a TGA-FTIR setup. Iron and copper displayed the greatest reactivity: initial results indicate that both show a sensitizing effect at concentrations as low as 50 ppm with 60% HAN solution at 80°C. By contrast, 50 ppm zinc displays no effect on the thermal decomposition of even 90% HAN solution at 80°C.
Keywords: hydroxylammonium nitrate, monopropellant, reaction mechanism, thermal stability
Procedia PDF Downloads 420
18267 Empirical Mode Decomposition Based Denoising by Customized Thresholding
Authors: Wahiba Mohguen, Raïs El’hadi Bekka
Abstract:
This paper presents a denoising method called EMD-Custom, based on Empirical Mode Decomposition (EMD) and a modified customized thresholding function (Custom). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). All noisy IMFs are then thresholded with the proposed function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared with EMD-based denoising using soft and hard thresholding. Performance was evaluated in terms of SNR in dB and mean square error (MSE), and the results showed the superior performance of the proposed EMD-Custom denoising over the traditional approaches.
Keywords: customized thresholding, ECG signal, EMD, hard thresholding, soft thresholding
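As an illustrative sketch of the thresholding-and-reconstruction step described above (not the paper's exact Custom function, whose form is not given here), the following Python fragment applies a universal soft threshold to each noisy IMF and sums the results; the IMFs themselves are assumed to be precomputed, e.g. with the PyEMD package:

import numpy as np

def soft_threshold(x, thr):
    # Soft thresholding: shrink coefficients toward zero by thr.
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def denoise_from_imfs(imfs, keep_last=1):
    # imfs: array of shape (n_imfs, n_samples), e.g. from PyEMD's EMD.
    # The last `keep_last` IMFs (the low-frequency trend) are kept untouched.
    n = imfs.shape[1]
    out = np.zeros(n)
    for i, imf in enumerate(imfs):
        if i >= len(imfs) - keep_last:
            out += imf  # keep the trend components unthresholded
        else:
            # universal threshold; noise level from the median absolute deviation
            sigma = np.median(np.abs(imf)) / 0.6745
            out += soft_threshold(imf, sigma * np.sqrt(2.0 * np.log(n)))
    return out

The SNR and MSE used in the evaluation can then be computed between the clean reference signal and the reconstruction returned here.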
Procedia PDF Downloads 300
18266 Pyrolysis and Combustion Kinetics of Palm Kernel Shell Using Thermogravimetric Analysis
Authors: Kanit Manatura
Abstract:
The combustion and pyrolysis behavior of palm kernel shell (PKS) was investigated in a thermogravimetric analyzer. A 10 mg sample was heated from 30 °C to 800 °C at four heating rates (5, 10, 15 and 30 °C/min) under nitrogen and dry air flows of 20 ml/min for the pyrolysis and combustion processes, respectively. During pyrolysis, thermal decomposition occurred in three stages, dehydration, hemicellulose-cellulose decomposition and lignin decomposition, each in its own temperature range. The TG/DTG curves showed the degradation behavior and the pyrolysis/combustion characteristics of the PKS samples. The kinetic factors, namely the activation energy and pre-exponential factor, were determined by the Coats-Redfern method and used to simulate the thermal decomposition for comparison with the experimental data. Increasing the heating rate shifts the mass loss towards higher temperatures.
Keywords: combustion, palm kernel shell, pyrolysis, thermogravimetric analyzer
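The Coats-Redfern linearization mentioned above rearranges the rate law into ln[g(α)/T²] = ln(AR/(βEa)) − Ea/(RT), so a straight-line fit of ln[g(α)/T²] against 1/T yields Ea from the slope and A from the intercept. A minimal Python sketch, assuming a first-order model g(α) = −ln(1 − α) (the model actually selected in the study is not stated in the abstract):

import numpy as np

R = 8.314  # gas constant, J/(mol K)

def coats_redfern(T, alpha, beta):
    # T: temperatures in K; alpha: conversions strictly inside (0, 1);
    # beta: heating rate in K/s. First-order model g(alpha) = -ln(1 - alpha).
    g = -np.log(1.0 - alpha)
    y = np.log(g / T**2)                    # ln[g(alpha)/T^2]
    slope, intercept = np.polyfit(1.0 / T, y, 1)
    Ea = -slope * R                         # activation energy, J/mol
    A = beta * Ea / R * np.exp(intercept)   # pre-exponential factor, 1/s
    return Ea, A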
Procedia PDF Downloads 225
18265 Short-Term Load Forecasting Based on Variational Mode Decomposition and Least Square Support Vector Machine
Authors: Jiangyong Liu, Xiangxiang Xu, Bote Luo, Xiaoxue Luo, Jiang Zhu, Lingzhi Yi
Abstract:
To address the non-linearity and high randomness of the original power load sequence, which degrade forecasting accuracy, a short-term load forecasting method is proposed based on a Least Squares Support Vector Machine optimized by an Improved Sparrow Search Algorithm, combined with Variational Mode Decomposition. Variational mode decomposition splits the raw power load data into a series of intrinsic mode function components, which reduces the complexity and instability of the raw data while avoiding modal confounding. The improved sparrow search algorithm solves the difficulty of selecting the learning parameters of the least squares support vector machine. Comparison experiments show that the method effectively improves prediction accuracy.
Keywords: load forecasting, variational mode decomposition, improved sparrow search algorithm, least squares support vector machine
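The hybrid scheme described above amounts to: decompose the load series, fit one regressor per mode, and sum the per-mode forecasts. The sketch below assumes the VMD components are already computed (e.g. with the vmdpy package) and uses scikit-learn's SVR as a stand-in for the least-squares SVM, with hand-set hyperparameters in place of the sparrow-search optimization:

import numpy as np
from sklearn.svm import SVR

def lagged_matrix(series, n_lags):
    # Build a supervised (X, y) set from a 1-D series using n_lags inputs.
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

def forecast_by_modes(modes, n_lags=24):
    # modes: array (n_modes, n_samples), the decomposed load components.
    # One-step-ahead forecast: fit per mode, predict per mode, sum.
    total = 0.0
    for mode in modes:
        X, y = lagged_matrix(mode, n_lags)
        model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
        total += model.predict(mode[-n_lags:].reshape(1, -1))[0]
    return total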
Procedia PDF Downloads 105
18264 Wavelet-Based Classification of Myocardial Ischemia, Arrhythmia, Congestive Heart Failure and Sleep Apnea
Authors: Santanu Chattopadhyay, Gautam Sarkar, Arabinda Das
Abstract:
This paper presents wavelet-based classification of various heart diseases. Electrocardiogram (ECG) signals of different heart patients have been studied, and their statistical nature has been compared with that of electrocardiograms from healthy subjects. Four heart diseases are considered: myocardial ischemia (MI), congestive heart failure (CHF), arrhythmia and sleep apnea. The statistical nature of the electrocardiograms is characterized in each case by the kurtosis of two types of wavelet coefficients, approximate and detail, over nine wavelet decomposition levels. Based on significant differences in kurtosis, a few decomposition levels are selected and then used for classification.
Keywords: arrhythmia, congestive heart failure, discrete wavelet transform, electrocardiogram, myocardial ischemia, sleep apnea
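A sketch of the feature extraction just described, assuming a Daubechies mother wavelet (the abstract does not name the wavelet used): at each of the nine levels the signal is split into approximation and detail coefficients, and the kurtosis of each is recorded.

import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_kurtosis_features(ecg, wavelet="db4", levels=9):
    # Returns 2*levels features: kurtosis of the approximation and detail
    # coefficients at each decomposition level. The ECG record must be long
    # enough to survive nine halvings (more than ~2^9 samples).
    features = []
    signal = np.asarray(ecg, dtype=float)
    for _ in range(levels):
        approx, detail = pywt.dwt(signal, wavelet)
        features.extend([kurtosis(approx), kurtosis(detail)])
        signal = approx  # the next level decomposes the approximation
    return np.array(features)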
Procedia PDF Downloads 131
18263 Automated Ultrasound Carotid Artery Image Segmentation Using Curvelet Threshold Decomposition
Authors: Latha Subbiah, Dhanalakshmi Samiappan
Abstract:
In this paper, we propose denoising common carotid artery (CCA) B-mode ultrasound images by a curvelet-thresholding decomposition approach, followed by automatic segmentation of the intima-media thickness and the adventitia boundary. The decomposition preserves the local geometry of the image and the direction of its gradients; the components are combined into a single vector-valued function, which removes noise patches, and a double threshold is applied to inherently suppress speckle noise. The denoised image is segmented by an active contour without specifying seed points. Combined with level-set theory, the active contours provide subregions with continuous boundaries and deform to match the shapes and motion of objects in the images: a constrained curve or surface evolves over the image so that it is pulled onto the desired image features. Region-based and boundary-based information are integrated to obtain the contour. The method handles the multiplicative speckle noise well in both objective and subjective quality measurements and thus leads to better-segmented results, and the proposed denoising gives better performance metrics than other state-of-the-art denoising algorithms.
Keywords: curvelet, decomposition, level set, ultrasound
Procedia PDF Downloads 339
18262 Parallelization by Domain Decomposition for 1-D Sugarcane Equation with Message Passing Interface
Authors: Ewedafe Simon Uzezi
Abstract:
In this paper we present a method based on domain decomposition (DD) for the parallelization of the 1-D sugarcane equation on a parallel platform, using master-slave parallel paradigms with the Message Passing Interface (MPI). The 1-D sugarcane equation was discretized using an explicit method requiring evaluation of the temporal and spatial distribution of temperature. This platform gives better predictions of the temperature distribution in the sugarcane problem. The work examines parallel overheads, with communication overlapped across parallel computers, and reports numerical results for different block sizes together with scalability. Performance improvement strategies for the DD on various mesh sizes were compared experimentally, and the parallel results show the speedup and efficiency of the parallel algorithm design.
Keywords: sugarcane, parallelization, explicit method, domain decomposition, MPI
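A minimal mpi4py sketch of the domain-decomposition/halo-exchange pattern described above, for an explicit update of a generic 1-D diffusion-type equation (the sugarcane-specific coefficients and source terms are omitted, and the numbers are illustrative):

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local, steps, r = 100, 500, 0.4   # r = alpha*dt/dx^2; explicit stability needs r <= 0.5
u = np.zeros(n_local + 2)           # local subdomain plus two ghost cells
if rank == 0:
    u[0] = 100.0                    # fixed boundary temperature on the left end

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL
for _ in range(steps):
    # exchange ghost cells with the neighbouring subdomains
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])   # explicit update

Run with e.g. mpiexec -n 4 python sugarcane_dd.py; the script name is hypothetical.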
Procedia PDF Downloads 19
18261 Powers of Class p-w A (s, t) Operators Associated with Generalized Aluthge Transformations
Authors: Mohammed Husein Mohammed Rashid
Abstract:
Let T = U|T| be the polar decomposition of a bounded linear operator T on a complex Hilbert space with ker U = ker |T|. T is said to be of class p-w A(s,t) if (|T*|^t |T|^{2s} |T*|^t)^{tp/(s+t)} ≥ |T*|^{2tp} and |T|^{2sp} ≥ (|T|^s |T*|^{2t} |T|^s)^{sp/(s+t)} with 0
Keywords: class p-w A (s, t), normaloid, isoloid, finite, orthogonality
Procedia PDF Downloads 115
18260 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations
Authors: Kuniyoshi Abe
Abstract:
Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l) have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been the method most often used for efficiently solving such linear equations, but its convergence behavior can exhibit a long stagnation phase. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed; it may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction; the resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, and specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants.
Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant
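For readers who want to reproduce this kind of convergence-history comparison, SciPy's bicgstab can be run with a callback that records the residual norm at each iteration (the tolerance keyword is rtol in recent SciPy; older versions call it tol). The test matrix below is a generic sparse 1-D Poisson system, not one of the paper's test problems:

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

residuals = []
x, info = bicgstab(A, b, rtol=1e-10, maxiter=5000,
                   callback=lambda xk: residuals.append(np.linalg.norm(b - A @ xk)))
# info == 0 signals convergence; plotting `residuals` exposes any stagnation phase.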
Procedia PDF Downloads 160
18259 Continuous-Time and Discrete-Time Singular Value Decomposition of an Impulse Response Function
Authors: Rogelio Luck, Yucheng Liu
Abstract:
This paper proposes a continuous-time singular value decomposition (SVD) for the impulse response function, a special kind of Green's function of the form e^(-(t-τ)), in order to find a set of singular functions and singular values such that the convolutions of this function with the singular functions on a specified domain are the solutions to the inhomogeneous differential equations for those singular functions. A numerical example is presented to verify the proposed method. Besides the continuous-time SVD, a discrete-time SVD is also presented for the impulse response function, which is modeled using a Toeplitz matrix in the discrete system. The proposed method has broad applications in signal processing, dynamic system analysis, acoustic analysis, thermal analysis, as well as macroeconomic modeling.
Keywords: singular value decomposition, impulse response function, Green's function, Toeplitz matrix, Hankel matrix
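A sketch of the discrete-time case, in which sampling the causal kernel e^(-(t-τ)) on a uniform grid turns the convolution operator into a lower-triangular Toeplitz matrix whose SVD supplies discrete singular values and functions (grid size and spacing below are arbitrary):

import numpy as np
from scipy.linalg import toeplitz

n, dt = 200, 0.05
t = np.arange(n) * dt
first_col = np.exp(-t) * dt     # kernel samples, scaled by dt as a quadrature weight
first_row = np.zeros(n)         # causal kernel: zeros above the diagonal
first_row[0] = first_col[0]
H = toeplitz(first_col, first_row)

U, s, Vt = np.linalg.svd(H)     # singular values s, discrete singular vectors U, V
# H @ u approximates the convolution of u with exp(-(t - tau)) on the grid.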
Procedia PDF Downloads 154
18258 Bipolar Impulse Noise Removal and Edge Preservation in Color Images and Video Using Improved Kuwahara Filter
Authors: Reji Thankachan, Varsha PS
Abstract:
Both image capturing devices and human visual systems are nonlinear, hence nonlinear filtering methods outperform their linear counterparts in many applications. Linear methods are unable to remove impulsive noise from images while preserving edges and fine details; in addition, linear algorithms cannot remove signal-dependent or multiplicative noise. This paper presents an approach to denoise and smooth images and videos corrupted by bipolar impulse noise using an improved Kuwahara filter. It involves a two-stage algorithm: noise detection followed by filtering. Numerous simulations demonstrate that the proposed method outperforms the existing method by eliminating the painting-like flattening effect along the local feature direction while preserving edges, with improvements in PSNR and MSE.
Keywords: bipolar impulse noise, Kuwahara, PSNR, MSE, PDF
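For reference, a sketch of the classic single-channel Kuwahara filter that the paper improves on (the noise-detection stage and the color/video handling are omitted): each output pixel takes the mean of whichever of the four corner regions around it has the smallest variance.

import numpy as np
from scipy.ndimage import uniform_filter

def kuwahara(img, r=2):
    # r should be even so that quadrant centres fall on the pixel grid.
    img = np.asarray(img, dtype=float)
    size = r + 1                                   # edge length of each quadrant window
    m = uniform_filter(img, size)                  # windowed means
    v = uniform_filter(img * img, size) - m * m    # windowed variances
    h = r // 2
    # quadrant centres sit at offsets (+/-h, +/-h); np.roll brings them onto
    # each pixel (edges wrap here -- pad the image in practice)
    shifts = [(h, h), (h, -h), (-h, h), (-h, -h)]
    ms = np.stack([np.roll(m, s, axis=(0, 1)) for s in shifts])
    vs = np.stack([np.roll(v, s, axis=(0, 1)) for s in shifts])
    best = vs.argmin(axis=0)                       # quadrant with smallest variance
    return np.take_along_axis(ms, best[None], axis=0)[0]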
Procedia PDF Downloads 497
18257 A Simple Adaptive Atomic Decomposition Voice Activity Detector Implemented by Matching Pursuit
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
A simple adaptive voice activity detector (VAD) is implemented using Gabor and gammatone atomic decompositions of speech for high-Gaussian-noise environments. Matching pursuit is used for the atomic decomposition and is shown to achieve optimal speech detection capability at high data compression rates for low signal-to-noise ratios. The most active dictionary elements found by matching pursuit are used for the signal reconstruction, so that the algorithm adapts to the individual speaker's dominant time-frequency characteristics. Speech has a high peak-to-average ratio, which enables matching pursuit's greedy heuristic of selecting the highest inner products to isolate high-energy speech components in high-noise environments. Gabor and gammatone atoms are both investigated, with identical logarithmically spaced center frequencies and similar bandwidths; the algorithm performs equally well for both atom types, with no significant statistical differences. The algorithm achieves 70% accuracy at 0 dB SNR, 90% accuracy at 5 dB SNR and 98% accuracy at 20 dB SNR, using 30 dB SNR as a reference for voice activity.
Keywords: atomic decomposition, Gabor, gammatone, matching pursuit, voice activity detection
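The greedy selection at the core of the method can be written in a few lines. A generic sketch, assuming a precomputed dictionary of unit-norm Gabor or gammatone atoms (building the atoms themselves is omitted):

import numpy as np

def matching_pursuit(signal, dictionary, n_picks=50):
    # dictionary: (n_atoms, n_samples) array of unit-norm atoms.
    # Repeatedly pick the atom with the largest |inner product| with the
    # residual and subtract its projection; return the sparse reconstruction.
    residual = np.asarray(signal, dtype=float).copy()
    recon = np.zeros_like(residual)
    for _ in range(n_picks):
        inner = dictionary @ residual
        k = np.argmax(np.abs(inner))       # highest-energy match
        recon += inner[k] * dictionary[k]
        residual -= inner[k] * dictionary[k]
    return recon

A frame can then be declared speech when the energy of the reconstruction exceeds a noise-adaptive threshold; that is one plausible decision rule consistent with the description above.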
Procedia PDF Downloads 289
18256 Co-Factors of Hypertension and Decomposition of Inequalities in Its Prevalence in India: Evidence from NFHS-4
Authors: Ayantika Biswas
Abstract:
Hypertension remains one of the most important preventable contributors to adult mortality and morbidity and a major public health challenge worldwide. Studying regional and rural-urban differences in its prevalence, and assessing the contributions of different indicators, is essential for determining the drivers of this condition. The 2015-16 National Family Health Survey data have been used for the study. Bivariate analysis, multinomial regression analysis, concentration indices and decomposition of concentration indices assessing the contribution of factors have been undertaken. An overall concentration index of 0.003 has been found for the hypertensive population, indicating concentration among the richer wealth quintiles. Factors such as age 45 to 49 years and 5 to 9 years of schooling are important contributors to the inequality in the occurrence of hypertension. Studies should be conducted to find approaches to prevent or delay the onset of the condition.
Keywords: hypertension, decomposition, inequalities, India
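A concentration index of the standard form can be computed as twice the covariance between the health variable and the fractional wealth rank, divided by the mean of the health variable. A minimal unweighted sketch (the survey weights that NFHS analyses normally require are omitted):

import numpy as np

def concentration_index(health, wealth):
    # Positive values indicate concentration among the better-off.
    order = np.argsort(wealth)
    h = np.asarray(health, dtype=float)[order]
    n = len(h)
    rank = (np.arange(1, n + 1) - 0.5) / n   # fractional rank by wealth
    return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()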
Procedia PDF Downloads 139
18255 Carbon Supported Cu and TiO2 Catalysts Applied for Ozone Decomposition
Authors: Katya Milenova, Penko Nikolov, Irina Stambolova, Plamen Nikolov, Vladimir Blaskov
Abstract:
In this article, a comparison is made between Cu and TiO2 catalysts supported on activated carbon for the ozone decomposition reaction. The activated carbon support of the TiO2/AC sample was prepared by physicochemical pyrolysis, while for the Cu/AC samples the supports are chemically modified carbons. The catalysts were synthesized by the impregnation method, and the samples were annealed in two different regimes: in air and under vacuum. The BET method was used to examine the adsorption efficiency of the samples. All investigated catalysts supported on chemically modified carbons have a higher specific surface area than the TiO2-supported catalysts, varying in the range 590-620 m²/g. The method of synthesis of the precursors influenced the catalytic activity.
Keywords: activated carbon, adsorption, copper, ozone decomposition, TiO2
Procedia PDF Downloads 415
18254 Liquid Chromatography Microfluidics for Detection and Quantification of Urine Albumin Using Linear Regression Method
Authors: Patricia B. Cruz, Catrina Jean G. Valenzuela, Analyn N. Yumang
Abstract:
Nearly a hundred per million of the Filipino population are diagnosed with chronic kidney disease (CKD). The early stage of CKD has no symptoms and can only be discovered once the patient undergoes urinalysis. Over the years, different methods have been developed for the quantification of urinary albumin, such as immunochemical assays, most of which require large machinery with high maintenance and resource costs, and the dipstick test, which is yet to be proven and is still debated as a reliable method for detecting early stages of microalbuminuria. This study applies the liquid chromatography concept in a microfluidic instrument, with a biosensor as the means of separation and detection respectively, and linear regression to quantify human urinary albumin. The main objective was to create a miniature system that quantifies and detects patients' urinary albumin while reducing the volume used per five test samples. Thirty urine samples of unknown albumin concentration were tested using both a VITROS Analyzer and the microfluidic system for comparison. Based on the data from both methods, the actual-versus-predicted regression showed a positive linear relationship with an R² of 0.9995 and a linear equation of y = 1.09x + 0.07, indicating that the predicted and actual values are approximately equal. Furthermore, the microfluidic instrument uses 75% less total volume (sample and reagents combined) than the VITROS Analyzer per five test samples.
Keywords: chronic kidney disease, linear regression, microfluidics, urinary albumin
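The reported calibration (y = 1.09x + 0.07, R² = 0.9995) is an ordinary least-squares line of actual against predicted values; a sketch of how such a fit and its R² are computed (variable names are illustrative):

import numpy as np

def calibrate(reference, measured):
    # Least-squares line y = a*x + b mapping microfluidic readings onto the
    # reference analyzer values, plus the R^2 goodness of fit.
    reference = np.asarray(reference, dtype=float)
    measured = np.asarray(measured, dtype=float)
    a, b = np.polyfit(measured, reference, 1)
    predicted = a * measured + b
    ss_res = np.sum((reference - predicted) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot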
Procedia PDF Downloads 133
18253 Homogenization of a Non-Linear Problem with a Thermal Barrier
Authors: Hassan Samadi, Mustapha El Jarroudi
Abstract:
In this work, we consider the homogenization of a non-linear problem in a periodic medium with two periodic connected media exchanging a heat flux across their common interface. The interfacial exchange coefficient λ is assumed to tend to zero or to infinity at a rate λ = λ(ε) as the size ε of the basic cell tends to zero. Three homogenized problems are determined according to critical values depending on λ and ε. Our method is based on Γ-convergence techniques.
Keywords: variational methods, epiconvergence, homogenization, convergence technique
Procedia PDF Downloads 523
18252 Synthesis of Nickel Oxide Nanoparticles in Presence of Sodium Dodecyl Sulphate
Authors: Fereshteh Chekin, Sepideh Sadeghi
Abstract:
Nickel nanoparticles have attracted much attention because of their applications in catalysis, medical diagnostics and magnetic applications. In this work, we report a simple and low-cost procedure to synthesize nickel oxide nanoparticles (NiO-NPs) using sodium dodecyl sulphate (SDS) and gelatin as stabilizers. The synthesized NiO-NPs were characterized by transmission electron microscopy (TEM), powder X-ray diffraction (XRD), scanning electron microscopy (SEM) and UV-vis spectroscopy. The results show that highly crystalline NiO nanoparticles can be obtained by this simple method. The grain size measured by TEM was 16 nm in the presence of SDS, which agrees well with the XRD data. SDS plays an important role in the formation of the NiO nanoparticles. Moreover, the NiO nanoparticles have been used as a solid-phase catalyst for the decomposition of hydrazine hydrate at room temperature, with the decomposition process monitored by UV-vis analysis. The present study showed that the nanoparticles are not poisoned after repeated use in the decomposition of hydrazine.
Keywords: nickel oxide nanoparticles, sodium dodecyl sulphate, synthesis, stabilizer
Procedia PDF Downloads 482
18251 An Empirical Approach to NO2 Gas Sensing Properties of Carbon Films Fabricated by Arc Discharge Methane Decomposition Technique
Authors: Elnaz Akbari, Zolkafle Buntat
Abstract:
Today, the use of carbon-based materials such as graphene and carbon nanotubes in various applications is being extensively studied. One such application is gas sensing. While analytical investigations of the physical and chemical properties of carbon nanomaterials are the focal point of such studies, experimental measurements of the various physical characteristics of these materials are also deeply needed. In this work, a set of experiments has been conducted using arc-discharge methane decomposition to obtain carbonaceous materials (C-strands) formed between graphite electrodes. The current-voltage (I-V) characteristics of the fabricated C-strands have been investigated in the presence and absence of two different gases, NO2 and CO2. The results reveal that the current passing through the carbon films increases when the gas concentration is increased from 200 to 800 ppm. This phenomenon results from conductance changes and can be employed in sensing applications such as gas sensors.
Keywords: carbonaceous materials, gas sensing, methane arc discharge decomposition, I-V characteristics
Procedia PDF Downloads 215
18250 Bi- and Tri-Metallic Catalysts for Hydrogen Production from Hydrogen Iodide Decomposition
Authors: Sony, Ashok N. Bhaskarwar
Abstract:
Production of hydrogen from a renewable raw material without any co-synthesis of harmful greenhouse gases is a current need for sustainable energy solutions. The sulfur-iodine (SI) thermochemical cycle, using intermediate chemicals, is an efficient process for producing hydrogen at a much lower temperature than that required for the direct splitting of water, and no net byproduct forms in the cycle. Hydrogen iodide (HI) decomposition is a crucial reaction in this cycle, as the product, hydrogen, forms only in this step. It is an endothermic, reversible, and equilibrium-limited reaction; the theoretical equilibrium conversion at 550°C is a meagre 24%. There is a growing interest, therefore, in enhancing the HI conversion to near-equilibrium values at lower reaction temperatures and possibly in improving the rate. The reaction is relatively slow without a catalyst, and hence catalytic decomposition of HI has gained much significance. Bi-metallic Ni-Co, Ni-Mn and Co-Mn and tri-metallic Ni-Co-Mn catalysts over a zirconia support were tested for the HI decomposition reaction. The catalysts were synthesized via a sol-gel process wherein Ni was 3 wt% in all samples, and Co and Mn had equal weight ratios in the Co-Mn catalyst. Powder X-ray diffraction and Brunauer-Emmett-Teller surface area characterizations indicated the polycrystalline nature and well-developed mesoporous structure of all samples. The experiments were performed in a vertical laboratory-scale packed-bed quartz reactor, with HI (55 wt%) fed along with nitrogen at a WHSV of 12.9 hr⁻¹. Blank experiments at 500°C suggested an HI conversion of less than 5%. The activities of the different catalysts were checked at 550°C, and the highest conversion of 23.9% was obtained with the tri-metallic 3Ni-Co-Mn-ZrO₂ catalyst. The decreasing order of catalyst performance is: 3Ni-Co-Mn-ZrO₂ > 3Ni-2Co-ZrO₂ > 3Ni-2Mn-ZrO₂ > 2.5Co-2.5Mn-ZrO₂. The tri-metallic catalyst remained active for 360 min at 550°C without any observable drop in activity or stability. Among the explored compositions, the tri-metallic catalyst clearly performs better for HI conversion than the bi-metallic ones. Owing to their low cost and ease of preparation, these tri-metallic catalysts could be used for large-scale hydrogen production.
Keywords: sulfur-iodine cycle, hydrogen production, hydrogen iodide decomposition, bi- and tri-metallic catalysts
Procedia PDF Downloads 186
18249 Gender Gap in Returns to Social Entrepreneurship
Authors: Saul Estrin, Ute Stephan, Suncica Vujic
Abstract:
Background and research question: Gender differences in pay are present at all organisational levels, including the very top. One possible way for women to circumvent organizational norms and discrimination is to engage in entrepreneurship because, as CEOs of their own organizations, entrepreneurs largely determine their own pay. While commercial entrepreneurship plays an important role in job creation and economic growth, social entrepreneurship has come to prominence because of its promise of addressing societal challenges such as poverty, social exclusion, or environmental degradation through market-based rather than state-sponsored activities. This raises the research question of whether social entrepreneurship might be a form of entrepreneurship in which the pay of men and women is the same, or at least more similar; that is to say, there is little or no gender pay gap. If the gender gap in pay persists at the top of social enterprises, what factors might explain these differences? Methodology: The Oaxaca-Blinder decomposition (OBD) is the standard approach to decomposing the gender pay gap based on the linear regression model. The OBD divides the gender pay gap into an 'explained' part due to differences in labour market characteristics (education, work experience, tenure, etc.) and an 'unexplained' part due to differences in the returns to those characteristics; the latter part is often interpreted as 'discrimination'. There are two issues with this approach. (i) In many countries there is notable convergence in labour market characteristics across genders; hence the OBD method is no longer revealing, since the largest portion of the gap remains 'unexplained'. (ii) Adding covariates to a base model sequentially, either to test a particular coefficient's 'robustness' or to account for the 'effects' on this coefficient of adding covariates, can be problematic due to sequence-sensitivity when the added covariates are correlated. Gelbach's decomposition (GD) addresses the latter by using the omitted-variables bias formula, which constructs a conditional decomposition that accounts for sequence-sensitivity when added covariates are correlated. We use GD to decompose the gender differences in pay (annual and hourly salary), size of the organisation (revenues), effort (weekly hours of work), and sources of finance (fees and sales, grants and donations, microfinance and loans, and investors' capital) between men and women leading social enterprises. Database: Our empirical work is made possible by a unique dataset collected using respondent-driven sampling (RDS) methods, addressing the problem that there is as yet no information on the underlying population of social entrepreneurs. The countries we focus on are the United Kingdom, Spain, Romania and Hungary. Findings and recommendations: We confirm the existence of a gender pay gap between men and women leading social enterprises. This gap can be explained by differences in the accumulation of human capital, psychological and social factors, as well as cross-country differences. The results contribute to a more rounded perspective, highlighting that although social entrepreneurship may be a highly satisfying occupation, it also perpetuates gender pay inequalities.
Keywords: Gelbach's decomposition, gender gap, returns to social entrepreneurship, values and preferences
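For concreteness, the two-fold OBD baseline that Gelbach's decomposition refines can be sketched as follows (male coefficients as the reference structure; the GD step itself, which applies the omitted-variables bias formula, is not shown):

import numpy as np

def oaxaca_blinder(X_m, y_m, X_f, y_f):
    # Two-fold decomposition of the mean outcome gap (e.g. log pay).
    # X_* must include a constant column. Returns (explained, unexplained):
    # the part due to different average characteristics and the part due to
    # different returns to those characteristics.
    beta_m, *_ = np.linalg.lstsq(X_m, y_m, rcond=None)
    beta_f, *_ = np.linalg.lstsq(X_f, y_f, rcond=None)
    xbar_m, xbar_f = X_m.mean(axis=0), X_f.mean(axis=0)
    explained = (xbar_m - xbar_f) @ beta_m
    unexplained = xbar_f @ (beta_m - beta_f)
    return explained, unexplained   # their sum equals y_m.mean() - y_f.mean()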
Procedia PDF Downloads 242
18248 Iterative Solver for Solving Large-Scale Frictional Contact Problems
Authors: Thierno Diop, Michel Fortin, Jean Deteix
Abstract:
Since the precise formulation of the elastic part is irrelevant to the description of the algorithm, we consider a generic case; in practice, however, we have to deal with a non-linear material (for instance a Mooney-Rivlin model). We are interested in solving a finite element approximation of the problem, leading to large-scale non-linear discrete problems and, after linearization, to large linear systems and ultimately to calculations requiring iterative methods. This also implies that the penalty method, and therefore the augmented Lagrangian method, are to be avoided because of their negative effect on the condition number of the underlying discrete systems and thus on the convergence of iterative methods. This is a break from the mainstream of contact methods, in which the augmented Lagrangian is the principal tool. We first present the problem and its discretization; this leads us to describe a general solution algorithm relying on a preconditioner for saddle-point problems, which we describe in some detail as it is not entirely standard. We propose an iterative approach for solving three-dimensional frictional contact problems between elastic bodies, including contact with a rigid body, contact between two or more bodies, and self-contact.
Keywords: frictional contact, three-dimensional, large-scale, iterative method
Procedia PDF Downloads 209
18247 Effect of Resveratrol and Ascorbic Acid on the Stability of Alfa-Tocopherol in Whey Protein Isolate Stabilized O/W Emulsions
Authors: Lei Wang, Yingzhou Ni, Amr M. Bakry, Hao Cheng, Li Liang
Abstract:
Food proteins have been widely used as carrier materials because of their multiple functional properties. In this study, alfa-tocopherol was encapsulated in the oil phase of an oil-in-water emulsion stabilized with whey protein isolate (WPI). The influence of WPI concentration and of resveratrol or ascorbic acid on the decomposition of alfa-tocopherol in the emulsion during storage is discussed. Decomposition decreased as the WPI concentration increased. Decomposition was delayed at ascorbic acid/WPI molar ratios lower than 5 but was promoted at higher ratios. Resveratrol partitioned into the oil-water interface by binding to WPI, and its cis-isomer is believed to have contributed most of the protective effect of this polyphenol. These results suggest the possibility of using the emulsifying and ligand-binding properties of WPI to produce carriers for the simultaneous encapsulation of alfa-tocopherol and resveratrol in a single emulsion system.
Keywords: stability, alfa-tocopherol, resveratrol, whey protein isolate
Procedia PDF Downloads 526
18246 The Wage Differential between Migrant and Native Workers in Australia: Decomposition Approach
Authors: Sabrina Tabassum
Abstract:
Using data from the Australian Census of Population and Housing for 2001, 2006, 2011, and 2016, this paper shows the existence of wage differences between natives and immigrants in Australia. Addressing the heterogeneous nature of immigrants, the study groups them into three broad categories: migrants from English-speaking countries, migrants from India, and migrants from China. Migrants from English-speaking countries and from India earn more than natives per week, whereas migrants from China earn far less. An Oaxaca decomposition suggests that a major part of this differential is unexplained. Using the concept of occupational segregation and a Brown decomposition, the study indicates that migrants from India and China would, given their individual characteristics, have earned more than natives if they had the same occupational distribution as natives. For immigrants from China and India, within-occupation wage differences are more prominent than inter-occupation wage differences.
Keywords: Australia, labour, migration, wage
Procedia PDF Downloads 122
18245 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics
Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere
Abstract:
Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibiting strong fluctuations at all time scales and requiring a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique forming part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals and consists in decomposing a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap. To overcome this problem, J. Gilles proposed an alternative entitled 'Empirical Wavelet Transform' (EWT), which consists in building a bank of filters from a segmentation of the Fourier spectrum of the original signal. The method is based on the idea used in the construction of both Littlewood-Paley and Meyer wavelets: the heart of the method lies in segmenting the Fourier spectrum based on detected local maxima in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore overcomes the mode-mixing problem. On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not associate the detected frequencies with a specific mode of variability, as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on coupling the EMD and EWT techniques by using the spectral content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, then the EAWD technique is presented. A comparison of the results obtained by EMD, EWT and EAWD on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series arising from complex systems in atmospheric sciences.
Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet
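The EWT segmentation step that EAWD optimizes can be sketched as follows: locate the strongest local maxima of the Fourier magnitude spectrum and place band boundaries midway between consecutive maxima (the Littlewood-Paley/Meyer filter construction on those bands is omitted, and the midpoint rule is one common choice rather than the paper's exact rule):

import numpy as np
from scipy.signal import find_peaks

def ewt_boundaries(signal, n_bands):
    # Returns band boundaries as normalized frequencies in [0, pi].
    spectrum = np.abs(np.fft.rfft(signal))
    peaks, props = find_peaks(spectrum, height=0)
    # keep the n_bands largest local maxima, in frequency order
    top = np.sort(peaks[np.argsort(props["peak_heights"])[-n_bands:]])
    bounds = (top[:-1] + top[1:]) / 2.0       # midpoints between maxima
    return bounds * np.pi / (len(spectrum) - 1)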
Procedia PDF Downloads 136
18244 Effects of Small Impoundments on Leaf Litter Decomposition and Methane Derived Carbon in the Benthic Foodweb in Streams
Authors: John Gichimu Mbaka, Jan Helmrich Martin von Baumbach, Celia Somlai, Denis Köpfer, Andreas Maeck, Andreas Lorke, Ralf Schäfer
Abstract:
Leaf litter decomposition is an important process providing energy to biotic communities, and methane (CH4) has been identified as an important alternative source of carbon and energy in some freshwater food webs. Flow regulation and dams can strongly alter freshwater ecosystems, but little is known about the effect of small impoundments on leaf litter decomposition and methane-derived carbon in streams. In this study, we tested the effect of small water-storage impoundments on leaf litter decomposition rates and methane-derived carbon. Leaf litter decomposition rates were assessed by comparing treatment sites located close to nine impoundments (Rheinland-Pfalz, Germany) with reference sites located far from the impoundments. CH4 concentrations were measured in eleven impoundments and correlated with the δ13C values of two groups of chironomid larvae (Chironomini and Tanypodinae). Leaf litter breakdown rates were significantly lower at sites located immediately above the impoundments, associated especially with a reduction in the abundance of shredders. Chironomini larvae had lower mean δ13C values (−29.2 to −25.5‰) than Tanypodinae larvae (−26.9 to −25.3‰). No significant relationships were established between CH4 concentrations and the δ13C values of chironomids (p > 0.05). Mean δ13C values of chironomid larvae (mean: −26.8‰, range: −29.2‰ to −25.3‰) were similar to those of sedimentary organic matter (SOM) (mean: −28.4‰, range: −29.3‰ to −27.1‰) and tree leaf litter (mean: −29.8‰, range: −30.5‰ to −29.1‰). In conclusion, this study demonstrates that small impoundments may have a negative effect on leaf litter decomposition in forest streams and that CH4 has limited influence on the benthic food web in stream impoundments.
Keywords: river functioning, chironomids, alder tree, stable isotopes, methane oxidation, shredder
Procedia PDF Downloads 731
18243 Video Shot Detection and Key Frame Extraction Using Faber-Shauder DWT and SVD
Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi
Abstract:
Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by dominant blocks located on the contours and the textures near them. When a video frame changes noticeably, its dominant blocks change, and a key frame can be extracted. The dominant blocks of every frame are computed, feature vectors are extracted from the dominant-block image of each frame, and the vectors are arranged in a feature matrix. Singular value decomposition is used to calculate the ranks of sliding windows of this matrix. Finally, the computed ranks are traced and the key frames of the video are extracted. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
Keywords: FSDWT, key frame extraction, shot detection, singular value decomposition
Procedia PDF Downloads 395
18242 Functional Decomposition Based Effort Estimation Model for Software-Intensive Systems
Authors: Nermin Sökmen
Abstract:
An effort estimation model is needed for software-intensive projects that consist of hardware, embedded software or some combination of the two, as well as high-level software solutions. This paper first focuses on functional decomposition techniques to measure the functional complexity of a computer system and investigates its impact on system development effort. It then examines the effects of technical difficulty and design team capability in order to construct the best effort estimation model. Using traditional regression analysis, the study develops a system development effort estimation model that takes functional complexity, technical difficulty and design team capability as input parameters. Finally, the assumptions of the model are tested.
Keywords: functional complexity, functional decomposition, development effort, technical difficulty, design team capability, regression analysis
Procedia PDF Downloads 291
18241 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis
Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera
Abstract:
Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry provides detailed information in areas where conventional sounding data are lacking and conventional surveys are inaccessible. Two empirical approaches, a log-linear bathymetric inversion model and a non-linear bathymetric inversion model, are applied to derive bathymetry from high-resolution multispectral satellite imagery. This study compares the two approaches by means of geographical error analysis for the site Kankesanturai using WorldView-2 satellite imagery. The parameters of the non-linear inversion model were calibrated using the Levenberg-Marquardt method, and multiple linear regression was applied to calibrate the log-linear inversion model; Single Beam Echo Sounding (SBES) data from the study area were used as reference points for calibrating both models. Residuals were calculated as the difference between the derived depth values and the validation echo-sounder bathymetry data, and the geographical distribution of model residuals was mapped. The spatial autocorrelation of the residuals was calculated, and the performance of the two bathymetric models was compared through their geographic errors. A spatial error model was constructed from the initial bathymetry estimates and the estimates of autocorrelation; it generates more reliable estimates of bathymetry by quantifying the autocorrelation of model error and incorporating this into an improved regression model. The log-linear model (R² = 0.846) performs better than the non-linear model (R² = 0.692), and the spatial error models improved the bathymetric estimates derived from the log-linear and non-linear models to R² = 0.854 and R² = 0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges; the magnitude of the prediction error increases with depth for both inversion models. The overall RMSE values for the log-linear and non-linear inversion models were ±1.532 m and ±2.089 m, respectively.
Keywords: log-linear model, multispectral, residuals, spatial error model
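The log-linear inversion has the generic form z = a0 + Σᵢ aᵢ ln(Bᵢ), with the coefficients fitted by least squares against the SBES soundings; a sketch follows (the exact band combination and any radiance offsets used in the study are not reproduced here):

import numpy as np

def fit_log_linear(bands, depths):
    # bands: list of 1-D arrays of positive band radiances at the sounding
    # points; depths: matching SBES depths. Returns [a0, a1, ..., ak].
    X = np.column_stack([np.ones(len(depths))] + [np.log(b) for b in bands])
    coef, *_ = np.linalg.lstsq(X, depths, rcond=None)
    return coef

def rmse(pred, obs):
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2))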
Procedia PDF Downloads 295
18240 Second Order Analysis of Frames Using Modified Newmark Method
Authors: Seyed Amin Vakili, Sahar Sadat Vakili, Seyed Ehsan Vakili, Nader Abdoli Yazdi
Abstract:
The main purpose of this paper is to present the Modified Newmark Method as a method of non-linear frame analysis that considers the effect of the axial load (second-order analysis). The discussion is restricted to plane frameworks with a constant cross-section for each element, and it is assumed that the frames are prevented from out-of-plane deflection. This part of the investigation is performed to generalize the established method to assemblage structures such as frameworks. As explained, the governing differential equations are non-linear and cannot be formulated easily, owing to the unknown axial loads of the struts in the frame. Under the assumption of constant axial load, the governing equations become linear in most methods. Since the modeling and solution of the non-linear form of the governing equations are cumbersome, the linear form of the equations is used in the established method. However, because the method can reconsider minor parameters omitted from the modeling during the solution procedure, the axial load in the elements can be computed at each iteration stage and applied in the next stage. The ability of the method to provide an accurate approach to the solution of non-linear equations is therefore demonstrated again in this paper.
Keywords: nonlinear, stability, buckling, modified newmark method
Procedia PDF Downloads 424
18239 Fuzzy Linear Programming Approach for Determining the Production Amounts in Food Industry
Abstract:
In recent years, rapid and correct decision making has become crucial for both people and enterprises. However, uncertainty makes decision making difficult, and fuzzy logic is used for coping with this situation. Thus, fuzzy linear programming models are developed in order to handle uncertainty in the objective function and the constraints. In this study, a problem of a factory in the food industry is investigated, the required data are obtained, and the problem is formulated as a fuzzy linear programming model. The model is solved using the Zimmermann approach, one of the standard approaches to fuzzy linear programming. The solution gives the production amount for each product type that maximizes profit.
Keywords: food industry, fuzzy linear programming, fuzzy logic, linear programming
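In Zimmermann's approach the fuzzy LP is turned into a crisp one by introducing an overall satisfaction level λ and maximizing it subject to linear membership constraints on the profit aspiration and on the fuzzy resource limits. A sketch with illustrative numbers (not the factory's data):

import numpy as np
from scipy.optimize import linprog

c = np.array([40.0, 30.0])     # unit profits of two product types
A = np.array([[2.0, 1.0],      # resource usage per unit
              [1.0, 3.0]])
b = np.array([100.0, 90.0])    # fuzzy resource availabilities
p = np.array([10.0, 15.0])     # admissible tolerances on b
z0, z1 = 1500.0, 1900.0        # profit aspiration interval

# variables: x1, x2, lam; maximize lam (linprog minimizes, so negate it)
cost = np.array([0.0, 0.0, -1.0])
A_ub = np.vstack([
    np.hstack([A, p.reshape(-1, 1)]),   # A x + lam*p <= b + p
    np.hstack([-c, [z1 - z0]]),         # c x - lam*(z1 - z0) >= z0
])
b_ub = np.hstack([b + p, [-z0]])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])
x1, x2, lam = res.x   # production amounts and achieved satisfaction level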
Procedia PDF Downloads 649