Search results for: factorial decomposition
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 921

801 Annular Hyperbolic Profile Fins with Variable Thermal Conductivity Using Laplace Adomian Transform and Double Decomposition Methods

Authors: Yinwei Lin, Cha'o-Kuang Chen

Abstract:

In this article, the Laplace Adomian transform method (LADM) and the double decomposition method (DDM) are used to solve annular hyperbolic-profile fins with variable thermal conductivity. When the thermal conductivity parameter ε is relatively large, the numerical solution obtained with DDM becomes inaccurate. Moreover, when more than seven terms of the DDM series are retained, the DDM solution becomes very complicated to evaluate. The present method, in contrast, remains easy to calculate beyond seven terms and yields more precise numerical solutions. For relatively large ε, LADM therefore also has better accuracy than DDM.

Keywords: fins, thermal conductivity, Laplace transform, Adomian, nonlinear

Procedia PDF Downloads 311
800 Urban-Rural Inequality in Mexico after NAFTA: A Quantile Regression Analysis

Authors: Rene Valdiviezo-Issa

Abstract:

In this paper, we use Mexico’s Households Income and Expenditures (ENIGH) survey to explain the behaviour of the urban-rural expenditure gap since Mexico joined the North American Free Trade Agreement (NAFTA) in 1994, comparing it with the latest available survey, which took place in 2014. We use real trimestral (quarterly) expenditure per capita (RTEPC) as the measure of welfare. We use quantile regressions and a quantile regression decomposition to describe the gap between the urban and rural distributions of log RTEPC. We find that the decrease in the difference between the urban and rural distributions of log RTEPC, i.e. in inequality, is driven by a deterioration of the urban areas in very specific characteristics rather than by an improvement of the rural areas. The decomposition shows that the gap is primarily brought about by differences in returns to covariates between the urban and rural areas.
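The covariates-vs-returns logic of such a decomposition can be sketched at the mean with an Oaxaca-Blinder-style split (the paper works at quantiles; this simplified numpy sketch, with made-up data, only illustrates how a gap splits into an endowment part and a returns part):

```python
import numpy as np

def ols(X, y):
    # Ordinary least squares coefficients via least squares.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mean_decomposition(Xu, yu, Xr, yr):
    """Decompose the urban-rural mean outcome gap into a part explained
    by covariates (endowments) and a part due to differences in returns
    (coefficients), using the rural coefficients as the reference."""
    bu, br = ols(Xu, yu), ols(Xr, yr)
    gap = yu.mean() - yr.mean()
    covariates = (Xu.mean(axis=0) - Xr.mean(axis=0)) @ br  # endowment effect
    returns = Xu.mean(axis=0) @ (bu - br)                  # coefficient effect
    return gap, covariates, returns
```

With an intercept column in both design matrices, the two parts sum exactly to the raw gap.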

Keywords: quantile regression, urban-rural inequality, inequality in Mexico, income decomposition

Procedia PDF Downloads 261
799 Characterization of Enhanced Thermostable Polyhydroxyalkanoates

Authors: Ahmad Idi

Abstract:

The biosynthesis and properties of polyhydroxyalkanoate (PHA) are determined by the bacterial strain and the culture condition. Hence, this study elucidates the structure and properties of PHA produced by a newly isolated strain of photosynthetic bacterium, Rhodobacter sphaeroides ADZ101, grown under an optimized culture condition. The properties of the accumulated PHA were determined via FTIR, NMR, TGA, and GC-MS analyses. The results showed that acetate and ammonium chloride gave the highest PHA accumulation at a ratio of 32.5 mM at neutral pH. The structural analyses showed that the polymer comprises both short- and medium-chain-length monomers ranging from C5 to C18 (C5, C13, C14, and C18), as well as novel PHA monomers. Thermal analysis revealed maximum decomposition temperatures of 395°C and 454°C, indicating two major decomposition reactions. Thus, this bacterial strain, the optimized culture condition, and the abundance of novel monomers enhanced the thermostability of the accumulated PHA.

Keywords: bioplastic, polyhydroxyalkanoates, Rhodobacter sphaeroides ADZ101, thermostable PHA

Procedia PDF Downloads 122
798 Preparation and Characterization of Titania-Coated Glass Fibrous Filters Using Aqueous Peroxotitanium Acid Solution

Authors: Ueda Honoka, Yasuo Hasegawa, Fumihiro Nishimura, Jae-Ho Kim, Susumu Yonezawa

Abstract:

An aqueous peroxotitanium acid solution prepared from TiO₂ fluorinated with F₂ gas was used for the TiO₂ coating of glass fibrous filters in this study. The coating of TiO₂ on the surface of the glass fibers was carried out at 120°C for 15 min to 24 h with the aqueous peroxotitanium acid solution in a hydrothermal synthesis autoclave reactor. The morphology of the TiO₂ coating layer depended strongly on the reaction time, as shown by scanning electron microscopy and energy-dispersive X-ray spectroscopy. As the reaction time increased, the TiO₂ layer spread uniformly over the glass. Moreover, surface fluorination of the glass fibers promoted the formation of the TiO₂ layer on the surface. The photocatalytic activity of the prepared titania-coated glass fibrous filters was investigated by both a methylene blue (MB) degradation test and a gaseous acetaldehyde decomposition test. The MB decomposition ratio with the fluorinated samples was about 95% after 30 min of UV irradiation, much higher than that of the untreated samples (70%). The decomposition ratio of gaseous acetaldehyde with the fluorinated samples (50%) was likewise higher than that of the untreated samples (30%). Consequently, photocatalytic activity is enhanced by surface fluorination.

Keywords: aqueous peroxotitanium acid solution, titania-coated glass fibrous filters, photocatalytic activity, surface fluorination

Procedia PDF Downloads 69
797 Empirical Mode Decomposition Based Denoising by Customized Thresholding

Authors: Wahiba Mohguen, Raïs El’hadi Bekka

Abstract:

This paper presents a denoising method called EMD-Custom that is based on the Empirical Mode Decomposition (EMD) and modified customized thresholding function (Custom) algorithms. EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). All the noisy IMFs are then thresholded by applying the presented thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared to EMD-based signal denoising using soft and hard thresholding. The results showed the superior performance of the proposed EMD-Custom denoising over the traditional approaches. Performance was evaluated in terms of SNR (in dB) and Mean Square Error (MSE).
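A minimal numpy sketch of the thresholding step, assuming the IMFs have already been obtained from an EMD routine; the interpolating rule and the universal threshold below are illustrative stand-ins for the paper's exact customized function:

```python
import numpy as np

def custom_threshold(imf, alpha=0.5):
    """Threshold one IMF with a rule that interpolates between hard
    (alpha=0) and soft (alpha=1) thresholding. The MAD noise estimate
    and universal threshold are common choices, assumed here."""
    sigma = np.median(np.abs(imf)) / 0.6745        # robust noise estimate
    T = sigma * np.sqrt(2.0 * np.log(len(imf)))    # universal threshold
    return np.where(np.abs(imf) > T,
                    np.sign(imf) * (np.abs(imf) - alpha * T),
                    0.0)

def emd_custom_denoise(imfs, alpha=0.5):
    # Threshold every IMF, then sum them to reconstruct the denoised signal.
    return np.sum([custom_threshold(m, alpha) for m in imfs], axis=0)
```

Small-amplitude noise samples fall below T and are zeroed, while large coherent excursions survive with only a partial (alpha-scaled) bias.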

Keywords: customized thresholding, ECG signal, EMD, hard thresholding, soft-thresholding

Procedia PDF Downloads 287
796 Solutions of Fractional Reaction-Diffusion Equations Used to Model the Growth and Spreading of Biological Species

Authors: Kamel Al-Khaled

Abstract:

Reaction-diffusion equations are commonly used in population biology to model the spread of biological species. In this paper, we propose a fractional reaction-diffusion equation in which the classical second-derivative diffusion term is replaced by a fractional derivative of order less than two. Based on the symbolic computation system Mathematica, the Adomian decomposition method, developed for fractional differential equations, is directly extended to derive explicit and numerical solutions of space-fractional reaction-diffusion equations. The fractional derivative is understood in the Caputo sense. Finally, the recent appearance of fractional reaction-diffusion equations as models in fields such as cell biology, chemistry, physics, and finance makes it necessary to apply the results reported here to some numerical examples.
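As an illustration of the Adomian recursion (not the paper's fractional equations), the following sketch computes the series terms for the simple nonlinear test problem u' = u^2, u(0) = 1, whose exact solution is 1/(1-t); for N(u) = u^2 the Adomian polynomials reduce to A_n = sum_k u_k u_{n-k}. Polynomials in t are represented as coefficient lists:

```python
def poly_mul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_int(a):
    # Integral from 0 to t: shifts coefficients and divides by the new power.
    return [0.0] + [c / (k + 1) for k, c in enumerate(a)]

def adomian_terms(n_terms):
    """ADM series terms for u' = u^2, u(0) = 1: u_{n+1} = integral of A_n."""
    u = [[1.0]]                         # u0 = initial condition
    for n in range(n_terms - 1):
        A = [0.0]
        for k in range(n + 1):          # A_n = sum_k u_k * u_{n-k}
            A = poly_add(A, poly_mul(u[k], u[n - k]))
        u.append(poly_int(A))
    return u
```

The recursion reproduces the geometric series 1 + t + t^2 + ..., matching the exact solution term by term.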

Keywords: fractional partial differential equations, reaction-diffusion equations, Adomian decomposition, biological species

Procedia PDF Downloads 351
795 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan, V. Kepuska, I. Kostnaic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to give higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted by identifying, via matching pursuit, audio data segments that are locally coherent with the Gabor or gammatone seed atoms; they are then formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from them, for speech signals as well as early American music recordings. The basis vectors are shown to have higher denoising capability at data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors; this display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the best denoising basis vectors.
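The matching pursuit step used to find locally coherent segments can be sketched for a generic unit-norm dictionary (the Gabor/gammatone atom construction is omitted; the dictionary here is an assumed stand-in):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit: at each step pick the unit-norm atom
    (a column of `dictionary`) most correlated with the residual and
    subtract its projection."""
    residual = signal.astype(float).copy()
    coefs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual     # correlations with every atom
        k = np.argmax(np.abs(corr))        # best-matching atom
        coefs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coefs, residual
```

For an orthonormal dictionary the pursuit recovers an exact sparse combination in as many steps as there are active atoms.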

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 232
794 Examining Customer Acceptance of Chatbots in B2B Customer Service: A Factorial Survey

Authors: Kathrin Endres, Daniela Greven

Abstract:

Although chatbots are a widely known and established communication instrument in B2C customer service, B2B industries still hesitate to implement chatbots due to uncertainty about customer acceptance. While many studies examine chatbot acceptance among B2C consumers, few focus on the B2B sector, where the customer is represented by a buying center consisting of several stakeholders. This study investigates the challenges of chatbot acceptance in B2B industries, compared with the challenges reported in the current B2C literature, by interviewing experts from German chatbot vendors. The results show many similarities between the requirements of B2B customers and B2C consumers. Still, because several stakeholders are involved in the buying center, the characteristics of the chatbot users are more diverse but at the same time harder to discern. Using a factorial survey, this study further examines customer acceptance across varying B2B chatbot designs based on the variables transparency, fault tolerance, complexity of products, value of products, and transfer to live chat service employees. The findings show that all variables influence the propensity to use the chatbot. The results contribute to a better understanding of how firms in B2B industries can design chatbots to advance their customer service and enhance customer satisfaction.
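A factorial survey crosses the levels of all vignette variables; a sketch using the five chatbot variables named above (the concrete levels are assumptions, not taken from the study):

```python
from itertools import product

# Hypothetical two-level codings for the five vignette variables.
factors = {
    "transparency":       ["disclosed as bot", "not disclosed"],
    "fault_tolerance":    ["low", "high"],
    "product_complexity": ["simple", "complex"],
    "product_value":      ["low", "high"],
    "handover_to_agent":  ["yes", "no"],
}

# Full factorial crossing: one vignette per combination of levels.
vignettes = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

With two levels per variable this yields 2^5 = 32 vignettes, which respondents then rate.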

Keywords: chatbots, technology acceptance, B2B customer service, customer satisfaction

Procedia PDF Downloads 93
793 The Psychometric Properties of the Team Climate Inventory Scale: A Validation Study in Jordan’s Collectivist Society

Authors: Suhair Mereish

Abstract:

This research examines the climate for innovation in organisations in order to validate the psychometric properties of the Team Climate Inventory (TCI-14) for Jordan’s collectivist society. The innovativeness of teams may be improved or obstructed by the climate within the team. Further, personal factors are considered an important element that influences the climate for innovation. Accordingly, measuring employees' personality traits using the Big Five Inventory (BFI-44) could provide insights that aid in understanding how to improve innovation. Thus, studying the climate for innovation and its associations with personality traits is valuable, considering the insights it could offer on employee performance, job satisfaction, and well-being. Essentially, the Team Climate Inventory instrument has never been tested in Jordan’s collectivist society. Accordingly, in order to address the existing gap in the literature as a whole and, more specifically, in Jordan, it is essential to investigate its factorial structure and reliability in this particular context. It is also important to explore whether the factorial structure of the Team Climate Inventory in Jordan’s collectivist society demonstrates a similar or different structure to what has been found in individualistic ones. Lastly, examining whether there are associations between the Team Climate Inventory and the personality traits of Jordanian employees is pivotal. The quantitative study was carried out among Jordanian employees of two of the top 20 companies in Jordan, a shipping and logistics company (N=473) and a telecommunications company (N=219). To generalise the findings, this was followed by collecting data from the general population of the country (N=399). The participants completed the Team Climate Inventory. Confirmatory factor analyses and reliability tests were conducted to confirm the factorial structure, validity, and reliability of the inventory. The findings showed that the four-factor structure of the Team Climate Inventory in Jordan is similar to that found in Western cultures. The four-factor structure was confirmed with good fit indices and reliability values. Moreover, for climate for innovation, regression analysis identified agreeableness (positive) and neuroticism (negative) from the Big Five Inventory as significant predictors. This study will contribute to knowledge in several ways: first, by examining the reliability and factorial structure in a Jordanian collectivist context rather than a Western individualistic one; second, by comparing the Team Climate Inventory structure in Jordan with findings from Western individualistic societies; and third, by studying its relationships with personality traits in that country. Furthermore, the findings will assist practitioners in the field of organisational psychology and development in improving the climate for innovation for employees working in organisations in Jordan. It is also expected that the results will provide recommendations to professionals in the business psychology sector regarding the characteristics of employees who hold positive and negative perceptions of the workplace climate.
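Reliability values like those reported (Cronbach's α per subscale) can be computed from an item-score matrix with the standard formula (this is the generic statistic, not code from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

Perfectly consistent items give α = 1, while unrelated items give α near 0.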

Keywords: big five inventory, climate for innovation, collectivism, individualism, Jordan, team climate inventory

Procedia PDF Downloads 40
792 The Effect of Biological Fertilizers on Yield and Yield Components of Maize with Different Levels of Chemical Fertilizers in Normal and Deficit Irrigation Conditions

Authors: Felora Rafiei, Shahram Shoaei

Abstract:

The aim of this study was to evaluate the effect of nitroxin, super nitro plus and biophosphorus on yield and yield components of maize (Zea mays) under different levels of chemical fertilizers in normal and deficit irrigation conditions. The experiment was laid out as a split-plot factorial based on a randomized complete block design with three replications. Main plots comprised two irrigation treatments, at 70 (I1) and 120 (I2) mm evaporation from a class A pan. Sub-plots were the biological and chemical fertilizers arranged as a factorial. The biological fertilizers were nitroxin (Azospirillium lipoferum, Azospirillium brasilense, Azotobacter chroococcum, Azotobacter agilis; 10^8 CFU ml-1) (B1), super nitro plus (Azospirillium spp. + Pseudomonas fluorescens + Bacillus subtilis, 10^8 CFU ml-1, plus biological fungicide) (B2), and biophosphorus (Pseudomonas spp. + Bacillus spp.; 10^7 CFU ml-1) (B3); the chemical fertilizer levels were NPK (C1), N50P50K50 (C2) and N0P0K0 (C3). The results showed that the use of biological fertilizers has positive effects on chemical fertilizer use efficiency and on tolerance to drought stress in maize. Moreover, using biological fertilizers makes it possible to reduce the amount of chemical fertilizers applied.

Keywords: biological fertilizer, chemical fertilizer, yield component, yield, corn

Procedia PDF Downloads 341
791 Data and Spatial Analysis for Economy and Education of 28 E.U. Member-States for 2014

Authors: Alexiou Dimitra, Fragkaki Maria

Abstract:

The objective of the paper is to study geographic, economic and educational variables and their contribution to determining the position of each member state among the EU-28 countries, based on the values of seven variables as given by Eurostat. The Data Analysis methods of Multiple Factorial Correspondence Analysis (MFCA), Principal Component Analysis and Factor Analysis have been used. The cross-tabulation tables of data consist of the values of the seven variables for the 28 countries for 2014. The data are processed using the CHIC Analysis V 1.1 software package, and the results of MFCA and Ascending Hierarchical Classification are given in numerical and graphical form. For comparison, the Factor procedure of the statistical package IBM SPSS 20 has been applied to the same data. The numerical and graphical results, presented with tables and graphs, demonstrate the agreement between the two methods. The most important result is the study of the relations between the 28 countries and the position of each country in groups or clouds, which are formed according to the values of the corresponding variables.

Keywords: multiple factorial correspondence analysis, principal component analysis, factor analysis, EU-28 countries, IBM SPSS 20, CHIC Analysis V 1.1, Eurostat statistics

Procedia PDF Downloads 491
790 Analysis of the Significance of Multimedia Channels Using Sparse PCA and Regularized SVD

Authors: Kourosh Modarresi

Abstract:

The abundance of media channels and devices has given users a variety of options to extract, discover, and explore information in the digital world. Since there is often a long and complicated path that a typical user traverses before taking any (significant) action, such as purchasing goods and services, it is critical to know how much each node (media channel) in the user's path has contributed to the final action. In this work, the significance of each media channel is computed using statistical analysis and machine learning techniques. More specifically, regularized singular value decomposition and sparse principal component analysis have been used to compute the significance of each channel toward the final action. The results of this work are a considerable improvement over present approaches.
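One common reading of "regularized SVD" is a truncated decomposition with shrunken singular values; a hedged numpy sketch (the authors' exact regularization scheme may differ):

```python
import numpy as np

def regularized_svd_fit(M, rank, shrink):
    """Low-rank reconstruction with soft-thresholded singular values:
    keep `rank` components and shrink each singular value by `shrink`,
    floored at zero, which damps noisy low-energy directions."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_reg = np.maximum(s[:rank] - shrink, 0.0)
    return (U[:, :rank] * s_reg) @ Vt[:rank]
```

With `shrink=0` this is an ordinary truncated SVD; increasing `shrink` trades fidelity for variance reduction.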

Keywords: multimedia attribution, sparse principal component, regularization, singular value decomposition, feature significance, machine learning, linear systems, variable shrinkage

Procedia PDF Downloads 285
789 Development of an Experiment for Impedance Measurement of Structured Sandwich Sheet Metals by Using a Full Factorial Multi-Stage Approach

Authors: Florian Vincent Haase, Adrian Dierl, Anna Henke, Ralf Woll, Ennes Sarradj

Abstract:

Structured sheet metals and structured sandwich sheet metals are three-dimensional, lightweight structures with increased stiffness which are used in the automotive industry. The impedance, a measure of the resistance of a structure to vibrations, is determined for plain sheets, structured sheets, and structured sandwich sheets. The aim of this paper is to generate an experimental design that minimizes the cost and duration of the experiments. Design of experiments is used to reduce the large number of single tests required to determine the correlation between the impedance and its influencing factors. Full and fractional factorial designs are applied in order to systematize and plan the experiments. Their major advantages are high-quality results from a relatively small number of trials and the ability to determine the most important influencing factors, including their specific interactions. The developed full factorial experimental design for the study of plain sheets includes three factor levels. In contrast to the study of plain sheets, the impedance analysis of structured sheets and structured sandwich sheets is split into three phases. The first phase consists of preliminary tests which identify relevant factor levels. These factor levels are subsequently employed in main tests, which have the objective of identifying complex relationships between the parameters and the reference variable. Possible post-tests can follow in case additional factor levels or other factors need to be studied. By using full and fractional factorial experimental designs, the required number of tests is reduced by half. In this paper, the benefits of applying design of experiments are presented. Furthermore, a multi-stage approach is shown that takes unrealizable factor combinations into account and minimizes the number of experiments.
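The run-count savings can be illustrated with coded levels: a full two-level design in k factors needs 2^k runs, while a half fraction built with the defining relation I = AB...K needs 2^(k-1) (a standard construction, not necessarily the authors' exact fraction):

```python
from itertools import product
from math import prod

def full_factorial(levels, k):
    """All level combinations for k factors, levels coded e.g. -1/0/+1."""
    return list(product(levels, repeat=k))

def half_fraction(k):
    """2^(k-1) fraction of a two-level design via I = AB...K:
    keep only the runs whose coded levels multiply to +1."""
    return [r for r in full_factorial([-1, 1], k) if prod(r) > 0]
```

For three factors at three levels the full design has 27 runs; the two-level half fraction of four factors needs only 8 of the 16 full-factorial runs.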

Keywords: structured sheet metals, structured sandwich sheet metals, impedance measurement, design of experiment

Procedia PDF Downloads 353
788 Variable Tree Structure QR Decomposition-M Algorithm (QRD-M) in Multiple Input Multiple Output-Orthogonal Frequency Division Multiplexing (MIMO-OFDM) Systems

Authors: Jae-Hyun Ro, Jong-Kwang Kim, Chang-Hee Kang, Hyoung-Kyu Song

Abstract:

In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, the QR decomposition-M algorithm (QRD-M) achieves near-optimal error performance. However, QRD-M still has high complexity due to the many calculations at each layer of the tree structure. To reduce this complexity, the proposed QRD-M modifies the existing tree structure by eliminating unnecessary candidates at almost every layer. Candidates are eliminated by discarding those whose accumulated squared Euclidean distances are larger than a calculated threshold. The simulation results show that the proposed QRD-M has the same bit error rate (BER) performance as the conventional QRD-M with lower complexity.
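A toy QRD-M detector illustrating the layer-wise tree search with M survivors (the paper's threshold-based pruning is replaced here by simply keeping the M best accumulated metrics):

```python
import numpy as np

def qrd_m_detect(H, y, constellation, M=2):
    """QRD-M detection sketch: after H = QR, search the layers from the
    last antenna upward, keeping only the M candidates with the smallest
    accumulated squared Euclidean distance at each layer."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    survivors = [((), 0.0)]              # (partial symbol tuple, metric)
    for layer in range(n - 1, -1, -1):
        candidates = []
        for path, metric in survivors:
            for s in constellation:
                x_part = (s,) + path     # symbols for layers layer..n-1
                interference = sum(R[layer, layer + 1 + j] * x_part[1 + j]
                                   for j in range(len(path)))
                d = abs(z[layer] - R[layer, layer] * s - interference) ** 2
                candidates.append((x_part, metric + d))
        survivors = sorted(candidates, key=lambda c: c[1])[:M]
    return np.array(survivors[0][0])
```

In the noiseless case the transmitted vector has zero accumulated metric and is always retained, so the detector recovers it exactly.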

Keywords: complexity, MIMO-OFDM, QRD-M, squared Euclidean distance

Procedia PDF Downloads 312
787 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work we use the discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of this work will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread use in modern engineering applications, both in large-scale structures (aeronautical structures) and in nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and of the cross-sections in particular, it is necessary to use computational mechanics to simulate the dynamics numerically. With numerical techniques it is not necessary to oversimplify a model in order to solve the equations of motion, and computational dynamics methods produce databases of controlled resolution in time and space that contain information on the properties of the coupled dynamics. Extracting the system's dynamic properties and the strength of coupling among the various fields of motion from these databases requires processing techniques. The time Proper Orthogonal Decomposition transform is a powerful tool for processing such databases; it will be used to study the coupled dynamics of basic thin-walled structures, which are ideal as a basis for a systematic study of coupled dynamics in structures of complex geometry.
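The snapshot form of POD can be sketched with numpy: stack states as columns, subtract the time mean, and read the spatial modes off the SVD (a generic sketch, not the paper's implementation):

```python
import numpy as np

def pod_modes(snapshots, energy=0.99):
    """Snapshot POD: columns of `snapshots` are states at successive times.
    The SVD of the mean-subtracted snapshot matrix yields spatial modes;
    keep enough modes to capture the requested energy fraction (sigma^2)."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], s[:r]
```

Applied to data generated by two spatial modes, the method recovers exactly two energetic modes.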

Keywords: coupled dynamics, geometric complexity, proper orthogonal decomposition (POD), thin walled beams

Procedia PDF Downloads 402
786 Hydrogen Production Through Thermocatalytic Decomposition of Methane Over Biochar

Authors: Seyed Mohamad Rasool Mirkarimi, David Chiaramonti, Samir Bensaid

Abstract:

Catalytic methane decomposition (CMD) is a one-step process for hydrogen production in which the carbon in the methane molecule is sequestered in the form of stable, higher-value carbon materials. Metallic catalysts and carbon-based catalysts are the two major types of catalysts used for the CMD process. Although carbon-based catalysts have lower activity than metallic ones, they are less expensive and offer high thermal stability and strong resistance to chemical impurities such as sulfur. They also require less costly separation methods, since some carbon-based catalysts contain no active metal component. Because the regeneration of metallic catalysts requires burning off the carbon on their surfaces, which emits CO/CO₂, carbon-based catalysts are in some cases preferable: regeneration can be avoided entirely, and the catalyst can be used directly in other processes. This work focuses on the effect of biochar as a carbon-based catalyst for the conversion of methane into hydrogen and carbon. Biochar produced from the pyrolysis of poplar wood, as well as activated biochar, is used as the catalyst. To observe the impact of the carbon-based catalysts on methane conversion, methane cracking was performed in the absence and presence of the catalysts for gas streams with different methane concentrations. The results show that methane conversion in the absence of a catalyst at 900 °C is negligible, whereas significant conversion is observed in the presence of biochar and activated biochar. Comparing the tests with char and activated char shows that the enhancement in the BET surface area of the catalyst obtained through activation leads to more than 10 vol.% methane conversion.

Keywords: hydrogen production, catalytic methane decomposition, biochar, activated biochar, carbon-based catalysts

Procedia PDF Downloads 60
785 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the years, for both static and dynamic environments (denoted SWTA and DWTA, respectively). Because the problem must be solved within a relevant computation time, WTA has suffered from limited solution efficiency, and as a result SWTA and DWTA problems have been solved only for limited battlefield situations. In this paper, the general situation under continuous time is considered as the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy. Although the TWTA optimization model works inefficiently at large problem sizes, the decomposed opt-opt algorithm, based on linearization and decomposition, extracts efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long for the optimization model alone, several greedy-based algorithms are proposed; they attain lower objective values than the decomposed opt-opt algorithm but need very little computation time. Hence, this paper proposes an improved method by applying decomposition to TWTA, from which more practical and effective methods can be developed for using TWTA on the battlefield.
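A minimal greedy heuristic for the static case illustrates the greedy family mentioned above (an assumed textbook-style formulation, not the paper's exact algorithm):

```python
import numpy as np

def greedy_wta(values, kill_prob):
    """Greedy weapon-target assignment: give each weapon, in turn, to the
    target whose expected surviving value it reduces the most.
    values[j]: value of target j; kill_prob[i, j]: P(weapon i kills target j)."""
    n_weapons, _ = kill_prob.shape
    survival = np.ones(len(values))          # current survival probabilities
    assignment = []
    for i in range(n_weapons):
        gain = values * survival * kill_prob[i]   # marginal value destroyed
        j = int(np.argmax(gain))
        assignment.append(j)
        survival[j] *= 1.0 - kill_prob[i, j]
    expected_surviving = float(np.sum(values * survival))
    return assignment, expected_surviving
```

Note how the heuristic can fire twice at a high-value target when its marginal gain still dominates the untouched low-value one.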

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 318
784 Singular Value Decomposition Based Optimisation of Design Parameters of a Gearbox

Authors: Mehmet Bozca

Abstract:

Singular value decomposition based optimisation of the geometric design parameters of a 5-speed gearbox is studied. During the optimisation, a four-degree-of-freedom torsional vibration model of the pinion gear-wheel gear system is obtained, and the minimum singular value of the transfer matrix is considered as the objective function. The computational cost of the associated singular value problems is quite low, because only the largest and smallest singular values (σmax and σmin) need to be computed, which can be done with selective eigenvalue solvers; the other singular values are not needed. The design parameters are optimised under several constraints, including bending stress, contact stress, and a constant distance between gear centres. Thus, by optimising geometric parameters of the gearbox such as the module, number of teeth and face width, it is possible to obtain a lightweight gearbox structure. It is concluded that all the optimised geometric design parameters also satisfy all constraints.

Keywords: singular value decomposition, optimisation, gearbox, torsional vibration

Procedia PDF Downloads 337
783 Numerical Study on Vortex-Driven Pressure Oscillation and Roll Torque Characteristics in a SRM with Two Inhibitors

Authors: Ji-Seok Hong, Hee-Jang Moon, Hong-Gye Sung

Abstract:

The details of the flow structures and the coupling mechanism between vortex shedding and acoustic excitation in a solid rocket motor with two inhibitors have been investigated using 3D Large Eddy Simulation (LES) and Proper Orthogonal Decomposition (POD) analysis. The oscillation frequencies and vortex shedding periods from the two inhibitors compare reasonably well with experimental data and previous numerical results. A total of four different locations of the rear inhibitor have been tested numerically to characterize the coupling between the vortex shedding frequency and the acoustic modes. The major source triggering pressure oscillation in the combustor is resonance with the longitudinal half-wave acoustic mode. It was also observed that counter-rotating vortices in the nozzle flow produce roll torque.

Keywords: large eddy simulation, proper orthogonal decomposition, SRM instability, flow-acoustic coupling

Procedia PDF Downloads 542
782 Testing the Change in Correlation Structure across Markets: High-Dimensional Data

Authors: Malay Bhattacharyya, Saparya Suresh

Abstract:

The correlation structure associated with a portfolio varies across time. Studying the structural breaks in the time-dependent correlation matrix associated with a collection of assets has been a subject of interest for a better understanding of market movements, portfolio selection, etc. The current paper proposes a methodology for testing the change in the time-dependent correlation structure of a portfolio with high-dimensional data, using the techniques of generalized inverse, singular value decomposition, and multivariate distribution theory, which has not been addressed so far. The asymptotic properties of the proposed test are derived, and its performance and validity are assessed on a real data set. The proposed test performs well in detecting changes in the dependence of global markets in the high-dimensional setting.

Keywords: correlation structure, high dimensional data, multivariate distribution theory, singular value decomposition

Procedia PDF Downloads 104
781 Response Surface Methodology to Obtain Disopyramide Phosphate Loaded Controlled Release Ethyl Cellulose Microspheres

Authors: Krutika K. Sawant, Anil Solanki

Abstract:

The present study deals with the preparation and optimization of ethyl cellulose-containing disopyramide phosphate loaded microspheres using the solvent evaporation technique. A central composite design, consisting of a two-level full factorial design superimposed on a star design, was employed for optimizing the preparation of the microspheres. The drug:polymer ratio (X1) and the stirrer speed (X2) were chosen as the independent variables. The cumulative release of the drug at different times (2, 6, 10, 14, and 18 hr) was selected as the dependent variable. An optimum polynomial equation was generated for predicting the response variable at 10 hr. Based on the results of multiple linear regression analysis and F statistics, it was concluded that sustained action can be obtained when X1 and X2 are kept at high levels. The X1X2 interaction was found to be statistically significant. The drug release pattern fitted the Higuchi model well. The data of a selected batch were subjected to an optimization study using a Box-Behnken design, and an optimal formulation was fabricated. Good agreement was observed between the predicted and observed dissolution profiles of the optimal formulation.
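The second-order response-surface model behind such designs can be fitted by least squares; a sketch with hypothetical coded factor settings and coefficients (not the study's data):

```python
import numpy as np

def fit_quadratic_response(X1, X2, y):
    """Least-squares fit of the second-order response-surface model
    y = b0 + b1*X1 + b2*X2 + b12*X1*X2 + b11*X1^2 + b22*X2^2."""
    A = np.column_stack([np.ones_like(X1), X1, X2, X1 * X2, X1**2, X2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X1, X2):
    return (coef[0] + coef[1] * X1 + coef[2] * X2
            + coef[3] * X1 * X2 + coef[4] * X1**2 + coef[5] * X2**2)
```

A central composite design (factorial ±1 points, axial points, and a center point) provides exactly the runs needed to estimate all six coefficients.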

Keywords: disopyramide phosphate, ethyl cellulose, microspheres, controlled release, Box-Behnken design, factorial design

Procedia PDF Downloads 434
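For readers unfamiliar with the design, the central composite layout and the second-order polynomial it supports can be sketched in a few lines. The response values below are synthetic (the "true" coefficients are invented purely to show that the least-squares fit recovers them); nothing here reproduces the paper's data:

```python
import numpy as np

# Coded factor levels of a two-factor central composite design: a 2^2 full
# factorial, star (axial) points at +/- alpha, and a centre point.
alpha = np.sqrt(2.0)
design = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],                 # factorial points
    [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha],   # star points
    [0, 0],                                             # centre point
])
x1, x2 = design[:, 0], design[:, 1]   # drug:polymer ratio, stirrer speed (coded)

# Synthetic response: cumulative % release at 10 hr, generated from known
# second-order coefficients so the fit below can be checked against them.
y = 55 + 6*x1 + 4*x2 + 2.5*x1*x2 - 3*x1**2 - 1.5*x2**2

# Full second-order model: y = b0 + b1 x1 + b2 x2 + b12 x1 x2 + b11 x1^2 + b22 x2^2
M = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
print(np.round(coef, 2))   # recovers [55, 6, 4, 2.5, -3, -1.5]
```

The nine runs are exactly enough structure to estimate all six second-order terms, which is why the central composite design is the standard choice for this kind of optimization.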
780 Teachers’ Protective Factors of Resilience Scale: Factorial Structure, Validity and Reliability Issues

Authors: Athena Daniilidou, Maria Platsidou

Abstract:

Recently developed scales have addressed teachers’ resilience specifically. Although they have contributed to the field, they do not include some of the critical protective factors of teachers’ resilience identified in the literature. To address this limitation, we aimed to design a more comprehensive scale for measuring teachers' resilience which encompasses various personal and environmental protective factors. To this end, two studies were carried out. In Study 1, 407 primary school teachers were tested with the new scale, the Teachers’ Protective Factors of Resilience Scale (TPFRS). Similar scales, such as the Multidimensional Teachers’ Resilience Scale and the Teachers’ Resilience Scale, were used to test the convergent validity, while the Maslach Burnout Inventory and the Teachers’ Sense of Efficacy Scale were used to assess the discriminant validity of the new scale. The factorial structure of the TPFRS was checked with confirmatory factor analysis, and a good fit of the model to the data was found. Next, item response theory analysis using a two-parameter logistic model (2PL) was applied to check the items within each factor. It revealed that 9 items did not fit their corresponding factors well, and they were removed. The final version of the TPFRS includes 29 items, which assess six protective factors of teachers’ resilience: values and beliefs (5 items, α=.88), emotional and behavioral adequacy (6 items, α=.74), physical well-being (3 items, α=.68), relationships within the school environment (6 items, α=.73), relationships outside the school environment (5 items, α=.84), and the legislative framework of education (4 items, α=.83). Results show that it presents satisfactory convergent and discriminant validity. Study 2, in which 964 primary and secondary school teachers were tested, confirmed the factorial structure of the TPFRS as well as its discriminant validity, which was tested with the Schutte Emotional Intelligence Scale-Short Form.
In conclusion, our results showed that the TPFRS is a valid multi-dimensional instrument for assessing teachers' protective factors of resilience, and it can be safely used in future research and interventions in the teaching profession.

Keywords: resilience, protective factors, teachers, item response theory

Procedia PDF Downloads 68
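Two of the tools named above are compact enough to sketch: the 2PL item response function used to screen items, and Cronbach's alpha used to report the factor reliabilities. A hedged Python illustration (the discrimination and difficulty values are invented, not estimates from the TPFRS data):

```python
import numpy as np

def irf_2pl(theta, a, b):
    """Two-parameter logistic (2PL) item response function: probability of
    endorsing an item given latent trait theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def cronbach_alpha(scores):
    """Cronbach's alpha for one factor; scores is (respondents x items)."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

theta = np.linspace(-3, 3, 7)
steep = irf_2pl(theta, a=2.0, b=0.0)   # well-discriminating item: sharp S-curve
flat = irf_2pl(theta, a=0.3, b=0.0)    # poorly discriminating item: nearly flat
print(np.round(steep, 2), np.round(flat, 2))
```

Items whose estimated discrimination stays low (the flat curve) carry little information about the latent protective factor, which is the kind of misfit that led to dropping 9 items here.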
779 Subsurface Structures Related to the Hydrocarbon Migration and Accumulation in the Afghan Tajik Basin, Northern Afghanistan: Insights from Seismic Attribute Analysis

Authors: Samim Khair Mohammad, Takeshi Tsuji, Chanmaly Chhun

Abstract:

The Afghan Tajik (foreland) basin, located in the depression zone between mountain axes, has been under compression and deformation during the collision of India with the Eurasian plate. The southern part of the Afghan Tajik basin, in the northern part of Afghanistan, has not been well studied and explored but is considered to have significant potential for oil and gas resources. The basin's depositional environments (< 8 km) resulted from mixed terrestrial and marine systems, which offer potential prospects in Jurassic (deep) and Tertiary (shallow) petroleum systems. We used 2D regional seismic profiles with a total length of 674.8 km (over an area of 2500 km²) in the southern part of the basin. To characterize the hydrocarbon systems and structures in this study area, we applied advanced seismic attributes such as spectral decomposition (10-60 Hz) based on time-frequency analysis with the continuous wavelet transform. The spectral decomposition results yield a spectral amplitude anomaly (averaging the 20-30 Hz group). Based on this anomaly, the seismic data, and structural interpretation, potential hydrocarbon accumulations were inferred around the main thrust folds in the Tertiary (Paleogene and Neogene) petroleum systems, which appear to be concentrated around the central study area. Furthermore, it seems that hydrocarbons dominantly migrated along the main thrusts and then accumulated around anticline fold systems, which could be sealed by mudstone/carbonate rocks.

Keywords: The Afghan Tajik basin, seismic lines, spectral decomposition, thrust folds, hydrocarbon reservoirs

Procedia PDF Downloads 78
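The attribute used here, spectral decomposition via the continuous wavelet transform, can be sketched compactly: convolve the trace with scaled complex Morlet wavelets and average the coefficient magnitudes over the 20-30 Hz group. The sketch below uses a synthetic trace and assumed parameters (2 ms sampling, centre-frequency parameter w0 = 6), not the survey data:

```python
import numpy as np

def morlet_cwt_amplitude(trace, dt, freqs, w0=6.0):
    """Spectral decomposition of a single trace: magnitude of the complex
    Morlet continuous wavelet transform at each requested frequency."""
    out = np.empty((len(freqs), len(trace)))
    for i, f in enumerate(freqs):
        scale = w0 / (2 * np.pi * f)          # scale whose centre frequency is f
        tau = np.arange(-4 * scale, 4 * scale + dt, dt)
        wavelet = np.exp(1j * w0 * tau / scale) * np.exp(-0.5 * (tau / scale) ** 2)
        wavelet /= np.sqrt(scale)             # keep amplitudes comparable across scales
        out[i] = np.abs(np.convolve(trace, np.conj(wavelet[::-1]), mode="same"))
    return out

dt = 0.002                                    # assumed 2 ms sample interval
t = np.arange(0, 1, dt)
trace = np.sin(2 * np.pi * 25 * t) * np.exp(-((t - 0.5) / 0.1) ** 2)  # 25 Hz burst
freqs = np.arange(10.0, 61.0, 5.0)            # the 10-60 Hz decomposition band
amp = morlet_cwt_amplitude(trace, dt, freqs)
band = amp[(freqs >= 20) & (freqs <= 30)].mean(axis=0)  # 20-30 Hz group average
print(t[band.argmax()])                       # the anomaly peaks at the burst
```

The band-averaged magnitude plays the role of the spectral amplitude anomaly: where it brightens along a horizon is where low-frequency tuning (here, the burst) is inferred.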
778 Video Shot Detection and Key Frame Extraction Using Faber-Shauder DWT and SVD

Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi

Abstract:

Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by the dominant blocks. These dominant blocks are located on the contours and their nearby textures. When the video frames undergo a noticeable change, their dominant blocks change, and we can then extract a key frame. The dominant blocks of every frame are computed, and then feature vectors are extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular value decomposition is used to calculate the ranks of sliding windows of those matrices. Finally, the computed ranks are traced, and we are able to extract the key frames of a video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.

Keywords: FSDWT, key frame extraction, shot detection, singular value decomposition

Procedia PDF Downloads 372
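The sliding-window SVD-rank idea at the core of this method can be sketched directly: stack per-frame feature vectors, take the numerical rank of each window from its singular values, and look for rank jumps. A toy illustration (synthetic rank-1 "shots"; the dominant-block features themselves are not reproduced here):

```python
import numpy as np

def sliding_window_ranks(features, win=5):
    """Numerical rank of each sliding window of consecutive frame-feature
    vectors, from the singular values; a rank jump suggests a shot change."""
    ranks = []
    for i in range(len(features) - win + 1):
        s = np.linalg.svd(features[i:i + win], compute_uv=False)
        tol = s.max() * max(features.shape[1], win) * np.finfo(float).eps
        ranks.append(int((s > tol).sum()))
    return ranks

# Toy video: ten identical frames, then ten identical frames of another shot
rng = np.random.default_rng(1)
shot1 = np.tile(rng.standard_normal(16), (10, 1))   # rank-1 block of features
shot2 = np.tile(rng.standard_normal(16), (10, 1))
features = np.vstack([shot1, shot2])
ranks = sliding_window_ranks(features)
print(ranks)   # rank 1 inside each shot, rank 2 where the window straddles the cut
```

Tracing the rank sequence localizes the shot boundary to the frames where the window straddles both shots, and the frame nearest the jump is a natural key-frame candidate.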
777 Catalytic Degradation of Tetracycline in Aqueous Solution by Magnetic Ore Pyrite Nanoparticles

Authors: Allah Bakhsh Javid, Ali Mashayekh-Salehi, Fatemeh Davardoost

Abstract:

This study presents the preparation, characterization, and catalytic activity of a novel natural mineral-based catalyst for the destructive adsorption of tetracycline (TTC), an emerging water contaminant. The degradation potential of the raw and calcined catalyst was evaluated under different experimental conditions, varying pH, catalyst dose, reaction time, and pollutant concentration. The calcined ore attained greater catalytic potential than the raw ore in the degradation of tetracycline: around 69% versus 3% at a reaction time of 30 min and a TTC aqueous solution of 50 mg/L, respectively. Complete removal of TTC could be obtained using 2 g/L of calcined nanoparticles at a reaction time of 60 min. The removal of TTC increased with increasing solution temperature. Accordingly, considering its abundance in nature together with its very high catalytic potential, calcined pyrite is a promising and reliable catalytic material for the destructive decomposition and mineralization of pharmaceutical compounds such as TTC in water and wastewater.

Keywords: catalytic degradation, tetracycline, pyrite, emerging pollutants

Procedia PDF Downloads 161
776 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus-scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval

Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle

Abstract:

Information regarding the post-mortem interval (PMI) is vital in criminal investigations to establish a time frame when reconstructing events. PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as ‘grave-wax’, is formed when post-mortem adipose tissue is converted into a solid material heavily comprised of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation is able to slow down the decomposition process. Therefore, analysing the changes in the patterns of fatty acids during early decomposition may make it possible to estimate the period of burial, and hence the PMI. The current study investigated the fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition are associated with the duration of burial and hence may be used to estimate the PMI. Adipose tissue from the abdominal region of domestic pigs (Sus scrofa) was used to model the human decomposition process. A 17 × 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) of the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9), and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters using a gas chromatography system. Levels of the marker fatty acids were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) at time zero exhibited significant fluctuations during the burial period. Levels rose (116 and 60.2 mg/mL, respectively) and then fell from the second week onwards, reaching 19.3 and 18.3 mg/mL, respectively, at week 6. Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively), followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively). A sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to fluctuations in concentration, several new fatty acids appeared in the later weeks, while other fatty acids that were detectable in the time-zero sample were lost. There are several probable opportunities to utilise fatty acid analysis as a basic technique for approximating PMI: the quantification of marker fatty acids and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a viable semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could be an additional tool to the techniques already available for estimating the PMI of a corpse.

Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval

Procedia PDF Downloads 109
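The quantification step ("levels ... quantified from their respective standards") is ordinary external calibration: fit peak area against standard concentration, then invert the fitted line for each sample. A sketch with invented numbers (the calibration areas and concentrations below are hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical external-calibration data for one marker fatty acid (C16:0):
# GC peak areas measured for standards of known concentration (mg/mL).
std_conc = np.array([10.0, 25.0, 50.0, 100.0, 150.0])
std_area = np.array([1.98e5, 5.10e5, 9.90e5, 2.02e6, 2.97e6])

slope, intercept = np.polyfit(std_conc, std_area, 1)  # linear calibration fit

def quantify(area):
    """Concentration (mg/mL) of the marker fatty acid from its GC peak area."""
    return (area - intercept) / slope

print(round(quantify(1.37e6), 1))   # a sample peak area mapped back to mg/mL
```

Repeating this per marker acid per weekly sample yields the concentration time series whose fluctuations the abstract describes.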
775 Atom Probe Study of Early Stage of Precipitation on Binary Al-Li, Al-Cu Alloys and Ternary Al-Li-Cu Alloys

Authors: Muna Khushaim

Abstract:

Aluminum-based alloys play a key role in modern engineering, especially in the aerospace industry. The introduction of solute atoms such as Li and Cu is the main approach to improving the strength of age-hardenable Al alloys via the precipitation hardening phenomenon. Knowledge of the decomposition of the microstructure during the precipitation reaction is particularly important for future technical developments. The objective of this study is to investigate the nano-scale chemical composition of Al-Cu, Al-Li, and Al-Li-Cu alloys during the early stage of the precipitation sequence and to determine whether compositional differences correlate with variations in the observed precipitation kinetics. Comparison of the random binomial frequency distribution with the experimental frequency distribution of concentrations in atom probe tomography data was used to investigate the early stage of decomposition in the different binary and ternary alloys, which underwent different heat treatments. The results show that an Al-1.7 at.% Cu alloy requires a long ageing time of approximately 8 h at 160 °C to allow the diffusion of Cu atoms in the Al matrix. For the Al-8.2 at.% Li alloy, a combination of natural ageing (48 h at room temperature) and short artificial ageing (5 min at 160 °C) increases the number density of the Li clusters and hence the number of precipitated δ' particles. Applying this combination of natural and short artificial ageing to the ternary Al-4 at.% Li-1.7 at.% Cu alloy induces the formation of a Cu-rich phase. Increasing the Li content of the ternary alloy to 8 at.% and the ageing time to 30 min resulted in precipitation processes ending with δ' particles. Thus, the results contribute to the understanding of Al-alloy design.

Keywords: aluminum alloy, atom probe tomography, early stage, decomposition

Procedia PDF Downloads 326
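The statistical test named above, comparing the experimental frequency distribution of compositions against the random binomial one, can be sketched as follows. Atoms are reduced to solute/solvent labels binned into fixed-size blocks; the 100-atom block size, the sample size, and the chi-square cut-off are assumptions of this sketch, not the paper's settings:

```python
import numpy as np
from math import comb

def binomial_expected(n_blocks, block_size, p):
    """Expected block counts under random (binomial) mixing of solute atoms."""
    return np.array([n_blocks * comb(block_size, k) * p**k * (1 - p)**(block_size - k)
                     for k in range(block_size + 1)])

# Synthetic atom probe data: atoms labelled solute (True) or solvent (False),
# binned into blocks of 100 atoms.
rng = np.random.default_rng(0)
p, block = 0.017, 100                  # ~1.7 at.% solute, random solid solution
atoms = rng.random(200 * block) < p    # no clustering: should match binomial
counts = atoms.reshape(-1, block).sum(axis=1)
observed = np.bincount(counts, minlength=block + 1)
expected = binomial_expected(len(counts), block, p)

# Pearson chi-square over bins with non-negligible expectation; a large value
# relative to the chi-square reference would signal early-stage decomposition
mask = expected > 1
chi2 = ((observed[mask] - expected[mask]) ** 2 / expected[mask]).sum()
print(round(chi2, 1))
```

For a truly random solid solution the statistic stays small; clustering (e.g. the Li clusters described above) inflates the tail of the observed histogram and drives the statistic up.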
774 Dynamical Analysis of the Fractional-Order Mathematical Model of Hashimoto’s Thyroiditis

Authors: Neelam Singha

Abstract:

The present work analyzes the system dynamics of Hashimoto’s thyroiditis with the assistance of fractional calculus. Hashimoto’s thyroiditis, or chronic lymphocytic thyroiditis, is an autoimmune disorder in which the immune system attacks the thyroid gland, gradually interrupting normal thyroid operation. Consequently, the feedback control of the system is disrupted by thyroid follicle cell lysis, and the patient experiences clinical conditions such as goiter, hyperactivity, euthyroidism, hyperthyroidism, etc. In this work, we aim to obtain the approximate solution of the posed fractional-order problem describing Hashimoto’s thyroiditis. We employ the Adomian decomposition method to solve the system of fractional-order differential equations, and the solutions obtained shall be useful in providing information about the effect of medical care. The numerical technique is executed in an organized manner to furnish the associated details of the progression of the disease and to visualize it graphically with suitable plots.

Keywords: Adomian decomposition method, fractional derivatives, Hashimoto's thyroiditis, mathematical modeling

Procedia PDF Downloads 191
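The Adomian decomposition method itself is easy to sketch on a classical test problem. For the logistic equation y' = y − y², y(0) = ½ (chosen here because its exact solution 1/(1+e⁻ᵗ) is known; it is not the Hashimoto model from the abstract), the Adomian polynomials of the quadratic nonlinearity reduce to Aₙ = Σᵢ₊ⱼ₌ₙ yᵢyⱼ:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Components y_n are polynomial coefficient arrays in ascending powers of t.
N_TERMS = 8
comps = [np.array([0.5])]                     # y_0 = y(0) = 1/2
for n in range(N_TERMS - 1):
    # Adomian polynomial of the quadratic nonlinearity y^2: A_n = sum_{i+j=n} y_i y_j
    A_n = np.zeros(1)
    for i in range(n + 1):
        A_n = P.polyadd(A_n, P.polymul(comps[i], comps[n - i]))
    integrand = P.polysub(comps[n], A_n)      # right-hand side y - y^2, term n
    comps.append(P.polyint(integrand))        # y_{n+1} = integral_0^t (y_n - A_n) dt

series = np.zeros(1)                          # truncated Adomian series sum y_n
for c in comps:
    series = P.polyadd(series, c)

t = np.linspace(0.0, 1.0, 11)
approx = P.polyval(t, series)
exact = 1.0 / (1.0 + np.exp(-t))              # closed-form logistic solution
print(np.abs(approx - exact).max())           # small truncation error on [0, 1]
```

The same recursion carries over, in principle, to fractional-order systems once the integration step is replaced by the corresponding fractional (Riemann-Liouville) integral operator.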
773 Localization of Pyrolysis and Burning of Ground Forest Fires

Authors: Pavel A. Strizhak, Geniy V. Kuznetsov, Ivan S. Voytkov, Dmitri V. Antonov

Abstract:

This paper presents the results of experiments carried out at a specialized test site to establish macroscopic patterns of the heat and mass transfer processes involved in localizing model combustion sources of ground forest fires using barrier lines, in the form of a wetted layer of material in front of the zone of flame burning and thermal decomposition. The experiments were performed using needles, leaves, twigs, and mixtures thereof. The dimensions of the model combustion source and the ranges of heat release correspond well to the real conditions of ground forest fires. The main attention is paid to a comprehensive analysis of the effect of the dispersion of the water aerosol (concentration and size of droplets) used to form the barrier line. It is shown that effective conditions for the localization and subsequent suppression of flame combustion and thermal decomposition of forest fuel can be achieved by creating a group of barrier lines with different wetting widths and depths of the material. Relative indicators of the effectiveness of single and combined barrier lines were established, taking into account all the main characteristics of the processes of suppressing the burning and thermal decomposition of forest combustible materials. We predicted the necessary and sufficient parameters of the barrier lines (water volume, width and depth of the wetted layer of material, specific irrigation density) for combustion sources of different dimensions, corresponding to real fire-extinguishing practice.

Keywords: forest fire, barrier water lines, pyrolysis front, flame front

Procedia PDF Downloads 110
772 Speech Intelligibility Improvement Using Variable Level Decomposition DWT

Authors: Samba Raju Chiluveru, Manoj Tripathy

Abstract:

Intelligibility is an essential characteristic of a speech signal, reflecting how well the information in the signal can be understood. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves the intelligibility of speech. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The proposed algorithm decides a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility (STOI) measure, and the results obtained are compared with universal discrete wavelet transform (DWT) thresholding and minimum mean square error (MMSE) methods. The experimental results reveal that the proposed scheme outperforms the competing methods.

Keywords: discrete wavelet transform, speech intelligibility, STOI, standard deviation

Procedia PDF Downloads 122
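The per-frame level selection can be sketched with a plain Haar DWT: keep decomposing the approximation while the detail band looks noise-dominant. The stopping rule below (detail variance under half the approximation variance) is an invented proxy for the paper's signal-dominant/noise-dominant criterion, and the signals are synthetic:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def choose_level(frame, max_level=5):
    """Pick a per-frame decomposition level: keep splitting while the detail
    band stays below half the approximation's variance (an assumed proxy
    for the noise-dominance criterion)."""
    approx, level = frame, 0
    while level < max_level and len(approx) >= 2:
        a, d = haar_step(approx)
        if d.var() >= 0.5 * a.var():   # signal-dominant details: stop splitting
            break
        approx, level = a, level + 1
    return level

# A noisy low-frequency (voiced-like) frame earns a deeper decomposition
# than a white-noise frame, which stops immediately.
rng = np.random.default_rng(0)
t = np.arange(256)
voiced = np.sin(2 * np.pi * t / 64) + 0.1 * rng.standard_normal(256)
noise = rng.standard_normal(256)
print(choose_level(voiced), choose_level(noise))
```

Thresholding the detail coefficients only up to the chosen level is what lets the method adapt per frame without an explicit noise estimate.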