Search results for: linear decomposition methods
18192 Ultra-Fast pH-Gradient Ion Exchange Chromatography for the Separation of Monoclonal Antibody Charge Variants
Authors: Robert van Ling, Alexander Schwahn, Shanhua Lin, Ken Cook, Frank Steiner, Rowan Moore, Mauro de Pra
Abstract:
Purpose: Demonstration of fast, high-resolution charge variant analysis for monoclonal antibody (mAb) therapeutics within 5 minutes. Methods: Three commercially available mAbs were used for all experiments. The charge variants of therapeutic mAbs (Bevacizumab, Cetuximab, Infliximab, and Trastuzumab) are analyzed on a strong cation exchange column with a linear pH gradient separation method. The linear gradient from pH 5.6 to pH 10.2 is generated over time by running a linear pump gradient from 100% Thermo Scientific™ CX-1 pH Gradient Buffer A (pH 5.6) to 100% CX-1 pH Gradient Buffer B (pH 10.2), using the Thermo Scientific™ Vanquish™ UHPLC system. Results: The pH gradient method is generally applicable to monoclonal antibody charge variant analysis. In conjunction with state-of-the-art column and UHPLC technology, ultra-fast, high-resolution separations are consistently achieved in under 5 minutes for all mAbs analyzed. Conclusion: The linear pH gradient method is a platform method for mAb charge variant analysis. It can be easily optimized to improve separations and shorten cycle times. Ultra-fast charge variant separation is facilitated with UHPLC, which complements, and in some instances outperforms, CE approaches in terms of both resolution and throughput.
Keywords: charge variants, ion exchange chromatography, monoclonal antibody, UHPLC
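A minimal sketch of the gradient mapping described above, assuming the mixed-buffer pH tracks the programmed %B linearly between the two buffer endpoints (pH 5.6 and pH 10.2); the 5-minute gradient length is taken from the abstract, and the helper function is purely illustrative.

```python
# Minimal sketch: pH delivered by a linear pump gradient from 100% buffer A
# (pH 5.6) to 100% buffer B (pH 10.2), assuming the delivered pH tracks the
# programmed %B linearly (the design intent of a linear pH gradient method).
def gradient_ph(t_min, gradient_time_min=5.0, ph_a=5.6, ph_b=10.2):
    """Return (%B, pH) at elution time t_min for an assumed gradient length."""
    frac_b = min(max(t_min / gradient_time_min, 0.0), 1.0)  # clamp to 0..1
    return 100.0 * frac_b, ph_a + frac_b * (ph_b - ph_a)

for t in (0.0, 1.0, 2.5, 5.0):
    pct_b, ph = gradient_ph(t)
    print(f"t = {t:4.1f} min  %B = {pct_b:5.1f}  pH = {ph:4.2f}")
```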
Procedia PDF Downloads 440
18191 Linear Codes Afforded by the Permutation Representations of Finite Simple Groups and Their Support Designs
Authors: Amin Saeidi
Abstract:
Using a representation-theoretic approach and considering G to be a finite primitive permutation group of degree n, our aim is to determine linear codes of length n that admit G as a permutation automorphism group. We can show that in some cases, every binary linear code admitting G as a permutation automorphism group is a submodule of a permutation module defined by a primitive action of G. As an illustration of the method, we consider the sporadic simple group M₁₁ and the unitary group U(3,3). We also construct some point- and block-primitive 1-designs from the supports of some codewords of the codes in the discussion.
Keywords: linear code, permutation representation, support design, simple group
Procedia PDF Downloads 77
18190 Study on the DC Linear Stepper Motor to Industrial Applications
Authors: Nolvi Francisco Baggio Filho, Roniele Belusso
Abstract:
Many industrial processes require a precise linear motion. Usually, this movement is achieved with rotary motors combined with electrical control systems and mechanical systems such as gears, pulleys and bearings. Other types of devices are based on linear motors, where the linear motion is obtained directly. The Linear Stepper Motor (MLP) is an excellent solution for industrial applications that require precise positioning and high speed. This study presents an MLP formed by a static linear structure of ferromagnetic material and a mover structure on which three coils are mounted. Mechanical suspension systems allow a linear movement between the static and mover parts, maintaining a constant air gap. The operating principle is based on the tendency of the magnetic flux to align along the path of least reluctance. The force is proportional to the intensity of the electric current, and the speed is proportional to the frequency of excitation of the coils. The study of this device is based on numerical and experimental analyses to verify the relationship between the electric current applied and the planar force developed. In addition, the magnetic field in the air gap region is also monitored.
Keywords: linear stepper motor, planar traction force, reluctance magnetic, industry applications
Procedia PDF Downloads 500
18189 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information
Authors: A. Preetha Priyadharshini, S. B. M. Priya
Abstract:
In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue under imperfect CSI is to keep each user's achievable-rate outage probability below a given threshold level. Such rate outage constraints present significant analytical challenges. Many probabilistic methods have been used to solve the transmit optimization problem under imperfect CSI. Here, decomposition-based large deviation inequality and Bernstein-type inequality convex restriction methods are used to solve the optimization problem under imperfect CSI. These methods are used to achieve improved output quality and lower complexity, and they provide a safe, tractable approximation of the original rate outage constraints. Based on these method implementations, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.
Keywords: imperfect channel state information, outage probability, multiuser multi-input single-output, channel state information
Procedia PDF Downloads 813
18188 Imputing Missing Data in Electronic Health Records: A Comparison of Linear and Non-Linear Imputation Models
Authors: Alireza Vafaei Sadr, Vida Abedi, Jiang Li, Ramin Zand
Abstract:
Missing data is a common challenge in medical research and can lead to biased or incomplete results. When data bias leaks into models, it further exacerbates health disparities; biased algorithms can lead to misclassification and reduced resource allocation and monitoring as part of prevention strategies for certain minorities and vulnerable segments of patient populations, which in turn further reduces the data footprint from the same population – thus, a vicious cycle. This study compares the performance of six imputation techniques, grouped into linear and non-linear models, on two different real-world electronic health records (EHRs) datasets representing 17,864 patient records. The mean absolute percentage error (MAPE) and root mean squared error (RMSE) are used as performance metrics, and the results show that the linear models outperformed the non-linear models in terms of both metrics. These results suggest that linear models can sometimes be the optimal choice for imputing laboratory variables in terms of imputation efficiency and uncertainty of the predicted values.
Keywords: EHR, machine learning, imputation, laboratory variables, algorithmic bias
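As a hedged illustration of the comparison described above (not the authors' pipeline or data), the sketch below masks entries of a synthetic laboratory panel, imputes them with one linear and one non-linear estimator, and scores only the masked cells with RMSE and MAPE.

```python
# Hedged sketch: mask lab values at random in a synthetic table, impute with a
# linear model vs. a non-linear model, and score the masked cells.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

rng = np.random.default_rng(0)
X_true = rng.normal(loc=100, scale=15, size=(500, 5))   # stand-in lab panel
X_true[:, 1] += 0.5 * X_true[:, 0]                       # correlated variables
mask = rng.random(X_true.shape) < 0.15                   # 15% missing at random
X_missing = X_true.copy()
X_missing[mask] = np.nan

models = {
    "linear (BayesianRidge)": BayesianRidge(),
    "non-linear (RandomForest)": RandomForestRegressor(n_estimators=50, random_state=0),
}
for name, est in models.items():
    X_imp = IterativeImputer(estimator=est, random_state=0).fit_transform(X_missing)
    rmse = np.sqrt(mean_squared_error(X_true[mask], X_imp[mask]))
    mape = mean_absolute_percentage_error(X_true[mask], X_imp[mask])
    print(f"{name:28s} RMSE={rmse:6.2f}  MAPE={mape:6.3f}")
```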
Procedia PDF Downloads 85
18187 Handling Missing Data by Using Expectation-Maximization and Expectation-Maximization with Bootstrapping for Linear Functional Relationship Model
Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, A. H. M. R. Imon
Abstract:
The missing value problem is common in statistics and has been of interest for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, using three performance indicators: the mean absolute error (MAE), root mean square error (RMSE) and estimated bias (EB). In this study, we applied the methods of imputing missing values in two types of LFRM, namely the full LFRM and the LFRM in which the slope is estimated using a nonparametric method. Results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set.
Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators
Procedia PDF Downloads 455
18186 Statistical Convergence for the Approximation of Linear Positive Operators
Authors: Neha Bhardwaj
Abstract:
In this paper, we consider positive linear operators and study a Voronovskaya-type result for the operators, then obtain an error estimate in terms of the higher-order modulus of continuity of the function being approximated and its A-statistical convergence. We also compute the corresponding rate of A-statistical convergence for the linear positive operators.
Keywords: Poisson distribution, Voronovskaya, modulus of continuity, A-statistical convergence
Procedia PDF Downloads 333
18185 The Profit Trend of Cosmetics Products Using Bootstrap Edgeworth Approximation
Authors: Edlira Donefski, Lorenc Ekonomi, Tina Donefski
Abstract:
Edgeworth approximation is one of the most important statistical methods and makes a considerable contribution to reducing the standard deviations of the independent variables' coefficients in a quantile regression model, which estimates the conditional median or other quantiles. In this paper, we have applied approximating statistical methods to an economic problem. We created and generated a quantile regression model to see how the profit gained is connected with the realized sales of cosmetic products in a real data set taken from a local business. The linear regression of the generated profit on the realized sales was not free of autocorrelation and heteroscedasticity, which is why we used this model instead of linear regression. Our aim is to analyze in more detail the relation between the variables under study, the profit and the realized sales, and how to minimize the standard errors of the independent variable involved in this study, the level of realized sales. The statistical methods that we have applied in our work are the Edgeworth approximation for independent and identically distributed (IID) cases, the bootstrap version of the model, and the Edgeworth approximation for the bootstrap quantile regression model. The graphics and the results presented here identify the best approximating model of our study.
Keywords: bootstrap, edgeworth approximation, IID, quantile
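A minimal sketch of the bootstrap step around a median (quantile) regression of profit on sales, using statsmodels' QuantReg on synthetic data; the Edgeworth correction itself is not reproduced here.

```python
# Hedged sketch: bootstrapped median regression of profit on sales.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
sales = rng.uniform(10, 100, size=200)                        # synthetic realized sales
profit = 2.0 + 0.3 * sales + rng.standard_t(df=3, size=200)   # heavy-tailed errors
X = sm.add_constant(sales)

fit = sm.QuantReg(profit, X).fit(q=0.5)
print("median-regression slope:", round(fit.params[1], 3))

boot_slopes = []
for _ in range(500):                                          # resample (y, x) pairs
    idx = rng.integers(0, len(profit), size=len(profit))
    boot_fit = sm.QuantReg(profit[idx], X[idx]).fit(q=0.5)
    boot_slopes.append(boot_fit.params[1])
print("bootstrap SE of slope:", round(np.std(boot_slopes, ddof=1), 3))
```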
Procedia PDF Downloads 159
18184 Thermal Decomposition of Ammonium Perchlorate in the Presence of Ferric Oxide and Graphene Oxide Nanomaterials
Authors: Mourad Makhlouf, Bouabdellah Benaicha, Zoubir Benmaamar, Didier Villemin
Abstract:
The addition of combustion catalysts to ammonium perchlorate-based composite fuels can indeed significantly enhance their performance. In this work, a nanocomposite was synthesized using graphene oxide (GO) and hematite nanoparticles grafted onto graphene oxide as a catalyst support. To characterize the nanocomposite, several experimental techniques were employed, including Fourier-transform infrared spectroscopy (FTIR), Raman spectroscopy, and scanning electron microscopy (SEM). FTIR is useful for analyzing chemical bonding and functional groups, while Raman spectroscopy provides information about the vibrational modes of the materials. SEM allows for visualizing the surface morphology and structure. The thermal analysis of two mixtures, one based on AP/GO and the other on AP/GO-Fe2O3, was conducted with varying percentages. The results indicated that the nanocomposite GO-Fe2O3 acted as a catalyst, significantly accelerating the thermal decomposition process of AP. This catalytic effect ultimately led to an improvement in the energy performance of the composite fuel. Overall, the synthesis and characterization of the nanocomposite, as well as the thermal analysis, demonstrated the effectiveness of GO-Fe2O3 as a combustion catalyst in enhancing the performance of ammonium perchlorate-based composite fuels.
Keywords: composite propellants, ammonium perchlorate, nanocomposite, catalytic support, hematite nanoparticles, graphene oxide, thermal decomposition
Procedia PDF Downloads 48
18183 Analysis of CO2 Emission from Thailand's Thermal Power Sector by Divisia Decomposition Approach
Authors: Isara Muangthai, Lin Sue Jane
Abstract:
Electricity is vital to every country's economy in the world. For Thailand, the electricity generation sector plays an important role in the economic system, and it is the largest source of CO2 emissions. The aim of this paper is to use decomposition analysis to investigate the key factors contributing to the changes in CO2 emissions from the electricity sector. Decomposition analysis has been widely used to identify and assess the contributors to changes in emission trends. Our study adopted the Divisia index decomposition to identify the key factors affecting the evolution of CO2 emissions from Thailand's thermal power sector during 2000-2011. The change in CO2 emissions was decomposed into five factors: emission coefficient, heat rate, fuel intensity, electricity intensity, and economic growth. Results show that CO2 emissions from Thailand's thermal power sector increased by 29,173 thousand tons during 2000-2011. Economic growth was found to be the primary factor increasing CO2 emissions, while electricity intensity played the dominant role in decreasing them. The increasing effect of economic growth amounted to 55,924 million tons of CO2 emissions because the growth and development of the economy relied on a large electricity supply. On the other hand, the shift of the fuel structure towards lower-carbon content resulted in a CO2 emission decline. Since the CO2 emissions released from Thailand's electricity generation are rapidly increasing, the Thai government will be required to implement a CO2 reduction plan in the future. In order to cope with the impact of CO2 emissions related to the power sector and to achieve sustainable development, this study suggests that Thailand's government should focus on restructuring the fuel supply in power generation towards low-carbon fuels by promoting the use of renewable energy for electricity, improving the efficiency of electricity use by reducing electricity transmission and distribution line losses, implementing energy conservation strategies by encouraging the purchase of energy-saving products, replacing old power plant technology with new technology, promoting a shift of the economic structure towards less energy-intensive services, and orienting Thailand's power industry towards low-carbon electricity generation.
Keywords: CO2 emission, decomposition analysis, electricity generation, energy consumption
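A hedged sketch of the mechanics of an additive log-mean Divisia (LMDI) decomposition for a single sector whose emissions are the product of the five factors named above; the factor values are illustrative, not Thailand's data.

```python
# Hedged sketch: additive log-mean Divisia (LMDI) decomposition when emissions
# are a product of factors; the sum of factor effects equals the total change.
import numpy as np

def logmean(a, b):
    return (a - b) / (np.log(a) - np.log(b)) if a != b else a

# CO2 = emission coefficient x heat rate x fuel intensity x electricity intensity x GDP
factors_2000 = {"emission_coef": 0.60, "heat_rate": 9.5,
                "fuel_intensity": 1.10, "elec_intensity": 0.45, "gdp": 120.0}
factors_2011 = {"emission_coef": 0.55, "heat_rate": 9.0,
                "fuel_intensity": 1.05, "elec_intensity": 0.40, "gdp": 210.0}

c0 = np.prod(list(factors_2000.values()))
c1 = np.prod(list(factors_2011.values()))
weight = logmean(c1, c0)

effects = {k: weight * np.log(factors_2011[k] / factors_2000[k]) for k in factors_2000}
print({k: round(v, 2) for k, v in effects.items()})
print("sum of effects:", round(sum(effects.values()), 2), "=", round(c1 - c0, 2))
```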
Procedia PDF Downloads 482
18182 Construction of Finite Woven Frames through Bounded Linear Operators
Authors: A. Bhandari, S. Mukherjee
Abstract:
Two frames in a Hilbert space are called woven or weaving if all possible merge combinations between them generate frames of the Hilbert space with uniform frame bounds. Weaving frames are powerful tools in wireless sensor networks that require distributed data processing. Considering the practical applications, this article deals with finite woven frames. We provide methods of constructing finite woven frames; in particular, bounded linear operators are used to construct woven frames from a given frame. Several examples are discussed. We also introduce the notion of woven frame sequences and characterize them through the concepts of gaps and angles between spaces.
Keywords: frames, woven frames, gap, angle
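A hedged numerical sketch of the weaving definition above: for two small frames of R^d, every merge combination is checked and the extreme eigenvalues of its frame operator are recorded as candidate universal bounds. The operator-based constructions of the paper are not reproduced here.

```python
# Hedged sketch: numerically test whether two finite frames for R^d are woven.
import itertools
import numpy as np

rng = np.random.default_rng(2)
d, n = 3, 5
F = rng.normal(size=(n, d))             # rows are frame vectors f_1..f_n
G = F + 0.1 * rng.normal(size=(n, d))   # a small perturbation of F

lower, upper = np.inf, 0.0
for sigma in itertools.product([0, 1], repeat=n):
    rows = np.array([F[i] if s == 0 else G[i] for i, s in enumerate(sigma)])
    S = rows.T @ rows                   # frame operator of this weaving
    eigs = np.linalg.eigvalsh(S)
    lower, upper = min(lower, eigs.min()), max(upper, eigs.max())

woven = lower > 1e-10                   # universal lower bound strictly positive
print(f"universal bounds: A = {lower:.4f}, B = {upper:.4f}, woven = {woven}")
```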
Procedia PDF Downloads 193
18181 Phase II Monitoring of First-Order Autocorrelated General Linear Profiles
Authors: Yihua Wang, Yunru Lai
Abstract:
Statistical process control has been successfully applied in a variety of industries. In some applications, the quality of a process or product is better characterized and summarized by a functional relationship between a response variable and one or more explanatory variables. A collection of this type of data is called a profile. Profile monitoring is used to understand and check the stability of this relationship or curve over time. The independence assumption for the error term is commonly used in existing profile monitoring studies. However, in many applications, the profile data show correlations over time. Therefore, in this study we focus on a general linear regression model with first-order autocorrelation between profiles. We propose an exponentially weighted moving average charting scheme to monitor this type of profile. The simulation study shows that our proposed methods outperform the existing schemes based on the average run length criterion.
Keywords: autocorrelation, EWMA control chart, general linear regression model, profile monitoring
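A generic, hedged sketch of an EWMA scheme applied to per-profile slope estimates, with AR(1) errors between simulated profiles; the chart statistic and limits below are a textbook EWMA, not necessarily the authors' exact scheme.

```python
# Hedged sketch: EWMA monitoring of the slope of a linear profile, with AR(1)
# errors carried between successive profiles.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 20)
lam, L = 0.2, 3.0                        # EWMA smoothing constant and limit width

slopes = []
prev_err = np.zeros_like(x)
for t in range(50):
    err = 0.6 * prev_err + rng.normal(scale=0.1, size=x.size)  # AR(1) between profiles
    y = 1.0 + 2.0 * x + err
    prev_err = err
    slopes.append(np.polyfit(x, y, 1)[0])                      # per-profile slope

slopes = np.array(slopes)
mu0, sigma0 = 2.0, slopes.std(ddof=1)    # in-control slope; spread estimated crudely
z = mu0
for t, s in enumerate(slopes):
    z = lam * s + (1 - lam) * z
    half_width = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
    if abs(z - mu0) > half_width:
        print(f"signal at profile {t}")
```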
Procedia PDF Downloads 460
18180 Determination of the Axial-Vector from an Extended Linear Sigma Model
Authors: Tarek Sayed Taha Ali
Abstract:
The dependence of the axial-vector coupling constant gA on the quark masses has been investigated in the framework of the extended linear sigma model. The field equations have been solved in the mean-field approximation. Our study shows a better fit to the experimental data compared with the existing models.
Keywords: extended linear sigma model, nucleon properties, axial coupling constant, physics
Procedia PDF Downloads 445
18179 Torrefaction of Biomass Pellets: Modeling of the Process in a Fixed Bed Reactor
Authors: Ekaterina Artiukhina, Panagiotis Grammelis
Abstract:
Torrefaction of biomass pellets is considered a useful pretreatment technology to convert them into a high-quality solid biofuel that is more suitable for pyrolysis, gasification, combustion and co-firing applications. In the course of torrefaction, the temperature varies across the pellet, and therefore chemical reactions proceed unevenly within the pellet; however, a uniform thermal distribution along the pellet is generally assumed. The torrefaction process of a single cylindrical pellet is modeled here, accounting for heat transfer coupled with chemical kinetics. A drying sub-model was also introduced. The non-stationary process of wood pellet decomposition is described by a system of non-linear partial differential equations for temperature and mass. The model captures well the main features of the experimental data.
Keywords: torrefaction, biomass pellets, model, heat, mass transfer
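As a hedged, lumped-parameter companion to the coupled heat-and-kinetics model described above (not the authors' PDE system), the sketch below integrates a single first-order Arrhenius mass-loss reaction under an assumed linear heating ramp.

```python
# Hedged sketch: lumped first-order Arrhenius mass loss driven by a heating
# ramp; A and Ea are illustrative values, not fitted torrefaction kinetics.
import numpy as np
from scipy.integrate import solve_ivp

A, Ea, R = 1.0e6, 1.0e5, 8.314          # pre-exponential (1/s), activation energy (J/mol)

def temperature(t):                      # heating ramp: 300 K -> 550 K over 1 hour
    return 300.0 + min(t, 3600.0) * (250.0 / 3600.0)

def mass_loss(t, m):
    k = A * np.exp(-Ea / (R * temperature(t)))
    return -k * m

sol = solve_ivp(mass_loss, (0.0, 5400.0), [1.0], dense_output=True)
for t in (0, 1800, 3600, 5400):
    print(f"t = {t:5d} s  T = {temperature(t):5.0f} K  m/m0 = {sol.sol(t)[0]:.3f}")
```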
Procedia PDF Downloads 480
18178 Extracting Polyhydroxyalkanoates from Waste Sludge of Husbandry Industry Wastewater Treatment Plants
Authors: M. S. Lu, Y. P. Tsai, H. Shu, K. F. Chen, L. L. Lai
Abstract:
This study used the sodium hypochlorite/sodium dodecyl sulfate method to successfully extract polyhydroxyalkanoates (PHAs) from the waste sludge of a husbandry industry wastewater treatment plant. We investigated the optimum operational conditions of three key factors for effectively extracting PHAs from husbandry industry wastewater sludge: the sodium hypochlorite concentration, the liquid-solid ratio, and the reaction time. The experimental results showed the optimum operational conditions for polyhydroxyalkanoate recovery as follows: (1) digestion in the sodium hypochlorite/sodium dodecyl sulfate solution with 15% (v/v) hypochlorite concentration, (2) operation at a liquid-solid ratio of 1.25 mL mg⁻¹, and (3) a reaction time of more than 60 min. Under these conditions, the content of the recovered PHAs was about 53.2±0.66 mg PHAs/g VSS, and the purity of the recovered PHAs was about 78.5±6.91 wt%. The recovered PHAs were further used to produce biodegradable plastics for a decomposition test in soil. The decomposition test showed that 66.5% of the biodegradable plastic produced in the study remained after being buried in soil for 49 days. The cost of extracting PHAs is about 10.3 US$/kg PHAs, which is lower than that of PHAs produced by pure culture methods (12-15 US$/kg PHAs).
Keywords: biodegradable plastic, biopolymers, polyhydroxyalkanoates (PHAs), waste sludge
Procedia PDF Downloads 344
18177 Semigroups of Linear Transformations with Fixed Subspaces: Green’s Relations and Ideals
Authors: Yanisa Chaiya, Jintana Sanwong
Abstract:
Let V be a vector space over a field and W a subspace of V. Let Fix(V,W) denote the set of all linear transformations on V that fix all elements of W. In this paper, we show that Fix(V,W) is a semigroup under the composition of maps and describe Green’s relations on this semigroup in terms of images, kernels and the dimensions of subspaces of the quotient space V/W, where V/W = {v+W : v is an element of V} with v+W = {v+w : w is an element of W}. Let dim(U) denote the dimension of a vector space U and Vα = {vα : v is an element of V}, where vα is the image of v under a linear transformation α. For any cardinal number a, let a' = min{b : b > a}. We also show that the ideals of Fix(V,W) are precisely the sets Fix(r) = {α ∊ Fix(V,W) : dim(Vα/W) < r}, where 1 ≤ r ≤ a' and a = dim(V/W). Moreover, we prove that if V is a finite-dimensional vector space, then every ideal of Fix(V,W) is principal.
Keywords: Green’s relations, ideals, linear transformation semigroups, principal ideals
Procedia PDF Downloads 292
18176 Linear Prediction System in Measuring Glucose Level in Blood
Authors: Intan Maisarah Abd Rahim, Herlina Abdul Rahim, Rashidah Ghazali
Abstract:
Diabetes is a medical condition that can lead to various diseases such as stroke, heart disease, blindness and obesity. In clinical practice, the concern of diabetic patients towards blood glucose examination is rather alarming, as some individuals describe it as painful because of the pinprick and pinch. For patients with a high glucose level, pricking the fingers multiple times a day with a conventional glucose meter for close monitoring can be tiresome, time-consuming and painful. With these concerns, several non-invasive techniques have been used by researchers to measure the glucose level in blood, including ultrasonic sensors, multisensory systems, absorbance or transmittance measurement, bio-impedance, voltage intensity, and thermography. This paper discusses the application of near-infrared (NIR) spectroscopy as a non-invasive method of measuring the glucose level, and the implementation of a linear system identification model to predict the output data of the NIR measurement. In this study, the wavelengths considered are 1450 nm and 1950 nm; both of these wavelengths showed the most reliable information on the presence of glucose in blood. Then, the linear Autoregressive Moving Average with Exogenous inputs (ARMAX) model, with both un-regularized and regularized methods, was implemented to predict the output of the NIR measurement in order to investigate the practicality of the linear system in this study. However, the result showed only 50.11% accuracy obtained from the system, which is far from satisfactory.
Keywords: diabetes, glucose level, linear, near-infrared, non-invasive, prediction system
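A hedged sketch of an ARMAX-type predictor on simulated data: two synthetic absorbance channels stand in for the 1450 nm and 1950 nm NIR readings and serve as exogenous inputs to an ARMA(2,1) model fitted with statsmodels; this is illustrative only and does not reproduce the study's measurements.

```python
# Hedged sketch: ARMAX-style prediction of glucose level from two simulated
# NIR absorbance channels used as exogenous regressors.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
n = 300
nir = rng.normal(size=(n, 2))                     # stand-in absorbance at 1450 nm, 1950 nm
noise = np.zeros(n)
for t in range(1, n):                             # AR(1) measurement noise
    noise[t] = 0.5 * noise[t - 1] + rng.normal(scale=0.3)
glucose = 5.5 + 1.2 * nir[:, 0] - 0.8 * nir[:, 1] + noise

train = slice(0, 250)
model = ARIMA(glucose[train], exog=nir[train], order=(2, 0, 1)).fit()
pred = model.forecast(steps=50, exog=nir[250:])
rmse = np.sqrt(np.mean((pred - glucose[250:]) ** 2))
print("hold-out RMSE:", round(rmse, 3))
```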
Procedia PDF Downloads 159
18175 Closed Form Exact Solution for Second Order Linear Differential Equations
Authors: Saeed Otarod
Abstract:
In a simple and straightforward analysis, a closed-form integral solution is found for non-homogeneous second-order linear ordinary differential equations, in terms of a particular solution of their corresponding homogeneous part. To find the particular solution of the homogeneous part, the equation is transformed into a simple Riccati equation, from which the general solution of the non-homogeneous second-order differential equation, in the form of a closed integral equation, is inferred. The method works well in many important cases, such as the Schrödinger equation for hydrogen-like atoms. A non-homogeneous second-order linear differential equation has been solved as an extra example.
Keywords: explicit, linear, differential, closed form
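A hedged symbolic check of the idea that the full solution follows once one homogeneous solution is known. The sketch uses the classical reduction-of-order and variation-of-parameters formulas, which are consistent in spirit with, but not identical to, the Riccati-based derivation of the abstract.

```python
# Hedged sketch: given one solution of the homogeneous part, build the second
# solution and a particular solution of y'' + p y' + q y = f symbolically.
import sympy as sp

x = sp.symbols('x')
p, q, f = sp.Integer(0), sp.Integer(-1), x        # example: y'' - y = x
y1 = sp.exp(x)                                    # known homogeneous solution

# second homogeneous solution by reduction of order
y2 = sp.simplify(y1 * sp.integrate(sp.exp(-sp.integrate(p, x)) / y1**2, x))

# particular solution by variation of parameters
W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))
yp = sp.simplify(-y1 * sp.integrate(y2 * f / W, x) + y2 * sp.integrate(y1 * f / W, x))

check = sp.simplify(sp.diff(yp, x, 2) + p * sp.diff(yp, x) + q * yp - f)
print("y2 =", y2, " yp =", yp, " residual =", check)   # residual should be 0
```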
Procedia PDF Downloads 62
18174 Numerical Buckling of Composite Cylindrical Shells under Axial Compression Using Asymmetric Meshing Technique (AMT)
Authors: Zia R. Tahir, P. Mandal
Abstract:
This paper presents the details of a numerical study of the buckling and post-buckling behaviour of a laminated carbon fibre reinforced plastic (CFRP) thin-walled cylindrical shell under axial compression using the asymmetric meshing technique (AMT) in ABAQUS. AMT is considered to be a new perturbation method to introduce disturbance without changing geometry, boundary conditions or loading conditions. Asymmetric meshing affects both the predicted buckling load and the buckling mode shapes. A cylindrical shell having lay-up orientation [0°/+45°/-45°/0°] with radius-to-thickness ratio (R/t) equal to 265 and length-to-radius ratio (L/R) equal to 1.5 is analysed numerically. A series of numerical simulations (experiments) is carried out with symmetric and asymmetric meshing to study the effect of asymmetric meshing on the predicted buckling behaviour. The asymmetric meshing technique is employed in the axial direction and the circumferential direction separately using two different methods: first by changing the shell element size and varying the total number of elements, and second by varying the shell element size and keeping the total number of elements constant. The results of linear analysis (eigenvalue analysis) and non-linear analysis (Riks analysis) using symmetric meshing agree well with analytical results. The results of the numerical analysis are presented in the form of a non-dimensional load factor, which is the ratio of the buckling load using the asymmetric meshing technique to the buckling load using the symmetric meshing technique. Using AMT, the load factor varies by about 2% for the linear eigenvalue analysis and by about 2% for the non-linear Riks analysis. The load end-shortening curve in pre-buckling is the same for symmetric and asymmetric meshing, but for asymmetric meshing the curve behaviour in post-buckling becomes extraordinarily complex. The major conclusions are: different methods of AMT have a small influence on the predicted buckling load and a significant influence on the load-displacement curve behaviour in post-buckling; AMT in the axial direction and AMT in the circumferential direction have different influences on the buckling load and the load-displacement curve in post-buckling.
Keywords: CFRP composite cylindrical shell, asymmetric meshing technique, primary buckling, secondary buckling, linear eigenvalue analysis, non-linear Riks analysis
Procedia PDF Downloads 353
18173 The Analysis of a Reactive Hydromagnetic Internal Heat Generating Poiseuille Fluid Flow through a Channel
Authors: Anthony R. Hassan, Jacob A. Gbadeyan
Abstract:
In this paper, the analysis of a reactive hydromagnetic Poiseuille fluid flow under each of sensitized, Arrhenius and bimolecular chemical kinetics through a channel in the presence of a heat source is carried out. An exothermic reaction is assumed, while the concentration of the material is neglected. The Adomian Decomposition Method (ADM) together with Padé approximation is used to obtain the solutions of the governing nonlinear non-dimensional differential equations. The effects of various physical parameters on the velocity and temperature fields of the fluid flow are investigated. The entropy generation analysis and the conditions for thermal criticality are also presented.
Keywords: chemical kinetics, entropy generation, thermal criticality, Adomian decomposition method (ADM) and Padé approximation
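A hedged sketch of the Adomian Decomposition Method on a toy problem, u' = u² with u(0) = 1 (exact solution 1/(1-t)); it illustrates only the mechanics of the Adomian polynomials, not the hydromagnetic flow equations or the Padé acceleration used in the paper.

```python
# Hedged sketch: ADM for the toy initial value problem u' = u^2, u(0) = 1.
import sympy as sp

t, lam = sp.symbols('t lambda')
N = lambda u: u**2                      # nonlinear term N(u)
u = [sp.Integer(1)]                     # u_0 from the initial condition

for n in range(5):
    series = sum(lam**k * u[k] for k in range(len(u)))
    # Adomian polynomial A_n = (1/n!) d^n/dlam^n N(sum lam^k u_k) at lam = 0
    A_n = sp.diff(N(series), lam, n).subs(lam, 0) / sp.factorial(n)
    u.append(sp.integrate(A_n, (t, 0, t)))   # u_{n+1}(t) = integral_0^t A_n

approx = sp.expand(sum(u))
exact = sp.expand(sp.series(1 / (1 - t), t, 0, 6).removeO())
print("ADM partial sum :", approx)      # 1 + t + t**2 + ... + t**5
print("Taylor of exact :", exact)       # identical truncation
```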
Procedia PDF Downloads 464
18172 Analysis of Nonlinear and Non-Stationary Signal to Extract the Features Using Hilbert Huang Transform
Authors: A. N. Paithane, D. S. Bormane, S. D. Shirbahadurkar
Abstract:
Emotion recognition is an important research topic in the field of human-computer interfaces. A novel technique for feature extraction (FE) is presented here; further, a new method based on the Hilbert-Huang transform (HHT) is used for human emotion recognition. This method is feasible for analyzing nonlinear and non-stationary signals. Each signal is decomposed into intrinsic mode functions (IMFs) using empirical mode decomposition (EMD). These functions are used to extract the features using a fission and fusion process. The decomposition technique we adopt is a new technique for adaptively decomposing signals. In this perspective, we report the potential usefulness of EMD-based techniques. We evaluated the algorithm on the Augsburg University database, a manually annotated database.
Keywords: intrinsic mode function (IMF), Hilbert-Huang transform (HHT), empirical mode decomposition (EMD), emotion detection, electrocardiogram (ECG)
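A hedged sketch of HHT-style features on a synthetic two-tone signal, assuming the third-party PyEMD package (EMD-signal) is available for the EMD step; the feature fission/fusion scheme of the paper is not reproduced.

```python
# Hedged sketch: EMD via PyEMD (assumed installed as `pip install EMD-signal`),
# then instantaneous amplitude/frequency of each IMF from the Hilbert transform.
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD

fs = 500.0
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)  # two tones

imfs = EMD().emd(signal)                      # intrinsic mode functions
for k, imf in enumerate(imfs):
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)
    print(f"IMF {k}: mean amplitude = {amplitude.mean():.3f}, "
          f"mean frequency = {inst_freq.mean():.1f} Hz")
```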
Procedia PDF Downloads 580
18171 Robust Variogram Fitting Using Non-Linear Rank-Based Estimators
Authors: Hazem M. Al-Mofleh, John E. Daniels, Joseph W. McKean
Abstract:
In this paper, numerous robust fitting procedures are considered for estimating spatial variograms. In spatial statistics, the conventional variogram fitting procedure (non-linear weighted least squares) suffers from the same outlier problem that has plagued this method since its inception. Even a three-parameter model like the variogram can be adversely affected by a single outlier. This paper uses Hogg-type adaptive procedures to select an optimal score function for a rank-based estimator for these non-linear models. Numerical examples and simulation studies demonstrate the robustness, utility, efficiency, and validity of these estimates.
Keywords: asymptotic relative efficiency, non-linear rank-based, rank estimates, variogram
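As a hedged stand-in for the rank-based fit (a robust soft-L1 loss is used here instead of the Hogg-type adaptive rank estimator studied in the paper), the sketch below fits a spherical variogram model to empirical semivariances containing a single gross outlier.

```python
# Hedged sketch: robust-loss fit of a spherical variogram model with SciPy.
import numpy as np
from scipy.optimize import least_squares

def spherical(h, nugget, sill, rng_a):
    h = np.asarray(h, dtype=float)
    inside = nugget + sill * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3)
    return np.where(h <= rng_a, inside, nugget + sill)

lags = np.linspace(1, 50, 15)
true = spherical(lags, nugget=0.5, sill=4.0, rng_a=30.0)
rng = np.random.default_rng(5)
gamma_hat = true + rng.normal(scale=0.2, size=lags.size)
gamma_hat[10] += 5.0                                    # a single gross outlier

def residuals(params):
    return spherical(lags, *params) - gamma_hat

fit = least_squares(residuals, x0=[0.1, 1.0, 10.0], loss='soft_l1', f_scale=0.5)
print("estimated (nugget, sill, range):", np.round(fit.x, 2))
```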
Procedia PDF Downloads 431
18170 Tempo-Spatial Pattern of Progress and Disparity in Child Health in Uttar Pradesh, India
Authors: Gudakesh Yadav
Abstract:
Uttar Pradesh is one of the poorest-performing states of India in terms of child health. Using data from the three rounds of the NFHS and two rounds of the DLHS, this paper attempts to examine tempo-spatial change in child health and care practices in Uttar Pradesh and its regions. Rate-ratio, CI, multivariate, and decomposition analyses have been used for the study. Findings demonstrate that child health care practices have improved over time in all regions of the state; however, the western and southern regions registered the lowest progress in child immunization. Nevertheless, there was no decline in the prevalence of diarrhoea and ARI over the period, and it remains critically high in the western and southern regions. These regions also performed poorly in providing ORS and in treating diarrhoea and ARI. Public health services are the least preferred for diarrhoea and ARI treatment. Results from the decomposition analysis reveal that rural residence, mother's illiteracy and wealth contributed most to the low utilization of child health care practices consistently over the period. The study calls for targeted interventions for vulnerable children to accelerate the utilization of child health care services. Poorly performing regions should be targeted and routinely monitored on poor child health indicators.
Keywords: Acute Respiratory Infection (ARI), decomposition, diarrhea, inequality, immunization
Procedia PDF Downloads 300
18169 A Morphological Thinking Approach for Conceptualising Product-Service Systems Solutions
Authors: Nicolas Haber
Abstract:
The study addresses the conceptual design of Product-Service Systems (PSSs) as a means of innovating solutions with the aim of reducing the environmental load of conventional product-based solutions. Functional approaches targeting PSS solutions are often developed through intuitive methods within the constraints of the setting in which they are conceived. Adopting morphological matrices in designing PSS concepts allows a thorough understanding of the settings, stakeholders, and functional requirements. Additionally, such a methodology is robust and adaptable to product-oriented, use-oriented and result-oriented systems. The research is based on a functional decomposition of the task in a similar way to product design, extended to include service components, providers, and receivers, while assessing the adaptability and homogeneity of the selected components and actors. A use-oriented concept is presented via a practical case study at an agricultural boom-sprayer manufacturer to demonstrate the effectiveness of the morphological approach and justify its viability. Additionally, a life cycle analysis is carried out in order to evaluate the environmental advantages inherent in a PSS solution versus a conventional solution. In light of the applications presented, the morphological approach appears to be a valid and generic tactic for conceiving integrated solutions whilst capturing the interrelations between the actors and elements of an integrated product-service system.
Keywords: conceptual design, design for sustainability, functional decomposition, product-service systems
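A hedged sketch of the morphological-matrix mechanics: enumerate every combination of options across functional rows and filter out pairs judged incompatible. The rows, options and constraint below are illustrative, not the boom-sprayer case study.

```python
# Hedged sketch: enumerate and filter PSS concepts from a morphological matrix.
from itertools import product

matrix = {
    "product":  ["standard sprayer", "modular sprayer"],
    "service":  ["maintenance contract", "pay-per-hectare use", "spraying as a result"],
    "provider": ["manufacturer", "dealer network"],
}
incompatible = {("standard sprayer", "spraying as a result")}  # assumed constraint

concepts = []
for combo in product(*matrix.values()):
    pairs = {(a, b) for a in combo for b in combo if a != b}
    if not pairs & incompatible:
        concepts.append(dict(zip(matrix.keys(), combo)))

total = len(list(product(*matrix.values())))
print(f"{len(concepts)} feasible PSS concepts out of {total} combinations")
```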
Procedia PDF Downloads 264
18168 Generalized Additive Model for Estimating Propensity Score
Authors: Tahmidul Islam
Abstract:
The Propensity Score Matching (PSM) technique has been widely used for estimating the causal effect of treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms in the covariates is the most commonly used technique in many studies. Logistic regression is also used with cubic splines to retain flexibility in the model. However, choosing the functional form of the logistic regression model has been an open question, since the effectiveness of PSM depends on how accurately the PS is estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be appropriate. One can estimate the PS using machine learning techniques such as random forests or neural networks for more accuracy in non-linear situations. In this study, an attempt has been made to compare the efficacy of the Generalized Additive Model (GAM) in various linear and non-linear settings and compare its performance with usual logistic regression. GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. In this study, various simple and complex models have been considered for treatment under several situations (small/large sample, low/high number of treatment units) and examined to see which method leads to more covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust against the inclusion of quadratic and interaction terms and reduces the mean difference between treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching
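A hedged sketch of a GAM-style propensity model using spline features plus logistic regression (scikit-learn's SplineTransformer stands in here for a dedicated GAM package), followed by a weighted standardized-mean-difference balance check on simulated data.

```python
# Hedged sketch: spline-plus-logistic propensity score and a balance check.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 2000
x = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(0.5 * x[:, 0] ** 2 - x[:, 1])))   # non-linear assignment
treat = rng.random(n) < p_true

model = make_pipeline(SplineTransformer(degree=3, n_knots=5),
                      LogisticRegression(max_iter=1000))
ps = model.fit(x, treat).predict_proba(x)[:, 1]
w = np.where(treat, 1 / ps, 1 / (1 - ps))                    # inverse-probability weights

def smd(col):
    m1 = np.average(col[treat], weights=w[treat])
    m0 = np.average(col[~treat], weights=w[~treat])
    pooled = np.sqrt((col[treat].var() + col[~treat].var()) / 2)
    return (m1 - m0) / pooled

print("weighted SMDs:", [round(smd(x[:, j]), 3) for j in range(2)])
```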
Procedia PDF Downloads 367
18167 A Discovery of the Dual Sequential Pattern of Prime Numbers in P x P: Applications in a Formal Proof of the Twin-Prime Conjecture
Authors: Yingxu Wang
Abstract:
This work presents basic research on the recursive structures and dual sequential patterns of primes for a formal proof of the Twin-Prime Conjecture (TPC). A rigorous methodology of Twin-Prime Decomposition (TPD) is developed in MATLAB to inductively verify potential twins in the dual sequences of primes. The key finding of this basic study confirms that the dual sequences of twin primes are not only symmetric but also infinite in the unique base-6 cycle, except that a limited subset of potential pairs is eliminated by the lack of dual primality. Both theory and experiments have formally proven that the infinity of twin primes stated in TPC holds in the P x P space.
Keywords: number theory, primes, twin-prime conjecture, dual primes (P x P), twin prime decomposition, formal proof, algorithm
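A hedged sketch of the base-6 observation underlying the decomposition: every prime greater than 3 is congruent to 1 or 5 mod 6, so twin-prime candidates are the pairs (6k-1, 6k+1), and a pair survives only if both members are prime (the dual-primality condition).

```python
# Hedged sketch: list twin-prime pairs of the form (6k - 1, 6k + 1).
from sympy import isprime

twins = [(6 * k - 1, 6 * k + 1) for k in range(1, 200)
         if isprime(6 * k - 1) and isprime(6 * k + 1)]
print(f"{len(twins)} twin-prime pairs found for k < 200, e.g. {twins[:5]}")
```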
Procedia PDF Downloads 64
18166 Carbon Sequestration and Carbon Stock Potential of Major Forest Types in the Foot Hills of Nilgiri Biosphere Reserve, India
Authors: B. Palanikumaran, N. Kanagaraj, M. Sangareswari, V. Sailaja, Kapil Sihag
Abstract:
The present study aimed to estimate the carbon sequestration potential of the major forest types present in the foothills of the Nilgiri Biosphere Reserve. The total biomass carbon stock was estimated in tropical thorn forest, tropical dry deciduous forest and tropical moist deciduous forest as 14.61 t C ha⁻¹, 75.16 t C ha⁻¹ and 187.52 t C ha⁻¹, respectively. The density and basal area were estimated in tropical thorn forest, tropical dry deciduous forest and tropical moist deciduous forest as 173 stems ha⁻¹, 349 stems ha⁻¹ and 391 stems ha⁻¹, and 6.21 m² ha⁻¹, 31.09 m² ha⁻¹ and 67.34 m² ha⁻¹, respectively. The soil carbon stock of the different forest ecosystems was estimated, and the results revealed that tropical moist deciduous forest (71.74 t C ha⁻¹) accounted for more soil carbon stock when compared to tropical dry deciduous forest (31.80 t C ha⁻¹) and tropical thorn forest (3.99 t C ha⁻¹). The tropical moist deciduous forest had the maximum annual leaf litter fall of 12.77 t ha⁻¹ yr⁻¹, followed by the tropical dry deciduous forest with 6.44 t ha⁻¹ yr⁻¹, while the tropical thorn forest accounted for 3.42 t ha⁻¹ yr⁻¹ of leaf litter production. The leaf litter carbon stocks of tropical thorn forest, tropical dry deciduous forest and tropical moist deciduous forest were found to be 1.02 t C ha⁻¹ yr⁻¹, 2.28 t C ha⁻¹ yr⁻¹ and 5.42 t C ha⁻¹ yr⁻¹, respectively. The results showed that the decomposition percentage at the soil surface followed the order: tropical dry deciduous forest (77.66 percent) > tropical thorn forest (69.49 percent) > tropical moist deciduous forest (63.17 percent). The decomposition percentage at the soil subsurface was also studied, and the highest decomposition percentage was observed in tropical dry deciduous forest (80.52 percent), followed by tropical moist deciduous forest (77.65 percent) and tropical thorn forest (72.10 percent). The decomposition percentage was higher at the soil subsurface. Among the three forest types, the tropical moist deciduous forest accounted for the highest bacterial (59.67 × 10⁵ cfu g⁻¹ soil), actinomycete (74.87 × 10⁴ cfu g⁻¹ soil) and fungal (112.60 × 10³ cfu g⁻¹ soil) populations. Overall, the study concludes that the tropical moist deciduous forest has the greatest potential for storing carbon as biomass, with a value of 264.68 t C ha⁻¹, and supports the largest microbial populations.
Keywords: basal area, carbon sequestration, carbon stock, Nilgiri biosphere reserve
Procedia PDF Downloads 169
18165 A Study on the Coefficient of Transforming Relative Lateral Displacement under Linear Analysis of Structure to Its Real Relative Lateral Displacement
Authors: Abtin Farokhipanah
Abstract:
In recent years, the analysis of structures for earthquake effects has been based on ductility design, in contrast to strength design. The ASCE 7-10 code amplifies the relative drifts calculated from a linear analysis by Cd, the Deflection Amplification Factor, to obtain the real relative drifts, which could otherwise be calculated using nonlinear analysis. This lateral drift should be limited to the code limits. The purposes of this research are to calculate this amplification factor for different structures, compare it with the ASCE 7-10 code values, and offer the best coefficient. To this end, short and tall steel building structures with various earthquake-resistant systems are surveyed in linear and nonlinear analyses, so that these questions can be answered: 1. Does the Response Modification Coefficient (R) have a meaningful relation to the Deflection Amplification Factor? 2. Do structure height, seismic zone, response spectrum and similar parameters have an effect on the coefficient that converts the drift of a linear analysis to the real drift of the structure? The procedure used to conduct this research includes: (a) study of earthquake-resistant systems, (b) selection of systems and modeling, (c) analysis of the modeled systems using linear and nonlinear methods, (d) calculation of the conversion coefficient for each system, and (e) comparison of the conversion coefficients with those offered by the code and drawing conclusions.
Keywords: ASCE07-10 code, deflection amplification factor, earthquake engineering, lateral displacement of structures, response modification coefficient
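A minimal sketch of the ASCE 7-10 amplification step itself (Eq. 12.8-15, delta_x = Cd x delta_xe / Ie); the drift value, Cd and allowable limit below are illustrative assumptions, not results of this research.

```python
# Hedged sketch of the ASCE 7-10 drift amplification step: the design storey
# drift is the elastic drift from linear analysis scaled by Cd and divided by
# the importance factor Ie, then compared against an allowable drift.
def design_drift(elastic_drift, cd, ie=1.0):
    """ASCE 7-10 Eq. 12.8-15: delta_x = Cd * delta_xe / Ie."""
    return cd * elastic_drift / ie

elastic_drift = 0.003          # storey drift ratio from linear analysis (assumed)
cd, allowable = 5.5, 0.020     # e.g. a steel special moment frame, 2% limit
real_drift = design_drift(elastic_drift, cd)
print(f"amplified drift = {real_drift:.4f}, OK = {real_drift <= allowable}")
```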
Procedia PDF Downloads 354
18164 Parameterized Lyapunov Function Based Robust Diagonal Dominance Pre-Compensator Design for Linear Parameter Varying Model
Authors: Xiaobao Han, Huacong Li, Jia Li
Abstract:
For the dynamic decoupling of a linear parameter varying system, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMIs). To solve this problem, the optimization problem is first equivalently transformed into a new form that eliminates the coupling relationship between the parameterized Lyapunov function (PLF) and the pre-compensator. Then the problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is achieved, which satisfies both robustness and decoupling performance requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation of a turbofan engine PLPV model.
Keywords: linear parameter varying (LPV), parameterized Lyapunov function (PLF), linear matrix inequalities (LMI), diagonal dominance pre-compensator
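A hedged sketch of the kind of reduced step described above, a plain (non-parameterized) LMI feasibility problem solved with CVXPY: find P > 0 with AᵀP + PA < 0 for an illustrative stable state matrix; the PLMI-to-LMI transformation of the paper is not reproduced.

```python
# Hedged sketch: a standard Lyapunov LMI feasibility problem with CVXPY.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                       # illustrative stable system matrix
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),               # P positive definite
               A.T @ P + P @ A << -eps * np.eye(2)]  # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("status:", prob.status)
print("P =\n", np.round(P.value, 3))
```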
Procedia PDF Downloads 399
18163 Thermal Stability and Insulation of a Cement Mixture Using Graphene Oxide Nanosheets
Authors: Nasser A. M. Habib
Abstract:
The impressive physical properties of graphene derivatives, including their thermal properties, have made them an attractive addition to advanced construction nanomaterials. In this study, we investigated the impact of incorporating low amounts of graphene oxide (GO) into cement mixture nanocomposites on their heat storage and thermal stability. The composites were analyzed using Fourier transform infrared spectroscopy, thermogravimetric analysis, and field emission scanning electron microscopy. Results showed that GO significantly improved the specific heat by 32%, reduced the thermal conductivity by 16%, and reduced thermal decomposition to only 3% at a concentration of 1.2 wt%. These findings suggest that the cement mixture can withstand high temperatures and may suit specific applications requiring thermal stability and insulation properties.
Keywords: cement mixture composite, graphene oxide, thermal decomposition, thermal conductivity
Procedia PDF Downloads 69