Search results for: least-squares polynomial approximation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 753

123 Comparison of the Curvizigzag Incision with Transverse Stewart Incision in Women Undergoing Modified Radical Mastectomy for Carcinoma Breast

Authors: John Joseph S. Martis, Rohanchandra R. Gatty, Aaron Jose Fernandes, Rahul P. Nambiar

Abstract:

Introduction: Surgery for breast cancer is either mastectomy or breast conservation surgery. The most commonly used incision for modified radical mastectomy is the transverse Stewart incision. However, this incision may have the disadvantage of causing disparity between the closure lines of the superior and inferior skin flaps in mastectomy and can cause overhanging of soft tissue below and behind the axilla. The curvizigzag incision, on principle, may help in this regard and can prevent scar migration beyond the anterior axillary line. This study aims to compare the two incisions in this regard. Methods: 100 patients with carcinoma of the breast were included in the study after satisfying the inclusion and exclusion criteria. They underwent surgery at Father Muller Medical College, Mangalore, India, between November 2019 and September 2021. The patients were divided into two groups: group A patients underwent modified radical mastectomy with the curvizigzag incision and group B patients with the transverse Stewart incision. Results: Seroma on postoperative days 1 and 2 was 0% in both groups. Seroma on postoperative day 30 was present in 14% of patients in group B. 60% of patients in group B had sag of soft tissue below and behind the axilla, whereas none of the patients in group A had this problem. In 64% of the patients in group B, the incision crossed the anterior axillary fold, and 64% of the patients in group B had tension at the incision site during approximation of the skin flaps. Conclusion: The curvizigzag incision is statistically significantly better, with fewer complications, than the transverse Stewart incision for modified radical mastectomy for carcinoma of the breast.

Keywords: breast cancer, curvizigzag incision, transverse Stewart incision, seroma, modified radical mastectomy

Procedia PDF Downloads 95
122 Heuristics for Optimizing Power Consumption in the Smart Grid

Authors: Zaid Jamal Saeed Almahmoud

Abstract:

Our increasing reliance on electricity, with inefficient consumption trends, has resulted in several economic and environmental threats. These threats include wasting billions of dollars, draining limited resources, and elevating the impact of climate change. As a solution, the smart grid is emerging as the future power grid, with smart techniques to optimize power consumption and electricity generation. Minimizing the peak power consumption under a fixed delay requirement is a significant problem in the smart grid. In addition, matching demand to supply is a key requirement for the success of the future electricity grid. In this work, we consider the problem of minimizing the peak demand under appliance constraints by scheduling power jobs with uniform release dates and deadlines. As the problem is known to be NP-hard, we propose two versions of a heuristic algorithm for solving it. Our theoretical analysis and experimental results show that our proposed heuristics outperform existing methods by providing a better approximation to the optimal solution. In addition, we consider dynamic pricing methods to minimize the peak load and match demand to supply in the smart grid. Our contribution is the proposal of generic, as well as customized, pricing heuristics to minimize the peak demand and match demand with supply. In addition, we propose optimal pricing algorithms that can be used when the maximum deadline period of the power jobs is relatively small. Finally, we provide theoretical analysis and conduct several experiments to evaluate the performance of the proposed algorithms.
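To make the scheduling problem concrete, the sketch below shows one simple greedy peak-minimization heuristic; it is an illustrative assumption, not the authors' algorithm, and the job list is hypothetical.

```python
# A minimal greedy sketch: each job needs `duration` unit-time slots of power
# `power`, placed within [release, deadline). Jobs are processed in earliest-
# deadline order, and each required slot goes where the aggregate load is
# currently lowest, which tends to flatten the peak.

def schedule_jobs(jobs, horizon):
    """jobs: list of dicts with keys release, deadline, duration, power."""
    load = [0.0] * horizon                     # aggregate power per time slot
    schedule = {}
    for idx, job in sorted(enumerate(jobs), key=lambda kv: kv[1]["deadline"]):
        window = list(range(job["release"], job["deadline"]))
        # pick the `duration` least-loaded slots in the feasible window
        chosen = sorted(window, key=lambda t: load[t])[: job["duration"]]
        for t in chosen:
            load[t] += job["power"]
        schedule[idx] = sorted(chosen)
    return schedule, max(load)

# Hypothetical appliance jobs for illustration only.
jobs = [
    {"release": 0, "deadline": 6, "duration": 2, "power": 3.0},
    {"release": 0, "deadline": 3, "duration": 1, "power": 2.0},
    {"release": 2, "deadline": 8, "duration": 3, "power": 1.5},
]
plan, peak = schedule_jobs(jobs, horizon=8)
print(plan, peak)
```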

Keywords: heuristics, optimization, smart grid, peak demand, power supply

Procedia PDF Downloads 88
121 Mg and MgN₃ Cluster in Diamond: Quantum Mechanical Studies

Authors: T. S. Almutairi, Paul May, Neil Allan

Abstract:

The geometrical, electronic and magnetic properties of the neutral Mg center and the MgN₃ cluster in diamond have been studied theoretically in detail by means of an HSE06 Hamiltonian that includes a fraction of the exact exchange term; this is important for a satisfactory picture of the electronic states of open-shell systems. A further set of calculations with GGA functionals has also been included for comparison, and these support the HSE06 results. The local lattice perturbations introduced by the Mg defect are confined to the first and second shells of atoms before dying out. The formation energy of the single Mg defect calculated with HSE06 and GGA agrees with the previous result. We find that the triplet state with C₃ᵥ symmetry is the ground state of the Mg center, lower in energy than the C₂ᵥ singlet by ~0.1 eV. The recent experimental ZPL (557.4 nm) of the Mg center in diamond is discussed in view of the present work. The analysis of the band structure of the MgN₃ cluster confirms that the MgN₃ defect introduces a shallow donor level in the gap, close to the conduction band edge. This observation is supported by the empirical marker method (EMM), which produces n-type levels shallower than the P donor level. The formation energy of MgN₂ calculated from a 2NV defect (~3.6 eV) is a promising value from which to engineer MgN₃ defects inside diamond. Ion implantation followed by heating to about 1200-1600°C might induce migration of N-related defects to the localized Mg center. Temperature control is needed for this process to restore the damage and ensure the mobilities of V and N, which demands a more precise experimental study.
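As a reference for the formation energies discussed above, the standard supercell expression for a neutral defect X (the general textbook form, not necessarily the exact expression used by the authors) is:

```latex
E_f[X] \;=\; E_{\mathrm{tot}}[X] \;-\; E_{\mathrm{tot}}[\mathrm{bulk}] \;-\; \sum_i n_i \mu_i ,
```

where E_tot[X] and E_tot[bulk] are the total energies of the defective and pristine supercells, and n_i is the number of atoms of species i added to (n_i > 0) or removed from (n_i < 0) the supercell, with chemical potential μ_i.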

Keywords: empirical marker method, generalised gradient approximation, Heyd–Scuseria–Ernzerhof screened hybrid functional, zero phonon line

Procedia PDF Downloads 115
120 From Equations to Structures: Linking Abstract Algebra and High-School Algebra for Secondary School Teachers

Authors: J. Shamash

Abstract:

The high-school curriculum in algebra deals mainly with the solution of different types of equations. However, modern algebra has a completely different viewpoint and is concerned with algebraic structures and operations. A question then arises: What might be the relevance and contribution of an abstract algebra course for developing expertise and mathematical perspective in secondary school mathematics instruction? This is the focus of this paper. The course Algebra: From Equations to Structures is a carefully designed abstract algebra course for Israeli secondary school mathematics teachers. The course provides an introduction to algebraic structures and modern abstract algebra, and links abstract algebra to the high-school curriculum in algebra. It follows the historical attempts of mathematicians to solve polynomial equations of higher degrees, attempts which resulted in the development of group theory and field theory by Galois and Abel. In other words, algebraic structures grew out of a need to solve certain problems, and proved to be a much more fruitful way of viewing them. This perspective is illustrated by theorems in both group theory and field theory. Along the historical ‘journey’, many other major results in algebra from the past 150 years are introduced, and recent directions that current research in algebra is taking are highlighted. This course is part of a unique master’s program – the Rothschild-Weizmann Program – offered by the Weizmann Institute of Science, especially designed for practicing Israeli secondary school teachers. A major component of the program comprises mathematical studies tailored for the students in the program. The rationale and structure of the course Algebra: From Equations to Structures are described, and its relevance to teaching school algebra is examined by analyzing three kinds of data sources. The first is position papers written by the participating teachers regarding the relevance of advanced mathematics studies to expertise in classroom instruction. The second data source is didactic materials designed by the participating teachers, in which they connected the mathematics learned in the mathematics courses to the school curriculum and teaching. The third data source is final projects carried out by the teachers based on material learned in the course.

Keywords: abstract algebra, linking abstract algebra and school mathematics, school algebra, secondary school mathematics, teacher professional development

Procedia PDF Downloads 146
119 Magnetohydrodynamic Flow of Viscoelastic Nanofluid and Heat Transfer over a Stretching Surface with Non-Uniform Heat Source/Sink and Non-Linear Radiation

Authors: Md. S. Ansari, S. S. Motsa

Abstract:

In this paper, an analysis has been made of the flow of a non-Newtonian viscoelastic nanofluid over a linearly stretching sheet under the influence of a uniform magnetic field. Heat transfer characteristics are analyzed taking into account the effects of nonlinear radiation and a non-uniform heat source/sink. The transport equations contain the simultaneous effects of Brownian motion and thermophoretic diffusion of nanoparticles. The relevant partial differential equations are non-dimensionalized and transformed into ordinary differential equations by using appropriate similarity transformations. The transformed, highly nonlinear, ordinary differential equations are solved by the spectral local linearisation method. The numerical convergence, error and stability analysis of the iteration schemes are presented. The effects of different controlling parameters, namely, radiation, space- and temperature-dependent heat source/sink, Brownian motion, thermophoresis, the viscoelastic parameter, the Lewis number and the magnetic force parameter on the flow field, heat transfer characteristics and nanoparticle concentration are examined. The present investigation has many industrial and engineering applications in the fields of coatings and suspensions, cooling of metallic plates, oils and grease, paper production, coal-water or coal-oil slurries, heat exchanger technology, and materials processing.

Keywords: magnetic field, nonlinear radiation, non-uniform heat source/sink, similar solution, spectral local linearisation method, Rosseland diffusion approximation

Procedia PDF Downloads 372
118 Accurate Binding Energy of Ytterbium Dimer from Ab Initio Calculations and Ultracold Photoassociation Spectroscopy

Authors: Giorgio Visentin, Alexei A. Buchachenko

Abstract:

Recent proposals to use the Yb dimer as an optical clock and as a sensor for non-Newtonian gravity imply knowledge of its interaction potential. Here, the ground-state Born-Oppenheimer Yb₂ potential energy curve is represented by a semi-analytical function consisting of short- and long-range contributions. For the former, systematic ab initio all-electron exact 2-component scalar-relativistic CCSD(T) calculations are carried out. Special care is taken to saturate the diffuse basis set component with atom- and bond-centered primitives and to reach the complete basis set limit through the n = D, T, Q sequence of the correlation-consistent polarized n-zeta basis sets. Similar approaches are applied to the long-range dipole and quadrupole dispersion terms by implementing the CCSD(3) polarization propagator method for dynamic polarizabilities. Dispersion coefficients are then computed through Casimir-Polder integration. The semiclassical constraint on the number of bound vibrational levels known for the ¹⁷⁴Yb isotope is used to scale the potential function. The scaling, based on the most accurate ab initio results, bounds the interaction energy of two Yb atoms within the narrow 734 ± 4 cm⁻¹ range, in reasonable agreement with previous ab initio-based estimations. The resulting potentials can be used as the reference for more sophisticated models that go beyond the Born-Oppenheimer approximation and provide the means for their uncertainty estimation. The work is supported by Russian Science Foundation grant # 17-13-01466.
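For illustration of the basis-set convergence step, a minimal sketch of a two-point complete-basis-set extrapolation is given below; the inverse-cubic form and the energies used are assumptions for demonstration, not the authors' data.

```python
# A minimal sketch: two-point complete-basis-set (CBS) extrapolation of
# correlation energies, assuming the commonly used inverse-cubic dependence
# E(n) = E_CBS + A / n**3 for correlation-consistent basis sets.

def cbs_two_point(e_small, n_small, e_large, n_large):
    """Extrapolate from two cardinal numbers (e.g. T=3, Q=4)."""
    a, b = n_small ** 3, n_large ** 3
    return (b * e_large - a * e_small) / (b - a)

# Hypothetical CCSD(T) correlation energies (hartree) for illustration only.
print(cbs_two_point(-0.5412, 3, -0.5520, 4))
```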

Keywords: ab initio coupled cluster methods, interaction potential, semi-analytical function, ytterbium dimer

Procedia PDF Downloads 151
117 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Indeed, exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine the indoor radon level precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI/machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
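The sketch below illustrates the general idea of a network with random, fixed weights and a sigmoid activation, where only the output layer is fitted by least squares; the features, data sizes and single hidden layer are hypothetical simplifications, not the authors' MATLAB model.

```python
import numpy as np

# A minimal random-weight network sketch: input weights and biases are drawn at
# random and kept fixed; only the output weights are obtained by least squares.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))          # e.g. geospatial/behavioural/building features (hypothetical)
y = rng.normal(size=200)               # e.g. log indoor radon concentration (hypothetical)

def fit_random_weight_net(X, y, n_hidden=50, rng=rng):
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden layer
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # least-squares output weights
    return W, b, beta

def predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

W, b, beta = fit_random_weight_net(X, y)
print(predict(X[:5], W, b, beta))
```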

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 96
116 Design of Microwave Building Block by Using Numerical Search Algorithm

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

With the development of technology, countries have gradually allocated more and more frequency spectrum for civil and commercial usage, especially high radio frequency bands offering high information capacity. The field effect becomes more and more prominent in microwave components as frequency increases, which invalidates transmission line theory and complicates the design of microwave components. Here a modeling approach based on a numerical search algorithm is proposed to design various building blocks for microwave circuits, avoiding complicated impedance matching and equivalent electrical circuit approximation. Concretely, a microwave component is discretized into a set of segments along the microwave propagation path. Each segment is initialized with random dimensions, which constructs a multi-dimensional parameter space. Then numerical search algorithms (e.g., the pattern search algorithm) are used to find the ideal geometrical parameters. The optimal parameter set is achieved by evaluating the fitness of the S-parameters after a number of iterations. We have adopted this approach in our current projects and designed many microwave components including sharp bends, T-branches, Y-branches, microstrip-to-stripline converters, etc. For example, a stripline 90° bend was designed in a 2.54 mm x 2.54 mm space for dual-band operation (Ka band and Ku band) with < 0.18 dB insertion loss and < -55 dB reflection. We expect that this approach can enrich the toolkits of microwave designers.
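A minimal compass/pattern-search loop of the kind described above might look like the sketch below; the fitness function is a hypothetical stand-in for an electromagnetic solver that would return an S-parameter-based cost, and it is not the authors' tool.

```python
import numpy as np

# A minimal pattern-search sketch over segment dimensions: poll each coordinate
# in both directions with the current step, accept improvements, and halve the
# step when no poll point improves the fitness.

def fitness(dims):
    target = np.array([0.6, 0.9, 0.7, 1.1])        # hypothetical ideal dimensions (mm)
    return float(np.sum((dims - target) ** 2))      # stand-in for an S-parameter cost

def pattern_search(x0, step=0.2, tol=1e-3, max_iter=200):
    x = np.asarray(x0, float)
    f = fitness(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for delta in (+step, -step):
                trial = x.copy()
                trial[i] += delta
                ft = fitness(trial)
                if ft < f:
                    x, f, improved = trial, ft, True
        if not improved:
            step *= 0.5                              # shrink the mesh
        it += 1
    return x, f

print(pattern_search([1.0, 1.0, 1.0, 1.0]))
```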

Keywords: microwave component, microstrip and stripline, bend, power division, numerical search algorithm

Procedia PDF Downloads 379
115 Extreme Value Theory Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety and monetary costs. There are ways to calculate reliability, unreliability, failure density and failure rate. In this paper, the reliability of diesel generator fans was calculated through Extreme Value Theory. Extreme Value Theory is not widely used in the engineering field; its usage is well known in other areas such as hydrology, meteorology, and finance. The significance of this theory lies in the fact that, unlike other statistical methods, it is focused on rare and extreme values, and not on averages. It should be noted that this theory is not designed exclusively for extreme events, but for extreme values in any event. Therefore, this is a great opportunity to apply the theory and test whether it can be applied in this situation. The significance of the work is the calculation of time to failure or reliability in a new way, using statistics. Another advantage of this calculation is that there is no need for technical details, and it can be implemented for any part for which we need to know the time to failure in order to have appropriate maintenance, but also to maximize usage and minimize costs. In this case, calculations have been made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with a higher quality fan to prevent future failures. The results achieved with this method show an approximation of the time for which the fans should work as intended, and the probability of the fans working longer than a certain estimated time. Extreme Value Theory can be applied not only to rare and extreme events, but to any event that has values which we can consider as extreme.
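For illustration, a minimal sketch of the kind of calculation described above is given below: a generalized extreme value distribution is fitted to times to failure and used to estimate the probability of surviving beyond a given operating time. The data are hypothetical, not the field study's measurements.

```python
import numpy as np
from scipy import stats

# Fit a generalized extreme value (GEV) distribution to hypothetical times to
# failure (hours) and estimate the probability of survival beyond a chosen time.

ttf = np.array([450., 460., 1150., 1600., 1850., 2030., 2070., 2080.,
                2200., 3000., 3100., 3200., 3450., 3750., 4150.])

shape, loc, scale = stats.genextreme.fit(ttf)
t = 2500.0
reliability = 1.0 - stats.genextreme.cdf(t, shape, loc=loc, scale=scale)
print(f"P(time to failure > {t:.0f} h) ≈ {reliability:.2f}")
```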

Keywords: extreme value theory, lifetime, reliability analysis, statistics, time to failure

Procedia PDF Downloads 327
114 Factor Influencing Pharmacist Engagement and Turnover Intention in Thai Community Pharmacist: A Structural Equation Modelling Approach

Authors: T. Nakpun, T. Kanjanarach, T. Kittisopee

Abstract:

Turnover of community pharmacists can affect continuity of patient care and, most importantly, the quality of care, as well as the costs of a pharmacy. It was hypothesized that organizational resources, job characteristics, and social supports had a direct effect on pharmacist turnover intention, and an indirect effect on pharmacist turnover intention via pharmacist engagement. This research aimed to study the factors influencing pharmacist engagement and pharmacist turnover intention by testing the proposed structural hypothesized model to explain the relationships among organizational resources, job characteristics, and social supports that affect pharmacist turnover intention and pharmacist engagement in Thai community pharmacists. A cross-sectional study design with a self-administered questionnaire was conducted in 209 Thai community pharmacists. Data were analyzed using the Structural Equation Modeling technique with the Analysis of Moment Structures (AMOS) program. The final model showed that only organizational resources had a significant negative direct effect on pharmacist turnover intention (β = -0.45). Job characteristics and social supports had a significant positive relationship with pharmacist engagement (β = 0.44 and 0.55, respectively). Pharmacist engagement had a significant negative relationship with pharmacist turnover intention (β = -0.24). Thus, job characteristics and social supports had significant negative indirect effects on turnover intention via pharmacist engagement (β = -0.11 and -0.13, respectively). The model fit the data well (χ²/degrees of freedom (DF) = 2.12, goodness of fit index (GFI) = 0.89, comparative fit index (CFI) = 0.94 and root mean square error of approximation (RMSEA) = 0.07). This study concludes that organizational resources were the most important factor because they had a direct effect on pharmacist turnover intention. Job characteristics and social supports also helped decrease pharmacist turnover intention via pharmacist engagement.

Keywords: community pharmacist, influencing factor, turnover intention, work engagement

Procedia PDF Downloads 204
113 Confidence Intervals for Process Capability Indices for Autocorrelated Data

Authors: Jane A. Luke

Abstract:

Persistent pressure passed on to manufacturers from escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models. Academic and industrial circles are taking keen interest in the field of manufacturing strategy. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability and innovation. Analysis of process capability indices is usually conducted assuming that the process under study is in statistical control and that independent observations are generated over time. However, in practice, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts. Even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts. When observations are autocorrelated, the classical control charts exhibit nonrandom patterns and lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. In this paper, the effect of autocorrelation on confidence intervals for different PCIs is examined. Stationary Gaussian processes are explained, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed for data that are both independent and autocorrelated. Approximate lower confidence limits for various Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples are considered to demonstrate the results.
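As a concrete illustration of the independent-data baseline, a minimal sketch of Cp, Cpk and an approximate lower confidence limit of the Bissell type is given below; the data and specification limits are hypothetical, and the AR(1) adjustment used in the paper (e.g., via an effective sample size) is not included.

```python
import numpy as np
from scipy import stats

# Classical (independent-data) capability indices and an approximate one-sided
# lower confidence limit for Cpk of the form attributed to Bissell.

def cp_cpk(x, lsl, usl):
    mu, s = np.mean(x), np.std(x, ddof=1)
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mu, mu - lsl) / (3 * s)
    return cp, cpk

def approx_lcl_cpk(cpk, n, alpha=0.05):
    z = stats.norm.ppf(1 - alpha)
    return cpk * (1 - z * np.sqrt(1 / (9 * n * cpk**2) + 1 / (2 * (n - 1))))

rng = np.random.default_rng(1)
x = rng.normal(10.0, 0.5, size=100)            # hypothetical in-control measurements
cp, cpk = cp_cpk(x, lsl=8.0, usl=12.0)
print(cp, cpk, approx_lcl_cpk(cpk, len(x)))
```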

Keywords: autocorrelation, AR(1) model, Bissell’s approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes

Procedia PDF Downloads 388
112 Modelling the Physicochemical Properties of Papaya Based-Cookies Using Response Surface Methodology

Authors: Mayowa Saheed Sanusi, Musiliu Olushola Sunmonu, Abdulquadri Alaka Owolabi Raheem, Adeyemi Ikimot Adejoke

Abstract:

The development of healthy cookies for health-conscious consumers cannot be overemphasized in the present global health crisis. This study aimed to evaluate and model the influence of the ripeness level of papaya puree (unripe, ripe and overripe), oven temperature (130°C, 150°C and 170°C) and oven rack speed (stationary, 10 and 20 rpm) on the physicochemical properties of papaya-based cookies using Response Surface Methodology (RSM). The physicochemical properties (baking time, cookie mass, cookie thickness, spread ratio, proximate composition, calcium, vitamin C and total phenolic content) were determined using standard procedures. The data obtained were statistically analysed at p≤0.05 using ANOVA. The polynomial regression model of response surface methodology was used to model the physicochemical properties. The adequacy of the models was determined using the coefficient of determination (R²), and the response optimizer of RSM was used to determine the optimum physicochemical properties for the papaya-based cookies. Cookies produced from overripe papaya puree were observed to have the shortest baking time; ripe papaya puree favors the cookie spread ratio, while unripe papaya puree gives cookies with the highest mass and thickness. The highest crude protein content, fiber content, calcium content, vitamin C and total phenolic content (TPC) were observed in papaya-based cookies produced from overripe puree. The models for baking time, cookie mass, cookie thickness, spread ratio, moisture content, crude protein and TPC were significant, with R² ranging from 0.73 to 0.95. The optimum condition for producing papaya-based cookies with desirable physicochemical properties was obtained at 149°C oven temperature, 17 rpm oven rack speed and with the use of overripe papaya puree. Information on the use of puree from unripe, ripe and overripe papaya can help to increase the use of underutilized unripe or overripe papaya and also serve as a strategic means of obtaining a fat substitute to produce new products with lower production cost and health benefits.
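The response-surface models referred to above are typically second-order least-squares polynomials in the design factors; the sketch below shows that kind of fit on hypothetical factor settings and responses, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# A minimal second-order (quadratic) response-surface fit.
# Factors: ripeness level (0=unripe, 1=ripe, 2=overripe), oven temperature (°C),
# oven rack speed (rpm). All settings and responses are hypothetical placeholders.
X = np.array([[0, 130, 0], [0, 150, 10], [0, 170, 20], [1, 130, 10], [1, 150, 20],
              [1, 170, 0], [2, 130, 20], [2, 150, 0], [2, 170, 10], [0, 130, 20],
              [1, 150, 0], [2, 170, 20], [1, 130, 0], [2, 150, 10]])
y = np.array([5.0, 5.2, 5.4, 5.6, 5.9, 5.7, 6.0, 5.9, 6.3, 5.1,
              5.5, 6.4, 5.4, 6.1])                  # e.g. spread ratio (hypothetical)

model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
model.fit(X, y)
print("R^2 =", round(model.score(X, y), 3))
print("prediction at overripe, 149 °C, 17 rpm:", model.predict([[2, 149, 17]]))
```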

Keywords: papaya based-cookies, modeling, response surface methodology, physicochemical properties

Procedia PDF Downloads 167
111 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data

Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates

Abstract:

Several spatial variables collected at the same location that share a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a geostatistical multivariate formulation that relies on sharing common spatial random effect terms. In particular, the first response variable can be modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term, in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function, but in order to improve the computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian random Markov field (GRMF), specifically by the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross blocks allow capturing the spatial dependence on a large scale, while each individual block captures the spatial dependence on a smaller scale. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.

Keywords: Block-NNGP, geostatistics, Gaussian process, GRMF, INLA, multivariate models

Procedia PDF Downloads 97
110 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple, yet accurate order statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. This model is built on the following educated assumption: individual leg times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the well-known German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square errors than linear regression, mord regression and Gaussian process regression.
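The Fenton-Wilkinson step mentioned above approximates a sum of independent log-normal leg times by a single log-normal with matched first two moments; the sketch below shows that moment-matching with hypothetical leg parameters, not the Jukola data.

```python
import numpy as np

# Fenton-Wilkinson approximation: match the mean and variance of a sum of
# independent log-normals with a single log-normal, returning its parameters.

def fenton_wilkinson(mus, sigmas):
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    means = np.exp(mus + sigmas**2 / 2)                          # per-leg means
    variances = (np.exp(sigmas**2) - 1) * np.exp(2 * mus + sigmas**2)
    m, v = means.sum(), variances.sum()                          # moments of the sum
    sigma_s2 = np.log(1 + v / m**2)
    mu_s = np.log(m) - sigma_s2 / 2
    return mu_s, np.sqrt(sigma_s2)

# e.g. three relay legs with hypothetical log-scale parameters (log-minutes)
print(fenton_wilkinson([4.0, 4.2, 3.9], [0.15, 0.20, 0.10]))
```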

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling

Procedia PDF Downloads 124
109 Finite Deformation of a Dielectric Elastomeric Spherical Shell Based on a New Nonlinear Electroelastic Constitutive Theory

Authors: Odunayo Olawuyi Fadodun

Abstract:

Dielectric elastomers (DEs) are a type of intelligent material with salient features like electromechanical coupling, light weight, fast actuation speed, low cost and high energy density that make them good candidates for numerous engineering applications. This paper adopts a new nonlinear electroelastic constitutive theory to examine the radial deformation of a pressurized thick-walled spherical shell of soft dielectric material with compliant electrodes on its inner and outer surfaces. A general formula for the internal pressure, which depends on the deformation and a potential difference between the boundary electrodes or uniform surface charge distributions, is obtained in terms of special functions. To illustrate the effects of an applied electric field on the mechanical behaviour of the shell, three different energy functions with distinct mechanical properties are employed for numerical purposes. The observed behaviour of the shells is preserved in the presence of an applied electric field, and the influence of the field due to a potential difference declines more slowly with increasing deformation than that produced by a surface charge. Counterpart results are then presented for the thin-walled shell approximation as a limiting case of a thick-walled shell without restriction on the energy density. In the absence of internal pressure, it is found that inflation is caused by the application of an electric field. The numerical solutions of the theory presented in this work are in agreement with those predicted by the generally adopted Dorfmann and Ogden model.

Keywords: constitutive theory, elastic dielectric, electroelasticity, finite deformation, nonlinear response, spherical shell

Procedia PDF Downloads 93
108 Optimisation of Metrological Inspection of a Developmental Aeroengine Disc

Authors: Suneel Kumar, Nanda Kumar J. Sreelal Sreedhar, Suchibrata Sen, V. Muralidharan

Abstract:

Fan technology is very critical and crucial for any aero engine. The fan disc forms a critical part of the fan module, and it is an airworthiness requirement to have a metrologically qualified disc. The current study uses tactile probing and scanning on an articulated measuring machine (AMM), a bridge-type coordinate measuring machine (CMM) and metrology software for intermediate and final dimensional and geometrical verification during the prototype development of the disc, which is manufactured through forging and machining processes. The circumferential dovetails manufactured through the milling process are evaluated based on the analysed metrological process. To perform metrological optimization, a change of philosophy is needed, making quality measurements available as fast as possible to improve process knowledge and accelerate the process, but with accurate, precise and traceable measurements. The offline CMM programming for inspection and the optimisation of the CMM inspection plan are crucial portions of the study and are discussed. A dimensional measurement plan as per the ASME B89.7.2 standard, leading to an optimised CMM measurement plan and strategy, is an important requirement. The effects of the probing strategy, stylus configuration, and approximation strategy on the measurements of the circumferential dovetails of the developmental prototype disc are discussed. The results are discussed in terms of the enhancement of the R&R (repeatability and reproducibility) values, with uncertainty levels within the desired limits. The findings from the measurement strategy adopted for disc dovetail evaluation and inspection time optimisation are discussed with the help of various analyses and graphical outputs obtained from the verification process.

Keywords: coordinate measuring machine, CMM, aero engine, articulated measuring machine, fan disc

Procedia PDF Downloads 107
107 Simulation of Maximum Power Point Tracking in a Photovoltaic System: A Circumstance Using Pulse Width Modulation Analysis

Authors: Asowata Osamede

Abstract:

Optimized gain with respect to the output power of stand-alone photovoltaic (PV) systems is one of the major focuses of PV research in recent times, driven by its low carbon emissions and efficiency. Power failures or outages from commercial providers in general do not promote development in the public and private sectors; they basically limit the development of industries. The need for a well-structured PV system is important for an efficient and cost-effective monitoring system. The purpose of this paper is to validate the maximum power point of an off-grid PV system, taking into consideration the most effective tilt and orientation angles for PVs in the southern hemisphere. This paper is based on analyzing the system using a solar charger with MPPT from a pulse width modulation (PWM) perspective. The power conditioning device chosen is a solar charger with MPPT. The practical setup consists of a PV panel that is set to an orientation angle of 0° north, with corresponding tilt angles of 36°, 26° and 16°. The loads employed in this setup are three lead-acid batteries (LAB). The percentage fully charged, charging and not charging conditions are observed for all three batteries. The results obtained in this research are used to draw conclusions that would provide a benchmark for researchers and scientists worldwide, so as to have an idea of the best tilt and orientation angles for the maximum power point in a basic off-grid PV system. A quantitative analysis is employed in this research. Quantitative research tends to focus on measurement and proof, and inferential statistics are frequently used to generalize what is found about the study sample to the population as a whole. This involves selecting and defining the research question, deciding on a study type, deciding on the data collection tools, selecting the sample and its size, and analyzing, interpreting and validating the findings. Preliminary results, which include regression analysis (normal probability plot and residual plot using a degree-6 polynomial), showed the maximum power point in the system. The best tilt angle for maximum power point tracking proved to be 36°, which provided the best average on-time and in turn put the system into a pulse width modulation stage.
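The degree-6 regression referred to above is an ordinary least-squares polynomial fit; a minimal sketch on a synthetic daily PV output profile (hypothetical data, not the study's measurements) is shown below.

```python
import numpy as np

# Least-squares polynomial approximation of degree 6 to a synthetic daily PV
# output profile; the time axis is centered before fitting for conditioning.

t = np.linspace(8, 17, 19)                        # hour of day
p = 60 * np.exp(-0.5 * ((t - 12.5) / 2.2) ** 2)   # synthetic PV output profile (W)
p += np.random.default_rng(2).normal(0, 1.5, t.size)

tc = t - t.mean()
coeffs = np.polyfit(tc, p, deg=6)                 # least-squares polynomial fit
fit = np.polyval(coeffs, tc)
rmse = np.sqrt(np.mean((fit - p) ** 2))
print("RMSE of degree-6 fit:", round(rmse, 3))
```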

Keywords: power-conversion, meteonorm, PV panels, DC-DC converters

Procedia PDF Downloads 146
106 Advanced Analysis on Dissemination of Pollutant Caused by Flaring System Effect Using Computational Fluid Dynamics (CFD) Fluent Model with WRF Model Input in Transition Season

Authors: Benedictus Asriparusa

Abstract:

In the oil industry, production is accompanied by associated natural gas. A large amount of this energy is wasted, mostly in developing countries, contributing to the global warming process. This research presents an overview of methods employed in the Minas area by researchers at PT. Chevron Pacific Indonesia to determine ways of measuring and drastically reducing gas flaring and its emissions. It provides an approximation that includes analytical studies, numerical studies, modeling, computer simulations, etc. A flaring system is the controlled burning of natural gas in the course of routine oil and gas production operations. This burning occurs at the end of a flare stack or boom. The combustion process releases emissions of greenhouse gases such as NO2, CO2, SO2, etc. This condition affects the air and environment around the industrial area; therefore, we need a simulation to model the pattern of the dissemination of pollutants. This research paper has been made to examine trends in gas flaring models and current developments in order to predict the dominant variable affecting the dissemination of pollutants. Fluent models are used to simulate the distribution of pollutant gas coming out of the stack, while WRF model output is used to overcome the limitations of the analysis of meteorological data and atmospheric conditions in the study area. The study focuses on the transition season of 2012 in the Minas area. The goal of the simulation is to identify the exact time that most influences the dissemination of pollutants. The most influential factor is divided into two main cases: the quickest wind and the slowest wind. According to the simulation results, it can be seen that the quickest wind moves horizontally while the slowest wind moves vertically.

Keywords: flaring system, fluent model, dissemination of pollutant, transition season

Procedia PDF Downloads 380
105 Features of the Functional and Spatial Organization of Railway Hubs as a Part of the Urban Nodal Area

Authors: Khayrullina Yulia Sergeevna, Tokareva Goulsine Shavkatovna

Abstract:

The article analyzes modern major railway hubs as a main part of the Urban Nodal Area (UNA). The term was introduced into the theory of urban planning at the end of the XX century; Tokareva G.S., jointly with Gutnov A.E., investigated the structure-forming elements of the city. The UNA is the basic unit, the "cell" of the city structure, whose specialization depends on its position in the frame or the fabric of the city and is related to features of its organization. The spatial and functional features of UNAs are investigated in this paper. The base objects of the research are railway hubs as connective nodes of intra- and inter-city communications. The research uses stratified sampling with the selection of typical objects and is conducted on 14 railway hubs, drawn from domestic and foreign experience, in the largest cities with populations over 1 million people located in climate zones identical or close to the Russian ones. Features of the organization are identified through a combined study of functional and spatial characteristics, based on the hypothesis of the existence of dual characteristics of the organization of urban nodes. The analysis uses an approximation method that enables general conclusions to be drawn from a representative selection for the entire population of railway hubs and their development areas. The results of the research show a specific ratio of functional and spatial organization of UNAs based on railway hubs, and on this basis a typology of spaces and urban nodal areas is proposed. Identification of the spatial diversity and the features of the functional organization of the largest railway hubs and their development areas gives an indication of the different evolutionary stages and formation approaches, and helps to identify new patterns for complex and effective design as a prediction of the direction of development of domestic hubs.

Keywords: urban nodal area, railway hubs, features of structural and functional organization

Procedia PDF Downloads 387
104 The Effect of Adhesion on the Frictional Hysteresis Loops at a Rough Interface

Authors: M. Bazrafshan, M. B. de Rooij, D. J. Schipper

Abstract:

Frictional hysteresis is the phenomenon in which mechanical contacts are subject to small (compared to the contact area) oscillating tangential displacements. In the presence of adhesion at the interface, the contact repulsive force increases, leading to a higher static friction force and pre-sliding displacement. This paper proposes a boundary element model (BEM) for the adhesive frictional hysteresis contact at the interface of two contacting bodies of arbitrary geometries. In this model, adhesion is represented by means of a Dugdale approximation of the total work of adhesion at local areas with a very small gap between the two bodies. The frictional contact is divided into sticking and slipping regions in order to take into account the transition from stick to slip (pre-sliding regime). In the pre-sliding regime, the stick and slip regions are defined based on the local values of shear stress and normal pressure. In the studied cases, a fixed normal force is applied to the interface and the friction force varies in such a way as to initiate gross sliding reciprocally in alternating directions. For the first case, the problem is solved at the smooth interface between a ball and a flat for different values of the work of adhesion. It is shown that as the work of adhesion increases, both the static friction and the pre-sliding distance increase due to the increase in the contact repulsive force. For the second case, the rough interfaces between a glass ball and a silicon wafer, and between the glass ball and a DLC (Diamond-Like Carbon) coating, are considered. The work of adhesion is assumed to be identical for both interfaces. As adhesion depends on the interface roughness, the corresponding contact repulsive force is different for these interfaces. For the smoother interface, a larger contact repulsive force and, consequently, a larger static friction force and pre-sliding distance are observed.

Keywords: boundary element model, frictional hysteresis, adhesion, roughness, pre-sliding

Procedia PDF Downloads 168
103 Analysis of Thermal Effect on Functionally Graded Micro-Beam via Mixed Finite Element Method

Authors: Cagri Mollamahmutoglu, Ali Mercan, Aykut Levent

Abstract:

Studies concerning microstructures are becoming more important as the utilization of various micro-electro-mechanical systems (MEMS) increases. Thus, in recent years, thermal buckling and vibration analyses of microstructures have been the subject of many investigations utilizing different numerical methods. In this study, thermal effects on the mechanical response of a functionally graded (FG) Timoshenko micro-beam are presented in the framework of a mixed finite element formulation. Size effects are taken into consideration via the modified couple stress theory. The mixed formulation is based on a functional, which in turn is derived via the Gateaux differential. After the resolution of all field equations of the beam, a potential operator is carefully constructed; this operator is then used to construct the functional. The usual procedures of finite element approximation are utilized for the derivation of the mixed finite element equations once the potential is obtained. The resulting finite element formulation allows the usage of simple C₀-type linear shape functions and avoids the shear-locking phenomenon, which is a common shortcoming of displacement-based formulations of moderately thick beams. The developed numerical scheme is used to obtain the effects of thermal loads on the static bending, free vibration and buckling of FG Timoshenko micro-beams for different power-law parameters, aspect ratios and boundary conditions. The versatility of the mixed formulation is demonstrated in comparison with other numerical methods such as the generalized differential quadrature method (GDQM). Another attractive property of the formulation is that it allows direct calculation of the contribution of micro effects on the overall mechanical response.

Keywords: micro-beam, functionally graded materials, thermal effect, mixed finite element method

Procedia PDF Downloads 138
102 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique

Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki

Abstract:

Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From the data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which time the liver accumulation dominates (0.5~2.5 minutes SPECT image - 5~10 minutes SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5~10 minutes SPECT image - liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and the visualization of the inferior myocardium was improved. In past reports, higher accumulation due to the overlap of the liver made the myocardium un-diagnosable. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.

Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector

Procedia PDF Downloads 335
101 Data Mining Spatial: Unsupervised Classification of Geographic Data

Authors: Chahrazed Zouaoui

Abstract:

In recent years, the volume of geospatial information has been increasing due to the evolution of information and communication technologies. This information is often presented by geographic information systems (GIS) and stored in spatial databases (BDS). Classical data mining has revealed a weakness in knowledge extraction from these enormous amounts of data due to the particularity of spatial entities, which are characterized by the interdependence between them (the first law of geography). This gave rise to spatial data mining. Spatial data mining is a process of analyzing geographic data which allows the extraction of knowledge and spatial relationships from geospatial data; among the methods of this process, we distinguish the monothematic and the thematic. Geo-clustering is one of the main tasks of spatial data mining and belongs to the monothematic methods. It groups similar geo-spatial entities into the same class and assigns more dissimilar ones to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geo-spatial data. Two approaches to geo-clustering exist: the dynamic processing of data, which involves applying algorithms designed for the direct treatment of spatial data, and the approach based on spatial data pre-processing, which consists of applying classic clustering algorithms to pre-processed data (by integration of spatial relationships). The latter approach (based on pre-treatment) is quite complex in different cases, so the search for approximate solutions involves the use of approximation algorithms; among these, we are interested in dedicated approaches (partitioning and density-based clustering methods) and the bees approach (a biomimetic approach). Our study proposes a design that is very significant for this problem, using different algorithms for automatically detecting the geo-spatial neighborhood in order to implement geo-clustering by pre-treatment, and applying the bees algorithm to this problem for the first time in the geo-spatial field.
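As a small illustration of the density-based family of methods mentioned above (not the bees algorithm or the neighborhood-detection step proposed in the abstract), a minimal geo-clustering sketch on hypothetical coordinates could look like this:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Density-based clustering of hypothetical 2-D point coordinates: nearby dense
# groups form clusters, scattered points are labeled as noise (-1).

rng = np.random.default_rng(3)
pts = np.vstack([rng.normal([0, 0], 0.3, (40, 2)),     # cluster 1
                 rng.normal([5, 5], 0.3, (40, 2)),     # cluster 2
                 rng.uniform(-2, 7, (10, 2))])         # scattered noise

labels = DBSCAN(eps=0.6, min_samples=5).fit_predict(pts)
print("clusters found:", len(set(labels) - {-1}),
      "noise points:", int(np.sum(labels == -1)))
```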

Keywords: mining, GIS, geo-clustering, neighborhood

Procedia PDF Downloads 375
100 Determination of Temperature Dependent Characteristic Material Properties of Commercial Thermoelectric Modules

Authors: Ahmet Koyuncu, Abdullah Berkan Erdogmus, Orkun Dogu, Sinan Uygur

Abstract:

Thermoelectric modules are integrated with electronic components to keep their temperature at specific values in electronic cooling applications. They can be used at different ambient temperatures. The cold-side temperatures of thermoelectric modules depend on their hot-side temperatures, operating currents, and heat loads. Performance curves of thermoelectric modules are given for at most two different hot-surface temperatures in product catalogs. Characteristic properties are required to select appropriate thermoelectric modules in the thermal design phase of projects. Generally, manufacturers do not provide the characteristic material property values of thermoelectric modules to customers for confidentiality reasons. Common commercial software such as ANSYS ICEPAK, FloEFD, etc., includes thermoelectric modules in its libraries; therefore, such software can easily be used to predict the effect of thermoelectric usage in thermal design. Some software requires only the performance values at different temperatures. However, others, like ICEPAK, require three temperature-dependent equations for material properties (Seebeck coefficient (α), electrical resistivity (β), and thermal conductivity (γ)). Since the number and variety of thermoelectric modules in this software are limited, definitions of the characteristic material properties of thermoelectric modules could be required. In this manuscript, a method for deriving characteristic material properties from the datasheet of thermoelectric modules is presented. Material characteristics were estimated from two different performance curves, both experimentally and numerically, in this study. Numerical calculations are accomplished in ICEPAK by using a thermoelectric module that exists in the ICEPAK library. A new experimental setup was established to perform the experimental study. Because the results of the numerical and experimental studies are similar, it can be said that the proposed equations are validated. This approach can be suggested for analyses that include different types or brands of TEC modules.
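As an illustration of what such temperature-dependent property equations might look like once a few points have been extracted from the performance curves, the sketch below fits quadratic least-squares polynomials; all numbers are hypothetical placeholders, not values from any datasheet or from the paper.

```python
import numpy as np

# Fit quadratic temperature-dependent equations for module-level properties.
# With only three points the quadratic interpolates them exactly; more extracted
# points would give a genuine least-squares fit.

T = np.array([300.0, 325.0, 350.0])          # hot-side temperatures (K), hypothetical
alpha = np.array([0.050, 0.053, 0.055])      # Seebeck coefficient (V/K), hypothetical
beta = np.array([1.9, 2.1, 2.3])             # electrical resistance (ohm), hypothetical
gamma = np.array([0.55, 0.57, 0.60])         # thermal conductance (W/K), hypothetical

coeff_alpha = np.polyfit(T, alpha, 2)        # α(T) ≈ a*T**2 + b*T + c
coeff_beta = np.polyfit(T, beta, 2)
coeff_gamma = np.polyfit(T, gamma, 2)
print(np.polyval(coeff_alpha, 310.0))        # evaluate at an intermediate temperature
```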

Keywords: electrical resistivity, material characteristics, thermal conductivity, thermoelectric coolers, Seebeck coefficient

Procedia PDF Downloads 179
99 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation

Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes

Abstract:

The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of several materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain such that the flexibility of the structure is minimized, under certain boundary conditions and the intervention of external forces. In the case where we have only one material, each element of the discretized domain is represented by one of two values of a function: the value of the function is 1 if the element belongs to the structure or 0 if the element is empty. A common way to avoid the high computational cost of solving integer-variable optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous interpolation function, a power function, whose base variable represents a pseudo-density at each point of the domain. For proper exponent values, the SIMP method reduces intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one that we explore here is the ordered SIMP method, which has the advantage of not being based on the addition of variables to represent material selection, so the computational cost is independent of the number of materials considered. Although the number of variables is not increased by this algorithm, the optimization subproblems that are generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of calculating the second derivatives. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.
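For context, the single-material SIMP interpolation the abstract starts from can be sketched as below; the ordered SIMP extension to several materials and the sequential linear programming solver are not reproduced here, and the parameter values are the usual illustrative defaults rather than the authors' settings.

```python
import numpy as np

# Standard single-material SIMP interpolation: intermediate pseudo-densities are
# penalized so the optimizer is driven toward 0/1 designs.

def simp_stiffness(rho, e0=1.0, e_min=1e-9, p=3.0):
    """Penalized Young's modulus for pseudo-densities rho in [0, 1]."""
    rho = np.clip(rho, 0.0, 1.0)
    return e_min + rho**p * (e0 - e_min)

print(simp_stiffness(np.array([0.0, 0.3, 0.7, 1.0])))
```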

Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization

Procedia PDF Downloads 315
98 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful instances. The paper also indicates solutions to optimization problems and the benefits for Big Data and computational biology. It illustrates the current state of the art and the future generation of HPC computing with Big Data in biology.

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 363
97 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding

Authors: Wenya Shu, Ilinca Stanciulescu

Abstract:

Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them promising fillers in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is often not reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. It is believed that the interactions of CNT and polymer mainly result from the van der Waals force. Interface debonding is a fracture and delamination phenomenon; thus, cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to consider the CNT-matrix interactions, but it brings difficulties in mesh generation and also leads to high computational costs. Homogenized models that smear the fibers in the ground matrix and treat the material as homogeneous have been studied in many works to simplify simulations. However, based on the perfect-interface assumption, the traditional homogenized model obtained by mixing rules severely overestimates the stiffness of the composite, even compared with the result of the CZM with an artificially very strong interface. A mechanical model that can take into account the interface debonding and achieve accuracy comparable to the CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential and polynomial, are considered. These studies indicate that the shapes of the chosen CZM constitutive laws do not significantly influence the simulations of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations. The analytical solutions corresponding to these phases are derived. A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP 8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and the proposed homogenized model provide alternative methods to efficiently investigate the mechanical behavior of CNT/polymer composites.
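As a concrete illustration of the bilinear (triangular) traction-separation law assumed above, a minimal sketch for the normal opening mode is given below; the strength and separation values are hypothetical placeholders rather than calibrated CNT-polymer parameters.

```python
import numpy as np

# Bilinear traction-separation law: traction rises linearly to the peak strength
# at delta_0, then degrades linearly to zero at the final separation delta_f.

def bilinear_traction(delta, t_max=50e6, delta_0=1e-9, delta_f=1e-8):
    delta = np.asarray(delta, float)
    rising = t_max * delta / delta_0                                  # elastic branch
    softening = t_max * (delta_f - delta) / (delta_f - delta_0)       # damage branch
    t = np.where(delta <= delta_0, rising, softening)
    return np.clip(t, 0.0, None) * (delta <= delta_f)                 # zero traction after full debonding

print(bilinear_traction([5e-10, 1e-9, 5e-9, 2e-8]))
```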

Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding

Procedia PDF Downloads 129
96 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

A DNA barcode is a short mitochondrial DNA fragment whose subunits each comprise a phosphate group, a sugar, and one of the nucleic bases (A, T, C, and G). Barcodes provide a good source of the information needed to classify living species, an intuition that has been confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes. This task has to be supported by reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete. To make this type of analysis feasible, heuristics, such as progressive alignment, have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. The proposed method avoids the complex problem of form and structure in different classes of organisms; it is evaluated on empirical data, and its classification performance is compared with that of other methods. Our system consists of three phases. The first, transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second, approximation, relies on Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, realized by applying a hierarchical classification algorithm.
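
A minimal sketch of the transformation phase described above (EIIP codification followed by a Fourier power spectrum) is given below; the EIIP values are the commonly cited nucleotide values, and the barcode string is a placeholder, not data from the study.

```python
# Minimal sketch of the "transformation" phase: EIIP codification of a DNA
# barcode followed by an FFT power spectrum. The sequence is a placeholder.
import numpy as np

# Commonly cited EIIP values for the four nucleotides.
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def eiip_power_spectrum(sequence):
    """Map bases to EIIP values and return the FFT power spectrum."""
    signal = np.array([EIIP[base] for base in sequence.upper()])
    signal = signal - signal.mean()              # remove the DC component
    return np.abs(np.fft.rfft(signal)) ** 2

if __name__ == "__main__":
    barcode = "ACGTACGTTAGCCGATACGT"             # placeholder barcode fragment
    print(eiip_power_spectrum(barcode).round(4))
```

In the full system, spectra like this would feed the approximation phase (the wavelet networks) before hierarchical classification.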

Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)

Procedia PDF Downloads 317
95 First Approximation to Congenital Anomalies in Kemp's Ridley Sea Turtle (Lepidochelys kempii) in Veracruz, Mexico

Authors: Judith Correa-Gomez, Cristina Garcia-De la Pena, Veronica Avila-Rodriguez, David R. Aguillon-Gutierrez

Abstract:

Kemp's ridley (Lepidochelys kempii) is the smallest species of sea turtle. It nests on the beaches of the Gulf of Mexico during summer. To date, there is no information about congenital anomalies in this species, which could be an important factor to consider as a survival threat. The aim of this study was to determine congenital anomalies in dead embryos and hatchlings of the Kemp's ridley sea turtle during the 2020 nesting season. Fieldwork was conducted at the 'Campamento Tortugero Barra Norte', on the shores of Tuxpan, Veracruz, Mexico. A total of 95 nests were evaluated, from which 223 dead embryos and hatchlings were collected. Anomalies were detected by detailed physical examinations, and photographs of each anomaly were taken. Of the 223 dead turtles, 213 (95%) showed at least one congenital anomaly. A total of 53 types of congenital anomalies were found: 22 types in the head region, 21 in the carapace region, 6 in the flipper region, and 4 involving the entire body. The most prevalent anomaly in the head region was the presence of supernumerary prefrontal scales (42%, 93 occurrences). In the carapace region, the most common anomaly was the presence of supernumerary gular scales (59%, 131 occurrences). The two most common anomalies in the flipper region were amelia of the fore flippers and bifurcation of the rear flippers (0.9%, 2 occurrences each). The most common anomaly involving the entire body was hypomelanism (35%, 79 occurrences). These results agree with recent studies of congenital malformations in sea turtles, in which the head and carapace regions show the highest numbers of congenital anomalies. It is unknown whether the reported anomalies can be related to the death of these individuals; however, embryological studies in this species are needed. To the best of our knowledge, this is the first worldwide report of congenital anomalies in the Kemp's ridley sea turtle.
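
The prevalence percentages quoted above follow directly from the reported counts over the 223 examined individuals; a quick arithmetic check (counts taken from the abstract):

```python
# Sanity check of the prevalence figures quoted in the abstract
# (counts out of the 223 dead embryos and hatchlings examined).
counts = {
    "any congenital anomaly": 213,
    "supernumerary prefrontal scales": 93,
    "supernumerary gular scales": 131,
    "hypomelanism": 79,
    "amelia / rear-flipper bifurcation (each)": 2,
}
total = 223
for anomaly, n in counts.items():
    print(f"{anomaly}: {n}/{total} = {100 * n / total:.1f}%")
```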

Keywords: amelia, hypomelanism, morphology, supernumerary scales

Procedia PDF Downloads 160
94 Uncertainty Quantification of Fuel Compositions on Premixed Bio-Syngas Combustion at High-Pressure

Authors: Kai Zhang, Xi Jiang

Abstract:

The effect of fuel variability on the premixed combustion of bio-syngas mixtures is of great importance in bio-syngas utilisation. Uncertainties in the concentrations of fuel constituents such as H2, CO, and CH4 may lead to unpredictable combustion performance, combustion instabilities, and hot spots that may deteriorate and damage the combustion hardware. Numerical modelling and simulation can assist in understanding the behaviour of bio-syngas combustion with pre-defined species concentrations, while evaluating the effects of concentration variability is expensive. To be more specific, questions such as ‘what is the burning velocity of bio-syngas at a specific equivalence ratio?’ have been answered either experimentally or numerically, while questions such as ‘what is the likely burning velocity when the precise concentrations of the bio-syngas constituents are unknown but their ranges are prescribed?’ have not yet been answered. Uncertainty quantification (UQ) methods can be used to tackle such questions and assess the effects of fuel composition. An efficient probabilistic UQ method based on Polynomial Chaos Expansion (PCE) techniques is employed in this study. The method relies on representing the random variables (combustion responses) with orthogonal polynomials such as Legendre or Hermite polynomials. The PCE constructed via Galerkin projection provides easy access to global sensitivities such as the main, joint, and total Sobol indices. In this study, the impacts of fuel composition on the combustion of bio-syngas fuel mixtures (adiabatic flame temperature and laminar flame speed) are presented using this PCE technique at several equivalence ratios. High-pressure effects on bio-syngas combustion instability are obtained using a detailed chemical mechanism, the San Diego mechanism. Guidance on reducing combustion instability arising from the upstream biomass gasification process is provided by quantifying the contributions of composition variations to the variance of the physicochemical properties of bio-syngas combustion. It was found that the flame speed is very sensitive to hydrogen variability in bio-syngas, and that reducing hydrogen uncertainty from upstream biomass gasification processes can greatly reduce bio-syngas combustion instability. Variation of the methane concentration, although thought to be important, has limited impact on laminar flame instabilities, especially for lean combustion. Further studies on the UQ of the hydrogen percentage concentration in bio-syngas can be conducted to guide its safer use.
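
As a minimal sketch of the non-intrusive PCE workflow the abstract describes (Legendre basis, Galerkin-type projection, and Sobol indices from the expansion coefficients), the snippet below uses a placeholder response of two uniform inputs; the model function, the input dimensions, and all numerical values are illustrative assumptions, not results from the study.

```python
# Minimal sketch of non-intrusive Polynomial Chaos Expansion (PCE) with a
# Legendre basis for two uniform inputs on [-1, 1]. The "model" is a stand-in
# for a flame-speed/temperature solver; inputs might represent, e.g., scaled
# H2 and CH4 fractions. Everything here is an illustrative assumption.
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def model(x1, x2):
    """Placeholder combustion response of two inputs scaled to [-1, 1]."""
    return 1.0 + 0.8 * x1 + 0.1 * x2 + 0.3 * x1 ** 2

order, nq = 3, 6
nodes, weights = leggauss(nq)                       # Gauss-Legendre rule on [-1, 1]
X1, X2 = np.meshgrid(nodes, nodes, indexing="ij")
W = np.outer(weights, weights) / 4.0                # expectation weights (uniform density)
F = model(X1, X2)

def P(n, x):
    """Legendre polynomial of degree n evaluated at x."""
    return Legendre.basis(n)(x)

norm = lambda n: 1.0 / (2 * n + 1)                  # E[P_n^2] for uniform on [-1, 1]

# Projection: c_ij = E[f * P_i * P_j] / (E[P_i^2] * E[P_j^2])
coeff = np.array([[np.sum(W * F * P(i, X1) * P(j, X2)) / (norm(i) * norm(j))
                   for j in range(order + 1)] for i in range(order + 1)])

# Variance decomposition and main Sobol indices from the PCE coefficients
var_terms = np.array([[coeff[i, j] ** 2 * norm(i) * norm(j)
                       for j in range(order + 1)] for i in range(order + 1)])
total_var = var_terms.sum() - var_terms[0, 0]
S1 = var_terms[1:, 0].sum() / total_var             # main Sobol index of input 1
S2 = var_terms[0, 1:].sum() / total_var             # main Sobol index of input 2
print(f"mean = {coeff[0, 0]:.3f}, variance = {total_var:.4f}")
print(f"main Sobol indices: S1 = {S1:.3f}, S2 = {S2:.3f}")
```

For this simple model the main indices sum to one because there are no interaction terms; in the actual bio-syngas problem, joint and total indices would additionally quantify combined effects of H2, CO, and CH4 variability.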

Keywords: bio-syngas combustion, clean energy utilisation, fuel variability, PCE, targeted uncertainty reduction, uncertainty quantification

Procedia PDF Downloads 275