Search results for: LSDA+U approximation
97 Accurate Binding Energy of Ytterbium Dimer from Ab Initio Calculations and Ultracold Photoassociation Spectroscopy
Authors: Giorgio Visentin, Alexei A. Buchachenko
Abstract:
Recent proposals to use the Yb dimer as an optical clock and as a sensor for non-Newtonian gravity require accurate knowledge of its interaction potential. Here, the ground-state Born-Oppenheimer Yb₂ potential energy curve is represented by a semi-analytical function consisting of short- and long-range contributions. For the former, systematic ab initio all-electron exact two-component scalar-relativistic CCSD(T) calculations are carried out. Special care is taken to saturate the diffuse basis-set component with atom- and bond-centered primitives and to reach the complete basis set limit through the n = D, T, Q sequence of correlation-consistent polarized n-zeta basis sets. Similar approaches are applied to the long-range dipole and quadrupole dispersion terms by implementing the CCSD(3) polarization propagator method for dynamic polarizabilities. Dispersion coefficients are then computed through Casimir-Polder integration. The semiclassical constraint on the number of bound vibrational levels known for the ¹⁷⁴Yb isotope is used to scale the potential function. The scaling, based on the most accurate ab initio results, bounds the interaction energy of two Yb atoms within the narrow 734 ± 4 cm⁻¹ range, in reasonable agreement with previous ab initio-based estimations. The resulting potentials can be used as a reference for more sophisticated models that go beyond the Born-Oppenheimer approximation and provide a means of estimating their uncertainty. The work is supported by Russian Science Foundation grant No. 17-13-01466.
Keywords: ab initio coupled cluster methods, interaction potential, semi-analytical function, ytterbium dimer
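As a pointer to how the Casimir-Polder step works: the homonuclear C₆ coefficient is the integral of the squared dynamic polarizability at imaginary frequency, C₆ = (3/π) ∫₀^∞ α(iω)² dω. Below is a minimal numeric sketch in Python, assuming a one-term London (Drude) model for α(iω) with illustrative parameters rather than the paper's CCSD(3) data.

```python
import numpy as np

# Casimir-Polder integral for the homonuclear C6 dispersion coefficient:
#   C6 = (3/pi) * Integral_0^inf alpha(i w)^2 dw
# alpha(i w) is modeled by a one-term London/Drude form
#   alpha(i w) = alpha_0 / (1 + (w / w_1)^2),
# with illustrative parameters (atomic units), NOT the paper's CCSD(3) data.
alpha_0 = 139.0   # assumed static dipole polarizability of Yb, a.u.
w_1 = 0.16        # assumed effective excitation frequency, a.u.

def alpha_iw(w):
    return alpha_0 / (1.0 + (w / w_1) ** 2)

# Gauss-Legendre quadrature on (0, inf) via the substitution w = w_1 * t/(1-t)
nodes, weights = np.polynomial.legendre.leggauss(80)
t = 0.5 * (nodes + 1.0)                 # map [-1, 1] -> [0, 1]
w = w_1 * t / (1.0 - t)
dw_dt = w_1 / (1.0 - t) ** 2
c6 = (3.0 / np.pi) * np.sum(0.5 * weights * alpha_iw(w) ** 2 * dw_dt)

# For the one-term model the integral is analytic: C6 = (3/4) * alpha_0^2 * w_1
print(c6, 0.75 * alpha_0 ** 2 * w_1)
```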
96 Design of Microwave Building Block by Using Numerical Search Algorithm
Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Qing Fang, Mingbin Yu, Guoqiang Lo
Abstract:
With the development of technology, countries have gradually allocated more and more frequency spectrum for civil and commercial usage, especially high radio-frequency bands, which offer high information capacity. Field effects become more and more prominent in microwave components as frequency increases, which invalidates transmission line theory and complicates the design of microwave components. Here, a modeling approach based on a numerical search algorithm is proposed to design various building blocks for microwave circuits, avoiding complicated impedance matching and equivalent-electrical-circuit approximation. Concretely, a microwave component is discretized into a set of segments along the microwave propagation path. Each segment is initialized with random dimensions, which constructs a multi-dimensional parameter space. Numerical search algorithms (e.g., the pattern search algorithm) are then used to find the ideal geometrical parameters. The optimal parameter set is achieved by evaluating the fitness of the S parameters after a number of iterations. We have adopted this approach in our current projects and designed many microwave components, including sharp bends, T-branches, Y-branches, and microstrip-to-stripline converters. For example, a stripline 90° bend was designed within a 2.54 mm × 2.54 mm footprint for dual-band operation (Ka band and Ku band) with < 0.18 dB insertion loss and < -55 dB reflection. We expect that this approach can enrich the toolkit of microwave designers.
Keywords: microwave component, microstrip and stripline, bend, power division, numerical search algorithm
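To make the optimization loop concrete, here is a minimal compass/pattern-search sketch in Python. The fitness function is a stand-in for the EM solver that would score a candidate geometry's S parameters; the segment count, bounds, and target values are illustrative assumptions.

```python
import numpy as np

# Minimal compass/pattern search over segment dimensions. The fitness below
# is a placeholder for an EM solver that would return S-parameters for a
# candidate geometry; all numbers are illustrative assumptions.
def fitness(dims):
    target = np.array([0.8, 0.4])        # assumed "ideal" segment widths, mm
    return np.sum((dims - target) ** 2)  # a real run would score |S11|, |S21|

def pattern_search(x0, step=0.2, tol=1e-4, max_iter=500):
    x = np.asarray(x0, float)
    fx = fitness(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):          # poll +/- step along each dimension
            for s in (+step, -step):
                trial = x.copy(); trial[i] += s
                ft = fitness(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5                  # shrink the mesh when polling fails
            if step < tol:
                break
    return x, fx

x_opt, f_opt = pattern_search(np.random.uniform(0.1, 1.0, size=2))
print(x_opt, f_opt)
```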
95 Extreme Value Theory Applied in Reliability Analysis: Case Study of Diesel Generator Fans
Authors: Jelena Vucicevic
Abstract:
Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety, and monetary costs. There are established ways to calculate reliability, unreliability, failure density, and failure rate. In this paper, the reliability of diesel generator fans was calculated through Extreme Value Theory. Extreme Value Theory is not widely used in the engineering field; its usage is well known in other areas such as hydrology, meteorology, and finance. The significance of this theory lies in the fact that, unlike other statistical methods, it focuses on rare and extreme values rather than on averages. It should be noted that the theory is not designed exclusively for extreme events, but for extreme values in any event. Therefore, this is a good opportunity to apply the theory and test whether it can be applied in this situation. The significance of the work is the calculation of time to failure, or reliability, in a new way, using statistics. Another advantage of this calculation is that no technical details are needed, and it can be implemented for any part for which we need to know the time to failure, in order to schedule appropriate maintenance, but also to maximize usage and minimize costs. In this case, calculations were made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with higher-quality fans to prevent future failures. The results of this method show an approximation of the time for which the fans will work as they should, and the probability that the fans work longer than a certain estimated time.
Keywords: extreme value theory, lifetime, reliability analysis, statistics, time to failure
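A minimal sketch of the workflow in Python: fit an extreme value distribution to failure times and read reliability off its survival function, R(t) = P(T > t). The failure-time sample below is synthetic; the field-study data is not reproduced here.

```python
import numpy as np
from scipy import stats

# Sketch: fit a generalized extreme value (GEV) distribution to fan
# times-to-failure and read off reliability R(t) = P(T > t). The failure
# times below are synthetic stand-ins, not the field-study data.
rng = np.random.default_rng(0)
hours = rng.weibull(1.5, size=70) * 8000.0   # assumed failure times, hours

shape, loc, scale = stats.genextreme.fit(hours)
t = 6000.0
reliability = stats.genextreme.sf(t, shape, loc=loc, scale=scale)
print(f"P(fan survives beyond {t:.0f} h) ~ {reliability:.2f}")
```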
94 Numerical Calculation and Analysis of Fine Echo Characteristics of Underwater Hemispherical Cylindrical Shell
Authors: Hongjian Jia
Abstract:
A finite-length cylindrical shell with a spherical cap is a typical engineering approximation of actual underwater targets. Research on the omnidirectional acoustic scattering characteristics of this target model can provide a favorable basis for the detection and identification of actual underwater targets. The elastic resonance characteristics of the target are the result of the combined effect of the target length, shell-thickness ratio, and materials. Under different materials and geometric dimensions, the coincidence resonance characteristics of the target differ markedly. Addressing this problem, this paper obtains the omnidirectional acoustic scattering field of the underwater hemispherical cylindrical shell by numerical calculation and studies, in turn, the influence of target geometric parameters (length, shell-thickness ratio) and material parameters on the coincidence resonance characteristics. The study found that the formant interval is not a stable value and changes with the incident angle. The formant interval is only weakly affected by the target length and shell-thickness ratio but is significantly affected by the material properties, which makes it an effective feature for classifying and identifying targets of different materials. A quadratic polynomial is used to fit the relationship between the formant interval and the angle. The results show that the three fitting coefficients of the stainless steel and aluminum targets differ significantly and can therefore be used as effective feature parameters to characterize the target materials.
Keywords: hemispherical cylindrical shell, fine echo characteristics, geometric and material parameters, formant interval
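The quadratic fit itself is a one-liner; a sketch in Python, with invented angle/interval samples standing in for intervals extracted from the computed scattering field:

```python
import numpy as np

# Sketch of the quadratic fit of formant interval vs. incident angle.
# The samples are invented placeholders; in the paper the intervals come
# from the computed omnidirectional scattering field.
angle_deg = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
interval_khz = np.array([2.10, 2.04, 1.93, 1.81, 1.72, 1.69, 1.70])  # assumed

# coeffs = [a, b, c] for interval ~ a*angle^2 + b*angle + c; the paper uses
# these three coefficients as material-discriminating features.
coeffs = np.polyfit(angle_deg, interval_khz, deg=2)
print("fit coefficients (a, b, c):", coeffs)
print("predicted interval at 50 deg:", np.polyval(coeffs, 50.0))
```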
93 Factors Influencing Pharmacist Engagement and Turnover Intention in Thai Community Pharmacists: A Structural Equation Modelling Approach
Authors: T. Nakpun, T. Kanjanarach, T. Kittisopee
Abstract:
Turnover of community pharmacists can affect continuity of patient care and, most importantly, the quality of care, as well as the costs of a pharmacy. It was hypothesized that organizational resources, job characteristics, and social supports had direct effects on pharmacist turnover intention and indirect effects via pharmacist engagement. This research aimed to study the factors influencing pharmacist engagement and turnover intention by testing the proposed hypothesized structural model relating organizational resources, job characteristics, and social supports to turnover intention and engagement in Thai community pharmacists. A cross-sectional study with a self-administered questionnaire was conducted among 209 Thai community pharmacists. Data were analyzed using structural equation modeling with the Analysis of Moment Structures (AMOS) program. The final model showed that only organizational resources had a significant negative direct effect on pharmacist turnover intention (β = -0.45). Job characteristics and social supports had significant positive relationships with pharmacist engagement (β = 0.44 and 0.55, respectively). Pharmacist engagement had a significant negative relationship with pharmacist turnover intention (β = -0.24). Thus, job characteristics and social supports had significant negative indirect effects on turnover intention via pharmacist engagement (β = -0.11 and -0.13, respectively). The model fit the data well (χ²/degrees of freedom (DF) = 2.12, goodness-of-fit index (GFI) = 0.89, comparative fit index (CFI) = 0.94, and root mean square error of approximation (RMSEA) = 0.07). It can be concluded that organizational resources were the most important factor because they had a direct effect on pharmacist turnover intention. Job characteristics and social supports also helped decrease turnover intention via pharmacist engagement.
Keywords: community pharmacist, influencing factor, turnover intention, work engagement
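In SEM, an indirect effect along a mediated route is the product of the path coefficients on that route. A quick Python check that the reported indirect effects follow from the reported paths:

```python
# Check that the reported indirect effects equal the products of the path
# coefficients along each route through pharmacist engagement (the standard
# decomposition in structural equation models).
b_job_to_engagement = 0.44
b_social_to_engagement = 0.55
b_engagement_to_turnover = -0.24

print(round(b_job_to_engagement * b_engagement_to_turnover, 2))     # -0.11
print(round(b_social_to_engagement * b_engagement_to_turnover, 2))  # -0.13
```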
92 Confidence Intervals for Process Capability Indices for Autocorrelated Data
Authors: Jane A. Luke
Abstract:
Persistent pressure passed on to manufacturers from escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models. Academic and industrial circles are taking keen interest in the field of manufacturing strategy, and many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability, and innovation. Analysis of process capability indices (PCIs) has traditionally been conducted assuming that the process under study is in statistical control and that independent observations are generated over time. In practice, however, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts: even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts. When observations are autocorrelated, classical control charts exhibit nonrandom patterns and lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. In this paper, the effect of autocorrelation on confidence intervals for different PCIs is examined. Stationary Gaussian processes are explained, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed for both independent and autocorrelated data. Approximate lower confidence limits for various Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples are considered to demonstrate the results.
Keywords: autocorrelation, AR(1) model, Bissell's approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes
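A sketch of the point estimates and of Bissell's approximate lower confidence limit for Cpk, in Python. Note that Bissell's formula as written assumes independent observations; adjusting such limits for autocorrelated (e.g., AR(1)) data is precisely the paper's subject. The specification limits and AR(1)-like sample are illustrative.

```python
import numpy as np
from scipy import stats

# Point estimates of Cp and Cpk plus Bissell's approximate lower confidence
# limit for Cpk. Specification limits and the AR(1)-like sample are assumed.
LSL, USL = 9.0, 11.0
rng = np.random.default_rng(1)
x = np.empty(100)
x[0] = 10.0
for i in range(1, x.size):               # AR(1) process around the target
    x[i] = 10.0 + 0.5 * (x[i - 1] - 10.0) + rng.normal(0.0, 0.25)

n, mean, s = x.size, x.mean(), x.std(ddof=1)
cp = (USL - LSL) / (6.0 * s)
cpk = min(USL - mean, mean - LSL) / (3.0 * s)

# Bissell's approximation for a 95% lower confidence limit on Cpk:
#   Cpk_L = Cpk * (1 - z_{0.95} * sqrt(1/(9 n Cpk^2) + 1/(2 (n - 1))))
z = stats.norm.ppf(0.95)
cpk_lower = cpk * (1.0 - z * np.sqrt(1.0 / (9.0 * n * cpk**2)
                                     + 1.0 / (2.0 * (n - 1))))
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, Bissell 95% lower limit = {cpk_lower:.2f}")
```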
91 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data
Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates
Abstract:
Several spatial variables collected at the same locations and sharing a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that accounts for the correlation between the variables and for spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a multivariate geostatistical formulation that relies on shared spatial random effect terms. In particular, the first response variable is modeled with a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function; to improve computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically the block nearest neighbor Gaussian process (Block-NNGP). This approach divides the spatial domain into several dependent blocks under certain constraints, where the cross blocks capture the large-scale spatial dependence while each individual block captures the small-scale dependence. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.
Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models
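The core trick behind NNGP-type approximations is writing the joint Gaussian density as a product of conditionals and truncating each conditioning set to the m nearest previously ordered points, which yields a sparse and cheap factorization. A univariate Vecchia-style sketch in Python, illustrating the nearest-neighbor idea only, not the authors' Block-NNGP or its INLA implementation:

```python
import numpy as np

# Vecchia-style nearest-neighbor GP log-likelihood: condition each ordered
# point on at most m nearest previous points instead of on all of them.
# Exponential covariance with assumed parameters; data are stand-ins.
def exp_cov(a, b, sigma2=1.0, phi=5.0):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)

def vecchia_loglik(coords, y, m=10):
    n = coords.shape[0]
    ll = 0.0
    for i in range(n):
        if i == 0:
            mu = 0.0
            var = exp_cov(coords[:1], coords[:1])[0, 0]
        else:
            past = coords[:i]
            dist = np.linalg.norm(past - coords[i], axis=1)
            nb = np.argsort(dist)[:m]             # m nearest previous points
            C_nn = exp_cov(past[nb], past[nb])
            c_in = exp_cov(coords[i:i+1], past[nb])[0]
            w = np.linalg.solve(C_nn, c_in)
            mu = w @ y[nb]                         # conditional mean
            var = exp_cov(coords[i:i+1], coords[i:i+1])[0, 0] - w @ c_in
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll

rng = np.random.default_rng(2)
coords = rng.uniform(size=(500, 2))
y = rng.normal(size=500)    # stand-in data; a real use would be observations
print(vecchia_loglik(coords, y))
```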
90 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race
Authors: Joonas Pääkkönen
Abstract:
In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order-statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. The model is built on the following educated assumption: individual leg times follow log-normal distributions. Our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the famous German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest. The model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that the model exhibits smaller place-prediction root-mean-square errors than linear regression, mord regression, and Gaussian process regression.
Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling
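The Fenton-Wilkinson step approximates a sum of log-normal leg times by a single log-normal with matched mean and variance; the predicted place is then the team-count estimate times the fitted CDF, which is sigmoidal in time. A Python sketch with invented leg parameters (not the Jukola 2019 estimates):

```python
import numpy as np
from scipy import stats

# Fenton-Wilkinson: approximate a sum of independent log-normal leg times by
# a single log-normal with matched mean and variance, then predict place as
# place(t) ~ N_teams * F(t). All parameters below are illustrative.
mu = np.array([7.0, 7.2, 6.9, 7.1])         # assumed log-scale leg means
sigma = np.array([0.20, 0.25, 0.22, 0.18])  # assumed log-scale leg sigmas
n_teams = 1500                              # assumed team-count estimate

m = np.sum(np.exp(mu + sigma**2 / 2))                         # mean of sum
v = np.sum((np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2))  # variance
sigma_fw2 = np.log(1 + v / m**2)            # matched log-normal parameters
mu_fw = np.log(m) - sigma_fw2 / 2

def predicted_place(t_seconds):
    cdf = stats.norm.cdf((np.log(t_seconds) - mu_fw) / np.sqrt(sigma_fw2))
    return n_teams * cdf    # sigmoidal in t, as the abstract describes

print(predicted_place(4 * np.exp(7.05)))    # a mid-field changeover time
```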
89 Finite Deformation of a Dielectric Elastomeric Spherical Shell Based on a New Nonlinear Electroelastic Constitutive Theory
Authors: Odunayo Olawuyi Fadodun
Abstract:
Dielectric elastomers (DEs) are a class of intelligent materials with salient features such as electromechanical coupling, light weight, fast actuation, low cost, and high energy density, which make them good candidates for numerous engineering applications. This paper adopts a new nonlinear electroelastic constitutive theory to examine the radial deformation of a pressurized thick-walled spherical shell of soft dielectric material with compliant electrodes on its inner and outer surfaces. A general formula for the internal pressure, which depends on the deformation and on a potential difference between the boundary electrodes or uniform surface charge distributions, is obtained in terms of special functions. To illustrate the effects of an applied electric field on the mechanical behaviour of the shell, three different energy functions with distinct mechanical properties are employed for numerical purposes. The observed behaviour of the shells is preserved in the presence of an applied electric field, and the influence of a field due to a potential difference declines more slowly with increasing deformation than that produced by a surface charge. Counterpart results are then presented for the thin-walled shell approximation as a limiting case of the thick-walled shell, without restriction on the energy density. In the absence of internal pressure, it is shown that inflation is caused by the application of an electric field. The numerical solutions of the theory presented in this work agree with those predicted by the generally adopted Dorfmann and Ogden model.
Keywords: constitutive theory, elastic dielectric, electroelasticity, finite deformation, nonlinear response, spherical shell
88 Optimisation of Metrological Inspection of a Developmental Aeroengine Disc
Authors: Suneel Kumar, Nanda Kumar J., Sreelal Sreedhar, Suchibrata Sen, V. Muralidharan
Abstract:
Fan technology is critical for any aero engine, and the fan disc forms a critical part of the fan module. It is an airworthiness requirement to have a metrologically qualified disc. The current study uses tactile probing and scanning on an articulated measuring machine (AMM) and a bridge-type coordinate measuring machine (CMM), together with metrology software, for intermediate and final dimensional and geometrical verification during the prototype development of the disc, which is manufactured through forging and machining. The circumferential dovetails, manufactured by milling, are evaluated based on the analysed metrological process. To perform metrological optimisation, a change of philosophy is needed: quality measurements must be made available as fast as possible to improve process knowledge and accelerate the process, while remaining accurate, precise, and traceable. Offline CMM programming for inspection and optimisation of the CMM inspection plan are crucial parts of the study and are discussed. A dimensional measurement plan as per the ASME B89.7.2 standard is an important requirement for reaching an optimised CMM measurement plan and strategy. The effects of probing strategy, stylus configuration, and approximation strategy on the measurements of the circumferential dovetails of the developmental prototype disc are discussed. The results are presented as enhancements of the R&R (repeatability and reproducibility) values, with uncertainty levels within the desired limits. The findings from the measurement strategy adopted for dovetail evaluation and inspection-time optimisation are discussed with the help of various analyses and graphical outputs obtained from the verification process.
Keywords: coordinate measuring machine, CMM, aero engine, articulated measuring machine, fan disc
87 Advanced Analysis on Dissemination of Pollutant Caused by Flaring System Effect Using Computational Fluid Dynamics (CFD) Fluent Model with WRF Model Input in Transition Season
Authors: Benedictus Asriparusa
Abstract:
In oil production areas, crude oil is accompanied by associated natural gas, and flaring it wastes a large amount of energy, mostly in developing countries, while contributing to global warming. This research presents an overview of methods employed in the Minas area by researchers at PT Chevron Pacific Indonesia to measure and drastically reduce gas flaring and its emissions; it provides an approximation that includes analytical studies, numerical studies, modeling, and computer simulations. Flaring is the controlled burning of natural gas in the course of routine oil and gas production operations. This burning occurs at the end of a flare stack or boom, and the combustion process releases greenhouse gas emissions such as NO2, CO2, and SO2. This affects the air and the environment around the industrial area; therefore, a simulation is needed to map the pattern of pollutant dissemination. This paper reviews gas flaring models and current developments in order to predict the dominant variable affecting the dissemination of pollutants. The Fluent model is used to simulate the distribution of pollutant gas coming out of the stack, while WRF model output is used to overcome the limitations of meteorological data and atmospheric-condition analysis in the study area. The study focuses on the transition season of 2012 in the Minas area. The goal of the simulation is to find the time of greatest influence on pollutant dissemination; the most influential factor is divided into two cases, the quickest wind and the slowest wind. According to the simulation results, the quickest wind disperses pollutants horizontally, while the slowest wind disperses them vertically.
Keywords: flaring system, Fluent model, dissemination of pollutant, transition season
86 Features of the Functional and Spatial Organization of Railway Hubs as a Part of the Urban Nodal Area
Authors: Khayrullina Yulia Sergeevna, Tokareva Goulsine Shavkatovna
Abstract:
The article analyzes modern major railway hubs as a main part of the Urban Nodal Area (UNA), a term introduced into the theory of urban planning at the end of the XX century by Tokareva G.S., who jointly with Gutnov A.E. investigated the structure-forming elements of the city. The UNA is the basic unit, the "cell" of the city structure; its specialization depends on its position in the frame or fabric of the city, which is related to the features of its organization. This paper investigates the spatial and functional features of UNA. The base objects of the research are railway hubs, as connective nodes of intra- and inter-city communications. The research uses stratified sampling with the selection of typical objects: 14 railway hubs from domestic and foreign practice, in the largest cities with populations over 1 million, located in or close to Russian climate zones. Features of the organization are identified through combined research of functional and spatial characteristics, based on the hypothesis that urban nodes have dual organizational characteristics. The analysis uses an approximation method that enables general conclusions from a representative selection about the entire population of railway hubs and their development areas. The results show specific ratios in the functional and spatial organization of UNA based on railway hubs, from which a typology of spaces and urban nodal areas is proposed. Identifying the spatial diversity and the features of the functional organization of the greatest railway hubs and their development areas gives an indication of the different evolutionary stages of formation approaches, and helps identify new patterns for complex and effective design as a prediction of the direction of domestic hub development.
Keywords: urban nodal area, railway hubs, features of structural and functional organization
85 The Effect of Adhesion on the Frictional Hysteresis Loops at a Rough Interface
Authors: M. Bazrafshan, M. B. de Rooij, D. J. Schipper
Abstract:
Frictional hysteresis is the phenomenon in which mechanical contacts are subject to small (compared to the contact area) oscillating tangential displacements. In the presence of adhesion at the interface, the contact repulsive force increases, leading to a higher static friction force and a larger pre-sliding displacement. This paper proposes a boundary element model (BEM) for the adhesive frictional hysteresis contact at the interface of two contacting bodies of arbitrary geometry. In this model, adhesion is represented by means of a Dugdale approximation of the total work of adhesion at local areas with a very small gap between the two bodies. The frictional contact is divided into sticking and slipping regions in order to account for the transition from stick to slip (the pre-sliding regime). In the pre-sliding regime, the stick and slip regions are defined based on the local values of shear stress and normal pressure. In the studied cases, a fixed normal force is applied to the interface and the friction force varies in such a way as to start gross sliding reciprocally in each direction. In the first case, the problem is solved at the smooth interface between a ball and a flat for different values of the work of adhesion. It is shown that as the work of adhesion increases, both the static friction and the pre-sliding distance increase due to the increase in the contact repulsive force. In the second case, the rough interface between a glass ball and a silicon wafer or a DLC (diamond-like carbon) coating is considered. The work of adhesion is assumed to be identical for both interfaces. As adhesion depends on the interface roughness, the corresponding contact repulsive force differs between these interfaces: for the smoother interface, a larger contact repulsive force and, consequently, a larger static friction force and pre-sliding distance are observed.
Keywords: boundary element model, frictional hysteresis, adhesion, roughness, pre-sliding
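In the Dugdale picture, adhesion is a constant tensile traction σ₀ acting wherever the local gap is below a critical separation h_c, with work of adhesion w = σ₀·h_c. A minimal Python sketch of such an adhesive traction map for a ball-on-flat gap; the radius, grid, and adhesion parameters are illustrative assumptions:

```python
import numpy as np

# Sketch of the Dugdale (Maugis-Dugdale) approximation used to represent
# adhesion in BEM contact solvers: wherever the local gap is below a critical
# separation h_c, a constant adhesive traction sigma_0 = w / h_c acts.
w = 0.05        # assumed work of adhesion, J/m^2
h_c = 1.0e-9    # assumed Dugdale critical gap, m
sigma_0 = w / h_c

x = np.linspace(-5e-6, 5e-6, 257)
X, Y = np.meshgrid(x, x)
R = 1e-3                                  # assumed ball radius, m
gap = (X**2 + Y**2) / (2 * R)             # parabolic ball-on-flat gap

# Adhesive (tensile) traction map: constant sigma_0 wherever gap <= h_c.
p_adh = np.where(gap <= h_c, -sigma_0, 0.0)
print("adhesive patch fraction of grid:", (p_adh < 0).mean())
```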
84 Analysis of Thermal Effect on Functionally Graded Micro-Beam via Mixed Finite Element Method
Authors: Cagri Mollamahmutoglu, Ali Mercan, Aykut Levent
Abstract:
Studies concerning microstructures are becoming more important as the utilization of various micro-electro-mechanical systems (MEMS) increases. Thus, in recent years, thermal buckling and vibration analyses of microstructures have been the subject of many investigations utilizing different numerical methods. In this study, thermal effects on the mechanical response of a functionally graded (FG) Timoshenko micro-beam are presented in the framework of a mixed finite element formulation. Size effects are taken into consideration via the modified couple stress theory. The mixed formulation is based on a functional which is derived via the Gateaux differential. After the resolution of all field equations of the beam, a potential operator is carefully constructed and then used to build the functional. Once the potential is obtained, the usual procedures of finite element approximation are utilized to derive the mixed finite element equations. The resulting finite element formulation allows the use of simple C₀-type linear shape functions and avoids the shear-locking phenomenon, a common shortcoming of displacement-based formulations for moderately thick beams. The developed numerical scheme is used to obtain the effects of thermal loads on the static bending, free vibration, and buckling of FG Timoshenko micro-beams for different power-law parameters, aspect ratios, and boundary conditions. The versatility of the mixed formulation is demonstrated against other numerical methods such as the generalized differential quadrature method (GDQM). Another attractive property of the formulation is that it allows direct calculation of the contribution of micro effects to the overall mechanical response.
Keywords: micro-beam, functionally graded materials, thermal effect, mixed finite element method
83 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique
Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki
Abstract:
Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high liver accumulation by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient, and from these data, ten SPECT images were reconstructed, one for every minute of acquisition. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which the liver accumulation dominates (0.5-2.5 min SPECT image − 5-10 min SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5-10 min SPECT image − liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and the visualization of the inferior myocardium was improved. In past reports, high myocardial accumulation overlapped by the liver was un-diagnosable. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector
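The subtraction arithmetic itself can be sketched on toy arrays in Python; the images below are synthetic stand-ins for reconstructed SPECT volumes (early ≈ minutes 0.5-2.5, late ≈ minutes 5-10), with negative voxels clipped since counts cannot be negative.

```python
import numpy as np

# Toy time-subtraction: the early window is liver-dominated, the late window
# mixes liver and myocardial uptake. All amplitudes are invented.
rng = np.random.default_rng(3)
liver = rng.uniform(5.0, 8.0, size=(64, 64))
myo = np.zeros((64, 64)); myo[20:30, 20:30] = 3.0

early = 1.5 * liver + 0.2 * myo + rng.normal(0, 0.1, (64, 64))
late = 0.8 * liver + 1.0 * myo + rng.normal(0, 0.1, (64, 64))

liver_only = np.clip(early - late, 0.0, None)       # liver-only estimate
enhanced = np.clip(late - liver_only, 0.0, None)    # myocardium enhanced
print(enhanced[25, 25], enhanced[5, 5])             # inside vs. outside myo
```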
82 Spatial Data Mining: Unsupervised Classification of Geographic Data
Authors: Chahrazed Zouaoui
Abstract:
In recent years, the volume of geospatial information has been increasing due to the evolution of information and communication technologies; this information is often presented in geographic information systems (GIS) and stored in spatial databases (SDB). Classical data mining reveals a weakness in knowledge extraction from these enormous amounts of data, due to the particularity of spatial entities, which are characterized by their interdependence (the first law of geography). This gave rise to spatial data mining, a process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data. Among the methods of this process, we distinguish monothematic and thematic methods; geo-clustering, one of the main tasks of spatial data mining, belongs to the monothematic category. It groups similar geospatial entities in the same class and assigns dissimilar ones to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geospatial data. Two approaches to geo-clustering exist: dynamic processing, which applies algorithms designed for the direct treatment of spatial data, and the pre-processing approach, which applies classic clustering algorithms to pre-processed data (by integrating spatial relationships). The pre-processing approach is quite complex in several cases, so the search for approximate solutions involves approximation algorithms, including dedicated approaches (partitioning and density-based clustering methods) and the bees algorithm (a biomimetic approach). Our study proposes a significant design for this problem, using different algorithms for automatically detecting geospatial neighborhoods in order to implement geo-clustering by pre-processing, and applying the bees algorithm to this problem for the first time in the geospatial field.
Keywords: mining, GIS, geo-clustering, neighborhood
81 Determination of Temperature Dependent Characteristic Material Properties of Commercial Thermoelectric Modules
Authors: Ahmet Koyuncu, Abdullah Berkan Erdogmus, Orkun Dogu, Sinan Uygur
Abstract:
Thermoelectric modules are integrated with electronic components to keep their temperature at specific values in electronic cooling applications, and they can be used at different ambient temperatures. The cold-side temperatures of thermoelectric modules depend on their hot-side temperatures, operating currents, and heat loads. Performance curves of thermoelectric modules are given at no more than two different hot-surface temperatures in product catalogs, yet characteristic properties are required to select appropriate thermoelectric modules in the thermal design phase of projects. Generally, manufacturers do not provide characteristic material property values of thermoelectric modules to customers, for confidentiality reasons. Common commercial software such as ANSYS Icepak, FloEFD, etc., includes thermoelectric modules in its libraries; therefore, such software can easily be used to predict the effect of thermoelectric usage in a thermal design. Some software requires only the performance values at different temperatures. However, others, like Icepak, require three temperature-dependent equations for the material properties (Seebeck coefficient (α), electrical resistivity (β), and thermal conductivity (γ)). Since the number and variety of thermoelectric modules in this software are limited, definitions of the characteristic material properties of other thermoelectric modules may be required. In this manuscript, a method of deriving characteristic material properties from the datasheet of thermoelectric modules is presented. Material characteristics were estimated from two different performance curves, both experimentally and numerically. Numerical calculations are accomplished in Icepak using a thermoelectric module that exists in the Icepak library, and a new experimental setup was established for the experimental study. Since the numerical and experimental results are similar, the proposed equations can be considered validated. This approximation can be suggested for analyses that include different types or brands of TEC modules.
Keywords: electrical resistivity, material characteristics, thermal conductivity, thermoelectric coolers, Seebeck coefficient
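A sketch of the final fitting step in Python: given property values estimated at a few hot-side temperatures, derive the three temperature-dependent quadratic equations for α, β, and γ that tools such as ANSYS Icepak accept. The sample values are invented placeholders, not a real module's datasheet.

```python
import numpy as np

# Derive the three temperature-dependent property equations (Seebeck
# coefficient, electrical resistivity, thermal conductivity) as quadratic
# fits through property values estimated at a few temperatures. The sample
# values are illustrative assumptions, not a real module datasheet.
T = np.array([300.0, 325.0, 350.0])             # K
alpha = np.array([2.0e-4, 2.1e-4, 2.15e-4])     # V/K (Seebeck)
beta = np.array([1.0e-5, 1.1e-5, 1.25e-5])      # ohm*m (resistivity)
gamma = np.array([1.5, 1.45, 1.42])             # W/(m*K) (conductivity)

# Quadratic coefficients in the form p(T) = a*T^2 + b*T + c, as required by
# temperature-dependent material definitions in tools such as ANSYS Icepak.
for name, prop in (("alpha", alpha), ("beta", beta), ("gamma", gamma)):
    a, b, c = np.polyfit(T, prop, deg=2)
    print(f"{name}(T) = {a:.3e}*T^2 + {b:.3e}*T + {c:.3e}")
```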
80 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation
Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes
Abstract:
The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of several materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain such that the flexibility of the structure is minimized, under certain boundary conditions and the intervention of external forces. In the single-material case, each element of the discretized domain is represented by a function that takes the value 1 if the element belongs to the structure or 0 if the element is empty. A common way to avoid the high computational cost of solving integer-variable optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous interpolation (a power function) whose base variable represents a pseudo-density at each point of the domain. For proper exponent values, the SIMP method suppresses intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one we explore here is the ordered SIMP method, which has the advantage of not adding variables to represent material selection, so the computational cost is independent of the number of materials considered. Although this algorithm does not increase the number of variables, the optimization subproblems generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of calculating them. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.
Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization
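A bare-bones sketch of one sequential-linear-programming loop in Python: at each iterate, minimize the first-order model of the objective inside a box trust region and accept or reject the step. The toy quadratic objective stands in for compliance, and no claim is made about the paper's specific globalization strategy:

```python
import numpy as np
from scipy.optimize import linprog

# One SLP loop: minimize the linearization g . d subject to trust-region and
# design bounds, then accept/reject the step by the true objective value.
def f(x):    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2
def grad(x): return np.array([2*(x[0] - 1.0), 2*(x[1] - 2.0)])

x = np.array([3.0, 3.0])
radius = 1.0
for _ in range(30):
    g = grad(x)
    # linearized subproblem: min g . d  s.t. |d_i| <= radius, 0 <= x+d <= 4
    lb = np.maximum(-radius, 0.0 - x)
    ub = np.minimum(radius, 4.0 - x)
    res = linprog(c=g, bounds=list(zip(lb, ub)), method="highs")
    d = res.x
    if f(x + d) < f(x):              # accept step, expand trust region
        x, radius = x + d, min(2 * radius, 1.0)
    else:                            # reject step, shrink trust region
        radius *= 0.5
    if radius < 1e-8:
        break
print(x)   # approaches the minimizer (1, 2)
```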
79 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks
Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar
Abstract:
A DNA barcode is a short mitochondrial DNA fragment whose nucleotides are each made up of three subunits: a phosphate group, a sugar, and a nucleic base (A, T, C, or G). Barcodes provide good sources of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes; this task has to be supported by reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence-similarity methods. A large set of sequences can be simultaneously compared using multiple sequence alignment, which is known to be NP-complete; to make this type of analysis feasible, heuristics like progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable; this approach avoids the complex problem of form and structure in different classes of organisms. The models are evaluated on empirical data, and their classification performance is compared with other methods. Our system consists of three phases. The first, transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) encoding of the DNA barcodes, Fourier transform, and power-spectrum signal processing. The second, approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, realized by applying a hierarchical classification algorithm.
Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)
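A minimal Python sketch of the transformation phase: map each nucleotide to its EIIP value, then take the FFT and the power spectrum. The per-base EIIP values used here are the commonly cited ones, and the sequence is a made-up example:

```python
import numpy as np

# Transformation phase sketch: EIIP encoding of a barcode sequence followed
# by FFT and power spectrum. The EIIP values are the commonly cited ones;
# the sequence is a made-up example, not a real barcode.
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(seq):
    signal = np.array([EIIP[b] for b in seq.upper()])
    signal = signal - signal.mean()         # remove the DC component
    return np.abs(np.fft.rfft(signal)) ** 2

seq = "ATGCGTACGTTAGCCGATAAGCTTACGGATCCATGC"
ps = power_spectrum(seq)
print(ps[:5])   # feature vector fed to the wavelet-network approximation
```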
78 First Approximation to Congenital Anomalies in Kemp's Ridley Sea Turtle (Lepidochelys kempii) in Veracruz, Mexico
Authors: Judith Correa-Gomez, Cristina Garcia-De la Pena, Veronica Avila-Rodriguez, David R. Aguillon-Gutierrez
Abstract:
Kemp's ridley (Lepidochelys kempii) is the smallest species of sea turtle. It nests on the beaches of the Gulf of Mexico during summer. To date, there is no information about congenital anomalies in this species, although they could be an important factor to consider as a survival threat. The aim of this study was to determine congenital anomalies in dead embryos and hatchlings of Kemp's ridley sea turtle during the 2020 nesting season. Fieldwork was conducted at the 'Campamento Tortuguero Barra Norte', on the shores of Tuxpan, Veracruz, Mexico. A total of 95 nests were evaluated, from which 223 dead embryos and hatchlings were collected. Anomalies were detected by detailed physical examination, and photographs of each anomaly were taken. Of the 223 dead turtles, 213 (95%) showed a congenital anomaly. A total of 53 types of congenital anomaly were found: 22 in the head region, 21 in the carapace region, 6 in the flipper region, and 4 involving the entire body. The most prevalent anomaly in the head region was the presence of supernumerary prefrontal scales (42%, 93 occurrences). In the carapace region, the most common anomaly was the presence of supernumerary gular scales (59%, 131 occurrences). The two most common anomalies in the flipper region were amelia of the fore flippers and bifurcation of the rear flippers (0.9%, 2 occurrences each). The most common anomaly involving the entire body was hypomelanism (35%, 79 occurrences). These results agree with recent studies on congenital malformations in sea turtles, with the head and carapace regions showing the highest numbers of congenital anomalies. It is unknown whether the reported anomalies are related to the death of these individuals; embryological studies in this species are needed. To the best of our knowledge, this is the first worldwide report on Kemp's ridley sea turtle anomalies.
Keywords: amelia, hypomelanism, morphology, supernumerary scales
77 Normalized P-Laplacian: From Stochastic Game to Image Processing
Authors: Abderrahim Elmoataz
Abstract:
More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs, where each vertex represents measured data and each edge represents a relationship (connectivity, or a certain affinity or interaction) between two vertices. Processing and analyzing these types of data is a major challenge for both the image processing and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools that were initially developed on usual Euclidean spaces and proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years there has been an increasing interest in the investigation of one of the major mathematical tools for signal and image analysis, namely partial differential equation (PDE) and variational methods on graphs. The normalized p-Laplacian operator has recently been introduced to model a stochastic game called tug-of-war with noise. Part of the interest of this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator, and the traditional Laplacian, which have been extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced as discrete approximations of both the infinity Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
Keywords: normalized p-Laplacian, image processing, stochastic game, inverse problems
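The p-harmonious update tied to tug-of-war with noise averages the extreme and mean neighbor values: u ← (α/2)(max_N u + min_N u) + β·mean_N u with α + β = 1. A toy Python sketch using this iteration on a 4-neighbor image grid to fill a masked square (a minimal inpainting example); the α(p) normalization shown is the common two-dimensional choice, and the image is synthetic:

```python
import numpy as np

# p-harmonious iteration on a 4-neighbor grid:
#   u <- (alpha/2) * (max_N u + min_N u) + beta * mean_N u,  alpha + beta = 1.
def p_harmonious(u, mask, p=4.0, iters=2000):
    alpha = (p - 2.0) / (p + 2.0)   # two-dimensional normalization (assumed)
    beta = 1.0 - alpha
    u = u.copy()
    for _ in range(iters):
        up = np.pad(u, 1, mode="edge")
        nbrs = np.stack([up[:-2, 1:-1], up[2:, 1:-1],
                         up[1:-1, :-2], up[1:-1, 2:]])
        new = 0.5 * alpha * (nbrs.max(0) + nbrs.min(0)) + beta * nbrs.mean(0)
        u[mask] = new[mask]          # update unknown pixels only
    return u

# Synthetic horizontal ramp with a masked square to be filled from boundary.
img = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))
mask = np.zeros_like(img, dtype=bool); mask[20:44, 20:44] = True
img[mask] = 0.0
print(p_harmonious(img, mask)[32, 32])   # recovered value near 0.5
```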
76 Appraisal of the Impact Strength on Mild Steel Cladding Weld Metal Geometry
Authors: Chukwuemeka Daniel Ezeliora, Chukwuebuka Lawrence Ezeliora
Abstract:
The research focused on the appraisal of the impact strength of mild steel cladding weld metal geometry. Over the years, poor welding has resulted in failures of engineering components, poor material quality, collapse of welded materials, and failures in material strength, as a result of poor selection and combination of welding input process parameters. Tungsten inert gas (TIG) welding with weld specimens of length 60, width 40, and thickness 10 was used for the experiment. A butt joint was prepared, and the TIG welding process was used to perform the twenty (20) experimental runs. Response surface methodology (RSM) was used to model and analyze the system, with the experimental design used to collect the data for an adequate polynomial approximation. The key parameters considered in this work are welding current, gas flow rate, welding speed, and voltage; their ranges were selected from the literature and the design. RSM, implemented with a central composite design (CCD), was used to generate the design matrix, obtain a quadratic model, evaluate the interactions between the factors, and optimize the factors and the response. The results show that the best impact strength of the mild steel cladding weld metal geometry is 115.419 joules, obtained at the optimum input process parameters: current 180.4 A, voltage 23.99 V, welding speed 142.7 mm/s, and gas flow rate 10.8 L/min. The optimal solution provides a guide to achieving optimal impact strength of the weldment when welding with tungsten inert gas (TIG).
Keywords: mild steel, impact strength, response surface, bead geometry, welding
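A sketch of the RSM modeling step in Python: fit a full quadratic response surface (linear, interaction, and squared terms) of impact strength against the four inputs and pick the best predicted run. The 20-run design and responses are synthetic placeholders, not the study's measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Fit a quadratic response surface of impact strength vs. the four weld
# inputs. The design matrix and responses are synthetic placeholders.
rng = np.random.default_rng(4)
X = np.column_stack([
    rng.uniform(160, 200, 20),   # current, A
    rng.uniform(20, 26, 20),     # voltage, V
    rng.uniform(120, 160, 20),   # welding speed
    rng.uniform(8, 13, 20),      # gas flow rate, L/min
])
y = 100 + 0.05 * X[:, 0] - 0.01 * (X[:, 2] - 140) ** 2 + rng.normal(0, 1, 20)

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)
best = X[np.argmax(model.predict(quad.transform(X)))]
print("predicted-best run settings:", best)
```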
75 Coupled Exciton - Surface Plasmon Polariton Enhanced Photoresponse of Two-Dimensional Hydrogenated Honeycomb Silicon Boride
Authors: Farzaneh Shayeganfar, Ali Ramazani
Abstract:
Excitons (strongly interacting electron-hole pairs) and hot carriers created by surface plasmon polaritons have been demonstrated in nanoscale optoelectronic devices, enhancing the photoresponse of the system. Herein, we employ a quantum framework to consider coupled exciton-hot carrier effects on the photovoltaic energy distribution, scattering processes, polarizability, and light emission of a 2D semiconductor. We use density functional theory (DFT) to computationally design a 2D honeycomb silicon boride (SiB) monolayer semi-functionalized with H atoms, suitable for photovoltaics. The dynamical stability and the electronic and optical properties of the SiB and semi-hydrogenated SiB structures were investigated utilizing the Tran-Blaha modified Becke-Johnson (TB-mBJ) potential. The calculated phonon dispersion shows that while the unhydrogenated SiB monolayer is dynamically unstable, surface semi-hydrogenation stabilizes the structure and leads to a transition from metallic to semiconducting conductivity, with a direct band gap of about 1.57 eV, appropriate for photovoltaic applications. The optical conductivity of this H-SiB structure, determined using the random phase approximation (RPA), shows that light absorption should begin at the edge of the visible range. Additionally, due to hydrogenation, the reflectivity spectrum declines sharply with respect to the unhydrogenated spectrum in the IR and visible ranges. The energy band gap remains direct and increases from 0.9 to 1.8 eV as the strain is varied from -6% (compressive) to +6% (tensile). Compressive and tensile strains lead, respectively, to red and blue shifts of the optical conductivity threshold around the visible range. Overall, this study suggests that H-SiB monolayers are suitable two-dimensional solar cell materials.
Keywords: surface plasmon, hot carrier, strain engineering, valley polariton
74 Rainfall Estimation over Northern Tunisia by Combining Meteosat Second Generation Cloud Top Temperature and Tropical Rainfall Measuring Mission Microwave Imager Rain Rates
Authors: Saoussen Dhib, Chris M. Mannaerts, Zoubeida Bargaoui, Ben H. P. Maathuis, Petra Budde
Abstract:
In this study, a new method to delineate rain areas in northern Tunisia is presented. The proposed approach is based on blending the geostationary Meteosat Second Generation (MSG) infrared (IR) channel with the low-earth-orbiting passive Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). Blending these two products requires two main steps. First, the rainy pixels are identified: this is achieved through a classification using the MSG IR 10.8 channel and the water vapor channel WV 6.2, applying a threshold on the temperature difference of less than 11 K, which approximates clouds with a high likelihood of precipitation. The second step consists of fitting the relation between IR cloud-top temperature and TMI rain rates. The correlation between these two variables is negative: with decreasing temperature, rainfall intensity increases. The fitted equation is then applied to a whole day of 15-minute-interval MSG images, which are summed. To validate this combined product, daily extreme rainfall events that occurred during the period 2007-2009 were selected, using a threshold criterion of large rainfall depth (> 50 mm/day) occurring at one or more rainfall stations. Inverse distance interpolation was applied to generate rainfall maps for the drier summer season (May to October) and the wet winter season (November to April). The evaluation of the estimated rainfall combining MSG and TMI was very encouraging: all events were detected as rainy, and the correlation coefficients were much better than those of previously evaluated products over the study area, such as the MSG-MPE and PERSIANN products. The combined product performed better during the wet season. We also notice an overestimation of the maximum estimated rainfall for many events.
Keywords: combination, extreme, rainfall, TMI-MSG, Tunisia
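A compact Python sketch of the two-step blending on synthetic arrays: flag rainy pixels where the IR10.8 − WV6.2 brightness-temperature difference is below 11 K, then apply an assumed linear temperature-to-rain-rate fit to each 15-minute image and sum over the day. The arrays and fit coefficients are placeholders, not the calibrated values:

```python
import numpy as np

# Two-step blending sketch: (1) delineate rain areas with the IR10.8 - WV6.2
# threshold, (2) apply a fitted IR-temperature -> rain-rate relation per
# 15-minute slot and sum over the day. All values are synthetic.
rng = np.random.default_rng(5)
n_slots = 96                                   # 24 h of 15-minute MSG images
ir108 = rng.uniform(200.0, 290.0, size=(n_slots, 50, 50))   # K
wv62 = ir108 - rng.uniform(0.0, 25.0, size=ir108.shape)     # K

rainy = (ir108 - wv62) < 11.0                  # step 1: rain-area delineation

# Step 2: assumed linear fit to TMI rain rates, rate = a*T + b (mm/h);
# colder cloud tops give higher rates, hence the negative slope.
a, b = -0.08, 22.0
rate = np.clip(a * ir108 + b, 0.0, None) * rainy

daily_mm = rate.sum(axis=0) * 0.25             # each slot spans 0.25 h
print("max daily accumulation (mm):", daily_mm.max())
```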
73 Investigating the Motion of a Viscous Droplet in Natural Convection Using the Level Set Method
Authors: Isadora Bugarin, Taygoara F. de Oliveira
Abstract:
Binary fluids and emulsions in general are present in a vast range of industrial, medical, and scientific applications, showing complex behaviors that define the flow dynamics and the system operation. However, the literature describing these fluids in non-isothermal models is still limited. The present work provides a detailed investigation of droplet migration due to natural convection in a square enclosure, aiming to clarify the effects of drop viscosity on the flow dynamics by showing how distinct viscosity ratios (droplet/ambient fluid) influence the drop motion and the final movement pattern in the stationary regime. The analysis considered distinct combinations of Rayleigh number, drop initial position, and viscosity ratio. The Navier-Stokes and energy equations were solved, considering the Boussinesq approximation, for laminar flow using the finite difference method combined with the level set method for the binary flow solution. Previous results collected by the authors showed that the Rayleigh number and the drop initial position drastically affect the motion pattern of the droplet. For Ra ≥ 10⁴, two very marked behaviors were observed, depending on the initial position: the drop travels either a helical path towards the center or a cyclic circular path resulting in a closed cycle in the stationary regime. Varying the viscosity ratio significantly altered these patterns, revealing a large influence on the droplet path, capable of modifying the flow's behavior. Analyses of the viscosity effects on the flow's unsteady Nusselt number were also performed. Among the relevant contributions of this work is the potential use of the flow's initial conditions as a mechanism to control droplet migration inside the enclosure.
Keywords: binary fluids, droplet motion, level set method, natural convection, viscosity
72 Preliminary Composite Overwrapped Pressure Vessel Design for Hydrogen Storage Using Netting Analysis and American Society of Mechanical Engineers Section X
Authors: Natasha Botha, Gary Corderely, Helen M. Inglis
Abstract:
With the move to cleaner energy applications, the transport industry is working towards on-board hydrogen- or compressed-natural-gas-fuelled vehicles. A popular storage method is composite overwrapped pressure vessels (COPVs), because of their high strength-to-weight ratios. These COPVs must be designed according to international standards; this study aims to provide a preliminary design for a 350 bar Type IV COPV (i.e., a polymer liner with a composite overwrap). Netting analysis, a popular analytical approach, is used as a first step to generate an initial design concept for the composite winding. This design is then improved upon by following the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code, Section X: Fibre-Reinforced Composite Pressure Vessels. A design program based on these two approaches is developed in Python. A numerical model of a burst test simulation is developed for both approaches and compared. The results indicate that netting analysis provides a good preliminary design, while the ASME-based design is more robust and accurate, as it includes a better approximation of the material behaviour. Netting analysis is an easy method to follow for an initial concept design of the composite winding when not all the material characteristics are known. Once these characteristics have been fully defined through experimental testing, an ASME-based design should always follow, to ensure that all designs conform to international standards and practices. Future work entails more detailed numerical testing of the design for improvement, including the boss design. Once finalised, prototype manufacturing and experimental testing will be conducted, and the results will be used to improve the COPV design.
Keywords: composite overwrapped pressure vessel, netting analysis, design, ASME Section X, fiber-reinforced, hydrogen storage
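Classical netting analysis assumes the fibers alone carry the pressure load, giving closed-form required fiber thicknesses for the helical and hoop windings of a cylinder. A minimal Python sketch with illustrative inputs (not the study's design values):

```python
import math

# Netting analysis for a cylindrical COPV: the fibers alone carry the
# pressure load, giving required helical and hoop fiber thicknesses
#   t_helical = P*r / (2*sigma_f*cos^2(alpha)),
#   t_hoop    = P*r*(2 - tan^2(alpha)) / (2*sigma_f).
# Inputs are illustrative assumptions, not the study's design values.
P = 35.0e6                   # design pressure: 350 bar in Pa
r = 0.175                    # assumed liner radius, m
sigma_f = 2.2e9              # assumed allowable fiber stress, Pa
alpha = math.radians(15.0)   # assumed helical winding angle

t_helical = P * r / (2.0 * sigma_f * math.cos(alpha) ** 2)
t_hoop = P * r * (2.0 - math.tan(alpha) ** 2) / (2.0 * sigma_f)
print(f"helical thickness ~ {t_helical*1e3:.2f} mm, "
      f"hoop thickness ~ {t_hoop*1e3:.2f} mm")
```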
71 Pollution Associated with Combustion in Stove to Firewood (Eucalyptus) and Pellet (Radiata Pine): Effect of UVA Irradiation
Authors: Y. Vásquez, F. Reyes, P. Oyola, M. Rubio, J. Muñoz, E. Lissi
Abstract:
In several cities in Chile there is significant urban pollution, particularly in Santiago and in cities in the south, where biomass is used as heating and cooking fuel in a large proportion of homes. This has generated interest in knowing which factors can be modulated to control the level of pollution. In this project, a photochemical chamber (14 m³) was conditioned and set up, equipped with gas monitors (e.g., CO, NOx, O3) and PM monitors (e.g., DustTrak, DMPS, Harvard impactors). The chamber volume could be exposed to UVA lamps, producing a spectrum similar to that generated by the sun. In this chamber, PM and gas emissions associated with biomass burning were studied in the presence and absence of radiation. From the comparative analysis of a wood stove (Eucalyptus globulus) and a pellet stove (radiata pine), it can be concluded that, to a first approximation, 9-nitroanthracene, 4-nitropyrene, levoglucosan, water-soluble potassium, and CO present the characteristics of tracers. However, some of them show properties that interfere with this role. For example, levoglucosan is decomposed by radiation; 9-nitroanthracene and 4-nitropyrene are both emitted and formed under radiation; and 9-nitroanthracene has a vapor pressure that leads to partitioning between the gas phase and particulate matter. From this analysis, it can be concluded that K⁺ is the compound that meets the known properties of a tracer. The PM2.5 emission measured for the automatic pellet stove used in this thesis project was two orders of magnitude smaller than that registered for the manual wood stove. This has encouraged the use of pellet stoves for indoor heating, particularly in south-central Chile. However, it should be considered that the use of pellets is not without problems, as pellet stoves generate high concentrations of nitro-PAHs (secondary organic contaminants), in particular the highly toxic 4-nitropyrene. Moreover, the primary and secondary particulate matter associated with pellet burning shows a smaller size distribution, which leads to deeper penetration of the particles and their toxic components into the respiratory system.
Keywords: biomass burning, photochemical chamber, particulate matter, tracers
Procedia PDF Downloads 19470 Effect of Out-Of-Plane Deformation on Relaxation Method of Stress Concentration in a Plate
Authors: Shingo Murakami, Shinichi Enoki
Abstract:
In structures, stress concentration is a factor in fatigue fracture. Ideally, stress concentration should be avoided, but in practice this is difficult, so relaxation of the stress concentration is important. Stress concentrations arise at notches and circular holes. One relaxation method covers the notch or circular hole with a composite patch; it is used to repair aircraft wings, but it has not been systematized, and composites are more expensive than single materials. Accordingly, we propose a relaxation method in which a single-material patch covers the notch or circular hole, and we aim to systematize this method. We performed a finite element analysis (FEA) using a three-dimensional model of a patch adhered to a plate with a circular hole, with a uniaxial tensile load acting on the patched plate. In a three-dimensional FEA model, it is not easy to model the adhesion layer. The yield stress of the adhesive is generally smaller than that of the adherents, so the adhesion layer undergoes plastic deformation before the adherents reach their yield stress. Therefore, we propose a three-dimensional FEA model in which a nonlinear elastic region, calculated by a bilinear approximation, is applied to the adhesion layer. To confirm the usefulness of the analysis model, we compared the analysis results with tensile test results; the two agreed, confirming that the model is useful. The three-dimensional analysis showed that an out-of-plane deformation occurs in the patched plate with a circular hole, and this deformation causes a stress increase in the plate. Therefore, we investigated whether the out-of-plane deformation affects relaxation of the stress concentration in the plate with a circular hole under this relaxation method. As a result, it was confirmed that the out-of-plane deformation inhibits relaxation of the stress concentration in the plate with a circular hole.Keywords: stress concentration, patch, out-of-plane deformation, Finite Element Analysis
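The bilinear approximation mentioned above can be stated compactly: below a knee strain the adhesive follows its initial modulus, and beyond it a reduced tangent modulus. A minimal Python sketch follows; the modulus values and knee strain are assumed placeholders, not the paper's measured adhesive properties.

```python
import numpy as np

def bilinear_stress(strain, E1, E2, eps_y):
    """Bilinear approximation of an adhesive's stress-strain response.

    E1    : initial (elastic) modulus (Pa)
    E2    : post-knee tangent modulus (Pa), with E2 < E1
    eps_y : strain at the knee of the bilinear curve
    """
    strain = np.asarray(strain, dtype=float)
    sigma_y = E1 * eps_y  # stress at the knee
    return np.where(strain <= eps_y,
                    E1 * strain,
                    sigma_y + E2 * (strain - eps_y))

# Illustrative values for a generic structural adhesive (assumed):
eps = np.linspace(0.0, 0.05, 6)
print(bilinear_stress(eps, E1=2.0e9, E2=0.2e9, eps_y=0.02) / 1e6)  # MPa
```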
Procedia PDF Downloads 26669 Effect of Out-Of-Plane Deformation on Relaxation Method of Stress Concentration in a Plate with a Circular Hole
Authors: Shingo Murakami, Shinichi Enoki
Abstract:
In structures, stress concentration is a factor in fatigue fracture. Ideally, stress concentration should be avoided, but in practice this is difficult, so relaxation of the stress concentration is important. Stress concentrations arise at notches and circular holes. One relaxation method covers the notch or circular hole with a composite patch; it is used to repair aircraft wings, but it has not been systematized, and composites are more expensive than single materials. Accordingly, we propose a relaxation method in which a single-material patch covers the notch or circular hole, and we aim to systematize this method. We performed a finite element analysis (FEA) using a three-dimensional model of a patch adhered to a plate with a circular hole, with a uniaxial tensile load acting on the patched plate. In a three-dimensional FEA model, it is not easy to model the adhesion layer. The yield stress of the adhesive is generally smaller than that of the adherents, so the adhesion layer undergoes plastic deformation before the adherents reach their yield load. Therefore, we propose a three-dimensional FEA model in which a nonlinear elastic region, calculated by a bilinear approximation, is applied to the adhesion layer. To confirm the usefulness of the analysis model, we compared the analysis results with tensile test results; the two agreed, confirming that the model is useful. The three-dimensional analysis showed that an out-of-plane deformation occurs in the patched plate with a circular hole, and this deformation causes a stress increase in the plate. Therefore, we investigated whether the out-of-plane deformation affects relaxation of the stress concentration in the plate with a circular hole under this relaxation method. As a result, it was confirmed that the out-of-plane deformation inhibits relaxation of the stress concentration in the plate with a circular hole.Keywords: stress concentration, patch, out-of-plane deformation, Finite Element Analysis
Procedia PDF Downloads 30168 Assessing the Survival Time of Hospitalized Patients in Eastern Ethiopia During 2019–2020 Using the Bayesian Approach: A Retrospective Cohort Study
Authors: Chalachew Gashu, Yoseph Kassa, Habtamu Geremew, Mengestie Mulugeta
Abstract:
Background and Aims: Severe acute malnutrition remains a significant health challenge, particularly in low- and middle-income countries. The aim of this study was to determine the survival time of under-five children with severe acute malnutrition. Methods: A retrospective cohort study was conducted on under-five children with severe acute malnutrition. The study included 322 inpatients admitted to Chiro Hospital in Chiro, Ethiopia, between September 2019 and August 2020, whose data were obtained from medical records. Survival functions were analyzed using Kaplan–Meier plots and log-rank tests. Survival time was further analyzed using the Cox proportional hazards model and Bayesian parametric survival models, employing integrated nested Laplace approximation (INLA) methods. Results: Among the 322 patients, 118 (36.6%) died as a result of severe acute malnutrition. The estimated median survival time for inpatients was 2 weeks. Model selection criteria favored the Bayesian Weibull accelerated failure time model, which showed that age, body temperature, pulse rate, nasogastric (NG) tube usage, hypoglycemia, anemia, diarrhea, dehydration, malaria, and pneumonia significantly influenced survival time. Conclusions: Among under-five children with severe acute malnutrition, this study revealed that those below 24 months of age and those with altered body temperature or pulse rate, NG tube usage, hypoglycemia, or comorbidities such as anemia, diarrhea, dehydration, malaria, and pneumonia had shorter survival times. To reduce the death rate of children under 5 years of age, community management of acute malnutrition should be designed to ensure early detection and to improve access and coverage for malnourished children.Keywords: Bayesian analysis, severe acute malnutrition, survival data analysis, survival time
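For readers who want to reproduce the general workflow, the sketch below uses the Python lifelines library to fit a Kaplan–Meier curve and a frequentist Weibull accelerated failure time model. Note the swap: the authors' actual analysis was Bayesian via INLA, and the toy records here are fabricated placeholders purely to show the data layout.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, WeibullAFTFitter

# Hypothetical records mimicking the study's structure: survival time in
# weeks, an event indicator (1 = died), and two of the reported covariates.
df = pd.DataFrame({
    "time_weeks": [1, 2, 2, 3, 4, 5, 6, 8],
    "died":       [1, 1, 0, 1, 0, 1, 0, 0],
    "age_lt_24m": [1, 0, 1, 1, 0, 1, 0, 1],
    "anemia":     [1, 0, 1, 1, 0, 0, 0, 1],
})

# Kaplan-Meier estimate of the survival function
km = KaplanMeierFitter()
km.fit(df["time_weeks"], event_observed=df["died"])
print(km.median_survival_time_)

# Weibull accelerated failure time model with covariates
aft = WeibullAFTFitter()
aft.fit(df, duration_col="time_weeks", event_col="died")
aft.print_summary()
```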
Procedia PDF Downloads 47