Search results for: three-parameter sine curve fitting
179 Prediction of Finned Projectile Aerodynamics Using a Lattice-Boltzmann Method CFD Solution
Authors: Zaki Abiza, Miguel Chavez, David M. Holman, Ruddy Brionnaud
Abstract:
In this paper, the prediction of the aerodynamic behavior of the flow around a finned projectile will be validated using a Computational Fluid Dynamics (CFD) solution, XFlow, based on the Lattice-Boltzmann Method (LBM). XFlow is an innovative CFD software developed by Next Limit Dynamics. It is based on a state-of-the-art Lattice-Boltzmann Method which uses a proprietary particle-based kinetic solver and an LES turbulence model coupled with the generalized law of the wall (WMLES). The Lattice-Boltzmann method discretizes the continuous Boltzmann equation, a transport equation for the particle probability distribution function. From the Boltzmann transport equation, and by means of the Chapman-Enskog expansion, the compressible Navier-Stokes equations can be recovered. However, to simulate compressible flows, this method has a Mach number limitation because of the lattice discretization. Thanks to this flexible particle-based approach, the traditional meshing process is avoided, the discretization stage is strongly accelerated, reducing engineering costs, and computations on complex geometries are affordable in a straightforward way. The projectile used in this work is the Army-Navy Basic Finned Missile (ANF) with a caliber of 0.03 m. The analysis will consist of varying the Mach number upward from M=0.5, comparing the axial force coefficient, the normal force slope coefficient and the pitch moment slope coefficient of the finned projectile obtained by XFlow with the experimental data. The slope coefficients will be obtained using finite difference techniques in the linear range of the polar curve. The aim of such an analysis is to find the limiting Mach number beyond which the effects of high fluid compressibility (related to the transonic flow regime) cause the XFlow simulations to differ from the experimental results. This will allow identifying the critical Mach number which limits the validity of the isothermal formulation of XFlow and beyond which a fully compressible solver implementing coupled momentum-energy equations would be required.
Keywords: CFD, computational fluid dynamics, drag, finned projectile, lattice-Boltzmann method, LBM, lift, Mach, pitch
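For reference, the lattice-Boltzmann evolution the abstract alludes to is commonly written in the single-relaxation-time (BGK) form below; this is the generic textbook statement, not XFlow's proprietary scheme.

```latex
% Lattice-Boltzmann BGK evolution for the discrete distributions f_i
% travelling along lattice velocities c_i (textbook form):
f_i(\mathbf{x} + \mathbf{c}_i \,\Delta t,\; t + \Delta t) - f_i(\mathbf{x}, t)
  = -\frac{\Delta t}{\tau}\left[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \right],
\qquad
\rho = \sum_i f_i, \quad \rho\,\mathbf{u} = \sum_i \mathbf{c}_i f_i .
```

A Chapman-Enskog expansion of this scheme recovers the Navier-Stokes equations with kinematic viscosity ν = c_s²(τ − Δt/2), where c_s is the lattice speed of sound; the low-Mach restriction mentioned above stems from the small-velocity expansion of f_i^eq.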
Procedia PDF Downloads 421
178 Inversion of PROSPECT+SAIL Model for Estimating Vegetation Parameters from Hyperspectral Measurements with Application to Drought-Induced Impacts Detection
Authors: Bagher Bayat, Wouter Verhoef, Behnaz Arabi, Christiaan Van der Tol
Abstract:
The aim of this study was to follow the canopy reflectance patterns in response to soil water deficit and to detect trends of changes in biophysical and biochemical parameters of grass (Poa pratensis species). We used visual interpretation, imaging spectroscopy and radiative transfer model inversion to monitor the gradual manifestation of water stress effects in a laboratory setting. Plots of 21 cm x 14.5 cm surface area with Poa pratensis plants that formed a closed canopy were subjected to water stress for 50 days. Canopy reflectance was measured on a regular weekly schedule. In addition, Leaf Area Index (LAI), Chlorophyll (a+b) content (Cab) and Leaf Water Content (Cw) were measured at regular time intervals. The 1-D bidirectional canopy reflectance model SAIL, coupled with the leaf optical properties model PROSPECT, was inverted using hyperspectral measurements by means of an iterative optimization method to retrieve vegetation biophysical and biochemical parameters. The relationships between retrieved LAI, Cab, Cw, and Cs (senescent material) and soil moisture content were established in two separate groups: stressed and non-stressed. To differentiate the water stress condition from the non-stressed condition, a threshold was defined based on the laboratory-produced Soil Water Characteristic (SWC) curve. All parameters retrieved by model inversion using canopy spectral data showed good correlation with soil water content under the water stress condition. These parameters co-varied with soil moisture content under the stress condition (Cab: R²=0.91, Cw: R²=0.97, Cs: R²=0.88 and LAI: R²=0.48) at the canopy level. To validate the results, the relationship between vegetation parameters measured in the laboratory and soil moisture content was established. The results were fully in agreement with the modeling outputs and confirmed the results produced by radiative transfer model inversion and spectroscopy. Since water stress changes all parts of the spectrum, we concluded that analysis of the reflectance spectrum in the VIS-NIR-MIR region is a promising tool for monitoring water stress impacts on vegetation.
Keywords: hyperspectral remote sensing, model inversion, vegetation responses, water stress
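The iterative optimization step can be pictured with a short sketch. This is a minimal bounded least-squares inversion, assuming a forward model function that maps [LAI, Cab, Cw, Cs] to a simulated canopy reflectance spectrum; the function name, parameter ordering, initial guess, and bounds are placeholders, not the authors' actual code (existing open implementations such as the prosail Python package could supply the forward model).

```python
# Minimal sketch of PROSPECT+SAIL inversion by iterative optimization.
# `forward_model` is assumed to map [LAI, Cab, Cw, Cs] to a simulated
# canopy reflectance spectrum sampled at the measured wavelengths.
import numpy as np
from scipy.optimize import least_squares

def invert_canopy_spectrum(measured_refl, forward_model):
    x0 = np.array([2.0, 40.0, 0.01, 0.1])    # initial guess: LAI, Cab, Cw, Cs
    lb = np.array([0.1, 0.0, 1e-4, 0.0])     # illustrative lower bounds
    ub = np.array([8.0, 100.0, 0.08, 1.0])   # illustrative upper bounds

    def residuals(x):
        return forward_model(x) - measured_refl  # spectral misfit to minimize

    fit = least_squares(residuals, x0, bounds=(lb, ub))
    lai, cab, cw, cs = fit.x
    return lai, cab, cw, cs
```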
Procedia PDF Downloads 225
177 Adsorption of Congo Red from Aqueous Solution by Raw Clay: A Fixed Bed Column Study
Abstract:
The discharge of dyes in industrial effluents is of great concern because their presence and accumulation have a toxic or carcinogenic effect on living species. The removal of such compounds at such low levels is a difficult problem. Physicochemical techniques such as coagulation, flocculation, ozonation, reverse osmosis and adsorption on activated carbon, manganese oxide, silica gel and clay are among the methods employed. The adsorption process is an effective and attractive proposition for the treatment of dye-contaminated wastewater. Activated carbon adsorption in fixed beds is a very common technology in the treatment of water and especially in decolouration processes. However, it is expensive, and the powdered form is difficult to separate from the aquatic system when it becomes exhausted or the effluent reaches the maximum allowable discharge level. The regeneration of exhausted activated carbon by chemical and thermal procedures is also expensive and results in loss of the sorbent. Dye molecules also have a very high affinity for clay surfaces and are readily adsorbed when added to a clay suspension. The elimination of organic dyes by clay has been studied by several researchers. The focus of this research was to evaluate the adsorption potential of raw clay in removing Congo red from aqueous solutions using a laboratory fixed-bed column. The continuous sorption process was conducted in this study in order to simulate industrial conditions. The effect of process parameters, such as inlet flow rate, adsorbent bed height and initial adsorbate concentration, on the shape of breakthrough curves was investigated. A glass column with an internal diameter of 1.5 cm and a height of 30 cm was used as the fixed-bed column. The pH of the feed solution was set at 7. Experiments were carried out at different bed heights (5-20 cm), influent flow rates (1.6-8 mL/min) and influent Congo red concentrations (10-50 mg/L). The obtained results showed that the adsorption capacity increases with bed depth and initial concentration and decreases at higher flow rates. Column regeneration was possible for four adsorption-desorption cycles. The clay column study demonstrates an excellent adsorption capacity for the removal of Congo red from aqueous solution. Uptake of Congo red through a fixed-bed column was dependent on the bed depth, influent Congo red concentration and flow rate.
Keywords: adsorption, breakthrough curve, clay, congo red, fixed bed column, regeneration
Procedia PDF Downloads 334
176 The Effects of Shift Work on Neurobehavioral Performance: A Meta-Analysis
Authors: Thomas Vlasak, Tanja Dujlociv, Alfred Barth
Abstract:
Shift work is an essential element of modern labor, ensuring ideal conditions of service for today’s economy and society. Despite these beneficial properties, its impact on the neurobehavioral performance of exposed subjects remains controversial. This meta-analysis aims to provide a first summary of the effects regarding the association between shift work exposure and different cognitive functions. A literature search was performed via the databases PubMed, PsyINFO, PsyARTICLES, MedLine, PsycNET and Scopus, including eligible studies until December 2020 that compared shift workers with non-shift workers on neurobehavioral performance tests. A random-effects model was carried out using Hedges' g as the meta-analytical effect size, with a restricted maximum-likelihood estimator, to summarize the mean differences between the exposure group and controls. The heterogeneity of effect sizes was addressed by a sensitivity analysis using funnel plots, Egger's tests, p-curve analysis, meta-regressions, and subgroup analysis. The meta-analysis included 18 studies, resulting in a total sample of 18,802 participants and 37 effect sizes concerning six different neurobehavioral outcomes. The results showed significantly worse performance in shift workers compared to non-shift workers in the following cognitive functions, with g (95% CI): processing speed 0.16 (0.02 - 0.30), working memory 0.28 (0.51 - 0.50), psychomotor vigilance 0.21 (0.05 - 0.37), cognitive control 0.86 (0.45 - 1.27) and visual attention 0.19 (0.11 - 0.26). Neither significant moderating effects of publication year or study quality nor significant subgroup differences regarding type of shift or type of profession were indicated for the cognitive outcomes. These are the first meta-analytical findings that associate shift work with decreased cognitive performance in processing speed, working memory, psychomotor vigilance, cognitive control, and visual attention. Further studies should focus on a more homogeneous measurement of cognitive functions, a precise assessment of the experience of shift work, and occupation types which are underrepresented in the current literature (e.g., law enforcement). In occupations where shift work is fundamental (e.g., healthcare, industry, law enforcement), protective countermeasures should be promoted for workers.
Keywords: meta-analysis, neurobehavioral performance, occupational psychology, shift work
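For readers unfamiliar with the effect size used, Hedges' g is the bias-corrected standardized mean difference; its textbook definition is:

```latex
% Hedges' g: standardized mean difference with small-sample correction J
g = J \cdot \frac{\bar{X}_{1} - \bar{X}_{2}}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}},
\qquad
J \approx 1 - \frac{3}{4(n_1 + n_2 - 2) - 1},
```

with the sign convention chosen so that positive g indicates worse performance in shift workers, matching the results reported above.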
Procedia PDF Downloads 108
175 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture
Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy
Abstract:
Background: Two new, simple and specific methods, first, a zero-crossing first-derivative technique and, second, a chemometric-assisted spectrophotometric artificial neural network (ANN), were developed and validated in accordance with ICH guidelines. Both methods were used for the simultaneous estimation of the two closely related antioxidant nutraceuticals Coenzyme Q10 (Q), also known as Ubidecarenone or Ubiquinone-10, and Vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method: by applying the first derivative, both Q and E were alternately determined, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated with their concentrations. The calibration curves are linear over the concentration ranges of 10-60 and 5.6-70 μg mL⁻¹ for Q and E, respectively. For the second method: an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or a concentration set) of 90 different synthetic mixtures containing Q and E, in wide concentration ranges between 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training set were recorded in the spectral region of 230–300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in the defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage, nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied to the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet with excellent recoveries. The ANN method was superior to the derivative technique, as the former determined both drugs under the non-linear experimental conditions. It also offers rapidity, high accuracy, and savings in effort and money; moreover, it requires no specialist analyst for its application. Although the ANN technique needed a large training set, it is the method of choice in the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network
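A compact sketch of the zero-crossing first-derivative step may help. The wavelengths below are taken from the abstract, while the calibration constants and array names are illustrative, not the authors' data.

```python
# Sketch of zero-crossing first-derivative quantitation: differentiate the
# spectrum, read the D1 amplitude of one analyte at the wavelength where
# the other analyte's derivative crosses zero, then apply a fitted line.
import numpy as np

def d1_amplitude(wavelengths, absorbance, target_nm):
    d1 = np.gradient(absorbance, wavelengths)        # first-derivative spectrum
    idx = np.argmin(np.abs(wavelengths - target_nm)) # nearest sampled wavelength
    return d1[idx]

# Q is read at 285 nm and E at 235 nm (per the abstract). Concentrations
# then follow from previously fitted calibration lines, e.g.
# c_q = (d1_amplitude(wl, a, 285.0) - intercept_q) / slope_q, where
# slope_q and intercept_q come from the linear calibration curve.
```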
Procedia PDF Downloads 447
174 Emergency Multidisciplinary Continuing Care Case Management
Authors: Mekroud Amel
Abstract:
Emergency departments are known for their workload, the variety of pathologies encountered, and the difficulties of management under a continuous influx of patients. The role of our service in the management of patients with two or three mild-to-moderate organ failures, involving several disciplines at the same time, as well as the effect of this management on the skills and efficiency of our team, has been demonstrated. Borderline cases between two, three, or even more disciplines, with instability of a vital function, have been successfully managed in the emergency room; the therapeutic procedures adopted, the consequences on the quality and level of care delivered by our team, the logistical consequences, and the pedagogical consequences are demonstrated. The consequences for emergency teams are positive; in rare situations, they are negative. Regarding clinical situations, it is the entanglement of hemodynamic distress (with right, left, or global participation, tamponade, or low output with acute pulmonary edema and/or a state of shock) with respiratory distress (with more or less profound hypoxemia and haematosis disorders related to a bacterial or viral lung infection, pleurisy, pneumothorax, or a bronchoconstrictive crisis), with neurological disorders (such as recent stroke, a comatose state, or others), with metabolic disorders (such as hyperkalaemia, renal insufficiency, severe ionic disorders, or accidents with anti-vitamin K agents), and with or without septate effusion of one or more serous membranes, with or without tamponade. This is a retrospective, monocentric, descriptive study covering the period from 05.01.2022 to 10.31.2022. The purpose of our work is to search for a statistically significant link between the type of moderate-to-severe, multivisceral pathology managed in the emergency room and the efficiency of the healthcare team, its level of care, and the optional care offered to patients. Statistical test used: the chi-squared (χ²) test, to assess the link between the resolution of serious multidisciplinary cases in the emergency room and the effectiveness of the team in the management of complicated cases.
Keywords: emergency care teams, management of patients with dysfunction of more than one organ, learning curve, quality of care
Procedia PDF Downloads 80
173 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System
Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski
Abstract:
Electrokinetic disintegration is one of the high-voltage electric methods. The design of the system is exceptionally simple. Biomass flows through a system of pipes with electrodes mounted alongside that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between the electrode surface and the pipe jacket, which is the second pole and closes the circuit. The voltage ranges from 10 to 100 kV. The electrodes are supplied by normal single-phase grid current (230 V, 50 Hz), which is then converted to 24 V direct current in modules serving the individual electrodes, and this current directly feeds the electrodes. The installation is completely safe because the generated current does not exceed 250 mA and because the conductors are grounded. Therefore, there is no risk of electric shock to the personnel, even in the case of failure or incorrect connection. The low current means small energy consumption by the electrode, which is extremely low (only 35 W per electrode) compared to other methods of disintegration. The pipes with electrodes, with a diameter of DN150, are made of acid-proof steel and connected on both sides with 90° elbows ended with flanges. The available S and U types of pipes enable very convenient fitting of the system into existing installations and rooms, or facilitate space management in new applications. The system of pipes for electrokinetic disintegration may be installed horizontally, vertically, at an incline, on special stands or directly on the wall of a room. The number of pipes and electrodes is determined by the operating conditions as well as the quantity of substrate, type of biomass, content of dry matter, method of disintegration (single-pass or circulatory), mounting site, etc. The most effective method involves pre-treatment of the substrate, which may be pumped through the disintegration system on the way to the fermentation tank or recirculated in a buffered intermediate tank (substrate mixing tank). Biomass structure destruction in the process of electrokinetic disintegration shortens substrate retention time in the tank and accelerates biogas production. A significant intensification of the fermentation process was observed in systems operating at technical scale, with the greatest increase in biogas production reaching 18%. A secondary, but highly significant, effect for the energy balance is a tangible decrease in the energy input of the agitators in the tanks. This is due to the reduced viscosity of the biomass after disintegration and may result in energy savings reaching 20-30% of the earlier noted consumption. Other observed phenomena include a reduction in the layer of surface scum, a reduced tendency of the sewage to foam, and a successive decrease in the quantity of bottom sludge banks. Considering the above, the system for electrokinetic disintegration seems a very interesting and valuable solution among the specialist equipment offered for the processing of plant biomass, including Virginia fanpetals, before the process of methane fermentation.
Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals
Procedia PDF Downloads 377
172 Towards Automatic Calibration of In-Line Machine Processes
Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales
Abstract:
In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used ‘black-box’ linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a ‘white-box’ rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound, in the first case, and the friction of the material passing through the die, in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) calibration, to find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units then wind an outer layer around the core, and the material makes a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) calibration, to find the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel) and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die 2). The modeling of the error behavior using explicative rules is used to help improve the overall process model. Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for the trained models and for the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but it can be done offline just once. The calibration step is much faster and in under one minute obtained a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. A fast processing time and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve the overall process understanding. This has relevance for the quick optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
Keywords: data model, machine learning, industrial winding, calibration
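The calibration loop described above can be sketched in a few lines. Here `model` stands for any of the trained regressors (kernel ridge, SVR or MPART) with a scikit-learn-style predict method; the trial count and acceptance rule are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch of Gaussian random-search calibration: sample candidate inputs
# from per-input Gaussians and keep the candidate whose predicted output
# is closest to the target.
import numpy as np

def calibrate_inputs(model, input_means, input_stds, target, n_trials=10_000, seed=0):
    rng = np.random.default_rng(seed)
    best_x, best_err = None, np.inf
    for _ in range(n_trials):
        x = rng.normal(input_means, input_stds)          # candidate input vector
        err = abs(float(model.predict(x.reshape(1, -1))[0]) - target)
        if err < best_err:
            best_x, best_err = x, err
    return best_x, best_err  # inputs whose predicted output best matches target
```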
Procedia PDF Downloads 242
171 Early Prediction of Cognitive Impairment in Adults Aged 20 Years and Older Using Machine Learning and Biomarkers of Heavy Metal Exposure
Authors: Ali Nabavi, Farimah Safari, Mohammad Kashkooli, Sara Sadat Nabavizadeh, Hossein Molavi Vardanjani
Abstract:
Cognitive impairment presents a significant and increasing health concern as populations age. Environmental risk factors such as heavy metal exposure are suspected contributors, but their specific roles remain incompletely understood. Machine learning offers a promising approach to integrate multi-factorial data and improve the prediction of cognitive outcomes. This study aimed to develop and validate machine learning models to predict early risk of cognitive impairment by incorporating demographic, clinical, and biomarker data, including measures of heavy metal exposure. A retrospective analysis was conducted using 2011-2014 National Health and Nutrition Examination Survey (NHANES) data. The dataset included participants aged 20 years and older who underwent cognitive testing. Variables encompassed demographic information, medical history, lifestyle factors, and biomarkers such as blood and urine levels of lead, cadmium, manganese, and other metals. Machine learning algorithms were trained on 90% of the data and evaluated on the remaining 10%, with performance assessed through metrics such as accuracy, area under the curve (AUC), and sensitivity. The analysis included 2,933 participants. The stacking ensemble model demonstrated the highest predictive performance, achieving an AUC of 0.778 and a sensitivity of 0.879 on the test dataset. Key predictors included age, gender, hypertension, education level, urinary cadmium, and blood manganese levels. The findings indicate that machine learning can effectively predict the risk of cognitive impairment using a comprehensive set of clinical and environmental exposure data. Incorporating biomarkers of heavy metal exposure improved prediction accuracy and highlighted the role of environmental factors in cognitive decline. Further prospective studies are recommended to validate the models and assess their utility over time.
Keywords: cognitive impairment, heavy metal exposure, predictive models, aging
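A minimal sketch of the stacking-ensemble step, using scikit-learn; the choice of base learners, meta-learner and hyper-parameters here is illustrative, not the study's exact configuration.

```python
# Stacking ensemble with a 90/10 split and AUC/sensitivity evaluation,
# mirroring the evaluation protocol described in the abstract.
from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, recall_score

def fit_stacking(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.10, stratify=y, random_state=42)
    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
                    ("gb", GradientBoostingClassifier(random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000))
    stack.fit(X_tr, y_tr)
    proba = stack.predict_proba(X_te)[:, 1]
    auc = roc_auc_score(y_te, proba)               # cf. reported AUC 0.778
    sens = recall_score(y_te, stack.predict(X_te)) # cf. reported sensitivity 0.879
    return stack, auc, sens
```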
Procedia PDF Downloads 2
170 Durability Analysis of a Knuckle Arm Using VPG System
Authors: Geun-Yeon Kim, S. P. Praveen Kumar, Kwon-Hee Lee
Abstract:
A steering knuckle arm is the component that connects the steering system and the suspension system. Structural performances such as stiffness, strength, and durability are considered in its design process. A former study suggested a lightweight design of a knuckle arm considering the structural performances and using metamodel-based optimization. Six shape design variables were defined, and the optimum design was calculated by applying the kriging interpolation method. The finite element method was utilized to predict the structural responses. The suggested knuckle was made of the aluminum alloy Al6082, and its weight was reduced by about 60% in comparison with the base steel knuckle, satisfying the design requirements. Then, we investigated its manufacturability by performing forging analysis. The forging was done as a hot process, and the product was made through two-step forging. As the final step of the development process, the durability is investigated by using the flexible dynamic analysis software LS-DYNA and the pre- and post-processor eta/VPG. Generally, a carmaker does not share all the information with the part manufacturer. Thus, the part manufacturer is limited in predicting the durability performance at the full-car level. The eta/VPG has libraries of suspension, tire, and road models, which are commonly used parts. That makes full-car modeling possible. First, the full car is modeled by referencing the following information: overall length: 3,595 mm; overall width: 1,595 mm; CVW (Curb Vehicle Weight): 910 kg; front suspension: MacPherson strut; rear suspension: torsion beam axle; tire: 235/65R17. Second, the road is selected as cobblestone. The road condition of the cobblestone is almost 10 times more severe than that of a usual paved road. Third, dynamic finite element analysis using LS-DYNA is performed to predict the durability performance of the suggested knuckle arm. The life of the suggested knuckle arm is calculated as 350,000 km, which satisfies the design requirement set up by the part manufacturer. In this study, the overall design process of a knuckle arm is suggested, and it can be seen that the developed knuckle arm satisfies the durability requirement at the full-car level. The VPG analysis is successfully performed even though it does not give an exact prediction, since the full-car model is a rough one. Thus, this approach can be used effectively when the details of the full car are not given.
Keywords: knuckle arm, structural optimization, metamodel, forging, durability, VPG (Virtual Proving Ground)
Procedia PDF Downloads 419
169 Incidence and Predictors of Mortality Among HIV Positive Children on ART in Public Hospitals of Harer Town, Enrolled From 2011 to 2021
Authors: Getahun Nigusie
Abstract:
Background: Antiretroviral treatment reduces HIV-related morbidity and prolongs the survival of patients; however, there is a lack of up-to-date information concerning the treatment's long-term effect on the survival of HIV-positive children, especially in the study area. Objective: To assess the incidence and predictors of mortality among HIV-positive children on ART in public hospitals of Harer town who were enrolled from 2011 to 2021. Methodology: An institution-based retrospective cohort study was conducted among 429 HIV-positive children enrolled in the ART clinic from January 1st, 2011 to December 30th, 2021. Data were collected from medical cards by using a data extraction form. Descriptive analyses were used to summarize the results, and a life table was used to estimate survival probability at specific points of time after the introduction of ART. The Kaplan-Meier survival curve together with the log-rank test was used to compare survival between different categories of covariates, and a multivariable Cox proportional hazards regression model was used to estimate adjusted hazard ratios. Variables with p-values ≤0.25 in bivariable analysis were candidates for the multivariable analysis. Finally, variables with p-values < 0.05 were considered significant. Results: The study participants were followed for a total of 2549.6 child-years (30,596 child-months) with an overall mortality rate of 1.5 (95% CI: 1.1, 2.04) per 100 child-years. Their median survival time was 112 months (95% CI: 101-117). There were 38 children with unknown outcome, 39 deaths, and 55 children transferred out to different facilities. The overall survival at 6, 12, 24, and 48 months was 98%, 96%, 95%, and 94%, respectively. Being in WHO clinical stage four (AHR=4.55, 95% CI: 1.36, 15.24), having anemia (AHR=2.56, 95% CI: 1.11, 5.93), baseline low absolute CD4 count (AHR=2.95, 95% CI: 1.22, 7.12), stunting (AHR=4.1, 95% CI: 1.11, 15.42), wasting (AHR=4.93, 95% CI: 1.31, 18.76), poor adherence to treatment (AHR=3.37, 95% CI: 1.25, 9.11), having TB infection at enrollment (AHR=3.26, 95% CI: 1.25, 8.49), and no history of regimen change (AHR=7.1, 95% CI: 2.74, 18.24) were independent predictors of death. Conclusion: More than half of the deaths occurred within 2 years. Prevalent tuberculosis, anemia, wasting and stunting nutritional status, socioeconomic factors, and baseline opportunistic infection were independent predictors of death. Increased early screening and management of these predictors are required.
Keywords: human immunodeficiency virus-positive children, anti-retroviral therapy, survival, Ethiopia
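The survival analysis pipeline described (Kaplan-Meier estimation plus a Cox proportional hazards model) can be sketched with the lifelines package; the dataframe column names and covariate coding below are hypothetical, not the study's actual variables.

```python
# Sketch of Kaplan-Meier + Cox proportional hazards, assuming a dataframe
# with 'time_months', 'died' (1=death, 0=censored) and binary covariates.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

def analyze(df: pd.DataFrame):
    km = KaplanMeierFitter()
    km.fit(df["time_months"], event_observed=df["died"])
    surv_24m = km.survival_function_at_times(24).iloc[0]  # cf. ~95% reported

    cph = CoxPHFitter()
    cph.fit(df[["time_months", "died", "who_stage4", "anemia",
                "cd4_low", "stunting", "wasting"]],
            duration_col="time_months", event_col="died")
    return surv_24m, cph.hazard_ratios_  # adjusted hazard ratios (AHR)
```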
Procedia PDF Downloads 25
168 Interaction between Trapezoidal Hill and Subsurface Cavity under SH Wave Incidence
Authors: Yuanrui Xu, Zailin Yang, Yunqiu Song, Guanxixi Jiang
Abstract:
The influence of local topography on ground motion during earthquakes is an important subject in seismology. In mountainous areas with complex terrain, a tunnel is often the most effective transportation scheme. In such projects, the local terrain can be simplified into hills of different shapes, and the underground tunnel structure can be regarded as a subsurface cavity. The presence of the subsurface cavity affects the strength of the rock mass and changes its deformation and failure characteristics. Moreover, the scattering of elastic waves by underground structures usually interacts with local terrain, which leads to a significant influence on the surface displacement of the terrain. Therefore, it is of great practical significance in earthquake engineering and seismology to study the surface displacement of local terrain with underground tunnels. In this work, the domain is divided into three regions by the method of region matching. By using Bessel and Hankel functions of fractional order, the complex function method, and the wave function expansion method, the wavefield expression of SH waves is introduced. With the help of the constitutive relation between the displacement and the stress components, the hoop stress and radial stress are obtained subsequently. Then, utilizing the continuity conditions at the different region boundaries, the undetermined coefficients in the wavefields are solved by Fourier series expansion and truncation to a finite number of terms. Finally, the validity of the method is verified, and the surface displacement amplitude is calculated. The surface displacement amplitude curve is discussed in the numerical results. The results show that different parameters, such as the radius and buried depth of the tunnel, the wavenumber, and the incident angle of the SH wave, have a significant influence on the amplitude of surface displacement. For the underground tunnel, increasing the buried depth makes the surface displacement amplitude first increase and then decrease. Increasing the radius, however, produces the opposite behavior. Increasing the SH wavenumber enlarges the amplitude of surface displacement, and changing the incident angle clearly affects the amplitude fluctuation.
Keywords: method of region matching, scattering of SH wave, subsurface cavity, trapezoidal hill
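As a generic illustration of the wave function expansion method named above (a textbook form, not the paper's exact region-matched series), an antiplane (SH) displacement field around a cavity is expanded in Bessel and Hankel functions:

```latex
% Incident plus scattered SH wavefield; J_n is the Bessel function,
% H_n^{(1)} the Hankel function of the first kind, k the shear wavenumber,
% and alpha the incident angle. The coefficients A_n are fixed by the
% boundary continuity conditions, and the series is truncated to finite order.
u(r,\theta) = \sum_{n=-\infty}^{\infty} i^{\,n} J_n(kr)\, e^{\,i n (\theta - \alpha)}
            + \sum_{n=-\infty}^{\infty} A_n\, H_n^{(1)}(kr)\, e^{\,i n \theta}.
```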
Procedia PDF Downloads 135
167 Design of In-House Test Method for Assuring Packing Quality of Bottled Spirits
Authors: S. Ananthakrishnan, U. H. Acharya
Abstract:
Whether shopping in a retail location or via the internet, consumers expect to receive their products intact. When products arrive damaged or over-packaged, the result can be customer dissatisfaction and increased cost for retailers and manufacturers. Packaging performance depends on both the transport situation and the packaging design. During transportation, packaged products are subjected to variations in vibration from transport vehicles that differ in frequency and acceleration while moving to their destinations. Spirits manufactured by this company were being transported to various parts of the country by road, and there were instances of package breakage and customer complaints. The vibration experienced on a straight road at some speed may not be the same as the vibration experienced by the same vehicle on a curve at the same speed, and this vibration may negatively affect the product or packing. Hence, it was necessary to conduct a physical road test to understand the effect of vibration on the packaged products. Field transit trials have to be done before transportation, which entails high cost. The company management was therefore interested in developing an in-house test environment which would adequately represent transit conditions. With the objective of developing an in-house test condition that can accurately simulate the mechanical loading scenario prevailing during the storage, handling and transportation of the products, a brainstorming session was held with the concerned people to identify the critical factors affecting the vibration rate. The position of the corrugated box, the position of the bottle and the speed of the vehicle were identified as factors affecting the vibration rate. Several packing scenarios were identified by Design of Experiments methodology and simulated in the in-house test facility. Each condition was observed for 30 minutes, which was equivalent to 1000 km. The achieved vibration level was considered as the response. The average achieved in the simulated experiments was near the third quartile (Q3) of the actual data; thus, we were able to address around three-fourths of the actual phenomenon, and most of the cases in transit could be reproduced. The recommended test condition could generate a vibration level ranging from 9g to 15g, as against a maximum of only 7g that was being generated earlier. Thus, the company was able to test the packaged cartons satisfactorily in-house before transporting them to the destinations, assuring itself that bottle breakages would not happen.
Keywords: ANOVA, corrugated box, DOE, quartile
Procedia PDF Downloads 125
166 Study on Improvement of the Performance of Construction Projects Using Lean Principles
Authors: Sumaya Adina
Abstract:
The productivity of the construction industry has faced numerous challenges, rising costs, and scarce resources over the past forty years; one approach to improving the framework is therefore the use of lean techniques. The lean method stems from a form of production control first developed in manufacturing. At a time when sustainability and efficiency are essential, lean offers a clear path to make the construction industry fit for the future. Many construction professionals and experts have efficiently optimised development initiatives using lean construction (LC) techniques to reduce waste, maximise value creation, and focus on the processes that create real added value and continuous improvement, strengthening flexibility and adaptability. The present research was undertaken to study the improvement in the performance of construction projects using lean principles. The work is divided into three stages. Initially, a questionnaire survey was conducted on visual management techniques for improving the performance of construction projects. The questionnaire was distributed to civil engineers, architects, site managers, project managers, and full-time executives, with nearly 100 questionnaires shared with respondents. A total of 83 responses were received; the reliability of the data was determined, and analysis was done using SPSS software. In the second stage, the impact of value stream mapping on a live project was determined, and its performance in the form of time and cost reduction was evaluated. The case study examines a bunker-building project located in Kabul, Afghanistan; the project was planned conventionally, without considering lean concepts. To reduce all kinds of waste in the project, a plan was developed using the Vico Control software to visualize the value stream of the project. Finally, the impact of value stream mapping on the project's total cash flow was evaluated and compared by plotting the total cash flow curve using Vico software. As a result, labour costs were reduced by 33% and the duration of the project by 17%; reducing the duration also improved the cash flow of the entire project by 14%, raising it from negative 67% to negative 44%.
Keywords: lean construction, cost and time overrun, value stream mapping, construction efficiency
Procedia PDF Downloads 9
165 Coffee Consumption Has No Acute Effects on Glucose Metabolism in Healthy Men: A Randomized Crossover Clinical Trial
Authors: Caio E. G. Reis, Sara Wassell, Adriana L. Porto, Angélica A. Amato, Leslie J. C. Bluck, Teresa H. M. da Costa
Abstract:
Background: Multiple epidemiologic studies have consistently reported an association between increased coffee consumption and a lowered risk of Type 2 Diabetes Mellitus. However, the mechanisms behind this finding have not been fully elucidated. Objective: We investigated the effect of coffee (caffeinated and decaffeinated) on glucose effectiveness and insulin sensitivity using the stable isotope minimal model protocol with oral glucose administration in healthy men. Design: Fifteen healthy men underwent a 5-arm randomized crossover clinical trial, single-blinded (researchers). They consumed decaffeinated coffee, caffeinated coffee (with and without sugar), and two controls, water with and without sugar, each followed 1 hour later by an oral glucose tolerance test (75 g of available carbohydrate) with intravenous labeled dosing interpreted by the two-compartment minimal model (225 minutes). One-way ANOVA with Bonferroni adjustment was used to compare the effects of the tested beverages on glucose metabolism parameters. Results: Decaffeinated coffee resulted in 29% and 85% higher insulin sensitivity compared with caffeinated coffee and water, respectively, and caffeinated coffee showed 15% and 60% higher glucose effectiveness compared with decaffeinated coffee and water, respectively. However, these differences were not significant (p > 0.10). In the overall analysis (0-225 min), there were no significant differences in glucose effectiveness, insulin sensitivity, or glucose and insulin area under the curve between the groups. The beneficial effects of coffee did not seem to act in the short term (hours) on glucose metabolism parameters, mainly insulin sensitivity indices. The benefits of coffee consumption occur in the long term (years), as has been shown in the reduction of Type 2 Diabetes Mellitus risk in epidemiological studies. The clinical relevance of the present findings is that there is no need to avoid coffee as the drink of choice for healthy people. Conclusions: The findings of this study demonstrate that the consumption of caffeinated and decaffeinated coffee, with or without sugar, has no acute effects on glucose metabolism in healthy men. Further research, including long-term interventional studies, is needed to fully elucidate the mechanisms behind coffee's effects on the reduced risk of Type 2 Diabetes Mellitus.
Keywords: coffee, diabetes mellitus type 2, glucose, insulin
Procedia PDF Downloads 438
164 Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates
Authors: Jennifer Buz, Alvin Spivey
Abstract:
The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions. For example, companies are now able to purchase protections for vegetated land due to be clear-cut or purchase barren land for reforestation. Therefore, by actively preventing the destruction or decay of plant matter, or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently ensure that the areas sold for carbon credits have the vegetation mass (and therefore the carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique which enables us to remotely identify material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing is ultimately tied to many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors can be observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which we know the species. To accomplish this, we collaborated with the researchers at the Teakettle Experimental Forest. Our remote sensing data comes from our airborne “Kato” sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. In order to combine these images into a mosaic, we accounted for potential variations in atmospheric conditions throughout the data collection. To do this, we ran an open-source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was acquired using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high-resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications will feed directly into our carbon mass model.
Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation
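A minimal sketch of the species classification step, fitting one of the "wide variety of machine learning classifiers" mentioned to labeled reflectance spectra; the classifier choice, data-loading names, and hyper-parameters are illustrative assumptions.

```python
# Fit a classifier to per-pixel reflectance spectra labeled via the stem map.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def train_species_classifier(spectra: np.ndarray, species: np.ndarray):
    # spectra: (n_pixels, 472) reflectance values covering 400-2500 nm
    # species: (n_pixels,) labels from the geo-referenced Teakettle stem map
    clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
    scores = cross_val_score(clf, spectra, species, cv=5)  # text reports >95% accuracy
    clf.fit(spectra, species)
    return clf, scores.mean()
```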
Procedia PDF Downloads 131
163 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models
Authors: Lucille Alonso, Florent Renard
Abstract:
The characterization of urban heat islands (UHI) and their interactions with climate change and urban climates is a major research and public health issue, due to the increasing urbanization of the population. Addressing it requires a better knowledge of the UHI and the micro-climate in urban areas, by combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas in the Lyon Metropolitan Area (France) using a combination of traditionally used data such as topography, but also LiDAR (Light Detection And Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements by bike. These bicycle-borne weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon’s hyper-center. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. They are of various categories, such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, or proximity to, and density of, various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple: the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from an open data platform in Greater Lyon. Regarding the presence of low, medium, and high vegetation, ground, and buildings, several buffers around these factors were tested (5, 10, 20, 25, 50, 100, 200 and 500 m). The buffers with the best linear correlations with air temperature are 5 m around the measurement points for ground, 50 m for low and medium vegetation and for buildings, and 100 m for high vegetation. The explanatory model of the dependent variable is obtained by multiple linear regression of the remaining explanatory variables (Pearson correlation matrix with |r| < 0.7 and VIF < 5) by integrating a stepwise sorting algorithm. Moreover, holdout cross-validation is performed, due to its ability to detect over-fitting of multiple regression, although multiple regression provides internal validation and randomization (80% training, 20% testing). Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. Surface temperature is the most important variable in the estimation of air temperature. Other recurrent variables include distance to subway stations, distance to water areas, NDVI, the digital elevation model, the sky view factor, average vegetation density, and building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed at the microscale to be able to consider the local impact of trees, streets, and buildings. There is currently no network of fixed weather stations sufficiently deployed in central Lyon or most major urban areas. Therefore, it is necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
Keywords: air temperature, LIDAR, multiple linear regression, surface temperature, urban heat island
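A minimal sketch of the predictor screening and regression described: iteratively drop collinear variables (VIF ≥ 5), fit an ordinary least-squares model, and validate on a 20% holdout. The column handling and RMSE computation are illustrative, and the |r| < 0.7 pre-filter and stepwise selection are omitted for brevity.

```python
# VIF-based predictor screening + OLS with an 80/20 holdout split.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.model_selection import train_test_split

def screen_and_fit(X: pd.DataFrame, y: pd.Series):
    X = X.copy()
    while True:  # drop the worst-offending predictor until all VIFs < 5
        vifs = pd.Series(
            [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
            index=X.columns)
        if vifs.max() < 5:
            break
        X = X.drop(columns=[vifs.idxmax()])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.20, random_state=0)
    model = sm.OLS(y_tr, sm.add_constant(X_tr)).fit()
    rmse = ((model.predict(sm.add_constant(X_te)) - y_te) ** 2).mean() ** 0.5
    return model, rmse  # abstract reports ~72% variance explained, RMSE ~0.20 degC
```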
Procedia PDF Downloads 138
162 Enhanced CNN for Rice Leaf Disease Classification in Mobile Applications
Authors: Kayne Uriel K. Rodrigo, Jerriane Hillary Heart S. Marcial, Samuel C. Brillo
Abstract:
Rice leaf diseases significantly impact yield in rice-dependent countries, affecting their agricultural sectors. As part of precision agriculture, early and accurate detection of these diseases is crucial for effective mitigation practices and minimizing crop losses. Hence, this study proposes an enhancement to the Convolutional Neural Network (CNN), a widely used method for rice leaf disease image classification, by incorporating MobileViTV2, a recently advanced architecture that combines CNN and Vision Transformer models while maintaining fewer parameters, making it suitable for broader deployment on edge devices. Our methodology utilizes a publicly available rice disease image dataset from Kaggle, which was validated by a university structural biologist following the guidelines provided by the Philippine Rice Institute (PhilRice). Modifications to the dataset include renaming certain disease categories and augmenting the rice leaf image data through rotation, scaling, and flipping. The enhanced dataset was then used to train the MobileViTV2 model using the timm library. The results of our approach are as follows: the model achieved notable performance, with 98% accuracy in both training and validation, 6% training and validation loss, and Receiver Operating Characteristic (ROC) curve values ranging from 95% to 100% for each label. Additionally, the F1 score was 97%. These metrics demonstrate a significant improvement over a conventional CNN-based approach, which, in a previous 2022 study, achieved only 78% accuracy using 5 convolutional layers and 2 dense layers. Thus, it can be concluded that MobileViTV2, with its fewer parameters, outperforms traditional CNN models, particularly when applied to rice leaf disease image identification. For future work, we recommend extending this model to datasets validated by international rice experts and broadening the scope to accommodate biotic factors such as rice pest classification, as well as abiotic stressors such as climate, soil quality, and geographic information, which could improve the accuracy of disease prediction.
Keywords: convolutional neural network, MobileViTV2, rice leaf disease, precision agriculture, image classification, vision transformer
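A minimal sketch of loading a MobileViTV2 variant with the timm library for fine-tuning; the specific model width ('mobilevitv2_050'), optimizer and training-loop details are illustrative assumptions, not the study's full configuration.

```python
# Fine-tuning a pretrained MobileViTV2 on rice leaf disease classes.
import timm
import torch
import torch.nn as nn

def build_model(num_classes: int):
    # timm ships several MobileViTV2 widths; this picks the smallest one
    return timm.create_model("mobilevitv2_050", pretrained=True,
                             num_classes=num_classes)

def train_step(model, images, labels, optimizer):
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)  # forward pass on a batch
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: model = build_model(num_classes=4)
#        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
```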
Procedia PDF Downloads 29
161 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms
Authors: Habtamu Ayenew Asegie
Abstract:
Wealth, as opposed to income or consumption, implies a more stable and permanent status. Due to natural and human-made difficulties, household economies are diminished and well-being suffers. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. As a result, this study aims to predict a household’s wealth status using ensemble machine learning (ML) algorithms. In this study, design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosting Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset was accessed for this purpose from the Central Statistical Agency (CSA)'s database. Various data pre-processing techniques were employed, and model training was conducted using scikit-learn Python library functions. Model evaluation was executed using various metrics: accuracy, precision, recall, F1-score, area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations by domain experts. An optimal subset of hyper-parameters for the algorithms was selected through the grid search function for the best prediction. The RF model performed better than the rest of the algorithms, achieving an accuracy of 96.06%, and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms have accuracies of 91.53%, 88.44%, and 58.55%, respectively. The findings suggest that features like ‘Age of household head’, ‘Total children ever born’ in a family, ‘Main roof material’ of their house, ‘Region’ they lived in, whether a household uses ‘Electricity’ or not, and ‘Type of toilet facility’ of a household are determinant factors that should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved 82.28% in the domain experts' evaluation. Overall, the study shows ML techniques are effective in predicting the wealth status of households.
Keywords: ensemble machine learning, households wealth status, predictive model, wealth status prediction
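A minimal sketch of the grid-search hyper-parameter tuning described, shown for the best-performing Random Forest model; the grid values and scoring choice are illustrative, not the study's exact search space.

```python
# Grid search over Random Forest hyper-parameters with 5-fold CV.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

def tune_random_forest(X_train, y_train):
    grid = {
        "n_estimators": [200, 500],
        "max_depth": [None, 10, 20],
        "min_samples_leaf": [1, 5],
    }
    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid=grid,
        scoring="accuracy",  # the study also reports F1 and AUC-ROC
        cv=5,
        n_jobs=-1,
    )
    search.fit(X_train, y_train)
    return search.best_estimator_, search.best_params_, search.best_score_
```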
Procedia PDF Downloads 43
160 Leaf Epidermal Micromorphology as Identification Features in Accessions of Sesamum indicum L. Collected from Northern Nigeria
Authors: S. D. Abdul, F. B. J. Sawa, D. Z. Andrawus, G. Dan'ilu
Abstract:
Fresh leaves of twelve accessions of S. indicum were studied to examine their stomatal features, trichomes, epidermal cell shapes, and anticlinal cell-wall patterns, which may be used for the delimitation of the varieties. The twelve accessions of S. indicum studied have amphistomatic leaves, i.e., having stomata on both surfaces. Four stomatal complex types were observed, namely diacytic, anisocytic, tetracytic, and anomocytic. The anisocytic type was the most common, occurring on both surfaces of all the varieties, and occurred 100% in varieties lale-duk, ex-sudan and ex-gombe 6. One-way ANOVA revealed that there was no significant difference between the stomatal densities of ex-gombe 6, ex-sudan, adawa-wula, adawa-ting, ex-gombe 4 and ex-gombe 2. Accession adawa-ting (improved) had the smallest stomatal size (26.39 µm) with the highest stomatal density (79.08 per mm²), while variety adawa-wula possessed the largest stomatal size (74.31 µm) with the lowest stomatal density (29.60 per mm²); the exception was variety adawa-ting, whose stomatal size is larger (64.03 µm) but with a higher stomatal density (71.54 per mm²). Wavy, curved or undulate anticlinal wall patterns with irregular and/or isodiametric epidermal cell shapes were observed. These accessions were found to exhibit a high degree of heterogeneity in their trichome features. Ten types of trichomes were observed: unicellular, glandular peltate, capitate glandular, long unbranched uniseriate, short unbranched uniseriate, scale, multicellular, multiseriate capitate glandular, branched uniseriate, and stellate trichomes. The most frequent trichome type was short unbranched uniseriate, followed by long unbranched uniseriate (72.73% and 72.5%, respectively). The least frequent was multiseriate capitate glandular (11.5%). The high variation in trichome types and density, coupled with the stomatal complex types, suggests that these varieties of S. indicum probably have the capacity to conserve water. Furthermore, the leaf micromorphological features varied from one accession to another and hence are a good diagnostic and additional tool in the identification as well as nomenclature of the accessions of S. indicum.
Keywords: Sesamum indicum, stomata, trichomes, epidermal cells, taxonomy
Procedia PDF Downloads 277
159 Thermally Stable Crystalline Triazine-Based Organic Polymeric Nanodendrites for Mercury(2+) Ion Sensing
Authors: Dimitra Das, Anuradha Mitra, Kalyan Kumar Chattopadhyay
Abstract:
Organic polymers, constructed from light elements like carbon, hydrogen, nitrogen, oxygen, sulphur, and boron atoms, are an emergent class of non-toxic, metal-free, environmentally benign advanced materials. Covalent triazine-based polymers with a functional triazine group are a significant class of organic materials due to their remarkable stability arising out of strong covalent bonds. They can conventionally form hydrogen bonds, favour π–π contacts, and they were recently revealed to be involved in interesting anion–π interactions. The present work mainly focuses upon the development of a single-crystalline, highly cross-linked, triazine-based, nitrogen-rich organic polymer with nanodendritic morphology and significant thermal stability. The polymer has been synthesized through hydrothermal treatment of melamine and ethylene glycol, resulting in cross-polymerization via a condensation-polymerization reaction. The crystal structure of the polymer has been evaluated by employing the Rietveld whole-profile fitting method. The polymer has been found to be composed of monoclinic melamine having space group P21/a. A detailed insight into the chemical structure of the as-synthesized polymer has been elucidated by Fourier Transform Infrared Spectroscopy (FTIR) and Raman spectroscopic analysis. X-ray Photoelectron Spectroscopic (XPS) analysis has also been carried out for further understanding of the different types of linkages required to create the backbone of the polymer. The unique rod-like morphology of the triazine-based polymer has been revealed by the images obtained from Field Emission Scanning Electron Microscopy (FESEM) and Transmission Electron Microscopy (TEM). Interestingly, this polymer has been found to selectively detect mercury (Hg²⁺) ions at extremely low concentrations through fluorescence quenching, with a detection limit as low as 0.03 ppb. The high toxicity of mercury ions (Hg²⁺) arises from their strong affinity towards the sulphur atoms of biological building blocks. Even a trace quantity of this metal is dangerous for human health. Furthermore, owing to its small ionic radius and high solvation energy, Hg²⁺ remains encapsulated by water molecules, making its detection a challenging task. There are some existing reports on fluorescence-based heavy metal ion sensors using covalent organic frameworks (COFs), but reports on mercury sensing using triazine-based polymers are rather undeveloped. Thus, ultra-trace detection of Hg²⁺ ions with a high level of selectivity and sensitivity has contemporary significance. A plausible sensing mechanism has been proposed to understand the applicability of the material as a potential sensor. The impressive sensitivity of the polymer towards Hg²⁺ is the first report of mercury ion detection through the photoluminescence quenching technique in the field of highly crystalline triazine-based polymers (without the introduction of any sulphur groups or functionalization). This crystalline, metal-free organic polymer, being cheap, non-toxic, and scalable, has current relevance and could be a promising candidate for Hg²⁺ ion sensing at the commercial level.
Keywords: fluorescence quenching, mercury ion sensing, single-crystalline, triazine-based polymer
Procedia PDF Downloads 137158 Assessment of Incidence and Predictors of Mortality Among HIV Positive Children on ART in Public Hospitals of Harer Town Who Were Enrolled From 2011 to 2021
Authors: Getahun Nigusie Demise
Abstract:
Background: Antiretroviral treatment reduces HIV-related morbidity and prolongs the survival of patients; however, there is a lack of up-to-date information concerning the treatment's long-term effect on the survival of HIV-positive children, especially in the study area. Objective: The aim of this study was to assess the incidence and predictors of mortality among HIV-positive children on antiretroviral therapy (ART) in public hospitals of Harer town who were enrolled from 2011 to 2021. Methodology: An institution-based retrospective cohort study was conducted among 429 HIV-positive children enrolled in ART clinics from January 1st, 2011 to December 30th, 2021. Data were collected from medical cards using a data extraction form. Descriptive analyses were used to summarize the results, and a life table was used to estimate survival probability at specific time points after the introduction of ART. Kaplan-Meier survival curves together with the log-rank test were used to compare survival between different categories of covariates, and a multivariable Cox proportional hazards regression model was used to estimate adjusted hazard ratios (AHR). Variables with p-values ≤0.25 in the bivariable analysis were candidates for the multivariable analysis; finally, variables with p-values <0.05 were considered significant. Results: The study participants were followed for a total of 2,549.6 child-years (30,596 child-months), with an overall mortality rate of 1.5 (95% CI: 1.1, 2.04) per 100 child-years. Their median survival time was 112 months (95% CI: 101-117). There were 38 children with unknown outcomes, 39 deaths, and 55 children transferred out to different facilities. Overall survival at 6, 12, 24, and 48 months was 98%, 96%, 95%, and 94%, respectively. Being in WHO clinical stage four (AHR=4.55, 95% CI: 1.36, 15.24), having anemia (AHR=2.56, 95% CI: 1.11, 5.93), a low baseline absolute CD4 count (AHR=2.95, 95% CI: 1.22, 7.12), stunting (AHR=4.1, 95% CI: 1.11, 15.42), wasting (AHR=4.93, 95% CI: 1.31, 18.76), poor adherence to treatment (AHR=3.37, 95% CI: 1.25, 9.11), TB infection at enrollment (AHR=3.26, 95% CI: 1.25, 8.49), and no history of regimen change (AHR=7.1, 95% CI: 2.74, 18.24) were independent predictors of death. Conclusion: More than half of the deaths occurred within 2 years. Prevalent tuberculosis, anemia, wasting and stunting nutritional status, socioeconomic factors, and baseline opportunistic infection were independent predictors of death. Increased early screening for and management of these predictors are required.Keywords: human immunodeficiency virus-positive children, anti-retroviral therapy, survival, treatment, Ethiopia
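A minimal sketch of the survival-analysis pipeline named above (Kaplan-Meier estimation, log-rank comparison, multivariable Cox regression) using the lifelines library; the column names and toy records are invented for illustration and are not the study's data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy line list: follow-up time in months, death indicator, two covariates.
df = pd.DataFrame({
    "months":     [112, 40, 96, 12, 60, 88, 24, 120, 72, 18],
    "died":       [0,   1,  0,  1,  0,  0,  1,  0,   0,  1],
    "anemia":     [0,   1,  1,  1,  0,  0,  1,  0,   1,  0],
    "who_stage4": [1,   1,  0,  0,  0,  1,  1,  0,   0,  1],
})

kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["died"])   # Kaplan-Meier estimate
print(kmf.median_survival_time_)

# Log-rank test: survival of anemic vs. non-anemic children.
is_anemic = df["anemia"] == 1
result = logrank_test(df.loc[is_anemic, "months"], df.loc[~is_anemic, "months"],
                      df.loc[is_anemic, "died"], df.loc[~is_anemic, "died"])
print(result.p_value)

# Multivariable Cox model; exp(coef) in the summary is the adjusted hazard ratio.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()
```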
Procedia PDF Downloads 52157 Towards Sustainable Concrete: Maturity Method to Evaluate the Effect of Curing Conditions on the Strength Development in Concrete Structures under Kuwait Environmental Conditions
Authors: F. Al-Fahad, J. Chakkamalayath, A. Al-Aibani
Abstract:
Concrete strength determined under controlled laboratory conditions does not accurately represent the actual strength developed under site curing conditions. This difference is more pronounced in Kuwait's extreme environment, characterized by a hot marine climate with normal summer temperatures exceeding 50°C, accompanied by dry wind in desert areas and salt-laden wind in marine and onshore areas. Test methods are therefore required to measure the in-place properties of concrete, both for quality assurance and for the development of durable concrete structures. The maturity method, which defines the strength of a given concrete mix as a function of its age and temperature history, is one approach to quality control in the production of sustainable and durable concrete structures. The uniquely harsh environmental conditions in Kuwait make it impractical to adopt experience and empirical equations developed from maturity methods in other countries. Concrete curing, especially at an early age, plays an important role in developing and improving the strength of the structure. This paper investigates the use of the maturity method to assess the effectiveness of three different curing methods on the compressive and flexural strength development of one high-strength (60 MPa) concrete mix produced with silica fume. The maturity approach was used to accurately predict concrete compressive and flexural strength at later ages under different curing conditions. Maturity curves for compressive and flexural strength were developed for a concrete mix commonly used in Kuwait, cured under three different conditions: water curing, external spray coating, and the use of an internal curing compound during concrete mixing. It was observed that the maturity curve developed for the same mix depends on the curing condition and can be used to predict concrete strength under different exposure and curing conditions. The study showed that the external spray curing method cannot be recommended, as it failed to bring the concrete to acceptable strength values, especially in flexure. Using an internal curing compound led to acceptable strength levels compared with water curing. The developed maturity curves will help contractors and engineers determine the in-place concrete strength at any time and under different curing conditions, and hence decide the appropriate time to remove formwork. The resulting reduction in construction time and cost has positive impacts on sustainable construction.Keywords: curing, durability, maturity, strength
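The maturity function itself is not spelled out in the abstract; a common choice is the Nurse-Saul temperature-time factor of ASTM C1074, M(t) = Σ(Ta − T0)Δt, sketched below with illustrative temperature logs and an assumed datum temperature.

```python
# A sketch of the Nurse-Saul maturity index. Temperature records and the
# datum temperature are illustrative, not the study's data.
def nurse_saul_maturity(temps_c, dt_hours, datum_c=0.0):
    """Temperature-time factor in degC-hours from a concrete temperature log."""
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

# Hourly in-place temperatures for two curing regimes (hypothetical).
water_cured = [28, 30, 33, 36, 38, 39, 38, 36]
spray_cured = [30, 34, 40, 45, 48, 47, 44, 41]   # hotter, dries faster

m_water = nurse_saul_maturity(water_cured, dt_hours=1.0)
m_spray = nurse_saul_maturity(spray_cured, dt_hours=1.0)
print(m_water, m_spray)
# Strength at any age is then read off the lab-calibrated maturity curve
# (strength vs. maturity) developed for the same mix and curing condition.
```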
Procedia PDF Downloads 306156 Descriptive Epidemiology of Diphtheria Outbreak Data, Taraba State, Nigeria, August-November 2023
Authors: Folajimi Oladimeji Shorunke
Abstract:
Background: As of October 9, 2023, diphtheria has been re-emerging in four African countries: Algeria, Guinea, Niger, and Nigeria. Across these regions, 14,587 cases with a case fatality rate of 4.1% have been reported, with Nigeria alone responsible for over 90% of the cases. In Taraba State, Nigeria, the index case of diphtheria was reported in epidemic week 34 (August 24, 2023), with 75 confirmed cases found in the 3 months after the index case and a case fatality rate of 1.3%. This study describes the distribution, trend, and common symptoms observed during the outbreak. Methods: The Taraba State diphtheria outbreak line list on the Surveillance Outbreak Response Management & Analysis System (SORMAS), covering all 16 local government areas (LGAs), was analyzed using descriptive statistics (graphs, charts, and maps) for the period from 24th August to 25th November 2023. Primary data were collected using case investigation forms; variables such as age, gender, date of disease onset, LGA of residence, and symptoms exhibited were recorded. Naso-pharyngeal and oro-pharyngeal samples were also collected for laboratory confirmation. Results: A total of 75 diphtheria cases were diagnosed in 10 of the 16 LGAs in Taraba State between 24th August and 25th November 2023; 72% of the cases were female, the 0-9 year age range had the highest proportion of cases, 34 (45.3%), and the number of positive diagnoses decreased with age. The northern part of the State had the highest proportion of cases, 68 (90.7%), with Ardo-Kola LGA having the most, 28 (29%). The remaining 9.3% of cases were shared between the middle belt and the southern part of the State. The epi-curve took the characteristic shape of a propagated outbreak, with peaks in the 37th, 39th, and 45th epidemic weeks. The most common symptoms among cases were fever, 71 (94.7%); pharyngitis, 65 (86.7%); tonsillitis, 60 (80%); and laryngitis, 53 (71%). Conclusions: 75 cases of diphtheria were confirmed in Taraba State, Nigeria between 24th August and 25th November 2023. The condition was more common among females than males and mostly affected children aged 0-9 years, with the northern part of the State most affected. The most common symptoms exhibited by cases were fever, pharyngitis, tonsillitis, and laryngitis.Keywords: diphtheria outbreak, Taraba Nigeria, descriptive epidemiology, trend
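A sketch of the descriptive workflow described here: collapsing a SORMAS-style line list into an epi-curve by epidemiological week and computing a case fatality rate. The line-list rows are invented, and ISO weeks are used as a stand-in for the outbreak's epidemic-week convention.

```python
import pandas as pd

# Hypothetical line-list rows; real data would come from the SORMAS export.
linelist = pd.DataFrame({
    "onset_date": pd.to_datetime(
        ["2023-08-24", "2023-09-12", "2023-09-14", "2023-10-02", "2023-11-06"]),
    "lga":  ["Ardo-Kola", "Jalingo", "Ardo-Kola", "Lau", "Ardo-Kola"],
    "died": [0, 0, 1, 0, 0],
})

# ISO week of onset for each case, then counts per week (the epi-curve).
linelist["epi_week"] = linelist["onset_date"].dt.isocalendar().week
epi_curve = linelist.groupby("epi_week").size()

cfr = 100 * linelist["died"].sum() / len(linelist)
print(epi_curve)
print(f"CFR = {cfr:.1f}%")
```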
Procedia PDF Downloads 70155 QSAR Study on Diverse Compounds for Effects on Thermal Stability of a Monoclonal Antibody
Authors: Olubukayo-Opeyemi Oyetayo, Oscar Mendez-Lucio, Andreas Bender, Hans Kiefer
Abstract:
The thermal melting curve of a protein provides information on its conformational stability and could provide cues about its aggregation behavior. Naturally occurring osmolytes have been shown to improve the thermal stability of most proteins in a concentration-dependent manner; they are therefore commonly employed as additives in therapeutic protein purification and formulation. A number of intertwined and seemingly conflicting mechanisms have been put forward to explain the observed stabilizing effects, the most prominent being the preferential exclusion mechanism. We attempted to probe and summarize molecular mechanisms for thermal stabilization of a monoclonal antibody (mAb) by developing quantitative structure-activity relationships using a rationally selected library of 120 osmolyte-like compounds from the polyhydric alcohol, amino acid and methylamine classes. Thermal stabilization potencies were experimentally determined by thermal shift assays based on differential scanning fluorimetry. The cross-validated QSAR model was developed by partial least squares regression using descriptors generated with the Molecular Operating Environment software. Careful evaluation of the results using the variable importance in projection (VIP) parameter and the regression coefficients guided the selection of the descriptors most relevant to mAb thermal stability. For the mAb studied, at pH 7, the thermal stabilization effects of the tested compounds correlated positively with their fractional polar surface area and inversely with their fractional hydrophobic surface area. We cannot claim that the observed trends are universal for osmolyte-protein interactions because of protein-specific effects; however, this approach should guide the quick selection of (de)stabilizing compounds for a protein from a chemical library. Further work with a larger variety of proteins and at different pH values would help derive a solid explanation of the nature of favorable osmolyte-protein interactions for improved thermal stability. This approach may be beneficial in the design of novel protein stabilizers with optimal property values, especially when the influences of solution conditions, such as pH and buffer species, and of protein properties are factored in.Keywords: thermal stability, monoclonal antibodies, quantitative structure-activity relationships, osmolytes
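The sketch below mirrors the modeling steps named above: a cross-validated PLS regression on a descriptor matrix, followed by VIP scores to rank descriptors. The data are synthetic stand-ins for MOE descriptors, and the VIP formula used is the standard one, assumed rather than taken from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))          # 120 compounds x 30 descriptors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=120)  # thermal shift

pls = PLSRegression(n_components=3)
q2 = cross_val_score(pls, X, y, cv=5, scoring="r2")  # cross-validated fit
pls.fit(X, y)

def vip_scores(model):
    """VIP_j = sqrt(p * sum_a(SS_a * (w_ja/||w_a||)^2) / sum_a(SS_a))."""
    t, w, q = model.x_scores_, model.x_weights_, model.y_loadings_
    p, a = w.shape
    ss = np.diag(t.T @ t @ q.T @ q)                  # variance explained per LV
    wnorm = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(p * (wnorm @ ss) / ss.sum())

vip = vip_scores(pls)
print(q2.mean(), np.argsort(vip)[::-1][:5])  # top-5 most important descriptors
```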
Procedia PDF Downloads 333154 Cover Layer Evaluation in Soil Organic Matter of Mixing and Compressed Unsaturated
Authors: Nayara Torres B. Acioli, José Fernando T. Jucá
Abstract:
The uncontrolled emission of gases from municipal solid waste embankments located near urban areas is a social and environmental problem common in Brazilian cities. Several local and global environmental impacts may be generated by atmospheric contamination by the biogas resulting from the decomposition of municipal solid waste. In Brazil, small cities with populations below 50,000 inhabitants account for roughly 90% of all cities, according to the 2011 IBGE census, and there most landfill cover layers are composed of pure clayey soil. Cover layers built with pure soil may retain up to 60% of the methane; the remaining 40% may be dispersed into the atmosphere. In view of these figures, the oxidative cover layer deserves study, with the aim of reducing the percentage of methane released to the atmosphere by converting it to carbon dioxide, which is almost 20 times less potent a greenhouse gas than methane. This paper presents the results of studies on the characteristics of the soil used for the oxidative cover layer of the experimental Solid Urban Residues (SUR) embankment built in Muribeca-PE, Brazil, supported by the Group of Solid Residues (GSR) at the Federal University of Pernambuco. The laboratory program, based on the applicable Brazilian standard, comprised suction tests (to determine the characteristic curve), grain-size analysis, and permeability tests; at saturation levels above 85%, small increments of water produced dramatic drops in air permeability. Suction, like the other properties, was studied by dividing the 60 cm oxidative cover layer into an upper half (0.1 m to 0.3 m) and a lower half (0.4 m to 0.6 m). The results show the consequences of the leaching of fine material in the 5 years since the embankment was completed, which increased its permeability. Moisture is mostly retained in the upper half, which contains the soil-organic-matter mixture, with a difference on the order of 8 percent between the upper and lower halves, the lowest suction occurring near the surface. These results reveal the efficiency of the oxidative cover layer in retaining rainwater; its lower cost compared with other types of layer makes it a widely available alternative solution for the appropriate disposal of residues.Keywords: oxidative coverage layer, permeability, suction, saturation
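The characteristic curve mentioned here is commonly formalized with the van Genuchten model; the sketch below fits that model to illustrative suction-water content pairs with scipy. The functional form is a standard assumption, not the authors' stated choice, and the data points are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of suction psi (kPa)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

suction = np.array([1, 5, 10, 50, 100, 500, 1000.0])          # kPa
theta = np.array([0.44, 0.42, 0.40, 0.33, 0.28, 0.20, 0.16])  # water content

popt, _ = curve_fit(van_genuchten, suction, theta,
                    p0=[0.05, 0.45, 0.05, 1.5],
                    bounds=([0, 0.3, 1e-4, 1.01], [0.2, 0.6, 1.0, 5.0]))
theta_r, theta_s, alpha, n = popt
print(f"theta_r={theta_r:.3f}, theta_s={theta_s:.3f}, alpha={alpha:.4f}, n={n:.2f}")
```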
Procedia PDF Downloads 290153 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College
Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa
Abstract:
This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbines' design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best fitting parametric probability distribution model for the local wind speed data obtained through correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) to engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient of power profile of the turbine. Our approach improves upon the traditional blade design methods in that it lets us dispense with assumptions necessary to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high glide ratio airfoil designs, without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling
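As an illustration of the first, statistical stage (choosing a parametric wind-speed model), the sketch below fits a Weibull distribution, the customary choice in wind-resource work, to synthetic rooftop measurements; the project's actual estimation methods and data are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wind_speeds = rng.weibull(2.0, size=2000) * 6.0   # synthetic rooftop data, m/s

# Fit with the location parameter fixed at 0, as is standard for wind speeds.
shape_k, loc, scale_c = stats.weibull_min.fit(wind_speeds, floc=0)

# Goodness of fit via Kolmogorov-Smirnov (approximate, since the parameters
# were estimated from the same sample being tested).
ks = stats.kstest(wind_speeds, "weibull_min", args=(shape_k, loc, scale_c))
print(f"k = {shape_k:.2f}, c = {scale_c:.2f} m/s, KS p = {ks.pvalue:.3f}")
```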
Procedia PDF Downloads 232152 Agronomic Test to Determine the Efficiency of Hydrothermally Treated Alkaline Igneous Rocks and Their Potassium Fertilizing Capacity
Authors: Aaron Herve Mbwe Mbissik, Lotfi Khiari, Otmane Raji, Abdellatif Elghali, Abdelkarim Lajili, Muhammad Ouabid, Martin Jemo, Jean-Louis Bodinier
Abstract:
Potassium (K) is an essential macronutrient for plant growth, helping to regulate several physiological and metabolic processes. Evaporite-related potash salts, mainly sylvite (potassium chloride, KCl), are the principal source of K for the fertilizer industry. However, because of the high potash-supply risk associated with considerable price fluctuations and an uneven geographic distribution, for most agriculture-based developing countries the development of alternative sources of fertilizer K is imperative to maintain adequate crop yields, reduce yield gaps, and ensure food security. Alkaline igneous rocks containing significant K-rich silicate minerals, such as K-feldspar, are increasingly seen as the best available alternative. However, these rocks may require hydrothermal treatment to enhance the release of potassium. In this study, we evaluate the fertilizing capacity of raw and hydrothermally treated K-bearing silicate rocks from different areas in Morocco. The effectiveness of the rock powders was tested in a greenhouse experiment using ryegrass (Lolium multiflorum), comparing them to a control (no K added) and to a conventional fertilizer (muriate of potash, MOP or KCl). The trial was conducted in a randomized complete block design with three replications, and plants were grown on K-depleted soils for three growing cycles. In addition to analyzing the muriate response curve and the different biomasses, we examined three coefficients: K uptake, apparent K recovery (AKR), and relative K efficiency (RKE). The results showed that, based on the optimum economic rate of MOP (230 kg K ha⁻¹) and the optimum yield (44,000 kg ha⁻¹), the efficiency of the K silicate rocks was as high as that of MOP. Although the plants took up only half of the K supplied by the powdered rock, the hydrothermally treated material was found to be satisfactory, with biomass reaching the optimum economic limit up to the second crop cycle. In comparison, the AKR of MOP (98.6%) and its RKE in the first cycle were higher than those of our materials (39% and 38%, respectively). Therefore, based on the obtained results, a mixture of the raw and hydrothermally treated materials could be an appropriate solution for long-term agronomic use.Keywords: K-uptake, AKR, RKE, K-bearing silicate rock, MOP
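The two coefficients are computed below under their common agronomic definitions, which we assume here since the abstract does not spell out its formulas; all uptake values are hypothetical, chosen only so the MOP figure matches the reported 98.6%.

```python
# A sketch of the assumed efficiency-coefficient arithmetic.
def apparent_k_recovery(uptake_fert, uptake_control, k_applied):
    """AKR (%) = (K uptake with fertilizer - K uptake in control) / K applied."""
    return 100.0 * (uptake_fert - uptake_control) / k_applied

def relative_k_efficiency(akr_test, akr_reference):
    """RKE (%) of a K source relative to the reference fertilizer (MOP)."""
    return 100.0 * akr_test / akr_reference

k_applied = 230.0   # kg K/ha, the optimum economic MOP rate quoted above
akr_mop = apparent_k_recovery(uptake_fert=250.0, uptake_control=23.2,
                              k_applied=k_applied)          # -> 98.6%
akr_rock = apparent_k_recovery(uptake_fert=113.0, uptake_control=23.2,
                               k_applied=k_applied)         # -> ~39%
print(akr_mop, relative_k_efficiency(akr_rock, akr_mop))
```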
Procedia PDF Downloads 91151 Modelling Spatial Dynamics of Terrorism
Authors: André Python
Abstract:
To this day, terrorism persists as a worldwide threat, exemplified by the recent deadly attacks in January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. In order to increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that are able to capture the complex spatial dynamics of terrorism occurring at a local scale. Despite empirical research carried out at country level that has confirmed theories explaining the diffusion processes of terrorism across space and time, scholars have not yet assessed these diffusion theories at a local scale. Moreover, since scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models accurate in both space and time. In an effort to address these shortcomings, this research suggests a novel approach to systematically assess the theories of terrorism's diffusion on a local scale and provide a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocalised data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised in the form of Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through an integrated nested Laplace approximation - a recent fitting approach that computes fast and accurate estimates of posterior marginals. Hence, for each location in the world, the model provides a probability of encountering a lethal terrorist attack and measures of volatility, which inform on the model's predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former process describes an expansion from high-concentration areas of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors that operate on a local scale - as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling
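The full model (a binomial spatio-temporal point process fitted by INLA on a triangulated sphere) cannot be compressed into a few lines; the toy sketch below keeps only its core idea, letting the probability of an attack in a grid cell depend on a local covariate (a non-contagious factor) and on last year's attacks in neighbouring cells (a contagious factor). Grid, covariate, and attack data are entirely synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_cells, n_years = 400, 12
pop = rng.normal(size=n_cells)                    # a static covariate per cell
attacks = rng.binomial(1, 0.1, size=(n_years, n_cells))

grid = attacks.reshape(n_years, 20, 20)
# Attacks in the 4-neighbourhood of each cell (toroidal wrap, a toy shortcut).
neigh = (np.roll(grid, 1, 1) + np.roll(grid, -1, 1) +
         np.roll(grid, 1, 2) + np.roll(grid, -1, 2))

X = np.column_stack([
    np.tile(pop, n_years - 1),        # non-contagious covariate
    neigh[:-1].reshape(-1),           # lagged neighbour attacks (contagion)
])
y = attacks[1:].reshape(-1)

model = LogisticRegression().fit(X, y)
print(model.coef_)  # a positive neighbour coefficient would suggest contagion
```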
Procedia PDF Downloads 351150 Population Pharmacokinetics of Levofloxacin and Moxifloxacin, and the Probability of Target Attainment in Ethiopian Patients with Multi-Drug Resistant Tuberculosis
Authors: Temesgen Sidamo, Prakruti S. Rao, Eleni Akllilu, Workineh Shibeshi, Yumi Park, Yong-Soon Cho, Jae-Gook Shin, Scott K. Heysell, Stellah G. Mpagama, Ephrem Engidawork
Abstract:
The fluoroquinolones (FQs) are used off-label for the treatment of multidrug-resistant tuberculosis (MDR-TB) and are under evaluation for shortening the duration of treatment of drug-susceptible TB in recently prioritized regimens. Within the class, levofloxacin (LFX) and moxifloxacin (MXF) play a substantial role in ensuring treatment success. However, sub-therapeutic plasma concentrations of either LFX or MXF may drive unfavorable treatment outcomes. To the best of our knowledge, the pharmacokinetics of LFX and MXF in Ethiopian patients with MDR-TB have not yet been investigated. Therefore, the aim of this study was to develop a population pharmacokinetic (PopPK) model of LFX and MXF and to assess the percent probability of target attainment (PTA), defined by the ratio of the area under the plasma concentration-time curve over 24 h (AUC0-24) to the in vitro minimum inhibitory concentration (MIC) (AUC0-24/MIC), in Ethiopian MDR-TB patients. Steady-state plasma was collected from 39 MDR-TB patients enrolled in the programmatic treatment course, and drug concentrations were determined using optimized liquid chromatography-tandem mass spectrometry. In addition, the in vitro MICs of the patients' pretreatment clinical isolates were determined. PopPK modeling and simulations were run at various doses, and PK parameters were estimated. The effects of covariates on the PK parameters and the PTA for maximal mycobacterial kill and resistance prevention were also investigated. LFX and MXF were both described by one-compartment models with adjustments. The apparent volume of distribution (V) and clearance (CL) of LFX were influenced by serum creatinine (Scr), whereas the absorption constant (Ka) and V of MXF were influenced by Scr and BMI, respectively. The PTA for LFX maximal mycobacterial kill at the critical MIC of 0.5 mg/L was 29%, 62%, and 95% with the simulated 750 mg, 1000 mg, and 1500 mg doses, respectively, whereas the PTA for resistance prevention at 1500 mg was only 4.8%, with none of the lower doses achieving this target. At the critical MIC of 0.25 mg/L, there was no difference in the PTA for maximal mycobacterial kill (94.4%) among the simulated MXF doses (600 mg, 800 mg, and 1000 mg), but the PTA for resistance prevention improved proportionately with dose. Standard LFX and MXF doses may not provide adequate drug exposure. LFX PopPK is more predictable for maximal mycobacterial kill, whereas attainment of MXF's resistance-prevention target increases with dose. Scr and BMI are likely to be important covariates in dose optimization or therapeutic drug monitoring (TDM) studies in Ethiopian patients.Keywords: population PK, PTA, moxifloxacin, levofloxacin, MDR-TB patients, Ethiopia
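A sketch of how a PTA of the AUC0-24/MIC type is typically computed from a PopPK model: simulate between-subject variability in clearance, derive AUC0-24 = dose/CL at steady state, and count the fraction of simulated patients reaching the target. The parameter values and the target below are illustrative assumptions, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
dose_mg = 1000.0            # simulated once-daily LFX dose
cl_typical = 7.0            # L/h, hypothetical typical clearance
omega = 0.35                # SD of log-normal between-subject variability

cl = cl_typical * np.exp(rng.normal(0.0, omega, n))  # individual clearances
auc_0_24 = dose_mg / cl     # mg*h/L over the 24 h dosing interval, steady state

mic = 0.5                   # mg/L, the critical MIC discussed above
target = 146.0              # hypothetical AUC0-24/MIC target for maximal kill
pta = (auc_0_24 / mic >= target).mean()
print(f"PTA at MIC {mic} mg/L: {100 * pta:.1f}%")
```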
Procedia PDF Downloads 120