Search results for: Cahn-Hilliard Navier-Stokes (CHNS) equations
138 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research. A number of studies have been carried out in an attempt to obtain reliable data on the natural history of breast cancer growth. Mathematical modeling can play a very important role in the prognosis of the tumor process in breast cancer. However, existing mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor (PT) and primary metastases (MTS) which may help to improve the prediction accuracy of breast cancer progression, using an original mathematical model referred to as CoM-IV and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV which reflects the relations between PT and MTS; 3) analyzing the CoM-IV scope of application; 4) implementing the model as a software tool. The CoM-IV is based on the exponential tumor growth model, consists of a system of determinate nonlinear and linear equations, and corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and primary metastases: 1) the ‘non-visible period’ for the primary tumor; 2) the ‘non-visible period’ for primary metastases; 3) the ‘visible period’ for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes its forecast using only current patient data, while the others rely on additional statistical data.
Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period of primary metastases appearance; c) have higher average prediction accuracy than other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by CoM-IV: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of primary metastases, and the tumor volume doubling time (days) for the ‘non-visible’ and ‘visible’ growth periods of primary metastases. The CoM-IV enables, for the first time, prediction of the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) CoM-IV correctly describes primary tumor and primary distant metastases growth of stage IV (T1-4N0-3M1) disease with (N1-3) or without regional metastases in lymph nodes (N0); b) it facilitates the understanding of the appearance period and manifestation of primary metastases.
Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival
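The doubling quantities such an exponential model reports follow directly from the growth law V(t) = V₀·2^(t/DT), where DT is the tumor volume doubling time. A minimal sketch of the arithmetic (illustrative values only, not the CoM-IV software itself):

```python
import math

def tumor_volume_cm3(diameter_cm):
    """Approximate a tumor as a sphere: V = (pi/6) * d^3."""
    return math.pi / 6.0 * diameter_cm ** 3

def number_of_doublings(v_start_cm3, v_end_cm3):
    """Doublings needed to grow from v_start to v_end under exponential growth."""
    return math.log2(v_end_cm3 / v_start_cm3)

def doubling_time_days(elapsed_days, v_start_cm3, v_end_cm3):
    """Tumor volume doubling time (days) implied by the observed growth."""
    return elapsed_days / number_of_doublings(v_start_cm3, v_end_cm3)

# Illustrative example: growth from a 1 cm to a 2 cm diameter tumor over 300 days
v0 = tumor_volume_cm3(1.0)
v1 = tumor_volume_cm3(2.0)
n = number_of_doublings(v0, v1)          # volume scales with d^3, so n = 3
tvdt = doubling_time_days(300.0, v0, v1)  # 300 / 3 = 100 days
```

Because volume scales with the cube of diameter, one doubling of diameter corresponds to three volume doublings, which is why doubling counts in such models are computed from volume rather than from size.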
Procedia PDF Downloads 334
137 Development of Market Penetration for High Energy Efficiency Technologies in Alberta’s Residential Sector
Authors: Saeidreza Radpour, Md. Alam Mondal, Amit Kumar
Abstract:
Market penetration of high energy efficiency technologies has key impacts on energy consumption and GHG mitigation. It is also useful for managing the policies formulated by public or private organizations to achieve energy or environmental targets. Energy intensity in the residential sector of Alberta was 148.8 GJ per household in 2012, which is 39% more than the Canadian average of 106.6 GJ and the highest per-household energy consumption among the provinces. Appliance energy intensity in Alberta was 15.3 GJ per household in 2012, 14% higher than the average of the other provinces and territories in Canada. In this research, a framework has been developed to analyze the market penetration and market share of high energy efficiency technologies in the residential sector. The overall methodology was based on data-intensive models estimating the market penetration of appliances in the residential sector over a time period. The developed models were a function of a number of macroeconomic and technical parameters. The mathematical equations were developed based on twenty-two years of historical data (1990-2011), and the models were analyzed through a series of statistical tests. The market shares of high efficiency appliances were estimated based on related variables such as capital and operating costs, discount rate, appliance lifetime, annual interest rate, incentives, and maximum achievable efficiency over the period 2015 to 2050. Results show that the market penetration of refrigerators is higher than that of other appliances. The stock of refrigerators per household is anticipated to increase from 1.28 in 2012 to 1.314 and 1.328 in 2030 and 2050, respectively. Modelling results show that the market penetration rate of stand-alone freezers will decrease between 2012 and 2050.
Freezer stock per household will decline from 0.634 in 2012 to 0.556 and 0.515 in 2030 and 2050, respectively. The stock of dishwashers per household is expected to increase from 0.761 in 2012 to 0.865 and 0.960 in 2030 and 2050, respectively. The increase in the market penetration rate of clothes washers and clothes dryers is nearly parallel. The stock of clothes washers and clothes dryers per household is expected to rise from 0.893 and 0.979 in 2012 to 0.960 and 1.0 in 2050, respectively. The presentation will include a detailed discussion of the modelling methodology and results.
Keywords: appliances efficiency improvement, energy star, market penetration, residential sector
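Market-share models of this kind commonly annualize the capital cost with a capital recovery factor derived from the discount rate and appliance lifetime, then allocate shares among competing technologies with a logit choice function. A sketch under assumed parameter values (the costs and the sensitivity parameter are hypothetical, not the authors' calibration):

```python
import math

def capital_recovery_factor(rate, lifetime_years):
    """Annualize capital cost over the appliance lifetime at a given discount rate."""
    return rate * (1 + rate) ** lifetime_years / ((1 + rate) ** lifetime_years - 1)

def annualized_cost(capital, operating, rate, lifetime_years):
    """Levelized annual cost: annualized capital plus annual operating cost."""
    return capital * capital_recovery_factor(rate, lifetime_years) + operating

def logit_market_shares(costs, sensitivity=5.0):
    """Logit choice model: cheaper technologies capture larger shares.
    `sensitivity` is a hypothetical calibration parameter."""
    weights = [math.exp(-sensitivity * c / min(costs)) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Standard vs. high-efficiency refrigerator (illustrative numbers)
std = annualized_cost(capital=900.0, operating=120.0, rate=0.06, lifetime_years=15)
eff = annualized_cost(capital=1200.0, operating=70.0, rate=0.06, lifetime_years=15)
shares = logit_market_shares([std, eff])
```

Here the efficient unit's higher capital cost is more than offset by its lower operating cost once both are annualized, so it receives the larger share; incentives can be modelled by subtracting them from the capital cost before annualizing.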
Procedia PDF Downloads 285
136 Surge in U.S. Citizens Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale
Authors: Marco Sewald
Abstract:
Compared to the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to immigration figures, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many renunciants not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. There is no single explanation, but anecdotal evidence suggests this uptick is caused by global tax law and increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose as to why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. Since it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given within the process of expatriation, the chosen methodology is Structural Equation Modeling (SEM), in the first step re-using surveys conducted by different researchers among U.S. citizens residing abroad during the last years, which question the respondents' personal situation in the context of tax, compliance, citizenship, and likelihood to repatriate to the U.S. In general, SEM allows: (1) representing, estimating, and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous); (3) including unobservable latent variables; (4) modeling measurement error: the degree to which observable variables describe latent variables.
Moreover, SEM is appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of the construct are caused by various latent variables. The given surveys delivered a high correlation, making it impossible to identify the distinct effect of each indicator on the latent variable, which was one desired result. Since every SEM comprises two parts, (1) a measurement model (outer model) and (2) a structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and gain the desired results.
Keywords: expatriation of U.S. citizens, SEM, structural equation modeling, validating
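In standard LISREL notation, the two parts mentioned above are commonly written as follows (a generic formulation, not the specific model estimated in this study):

```latex
% Structural (inner) model: relations among latent variables
\eta = B\,\eta + \Gamma\,\xi + \zeta
% Measurement (outer) models: observed indicators with measurement error
y = \Lambda_y\,\eta + \varepsilon, \qquad x = \Lambda_x\,\xi + \delta
```

Here $\eta$ and $\xi$ are the endogenous and exogenous latent variables, $B$ and $\Gamma$ the structural coefficient matrices, $\Lambda_y$ and $\Lambda_x$ the factor loadings, and $\zeta$, $\varepsilon$, $\delta$ the structural and measurement error terms whose variances quantify how well the observed items describe the latent constructs.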
Procedia PDF Downloads 221
135 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics
Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic
Abstract:
Lake Victoria is the second largest fresh water body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of the shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with more extensive recent data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations are validated in Matlab and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical and hydrodynamical model can evaluate the effects of the wind stress exerted on the lake surface, which impacts the lake water level.
The model can also evaluate the effects of the expected climate change, as manifested in changes to rainfall over the catchment area of Lake Victoria in the future.
Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress
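One common form of the vertically integrated St. Venant (shallow water) equations used for such lake models is the following (a generic statement for orientation, not reproduced from the paper):

```latex
\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x}
  + \frac{\partial (hv)}{\partial y} = P - E,
\qquad
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x}
  + v\frac{\partial u}{\partial y} - f v
  = -g\frac{\partial \eta}{\partial x} + \frac{\tau_{wx}}{\rho h},
\qquad
\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x}
  + v\frac{\partial v}{\partial y} + f u
  = -g\frac{\partial \eta}{\partial y} + \frac{\tau_{wy}}{\rho h},
```

where $h$ is the water depth, $\eta$ the free-surface elevation, $(u, v)$ the depth-averaged velocities, $g$ gravity, $\rho$ the water density, $(\tau_{wx}, \tau_{wy})$ the wind stress components, and $P - E$ the rain-minus-evaporation source that dominates the water balance; the Coriolis parameter $f$ is nearly zero at the equator, consistent with the weak Coriolis effects noted in the abstract.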
Procedia PDF Downloads 227
134 Forecasting Regional Data Using Spatial VARs
Authors: Taisiia Gorshkova
Abstract:
Since the 1980s, spatial correlation models have increasingly been used to model regional indicators. An increasingly popular method for studying regional indicators is modeling that takes into account spatial relationships between objects that are part of the same economic zone. In the 2000s, a new class of models, spatial vector autoregressions (SpVARs), was developed. The main difference between standard and spatial vector autoregressions is that in a SpVAR, the values of indicators at time t may depend on the values of explanatory variables at the same time t in neighboring regions and on the values of explanatory variables at time t-k in neighboring regions. Thus, the VAR is a special case of the SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of the SpVAR in the absence of time lags. Two specifications of SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and regional CPI are used as endogenous variables. The lags of GRP, CPI, and the unemployment rate were used as explanatory variables. For comparison purposes, a standard VAR without spatial correlation was used as a “naïve” model. In the first specification of the SpVAR, the unemployment rate and the values of the dependent variables, GRP and CPI, in neighboring regions at the same moment of time t were included in the equations for GRP and CPI, respectively. To account for the values of indicators in neighboring regions, an adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1, and all others 0. In the second specification, the values of the dependent variables in neighboring regions at time t were replaced by their values at the previous time moment t-1.
According to the results obtained, when inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their previous values; inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is not affected by either the inflation lag or the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at the 5% significance level. For GRP, in turn, GRP lags in neighboring regions also become significant at the 5% level. For both the spatial and “naïve” VARs, the RMSE was calculated. The minimum RMSE is obtained via the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions.
Keywords: forecasting, regional data, spatial econometrics, vector autoregression
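The spatial terms described above are built by row-normalizing the binary adjacency matrix and multiplying it by the vector of regional values, so each region receives the average of its neighbours. A small self-contained sketch (the four-region adjacency and GRP values are hypothetical):

```python
# Hypothetical 4-region example; adjacency[i][j] = 1 if regions i and j
# share a common sea or land border, 0 otherwise
adjacency = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

# Row-normalize so each spatial lag is an average over neighbours
W = [[w / sum(row) for w in row] for row in adjacency]

# GRP of the 4 regions at time t
y_t = [100.0, 80.0, 120.0, 90.0]

# Spatial lag W*y: for each region, the average GRP of its neighbours,
# which enters the SpVAR equation as an extra regressor (at time t in
# specification 1, or at time t-1 in specification 2)
spatial_lag = [sum(w * y for w, y in zip(row, y_t)) for row in W]
# region 0 borders regions 1 and 2, so its lag is (80 + 120) / 2 = 100.0
```

Replacing `y_t` with the previous-period vector `y_{t-1}` turns this into the lagged-neighbour regressor used in the second specification.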
Procedia PDF Downloads 141
133 Prediction of Fluid-Induced Deformation Using Cavity Expansion Theory
Authors: Jithin S. Kumar, Ramesh Kannan Kandasami
Abstract:
Geomaterials are generally porous in nature due to the presence of discrete particles and interconnected voids. The porosity present in these geomaterials plays a critical role in many engineering applications such as CO2 sequestration, wellbore strengthening, enhanced oil and hydrocarbon recovery, hydraulic fracturing, and subsurface waste storage. These applications involve solid-fluid interactions, which govern the changes in porosity, which in turn affect the permeability and stiffness of the medium. Injecting fluid into a geomaterial results in permeation, which exhibits small or negligible deformation of the soil skeleton, followed by cavity expansion/fingering/fracturing (different forms of instabilities) due to large deformation, especially when the flow rate is greater than the ability of the medium to permeate the fluid. The complexity of this problem increases as the geomaterial behaves like both a solid and a fluid under certain conditions. Thus, it is important to understand this multiphysics problem, where, in addition to permeation, the elastic-plastic deformation of the soil skeleton plays a vital role during fluid injection. The phenomena of permeation and cavity expansion in porous media have been studied independently through extensive experimental and analytical/numerical models. The analytical models generally use Darcy/diffusion equations to capture the fluid flow during permeation, while elastic-plastic (Mohr-Coulomb and Modified Cam-Clay) models are used to predict the solid deformations. Hitherto, research has generally focused on modelling cavity expansion without considering the effect of the injected fluid entering the medium. Very few studies have considered the effect of the injected fluid on the deformation of the soil skeleton. However, the porosity changes during fluid injection and the coupled elastic-plastic deformation are not clearly understood.
In this study, the phenomena of permeation and instabilities such as cavity and finger/fracture formation will be quantified extensively by performing experiments using a novel experimental setup in addition to utilizing image processing techniques. This experimental study will describe the fluid flow and soil deformation characteristics under different boundary conditions. Further, a refined coupled semi-analytical model will be developed to capture the physics involved in quantifying the deformation behaviour of geomaterials during fluid injection.
Keywords: solid-fluid interaction, permeation, poroelasticity, plasticity, continuum model
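The permeation stage referred to above is typically described by Darcy's law coupled with a fluid mass balance. In a simplified form (incompressible constituents, solid skeleton velocity neglected; a generic statement, not the authors' specific model):

```latex
\mathbf{q} = -\frac{k}{\mu}\,\nabla p,
\qquad
\frac{\partial \phi}{\partial t} + \nabla \cdot \mathbf{q} = Q,
```

where $\mathbf{q}$ is the Darcy flux, $k$ the permeability, $\mu$ the fluid viscosity, $p$ the pore pressure, $\phi$ the porosity, and $Q$ the injection source term. When the injection rate exceeds what this diffusive flux can accommodate, pore pressure builds up against the skeleton and the elastic-plastic cavity expansion or fingering regimes described in the abstract take over.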
Procedia PDF Downloads 73
132 An Impregnated Active Layer Mode of Solution Combustion Synthesis as a Tool for the Solution Combustion Mechanism Investigation
Authors: Zhanna Yermekova, Sergey Roslyakov
Abstract:
Solution combustion synthesis (SCS) is a unique method which has repeatedly proved itself an effective and efficient approach for the versatile synthesis of a variety of materials. It has significant advantages such as a relatively simple handling process, high rates of product synthesis, mixing of the precursors on a molecular level, and fabrication of nanoproducts as a result. Nowadays, the overwhelming majority of solution combustion investigations are performed through volume combustion synthesis (VCS), where the entire liquid precursor is heated until combustion self-initiates throughout the volume. Fewer experiments are devoted to the steady-state self-propagating mode of SCS. Under this regime, the precursor solution is dried to a gel-like medium, and later on, the gel substance is locally ignited. In such a case, a combustion wave propagates in a self-sustaining mode as in conventional solid combustion synthesis. Even less attention is given to the impregnated active layer (IAL) mode of solution combustion. The IAL approach implies that the solution combustion of the precursors should be initiated on the surface of a third chemical or inside a third substance. This work aims to emphasize the underestimated role of the impregnated active layer mode of solution combustion synthesis for fundamental studies of combustion mechanisms. It also serves the purpose of popularizing the technical terms and clarifying the difference between them. In order to do so, the solution combustion synthesis of γ-FeNi (PDF#47-1417) alloy has been accomplished within a short (seconds) one-step reaction of metal precursors with hexamethylenetetramine (HMTA) fuel. An idea of the special role of Ni in the process of alloy formation was suggested and confirmed with a purpose-designed set of experiments.
The first set of experiments was conducted in the conventional steady-state self-propagating mode of SCS. An alloy was synthesized as a single monophasic product. In two other experiments, the synthesis was divided into two independent processes, which is possible under the IAL mode of solution combustion. The sequence of the process was changed according to the equations describing Experiments A and B below: Experiment A: Step 1. Fe(NO₃)₃*9H₂O + HMTA = FeO + gas products; Step 2. FeO + Ni(NO₃)₂*6H₂O + HMTA = Ni + FeO + gas products; Experiment B: Step 1. Ni(NO₃)₂*6H₂O + HMTA = Ni + gas products; Step 2. Ni + Fe(NO₃)₃*9H₂O + HMTA = Fe₃Ni₂ + traces (Ni + FeO). Based on the IAL experiment results, one can see that combustion of Fe(NO₃)₃*9H₂O on the surface of Ni leads to alloy formation, while the presence of already formed FeO does not affect the Ni(NO₃)₂*6H₂O + HMTA reaction in any way, and Ni is the main product of that synthesis.
Keywords: alloy, hexamethylenetetramine, impregnated active layer mode, mechanism, solution combustion synthesis
Procedia PDF Downloads 134
131 Proposed Design of an Optimized Transient Cavity Picosecond Ultraviolet Laser
Authors: Marilou Cadatal-Raduban, Minh Hong Pham, Duong Van Pham, Tu Nguyen Xuan, Mui Viet Luong, Kohei Yamanoi, Toshihiko Shimizu, Nobuhiko Sarukura, Hung Dai Nguyen
Abstract:
There is a great deal of interest in developing all-solid-state tunable ultrashort pulsed lasers emitting in the ultraviolet (UV) region for applications such as micromachining, investigation of charge carrier relaxation in conductors, and probing of ultrafast chemical processes. However, direct short-pulse generation is not as straightforward in solid-state gain media as it is for near-IR tunable solid-state lasers such as Ti:sapphire, due to the difficulty of obtaining continuous wave laser operation, which is required for Kerr lens mode-locking schemes utilizing spatial or temporal Kerr-type nonlinearity. In this work, the transient cavity method, which was reported to generate ultrashort laser pulses in dye lasers, is extended to a solid-state gain medium. Ce:LiCAF was chosen among the rare-earth-doped fluoride laser crystals emitting in the UV region because of its broad tunability (from 280 to 325 nm) with enough bandwidth to generate 3-fs pulses, a sufficiently large effective gain cross section (6.0 × 10⁻¹⁸ cm²) favorable for oscillators, and a high saturation fluence (115 mJ/cm²). Numerical simulations are performed to investigate the spectro-temporal evolution of the broadband UV laser emission from Ce:LiCAF, represented as a system of two homogeneously broadened singlet states, by solving the rate equations extended to multiple wavelengths. The goal is to find the appropriate cavity length and Q-factor to achieve the optimal photon cavity decay time and pumping energy for resonator transients that will lead to ps UV laser emission from a Ce:LiCAF crystal pumped by the fourth harmonic (266 nm) of a Nd:YAG laser. Results show that a single ps pulse can be generated from a 1-mm, 1 mol% Ce³⁺-doped LiCAF crystal using an output coupler with 10% reflectivity (low Q) and an oscillator cavity that is 2 mm long (short cavity).
This technique can be extended to other fluoride-based solid-state laser gain media.
Keywords: rare-earth-doped fluoride gain medium, transient cavity, ultrashort laser, ultraviolet laser
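The photon cavity decay time that the simulations optimize can be estimated, for mirror losses only with other intracavity losses neglected, from the standard expression

```latex
\tau_c = \frac{2L/c}{-\ln(R_1 R_2)},
```

where $L$ is the cavity length, $c$ the speed of light, and $R_1$, $R_2$ the mirror reflectivities. As a rough order-of-magnitude check (not a figure from the paper): for the short cavity above, $L = 2$ mm with $R_1 R_2 \approx 0.1$, this gives $\tau_c \approx (1.33 \times 10^{-11}\,\mathrm{s}) / 2.30 \approx 5.8$ ps, consistent with the picosecond regime reported.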
Procedia PDF Downloads 357
130 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling
Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng
Abstract:
This paper developed a data-driven model to deal with the causality between the Continuous Emission Monitoring System (CEMS, by the Environmental Protection Administration, Taiwan) in industrial factories and the surrounding air quality. Compared to the heavy burden of traditional numerical models of regional weather and air pollution simulation, the lightweight proposed model can provide hourly forecasts from current observations of weather, air pollution, and emissions from factories. The observations include wind speed, wind direction, relative humidity, temperature, and others. They can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations, and 140 CEMS quantitative industrial factories. This study completed a causal inference engine and provides an air pollution forecast for the next 12 hours related to local industrial factories. The outcomes of the pollution forecasting are produced hourly with a grid resolution of 1 km × 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The elaborated procedures to generate forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (Total Suspension Particulates, TSP) sensors and the forecasted air pollution map, this study also disclosed the conversion mechanism between TSP and PM2.5/PM10 for different regional and industrial characteristics, according to long-term data observation and calibration.
These different time-series qualitative and quantitative data enable a practicable cloud-based causal inference engine for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to curtail their operation and reduce emissions in advance.
Keywords: continuous emission monitoring system, total suspension particulates, causal inference, air pollution forecast, IoT
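The particle tracking with random walk mentioned above typically advances each particle by a deterministic wind advection step plus a random diffusive kick drawn from a Gaussian with variance 2DΔt. A self-contained sketch (wind speed, diffusivity, and release geometry are illustrative, not the study's calibrated values):

```python
import math
import random

def random_walk_plume(n_particles=1000, n_steps=12, dt=3600.0,
                      u=1.0, v=0.5, diffusivity=50.0, seed=42):
    """Advect and diffuse particles released from a stack at the origin.

    Per step and per coordinate: x += u*dt + sqrt(2*D*dt) * xi, xi ~ N(0, 1).
    Twelve hourly steps mirror a 12-hour forecast horizon; all parameter
    values here are illustrative.
    """
    rng = random.Random(seed)
    particles = [(0.0, 0.0)] * n_particles
    kick = math.sqrt(2.0 * diffusivity * dt)
    for _ in range(n_steps):
        particles = [
            (x + u * dt + kick * rng.gauss(0.0, 1.0),
             y + v * dt + kick * rng.gauss(0.0, 1.0))
            for x, y in particles
        ]
    return particles

cloud = random_walk_plume()
# Plume centre drifts downwind: mean x is close to u * n_steps * dt = 43200 m
mean_x = sum(x for x, _ in cloud) / len(cloud)
```

Binning the final particle positions onto a 1 km × 1 km grid then yields a concentration field of the kind the study writes to netCDF4.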
Procedia PDF Downloads 86
129 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding
Authors: Wenya Shu, Ilinca Stanciulescu
Abstract:
Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them promising fillers in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is often not reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. It is believed that the interactions of CNT and polymer mainly result from van der Waals forces. Interface debonding is a fracture and delamination phenomenon; thus, cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to consider the CNT-matrix interactions, but it brings difficulties in mesh generation and also leads to high computational costs. Homogenized models that smear the fibers in the ground matrix and treat the material as homogeneous have been studied in many works to simplify simulations. But based on the perfect-interface assumption, the traditional homogenized model obtained by mixing rules severely overestimates the stiffness of the composite, even compared with the result of a CZM with an artificially very strong interface. A mechanical model that can take into account interface debonding and achieve accuracy comparable to the CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential, and polynomial, are considered. These studies indicate that the shapes of the chosen CZM constitutive laws do not significantly influence the simulations of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations. The analytical solutions corresponding to these phases are derived.
A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP 8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and the proposed homogenized model provide alternative methods to efficiently investigate the mechanical behavior of CNT/polymer composites.
Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding
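The bilinear traction-separation relationship assumed above has a simple piecewise form: traction rises linearly to a peak at an initiation separation, then softens linearly to zero at the final separation where the interface is fully debonded. A sketch with hypothetical parameter values (not the cohesive properties used in the study):

```python
def bilinear_traction(delta, delta0=0.5, delta_f=5.0, t_max=50.0):
    """Bilinear cohesive traction-separation law.

    delta   : separation (nm, illustrative units)
    delta0  : separation at peak traction (damage initiation)
    delta_f : final separation (complete debonding)
    t_max   : peak traction (MPa, illustrative)
    """
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:           # elastic loading branch
        return t_max * delta / delta0
    if delta <= delta_f:          # linear softening (damage) branch
        return t_max * (delta_f - delta) / (delta_f - delta0)
    return 0.0                    # fully debonded

# Fracture energy is the area under the curve: 0.5 * t_max * delta_f
peak = bilinear_traction(0.5)    # = t_max at the initiation separation
```

The three debonding phases the abstract refers to map naturally onto the three branches of this law: fully bonded (loading), partially damaged (softening), and fully debonded.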
Procedia PDF Downloads 129
128 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System
Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu
Abstract:
Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the pack. Tasks related to battery performance include determining state-of-charge (SOC), state-of-power (SOP), state-of-health (SOH), cell balancing, and battery charging. Safety-related functions include making sure cells operate within the specified static and dynamic voltage windows and temperature range, derating power, detecting faulty cells, and warning the user if necessary. The BMS often utilizes an RC circuit model to model a Li-ion cell because of its robustness and low computation cost, among other benefits. Because an equivalent circuit model such as the RC model is not a physics-based model, it can never serve as a prognostic model that predicts battery state-of-health and averts a safety risk before it occurs. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of computation cost. To avoid the high computation cost associated with a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback of the single particle modeling approach is that it forces the use of the average current density in the calculation. The SPM would be appropriate for simulating drive cycles where there is insufficient time to develop a significant current distribution within an electrode. However, under a continuous or high-pulse electrical load, the model may fail to predict cell voltage or Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here.
The use of multiple particles combined with either linear or nonlinear charge-transfer reaction kinetics makes it possible to capture the current density distribution within an electrode under any type of electrical load. To keep the computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving processes. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics.
Keywords: battery management system, physics-based li-ion cell model, reduced-order model, single-particle and multi-particle model
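For contrast with the physics-based models discussed above, the first-order RC (Thevenin) equivalent circuit that BMS software commonly runs is only a few lines: the terminal voltage is the open-circuit voltage minus the ohmic drop over R0 minus the polarization voltage across a parallel R1-C1 branch. A sketch with illustrative (not fitted) parameters and a constant OCV:

```python
import math

def simulate_rc_cell(current, dt=1.0, ocv=3.7, r0=0.02, r1=0.015, c1=2000.0):
    """First-order Thevenin (RC) equivalent-circuit cell model.

    Terminal voltage: v = OCV - i*R0 - v1, where v1 is the polarization
    voltage across the R1-C1 branch, updated with the exact discrete
    solution of dv1/dt = -v1/(R1*C1) + i/C1. Discharge current positive.
    Parameter values are illustrative; a real BMS also tracks OCV(SOC).
    """
    tau = r1 * c1
    decay = math.exp(-dt / tau)
    v1 = 0.0
    terminal_voltage = []
    for i in current:
        v1 = v1 * decay + r1 * (1.0 - decay) * i
        terminal_voltage.append(ocv - i * r0 - v1)
    return terminal_voltage

# 10 A discharge for 600 s: immediate R0 drop, then slow RC relaxation
v = simulate_rc_cell([10.0] * 600)
# settles near OCV - i*(R0 + R1) = 3.7 - 10*0.035 = 3.35 V
```

The contrast with the abstract's point is visible here: nothing in this circuit knows about Li⁺ concentration or plating potential, which is why a physics-based reduced-order model is needed for prognostics.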
Procedia PDF Downloads 111
127 Virtual Reality for Chemical Engineering Unit Operations
Authors: Swee Kun Yap, Sachin Jangam, Suraj Vasudevan
Abstract:
Experiential learning is regarded as a highly effective way to enhance learning. Virtual reality (VR) is thus a helpful tool for providing a safe, memorable, and interactive learning environment. A class of 49 fluid mechanics students participated in starting up a pump, one of the most widely used pieces of equipment in the chemical industry, in VR. They experienced the process in VR to familiarize themselves with the safety training and the standard operating procedure (SOP) in guided mode. Students subsequently observed their peers (in groups of 4 to 5) complete the same training. The training first brings each user through personal protective equipment (PPE) selection before guiding the user through a series of steps for pump startup. One of the most common pieces of feedback from industry concerns the weakness of our graduates in pump design and operation. Traditional fluid mechanics is a highly theoretical module loaded with engineering equations, providing limited opportunity for visualization and operation. With the VR pump, students can now learn to start up, shut down, and troubleshoot a centrifugal pump and observe its intricacies in a safe and controlled environment, thereby bridging the gap between theory and practical application. Following the completion of the guided mode operation, students then individually completed the VR assessment for pump startup on the same day, which required them to complete the same series of steps, without any cues given in VR, to test their recollection rate. While most students missed a few minor steps, such as checking the lubrication oil and closing minor drain valves before pump priming, all the students scored full marks in the PPE selection, and over 80% of the students were able to complete all the critical steps required to start up a pump safely.
The students were subsequently tested for their recollection rate by means of an online quiz 3 weeks later, and again over 80% of the students were able to complete the critical steps in the correct order. In the survey conducted, students reported that the VR experience was enjoyable and enriching, and 79.5% of the students voted to include VR as a positive supplementary exercise alongside traditional teaching methods. One of the more notable pieces of feedback is the greater ease of noticing and learning from mistakes as an observer rather than as a VR participant. Thus, cycling between being a VR participant and an observer helped tremendously with knowledge retention. This reinforces the positive impact VR has on learning. Keywords: experiential learning, learning by doing, pump, unit operations, virtual reality
Procedia PDF Downloads 138
126 The Prevalence of Obesity among a Huge Sample of 5-20 Years Old Jordanian Children and Adolescents Based on CDC Criteria
Authors: Walid Al-Qerem, Ruba Zumot
Abstract:
Background: The rise of obesity among children and adolescents remains a primary challenge for healthcare providers globally and in the Middle East. The aim of the present study is to determine the prevalence of obesity among 5-20 years old Jordanians based on CDC criteria. Method: Data for a total of 5722 Jordanians (37% males; 63% females) aged 5-20 years were retrieved from the Jordanian Ministry of Health electronic database (Hakeem). As per the CDC selection criteria, the chosen data pertain exclusively to healthy Jordanian children and adolescents who are medically sound, not suffering from health conditions, and not undergoing any treatments that could hinder normal growth patterns, such as severe infection, chronic kidney disease (CKD), Down's syndrome, attention deficit hyperactivity disorder, cancer, heart disease, lung disease, cystic fibrosis, Crohn's disease, type 1 diabetes, hormonal disturbances, any stress-related conditions, or hormonal therapy such as corticosteroids, growth hormones (GHs), gonadotropin-releasing hormone agonists, insulin, and amphetamines or any other stimulants. In addition, participants with missing or invalid values for anthropometric measurements were excluded from the study. Weight for age and body mass index for age were analyzed comparatively for Jordanian children and adolescents against the international growth standards. The z-score for each record was computed based on the CDC equations. As per the CDC classifications of BMI-for-age percentiles, values ≥85th and <95th are classified as overweight, and values ≥95th are classified as obesity. Results: The average age of the evaluated sample was 12.33±4.39 years (10.79±3.39 for males and 13.23±4.66 for females). The mean weight and height were 33.16±14.17 kg and 133.54±17.17 cm for males, and 43.86±18.82 kg and 142.19±18.35 cm for females, while the mean BMI was 17.81±3.88 for boys and 20.52±5.03 for girls.
The results indicated that, based on CDC criteria, 8.9% of males were classified as children/adolescents with overweight and 9.7% as children/adolescents with obesity, while among females, 17.8% were classified as children/adolescents with overweight and 10.2% as children/adolescents with obesity. Discussion: The high prevalence of obesity reported in the present study emphasizes the importance of applying different strategies to prevent childhood obesity, including encouraging physical activity, promoting healthier food options, and behavioral changes. Conclusion: The results presented in this study indicate a high prevalence of overweight/obesity among Jordanian adolescents and children, which must be targeted by healthcare planners and providers. Keywords: CDC, obesity, childhood, Jordan
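The CDC cut-offs used above (≥85th and <95th percentile for overweight, ≥95th for obesity) rest on the LMS z-score transformation from the CDC growth charts. A minimal sketch follows; the LMS parameters (L, M, S) are age- and sex-specific values published by the CDC, and the numbers in the usage comment are purely illustrative, not study data.

```python
from math import erf, log, sqrt

def bmi_for_age_z(bmi, L, M, S):
    """CDC LMS transformation: z = ((BMI/M)^L - 1) / (L*S), or ln(BMI/M)/S when L == 0."""
    if L == 0:
        return log(bmi / M) / S
    return ((bmi / M) ** L - 1.0) / (L * S)

def cdc_category(z):
    """Map a BMI-for-age z-score to the CDC percentile categories used in the study."""
    percentile = 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF -> percentile
    if percentile >= 0.95:
        return "obesity"
    if percentile >= 0.85:
        return "overweight"
    return "not overweight"

# Illustrative only: with hypothetical LMS values L=1.0, M=20.0, S=0.1,
# a BMI equal to the median M gives z = 0, i.e. the 50th percentile.
```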
Procedia PDF Downloads 57
125 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model
Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge
Abstract:
Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics, and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles as well as the effect of flow-driven forces on particles will facilitate accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed to simulate the scour process, directly modeling the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles, and the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability.
The modeling results reveal the criticality of particle impact in the assessment of scour depth, which, to the authors' best knowledge, has not been considered in previous studies. The results of this study open new perspectives on scour depth and time assessment, which is key to managing the failure risk of bridge infrastructure. Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model
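As a sketch of the particle-scale collisional forces a DEM solver of this kind resolves, a linear spring-dashpot normal contact model is shown below. The stiffness, effective mass, and restitution values are illustrative assumptions, not parameters from the study.

```python
from math import log, pi, sqrt

def normal_contact_force(overlap, approach_vel, k=1.0e4, m_eff=1.0e-3, e=0.5):
    """Linear spring-dashpot normal force between two contacting particles (N).

    overlap      : geometric overlap of the particle pair (m), > 0 when in contact
    approach_vel : normal relative velocity (m/s), > 0 when approaching
    k            : normal spring stiffness (N/m)
    m_eff        : effective mass m1*m2/(m1 + m2) (kg)
    e            : coefficient of restitution (0 < e <= 1)
    """
    if overlap <= 0.0:
        return 0.0  # particles not in contact
    beta = log(e) / sqrt(log(e) ** 2 + pi ** 2)  # damping ratio giving restitution e
    damping = -2.0 * beta * sqrt(k * m_eff)      # > 0 since log(e) < 0
    return k * overlap + damping * approach_vel
```

The damping term dissipates energy on impact, which is exactly the momentum-transfer mechanism the abstract identifies as critical to the scour depth.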
Procedia PDF Downloads 131
124 A Simulation-Based Investigation of the Smooth-Wall, Radial Gravity Problem of Granular Flow through a Wedge-Shaped Hopper
Authors: A. F. Momin, D. V. Khakhar
Abstract:
Granular materials consist of discrete particles found in nature and various industries that, in gravity-driven flow, behave macroscopically like liquids. A fundamental industrial unit operation is a hopper with inclined walls, or a converging channel, in which material flows downward under gravity and exits the storage bin through the bottom outlet. The simplest form of the flow corresponds to a wedge-shaped, quasi-two-dimensional geometry with smooth walls and a radially directed gravitational force toward the apex of the wedge. These flows were examined using the Mohr-Coulomb criterion in the classic work of Savage (1965), while Ravi Prakash and Rao (1988) used critical state theory. The smooth-wall, radial-gravity (SWRG) wedge-shaped hopper is simulated using the discrete element method (DEM) to test the existing theories. DEM simulations involve the solution of Newton's equations, taking particle-particle interactions into account to compute stress and velocity fields for the flow in the SWRG system. Our computational results are consistent with the predictions of Savage (1965) and Ravi Prakash and Rao (1988), except for the region near the exit, where both viscous and frictional effects are present. To further comprehend this behaviour, a parametric analysis is carried out to analyze the rheology of wedge-shaped hoppers by varying the orifice diameter, wedge angle, friction coefficient, and stiffness. The conclusion is that velocity increases as the flow rate increases but decreases as the wedge angle and friction coefficient increase. We observed no substantial changes in velocity due to varying stiffness. It is anticipated that stresses at the exit result from the transfer of momentum during particle collisions; for this reason, relationships between viscosity and shear rate are shown, and all data collapse onto a single curve.
In addition, it is demonstrated that viscosity and volume fraction exhibit power-law correlations with the inertial number and that all the data collapse onto a single curve. A continuum model for describing granular flows is presented using the empirical correlations. Keywords: discrete element method, gravity flow, smooth-wall, wedge-shaped hoppers
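The collapse of viscosity and volume fraction onto single curves in the inertial number is the basis of the mu(I) family of granular rheologies. A minimal sketch follows; the constants mu_s, mu_2, and I_0 are of the order reported in the granular rheology literature for glass beads and are illustrative here, not fitted values from this study.

```python
from math import sqrt

def inertial_number(shear_rate, d, pressure, rho_p):
    """Dimensionless inertial number I = gamma_dot * d / sqrt(P / rho_p).

    shear_rate : shear rate gamma_dot (1/s)
    d          : particle diameter (m)
    pressure   : confining pressure P (Pa)
    rho_p      : particle density (kg/m^3)
    """
    return shear_rate * d * sqrt(rho_p / pressure)

def effective_friction(I, mu_s=0.38, mu_2=0.64, I_0=0.279):
    """mu(I) law: effective friction rising from mu_s (quasi-static) to mu_2 (rapid flow)."""
    return mu_s + (mu_2 - mu_s) / (I_0 / I + 1.0)
```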
Procedia PDF Downloads 87
123 Effects of Nutrients Supply on Milk Yield, Composition and Enteric Methane Gas Emissions from Smallholder Dairy Farms in Rwanda
Authors: Jean De Dieu Ayabagabo, Paul A. Onjoro, Karubiu P. Migwi, Marie C. Dusingize
Abstract:
This study investigated the effects of feed on milk yield and quality through feed monitoring and quality assessment, and the consequent enteric methane emissions from smallholder dairy farms in drier areas of Rwanda, using the Tier II approach for four seasons in three zones, namely Mayaga and peripheral Bugesera (MPB), Eastern Savanna and Central Bugesera (ESCB), and Eastern Plateau (EP). The study was carried out using 186 dairy cows with a mean live weight of 292 kg in three communal cowsheds. The milk quality analysis was carried out on 418 samples. Methane emission was estimated using prediction equations. Data collected were subjected to ANOVA. The dry matter intake was lower (p<0.05) in the long dry season (7.24 kg), with the ESCB zone having the highest value of 9.10 kg, explained by the practice of crop-livestock integration agriculture in that zone. Dry matter digestibility varied between seasons and zones, ranging from 52.5 to 56.4% across seasons and from 51.9 to 57.5% across zones. The daily protein supply was higher (p<0.05) in the long rain season, at 969 g. The mean daily milk production of lactating cows was 5.6 L, with a lower value (p<0.05) during the long dry season (4.76 L) and the MPB zone having the lowest value of 4.65 L. The yearly milk production per cow was 1179 L. The milk fat varied from 3.79 to 5.49%, with seasonal and zone variation. No variation was observed in milk protein. The seasonal daily methane emission varied from 150 g in the long dry season to 174 g in the long rain season (p<0.05). The rain season had the highest methane emission, as it is associated with high forage intake. The mean emission factor (EF) was 59.4 kg of methane/year. The present EFs were higher than the default IPCC Tier I value of 41 kg for livestock in developing countries of Africa, the Middle East, and other tropical regions, due to the higher live weight in the current study.
The methane emission per unit of milk production was lower in the EP zone (46.8 g/L) due to the feed efficiency observed in that zone. Farmers should use high-quality feeds to increase the milk yield and reduce the methane gas produced per unit of milk. For an accurate assessment of the methane produced from dairy farms, there is a need for the use of the Life Cycle Assessment approach that considers all the sources of emissions.Keywords: footprint, forage, girinka, tier
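As an arithmetic sketch of how a Tier II emission factor aggregates the seasonal daily emissions reported above (the season lengths in the example are illustrative assumptions, not the study's calendar):

```python
def emission_factor_kg_per_year(daily_g_by_season, days_by_season):
    """Annual enteric methane emission factor (kg CH4/head/year) from
    season-mean daily emissions (g/day) and season lengths (days)."""
    total_g = sum(g * d for g, d in zip(daily_g_by_season, days_by_season))
    return total_g / 1000.0

# Consistency check against the abstract: a constant 150-174 g/day held over a
# full year brackets 54.8-63.5 kg/year, which contains the reported mean EF of 59.4.
```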
Procedia PDF Downloads 205
122 Development of Coastal Inundation–Inland and River Flow Interface Module Based on 2D Hydrodynamic Model
Authors: Eun-Taek Sin, Hyun-Ju Jang, Chang Geun Song, Yong-Sik Han
Abstract:
Due to climate change, coastal urban areas repeatedly suffer loss of property and life from flooding. There are three main causes of inland submergence. First, when heavy rain of high intensity occurs, inland water cannot be drained into rivers because of the increase in impervious surface from land development and defects of pumps and storm sewers. Second, river inundation occurs when the water surface level surpasses the top of the levee. Finally, coastal inundation occurs due to rising seawater. However, previous studies ignored the complex mechanism of flooding and showed discrepancies and inadequacies due to the linear summation of separate analysis results. In this study, inland flooding and river inundation were analyzed together by the HDM-2D model. The Petrov-Galerkin stabilizing method and a flux-blocking algorithm were applied to simulate the inland flooding. In addition, sink/source terms with an exponential growth rate attribute were added to the shallow water equations to include the inland flooding analysis module. The applications of the developed model gave satisfactory results and provided accurate predictions in comprehensive flooding analysis. To consider the coastal surge, another module was developed by adding seawater to the existing inland flooding-river inundation binding module for comprehensive flooding analysis. Based on the combined modules, the coastal inundation - inland and river flow interface was simulated by inputting flow rate and depth data in an artificial flume. Accordingly, it was possible to analyze the flood patterns of coastal cities over time. This study is expected to help identify the complex causes of flooding in coastal areas where complex flooding occurs and to assist in analyzing damage to coastal cities.
Acknowledgements—This research was supported by a grant ‘Development of the Evaluation Technology for Complex Causes of Inundation Vulnerability and the Response Plans in Coastal Urban Areas for Adaptation to Climate Change’ [MPSS-NH-2015-77] from the Natural Hazard Mitigation Research Group, Ministry of Public Safety and Security of Korea.Keywords: flooding analysis, river inundation, inland flooding, 2D hydrodynamic model
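A sketch of the governing system: the 2D shallow water equations with a source/sink term S added to the continuity equation, as in the flooding module described above. This is the generic conservative form; the paper's exact source terms and stabilization are not reproduced here.

```latex
\begin{aligned}
&\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} + \frac{\partial (hv)}{\partial y} = S,\\
&\frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\!\left(hu^2 + \tfrac{1}{2}gh^2\right) + \frac{\partial (huv)}{\partial y} = -gh\,\frac{\partial z_b}{\partial x} - \frac{\tau_{bx}}{\rho},\\
&\frac{\partial (hv)}{\partial t} + \frac{\partial (huv)}{\partial x} + \frac{\partial}{\partial y}\!\left(hv^2 + \tfrac{1}{2}gh^2\right) = -gh\,\frac{\partial z_b}{\partial y} - \frac{\tau_{by}}{\rho},
\end{aligned}
```

where h is the water depth, (u, v) the depth-averaged velocities, z_b the bed elevation, and tau_b the bed shear stress; S carries the rainfall/drainage sink/source contribution, here with the exponential-growth-rate attribute mentioned in the abstract.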
Procedia PDF Downloads 362
121 The Sources of Anti-Immigrant Sentiments in Russia
Authors: Anya Glikman, Anastasia Gorodzeisky
Abstract:
Since the late 1990s, labor immigration and its consequences for society have become one of the most frequently discussed and debated issues in Russia. Social scientists point out that negative attitudes towards immigrants among the Russian majority population are widespread, and their level is at least twice as high as in most other European countries. Moreover, a recent study by Gorodzeisky, Glikman and Maskyleison (2014) demonstrates that the two sets of individual-level predictors of anti-foreigner sentiment - socio-economic status and conservative views and ideologies - that have been repeatedly confirmed in research in Western countries are not effective in predicting anti-foreigner sentiment in Post-Socialist Russia. Apparently, the social mechanisms underlying anti-foreigner sentiment in Western countries, which are characterized by stable regimes and relatively long immigration histories, do not play a significant role in explaining anti-foreigner sentiment in Post-Socialist Russia. The present study aims to examine alternative possible sources of anti-foreigner sentiment in Russia while controlling for the socio-economic position of individuals and conservative views. More specifically, following the research literature on the topic worldwide, we aim to examine whether and to what extent human values (such as tradition, universalism, safety, and power), ethnic residential segregation, fear of crime, and exposure to mass media affect anti-foreigner sentiments in Russia. To do so, we estimate a series of multivariate regression equations using data obtained from the 2012 European Social Survey. The nationally representative sample consists of 2337 Russian-born respondents. Descriptive results reveal that about 60% of Russians view the impact of immigrants on the country in negative terms. Further preliminary analyses show that anti-foreigner sentiments are associated with exposure to mass media as well as with fear of crime.
Specifically, respondents who devoted more time to watching news on TV channels and respondents who express higher levels of fear of crime tend to report higher levels of anti-immigrant sentiment. The findings are discussed in light of a sociological perspective and the context of Russian society. Keywords: anti-immigrant sentiments, fear of crime, human values, mass media, Russia
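A minimal sketch of the multivariate regression machinery mentioned above: ordinary least squares solved via the normal equations in pure Python. The study's actual data are ESS 2012 microdata; the inputs in the test are synthetic and only illustrate the mechanics.

```python
def ols_coefficients(X, y):
    """Solve (X^T X) b = X^T y by Gaussian elimination with partial pivoting.

    X : list of predictor rows, each starting with 1.0 for the intercept
    y : list of responses, same length as X
    """
    k = len(X[0])
    # Build the normal equations A b = c
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for cc in range(col, k):
                A[r][cc] -= f * A[col][cc]
            c[r] -= f * c[col]
    # Back substitution
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (c[r] - sum(A[r][j] * beta[j] for j in range(r + 1, k))) / A[r][r]
    return beta
```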
Procedia PDF Downloads 466
120 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics
Authors: Janne Engblom, Elias Oikarinen
Abstract:
A panel dataset is one that follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special class of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond generalized method of moments (GMM) estimator, which is an extension of the Arellano-Bond model in which past values and different transformations of past values of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano-Bover/Blundell-Bond estimator augments Arellano-Bond by making the additional assumption that first differences of the instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations - the original equation and the transformed one - and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically by using the Arellano-Bover/Blundell-Bond estimation technique together with ordinary least squares (OLS). The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models.
The Arellano-Bover/Blundell-Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data from 14 Finnish cities over 1988-2012, estimates of short-run housing price dynamics differed considerably depending on the model and instrumenting used. In particular, the use of different instrumental variables caused variation in the model estimates and their statistical significance. This was particularly clear when comparing OLS estimates with different dynamic panel data models. Estimates provided by dynamic panel data models were more in line with the theory of housing price dynamics. Keywords: dynamic model, fixed effects, panel data, price dynamics
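In compact form, the dynamic panel model and the moment conditions that distinguish system GMM (Arellano-Bover/Blundell-Bond) from difference GMM (Arellano-Bond) can be sketched as:

```latex
\begin{aligned}
&y_{it} = \alpha\, y_{i,t-1} + x_{it}'\beta + \mu_i + \varepsilon_{it}, \\
&\text{difference GMM: } \mathrm{E}\!\left[y_{i,t-s}\,\Delta\varepsilon_{it}\right] = 0, \quad s \ge 2, \\
&\text{system GMM adds: } \mathrm{E}\!\left[\Delta y_{i,t-1}\,(\mu_i + \varepsilon_{it})\right] = 0.
\end{aligned}
```

The first set instruments the differenced equation with lagged levels; the second, valid under the additional assumption that first differences are uncorrelated with the fixed effects, instruments the level equation with lagged differences, which is what makes the extra instruments and the efficiency gain possible.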
Procedia PDF Downloads 1507
119 Winkler Springs for Embedded Beams Subjected to S-Waves
Authors: Franco Primo Soffietti, Diego Fernando Turello, Federico Pinto
Abstract:
Shear waves that propagate through the ground impose deformations that must be taken into account in the design and assessment of buried longitudinal structures such as tunnels, pipelines, and piles. Conventional engineering approaches for seismic evaluation often rely on Euler-Bernoulli beam models supported by a Winkler foundation. This approach, however, falls short in capturing the distortions induced when the structure is subjected to shear waves. To overcome these limitations, in the present work an analytical solution is proposed considering a Timoshenko beam and including transverse and rotational springs. The present research proposes ground springs derived as closed-form analytical solutions of the equations of elasticity, including the seismic wavelength. These proposed springs extend the applicability of previous plane-strain models. By considering variations in displacements along the longitudinal direction, the presented approach ensures the springs do not approach zero at low frequencies. This characteristic makes them suitable for assessing pseudo-static cases, which typically govern structural forces in kinematic interaction analyses.
The results obtained, validated against the existing literature and a 3D finite element model, reveal several key insights: i) the cutoff frequency significantly influences the transverse and rotational springs; ii) neglecting displacement variations along the structure axis (i.e., assuming plane-strain deformation) results in unrealistically low transverse springs, particularly for wavelengths shorter than the structure length; iii) disregarding lateral displacement components in rotational springs and neglecting variations along the structure axis leads to inaccurately low spring values, misrepresenting interaction phenomena; iv) transverse springs exhibit a notable drop at the resonance frequency, followed by increasing damping as frequency rises; v) rotational springs show minor frequency-dependent variations, with radiation damping occurring beyond resonance frequencies, starting from negative values. This comprehensive analysis sheds light on the complex behavior of embedded longitudinal structures subjected to shear waves and provides valuable insights for seismic assessment. Keywords: shear waves, Timoshenko beams, Winkler springs, soil-structure interaction
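A hedged sketch of the governing equations behind the model: a Timoshenko beam resting on transverse and rotational Winkler springs (k_t, k_r) driven by the free-field S-wave displacement u_g and rotation theta_g. This is the generic form of such a beam-on-Winkler kinematic interaction model; the paper's exact forcing terms and frequency-dependent spring definitions may differ.

```latex
\begin{aligned}
&\kappa G A \left(\frac{d^2 w}{dx^2} - \frac{d\varphi}{dx}\right) + k_t\,\big(u_g(x) - w\big) = 0, \\
&E I\,\frac{d^2 \varphi}{dx^2} + \kappa G A \left(\frac{dw}{dx} - \varphi\right) + k_r\,\big(\theta_g(x) - \varphi\big) = 0,
\end{aligned}
```

where w is the transverse displacement, phi the cross-section rotation, kappa G A the shear rigidity, and E I the bending rigidity; retaining the shear term kappa G A (dw/dx - phi) is what lets the Timoshenko model capture the shear-wave-induced distortions that an Euler-Bernoulli beam misses.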
Procedia PDF Downloads 61
118 The Quantum Theory of Music and Human Languages
Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer
Abstract:
The main hypotheses proposed around the definition of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories on language. It is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any text of a folk song in a tonal language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. With experimentation confirming the theorization, I designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book).
The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody. Keywords: language, music, sciences, quantum entanglement
Procedia PDF Downloads 77
117 The Link between Anthropometry and Fat-Based Obesity Indices in Pediatric Morbid Obesity
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Anthropometric measurements are essential for obesity studies. Waist circumference (WC) is the most frequently used measure, and along with hip circumference (HC), it is used in most equations derived for the evaluation of obese individuals. Morbid obesity is the most severe clinical form of obesity, and such individuals may also exhibit clinical findings leading to metabolic syndrome (MetS). It then becomes a requirement to discriminate morbid obese children with (MOMetS+) and without (MOMetS-) MetS. Almost all obesity indices can differentiate obese (OB) children from children with normal body mass index (N-BMI); however, not all of them are capable of distinguishing MOMetS+ from MOMetS- children. A recently introduced anthropometric obesity index, waist circumference + hip circumference/2 ((WC+HC)/2), was confirmed to differentiate OB children from those with N-BMI; however, it has not been tested whether it will find clinical usage for the differential diagnosis of MOMetS+ and MOMetS-. This study was designed to find out the availability of (WC+HC)/2 for this purpose and to compare its possible preponderance over some other anthropometric or fat-based obesity indices. Forty-five MOMetS+ and forty-five MOMetS- children were included in the study. Participants submitted informed consent forms. The study protocol was approved by the Non-interventional Ethics Committee of Tekirdag Namik Kemal University. Anthropometric measurements were performed. Body mass index (BMI), waist-to-hip ratio (W/H), (WC+HC)/2, trunk-to-leg fat ratio (TLFR), trunk-to-appendicular fat ratio (TAFR), (trunk fat+leg fat)/2 ((trunk+leg fat)/2), diagnostic obesity notation model assessment index-2 (D2I), and fat mass index (FMI) were calculated for both groups. Study data were analyzed statistically, and p<0.05 was accepted as the level of statistical significance. Statistically higher BMI, WC, (WC+HC)/2, and (trunk+leg fat)/2 values were found in MOMetS+ children than in MOMetS- children.
No statistically significant difference was detected in W/H, TLFR, TAFR, D2I, or FMI between the two groups. The lack of difference between the groups in terms of FMI and D2I pointed out that the recently developed fat-based index, (trunk+leg fat)/2, gives much more valuable information in the evaluation of MOMetS+ and MOMetS- children. Upon evaluation of the correlations, (WC+HC)/2 was strongly correlated with D2I and FMI in both the MOMetS+ and MOMetS- groups. Neither D2I nor FMI was correlated with W/H. Strong correlations were calculated between (WC+HC)/2 and (trunk+leg fat)/2 in both the MOMetS- (r=0.961; p<0.001) and MOMetS+ (r=0.936; p<0.001) groups. Partial correlations between (WC+HC)/2 and (trunk+leg fat)/2 after controlling for the effect of basal metabolic rate were r=0.726; p<0.001 in the MOMetS- group and r=0.932; p<0.001 in the MOMetS+ group; the correlation in the latter group was higher than in the former. In conclusion, the recently developed anthropometric obesity index (WC+HC)/2 and the fat-based obesity index (trunk+leg fat)/2 showed superiority over previously introduced classical obesity indices such as W/H, D2I, and FMI in the differential diagnosis of MOMetS+ and MOMetS- children. Keywords: children, hip circumference, metabolic syndrome, morbid obesity, waist circumference
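The index arithmetic and the Pearson correlations reported above can be sketched in a few lines of pure Python; the values in the tests are invented for illustration, not study data.

```python
from math import sqrt

def wc_hc_index(wc, hc):
    """(WC + HC)/2 anthropometric obesity index, in the same units as WC and HC."""
    return (wc + hc) / 2.0

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```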
Procedia PDF Downloads 289
116 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements
Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo
Abstract:
Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. 
Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation vs those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach that requires the solution of one additional problem per derivative.Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation
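The general adjoint-state recipe underlying such a formulation can be sketched as follows, with R the discretized forward (state) problem, u the field, p a bed boundary position, and J the recorded measurement; the paper's specific treatment of tangential and normal field components refines this generic scheme.

```latex
R(u, p) = 0, \qquad
\left(\frac{\partial R}{\partial u}\right)^{\!T} \lambda = \left(\frac{\partial J}{\partial u}\right)^{\!T}, \qquad
\frac{dJ}{dp} = \frac{\partial J}{\partial p} - \lambda^{T}\,\frac{\partial R}{\partial p}.
```

A single adjoint solve for lambda then yields the derivative with respect to every bed boundary position at once, which is why the cost is negligible compared with finite differences, where each derivative requires one additional forward solve.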
Procedia PDF Downloads 178
115 Determination of Activation Energy for Thermal Decomposition of Selected Soft Tissues Components
Authors: M. Ekiert, T. Uhl, A. Mlyniec
Abstract:
Tendons are biological soft tissue structures composed of collagen, proteoglycans, glycoproteins, water, and cells of the extracellular matrix (ECM). Tendons, whose primary function is to transfer force generated by the muscles to the bones, causing joint movement, are exposed to many micro- and macro-scale damages. In fact, tendon and ligament traumas are among the most numerous injuries of the human musculoskeletal system, causing recurring disorders, chronic pain, or even inability to move for many people (particularly athletes and physically active people). The number of tendon reconstruction and transplantation procedures is increasing every year. Therefore, studies on soft tissue storage conditions (influencing, e.g., tissue aging) seem to be an extremely important issue. In this study, an atomic-scale investigation of the kinetics of decomposition of two selected tendon components - collagen type I (which forms 60-85% of a tendon's dry mass) and the elastin protein (which, combined with the ECM, creates the elastic fibers of connective tissues) - is presented. Molecular models of collagen and elastin were developed based on the crystal structure of the triple-helical collagen-like 1QSU peptide and the P15502 human elastin protein, respectively. Each model employed 4 linear collagen/elastin strands per unit cell, distributed in a 2x2 matrix arrangement, placed in a simulation box filled with water molecules. The decomposition phenomena were simulated with the molecular dynamics (MD) method using the ReaxFF force field and periodic boundary conditions. A set of NVT-MD runs was performed over a 1000 K temperature range in order to obtain the temperature-dependent rates of production of decomposition by-products. Based on the calculated reaction rates, the activation energies and pre-exponential factors required to formulate Arrhenius equations describing the kinetics of decomposition of the tested soft tissue components were calculated.
Moreover, by adjusting the model developed for collagen, system scalability and the correct implementation of the periodic boundary conditions were evaluated. The obtained results provide deeper insight into the decomposition of the selected tendon components. The developed methodology may also be easily transferred to other connective tissue elements and might therefore be used for further studies on soft tissue aging. Keywords: decomposition, molecular dynamics, soft tissue, tendons
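The Arrhenius parameters mentioned above are conventionally extracted from temperature-dependent rate constants by a linear least-squares fit of ln k against 1/T. A minimal sketch of that step (the rate data below are synthetic, generated from assumed parameters, not values from the study):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_fit(temps_K, rates):
    """Least-squares fit of ln k = ln A - Ea/(R*T).
    Returns (activation energy Ea in J/mol, pre-exponential factor A)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return -slope * R, math.exp(intercept)

# Synthetic rate constants generated with assumed Ea = 150 kJ/mol, A = 1e12 1/s
temps = [900.0, 1000.0, 1100.0, 1200.0]
true_Ea, true_A = 150e3, 1e12
ks = [true_A * math.exp(-true_Ea / (R * T)) for T in temps]

Ea, A = arrhenius_fit(temps, ks)  # recovers the generating parameters
```

Because the synthetic data lie exactly on an Arrhenius line, the fit recovers the generating Ea and A; with noisy MD-derived rates the same regression yields the reported parameters with uncertainty.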
Procedia PDF Downloads 210114 mHealth-based Diabetes Prevention Program among Mothers with Abdominal Obesity: A Randomized Controlled Trial
Authors: Jia Guo, Qinyuan Huang, Qinyi Zhong, Yanjing Zeng, Yimeng Li, James Wiley, Kin Cheung, Jyu-Lin Chen
Abstract:
Context: Mothers with abdominal obesity, particularly in China, face challenges in managing their health due to family responsibilities. Existing diabetes prevention programs do not cater specifically to this demographic. Research Aim: To assess the feasibility, acceptability, and efficacy of an mHealth-based diabetes prevention program tailored for Chinese mothers with abdominal obesity in reducing weight-related variables and diabetes risk. Methodology: A randomized controlled trial was conducted in Changsha, China, where the mHealth group received personalized modules and health messages, while the control group received general health education. Data were collected at baseline, 3 months, and 6 months. Findings: The mHealth intervention significantly improved waist circumference, modifiable diabetes risk scores, daily steps, self-efficacy for physical activity, social support for physical activity, and physical health satisfaction compared to the control group. However, no differences were found in BMI and certain other variables. Theoretical Importance: The study demonstrates the feasibility and efficacy of a tailored mHealth intervention for Chinese mothers with abdominal obesity, emphasizing the potential for such programs to improve health outcomes in this population. Data Collection: Data on various variables including weight-related measures, diabetes risk scores, behavioral and psychological factors were collected at baseline, 3 months, and 6 months from participants in the mHealth and control groups. Analysis Procedures: Generalized estimating equations were used to analyze the data collected from the mHealth and control groups at different time points during the study period. Question Addressed: The study addressed the effectiveness of an mHealth-based diabetes prevention program tailored for Chinese mothers with abdominal obesity in improving various health outcomes compared to traditional general health education approaches. 
Conclusion: The tailored mHealth intervention proved to be feasible and effective in improving weight-related variables, physical activity, and physical health satisfaction among Chinese mothers with abdominal obesity, highlighting its potential for delivering diabetes prevention programs to this population. Keywords: type 2 diabetes, mHealth, obesity, prevention, mothers
Procedia PDF Downloads 57113 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the prediction accuracy of breast cancer progression using an original mathematical model referred to as CoMPaS and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS which reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the CoMPaS scope of application; 4) implementing the model as a software tool. The foundation of the CoMPaS is the exponential tumor growth model, which is described by determinate nonlinear and linear equations. The CoMPaS corresponds to the TNM classification. It allows calculation of the different growth periods of the primary tumor and the secondary distant metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for the secondary distant metastases; 3) the 'visible period' for the secondary distant metastases. The CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes a forecast using only current patient data, whereas the others rely on additional statistical data.
The CoMPaS model and predictive software: a) fit clinical trial data; b) detect the different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period of appearance of the secondary distant metastases; d) have a higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by CoMPaS: the number of doublings for the 'non-visible' and 'visible' growth periods of the secondary distant metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of the secondary distant metastases. The CoMPaS enables, for the first time, prediction of the 'whole natural history' of primary tumor and secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) CoMPaS correctly describes the primary tumor growth of IA, IIA, IIB, IIIB (T1-4N0M0) stages without metastases in lymph nodes (N0); b) facilitates the understanding of the appearance period and inception of the secondary distant metastases. Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival
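The doubling arithmetic behind an exponential tumor growth model of this kind can be sketched in a few lines: under exponential growth, the number of volume doublings between two tumor diameters follows from the cube of the diameter ratio, and the doubling time is the elapsed time divided by that count. The 10 µm starting cell diameter and 10 mm detection threshold below are common illustrative assumptions, not parameters taken from CoMPaS:

```python
import math

def sphere_volume(diameter_mm):
    """Volume of a sphere, assuming the tumor is approximately spherical."""
    r = diameter_mm / 2.0
    return (4.0 / 3.0) * math.pi * r ** 3

def doublings_between(d_start_mm, d_end_mm):
    """Number of volume doublings to grow from one diameter to another
    under exponential growth: n = log2(V_end / V_start)."""
    return math.log2(sphere_volume(d_end_mm) / sphere_volume(d_start_mm))

def doubling_time_days(d_start_mm, d_end_mm, elapsed_days):
    """Tumor volume doubling time implied by growth over a known interval."""
    return elapsed_days / doublings_between(d_start_mm, d_end_mm)

# Illustration: growth from a single 10 µm cell to a 10 mm tumor
# requires 3 * log2(1000), i.e. about 30, volume doublings.
n = doublings_between(0.01, 10.0)
```

This is the kind of 'non-visible period' bookkeeping the abstract describes: most of a tumor's doublings occur before it reaches a clinically detectable diameter.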
Procedia PDF Downloads 341112 The Aromaticity of P-Substituted O-(N-Dialkyl)Aminomethylphenols
Authors: Khodzhaberdi Allaberdiev
Abstract:
Aromaticity, one of the most important concepts in organic chemistry, has attracted considerable interest from both experimentalists and theoreticians. The geometry optimization of p-substituted o-(N-dialkyl)aminomethylphenols, XC₆H₅CH₂Y (X = p-OCH₃, CH₃, H, F, Cl, Br, COCH₃, COOCH₃, CHO, CN and NO₂; Y = o-N(C₂H₅)₂), hereafter o-DEAMPHs, has been performed in the gas phase at the B3LYP/6-311+G(d,p) level. The aromaticities of the considered molecules were investigated using different indices, including geometrical (HOMA and Bird), electronic (FLU, PDI and SA) and magnetic (NICS(0), NICS(1) and NICS(1)zz) indices. Linear dependencies were obtained between some aromaticity indices. The best correlation is observed between the Bird and PDI indices (R² = 0.9240). However, not all types of indices, or even different indices within the same type, correlate well with each other. Surprisingly, for the studied molecules in which the geometrical and electronic indices cannot correctly give the aromaticity of the ring, the magnetism-based index successfully predicts the aromaticity of the systems. ¹H NMR spectra of the compounds were obtained at the B3LYP/6-311+G(d,p) level using the GIAO method. An excellent linear correlation (R² = 0.9996) between the experimentally obtained ¹H NMR chemical shifts of the hydrogen atom and those calculated using B3LYP/6-311+G(d,p) demonstrates a good assignment of the experimental chemical shift values to the calculated structures of o-DEAMPHs. It is found that the best linear correlation with the Hammett substituent constants is observed for the NICS(1)zz index, in comparison with the other indices: NICS(1)zz = -21.5552 + 1.1070 σp⁻ (R² = 0.9394). The presence of an intramolecular hydrogen bond in the studied molecules also changes the aromatic character of the substituted o-DEAMPHs. For R = NO₂, the HOMA index predicted a reduction in π-electron delocalization of 3.4%, about double that observed for p-nitrophenol.
The influence of intramolecular H-bonding on the aromaticity of the benzene ring in the ground state (S₀) is described by equations between NICS(1)zz and the H-bond energies: experimental, Eₑₓₚ; IR-spectroscopically predicted, Eν; and topological, EQTAIM, with correlation coefficients R² = 0.9666, R² = 0.9028 and R² = 0.8864, respectively. The NICS(1)zz index also correlates with the usual descriptors of the hydrogen bond, while the other indices do not give any meaningful results. The influence of intramolecular H-bond formation on the aromaticity of some substituted o-DEAMPHs is a criterion for considering the multidimensional character of aromaticity. Linear relationships were also revealed between NICS(1)zz and both the pyramidality of the nitrogen atom, ΣN(C₂H₅)₂, and the dihedral angle, φ CAr-CAr-CCH₂-N, characterizing out-of-plane properties. These results demonstrate the nonplanar structure of the o-DEAMPHs. Finally, when considering the dependencies of NICS(1)zz, the data for R = H were excluded, because the NICS(1) and NICS(1)zz values are most negative for the unsubstituted DEAMPH, indicating its highest aromaticity; this was not the case for the NICS(0) index. Keywords: aminomethylphenols, DFT, aromaticity, correlations
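The reported regressions (NICS(1)zz against Hammett constants, H-bond energies, and geometric descriptors) are all ordinary least-squares fits with an R² quality measure. A small self-contained sketch of that procedure: the σ values below are standard tabulated Hammett constants for a few of the substituents named above, while the NICS values are synthetic points placed exactly on the reported regression line, purely to exercise the fit (they are not computed data from the study):

```python
def linear_fit_r2(xs, ys):
    """Ordinary least-squares line y = a + b*x and coefficient of
    determination R^2 = 1 - SS_res / SS_tot."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Tabulated Hammett sigma-p constants: OCH3, CH3, H, F, Cl, NO2
sigma = [-0.27, -0.17, 0.0, 0.06, 0.23, 0.78]
# Synthetic NICS(1)zz values on the reported line -21.5552 + 1.1070*sigma
nics = [-21.5552 + 1.1070 * s for s in sigma]

a, b, r2 = linear_fit_r2(sigma, nics)
```

With real computed NICS values the scatter about the line gives the quoted R² = 0.9394; here, by construction, the fit recovers the intercept and slope exactly.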
Procedia PDF Downloads 181111 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database
Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani
Abstract:
The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern Eurasia margin, resulting in a considerably active seismic region. The Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project (BSHAP) (2007-2011, 2012-2015), supported by NATO, produced new seismic hazard maps of the Western Balkans, but when inspecting the seismic hazard models later developed by these countries on a national scale, significant differences in design PGA values are observed at the borders, for instance, North Albania-Montenegro, South Albania-Greece, etc. Considering that the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations (GMPEs), which are generally the component with the highest impact on seismic hazard assessment. At the time of the project, only a modest database was available, namely 672 three-component records, whereas nowadays this strong motion database has grown considerably, up to 20,939 records with Mw ranging from 3.7 to 7 and epicentral distances from 0.47 km to 490 km. Statistical analysis of the strong motion database showed a lack of recordings in the moderate-to-large magnitude and short distance ranges; therefore, there is a need to re-evaluate the Ground Motion Prediction Equations in light of the recently updated database and the new generations of GMMs. In some cases, it was observed that some events were more extensively documented in one database than in the other, like the 1979 Montenegro earthquake, with a considerably larger number of records in the BSHAP Analogue SM database than in ESM23. Therefore, the strong motion flat-file provided by the Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project was merged with the ESM23 database for the polygon studied in this project.
After performing the preliminary residual analysis, the candidate GMPEs were identified. This process was done using the GMPE performance metrics available within the SMT in the OpenQuake platform. The likelihood (LLH) model and Euclidean distance-based ranking (EDR) were used. Finally, a GMPE logic tree was selected for this study and, following the selection of candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum. Keywords: residual analysis, GMPE, Western Balkans, strong motion, OpenQuake
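The average-sample-log-likelihood weighting of Scherbaum can be sketched as follows: each GMPE's LLH score is the negative mean log2 of the model's probability density evaluated at the observed residuals, and the logic-tree weight is made proportional to 2^(-LLH), so better-performing (lower-LLH) models receive larger weights. The numbers below are illustrative, not scores from this study:

```python
import math

def llh_score(pdf_values):
    """Average sample log-likelihood (LLH): negative mean log2 of the
    GMPE's PDF evaluated at each observed ground motion."""
    n = len(pdf_values)
    return -sum(math.log2(p) for p in pdf_values) / n

def logic_tree_weights(llh_scores):
    """Data-driven logic-tree weights: w_i proportional to 2^(-LLH_i),
    normalized to sum to one."""
    raw = [2.0 ** (-s) for s in llh_scores]
    total = sum(raw)
    return [r / total for r in raw]

# Illustrative LLH scores for three candidate GMPEs (lower is better)
weights = logic_tree_weights([1.2, 1.5, 2.0])
```

A model whose density at every observation were exactly 0.5 would score LLH = 1 by construction, which is why the 2^(-LLH) transform is a natural "number of equally good models" scale.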
Procedia PDF Downloads 88110 Numerical Simulation of Waves Interaction with a Free Floating Body by MPS Method
Authors: Guoyu Wang, Meilian Zhang, Chunhui Li, Bing Ren
Abstract:
In recent decades, a variety of floating structures have played a crucial role in ocean and marine engineering, such as ships, offshore platforms, floating breakwaters, fish farms, floating airports, etc. It is common for floating structures to suffer loadings under waves, and the responses of structures mounted in marine environments are strongly related to the wave impacts. The interaction between surface waves and floating structures is one of the important issues in ship and marine structure design for increasing performance and efficiency. With the progress of computational fluid dynamics, a number of numerical models based on the Navier-Stokes (NS) equations in the time domain have been developed to explore this problem, such as the finite difference method or the finite volume method. Those traditional numerical simulation techniques for moving bodies are grid-based, and may encounter difficulties when treating large free-surface deformations and moving boundaries. In these models, the moving structures, in a Lagrangian formulation, need to be appropriately described on grids, and special treatment of the moving boundary is inevitable. Moreover, in mesh-based models, the movement of the grid near the structure, or the communication between the moving Lagrangian structure and the Eulerian meshes, increases the algorithmic complexity. Fortunately, these challenges can be avoided by meshless particle methods. In the present study, a moving particle semi-implicit (MPS) model is explored for the numerical simulation of fluid-structure interaction with surface flows, especially the coupling of the fluid and a moving rigid body. An equivalent momentum transfer method is proposed and derived for this coupling. The structure is discretized into a group of solid particles, which are treated as fluid particles when solving the NS equations together with the surrounding fluid particles.
Momentum conservation is ensured by the transfer from those fluid particles to the corresponding solid particles. Then, the positions of the solid particles are updated to keep the initial shape of the structure. Using the proposed method, the motions of a free-floating body in regular waves are numerically studied. The wave surface elevation and the dynamic response of the floating body are presented. Good agreement is found when the numerical results, such as the sway, heave, and roll of the floating body, are compared with experimental and other numerical data. It is demonstrated that the presented MPS model is effective for the numerical simulation of fluid-structure interaction. Keywords: floating body, fluid structure interaction, MPS, particle method, waves
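The translational part of a momentum-transfer coupling step like the one described above can be sketched in a few lines: the solid particles' individually computed (fluid-like) velocities are replaced by the single rigid-body velocity that carries the same total linear momentum, so the body moves as one unit without creating or destroying momentum. This is a 2-D, translation-only simplification for illustration; rotation and the shape-preserving position update described in the abstract are omitted:

```python
def rigid_body_velocity(velocities, masses):
    """Rigid-body translational velocity carrying the same total linear
    momentum as the individually computed solid-particle velocities."""
    total_m = sum(masses)
    vx = sum(m * v[0] for m, v in zip(masses, velocities)) / total_m
    vy = sum(m * v[1] for m, v in zip(masses, velocities)) / total_m
    return (vx, vy)

def apply_momentum_transfer(velocities, masses):
    """Replace each solid particle's velocity with the rigid-body velocity:
    total linear momentum is conserved and the particles move together."""
    v_body = rigid_body_velocity(velocities, masses)
    return [v_body] * len(velocities)

# Two solid particles with different provisional velocities
new_vels = apply_momentum_transfer([(1.0, 0.0), (0.0, 1.0)], [1.0, 2.0])
```

In the full MPS scheme this correction is applied every time step after the semi-implicit pressure solve, followed by an analogous angular-momentum correction for rotation.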
Procedia PDF Downloads 75109 Role of Yeast-Based Bioadditive on Controlling Lignin Inhibition in Anaerobic Digestion Process
Authors: Ogemdi Chinwendu Anika, Anna Strzelecka, Yadira Bajón-Fernández, Raffaella Villa
Abstract:
Anaerobic digestion (AD) has been used since time immemorial to treat organic wastes in the environment, especially in sewage and wastewater treatment. Recently, the rising demand for renewable energy from organic matter has caused the spectrum of AD substrates to expand to include a wider variety of organic materials, such as agricultural residues and farm manure, of which around 140 billion metric tons are generated annually worldwide. The problem, however, is that agricultural wastes are composed of materials that are heterogeneous and difficult to degrade, particularly lignin, which makes up about 0-40% of the total lignocellulose content. This study aimed to evaluate the impact of varying concentrations of lignin on biogas yields, and their subsequent response to a commercial yeast-based bioadditive, in batch anaerobic digesters. The experiments were carried out in batches over a retention time of 56 days with different lignin concentrations (200 mg, 300 mg, 400 mg, 500 mg, and 600 mg) subjected to different conditions, first to determine the bioadditive concentration most effective for overall process improvement and yield increase. The batch experiments were set up in 130 mL bottles with a working volume of 60 mL, maintained at 38°C in an incubator shaker (150 rpm). Digestate obtained from a local plant operating under mesophilic conditions was used as the starting inoculum, and commercial kraft lignin was used as feedstock. Biogas measurements were carried out using the displacement method and were corrected to standard temperature and pressure using standard gas equations. Furthermore, the modified Gompertz equation was used to non-linearly regress the resulting data and estimate the gas production potential, production rates, and duration of lag phases as indicators of the degree of lignin inhibition.
The results showed that lignin had a strong inhibitory effect on the AD process: the higher the lignin concentration, the stronger the inhibition. The modelling also showed that the rates of gas production were influenced by the concentration of the lignin substrate added to the system: the higher the lignin concentration in mg (0, 200, 300, 400, 500, and 600), the lower the respective rate of gas production in mL/gVS·day (3.3, 2.2, 2.3, 1.6, 1.3, and 1.1), although the rate at 300 mg was 0.1 mL/gVS·day higher than at 200 mg. The impact of the yeast-based bioadditive on the production rate was most significant at 400 mg and 500 mg, where the rate was improved by 0.1 mL/gVS·day and 0.2 mL/gVS·day, respectively. This indicates that agricultural residues with higher lignin content may be more responsive to inhibition alleviation by the yeast-based bioadditive; therefore, further study of its application to the AD of agricultural residues with high lignin content will be the next step in this research. Keywords: anaerobic digestion, renewable energy, lignin valorisation, biogas
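The modified Gompertz model used for the regression is commonly written as B(t) = P·exp(−exp(Rm·e/P·(λ − t) + 1)), where P is the ultimate gas production potential, Rm the maximum production rate, and λ the lag phase. A minimal sketch of the model function with illustrative (not fitted) parameter values:

```python
import math

def modified_gompertz(t, P, Rm, lam):
    """Cumulative biogas at time t (days).
    P   : ultimate gas production potential (e.g. mL/gVS)
    Rm  : maximum gas production rate (e.g. mL/gVS per day)
    lam : lag-phase duration (days)"""
    return P * math.exp(-math.exp(Rm * math.e / P * (lam - t) + 1.0))

# Illustrative parameters (assumed for the sketch, not values from the study)
P, Rm, lam = 250.0, 2.2, 5.0

early = modified_gompertz(10.0, P, Rm, lam)    # shortly after the lag phase
late = modified_gompertz(200.0, P, Rm, lam)    # approaches the plateau P
```

In practice the three parameters are obtained by nonlinear least squares against the measured cumulative biogas curve, and a longer fitted λ or lower Rm at higher lignin loadings quantifies the inhibition reported above.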
Procedia PDF Downloads 91