Search results for: restructuring digital factory model
9128 Magnetohemodynamic of Blood Flow Having Impact of Radiative Flux Due to Infrared Magnetic Hyperthermia: Spectral Relaxation Approach
Authors: Ebenezer O. Ige, Funmilayo H. Oyelami, Joshua Olutayo-Irheren, Joseph T. Okunlola
Abstract:
Hyperthermia therapy is an adjuvant procedure during which perfused body tissue is subjected to an elevated temperature range in a bid to achieve improved drug potency and efficacy in cancer treatment. While one class of hyperthermia techniques relies on thermal radiation derived from a single-source electro-radiation measure, there are deliberations on conjugating dual radiation field sources in an attempt to improve the delivery of the therapy procedure. This paper numerically explores the thermal effectiveness of combined infrared hyperthermia with nanoparticle recirculation in the vicinity of an imposed magnetic field on the subcutaneous strata of a model lesion as an ablation scheme. An elaborate spectral relaxation method (SRM) was formulated to handle the coupled equations of momentum and thermal equilibrium in the blood-perfused domain of a spongy fibrous tissue. Thermal diffusion regimes in the presence of an externally imposed magnetic field were described by leveraging the well-known Rosseland diffusion approximation to delineate the impact of radiative flux within the computational domain. The contribution of tissue sponginess was examined using the mechanics of pore-scale porosity over a selection of clinically informed scenarios. Our observations showed that, for a substantial depth of spongy lesion, the magnetic field architecture constitutes the control regime of hemodynamics at the blood-tissue interface while facilitating thermal transport across the depth of the model lesion. This parameter-indicator could be utilized to control the dispensing of hyperthermia treatment in intravenously perfused tissue.
Keywords: spectral relaxation scheme, thermal equilibrium, Rosseland diffusion approximation, hyperthermia therapy
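For reference, the Rosseland diffusion approximation invoked in this abstract is commonly written as follows. This is the standard textbook form, not the paper's exact formulation; the symbols (Stefan-Boltzmann constant σ*, mean absorption coefficient k*) are assumptions:

```latex
% Rosseland approximation for the radiative heat flux q_r in the
% energy equation, with the usual linearization about T_infinity:
q_r = -\frac{4\sigma^{*}}{3k^{*}}\,\frac{\partial T^{4}}{\partial y}
    \approx -\frac{16\sigma^{*}T_{\infty}^{3}}{3k^{*}}\,
      \frac{\partial T}{\partial y},
\qquad T^{4}\approx 4T_{\infty}^{3}\,T-3T_{\infty}^{4}.
```

The linearized form is what typically enters the thermal boundary-layer equation as an enhanced-conduction term.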
Procedia PDF Downloads 120
9127 3D Numerical Modelling of a Pulsed Pumping Process of a Large Dense Non-Aqueous Phase Liquid Pool: In situ Pilot-Scale Case Study of Hexachlorobutadiene in a Keyed Enclosure
Authors: Q. Giraud, J. Gonçalvès, B. Paris
Abstract:
Remediation of dense non-aqueous phase liquids (DNAPLs) represents a challenging issue because of their persistent behaviour in the environment. This pilot-scale study investigates, by means of in situ experiments and numerical modelling, the feasibility of a pulsed pumping process for a large amount of DNAPL in an alluvial aquifer. The main compound of the DNAPL is hexachlorobutadiene, an emerging organic pollutant. A low-permeability keyed enclosure was built at the location of the DNAPL source zone in order to isolate a finite undisturbed volume of soil, and a 3-month pulsed pumping process was applied inside the enclosure to extract the DNAPL exclusively. The water/DNAPL interface elevation at both the pumping and observation wells and the cumulative pumped volume of DNAPL were recorded. A total volume of about 20 m³ of pure DNAPL was recovered, since no water was extracted during the process. The three-dimensional multiphase flow simulator TMVOC was used, and a conceptual model was elaborated and generated with the pre/post-processing tool mView. The numerical model consisted of 10 layers of variable thickness and 5060 grid cells. Numerical simulations reproduce the pulsed pumping process and show an excellent match between simulated and field data for the cumulative pumped volume of DNAPL, and a reasonable agreement between modelled and observed data for the evolution of the water/DNAPL interface elevations at the two wells. This study offers a new perspective in remediation, since DNAPL pumping system optimisation may be performed where a large amount of DNAPL is encountered.
Keywords: dense non-aqueous phase liquid (DNAPL), hexachlorobutadiene, in situ pulsed pumping, multiphase flow, numerical modelling, porous media
Procedia PDF Downloads 175
9126 Evaluation of Hepatic Metabolite Changes for Differentiation Between Non-Alcoholic Steatohepatitis and Simple Hepatic Steatosis Using Long Echo-Time Proton Magnetic Resonance Spectroscopy
Authors: Tae-Hoon Kim, Kwon-Ha Yoon, Hong Young Jun, Ki-Jong Kim, Young Hwan Lee, Myeung Su Lee, Keum Ha Choi, Ki Jung Yun, Eun Young Cho, Yong-Yeon Jeong, Chung-Hwan Jun
Abstract:
Purpose: To assess the changes of hepatic metabolites for differentiation between non-alcoholic steatohepatitis (NASH) and simple steatosis on proton magnetic resonance spectroscopy (1H-MRS) in both humans and an animal model. Methods: The local institutional review board approved this study, and subjects gave written informed consent. 1H-MRS measurements were performed on a localized voxel of the liver using a point-resolved spectroscopy (PRESS) sequence, and the hepatic metabolites alanine (Ala), lactate/triglyceride (Lac/TG), and TG were analyzed in the NASH, simple steatosis, and control groups. Group differences were tested with ANOVA and Tukey's post-hoc tests, and diagnostic accuracy was tested by calculating the area under the receiver operating characteristic (ROC) curve. The associations between metabolite concentrations and pathologic grades or non-alcoholic fatty liver disease (NAFLD) activity scores were assessed by Pearson's correlation. Results: Patients with NASH showed elevated Ala (p < 0.001), Lac/TG (p < 0.001), and TG (p < 0.05) concentrations when compared with patients who had simple steatosis and healthy controls. The NASH patients had higher levels of Ala (mean±SEM, 52.5±8.3 vs 2.0±0.9; p < 0.001) and Lac/TG (824.0±168.2 vs 394.1±89.8; p < 0.05) than those with simple steatosis. The area under the ROC curve to distinguish NASH from simple steatosis was 1.00 (95% confidence interval: 1.00, 1.00) with Ala and 0.782 (95% confidence interval: 0.61, 0.96) with Lac/TG. The Ala and Lac/TG levels were well correlated with steatosis grade, lobular inflammation, and NAFLD activity scores. The metabolic changes in humans were reproducible in a mouse model induced by streptozotocin injection and a high-fat diet. Conclusion: 1H-MRS would be useful for the differentiation of patients with NASH from those with simple hepatic steatosis.
Keywords: non-alcoholic fatty liver disease, non-alcoholic steatohepatitis, 1H MR spectroscopy, hepatic metabolites
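The area under the ROC curve reported above is equivalent to the Mann-Whitney probability that a randomly chosen NASH case has a higher metabolite level than a randomly chosen simple-steatosis case. A minimal sketch of that computation, using hypothetical alanine values (not the study's data):

```python
def auc(positives, negatives):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

# Hypothetical alanine (Ala) concentrations, NOT the study's data:
nash = [40.1, 55.3, 61.2, 48.7]    # NASH patients
steatosis = [1.2, 2.5, 3.1, 0.8]   # simple steatosis
print(auc(nash, steatosis))  # → 1.0: Ala separates the groups perfectly
```

An AUC of 1.0, as reported for Ala in the abstract, means every case in one group exceeds every case in the other.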
Procedia PDF Downloads 328
9125 The Use of Correlation Difference for the Prediction of Leakage in Pipeline Networks
Authors: Mabel Usunobun Olanipekun, Henry Ogbemudia Omoregbee
Abstract:
Anomalies such as water pipeline and hydraulic or petrochemical pipeline network leakages and bursts have significant implications for economic conditions and the environment. To ensure that pipeline systems are reliable, they must be efficiently controlled. Wireless Sensor Networks (WSNs) have become a powerful tool in critical infrastructure monitoring systems for water, oil, and gas pipelines. The loss of water, oil, and gas is inevitable and is strongly linked to financial costs and environmental problems, and its avoidance often saves economic resources. Substantial repair costs and the loss of precious natural resources are part of the financial impact of leaking pipes. Pipeline systems experts have implemented various methodologies in recent decades to identify and locate leakages in water, oil, and gas supply networks. These methodologies include, among others, the use of acoustic sensors, pressure measurements, and abrupt statistical analysis. The leak quantification problem is to estimate, given some observations about a network, the size and location of one or more leaks in a water pipeline network. In detecting background leakage, however, there is greater uncertainty in using these methodologies, since their output is not so reliable. In this work, we present a scalable concept and simulation in which a pressure-driven model (PDM) was used to determine water pipeline leakage in a network. Pressure data were collected with acoustic sensors located at node points a predetermined distance apart. Using the correlation difference, we were able to locate a leakage introduced at a predetermined point between two consecutive nodes, which caused a substantial pressure difference in the pipeline network.
After de-noising the signals from the sensors at the nodes, we successfully obtained the exact point where we introduced the local leakage using the correlation difference model we developed.
Keywords: leakage detection, acoustic signals, pipeline network, correlation, wireless sensor networks (WSNs)
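Correlation-based leak localization of the kind described above typically works by cross-correlating the de-noised signals from two neighbouring nodes: the lag of the correlation peak gives the arrival-time difference of the leak noise, from which the leak position follows. A minimal sketch with synthetic signals (the wave speed, spacing, and delay are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Two sensor nodes a distance L apart hear the same leak-noise burst
# with different travel delays; the cross-correlation peak lag
# estimates tau = t1 - t2.
rng = np.random.default_rng(0)
fs = 1000.0   # sampling rate, Hz (assumed)
c = 1200.0    # acoustic wave speed in the pipe, m/s (assumed)
L = 100.0     # distance between the two sensor nodes, m (assumed)

burst = rng.normal(size=200)            # leak-noise burst
delay = 20                              # node 2 hears it 20 samples later
s1 = np.concatenate([burst, np.zeros(400)])
s2 = np.concatenate([np.zeros(delay), burst, np.zeros(400 - delay)])

# Cross-correlate; the peak lag gives the arrival-time difference.
lags = np.arange(-(len(s2) - 1), len(s1))
xcorr = np.correlate(s1, s2, mode="full")
tau = lags[np.argmax(xcorr)] / fs       # -0.02 s here: leak nearer node 1

# With arrival times t1 = d1/c and t2 = (L - d1)/c, the leak sits at
# d1 = (L + c * tau) / 2 from node 1.
d1 = (L + c * tau) / 2                  # → 38.0 m for these settings
```

The same arithmetic applies to pressure-transient signals; only the propagation speed changes.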
Procedia PDF Downloads 115
9124 Electrohydrodynamic Study of Microwave Plasma PECVD Reactor
Authors: Keltoum Bouherine, Olivier Leroy
Abstract:
The present work is dedicated to the study of a three-dimensional (3D) self-consistent fluid simulation of microwave argon plasma discharges in a PECVD reactor. The model solves Maxwell's equations, continuity equations for charged species, and the electron energy balance equation, coupled with Poisson's equation and the Navier-Stokes equations, by the finite element method, using COMSOL Multiphysics software. In this study, the simulations yield the profiles of plasma components as well as the charge densities and electron temperature, the electric field, the gas velocity, and the gas temperature. The results show that the microwave plasma reactor is outside of local thermodynamic equilibrium.
Keywords: electron density, electric field, microwave plasma reactor, gas velocity, non-equilibrium plasma
Procedia PDF Downloads 331
9123 Working with Interpreters: Using Role Play to Teach Social Work Students
Authors: Yuet Wah Echo Yeung
Abstract:
Working with people from minority ethnic groups, refugees, and asylum-seeking communities who have limited proficiency in the language of the host country often presents a major challenge for social workers. Because of language differences, social workers need to work with interpreters to ensure accurate information is collected for their assessment and intervention. Drawing on social learning theory, this paper discusses how role play was used as an experiential learning exercise in a training session to help social work students develop skills for working with interpreters. Social learning theory posits that learning is a cognitive process that takes place in a social context when people observe, imitate, and model others' behaviours. The role play also helped students understand the role of the interpreter and the challenges they may face when they rely on interpreters to communicate with service users and their families. The first part of the session involved role play. A tutor played the role of social worker and deliberately behaved in an unprofessional manner and used inappropriate body language when working alongside the interpreter during a home visit. The purpose of the role play was not to provide a positive role model for students to 'imitate' social workers' behaviours. Rather, it aimed to activate and provoke students' internal thinking processes and to encourage them to critically consider the impacts of poor practice on relationship building and the intervention process. Having critically reflected on the implications of poor practice, students were then asked to play the role of social worker and demonstrate what good practice should look like. At the end of the session, students remarked that they learnt a lot by observing the good and bad examples; it showed them what not to do.
The exercise served to remind students how easily practitioners can slip into bad habits, and of the importance of respect for cultural difference when working with people from different cultural backgrounds.
Keywords: role play, social learning theory, social work practice, working with interpreters
Procedia PDF Downloads 181
9122 Determinants of Investment in Vaca Muerta, Argentina
Authors: Ivan Poza Martínez
Abstract:
The international energy landscape has been significantly affected by the Covid-19 pandemic and the conflict in Ukraine. The Vaca Muerta sedimentary formation in Argentina's Neuquén province has become a crucial area for energy production, specifically in the shale gas and shale oil sectors. The massive investment required for the exploitation of this reserve makes it essential to understand the determinants of investment in the upstream sector at both local and international levels. The aim of this study is to identify the qualitative and quantitative determinants of investment in Vaca Muerta. The research methodology employs both quantitative (econometric) and qualitative approaches. A linear regression model is used to analyze the determinants of investment in non-conventional hydrocarbons. The study highlights that, in addition to quantitative factors, qualitative variables, particularly the design of a regulatory framework, significantly influence the level of investment in Vaca Muerta. The analysis reveals the importance of attracting both domestic and foreign capital investment. This research contributes to understanding the factors influencing investment in the Vaca Muerta region compared to other published studies. It emphasizes the role of qualitative variables, such as regulatory frameworks, in the development of the shale gas and oil sectors. The study uses a combination of quantitative data, such as investment figures, and qualitative data, such as regulatory frameworks. The data are collected from various reports and industry publications. The linear regression model is used to analyze the relationship between these variables and investment in Vaca Muerta. The research addresses the question of what factors drive investment in the Vaca Muerta region, from both a quantitative and a qualitative perspective.
The study concludes that a combination of quantitative and qualitative factors, including the design of a regulatory framework, plays a significant role in attracting investment in Vaca Muerta. It highlights the importance of these determinants in the development of the local energy sector and the potential economic benefits for Argentina and the Southern Cone region.
Keywords: Vaca Muerta, FDI, shale gas, shale oil, YPF
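A regression of the kind the abstract describes typically encodes the qualitative regulatory variable as a dummy alongside quantitative drivers. A minimal ordinary-least-squares sketch; the variable names and all figures below are illustrative assumptions, not the study's data:

```python
import numpy as np

# Investment regressed on a quantitative driver (oil price) plus a
# 0/1 dummy for a favourable regulatory framework (toy data).
oil_price = np.array([45.0, 55.0, 60.0, 70.0, 80.0, 65.0])  # USD/bbl
reg_dummy = np.array([0, 0, 1, 1, 1, 0])                    # framework in force?
investment = np.array([1.0, 1.4, 2.6, 3.1, 3.7, 1.8])       # USD bn

# Design matrix with intercept; solve the least-squares problem.
X = np.column_stack([np.ones_like(oil_price), oil_price, reg_dummy])
beta, *_ = np.linalg.lstsq(X, investment, rcond=None)
intercept, b_price, b_regulation = beta
# A positive b_regulation indicates that, at a given oil price, the
# regulatory framework is associated with higher investment.
```

With real data one would also report standard errors and fit diagnostics, e.g. via a statistics package, before interpreting the coefficients.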
Procedia PDF Downloads 60
9121 Statistical and Artificial Neural Network Modeling of Suspended Sediment in the Mina River Watershed at the Wadi El-Abtal Gauging Station (Northern Algeria)
Authors: Redhouane Ghernaout, Amira Fredj, Boualem Remini
Abstract:
Suspended sediment transport is a serious problem worldwide, but it is much more worrying in certain regions of the world, as is the case in the Maghreb and more particularly in Algeria. It continues to reach disturbing proportions in Northern Algeria due to the variability of rainfall in time and space and the constant deterioration of vegetation. Its prediction is essential in order to identify its intensity and define the actions necessary for its reduction. The purpose of this study is to analyze the suspended sediment concentration data measured at the Wadi El-Abtal hydrometric station. It also aims to find and highlight regressive power relationships that can explain the suspended solid flow by the measured liquid flow. The study strives to find artificial neural network models linking the flow, month, and precipitation parameters with solid flow. The results obtained show that the power function of the sediment rating curve and artificial neural network models are appropriate methods for analysing and estimating suspended sediment transport in Wadi Mina at the Wadi El-Abtal hydrometric station. They made it possible to identify, fairly conclusively, a neural network model with four input parameters: the liquid flow Q, the month, and the daily precipitation measured at the representative stations (Frenda 013002 and Ain El-Hadid 013004) of the watershed. The model thus obtained makes it possible to estimate (interpolate and extrapolate) daily solid flows even beyond the period of observation of solid flows (1985/86 to 1999/00), given the availability of average daily liquid flows and daily precipitation since 1953/1954.
Keywords: suspended sediment, concentration, regression, liquid flow, solid flow, artificial neural network, modeling, Mina, Algeria
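The power-function rating curve mentioned above, Qs = a·Q^b, is conventionally fitted by least squares in log-log space. A minimal sketch; the discharge and sediment-flux values are synthetic placeholders, not the Wadi Mina record:

```python
import numpy as np

# Synthetic liquid-flow / solid-flow pairs (NOT the Wadi Mina data):
Q  = np.array([0.5, 1.2, 3.0, 8.0, 20.0, 45.0])    # liquid flow, m3/s
Qs = np.array([0.02, 0.09, 0.5, 2.8, 14.0, 60.0])  # solid flow, kg/s

# Fit ln(Qs) = b*ln(Q) + ln(a): a straight line in log-log space.
b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)
a = np.exp(log_a)

def solid_flow(q):
    """Estimate suspended-sediment flux from liquid discharge (kg/s)."""
    return a * q ** b
```

Exponents b between roughly 1 and 2.5 are typical of such rating curves; the ANN model in the abstract generalizes this single-predictor relation by adding month and precipitation as inputs.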
Procedia PDF Downloads 105
9120 Improvement of Environment and Climate Change Canada's GEM-Hydro Streamflow Forecasting System
Authors: Etienne Gaborit, Dorothy Durnford, Daniel Deacu, Marco Carrera, Nathalie Gauthier, Camille Garnaud, Vincent Fortin
Abstract:
A new experimental streamflow forecasting system was recently implemented at Environment and Climate Change Canada's (ECCC) Canadian Centre for Meteorological and Environmental Prediction (CCMEP). It relies on CaLDAS (Canadian Land Data Assimilation System) for the assimilation of surface variables, and on a surface prediction system that feeds a routing component. The surface energy and water budgets are simulated with the SVS (Soil, Vegetation, and Snow) land-surface scheme (LSS) at 2.5-km grid spacing over Canada. The routing component is based on the Watroute routing scheme at 1-km grid spacing for the Great Lakes and Nelson River watersheds. The system is run in two distinct phases: an analysis phase and a forecast phase. During the analysis phase, CaLDAS outputs are used to force the routing system, which performs streamflow assimilation. In forecast mode, the surface component is forced with the Canadian GEM atmospheric forecasts and is initialized with a CaLDAS analysis. Streamflow performance of this new system is presented for 2019. Performance is compared to ECCC's current operational streamflow forecasting system, which differs from the new experimental system in many aspects. These new streamflow forecasts are also compared to persistence. Overall, the new streamflow forecasting system presents promising results, highlighting the need for an elaborate assimilation phase before performing the forecasts. However, the system is still experimental and is continuously being improved. Some major recent improvements are presented here and include, for example, the assimilation of snow cover data from remote sensing, backward propagation of assimilated flow observations, a new numerical scheme for the routing component, and a new reservoir model.
Keywords: assimilation system, distributed physical model, offline hydro-meteorological chain, short-term streamflow forecasts
Procedia PDF Downloads 132
9119 Pharmacophore-Based Modeling of a Series of Human Glutaminyl Cyclase Inhibitors to Identify Lead Molecules by Virtual Screening, Molecular Docking and Molecular Dynamics Simulation Study
Authors: Ankur Chaudhuri, Sibani Sen Chakraborty
Abstract:
In humans, glutaminyl cyclase activity is highly abundant in neuronal and secretory tissues and is preferentially restricted to the hypothalamus and pituitary. The N-terminal modification of β-amyloid (Aβ) peptides by the generation of pyro-glutamyl (pGlu)-modified Aβs (pE-Aβs) is an important process in the initiation of the formation of neurotoxic plaques in Alzheimer's disease (AD). This process is catalyzed by glutaminyl cyclase (QC). The expression of QC is characteristically up-regulated in the early stage of AD, and the hallmark of QC inhibition is the prevention of the formation of pE-Aβs and plaques. A computer-aided drug design (CADD) process was employed to guide the design of potentially active compounds and to understand their inhibitory potency against human QC. This work elaborates the ligand-based and structure-based pharmacophore exploration of QC using known inhibitors. Three-dimensional (3D) quantitative structure-activity relationship (QSAR) methods were applied to 154 compounds with known IC50 values. All the inhibitors were divided into a training set and a test set. The training set was used to build the quantitative pharmacophore model based on the principle of structural diversity, whereas the test set was employed to evaluate the predictive ability of the pharmacophore hypotheses. A chemical feature-based pharmacophore model was generated from the 92 known training-set compounds by the HypoGen module implemented in the Discovery Studio 2017 R2 software package. The best hypothesis (Hypo1) was selected based on the highest correlation coefficient (0.8906), the lowest total cost (463.72), and the lowest root mean square deviation (2.24 Å). A high correlation coefficient indicates greater predictive activity of the hypothesis, whereas a low root mean square deviation signifies a small deviation of experimental activity from the predicted one.
The best pharmacophore model (Hypo1) comprised four features: two hydrogen bond acceptors, one hydrogen bond donor, and one hydrophobic feature. Hypo1 was validated by several parameters, such as test-set activity prediction, cost analysis, Fischer's randomization test, the leave-one-out method, and a ligand-profiler heat map. The predicted features were then used for virtual screening of potential compounds from the NCI, ASINEX, Maybridge, and ChemBridge databases. More than seven million compounds were used for this purpose. The hit compounds were filtered by drug-likeness and pharmacokinetic properties. The selected hits were docked to the high-resolution three-dimensional structure of the target protein glutaminyl cyclase (PDB ID: 2AFU/2AFW) to filter them further. To validate the molecular docking results, the most active compound from the dataset was selected as a reference molecule. From the density functional theory (DFT) study, ten molecules were selected based on their highest HOMO (highest occupied molecular orbital) energies and lowest band-gap values. Molecular dynamics simulations with explicit solvation of the final ten hit compounds revealed that a large number of non-covalent interactions were formed with the binding site of human glutaminyl cyclase. It is suggested that the hit compounds reported in this study could help in the future design of potent lead inhibitors against human glutaminyl cyclase.
Keywords: glutaminyl cyclase, hit lead, pharmacophore model, simulation
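The drug-likeness filtering step mentioned above is commonly implemented with Lipinski's rule of five applied to precomputed molecular descriptors. A minimal sketch; the compound names and descriptor values are hypothetical placeholders, not the study's hits:

```python
# Lipinski's rule of five applied to precomputed descriptors
# (molecular weight, logP, H-bond donors/acceptors).
def passes_lipinski(mw, logp, h_donors, h_acceptors):
    """Rule of five: MW <= 500, logP <= 5, <= 5 donors, <= 10 acceptors."""
    return mw <= 500 and logp <= 5 and h_donors <= 5 and h_acceptors <= 10

# Hypothetical screening hits with hypothetical descriptor values:
hits = {
    "hit_001": dict(mw=342.4, logp=2.1, h_donors=1, h_acceptors=4),
    "hit_002": dict(mw=612.7, logp=5.8, h_donors=3, h_acceptors=9),
    "hit_003": dict(mw=488.5, logp=4.3, h_donors=2, h_acceptors=7),
}
drug_like = [name for name, d in hits.items() if passes_lipinski(**d)]
print(drug_like)  # → ['hit_001', 'hit_003']
```

In practice, descriptors would come from a cheminformatics toolkit and additional ADMET filters would follow before docking.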
Procedia PDF Downloads 132
9118 LHCII Protein Phosphorylation Changes Involved in the Dark-Chilling Response in Plant Species with Different Chilling Tolerance
Authors: Malgorzata Krysiak, Anna Wegrzyn, Maciej Garstka, Radoslaw Mazur
Abstract:
Under constantly fluctuating environmental conditions, the thylakoid membrane protein network has evolved the ability to respond dynamically to changing biotic and abiotic factors. One of the most important protective mechanisms is the rearrangement of the chlorophyll-protein (CP) complexes, induced by protein phosphorylation. In a temperate climate, low temperature is one of the abiotic stresses that most heavily affect plant growth and productivity. The aim of this study was to determine the role of LHCII antenna complex phosphorylation in the dark-chilling response. The study used an experimental model based on dark-chilling at 4 °C of detached chilling-sensitive (CS) runner bean (Phaseolus coccineus L.) and chilling-tolerant (CT) garden pea (Pisum sativum L.) leaves. This model is well described in the literature as a means to analyse the impact of chilling without any additional effects caused by light. We examined changes in thylakoid membrane protein phosphorylation, interactions between phosphorylated LHCII (P-LHCII) and CP complexes, and their impact on the dynamics of photosystem II (PSII) under dark-chilling conditions. Our results showed that the dark-chilling treatment of CS bean leaves induced a substantial increase in the phosphorylation of LHCII proteins, as well as changes in CP complex composition and their interaction with P-LHCII. PSII photochemical efficiency measurements showed that in bean, PSII is overloaded with light energy, which is not compensated by CP complex rearrangements. By contrast, no significant changes in PSII photochemical efficiency, phosphorylation pattern, or CP complex interactions were observed in CT pea. In conclusion, our results indicate that CT and CS plants respond differently to chilling stress at the level of LHCII phosphorylation, and that the kinetics of LHCII phosphorylation and the interactions of P-LHCII with photosynthetic complexes may be crucial to the chilling stress response.
Acknowledgments: the presented work was financed by the National Science Centre, Poland, grant No. 2016/23/D/NZ3/01276.
Keywords: LHCII, phosphorylation, chilling stress, pea, runner bean
Procedia PDF Downloads 141
9117 Numerical Simulation of Convective and Transport Processes in the Nocturnal Atmospheric Surface Layer
Authors: K. R. Sreenivas, Shaurya Kaushal
Abstract:
After sunset, under calm, clear-sky nocturnal conditions, the aerosol-containing air layer near the surface cools through radiative processes to the upper atmosphere. Due to this cooling, the surface air-layer temperature can fall 2-6 °C below the ground-surface temperature. This unstable convection layer is capped on top by a stable inversion boundary layer. Radiative divergence, along with convection within the surface layer, governs the vertical transport of heat and moisture. Micro-physics in this layer have implications for the occurrence and growth of the fog layer. This particular configuration, featuring a convective mixed layer beneath a stably stratified inversion layer, exemplifies a classic case of penetrative convection. In this study, we conduct numerical simulations of the penetrative convection phenomenon within the nocturnal atmospheric surface layer and elucidate its relevance to the dynamics of fog layers. We employ field and laboratory measurements of aerosol number density to model the strength of the radiative cooling. Our analysis encompasses horizontally averaged vertical profiles of temperature, density, and heat flux. The energetic incursion of air from the mixed layer into the stable inversion layer across the interface results in entrainment and growth of the mixed layer, the modeling of which is the key focus of our investigation. In our research, we ascertain the appropriate length scale to employ in the Richardson number correlation, which allows us to estimate the entrainment rate and model the growth of the mixed layer. Our analysis of the mixed layer and the entrainment zone reveals close alignment with previously reported laboratory experiments on penetrative convection. Additionally, we demonstrate how aerosol number density influences the growth or decay of the mixed layer.
Furthermore, our study suggests that the presence of fog near the ground surface can induce extensive vertical mixing, a phenomenon observed in field experiments.
Keywords: inversion layer, penetrative convection, radiative cooling, fog occurrence
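A Richardson-number entrainment correlation of the kind referred to above is often written as w_e = A·w*/Ri*, with w* the convective velocity scale and Ri* the interfacial Richardson number, and can be integrated for the mixed-layer depth. The sketch below uses this classic laboratory form; the coefficient, buoyancy flux, and buoyancy jump are illustrative assumptions, not values fitted in this work:

```python
# Mixed-layer growth under an entrainment law w_e = A * w_star / Ri,
# the form used in laboratory penetrative-convection studies.
A = 0.2       # entrainment coefficient (assumed)
B = 1.0e-5    # surface buoyancy flux, m^2/s^3 (assumed)
db = 0.05     # buoyancy jump across the inversion, m/s^2 (assumed)

h = 10.0      # initial mixed-layer depth, m
dt = 10.0     # time step, s
for _ in range(int(3600 / dt)):        # integrate one hour of growth
    w_star = (B * h) ** (1.0 / 3.0)    # convective velocity scale
    Ri = db * h / w_star**2            # interfacial Richardson number
    h += A * w_star / Ri * dt          # entrainment: dh/dt = A * w_star / Ri
```

Note that with w*³ = B·h, this particular law gives dh/dt = A·B/Δb, i.e. a growth rate set by the buoyancy flux and the inversion strength.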
Procedia PDF Downloads 73
9116 Thermal Hydraulic Analysis of Sub-Channels of Pressurized Water Reactors with Hexagonal Array: A Numerical Approach
Authors: Md. Asif Ullah, M. A. R. Sarkar
Abstract:
This paper illustrates 2-D and 3-D simulations of the sub-channels of a Pressurized Water Reactor (PWR) with a hexagonal array of fuel rods. At steady state, the temperature of the outer surface of the fuel rod cladding is kept at about 1200°C. The temperature of this isothermal surface is taken as the boundary condition for the simulation. Water at a temperature of 290°C is supplied as coolant to the primary water circuit, which is pressurized up to 157 bar. Turbulent flow of pressurized water is used for heat removal. In the 2-D model, temperature, velocity, pressure, and Nusselt number distributions are simulated in a vertical sectional plane through the sub-channels of a hexagonal fuel rod assembly. Temperature, Nusselt number, and the Y-component of the convective heat flux along a line in this plane near the end of the fuel rods are plotted for different Reynolds numbers. A comparison between the X-component and Y-component of the convective heat flux in this vertical plane is analyzed. A hexagonal fuel rod assembly has three types of sub-channels according to geometrical shape, whose boundary conditions differ as well. In the 3-D model, temperature, velocity, pressure, Nusselt number, and total heat flux magnitude distributions for all three sub-channel types are studied for a suitable Reynolds number. A horizontal sectional plane is taken from each of the three sub-channels to study the temperature, velocity, pressure, Nusselt number, and convective heat flux distributions in it. Greater values of temperature, Nusselt number, and the Y-component of convective heat flux are found for greater Reynolds numbers. The X-component of the convective heat flux is found to be non-zero near the bottom of the fuel rod and zero near its end, indicating that near the outlet, convective heat transfer occurs entirely along the direction of flow. As the length-to-radius ratio of the sub-channels is very high, simulations were performed over a short length of the sub-channels for graphical interface advantage.
For the simulations, the Turbulent Flow (k-ε) module and the Heat Transfer in Fluids (ht) module of COMSOL MULTIPHYSICS 5.0 are used.
Keywords: sub-channels, Reynolds number, Nusselt number, convective heat transfer
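As a rough sanity check on simulated Nusselt numbers in such sub-channel studies (not a method used in the paper), the classic Dittus-Boelter correlation for fully developed turbulent forced convection is often consulted. A minimal sketch with assumed, PWR-like operating values:

```python
# Dittus-Boelter correlation, Nu = 0.023 * Re^0.8 * Pr^0.4 (fluid being
# heated); a standard reference value for turbulent channel flow.
def dittus_boelter(re, pr):
    """Nusselt number, valid roughly for Re > 1e4 and 0.7 < Pr < 160."""
    return 0.023 * re**0.8 * pr**0.4

# Illustrative sub-channel conditions (assumed, not from the paper):
nu = dittus_boelter(re=5.0e5, pr=1.0)   # roughly 8e2 for these inputs
```

For rod-bundle geometries, corrections depending on the pitch-to-diameter ratio are usually applied on top of such circular-tube correlations.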
Procedia PDF Downloads 361
9115 Determination of Stress-Strain Curve of Duplex Stainless Steel Welds
Authors: Carolina Payares-Asprino
Abstract:
Dual-phase duplex stainless steel, comprised of ferrite and austenite, has shown high strength and corrosion resistance in many aggressive environments. Joining duplex alloys is challenging due to several embrittling precipitates and metallurgical changes during the welding process. The welding parameters strongly influence the quality of a weld joint. Therefore, it is necessary to quantify the weld bead's integral properties as a function of the welding parameters, especially when part of the weld bead is removed through machining for aesthetic reasons or to couple the elements in the in-service structure. The present study uses existing stress-strain models to predict the stress-strain curves for duplex stainless-steel welds under different welding conditions. Having mathematical expressions that predict the shape of the stress-strain curve is advantageous, since it reduces the experimental work involved in tensile testing. In analysis and design, such stress-strain modeling saves time by being integrated into calculation tools, such as finite element program codes. The elastic and plastic zones of the curve can be defined by specific parameters, generating expressions that simulate the curve with great precision. Empirical equations exist that describe stress-strain curves; however, they refer only to the base stainless steel, not to material that has undergone the welding process. Extending them to welds is a significant contribution to the applications of duplex stainless steel. For this study, a 3x3 matrix with low, medium, and high levels of each welding parameter was applied, giving a total of 27 welded plates. Two tensile specimens were manufactured from each welded plate, resulting in 54 tensile specimens for testing.
When evaluating the four models used to predict the stress-strain curve in the welded specimens, only one model (Rasmussen) presented a good correlation in predicting the stress-strain curve. Keywords: duplex stainless steels, modeling, stress-strain curve, tensile test, welding
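The Rasmussen model referenced above builds on the Ramberg-Osgood expression for stainless steel. As a minimal illustration (not the study's calibrated weld model), the sketch below evaluates the basic Ramberg-Osgood strain at a given stress; the elastic modulus, 0.2% proof stress, and strain-hardening exponent are placeholder values, not values from the paper.

```python
def ramberg_osgood_strain(sigma, E=200000.0, sigma_02=530.0, n=6.0):
    """Total strain at stress sigma (MPa) from the Ramberg-Osgood expression,
    the basis of Rasmussen's full-range stress-strain model.
    E: elastic modulus (MPa), sigma_02: 0.2% proof stress (MPa),
    n: strain-hardening exponent -- all illustrative values."""
    return sigma / E + 0.002 * (sigma / sigma_02) ** n
```

At the proof stress the plastic term contributes exactly the 0.2% offset strain, which is a quick sanity check on any implementation.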
Procedia PDF Downloads 168
9114 Numerical Analysis of Gas-Particle Mixtures through Pipelines
Authors: G. Judakova, M. Bause
Abstract:
The ability to model and numerically simulate natural gas flow in pipelines has become highly important for the design of pipeline systems. The understanding of the formation of hydrate particles and their dynamical behavior is of particular interest, since these processes govern the operation properties of the systems and are responsible for system failures by clogging of the pipelines under certain conditions. Mathematically, natural gas flow can be described by multiphase flow models. Using the two-fluid modeling approach, the gas phase is modeled by the compressible Euler equations and the particle phase is modeled by the pressureless Euler equations. The numerical simulation of compressible multiphase flows is an important research topic. It is well known that for nonlinear fluxes, even for smooth initial data, discontinuities in the solution are likely to occur in finite time. They are called shock waves or contact discontinuities. For hyperbolic and singularly perturbed parabolic equations, the standard application of the Galerkin finite element method (FEM) leads to spurious oscillations (e.g., the Gibbs phenomenon). In our approach, we use a stabilized FEM, the streamline upwind Petrov-Galerkin (SUPG) method, in which artificial diffusion acting only in the direction of the streamlines is added, together with a special treatment of the boundary conditions in the inviscid convective terms. Numerical experiments show that the numerical solution obtained and stabilized by SUPG captures discontinuities or steep gradients of the exact solution in layers. However, within these layers the approximate solution may still exhibit overshoots or undershoots. To suitably reduce these artifacts we add a discontinuity capturing or shock capturing term.
The performance properties of our numerical scheme are illustrated for a two-phase flow problem. Keywords: two-phase flow, gas-particle mixture, inviscid two-fluid model, Euler equations, finite element method, streamline upwind Petrov-Galerkin, shock capturing
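The SUPG stabilization described above hinges on an artificial-diffusion parameter acting along streamlines. A minimal sketch, assuming the classic one-dimensional "optimal" coth formula for the element stabilization parameter (a textbook choice, not necessarily the authors' exact multidimensional definition):

```python
import math

def supg_tau(a, h, kappa):
    """SUPG stabilization parameter for a 1D advection-diffusion element.
    a: advection speed, h: element size, kappa: diffusivity.
    Classic 'optimal' formula: tau = h/(2|a|) * (coth(Pe) - 1/Pe)."""
    pe = abs(a) * h / (2.0 * kappa)   # element Peclet number
    if pe < 1e-10:                    # diffusion-dominated limit: no upwinding
        return 0.0
    return h / (2.0 * abs(a)) * (1.0 / math.tanh(pe) - 1.0 / pe)
```

In the advection-dominated limit (large Peclet number) tau tends to h/(2|a|), i.e. full upwinding; in the diffusive limit it vanishes and the standard Galerkin scheme is recovered.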
Procedia PDF Downloads 312
9113 An Inventory Management Model to Manage the Stock Level for Irregular Demand Items
Authors: Riccardo Patriarca, Giulio Di Gravio, Francesco Costantino, Massimo Tronci
Abstract:
An accurate inventory management policy plays a crucial role in several high-availability sectors. In these sectors, due to the high cost of spares and backorders, an (S-1, S) replenishment policy is necessary for high-availability items. Under this policy, a substitute item is shipped anytime the inventory level decreases by one. The policy can be modelled following the Multi-Echelon Technique for Recoverable Item Control (METRIC). METRIC is a system-based technique that allows defining the optimum stock level in a multi-echelon network, adopting measures in line with the decision-maker’s perspective. METRIC defines an availability-cost function with inventory costs and required service levels, using as inputs data about the demand trend, the supplying and maintenance characteristics of the network, and the budget/availability constraints. The traditional METRIC relies on the hypothesis that a Poisson distribution represents well the demand distribution of items with a low failure rate. In this research, we explore the effects of using a Poisson distribution to model the demand of low-failure-rate items characterized by an irregular demand trend. This characteristic of demand is not included in the traditional METRIC formulation, leading to the need to revise it. Using the CV (Coefficient of Variation) and ADI (Average inter-Demand Interval) classification, we define the inherent flaws of the Poisson-based METRIC for irregular demand items and propose an ad hoc distribution that better fits irregular demands. This distribution allows defining proper stock levels to reduce stocking and backorder costs due to the high irregularities in the demand trend. A case study in the aviation domain clarifies the benefits of this innovative METRIC approach. Keywords: METRIC, inventory management, irregular demand, spare parts
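The availability-cost trade-off in METRIC rests on the expected-backorder function of an (S-1, S) item with Poisson pipeline demand. A minimal sketch of that standard building block, with illustrative parameters rather than any data from the case study:

```python
import math

def poisson_pmf(k, lam):
    """Probability of k pipeline demands when demand is Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def expected_backorders(s, lam):
    """Expected backorders EBO(s) for an (S-1, S) policy with Poisson
    pipeline demand of mean lam: sum over demand levels above the stock
    level s of the shortfall weighted by its probability."""
    upper = int(lam + 10.0 * math.sqrt(lam) + 50)  # truncation of the tail
    return sum((k - s) * poisson_pmf(k, lam) for k in range(s + 1, upper + 1))
```

With zero stock, the expected backorders equal the mean pipeline demand, and EBO(s) decreases as the stock level rises, which is exactly the curve METRIC trades off against inventory cost.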
Procedia PDF Downloads 349
9112 Statistical Analysis to Compare between Smart City and Traditional Housing
Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh
Abstract:
Smart cities play important roles in real life. Integration and automation between different features of modern cities and information technologies improve smart city efficiency, energy management, human and equipment resource management, quality of life, and utilization of resources for the customers. One of the difficulties along this path is the use of, and the interfaces and links between, software, hardware, and other IT technologies to develop and optimize processes in various business fields such as construction, supply chain management, and transportation, in parallel with cost-effectiveness and resource-reduction impacts. Smart cities are also intended to play a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological matters. Energy management is one of the most important matters within smart houses in smart cities and communities, because of the sensitivity of energy systems, the reduction of energy wastage, and the maximization of the utilization of the required energy. In particular, the consumption of energy in smart houses is considerable in the economic balance and energy management of a smart city, as it can yield significant energy savings and wastage reduction. This research paper develops the features and concept of the smart city in terms of overall efficiency through various effective variables. The selected variables and observations are analyzed through data analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study to improve the overall efficiency of the smart city by increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, smart meters, and other major elements, by interfacing between software and hardware devices as well as IT technologies.
A second objective is to enhance energy management through energy saving within the smart house via efficient variables. The main objective of the smart city and smart houses is to reduce energy consumption and increase energy efficiency through the selected variables, with a comfortable and harmless atmosphere for the customers within the smart city, in combination with control over the energy consumption in the smart house using developed IT technologies. Initially, a comparison between traditional housing and smart city samples is conducted to indicate the more efficient system. Moreover, the main variables involved in measuring the overall efficiency of the system are analyzed through various processes to identify and prioritize the variables in accordance with their influence over the model. The result analysis of this model can be used for comparison and benchmarking with the traditional lifestyle to demonstrate the privileges of smart cities. Furthermore, due to expensive natural resources and their expected shortage in the near future, the insufficient state of research in the region, and the available potential due to climate and governmental vision, the results and analysis of this study can be used as a key indicator to select the most effective variables or devices during the construction and design phase. Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving
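The abstract does not name its test statistic, so as a hedged illustration of the kind of two-sample comparison it describes, the sketch below computes Welch's t statistic on synthetic monthly energy-use figures; all numbers are invented for illustration.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic (unequal variances) for comparing the mean of
    two independent samples, e.g. monthly energy use of smart vs.
    traditional dwellings."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)

smart = [310, 295, 330, 305, 290, 315]         # hypothetical kWh/month
traditional = [420, 455, 430, 445, 410, 460]   # hypothetical kWh/month
t_stat = welch_t(smart, traditional)
```

A strongly negative statistic would indicate lower mean consumption in the smart-house sample; a p-value would then be read from Student's t distribution with Welch-Satterthwaite degrees of freedom.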
Procedia PDF Downloads 114
9111 Impact of Displacement Durations and Monetary Costs on the Labour Market within a City Consisting of Four Areas: A Theoretical Approach
Authors: Aboulkacem El Mehdi
Abstract:
We develop a theoretical model at the crossroads of labour and urban economics that explains the mechanism through which the duration of home-workplace trips and their monetary costs impact labour demand and supply in a spatially scattered labour market, and how they are impacted by a change in passenger transport infrastructures and services. The spatial disconnection between home and job opportunities is referred to as the spatial mismatch hypothesis (SMH). Its harmful impact on employment has been the subject of numerous theoretical propositions. However, all the theoretical models proposed so far are patterned on the American context, which is particular in that it is marked by racial discrimination against Blacks in the housing and labour markets. It is therefore natural that most of these models are developed to reproduce a steady state in which agents carry out their economic activities in a mono-centric city where most unskilled jobs are created in the suburbs, far from the Black population dwelling in the city centre, generating high unemployment rates for Blacks, while the White population resides in the suburbs and has a low unemployment rate. Our model does not rely on any racial discrimination and does not aim at reproducing a steady state in which these stylized facts are replicated; it takes the main principle of the SMH, the spatial disconnection between homes and workplaces, as a starting point. One of the innovative aspects of the model consists in dealing with an SMH-related issue at an aggregate level. We link the parameters of the passenger transport system to employment in the whole area of a city. We consider here a city that consists of four areas: two of them are residential areas with unemployed workers, and the other two host firms looking for labour force.
The workers compare the indirect utility of working in each area with the utility of unemployment and choose between submitting an application for the job that generates the highest indirect utility or not submitting. This trade-off takes into account the monetary and time expenditures generated by the trips between the residential areas and the working areas. Each of these expenditures is clearly and explicitly formulated so that the impact of each can be studied separately from the other. The first findings show that unemployed workers living in an area benefiting from good transport infrastructures and services have a better chance of preferring activity to unemployment and are more likely to supply a higher 'quantity' of labour than those who live in an area where the transport infrastructures and services are poorer. We also show that the firms located in the most accessible area receive many more applications and are more likely to hire the workers who provide the highest quantity of labour than the firms located in the less accessible area. Currently, we are working on the matching process between firms and job seekers and on how the equilibrium between labour demand and supply occurs. Keywords: labour market, passenger transport infrastructure, spatial mismatch hypothesis, urban economics
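The worker's decision rule described above can be sketched as a comparison of indirect utilities net of trip costs. The functional form and all parameters below are illustrative assumptions, not the paper's model:

```python
def indirect_utility(wage, trip_fare, trip_minutes, value_of_time):
    """Net utility of working in a given area: wage minus the monetary trip
    cost and the money value of trip time (hypothetical linear form)."""
    return wage - trip_fare - value_of_time * trip_minutes

def chooses_to_apply(job_offers, unemployment_utility, value_of_time):
    """A worker applies to the area yielding the highest indirect utility,
    but only if it beats the utility of staying unemployed.
    job_offers: list of (wage, trip_fare, trip_minutes) tuples."""
    best = max(indirect_utility(w, f, m, value_of_time)
               for w, f, m in job_offers)
    return best > unemployment_utility
```

Raising the fare or trip duration of an area lowers its indirect utility, so better transport infrastructure shifts workers from unemployment into application, mirroring the model's first findings.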
Procedia PDF Downloads 294
9110 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells
Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez
Abstract:
Intercellular communication is a necessary condition for cellular functions and allows a group of cells to survive as a population. Throughout this interaction, the cells work in a coordinated and collaborative way which facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications, being also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems has found valuable support in a wide range of modeling approaches, covering a spectrum that ranges from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical ones to computational ones. The study of cellular and molecular processes in cancer has also found valuable support in different simulation tools that, covering a spectrum as mentioned above, have allowed the in silico experimentation of this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using the Cellulat bioinformatics tool, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie’s algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way.
The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way proposed key molecules that may prevent the arrival of malignant signals to the cells that surround the tumor cells. In this manner, we could identify the significant role of the Wnt/β-catenin signaling pathway in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the cells that surround a cancerous cell from being transformed. Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation
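Gillespie's algorithm, named above as one of Cellulat's two key elements, can be illustrated on the simplest possible reaction network. The sketch below applies the direct method to a single first-order conversion A → B; it is a generic textbook example, not Cellulat's implementation:

```python
import math
import random

def gillespie_decay(n0, k, t_end, seed=1):
    """Gillespie direct method for the single reaction A -> B with rate
    constant k: draw exponential waiting times from the total propensity
    and fire one reaction at a time, until t_end. Returns remaining A."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while n > 0:
        a0 = k * n                                # total propensity
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if t > t_end:
            break
        n -= 1                                    # fire A -> B once
    return n
```

For networks with several reactions, the direct method additionally picks which reaction fires in proportion to its share of the total propensity; the single-reaction case above skips that step.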
Procedia PDF Downloads 251
9109 Achieving Process Stability through Automation and Process Optimization at H Blast Furnace Tata Steel, Jamshedpur
Authors: Krishnendu Mukhopadhyay, Subhashis Kundu, Mayank Tiwari, Sameeran Pani, Padmapal, Uttam Singh
Abstract:
The blast furnace is a counter-current process in which burden descends from the top and hot gases ascend from the bottom, chemically reducing iron oxides into liquid hot metal. One of the major problems of blast furnace operation is erratic burden descent inside the furnace. Sometimes this problem is so acute that burden descent stops, resulting in hanging and instability of the furnace. This problem is very frequent in blast furnaces worldwide and results in huge production losses. The situation becomes more adverse when blast furnaces are operated at a low coke rate and high coal injection rate with adverse raw materials like high-alumina ore and high-ash coke. Over the last three years, H Blast Furnace at Tata Steel was able to reduce the coke rate from 450 kg/thm to 350 kg/thm with an increase in coal injection to 200 kg/thm, figures that are close to world benchmarks and that expand profitability. To sustain this regime, elimination of blast furnace irregularities like hanging, channeling, and scaffolding is essential. In this paper, the sustaining of a zero-hanging spell for three consecutive years with low coke rate operation, through improvement in burden characteristics, burden distribution, changes in slag regime, casting practices, and adequate automation of the furnace operation, is illustrated. Models have been created to improve understanding of the blast furnace process. A model has been developed to maintain slag viscosity in the desired range and thus attain proper burden permeability. A channeling prediction model has also been developed to recognize channeling symptoms so that early actions can be initiated. The models have helped to a great extent in standardizing the control decisions of operators at H Blast Furnace of Tata Steel, Jamshedpur, and thus achieving process stability for the last three years. Keywords: hanging, channeling, blast furnace, coke
Procedia PDF Downloads 197
9108 Law, Resistance, and Development in Georgia: A Case of Namakhvani HPP
Authors: Konstantine Eristavi
Abstract:
The paper will contribute to the discussion on the pitfalls, limits, and possibilities of legal and rights discourse in opposing large infrastructural projects in the context of neoliberal globalisation. To this end, the paper will analyse the struggle against the Namakhvani HPP project in Georgia. The latter has been hailed by the government as one of the largest energy projects in the history of the country, with an enormous potential impact on energy security, energy independence, economic growth, and development. This takes place against the backdrop of decades of a market-led, or neoliberal, model of development in Georgia, characterised by structural adjustment, deregulation, privatisation, and a laissez-faire approach to foreign investment. In this context, the Georgian state vies with other low- and middle-income countries for foreign capital by offering potential investors, on the one hand, exemptions from social and environmental regulations and, on the other hand, huge legal concessions and safeguards, thereby participating in what is often called a “race to the bottom.” The Namakhvani project is a good example of this. At every stage, the project has been marred by violations of laws and regulations concerning transparency, participation, social and environmental protections, and so on. Moreover, the leaked contract between the state and the developer reveals contractual safeguards which effectively insulate the investment, throughout the duration of the contract, from changes in national law that might adversely affect investors’ rights and returns. These clauses, aimed at preserving the investors' economic position, place the contract above national law in many respects and even conflict with fundamental constitutional rights.
In response to the perceived deficiencies of the project, one of the largest and most diverse social movements in the history of post-Soviet Georgia has been assembled, consisting of the local population, conservative and leftist groups, human rights and environmental NGOs, etc. Crucially, the resistance movement is actively using legal tools. In order to analyse both the limitations and possibilities of legal discourse, the paper will distinguish between internal and immanent critiques. Law as internal critique, in the context of the struggles around the Namakhvani project, while potentially fruitful in hindering the project, risks neglecting and reproducing those factors, e.g., the particular model of development, that made such contractual concessions and safeguards and the concomitant rights violations possible in the first place. On the other hand, the use of rights and law as part of an immanent critique articulates a certain incapacity on the part of the addressee government to uphold existing laws and rights due to structural factors, hence pointing to the need for fundamental change. This 'ruptural' form of legal discourse that the movement employs makes it possible to go beyond the discussion around breaches of law and enables a critical deliberation on the development model within which these violations and extraordinary contractual safeguards become necessary. It will be argued that it is this form of immanent critique that expresses the emancipatory potential of legal discourse. Keywords: law, resistance, development, rights
Procedia PDF Downloads 81
9107 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning
Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic
Abstract:
Following the EU Directive on Energy Efficiency, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These provisions have been implemented in the member states’ legal frameworks; Croatia is one of these states. The directive allows installation of both heat metering devices and heat cost allocators. Mainly due to bad communication and PR, a false image was created among the general public that heat cost allocators are devices that save energy. Although this notion is wrong, the aim of this work is to develop a model that would precisely express the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years machine learning has gained wider application in various fields, as it has proven to give good results in cases where large amounts of data are to be processed with the aim of recognizing patterns and correlations among the relevant parameters, as well as in cases where the problem is too complex for human intelligence to solve. A particular machine learning method, the decision tree method, has demonstrated an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms will be used to isolate the sole impact of the installation of heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis will be given to regression analysis, logistic regression, support vector machines, decision trees, and the random forest method. Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, decision trees, random forest method
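As a toy illustration of the decision tree method mentioned above (not the authors' model or data), the sketch below fits the simplest possible regression tree, a depth-1 stump, by choosing the split threshold on a single feature that minimizes the summed squared error of the two leaf means:

```python
def best_stump_threshold(xs, ys):
    """Fit a depth-1 regression tree (decision stump) on one feature:
    return the split threshold minimizing the total squared error when
    each side of the split is predicted by its leaf mean."""
    def sse(vals):
        # sum of squared errors around the mean of a leaf
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best_err, best_thr = None, None
    for thr in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= thr]
        right = [y for x, y in zip(xs, ys) if x > thr]
        err = sse(left) + sse(right)
        if best_err is None or err < best_err:
            best_err, best_thr = err, thr
    return best_thr
```

A full decision tree applies this split search recursively over all features; random forests then average many such trees grown on bootstrap samples.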
Procedia PDF Downloads 252
9106 Women’s Financial Literacy and Family Financial Fragility
Authors: Pepur Sandra, Bulog Ivana, Rimac Smiljanić Ana
Abstract:
During the COVID-19 pandemic, stress and family financial fragility rose worldwide. Economic and health uncertainty created new pressure on the everyday life of families. Work from home, homeschooling, and care for other family members caused an increase in unpaid work and generated a new intrahousehold division of labour. As many times before, women have borne the greater burden. This paper analyzes family stress and finances during the COVID-19 pandemic. We propose that women's inclusion in paid and unpaid work and their financial literacy influence family finances. We build our assumptions on the two theories that explain intrahousehold family decision-making: the traditional and bargaining models. The traditional model assumes that partners specialize in their roles in line with time availability. Consequently, partners less engaged in paid work will spend more time on domestic activities and vice versa. According to the bargaining model, each individual has their own preferences, and the one with more household bargaining power, e.g., higher income, higher level of education, better employment, or higher financial knowledge, is likely to make family decisions and avoid unpaid work. Our results are based on an anonymous and voluntary survey of 869 valid responses from women older than 18, conducted in Croatia at the beginning of 2021. We found that families who had experienced delays in settling current obligations before the pandemic were in a worse financial situation during the pandemic. However, all families reported problems settling current obligations during pandemic times, regardless of their financial condition before the crisis. Women from families with financial issues reported higher levels of family and personal stress during the pandemic. Furthermore, we provide evidence that more unpaid work by women worsens the family's financial fragility during the pandemic.
In addition, families where women have better financial literacy and are more financially independent cope better with their finances both before and during pandemics. Keywords: family financial fragility, stress, unpaid work, women's financial literacy
Procedia PDF Downloads 81
9105 Aerial Survey and 3D Scanning Technology Applied to the Survey of Cultural Heritage of Su-Paiwan, an Aboriginal Settlement, Taiwan
Authors: April Hueimin Lu, Liangj-Ju Yao, Jun-Tin Lin, Susan Siru Liu
Abstract:
This paper discusses the application of aerial survey technology and 3D laser scanning technology in the surveying and mapping of the settlements and slate houses of the old Taiwanese aborigines. The relics of the old Taiwanese aborigines, with thousands of years of history, are widely distributed in the deep mountains of Taiwan, over a vast area with inconvenient transportation. When constructing the basic data of cultural assets, it is necessary to apply new technology to carry out efficient and accurate settlement mapping work. In this paper, taking old Paiwan as an example, an aerial survey of the settlement of about 5 hectares and a 3D laser scan of a slate house were carried out. The obtained orthophoto image was used as an important basis for drawing the settlement map. The 3D landscape data of topography and buildings derived from the aerial survey are important for subsequent preservation planning, while the 3D scan of the building provides a more detailed record of architectural forms and materials. The 3D settlement data from the aerial survey can be further applied to a 3D virtual model and animation of the settlement for virtual presentation. The information from the 3D scanning of the slate house can also be used for digital archives and data queries through network resources. The results of this study show that, in large-scale settlement surveys, aerial survey technology can be used to capture the topography of settlements, with buildings and spatial information of the landscape, while 3D scanning serves for small-scale records of individual buildings.
This application of 3D technology greatly increases the efficiency and accuracy of the survey and mapping of aboriginal settlements and is of great help for further preservation planning and rejuvenation of aboriginal cultural heritage. Keywords: aerial survey, 3D scanning, aboriginal settlement, settlement architecture cluster, ecological landscape area, old Paiwan settlements, slate house, photogrammetry, structure from motion (SfM), multi-view stereo (MVS), point cloud, SIFT, DSM, 3D model
Procedia PDF Downloads 174
9104 Optimized Passive Heating for Multifamily Dwellings
Authors: Joseph Bostick
Abstract:
A method is presented for decreasing the heating load of HVAC systems in a single-dwelling model of a multifamily building by controlling movable insulation through the optimization of flux, time, surface incident solar radiation, and temperature thresholds. Simulations are completed using a co-simulation between EnergyPlus and MATLAB as an optimization tool to find optimal control thresholds. Optimization of the control thresholds leads to a significant decrease in total heating energy expenditure. Keywords: EnergyPlus, MATLAB, simulation, energy efficiency
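The threshold-based control of movable insulation can be sketched as a simple rule. The thresholds and schedule below are illustrative assumptions; the paper obtains the optimal values via the EnergyPlus/MATLAB co-simulation:

```python
def insulation_retracted(flux_w_m2, indoor_temp_c, hour,
                         flux_threshold=150.0, temp_threshold=21.0,
                         day_start=8, day_end=18):
    """Rule-based control of movable insulation: retract (open) it to
    admit solar gain when incident flux is high enough, the room is
    below its temperature threshold, and it is daytime; otherwise keep
    it in place to retain heat. All thresholds are illustrative."""
    daytime = day_start <= hour < day_end
    return (daytime
            and flux_w_m2 >= flux_threshold
            and indoor_temp_c < temp_threshold)
```

In the co-simulation loop, EnergyPlus would supply the flux and zone temperature at each timestep, and the optimizer would search over the threshold values for the combination minimizing annual heating energy.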
Procedia PDF Downloads 177
9103 UEMSD Risk Identification: Case Study
Authors: K. Sekulová, M. Šimon
Abstract:
The article demonstrates, through a case study, how it is possible to identify MSD risk. It is based on a risk identification model, developed in a dissertation, of the formation of occupational diseases in relation to work activity, which determines what risks can endanger workers who are exposed to specific risk factors. The model is evaluated based on statistical calculations. These risk factors are the main cause of upper-extremity musculoskeletal disorders. Keywords: case study, upper-extremity musculoskeletal disorders, ergonomics, risk identification
Procedia PDF Downloads 503
9102 Study on the Influence of Different Lengths of Tunnel High Temperature Zones on Train Aerodynamic Resistance
Authors: Chong Hu, Tiantian Wang, Zhe Li, Ourui Huang, Yichen Pan
Abstract:
When a train runs in a high geothermal tunnel, changes in the temperature field cause disturbances in the propagation and superposition of pressure waves in the tunnel, which in turn affect the aerodynamic resistance of the train. The aim of this paper is to investigate the effect of changes in the length of the high-temperature zone of the tunnel on the aerodynamic resistance of the train, clarifying the evolution mechanism of the aerodynamic resistance of trains in tunnels with high ground temperatures. Firstly, moving-model tests of trains passing through wall-heated tunnels were conducted to verify the reliability of the numerical method in this paper. Subsequently, based on the three-dimensional unsteady compressible RANS method and the standard k-ε two-equation turbulence model, the variation of the average aerodynamic resistance with different high-temperature zone lengths was analyzed, and the influence of frictional resistance and pressure difference resistance on total resistance at different times was discussed. The results show that as the length of the high-temperature zone LH increases, the average aerodynamic resistance of a train running in a tunnel gradually decreases; when LH = 330 m, the aerodynamic resistance can be reduced by 5.7%. At the moment of maximum resistance, the total resistance, pressure difference resistance, and friction resistance all decrease gradually with the increase of LH and then remain basically unchanged. At the moment of minimum resistance, with the increase of LH, the total resistance first increases and then slowly decreases; the pressure difference resistance first increases and then remains unchanged, while the friction resistance first remains unchanged and then gradually decreases, and the ratio of the pressure difference resistance to the total resistance gradually increases with the increase of LH.
The results of this paper can provide guidance for scholars investigating the mechanism of aerodynamic resistance change of trains in high geothermal environments, as well as a new way of thinking about resistance reduction in non-high-geothermal tunnels. Keywords: high-speed trains, aerodynamic resistance, high ground temperature, tunnel
Procedia PDF Downloads 70
9101 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model
Authors: Danjuma Bawa
Abstract:
This paper aims at exploring the capabilities of the Location-Allocation model in complementing the strides of existing physical planning models in the location and distribution of facilities for regional consumption. The paper is designed to provide a blueprint to the Nigerian government and other donor agencies, especially the Fertilizer Distribution Initiative (FDI) of the federal government, for the revitalization of terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study shows how the Location-Allocation Model (L-AM) alongside Central Place Theory (CPT) was applied in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements; exploit their physical and economic interrelationships; and explore their hierarchical and opportunistic influences. The study was purely spatial qualitative research which largely used secondary data such as the spatial location and distribution of settlements, population figures of settlements, the network of roads linking them, and other landform features. These were sourced from government ministries and open-source consortia. GIS was used as a tool for processing and analyzing such spatial features within the framework of CPT and L-AM to produce a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. A population threshold was used as a yardstick for selecting suitable settlements that could stand as service centers to other hinterlands; this was accomplished using the query syntax in ArcMap. The ArcGIS Network Analyst was used to conduct the location-allocation analysis for apportioning groups of settlements around such service centers within a given threshold distance.
Most of the techniques and models previously used by utility planners have centered on straight-line (Euclidean) distance to settlements. Such models neglect impedance cutoffs and the routing capabilities of networks. CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa State. Four (4) existing depots in the study area were identified, and 20 more depots in 20 villages were proposed using suitability analysis. Of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers within a 2 km impedance cutoff. This study complements the efforts of the federal government of Nigeria by providing a blueprint for ensuring proper distribution of these public goods and bringing succor to the terrorism-ravaged populace. At the same time, it will help boost agricultural activities, thereby lowering food shortages and raising per capita income as espoused by the government.
Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics
Procedia PDF Downloads 148
9100 Numerical Investigation of the Influence on Buckling Behaviour Due to Different Launching Bearings
Authors: Nadine Maier, Martin Mensinger, Enea Tallushi
Abstract:
In general, two types of launching bearings are used today in the construction of large steel and steel-concrete composite bridges: sliding rockers and systems with hydraulic bearings. The advantages and disadvantages of the respective systems are under discussion. During incremental launching, the center of the webs of the superstructure is not perfectly in line with the center of the launching bearings due to unavoidable tolerances, which may influence the buckling behavior of the web plates. These imperfections are not considered in the current design against plate buckling according to DIN EN 1993-1-5. It is therefore investigated whether the design rules have to take into account eccentricities that occur during incremental launching, and whether this depends on the respective launching bearing. To this end, large-scale buckling tests were carried out at the Technical University of Munich on longitudinally stiffened plates under biaxial stresses, with the two different types of launching bearings and eccentric load introduction. Based on the experimental results, a numerical model was validated. Currently, we are evaluating different parameters for both types of launching bearings, such as load introduction length, load eccentricity, the distance between longitudinal stiffeners, the position of the rotation point of the spherical bearing used within the hydraulic bearings, web and flange thickness, and imperfections. The imperfection depends on the geometry of the buckling field and on whether local or global buckling occurs. This, as well as the mesh size, is taken into account in the numerical calculations of the parametric study. As a geometric imperfection, the scaled first buckling mode is applied. A bilinear material curve is used, so that a GMNIA (geometrically and materially nonlinear analysis with imperfections) is performed to determine the load capacity.
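The parametric study described above can be organized as a simple parameter-grid expansion. The following is a minimal Python sketch of that bookkeeping only; the parameter names and value grids are illustrative, and the actual generation of the APDL model input is omitted.

```python
# Hypothetical sketch of the parametric-sweep bookkeeping described above.
# Parameter names and value grids are illustrative; the real study drives
# an ANSYS/APDL model with its own geometry, imperfection, and load inputs.
from itertools import product

parameters = {
    "load_eccentricity_mm": [0, 5, 10],
    "stiffener_spacing_mm": [500, 750],
    "web_thickness_mm": [10, 12],
}

def generate_cases(params):
    """Expand the parameter grid into one dict per analysis run."""
    names = list(params)
    return [dict(zip(names, combo)) for combo in product(*params.values())]

cases = generate_cases(parameters)
print(len(cases))  # 3 * 2 * 2 = 12 runs
```

Each resulting dict would parameterize one GMNIA run, which is what makes it possible to evaluate a large spectrum of parameters in a short time while avoiding manual input errors.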
Stresses and displacements are evaluated in different directions, and specific stress ratios are determined at the critical points of the plate at the last converging load step. To evaluate the introduction of the transverse load, the transverse stress concentration is plotted along a defined longitudinal section of the web. In the same way, the rotation of the flange is evaluated in order to show the influence of the different degrees of freedom of the launching bearings under eccentric load introduction and to allow an assessment for the case relevant in practice. The input and output are automated and depend on the given parameters, so the model can be adapted to different geometric dimensions and load conditions. The programming is done with APDL and Python code. This allows more parameters to be evaluated and compared faster while avoiding input and output errors. It is therefore possible to evaluate a large spectrum of parameters in a short time, which allows a practical evaluation of different parameters for buckling behavior. This paper presents the results of the tests as well as the validation and parameterization of the numerical model, and shows the first influences on buckling behavior under eccentric and multi-axial load introduction.
Keywords: buckling behavior, eccentric load introduction, incremental launching, large scale buckling tests, multi axial stress states, parametric numerical modelling
Procedia PDF Downloads 109
9099 Intermodal Strategies for Redistribution of Agrifood Products in the EU: The Case of Vegetable Supply Chain from Southeast of Spain
Authors: Juan C. Pérez-Mesa, Emilio Galdeano-Gómez, Jerónimo De Burgos-Jiménez, José F. Bienvenido-Bárcena, José F. Jiménez-Guerrero
Abstract:
Environmental costs and road congestion resulting from product distribution in Europe have led to the creation of various programs and studies seeking to reduce these negative impacts. In this regard, among other institutions, the European Commission (EC) has in recent years designed plans promoting a more sustainable transportation model, attempting ultimately to shift traffic from road to sea by using intermodality to rebalance the model. This issue proves especially relevant in supply chains from peripheral areas of the continent, where the supply of certain agrifood products is high. In such cases, the most difficult challenge is managing perishable goods. This study focuses on new approaches that strengthen the modal shift as well as the reduction of externalities. The problem is analyzed by attempting to promote an intermodal system (truck and short sea shipping) for transport, taking as a point of reference highly perishable products (vegetables) exported from southeast Spain, the leading supplier to Europe. Methodologically, this paper seeks to contribute to the literature by proposing a different and complementary approach to comparing intermodal transport with the road-only alternative. For this purpose, multicriteria decision-making is incorporated into a p-median model (P-M) adapted to the transport of perishables and to a shipping-mode selection problem, which must consider different variables: transit cost (including externalities), time, and frequency (including agile response time). This scheme avoids bias in decision-making processes. The results show that the influence of externalities as drivers of the modal shift is reduced when transit time is introduced as a decision variable. These findings confirm that general strategies, such as those of the EC, based on environmental benefits lose their capacity for implementation when applied to complex circumstances.
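The effect described above, where adding transit time as a criterion weakens the pull of externalities toward intermodality, can be illustrated with a toy weighted-sum comparison. This is a hypothetical sketch only: the criteria scores and weights below are invented for illustration and are not the paper's estimates or its actual p-median formulation.

```python
# Hypothetical sketch of the multicriteria mode comparison described above.
# All scores are normalized to [0, 1] (lower is better) and are illustrative.

def weighted_score(option, weights):
    """Weighted sum over the decision criteria; lower is better."""
    return sum(weights[k] * option[k] for k in weights)

# Road: cheaper in time, worse in cost + externalities; intermodal: the reverse.
road = {"cost_incl_externalities": 0.9, "transit_time": 0.4, "frequency_gap": 0.2}
intermodal = {"cost_incl_externalities": 0.6, "transit_time": 0.8, "frequency_gap": 0.5}

# When transit time dominates (perishables), road wins despite externalities.
w_time = {"cost_incl_externalities": 0.2, "transit_time": 0.6, "frequency_gap": 0.2}
# When environmental cost dominates, intermodal wins.
w_env = {"cost_incl_externalities": 0.7, "transit_time": 0.2, "frequency_gap": 0.1}

for w, label in [(w_time, "time-weighted"), (w_env, "externality-weighted")]:
    best = "road" if weighted_score(road, w) < weighted_score(intermodal, w) else "intermodal"
    print(label, "->", best)
```

Under the time-dominated weights the road option scores better, while under the externality-dominated weights intermodal does, which mirrors the finding that environmental arguments alone lose traction once perishability makes transit time a decision variable.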
In general, the different estimations reveal that, in the case of perishables, intermodality would be a viable but secondary option only for very specific destinations (for example, Hamburg and nearby locations, the area of influence of London, Paris, and the Netherlands). Based on this framework, the general outlook on this subject should be modified. Perhaps governments should promote specific business strategies based on new trends in the supply chain, not only on the reduction of externalities, and find new approaches that strengthen the modal shift. One possible option is to redefine ports, conceptualizing them as digitalized redistribution and coordination centers and not only as areas of cargo exchange.
Keywords: environmental externalities, intermodal transport, perishable food, transit time
Procedia PDF Downloads 99