Search results for: numerical calculations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4346


566 Revenue Management of Perishable Products Considering Freshness and Price Sensitive Customers

Authors: Onur Kaya, Halit Bayer

Abstract:

Global grocery and supermarket sales are among the largest markets in the world, and perishable products such as fresh produce, dairy and meat constitute the biggest section of these markets. Due to their deterioration over time, the demand for these products depends highly on their freshness. They become totally obsolete after a certain amount of time, causing a high amount of wastage and decreases in grocery profits. In addition, customers are asking for higher product variety in perishable product categories, leading to less predictable demand per product and to more out-dating. Effective management of these perishable products is an important issue, since it is observed that billions of dollars’ worth of food expires and is wasted every month. We consider coordinated inventory and pricing decisions for perishable products with a time- and price-dependent random demand function. We use stochastic dynamic programming to model this system for both periodically-reviewed and continuously-reviewed inventory systems and prove certain structural characteristics of the optimal solution. We prove that the optimal ordering decision has a monotone structure and that the optimal price decreases over time. However, the optimal price changes in a non-monotonic manner with respect to inventory size. We also analyze the effect of different parameters on the optimal solution through numerical experiments. In addition, we analyze simple-to-implement heuristics, investigate their effectiveness and extract managerial insights. This study gives valuable insights about the management of perishable products in order to decrease wastage and increase profits.
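
A minimal sketch (not the authors' stochastic model) of why the optimal price of a perishable item falls as it ages: demand is assumed linear in price and scaled by a freshness factor, and revenue is maximized over a discrete price grid. The demand form and all parameter values are illustrative assumptions.

```python
# Illustrative assumption: demand d(p, f) = max(0, a*f - b*p), where f is a
# freshness factor in (0, 1]. Revenue p*d(p, f) is maximized by grid search.

def optimal_price(freshness, a=100.0, b=5.0, prices=None):
    """Return the revenue-maximizing price for a given freshness level."""
    if prices is None:
        prices = [0.5 * k for k in range(1, 41)]  # price grid 0.5 .. 20.0
    best_p, best_rev = prices[0], -1.0
    for p in prices:
        demand = max(0.0, a * freshness - b * p)
        revenue = p * demand
        if revenue > best_rev:
            best_p, best_rev = p, revenue
    return best_p

# The optimal price drops as freshness decays, mirroring the paper's
# finding that the optimal price decreases over time.
p_fresh = optimal_price(1.0)   # freshness 1.0 (new product)
p_old = optimal_price(0.5)     # freshness 0.5 (aged product)
```

For this linear demand the analytic optimum is p* = a·f/(2b), so the grid search returns 10.0 for a fresh item and 5.0 for a half-aged one.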

Keywords: age-dependent demand, dynamic programming, perishable inventory, pricing

Procedia PDF Downloads 246
565 The System-Dynamic Model of Sustainable Development Based on the Energy Flow Analysis Approach

Authors: Inese Trusina, Elita Jermolajeva, Viktors Gopejenko, Viktor Abramov

Abstract:

Global challenges require a transition from the existing linear economic model to a model that considers nature as a life support system for development towards social well-being, in the frame of the ecological economics paradigm. The objective of the article is to present the results of the analysis of socio-economic systems in the context of sustainable development, using the method of analyzing changes in system power (energy flows) and Kaldor's structural model of GDP. In accordance with the principles of life's development and the ecological concept, the tasks of sustainable development of open, non-equilibrium, stable socio-economic systems were formalized using the energy flows analysis method. The methodology for monitoring sustainable development and living standards was considered during the research of interactions in the system ‘human - society - nature’ and using the theory of a unified system of space-time measurements. Based on the results of the analysis, time series of energy consumption and an economic structural model were formulated for the level, degree and tendencies of sustainable development of the system, and the conditions of growth, degrowth and stationarity were formalized. During the research, the authors calculated and used a system of universal indicators of sustainable development in an invariant coordinate system in energy units. In order to design the future state of socio-economic systems, a concept was formulated, and the first models of energy flows in systems were created using the tools of system dynamics. In the context of the proposed approach and methods, universal sustainable development indicators were calculated as models of development for the USA and China.
The calculations used data from the World Bank database for the period from 1960 to 2019. Main results: 1) In accordance with the proposed approach, the heterogeneous energy resources of countries were reduced to universal power units, summarized and expressed as a unified number. 2) The values of universal indicators of living standards were obtained and compared with generally accepted similar indicators. 3) The system of indicators, in accordance with the requirements of sustainable development, can be considered as a basis for monitoring development trends. This work can make a significant contribution to overcoming the difficulties of forming socio-economic policy, which are largely due to the lack of information that would give an idea of the course and trends of socio-economic processes. Existing monitoring methods do not fully meet this requirement, since their indicators come from different areas with different units of measurement and, as a rule, reflect the reaction of socio-economic systems to actions already taken, and with a time shift. Currently, the incommensurability of measures across heterogeneous social, economic, environmental, and other systems is the reason that social systems are managed in isolation from the general laws of living systems, which can ultimately lead to a systemic crisis.
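
A hedged illustration of the "universal power units" step: annual consumption of heterogeneous energy carriers is converted to joules with standard equivalents and expressed as one average power in watts. The conversion factors are the standard ones; the consumption figures are made up and are not the authors' data.

```python
# Standard energy equivalents (IEA conventions); consumption values below
# are illustrative only.
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ~3.156e7 s
J_PER_TOE = 41.868e9                           # tonne of oil equivalent
J_PER_TCE = 29.3076e9                          # tonne of coal equivalent
J_PER_KWH = 3.6e6                              # kilowatt-hour

def average_power_watts(toe=0.0, tce=0.0, kwh=0.0):
    """Reduce mixed annual energy consumption to a single power value in W."""
    total_joules = toe * J_PER_TOE + tce * J_PER_TCE + kwh * J_PER_KWH
    return total_joules / SECONDS_PER_YEAR

# One tonne of oil equivalent consumed per year is roughly 1.33 kW.
p = average_power_watts(toe=1.0)
```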

Keywords: sustainability, system dynamic, power, energy flows, development

Procedia PDF Downloads 58
564 Geological and Geotechnical Investigation of a Landslide Prone Slope Along Koraput- Rayagada Railway Track Odisha, India: A Case Study

Authors: S. P. Pradhan, Amulya Ratna Roul

Abstract:

A number of landslides have been occurring during the rainy season along the Rayagada-Koraput railway track for the past three years. The track was constructed about 20 years ago; however, the existing protection measures are no longer able to control the recurring slope failures. This leads to losses to Indian Railways and its passengers, ultimately wasting time and money. The slopes along the Rayagada-Koraput track include both rock and soil slopes. The rock types are mainly khondalite and charnockite, whereas the soil slopes are mainly composed of laterite, ranging from slightly to highly weathered. Field studies were carried out on one of the critical slopes, followed by kinematic analysis to assess the type of failure. Slake durability, uniaxial compression, specific gravity and triaxial tests were performed on rock samples to determine properties such as weathering index, unconfined compressive strength, density, cohesion, and friction angle. Following the laboratory tests, the Rock Mass Rating was calculated. Further, from the kinematic analysis and the basic Rock Mass Rating, a Slope Mass Rating was proposed for each slope. The properties obtained were used in slope stability simulations based on the finite element method. Based on the results, suitable protection measures to prevent losses due to slope failure were suggested using the relation between Slope Mass Rating and protection measures.
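
The Slope Mass Rating step mentioned above follows Romana's standard correction of the basic Rock Mass Rating; a minimal sketch is below. The adjustment-factor values are illustrative, not those of the slope studied in the paper.

```python
def slope_mass_rating(rmr_basic, f1, f2, f3, f4):
    """Romana's (1985) correction of RMR for slopes:
        SMR = RMR_basic + (F1 * F2 * F3) + F4
    F1, F2 depend on joint/slope orientation (0.15 to 1.0), F3 on the
    dip relations (0 to -60), and F4 on the excavation method
    (-8 for deficient blasting up to +15 for a natural slope)."""
    return rmr_basic + f1 * f2 * f3 + f4

# Illustrative values: fair RMR, unfavourable strike parallelism, and a
# presplit-excavated slope (F4 = +10).
smr = slope_mass_rating(rmr_basic=62, f1=0.70, f2=1.0, f3=-25, f4=10)
```

With these numbers SMR = 62 - 17.5 + 10 = 54.5, i.e. a "partially stable" class in Romana's scheme.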

Keywords: landslides, slope stability, rock mass rating, slope mass rating, numerical simulation

Procedia PDF Downloads 180
563 Vibration Absorption Strategy for Multi-Frequency Excitation

Authors: Der Chyan Lin

Abstract:

Since its early introduction by Ormondroyd and Den Hartog, the vibration absorber (VA) has become one of the most commonly used vibration mitigation strategies. The strategy is most effective for a primary plant subjected to a single-frequency excitation. For continuous systems, notable advances were made in vibration absorption for multi-frequency systems. However, the efficacy of the VA strategy for systems under multi-frequency excitation is not well understood. For example, for an N degrees-of-freedom (DOF) primary-absorber system, there are N 'peak' frequencies of large-amplitude vibration for every new excitation frequency. In general, the usable range for vibration absorption can be greatly reduced as a result. Frequency-modulated harmonic excitation is a commonly seen multi-frequency excitation example: f(t) = cos(ϖ(t)t), where ϖ(t) = ω(1 + α sin(δt)). It is known that f(t) has a series expansion given by the Bessel function of the first kind, which implies an infinity of forcing frequencies in the frequency-modulated harmonic excitation. For an SDOF system of natural frequency ωₙ subjected to f(t), it can be shown that amplitude peaks emerge at ω₍ₚ,ₖ₎ = (ωₙ ± 2kδ)/(α ∓ 1), k∈Z; i.e., there is an infinity of resonant frequencies ω₍ₚ,ₖ₎, k∈Z, making the VA strategy ineffective. In this work, we propose an absorber frequency placement strategy for SDOF vibration systems subjected to frequency-modulated excitation. An SDOF linear mass-spring system coupled to lateral absorber systems is used to demonstrate the ideas. Although the mechanical components are linear, the governing equations for the coupled system are nonlinear. We show, using N identical absorbers for N ≫ 1, that (a) there is a cluster of N+1 natural frequencies around every natural absorber frequency, and (b) the absorber frequencies can be moved away from the plant's resonance frequency (ω₀) as N increases. Moreover, we show that the bandwidth of the VA performance increases with N.
The derivations of the clustering and bandwidth widening effect will be given, and the superiority of the proposed strategy will be demonstrated via numerical experiments.
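
The Bessel-function expansion of a frequency-modulated excitation can be checked numerically (this is our sketch, not the authors' code): cos(2πf₀t + β sin(2πfₘt)) expands as Σₖ Jₖ(β) cos(2π(f₀ + k·fₘ)t), so the spectrum carries sidebands at f₀ ± k·fₘ weighted by Bessel functions of the first kind. All frequencies below are arbitrary test values chosen to fall on exact FFT bins.

```python
import numpy as np

fs, T = 1000.0, 10.0                 # sampling rate, duration (exact bins)
t = np.arange(0, T, 1.0 / fs)
f0, fm, beta = 50.0, 5.0, 1.0        # carrier, modulation freq, mod index
x = np.cos(2 * np.pi * f0 * t + beta * np.sin(2 * np.pi * fm * t))

X = np.abs(np.fft.rfft(x)) * 2.0 / len(x)   # single-sided amplitudes
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)

# Carrier line ~ |J_0(1)| = 0.7652; first sideband ~ |J_1(1)| = 0.4401.
amp_carrier = X[np.argmin(np.abs(freqs - f0))]
amp_sideband = X[np.argmin(np.abs(freqs - (f0 + fm)))]
```

The infinity of lines at f₀ + k·fₘ is exactly what multiplies the number of resonant frequencies an absorber would have to cover.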

Keywords: Bessel function, bandwidth, frequency modulated excitation, vibration absorber

Procedia PDF Downloads 151
562 Vibration Analysis of Stepped Nanoarches with Defects

Authors: Jaan Lellep, Shahid Mubasshar

Abstract:

A numerical solution is developed for simply supported nanoarches based on the non-local theory of elasticity. The nanoarch under consideration has a step-wise variable cross-section and is weakened by crack-like defects. It is assumed that the cracks are stationary and that the mechanical behaviour of the nanoarch can be modeled by Eringen’s non-local theory of elasticity. Physical and thermal properties are sensitive to changes of dimensions at the nano scale. The classical theory of elasticity is unable to describe such changes in material properties, because it was developed without considering the molecular structure of matter. Therefore, the non-local theory of elasticity is applied to study the vibration of nanostructures, and it has been accepted by many researchers. In the non-local theory of elasticity, it is assumed that the stress state at a given point depends on the stress state at every point of the structure, whereas in the classical theory of elasticity the stress state depends only on the given point. The system of main equations consists of equilibrium equations, geometrical relations and constitutive equations, with boundary and intermediate conditions. The system of equations is solved using the method of separation of variables. Consequently, the governing differential equations are converted into a system of algebraic equations, which has a nontrivial solution only if the determinant of the coefficient matrix vanishes. The influence of cracks and steps on the natural vibration of the nanoarches is described with the aid of an additional local compliance at the weakened cross-section. An algorithm to determine the eigenfrequencies of the nanoarches is implemented in software. The effects of various physical and geometrical parameters are recorded and presented graphically.
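
The "determinant of the coefficient matrix vanishes" step can be sketched on the simplest classical (local) analogue: for a simply supported Euler beam, w(x) = C₁sin(bx) + C₂cos(bx) + C₃sinh(bx) + C₄cosh(bx), and the boundary conditions w = w'' = 0 at x = 0, L give a 4×4 homogeneous system whose nontrivial solutions occur where the determinant vanishes. This local toy problem is only an illustration of the solution procedure, not the non-local stepped nanoarch model.

```python
import numpy as np

L = 1.0

def boundary_det(b):
    """Determinant of the 4x4 boundary-condition matrix at wavenumber b."""
    s, c = np.sin(b * L), np.cos(b * L)
    sh, ch = np.sinh(b * L), np.cosh(b * L)
    A = np.array([
        [0.0,  1.0, 0.0, 1.0],    # w(0)   = 0
        [0.0, -1.0, 0.0, 1.0],    # w''(0) = 0  (divided by b^2)
        [s,    c,   sh,  ch],     # w(L)   = 0
        [-s,  -c,   sh,  ch],     # w''(L) = 0  (divided by b^2)
    ])
    return np.linalg.det(A)

# Bisection for the first root; analytically b1 * L = pi.
lo, hi = 2.5, 3.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if boundary_det(lo) * boundary_det(mid) <= 0:
        hi = mid
    else:
        lo = mid
b1 = 0.5 * (lo + hi)
```

The determinant here reduces to -4·sin(bL)·sinh(bL), so the roots bL = nπ recover the textbook simply supported eigenvalues; the nanoarch version replaces this matrix with one built from the non-local equations and crack compliances.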

Keywords: crack, nanoarches, natural frequency, step

Procedia PDF Downloads 127
561 Numerical Investigation of the Transverse Instability in Radiation Pressure Acceleration

Authors: F. Q. Shao, W. Q. Wang, Y. Yin, T. P. Yu, D. B. Zou, J. M. Ouyang

Abstract:

The Radiation Pressure Acceleration (RPA) mechanism is very promising for laser-driven ion acceleration because of its high laser-ion energy conversion efficiency. Although some experiments have shown the characteristics of RPA, the energy of the ions is quite limited: only several MeV/u is obtained in experiments, which is much lower than theoretical predictions. One possible limiting factor is the transverse instability incited during the RPA process. The transverse instability is generally considered to be the Rayleigh-Taylor (RT) instability, an interfacial instability that occurs when a light fluid pushes against a heavy fluid. Multi-dimensional particle-in-cell (PIC) simulations show that the onset of transverse instability destroys the acceleration process and broadens the energy spectrum of fast ions during RPA-dominant ion acceleration. Evidence of the RT instability driven by radiation pressure has been observed in a laser-foil interaction experiment in a typical RPA regime, and the dominant scale of the RT instability is close to the laser wavelength. Here, the development of the transverse instability in RPA-dominant laser-foil interaction is examined numerically by two-dimensional particle-in-cell simulations. When a laser interacts with a foil with a modulated surface, the instability is quickly incited and develops. The linear growth and saturation of the transverse instability are observed, and the growth rate is numerically diagnosed. In order to optimize interaction parameters, a method based on information entropy is put forward to describe the chaotic degree of the transverse instability. With moderate modulation, the transverse instability shows a low chaotic degree and a quasi-monoenergetic proton beam is produced.
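
A hedged sketch of the information-entropy diagnostic (our reading of the idea, not the paper's implementation): a transverse density profile is binned, normalized to a probability distribution, and its Shannon entropy is taken as the chaotic degree. A profile piled into one bin gives zero entropy; a flat, fully mixed profile maximizes it at log(n_bins).

```python
import math

def shannon_entropy(weights):
    """Shannon entropy of a binned (non-negative) density profile."""
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log(p) for p in probs)

ordered = [0.0, 0.0, 10.0, 0.0, 0.0]   # density in a single bin
mixed = [2.0, 2.0, 2.0, 2.0, 2.0]      # density spread over all bins

h_ordered = shannon_entropy(ordered)   # 0: fully ordered
h_mixed = shannon_entropy(mixed)       # log(5): maximally chaotic
```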

Keywords: information entropy, radiation pressure acceleration, Rayleigh-Taylor instability, transverse instability

Procedia PDF Downloads 343
560 Evaluation of the Effect of Turbulence Caused by the Oscillation Grid on Oil Spill in Water Column

Authors: Mohammad Ghiasvand, Babak Khorsandi, Morteza Kolahdoozan

Abstract:

Under the influence of waves, oil in the sea is subject to vertical scattering in the water column. How oil is dispersed in the water column is among the least understood of the processes affecting oil in the marine environment, which highlights the need for research in this field. Therefore, this study investigates the distribution of oil in the water column in a turbulent environment with zero mean velocity. The lack of laboratory results for analyzing the distribution of petroleum pollutants in deep water, both for understanding the physics of the phenomenon and for calibrating numerical models, has motivated the development of laboratory models. In line with the aim of the present study, which is to investigate the distribution of oil in homogeneous and isotropic turbulence generated by an oscillating grid, crude oil was poured onto the water surface after the ideal conditions were reached, and its distribution into deep water due to turbulence was investigated. In this study, all experimental processes were implemented and used for the first time in Iran, and the study of oil diffusion in the water column was considered one of the key aspects of pollutant diffusion in the oscillating grid environment. Finally, the required oscillation velocities were measured at depths of 10, 15, 20, and 25 cm from the water surface and used in the analysis of oil diffusion due to turbulence parameters. The results showed that, for the present system, oil diffusion at the four mentioned depths was 26.18, 31.5, 37.5, and 50% greater (from top to bottom) with grid motion at a frequency of 0.8 Hz than in the static mode.
Also, after 2.5 minutes of the oil spill at a frequency of 0.8 Hz, oil distribution at the mentioned depths increased by 49, 61.5, 85, and 146.1%, respectively, compared to the base (static) state.

Keywords: homogeneous and isotropic turbulence, oil distribution, oscillating grid, oil spill

Procedia PDF Downloads 73
559 Analysis of Vortex-Induced Vibration Characteristics for a Three-Dimensional Flexible Tube

Authors: Zhipeng Feng, Huanhuan Qi, Pingchuan Shen, Fenggang Zang, Yixiong Zhang

Abstract:

Numerical simulations of the vortex-induced vibration of a three-dimensional flexible tube under uniform turbulent flow are performed at a Reynolds number of 1.35×10⁴. In order to capture the vortex-induced vibration, the three-dimensional unsteady, viscous, incompressible Navier-Stokes equations with an LES turbulence model are solved with the finite volume approach, the tube is discretized according to finite element theory, and its dynamic equilibrium equations are solved by the Newmark method. The fluid-tube interaction is realized by utilizing the diffusion-based smooth dynamic mesh method. For the vortex-induced vibration system, the variation trends of the lift coefficient, drag coefficient, displacement, vortex shedding frequency, and phase difference angle of the tube are analyzed under different frequency ratios. The nonlinear phenomena of lock-in and phase-switch are captured successfully. Meanwhile, the limit cycle and bifurcation of the lift coefficient and displacement are analyzed using trajectories, phase portraits, and Poincaré sections. The results reveal that when the drag coefficient reaches its minimum value, the transverse amplitude reaches its maximum and “lock-in” begins simultaneously. In the lock-in range, the amplitude decreases gradually with increasing frequency ratio. When the lift coefficient reaches its minimum value, the phase difference undergoes a sudden change from the “out-of-phase” to the “in-phase” mode.
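
The Newmark time-stepping used for the tube's dynamic equilibrium equations can be sketched on an undamped single-DOF oscillator (the paper applies it to the full FE system; this reduction and all values are illustrative). The average-acceleration variant (β = 1/4, γ = 1/2) is unconditionally stable.

```python
import math

def newmark_free_vibration(m, k, x0, v0, dt, n_steps,
                           beta=0.25, gamma=0.5):
    """Integrate m*a + k*x = 0 with the Newmark-beta scheme."""
    x, v = x0, v0
    a = -(k / m) * x                      # initial acceleration
    for _ in range(n_steps):
        # predictors
        x_pred = x + dt * v + dt * dt * (0.5 - beta) * a
        v_pred = v + dt * (1.0 - gamma) * a
        # implicit solve: m*a_new + k*(x_pred + beta*dt^2*a_new) = 0
        a_new = -k * x_pred / (m + k * beta * dt * dt)
        x = x_pred + beta * dt * dt * a_new
        v = v_pred + gamma * dt * a_new
        a = a_new
    return x, v

# One full period of a unit oscillator (omega = 2*pi, T = 1): the scheme
# should return close to the initial state (x, v) = (1, 0).
T = 1.0
x_end, v_end = newmark_free_vibration(m=1.0, k=(2 * math.pi) ** 2,
                                      x0=1.0, v0=0.0,
                                      dt=T / 2000, n_steps=2000)
```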

Keywords: vortex induced vibration, limit cycle, LES, CFD, FEM

Procedia PDF Downloads 281
558 Supervised Machine Learning Approach for Studying the Effect of Different Joint Sets on Stability of Mine Pit Slopes Under the Presence of Different External Factors

Authors: Sudhir Kumar Singh, Debashish Chakravarty

Abstract:

Slope stability analysis is an important aspect of geotechnical engineering. It is also important from a safety and economic point of view, as any slope failure can lead to the loss of valuable lives and damage to property worth millions. This paper aims at mitigating the risk of slope failure by studying the effect of different joint sets on the stability of mine pit slopes under the influence of various external factors, namely the degree of saturation, rainfall intensity, and seismic coefficients. A supervised machine learning approach has been utilized for making accurate and reliable predictions regarding the stability of slopes based on the value of the Factor of Safety. Numerous cases have been studied by analyzing the stability of slopes using the popular finite element method, and the data thus obtained has been used as training data for the supervised machine learning models. The input data has been trained on different supervised machine learning models, namely Random Forest, Decision Tree, Support Vector Machine, and XGBoost. Distinct test data not present in the training data has been used for measuring the performance and accuracy of the different models. Although all models performed well on the test dataset, Random Forest stands out from the others due to its high accuracy of greater than 95%, providing a valuable tool that is neither computationally expensive nor time consuming and is in good accordance with the numerical analysis results.
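
A hedged sketch of how Factor of Safety labels for such a classifier can be generated: the paper uses FEM, but the classical dry infinite-slope closed form is a simple stand-in that shows the FoS-to-class thresholding. All soil parameters below are illustrative.

```python
import math

def factor_of_safety(c, phi_deg, gamma, h, beta_deg):
    """Dry infinite slope:
        FS = c / (gamma*h*sin(b)*cos(b)) + tan(phi) / tan(b),
    with cohesion c (kPa), friction angle phi, unit weight gamma (kN/m^3),
    depth h (m), and slope angle beta."""
    b = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    return (c / (gamma * h * math.sin(b) * math.cos(b))
            + math.tan(phi) / math.tan(b))

def label(fos, threshold=1.0):
    """Threshold FoS into the class used as the supervised target."""
    return "stable" if fos >= threshold else "unstable"

# Cohesionless soil with friction angle equal to slope angle: FS = 1 exactly.
fs = factor_of_safety(c=0.0, phi_deg=30.0, gamma=18.0, h=5.0, beta_deg=30.0)
```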

Keywords: finite element method, geotechnical engineering, machine learning, slope stability

Procedia PDF Downloads 99
557 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine-learning-assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions - many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and a learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as for example in SchNet and MEGNET. The graph incorporates information regarding the numbers, types and properties of atoms; the types of bonds; and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.
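
A minimal sketch of the Δ-ML idea on toy functions (not real chemistry, and a polynomial fit stands in for the GCN): a cheap low-fidelity surrogate is corrected by a map fitted on a handful of "expensive" high-fidelity points, so that high ≈ low + learned_correction(x).

```python
import numpy as np

# Toy stand-ins: a "Hartree-Fock-like" cheap model and a
# "coupled-cluster-like" expensive target (illustrative assumptions).
low = lambda x: np.sin(x)
high = lambda x: np.sin(x) + 0.1 * x ** 2

# Only a few high-fidelity samples are needed, because the correction
# (here 0.1*x^2) is much smoother than the target itself.
x_train = np.linspace(-2.0, 2.0, 8)
delta = high(x_train) - low(x_train)          # the Delta in Delta-ML
coeffs = np.polyfit(x_train, delta, deg=2)    # fit the correction map

# Predict at unseen points: cheap calculation + learned correction.
x_test = np.array([-1.3, 0.4, 1.7])
prediction = low(x_test) + np.polyval(coeffs, x_test)
error = np.max(np.abs(prediction - high(x_test)))
```

Because the correction is learned rather than the full property surface, the number of high-fidelity training points can be orders of magnitude smaller, which is the economy the abstract describes.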

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 37
556 Characteristics of Double-Stator Inner-Rotor Axial Flux Permanent Magnet Machine with Rotor Eccentricity

Authors: Dawoon Choi, Jian Li, Yunhyun Cho

Abstract:

Axial Flux Permanent Magnet (AFPM) machines have been widely used in various applications due to their important merits, such as compact structure, high efficiency and high torque density. This paper addresses one of the most important characteristics in the design process of AFPM devices, which is a recent issue. In designing an AFPM machine, predicting the electromagnetic forces between the permanent magnets and the stator is important, because the magnitude of the electromagnetic force affects many characteristics such as machine size, noise, vibration, and output power quality. Theoretically, this force is canceled by the equilibrium of forces when the rotor is in the middle of the gap, but deviations are inevitable due to manufacturing tolerances in actual machines. Because of the huge attractive force between the rotor and stator disks, this issue is more serious in high-power applications such as large-scale wind generators. This paper presents the characteristics of double-stator inner-rotor AFPM machines with rotor eccentricity. The unbalanced air-gap and inclined air-gap conditions, caused by rotor offset and tilt in a double-stator single inner-rotor AFPM machine, are each studied in electromagnetic and mechanical aspects. The output voltage and cogging torque of AF machines under abnormal air-gap conditions are first calculated using combined analytical and numerical methods, followed by a structural analysis to study the effect on mechanical stress, deformation and bending forces on the bearings. The results and conclusions given in this paper are instructive for the successful development of AFPM machines.
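
A hedged, idealized sketch (not the paper's analytical/FEM model) of why a centered rotor sees zero net axial force while an offset rotor does not: if the attraction toward each stator is taken to scale as 1/gap², an axial offset δ between the two stators leaves a net unbalanced pull that grows with δ. The force constant and gap values are illustrative.

```python
def net_axial_force(k, gap, delta):
    """Net axial force on the rotor of a double-stator AFPM machine for
    an axial offset delta, under an idealized inverse-square attraction;
    k lumps geometry and magnet flux into one constant."""
    return k * (1.0 / (gap - delta) ** 2 - 1.0 / (gap + delta) ** 2)

g, k = 1.0e-3, 1.0   # 1 mm nominal air gap, arbitrary force constant
f_centered = net_axial_force(k, g, 0.0)      # balanced: forces cancel
f_offset = net_axial_force(k, g, 0.2e-3)     # 0.2 mm offset: net pull
f_larger = net_axial_force(k, g, 0.3e-3)     # larger offset: larger pull
```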

Keywords: axial flux permanent magnet machine, inclined air gap, unbalanced air gap, rotor eccentricity

Procedia PDF Downloads 218
555 Blended Learning in a Mathematics Classroom: A Focus in Khan Academy

Authors: Sibawu Witness Siyepu

Abstract:

This study explores the effects of instructional design using blended learning on the learning of radian measures among engineering students. Blended learning is an education programme that combines online digital media with traditional classroom methods. It requires the physical presence of both lecturer and student in a mathematics computer laboratory. Blended learning provides an element of student control over time, place, path or pace. The focus was on the use of Khan Academy to supplement traditional classroom interactions. Khan Academy is a non-profit educational organisation created by educator Salman Khan with the goal of creating an accessible place for students to learn through watching videos in a computer-assisted environment. The researcher, who is also a lecturer in a mathematics support programme, collected data by instructing students to watch Khan Academy videos on radian measures and by supplying students with traditional classroom activities, namely radian measure activities extracted from the Internet. Students were given an opportunity to engage in class discussions, social interactions and collaborations. These activities required students to write formative assessment tests, the purpose of which was to find out about the students’ understanding of radian measures, including the errors and misconceptions displayed in their calculations. Identification of errors and misconceptions serves as a pointer to students’ weaknesses and strengths in their learning of radian measures. At the end of data collection, semi-structured interviews were administered to a purposefully sampled group to explore their perceptions of and feedback on the use of the blended learning approach in the teaching and learning of radian measures. The study employed the Algebraic Insight Framework to analyse the data collected.
The Algebraic Insight Framework is a subset of symbol sense which allows a student to correctly and efficiently enter expressions into a computer-assisted system. This study offered students opportunities to enter topics and subtopics on radian measures into a computer through the lens of Khan Academy, which demonstrates the procedures followed to reach solutions of mathematical problems. The researcher performed the task of explaining mathematical concepts and facilitated the process of reinvention of rules and formulae in the learning of radian measures. Lastly, activities that reinforce students’ understanding of radian measures were distributed. Results showed that this study enthused the students in their learning of radian measures. Learning through videos prompted the students to ask questions, which brought clarity and sense-making to the classroom discussions. Data revealed that sense-making through the reinvention of rules and formulae assisted the students in enhancing their learning of radian measures. This study recommends that the use of Khan Academy in blended learning be introduced as a socialisation programme to all first-year students. This will prepare students who are computer illiterate to become conversant with Khan Academy as a powerful tool in the learning of mathematics. Khan Academy is a key technological tool that is pivotal for the development of students’ autonomy in the learning of mathematics and that promotes collaboration with lecturers and peers.

Keywords: algebraic insight framework, blended learning, Khan Academy, radian measures

Procedia PDF Downloads 307
554 Causes for the Precession of the Perihelion in the Planetary Orbits

Authors: Kwan U. Kim, Jin Sim, Ryong Jin Jang, Sung Duk Kim

Abstract:

Leverrier was the first in the world to discover the precession of the perihelion in the planetary orbits, while Einstein was the first to explain this astronomical phenomenon. The amount of the precession of the perihelion in Einstein’s theory of gravitation has been explained by means of the inverse fourth power force (inverse third power potential) introduced to the theory of gravitation through the Schwarzschild metric. However, this methodology has a serious shortcoming: it cannot explain the cause of the precession of the perihelion in the planetary orbits. According to our study, without identifying the cause of the precession of the perihelion, 6 methods can explain the amount of the precession discovered by Leverrier. Therefore, the problem of what causes the perihelion to precess in the planetary orbits must be solved for physics, because it is a profound scientific and technological problem for a basic experiment in the construction of a relativistic theory of gravitation. The scientific solution to the problem proved that Einstein’s explanation of the planetary orbits is a magic made by the numerical expressions obtained from fictitious gravitation introduced to the theory of gravitation and a wrong definition of proper time. The problem of the precession of the perihelion seems already solved by means of the general theory of relativity but, in essence, the cause of the astronomical phenomenon has not yet been successfully explained for astronomy. The right solution to the problem comes from a generalized theory of gravitation. Therefore, in this paper, it is shown, by means of the Schwarzschild field and the physical quantities of the relativistic Lagrangian reflected in it, that fictitious gravitation is not the main factor which can cause the perihelion to precess in the planetary orbits.
In addition, it is shown that the main factor which can cause the perihelion to precess in the planetary orbits is the inverse third power force that really exists in the relativistic region of the Solar system.
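
For reference, the amount of precession the abstract discusses is the standard relativistic figure Δφ = 6πGM/(c²a(1−e²)) per orbit, which for Mercury gives the well-known ~43 arcseconds per century. A worked calculation with standard constants:

```python
import math

GM_SUN = 1.32712440018e20      # heliocentric gravitational constant, m^3/s^2
C = 299792458.0                # speed of light, m/s
A = 5.7909e10                  # Mercury semi-major axis, m
E = 0.2056                     # Mercury orbital eccentricity
PERIOD_DAYS = 87.969           # Mercury orbital period, days

# Perihelion advance per orbit (radians), then accumulated over a century.
dphi_per_orbit = 6.0 * math.pi * GM_SUN / (C ** 2 * A * (1.0 - E ** 2))
orbits_per_century = 36525.0 / PERIOD_DAYS
arcsec_per_century = (dphi_per_orbit * orbits_per_century
                      * (180.0 / math.pi) * 3600.0)
```

This evaluates to approximately 43.0 arcseconds per century, matching the anomalous precession attributed to relativistic effects.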

Keywords: inverse third power force, precession of the perihelion, fictitious gravitation, planetary orbits

Procedia PDF Downloads 9
553 Building Biodiversity Conservation Plans Robust to Human Land Use Uncertainty

Authors: Yingxiao Ye, Christopher Doehring, Angelos Georghiou, Hugh Robinson, Phebe Vayanos

Abstract:

Human development is a threat to biodiversity, and conservation organizations (COs) are purchasing land to protect areas for biodiversity preservation. However, COs have limited budgets and thus face hard prioritization decisions that are confounded by uncertainty in future human land use. This research proposes a data-driven sequential planning model to help COs choose land parcels that minimize the uncertain human impact on biodiversity. The proposed model is robust to uncertain development, and the sequential decision-making process is adaptive, allowing land purchase decisions to adapt to human land use as it unfolds. A cellular automata model is leveraged to simulate land use development based on climate data, land characteristics, and the development threat index from the NASA Socioeconomic Data and Applications Center; this simulation is used to model the uncertainty in the problem. This research leverages state-of-the-art techniques from the robust optimization literature to propose a computationally tractable reformulation of the model, which can be solved routinely by off-the-shelf solvers like Gurobi or CPLEX. Numerical results based on real data on the jaguar in Central and South America show that the proposed method reduces conservation loss by 19.46% on average compared to standard approaches such as MARXAN used in practice for biodiversity conservation. Our method may better help guide the decision process in land acquisition and thereby allow conservation organizations to maximize the impact of limited resources.
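
A toy sketch of the robust (max-min) flavor of the land-purchase problem: pick parcels within a budget so that the protected value in the worst development scenario is as large as possible. All numbers are made up; the paper solves a far larger adaptive model via a tractable reformulation and commercial solvers, whereas this brute force only illustrates the objective.

```python
from itertools import combinations

costs = [2, 3, 4]          # purchase cost per parcel (illustrative)
value = [[3, 1],           # protected value of parcel 0 ...
         [2, 4],           # ... under each of two development scenarios
         [5, 2]]
budget = 5

best_set, best_worst = frozenset(), 0
for r in range(len(costs) + 1):
    for subset in combinations(range(len(costs)), r):
        if sum(costs[i] for i in subset) > budget:
            continue                       # infeasible: over budget
        # robust objective: protected value in the WORST scenario
        worst = min(sum(value[i][s] for i in subset) for s in range(2))
        if worst > best_worst:
            best_set, best_worst = frozenset(subset), worst
```

Here the robust optimum is parcels {0, 1}: parcel 2 looks best in scenario 0 but is weak in scenario 1, so hedging across scenarios wins.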

Keywords: data-driven robust optimization, biodiversity conservation, uncertainty simulation, adaptive sequential planning

Procedia PDF Downloads 208
552 Blood Flow Simulations to Understand the Role of the Distal Vascular Branches of Carotid Artery in the Stroke Prediction

Authors: Muhsin Kizhisseri, Jorg Schluter, Saleh Gharie

Abstract:

Atherosclerosis is the main cause of stroke, which is one of the deadliest diseases in the world. The carotid artery supplying the brain is a prominent location for atherosclerotic progression, which hinders the blood flow into the brain. The inclusion of computational fluid dynamics (CFD) in the diagnosis cycle to understand the hemodynamics of the patient-specific carotid artery can give insights into stroke prediction. Realistic outlet boundary conditions are an indispensable part of the numerical simulations and one of the major factors determining the accuracy of the CFD results. Windkessel model-based outlet boundary conditions can capture realistic characteristics of the distal vascular branches of the carotid artery, such as the resistance to the blood flow and the compliance of the distal arterial walls. This study aims to find the most influential distal branches of the carotid artery by using the Windkessel model parameters in the outlet boundary conditions. A parametric study of the Windkessel model parameters can include the geometrical features of the distal branches, such as radius and length. Incorporating variations of the geometrical features of the major distal branches, such as the middle cerebral artery, anterior cerebral artery, and ophthalmic artery, through the Windkessel model can aid in identifying the most influential distal branch of the carotid artery. The results from this study can help physicians and stroke neurologists make a more detailed and accurate judgment of the patient's condition.
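The basic behaviour of a Windkessel outlet can be sketched with the simplest two-element variant, C·dP/dt = Q(t) − P/R, integrated explicitly. The resistance, compliance, and inflow values below are illustrative placeholders, not the study's fitted parameters:

```python
# Two-element Windkessel sketch: C * dP/dt = Q(t) - P/R
R = 1.2e9     # distal resistance, Pa*s/m^3 (illustrative)
C = 1.0e-9    # arterial compliance, m^3/Pa (illustrative)
Q = 5.0e-6    # constant inflow, m^3/s (illustrative)

dt, t_end = 1e-3, 10.0
P = 0.0
for _ in range(int(t_end / dt)):
    P += dt * (Q - P / R) / C   # explicit Euler update

# With constant inflow the pressure relaxes, with time constant R*C,
# to the steady state P = Q*R set by the distal resistance.
print(P, Q * R)
```

In a full simulation Q(t) is the pulsatile flow delivered by the 3D CFD domain at the outlet, and a three-element (RCR) variant adds a proximal resistance.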

Keywords: stroke, carotid artery, computational fluid dynamics, patient-specific, Windkessel model, distal vascular branches

Procedia PDF Downloads 212
551 A Two Server Poisson Queue Operating under FCFS Discipline with an ‘m’ Policy

Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N.Thillaigovindan

Abstract:

For profitable businesses, queues are double-edged swords, and the pain of long wait times often frustrates customers. This paper suggests a technical way of reducing this pain through a Poisson M/M1,M2/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline 'First Come First Served with an m policy' (FCFS-m policy). Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, each exponentially distributed with rate parameter μj (j=1, 2). The primary condition for implementing the FCFS-m policy on these service rates is that either (m+1)μ2 > μ1 > mμ2 or (m+1)μ1 > μ2 > mμ1 must be satisfied. Waiting customers prefer server 1 whenever it becomes available for service, and server 2 is installed if and only if the queue length exceeds the threshold value m. Steady-state results on queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ*2 of server 2 is illustrated in a specific numerical exercise that equalizes the average queue length cost with the service cost. Assuming that server 1 dynamically adjusts the service rate to μ1 while the system size is strictly less than T=(m+2) with μ2=0, and to μ1+μ2 with μ2>0 when the system size is greater than or equal to T, the corresponding steady-state results of M/M1+M2/1 queues have been deduced from those of M/M1,M2/2 queues. To show that this investigation has a viable application, the results of M/M1+M2/1 queues have been used to process waiting messages in a single computer node and to measure the power consumption of the node.
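Under the threshold policy the system-size process behaves like a birth-death chain: server 1 alone serves while the queue length is at most m, and both servers work once it exceeds m. A rough numerical sketch of that simplification (a truncated birth-death computation, not the paper's exact analysis) is:

```python
def mean_number_in_system(lam, mu1, mu2, m, n_max=500):
    """Birth-death sketch of the M/M1,M2/2 queue with an 'm' policy:
    the departure rate is mu1 while n <= m+1 (queue length <= m),
    and mu1+mu2 once the queue length exceeds m."""
    p = [1.0]                          # unnormalised stationary p_0
    for n in range(1, n_max + 1):
        rate = mu1 if n <= m + 1 else mu1 + mu2
        p.append(p[-1] * lam / rate)   # detailed balance: p_n = p_{n-1}*lam/rate
    z = sum(p)
    return sum(n * pn for n, pn in enumerate(p)) / z

lam = 1.0
L = mean_number_in_system(lam, mu1=1.0, mu2=1.0, m=0)
W = L / lam                            # mean sojourn time via Little's law
print(L, W)   # with equal rates and m=0 this matches M/M/2: L = 4/3
```

Sweeping μ2 in this function, while attaching a holding cost to L and a service cost to μ2, mimics the paper's search for the cost-balancing rate μ*2.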

Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue

Procedia PDF Downloads 361
550 Numerical Simulation of Footing on Reinforced Loose Sand

Authors: M. L. Burnwal, P. Raychowdhury

Abstract:

Earthquakes have adverse effects on buildings resting on soft soils. Mitigating the response of shallow foundations on soft soil reduces settlement and provides foundation stability. Methods such as rocking foundations (used in performance-based design), deep foundations, prefabricated drains, grouting, and vibro-compaction are used to control the pore pressure and enhance the strength of loose soils. One problem with these methods is that the settlement is uncontrollable, leading to differential settlement of the footings and, further, to the collapse of buildings. The present study investigates the utility of geosynthetics as a potential improvement of the subsoil to reduce the earthquake-induced settlement of structures. A steel moment-resisting frame building resting on loose, liquefiable, dry soil, subjected to the Uttarkashi 1991 and Chamba 1995 earthquakes, is used for the soil-structure interaction (SSI) analysis. The continuum model can simultaneously simulate the structure, soil, interfaces, and geogrids in the OpenSees framework. Soil is modeled with the PressureDependMultiYield (PDMY) material model using Quad elements that provide stress-strain at Gauss points, calibrated to predict the behavior of Ganga sand. The model, analyzed with tied degree-of-freedom contact, reveals that the system responses align with the shake table experimental results. An attempt is made to study the responses of the footing, structure, and geosynthetics with unreinforced and reinforced bases under varying parameters. The results show that geogrid-reinforced shallow foundations effectively reduce the settlement by 60%.

Keywords: settlement, shallow foundation, SSI, continuum FEM

Procedia PDF Downloads 193
549 Experimental and Numerical Evaluation of a Shaft Failure Behaviour Using Three-Point Bending Test

Authors: Bernd Engel, Sara Salman Hassan Al-Maeeni

Abstract:

A substantial amount of natural resources is nowadays consumed at a growing rate, as humans all over the world use materials obtained from the Earth. The machinery manufacturing industry is one of the major resource consumers on a global scale. Despite the continual discovery of new materials, metals, and resources, it is urgent for the industry to develop methods to use the Earth's resources intelligently and more sustainably than before. Re-engineering of machine tools regarding design and failure analysis is an approach whereby out-of-date machines are upgraded and returned to useful life. To ensure the reliable future performance of used machine components, it is essential to investigate component failure through material, design, and surface examinations. This paper presents an experimental approach aimed at inspecting the shaft of a rotary draw bending machine as a case study. The testing methodology, which is based on the principle of the three-point bending test, allows assessing the shaft's elastic behavior under loading. Furthermore, the shaft's elastic characteristics, including the maximum linear deflection and the maximum bending stress, were determined using an analytical approach and a finite element (FE) analysis approach. Finally, the results were compared with those obtained by the experimental approach. In conclusion, the measured bending deflection and bending stress were well within the permissible design values. Therefore, the shaft can work in a second life cycle. However, based on previous surface tests conducted, the shaft needs surface treatments, including re-carburizing and refining processes, to ensure reliable surface performance.
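The analytical side of such a check uses the standard elastic three-point bending formulas for a simply supported circular shaft: δ_max = FL³/(48EI) and σ_max = (FL/4)(d/2)/I with I = πd⁴/64. A back-of-envelope sketch with illustrative dimensions (not the paper's shaft data):

```python
import math

# Elastic three-point bending of a solid circular shaft (illustrative values)
F = 10_000.0      # mid-span load, N
L = 0.5           # span between supports, m
d = 0.05          # shaft diameter, m
E = 210e9         # Young's modulus of steel, Pa

I = math.pi * d**4 / 64              # second moment of area of the section
delta_max = F * L**3 / (48 * E * I)  # max deflection, at mid-span
M_max = F * L / 4                    # max bending moment, at mid-span
sigma_max = M_max * (d / 2) / I      # max bending stress, at the outer fibre

print(f"deflection = {delta_max*1e3:.3f} mm, stress = {sigma_max/1e6:.1f} MPa")
```

Comparing such hand values against the FE and measured results is exactly the cross-validation the abstract describes.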

Keywords: deflection, FE analysis, shaft, stress, three-point bending

Procedia PDF Downloads 158
548 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil

Authors: M. Seguini, D. Nedjar

Abstract:

An accurate nonlinear analysis of a deep beam resting on elastic, perfectly plastic soil is carried out in this study. A nonlinear finite element model for the large deflection and moderate rotation of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on the theory of von Kármán, and the Newton-Raphson incremental-iterative method is implemented in a Matlab code to solve the nonlinear equations of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are proposed to verify the accuracy and efficiency of the proposed model, where the local average theory based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young's modulus of the beam, and the coefficient of variation and the correlation length of the soil's coefficient of subgrade reaction. A comparison between the beam resting on the linear and nonlinear soil models is presented for different beam lengths and external loads. Numerical results have been obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlights the need to include the material nonlinearity and spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.
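The Newton-Raphson loop at the heart of such a solver can be sketched on a single degree of freedom with a cubic hardening foundation term. The stiffnesses and load below are toy values standing in for the assembled soil-beam residual, not the paper's model:

```python
def newton_raphson(P, k1=1000.0, k3=1e6, tol=1e-10, max_iter=50):
    """Solve the scalar nonlinear equilibrium k1*u + k3*u**3 = P,
    a one-DOF stand-in for the assembled soil-beam residual."""
    u = 0.0
    for _ in range(max_iter):
        residual = k1 * u + k3 * u**3 - P   # out-of-balance force
        if abs(residual) < tol:
            break
        tangent = k1 + 3 * k3 * u**2        # consistent tangent stiffness
        u -= residual / tangent             # Newton update
    return u

u = newton_raphson(P=100.0)
print(u, 1000 * u + 1e6 * u**3)   # residual ~ 0 at convergence
```

In the full model u is a vector of nodal displacements and rotations, the tangent is the assembled stiffness matrix, and the load P is applied in increments.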

Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability

Procedia PDF Downloads 412
547 Theoretical and Experimental Investigation of Structural, Electrical and Photocatalytic Properties of K₀.₅Na₀.₅NbO₃ Lead- Free Ceramics Prepared via Different Synthesis Routes

Authors: Manish Saha, Manish Kumar Niranjan, Saket Asthana

Abstract:

The K₀.₅Na₀.₅NbO₃ (KNN) system has emerged as one of the most promising lead-free piezoelectrics over the years. In this work, we perform a comprehensive investigation of the electronic structure, lattice dynamics, and dielectric/ferroelectric properties of the room-temperature phase of KNN by combining ab-initio DFT-based theoretical analysis and experimental characterization. We assign symmetry labels to the KNN vibrational modes and obtain ab-initio polarized Raman spectra, infrared (IR) reflectivity, Born effective charge tensors, oscillator strengths, etc. The computed Raman spectrum is found to agree well with the experimental spectrum. In particular, the results suggest that the mode in the range ~840-870 cm⁻¹ reported in experimental studies is longitudinal optical (LO) with A₁ symmetry. The Raman mode intensities are calculated for different light polarization set-ups, which suggests the observation of different symmetry modes in different polarization set-ups. The electronic structure of KNN is investigated, and an optical absorption spectrum is obtained. Further, the performance of DFT semi-local, meta-GGA, and hybrid exchange-correlation (XC) functionals in estimating the KNN band gap is investigated. The KNN band gaps computed using the GGA-1/2 and HSE06 hybrid functional schemes are found to be in excellent agreement with the experimental value. COHP, electron localization function, and Bader charge analyses are also performed to deduce the nature of chemical bonding in KNN. The solid-state reaction and hydrothermal methods are used to prepare the KNN ceramics, and the effects of grain size on the physical characteristics of these ceramics are examined, providing a comprehensive study of the impact of different synthesis techniques on the structural, electrical, and photocatalytic properties of the ferroelectric ceramic KNN. The KNN-S samples prepared by the solid-state method have a significantly larger grain size than the KNN-H samples prepared by the hydrothermal method. Furthermore, KNN-S is found to exhibit higher dielectric, piezoelectric, and ferroelectric properties than KNN-H. On the other hand, increased photocatalytic activity is observed in KNN-H compared to KNN-S. Compared to hydrothermal synthesis, solid-state synthesis increases the relative dielectric permittivity (ε′) from 2394 to 3286, the remnant polarization (P_r) from 15.38 to 20.41 μC/cm², the planar electromechanical coupling factor (k_p) from 0.19 to 0.28, and the piezoelectric coefficient (d₃₃) from 88 to 125 pC/N. The KNN-S ceramics are also found to have a lower leakage current density and higher grain resistance than the KNN-H ceramics. The enhanced photocatalytic activity of KNN-H is attributed to its relatively smaller particle size. The KNN-S and KNN-H samples are found to have RhB solution degradation efficiencies of 20% and 65%, respectively. This experimental study highlights the importance of synthesis methods and how they can be exploited to tailor the dielectric, piezoelectric, and photocatalytic properties of KNN. Overall, our study provides several important benchmark results on KNN that have not been reported so far.

Keywords: lead-free piezoelectric, Raman intensity spectrum, electronic structure, first-principles calculations, solid state synthesis, photocatalysis, hydrothermal synthesis

Procedia PDF Downloads 47
546 Estimation of Atmospheric Parameters for Weather Study and Forecast over Equatorial Regions Using a Ground-Based Global Positioning System

Authors: Asmamaw Yehun, Tsegaye Kassa, Addisu Hunegnaw, Martin Vermeer

Abstract:

There are various models to estimate the values of neutral atmospheric parameters, such as in-situ and reanalysis datasets from numerical models. Accurately estimated values of the atmospheric parameters are useful for weather forecasting, climate modeling, and the monitoring of climate change. Recently, Global Navigation Satellite System (GNSS) measurements have been applied to atmospheric sounding due to their robust data quality and wide horizontal and vertical coverage. Global Positioning System (GPS) solutions that include tropospheric parameters constitute a reliable set of data to be assimilated into climate models. The objective of this paper is to estimate the neutral atmospheric parameters, namely the Wet Zenith Delay (WZD), Precipitable Water Vapour (PWV), and Total Zenith Delay (TZD), using observational data from 2012 to 2015 at six selected GPS stations in the equatorial regions, more precisely, Ethiopian GPS stations. Based on historic GPS-derived estimates of PWV, we forecasted the PWV from 2015 to 2030. During data processing and analysis, we applied the GAMIT-GLOBK software packages to estimate the atmospheric parameters. As a result, we found that the annual averaged minimum value of PWV is 9.72 mm for the IISC station and the maximum is 50.37 mm for the BJCO station. The annual averaged minimum value of WZD is 6 cm for IISC and the maximum is 31 cm for BDMT. In the long series of observations (from 2012 to 2015), we also found trends and cyclic patterns of WZD, PWV and TZD for all stations.
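The step from the GPS-derived zenith wet delay to PWV is a scalar conversion, PWV = Π·ZWD, in the widely used Bevis-style formulation. A sketch with standard literature refractivity constants and an illustrative weighted mean temperature (Tm is actually site- and season-dependent, not fixed):

```python
# Conversion from zenith wet delay (ZWD) to precipitable water vapour (PWV)
rho_w = 1000.0        # density of liquid water, kg/m^3
R_v = 461.5           # specific gas constant of water vapour, J/(kg K)
k3 = 3.776e5          # refractivity constant, K^2/hPa (literature value)
k2_prime = 16.5       # refractivity constant, K/hPa (literature value)
Tm = 270.0            # weighted mean temperature, K (illustrative)

# /100 converts K/hPa to K/Pa; the 1e6 reflects refractivity in ppm
denom = rho_w * R_v * (k3 / Tm + k2_prime) / 100.0
Pi = 1e6 / denom      # dimensionless factor, ~0.15

zwd_mm = 200.0        # zenith wet delay, mm (illustrative)
pwv_mm = Pi * zwd_mm
print(Pi, pwv_mm)     # ~0.153, so PWV ~ 30.6 mm
```

This matches the magnitudes in the abstract: a WZD of ~31 cm maps to a PWV of roughly 47 mm, and ~6 cm maps to roughly 9 mm.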

Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour

Procedia PDF Downloads 58
545 Finite Element Model to Evaluate Gas Conning Phenomenon in Naturally Fractured Oil Reservoirs

Authors: Reda Abdel Azim

Abstract:

The gas coning phenomenon is considered one of the prevalent issues in oil field applications, as it significantly affects the amount of oil produced, increases the cost of production operations, and has a direct effect on the recovery efficiency of oil reservoirs as well. Therefore, evaluating this phenomenon and studying the reservoir mechanisms that may strongly drive gas invasion into the producing formation is crucial. Gas coning is the result of an imbalance between the two major forces controlling oil production: gravitational and viscous forces, especially in naturally fractured reservoirs where the capillary pressure forces are negligible. Once gas invades the producing formation near the wellbore due to a large oil production rate, the gas-oil contact changes, and such reservoirs are prone to gas coning. Moreover, the oil volume expected to be produced requires the use of long horizontal perforated wells. This work presents a numerical simulation study to predict and propose solutions to gas coning in naturally fractured oil reservoirs. The simulation work is based on discrete fracture and permeability tensor approaches. The governing equations are discretized using a finite element approach, and the Galerkin least squares (GLS) technique is employed to stabilize the equation solutions. The developed simulator is validated against Eclipse-100 using horizontal fractures. The matrix and fracture properties are modelled. The critical rate, breakthrough time, and GOR are determined and used to investigate the effect of matrix and fracture properties on gas coning. Results show that the fracture distribution, in terms of diverse dips and azimuths, has a great effect on the occurrence of coning; fracture porosity, anisotropy ratio, and fracture aperture also have a significant influence.

Keywords: gas coning, finite element, fractured reservoirs, multiphase

Procedia PDF Downloads 194
544 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data regarding the time leading up to an event where censored cases exist, whereas the logistic regression model is mostly applicable in cases where the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). The arguments and findings of many researchers have focused on overviews of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is done on secondary data, sourced from the SPSS exercise data on breast cancer, with a sample size of 1121 women; the main objective is to show the application difference between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis was done manually, i.e., on lymph node status, and SPSS software helped to analyze the rest of the data. This study found that there is an application difference between the Cox and logistic regression models: the Cox regression model is used if one wishes to analyze data that also include follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio and the odds ratio for the Cox and logistic regression models, respectively. A similarity between the two models is that both are applicable in predicting the outcome of a categorical variable, i.e., a variable that can accommodate only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. The two models can be applied in many other studies since both are suitable methods for analyzing data, but the Cox regression model is the more recommended.
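The two measures of association can be made concrete on a toy follow-up summary (hypothetical counts and person-years, not the SPSS breast-cancer data): the logistic scale yields an odds ratio from event counts alone, while the Cox/rate scale yields a hazard (incidence-rate) ratio that brings follow-up time into the denominator.

```python
# Hypothetical 2x2 summary: exposed vs unexposed, deaths and person-years
deaths_exp, n_exp, pyears_exp = 30, 100, 420.0
deaths_unexp, n_unexp, pyears_unexp = 15, 100, 480.0

# Odds ratio (logistic-regression scale): uses counts only
odds_exp = deaths_exp / (n_exp - deaths_exp)
odds_unexp = deaths_unexp / (n_unexp - deaths_unexp)
odds_ratio = odds_exp / odds_unexp

# Incidence-rate (hazard) ratio (Cox scale): uses follow-up time
rate_exp = deaths_exp / pyears_exp
rate_unexp = deaths_unexp / pyears_unexp
hazard_ratio = rate_exp / rate_unexp

print(round(odds_ratio, 2), round(hazard_ratio, 2))   # 2.43 vs 2.29
```

The two estimates differ even on the same data because the exposed group here accrues less follow-up time; this is the "rate instead of proportion" distinction the abstract draws.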

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 452
543 Geographic Information System and Dynamic Segmentation of Very High Resolution Images for the Semi-Automatic Extraction of Sandy Accumulation

Authors: A. Bensaid, T. Mostephaoui, R. Nedjai

Abstract:

A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of increases in the irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mecheria department has generated a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, and sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cordons based on the numerical processing of LANDSAT images (5, 7, and 8) of three scenes, 197/37, 198/36 and 198/37, for the year 2020. As a second step, we explore the use of geospatial techniques to monitor the progression of sand dunes on developed (urban) lands as well as the formation of sandy accumulations (dunes, dune fields, nebkhas, barchans, etc.). For this purpose, this study made use of a semi-automatic processing method for the dynamic segmentation of images with very high spatial resolution (SENTINEL-2 and Google Earth). This study demonstrates that urban lands under current conditions are located in sand transit zones mobilized by winds from the northwest and southwest directions.

Keywords: land development, GIS, segmentation, remote sensing

Procedia PDF Downloads 152
542 Asia Pacific University of Technology and Innovation

Authors: Esther O. Adebitan, Florence Oyelade

Abstract:

The Millennium Development Goals (MDGs) were initiated by the UN member nations' aspiration for the betterment of human life, expressed as a set of numerical and time-bound targets. More recently, the aspiration has been shifting away from mere achievement towards the sustainability of the achieved MDGs beyond the 2015 target. The main objective of this study was to assess how much the hotel industry within the Nigerian Federal Capital Territory (FCT), as a member of the global community, is involved in the achievement of sustainable MDGs within the FCT. The study had two population groups, consisting of 160 hotels and the communities where these are located. A stratified random sampling technique was adopted in selecting 60 hotels based on a large, medium and small hotel categorisation, while a simple random sampling technique was used to elicit information from 30 residents of three of the hotels' host communities. The study was guided by three research questions and two hypotheses, aimed at ascertaining whether hotels see the need to be involved in, and have policies in pursuit of, achieving sustained MDGs, and at determining public opinion regarding hotels' contribution towards the achievement of the MDGs in their communities. A 22-item questionnaire was designed and administered to hotel managers, while an 11-item questionnaire was designed and administered to the hotels' host communities. Frequency distribution and percentages, as well as the Chi-square test, were used to analyse the data. Results showed no significant involvement of the hotel industry in achieving sustained MDGs in the FCT and a disconnect between the hotels and their immediate communities. The study recommended that hotels should, as part of their corporate social responsibility, pick at least one of the goals to work on in order to be involved in the attainment of enduring Millennium Development Goals.
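The chi-square test of independence used in the analysis can be sketched by hand on a hypothetical 2x2 contingency table (hotel size vs involvement; the counts below are invented for illustration, not the study's data):

```python
# Hand-rolled chi-square test of independence on a hypothetical 2x2 table
observed = [[12, 18],    # involved:     large/medium, small hotels
            [8, 22]]     # not involved: large/medium, small hotels

row = [sum(r) for r in observed]
col = [sum(c) for c in zip(*observed)]
total = sum(row)

# chi2 = sum over cells of (observed - expected)^2 / expected,
# where expected[i][j] = row_total[i] * col_total[j] / grand_total
chi2 = sum(
    (observed[i][j] - row[i] * col[j] / total) ** 2 / (row[i] * col[j] / total)
    for i in range(2) for j in range(2)
)
critical = 3.841   # chi-square critical value, df=1, alpha=0.05
print(round(chi2, 3), chi2 > critical)   # 1.2 False -> do not reject H0
```

A statistic below the critical value, as here, corresponds to the study's finding of no significant involvement.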

Keywords: MDGs, hotels, FCT, host communities, corporate social responsibility

Procedia PDF Downloads 417
541 Combined Effect of Moving and Open Boundary Conditions in the Simulation of Inland Inundation Due to Far Field Tsunami

Authors: M. Ashaque Meah, Md. Fazlul Karim, M. Shah Noor, Nazmun Nahar Papri, M. Khalid Hossen, M. Ismoen

Abstract:

Tsunami and inundation modelling due to far-field tsunami propagation in a limited area is a very challenging numerical task because it involves many aspects, such as the formation of various types of waves and the irregularities of coastal boundaries. To compute the effect of a far-field tsunami and the extent of inland inundation along the coastal belts of the west coast of Malaysia and southern Thailand, a formulated boundary condition and a moving boundary condition are used simultaneously. In this study, a boundary-fitted curvilinear grid system is used in order to incorporate the coastal and island boundaries accurately, as the boundaries of the model domain are curvilinear in nature and highly bent. The tsunami response of the 26 December 2004 event along the west open boundary of the model domain is computed to simulate the effect of the far-field tsunami. Based on the data of the tsunami source at the west open boundary of the model domain, a boundary condition is formulated and applied to simulate the tsunami response along the coastal and island boundaries. During the simulation process, a moving boundary condition is initiated instead of a fixed vertical seaside wall. The extent of inland inundation and the tsunami propagation pattern are computed. Some comparisons are carried out to test the validity of the simultaneous use of the two boundary conditions. All simulations show excellent agreement with observational data.
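The governing shallow water equations can be illustrated in their simplest linearised 1D form on a periodic grid: du/dt = −g·dη/dx and dη/dt = −H·du/dx, stepped with the classic staggered forward-backward scheme. This is only a sketch of the equations; the study itself uses boundary-fitted curvilinear grids with open and moving boundaries, which this toy omits:

```python
import math

g, H = 9.81, 100.0            # gravity, uniform water depth (illustrative)
nx, dx = 200, 1000.0          # grid: 200 cells of 1 km
c = math.sqrt(g * H)          # shallow-water wave speed, ~31.3 m/s
dt = 0.5 * dx / c             # Courant number 0.5 (stable for CFL <= 1)

# initial Gaussian free-surface hump, zero initial velocity
eta = [math.exp(-(((i - nx // 2) * dx) / 10000.0) ** 2) for i in range(nx)]
u = [0.0] * nx

for _ in range(500):
    # forward-backward: update u from current eta, then eta from the new u
    u = [u[i] - g * dt / dx * (eta[i] - eta[i - 1]) for i in range(nx)]
    eta = [eta[i] - H * dt / dx * (u[(i + 1) % nx] - u[i]) for i in range(nx)]

amp = max(abs(e) for e in eta)
print(amp)   # amplitude stays bounded: the scheme is non-dissipative and stable
```

The hump splits into two counter-propagating waves of roughly half the initial amplitude, the 1D analogue of the outgoing/incoming waves handled by the open boundary condition.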

Keywords: open boundary condition, moving boundary condition, boundary-fitted curvilinear grids, far-field tsunami, shallow water equations, tsunami source, Indonesian tsunami of 2004

Procedia PDF Downloads 445
540 The Relationships between Market Orientation and Competitiveness of Companies in Banking Sector

Authors: Patrik Jangl, Milan Mikuláštík

Abstract:

The objective of the paper is to measure and compare the market orientation of Swiss and Czech banks, as well as to examine statistically the degree of influence it has on the competitiveness of the institutions. The analysis of market orientation is based on the collection, analysis and correct interpretation of the data. A descriptive analysis of market orientation describes the current situation. Research into the relationship between competitiveness and market orientation in the sector of big international banks is proposed, with the expectation that a strong relationship exists. In part, the work served as a reconfirmation of the suitability of classic methodologies for measuring banks' market orientation. Two types of data were gathered: first, by measuring the subjectively perceived market orientation of a company and, second, by quantifying its competitiveness. All data were collected from a sample of small, mid-sized and large banks. We used numerical secondary data from the international statistical financial Bureau van Dijk BANKSCOPE database. The statistical analysis led to the following results. Assuming classic market orientation measures to be scientifically justified, Czech banks are statistically less market-oriented than Swiss banks. Secondly, among small Swiss banks, which are not broadly internationally active, only a weak relationship exists between market orientation measures and market-share-based competitiveness measures. Thirdly, among all Swiss banks, a strong relationship exists between market orientation measures and market-share-based competitiveness measures. The above results imply the existence of a strong relationship for this measure in the sector of big international banks. A strong statistical relationship has also been proven to exist between market orientation measures and the equity/total assets ratio in Switzerland.

Keywords: market orientation, competitiveness, marketing strategy, measurement of market orientation, relation between market orientation and competitiveness, banking sector

Procedia PDF Downloads 474
539 Green Supply Chain Network Optimization with Internet of Things

Authors: Sema Kayapinar, Ismail Karaoglan, Turan Paksoy, Hadi Gokcen

Abstract:

Green Supply Chain Management is gaining growing interest among researchers and supply chain managers. The concept of Green Supply Chain Management is to integrate environmental thinking into Supply Chain Management. It is a systematic concept with an emphasis on environmental problems such as the reduction of greenhouse gas emissions, energy efficiency, the recycling of end-of-life products, and the generation of solid and hazardous waste. This study presents a green supply chain network model integrating Internet of Things applications. The Internet of Things provides precise and accurate information on end-of-life products through sensors and system devices. The forward direction consists of suppliers, plants, distribution centres, and sales and collection centres, while the reverse flow includes the sales and collection centres, a disassembly centre, and recycling and disposal centres. The sales and collection centres sell the new products, which are transhipped from the factory via distribution centres, and also receive end-of-life products according to their value level. We describe green logistics activities by presenting specific examples, including the recycling of returned products and the reduction of CO2 gas emissions. The different transportation choices between echelons are illustrated according to their CO2 gas emissions. This problem is formulated as a mixed integer linear programming model to solve green supply chain problems which emerge from environmental awareness and responsibilities. The model is solved using the GAMS package. Numerical examples are presented to illustrate the efficiency of the proposed model.
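The trade-off between transportation cost and CO2 emissions across echelons can be illustrated with a tiny mode-selection toy: pick a transport mode per lane to minimize cost subject to an overall emissions cap. The lanes, per-tonne costs, and emission factors below are invented stand-ins for the paper's MILP data, and brute-force enumeration stands in for the GAMS solver:

```python
from itertools import product

# Hypothetical plant -> distribution-centre lanes and tonnes shipped
lanes = {"P1->DC1": 120.0, "P1->DC2": 80.0, "P2->DC1": 60.0}
# mode: (cost per tonne, kg CO2 per tonne) -- illustrative factors
modes = {"truck": (8.0, 100.0), "rail": (12.0, 30.0)}
co2_cap = 12_000.0   # total kg CO2 allowed (the "green" constraint)

best = None
for choice in product(modes, repeat=len(lanes)):
    cost = sum(modes[m][0] * q for m, q in zip(choice, lanes.values()))
    co2 = sum(modes[m][1] * q for m, q in zip(choice, lanes.values()))
    if co2 <= co2_cap and (best is None or cost < best[0]):
        best = (cost, co2, dict(zip(lanes, choice)))

print(best)   # cheapest mode mix that still respects the emissions cap
```

The emissions cap forces the cheaper-but-dirtier truck mode onto only the smallest lane, which is the qualitative behaviour the model's numerical examples demonstrate at scale.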

Keywords: green supply chain optimization, internet of things, greenhouse gas emission, recycling

Procedia PDF Downloads 327
538 Analysis of Rockfall Hazard along Himalayan Road Cut Slopes

Authors: Sarada Prasad Pradhan, Vikram Vishal, Tariq Siddique

Abstract:

With a vast area of India comprising hilly terrain and road cut slopes, landslides and rockfalls are a common phenomenon. However, while landslide studies have received much attention in India in the past, very little literature and analysis are available regarding the rockfall hazard of many rockfall-prone areas, specifically in the Uttarakhand Himalaya, India. The consequent lack of knowledge and understanding of the rockfall phenomenon, as well as frequent incidences of rockfall-led fatalities, urge the necessity of conducting site-specific rockfall studies to highlight the importance of addressing this issue as well as to provide data for the safe design of preventive structures. The present study was conducted across 10 rockfall-prone road cut slopes over a distance of 15 km starting from Devprayag, India, along National Highway 58 (NH-58). In order to make a qualitative assessment of the rockfall hazard posed by these slopes, Rockfall Hazard Rating using standards for Indian rockmass was conducted at 10 locations under different slope conditions. Moreover, to accurately predict the characteristics of possible rockfall events, numerical simulation was carried out to calculate the maximum bounce heights, total kinetic energies, translational velocities and trajectories of the falling rockmass blocks when simulated on each of these slopes according to real-life conditions. As it was observed that varying slope geometry had a more critical impact on rockfall hazard than the size of the rock masses, several optimizations are suggested for each slope regarding the location of barriers and the modification of slope geometries in order to minimize the damage caused by falling rocks. This study can be extremely useful in emphasizing the significance of rockfall studies and the construction of mitigative barriers and structures along NH-58 around Devprayag.
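The quantities such simulations report (bounce heights, impact kinetic energies) follow from lumped-mass kinematics with a restitution coefficient at each impact. A heavily simplified sketch on a horizontal bench, with invented block mass, drop height, and restitution (real simulations use the slope geometry and separate normal/tangential restitution coefficients):

```python
import math

g = 9.81
mass = 500.0        # block mass, kg (illustrative)
h0 = 20.0           # initial fall height, m (illustrative)
e = 0.35            # restitution coefficient (assumed rock-on-rock value)

h = h0
for k in range(4):
    v_impact = math.sqrt(2 * g * h)      # free-fall speed at impact
    ke = 0.5 * mass * v_impact**2        # impact kinetic energy (= m*g*h)
    print(f"bounce {k}: height {h:.3f} m, impact KE {ke/1000:.1f} kJ")
    h *= e**2                            # rebound: h_{k+1} = e^2 * h_k
```

The geometric decay of bounce height with e² is why barrier placement (catching the block early, while energies are still predictable) matters more than block size alone.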

Keywords: rockfall, slope stability, rockmass, hazard

Procedia PDF Downloads 207
537 Numerical Investigation of the Needle Opening Process in a High Pressure Gas Injector

Authors: Matthias Banholzer, Hagen Müller, Michael Pfitzner

Abstract:

Gas internal combustion engines are widely used as propulsion systems or in power plants to generate heat and electricity. While there are different injection methods, including manifold port fuel injection and direct injection, the latter has more potential to increase the specific power by avoiding air displacement in the intake and to reduce combustion anomalies such as backfire or pre-ignition. During the opening process of the injector, multiple flow regimes occur: subsonic, transonic and supersonic. To cover this wide range of Mach numbers, a compressible pressure-based solver is used. While the standard Pressure Implicit with Splitting of Operators (PISO) method is used for the coupling between velocity and pressure, a high-resolution non-oscillatory central scheme established by Kurganov and Tadmor calculates the convective fluxes. A blending function based on the local Mach and CFL numbers switches between the compressible and incompressible regimes of the developed model. As the considered operating points are well above the critical state of the fluids used, the ideal gas assumption is no longer valid. For the real-gas thermodynamics, models based on the Soave-Redlich-Kwong equation of state were implemented. The caloric properties are corrected using a departure-function formalism; for the viscosity and the thermal conductivity, the empirical correlation of Chung is used. The injector geometry was adapted from the dimensions of a diesel injector. Simulations were performed using different nozzle and needle geometries and opening curves, and the results show a significant influence of all three parameters on the needle opening process.
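The Soave-Redlich-Kwong equation of state mentioned above can be sketched as follows. This is a generic textbook form of SRK, not the authors' implementation; the critical data for hydrogen at the end are approximate literature values used only for illustration.

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def srk_pressure(T, v, Tc, Pc, omega):
    """Pressure from the Soave-Redlich-Kwong equation of state.

    T     temperature [K]
    v     molar volume [m^3/mol]
    Tc    critical temperature [K]
    Pc    critical pressure [Pa]
    omega acentric factor [-]
    """
    a = 0.42748 * R**2 * Tc**2 / Pc          # attraction parameter
    b = 0.08664 * R * Tc / Pc                # covolume
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc)))**2
    return R * T / (v - b) - a * alpha / (v * (v + b))

# Hydrogen, approximate critical data: Tc = 33.2 K, Pc = 1.30 MPa, omega = -0.22
p = srk_pressure(T=300.0, v=2.5e-4, Tc=33.2, Pc=1.30e6, omega=-0.22)
```

At this supercritical state the covolume term dominates, so the SRK pressure exceeds the ideal-gas value RT/v, which is the kind of real-gas deviation that motivates the departure-function correction of the caloric properties.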

Keywords: high pressure gas injection, hybrid solver, hydrogen injection, needle opening process, real-gas thermodynamics

Procedia PDF Downloads 459