Search results for: numerical modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6812

2162 A Crystal Plasticity Approach to Model Dynamic Strain Aging

Authors: Burak Bal, Demircan Canadinc

Abstract:

Dynamic strain aging (DSA), resulting from the reorientation of C-Mn clusters in the cores of dislocations, can provide a strain hardening mechanism. In addition, negative strain rate sensitivity is observed in Hadfield steel due to DSA. In our study, we incorporated dynamic strain aging into crystal plasticity computations to predict the local instabilities and the corresponding negative strain rate sensitivity. Specifically, the material response of Hadfield steel was obtained from monotonic and strain-rate jump experiments under tensile loading. The strain rate range was adjusted from 10⁻⁴ to 10⁻¹ s⁻¹. The crystal plasticity modeling of the material response was carried out based on a Voce-type hardening law, and the corresponding Voce hardening parameters were determined. The solute pinning effect of carbon atoms was incorporated into the crystal plasticity simulations at the microscale by computing the shear stress contribution imposed on an arrested dislocation by a carbon atom. After crystal plasticity simulations with the modified hardening rule, which takes the contribution of DSA into account, the model was seen to successfully predict both the role of DSA and the corresponding negative strain rate sensitivity.
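
As a rough illustration of the hardening rule described above, the sketch below combines a saturation-type Voce law with an added shear-stress term from carbon pinning of arrested dislocations (Cottrell-Bilby-type aging kinetics). The functional forms, parameter names, and values are illustrative assumptions, not the calibration identified in the study.

```python
# Hypothetical sketch: Voce hardening with an added DSA (solute-pinning) term.
# Parameter values are illustrative only, not those identified in the study.
import numpy as np

def voce_stress(gamma, tau0=120.0, tau_sat=450.0, theta0=900.0):
    """Voce-type slip resistance (MPa) as a function of accumulated shear strain."""
    return tau_sat - (tau_sat - tau0) * np.exp(-theta0 * gamma / (tau_sat - tau0))

def dsa_stress(strain_rate, omega=1e-4, t0=1.0, dtau_max=40.0):
    """Extra shear stress from carbon pinning of arrested dislocations.

    Waiting time t_w ~ omega / strain_rate; aging follows Cottrell-Bilby t^(2/3) kinetics.
    """
    t_w = omega / strain_rate
    return dtau_max * (1.0 - np.exp(-(t_w / t0) ** (2.0 / 3.0)))

gamma = np.linspace(0.0, 0.3, 7)
for rate in (1e-4, 1e-2, 1e-1):          # strain-rate range used in the study
    tau = voce_stress(gamma) + dsa_stress(rate)
    print(f"rate={rate:.0e} 1/s, flow stress at gamma=0.3: {tau[-1]:.1f} MPa")
```

Slower strain rates give the solute atoms more time to pin dislocations, so the added term grows as the rate decreases, which is the qualitative source of the negative strain rate sensitivity discussed above.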

Keywords: crystal plasticity, dynamic strain aging, Hadfield steel, negative strain rate sensitivity

Procedia PDF Downloads 256
2161 Determination of Inflow Performance Relationship for Naturally Fractured Reservoirs: Numerical Simulation Study

Authors: Melissa Ramirez, Mohammad Awal

Abstract:

The Inflow Performance Relationship (IPR) of a well is a relation between the oil production rate and the flowing bottom-hole pressure. This relationship is an important tool for petroleum engineers to understand and predict well performance. In the petroleum industry, IPR correlations are used to design and evaluate well completions, optimize well production, and design artificial lift. The most commonly used IPR correlation models are those of Vogel and Wiggins; these models are applicable to homogeneous and isotropic reservoir data. In this work, a new IPR model is developed to determine the inflow performance relationship of oil wells in a naturally fractured reservoir. A 3D black-oil reservoir simulator is used to develop the oil mobility function for the studied reservoir. Based on the simulation runs, four flow rates are used to record the oil saturation and calculate the relative permeability for the naturally fractured reservoir. The new method uses the result of a well test analysis along with permeability and pressure-volume-temperature data in the fluid flow equations to obtain the oil mobility function. Comparisons between the new method and two popular correlations for non-fractured reservoirs indicate the necessity of developing and using an IPR correlation specifically for fractured reservoirs.
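
For reference, the Vogel correlation mentioned above as a baseline for non-fractured reservoirs can be sketched as follows; the reservoir pressure and maximum rate are placeholder values, not data from this study.

```python
# Sketch of the Vogel IPR correlation used here as the non-fractured baseline.
# q_max and reservoir pressure are illustrative values, not data from the study.
import numpy as np

def vogel_rate(p_wf, p_res=3000.0, q_max=1500.0):
    """Oil rate (STB/day) vs flowing bottom-hole pressure via Vogel's correlation."""
    ratio = p_wf / p_res
    return q_max * (1.0 - 0.2 * ratio - 0.8 * ratio ** 2)

for p_wf in np.linspace(0.0, 3000.0, 5):
    print(f"p_wf = {p_wf:6.0f} psi  ->  q_o = {vogel_rate(p_wf):7.1f} STB/day")
```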

Keywords: inflow performance relationship, mobility function, naturally fractured reservoir, well test analysis

Procedia PDF Downloads 269
2160 Modeling Studies on the Elevated Temperatures Formability of Tube Ends Using RSM

Authors: M. J. Davidson, N. Selvaraj, L. Venugopal

Abstract:

The elevated-temperature expansion of thin-walled tube ends has been studied in the present work. The influence of the process parameters, namely the die angle, the die ratio, and the operating temperature, on the expansion of tube ends at elevated temperatures is investigated. The range of operating parameters was identified by performing extensive simulation studies, and the hot forming parameters were evaluated for AA2014 alloy in these simulations. An experimental matrix was developed from the feasible range obtained from the simulation results. Design of experiments is used for the optimization of the process parameters. Response Surface Methodology (RSM) with a Box-Behnken design (BBD) is used to develop the mathematical model for expansion. Analysis of variance (ANOVA) is used to analyze the influence of the process parameters on the expansion of tube ends. The effects of various process-parameter combinations on expansion are analyzed through graphical representations. The developed model is found to be appropriate, as the coefficient of determination is very high and equal to 0.9726. The predicted values are found to coincide well with the experimental results, within acceptable error limits.
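
A minimal sketch of how a second-order response-surface model of this kind can be fitted and its coefficient of determination computed is given below; the coded factor levels and responses are synthetic placeholders, not the Box-Behnken data of the study.

```python
# Minimal sketch of fitting a second-order response-surface model for expansion
# as a function of die angle, die ratio and temperature. The data below are
# synthetic placeholders; the study used a Box-Behnken experimental matrix.
import numpy as np

def quadratic_design_matrix(X):
    """Full second-order model: intercept, linear, interaction and squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1 * x2, x1 * x3, x2 * x3,
        x1 ** 2, x2 ** 2, x3 ** 2,
    ])

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(15, 3))            # coded factor levels
y = 1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 2] ** 2 + rng.normal(0, 0.02, 15)

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print("coefficients:", np.round(coef, 3))
print("R^2:", round(r2, 4))                          # study reports R^2 = 0.9726
```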

Keywords: expansion, optimization, Response Surface Method (RSM), ANOVA, Box-Behnken design (BBD), residuals, regression, tube

Procedia PDF Downloads 506
2159 Cerebrovascular Modeling: A Vessel Network Approach for Fluid Distribution

Authors: Karla E. Sanchez-Cazares, Kim H. Parker, Jennifer H. Tweedy

Abstract:

The purpose of this work is to develop a simple compartmental model of cerebral fluid balance including blood and cerebrospinal fluid (CSF). At the first level, the cerebral arteries and veins are modelled as bifurcating trees with constant scaling factors between generations, connected through a homogeneous microcirculation. The arteries and veins are assumed to be non-rigid, and the cross-sectional area, resistance, and mean pressure in each generation are determined as functions of the blood volume flow rate. From the mean pressure and further assumptions about the variation of wall permeability, the transmural fluid flux can be calculated. The results suggest the next level of modelling, where the cerebral vasculature is divided into compartments: the large arteries, the small arteries, the capillaries, and the veins, with effective compliances and permeabilities derived from the detailed vascular model. These vascular compartments are then linked to other compartments describing the different CSF spaces, the cerebral ventricles and the subarachnoid space. This compartmental model is used to calculate the distribution of fluid in the cranium. Known volumes and flows for normal conditions are used to determine reasonable parameters for the model, which can then be used to help understand pathological behaviour and suggest clinical interventions.
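
The sketch below illustrates the first modelling level only: a bifurcating tree with constant scaling factors, a Poiseuille-type resistance per vessel (a rigid-tube simplification of the non-rigid vessels used in the paper), and the mean pressure at each generation for a given volume flow rate. All dimensions and parameter values are placeholders.

```python
# Illustrative sketch of the first modelling level: a bifurcating arterial tree
# with constant scaling between generations, Poiseuille resistance per vessel,
# and the pressure at each generation for a given blood volume flow rate.
# Radii, lengths and the scaling factor are placeholder values.
import numpy as np

MU = 3.5e-3               # blood viscosity, Pa s
R0, L0 = 2.0e-3, 5.0e-2   # root vessel radius and length, m
ALPHA = 0.8               # radius/length scaling factor between generations
N_GEN = 10
Q_IN = 1.0e-5             # total volume flow rate entering the tree, m^3/s
P_IN = 1.2e4              # inlet mean pressure, Pa

def poiseuille_resistance(radius, length):
    return 8.0 * MU * length / (np.pi * radius ** 4)

pressure = P_IN
for gen in range(N_GEN):
    radius, length = R0 * ALPHA ** gen, L0 * ALPHA ** gen
    n_vessels = 2 ** gen                       # bifurcating tree
    q_vessel = Q_IN / n_vessels
    dp = poiseuille_resistance(radius, length) * q_vessel
    pressure -= dp
    print(f"generation {gen:2d}: {n_vessels:5d} vessels, mean pressure {pressure:8.1f} Pa")
```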

Keywords: cerebrovascular, compartmental model, CSF model, vascular network

Procedia PDF Downloads 271
2158 Sustainable Use of Fresh Groundwater Lens of Pleistocene Aquifer in Nam Dinh, Vietnam

Authors: Tran Thanh Le, Pham Trong Duc

Abstract:

The fresh groundwater lens of the Pleistocene aquifer in Nam Dinh was formed about 12,900 years ago. Currently, the Pleistocene aquifer is continuously exploited at an average of 154,163 m³/day, distributed mainly in the districts of Nghia Hung and Hai Hau and parts of Truc Ninh, Y Yen, Nam Truc, and Giao Thuy. The groundwater level is still on a declining trend, and saltwater intrusion into this freshwater lens can occur if the growth rate in exploitation is maintained. This study focused on sustainable groundwater use by means of four groups of criteria: groundwater quality and pollution; aquifer productivity and capacity; environmental impacts due to exploitation (groundwater level decline, land subsidence due to water exploitation); and social and economic impacts. Using a combination of methods, including field surveys, geophysics, hydrogeochemistry, isotope analysis, and numerical models, the safe groundwater exploitation threshold for the whole study area was determined to be 544,314 m³/day, and the actual exploitation is currently about 30% of this threshold. However, it should also be noted that the exploitation threshold and the ratio of current exploitation to that threshold are not the same in each locality. From this result, a groundwater exploitation threshold map of the study area was established to serve the management, licensing, and orientation of groundwater exploitation.

Keywords: criteria, groundwater, fresh groundwater lens, Pleistocene, Nam Dinh

Procedia PDF Downloads 154
2157 The Impact of Nurse-Physician Interprofessional Relationship on Nurses' Willingness to Engage in Leadership Roles: A Multilevel Modelling Approach

Authors: Sulaiman D. Al Sabei, Amy M. Ross, Christopher S. Lee

Abstract:

Nurse leaders play a fundamental role in transforming the healthcare system and improving the quality of patient care. Several healthcare organizations have called for an increase in the number of nurse leaders across all levels and in every practice setting. Identifying the factors that influence nurses' willingness to lead can inform healthcare leaders and policy makers of potentially illuminating strategies for establishing favorable work environments that motivate nurses to engage in leadership roles. The aim of this study was to investigate determinants of nurses' willingness to engage in future leadership roles. The study was conducted at a public hospital in the Sultanate of Oman. A total of 171 registered nurses participated, and multilevel modeling was conducted. Findings revealed that 80% of nurses were likely to seek out opportunities to engage in leadership roles. The quality of nurse-physician collegial relationships was a significant predictor of nurses' willingness to lead. Establishing a work-environment culture of positive nurse-physician relationships is critical to enhancing nurses' work attitudes and engaging them in leadership roles.

Keywords: interprofessional relationship, leadership, motivation, nurses

Procedia PDF Downloads 187
2156 A Bivariate Inverse Generalized Exponential Distribution and Its Applications in Dependent Competing Risks Model

Authors: Fatemah A. Alqallaf, Debasis Kundu

Abstract:

The aim of this paper is to introduce a bivariate inverse generalized exponential distribution which has a singular component. The proposed bivariate distribution can be used when the marginals have heavy-tailed distributions and non-monotone hazard functions. Due to the presence of the singular component, it can be used quite effectively when there are ties in the data. Since it has four parameters, it is a very flexible bivariate distribution, and it can be used quite effectively for analyzing various bivariate data sets. Several dependency properties and dependency measures have been obtained. The maximum likelihood estimators cannot be obtained in closed form, and their computation involves solving a four-dimensional optimization problem. To avoid that, we have proposed to use an EM algorithm, which involves solving only one non-linear equation at each 'E'-step. Hence, the implementation of the proposed EM algorithm is very straightforward in practice. Extensive simulation experiments and the analysis of one data set have been performed. We have observed that the proposed bivariate inverse generalized exponential distribution can be used for modeling dependent competing risks data. One data set has been analyzed to show the effectiveness of the proposed model.

Keywords: Block and Basu bivariate distributions, competing risks, EM algorithm, Marshall-Olkin bivariate exponential distribution, maximum likelihood estimators

Procedia PDF Downloads 137
2155 Interaction of Local, Flexural-Torsional, and Flexural Buckling in Cold-Formed Steel Lipped-Angle Compression Members

Authors: K. C. Kalam Aswathy, M. V. Anil Kumar

Abstract:

The possible failure modes of cold-formed steel (CFS) lipped angle (LA) compression members are yielding, local, flexural-torsional, or flexural buckling, and any possible interaction between these buckling modes. In general, the strength estimated by current design guidelines is conservative for these members when flexural-torsional buckling (FTB) is the first global buckling mode, as the post-buckling strength of this mode is not accounted for in the global buckling strength equations. The initial part of this paper reports the results of an experimental and numerical study of CFS-LA members undergoing independent FTB, and modifications to the global buckling strength equations are suggested based on these results. Subsequently, the reduction in ultimate strength from the strength corresponding to independent buckling modes is studied systematically, using finite element analysis results, for LA members undergoing interactions between buckling modes such as local-flexural torsional, flexural-flexural torsional, local-flexural, and local-flexural torsional-flexural. A simple and more accurate interaction equation that accounts for the above interactions between buckling modes in CFS-LA compression members is proposed.

Keywords: buckling interactions, cold-formed steel, flexural-torsional buckling, lipped angle

Procedia PDF Downloads 79
2154 Quantum Decision Making with Small Sample for Network Monitoring and Control

Authors: Tatsuya Otoshi, Masayuki Murata

Abstract:

With the development and diversification of applications on the Internet, applications that require high responsiveness, such as video streaming, are becoming mainstream. Application responsiveness is not only a matter of communication delay but also a matter of the time required to grasp changes in network conditions. The tradeoff between accuracy and measurement time is a challenge in network control. People make countless decisions all the time, and our decisions seem to resolve tradeoffs between time and accuracy. When making decisions, people are known to make appropriate choices based on relatively small samples. Although there have been various studies on models of human decision-making, a model that integrates various cognitive biases, called "quantum decision-making," has recently attracted much attention. However, the modeling of small samples has not been examined much so far. In this paper, we extend the model of quantum decision-making to model decision-making with a small sample. In the proposed model, the state is updated by value-based probability amplitude amplification. By analytically obtaining a lower bound on the number of samples required for decision-making, we show that decision-making with a small number of samples is feasible.
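
The value-based amplitude amplification step can be illustrated with the standard Grover-type success probability after k amplification rounds; the initial probability below is an arbitrary example, and the paper's state-update rule may differ from this plain form.

```python
# Illustrative sketch of amplitude amplification in the Grover sense:
# the probability of selecting the "good" option after k amplification rounds,
# starting from an initial success probability a. Values are illustrative.
import numpy as np

def success_probability(a, k):
    """P(success) after k amplitude-amplification iterations, initial probability a."""
    theta = np.arcsin(np.sqrt(a))
    return np.sin((2 * k + 1) * theta) ** 2

a = 0.1                                   # prior probability of the better choice
for k in range(6):
    print(f"iterations={k}: P(correct decision) = {success_probability(a, k):.3f}")

# The near-optimal number of rounds scales as O(1/sqrt(a)), the kind of square-root
# saving that motivates decision making from a small sample.
k_opt = int(np.floor(np.pi / (4 * np.arcsin(np.sqrt(a)))))
print("near-optimal number of rounds:", k_opt)
```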

Keywords: quantum decision making, small sample, MPEG-DASH, Grover's algorithm

Procedia PDF Downloads 74
2153 A Lean Manufacturing Profile of Practices in the Metallurgical Industry: A Methodology for Multivariate Analysis

Authors: M. Jonathan D. Morales, R. Ramón Silva

Abstract:

The purpose of this project is to carry out an analysis and determine the profile of actual lean manufacturing processes in the Metropolitan Area of Bucaramanga. Through the analysis of qualitative and quantitative variables, it was possible to establish how these manufacturers develop production practices that ensure their competitiveness and productivity in the market. In this study, a random sample of metallurgic and wrought iron companies was surveyed, after which a quantitative focus and analysis were used to formulate a qualitative methodology for measuring the level of lean manufacturing procedures in the industry. A qualitative evaluation was also carried out through a multivariate analysis using the Numerical Taxonomy System (NTSYS) program, which allows for the determination of lean manufacturing profiles. Through the results it was possible to observe how the companies in the sector are doing with respect to lean manufacturing practices, as well as to identify the level of management that these companies exhibit with respect to this topic. In addition, it was possible to ascertain that there is no one dominant profile in the sector when it comes to lean manufacturing. It was established that the companies in the metallurgic and wrought iron industry show low levels of lean manufacturing implementation. Each one carries out diverse actions that are insufficient to consolidate a sectoral strategy for developing a competitive advantage that would enable them to tie together a production strategy.

Keywords: production line management, metallurgic industry, lean manufacturing, productivity

Procedia PDF Downloads 454
2152 Developing a Mathematical Model for Trade-Off Analysis of New Green Products

Authors: M. R. Gholizadeh, N. Bhuiyan, M. Salari

Abstract:

In the near future, companies will be increasingly forced to shift their activities along a new road in order to decrease the harmful effects of their design, production, and after-life on our environment. Products must meet environmental standards not only to avoid penalties but also to ensure sustainability for future generations. However, the most important factor that companies will face is selecting a reasonable strategy to maximize their profit. Thus, companies need a precise forecast of their profit after the design stage, obtained through trade-off analysis. This paper is an attempt to introduce a mathematical model that considers the effective factors impacting total profit when products are designed for resource and energy efficiency or recyclability. The modification follows different strategies based on a Cost-Volume-Profit model. Here, the cost structure consists of recycling cost, development cost, ramp-up cost, production cost, and pollution cost. The model also shows the effect of implementing design for recyclability on the revenue structure, through the revenue from used parts and the revenue from recycled materials. A numerical example is used to evaluate the proposed model. The results show that fulfilling green product development not only reduces the environmental impact of products but also increases the company's profit in the long term.
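
A hypothetical sketch of the modified cost-volume-profit structure described above is given below: profit as a function of volume with the listed cost components and the extra revenues from used parts and recycled materials. All names and figures are placeholders rather than the paper's calibration.

```python
# Hypothetical sketch of the modified cost-volume-profit (C-V-P) structure:
# profit as a function of sales volume, with recycling/development/ramp-up/
# production/pollution costs and extra revenue from used parts and recycled
# materials. All names and figures are placeholders, not the paper's model.

def green_product_profit(volume,
                         price=100.0,
                         revenue_used_parts=4.0,         # per unit recovered
                         revenue_recycled_material=2.0,  # per unit recovered
                         unit_production_cost=55.0,
                         unit_recycling_cost=6.0,
                         unit_pollution_cost=3.0,
                         development_cost=500_000.0,
                         ramp_up_cost=150_000.0):
    revenue = volume * (price + revenue_used_parts + revenue_recycled_material)
    variable_cost = volume * (unit_production_cost + unit_recycling_cost
                              + unit_pollution_cost)
    fixed_cost = development_cost + ramp_up_cost
    return revenue - variable_cost - fixed_cost

for units in (10_000, 20_000, 50_000):
    print(f"{units:6d} units -> profit {green_product_profit(units):12,.0f}")
```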

Keywords: green product, design for environment, C-V-P model, trade-off analysis

Procedia PDF Downloads 311
2151 Experimental Investigation on the Mechanical Behaviour of Three-Leaf Masonry Walls under In-Plane Loading

Authors: Osama Amer, Yaser Abdel-Aty, Mohamed Abd El Hady

Abstract:

The present paper illustrates an experimental approach to provide understanding of the mechanical behavior and failure mechanisms of different typologies of unreinforced three-leaf masonry walls of the historical Islamic architectural heritage in Egypt. The main objective of this study is to investigate the propagation of possible cracking, the ultimate load, deformations, and failure mechanisms. Experimental data on interface-shear and compression tests on large-scale three-leaf masonry wallets are provided. The wallets were built mainly of Egyptian limestone and modified lime mortar; the external leaves were built of stone blocks, while the inner leaf was built of rubble limestone. Different loading conditions and core-layer dimensions for two types of collar joints (with and without shear keys) are considered in the tests. The mechanical properties of the constituent materials of the masonry were tested, and a database of characteristic properties was created. The results of the experiments highlight the properties, force-displacement curves, and stress distributions of multi-leaf masonry walls, contributing to the derivation of rational design rules and the validation of numerical models.

Keywords: masonry, three-leaf walls, mechanical behavior, testing, architectural heritage

Procedia PDF Downloads 285
2150 Radial Distribution Network Reliability Improvement by Using Imperialist Competitive Algorithm

Authors: Azim Khodadadi, Sahar Sadaat Vakili, Ebrahim Babaei

Abstract:

This study presents a numerical method to optimize the failure rate and repair time of a typical radial distribution system. Failure rate and repair time are effective parameters in the customer- and energy-based indices of reliability. Decreasing these parameters improves the reliability indices, and thus system stability is boosted. Penalty functions indirectly reflect the cost of the investment spent to improve these indices. Constraints on customer- and energy-based indices, i.e., SAIFI, SAIDI, CAIDI, and AENS, have been considered using a new method that reduces the number of controlling parameters of the optimization algorithm. The Imperialist Competitive Algorithm (ICA) is used as the main optimization technique, and particle swarm optimization (PSO), simulated annealing (SA), and differential evolution (DE) have been applied for further investigation. These algorithms have been implemented on a test system in MATLAB, and the obtained results have been compared with each other. The optimized values of repair time and failure rate are much lower than the current values, which reduces the investment cost; ICA also gives better answers than the other algorithms used.
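
For context, the customer- and energy-based indices that the optimization constrains can be computed from per-load-point failure rates and repair times as sketched below; the three load points are illustrative only.

```python
# Sketch of the reliability indices constrained in the optimization (SAIFI, SAIDI,
# CAIDI, AENS), computed from per-load-point failure rates and repair times.
# The three load points below are illustrative placeholders.
import numpy as np

failure_rate = np.array([0.20, 0.15, 0.25])     # failures / year at each load point
repair_time  = np.array([4.0, 6.0, 5.0])        # hours per failure
customers    = np.array([200, 150, 100])        # customers per load point
avg_load     = np.array([500.0, 400.0, 300.0])  # kW per load point

unavailability = failure_rate * repair_time     # hours / year

saifi = np.sum(failure_rate * customers) / np.sum(customers)
saidi = np.sum(unavailability * customers) / np.sum(customers)
caidi = saidi / saifi
aens  = np.sum(avg_load * unavailability) / np.sum(customers)

print(f"SAIFI = {saifi:.3f} int./cust.yr, SAIDI = {saidi:.3f} h/cust.yr")
print(f"CAIDI = {caidi:.3f} h/interruption, AENS = {aens:.2f} kWh/cust.yr")
```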

Keywords: imperialist competitive algorithm, failure rate, repair time, radial distribution network

Procedia PDF Downloads 661
2149 Application of Computer Aided Engineering Tools in Performance Prediction and Fault Detection of Mechanical Equipment of Mining Process Line

Authors: K. Jahani, J. Razavi

Abstract:

Nowadays, predictive maintenance is crucial to decrease the number of downtimes in industries such as metal mining, petroleum, and chemicals. In order to have efficient predictive maintenance, it is very important to know the performance of critical equipment of the production line, such as pumps and hydrocyclones, under variable operating parameters, to select the best indicators of this equipment's health, to choose the best locations for instrumentation, and to measure these indicators. In this paper, computer aided engineering (CAE) tools are implemented to study some important elements of a copper process line, namely slurry pumps and a hydrocyclone, to predict the performance of these components under different working conditions. These models and simulations can be used to predict, for example, the damage tolerance of the main shaft of the slurry pump, or the wear rate and location on the cyclone wall, pump casing, and impeller. The simulations can also suggest the best measuring parameters, measuring intervals, and their locations.

Keywords: computer aided engineering, predictive maintenance, fault detection, mining process line, slurry pump, hydrocyclone

Procedia PDF Downloads 399
2148 3D Human Body Reconstruction Based on Multiple Viewpoints

Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo

Abstract:

The aim of this study was to improve the quality of 3D human body reconstruction. The MvP algorithm was adopted to obtain key point information from multiple perspectives. This algorithm allows the capture of human posture and joint positions from multiple angles, providing more comprehensive and accurate data. The study also incorporated the SMPL-X model, which has been widely used for human body modeling, to achieve more accurate 3D reconstruction results. The use of the MvP algorithm made it possible to observe the reconstructed object from multiple angles, thus reducing the problems of blind spots and missing information. The algorithm effectively captures key point information, including the position and rotation angle of limbs, providing key data for the subsequent 3D reconstruction. Compared with traditional single-view methods, multi-view fusion significantly improves the accuracy and stability of the reconstruction. By combining the MvP algorithm with the SMPL-X model, we achieved better 3D human body reconstruction results. The SMPL-X model is highly scalable and can generate highly realistic 3D human body models, thus providing more detail and shape information.
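
As a simplified stand-in for the multi-view step (MvP itself regresses poses directly from multiple views), the sketch below triangulates a single 3D joint from its 2D detections in several calibrated cameras with the linear DLT method; the cameras and keypoints are synthetic, and SMPL-X fitting is not shown.

```python
# Sketch of the multi-view idea only: triangulating one 3D joint position from its
# 2D detections in several calibrated cameras via the linear (DLT) method.
# Camera matrices and keypoints are synthetic; MvP and SMPL-X are not shown.
import numpy as np

def triangulate(proj_matrices, points_2d):
    """Least-squares 3D point from N views: two rows per view, solved with SVD."""
    rows = []
    for P, (u, v) in zip(proj_matrices, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]

# Synthetic setup: cameras looking at a joint located at (0.1, 0.2, 3.0)
joint = np.array([0.1, 0.2, 3.0, 1.0])
proj_matrices, points_2d = [], []
for angle in np.deg2rad([0.0, 15.0, -20.0, 30.0]):
    R = np.array([[np.cos(angle), 0, np.sin(angle)],
                  [0, 1, 0],
                  [-np.sin(angle), 0, np.cos(angle)]])
    t = np.array([[0.0], [0.0], [0.5 * angle]])
    K = np.array([[1000.0, 0, 640.0], [0, 1000.0, 360.0], [0, 0, 1.0]])
    P = K @ np.hstack([R, t])
    uvw = P @ joint
    proj_matrices.append(P)
    points_2d.append(uvw[:2] / uvw[2])

print("triangulated joint:", np.round(triangulate(proj_matrices, points_2d), 3))
```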

Keywords: 3D human reconstruction, multi-view, joint point, SMPL-X

Procedia PDF Downloads 63
2147 A Dynamical Approach for Relating Energy Consumption to Hybrid Inventory Level in the Supply Chain

Authors: Benga Ebouele, Thomas Tengen

Abstract:

Due to long lead times, work-in-process (WIP) inventory can build up within the supply chain of most manufacturing systems. This implies that there are fewer finished goods on hand and more goods in process, because the work remains in the factory too long and cannot yet be sold to customers. The supply chain of most manufacturing systems is then considered inefficient, as it takes so much time to produce the finished goods. The time consumed in each operation of the supply chain has an associated energy cost. Such phenomena can be harmful for a hybrid inventory system, because a lot of space may be needed to store these semi-finished goods, and one is not sure about the final energy cost of producing, holding, and delivering the goods to customers. A principle that reduces the waste of energy within the supply chain of manufacturing firms should therefore be available to all inventory managers in pursuit of profitability. Decision making by inventory managers in this situation is a modeling process, whereby a dynamical approach is used to depict, examine, specify, and even operationalize the relationship between energy consumption and hybrid inventory level. The relationship between energy consumption and inventory level is established, and it indicates a poor level of control and hence a potential for energy savings.

Keywords: dynamic modelling, energy used, hybrid inventory, supply chain

Procedia PDF Downloads 262
2146 A Development of Portable Intrinsically Safe Explosion-Proof Type of Dual Gas Detector

Authors: Sangguk Ahn, Youngyu Kim, Jaheon Gu, Gyoutae Park

Abstract:

In this paper, we developed a dual gas leak instrument to detect hydrocarbon (HC) and carbon monoxide (CO) gases. For the two kinds of gases, it is necessary to design a compact sensor structure. It is also important to design the sensing circuits for measuring, amplifying, and filtering, and the instrument should then be well programmed with robust, systematic, and modular coding methods. Among these aspects, improving accuracy and initial response time is of vital importance. To manufacture a distinguished gas leak detector, we applied an intrinsically safe, explosion-proof structure to the lithium-ion battery, the main circuits, a motor-driven pump, the color LCD interface, and the sensing circuits. In software, to enhance measuring accuracy, we used numerical methods such as Lagrange and Neville interpolation. Performance tests were conducted using standard methane at seven different concentrations and compared with three other products. We aim to improve risk prevention and the efficiency of gas safety management by distributing the detector to the field of gas safety. Acknowledgment: This study was supported by the Small and Medium Business Administration under the research theme ‘Commercialized Development of a portable intrinsically safe explosion-proof type dual gas leak detector’ (task number S2456036).
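
Neville interpolation, one of the numerical methods mentioned above, can be sketched as follows for a hypothetical calibration table mapping raw sensor readings to gas concentration; the calibration points are placeholders.

```python
# Sketch of Neville's algorithm applied to a hypothetical calibration table that
# maps raw sensor readings (ADC counts) to gas concentration. The points below
# are placeholders, not the instrument's calibration data.
def neville(x_points, y_points, x):
    """Evaluate the interpolating polynomial at x using Neville's algorithm."""
    n = len(x_points)
    p = list(y_points)
    for level in range(1, n):
        for i in range(n - level):
            p[i] = ((x - x_points[i + level]) * p[i]
                    + (x_points[i] - x) * p[i + 1]) / (x_points[i] - x_points[i + level])
    return p[0]

# Hypothetical calibration: ADC counts -> methane concentration (%LEL)
adc_counts    = [120.0, 340.0, 560.0, 790.0, 1010.0]
concentration = [0.0, 10.0, 25.0, 50.0, 100.0]

print(neville(adc_counts, concentration, 450.0))   # interpolated concentration
```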

Keywords: gas leak, dual gas detector, intrinsically safe, explosion proof

Procedia PDF Downloads 226
2145 Two Dimensional Finite Element Model to Study Calcium Dynamics in Fibroblast Cell with Excess Buffer Approximation Involving ER Flux and SERCA Pump

Authors: Mansha Kotwani

Abstract:

Specific spatio-temporal calcium concentration patterns are required by fibroblasts to maintain their structure and functions; thus, the calcium concentration is regulated in the cell at different levels in its various activities. The variations in cytosolic calcium concentration largely depend on the buffers present in the cytosol and on the influx of calcium into the cytosol from the ER through IP3 receptors or ryanodine receptors, followed by the reuptake of calcium into the ER through the sarcoplasmic/endoplasmic reticulum ATPase (SERCA) pump. In order to understand the mechanisms of wound repair, tissue remodeling, and growth performed by fibroblasts, it is of crucial importance to understand the mechanisms of calcium concentration regulation in fibroblasts. In this paper, a model has been developed to study the calcium distribution in an NRK fibroblast in the presence of buffers and ER flux with the SERCA pump. The model has been developed for the two-dimensional unsteady-state case. Appropriate initial and boundary conditions have been framed in line with the physiology of the cell. The finite element technique has been employed to obtain the solution. The numerical results have been used to study the effect of buffers, ER flux, and source amplitude on the calcium distribution in the fibroblast cell.

Keywords: buffers, IP3R, ER flux, SERCA pump, source amplitude

Procedia PDF Downloads 239
2144 Adaptive Anchor Weighting for Improved Localization with Levenberg-Marquardt Optimization

Authors: Basak Can

Abstract:

This paper introduces an iterative, weighted localization method that utilizes a unique cost function formulation to significantly enhance the performance of positioning systems. The system employs locators, such as Gateways (GWs), to estimate and track the position of an End Node (EN). Performance is evaluated relative to the number of locators, whose known locations are determined through calibration. The performance evaluation is presented using low-cost, single-antenna Bluetooth Low Energy (BLE) devices. The proposed approach can be applied to alternative Internet of Things (IoT) modulation schemes, as well as to Ultra-WideBand (UWB) or millimeter-wave (mmWave) based devices. In non-line-of-sight (NLOS) scenarios, using four or eight locators yields a 95th-percentile localization performance of 2.2 meters and 1.5 meters, respectively, in a 4,305 square foot indoor area with BLE 5.1 devices. This method outperforms conventional RSSI-based techniques, achieving a 51% improvement with four locators and a 52% improvement with eight locators. Future work involves modeling the impact of interference and implementing data curation across multiple channels to mitigate such effects.
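
A minimal sketch of weighted lateration solved with the Levenberg-Marquardt algorithm (via SciPy's least_squares) is shown below; the gateway positions, ranges, and weights are synthetic, and the paper's actual cost-function formulation may differ from this plain weighted residual.

```python
# Minimal sketch of weighted lateration solved with Levenberg-Marquardt
# (scipy's least_squares). Gateway positions, ranges and weights are synthetic;
# the cost-function formulation in the paper may differ from this plain form.
import numpy as np
from scipy.optimize import least_squares

gateways = np.array([[0.0, 0.0], [20.0, 0.0], [20.0, 15.0], [0.0, 15.0]])  # m
true_pos = np.array([7.0, 6.0])

rng = np.random.default_rng(1)
ranges = np.linalg.norm(gateways - true_pos, axis=1) + rng.normal(0, 0.8, len(gateways))
weights = 1.0 / np.array([0.8, 0.8, 1.5, 1.5])     # e.g. trust some gateways more

def residuals(p):
    est = np.linalg.norm(gateways - p, axis=1)
    return weights * (est - ranges)

fit = least_squares(residuals, x0=np.array([10.0, 7.5]), method="lm")
print("estimated position:", np.round(fit.x, 2), " true:", true_pos)
```

In practice the range estimates would come from an RSSI path-loss model before being fed to the solver; here they are generated directly for brevity.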

Keywords: lateration, least squares, Levenberg-Marquardt algorithm, localization, path-loss, RMS error, RSSI, sensors, shadow fading, weighted localization

Procedia PDF Downloads 17
2143 One-Step Time Series Predictions with Recurrent Neural Networks

Authors: Vaidehi Iyer, Konstantin Borozdin

Abstract:

Time series prediction problems have many important practical applications but are notoriously difficult for statistical modeling. Recently, machine learning methods have attracted significant interest as a practical tool applied to a variety of problems, even though developments in this field tend to be semi-empirical. This paper explores the application of Long Short-Term Memory based Recurrent Neural Networks to the one-step prediction of time series for both trend and stochastic components. Two types of data are analyzed: daily stock prices, which are often considered a typical example of a random walk, and weather patterns dominated by seasonal variations. Results from both analyses are compared, and a reinforcement learning framework is used to select the more efficient of Recurrent Neural Networks and more traditional autoregression methods. It is shown that both methods are able to follow long-term trends and seasonal variations closely but have difficulties with reproducing day-to-day variability. Future research directions and potential real-world applications are briefly discussed.
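
A minimal one-step-ahead LSTM predictor of the kind described above is sketched below in PyTorch on a synthetic seasonal series; the network size, data, and training setup are illustrative assumptions, not those of the paper.

```python
# Minimal sketch of a one-step-ahead LSTM predictor trained on a synthetic
# seasonal series; everything below (window, sizes, data) is illustrative.
import numpy as np
import torch
import torch.nn as nn

# Synthetic "weather-like" series: seasonal signal plus noise
t = np.arange(1000, dtype=np.float32)
series = np.sin(2 * np.pi * t / 50) + 0.1 * np.random.randn(len(t)).astype(np.float32)

window = 20
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = torch.from_numpy(X).unsqueeze(-1)          # (samples, window, 1)
y = torch.from_numpy(y).unsqueeze(-1)

class OneStepLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])         # predict the next value

model = OneStepLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                           # short demo training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: MSE = {loss.item():.4f}")
```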

Keywords: long short term memory, prediction methods, recurrent neural networks, reinforcement learning

Procedia PDF Downloads 224
2142 Numerical Analysis and Influence of the Parameters on Slope Stability

Authors: Fahim Kahlouche, Alaoua Bouaicha, Sihem Chaîbeddra, Sid-Ali Rafa, Abdelhamid Benouali

Abstract:

Designing a structure sometimes requires its realization on rough or sloping ground. Besides the problem of the stability of the slope itself, the behavior of the foundations bearing the structure is influenced by the destabilizing effect of the ground's slope. This article focuses on the analysis of slope stability under loading, introducing the different factors influencing the slope's behavior on the one hand, and the influence of this slope on the foundation's behavior on the other hand. The study concerns elastoplastic modeling using FLAC 2D. This software is based on the finite difference method, one of the older methods for the numerical solution of systems of differential equations with initial and boundary conditions, and was developed for geotechnical simulation. The aim of this simulation is to demonstrate the notable effect of the shear modulus G, the cohesion C, the slope angle β, and the distance between the foundation and the head of the slope on the stability of the slope as well as the stability of the foundation. In our simulation, the slope consists of homogeneous ground. The foundation is considered rigid; therefore, the loading is applied as vertical forces on the nodes representing the contact between the foundation and the ground.

Keywords: slope, shallow foundation, numeric method, FLAC 2D

Procedia PDF Downloads 280
2141 Use of Multistage Transition Regression Models for Credit Card Income Prediction

Authors: Denys Osipenko, Jonathan Crook

Abstract:

Because of the variety of cardholders' behaviour types and income sources, each consumer account can move between a variety of states. A consumer account can be inactive, transactor, revolver, delinquent, or defaulted, and each state requires an individual model for income prediction. Estimating transition probabilities between statuses at the account level helps to avoid the memorylessness of the Markov chain approach. This paper investigates approaches to estimating transition probabilities for credit card income prediction at the account level. The key question of the empirical research is which approach gives more accurate results: multinomial logistic regression or multistage conditional logistic regression with a binary target. Both models have shown moderate predictive power. The prediction accuracy of the conditional logistic regression depends on the order of stages for the conditional binary logistic regressions. On the other hand, multinomial logistic regression is easier to use and gives integrated estimates for all states without prioritization. Thus, further investigations can be concentrated on alternative modeling approaches such as discrete choice models.
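
The two approaches being compared can be sketched on synthetic data as follows: a single multinomial logistic regression over account states versus a multistage conditional scheme built from binary logistic regressions; the features, states, and staging order are illustrative.

```python
# Sketch contrasting the two approaches on synthetic data:
# (a) one multinomial logistic regression over account states, and
# (b) a multistage conditional scheme of binary logistic regressions
#     (here: active vs inactive first, then revolver vs transactor among active).
# Features, states and the staging order are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 4))                       # behavioural features
states = rng.choice(["inactive", "transactor", "revolver"], size=3000,
                    p=[0.3, 0.4, 0.3])

# (a) multinomial: one model, probabilities for every state at once
multi = LogisticRegression(max_iter=1000).fit(X, states)

# (b) multistage conditional: P(state) = P(stage1) * P(stage2 | stage1)
stage1 = LogisticRegression(max_iter=1000).fit(X, (states != "inactive").astype(int))
active = states != "inactive"
stage2 = LogisticRegression(max_iter=1000).fit(X[active],
                                               (states[active] == "revolver").astype(int))

p_active = stage1.predict_proba(X)[:, 1]
p_revolver_staged = p_active * stage2.predict_proba(X)[:, 1]
p_revolver_multi = multi.predict_proba(X)[:, list(multi.classes_).index("revolver")]
print("mean P(revolver):", p_revolver_staged.mean(), "vs", p_revolver_multi.mean())
```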

Keywords: multinomial regression, conditional logistic regression, credit account state, transition probability

Procedia PDF Downloads 479
2140 Modeling of a Pendulum Test Including Skin and Muscles under Compression

Authors: M. J. Kang, Y. N. Jo, H. H. Yoo

Abstract:

Pendulum tests are used to identify the stretch reflex and diagnose spasticity, and some researchers have tried to build mathematical models to simulate the motions. Thighs are subject to compressive forces due to gravity during a pendulum test, which affects the knee trajectories; however, most studies on pendulum tests have not considered these conditions. We used the Kelvin-Voigt model as the compression model of skin and muscles. In this study, we investigated the viscoelastic behavior of skin and muscles using gelatin blocks, in experiments on the vibration of a compliantly supported beam. We then calculated the dynamic stiffness and loss factors from the experiment and estimated the damping coefficient of the model. We also performed pendulum tests of human lower limbs to validate the stiffness and damping coefficient of the skin model. To simulate the pendulum motion, we derive the equations of motion. We used a stretch reflex activation model to estimate the muscle forces induced by the stretch reflex. To validate the results, we compared the activation with electromyography signals recorded during the experiments. The compression behavior of skin and muscles studied here can be applied to analyze sitting posture as well as to develop surgical techniques.
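
A reduced sketch of the simulated pendulum is given below: a lower-leg pendulum with a Kelvin-Voigt (spring and dashpot in parallel) element representing the compressed tissue; segment properties and tissue parameters are placeholders, and the stretch-reflex muscle torque used in the study is omitted.

```python
# Illustrative sketch of a lower-leg pendulum with a Kelvin-Voigt (spring + dashpot
# in parallel) element representing compressed skin and muscle at the support.
# Segment and tissue parameters are placeholders; the stretch-reflex torque is omitted.
import numpy as np
from scipy.integrate import solve_ivp

m, L, g = 3.5, 0.25, 9.81            # shank mass (kg), distance to centre of mass (m)
I = m * L ** 2                       # point-mass approximation of inertia
k_tissue, c_tissue = 8.0, 0.6        # Kelvin-Voigt stiffness (N m/rad) and damping (N m s/rad)

def dynamics(t, state):
    theta, omega = state             # knee angle (rad) and angular velocity
    torque_gravity = -m * g * L * np.sin(theta)
    torque_tissue = -(k_tissue * theta + c_tissue * omega)   # Kelvin-Voigt element
    return [omega, (torque_gravity + torque_tissue) / I]

sol = solve_ivp(dynamics, (0.0, 5.0), [np.deg2rad(60.0), 0.0], max_step=0.01)
print("first few knee angles (deg):", np.rad2deg(sol.y[0][:5]).round(1))
```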

Keywords: Kelvin-Voigt model, pendulum test, skin and muscles under compression, stretch reflex

Procedia PDF Downloads 443
2139 A Survey of Recognizing of Daily Living Activities in Multi-User Smart Home Environments

Authors: Kulsoom S. Bughio, Naeem K. Janjua, Gordana Dermody, Leslie F. Sikos, Shamsul Islam

Abstract:

The advancement in information and communication technologies (ICT) and wireless sensor networks have played a pivotal role in the design and development of real-time healthcare solutions, mainly targeting the elderly living in health-assistive smart homes. Such smart homes are equipped with sensor technologies to detect and record activities of daily living (ADL). This survey reviews and evaluates existing approaches and techniques based on real-time sensor-based modeling and reasoning in single-user and multi-user environments. It classifies the approaches into three main categories: learning-based, knowledge-based, and hybrid, and evaluates how they handle temporal relations, granularity, and uncertainty. The survey also highlights open challenges across various disciplines (including computer and information sciences and health sciences) to encourage interdisciplinary research for the detection and recognition of ADLs and discusses future directions.

Keywords: daily living activities, smart homes, single-user environment, multi-user environment

Procedia PDF Downloads 136
2138 Nonparametric Path Analysis with a Truncated Spline Approach in Modeling Waste Management Behavior Patterns

Authors: Adji Achmad Rinaldo Fernandes, Usriatur Rohma

Abstract:

Nonparametric path analysis is a statistical method that does not rely on the assumption that the form of the regression curve is known. The purpose of this study is to determine the best truncated-spline nonparametric path function between linear and quadratic polynomial degrees with 1, 2, and 3 knot points, and to determine the significance of the estimates of the best truncated-spline nonparametric path function in the model of the effect of perceived benefits and perceived convenience on the behavior of converting waste into economic value, through the intention variable of changing people's mindset about waste, using the t-test statistic at the jackknife resampling stage. The data used in this study are primary data obtained from research grants. The results show that the best truncated-spline nonparametric path model is the quadratic polynomial with 3 knot points. In addition, the significance of the estimation of the best truncated-spline nonparametric path function using jackknife resampling shows that all exogenous variables have a significant influence on the endogenous variables.
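
A quadratic truncated-spline fit with three knots, of the kind selected as the best model here, can be sketched as follows; the knot locations and data are synthetic placeholders.

```python
# Sketch of a quadratic truncated-spline basis with three knots, fitted by ordinary
# least squares to synthetic data. Knot locations and the data-generating function
# are placeholders, not the study's survey data.
import numpy as np

def truncated_spline_basis(x, knots, degree=2):
    """Truncated power basis: 1, x, ..., x^degree, (x - k)_+^degree for each knot."""
    cols = [x ** d for d in range(degree + 1)]
    cols += [np.maximum(x - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 200))
y = np.sin(x) + 0.1 * rng.normal(size=x.size)        # unknown "true" curve

knots = [2.5, 5.0, 7.5]
B = truncated_spline_basis(x, knots)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
y_hat = B @ coef
print("R^2:", round(1 - np.var(y - y_hat) / np.var(y), 3))
```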

Keywords: nonparametric path analysis, truncated spline, linear, quadratic, behavior to turn waste into economic value, jackknife resampling

Procedia PDF Downloads 35
2137 Effect of Leaks in Solid Oxide Electrolysis Cells Tested for Durability under Co-Electrolysis Conditions

Authors: Megha Rao, Søren H. Jensen, Xiufu Sun, Anke Hagen, Mogens B. Mogensen

Abstract:

Solid oxide electrolysis cells have immense potential for converting CO2 and H2O into syngas during co-electrolysis operation. The produced syngas can be further converted into hydrocarbons; this kind of technology is called power-to-gas or power-to-liquid. To produce hydrocarbons via this route, the durability of the cells is still a challenge, which needs to be investigated further in order to improve the cells. In this work, various cells, with either a nickel-yttria-stabilized zirconia (Ni-YSZ) fuel-electrode support or a YSZ electrolyte support, a cerium gadolinium oxide (CGO) barrier layer, and an oxygen electrode, are investigated for durability under co-electrolysis conditions in both galvanostatic and potentiostatic operation. While changing the gas on the oxygen electrode and keeping the fuel electrode gas composition constant, a change in the gas concentration arc was observed by impedance spectroscopy. Measurements of the open circuit potential revealed the presence of leaks in the setup, and it is speculated that the change in concentration impedance may be related to these leaks. Furthermore, the cells were also tested under pressurized conditions to find the interplay between the leak rate and the pressure. Mathematical modeling, together with electrochemical and microscopy analyses, is presented.

Keywords: co-electrolysis, durability, leaks, gas concentration arc

Procedia PDF Downloads 140
2136 Heuristic Methods for the Capacitated Location-Allocation Problem with Stochastic Demand

Authors: Salinee Thumronglaohapun

Abstract:

The proper number and appropriate locations of service centers can save cost, raise revenue, and gain more satisfaction from customers. Establishing service centers is costly, and they are difficult to relocate. In long-term planning periods, several factors may affect the service; one of the most critical is the uncertain demand of customers. The opened service centers need to be capable of serving customers and making a profit even though the demand changes in each period. In this work, the capacitated location-allocation problem with stochastic demand is considered. A mathematical model is formulated to determine suitable locations of service centers and their allocation in order to maximize total profit over multiple planning periods. Two heuristic methods, a local search and a genetic algorithm, are used to solve this problem. For the local search, five different probabilities of choosing each type of move are applied. For the genetic algorithm, three different replacement strategies are considered. The results of applying each method to numerical examples are compared. Both methods reach the same best-found solution in most examples, but the genetic algorithm provides better solutions in some cases.
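
A toy sketch of the local-search heuristic is given below: a solution is a set of open centers, a move opens or closes one center, and customers are assigned greedily to the nearest open center with remaining capacity. The problem data, the greedy allocation, and the single move type are simplifications of the heuristics described above, and the stochastic-demand and multi-period aspects are omitted.

```python
# Toy local search for a capacitated location-allocation instance. A solution is a
# set of open centers; a move opens or closes one center; customers are assigned
# greedily to the nearest open center with remaining capacity. All data synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_customers = 8, 40
site_xy = rng.uniform(0, 100, (n_sites, 2))
cust_xy = rng.uniform(0, 100, (n_customers, 2))
demand = rng.integers(1, 5, n_customers)
capacity = np.full(n_sites, 30)
open_cost = np.full(n_sites, 120.0)
revenue_per_unit, cost_per_km_unit = 10.0, 0.5
dist = np.linalg.norm(cust_xy[:, None, :] - site_xy[None, :, :], axis=2)

def profit(open_mask):
    if not open_mask.any():
        return -np.inf
    remaining = capacity.astype(float).copy()
    total = -open_cost[open_mask].sum()
    for c in np.argsort(demand)[::-1]:                 # big customers first
        choices = np.where(open_mask & (remaining >= demand[c]))[0]
        if choices.size == 0:
            continue                                   # demand lost
        s = choices[np.argmin(dist[c, choices])]
        remaining[s] -= demand[c]
        total += demand[c] * (revenue_per_unit - cost_per_km_unit * dist[c, s])
    return total

current = rng.random(n_sites) < 0.5
best = profit(current)
improved = True
while improved:                                        # first-improvement local search
    improved = False
    for s in range(n_sites):
        trial = current.copy()
        trial[s] = ~trial[s]                           # open/close move
        val = profit(trial)
        if val > best:
            current, best, improved = trial, val, True
print("open centers:", np.where(current)[0], " profit:", round(best, 1))
```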

Keywords: location-allocation problem, stochastic demand, local search, genetic algorithm

Procedia PDF Downloads 119
2135 Control Flow around NACA 4415 Airfoil Using Slot and Injection

Authors: Imine Zakaria, Meftah Sidi Mohamed El Amine

Abstract:

One of the most vital aerodynamic organs of a flying machine is the wing, which allows it to fly in the air efficiently. The flow around the wing is very sensitive to changes in the angle of attack. Beyond a certain value, the boundary layer separates on the upper surface, which causes instability and a total degradation of aerodynamic performance, called stall. Controlling the flow around an airfoil has therefore become a major research concern in the aeronautics field. There are two techniques for controlling the flow around a wing to improve its aerodynamic performance: passive and active control. Blowing and suction are among the active techniques that control boundary layer separation around an airfoil. Their objective is to give energy to the air particles in the boundary layer separation zones and to create vortex structures that homogenize the velocity near the wall and allow control. Blowing and suction have long been used as flow control actuators around obstacles; in 1904, Prandtl applied permanent blowing to a cylinder to delay boundary layer separation. In the present study, several numerical investigations have been developed to predict the turbulent flow around an aerodynamic profile. A CFD code was used for several angles of attack in order to validate the present work against the literature in the case of a clean profile. The variation of the lift coefficient CL with the momentum coefficient is also presented.

Keywords: CFD, control flow, lift, slot

Procedia PDF Downloads 188
2134 Numerical and Experimental Studies on the Characteristic of the Air Distribution in the Wind-Box of a Circulating Fluidized Bed Boiler

Authors: Xiaozhou Liu, Guangyu Zhu, Yu Zhang, Hongwei Wu

Abstract:

The wind-box is one of the important components of a Circulating Fluidized Bed (CFB) boiler, and the uniformity of air flow in the wind-box is very important for the highly efficient operation of the boiler. A non-uniform air flow distribution within the wind-box can reduce the boiler's thermal efficiency, leading to higher energy consumption. An effective measure to solve this problem is to install an air flow distributing device in the wind-box. In order to validate the effectiveness of the air flow distributing device, visual and velocity-distribution-uniformity experiments were carried out under five different test conditions using a 1:64 scale model of a 220 t/hr CFB boiler. It was shown that the z-component of the flow velocity remains almost the same at the control cross-sections of the wind-box, with a maximum variation of less than 10%. Moreover, the same methodology was applied to a full-scale 220 t/hr CFB boiler. The hot test results show that the thermal efficiency of the boiler increased from 85.71% to 88.34% when tested with the air flow distributing device in place, which is equivalent to a saving of 5,000 tons of coal per year. The economic benefits of this energy-saving technology are therefore very significant, which clearly demonstrates that the technology is worth applying and popularizing.

Keywords: circulating fluidized bed, CFB, wind-box, air flow distributing device, visual experiment, velocity distribution uniformity experiment, hot test

Procedia PDF Downloads 172
2133 A Theorem Related to Sample Moments and Two Types of Moment-Based Density Estimates

Authors: Serge B. Provost

Abstract:

Numerous statistical inference and modeling methodologies are based on sample moments rather than the actual observations. A result justifying the validity of this approach is introduced. More specifically, it will be established that given the first n moments of a sample of size n, one can recover the original n sample points. This implies that a sample of size n and its first associated n moments contain precisely the same amount of information. However, it is efficient to make use of a limited number of initial moments as most of the relevant distributional information is included in them. Two types of density estimation techniques that rely on such moments will be discussed. The first one expresses a density estimate as the product of a suitable base density and a polynomial adjustment whose coefficients are determined by equating the moments of the density estimate to the sample moments. The second one assumes that the derivative of the logarithm of a density function can be represented as a rational function. This gives rise to a system of linear equations involving sample moments, the density estimate is then obtained by solving a differential equation. Unlike kernel density estimation, these methodologies are ideally suited to model ‘big data’ as they only require a limited number of moments, irrespective of the sample size. What is more, they produce simple closed form expressions that are amenable to algebraic manipulations. They also turn out to be more accurate as will be shown in several illustrative examples.
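
The first technique can be sketched as follows: a normal base density times a polynomial adjustment whose coefficients are found by equating the moments of the estimate to the sample moments; the base density and polynomial degree are choices made for illustration.

```python
# Sketch of the first moment-based technique: a base density times a polynomial
# adjustment, with coefficients obtained by equating the moments of the estimate to
# the sample moments. The normal base density and the degree are illustrative choices.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
sample = rng.gamma(shape=3.0, scale=1.0, size=500)      # data to be modelled
degree = 4

# Base density: normal matched to the sample mean and standard deviation
base = stats.norm(loc=sample.mean(), scale=sample.std(ddof=1))

# Grid wide enough to capture essentially all of the probability mass
x = np.linspace(sample.min() - 4 * sample.std(), sample.max() + 4 * sample.std(), 4000)
psi = base.pdf(x)

# Linear system: sum_j c_j * integral(x^(k+j) psi(x) dx) = k-th sample moment, k = 0..degree
sample_moments = np.array([np.mean(sample ** k) for k in range(degree + 1)])
base_moments = np.array([trapezoid(x ** i * psi, x) for i in range(2 * degree + 1)])
A = np.array([[base_moments[k + j] for j in range(degree + 1)] for k in range(degree + 1)])
coeffs = np.linalg.solve(A, sample_moments)

density_estimate = psi * sum(c * x ** j for j, c in enumerate(coeffs))
print("estimate integrates to:", round(trapezoid(density_estimate, x), 4))   # ~ 1
```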

Keywords: density estimation, log-density, polynomial adjustments, sample moments

Procedia PDF Downloads 161