Search results for: transcranial electrical simulation
4452 Aerodynamic Analysis by Computational Fluids Dynamics in Building: Case Study
Authors: Javier Navarro Garcia, Narciso Vazquez Carretero
Abstract:
Eurocode 1, Part 1-4 (wind actions) includes in its Article 1.5 the possibility of using numerical calculation methods to obtain information on the loads acting on a building. On the other hand, analysis using computational fluid dynamics (CFD) is already in widespread use in aerospace, aeronautical, and industrial applications. The application of CFD-based techniques to the study of the aerodynamic behavior of buildings now opens a whole alternative field of possibilities for civil engineering and architecture: optimization of the results with respect to those obtained by applying the regulations, the possibility of obtaining pressures and velocities at any point of the model at each instant, the analysis of turbulence, and the possibility of modeling any geometry or configuration. The present work compares the results obtained for the aerodynamic behavior of a building from a mathematical model based on CFD analysis with the results obtained by applying Eurocode 1, Part 1-4 (wind actions). It is verified that the results obtained by CFD techniques represent an optimization of the wind action acting on the building with respect to the wind action obtained by applying Eurocode 1, Part 1-4. In order to carry out this verification, a 45 m high truncated-pyramid building with a square base has been taken. The finite-volume CFD model has been calculated with the commercial application FLUENT, using a scale-resolving simulation (SRS) of the large eddy simulation (LES) type as the turbulence model, for an atmospheric boundary layer wind with a turbulent component in the direction of the flow.
Keywords: aerodynamics, CFD, computational fluid dynamics, computational mechanics
Procedia PDF Downloads 137
4451 Global Healthcare Village Based on Mobile Cloud Computing
Authors: Laleh Boroumand, Muhammad Shiraz, Abdullah Gani, Rashid Hafeez Khokhar
Abstract:
Cloud computing, the delivery of hardware and software as a service over a network, has applications in the area of health care. The emergency cases reported in most medical centers call for an efficient scheme that makes health data available with a short response time. To this end, we propose a mobile global healthcare village (MGHV) model that combines the components of three deployment models (country, continent, and global health clouds) to help solve the problem mentioned above. In the continent model, two data centers are created, one local and one global. The local center relays the requests of residents within the continent, whereas the global center relays the requests of others. With the methods adopted, the availability of relevant medical data to patients, specialists, and emergency staff is assured regardless of location and time. From our intensive simulation experiments, it was observed that a service broker policy optimized for response time yields very good performance in terms of response-time reduction. Our results remain comparable to others as the number of virtual machines increases (80-640 virtual machines), with the proportional increase in response time staying within 9%. The results obtained from our simulation experiments show that utilizing MGHV leads to a reduction in health care expenditures and helps address the problem of unqualified medical staff faced by both developed and developing countries.
Keywords: mobile cloud computing (MCC), e-healthcare, availability, response time, service broker policy
Procedia PDF Downloads 377
4450 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data
Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa
Abstract:
A generalized log-logistic distribution with variable hazard rate shapes was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and thereby providing greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study was carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed bathtub-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set, where it is compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other three-parameter parametric survival distributions, such as the exponentiated Weibull distribution, the three-parameter lognormal distribution, the three-parameter gamma distribution, the three-parameter Weibull distribution, and the three-parameter (shifted) log-logistic distribution. The proposed distribution provided a better fit than all of the competing distributions based on the goodness-of-fit tests, the log-likelihood, and the information criterion values. Finally, a Bayesian analysis is carried out and the performance of Gibbs sampling on the data set is assessed.
Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation
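As an illustration of the kind of hazard shapes involved, the short sketch below evaluates the hazard function of the standard (two-parameter) log-logistic distribution, which is the baseline the paper extends; the extra shape parameter of the generalized form is not reproduced here, and the parameter values are arbitrary examples, not values fitted in the study.

```python
import numpy as np

def loglogistic_hazard(t, alpha, beta):
    """Hazard of the standard log-logistic distribution.

    alpha: scale parameter, beta: shape parameter.
    h(t) = (beta/alpha) * (t/alpha)**(beta-1) / (1 + (t/alpha)**beta)
    """
    z = (t / alpha) ** beta
    return (beta / alpha) * (t / alpha) ** (beta - 1) / (1.0 + z)

t = np.linspace(0.01, 10, 200)
for beta in (0.8, 1.5, 3.0):   # beta <= 1 gives a decreasing hazard; beta > 1 gives a unimodal hazard
    h = loglogistic_hazard(t, alpha=2.0, beta=beta)
    print(f"beta={beta}: peak hazard ~ {h.max():.3f}")
```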
Procedia PDF Downloads 202
4449 Evaluation of the Gasification Process for the Generation of Syngas Using Solid Waste at the Autónoma de Colombia University
Authors: Yeraldin Galindo, Soraida Mora
Abstract:
Urban solid waste represents one of the largest sources of global environmental pollution due to the large quantities produced every day; the elimination of such waste is therefore a major problem for environmental authorities, who must look for alternatives that reduce the volume of waste, ideally with some energy recovery. At the Autónoma de Colombia University, approximately 423.27 kg/d of solid waste are generated, mainly paper, cardboard, and plastic. A large proportion of this solid waste ends up in the city's sanitary landfill, wasting its energy potential; this, added to the emissions generated by its collection and transport, results in an increase in atmospheric pollutants. One of the alternative processes used in recent years to generate electrical energy from solid waste such as paper, cardboard, plastic and, mainly, organic waste or biomass, replacing the use of fossil fuels, is gasification. This is a thermal conversion process of biomass whose objective is to generate a combustible gas as the result of a series of chemical reactions driven by the addition of heat and the reaction agents. This project was developed with the intention of giving an energy use to the waste (paper, cardboard, and plastic) produced inside the university, using it to generate a synthesis gas with a gasifier prototype. The gas produced was evaluated to determine its suitability for electricity generation or as a raw material for the chemical industry. In this process, air was used as the gasifying agent. The characterization of the synthesis gas was carried out by gas chromatography at the Chemical Engineering Laboratory of the National University of Colombia. Taking into account the results obtained, it was concluded that the gas generated is of acceptable quality in terms of the concentration of its components, but it is a gas of low calorific value. For this reason, the syngas generated in this project is not viable for the production of electrical energy, but rather for the production of methanol via the Fischer-Tropsch cycle.
Keywords: alternative energies, gasification, gasifying agent, solid urban waste, syngas
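To make the "low calorific value" conclusion concrete, the sketch below estimates the lower heating value of a syngas from its volumetric composition. The composition figures used here are purely hypothetical placeholders (the measured chromatography results are not reproduced in the abstract), while the per-component heating values are standard approximate literature figures.

```python
# Approximate lower heating values of combustible syngas components, MJ/Nm^3
LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}

# Hypothetical volume fractions (air-blown gasification dilutes the gas with N2)
composition = {"H2": 0.12, "CO": 0.18, "CH4": 0.02, "CO2": 0.12, "N2": 0.56}

# Weighted sum over the combustible components only
lhv_gas = sum(composition.get(k, 0.0) * v for k, v in LHV.items())
print(f"Estimated syngas LHV: {lhv_gas:.2f} MJ/Nm^3")  # a few MJ/Nm^3 is typical for air-blown gasifiers
```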
Procedia PDF Downloads 258
4448 Implementation of an Accessible State-Wide Trauma Education Program
Authors: Christine Lassen, Elizabeth Leonard, Matthew Oliver
Abstract:
The management of trauma is often complex, and outcomes depend on clinical expertise, effective teamwork, and a supported trauma system. A statewide trauma education program should be accessible to all clinicians who manage trauma, but this can be challenging due to diverse individual needs, trauma service needs, and geography. The NSW Institute of Trauma and Injury Management (ITIM) is a government-funded body responsible for coordinating and supporting the NSW Trauma System. The aim of this presentation is to describe how education initiatives have been implemented across the state. Simulation: In 2006, ITIM developed a Trauma Team Training Course aimed at educating clinicians on the technical and non-technical skills required to manage trauma. The course is now independently coordinated by trauma services across the state at major trauma centres as well as in regional and rural hospitals. ITIM is currently re-evaluating and updating the Trauma Team Training Course to allow for the development of new resources and simulation scenarios. Trauma Education Evenings: In 2013, ITIM supported major trauma services to develop trauma education evenings, which allowed the provision of free education to staff within the area health service and local area. The success of these local events expanded to regional hospitals. A total of 75 trauma education evenings have been conducted within NSW, with over 10,000 attendees. Web-Based Resources: Recently, ITIM commenced free live streaming of the trauma education evenings, which has now had over 3,000 live views. The Trauma App, developed in 2015, provides trauma clinicians with a centralised portal for trauma information and works on smartphones and tablets that integrate with the ITIM website. This supports pre-hospital and bedside clinical decisions and allows trauma care to be more standardised, evidence-based, timely, and appropriate. Online e-learning modules have been developed to assist clinicians, reduce unwarranted clinical variation, and provide up-to-date evidence-based education. The modules incorporate clinically focused education content with summative and formative assessments. Conclusion: Since 2005, ITIM has helped facilitate the development of trauma education programs for doctors, nurses, pre-hospital and allied health clinicians. ITIM has been actively involved in more than 100 specialised trauma education programs, seminars and clinical workshops, attended by over 12,000 staff. The provision of statewide trauma education is a challenging task requiring collaboration amongst numerous agencies working towards a common goal: to provide easily accessible trauma education.
Keywords: education, simulation, team-training, trauma
Procedia PDF Downloads 188
4447 Thermal Simulation for Urban Planning in Early Design Phases
Authors: Diego A. Romero Espinosa
Abstract:
Thermal simulations are used to evaluate the comfort and energy consumption of buildings. However, the performance of different urban forms cannot be assessed precisely when an environmental control system and user schedules must also be considered. The outcome of such an analysis would lead to conclusions that combine the building use, operation, services, envelope, orientation, and density of the urban fabric. The influence of these factors varies during the life cycle of a building. The orientation, as well as the surroundings, can be considered constant during the lifetime of a building. The structure affects the thermal inertia and has the longest lifespan of all the building components. The building envelope, on the other hand, is the most frequently renovated component of a building, since it has a great impact on energy performance and comfort. Building services have a shorter lifespan and are replaced regularly. With the purpose of addressing the performance of an urban form with a specific orientation and density, a thermal simulation method was developed. The solar irradiation is taken into consideration depending on the outdoor temperature: incoming irradiation at low temperatures has a positive impact, increasing the indoor temperature; consequently, overheating is the combination of high outdoor temperature and high irradiation at the façade. On this basis, the indoor temperature is simulated for a specific orientation of the evaluated urban form. Thermal inertia and building envelope performance are additionally considered through the materiality of the building. The results of the different thermal zones are summarized using the degree-day method for cooling and heating. During the early phase of a design process, such as a masterplan, conclusions regarding urban form, density and materiality can be drawn by means of this analysis.
Keywords: building envelope, density, masterplanning, urban form
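As a pointer to how the summary step works, the sketch below shows the basic degree-day aggregation for heating and cooling from a daily mean-temperature series; the base temperatures and the temperature values are illustrative assumptions, not data from the study.

```python
def degree_days(daily_mean_temps, base_heating=18.0, base_cooling=24.0):
    """Accumulate heating and cooling degree days from daily mean temperatures (°C)."""
    hdd = sum(max(0.0, base_heating - t) for t in daily_mean_temps)
    cdd = sum(max(0.0, t - base_cooling) for t in daily_mean_temps)
    return hdd, cdd

# Illustrative week of simulated mean temperatures for one thermal zone
temps = [12.5, 14.0, 16.8, 21.3, 26.1, 27.4, 19.0]
hdd, cdd = degree_days(temps)
print(f"Heating degree days: {hdd:.1f}, cooling degree days: {cdd:.1f}")
```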
Procedia PDF Downloads 145
4446 Numerical Studies on 2D and 3D Boundary Layer Blockage and External Flow Choking at Wing in Ground Effect
Authors: K. Dhanalakshmi, N. Deepak, E. Manikandan, S. Kanagaraj, M. Sulthan Ariff Rahman, P. Chilambarasan C. Abhimanyu, C. A. Akaash Emmanuel Raj, V. R. Sanal Kumar
Abstract:
In this paper, using a validated double-precision, density-based implicit standard k-ε model, detailed 2D and 3D numerical studies have been carried out to examine external flow choking at wing-in-ground (WIG) effect craft. The CFD code is calibrated using the exact solution based on the Sanal flow choking condition for adiabatic flows. We observed that, under identical WIG effect conditions, the numerically predicted 2D boundary layer blockage is significantly higher than in the 3D case and, as a result, the airfoil exhibited earlier external flow choking than the corresponding wing, which is corroborated by the exact solution. We concluded that, in lieu of the conventional 2D numerical simulation, it is invariably beneficial to perform a realistic 3D simulation of the wing in ground effect, which is analogous to, and captures the aspects of, a real-time parametric flow. We inferred that, under identical flying conditions, the chances of external flow choking at WIG effect are higher for a conventional aircraft than for an aircraft incorporating a divergent channel effect at the bottom surface of the fuselage, as proposed herein. We concluded that integrated geometry optimization of the fuselage and wings can improve the overall aerodynamic performance of WIG craft. This study is a pointer for designers and/or pilots to perceive the zone of danger a priori, due to the anticipated external flow choking at WIG effect craft, for safe flying in close proximity to the terrain and the dynamic surface of the sea.
Keywords: boundary layer blockage, chord dominated ground effect, external flow choking, WIG effect
Procedia PDF Downloads 271
4445 Assessing the Impact of Autonomous Vehicles on Supply Chain Performance – A Case Study of Agri-Food Supply Chain
Authors: Nitish Suvarna, Anjali Awasthi
Abstract:
In an era marked by rapid technological advancements, the integration of autonomous vehicles into supply chain networks represents a transformative shift, promising to redefine the paradigms of logistics and transportation. This thesis delves into a comprehensive assessment of the impact of autonomous vehicles on supply chain performance, with a particular focus on network design, operational efficiency, and environmental sustainability. Employing the advanced simulation capabilities of anyLogistix (ALX), the study constructs a digital twin of a conventional supply chain network, encompassing suppliers, production facilities, distribution centers, and customer endpoints. The research methodically integrates autonomous vehicles into this intricate network, aiming to unravel the multifaceted effects on transportation logistics, including transit times, cost-efficiency, and sustainability. Through simulations and scenario analysis, the study scrutinizes the operational resilience and adaptability of supply chains in the face of dynamic market conditions and disruptive technologies such as autonomous vehicles. Furthermore, the thesis undertakes a carbon footprint analysis, quantifying the environmental benefits and challenges associated with the adoption of autonomous vehicles in supply chain operations. The insights from this research are anticipated to offer a strategic framework for industry stakeholders, guiding the adoption of autonomous vehicles to foster a more efficient, responsive, and sustainable supply chain ecosystem. The findings aim to serve as a cornerstone for future research and practical implementations in the realm of intelligent transportation and supply chain management.
Keywords: autonomous vehicle, agri-food supply chain, ALX simulation, anyLogistix
Procedia PDF Downloads 75
4444 A Development of a Simulation Tool for Production Planning with Capacity-Booking at Specialty Store Retailer of Private Label Apparel Firms
Authors: Erika Yamaguchi, Sirawadee Arunyanrt, Shunichi Ohmori, Kazuho Yoshimoto
Abstract:
In this paper, we propose a simulation tool to support monthly production planning decisions that maximize the profit of specialty store retailer of private label apparel (SPA) firms. Most SPA firms are fabless and outsource production to the factories of their subcontractors. Every month, SPA firms book production lines and manpower at the factories. The booking is made a few months in advance, based on a demand prediction and the monthly production plan available at that time. However, the demand prediction is updated month by month, and the monthly production plan changes to meet the latest demand prediction. SPA firms then have to change the capacities initially booked, within a certain range, to suit the monthly production plan. This booking system is called "capacity-booking". Although making a precise monthly production plan is an important issue for SPA firms, many firms still plan production by empirical rules. In addition, it is also a challenge for SPA firms to match their products and factories while considering demand predictability and regulation ability. In this paper, we propose a model that considers these two issues. The objective is to maximize the total profit over a given period, that is, sales minus the costs of production, inventory, and capacity-booking penalties. To produce a better monthly production plan at SPA firms, the following points should be considered: demand predictability under random trends, the production plans of the months before and after the target month, and the regulation ability of the capacity booking. To match products and factories for outsourcing, it is important to consider the seasonality, volume, and predictability of each product, and the production capability, size, and regulation ability of each factory. SPA firms have to consider these constraints and place orders with several factories per product. We modeled these issues as a linear program. To validate the model, an example with several computational experiments for an SPA firm is presented (a toy formulation in the same spirit is sketched below). We assume four typical product groups: basic, seasonal (spring/summer), seasonal (fall/winter), and spot products. As a result of the experiments, a monthly production plan was obtained. In this plan, demand uncertainty from random trends is reduced by producing products of different types, and priority is given to producing high-margin products. In conclusion, we developed a simulation tool to support monthly production planning decisions, which is useful when the production plan is set every month. We considered the features of capacity booking and the matching of products and factories that have different features and conditions.
Keywords: capacity-booking, SPA, monthly production planning, linear programming
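The formulation itself is not given in the abstract, so the sketch below is only a toy linear program in the same spirit: it maximizes margin minus capacity-booking penalty costs for two hypothetical products over one month, using scipy; all coefficients, demands, bookings, and the 10% adjustment range are invented for illustration.

```python
from scipy.optimize import linprog

# Toy one-month plan for two hypothetical products A and B.
# Decision variables: x = [prod_A, prod_B, over_A, over_B]
# where over_* is production above the pre-booked capacity (allowed within a range,
# but charged a capacity-booking penalty).
margin = [12.0, 18.0]        # unit margin per product (invented)
penalty = 4.0                # penalty per unit produced above the booking

c = [-margin[0], -margin[1], penalty, penalty]     # linprog minimizes, so negate margins

booked = [500.0, 300.0]      # capacity booked months in advance
demand = [450.0, 320.0]      # latest demand prediction

# prod - over <= booked  (anything above the booking is counted as "over")
A_ub = [[1, 0, -1, 0],
        [0, 1, 0, -1]]
b_ub = booked

bounds = [(0, demand[0]), (0, demand[1]),                # cannot sell more than predicted demand
          (0, 0.1 * booked[0]), (0, 0.1 * booked[1])]    # booking adjustable by at most 10%

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("Production plan:", res.x[:2], "profit:", -res.fun)
```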
Procedia PDF Downloads 519
4443 Large Core Silica Few-Mode Optical Fibers with Reduced Differential Mode Delay and Enhanced Mode Effective Area over 'C'-Band
Authors: Anton V. Bourdine, Vladimir A. Burdin, Oleg R. Delmukhametov
Abstract:
This work presents a fast and simple method for the design of large-core silica optical fibers with differential mode delay (DMD) management. Some results are reported concerning refractive index profile optimization for a 42 µm core 16-LP-mode optical fiber for next-generation optical networks. Here, a special refractive index profile form provides total DMD reduction over the whole set of guided modes under the desired enhanced mode effective area. A method is proposed for simulating the 'real manufactured' few-mode optical fiber (FMF) core geometry, which differs from the desired optimized structure by core ellipticity (non-circularity) and refractive index profile deviations, including local fluctuations. The results of the subsequent analysis of the optimized FMF with these geometry distortions, performed with a previously developed modification of a rigorous mixed finite-element method, showed strong DMD degradation that requires additional higher-order mode management. In addition, this work also presents a method for designing a precise spatial positioning scheme for the mode division multiplexer channels at the FMF core end, which provides one potential solution to the described DMD degradation problem associated with the 'distorted' core geometry arising from optical fiber manufacturing techniques.
Keywords: differential mode delay, few-mode optical fibers, nonlinear Shannon limit, optical fiber non-circularity, 'real manufactured' optical fiber core geometry simulation, refractive index profile optimization
Procedia PDF Downloads 157
4442 Structural and Microstructural Analysis of White Etching Layer Formation by Electrical Arcing Induced on the Surface of Rail Track
Authors: Ali Ahmed Ali Al-Juboori, H. Zhu, D. Wexler, H. Li, C. Lu, J. McLeod, S. Pannila, J. Barnes
Abstract:
A number of studies have focused on the formation mechanics of the white etching layer (WEL) and its origin in railway operation. Until recently, the following hypotheses have been considered as the precise mechanics of WEL formation: (i) WELs are the result of a thermal process caused by wheel slip; (ii) WELs are mechanically induced by severe plastic deformation; (iii) WELs are caused by a combined thermo-mechanical process. The mechanisms discussed above lead to the occurrence of white etching layers in the wheel-rail contact area. This is because the contact patch, the active point of the wheel on the rail, is exposed to the highest shear stresses, which result in localised severe plastic deformation, and to the highest rate of heating, caused by wheel slip during excessive traction or braking effort. However, if the WELs are not on the running band area, this would suggest that there is another cause of WEL formation. In railway systems, particularly electrified railways, arcing has been occurring more often and regularly on the rails. In an electrified railway, the current is delivered to the train traction motor via contact wires and then returned to the station via the contact between the wheel and the rail. If the contact between the wheel and the rail is temporarily lost, due to dynamic vibration, entrapped dirt or water, lubricant effects or oxidation, a high current can jump through the gap and result in arcing. Other sources of arcing include the wheel passing over an insulated joint and lightning striking a train during bad weather. During arcing, extensive heat is generated and spread over a large area of the top surface of the rail. Thus, arcing is considered another heat source in the rail head (rather than wheel slip) that results in microstructural changes and white etching layer formation. A head-hardened (HH) rail steel, cut from a curved rail track, was used for the investigation. Samples were sectioned from a depth of 10 mm below the rail surface, where the material is considered to be still within the hardened layer but away from any microstructural changes in the top surface layer caused by train passage. These samples were subjected to electrical discharges using a gas tungsten arc welding (GTAW) machine. The arc current was controlled and moved along the sample surface in the direction of travel, as indicated by an arrow. Five different conditions were applied to the surface of the samples. Samples containing pre-existing WELs, taken from an ex-service rail surface, were also considered in this study for comparison. Both the simulated and the ex-service WELs were characterised by advanced methods including SEM, TEM, TKD, EDS, and XRD. Samples for TEM and TKD were prepared by focused ion beam (FIB) milling. The results showed that the WELs simulated by electrical arcing and the ex-service WELs comprise a similar microstructure. A brown etching layer was found together with the WELs, likely induced by a concurrent tempering process. This study provides a clear understanding of a new formation mechanism of WELs, which contributes to track maintenance procedures.
Keywords: white etching layer, arcing, brown etching layer, material characterisation
Procedia PDF Downloads 121
4441 Process Development of pVAX1/lacZ Plasmid DNA Purification Using Design of Experiment
Authors: Asavasereerat K., Teacharsripaitoon T., Tungyingyong P., Charupongrat S., Noppiboon S. Hochareon L., Kitsuban P.
Abstract:
The third generation of vaccines is based on gene therapy, in which DNA is introduced into patients. The antigenic or therapeutic proteins encoded by the transgene DNA trigger an immune response to counteract various diseases. Moreover, DNA vaccines offer customizable protection and treatment with high stability, so the production of DNA vaccines has become of interest. According to USFDA guidance for industry, the recommended limits for impurities from the host cell are lower than 1%, and the homogeneity of the active conformation, supercoiled DNA, should be more than 80%. Thus, a purification strategy using two-step chromatography has been established and verified for its robustness. Herein, pVAX1/lacZ, a pre-approved USFDA DNA vaccine backbone, was used and transformed into E. coli strain DH5α. Three purification process parameters, namely the sample-loading flow rate and the salt concentrations in the washing and eluting buffers, were studied, and the experiment was designed using the response surface method with a central composite face-centered (CCF) model. The designed range of the selected parameters was a 10% variation from the optimized set point, as a safety factor. The purity, expressed as the percentage of the supercoiled conformation obtained from each chromatography step (AIEX and HIC), was analyzed by HPLC. The response data were used to establish a regression model, which was statistically analyzed and followed by Monte Carlo simulation using SAS JMP. The purities of the product obtained from AIEX and HIC are between 89.4 and 92.5% and between 88.3 and 100.0%, respectively. The Monte Carlo simulation showed that the pVAX1/lacZ purification process is robust, with 0.90 confidence intervals of 90.18-91.00% and 95.88-100.00% for AIEX and HIC, respectively.
Keywords: AIEX, DNA vaccine, HIC, purification, response surface method, robustness
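The sketch below illustrates the general Monte Carlo robustness check described above (performed in the study with SAS JMP): process parameters are sampled within ±10% of their set points and propagated through a fitted response model to obtain a purity interval. The response-surface coefficients and set points here are invented placeholders, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fitted response surface: purity (%) as a function of coded
# loading flow rate (x1), wash salt concentration (x2), and elution salt concentration (x3).
def purity_model(x1, x2, x3):
    return 91.0 - 1.2 * x1 + 0.8 * x2 + 0.5 * x3 - 0.6 * x1 * x2 - 0.9 * x1 ** 2

# Sample each coded parameter uniformly within +/-10% of its set point (coded 0)
n = 100_000
x = rng.uniform(-0.1, 0.1, size=(n, 3))
purity = purity_model(x[:, 0], x[:, 1], x[:, 2])

lo, hi = np.percentile(purity, [5, 95])   # central 90% interval
print(f"Simulated 90% purity interval: {lo:.2f}% - {hi:.2f}%")
```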
Procedia PDF Downloads 208
4440 Geostatistical Simulation of Carcinogenic Industrial Effluent on the Irrigated Soil and Groundwater, District Sheikhupura, Pakistan
Authors: Asma Shaheen, Javed Iqbal
Abstract:
Water resources are being depleted due to the intrusion of industrial pollution. Clusters of industries, including leather tanning, textiles, batteries, and chemicals, are causing contamination. These industries use bulk quantities of water and discharge it with toxic effluents. The penetration of heavy metals through irrigation with industrial effluent has a toxic effect on soil and groundwater. There was a strong, significant positive correlation between all the heavy metals in the three media of industrial effluent, soil, and groundwater (P < 0.001). The metal-to-metal associations were supported by dendrograms from cluster analysis. The geospatial variability was assessed using geographically weighted regression (GWR) and a pollution model to identify the simulation of carcinogenic elements in soil and groundwater. Principal component analysis identified the metal sources: 48.8% of the variation, in factor 1, has significant loadings for sodium (Na), calcium (Ca), magnesium (Mg), iron (Fe), chromium (Cr), nickel (Ni), lead (Pb) and zinc (Zn), associated with tannery-effluent-based processes. In soil and groundwater, the metals have significant loadings in factor 1, representing more than half of the total variation (51.3% and 53.6%, respectively), which shows that the pollutants in soil and water are driven by industrial effluent. The cumulative eigenvalues for the three media were also found to be greater than 1, representing significant clustering of related heavy metals. The results showed that industrial processes are seeping toxic trace heavy metals into the soil and groundwater. These poisonous heavy-metal pollutants have turned fresh groundwater resources into unusable water, and the availability of fresh water for irrigation and domestic use is becoming alarming.
Keywords: groundwater, geostatistical, heavy metals, industrial effluent
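For readers unfamiliar with the source-identification step, the sketch below shows the basic principal component analysis workflow on a table of metal concentrations (standardize, fit PCA, inspect explained variance and factor-1 weights) using scikit-learn; the small concentration matrix is random placeholder data, not measurements from the study.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

metals = ["Na", "Ca", "Mg", "Fe", "Cr", "Ni", "Pb", "Zn"]
rng = np.random.default_rng(1)
X = rng.lognormal(mean=1.0, sigma=0.5, size=(30, len(metals)))   # placeholder concentrations, 30 samples

X_std = StandardScaler().fit_transform(X)          # PCA is normally run on standardized data
pca = PCA(n_components=3).fit(X_std)

print("Explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
for metal, weight in zip(metals, pca.components_[0]):
    print(f"Factor 1 weight for {metal}: {weight:+.2f}")
```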
Procedia PDF Downloads 229
4439 Estimating Interdependence of Social Statuses in a Cooperative Breeding Birds through Mathematical Modelling
Authors: Sinchan Ghosh, Fahad Al Basir, Santanu Ray, Sabyasachi Bhattacharya
Abstract:
Cooperatively breeding birds have two major ranks for sexually mature individuals. The breeders mate and produce offspring, while the non-breeding helpers increase the chick production rate by helping with mate finding and allo-parenting. The chicks, in turn, also cooperate to raise their younger siblings through warming, defending, and food sharing. Although the existing literature describes the evolution of allo-parenting in birds, it does not separately differentiate the significance of allo-parenting by sexually immature and mature helpers. This study addresses the significance of the contributions of both immature and mature helpers to the total sustainable bird population in a breeding site, using the blue-tailed bee-eater as a test-bed species. To serve this purpose, a mathematical model has been built that treats each social status and the chicks as separate but interacting compartments. Also, to observe the dynamics of each social status under changing prey abundance, a prey population has been introduced as an additional compartment. The model was analyzed for its stability conditions and was validated using field data. A simulation experiment was then performed to observe the change in the equilibria with varying helping rates from both types of helpers. The results of the simulation experiment suggest that the cooperatively breeding population changes its population size significantly with a change in the helping rate of the sexually immature helpers. The mature helpers, on the other hand, do not contribute to the stability of the population equilibrium as much as the immature helpers do.
Keywords: blue-tailed bee-eater, altruism, mathematical ethology, behavioural modelling
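The abstract does not reproduce the model equations, so the sketch below is only a generic compartment ODE of the kind described (breeders, mature helpers, immature chicks, prey), integrated with scipy; the functional forms and rate constants are hypothetical stand-ins, intended only to show the simulation-experiment pattern of varying a helping rate and inspecting the resulting equilibrium.

```python
import numpy as np
from scipy.integrate import solve_ivp

def bird_model(t, y, help_rate_immature, help_rate_mature):
    """Toy compartments: B breeders, H mature helpers, C chicks/immature, P prey."""
    B, H, C, P = y
    recruitment = 0.05 * C                      # chicks maturing into breeders/helpers
    reproduction = 0.4 * B * P / (1.0 + P) * (1.0 + help_rate_mature * H + help_rate_immature * C)
    dB = 0.6 * recruitment - 0.1 * B
    dH = 0.4 * recruitment - 0.15 * H
    dC = reproduction - recruitment - 0.2 * C
    dP = 0.8 * P * (1.0 - P / 10.0) - 0.05 * P * (B + H + C)   # logistic prey with predation
    return [dB, dH, dC, dP]

for h_imm in (0.0, 0.2, 0.4):                   # vary the immature helping rate
    sol = solve_ivp(bird_model, (0, 500), [5, 2, 10, 8], args=(h_imm, 0.1))
    print(f"helping rate (immature) = {h_imm}: final state ~", np.round(sol.y[:, -1], 2))
```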
Procedia PDF Downloads 162
4438 Computational Investigation of Secondary Flow Losses in Linear Turbine Cascade by Modified Leading Edge Fence
Authors: K. N. Kiran, S. Anish
Abstract:
It is well known that secondary flow losses account for about one third of the total loss in any axial turbine. Modern gas turbine blades have smaller heights and longer chord lengths, which may lead to increased secondary flow. In order to improve the efficiency of the turbine, it is important to understand the behavior of secondary flow and to devise mechanisms to curtail these losses. The objective of the present work is to understand the effect of a streamwise end-wall fence on the aerodynamics of a linear turbine cascade. The study is carried out computationally using the commercial software ANSYS CFX. The effect of the end-wall fence on the flow field is calculated with RANS simulations using the SST transition turbulence model. The Durham cascade, which is similar to a high-pressure axial flow turbine, is used for the simulation. The aim of the fence in the blade passage is to obtain the maximum benefit from flow deviation and to destroy the passage vortex, in terms of loss reduction. It is observed that, for the present analysis, the fence in the blade passage helps reduce the strength of the horseshoe vortex and is capable of restraining the flow along the blade passage. The fence in the blade passage helps reduce the underturning by 7° in comparison with the base case. The fence on the end-wall is effective in preventing the movement of the pressure-side leg of the horseshoe vortex and helps break up the passage vortex. Computations are carried out for different fence heights, with a curvature different from the blade camber. The optimum fence geometry and location reduce the loss coefficient by 15.6% in comparison with the base case.
Keywords: boundary layer fence, horseshoe vortex, linear cascade, passage vortex, secondary flow
Procedia PDF Downloads 349
4437 Buffer Allocation and Traffic Shaping Policies Implemented in Routers Based on a New Adaptive Intelligent Multi Agent Approach
Authors: M. Taheri Tehrani, H. Ajorloo
Abstract:
In this paper, an intelligent multi-agent framework is developed for each router, in which agents have two vital functionalities, traffic shaping and buffer allocation, and are positioned at the ports of the routers. With the traffic-shaping functionality, agents shape the forwarded traffic by dynamic, real-time allocation of the token generation rate in a token bucket algorithm; with the buffer-allocation functionality, agents share their buffer capacity with each other based on their needs and the conditions of the network. This dynamic and intelligent framework gives some ports the opportunity to work better under bursty and busier conditions. The agents act intelligently based on a reinforcement learning (RL) algorithm and consider the effective parameters in their decision process. Since RL is limited in how many parameters it can consider in its decision process, due to the volume of calculations, we utilize our novel method, which applies principal component analysis (PCA) to the RL inputs and gives the algorithm the ability to consider as many parameters as needed in a high-dimensional decision process. When this implementation is compared to our previous work, where traffic shaping was done without any sharing or dynamic allocation of buffer size for each port, lower packet drop rates can be seen across the whole network, especially at the source routers. These methods are implemented in our previously proposed intelligent simulation environment to allow a better comparison of the performance metrics. The results obtained from this simulation environment show an efficient and dynamic utilization of resources in terms of bandwidth and the buffer capacities pre-allocated to each port.
Keywords: principal component analysis, reinforcement learning, buffer allocation, multi-agent systems
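As background for the shaping functionality, the sketch below implements a plain token bucket whose fill rate can be changed at run time, which is the knob the agents are described as tuning; the agent logic itself (RL with PCA-reduced state) is not reproduced here, and the rates and capacities are arbitrary example values.

```python
import time

class TokenBucket:
    """Token bucket shaper with an adjustable token generation rate."""

    def __init__(self, rate_tokens_per_s: float, capacity: float):
        self.rate = rate_tokens_per_s
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def set_rate(self, rate_tokens_per_s: float) -> None:
        """A per-port agent would call this to reshape traffic dynamically."""
        self._refill()
        self.rate = rate_tokens_per_s

    def allow(self, packet_size: float) -> bool:
        """Return True if the packet may be forwarded now, False if it must wait or be dropped."""
        self._refill()
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

bucket = TokenBucket(rate_tokens_per_s=1000.0, capacity=1500.0)
print(bucket.allow(1200.0))   # True: enough tokens accumulated
print(bucket.allow(1200.0))   # False: bucket not yet refilled
bucket.set_rate(5000.0)       # e.g. an agent relaxing the shaping under light load
```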
Procedia PDF Downloads 518
4436 Optimization of SOL-Gel Copper Oxide Layers for Field-Effect Transistors
Authors: Tomas Vincze, Michal Micjan, Milan Pavuk, Martin Weis
Abstract:
In recent years, alternative materials have been gaining attention as replacements for polycrystalline and amorphous silicon, which are the standard for low-requirement devices, where silicon is unnecessary and high in cost. For that reason, metal oxides are envisioned as the new materials for low-requirement applications such as sensors, solar cells, energy storage devices, and field-effect transistors. The most common way of growing their layers is sputtering; however, this is a high-cost fabrication method, and a more industry-suitable alternative is the sol-gel method. In this group of materials, many oxides exhibit semiconductor-like behavior with mobility sufficiently high for transistor applications. The sol-gel method is a cost-effective deposition technique for semiconductor-based devices. Copper oxides, as p-type semiconductors with free charge mobility up to 1 cm²/Vs, are suitable replacements for poly-Si or a-Si:H devices. However, to reach the potential of silicon devices, fine-tuning of the material properties is needed. Here we focus on the optimization of the electrical parameters of copper oxide-based field-effect transistors by modifying the precursor solvent (usually 2-methoxyethanol). To achieve good solubility and high-quality films, a better solvent is required; however, since almost no solvents combine a high dielectric constant with a high boiling point, an alternative approach was proposed with blended solvents. By mixing isopropyl alcohol (IPA) and 2-methoxyethanol (2ME), the precursor reached better solubility. The quality of the layers fabricated using the mixed solutions was evaluated according to their surface morphology and electrical properties. The IPA:2ME solvent mixture reached optimum results at a weight ratio of 1:3, for which the cupric oxide layers had the highest crystallinity and the highest effective charge mobility.
Keywords: copper oxide, field-effect transistor, semiconductor, sol-gel method
Procedia PDF Downloads 135
4435 Global Solar Irradiance: Data Imputation to Analyze Complementarity Studies of Energy in Colombia
Authors: Jeisson A. Estrella, Laura C. Herrera, Cristian A. Arenas
Abstract:
The Colombian electricity sector has been transformed by the insertion of new energy sources for electricity generation, one of them being solar energy, which is being promoted by companies interested in photovoltaic technology. The study of this technology is important for electricity generation in general and for the planning of the sector from the perspective of energy complementarity. It is precisely within this last approach that the project is located: we are interested in answering concerns about the reliability of the electrical system when climatic phenomena such as El Niño occur, and in defining whether it is viable to replace or expand thermoelectric plants with renewable electricity generation systems. In this regard, some difficulties related to the basic information on renewable energy sources derived from measured data must first be solved, since these data come from automatic weather stations administered by the Institute of Hydrology, Meteorology and Environmental Studies (IDEAM) and, over the study period (2005-2019), have significant amounts of missing data. For this reason, the overall objective of the project is to complete the global solar irradiance datasets in order to obtain time series that will allow energy complementarity analyses to be developed in a subsequent project. The filling of the databases will be done through numerical and statistical methods, which are basic techniques for undergraduate students in technical areas who are starting out as researchers.
Keywords: time series, global solar irradiance, imputed data, energy complementarity
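The abstract only states that the gaps will be filled with numerical and statistical methods, so the sketch below shows one minimal, commonly used option: time-based linear interpolation of an irradiance series with pandas. The example series, the gap-length limit, and the choice of interpolation method are illustrative assumptions, not the project's actual procedure.

```python
import pandas as pd
import numpy as np

# Hypothetical daily global solar irradiance series (kWh/m^2/day) with gaps
idx = pd.date_range("2005-01-01", periods=10, freq="D")
values = [4.8, 5.1, np.nan, np.nan, 4.2, 4.6, np.nan, 5.0, 5.3, 4.9]
irradiance = pd.Series(values, index=idx)

# Time-aware linear interpolation; the limit avoids blindly bridging very long gaps
filled = irradiance.interpolate(method="time", limit=3)
print(filled)
```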
Procedia PDF Downloads 71
4434 Discussion on Microstructural Changes Caused by Deposition Temperature of LZO Doped Mg Piezoelectric Films
Authors: Cheng-Ying Li, Sheng-Yuan Chu
Abstract:
In this article, LZO-doped Mg piezoelectric thin films were deposited via RF sputtering, and their microstructure and electrical characteristics were observed while varying the deposition temperature. The XRD analysis indicates that the LZO-doped Mg films exhibit excellent (002) orientation with no presence of ZnO (100). Influenced by the effect of temperature on the lattice constant, the (002) peak intensity increases with rising temperature. Finally, deformation intensity analysis of the films reveals a more than fourfold increase in deformation at a processing temperature of 500°C.
Keywords: RF sputtering, piezoelectricity, ZnO, Mg
Procedia PDF Downloads 42
4433 Microsimulation of Potential Crashes as a Road Safety Indicator
Authors: Vittorio Astarita, Giuseppe Guido, Vincenzo Pasquale Giofre, Alessandro Vitale
Abstract:
Traffic microsimulation has been used extensively to evaluate the consequences of different traffic planning and control policies in terms of travel time delays, queues, pollutant emissions, and every other commonly measured performance indicator, while traffic safety has not been considered in common traffic microsimulation packages as a measure of performance for different traffic scenarios. Vehicle conflict techniques, introduced for intersections in the early traffic research carried out at the General Motors laboratory in the USA and in the Swedish traffic conflict manual, have been applied to vehicle trajectories simulated in microscopic traffic simulators. The concept is that microsimulation can be used as a basis for calculating the number of conflicts that define the safety level of a traffic scenario. This allows engineers to identify unsafe road traffic maneuvers and helps in finding the right countermeasures to improve safety. Unfortunately, most commonly used indicators do not consider conflicts between single vehicles and roadside obstacles and barriers, even though a great number of vehicle crashes involve roadside objects or obstacles; only some recently proposed indicators have tried to address this issue. This paper introduces a new procedure, based on the simulation of potential crash events, for the evaluation of safety levels in microsimulated traffic scenarios, which also takes into account potential crashes with roadside objects and barriers. The procedure can be used to define new conflict indicators. The proposed simulation procedure generates, through random perturbation of the vehicle trajectories, a set of potential crashes that can be evaluated accurately in terms of DeltaV, the energy of the impact, and/or the expected number of injuries or casualties. The procedure can also be applied to real trajectories, giving rise to new surrogate safety performance indicators, which can be considered "simulation-based". The methodology and a specific safety performance indicator are described and applied to a simulated test traffic scenario. The results indicate that the procedure is able to evaluate safety levels both at the intersection level and in the presence of roadside obstacles. The procedure produces results that are expressed in the same unit of measure for both vehicle-to-vehicle and vehicle-to-roadside-object conflicts. The total energy per square meter of all generated crashes can be computed and is shown on the map for the test network, after the application of a threshold, to highlight the most dangerous points. Without any detailed calibration of the microsimulation model and without any calibration of the parameters of the procedure (standard values were used), it is possible to identify dangerous points. A preliminary sensitivity analysis has shown that the results do not depend strongly on the energy thresholds or on the other parameters of the procedure. This paper introduces this specific new procedure and its implementation in the form of a software package that is able to assess road safety while also considering potential conflicts with roadside objects. Some of the principles at the base of this specific model are discussed. The procedure can be applied with common microsimulation packages once the vehicle trajectories and the positions of the roadside barriers and obstacles are known. The procedure has many calibration parameters, and research efforts will have to be devoted to comparisons with real crash data in order to obtain the parameter values that give the most accurate evaluation of the risk of any traffic scenario.
Keywords: road safety, traffic, traffic safety, traffic simulation
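The exact indicator definition is not given in the abstract, so the sketch below only illustrates the aggregation idea it describes: each potential crash contributes a kinetic-energy term based on its DeltaV, the energies are binned onto a grid of square-meter cells, and a threshold flags the most dangerous cells. The masses, DeltaV values, grid size, and threshold are all invented example numbers.

```python
import numpy as np

# Hypothetical potential crashes: (x [m], y [m], mass [kg], delta_v [m/s])
crashes = np.array([
    [12.3, 4.1, 1500.0, 6.2],
    [12.8, 4.5, 1200.0, 9.0],
    [40.2, 18.7, 1500.0, 3.1],
])

cell = 1.0                                          # 1 m x 1 m cells
energy = 0.5 * crashes[:, 2] * crashes[:, 3] ** 2   # kinetic energy of each impact, J

# Accumulate energy per square-meter cell
nx, ny = 64, 32
grid = np.zeros((nx, ny))
ix = np.clip((crashes[:, 0] // cell).astype(int), 0, nx - 1)
iy = np.clip((crashes[:, 1] // cell).astype(int), 0, ny - 1)
np.add.at(grid, (ix, iy), energy)

threshold = 5.0e4                                   # J per m^2, arbitrary example value
dangerous = np.argwhere(grid > threshold)
print("Cells above threshold:", dangerous)
```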
Procedia PDF Downloads 135
4432 Building Biodiversity Conservation Plans Robust to Human Land Use Uncertainty
Authors: Yingxiao Ye, Christopher Doehring, Angelos Georghiou, Hugh Robinson, Phebe Vayanos
Abstract:
Human development is a threat to biodiversity, and conservation organizations (COs) are purchasing land to protect areas for biodiversity preservation. However, COs have limited budgets and thus face hard prioritization decisions that are confounded by uncertainty in future human land use. This research proposes a data-driven sequential planning model to help COs choose land parcels that minimize the uncertain human impact on biodiversity. The proposed model is robust to uncertain development, and the sequential decision-making process is adaptive, allowing land purchase decisions to adapt to human land use as it unfolds. A cellular automata model is leveraged to simulate land-use development based on climate data, land characteristics, and the development threat index from the NASA Socioeconomic Data and Applications Center; this simulation is used to model the uncertainty in the problem. This research leverages state-of-the-art techniques from the robust optimization literature to propose a computationally tractable reformulation of the model, which can be solved routinely by off-the-shelf solvers such as Gurobi or CPLEX. Numerical results based on real data on the jaguar in Central and South America show that the proposed method reduces conservation loss by 19.46% on average compared to standard approaches such as MARXAN, which is used in practice for biodiversity conservation. Our method may better guide the decision process in land acquisition and thereby allow conservation organizations to maximize the impact of limited resources.
Keywords: data-driven robust optimization, biodiversity conservation, uncertainty simulation, adaptive sequential planning
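The cellular automata rules used in the study are not spelled out in the abstract; the sketch below is therefore only a generic illustration of how such a land-use automaton can be stepped, with each undeveloped cell converting with a probability that grows with a threat index and with the number of already-developed neighbours. The grid, the probability form, and the coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
developed = np.zeros((n, n), dtype=bool)
developed[25, 25] = True                       # seed development
threat = rng.uniform(0.0, 1.0, size=(n, n))    # placeholder development threat index

def step(developed, threat, base=0.002, neighbour_w=0.05, threat_w=0.03):
    """One annual update: undeveloped cells convert with a simple linear probability."""
    # Count developed neighbours (8-neighbourhood) via shifted sums on a zero-padded grid
    padded = np.pad(developed, 1).astype(int)
    neigh = sum(np.roll(np.roll(padded, dx, 0), dy, 1)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))[1:-1, 1:-1]
    p = np.clip(base + neighbour_w * neigh + threat_w * threat, 0.0, 1.0)
    new = (~developed) & (rng.random(developed.shape) < p)
    return developed | new

for year in range(10):
    developed = step(developed, threat)
print("Developed cells after 10 steps:", int(developed.sum()))
```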
Procedia PDF Downloads 208
4431 Experimental Simulation Set-Up for Validating Out-Of-The-Loop Mitigation when Monitoring High Levels of Automation in Air Traffic Control
Authors: Oliver Ohneiser, Francesca De Crescenzio, Gianluca Di Flumeri, Jan Kraemer, Bruno Berberian, Sara Bagassi, Nicolina Sciaraffa, Pietro Aricò, Gianluca Borghini, Fabio Babiloni
Abstract:
An increasing degree of automation in air traffic will also change the role of the air traffic controller (ATCO). ATCOs will fulfill significantly more monitoring tasks compared to today. However, this rather passive role may lead to Out-Of-The-Loop (OOTL) effects comprising vigilance decrement and reduced situation awareness. The project MINIMA (Mitigating Negative Impacts of Monitoring high levels of Automation) has conceived a system to control and mitigate such OOTL phenomena. In order to demonstrate the MINIMA concept, an experimental simulation set-up has been designed. This set-up consists of two parts: 1) a Task Environment (TE) comprising a Terminal Maneuvering Area (TMA) simulator and 2) a Vigilance and Attention Controller (VAC) based on neurophysiological data recording devices such as electroencephalography (EEG) and eye tracking. The current vigilance level and the attention focus of the controller are measured during the ATCO's active work in front of the human-machine interface (HMI). The derived vigilance level and attention focus trigger adaptive automation functionalities in the TE to avoid OOTL effects. This paper describes the full-scale experimental set-up and the component development work towards it. Hence, it encompasses a pre-test, whose results influenced the development of the VAC, as well as the functionalities of the final TE and the two sub-components of the VAC.
Keywords: automation, human factors, air traffic controller, MINIMA, OOTL (Out-Of-The-Loop), EEG (electroencephalography), HMI (human machine interface)
Procedia PDF Downloads 383
4430 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements
Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono
Abstract:
The centre of rotation of the hip joint is needed for an accurate simulation of joint performance in many applications, such as pre-operative planning simulation, human gait analysis, and the study of hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location using functional methods is soft tissue artefact, caused by the relative motion between the markers and the bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefact, since the accuracy of functional methods depends upon it. Various studies have described the movement of soft tissue artefact using invasive techniques such as intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements, during flexion, extension, and abduction (all with the knee extended) of the hip joint. The results show that the skin marker artefact displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can be used as a basis for a correction procedure for hip joint kinematics.
Keywords: hip joint center, motion capture, soft tissue artefact, ultrasound depth measurement
Procedia PDF Downloads 281
4429 Humins: From Industrial By-Product to High Value Polymers
Authors: Pierluigi Tosi, Ed de Jong, Gerard van Klink, Luc Vincent, Alice Mija
Abstract:
During the last decades, renewable and low-cost resources have attracted increasing interest. Carbohydrates can be derived from lignocellulosic biomass, which is an attractive option since it represents the most abundant carbon source available in nature. Carbohydrates can be converted into a plethora of industrially relevant compounds, such as 5-hydroxymethylfurfural (HMF) and levulinic acid (LA), via the acid-catalyzed dehydration of sugars with mineral acids. Unfortunately, these acid-catalyzed conversions suffer from the unavoidable formation of highly viscous, heterogeneous, polydisperse carbon-based materials known as humins. This black-colored, low-value by-product is a complex mixture of macromolecules built by random covalent condensations of the many compounds present during the acid-catalyzed conversion. The molecular structure of humins is still under investigation but seems to be based on a network of furanic rings linked by aliphatic chains and decorated with several reactive moieties (ketones, aldehydes, hydroxyls, ...). Despite decades of research, there is currently no way to avoid humin formation. The key to enhancing the economic viability of carbohydrate conversion processes is therefore to increase the economic value of the humin by-product. Herein, new humin-based polymeric materials are presented that can be prepared starting from the raw by-product by thermal treatment, without any purification or pretreatment step. Humin foams can be produced by controlling the key reaction parameters, obtaining porous polymeric materials with designed porosity, density, thermal and electrical conductivity, chemical and electrical stability, carbon content, and mechanical properties. The physico-chemical properties can be enhanced by modifying the starting raw material or by adding different species during the polymerization. A comparison of the properties of different compositions will be presented, along with tested applications. The authors gratefully acknowledge the European Community for financial support through the Marie-Curie H2020-MSCA-ITN-2015 "HUGS" Project.
Keywords: by-product, humins, polymers, valorization
Procedia PDF Downloads 143
4428 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model
Authors: Yew Mun Yip, Dawei Zhang
Abstract:
Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, this machinery malfunctions, leading to misfolding diseases. Although in vitro experiments are able to conclude that mutations of the amino acid sequence lead to incorrectly folded protein structures, these experiments are unable to decipher the folding process. Therefore, molecular dynamics (MD) simulations are employed to simulate the folding process, so that an improved understanding of the folding process will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding process of peptides. Secondary structures are formed via the hydrogen bonds between the backbone atoms (C, O, N, H). It is important that the hydrogen bond energy computed during the MD simulation is accurate, in order to direct the folding process towards the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom attracts electron density from the less electronegative atom towards itself. This is known as the polarization effect. Since the polarization effect changes the electron density of the two atoms in close proximity, the atomic charges of the two atoms should also vary with the strength of the polarization effect. However, the fixed atomic charge scheme in force fields does not account for the polarization effect. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulations by updating the atomic charges of the backbone hydrogen bond atoms according to equations, derived from quantum-mechanical calculations, relating the amount of charge transferred to an atom to the length of the hydrogen bond. Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time step of the simulation, yet maintains a dynamic update of atomic charges, thereby reducing the computational cost and time while still accounting for the polarization effect dynamically. The PSBC model is applied to two different β-peptides, namely the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose structure is folded in vitro and studied by NMR, and the trpzip peptides, a double-stranded β-sheet where a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.
Keywords: hydrogen bond, polarization effect, protein folding, PSBC
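The actual charge-transfer equations of the PSBC model are not given in the abstract, so the sketch below only mimics the update pattern it describes: at each step, the partial charges of the backbone acceptor/donor atoms are shifted by an amount that depends on the current hydrogen-bond length. The linear distance-to-charge relation, its coefficients, the cutoff, and the starting charges are hypothetical placeholders, not the published parameterization.

```python
import numpy as np

def charge_transfer(r_oh, q_max=0.08, r_min=1.5, r_cut=2.5):
    """Hypothetical charge (in e) transferred across an O...H hydrogen bond of length r_oh (Å):
    maximal at short bond lengths, decaying linearly to zero at the cutoff."""
    return q_max * np.clip((r_cut - r_oh) / (r_cut - r_min), 0.0, 1.0)

def update_backbone_charges(q_o, q_h, r_oh):
    """Shift charge between the acceptor O and donor H according to the current bond length."""
    dq = charge_transfer(r_oh)
    return q_o - dq, q_h + dq     # O becomes more negative, H more positive

q_o, q_h = -0.51, 0.31            # illustrative starting charges, of the order used in common force fields
for r in (1.8, 2.1, 2.6):         # hydrogen-bond lengths sampled along a trajectory (Å)
    print(r, update_backbone_charges(q_o, q_h, r))
```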
Procedia PDF Downloads 270
4427 CFD Analysis of the Blood Flow in Left Coronary Bifurcation with Variable Angulation
Authors: Midiya Khademi, Ali Nikoo, Shabnam Rahimnezhad Baghche Jooghi
Abstract:
Cardiovascular diseases (CVDs) are the main cause of death globally. Most CVDs can be prevented by avoiding habitual risk factors. Apart from these habitual risk factors, there are some inherent factors in each individual that can increase the risk of CVDs. Vessel shape and geometry are influential factors, having a great impact on the blood flow and the hemodynamic behavior of the vessels. In the present study, the influence of the bifurcation angle on blood flow characteristics is studied. To approach this topic, the details of the bifurcation were simplified and three models with angles of 30°, 45°, and 60° were created; the response of these models to steady and pulsatile flow was then studied using CFD analysis. In order to eliminate the influence of other geometrical factors, only the angle of the bifurcation was changed in the simulations, while the other parameters remained constant throughout the study. Simulations were conducted under steady and dynamic conditions. In the steady flow simulation, a constant velocity of 0.17 m/s was maintained at the inlet, and in the dynamic simulations a typical LAD flow waveform was implemented. The results show that the bifurcation angle has an influence on the maximum flow velocity. Under the steady flow condition, increasing the angle led to a decrease in the maximum flow velocity, whereas in the dynamic flow simulations, increasing the bifurcation angle led to an increase in the maximum velocity. Since blood flow has pulsatile characteristics, using a uniform velocity during the simulations can lead to a discrepancy between the actual and the calculated results.
Keywords: coronary artery, cardiovascular disease, bifurcation, atherosclerosis, CFD, artery wall shear stress
Procedia PDF Downloads 164
4426 The Assessment of Natural Ventilation Performance for Thermal Comfort in Educational Space: A Case Study of Design Studio in the Arab Academy for Science and Technology, Alexandria
Authors: Alaa Sarhan, Rania Abd El Gelil, Hana Awad
Abstract:
Over the last decades, the impact of thermal comfort on the working performance of users and occupants of indoor spaces has been a concern. Research has concluded that natural ventilation quality directly affects the level of thermal comfort. Natural ventilation must therefore be taken into account during the design process in order to improve the occupants' efficiency and productivity. Educational facilities are one example of spaces with long daily occupancy: many individuals spend long hours there receiving a considerable amount of knowledge and additional time applying it. Thus, this research is concerned with the users' level of thermal comfort in the design studios of educational facilities. The natural ventilation quality of a space is affected by a number of parameters, including orientation and opening design. This research aims to investigate the conscious manipulation of the physical parameters of a space and its impact on natural ventilation performance, which in turn affects the users' thermal comfort. The research uses inductive and deductive methods to define a set of natural ventilation design considerations. These are applied in a field study of a design studio in the university building of the Arab Academy for Science and Technology (AAST) in Alexandria, where natural ventilation performance is evaluated by analyzing and comparing the current case against the developed framework and by conducting a computational fluid dynamics (CFD) simulation. The results show that the studio satisfies only 50% of the natural ventilation design framework; this finding is supported by the CFD simulation.
Keywords: educational buildings, natural ventilation, Mediterranean climate, thermal comfort
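A minimal sketch of how a case could be scored against a natural ventilation design framework, as reported qualitatively above; the checklist items and pass/fail marks are hypothetical placeholders, not the actual framework or field-study findings.
```python
# Hypothetical scoring of a design case against a natural ventilation
# design framework; items and marks below are illustrative placeholders.
framework = {
    "orientation toward prevailing wind": True,
    "adequate opening-to-floor-area ratio": False,
    "openings on opposite walls (cross ventilation)": True,
    "unobstructed internal air path": False,
    "operable openings at occupant level": True,
    "night-flush / stack ventilation provision": False,
}

satisfied = sum(framework.values())
score = 100.0 * satisfied / len(framework)
print(f"framework compliance: {score:.0f}%")   # 50% for this illustrative case
```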
Procedia PDF Downloads 221
4425 A Dual Spark Ignition Timing Influence for the High Power Aircraft Radial Engine Using a CFD Transient Modeling
Authors: Tytus Tulwin, Ksenia Siadkowska, Rafał Sochaczewski
Abstract:
A high-power radial reciprocating engine is characterized by a large displacement volume of the combustion chamber. Choosing the right moment for ignition is important for high performance, high reliability, and ignition certainty. This work presents methods of simulating the ignition process and its impact on engine parameters. For given conditions, the flame speed is limited when deflagration combustion takes place. Therefore, the larger length scale of the combustion chamber, compared to a standard-size automotive engine, makes the combustion take longer to propagate. To shorten the mixture burn-up time, a second spark is introduced. A transient computational fluid dynamics model capable of simulating multi-cycle engine processes was developed. The CFD model consists of the ECFM-3Z combustion model and species transport models. The relative ignition timing difference between the two spark sources is kept constant. The temperature distribution on the engine walls was calculated in a separate conjugate heat transfer simulation. The in-cylinder pressure was validated for take-off power flight conditions. The influence of ignition timing on parameters such as in-cylinder temperature and rate of heat release was analyzed, and the most advantageous spark timing for the highest power output was chosen. The conditions around the spark plug locations during the pre-ignition period were also analyzed. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: CFD, combustion, ignition, simulation, timing
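The rate of heat release analyzed in the abstract is commonly obtained from the in-cylinder pressure trace with the standard single-zone relation dQ/dθ = γ/(γ−1)·p·dV/dθ + 1/(γ−1)·V·dp/dθ. The sketch below applies this generic formula to synthetic data; the ratio of specific heats and the pressure/volume arrays are assumptions, and this is not necessarily the post-processing used by the authors.
```python
# Single-zone apparent rate-of-heat-release sketch from an in-cylinder
# pressure trace; gamma and the synthetic p/V arrays are assumptions.
import numpy as np

def rate_of_heat_release(theta, p, v, gamma=1.32):
    """theta in rad, p in Pa, v in m^3; returns dQ/dtheta in J/rad."""
    dp = np.gradient(p, theta)
    dv = np.gradient(v, theta)
    return gamma / (gamma - 1.0) * p * dv + 1.0 / (gamma - 1.0) * v * dp

# usage with synthetic placeholders for a measured or simulated cycle
theta = np.linspace(-np.pi, np.pi, 720)
v = 1.0e-3 * (1.0 + 0.5 * (1.0 - np.cos(theta)))                 # crude cylinder volume curve
p = 1.0e5 * (1.0 + 20.0 * np.exp(-((theta - 0.2) / 0.3) ** 2))   # crude pressure trace
print(rate_of_heat_release(theta, p, v).max())
```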
Procedia PDF Downloads 296
4424 Finite Element Model to Evaluate Gas Coning Phenomenon in Naturally Fractured Oil Reservoirs
Authors: Reda Abdel Azim
Abstract:
The gas coning phenomenon is considered one of the prevalent problems in oil field applications, as it significantly reduces the amount of produced oil, increases the cost of production operations, and directly affects reservoir recovery efficiency. Therefore, evaluating this phenomenon and studying the reservoir mechanisms that drive gas invasion into the producing formation is crucial. Gas coning results from an imbalance between the two major forces controlling oil production, gravitational and viscous forces, especially in naturally fractured reservoirs where capillary pressure forces are negligible. Once gas invades the producing formation near the wellbore due to a high oil production rate, the gas-oil contact moves, and such reservoirs are prone to gas coning. Moreover, the oil volume expected to be produced requires the use of long perforated horizontal wells. This work presents a numerical simulation study to predict gas coning in naturally fractured oil reservoirs and to propose solutions. The simulation work is based on discrete fracture and permeability tensor approaches. The governing equations are discretized using a finite element approach, and the Galerkin least-squares (GLS) technique is employed to stabilize the solutions. The developed simulator is validated against Eclipse-100 using horizontal fractures. The matrix and fracture properties are modelled, and the critical rate, breakthrough time, and gas-oil ratio (GOR) are determined in order to investigate the effect of matrix and fracture properties on gas coning. The results show that the fracture distribution, in terms of diverse dip and azimuth, has a great effect on the occurrence of coning, as do the fracture porosity, anisotropy ratio, and fracture aperture.
Keywords: gas coning, finite element, fractured reservoirs, multiphase
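For orientation, the critical rate mentioned in the abstract can be screened with a classical Meyer–Garder-type gravity/viscous balance, sketched below in SI units. This is only a back-of-the-envelope companion to the finite element simulator described above, not its implementation, and the input values are illustrative.
```python
# Meyer-Garder-type critical oil rate estimate (gravity/viscous balance) in SI
# units; a screening calculation only, with illustrative input values.
import math

def critical_rate(k_o, drho, h, h_p, mu_o, b_o, r_e, r_w):
    """k_o [m^2], drho = oil-gas density contrast [kg/m^3], h = oil column [m],
    h_p = perforated interval [m], mu_o [Pa.s], b_o = formation volume factor [-],
    r_e / r_w = drainage / wellbore radius [m]. Returns stock-tank m^3/s."""
    g = 9.81
    return math.pi * k_o * drho * g * (h**2 - h_p**2) / (mu_o * b_o * math.log(r_e / r_w))

q_c = critical_rate(k_o=100e-15, drho=550.0, h=30.0, h_p=10.0,
                    mu_o=2e-3, b_o=1.2, r_e=300.0, r_w=0.1)
print(f"critical rate ~ {q_c * 86400:.1f} m^3/day")
```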
Procedia PDF Downloads 195
4423 Construction and Validation of a Hybrid Lumbar Spine Model for the Fast Evaluation of Intradiscal Pressure and Mobility
Authors: Dicko Ali Hamadi, Tong-Yette Nicolas, Gilles Benjamin, Faure Francois, Palombi Olivier
Abstract:
A novel hybrid model of the lumbar spine, allowing fast static and dynamic simulations of disc pressure and spine mobility, is introduced in this work. Our contribution is to combine rigid bodies, deformable finite elements, articular constraints, and springs into a single model of the spine. Each vertebra is represented by a rigid body controlling a surface mesh to model contacts on the facet joints and the spinous process. The discs are modeled using a heterogeneous tetrahedral finite element model. The facet joints are represented as elastic joints with six degrees of freedom, while the ligaments are modeled using non-linear one-dimensional elastic elements. The challenge we tackle is to make these different models interact efficiently while respecting the principles of anatomy and mechanics. The mobility, intradiscal pressure, facet joint force, and instantaneous center of rotation of the lumbar spine are validated against experimental and theoretical results from the literature in flexion, extension, lateral bending, and axial rotation. Our hybrid model greatly simplifies the modeling task and dramatically accelerates the simulation of pressure within the discs, as well as the evaluation of the range of motion and the instantaneous centers of rotation, without penalizing precision. These results suggest that, for some types of biomechanical simulations, simplified models allow far easier modeling and faster simulations than usual full-FEM approaches, without any loss of accuracy.
Keywords: hybrid, modeling, fast simulation, lumbar spine
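One ingredient of such a hybrid model, the non-linear one-dimensional elastic ligament element, can be sketched as a tension-only spring with a compliant toe region followed by a stiffer linear region. The piecewise law and stiffness values below are generic illustrations, not the calibrated parameters of the model described above.
```python
# Tension-only ligament element with a quadratic toe region and a linear
# region; the law and its constants are generic illustrations only.
def ligament_force(length, rest_length, k_toe=20.0, k_linear=200.0, toe_strain=0.03):
    """Returns the tensile force [N] for the current element length [m]."""
    strain = (length - rest_length) / rest_length
    if strain <= 0.0:
        return 0.0                                # ligaments do not resist compression
    if strain <= toe_strain:
        return k_toe * strain ** 2 / toe_strain   # compliant quadratic toe region
    # stiffer linear region, continuous with the toe region at toe_strain
    return k_toe * toe_strain + k_linear * (strain - toe_strain)

print(ligament_force(0.051, 0.050))   # small stretch, still within the toe region
```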
Procedia PDF Downloads 306