Search results for: discrete element simulation
4643 Microsimulation of Potential Crashes as a Road Safety Indicator
Authors: Vittorio Astarita, Giuseppe Guido, Vincenzo Pasquale Giofre, Alessandro Vitale
Abstract:
Traffic microsimulation has been used extensively to evaluate the consequences of different traffic planning and control policies in terms of travel time delays, queues, pollutant emissions, and other commonly measured performance indicators, while traffic safety has not been considered in common traffic microsimulation packages as a measure of performance for different traffic scenarios. Vehicle conflict techniques, introduced for intersections in the early traffic research carried out at the General Motors laboratory in the USA and in the Swedish traffic conflict manual, have been applied to vehicle trajectories simulated in microscopic traffic simulators. The concept is that microsimulation can be used as a basis for calculating the number of conflicts that define the safety level of a traffic scenario. This allows engineers to identify unsafe road traffic maneuvers and helps in finding the right countermeasures to improve safety. Unfortunately, most commonly used indicators do not consider conflicts between single vehicles and roadside obstacles and barriers, although a great number of vehicle crashes involve roadside objects or obstacles. Only some recently proposed indicators have tried to address this issue. This paper introduces a new procedure based on the simulation of potential crash events for the evaluation of safety levels in microsimulation traffic scenarios, which also takes into account potential crashes with roadside objects and barriers. The procedure can be used to define new conflict indicators. The proposed simulation procedure generates, through random perturbation of vehicle trajectories, a set of potential crashes that can be evaluated accurately in terms of DeltaV, the energy of the impact, and/or the expected number of injuries or casualties. The procedure can also be applied to real trajectories, giving rise to new surrogate safety performance indicators, which can be considered “simulation-based”.
The methodology and a specific safety performance indicator are described and applied to a simulated test traffic scenario. Results indicate that the procedure is able to evaluate safety levels both at the intersection level and in the presence of roadside obstacles. The procedure produces results expressed in the same unit of measure for both vehicle-to-vehicle and vehicle-to-roadside-object conflicts. The total energy per square meter of all generated crashes can be mapped for the test network after applying a threshold to highlight the most dangerous points. Without any detailed calibration of the microsimulation model and without any calibration of the parameters of the procedure (standard values were used), it is possible to identify dangerous points. A preliminary sensitivity analysis has shown that results do not depend on the energy thresholds or the other parameters of the procedure. This paper introduces a specific new procedure and its implementation as a software package able to assess road safety while also considering potential conflicts with roadside objects. Some of the principles at the base of this specific model are discussed. The procedure can be applied on top of common microsimulation packages once vehicle trajectories and the positions of roadside barriers and obstacles are known. The procedure has many calibration parameters, and research efforts will have to be devoted to comparisons with real crash data in order to obtain the parameters that give an accurate evaluation of the risk of any traffic scenario.
Keywords: road safety, traffic, traffic safety, traffic simulation
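The severity measures named in this abstract (DeltaV and impact energy) can be approximated from a pair of conflicting trajectories. The sketch below is illustrative only, not the authors' procedure: it assumes a perfectly plastic, momentum-conserving impact, and the function name and units are ours.

```python
import math

def conflict_severity(m1, v1, m2, v2):
    """Estimate DeltaV and dissipated energy for a potential crash between
    two vehicles, assuming a perfectly plastic (common-velocity) impact.
    m1, m2: masses in kg; v1, v2: (vx, vy) velocity vectors in m/s."""
    # Momentum-conserving common post-impact velocity
    vcx = (m1 * v1[0] + m2 * v2[0]) / (m1 + m2)
    vcy = (m1 * v1[1] + m2 * v2[1]) / (m1 + m2)
    # DeltaV: magnitude of each vehicle's velocity change
    dv1 = math.hypot(vcx - v1[0], vcy - v1[1])
    dv2 = math.hypot(vcx - v2[0], vcy - v2[1])
    # Kinetic energy dissipated in the impact, via the reduced mass
    mu = m1 * m2 / (m1 + m2)
    rel2 = (v1[0] - v2[0]) ** 2 + (v1[1] - v2[1]) ** 2
    energy = 0.5 * mu * rel2
    return dv1, dv2, energy
```

For two 1000 kg vehicles with a 10 m/s closing speed, this gives each vehicle a DeltaV of 5 m/s and 25 kJ of dissipated energy.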
Procedia PDF Downloads 135
4642 Development of a Porous Porcelain Frappe with Thermochromic Visualization
Authors: Jose Gois
Abstract:
The paper presents the development of a porous porcelain frappe with thermochromic visualization for port wines, in partnership with the Institute of Vinhos do Douro and Porto. This ceramic frappe is intended to promote the cooling and temperature maintenance of port wines through porous ceramic materials: a porcelain composite with added sawdust, combining the cooling properties of terracotta with the resistance of a material such as porcelain. The thermochromic element makes it possible to see whether the wine is at the optimal service temperature, allowing users to drink the wine in ideal conditions and contributing to more efficient maintenance of the service.
Keywords: design, frappe, porcelain, porous, thermochromic
Procedia PDF Downloads 135
4641 A Novel Combustion Engine, Design and Modeling
Authors: M. A. Effati, M. R. Hojjati, M. Razmdideh
Abstract:
Nowadays, engine development has focused on internal combustion engine designs that call for increased engine power, reduced engine size, and improved fuel economy simultaneously. In this paper, a novel design for a combustion engine is proposed. Two combustion chambers were designed, one on each side of the cylinder. The piston was designed so that both of its sides transfer the heat energy released by combustion into linear motion. This motion is converted to rotary motion through the designed mechanism connected to the connecting rod. The connecting rod operation was analyzed to evaluate the applied stress at 3000, 4500, and 6000 rpm. Boundary conditions, including the pressure generated on each side of the cylinder in these three situations, were calculated.
Keywords: combustion engine, design, finite element method, modeling
Procedia PDF Downloads 513
4640 Building Biodiversity Conservation Plans Robust to Human Land Use Uncertainty
Authors: Yingxiao Ye, Christopher Doehring, Angelos Georghiou, Hugh Robinson, Phebe Vayanos
Abstract:
Human development is a threat to biodiversity, and conservation organizations (COs) are purchasing land to protect areas for biodiversity preservation. However, COs have limited budgets and thus face hard prioritization decisions that are confounded by uncertainty in future human land use. This research proposes a data-driven sequential planning model to help COs choose land parcels that minimize the uncertain human impact on biodiversity. The proposed model is robust to uncertain development, and the sequential decision-making process is adaptive, allowing land purchase decisions to adapt to human land use as it unfolds. A cellular automata model is leveraged to simulate land use development based on climate data, land characteristics, and the development threat index from the NASA Socioeconomic Data and Applications Center. This simulation is used to model uncertainty in the problem. The research leverages state-of-the-art techniques from the robust optimization literature to propose a computationally tractable reformulation of the model, which can be solved routinely by off-the-shelf solvers such as Gurobi or CPLEX. Numerical results based on real data on the jaguar in Central and South America show that the proposed method reduces conservation loss by 19.46% on average compared to standard approaches such as MARXAN, used in practice for biodiversity conservation. The method may better guide the decision process in land acquisition and thereby allow conservation organizations to maximize the impact of limited resources.
Keywords: data-driven robust optimization, biodiversity conservation, uncertainty simulation, adaptive sequential planning
Procedia PDF Downloads 210
4639 Experimental Simulation Set-Up for Validating Out-Of-The-Loop Mitigation when Monitoring High Levels of Automation in Air Traffic Control
Authors: Oliver Ohneiser, Francesca De Crescenzio, Gianluca Di Flumeri, Jan Kraemer, Bruno Berberian, Sara Bagassi, Nicolina Sciaraffa, Pietro Aricò, Gianluca Borghini, Fabio Babiloni
Abstract:
An increasing degree of automation in air traffic will also change the role of the air traffic controller (ATCO). ATCOs will fulfill significantly more monitoring tasks compared to today. However, this rather passive role may lead to Out-Of-The-Loop (OOTL) effects comprising vigilance decrement and reduced situation awareness. The project MINIMA (Mitigating Negative Impacts of Monitoring high levels of Automation) has conceived a system to control and mitigate such OOTL phenomena. In order to demonstrate the MINIMA concept, an experimental simulation set-up has been designed. This set-up consists of two parts: 1) a Task Environment (TE) comprising a Terminal Maneuvering Area (TMA) simulator, and 2) a Vigilance and Attention Controller (VAC) based on neurophysiological data recording devices such as electroencephalography (EEG) and eye tracking. The current vigilance level and the attention focus of the controller are measured during the ATCO’s active work in front of the human machine interface (HMI). The derived vigilance level and attention focus trigger adaptive automation functionalities in the TE to avoid OOTL effects. This paper describes the full-scale experimental set-up and the component development work towards it. Hence, it encompasses a pre-test whose results influenced the development of the VAC as well as the functionalities of the final TE and the VAC’s two sub-components.
Keywords: automation, human factors, air traffic controller, MINIMA, OOTL (Out-Of-The-Loop), EEG (Electroencephalography), HMI (Human Machine Interface)
Procedia PDF Downloads 383
4638 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space, and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL interoperability. We demonstrate how CUDA, as a low-level GPU programming paradigm, allows optimizing the performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata, which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time, and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids, and over longer periods without long waiting times. They thereby enable the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when additional time-consuming algorithms, such as computer vision or machine learning, are included to evolve and optimize specific Lenia configurations.
We developed a Lenia implementation for GPU using the C++ and CUDA programming languages, with CUDA/OpenGL interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation against the existing ones in terms of speed, memory usage, configurability, and scalability. In our comparison, we focus on the most important Lenia implementations, selected for their prominence, accessibility, and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust, and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic. The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the Alife community for further development.
Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis
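For readers unfamiliar with the update rule being ported to CUDA: a single Lenia time step is a convolution with a ring-shaped kernel followed by a Gaussian growth mapping. The NumPy sketch below is illustrative only; the kernel shape, growth parameters mu and sigma, and normalization are typical textbook choices, not this paper's exact configuration.

```python
import numpy as np

def ring_kernel(n, radius=13):
    """Smooth ring-shaped kernel, normalized to sum 1, returned as its
    FFT (pre-shifted so the kernel center sits at the origin)."""
    y, x = np.ogrid[-n // 2:n // 2, -n // 2:n // 2]
    r = np.sqrt(x * x + y * y) / radius
    core = np.exp(4.0 - 1.0 / (r * (1.0 - r) + 1e-9))  # bump peaking at r = 0.5
    K = np.where((r > 0) & (r < 1), core, 0.0)
    K /= K.sum()
    return np.fft.fft2(np.fft.ifftshift(K))

def lenia_step(A, K_fft, dt=0.1, mu=0.15, sigma=0.015):
    """One Lenia update: FFT convolution of state A with the kernel,
    Gaussian growth mapping into [-1, 1], Euler step, clip to [0, 1]."""
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * K_fft))          # neighborhood potential
    G = 2.0 * np.exp(-((U - mu) ** 2) / (2.0 * sigma ** 2)) - 1.0
    return np.clip(A + dt * G, 0.0, 1.0)
```

The FFT form is what makes the comparison with spatial-domain convolution interesting: its cost is independent of the kernel radius, which matters for Lenia's large kernels.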
Procedia PDF Downloads 41
4637 Rheology and Structural Arrest of Dense Dairy Suspensions: A Soft Matter Approach
Authors: Marjan Javanmard
Abstract:
The rheological properties of dairy products depend critically on the underlying organisation of proteins at multiple length scales. When heated and acidified, milk proteins form a particle gel that is a viscoelastic, solvent-rich, ‘soft’ material. In this work, recent developments in the rheology of soft particle suspensions were used to interpret and potentially define the properties of dairy gel structures. It is found that at volume fractions below random close packing (RCP), the Maron-Pierce-Quemada (MPQ) model accurately predicts the viscosity of the dairy gel suspensions without fitting parameters; the MPQ model has been shown previously to provide reasonable predictions of the viscosity of hard sphere suspensions from the volume fraction, solvent viscosity, and RCP. This surprising finding demonstrates that up to RCP, the dairy gel system behaves as a hard sphere suspension and that the structural aggregates behave as discrete particulates, akin to what is observed for microgel suspensions. At effective phase volumes well above RCP, the system is a soft solid. In this region, it is found that the storage modulus of the sheared AMG scales with the storage modulus of the set gel. The storage modulus in this regime is reasonably well described as a function of effective phase volume by the Evans and Lips model. The findings of this work have the potential to aid in the rational design and control of dairy food structure-properties.
Keywords: dairy suspensions, rheology-structure, Maron-Pierce-Quemada model, Evans and Lips model
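For reference, the MPQ model mentioned above predicts suspension viscosity from just the phase volume, solvent viscosity, and RCP. A minimal sketch follows; the default values here (a water-like solvent viscosity and φ_RCP = 0.64) are generic hard-sphere values, not this paper's fitted quantities.

```python
def mpq_viscosity(phi, eta_solvent=1.0e-3, phi_rcp=0.64):
    """Maron-Pierce-Quemada suspension viscosity (Pa.s):
        eta = eta_s * (1 - phi/phi_rcp)^(-2),  valid for phi < phi_rcp.
    The viscosity diverges as the phase volume approaches random close
    packing, which is where the abstract's 'soft solid' regime takes over."""
    if phi >= phi_rcp:
        raise ValueError("MPQ diverges at random close packing")
    return eta_solvent * (1.0 - phi / phi_rcp) ** -2
```

At half of RCP (φ = 0.32 with φ_RCP = 0.64) the model predicts a fourfold increase over the solvent viscosity.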
Procedia PDF Downloads 219
4636 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements
Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono
Abstract:
The centre of rotation of the hip joint is needed for an accurate simulation of joint performance in many applications, such as pre-operative planning simulation, human gait analysis, and hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location using functional methods is soft tissue artefact, due to the relative motion between the markers and the bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefact, as the accuracy of functional methods depends upon it. Various studies have characterized the movement of soft tissue artefact using invasive techniques, such as intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that the artefact skin marker displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can be used as a basis for a correction procedure for hip joint kinematics.
Keywords: hip joint center, motion capture, soft tissue artefact, ultrasound depth measurement
Procedia PDF Downloads 281
4635 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model
Authors: Yew Mun Yip, Dawei Zhang
Abstract:
Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, the machinery malfunctions, leading to misfolding diseases. Although in vitro experiments are able to conclude that mutations of the amino acid sequence lead to incorrectly folded protein structures, these experiments are unable to decipher the folding process. Therefore, molecular dynamics (MD) simulations are employed to simulate the folding process, so that an improved understanding of folding will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding process of peptides. Secondary structures are formed via the hydrogen bonds formed between the backbone atoms (C, O, N, H). It is important that the hydrogen bond energy computed during the MD simulation is accurate in order to direct the folding process to the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom attracts greater electron density from the less electronegative atom towards itself. This is known as the polarization effect. Since the polarization effect changes the electron density of the two atoms in close proximity, the atomic charges of the two atoms should also vary with the strength of the polarization effect. However, the fixed atomic charge scheme in force fields does not account for the polarization effect. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulations by updating the atomic charges of the backbone hydrogen bond atoms according to equations relating the amount of charge transferred to the atom and the length of the hydrogen bond, derived from quantum-mechanical calculations.
Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time-step, yet maintains a dynamic update of atomic charges, thereby reducing the computational cost and time while still accounting for the polarization effect. The PSBC model is applied to two different β-peptides: the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose structure has been folded in vitro and studied by NMR, and the trpzip peptides, double-stranded β-sheets where a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.
Keywords: hydrogen bond, polarization effect, protein folding, PSBC
Procedia PDF Downloads 270
4634 CFD Analysis of the Blood Flow in Left Coronary Bifurcation with Variable Angulation
Authors: Midiya Khademi, Ali Nikoo, Shabnam Rahimnezhad Baghche Jooghi
Abstract:
Cardiovascular diseases (CVDs) are the main cause of death globally. Most CVDs can be prevented by avoiding habitual risk factors. Apart from these habitual risk factors, there are inherent factors in each individual that can increase the risk of CVDs. Vessel shape and geometry are influential factors, with a great impact on blood flow and the hemodynamic behavior of the vessels. In the present study, the influence of bifurcation angle on blood flow characteristics is examined. To approach this topic, by simplifying the details of the bifurcation, three models with angles of 30°, 45°, and 60° were created; using CFD analysis, the response of these models to steady and pulsatile flow was studied. In order to eliminate the influence of other geometrical factors, only the angle of the bifurcation was changed while other parameters remained constant throughout the research. Simulations were conducted under dynamic and steady conditions. In the steady flow simulation, a constant velocity of 0.17 m/s was maintained at the inlet, and in the dynamic simulations, a typical LAD flow waveform was implemented. The results show that the bifurcation angle influences the maximum speed of the flow. Under steady flow, increasing the angle leads to a decrease in the maximum flow velocity. In the dynamic flow simulations, increasing the bifurcation angle leads to an increase in the maximum velocity. Since blood flow has pulsatile characteristics, using a uniform velocity during the simulations can lead to a discrepancy between the actual and calculated results.
Keywords: coronary artery, cardiovascular disease, bifurcation, atherosclerosis, CFD, artery wall shear stress
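As a baseline for the "artery wall shear stress" keyword: in a straight inlet branch, steady Poiseuille flow gives τ_w = 4μQ/(πR³), a common sanity check before running the full bifurcation CFD. The sketch and the values in the usage note are generic coronary-scale magnitudes, not this study's data.

```python
import math

def poiseuille_wss(mu, Q, R):
    """Wall shear stress (Pa) for steady Poiseuille flow in a straight tube:
        tau_w = 4 * mu * Q / (pi * R^3)
    mu: dynamic viscosity (Pa.s), Q: flow rate (m^3/s), R: radius (m).
    Valid only for fully developed laminar flow in a rigid straight tube;
    the bifurcation region itself requires full CFD."""
    return 4.0 * mu * Q / (math.pi * R ** 3)
```

For blood-like viscosity 0.0035 Pa·s, a flow of 1 mL/s, and a 2 mm radius, this yields roughly 0.56 Pa.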
Procedia PDF Downloads 164
4633 The Assessment of Natural Ventilation Performance for Thermal Comfort in Educational Space: A Case Study of Design Studio in the Arab Academy for Science and Technology, Alexandria
Authors: Alaa Sarhan, Rania Abd El Gelil, Hana Awad
Abstract:
Over the last decades, the impact of thermal comfort on the working performance of the users and occupants of an indoor space has been a concern. Research has concluded that natural ventilation quality directly impacts levels of thermal comfort. Natural ventilation must therefore be taken into account during the design process in order to improve occupants' efficiency and productivity. One example of daily long-term occupancy spaces is educational facilities, where many individuals spend long hours acquiring knowledge and additional time applying it. Thus, this research is concerned with users' level of thermal comfort in the design studios of educational facilities. The natural ventilation quality of a space is affected by a number of parameters, including orientation and opening design, among other factors. This research aims to investigate the conscious manipulation of the physical parameters of spaces and its impact on natural ventilation performance, which subsequently affects the thermal comfort of users. The research uses inductive and deductive methods to define natural ventilation design considerations, which are applied in a field study of a studio in the university building in Alexandria (AAST) to evaluate natural ventilation performance, by analyzing and comparing the current case against the developed framework and conducting a computational fluid dynamics simulation. Results show that natural ventilation performance satisfies only 50% of the natural ventilation design framework; these results are supported by the CFD simulation.
Keywords: educational buildings, natural ventilation, Mediterranean climate, thermal comfort
Procedia PDF Downloads 222
4632 A Dual Spark Ignition Timing Influence for the High Power Aircraft Radial Engine Using a CFD Transient Modeling
Authors: Tytus Tulwin, Ksenia Siadkowska, Rafał Sochaczewski
Abstract:
A high power radial reciprocating engine is characterized by the large displacement volume of its combustion chamber. Choosing the right moment for ignition is important for high performance, high reliability, and ignition certainty. This work shows methods of simulating the ignition process and its impact on engine parameters. For given conditions, flame speed is limited when deflagration combustion takes place. Therefore, the larger length scale of the combustion chamber, compared to a standard-size automotive engine, makes combustion take longer to propagate. In order to speed up the mixture burn-up time, a second spark is introduced. A transient Computational Fluid Dynamics model capable of simulating multicycle engine processes was developed. The CFD model consists of the ECFM-3Z combustion and species transport models. The relative ignition timing difference between the two spark sources is constant. The temperature distribution on the engine walls was calculated in a separate conjugate heat transfer simulation. The in-cylinder pressure validation was performed for take-off power flight conditions. The influence of ignition timing on parameters such as in-cylinder temperature and rate of heat release was analyzed. The most advantageous spark timing for the highest power output was chosen. The conditions around the spark plug locations during the pre-ignition period were analyzed. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: CFD, combustion, ignition, simulation, timing
Procedia PDF Downloads 296
4631 Comparison between Experimental and Numerical Studies of Fully Encased Composite Columns
Authors: Md. Soebur Rahman, Mahbuba Begum, Raquib Ahsan
Abstract:
A composite column is a structural member that uses a combination of structural steel shapes, pipes, or tubes, with or without reinforcing steel bars, and reinforced concrete to provide adequate load carrying capacity to sustain either axial compressive loads alone or a combination of axial loads and bending moments. Composite construction takes advantage of the speed of construction, light weight, and strength of steel, and the higher mass, stiffness, damping properties, and economy of reinforced concrete. The most common types of composite columns are concrete-filled steel tubes and partially or fully encased steel profiles. The fully encased composite (FEC) column provides compressive strength, stability, stiffness, improved fireproofing, and better corrosion protection. This paper reports experimental and numerical investigations of the behaviour of concrete encased steel composite columns subjected to short-term axial load. In this study, eleven short FEC columns with square cross sections were constructed and tested to examine the load-deflection behavior. The main variables in the tests were concrete compressive strength, cross sectional size, and percentage of structural steel. A nonlinear 3-D finite element (FE) model has been developed to analyse the inelastic behaviour of the steel, concrete, and longitudinal reinforcement, as well as the effect of concrete confinement, in the FEC columns. The FE models have been validated against the current experimental study conducted in the laboratory and against published experimental results under concentric load. It has been observed that the FE model is able to predict the experimental behaviour of FEC columns under concentric gravity loads with good accuracy. Good agreement has been achieved between the complete experimental and numerical load-deflection behaviour in this study.
The capacities of each constituent of the FEC columns, such as the structural steel, concrete, and rebars, were also determined from the numerical study. Concrete is observed to provide around 57% of the total axial capacity of the column, whereas the steel I-sections contribute the rest of the capacity as well as the ductility of the overall system. The nonlinear FE model developed in this study is also used to explore the effect of concrete strength and percentage of structural steel on the behaviour of FEC columns under concentric loads. The axial capacity of FEC columns has been found to increase significantly with increasing concrete strength.
Keywords: composite, columns, experimental, finite element, fully encased, strength
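The constituent contributions discussed above can be illustrated with the common superposition (squash-load) expression for encased composite sections, N = 0.85 f'c Ac + fy As + fyr Ar. This is a generic design-code-style formula, not the paper's FE model, and the example numbers in the usage note are invented.

```python
def fec_squash_load(fc, Ac, fy, As, fyr, Ar):
    """Nominal axial (squash) capacity of a fully encased composite column
    by simple superposition of the three constituents.
    Stresses in MPa (N/mm^2), areas in mm^2, result in N.
    Returns (total capacity, concrete's share of the total)."""
    concrete = 0.85 * fc * Ac   # concrete contribution (0.85 f'c factor)
    steel = fy * As             # structural steel I-section
    rebar = fyr * Ar            # longitudinal reinforcement
    total = concrete + steel + rebar
    return total, concrete / total
```

For a 300 mm square section with a modest steel ratio (e.g. f'c = 30 MPa, Ac = 90000 mm², fy = 350 MPa, As = 5000 mm², fyr = 420 MPa, Ar = 800 mm²), the concrete share comes out near 52%, of the same order as the ~57% reported above.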
Procedia PDF Downloads 290
4630 Monitoring of Quantitative and Qualitative Changes in Combustible Material in the Białowieża Forest
Authors: Damian Czubak
Abstract:
The Białowieża Forest is a very valuable natural area, inscribed on the UNESCO World Natural Heritage list, where, due to infestation by the bark beetle (Ips typographus), Norway spruce (Picea abies) stands have deteriorated. This catastrophic scenario led to an increase in fire danger, due to the occurrence of large amounts of dead wood and grass cover as light penetrated to the bottom of the stands. In a dry state, these materials favour the outbreak and rapid spread of fire. One of the objectives of the study was to monitor the quantitative and qualitative changes of combustible material on the permanent decay plots of spruce stands from 2012-2022. In addition, the size of the area with highly flammable vegetation was monitored, and a classification of the stands of the Białowieża Forest by flammability class was made. The key factor determining the potential fire hazard of a forest is combustible material: primarily its type, quantity, moisture content, size, and spatial structure. Based on the inventory data for the forest districts in the Białowieża Forest, the average fire load and its changes over the years were calculated. The analysis was carried out taking into account changes in the health status of the stands and sanitary operations. The quantitative and qualitative assessment of fallen timber and the fire load of ground cover used the results of the 2019 and 2021 inventories. Approximately 9,000 circular plots were used for the study. An assessment was made of the amount of potential fuel, understood as ground cover vegetation and dead wood debris. In addition, monitoring of areas with vegetation posing a high fire risk was conducted using data from 2019 and 2021. All sub-areas were inventoried where vegetation posing a specific fire hazard represented at least 10% of the area, with species characteristic of that cover.
In addition to the size of the area with fire-prone vegetation, a very important element is the size of the fire load on the indicated plots. On representative plots, the biomass of the ground cover was measured over an area of 10 m², and the amount of biomass of each component was then determined. Based on the variability of ground cover in the stands, a flammability classification of the stands was developed. This classification made it possible to track changes in the flammability classes of stands over the period covered by the measurements.
Keywords: classification, combustible material, flammable vegetation, Norway spruce
Procedia PDF Downloads 93
4629 Using a GIS-Based Method for Green Infrastructure Accessibility of Different Socio-Economic Groups in Auckland, New Zealand
Authors: Jing Ma, Xindong An
Abstract:
Green infrastructure, one of the most important aspects of improving quality of life, has been a crucial element of liveability measurement. With the growing demand for a more liveable urban environment from an increasing city population, access to green infrastructure within walking distance should be taken into consideration. This article presents a study on the accessibility of green infrastructure in central Auckland (New Zealand), using a GIS-based network analysis tool to verify accessibility levels. It analyses the overall situation of green infrastructure and draws conclusions on the city’s different levels of accessibility according to the categories and distribution of facilities, which provides valuable references and guidance for future facility improvements in planning strategies.
Keywords: quality of life, green infrastructure, GIS, accessibility
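The network-analysis step described above amounts to shortest-path distances over the street graph rather than straight-line buffers. A dependency-free sketch follows; the graph encoding, node names, and the 800 m walking cutoff are our illustrative choices, not the study's parameters.

```python
import heapq

def walk_distances(edges, source):
    """Dijkstra over an undirected street graph given as
    {(node_a, node_b): length_m, ...}; returns shortest network
    distance from `source` to every reachable node."""
    graph = {}
    for (a, b), w in edges.items():
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def accessible(edges, park_node, dwellings, cutoff=800.0):
    """Dwelling nodes within `cutoff` metres network distance of a park."""
    dist = walk_distances(edges, park_node)
    return {n for n in dwellings if dist.get(n, float("inf")) <= cutoff}
```

In a GIS workflow this loop runs once per green-space entrance, and the union of reachable dwellings per socio-economic group gives the accessibility level.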
Procedia PDF Downloads 282
4628 Application of De Novo Programming Approach for Optimizing the Business Process
Authors: Z. Babic, I. Veza, A. Balic, M. Crnjac
Abstract:
The linear programming model is sometimes difficult to apply in real business situations due to its assumption of proportionality. This paper shows an example of how to use the De Novo programming approach instead of linear programming. In De Novo programming, resources are not fixed as in linear programming; resource quantities depend only on the available budget. The budget is a new, important element of the De Novo approach. Two different production situations are presented: increasing costs and quantity discounts for raw materials. The focus of this paper is on the advantages of the De Novo approach in optimizing the production plan of a company that produces souvenirs made from the famous stone of the island of Brac, one of the largest islands in Croatia.
Keywords: business process, De Novo programming, optimizing, production
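The budget-driven reformulation can be illustrated with a minimal sketch: because the resource amounts are purchased rather than fixed, the classical constraints A x ≤ b and p·b ≤ B collapse into a single budget constraint (pᵀA) x ≤ B, which has a closed-form optimum. All numbers below are invented for illustration; the paper's souvenir-production data are not reproduced here:

```python
# Hypothetical data: two products, two raw materials.
profit = [30.0, 20.0]            # unit profit c_j of each product
usage = [[4.0, 2.0],             # usage[i][j]: units of resource i per unit of product j
         [2.0, 5.0]]
price = [3.0, 2.0]               # unit purchase price p_i of each resource
budget = 1200.0                  # total budget B

# De Novo: resources are bought as needed, so A x <= b and p.b <= B
# collapse into the single constraint (p^T A) x <= B.
unit_cost = [sum(price[i] * usage[i][j] for i in range(len(price)))
             for j in range(len(profit))]      # money needed per unit of product j

# With one linear constraint, the optimum puts the whole budget into the
# product with the best profit-to-cost ratio.
ratios = [profit[j] / unit_cost[j] for j in range(len(profit))]
best = max(range(len(profit)), key=lambda j: ratios[j])
x = [0.0] * len(profit)
x[best] = budget / unit_cost[best]
total_profit = profit[best] * x[best]
print(best, x[best], total_profit)
```

The paper's situations with increasing costs or quantity discounts make the budget constraint piecewise linear, so the single-ratio shortcut above no longer applies directly; a general LP solver would then be used on the same budget formulation.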
Procedia PDF Downloads 222
4627 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain
Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma
Abstract:
In this paper, we propose an optimization-based Extreme Learning Machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularized learning algorithm, is based on generalized single-hidden-layer feed-forward neural networks (SLFNs); its hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper shows the embedding and extraction processes of the watermark with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. ELM provides a unified learning platform in which the feature mapping, that is, the mapping between the hidden layer and the output layer of the SLFN, is used for watermark embedding and extraction in a cover image. ELM has widespread application, from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very low complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on a watermarked image, even when the image is subjected to different types of geometrical and conventional attacks.
Keywords: BER, DWT, extreme learning machine (ELM), PSNR
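A minimal regularized ELM can be sketched as follows, assuming a generic regression setting (the watermark embedding itself, the DWT step, and the paper's optimization method are not reproduced): the hidden-layer parameters are drawn randomly once and never tuned, and only the output weights are solved in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, T, n_hidden=20, C=1e3):
    """Fit a regularized ELM: random feature map, closed-form output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (never tuned)
    b = rng.standard_normal(n_hidden)                 # random biases (never tuned)
    H = np.tanh(X @ W + b)                            # hidden-layer feature mapping
    # Regularized least squares: beta = (H^T H + I/C)^-1 H^T T
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy regression target standing in for the watermark mapping: y = x^2 on [0, 1]
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
T = X[:, 0] ** 2
model = elm_fit(X, T)
err = float(np.max(np.abs(elm_predict(model, X) - T)))
print(round(err, 4))
```

The single linear solve replacing iterative training is what gives ELM the low computational complexity the abstract contrasts with SVM-based schemes.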
Procedia PDF Downloads 311
4626 Circular Raft Footings Strengthened by Stone Columns under Static Loads
Authors: R. Ziaie Moayed, B. Mohammadi-Haji
Abstract:
Stone columns have been widely employed to improve the load-settlement characteristics of soft soils. The results of two small-scale displacement-controlled loading tests on stone columns were used to validate numerical finite element simulations. Additionally, a series of numerical calculations of static loading was performed on the strengthened raft footing to investigate the effect of stone columns on the bearing capacity of footings. The bearing capacity of single stone columns and of groups of stone columns under static loading is compared with that of the unimproved ground.
Keywords: circular raft footing, numerical analysis, validation, vertically encased stone column
Procedia PDF Downloads 311
4625 Combined Effect of Moving and Open Boundary Conditions in the Simulation of Inland Inundation Due to Far Field Tsunami
Authors: M. Ashaque Meah, Md. Fazlul Karim, M. Shah Noor, Nazmun Nahar Papri, M. Khalid Hossen, M. Ismoen
Abstract:
Tsunami and inundation modelling due to far-field tsunami propagation in a limited area is a very challenging numerical task because it involves many aspects, such as the formation of various types of waves and the irregularities of coastal boundaries. To compute the effect of a far-field tsunami and the extent of inland inundation along the coastal belts of the west coast of Malaysia and Southern Thailand, a formulated boundary condition and a moving boundary condition are used simultaneously. In this study, a boundary-fitted curvilinear grid system is used in order to incorporate the coastal and island boundaries accurately, as the boundaries of the model domain are curvilinear in nature and the bending is high. The tsunami response of the 26 December 2004 event along the west open boundary of the model domain is computed to simulate the effect of the far-field tsunami. Based on the data of the tsunami source at the west open boundary, a boundary condition is formulated and applied to simulate the tsunami response along the coastal and island boundaries. During the simulation process, a moving boundary condition is used instead of a fixed vertical seaside wall. The extent of inland inundation and the tsunami propagation pattern are computed. Some comparisons are carried out to test the validity of the simultaneous use of the two boundary conditions. All simulations show excellent agreement with the observational data.
Keywords: open boundary condition, moving boundary condition, boundary-fitted curvilinear grids, far-field tsunami, shallow water equations, tsunami source, Indonesian tsunami of 2004
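The idea of forcing an open boundary with a prescribed tsunami signal while marching the interior with the shallow water equations can be sketched in one dimension. This is a heavily simplified linear analogue: the depth, grid, and sinusoidal forcing are made up, and the paper's curvilinear grids and moving wet/dry boundary are not reproduced.

```python
import math

# Linear 1D shallow water on a staggered grid: eta at cell centres, u at faces.
g, h = 9.81, 100.0                      # gravity (m/s^2), still-water depth (m)
dx = 1000.0                             # grid spacing (m)
c = math.sqrt(g * h)                    # long-wave speed, ~31.3 m/s
dt = 0.5 * dx / c                       # CFL-limited time step
n = 200
eta = [0.0] * n                         # surface elevation (m)
u = [0.0] * (n + 1)                     # face velocities; u[n] = 0 is a solid wall

for step in range(300):
    t = step * dt
    # "Formulated" open boundary: prescribe the incoming tsunami signal at the
    # west edge (assumed 1 m amplitude, 600 s period).
    eta[0] = math.sin(2.0 * math.pi * t / 600.0)
    for i in range(1, n):               # momentum equation
        u[i] -= g * dt / dx * (eta[i] - eta[i - 1])
    for i in range(1, n):               # continuity equation
        eta[i] -= h * dt / dx * (u[i + 1] - u[i])

front = max(abs(x) for x in eta[50:100])   # cells the wave has already crossed
print(round(front, 3))
```

After 300 steps the forced wave has travelled roughly c·t ≈ 150 km into the domain, so the interior cells sampled above carry an O(1 m) signal while the scheme remains stable under the CFL condition.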
Procedia PDF Downloads 446
4624 The Effects of Inferior Tilt Fixation on the Glenoid Component in Reverse Shoulder Arthroplasty
Authors: Soo Min Kim, Soo-Won Chae, Soung-Yon Kim, Haea Lee, Ju Yong Kang, Juneyong Lee, Seung-Ho Han
Abstract:
Reverse total shoulder arthroplasty (RTSA) has become an effective treatment option for cuff tear arthropathy and massive, irreparable rotator cuff tears, and the indications for its use are expanding. Numerous methods for optimal fixation of the glenoid component, such as inferior overhang and inferior tilt, have been suggested to maximize initial fixation and prevent glenoid component loosening. Inferior tilt fixation of the glenoid component is expected to decrease scapular notching and to improve the stability of glenoid component fixation, because it provides the most uniform compressive forces and imparts the least tensile force and micromotion, reducing the likelihood of mechanical failure. Another study reported that glenoid component inferior tilt improved impingement-free range of motion as well as minimized scapular notching, and several authors have shown that inferior tilt of the glenoid component reduces scapular notching. However, controversy still exists in the literature regarding its importance. In this study, the influence of inferior tilt fixation on the primary stability of the glenoid component has been investigated. Finite element models were constructed from cadaveric scapulae, and glenoid components were implanted with neutral and 10° inferior tilts. Most previous biomechanical studies of glenoid component inferior tilt used a solid rigid polyurethane foam or sawbones block, not cadaveric scapulae, to evaluate the stability of the RTSA. Relative micromotions at the bone-glenoid component interface and the distribution of bone stresses under the glenoid component and around the screws were analyzed and compared between the neutral and 10° inferior tilt groups.
The contact area between bone and screws and the cut surface area of the cancellous bone exposed after reaming of the glenoid were also investigated, because cancellous and cortical bone thickness vary depending on the resection level of the inferior glenoid bone. Greater relative micromotion at the bone-glenoid component interface occurred in the 10° inferior tilt group than in the neutral tilt group, especially at the inferior area of the interface. Bone stresses under the glenoid component and around the screws were also higher in the 10° inferior tilt group, especially at the inferior third of the glenoid bone surface under the glenoid component and at the inferior scapula. Thus, inferior tilt fixation of the glenoid component may adversely affect the primary stability and longevity of the reverse total shoulder arthroplasty.
Keywords: finite element analysis, glenoid component, inferior tilt, reverse total shoulder arthroplasty
Procedia PDF Downloads 286
4623 Evaluation of Modulus of Elasticity by Non-Destructive Method of Hybrid Fiber Reinforced Concrete
Authors: Erjola Reufi, Thomas Beer
Abstract:
Plain, unreinforced concrete is a brittle material with low tensile strength, limited ductility, and little resistance to cracking. In order to improve the inherent tensile strength of concrete, there is a need for multidirectional and closely spaced reinforcement, which can be provided in the form of randomly distributed fibers. Fiber reinforced concrete (FRC) is a composite material consisting of cement, sand, coarse aggregate, water, and fibers. In this composite material, short discrete fibers are randomly distributed throughout the concrete mass. The behavioral efficiency of this composite material is far superior to that of plain concrete and of many other construction materials of equal cost. The present experimental study considers the effect of steel fibers and polypropylene fibers on the modulus of elasticity of concrete. Hook-end steel fibers of lengths 5 cm and 3 cm at volume fractions of 0.25%, 0.5%, and 1% were used, as were polypropylene fibers of lengths 12, 6, and 3 mm at volume fractions of 0.1%, 0.25%, and 0.4%. Fifteen mixtures have been prepared to evaluate the effect of fibers on the modulus of elasticity of concrete. Ultrasonic pulse velocity (UPV) and resonant frequency methods, two non-destructive testing techniques, have been used to measure the elastic properties of the fiber reinforced concrete. This study found that ultrasonic wave propagation is the most reliable, easy, and cost-effective testing technique for determining the elastic properties of the FRC mix used in this study.
Keywords: fiber reinforced concrete (FRC), polypropylene fiber, resonance, ultrasonic pulse velocity, steel fiber
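The standard relation for obtaining a dynamic elastic modulus from a pulse velocity measurement, E_d = ρ v² (1+ν)(1−2ν)/(1−ν), can be sketched as follows; the velocity, density, and Poisson's ratio are typical illustrative values, not the study's measurements:

```python
def dynamic_modulus(velocity_m_s, density_kg_m3, poisson=0.2):
    """Dynamic elastic modulus (Pa) from ultrasonic pulse velocity:
    E_d = rho * v^2 * (1 + nu) * (1 - 2*nu) / (1 - nu)."""
    nu = poisson
    return density_kg_m3 * velocity_m_s ** 2 * (1 + nu) * (1 - 2 * nu) / (1 - nu)

# Assumed values typical for sound concrete: v = 4500 m/s, rho = 2400 kg/m^3
E = dynamic_modulus(4500.0, 2400.0)
print(round(E / 1e9, 2), "GPa")   # prints 43.74 GPa
```

Because the Poisson's-ratio factor enters directly, an assumed ν = 0.2 versus a measured value is one of the main sources of scatter when UPV-derived moduli are compared with resonant-frequency results.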
Procedia PDF Downloads 302
4622 Roll Forming Process and Die Design for a Large Size Square Tube
Authors: Jinn-Jong Sheu, Cang-Fu Liang, Cheng-Hsien Yu
Abstract:
This paper proposes the cold roll forming process and die design methods for a 400 mm by 400 mm square tube with a 16 mm wall thickness. The tubular blank made by cold roll forming is 508 mm in diameter. The square tube roll forming process was designed considering the layout of rolls and the compression ratio distribution for each stand. The final tube corner radius and the edge straightness at the front end of the tube are to be controlled according to the tube specification. A five-stand forming design using four rolls at each stand was proposed to establish the base reference of square tube roll forming quality. Designs with different numbers of passes and rolls were proposed and compared with the base design in order to assess the feasibility of increasing the pass number to improve square tube quality. The proposed roll forming processes were simulated using FEM analysis. The thickness variations of the corner and edge areas were examined, and the maximum loads and torques of each stand were calculated to study the power consumption of the roll forming machine. The simulation results showed that, for the base design, the square tube thickness variations and the concavity of the edge are acceptable within the JIS tube specifications, but the maximum loads and torques are very high. By changing the layout and the number of rolls, it was possible to obtain better tube geometry and to decrease the maximum load and torque of each stand. This paper has shown the feasibility of designing the roll forming process and the die layout using FEM simulation. The information obtained is helpful for the design of roll forming machines for large-size square tube making.
Keywords: cold roll forming, FEM analysis, roll forming die design, tube roll forming
Procedia PDF Downloads 311
4621 Applying the CA Systems in Education Process
Authors: A. Javorova, M. Matusova, K. Velisek
Abstract:
The article summarizes experience with teaching methodologies for laboratory technical subjects using a number of software products. The main aim is to modernize the teaching process in accordance with today's requirements, based on information technology. The attractiveness and effectiveness of study are increased by the introduction of CA technologies into the learning process. This paper discusses the areas where individual CA systems are used; the environments using CA systems are briefly presented in each chapter.
Keywords: education, CA systems, simulation, technology
Procedia PDF Downloads 396
4620 First Principle-Based DFT and Microkinetic Simulation of Co-Conversion of Carbon Dioxide and Methane on Single Iridium Atom Doped Hematite with Surface Oxygen Defect
Authors: Kefale W. Yizengaw, Delele Worku Ayele, Jyh-Chiang Jiang
Abstract:
The catalytic co-conversion of CO₂ and CH₄ to value-added compounds has become one of the promising approaches to addressing global climate change while yielding valuable fuels. The direct co-conversion of CO₂ and CH₄ to value-added compounds is attractive but tremendously challenging because of both molecules' thermodynamic stability and kinetic inertness. In the present study, a hematite (110) surface model catalyst doped with a single iridium atom and containing a single oxygen atom defect, which enables direct C–O coupling based on the simultaneous activation of CO₂ and CH₄, was studied using density functional theory plus U (DFT + U) calculations. The presence of dual active sites on the Ir/Fe₂O₃(110)-OV surface catalyst enables CO₂ activation at the Ir site and CH₄ activation at the defect site. The electron analysis for the co-adsorption of CO₂ and CH₄ describes the electron redistribution on the surface and clearly shows the synergistic effect of simultaneous CO₂ and CH₄ activation on the Ir/α-Fe₂O₃(110)-OV surface. The microkinetic analysis shows that the dissociation of CH₄ to CH₃* and H* plays a key role in the C–O coupling. The coverage analysis for the intermediate products in the microkinetic simulation indicates that C–O coupling is the rate-limiting step. Finally, after the CH₃O* intermediate species is produced, the radical hydrogen species spontaneously diffuses to the CH₃O* intermediate to form methanol at around 490 K. The present work provides mechanistic and kinetic insights into the direct C–O coupling of CO₂ and CH₄, which could help design more efficient catalysts.
Keywords: co-conversion, C–O coupling, doping, oxygen vacancy, microkinetic
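The kind of screening a microkinetic simulation performs, estimating the temperature at which an elementary surface step becomes fast enough to proceed, can be sketched with a simple Arrhenius rate. The prefactor and the 1.25 eV barrier below are generic assumed values, not the paper's DFT numbers; they merely illustrate how a barrier of that magnitude turns on near the ~490 K the abstract reports for methanol formation.

```python
import math

kB = 8.617333e-5   # Boltzmann constant in eV/K

def rate(T, Ea_eV, prefactor=1e13):
    """Arrhenius rate (1/s) for an elementary step with barrier Ea_eV."""
    return prefactor * math.exp(-Ea_eV / (kB * T))

def onset_temperature(Ea_eV, threshold=1.0):
    """Lowest temperature (scanned in 1 K steps) where the rate exceeds threshold."""
    T = 100.0
    while rate(T, Ea_eV) < threshold:
        T += 1.0
    return T

print(onset_temperature(1.25))   # a ~1.25 eV barrier becomes active near 485 K
```

Setting the rate equal to 1 s⁻¹ gives T ≈ Ea / (kB ln ν), the usual rule of thumb linking a DFT barrier to an experimental onset temperature.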
Procedia PDF Downloads 115
4619 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods
Authors: Dario Milani, Guido Morgenthal
Abstract:
Fluid dynamic computation of wind-induced forces on bluff bodies, e.g. light flexible civil structures or airplane wings at high incidence near the ground, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the use of small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One solution method for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected with the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion, compact discretization (as the vorticity is strongly localized), implicit handling of the free-space boundary conditions typical of this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM method, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails, or fairings.
For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization might become prohibitively expensive even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution, without substantially increasing the global computational cost, by computing a correction of the particle-particle interaction in some regions of interest. In this paper different strategies are presented to extend the conventional VPM method so as to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal sub-stepping, to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to control both the global and the local number of particles. Finally, these methods are applied to a test case, and the resulting improvements in the efficiency and accuracy of the proposed extensions, together with their relevant applications, are presented.
Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method
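The O(Np²) particle-particle interaction at the heart of the classical method can be sketched as a direct regularized Biot-Savart summation in 2D. This is a generic textbook form, not the authors' implementation; the smoothing parameter eps is an assumption standing in for a proper vortex-blob kernel.

```python
import math

def induced_velocity(px, py, particles, eps=1e-3):
    """Velocity at (px, py) induced by vortex particles [(x, y, gamma), ...].

    Direct summation: evaluating this at every particle position gives the
    O(Np^2) cost the classical VPM incurs per time step.
    """
    u = v = 0.0
    for (x, y, gamma) in particles:
        dx, dy = px - x, py - y
        r2 = dx * dx + dy * dy + eps * eps   # eps regularizes the singular kernel
        u += -gamma * dy / (2.0 * math.pi * r2)
        v += gamma * dx / (2.0 * math.pi * r2)
    return u, v

# A single unit-circulation vortex at the origin induces a purely tangential
# velocity of magnitude 1/(2*pi*r) at unit distance:
u, v = induced_velocity(1.0, 0.0, [(0.0, 0.0, 1.0)])
print(round(u, 6), round(v, 6))
```

The sub-stepping and re-discretization strategies of the paper act on exactly this loop: fewer or better-placed particles shrink the inner sum, and local corrections refine it only where the flow detail demands.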
Procedia PDF Downloads 262
4618 Solving Stochastic Eigenvalue Problem of Wick Type
Authors: Hassan Manouzi, Taous-Meriem Laleg-Kirati
Abstract:
In this paper we study mathematically the eigenvalue problem for a stochastic elliptic partial differential equation of Wick type. Using the Wick product and the Wiener-Ito chaos expansion, the stochastic eigenvalue problem is reformulated, by means of the Fredholm alternative, as a system consisting of an eigenvalue problem for a deterministic partial differential equation together with a family of elliptic partial differential equations. To reduce the computational complexity of this system, we use a decomposition-coordination method. Once this approximation is performed, the statistics of the numerical solution can be easily evaluated.
Keywords: eigenvalue problem, Wick product, SPDEs, finite element, Wiener-Ito chaos expansion
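The reformulation the abstract describes can be written schematically as follows; the notation is generic, under standard Wick-calculus assumptions, and is a sketch rather than the paper's exact system:

```latex
% Wick-type stochastic eigenvalue problem and its chaos-expansion reduction
\begin{align*}
  &-\nabla \cdot \bigl(\kappa(x,\omega) \diamond \nabla u(x,\omega)\bigr)
     = \lambda \diamond u(x,\omega),\\
  &u(x,\omega) = \sum_{\alpha} u_\alpha(x)\, H_\alpha(\omega), \qquad
   \kappa(x,\omega) = \sum_{\alpha} \kappa_\alpha(x)\, H_\alpha(\omega),
\end{align*}
```

where $\diamond$ denotes the Wick product and $H_\alpha$ the Wiener-Ito chaos basis. Because the Wick product acts by convolution on the chaos coefficients, matching the coefficients of each $H_\alpha$ yields a deterministic eigenvalue problem for the mean-field coefficient $u_0$ ($\alpha = 0$) plus a lower-triangular family of elliptic problems for the higher-order coefficients $u_\alpha$, which is the system the decomposition-coordination method is then applied to.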
Procedia PDF Downloads 359
4617 Hyperelastic Constitutive Modelling of the Male Pelvic System to Understand the Prostate Motion, Deformation and Neoplasms Location with the Influence of MRI-TRUS Fusion Biopsy
Authors: Muhammad Qasim, Dolors Puigjaner, Josep Maria López, Joan Herrero, Carme Olivé, Gerard Fortuny
Abstract:
Computational modeling of the human pelvis using the finite element (FE) method has become extremely important for understanding the mechanics of prostate motion and deformation when a transrectal ultrasound (TRUS) guided biopsy is performed. The number of reliable and validated hyperelastic constitutive FE models of the male pelvic region is limited, and existing models do not precisely describe the anatomical behavior of the pelvic organs, mainly of the prostate and the location of its neoplasms. The motion and deformation of the prostate during TRUS-guided biopsy make it difficult to know the location of potential lesions in advance. When using this procedure, practitioners can only provide rough estimates of the lesion locations; consequently, multiple biopsy samples are required to target one single lesion. In this study, the whole pelvis model (comprising the rectum, bladder, pelvic muscles, prostate transitional zone (TZ), and peripheral zone (PZ)) is used for the simulation. An isotropic hyperelastic approach (the Signorini model) was used for all the soft tissues except the vesical muscles, which are assumed to have a linear elastic behavior due to the lack of experimental data for determining the constants involved in hyperelastic models. The tissue and organ geometry of the 3D meshes is taken from the existing literature, and the biomechanical parameters were obtained under the different testing techniques described in the literature. The acquired parametric values for the uniaxial stress/strain data are used in the Signorini model to examine the anatomical behavior of the pelvis model. Five mesh nodes representing small prostate lesions are selected prior to biopsy, and each lesion's final position is tracked when a TRUS probe force of 30 N is applied to the inside of the rectum wall. The open-source software Code_Aster is used for the numerical simulations. Moreover, the overall effects of pelvic organ deformation during TRUS-guided biopsy were demonstrated.
The deformation of the prostate and the displacement of the neoplasms showed that the material properties assigned to the organs altered the resulting lesion migration parametrically. As a result, the distance traveled by these lesions ranged between 3.77 and 9.42 mm. The lesion displacement and organ deformation are compared and analyzed with respect to our previous study, in which we used linear elastic properties for all pelvic organs. Furthermore, axial and sagittal slices from Magnetic Resonance Imaging (MRI) and TRUS images are compared visually with our preliminary study.
Keywords: code-aster, magnetic resonance imaging, neoplasms, transrectal ultrasound, TRUS-guided biopsy
Procedia PDF Downloads 87
4616 A Study on Accident Result Contribution of Individual Major Variables Using Multi-Body System of Accident Reconstruction Program
Authors: Donghun Jeong, Somyoung Shin, Yeoil Yun
Abstract:
A large-scale traffic accident refers to an accident in which more than three people die or more than thirty people are killed or injured. In order to prevent a large-scale traffic accident from causing a great loss of life, and to establish effective improvement measures, it is important to analyze accident situations in depth and understand the effects of the major accident variables on an accident. This study aims to analyze the contribution of individual accident variables to accident results, based on the accurate reconstruction of traffic accidents using the Multi-Body system of PC-Crash, an accident reconstruction program, and on the simulation of each scenario. The Multi-Body (MB) system of PC-Crash is used for multi-body accident reconstruction, showing motions in diverse directions that could not be approached previously; it designs and reproduces a body form with realistic motions using several connected bodies. Targeting the 'freight truck cargo drop accident around the Changwon Tunnel' that happened in November 2017, this study conducted a simulation of the cargo drop accident and analyzed the contribution of the individual accident variables. On the basis of driving speed, cargo load, and stacking method, six scenarios were devised. The simulation analysis showed that right before the accident the freight truck was driven at a speed of 118 km/h (speed limit: 70 km/h), carried 196 oil containers with a weight of 7,880 kg (maximum load: 4,600 kg), and was not fully equipped with the anchoring equipment that could have prevented a drop of cargo. The vehicle speed, cargo load, and cargo anchoring equipment were the major accident variables, and the accident contribution analysis results for the individual variables are as follows. When the freight truck obeyed only the speed limit, the scattering distance of the oil containers decreased by 15%, and the number of dropped oil containers decreased by 39%.
When the freight truck obeyed only the cargo load limit, the scattering distance of the oil containers decreased by 5%, and the number of dropped oil containers decreased by 34%. When the freight truck obeyed both the speed limit and the cargo load limit, the scattering distance of the oil containers fell by 38%, and the number of dropped oil containers fell by 64%. The analysis of each scenario revealed that the overspeed and excessive cargo load of the freight truck contributed to the dispersion of accident damage. For a truck equipped so that cargo could not fall, a different type of accident occurred when it was driven too fast with an excessive cargo load, and when the freight truck obeyed both the speed limit and the cargo load limit, the possibility of causing an accident was lowest.
Keywords: accident reconstruction, large-scale traffic accident, PC-Crash, MB system
Procedia PDF Downloads 200
4615 Comparison of the Effect of Strand Diameters Providing Beam-to-Column Connection
Authors: Mustafa Kaya
Abstract:
In this study, the effect of the diameters of the pre-stressed strands providing the beam-to-column connections was investigated from both experimental and analytical aspects. In the experimental studies, the strength and stiffness capacities of the precast specimens were compared. The precast specimen with strands of 15.24 mm reached a strength equal to that of the reference specimen. Parallel results were obtained in the analytical studies with respect to strength and behavior, but in terms of stiffness it was seen that the initial stiffness of the analytical models was lower than that of the tested specimens.
Keywords: post-tensioned connections, beam to column connections, finite element method, strand diameter
Procedia PDF Downloads 334
4614 Thermal Performance of an Air Heating Storing System
Authors: Mohammed A. Elhaj, Jamal S. Yassin
Abstract:
Owing to the lack of synchronization between solar energy availability and the heat demand of a specific application, an energy storing sub-system is necessary to maintain the continuity of the thermal process. The present work deals with an active solar heating and storing system in which an air solar collector is connected to a storing unit, from which the energy is distributed and provided to the heated space in a controlled manner. The solar collector is a box-type absorber in which the air flows between a number of vanes attached between the collector absorber and the bottom plate. This design can improve the efficiency by increasing the heat transfer area exposed to the flowing air, as well as the heat conduction through the metal vanes from the top absorbing surface. The storing unit is of the packed bed type, in which the air coming from the air collector is circulated through the bed in order to add or remove energy through the charging and discharging processes, respectively. The major advantage of packed bed storage is its high degree of thermal stratification. The numerical solution of the packed bed energy storage divides the bed into a number of equal segments of bed particles and solves the energy equation for each segment depending on its neighbors. The design and performance parameters studied in the developed simulation model include particle size, void fraction, etc. The final results showed that the collector efficiency fluctuated between 55% and 61% in the winter season (January) under the climatic conditions of Misurata in Libya. A maximum temperature of 52 °C is attained at the top of the bed, while the lowest is 25 °C at the end of the charging process of hot air into the bed. This distribution can satisfy the required load for most house heating in Libya.
Keywords: solar energy, thermal process, performance, collector, packed bed, numerical analysis, simulation
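The segment-by-segment numerical scheme described above can be sketched with an explicit marching loop: the bed is split into equal segments, and each segment's solid temperature is updated from the heat the air stream deposits while crossing it. The segment count, per-segment NTU, and capacity ratio are invented for illustration; only the 52 °C inlet and 25 °C initial temperatures echo the abstract.

```python
import math

n_seg = 20
T_solid = [25.0] * n_seg        # initial bed temperature (degC)
T_in = 52.0                     # hot air arriving from the collector (degC)
ntu_seg = 0.3                   # heat-transfer units per segment (assumed)
cap_ratio = 0.02                # (air heat capacity per step) / (segment capacity), assumed

for step in range(500):         # charging process: march the energy balance in time
    T_air = T_in
    for i in range(n_seg):
        # air leaves segment i having relaxed toward the solid temperature
        T_out = T_solid[i] + (T_air - T_solid[i]) * math.exp(-ntu_seg)
        # the energy the air lost is gained by the solid of segment i
        T_solid[i] += cap_ratio * (T_air - T_out)
        T_air = T_out           # this air enters the next segment downstream

print(round(T_solid[0], 1), round(T_solid[-1], 1))
```

Because each segment only sees air that upstream segments have already cooled, the top of the bed charges first and the bottom lags behind, which is exactly the thermal stratification the abstract identifies as the packed bed's major advantage.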
Procedia PDF Downloads 331