Search results for: transient simulation
485 Reacting Numerical Simulation of Axisymmetric Trapped Vortex Combustors for Methane, Propane and Hydrogen
Authors: Heval Serhat Uluk, Sam M. Dakka, Kuldeep Singh, Richard Jefferson-Loveday
Abstract:
The carbon footprint of the aviation sector accounted for 3.8% of total emissions in 2017 and is expected to triple by 2050. New combustion approaches and fuel types are necessary to prevent this. This paper focuses on using propane, methane, and hydrogen as fuel replacements for kerosene and implements a trapped vortex combustor design to increase efficiency. Reacting simulations were conducted for an axisymmetric trapped vortex combustor to investigate the static pressure drop, combustion efficiency and pattern factor for cavity aspect ratios of 0.3, 0.6 and 1 and air flow velocities of 14 m/s, 28 m/s and 42 m/s. Propane, methane and hydrogen are used as alternative fuels. The combustion model was anchored on a swirl flame configuration with an emphasis on high-fidelity boundary conditions, with favorable results from the eddy dissipation model implementation. The Reynolds-Averaged Navier-Stokes (RANS) k-ε turbulence model was used for the validation effort. A grid independence study was conducted for the three-dimensional model to reduce computational time. Preliminary results for a 24 m/s air flow velocity provided a temperature profile inside the cavity close to that of the experimental study. The investigation will examine the effect of air flow velocity and cavity aspect ratio on the combustion efficiency, pattern factor and static pressure drop in the combustor. A comparison among pure methane, propane and hydrogen will be conducted to investigate their suitability for trapped vortex combustors and establish their advantages and disadvantages as fuel replacements. The study is therefore one of the milestones towards achieving zero (or reduced) carbon emissions by 2050.
Keywords: computational fluid dynamics, aerodynamic, aerospace, propulsion, trapped vortex combustor
Procedia PDF Downloads 90
484 Optimizing Wind Turbine Blade Geometry for Enhanced Performance and Durability: A Computational Approach
Authors: Nwachukwu Ifeanyi
Abstract:
Wind energy is a vital component of the global renewable energy portfolio, with wind turbines serving as the primary means of harnessing this abundant resource. However, the efficiency and stability of wind turbines remain critical challenges in maximizing energy output and ensuring long-term operational viability. This study proposes a comprehensive approach utilizing computational aerodynamics and aeromechanics to optimize wind turbine performance across multiple objectives. The proposed research aims to integrate advanced computational fluid dynamics (CFD) simulations with structural analysis techniques to enhance the aerodynamic efficiency and mechanical stability of wind turbine blades. By leveraging multi-objective optimization algorithms, the study seeks to simultaneously optimize aerodynamic performance metrics such as lift-to-drag ratio and power coefficient while ensuring structural integrity and minimizing fatigue loads on the turbine components. Furthermore, the investigation will explore the influence of various design parameters, including blade geometry, airfoil profiles, and turbine operating conditions, on the overall performance and stability of wind turbines. Through detailed parametric studies and sensitivity analyses, valuable insights into the complex interplay between aerodynamics and structural dynamics will be gained, facilitating the development of next-generation wind turbine designs. Ultimately, this research endeavours to contribute to the advancement of sustainable energy technologies by providing innovative solutions to enhance the efficiency, reliability, and economic viability of wind power generation systems. The findings have the potential to inform the design and optimization of wind turbines, leading to increased energy output, reduced maintenance costs, and greater environmental benefits in the transition towards a cleaner and more sustainable energy future.
Keywords: computation, robotics, mathematics, simulation
Procedia PDF Downloads 58
483 Modelling Social Influence and Cultural Variation in Global Low-Carbon Vehicle Transitions
Authors: Hazel Pettifor, Charlie Wilson, David Mccollum, Oreane Edelenbosch
Abstract:
Vehicle purchase is a technology adoption decision that will strongly influence future energy and emission outcomes. Global integrated assessment models (IAMs) provide valuable insights into the medium- and long-term effects of socio-economic development, technological change and climate policy. In this paper, we present a unique and transparent approach for improving the behavioural representation of these models by incorporating social influence effects to more accurately represent consumer choice. This work draws together strong conceptual thinking and robust empirical evidence to introduce heterogeneous and interconnected consumers who vary in their aversion to new technologies. Focussing on vehicle choice, we conduct novel empirical research to parameterise consumer risk aversion and how it is shaped by social and cultural influences. We find robust evidence for social influence effects, and variation between countries as a function of cultural differences. We then formulate an approach to modelling social influence which is implementable in both simulation- and optimisation-type models. We use two global integrated assessment models (IMAGE and MESSAGE) to analyse four scenarios that introduce social influence and cultural differences between regions. These scenarios allow us to explore the interactions between consumer preferences and social influence. We find that incorporating social influence effects into global models accelerates the early deployment of electric vehicles and stimulates more widespread deployment across adopter groups. Incorporating cultural variation leads to significant differences in deployment between culturally divergent regions such as the USA and China. Our analysis significantly extends the ability of global integrated assessment models to provide policy-relevant analysis grounded in real-world processes.
Keywords: behavioural realism, electric vehicles, social influence, vehicle choice
Procedia PDF Downloads 187
482 Nutrigenetic and Bioinformatic Analysis of Rice Bran Bioactives for the Treatment of Lifestyle Related Disease Diabetes and Hypertension
Authors: Md. Alauddin, Md. Ruhul Amin, Md. Omar Faruque, Muhammad Ali Siddiquee, Zakir Hossain Howlader, Mohammad Asaduzzaman
Abstract:
Diabetes and hypertension are major lifestyle-related diseases. α-amylase and the angiotensin-converting enzyme (ACE) are the key enzymes that regulate diabetes and hypertension, respectively. The aim was to develop a drug candidate for the treatment of diabetes and hypertension. The rice bran (RB) sample (Oryza sativa; BRRI-Dhan-84) was collected from the Bangladesh Rice Research Institute (BRRI), and rice bran proteins were isolated and hydrolyzed by the enzymes alcalase and trypsin. In vivo experiments suggested that rice bran bioactives regulate the expression of several key gluconeogenesis- and lipogenesis-regulating genes, such as glucose-6-phosphatase, phosphoenolpyruvate carboxykinase, and fatty acid synthase. These genes are connected to the regulation of glucose levels and lipid profiles, and the bioactives also act as anti-inflammatory agents. Molecular docking, bioinformatics and in vitro experiments were performed. We found that rice bran protein hydrolysates significantly (p < 0.05) influence the peptide concentration in the case of trypsin, alcalase, and (trypsin + alcalase) digestion. The in vitro analysis found that the protein hydrolysate significantly (p < 0.05) reduced diabetes- and hypertension-related enzyme activity as well as oxidative stress. A molecular docking study showed that the YY and IP peptides have strong binding affinities to the active sites of the ACE enzyme and α-amylase, at -7.8 kcal/mol and -6.2 kcal/mol, respectively. Molecular dynamics (MD) simulation and SwissADME data analysis showed low toxicity risk, good physicochemical properties, pharmacokinetics, and drug-likeness, with drug scores of 0.45 and 0.55 for the YY and IP peptides, respectively. Thus, rice bran bioactives could be good candidates for the treatment of diabetes and hypertension.
Keywords: anti-hypertensive and anti-hyperglycemic, anti-oxidative, bioinformatics, in vitro study, rice bran proteins and peptides
Procedia PDF Downloads 61
481 Particle Swarm Optimization Based Vibration Suppression of a Piezoelectric Actuator Using Adaptive Fuzzy Sliding Mode Controller
Authors: Jin-Siang Shaw, Patricia Moya Caceres, Sheng-Xiang Xu
Abstract:
This paper aims to integrate the particle swarm optimization (PSO) method with an adaptive fuzzy sliding mode controller (AFSMC) to achieve vibration attenuation in a piezoelectric actuator subject to base excitation. The piezoelectric actuator is a complicated system made of ferroelectric materials, and its performance can be affected by a nonlinear hysteresis loop, unknown system parameters and external disturbances. In this study, an adaptive fuzzy sliding mode controller is proposed for vibration control of the system: the fuzzy sliding mode controller is designed to tackle the unknown parameters and external disturbances, and the adaptive algorithm fine-tunes the controller to ensure error convergence. The particle swarm optimization method is used to find the optimal controller parameters for the piezoelectric actuator. PSO starts with a population of random possible solutions, called particles. The particles move through the search space with dynamically adjusted speed and direction that change according to their historical behavior, allowing the values of the particles to quickly converge towards the best solutions for the proposed problem. In this paper, an initial set of controller parameters is applied to the piezoelectric actuator, which is subject to resonant base excitation with large-amplitude vibration. The resulting vibration suppression is about 50%. PSO is then applied to search for an optimal controller in the neighborhood of this initial controller. The optimal fuzzy sliding mode controller found by PSO indeed improves vibration attenuation to 97.8%. Finally, the adaptive version of the fuzzy sliding mode controller is adopted, further improving vibration suppression. Simulation results verify the performance of the adaptive controller with 99.98% vibration reduction.
In other words, the vibration of the piezoelectric actuator subject to resonant base excitation can be virtually eliminated using this PSO-based adaptive fuzzy sliding mode controller.
Keywords: adaptive fuzzy sliding mode controller, particle swarm optimization, piezoelectric actuator, vibration suppression
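The PSO search loop described in the abstract above can be sketched in a few lines. This is a minimal illustrative implementation, not the authors' code: the cost function standing in for the vibration-attenuation objective, the swarm size, and the inertia/acceleration coefficients (w, c1, c2) are all assumptions of the sketch.

```python
import random

def pso(cost, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization over a box-bounded search space."""
    dim = len(bounds)
    # Initialize particle positions uniformly within the bounds, zero velocities.
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]  # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Move and clamp to the search box.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In the paper's setting, `cost` would be a simulation run returning the residual vibration amplitude for a candidate set of AFSMC parameters; here any smooth test function (e.g. a sphere) exercises the same loop.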
Procedia PDF Downloads 146
480 FEM Simulation of Tool Wear and Edge Radius Effects on Residual Stress in High Speed Machining of Inconel718
Authors: Yang Liu, Mathias Agmell, Aylin Ahadi, Jan-Eric Stahl, Jinming Zhou
Abstract:
Tool wear and tool geometry have significant effects on the residual stresses in components produced by high-speed machining. In this paper, a Coupled Eulerian-Lagrangian (CEL) model is adopted to investigate the residual stress in high-speed machining of Inconel718 with a CBN170 cutting tool. The results show that the mesh with the smallest element size of 5 μm yields cutting forces and chip morphology in close agreement with the experimental data. Analyses of thermal and mechanical loading are performed to study the effect of segmented chip morphology on the machined surface topography and residual stress distribution. The effects of cutting edge radius and flank wear on residual stress formation and distribution in the workpiece were also investigated. It is found that the temperature within 100 μm of the machined surface increases drastically due to the greater frictional heat generated as the tool-workpiece contact area grows with larger edge radius and flank wear. As the depth increases further, the temperature drops rapidly in all cases due to the low conductivity of Inconel718. Consequently, higher and deeper tensile residual stress is generated in the superficial layer. Furthermore, an increased depth of plastic deformation and compressive residual stress is noticed in the subsurface, which is attributed to the reduction of the yield strength under the thermal effect. Besides, the ploughing effect produced by a larger tool edge radius contributes more than flank wear. The variations in the magnitude of the compressive residual stress caused by varying edge radius and flank wear show opposite trends, depending on the magnitudes of the ploughing and friction pressure acting on the machined surface.
Keywords: Coupled Eulerian Lagrangian, segmented chip, residual stress, tool wear, edge radius, Inconel718
Procedia PDF Downloads 146479 Computational Fluid Dynamicsfd Simulations of Air Pollutant Dispersion: Validation of Fire Dynamic Simulator Against the Cute Experiments of the Cost ES1006 Action
Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere
Abstract:
Following in-house objectives, the Central Laboratory of the Paris Police Prefecture (LCPP) conducted a general review of the models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, LCPP postulated that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and evaluation of FDS in the frame of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. In this paper, the CUTE dataset, carried out in the city of Hamburg, and its mock-up have been used. We have compared FDS results with wind tunnel measurements from the CUTE trials on the one hand, and with the results of the models involved in the COST Action on the other. The most time-consuming part of creating input data for simulations is the transfer of obstacle geometry information to the format required by FDS. Thus, we have developed Python codes to automatically convert building and topographic data to the FDS input file. To evaluate the predictions of FDS against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE) and the fraction of predictions within a factor of two of observations (FAC2). Like the CFD models tested in the COST Action, FDS demonstrates good agreement with measured concentrations. Furthermore, the metrics assessment indicates that FB and NMSE fall within acceptable tolerances.
Keywords: numerical simulations, atmospheric dispersion, COST ES1006 action, CFD model, CUTE experiments, wind tunnel data, numerical results
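The three validation metrics named above (FB, NMSE, FAC2) have standard definitions over paired observed/predicted concentrations, sketched below assuming strictly positive values, as is usual for concentration data. This is an illustrative implementation, not the authors' code.

```python
def fb(obs, pred):
    """Fractional bias: (mean_obs - mean_pred) / (0.5 * (mean_obs + mean_pred))."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return (mo - mp) / (0.5 * (mo + mp))

def nmse(obs, pred):
    """Normalized mean square error: mean((obs - pred)^2) / (mean_obs * mean_pred)."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    return mse / (mo * mp)

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of the observations."""
    ok = sum(1 for o, p in zip(obs, pred) if 0.5 <= p / o <= 2.0)
    return ok / len(obs)
```

A perfect model gives FB = 0, NMSE = 0 and FAC2 = 1; typical acceptance criteria in urban dispersion studies are bands around these ideals.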
Procedia PDF Downloads 133
478 Monetary Evaluation of Dispatching Decisions in Consideration of Choice of Transport
Authors: Marcel Schneider, Nils Nießen
Abstract:
Microscopic simulation programs enable the description of both railway operations and the preceding timetabling process. Occupation conflicts are often solved based on defined train priorities at both process levels. These conflict resolutions produce knock-on delays for the trains involved. The sum of knock-on delays is commonly used to evaluate the quality of railway operations; it is either compared to an acceptable level of service, or the delays are evaluated economically by linear monetary functions. It is impossible to properly evaluate dispatching decisions without a well-founded objective function. This paper presents a new approach for the evaluation of dispatching decisions. It uses models of choice of transport and considers the behaviour of the end customers. These models evaluate the knock-on delays in more detail than linear monetary functions and take other competing modes of transport into account. The new approach couples a microscopic model of railway operations with a macroscopic model of choice of transport. It will first be implemented for the railway operations process, but it can also be used for timetabling. The evaluation considers the possibility that end customers change over to other transport modes. The new approach first addresses rail and road transport, but it can also be extended to air transport. The split between modes is described by the modal split. The reactions of the end customers affect the revenues of the railway undertakings. Different travel purposes entail different payment reserves and tolerances towards delays. Longer journey times cause additional costs besides revenue changes. These costs depend either on time or on track usage and arise from the circulation of workers and vehicles. Only the variable values are summarised in the contribution margin, which is the basis for the monetary evaluation of the delays.
The contribution margin is calculated for different resolution decisions of the same conflict, and the conflict resolution is improved until the monetary loss is minimised. The iterative process thus determines an optimal conflict resolution by observing the change in the contribution margin. Furthermore, a monetary value can be determined for each dispatching decision.
Keywords: choice of transport, knock-on delays, monetary evaluation, railway operations
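The evaluation logic described above can be sketched as follows: each candidate resolution of a conflict yields knock-on delays per train, these are priced into a monetary loss (a time-dependent cost plus, hypothetically, lost revenue once a delay is long enough to push customers to another mode), and the resolution with the smallest loss is chosen. All names, thresholds and cost figures below are illustrative assumptions, not values from the paper.

```python
def monetary_loss(delays, value_per_min, shift_threshold=15.0, shift_penalty=50.0):
    """Hypothetical loss function: a linear time-dependent cost per train, plus
    a flat revenue penalty when a delay exceeds the threshold at which end
    customers are assumed to switch to a competing mode of transport."""
    loss = 0.0
    for train, delay in delays.items():
        loss += delay * value_per_min[train]      # time-dependent cost
        if delay > shift_threshold:               # modal shift: lost revenue
            loss += shift_penalty
    return loss

def best_resolution(candidates, value_per_min):
    """Pick the conflict resolution (by name) with the minimal monetary loss."""
    return min(candidates, key=lambda name: monetary_loss(candidates[name], value_per_min))
```

In the paper's framework the loss function would come from the choice-of-transport model and the contribution margin, not from fixed constants; the selection step, however, is the same argmin over candidate resolutions.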
Procedia PDF Downloads 328
477 A Review of Benefit-Risk Assessment over the Product Lifecycle
Authors: M. Miljkovic, A. Urakpo, M. Simic-Koumoutsaris
Abstract:
Benefit-risk assessment (BRA) is a valuable tool applied at multiple stages of a medicine's lifecycle, and this assessment can be conducted in a variety of ways. The aim was to summarize current BRA methods used during approval decisions and in post-approval settings and to identify possible future directions. Relevant reviews, recommendations, and guidelines published in the medical literature and by regulatory agencies over the past five years have been examined. BRA involves the review of two dimensions: benefits (determined mainly by therapeutic efficacy) and risks (comprising the safety profile of a drug). Regulators, industry, and academia have developed various approaches, ranging from descriptive textual (qualitative) to decision-analytic (quantitative) models, to facilitate the BRA of medicines during the product lifecycle (from Phase I trials, to the authorization procedure, post-marketing surveillance and health technology assessment for inclusion in public formularies). These approaches can be classified into the following categories: stepwise structured approaches (frameworks); measures for benefits and risks that are usually endpoint-specific (metrics); simulation techniques and meta-analysis (estimation techniques); and utility survey techniques to elicit stakeholders' preferences (utilities). All these approaches share two common goals: to assist the analysis and to improve the communication of decisions, but each is subject to its own specific strengths and limitations. Before using any method, its utility, complexity, the extent to which it is established, and the ease of interpreting its results should be considered. Despite widespread and long-standing use, BRA is subject to debate, suffers from a number of limitations, and is still under development.
The use of formal, systematic structured approaches to BRA for regulatory decision-making, and of quantitative methods to support BRA during the product lifecycle, is standard practice in medicine and subject to continuous improvement and modernization, not only in methodology but also in cooperation between organizations.
Keywords: benefit-risk assessment, benefit-risk profile, product lifecycle, quantitative methods, structured approaches
Procedia PDF Downloads 154
476 Practical Skill Education for Doctors in Training: Economical and Efficient Methods for Students to Receive Hands-on Experience
Authors: Nathaniel Deboever, Malcolm Breeze, Adrian Sheen
Abstract:
Basic surgical and suturing techniques are a fundamental requirement for all doctors. In order to gain confidence and competence, doctors in training need sufficient teaching and, just as importantly, practice. Young doctors with an adequate level of expertise in these simple surgical skills, which are often used in the Emergency Department, can help alleviate some of the pressure during a busy evening. Unfortunately, learning these skills can be quite difficult during medical school or even during the junior doctor years. The aim of this project was to train medical students attending the University of Sydney's Nepean Clinical School through a series of workshops highlighting practical skills, with the hope of further extending this program to junior doctors in the hospital. The sessions taught basic skills through tutorials and demonstrations, and then cemented these proficiencies with practical sessions. In such an endeavour, it is fundamental to employ models that appropriately resemble what students will encounter in the clinical setting. The sustainability of the workshops is similarly important to the continuity of such a program. To address both challenges, the authors have developed models including suturing platforms, knot-tying and vessel-ligation stations, shave and punch biopsy models, and an ophthalmologic foreign body device. The unique aspect of this work is the use of hands-on teaching sessions to address a gap in the doctors-in-training and junior doctor curriculum. This poster presents our approaches to creating models that do not use animal products and therefore require neither particular facilities nor special disposal procedures. Covering numerous skills that would benefit all young doctors, these models are easily replicable and affordable.
This work allows for countless sessions at low cost, providing enough practice for students to perform these skills confidently, as shown through attendee questionnaires.
Keywords: medical education, surgical models, surgical simulation, surgical skills education
Procedia PDF Downloads 157
475 Bimetallic MOFs Based Membrane for the Removal of Heavy Metal Ions from the Industrial Wastewater
Authors: Muhammad Umar Mushtaq, Muhammad Bilal Khan Niazi, Nouman Ahmad, Dooa Arif
Abstract:
Apart from organic dyes, heavy metals such as Pb, Ni, Cr, and Cu are present in textile effluent and pose a threat to humans and the environment. Many studies on removing heavy metal ions from textile wastewater have been conducted in recent decades using metal-organic frameworks (MOFs). In this study, a new polyethersulfone ultrafiltration membrane modified with Cu/Co- and Cu/Zn-based bimetal-organic frameworks (MOFs) was produced. Phase inversion was used to produce the membrane, and atomic force microscopy (AFM) and scanning electron microscopy (SEM) were used to characterize it. The bimetallic MOF-based membrane structure is complex and can be understood using these characterization techniques. The bimetallic MOF-based filtration membranes are designed to selectively adsorb specific contaminants while allowing the passage of water molecules, improving ultrafiltration efficiency. The adsorption capacity and selectivity of MOFs are enhanced by functionalizing them with particular chemical groups or incorporating them into composite membranes with other materials, such as polymers. The morphology and performance of the bimetallic MOF-based membrane were investigated in terms of pure water flux and metal ion rejection. The advantages of the developed bimetallic MOF-based membranes for wastewater treatment include an enhanced adsorption capacity, because the presence of two metals in their structure provides additional binding sites for contaminants, leading to more efficient removal of pollutants from wastewater. Based on the experimental findings, bimetallic MOF-based membranes are more capable of rejecting metal ions from industrial wastewater than previously developed conventional membranes. Furthermore, the difficulties associated with operational parameters, including pressure gradients and velocity profiles, are simulated using ANSYS Fluent software.
The simulation results obtained for the operating parameters are in complete agreement with the experimental results.
Keywords: bimetallic MOFs, heavy metal ions, industrial wastewater treatment, ultrafiltration
Procedia PDF Downloads 90
474 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities
Authors: Kung-Jen Tu, Danny Vernatha
Abstract:
To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling in establishing the 'BIM-based Energy Management Support System' (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment; (2) electricity sub-meters (constantly logging the lighting, HVAC, and socket electricity consumption of each room); (3) BIM models of all rooms within individual departments' facilities; (4) a data warehouse (for storing occupancy status and logged electricity consumption data); (5) a building energy management system that provides energy managers with various energy management functions; and (6) an energy simulation tool (such as eQuest) that generates real-time 'standard energy consumption' data against which 'actual energy consumption' data are compared and energy efficiency is evaluated. Through the building energy management system, the energy manager is able to (a) view a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) obtain energy consumption anomaly detection warnings for certain rooms so that corrective energy management actions can be taken (a data mining technique is employed to analyze the relation between the space occupancy pattern and the current equipment settings to indicate an anomaly, such as when appliances are on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical profiles.
The BIM-EMSS was further implemented in a research lab in the Department of Architecture of NTUST in Taiwan, and the implementation results are presented to illustrate how it can assist individual departments within universities in their energy management tasks.
Keywords: database, electricity sub-meters, energy anomaly detection, sensor
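The anomaly-detection step described in the abstract above (point (c)) can be sketched as a simple rule check over the logged data: flag a room when its actual consumption exceeds the simulated standard, or when appliances draw power while the occupancy sensors report the room empty. The field names and thresholds here are illustrative assumptions, not the BIM-EMSS schema.

```python
def detect_anomalies(rooms, tolerance=1.2):
    """Flag rooms whose actual consumption exceeds the simulated standard
    (beyond a tolerance factor) or where appliances draw power while the
    occupancy sensors report the room empty."""
    alerts = []
    for room in rooms:
        if room["actual_kwh"] > tolerance * room["standard_kwh"]:
            alerts.append((room["name"], "consumption above standard"))
        if not room["occupied"] and room["appliance_kw"] > 0.1:
            alerts.append((room["name"], "appliances on without occupancy"))
    return alerts
```

A production system would learn the occupancy-consumption relation from logged history (as the abstract's data mining step does) rather than use fixed thresholds.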
Procedia PDF Downloads 307
473 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management
Authors: Berk Ecer, Ebru Akcapinar Sezer
Abstract:
Traditional intersection management models, such as unsignalized or signalized intersections, are not the most effective way of passing vehicles through intersections if the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In their AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management has been investigated further. We extended their work by adding a potential-based lane organization layer. In order to distribute vehicles evenly across lanes, this layer triggers vehicles to analyze nearby lanes and change lanes if another lane offers an advantage. We can observe this behavior in real life, where drivers change lanes based on intuition; the basic intuition for selecting the correct lane is to choose a less crowded one in order to reduce delay. We model that behavior without any change to the AIM workflow. Experiment results show that intersection performance is directly connected to the distribution of vehicles across the lanes of the roads entering the intersection. We see the advantage of handling lane management with a potential-based approach in performance metrics such as the average delay of the intersection and the average travel time. Therefore, lane management and intersection management are problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management; our study draws attention to this parameter and suggests a solution for it. We observed that regulating the AIM inputs, i.e., the vehicles in each lane, contributes effectively to intersection management.
The PLO-AIM model outperforms AIM in evaluation metrics such as the average delay of the intersection and the average travel time for reasonable traffic rates, between 600 and 1,300 vehicles/hour per lane. The proposed model reduced the average travel time by 0.2%-17.3% and the average delay of the intersection by 1.6%-17.1% in 4-lane and 6-lane scenarios.
Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach
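A minimal sketch of a potential-based lane choice, under assumed weights and a switching margin (the paper's actual potential function is not specified in the abstract): each lane's potential grows with its own queue length and, more weakly, with its neighbours', and a vehicle changes to an adjacent lane only when that lane's potential is lower by a margin, which keeps vehicles from oscillating between lanes.

```python
def lane_potential(counts, weights=(0.25, 1.0, 0.25)):
    """Potential of each lane: its own queue length plus a weighted share of
    its neighbours', so vehicles spread out instead of piling into one lane."""
    n = len(counts)
    pot = []
    for i in range(n):
        left = counts[i - 1] if i > 0 else 0
        right = counts[i + 1] if i < n - 1 else 0
        pot.append(weights[0] * left + weights[1] * counts[i] + weights[2] * right)
    return pot

def choose_lane(current, counts, gain=1.0):
    """Switch only to an adjacent lane whose potential is lower by 'gain'."""
    pot = lane_potential(counts)
    best = current
    for cand in (current - 1, current + 1):
        if 0 <= cand < len(counts) and pot[cand] + gain <= pot[best]:
            best = cand
    return best
```

With counts [8, 3, 5], a vehicle in lane 0 moves to the emptier lane 1; with a uniform distribution it stays put, since no neighbour clears the switching margin.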
Procedia PDF Downloads 139
472 Modelling Volatility Spillovers and Cross Hedging among Major Agricultural Commodity Futures
Authors: Roengchai Tansuchat, Woraphon Yamaka, Paravee Maneejuk
Abstract:
From the past recent, the global financial crisis, economic instability, and large fluctuation in agricultural commodity price have led to increased concerns about the volatility transmission among them. The problem is further exacerbated by commodities volatility caused by other commodity price fluctuations, hence the decision on hedging strategy has become both costly and useless. Thus, this paper is conducted to analysis the volatility spillover effect among major agriculture including corn, soybeans, wheat and rice, to help the commodity suppliers hedge their portfolios, and manage the risk and co-volatility of them. We provide a switching regime approach to analyzing the issue of volatility spillovers in different economic conditions, namely upturn and downturn economic. In particular, we investigate relationships and volatility transmissions between these commodities in different economic conditions. We purposed a Copula-based multivariate Markov Switching GARCH model with two regimes that depend on an economic conditions and perform simulation study to check the accuracy of our proposed model. In this study, the correlation term in the cross-hedge ratio is obtained from six copula families – two elliptical copulas (Gaussian and Student-t) and four Archimedean copulas (Clayton, Gumbel, Frank, and Joe). We use one-step maximum likelihood estimation techniques to estimate our models and compare the performance of these copula using Akaike information criterion (AIC) and Bayesian information criteria (BIC). In the application study of agriculture commodities, the weekly data used are conducted from 4 January 2005 to 1 September 2016, covering 612 observations. The empirical results indicate that the volatility spillover effects among cereal futures are different, as response of different economic condition. 
In addition, the results on hedge effectiveness suggest optimal cross-hedge strategies for different economic conditions, especially economic upturns and downturns.
Keywords: agricultural commodity futures, cereal, cross-hedge, spillover effect, switching regime approach
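The correlation term supplied by the fitted copula feeds the standard minimum-variance cross-hedge ratio, h* = ρ·σ_s/σ_f. A minimal sketch with hypothetical values (plain Python; this is not the authors' Markov-switching estimator, only the ratio it ultimately plugs into):

```python
def cross_hedge_ratio(rho, sigma_spot, sigma_futures):
    """Minimum-variance hedge ratio h* = rho * sigma_s / sigma_f.

    rho is the spot-futures return correlation (regime-dependent when it
    comes from a regime-switching copula); sigmas are return volatilities.
    """
    return rho * sigma_spot / sigma_futures

# Illustrative numbers only: the same spot volatility, but a weaker
# correlation in the downturn regime implies a smaller hedge position.
h_upturn = cross_hedge_ratio(rho=0.80, sigma_spot=0.02, sigma_futures=0.025)
h_downturn = cross_hedge_ratio(rho=0.60, sigma_spot=0.02, sigma_futures=0.025)
```

A regime-dependent ratio like this is exactly why the hedge must be rebalanced when the economy switches between upturn and downturn states.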
471 Exploratory Factor Analysis of Natural Disaster Preparedness Awareness of Thai Citizens
Authors: Chaiyaset Promsri
Abstract:
Based on a synthesis of the related literature, this research identified thirteen dimensions involved in the development of natural disaster preparedness awareness, including hazard knowledge, hazard attitude, training for disaster preparedness, rehearsal and practice for disaster preparedness, cultural development for preparedness, public relations and communication, storytelling, disaster awareness games, simulation, past experience of natural disasters, information sharing with family members, and commitment to the community (time of living). A 40-item natural disaster preparedness awareness questionnaire was developed based on these thirteen dimensions. Data were collected from 595 participants in the Bangkok metropolitan area and its vicinity. Cronbach's alpha was used to examine the internal consistency of the instrument; the reliability coefficient was .97, which is highly acceptable. Exploratory factor analysis (EFA) with principal axis factoring was employed. The Kaiser-Meyer-Olkin index of sampling adequacy was .973, indicating that the data represented a homogeneous collection of variables suitable for factor analysis. Bartlett's test of sphericity was significant (Chi-square = 23168.657, df = 780, p < .0001), indicating that the set of correlations in the correlation matrix was acceptable for EFA. Factor extraction was performed using principal component analysis with varimax rotation to determine the number of factors. The results revealed four factors with eigenvalues greater than 1, accounting for more than 60% of the cumulative variance. Factor #1 had an eigenvalue of 22.270, with factor loadings ranging from 0.626 to 0.760; it was named "Knowledge and Attitude of Natural Disaster Preparedness". Factor #2 had an eigenvalue of 2.491, with factor loadings ranging from 0.596 to 0.696; it was named "Training and Development". Factor #3 had an eigenvalue of 1.821, with factor loadings ranging from 0.643 to 0.777.
It was named "Building Experiences about Disaster Preparedness". Factor #4 had an eigenvalue of 1.365, with factor loadings ranging from 0.657 to 0.760; it was named "Family and Community". The results of this study support the reliability and construct validity of the natural disaster preparedness awareness instrument for use with populations similar to the sample employed.
Keywords: natural disaster, disaster preparedness, disaster awareness, Thai citizens
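The factor-retention rule used above (keep factors with eigenvalue greater than 1, the Kaiser criterion) can be sketched on a toy correlation matrix (numpy; illustrative data, not the survey responses):

```python
import numpy as np

# Toy correlation matrix: two uncorrelated pairs of strongly correlated items,
# so exactly two latent factors should be retained.
R = np.array([
    [1.0, 0.8, 0.0, 0.0],
    [0.8, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.8],
    [0.0, 0.0, 0.8, 1.0],
])

eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending: 1.8, 1.8, 0.2, 0.2
n_retained = int(np.sum(eigenvalues > 1.0))          # Kaiser criterion
cum_variance = eigenvalues[:n_retained].sum() / R.shape[0]  # share of total variance
```

Here two factors are retained and explain 90% of the total variance, mirroring (in miniature) the four-factor, >60%-cumulative-variance solution reported in the study.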
470 A Review of Critical Framework Assessment Matrices for Data Analysis on Overheating in Buildings Impact
Authors: Martin Adlington, Boris Ceranic, Sally Shazhad
Abstract:
In an effort to reduce carbon emissions, changes in UK regulations, such as Part L (Conservation of fuel and power), dictate improved thermal insulation and enhanced air tightness. These changes were a direct response to the UK Government's commitment to its carbon targets under the Climate Change Act 2008, whose goal is to reduce emissions by at least 80% by 2050. Factors such as climate change are likely to exacerbate the problem of overheating, as this phenomenon is expected to increase the frequency of extreme heat events exemplified by stagnant air masses and successive high minimum overnight temperatures. However, climate change is not the only concern relevant to overheating; as research indicates, location, design, occupation, construction type and layout can also play a part. Because of this growing problem, research suggests that health effects on building occupants could become an issue. Increases in temperature can directly impair the human body's ability to maintain thermoregulation, so heat-related illnesses such as heat stroke, heat exhaustion, heat syncope and even death may follow. This review paper presents a comprehensive evaluation of the current literature on the causes and health effects of overheating in buildings and examines the differing assessment approaches applied to measure the concept. First, an overview of the topic is presented, followed by an examination of overheating research from the last decade. These papers form the body of the article and are grouped into a framework matrix that summarizes the source material and identifies the differing methods of analysis of overheating. Cross-case evaluation has identified systematic relationships between different variables within the matrix.
Key areas of focus include building types and countries, occupant behaviour, health effects, simulation tools, and computational methods.
Keywords: overheating, climate change, thermal comfort, health
469 Photovoltaic Solar Energy in Public Buildings: A Showcase for Society
Authors: Eliane Ferreira da Silva
Abstract:
This paper aims to mobilize and sensitize public administration leaders towards good practices and to encourage investment in photovoltaic (PV) systems in Brazil. It presents a case study methodology for dimensioning a PV system on the roofs of the public buildings of the Esplanade of Ministries, Brasília, the country's capital, with predefined resources, starting from the Sustainable Esplanade Project (SEP) and the exponential growth of photovoltaic solar energy in the world, and drawing a comparison with the solar power plant of the Ministry of Mines and Energy (MME), active since 6/10/2016. To do so, it was necessary to evaluate the energy efficiency of the buildings from January 2016 to April 2017 (16 months), identifying opportunities to reduce electricity expenses through adjustment of the contracted demand, the tariff framework, and correction of existing active energy. The instrument used to collect data on electricity bills was the e-SIC citizen information system. The study considered not only technical and operational aspects but also the historical, cultural, architectural and climatic aspects involved, as seen by several actors. Having identified the expense reductions, the study addressed the following aspects: Case 1) the economic feasibility of exchanging common lamps for LED lamps, and Case 2) the economic feasibility of implementing a grid-connected photovoltaic solar system. For Case 2, the PV*SOL Premium software was used to simulate several possible photovoltaic panel configurations, analyzing the best performance according to local characteristics such as solar orientation, latitude and annual average solar radiation. An ideal photovoltaic solar system was simulated, with due calculation of its yield, to compensate the energy expenditure of the building - or part of it - through the use of the alternative source in question.
The study develops a methodology for public administration, as a major consumer of electricity, to act responsibly, exercising oversight and providing incentives to reduce energy waste and, consequently, greenhouse gas emissions.
Keywords: energy efficiency, esplanade of ministries, photovoltaic solar energy, public buildings, sustainable building
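The yield calculation mentioned above follows, in its simplest form, the standard sizing relation E = A · r · H · PR (array area × module efficiency × annual irradiation × performance ratio). A minimal sketch with hypothetical numbers (the study itself used PV*SOL Premium for the detailed simulation):

```python
def annual_pv_yield(area_m2, efficiency, irradiation_kwh_m2, performance_ratio):
    """First-order annual PV energy yield in kWh: E = A * r * H * PR.

    performance_ratio lumps together wiring, inverter, soiling and
    temperature losses; 0.7-0.8 is a common rule-of-thumb range.
    """
    return area_m2 * efficiency * irradiation_kwh_m2 * performance_ratio

# Hypothetical rooftop: 1000 m2 of 18%-efficient modules at a site with
# ~2000 kWh/m2/year of irradiation and a 0.75 performance ratio.
energy_kwh = annual_pv_yield(1000.0, 0.18, 2000.0, 0.75)
```

Such a back-of-envelope figure is only a starting point; detailed tools account for orientation, shading and temperature effects site by site.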
468 Measurement and Simulation of Axial Neutron Flux Distribution in Dry Tube of KAMINI Reactor
Authors: Manish Chand, Subhrojit Bagchi, R. Kumar
Abstract:
A new dry tube (DT) has been installed in the tank of the KAMINI research reactor, Kalpakkam, India. This tube will be used for neutron activation analysis of small to large samples and for testing neutron detectors. The DT is 375 cm in height and 7.5 cm in diameter, located 35 cm away from the core centre. The experimental thermal flux at various axial positions inside the tube was measured by irradiating a flux monitor (¹⁹⁷Au) at 20 kW reactor power. The measured activity of ¹⁹⁸Au and the thermal cross section of the ¹⁹⁷Au(n,γ)¹⁹⁸Au reaction were used for the experimental thermal flux determination. The flux inside the tube varies from 10⁹ to 10¹⁰ n cm⁻²s⁻¹, with a maximum of (1.02 ± 0.023) × 10¹⁰ n cm⁻²s⁻¹ at 36 cm from the bottom of the tube. Au and Zr foils, bare and under a cadmium cover of 1 mm thickness, were irradiated at the maximum flux position in the DT to determine irradiation-specific input parameters such as the sub-cadmium to epithermal neutron flux ratio (f) and the epithermal neutron flux shape factor (α). The f value was 143 ± 5, indicating about a 99.3% thermal neutron component, and the α value was -0.2886 ± 0.0125, indicating a hard epithermal neutron spectrum due to insufficient moderation. The measured flux profile has been validated against a theoretical model of the KAMINI reactor using the Monte Carlo N-Particle code (MCNP). In MCNP, the complex geometry of the entire reactor is modelled in 3D, ensuring minimum approximations for all components. Continuous-energy cross-section data from ENDF/B-VII.1 as well as S(α,β) thermal neutron scattering functions are considered. The neutron flux has been estimated at the corresponding axial locations of the DT using a mesh tally. The thermal flux obtained from the experiment shows good agreement with the values predicted by MCNP, within ±10%. It can be concluded that this MCNP model can be utilized for calculating other important parameters such as neutron spectra and dose rate,
and that multi-elemental analysis can be carried out by irradiating samples at the maximum flux position using the measured f and α parameters with k₀-NAA standardization.
Keywords: neutron flux, neutron activation analysis, neutron flux shape factor, MCNP, Monte Carlo N-Particle Code
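The flux determination from the measured ¹⁹⁸Au activity follows the standard activation relation φ = A / (N σ (1 − e^(−λ t_irr))). A sketch with hypothetical numbers (the actual foil masses and timing are not given in the abstract, so the inputs below are purely illustrative):

```python
import math

def thermal_flux(activity_bq, n_atoms, sigma_cm2, decay_const, t_irr):
    """Thermal neutron flux (n cm^-2 s^-1) from the end-of-irradiation
    activity: phi = A / (N * sigma * (1 - exp(-lambda * t_irr)))."""
    saturation = 1.0 - math.exp(-decay_const * t_irr)
    return activity_bq / (n_atoms * sigma_cm2 * saturation)

# Hypothetical example: irradiation for one 198Au half-life, so the
# saturation factor is exactly 0.5.
half_life = 2.695 * 24 * 3600                # 198Au half-life, seconds
lam = math.log(2) / half_life
phi = thermal_flux(activity_bq=1.0e3,        # measured activity, Bq
                   n_atoms=1.0e20,           # 197Au atoms in the foil
                   sigma_cm2=98.65e-24,      # thermal (n,gamma) cross section
                   decay_const=lam,
                   t_irr=half_life)
```

In practice a decay correction between the end of irradiation and the counting time, and the detector efficiency, would also enter the activity term.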
467 Computational Modelling of Epoxy-Graphene Composite Adhesive towards the Development of Cryosorption Pump
Authors: Ravi Verma
Abstract:
A cryosorption pump is the best solution for achieving a clean, vibration-free ultra-high vacuum. Furthermore, its operation is free from the influence of electric and magnetic fields. Due to these attributes, this pump is used in space simulation chambers to create ultra-high vacuum. The cryosorption pump comprises three parts: (a) a panel, which is cooled with the help of a cryogen or cryocooler, (b) an adsorbent, which adsorbs the gas molecules, and (c) an epoxy, which holds the adsorbent and the panel together, thereby aiding heat transfer from the adsorbent to the panel. The performance of a cryosorption pump depends on the temperature of the adsorbent and hence on the thermal conductivity of the epoxy. Therefore, we attempted to increase the thermal conductivity of the epoxy adhesive by mixing in nano-sized graphene filler particles. The thermal conductivity of the epoxy-graphene composite adhesive was measured with the help of an indigenously developed experimental setup in the temperature range from 4.5 K to 7 K, which is generally the operating temperature range of a cryosorption pump for efficient pumping of hydrogen and helium gas. In this article, we present the experimental results for the epoxy-graphene composite adhesive in the temperature range from 4.5 K to 7 K. We also propose an analytical heat conduction model to find the thermal conductivity of the composite, in which the filler particles, such as graphene, are randomly distributed in a base matrix of epoxy. The developed model considers the complete spatial random distribution of filler particles, described by a binomial distribution. The results obtained with the model have been compared with the experimental results as well as with other established models. The developed model is able to predict the thermal conductivity in both the isotropic and anisotropic regions over the required temperature range from 4.5 K to 7 K.
Due to the non-empirical nature of the proposed model, it will be useful for predicting other properties of composite materials involving a filler in a base matrix. The present studies will aid the understanding of low-temperature heat transfer, which in turn will be useful for the development of a high-performance cryosorption pump.
Keywords: composite adhesive, computational modelling, cryosorption pump, thermal conductivity
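One of the established models such composite predictions are usually benchmarked against is the classical Maxwell-Garnett effective-medium estimate for spherical fillers. A sketch with hypothetical conductivity values (this is a textbook comparison formula, not the binomial-distribution model proposed in the paper):

```python
def maxwell_garnett_k(k_matrix, k_filler, vol_fraction):
    """Effective thermal conductivity of a dilute spherical-filler composite:

    k_eff = k_m * (k_f + 2k_m + 2*phi*(k_f - k_m))
                / (k_f + 2k_m -   phi*(k_f - k_m))
    """
    num = k_filler + 2.0 * k_matrix + 2.0 * vol_fraction * (k_filler - k_matrix)
    den = k_filler + 2.0 * k_matrix - vol_fraction * (k_filler - k_matrix)
    return k_matrix * num / den

# Hypothetical low-temperature values: epoxy ~0.05 W/m K and a filler
# one hundred times more conductive, at 10% volume fraction.
k_eff = maxwell_garnett_k(k_matrix=0.05, k_filler=5.0, vol_fraction=0.10)
```

The formula assumes well-dispersed, non-touching spherical inclusions, which is precisely where a spatial-randomness model such as the binomial one can do better for plate-like fillers like graphene.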
466 An Experimental Investigation on Explosive Phase Change of Liquefied Propane During a BLEVE Event
Authors: Frederic Heymes, Michael Albrecht Birk, Roland Eyssette
Abstract:
The Boiling Liquid Expanding Vapor Explosion (BLEVE) has been a well-known industrial accident for over six decades, and yet it is still poorly predicted and avoided. A BLEVE occurs when a vessel containing a pressure liquefied gas (PLG) is engulfed in a fire until the tank ruptures. At that moment, the pressure drops suddenly, leaving the liquid in a superheated state. The vapor expansion and the violent boiling of the liquid produce several shock waves. This work aimed at understanding the contribution of the vapor and liquid phases to the overpressure generation in the near field. Experimental work was undertaken at small scale to reproduce realistic BLEVE explosions. Key parameters were controlled through the experiments, such as the failure pressure, the fluid mass in the vessel, and the weakened length of the vessel. Thirty-four propane BLEVEs were then performed to collect data on scenarios similar to common industrial cases. The aerial overpressure was recorded all around the vessel, along with the internal pressure change during the explosion and the ground loading under the vessel. Several high-speed cameras were used to record the vessel explosion and the blast creation by shadowgraph. The results show that the pressure field is anisotropic around the cylindrical vessel and reveal a strong dependency between the vapor content and the maximum overpressure of the lead shock. The time chronology of events reveals that the vapor phase is the main contributor to the aerial overpressure peak; a prediction model is built upon this assumption. Secondary flow patterns are observed after the lead shock. A theory of how the second shock observed in the experiments forms is presented, supported by an analogy with numerical simulation. The phase change dynamics are also discussed thanks to a window in the vessel.
Ground loading measurements are finally presented and discussed to give insight into the order of magnitude of the force.
Keywords: phase change, superheated state, explosion, vapor expansion, blast, shock wave, pressure liquefied gas
465 Documenting the 15th Century Prints with RTI
Authors: Peter Fornaro, Lothar Schmitt
Abstract:
The Digital Humanities Lab and the Institute of Art History at the University of Basel are collaborating in the SNSF research project ‘Digital Materiality’. Its goal is to develop and enhance existing methods for the digital reproduction of cultural heritage objects in order to support art historical research. One part of the project focuses on the visualization of a small eye-catching group of early prints that are noteworthy for their subtle reliefs and glossy surfaces. Additionally, this group of objects – known as ‘paste prints’ – is characterized by its fragile state of preservation. Because of the brittle substances that were used for their production, most paste prints are heavily damaged and thus very hard to examine. These specific material properties make a photographic reproduction extremely difficult. To obtain better results we are working with Reflectance Transformation Imaging (RTI), a computational photographic method that is already used in archaeological and cultural heritage research. This technique allows documenting how three-dimensional surfaces respond to changing lighting situations. Our first results show that RTI can capture the material properties of paste prints and their current state of preservation more accurately than conventional photographs, although there are limitations with glossy surfaces because the mathematical models that are included in RTI are kept simple in order to keep the software robust and easy to use. To improve the method, we are currently developing tools for a more detailed analysis and simulation of the reflectance behavior. An enhanced analytical model for the representation and visualization of gloss will increase the significance of digital representations of cultural heritage objects. For collaborative efforts, we are working on a web-based viewer application for RTI images based on WebGL in order to make acquired data accessible to a broader international research community. 
At the ICDH Conference, we would like to present unpublished results of our work and discuss the implications of our concept for art history, computational photography and heritage science.
Keywords: art history, computational photography, paste prints, reflectance transformation imaging
464 Comparison Analysis of Fuzzy Logic Controller Based PV-Pumped Hydro and PV-Battery Storage Systems
Authors: Seada Hussen, Frie Ayalew
Abstract:
Integrating different energy resources, such as solar PV and hydro, helps ensure reliable power for rural communities like Hara village in Ethiopia. A hybrid power system offers a power supply for rural villages by compensating for the intermittent nature of renewable energy resources; this intermittency is otherwise a challenge to electrifying rural communities in a sustainable manner with solar resources alone. Many rural villages in Ethiopia suffer from a lack of electrification, which drives people to rely on deforestation for fuel, travel long distances to fetch water, and go without adequate services such as clinics and schools. The main objective of this project is to provide a balanced, stable, reliable supply for Hara village, Ethiopia, using solar power with a pumped hydro energy storage system. The design starts by collecting data from the village and obtaining solar irradiance data from NASA; the geographical arrangement and location are also taken into consideration. Data analysis and cost estimation, i.e. optimal sizing of the system, together with a comparison of solar with pumped hydro storage versus solar with battery storage, are then carried out using the HOMER software. Since solar power is available only during the day while pumped hydro supplies power at night and in the morning, the two sources share the load demand; this requires a controller to manage the multiple switches and the scheduling, and in this project a fuzzy logic controller is used for that purpose. The simulation results show that the solar system with pumped hydro energy storage achieves better results than the one with battery storage when the comparison considers storage reliability, cost, storage capacity, life span, and efficiency.
Keywords: pumped hydro storage, solar energy, solar PV, battery energy storage, fuzzy logic controller
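The scheduling task handled by the fuzzy controller can be illustrated with a deliberately simplified crisp rule base (hypothetical thresholds and signals; a real fuzzy controller replaces these hard cut-offs with membership functions and rule inference):

```python
def dispatch(solar_kw, demand_kw, reservoir_level):
    """Crisp sketch of the PV / pumped-hydro dispatch logic.

    reservoir_level is the stored-water fraction in [0, 1]. Returns
    (source, pumping): which source serves the load and whether surplus
    PV power is diverted to pump water uphill for later use.
    """
    if solar_kw >= demand_kw:
        # Daytime surplus: serve the load from PV, pump with the excess
        # unless the upper reservoir is already full.
        return "pv", reservoir_level < 1.0
    if reservoir_level > 0.2:          # hypothetical minimum storage level
        return "hydro", False          # night/morning: run the turbine
    return "deficit", False            # neither source can cover the load

mode_day = dispatch(solar_kw=120.0, demand_kw=80.0, reservoir_level=0.6)
mode_night = dispatch(solar_kw=0.0, demand_kw=80.0, reservoir_level=0.6)
```

The fuzzy version smooths the transitions between these rules, which avoids rapid switching (chattering) when the inputs hover near the thresholds.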
463 Applying Computer Simulation Methods to a Molecular Understanding of Flavivirus Proteins towards Differential Serological Diagnostics and Therapeutic Intervention
Authors: Sergio Alejandro Cuevas, Catherine Etchebest, Fernando Luis Barroso Da Silva
Abstract:
The flavivirus genus comprises several organisms responsible for various diseases in humans. Especially in Brazil, the Zika (ZIKV), Dengue (DENV) and Yellow Fever (YFV) viruses have raised great health concerns due to the high number of cases affecting the area in recent years. Diagnosis is still a difficult issue since the clinical symptoms are highly similar. An understanding of the common structural/dynamical features and biomolecular interactions of these viruses, and of their differences, might suggest alternative strategies towards differential serological diagnostics and therapeutic intervention. Due to its immunogenicity, the primary focus of this study was the ZIKV, DENV and YFV non-structural protein 1 (NS1). By means of computational studies, we calculated the main physicochemical properties of this protein from different strains that are directly responsible for the biomolecular interactions and, therefore, can be related to the differential infectivity of the strains. We also mapped the electrostatic differences at both the sequence and structural levels for strains from Uganda to Brazil that could suggest possible molecular mechanisms for the increased virulence of ZIKV. It is interesting to note that, despite the small changes in the protein sequence due to the high sequence identity among the studied strains, the electrostatic properties are strongly affected by pH, which also impacts the biomolecular interactions with partners and, consequently, the molecular viral biology. African and Asian strains are distinguishable. Exploring the interfaces used by NS1 to self-associate in different oligomeric states and to interact with membranes and the antibody, we could map the strategy used by the ZIKV during its evolutionary process. This indicates possible molecular mechanisms that can explain the different immunological responses.
By comparison with the known antibody structure available for the West Nile virus, we demonstrated that the antibody would have difficulty neutralizing the NS1 from the Brazilian strain. The present study also opens up perspectives for computationally designing high-specificity antibodies.
Keywords: zika, biomolecular interactions, electrostatic interactions, molecular mechanisms
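The pH dependence of the electrostatic properties noted above can be illustrated with the textbook Henderson-Hasselbalch (null-model) estimate of a protein's net charge. This ideal-titration sketch ignores site-site interactions, which is exactly what the constant-pH methods used in such studies refine; the pKa values and site counts below are hypothetical:

```python
def net_charge(ph, basic_pkas, acidic_pkas):
    """Ideal (Henderson-Hasselbalch) net protein charge at a given pH.

    Each basic group contributes +1/(1 + 10^(pH - pKa)); each acidic
    group contributes -1/(1 + 10^(pKa - pH)).
    """
    pos = sum(1.0 / (1.0 + 10.0 ** (ph - pka)) for pka in basic_pkas)
    neg = sum(1.0 / (1.0 + 10.0 ** (pka - ph)) for pka in acidic_pkas)
    return pos - neg

# Tiny hypothetical titratable-site set: two Arg-like and one His-like
# basic site, three Asp/Glu-like acidic sites.
basic = [12.0, 12.0, 6.5]
acidic = [4.0, 4.0, 4.4]
charge_acidic_ph = net_charge(3.0, basic, acidic)   # net positive
charge_neutral_ph = net_charge(7.0, basic, acidic)  # net negative
```

Even this crude model shows how the same sequence can flip its net charge between acidic and neutral pH, changing which biomolecular partners it attracts.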
462 Effectiveness of Control Measures for Ambient Fine Particulate Matter Concentration Improvement in Taiwan
Authors: Jiun-Horng Tsai, Shi-Jie Nieh
Abstract:
Fine particulate matter (PM₂.₅) has become an important issue all over the world over the last decade. The annual mean PM₂.₅ concentration has exceeded the ambient air quality standard for PM₂.₅ (an annual average concentration of 15 μg/m³) adopted by the Taiwan Environmental Protection Administration (TEPA). TEPA has therefore developed a number of air pollution control measures to improve the ambient concentration by reducing the emissions of primary fine particulate matter and of the precursors of secondary PM₂.₅. This study investigated the potential improvement of the ambient PM₂.₅ concentration under the TEPA program and under further emission-reduction scenarios for various sources. Four scenarios were evaluated: a basic case and three reduction scenarios (A to C). The ambient PM₂.₅ concentration was evaluated with the Community Multi-scale Air Quality modelling system (CMAQ) ver. 4.7.1 along with the Weather Research and Forecasting model (WRF) ver. 3.4.1. The grid resolutions in the modelling work are 81 km × 81 km for domain 1 (covering East Asia), 27 km × 27 km for domain 2 (covering Southeast China and Taiwan), and 9 km × 9 km for domain 3 (covering Taiwan). The simulation of PM₂.₅ concentrations in different regions of Taiwan gives an annual average concentration of 24.9 μg/m³ for the basic case, and 22.6, 18.8, and 11.3 μg/m³ for scenarios A to C, respectively; the annual average PM₂.₅ concentration would thus be reduced by 9-55% under these control scenarios. Scenario C (reducing precursor emissions to their allowance levels) could effectively improve the airborne PM₂.₅ concentration to attain the air quality standard. According to the unit precursor reduction contributions, the allowance emissions of PM₂.₅, SOₓ, and NOₓ are 16.8, 39, and 62 thousand tons per year, respectively.
In the Kao-Ping air basin, the priority for reducing precursor emissions is PM₂.₅ > NOₓ > SOₓ, whereas in other areas it is PM₂.₅ > SOₓ > NOₓ. This result indicates that the target pollutants that need to be reduced differ between air basins, and the control measures need to be adapted to local conditions.
Keywords: airborne PM₂.₅, community multi-scale air quality modelling system, control measures, weather research and forecasting model
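The quoted 9-55% improvement range follows directly from the simulated annual means; a quick check using the values stated in the abstract:

```python
baseline = 24.9  # basic-case annual mean PM2.5, ug/m3
scenarios = {"A": 22.6, "B": 18.8, "C": 11.3}  # simulated annual means, ug/m3

# Percent reduction of each control scenario relative to the basic case.
reductions = {name: round((baseline - c) / baseline * 100, 1)
              for name, c in scenarios.items()}
# reductions == {"A": 9.2, "B": 24.5, "C": 54.6}
```

Scenario A trims about 9% off the baseline while scenario C removes roughly 55%, consistent with the range reported in the text.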
461 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing
Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto
Abstract:
Computational fluid dynamics blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consists of a simplified centrifugal blood pump model that contains fluid flow features commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g. k-omega, k-epsilon and a Reynolds stress model, RSM) and LES. The partitioners Hilbert, METIS, ParMETIS and SCOTCH were used to create an unstructured mesh of 76 million elements and were compared for efficiency. Computations were performed on the JUQUEEN BG/Q architecture with the highly parallel flow solver Code_Saturne, typically using 32768 or more processors in parallel. Visualisations were performed by means of ParaView. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and other databases. The results show that an RSM is an appropriate choice for modelling high-Reynolds-number flow cases; in particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisations of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence
460 Adaptive Beamforming with Steering Error and Mutual Coupling between Antenna Sensors
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
Owing to the close spacing between antenna sensors within a compact space, part of the signal received by one antenna sensor couples into the other sensors when the sensors in an antenna array operate simultaneously. This phenomenon is called the mutual coupling effect (MCE). It has been shown that the performance of antenna array systems degrades when the antenna sensors are in close proximity; especially in systems equipped with massive numbers of antenna sensors, degradation of beamforming performance due to the MCE is practically inevitable. Moreover, it has been shown that even a small angle error between the true direction angle of the desired signal and the steering angle deteriorates the effectiveness of an array beamforming system. However, the true direction vector of the desired signal may not be exactly known in some applications, e.g., in land mobile-cellular wireless systems. Therefore, it is worth developing robust techniques to deal with the problems caused by the MCE and steering angle error in array beamforming systems. In this paper, we present an efficient technique for performing adaptive beamforming that is robust against both the MCE and the steering angle error. Only the data vector received by the antenna array is required by the proposed technique. Using the received array data vector, a correlation matrix is constructed to replace the original correlation matrix associated with the received array data vector. Then, the mutual coupling matrix due to the MCE on the antenna array is estimated through a recursive algorithm. An appropriate estimate of the direction angle of the desired signal can also be obtained during the recursive process. Based on the estimated mutual coupling matrix, the estimated direction angle, and the reconstructed correlation matrix, the proposed technique can effectively cure the performance degradation due to steering angle error and MCE.
The novelty of the proposed technique is that the implementation procedure is very simple, while the resulting adaptive beamforming performance is satisfactory. Simulation results show that the proposed technique provides much better beamforming performance, without requiring high computational complexity, than the existing robust techniques.
Keywords: adaptive beamforming, mutual coupling effect, recursive algorithm, steering angle error
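For reference, the classical MVDR (Capon) beamformer that such robust techniques build on computes its weights from the correlation matrix and steering vector as w = R⁻¹a / (aᴴR⁻¹a). A numpy sketch for an ideal uniform linear array, i.e. the nominal case with no mutual coupling and no steering error, which the paper's technique is designed to improve on:

```python
import numpy as np

def steering_vector(n_sensors, angle_deg, spacing_wavelengths=0.5):
    """Plane-wave steering vector of a uniform linear array."""
    n = np.arange(n_sensors)
    phase = 2j * np.pi * spacing_wavelengths * n * np.sin(np.deg2rad(angle_deg))
    return np.exp(phase)

def mvdr_weights(R, a):
    """MVDR weights w = R^-1 a / (a^H R^-1 a)."""
    Ra = np.linalg.solve(R, a)       # R^-1 a without forming the inverse
    return Ra / (a.conj() @ Ra)

a = steering_vector(n_sensors=8, angle_deg=20.0)
R = np.eye(8)                        # white noise only, for illustration
w = mvdr_weights(R, a)
gain = w.conj() @ a                  # distortionless constraint: w^H a = 1
```

When the steering vector a is wrong (steering error) or the array response is distorted (mutual coupling), this nominal w degrades, which is exactly the gap the paper's recursive estimation of the coupling matrix and direction angle is meant to close.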
459 Numerical Analysis of the Response of Thin Flexible Membranes to Free Surface Water Flow
Authors: Mahtab Makaremi Masouleh, Günter Wozniak
Abstract:
This work is part of a major research project concerning the design of a light, temporarily installable textile flood control structure. The motivation for this work is the great need for light structures to protect coastal areas from the detrimental effects of rapid water runoff. The prime objective of the study is the numerical analysis of the interaction between free surface water flow and slender, pliable structures, which plays a key role in the safety performance of the intended system. First, the behavior of a down-scaled membrane is examined under hydrostatic pressure with the Abaqus explicit solver, which is part of the finite element based, commercially available SIMULIA software. Then the procedure to achieve a stable and convergent solution for strongly coupled media including fluids and structures is explained. A partitioned strategy is imposed so that structures and fluids are discretized and solved with appropriate formulations and solvers. In this regard, the finite element method is again selected to analyze the structural domain, while computational fluid dynamics algorithms are introduced for solutions in the flow domain by means of the commercial package STAR-CCM+. Likewise, the SIMULIA co-simulation engine and an implicit coupling algorithm, which are available communication tools in STAR-CCM+, enable powerful transmission of data between the two applied codes. This approach is discussed for two different cases and compared with available experimental records. In one case, the down-scaled membrane interacts with open channel flow, where the flow velocity increases with time. The second case illustrates how the full-scale flexible flood barrier behaves when a massive piece of flotsam is accelerated towards it.
Keywords: finite element formulation, finite volume algorithm, fluid-structure interaction, light pliable structure, VOF multiphase model
458 Prediction of Springback in U-bending of W-Temper AA6082 Aluminum Alloy
Authors: Jemal Ebrahim Dessie, Lukács Zsolt
Abstract:
High-strength aluminum alloys have drawn a lot of attention because of the expanding demand for lightweight vehicle design in the automotive sector. Due to their poor formability at room temperature, warm and hot forming have been advised. However, warm and hot forming methods need more steps in the production process and an advanced tooling system. In contrast, forming sheets at room temperature in the W-temper condition is advantageous, since ordinary tools can be used. However, the springback of the supersaturated sheets and their thinning are critical challenges that must be resolved when using this technique. In this study, AA6082-T6 aluminum alloy was solution heat treated at different oven temperatures and times, using a specially designed and developed furnace, in order to optimize the W-temper heat treatment temperature. A U-shaped bending test was carried out at different time intervals between the W-temper heat treatment and the forming operation. Finite element analysis (FEA) of the U-bending was conducted using AutoForm to validate the experimental results. A uniaxial tensile load-unload test was performed to determine the kinematic hardening behavior of the material, which was optimized in the finite element code using systematic process improvement (SPI). In the simulation, the effects of the friction coefficient and the blank holder force were considered. Springback parameters were evaluated on the geometry adopted from the NUMISHEET '93 benchmark problem. The change of shape was found to be larger for longer time intervals between the W-temper heat treatment and the forming operation. The die radius was the most influential parameter for the flange springback, whereas the sidewall shape change shows an overall increasing tendency as the punch radius increases relative to the die radius.
The springback angles on the flange and sidewall seem to be highly influenced by the coefficient of friction than blank holding force, and the effect becomes increases as increasing the blank holding force.Keywords: aluminum alloy, FEA, springback, SPI, U-bending, W-temper
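The springback angles referred to above can be evaluated from measured profile points after unloading: the inclination of the flange or sidewall segment is compared with its nominal as-formed angle. The sketch below is purely illustrative; the coordinates and the 90° nominal wall angle are assumptions, not the paper's measured data.

```python
import math

# Illustrative evaluation of a springback angle from two measured points on a
# profile segment (NUMISHEET '93-style geometry). All numbers are made up.

def segment_angle_deg(p_a, p_b):
    """Inclination of the line through two profile points, in degrees."""
    dx, dy = p_b[0] - p_a[0], p_b[1] - p_a[1]
    return math.degrees(math.atan2(dy, dx))

def springback_angle_deg(p_a, p_b, nominal_deg):
    """Deviation of a measured segment from its nominal (as-formed) angle."""
    return segment_angle_deg(p_a, p_b) - nominal_deg

# A sidewall that should be vertical (nominal 90 deg) but leans outward
# by a few degrees after unloading:
wall_springback = springback_angle_deg((50.0, 0.0), (53.0, 34.0), 90.0)
```

A negative value here indicates the wall has opened outward relative to the as-formed position, which is the typical springback mode in U-bending.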
Procedia PDF Downloads 100
457 Calibration of Mini TEPC and Measurement of Lineal Energy in a Mixed Radiation Field Produced by Neutrons
Authors: I. C. Cho, W. H. Wen, H. Y. Tsai, T. C. Chao, C. J. Tung
Abstract:
A tissue-equivalent proportional counter (TEPC) is a useful instrument for measuring single-event energy depositions of radiation in a subcellular target volume. The measured quantity is the microdosimetric lineal energy, which determines the relative biological effectiveness (RBE) for radiation therapy or the radiation weighting factor (WR) for radiation protection. A TEPC is generally used in a mixed radiation field, where each component radiation has its own RBE or WR value. To reduce the pile-up effect during radiotherapy measurements, a miniature TEPC (mini TEPC) with a cavity size on the order of 1 mm may be required. In the present work, a homemade mini TEPC with a cylindrical cavity of 1 mm in both diameter and height was constructed to measure the lineal energy spectrum of a mixed radiation field containing high- and low-LET radiations. Instead of using external radiation beams to penetrate the detector wall, mixed radiation fields were produced by the interactions of neutrons with TEPC walls containing small plugs of different materials, i.e., Li, B, A150, Cd and N. In all measurements, the mini TEPC was placed at the beam port of the Tsing Hua Open-pool Reactor (THOR). Measurements were performed with a propane-based tissue-equivalent gas mixture, i.e., 55% C3H8, 39.6% CO2 and 5.4% N2 by partial pressure. A gas pressure of 422 torr was applied to simulate a biological site of 1 μm diameter. The mini TEPC was calibrated using two marker points in the lineal energy spectrum: the proton edge and the electron edge. The measured spectra revealed high-lineal-energy (> 100 keV/μm) peaks due to neutron-capture products, medium-lineal-energy (10-100 keV/μm) peaks from hydrogen-recoil protons, and low-lineal-energy (< 10 keV/μm) peaks from reactor photons. For the Li and B plugs, the high-lineal-energy peaks were quite prominent. The medium-lineal-energy peaks were in the decreasing order of Li, Cd, N, A150, and B.
The low-lineal-energy peaks were small compared to the other peaks. This study demonstrated that internally produced mixed radiations, from the interactions of neutrons with different plugs in the TEPC wall, provide a useful approach for TEPC measurements of lineal energy.
Keywords: TEPC, lineal energy, microdosimetry, radiation quality
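The lineal energy underlying these spectra is defined as the single-event energy imparted divided by the mean chord length of the cavity, y = ε / l̄, where for an isotropic field Cauchy's formula gives l̄ = 4V/S. For a right cylinder with height equal to diameter d, as in the simulated tissue site here, this works out to l̄ = 2d/3. The sketch below is a generic illustration of the definition, not code from the study.

```python
import math

# Lineal energy y = eps / l_bar for a right cylinder with height = diameter,
# the simulated-site geometry used in microdosimetry. Cauchy: l_bar = 4V/S.

def mean_chord_length(d):
    """Mean chord length 4V/S of a right cylinder with height = diameter d."""
    volume = math.pi * (d / 2) ** 2 * d                      # V = pi r^2 h
    surface = 2 * math.pi * (d / 2) ** 2 + math.pi * d * d   # two caps + lateral
    return 4 * volume / surface                              # reduces to 2d/3

def lineal_energy(energy_keV, d_um):
    """Lineal energy in keV/um for one event in a site of diameter d_um (um)."""
    return energy_keV / mean_chord_length(d_um)

# For the 1 um simulated site, l_bar = 2/3 um, so a 10 keV single-event
# deposition corresponds to y = 15 keV/um.
```

This is why a fixed physical cavity (1 mm) filled with low-pressure tissue-equivalent gas can emulate a micrometre-scale tissue site: the gas pressure is chosen so that the energy lost crossing the cavity matches that of the simulated site diameter.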
Procedia PDF Downloads 468
456 Life-Cycle Cost and Life-Cycle Assessment of Photovoltaic/Thermal Systems (PV/T) in Swedish Single-Family Houses
Authors: Arefeh Hesaraki
Abstract:
The application of photovoltaic/thermal hybrids (PVT), which deliver both electricity and heat simultaneously from the same system, has become more popular during the past few years. This study addresses the techno-economic and environmental impact assessment of photovoltaic/thermal systems combined with a ground-source heat pump (GSHP) for three single-family houses located in Stockholm, Sweden. The three case studies were: (1) a renovated building built in 1936, (2) a renovated building built in 1973, and (3) a new building built in 2013. Two simulation programs, SimaPro 9.1 and IDA Indoor Climate and Energy 4.8 (IDA ICE), were applied to analyze environmental impacts and energy use, respectively. The cost-effectiveness of the system was evaluated using the net present value (NPV), internal rate of return (IRR), and discounted payback time (DPBT) methods. In addition to the cost payback time, the studied PVT system was evaluated using the energy payback time (EPBT) method. The EPBT is the time needed for the installed system to generate the same amount of energy that was used during its whole life cycle (fabrication, installation, transportation, and end of life). Energy calculation in IDA ICE showed that a 5 m² PVT was sufficient to balance the maximum heat production against the domestic hot water consumption during the summer months in all three case studies. The techno-economic analysis revealed that combining a 5 m² PVT with a GSHP in the second case study gave the shortest DPBT and the highest NPV and IRR among the three case studies: the DPBTs (IRRs) were 10.8 years (6%), 12.6 years (4%), and 13.8 years (3%) for the second, first, and third case studies, respectively.
Moreover, the environmental assessment of embodied energy over the cradle-to-grave life cycle of the studied PVT, including fabrication, delivery of energy and raw materials, the manufacturing process, installation, transportation, the operation phase, and end of life, revealed an EPBT of approximately two years in all cases.
Keywords: life-cycle cost, life-cycle assessment, photovoltaic/thermal, IDA ICE, net present value
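The three economic indicators used in the study are standard discounted-cash-flow measures and can be sketched compactly. The cash-flow figures below are illustrative placeholders, not the paper's data; the IRR is found by simple bisection under the assumption of a single sign change in NPV.

```python
# NPV, IRR (by bisection) and discounted payback time (DPBT) on an
# illustrative cash-flow series: cashflows[0] is the (negative) investment,
# cashflows[t] the net saving in year t.

def npv(rate, cashflows):
    """Net present value at the given annual discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-9):
    """Discount rate at which NPV = 0, assuming NPV changes sign once."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid        # NPV still positive: true IRR lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

def discounted_payback(rate, cashflows):
    """First year in which cumulative discounted cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cashflows):
        cumulative += cf / (1 + rate) ** t
        if cumulative >= 0:
            return t
    return None  # never pays back within the horizon

# Hypothetical example: 10,000 invested, 900/year savings over a 25-year life.
flows = [-10000.0] + [900.0] * 25
```

With these illustrative numbers the system pays back (discounted at 3%) in about 14 years, comparable in magnitude to the 10.8-13.8-year DPBTs reported for the three case studies.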
Procedia PDF Downloads 115