Search results for: traffic simulations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2968

328 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture

Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger

Abstract:

3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf) and consolidation. This compression during manufacture changes the preform's thickness and architecture, which can lead to under-performance of the resulting 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each preform will behave during processing. The focus of this study is therefore to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp (fibre waviness) and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and their compression simulated in WiseTex for varying binder styles, pick densities, thicknesses and tow sizes. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished and examined by microscopy to investigate changes in architecture and crimp. Data from the dry fabric compression tests and composite samples were then compared with the WiseTex models to assess the accuracy of the predictions and to identify architecture parameters that affect preform compressibility and stability. Results indicate that binder style, pick density, tow size and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing greater compression and distortion of the architecture. Binder style combined with pressure also had a significant effect on changes to the preform architecture: orthogonal binders experienced the highest level of deformation, but the highest overall stability, under compression, while layer-to-layer binders showed a reduction in binder crimp. In general, the simulations compared reasonably with the experimental results; however, deviations are evident due to assumptions made in the models.

Keywords: 3D woven composites, compression, preforms, textile composites

Procedia PDF Downloads 119
327 Experimental Research of High Pressure Jet Interaction with Supersonic Crossflow

Authors: Bartosz Olszanski, Zbigniew Nosal, Jacek Rokicki

Abstract:

An experimental study of a cold-jet (nitrogen) reaction control system has been carried out to investigate flow control efficiency for low to moderate jet pressure ratios (total jet pressure p0jet over free-stream static pressure in the wind tunnel p∞) and different angles of attack at a free-stream Mach number of 2. The jet influence was investigated on a flat-plate geometry placed in the test section of the intermittent supersonic wind tunnel of the Department of Aerodynamics, WUT. Various convergent jet nozzle geometries, giving different jet momentum ratios, were tested on the same model geometry. Surface static pressure measurements, Schlieren flow visualizations (using continuous and photoflash light sources) and load cell measurements gave insight into the supersonic crossflow interaction for different jet pressure and jet momentum ratios, and into their influence on the efficiency of side-jet control as described by the amplification factor (the ratio of the actual to the theoretical net force generated by the control nozzle). Moreover, quasi-steady numerical simulations of the flow through the same wind tunnel geometry (convergent-divergent nozzle plus test section) were performed in ANSYS Fluent using a Reynolds-Averaged Navier-Stokes (RANS) solver with the k-ω Shear Stress Transport (SST) turbulence model, to assess the possible spurious influence of the test section walls on the near field of the jet exit. The strong bow shock, barrel shock and Mach disk, as well as the lambda separation region in front of the nozzle, were observed in images taken by a high-speed camera examining the interaction of the jet and the free stream. In addition, the development of large-scale vortex structures (a counter-rotating vortex pair) was detected. The history of the complex static pressure pattern on the plate was recorded and compared to the force measurement data as well as the numerical simulation data. The analysis of the obtained results, especially in the wake of the jet, revealed important features of the interaction mechanisms between the lateral jet and the flow field.

Keywords: flow visualization techniques, pressure measurements, reaction control jet, supersonic cross flow

Procedia PDF Downloads 276
326 Learning Gains and Constraints Resulting from Haptic Sensory Feedback among Preschoolers' Engagement during Science Experimentation

Authors: Marios Papaevripidou, Yvoni Pavlou, Zacharias Zacharia

Abstract:

Embodied cognition and additional (touch) sensory channel theories indicate that physical manipulation is crucial to learning, since it provides, among others, the touch sensory input needed for constructing knowledge. Given these theories, the use of Physical Manipulatives (PM) becomes a prerequisite for learning. On the other hand, empirical research on learning with Virtual Manipulatives (VM) (e.g., simulations) has provided evidence that the use of PM, and thus haptic sensory input, is not always a prerequisite for learning. In order to investigate which means of experimentation, PM or VM, is required for enhancing science learning at the kindergarten level, an empirical study was conducted on the impact of haptic feedback on the conceptual understanding of pre-school students (n=44, mean age=5.7) in three science domains: beam balance (D1), sinking/floating (D2) and springs (D3). The participants were divided equally into two groups according to the type of manipulatives used (PM: presence of haptic feedback; VM: absence of haptic feedback) during a semi-structured interview for each of the domains. All interviews followed the Predict-Observe-Explain (POE) strategy and consisted of three phases: initial evaluation, experimentation and final evaluation. The data collected through the interviews were analyzed qualitatively (open coding to identify students' ideas in each domain) and quantitatively (using non-parametric tests). Findings revealed that haptic feedback enabled students to distinguish heavier from lighter objects held in their hands during experimentation. In D1, haptic feedback did not differentiate PM and VM students' conceptual understanding of the function of the beam as a means to compare the mass of objects. In D2, haptic feedback appeared to have a negative impact on PM students' learning: feeling the weight of an object strengthened PM students' misconception that heavier objects always sink, whereas the scientifically correct idea that the material of an object determines its sinking/floating behaviour in water was significantly more frequent among the VM students than the PM ones. In D3, the PM students significantly outperformed the VM students with regard to the idea that the heavier an object is, the more the spring will extend, indicating that the haptic input experienced by the PM students served as an advantage to their learning. These findings point to the fact that PM, and thus touch sensory input, might not always be a requirement for science learning, and that VM could be considered, under certain circumstances, a viable means for experimentation.
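The quantitative comparisons above rely on non-parametric tests, which compare groups by ranks rather than means. As a minimal sketch of the idea behind one such test, the following computes the Mann-Whitney U statistic for two independent groups; the scores are invented for illustration and are not the study's data:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples (midranks for ties)."""
    pooled = sorted(a + b)
    # assign each distinct value the average of the ranks it occupies
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2.0   # average of ranks i+1 .. j
        i = j
    r_a = sum(rank_of[v] for v in a)             # rank sum of group a
    n_a, n_b = len(a), len(b)
    u_a = r_a - n_a * (n_a + 1) / 2.0
    return min(u_a, n_a * n_b - u_a)             # conventional U: the smaller side

# Invented scores for two groups of children (not the study's data):
vm_scores = [7, 8, 6, 9, 7]
pm_scores = [4, 5, 6, 3, 5]
u = mann_whitney_u(vm_scores, pm_scores)  # small U: the groups barely overlap
```

In practice one would also compute the p-value, e.g. with scipy.stats.mannwhitneyu; the statistic alone is shown here to illustrate the rank-based logic.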

Keywords: haptic feedback, physical and virtual manipulatives, pre-school science learning, science experimentation

Procedia PDF Downloads 118
325 The Closed Cavity Façade (CCF): Optimization of CCF for Enhancing Energy Efficiency and Indoor Environmental Quality in Office Buildings

Authors: Michalis Michael, Mauro Overend

Abstract:

Buildings, in which we spend 87-90% of our time, act as a shelter protecting us from environmental conditions and weather phenomena. A building's overall performance depends significantly on the glazed part of its envelope, which is particularly critical as it is the part most vulnerable to heat gain and heat loss. Conventional glazing technologies, however, have relatively poor thermo-optical characteristics: heat losses through the glazing are significantly increased in winter, as are heat gains during the summer period. In this study, the contribution of an innovative glazing technology, the Closed Cavity Façade (CCF), to improving energy efficiency and indoor environmental quality (IEQ) in office buildings is examined, with the aim of optimizing various CCF design configurations. Using the EnergyPlus and IDA ICE packages, the performance of several CCF configurations and geometries was investigated for various climate types in order to identify the optimum solution. The model used for the simulations and the optimization process represented MATELab, a recently constructed outdoor test facility at the University of Cambridge (UK), and had previously been calibrated against experiments. The study revealed that the use of CCF technology instead of conventional double or triple glazing brings important benefits. In particular, replacing the traditional glazing units used as the baseline with the optimal CCF configuration decreased energy consumption by 18-37% (depending on location). This occurs mainly because integrating shading devices in the cavity and applying appropriate glass coatings and control strategies improve the thermal transmittance and g-value of the glazing. Since solar gain through the façade is the main contributor to energy consumption during cooling periods, a higher energy improvement was observed in cooling-dominated locations. Furthermore, it was shown that a suitable selection of the constituents of a closed cavity façade, such as the colour and type of the shading devices and the type of coatings, leads to an additional improvement of its thermal performance: overheating is avoided, temperatures in the glass cavity remain below the critical value, and radiant discomfort is reduced, providing extra benefits in terms of IEQ.

Keywords: building energy efficiency, closed cavity façade, optimization, occupants comfort

Procedia PDF Downloads 50
324 The Impact of Climate Change on Typical Material Degradation Criteria over Timurid Historical Heritage

Authors: Hamed Hedayatnia, Nathan Van Den Bossche

Abstract:

Understanding the ways in which climate change accelerates or slows down material deterioration is the first step towards assessing adaptive approaches for the conservation of historical heritage. Analysis of the effects of climate change on degradation risk parameters, such as freeze-thaw cycles and wind erosion, is also key when considering mitigating actions. Given the vulnerability of cultural heritage to climate change, the impact of this phenomenon on material degradation criteria was studied, focusing on brick masonry walls in Timurid heritage located in Iran. The Timurids were the final great dynasty to emerge from the Central Asian steppe. Through their patronage, the eastern Islamic world, especially Mashhad and Herat, became a prominent cultural centre. Goharshad Mosque is a mosque in Mashhad, in Razavi Khorasan Province, Iran; it was built by order of Empress Goharshad, the wife of Shah Rukh of the Timurid dynasty, in 1418 CE. Choosing an appropriate regional climate model was the first step. The outputs of two different climate models, ALARO-0 and REMO, were analyzed to find out which model is better adapted to the area. To validate the quality of the models, model data were compared with observations in four different climate zones in Iran over a period of 30 years. The impacts of the projected climate change were evaluated up to 2100. To determine the material specification of Timurid bricks, standard brick samples from a Timurid mosque were studied. Determination of the water absorption coefficient, the diffusion properties, the real density and the total porosity was performed to characterize the brick masonry walls, as needed for running heat, air and moisture (HAM) simulations. The analysis showed that the threatening factors differ between climate zones, but the most influential factors across Iran are the extreme temperature increase and erosion. In the north-western region of Iran, one of the key factors is wind erosion. In the north, rainfall erosion and mould growth risk are the key factors. In the north-eastern part, where the case study is located, the important parameter is wind erosion.
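Freeze-thaw cycling, one of the degradation-risk parameters mentioned above, is typically counted from a temperature time series. A minimal sketch of such a counter follows; the hysteresis thresholds and the sample series are illustrative assumptions, not values from the study:

```python
def count_freeze_thaw_cycles(temps_c, freeze=-1.0, thaw=1.0):
    """Count freeze-thaw cycles in a temperature series (deg C).

    A cycle is counted each time the material first drops below the
    freeze threshold and later rises above the thaw threshold; the
    small hysteresis band avoids counting noise around 0 deg C.
    """
    cycles, frozen = 0, False
    for t in temps_c:
        if not frozen and t <= freeze:
            frozen = True
        elif frozen and t >= thaw:
            frozen = False
            cycles += 1
    return cycles

# Invented daily temperatures: two full freeze-thaw excursions
sample = [5, -2, 3, -3, -4, 2, 0.5]
n_cycles = count_freeze_thaw_cycles(sample)  # counts 2 cycles
```

In a climate-projection workflow this counter would be run over hourly or daily model output for each scenario and period to compare cycle frequencies.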

Keywords: brick, climate change, degradation criteria, heritage, Timurid period

Procedia PDF Downloads 105
323 Proposed Design of an Optimized Transient Cavity Picosecond Ultraviolet Laser

Authors: Marilou Cadatal-Raduban, Minh Hong Pham, Duong Van Pham, Tu Nguyen Xuan, Mui Viet Luong, Kohei Yamanoi, Toshihiko Shimizu, Nobuhiko Sarukura, Hung Dai Nguyen

Abstract:

There is a great deal of interest in developing all-solid-state tunable ultrashort pulsed lasers emitting in the ultraviolet (UV) region for applications such as micromachining, investigation of charge carrier relaxation in conductors, and probing of ultrafast chemical processes. However, direct short-pulse generation is not as straightforward in solid-state gain media as it is for near-IR tunable solid-state lasers such as Ti:sapphire, due to the difficulty of obtaining continuous-wave laser operation, which is required for Kerr-lens mode-locking schemes that exploit spatial or temporal Kerr-type nonlinearity. In this work, the transient cavity method, previously reported to generate ultrashort laser pulses in dye lasers, is extended to a solid-state gain medium. Ce:LiCAF was chosen among the rare-earth-doped fluoride laser crystals emitting in the UV region because of its broad tunability (from 280 to 325 nm), with bandwidth sufficient to support 3-fs pulses; a sufficiently large effective gain cross section (6.0 × 10⁻¹⁸ cm²), favourable for oscillators; and a high saturation fluence (115 mJ/cm²). Numerical simulations are performed to investigate the spectro-temporal evolution of the broadband UV laser emission from Ce:LiCAF, represented as a system of two homogeneously broadened singlet states, by solving the rate equations extended to multiple wavelengths. The goal is to find the appropriate cavity length and Q-factor to achieve the optimal photon cavity decay time and pumping energy for resonator transients that will lead to picosecond UV laser emission from a Ce:LiCAF crystal pumped by the fourth harmonic (266 nm) of a Nd:YAG laser. Results show that a single picosecond pulse can be generated from a 1-mm, 1 mol% Ce³⁺-doped LiCAF crystal using an output coupler with 10% reflectivity (low Q) and an oscillator cavity that is 2 mm long (short cavity). This technique can be extended to other fluoride-based solid-state laser gain media.
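The resonator-transient mechanism can be illustrated with a toy, dimensionless reduction of such rate equations: a single gain variable n (normalized so n = 1 is threshold) and photon number q, integrated with a simple Euler scheme. The pump level, cavity decay time and seed value below are arbitrary assumptions, not the paper's multi-wavelength Ce:LiCAF model:

```python
def simulate_transient(pump=5.0, pump_off=1.0, t_cav=0.01,
                       dt=1e-4, t_end=2.0, seed=1e-6):
    """Toy single-mode laser rate equations in dimensionless form.

    dn/dt = p(t) - n - n*q   (pumping, spontaneous decay, stimulated emission)
    dq/dt = (n - 1)*q/t_cav  (net round-trip gain minus cavity loss)

    A short, low-Q cavity (small t_cav) dumps the stored inversion in a
    brief photon burst: the resonator-transient mechanism.
    """
    n, q = 0.0, seed
    ns, qs = [], []
    steps = int(t_end / dt)
    for i in range(steps):
        t = i * dt
        p = pump if t < pump_off else 0.0
        dn = p - n - n * q
        dq = (n - 1.0) * q / t_cav
        n += dn * dt
        q = max(q + dq * dt, seed)   # keep a spontaneous-emission floor
        ns.append(n)
        qs.append(q)
    return ns, qs
```

Running this, the photon number stays at the spontaneous floor until the gain crosses threshold, then spikes and collapses as the inversion is depleted, which is the short-burst behaviour the transient cavity method exploits.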

Keywords: rare-earth-doped fluoride gain medium, transient cavity, ultrashort laser, ultraviolet laser

Procedia PDF Downloads 341
322 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, bio-medical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even simulating an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates the indispensability of HPC in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The paper also discusses solutions to optimization problems and the mutual benefits of Big Data and computational biology, and surveys the current state of the art and future generations of HPC computing with Big Data in biology.
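As a minimal sketch of why all-to-all comparison is polynomial yet expensive, and how naturally it parallelizes, the toy example below distributes Hamming-distance comparisons of short sequences over a thread pool; the pool stands in for an HPC scheduler, and the sequences are invented:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def hamming(a, b):
    """Number of mismatched positions between equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def all_to_all(seqs, workers=4):
    """Compare every pair of sequences concurrently.

    A thread pool stands in for the scheduler here; for CPU-bound
    comparisons on real data, a process pool or MPI would be used.
    """
    pairs = list(combinations(range(len(seqs)), 2))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        dists = pool.map(lambda ij: hamming(seqs[ij[0]], seqs[ij[1]]), pairs)
    return dict(zip(pairs, dists))

# Invented toy sequences, not real genomic data
genomes = ["ACGTACGT", "ACGTACGA", "TTGTACGT", "ACGGACGT"]
result = all_to_all(genomes)
```

The number of pairs grows as N(N-1)/2, so doubling the dataset quadruples the work; this is why even such "well-behaved" algorithms demand HPC at genomic scale.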

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 350
321 Exploring the Correlation between Population Distribution and Urban Heat Island under Urban Data: Taking Shenzhen Urban Heat Island as an Example

Authors: Wang Yang

Abstract:

Shenzhen is a modern city shaped by China's reform and opening-up policy, and the development of its urban morphology has been steered by the administration of the Chinese government. The city's planning paradigm is primarily affected by spatial structure and human behaviour: the urban agglomeration is divided into several groups and centres, and in this process the intrinsic laws of city development have tended to be neglected. With the continuous development of the internet, big data technology has been introduced in China, and data mining and data analysis have become important tools in municipal research, used to clean and process business data, traffic data and population data. Prior to data mining, government data were collected by traditional means and then analyzed in city-relationship research, delaying the timeliness of urban development studies, especially for the contemporary city, where Internet-based data update very quickly. The city's points of interest (POI) serve as a data source affecting city design, while satellite remote sensing is used as a reference object; by analyzing the city in both directions, the administrative paradigm of government is broken and urban research is restored. Therefore, the use of data mining in urban analysis is very important. Satellite remote sensing data for Shenzhen in July 2018, measured by the MODIS sensor, were used to perform land surface temperature inversion and to analyze the heat island distribution of Shenzhen. This article acquired and classified POI data for Shenzhen using web crawler technology. The Shenzhen heat island data and the points of interest were simulated and analyzed on a GIS platform to discover the main features of the distribution of functional areas. Shenzhen extends in an east-west direction, and the city's main streets follow the direction of city development; accordingly, the functional areas of the city are also distributed east-west. The urban heat island can be expressed as a heat map over the functional urban areas, and the regional POIs show a clear correspondence. The results explain that the distribution of the urban heat island and the distribution of urban POIs are in one-to-one correspondence. The urban heat island is primarily influenced by the properties of the underlying surface, setting aside the impact of the urban climate. Using urban POIs as the object of analysis, the distribution of municipal POIs and population aggregation are closely connected, so that the distribution of the population corresponds with the distribution of the urban heat island.
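The claimed correspondence between POI density and retrieved surface temperature can be checked cell by cell with a plain correlation coefficient. A minimal sketch with invented grid-cell values (not Shenzhen data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative grid cells: POI counts vs retrieved land surface
# temperature (deg C). The numbers are invented to mimic the claimed
# correspondence, not measured Shenzhen data.
poi_counts = [5, 12, 30, 48, 80, 150, 210, 320]
lst_deg_c = [28.1, 28.9, 30.2, 31.0, 32.4, 33.8, 34.9, 36.3]
r = pearson_r(poi_counts, lst_deg_c)
```

A rank-based coefficient (Spearman) would be the more robust choice for real LST grids, since the POI-temperature relation need not be linear.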

Keywords: POI, satellite remote sensing, the population distribution, urban heat island thermal map

Procedia PDF Downloads 90
320 Determination of the Needs for Development of Infertility Psycho-Educational Program and the Design of a Website about Infertility for University Students

Authors: Bahar Baran, Şirin Nur Kaptan, D.Yelda Kağnıcı, Erol Esen, Barışcan Öztürk, Ender Siyez, Diğdem M Siyez

Abstract:

It is known that some factors associated with infertility are preventable and that young people's knowledge in this regard is inadequate, yet very few studies focus on effective prevention of infertility. Psycho-educational programs have an important place in infertility prevention efforts. Nowadays, considering households' rates of technology and Internet use, young people turn to websites as a primary source of information on any health problem they encounter. However, one of the prerequisites for the effectiveness of websites or face-to-face psycho-education programs is that they consider the needs of the participants. In particular, these programs are expected to fit the cultural context and the diversity of beliefs and values in society. The aim of this research is to determine what university students want to learn about infertility and fertility, and to examine their views on the structure of a website. The sample of the research consisted of 9693 university students studying in 21 public higher education programs in Turkey; 51.6% (n=5002) were female and 48.4% (n=4691) were male. A Needs Analysis Questionnaire developed by the researchers was used as the data collection tool. In the analysis of the data, descriptive analysis was conducted in SPSS. According to the findings, the topics that university students most wanted to study about infertility and fertility were 'misconceptions about infertility' (94.9%), 'misconceptions about sexual behaviors' (94.6%), 'factors affecting infertility' (92.8%), 'sexual health and reproductive health' (92.5%), 'sexually transmitted diseases' (92.7%), 'sexuality and society' (90.9%) and 'healthy life (help centers)' (90.4%). The questions about how the content of the website should be designed for university students were also analyzed descriptively. According to the results, 91.5% (n=8871) of the university students proposed including frequently asked questions and their answers, 89.2% (n=8648) stated that expert videos should be included, 82.6% (n=8008) requested animations and simulations, 76.1% (n=7380) proposed different content according to sex, and 66% (n=6460) proposed different designs according to sex. The results indicate that the findings are similar to the contents of programs carried out in other countries in terms of the topics to be covered. It is suggested that the opinions of the participants be taken into account during the design of the website.

Keywords: infertility, prevention, psycho-education, web based education

Procedia PDF Downloads 198
319 Wind Generator Control in Isolated Site

Authors: Glaoui Hachemi

Abstract:

Wind has been proven to be a cost-effective and reliable energy source. Technological advancements over the last years have placed wind energy in a firm position to compete with conventional power generation technologies. Algeria has a vast uninhabited land area, of which the south (desert) represents the greatest part, with a considerable wind regime. In this paper, an analysis of wind energy utilization as a viable energy substitute is presented for six selected sites widely distributed over the south of Algeria. Wind speed frequency distribution data obtained from the Algerian Meteorological Office are used to calculate the average wind speed and the available wind power. The annual energy produced by the Fuhrlander FL 30 wind machine is obtained using two methods. The analysis shows that in southern Algeria, at 10 m height, the available wind power varies between 160 and 280 W/m², except for Tamanrasset. The highest potential wind power was found at Adrar, where the wind speed is above 3 m/s for 88% of the time. Moreover, the annual wind energy generated by this machine lies between 33 and 61 MWh, except for Tamanrasset, with only 17 MWh. Since wind turbines are usually installed at heights greater than 10 m, an increased output of wind energy can be expected. The wind resource thus appears to be suitable for power production in the south and could provide a viable substitute for diesel oil for irrigation pumps and electricity generation. In this paper, a model of a wind turbine (WT) with a permanent magnet synchronous generator (PMSG) and its associated controllers is presented. The increase of wind power penetration in power systems has meant that conventional power plants are gradually being replaced by wind farms, and today wind farms are required to participate actively in power system operation in the same way as conventional power plants. Power system operators have accordingly revised the grid connection requirements for wind turbines and wind farms, and now demand that these installations be able to carry out more or less the same control tasks as conventional power plants. For dynamic power system simulations, the PMSG wind turbine model includes an aerodynamic rotor model, a lumped-mass representation of the drive train system and a generator model. We propose a model, implemented in MATLAB/Simulink, of each of the system components of an off-grid small wind turbine.
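The available wind power figures above follow from the wind speed frequency distribution. A minimal sketch of the standard Weibull-based estimate follows; the shape and scale parameters are assumptions for illustration, not the fitted values for the Algerian sites:

```python
import math

RHO_AIR = 1.225  # kg/m^3, standard air density

def mean_wind_power_density(k, c):
    """Mean available wind power density (W/m^2) for a Weibull wind
    speed distribution with shape k and scale c (m/s):
        P = 0.5 * rho * c^3 * Gamma(1 + 3/k)
    """
    return 0.5 * RHO_AIR * c ** 3 * math.gamma(1.0 + 3.0 / k)

def fraction_above(v_cut, k, c):
    """Fraction of time the wind speed exceeds v_cut (Weibull survival)."""
    return math.exp(-((v_cut / c) ** k))

# Assumed parameters (k=2 is the Rayleigh special case), chosen only to
# land inside the 160-280 W/m^2 range quoted in the abstract:
p = mean_wind_power_density(k=2.0, c=6.0)   # about 176 W/m^2
f = fraction_above(3.0, k=2.0, c=6.0)       # about 0.78
```

The cube of the scale parameter in the formula is why modest differences in mean wind speed between sites translate into large differences in available power.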

Keywords: windgenerator systems, permanent magnet synchronous generator (PMSG), wind turbine (WT) modeling, MATLAB simulink environment

Procedia PDF Downloads 320
318 Environmental Aspects of Alternative Fuel Use for Transport with Special Focus on Compressed Natural Gas (CNG)

Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej

Abstract:

The history of gaseous fuel use for the motive power of vehicles dates back to the second half of the nineteenth century, and thus to the beginnings of the automotive industry. The first engines were powered by coal gas and became the prototype for the internal combustion engines built since; it can thus be considered that this construction gave rise to the automotive industry. As socio-economic development advances, so does the number of motor vehicles. Although technological progress in recent decades has reduced the emissions generated by the internal combustion engines of cars, the sharp increase in the number of cars and the rapidly growing traffic are an important source of air pollution and a major cause of acoustic nuisance, in particular in large urban agglomerations. One of the solutions, in terms of reducing exhaust emissions and improving air quality, is the more extensive use of alternative fuels: CNG, LNG, electricity and hydrogen. In the case of using electricity for transport, it should be noted that the environmental outcome depends on the structure of electricity generation. The paper reviews selected regulations affecting the use of alternative fuels for transport (including Directive 2014/94/EU) and its dynamics between 2000 and 2015 in Poland and selected EU countries. The paper also focuses on the environmental impact of alternative fuels by comparing the individual emission volumes with the emissions from the conventional fuels, petrol and diesel oil. Bearing in mind that the extent of alternative fuel use is determined primarily by economic conditions, the article describes the price relationships between alternative and conventional fuels in Poland and selected EU countries. It is pointed out that although Poland has a wealth of experience in using methane-based alternative fuels for transport, one of the main barriers to their development in Poland is the extensive use of LPG. In addition, the poorly developed network of CNG stations in Poland, which does not allow convenient long-distance travel, especially in the northern part of the country, is a serious obstacle to the further development of CNG as a transport fuel. An interesting solution to this problem appears to be home CNG filling stations: the Home Refuelling Appliance (HRA, refuelling time 8-10 hours) and the Home Refuelling Station (HRS, refuelling time 8-10 minutes). The authors' team is working on HRA and HRS technologies. The article also highlights the impact of alternative fuel use on energy security by reducing reliance on imports of crude oil and petroleum products.

Keywords: alternative fuels, CNG (Compressed Natural Gas), CNG stations, LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles), pollutant emissions

Procedia PDF Downloads 212
317 Production Optimization under Geological Uncertainty Using Distance-Based Clustering

Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe

Abstract:

It is important to figure out reservoir properties for better production management. Due to the limited information, there are geological uncertainties on very heterogeneous or channel reservoir. One of the solutions is to generate multiple equi-probable realizations using geostatistical methods. However, some models have wrong properties, which need to be excluded for simulation efficiency and reliability. We propose a novel method of model selection scheme, based on distance-based clustering for reliable application of production optimization algorithm. Distance is defined as a degree of dissimilarity between the data. We calculate Hausdorff distance to classify the models based on their similarity. Hausdorff distance is useful for shape matching of the reservoir models. We use multi-dimensional scaling (MDS) to describe the models on two dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find out the best model, which has the similar production rates with the true values. From the process, we can select good reservoir models near the best model with high confidence. We make 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas prefer to flow through the sand facies, it is critical to characterize pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models assemble depending on their channel patterns. These channel distributions affect operation controls of each production well so that the model selection scheme improves management optimization process. We use one of useful global search algorithms, particle swarm optimization (PSO), for our production optimization. PSO is good to find global optimum of objective function, but it takes too much time due to its usage of many particles and iterations. 
In addition, if we use multiple reservoir models, the simulation time for PSO soars. With the proposed method, we can select good, reliable models that already match the production data. Considering the geological uncertainty of the reservoir, we can obtain well-optimized production controls for maximum net present value. The proposed method offers a novel way to select good cases among the various possible realizations. The model selection scheme can be applied not only to production optimization but also to history matching and other ensemble-based methods for efficient simulations.
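The selection pipeline described above (pairwise Hausdorff distances, an MDS embedding, then k-means) can be sketched with plain NumPy. The "models" below are made-up point sets standing in for sand-facies maps of two synthetic channel styles, not SNESIM output, and the classical-MDS and k-means steps are minimal textbook versions:

```python
# Illustrative sketch (not the authors' code): cluster synthetic "channel"
# facies maps by Hausdorff distance, embed with classical MDS, run k-means.
import numpy as np

rng = np.random.default_rng(0)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (n,2) and b (m,2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy realizations: sand-facies pixel coordinates for two channel positions
models = [np.column_stack([np.arange(20), 5 + s + rng.normal(0, 0.3, 20)])
          for s in [0] * 5 + [8] * 5]

n = len(models)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = hausdorff(models[i], models[j])

# Classical MDS: double-center the squared distances, keep top-2 eigenvectors
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
X = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))   # 2-D embedding of the models

# Plain k-means (k=2) on the embedding, initialized at two extreme models
centers = X[[0, -1]]
for _ in range(20):
    labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=-1), axis=1)
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print(labels)  # the two channel styles separate into two clusters
```

One representative model per cluster would then be simulated, as the abstract describes, instead of running flow simulation on all realizations.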

Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization

Procedia PDF Downloads 127
316 CFD Modeling of Stripper Ash Cooler of Circulating Fluidized Bed

Authors: Ravi Inder Singh

Abstract:

Due to their high heat transfer rate, high carbon utilization efficiency, fuel flexibility and other advantages, numerous circulating fluidized bed boilers have been built in India in the last decade. Many companies, such as BHEL, ISGEC, Thermax, Cethar Limited and Enmas GB Power Systems Projects Limited, are making CFBC boilers and installing units throughout India. Owing to their complexity, many problems exist in CFBC units, and only a few have been reported. Agglomeration, i.e., clinker formation in the riser, loop seal leg and stripper ash coolers, is one problem the industry is facing, and proper documentation is rarely found in the literature. Circulating fluidized bed (CFB) boiler bottom ash contains a large amount of physical heat. When the boiler combusts a low-calorie fuel, the ash content is normally more than 40%, and the physical heat loss is approximately 3% if the bottom ash is discharged without cooling. In addition, red-hot bottom ash is bad for mechanized handling and transportation, as the upper temperature limit of ash handling machinery is 200 °C. Therefore, a bottom ash cooler (BAC) is often used to treat the high-temperature bottom ash, reclaiming heat and making the ash easy to handle and transport. As a key auxiliary device of CFB boilers, the BAC directly influences the secure and economic operation of the boiler. Many kinds of BACs have been fitted to large-scale CFB boilers as the CFB boiler has developed and improved; these include the water-cooled ash cooling screw, the rolling-cylinder ash cooler (RAC) and the fluidized bed ash cooler (FBAC). In this study, a prototype of a novel stripper ash cooler is studied. The circulating fluidized bed ash cooler (CFBAC) combines the major technical features of the spouted bed and the bubbling bed, and can achieve selective discharge of the bottom ash. The novel stripper ash cooler is a bubbling bed, studied here as a visible cold test rig.
The reason for choosing a cold test is that high temperatures are difficult to create and maintain at the laboratory scale. The aim of the study is to determine the flow pattern inside the stripper ash cooler. The cold prototype is similar to the stripper ash cooler used in industry and was made by scaling down certain parameters. The performance of the fluidized bed ash cooler is studied using this cold experimental bench. The air flow rate, the particle size of the solids and the air distributor type are considered the key operating parameters of a fluidized bed ash cooler (FBAC) and are the parameters studied here.
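A standard first estimate when setting the air flow rate of such a cold fluidized-bed rig is the minimum fluidization velocity from the Wen and Yu (1966) correlation. This is not taken from the paper; the particle and gas properties below are assumed illustrative values, not the rig's measured ones:

```python
# Wen & Yu (1966) estimate of minimum fluidization velocity for ash particles.
# All property values are assumptions for illustration, not measured data.
import numpy as np

d_p = 500e-6                  # particle diameter, m (assumed ash size)
rho_p, rho_g = 2200.0, 1.2    # particle and air densities, kg/m^3
mu_g = 1.8e-5                 # air viscosity, Pa s
g = 9.81                      # m/s^2

Ar = d_p**3 * rho_g * (rho_p - rho_g) * g / mu_g**2   # Archimedes number
Re_mf = np.sqrt(33.7**2 + 0.0408 * Ar) - 33.7         # Wen & Yu correlation
u_mf = Re_mf * mu_g / (rho_g * d_p)                   # minimum fluidization, m/s
print(round(u_mf, 3), "m/s")
```

Superficial air velocities somewhat above this value would put the bed in the bubbling regime the abstract describes.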

Keywords: CFD, Eulerian-Eulerian, Eulerian-Lagrangian model, parallel simulations

Procedia PDF Downloads 498
315 Uncertainty Quantification of Fuel Compositions on Premixed Bio-Syngas Combustion at High-Pressure

Authors: Kai Zhang, Xi Jiang

Abstract:

The effect of fuel variability on the premixed combustion of bio-syngas mixtures is of great importance in bio-syngas utilisation. Uncertainty in the concentrations of fuel constituents such as H2, CO and CH4 may lead to unpredictable combustion performance, combustion instabilities and hot spots which may deteriorate and damage the combustion hardware. Numerical modelling and simulations can assist in understanding the behaviour of bio-syngas combustion with pre-defined species concentrations, but evaluating the effect of variable concentrations is expensive. To be more specific, questions such as 'what is the burning velocity of bio-syngas at a specific equivalence ratio?' have been answered either experimentally or numerically, while questions such as 'what is the likelihood of a given burning velocity when the precise concentrations of the bio-syngas constituents are unknown but their ranges are prescribed?' have not yet been answered. Uncertainty quantification (UQ) methods can be used to tackle such questions and assess the effects of fuel composition. An efficient probabilistic UQ method based on Polynomial Chaos Expansion (PCE) techniques is employed in this study. The method represents random variables (combustion performances) with orthogonal polynomials such as Legendre or Gaussian polynomials. The PCE constructed via Galerkin projection provides easy access to global sensitivities such as the main, joint and total Sobol indices. In this study, the impacts of fuel composition on the combustion of bio-syngas fuel mixtures (adiabatic flame temperature and laminar flame speed) are presented using this PCE technique at several equivalence ratios. High-pressure effects on bio-syngas combustion instability are obtained using a detailed chemical mechanism, the San Diego mechanism.
Guidance on reducing combustion instability from upstream biomass gasification process is provided by quantifying the significant contributions of composition variations to variance of physicochemical properties of bio-syngas combustion. It was found that flame speed is very sensitive to hydrogen variability in bio-syngas, and reducing hydrogen uncertainty from upstream biomass gasification processes can greatly reduce bio-syngas combustion instability. Variation of methane concentration, although thought to be important, has limited impacts on laminar flame instabilities especially for lean combustion. Further studies on the UQ of percentage concentration of hydrogen in bio-syngas can be conducted to guide the safer use of bio-syngas.
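The mechanics of reading Sobol indices off a Legendre PCE can be shown on a toy problem. The two-input "flame speed" response surface below is entirely made up (it is not the San Diego mechanism), with x1 and x2 standing for H2 and CH4 fractions scaled to be uniform on [-1, 1]; the point is only that, after Galerkin projection, main Sobol indices are sums of squared, normalized coefficients:

```python
# Toy Legendre PCE via Galerkin projection (Gauss-Legendre quadrature), with
# main Sobol indices computed from the PCE coefficients. The response surface
# is a made-up stand-in for a flame-speed model, not a real mechanism.
import numpy as np
from numpy.polynomial.legendre import leggauss
from numpy.polynomial import Legendre

def flame_speed(x1, x2):              # hypothetical response surface
    return 0.4 + 0.3 * x1 + 0.1 * x1**2 + 0.02 * x2

deg = 3
nodes, weights = leggauss(deg + 2)    # exact for the projections below
w = weights / 2.0                     # quadrature for the uniform density 1/2

# c[i, j] multiplies P_i(x1) P_j(x2); E[P_k^2] = 1/(2k+1) under U(-1, 1)
c = np.zeros((deg + 1, deg + 1))
X1, X2 = np.meshgrid(nodes, nodes, indexing="ij")
F = flame_speed(X1, X2)
for i in range(deg + 1):
    Pi = Legendre.basis(i)(nodes)
    for j in range(deg + 1):
        Pj = Legendre.basis(j)(nodes)
        c[i, j] = (2*i + 1) * (2*j + 1) * np.einsum("a,b,ab->", w * Pi, w * Pj, F)

norm = 1.0 / np.outer(2 * np.arange(deg + 1) + 1, 2 * np.arange(deg + 1) + 1)
var_terms = c**2 * norm
var_terms[0, 0] = 0.0                 # drop the mean term
total_var = var_terms.sum()
S_h2 = var_terms[1:, 0].sum() / total_var    # main Sobol index of x1 ("H2")
S_ch4 = var_terms[0, 1:].sum() / total_var   # main Sobol index of x2 ("CH4")
print(round(S_h2, 3), round(S_ch4, 3))
```

Here the hydrogen-like input dominates the variance, mirroring the abstract's finding that flame speed is most sensitive to hydrogen variability.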

Keywords: bio-syngas combustion, clean energy utilisation, fuel variability, PCE, targeted uncertainty reduction, uncertainty quantification

Procedia PDF Downloads 261
314 Optimizing Cell Culture Performance in an Ambr15 Microbioreactor Using Dynamic Flux Balance and Computational Fluid Dynamic Modelling

Authors: William Kelly, Sorelle Veigne, Xianhua Li, Zuyi Huang, Shyamsundar Subramanian, Eugene Schaefer

Abstract:

The ambr15™ bioreactor is a single-use microbioreactor for cell line development and process optimization. The ambr system offers fully automatic liquid handling with the possibility of fed-batch operation and automatic control of pH and oxygen delivery. With operating conditions for large-scale biopharmaceutical production properly scaled down, microbioreactors such as the ambr15™ can potentially be used to predict the effect of process changes such as modified media or different cell lines. In this study, gassing rates and dilution rates were varied for a semi-continuous cell culture system in the ambr15™ bioreactor. The corresponding changes in metabolite production and consumption, as well as cell growth rate and therapeutic protein production, were measured. Conditions were identified in the ambr15™ bioreactor that produced the metabolic shifts and the specific metabolic and protein production rates also seen in the corresponding larger (5 liter) scale perfusion process. A dynamic flux balance (DFB) model was employed to understand and predict the metabolic changes observed. The DFB model predicted the trends observed experimentally, including lower specific glucose consumption when CO₂ was maintained at higher levels (i.e., 100 mm Hg) in the broth. A computational fluid dynamic (CFD) model of the ambr15™ was also developed to understand the transfer of O₂ and CO₂ to the liquid. This CFD model predicted gas-liquid flow in the bioreactor using the ANSYS software. The two-phase flow equations were solved via an Eulerian method, with population balance equations tracking the size of the gas bubbles resulting from breakage and coalescence. Reasonable results were obtained in that the carbon dioxide mass transfer coefficient (kLa) and the gas hold-up increased with gas flow rate. Volume-averaged kLa values at 500 RPM increased as the gas flow rate was doubled and matched experimentally determined values.
These results form a solid basis for optimizing the ambr15™, using both CFD and FBA modelling approaches together, for use in microscale simulations of larger scale cell culture processes.
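For readers unfamiliar with how the experimental kLa values that the CFD is matched against are typically obtained, here is a minimal sketch of the classical dynamic gassing-out estimate. It assumes the usual first-order model dC/dt = kLa(C* - C), so ln(1 - C/C*) is linear in time with slope -kLa; all numbers are synthetic, not the study's data:

```python
# Dynamic gassing-out sketch: recover kLa from a dissolved-gas response curve.
# kla_true, c_star and the sampling times are assumed illustrative values.
import numpy as np

kla_true = 0.012          # 1/s, assumed mass transfer coefficient
c_star = 7.5              # mg/L, assumed saturation concentration
t = np.arange(0.0, 300.0, 10.0)                 # sampling times, s
c = c_star * (1.0 - np.exp(-kla_true * t))      # "measured" response

# Log-linearize and fit the slope (skip t = 0, where the log argument is 1)
y = np.log(1.0 - c[1:] / c_star)
kla_est = -np.polyfit(t[1:], y, 1)[0]
print(round(kla_est, 4))
```

With real, noisy probe data the same fit is applied after discarding the probe's initial response lag.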

Keywords: cell culture, computational fluid dynamics, dynamic flux balance analysis, microbioreactor

Procedia PDF Downloads 260
313 Lessons from Patients Who Expired due to Severe Head Injuries Treated in the Intensive Care Unit of Lady Reading Hospital Peshawar

Authors: Mumtaz Ali, Hamzullah Khan, Khalid Khanzada, Shahid Ayub, Aurangzeb Wazir

Abstract:

Objective: To analyse the deaths of patients treated in the neuro-surgical ICU for severe head injuries from different perspectives, and to use the data so obtained to help improve health care delivery to this group of patients in the ICU. Study Design: A descriptive study based on a retrospective analysis of patients presenting to the neuro-surgical ICU of Lady Reading Hospital, Peshawar. Study Duration: The period between 1st January 2009 and 31st December 2009. Material and Methods: The clinical records of all patients presenting with the clinical, radiological and surgical features of severe head injury who expired in the neuro-surgical ICU were collected. A separate proforma recorded age, sex, time of arrival and death, cause of head injury, radiological features, clinical parameters and the surgical or non-surgical treatment given. The average duration of stay and the demographic and domiciliary representation of these patients were noted, and the records were analyzed accordingly for discussion and recommendations. Results: Of the 112 (n=112) patients who expired in one year in the neuro-surgical ICU, young adults made up the majority, 64 (57.14%), followed by children, 34 (30.35%), and then the elderly, 10 (8.92%). Road traffic accidents were the major cause of presentation, 75 (66.96%), followed by history of fall, 23 (20.53%), and then firearm injuries, 13 (11.60%). The predominant CT scan features on presentation were cerebral edema and midline shift (diffuse neuronal injury), 46 (41.07%), followed by cerebral contusions, 28 (25%). Correctable surgical causes were present in only 18 patients (16.07%), and the majority, 94 (83.92%), were given conservative management. Of the 69 (n=69) patients in whom the CT scan was repeated, 62 (89.85%) showed worsening of the initial abnormalities, while in 7 cases (10.14%) the features were static.
Among the non-surgical cases, both ventilatory therapy in 7 (6.25%) and tracheostomy in 39 (34.82%) failed to change the outcome. The maximum stay in the neuro ICU leading up to death was 48 hours in 35 (31.25%) cases, followed by 24 hours in 31 (27.67%) cases, one week in 24 (21.42%) and 72 hours in 16 (14.28%). Only 6 (5.35%) patients survived for more than a week. Patients were received from almost all districts of NWFP except the Hazara division; there were some Afghan refugees as well. Conclusion: Mortality following head injury is alarmingly high despite repeated claims of professional and administrative improvement. Even the ICU could not change the outcome to the desired degree in the present setup. A rethinking is needed, at both the individual and institutional levels, among the concerned quarters, with a clear aim toward more scientific grounds; only then can the desired results be achieved.

Keywords: Glasgow Coma Scale, pediatrics, geriatrics, Peshawar

Procedia PDF Downloads 334
312 Assessment of Indoor Air Pollution in Naturally Ventilated Dwellings of Mega-City Kolkata

Authors: Tanya Kaur Bedi, Shankha Pratim Bhattacharya

Abstract:

The US Environmental Protection Agency defines indoor air quality as "the air quality within and around buildings, especially as it relates to the health and comfort of building occupants". According to the 2021 report by the Energy Policy Institute at the University of Chicago, residents of India, the country with the highest levels of air pollution in the world, lose about 5.9 years of life expectancy due to poor air quality, and yet the country has numerous dwellings dependent on natural ventilation. Urban residents currently spend 90% of their time indoors, a scenario that raises concern for occupant health and well-being. This study attempts to demonstrate the causal relationship between indoor air pollution and its determining aspects. Detailed indoor air pollution audits were conducted in residential buildings located in Kolkata, India in the months of December and January 2021. According to the air pollution knowledge assessment city program in India, Kolkata is the second most polluted mega-city after Delhi. Although air pollution levels are alarming year-round, the winter months are the most crucial due to unfavourable environmental conditions: while emissions remain roughly constant throughout the year, cold air is denser and moves more slowly than warm air, trapping pollution in place for much longer, so it is breathed in at a higher rate than in summer. The air pollution monitoring period was selected considering environmental factors and major pollution contributors such as traffic and road dust. This study focuses on the relationship between the built environment and the spatial-temporal distribution of air pollutants in and around it. The measured parameters include temperature, relative humidity, air velocity, particulate matter, volatile organic compounds, formaldehyde, and benzene. A total of 56 rooms were audited, selectively targeting the most dominant middle-income group in the urban area of the metropolitan region.
Data collection was conducted using a set of instruments positioned in the human breathing zone. The study assesses the relationship between indoor air pollution levels and the factors determining natural ventilation and air pollution dispersion, such as the surrounding environment, dominant wind, openable window to floor area ratio, windward or leeward side openings, the type of natural ventilation in the room (single-sided or cross-ventilation), floor height, residents' cleaning habits, etc.

Keywords: indoor air quality, occupant health, air pollution, architecture, urban environment

Procedia PDF Downloads 91
311 On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept

Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani

Abstract:

Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of the electric field in an electrical mobility spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by the flow conditions, geometry, electric field and particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain the particle trajectories in the device and hence the signal reported by each electrometer. Based on the output signals (which result from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to the electrometers) in order to evaluate particle size distributions more accurately. Based on the system's capacity to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in Shannon's sense, is the 'average amount of information contained in an event, sample or character extracted from a data stream'.
Evaluating the responses (signals) which were obtained via various configurations of detecting rings, the best configuration which gave the best predictions about the size distributions of injected particles, was the modified configuration. It was also the one that had the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Ultimately, various clouds of particles were introduced to the simulations and predicted size distributions were compared to the exact size distributions.
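The flavour of such an entropy benchmark can be illustrated with a small sketch. The exact entropy definition the authors extract from the transfer matrix is not given in the abstract, so the version below is an assumed one: normalize the squared singular values of a (hypothetical) channel-by-size transfer matrix into a probability distribution and score the design by its Shannon/von Neumann entropy:

```python
# Assumed form of the entropy benchmark (not the authors' exact definition):
# entropy of the normalized squared singular values of the transfer matrix.
import numpy as np

def design_entropy(T):
    """Entropy in bits of the normalized squared singular values of T."""
    s = np.linalg.svd(T, compute_uv=False) ** 2
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical 4-ring x 4-size-bin transfer matrices
T_redundant = np.ones((4, 4))    # every ring responds identically to every size
T_selective = np.eye(4)          # each ring isolates one size bin

e_bad = design_entropy(T_redundant)
e_good = design_entropy(T_selective)
print(e_bad, e_good)   # the selective design carries more information
```

A redundant design scores 0 bits while the fully selective one scores 2 bits, matching the abstract's observation that the best-predicting ring configuration was also the one with maximum entropy.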

Keywords: aerosol nano-particle, CFD, electrical mobility spectrometer, von Neumann entropy

Procedia PDF Downloads 321
310 In Vivo Evaluation of Exposure to Electromagnetic Fields at 27 GHz (5G) of Danio Rerio: A Preliminary Study

Authors: Elena Maria Scalisi, Roberta Pecoraro, Martina Contino, Sara Ignoto, Carmelo Iaria, Santi Concetto Pavone, Gino Sorbello, Loreto Di Donato, Maria Violetta Brundo

Abstract:

5G technology is evolving to satisfy a variety of service requirements that allow high data-rate connections (1 Gbps) and lower latency than at present (<1 ms). In order to support high data transmission speeds and high traffic for eMBB (enhanced mobile broadband) use cases, 5G systems use different frequency bands of the radio spectrum (700 MHz, 3.6-3.8 GHz and 26.5-27.5 GHz), thus exploiting higher frequencies than previous mobile radio generations (1G-4G). However, waves at higher frequencies have a lower capacity to propagate in free space; therefore, to guarantee capillary coverage of the territory for high-reliability applications, a large number of repeaters will have to be installed. Following the introduction of this new technology, there has been growing concern in recent months about possible harmful effects on human health. The aim of this preliminary study is to evaluate possible short-term effects induced by 5G millimeter waves on the embryonic development and early life stages of Danio rerio using the Z-FET. We exposed developing zebrafish at a frequency of 27 GHz, with a standard pyramidal horn antenna placed 15 cm from the sample holder, ensuring an incident power density of 10 mW/cm2. During the exposure cycle, from 6 h post fertilization (hpf) to 96 hpf, we measured different morphological endpoints every 24 hours. The zebrafish embryo toxicity test (Z-FET) is a short-term test carried out on fertilized zebrafish eggs and represents an effective alternative to the acute test with adult fish (OECD, 2013). We observed that 5G exposure had no significant impact on mortality or morphology: exposed larvae showed normal detachment of the tail, presence of a heartbeat and well-organized somites, although the hatching rate was lower than that of untreated larvae even at 48 h of exposure.
Moreover, immunohistochemical analysis performed on the larvae was negative for HSP-70 expression, which was used as a biomarker of exposure. This is a preliminary study on the evaluation of the potential toxicity induced by 5G, and it underlines the importance of further studies aimed at clarifying the probable real risk of exposure to these electromagnetic fields.

Keywords: biomarker of exposure, embryonic development, 5G waves, zebrafish embryo toxicity test

Procedia PDF Downloads 107
309 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing the topography in this class of problems reduces the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography, but experimental work in hydraulics can be very demanding in both time and cost, and computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem identifies the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics must be rearranged so that the model parameters can be evaluated from measured data; however, this is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined and the model parameters are determined iteratively. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, its ability to handle complex geometries with no need to rearrange the original model into an explicit form and, on the other hand, its strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry.
Numerical results are presented for a dam-break flow problem over non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and location of observations. The obtained results demonstrate high reliability and accuracy of the proposed techniques.
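The core of the second stage, an ensemble Kalman analysis step, can be sketched in a few lines. This is a simplification, not the paper's adaptive scheme: the shallow-water forward model is replaced by a made-up linear observation operator H mapping a few bed elevations to free-surface data, and the standard stochastic-EnKF update is applied over several cycles:

```python
# Minimal stochastic EnKF sketch: estimate bed elevations from observations.
# H, R, b_true and the ensemble setup are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 4, 6, 300                     # state size, observations, ensemble size
b_true = np.array([0.2, 0.5, 0.9, 0.4])          # "true" bed elevations, m
H = rng.normal(size=(m, n))             # stand-in linear forward/observation map
R = 0.01 * np.eye(m)                    # observation-error covariance
y = H @ b_true + rng.multivariate_normal(np.zeros(m), R)

B = rng.normal(0.5, 0.4, size=(N, n))   # prior ensemble around a flat guess
for _ in range(5):                      # a few assimilation cycles
    A = B - B.mean(axis=0)
    P = A.T @ A / (N - 1)               # ensemble covariance of the bed
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)            # Kalman gain
    Y = y + rng.multivariate_normal(np.zeros(m), R, size=N)  # perturbed obs
    B = B + (Y - B @ H.T) @ K.T         # analysis update of every member

err_prior = np.linalg.norm(0.5 - b_true)
err_post = np.linalg.norm(B.mean(axis=0) - b_true)
print(err_prior, err_post)              # analysis mean approaches the true bed
```

In the paper's setting, H would be replaced by a shallow-water solver run per ensemble member, which is what makes the method free of any explicit rearrangement of the model.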

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 115
308 Demarcating Wetting States in Pressure-Driven Flows by Poiseuille Number

Authors: Anvesh Gaddam, Amit Agrawal, Suhas Joshi, Mark Thompson

Abstract:

An increase in the surface area to volume ratio as the characteristic length scale decreases leads to a rapid increase in the pressure drop across a microchannel. Texturing the microchannel surfaces reduces the effective surface area, thereby decreasing the pressure drop. Surface texturing introduces two wetting states: a metastable Cassie-Baxter state and a stable Wenzel state. Predicting the wetting transition in textured microchannels is essential for identifying the optimal parameters leading to maximum drag reduction. Optical methods allow visualization only in confined areas, so obtaining whole-field information on the wetting transition is challenging. In this work, we propose a non-invasive method to capture wetting transitions in textured microchannels under flow conditions. To this end, we tracked the behavior of the Poiseuille number Po = f.Re (with f the friction factor and Re the Reynolds number) for a range of flow rates (5 < Re < 50), and the different wetting states were qualitatively demarcated by observing inflection points in the f.Re curve. Microchannels with longitudinal and with transverse ribs, at a fixed gas fraction (δ, the ratio of shear-free area to total area) and at different confinement ratios (ε, the ratio of rib height to channel height), were fabricated. The measured pressure drops across the textured microchannels were converted into Poiseuille numbers for all flow rates. The transient behavior of the pressure drop across the textured microchannels revealed the collapse of the liquid-gas interface into the gas cavities. Three wetting states were observed at ε = 0.65 for both longitudinal and transverse ribs, whereas an early transition occurred at Re ≈ 35 for longitudinal ribs at ε = 0.5, due to spontaneous flooding of the gas cavities as the liquid-gas interface ruptured at the inlet. In addition, the pressure drop in the Wenzel state was found to be less than in the Cassie-Baxter state.
Three-dimensional numerical simulations confirmed the initiation of the completely wetted Wenzel state in the textured microchannels. Furthermore, laser confocal microscopy was employed to identify the location of the liquid-gas interface in the Cassie-Baxter state. In conclusion, the present method can overcome the limitations posed by existing techniques, to conveniently capture wetting transition in textured microchannels.
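The demarcation quantity itself is straightforward to compute. The sketch below converts a measured pressure drop into a Fanning friction factor and Po = f·Re for a rectangular microchannel; every number (channel dimensions, flow rate, pressure drops) is a made-up illustrative value, not the paper's data:

```python
# Converting measured pressure drops into Poiseuille numbers Po = f*Re.
# All dimensions and Δp values are assumed for illustration only.
import numpy as np

rho, mu = 998.0, 1.0e-3          # water density (kg/m^3) and viscosity (Pa s)
w, h, L = 500e-6, 100e-6, 20e-3  # channel width, height, length (m)
Q = 50e-9 / 60.0                 # assumed flow rate: 50 uL/min in m^3/s

A = w * h
Dh = 2 * w * h / (w + h)         # hydraulic diameter of the rectangle
u = Q / A                        # mean velocity
Re = rho * u * Dh / mu

def poiseuille(dp):
    """Fanning friction factor times Re for a measured pressure drop dp (Pa)."""
    f = dp * Dh / (2 * rho * u**2 * L)
    return f * Re

po_a = poiseuille(350.0)   # assumed measured drop in one wetting state
po_b = poiseuille(460.0)   # assumed drop in the other state at the same Re
print(round(Re, 2), round(po_a, 1), round(po_b, 1))
```

Plotting such Po values against Re and looking for inflection points is the non-invasive demarcation the abstract proposes.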

Keywords: drag reduction, Poiseuille number, textured surfaces, wetting transition

Procedia PDF Downloads 144
307 Depth-Averaged Modelling of Erosion and Sediment Transport in Free-Surface Flows

Authors: Thomas Rowan, Mohammed Seaid

Abstract:

A fast finite volume solver for multi-layered shallow water flows with mass exchange and an erodible bed is developed. It enables the user to solve a number of complex sediment-based problems, including (but not limited to) dam-break over an erodible bed, recirculation currents and bed evolution, as well as levee and dyke failure. This research develops methodologies crucial to the understanding of multi-sediment fluvial mechanics and waterway design. In this model, mass exchange between the layers is allowed and, in contrast to previous models, both sediment and fluid can transfer between layers. In the current study we use a two-step finite volume method to avoid solving the Riemann problem. Entrainment and deposition rates are calculated for the first time in a model of this nature. In the first step, the governing equations are rewritten in non-conservative form and the intermediate solutions are calculated using the method of characteristics. In the second step, the numerical fluxes are reconstructed in conservative form and used to calculate a solution that satisfies the conservation property. This method is found to be considerably faster than comparable finite volume methods, and it also exhibits good shock capturing. Most entrainment and deposition equations use a bed-level concentration factor, which leads to inaccuracies in both the near-bed concentration and the total scour. To account for diffusion, as no vertical velocities are calculated, a capacity-limited diffusion coefficient is used. An additional advantage of this multilayer approach is that the bottom-layer fluid velocity varies from that of single-layer models; this dramatically reduces erosion, which is often overestimated in simulations of this nature using single-layer flows. The model is used to simulate a standard dam break.
In the dam-break simulation, as expected, the number of fluid layers used creates variation in the resultant bed profile, with more layers giving a greater variation in fluid velocity. These results showed a marked difference in erosion profiles from those of standard models. Overall, the model provides new insight into the problems presented, at minimal computational cost.
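For orientation, the benchmark underlying all of this, a dam break governed by the shallow water equations, can be reproduced in miniature. The sketch below is a single-layer, flat-bed, non-erodible illustration using a plain Lax-Friedrichs finite-volume step (far simpler than the paper's multilayer two-step method); h is depth and q = hu is discharge:

```python
# 1D shallow-water dam break with a Lax-Friedrichs finite-volume scheme.
# Single layer, flat rigid bed: an illustration only, not the paper's solver.
import numpy as np

g, N, L_dom = 9.81, 400, 10.0
dx = L_dom / N
x = (np.arange(N) + 0.5) * dx
h = np.where(x < L_dom / 2, 2.0, 1.0)   # dam at mid-domain: 2 m over 1 m depth
q = np.zeros(N)                          # fluid initially at rest

t, T = 0.0, 0.5
while t < T - 1e-12:
    c = np.abs(q / h) + np.sqrt(g * h)           # characteristic speeds
    dt = min(0.9 * dx / c.max(), T - t)          # CFL-limited time step
    U = np.stack([h, q])
    F = np.stack([q, q**2 / h + 0.5 * g * h**2])
    # Lax-Friedrichs interface fluxes; boundary cells stay frozen (waves
    # never reach the ends over this short simulated time)
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * (dx / dt) * (U[:, 1:] - U[:, :-1])
    U[:, 1:-1] -= (dt / dx) * (Fi[:, 1:] - Fi[:, :-1])
    h, q = U
    t += dt

print(round(h.min(), 3), round(h.max(), 3))  # depths stay between 1 and 2
```

The multilayer erodible-bed solver adds per-layer momentum equations, mass exchange terms and bed-evolution updates on top of this basic structure.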

Keywords: erosion, finite volume method, sediment transport, shallow water equations

Procedia PDF Downloads 205
306 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low-voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as the AFCI (arc fault circuit interrupter) have been used successfully in electrical networks to prevent damage and catastrophic incidents such as fires. However, these devices cannot locate a series arc fault on the line while it is operating. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V, 50 Hz home network. The method is validated through simulations in MATLAB. The fault location method uses the electrical parameters (resistance, inductance, capacitance and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In the first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters, and currents and voltages are recorded at both ends of the line for each arc fault generated at a different distance. In the second step, a fault map trace is created using signature coefficients obtained from the Kirchhoff equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the discrete fast Fourier transform of the currents and voltages and also the fault distance. In the third step, the same procedure used to calculate the signature coefficients is employed, but this time considering hypothetical fault distances at which the fault could appear; in this step the fault distance is unknown.
The iterative calculation from the Kirchhoff equations over stepped variations of the fault distance yields a curve with a linear trend. Finally, the fault location is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing the currents registered in simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the fault map trace generation. By carrying out the complete simulation, the performance of the method and the perspectives of this work will be presented.
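The final location step, intersecting the two linear trends, reduces to simple line fitting. In the sketch below the signature-coefficient curves of steps 2 and 3 are replaced by made-up linear data (their true shapes come from the Kirchhoff equations and the DFT of the recorded signals), so only the geometry of the intersection is illustrated:

```python
# Intersection of two fitted linear trends to read off the fault distance.
# The "signature coefficient" curves below are made-up linear stand-ins.
import numpy as np

d_true = 31.0                              # assumed fault position, m
d_hyp = np.linspace(0.0, 49.0, 50)         # hypothetical distances (step 3)

sig_step3 = 0.8 * d_hyp + 3.0              # made-up step-3 trend
sig_step2 = -0.2 * d_hyp + 34.0            # made-up step-2 trend (crosses at 31 m)

a1, b1 = np.polyfit(d_hyp, sig_step3, 1)   # fit both curves as lines
a2, b2 = np.polyfit(d_hyp, sig_step2, 1)
d_est = (b2 - b1) / (a1 - a2)              # abscissa of the intersection
print(round(d_est, 2))
```

With noisy simulated currents, the same least-squares fits would smooth the stepped curves before intersecting them.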

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 119
305 Time to Retire Rubber Crumb: How Soft Fall Playgrounds are Threatening Australia’s Great Barrier Reef

Authors: Michelle Blewitt, Scott P. Wilson, Heidi Tait, Juniper Riordan

Abstract:

Rubber crumb is a physical and chemical pollutant of concern for the environment and human health, warranting immediate investigation into its pathways to the environment and its potential impacts. This emerging microplastic is created by shredding end-of-life tyres into 'rubber crumb' particles of 1-5 mm, used on synthetic turf fields and soft-fall playgrounds as a solution to intensifying tyre waste worldwide. Despite rubber crumb's known toxic and carcinogenic properties, studies of its transport pathways and movement patterns from these surfaces remain in their infancy. To address this deficit, AUSMAP, the Australian Microplastic Assessment Project, in partnership with the Tangaroa Blue Foundation, conducted a study to quantify crumb loss from soft-fall surfaces. To the best of our knowledge, this study is the first of its kind; funding for the audits was provided by the Australian Government's Reef Trust. Sampling occurred at 12 soft-fall playgrounds within the Great Barrier Reef Catchment Area on Australia's north-east coast, in close proximity to the United Nations World Heritage listed reef. Samples were collected over a 12-month period using randomized sediment cores at 0, 2 and 4 meters from the playground edge along a 20-meter transect. This approach served two objectives pertaining to particle movement: establishing that crumb loss is occurring and that it decreases with distance from the soft-fall surface. Rubber crumb abundance was expressed as a total value and used to determine an expected average rubber crumb loss per m². An analysis of variance (ANOVA) was used to compare differences in crumb abundance at each interval from the playground. Site characteristics, including surrounding sediment type, playground age, degree of ultraviolet exposure and amount of foot traffic, were additionally recorded for the comparison.
Preliminary findings indicate that crumb is being lost at considerable rates from soft-fall playgrounds in the region, emphasizing an urgent need to further examine it as a potential source of aquatic pollution, soil contamination and a threat to individuals who regularly use these surfaces. Additional implications for the future of rubber crumb as a fit-for-purpose recycling initiative will be discussed with regard to industry, governments and the economic burden of surface maintenance and/or replacement.
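The distance comparison described above can be sketched with a one-way ANOVA, here implemented from first principles. All crumb counts below are invented for illustration; the study's real data are not reproduced.

```python
# Hedged sketch: one-way ANOVA comparing rubber-crumb counts per core at
# 0 m, 2 m and 4 m from the playground edge. Counts are hypothetical.

def one_way_anova(groups):
    """Return the F statistic for a one-way ANOVA across the given groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: variation explained by distance interval.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: residual variation among replicate cores.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical crumb counts per sediment core at each distance interval.
at_0m = [52, 47, 61, 58, 49]
at_2m = [30, 25, 34, 28, 31]
at_4m = [12, 9, 15, 11, 14]
f_stat = one_way_anova([at_0m, at_2m, at_4m])
```

A large F value (compared against the F distribution with 2 and 12 degrees of freedom) would support the hypothesis that crumb abundance decreases with distance from the surface.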

Keywords: microplastics, toxic rubber crumb, litter pathways, marine environment

Procedia PDF Downloads 75
304 An Approach for the Capture of Carbon Dioxide via Polymerized Ionic Liquids

Authors: Ghassan Mohammad Alalawi, Abobakr Khidir Ziyada, Abdulmajeed Khan

Abstract:

Ionic liquids (ILs) have recently been proposed as a potential alternative, next-generation CO₂-selective separation medium. The solubility and selectivity of CO₂ in ILs are easier to "tune" than in organic solvents, via modification of the cation and/or anion structures. Compared to room-temperature ionic liquids, polymerized ionic liquids exhibit increased CO₂ sorption capacities and accelerated sorption/desorption rates. This research aims to investigate the correlation between the CO₂ sorption rate and capacity of polymerized ionic liquids (pILs) and their chemical structure. One hypothesis we propose to explain the affinity between CO₂ and pILs is that sorption depends on the ion conductivity of the pILs' cations and anions. This assumption was supported by Monte Carlo molecular dynamics simulation results, which demonstrated that CO₂ molecules are localized around both cations and anions and that their sorption depends on the cations' and anions' ion conductivities. Polymerized ionic liquids were synthesized to investigate the impact of substituent alkyl chain length, cation, and anion on CO₂ sorption rate and capacity. The pILs under study are synthesized in three stages: first, trialkyl amine and vinyl benzyl chloride are directly quaternized to obtain the required cation; next, anion exchange is performed; finally, the obtained IL is polymerized to form the desired product (pILs). The synthesized pILs' structures were confirmed using elemental analysis and NMR. The synthesized pILs were characterized by examining their structure topology, chloride content, density, and thermal stability using SEM, ion chromatography (with a Metrohm Model 761 Compact IC apparatus), ultrapycnometer, and TGA. The CO₂ sorption results obtained with a magnetic suspension balance (MSB) apparatus show that the sorption capacity of pILs depends on the cation and anion ion conductivities. 
The anion's size also influences the CO₂ sorption rate and capacity. Adding water to the pILs was found to cause a dramatic, systematic swelling of the pILs, resulting in a significant increase in their capacity to absorb CO₂ under identical conditions, contingent on the type of gas, gas flow, applied gas pressure, and water content of the pILs. Along with its capacity to increase surface area through swelling, water also possesses very high ion conductivity for cations and anions, enhancing the pILs' ability to absorb CO₂.
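A magnetic suspension balance measures sorption gravimetrically, so the capacity follows directly from the recorded mass gain. The sketch below shows this conversion under simplifying assumptions (buoyancy correction neglected); the sample mass and mass gain are hypothetical, not values from the study.

```python
# Hedged sketch: converting the equilibrium mass gain recorded by a magnetic
# suspension balance (MSB) into a CO2 sorption capacity. Numbers are
# illustrative only, and the buoyancy correction applied in real MSB work
# is omitted for clarity.

M_CO2 = 44.01  # g/mol, molar mass of CO2

def sorption_capacity_mmol_per_g(mass_gain_g, sorbent_mass_g):
    """CO2 uptake in mmol per gram of dry sorbent."""
    return mass_gain_g / M_CO2 / sorbent_mass_g * 1000.0

# Hypothetical example: a 0.500 g pIL sample gains 22 mg of CO2 at equilibrium.
capacity = sorption_capacity_mmol_per_g(0.022, 0.500)  # ~1.0 mmol/g
```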

Keywords: polymerized ionic liquids, carbon dioxide, swelling, characterization

Procedia PDF Downloads 44
303 Dimensionality Control of Li Transport by MOFs Based Quasi-Solid to Solid Electrolyte

Authors: Manuel Salado, Mikel Rincón, Arkaitz Fidalgo, Roberto Fernandez, Senentxu Lanceros-Méndez

Abstract:

Lithium-ion batteries (LIBs) are a promising technology for energy storage, but they suffer from safety concerns due to the use of flammable organic solvents in their liquid electrolytes. Solid-state electrolytes (SSEs) offer a potential solution to this problem, but they have their own limitations, such as poor ionic conductivity and high interfacial resistance. The aim of this research was to develop a new type of SSE based on metal-organic frameworks (MOFs) and ionic liquids (ILs). MOFs are porous materials with high surface area and tunable electronic properties, making them ideal for use in SSEs. ILs are liquid electrolytes that are non-flammable and have high ionic conductivity. A series of MOFs were synthesized, and their electrochemical properties were evaluated. The MOFs were then infiltrated with ILs to form a quasi-solid gel and solid xerogel SSEs. The ionic conductivity, interfacial resistance, and electrochemical performance of the SSEs were characterized. The results showed that the MOF-IL SSEs had significantly higher ionic conductivity and lower interfacial resistance than conventional SSEs. The SSEs also exhibited excellent electrochemical performance, with high discharge capacity and long cycle life. The development of MOF-IL SSEs represents a significant advance in the field of solid-state electrolytes. The high ionic conductivity and low interfacial resistance of the SSEs make them promising candidates for use in next-generation LIBs. The data for this research was collected using a variety of methods, including X-ray diffraction, scanning electron microscopy, and electrochemical impedance spectroscopy. The data was analyzed using a variety of statistical and computational methods, including principal component analysis, density functional theory, and molecular dynamics simulations. 
The main question addressed by this research was whether MOF-IL SSEs could be developed with high ionic conductivity, low interfacial resistance, and excellent electrochemical performance. The results demonstrate that MOF-IL SSEs are a promising new type of solid-state electrolyte for LIBs: they combine all three properties, making them strong candidates for next-generation LIBs that are safer and have higher energy densities.
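The ionic conductivity reported from electrochemical impedance spectroscopy is typically obtained from the bulk resistance and the sample geometry via σ = t / (R·A). A minimal sketch, with hypothetical sample dimensions and resistance rather than values from this study:

```python
# Hedged sketch: extracting ionic conductivity from an EIS measurement.
# The bulk resistance R_b is read from the high-frequency real-axis
# intercept of the Nyquist plot; all numbers below are hypothetical.

def ionic_conductivity_s_per_cm(thickness_cm, area_cm2, bulk_resistance_ohm):
    """sigma = t / (R_b * A), returned in S/cm."""
    return thickness_cm / (bulk_resistance_ohm * area_cm2)

# Hypothetical membrane: 100 um thick, 1.13 cm^2 electrode area, R_b = 85 ohm.
sigma = ionic_conductivity_s_per_cm(0.010, 1.13, 85.0)  # ~1e-4 S/cm
```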

Keywords: energy storage, solid-electrolyte, ionic liquid, metal-organic-framework, electrochemistry, organic inorganic plastic crystal

Procedia PDF Downloads 64
302 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss

Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin

Abstract:

Urban rail transit (URT) plays a significant role in relieving traffic congestion and environmental problems in cities. However, equipment failure and obstruction of links often lead to loss of URT links' capacities in daily operation, seriously affecting the reliability and transport service quality of the URT network. In order to measure the influence of links' capacity loss on the reliability and transport service quality of a URT network, passengers are divided into three categories in case of links' capacity loss. Passengers in category 1 are little affected by the capacity loss: their travel is reliable since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected: their travel is not reliable since their travel quality is seriously reduced, although they can still travel on the URT. Passengers in category 3 cannot travel on the URT because the passenger flow on their travel paths exceeds the links' capacities, so their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of a URT network is related to passengers' travel time, their transfer times, and whether seats are available to them. The generalized travel cost is a comprehensive reflection of travel time, transfer times, and travel comfort. Therefore, passengers' average generalized travel cost is used as the transport service quality indicator of the URT network. The impact of links' capacity loss on transport service quality is measured with passengers' relative average generalized travel cost with and without links' capacity loss. The proportion of passengers affected by links and the betweenness of links are used to determine the important links in the URT network. 
The stochastic user equilibrium distribution model based on an improved logit model is used to determine passengers' categories and calculate passengers' generalized travel cost in case of links' capacity loss; it is solved with the method of successive weighted averages. The reliability and transport service quality indicators of the URT network are then calculated from the solution. Taking the Wuhan Metro as a case study, the reliability and transport service quality of the Wuhan metro network are measured with the indicators and method proposed in this paper. The results show that the proportion of passengers affected by links effectively identifies important links that strongly influence the reliability and transport service quality of the URT network; the important links are mostly connected to transfer stations, and their passenger flow is high. With an increasing number of failed links and a growing proportion of capacity loss, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion of passengers in category 2 increases at first and then decreases. Once the number of failed links and the proportion of capacity loss reach a certain level, the decline in transport service quality levels off.
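Given a passenger categorization and generalized travel costs, the two indicators defined above reduce to simple ratios. A minimal sketch with invented passenger data (not the Wuhan Metro assignment results):

```python
# Hedged sketch of the two network indicators described above, with invented
# data. Category 1: travel reliable; category 2: degraded but possible;
# category 3: cannot travel (path flow exceeds remaining capacity).

def network_indicators(passengers):
    """passengers: list of (category, cost_normal, cost_degraded) tuples.
    Returns (reliability, relative average generalized travel cost)."""
    total = len(passengers)
    n_cat1 = sum(1 for c, _, _ in passengers if c == 1)
    reliability = n_cat1 / total
    # Service quality: average generalized cost with capacity loss relative
    # to the average cost without it (only categories 1 and 2 still travel).
    travelling = [(cn, cd) for c, cn, cd in passengers if c in (1, 2)]
    avg_normal = sum(cn for cn, _ in travelling) / len(travelling)
    avg_degraded = sum(cd for _, cd in travelling) / len(travelling)
    return reliability, avg_degraded / avg_normal

# Four hypothetical passengers: (category, normal cost, degraded cost).
demo = [(1, 10.0, 10.5), (1, 12.0, 12.0), (2, 11.0, 16.5), (3, 9.0, 0.0)]
reliability, rel_cost = network_indicators(demo)
```

A relative cost above 1 indicates degraded service quality; a reliability of 0.5 here means only half of the passengers travel unaffected.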

Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links

Procedia PDF Downloads 114
301 Model Reference Adaptive Approach for Power System Stabilizer for Damping of Power Oscillations

Authors: Jožef Ritonja, Bojan Grčar, Boštjan Polajžer

Abstract:

In recent years, electricity trade between neighboring countries has become increasingly intense. Increasing power transmission over long distances has resulted in an increase in oscillations of the transmitted power. The oscillations could be damped by reconfiguring the network or replacing generators, but such solutions are not economically reasonable. The only cost-effective way to improve the damping of power oscillations is to use power system stabilizers. A power system stabilizer is part of the synchronous generator control system: it acts through the semiconductor excitation system connected to the rotor field winding to increase the damping of the power system. The majority of synchronous generators are equipped with conventional power system stabilizers with fixed parameters. The control structure of conventional power system stabilizers and their tuning procedure are based on linear control theory. Conventional power system stabilizers are simple to realize, but they do not provide sufficient damping improvement over the entire range of operating conditions. For this reason, advanced control theory is used to develop better power system stabilizers. In this paper, adaptive control theory for power system stabilizer design and synthesis is studied. The presented work focuses on the model reference adaptive control approach. The control signal, which ensures that the controlled plant output follows the reference model output, is generated by the adaptive algorithm. The adaptive gains are obtained as a combination of a "proportional" term and an "integral" term extended with a σ-term, which is introduced to avoid divergence of the integral gains. The necessary condition for asymptotic tracking is derived by means of hyperstability theory. 
The benefits of the proposed model reference adaptive power system stabilizer were evaluated as objectively as possible by means of theoretical analysis, numerical simulations, and laboratory experiments. Damping of the synchronous generator oscillations was investigated over the entire operating range. The obtained results show improved damping across the entire operating area and an increase in power system stability. The results of the presented work will support the development of a model reference adaptive power system stabilizer able to replace the conventional stabilizers in power systems.
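The adaptive-gain structure described above can be illustrated on a toy first-order plant: each gain is the sum of a "proportional" term and an "integral" term with σ-leakage, which keeps the integral gains bounded. The plant, reference model and tuning constants below are illustrative only, not the synchronous-generator model of the paper.

```python
# Hedged sketch: model reference adaptive control of a first-order plant
# with sigma-modified adaptive gains (proportional + leaky-integral terms).
# All dynamics and constants are invented for illustration.

dt, T = 1e-3, 30.0
a, b = 1.0, 1.0            # plant:           y'  = -a*y  + b*u
am = 2.0                   # reference model: ym' = -am*ym + am*r
gam_i, gam_p, sigma = 20.0, 2.0, 1e-3

y = ym = 0.0
th1_i = th2_i = 0.0        # integral parts of the adaptive gains
r = 1.0                    # constant reference input
errors = []
steps = int(T / dt)
for _ in range(steps):
    e = y - ym
    # Adaptive gains: proportional term plus sigma-modified integral term.
    th1 = th1_i - gam_p * e * r
    th2 = th2_i - gam_p * e * y
    u = th1 * r + th2 * y
    # Integral update with sigma-leakage to prevent gain divergence.
    th1_i += dt * (-gam_i * e * r - sigma * th1_i)
    th2_i += dt * (-gam_i * e * y - sigma * th2_i)
    # Forward-Euler integration of plant and reference model.
    y += dt * (-a * y + b * u)
    ym += dt * (-am * ym + am * r)
    errors.append(abs(e))

early = sum(errors[: steps // 10]) / (steps // 10)
late = sum(errors[-(steps // 10):]) / (steps // 10)
```

The tracking error shrinks as the gains adapt, while the σ-term bounds the integral gains even under persistent excitation deficits.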

Keywords: power system, stability, oscillations, power system stabilizer, model reference adaptive control

Procedia PDF Downloads 121
300 Component Test of Martensitic/Ferritic Steels and Nickel-Based Alloys and Their Welded Joints under Creep and Thermo-Mechanical Fatigue Loading

Authors: Daniel Osorio, Andreas Klenk, Stefan Weihe, Andreas Kopp, Frank Rödiger

Abstract:

Future power plants face high design requirements due to worsening climate change and environmental restrictions, which demand high operational flexibility, superior thermal performance, minimal emissions, and higher cyclic capability. The aim of this paper is therefore to experimentally investigate, under near-to-service operating conditions, the creep and thermo-mechanical behavior of improved materials and their welded joints at component scale, which are promising for application in highly efficient and flexible future power plants. These materials promise an increase in flexibility and a reduction in manufacturing costs by providing enhanced creep strength and, therefore, the possibility of wall thickness reduction. In the temperature range between 550°C and 625°C, the investigation focuses on the in-phase thermo-mechanical fatigue behavior of dissimilar welded joints of conventional materials (ferritic and martensitic materials T24 and T92) to nickel-based alloys (A617B and HR6W) by means of membrane test panels. The temperature and external load are varied in phase during the test, while the internal pressure remains constant. In the temperature range between 650°C and 750°C, the investigation focuses on the creep behavior under multiaxial stress loading of similar and dissimilar welded joints of high-temperature-resistant nickel-based alloys (A740H, A617B, and HR6W) by means of a thick-walled component test. In this case, the temperature, the external axial load, and the internal pressure remain constant during testing. Numerical simulations are used to estimate the axial component load in order to induce a meaningful damage evolution without causing total component failure. Metallographic investigations after testing will support understanding of the damage mechanism and of the influence of the thermo-mechanical load and multiaxiality on microstructure change and on creep and TMF strength.

Keywords: creep, creep-fatigue, component behaviour, weld joints, high temperature material behaviour, nickel-alloys, high temperature resistant steels

Procedia PDF Downloads 104
299 Structural Optimization, Design, and Fabrication of Dissolvable Microneedle Arrays

Authors: Choupani Andisheh, Temucin Elif Sevval, Bediz Bekir

Abstract:

Due to their various advantages over many other drug delivery systems, such as hypodermic injections and oral medications, microneedle arrays (MNAs) are a promising drug delivery system. To achieve enhanced MN performance, it is crucial to develop numerical models, optimization methods, and simulations. Accordingly, in this work, the optimized design of dissolvable MNAs, as well as their manufacturing, is investigated. For this purpose, a mechanical model of a single MN with an obelisk geometry is developed using commercial finite element software. The model considers the condition in which the MN is under pressure at the tip caused by the reaction force when penetrating the skin. Then, a multi-objective optimization based on the non-dominated sorting genetic algorithm II (NSGA-II) is performed to obtain geometrical properties such as needle width, tip (apex) angle, and base fillet radius. The objective of the optimization study is to achieve painless and effortless penetration of the skin while minimizing mechanical failure caused by the maximum stress occurring in the structure. Based on the obtained optimal design parameters, master (male) molds are then fabricated from PMMA using a mechanical micromachining process. This fabrication method is selected mainly for its geometric capability, production speed, production cost, and the variety of materials that can be used. To remove any chip residues, the master molds are cleaned ultrasonically. These fabricated master molds can then be used repeatedly to fabricate polydimethylsiloxane (PDMS) production (female) molds through a micro-molding approach. Finally, polyvinylpyrrolidone (PVP), a dissolvable polymer, is cast into the production molds under vacuum to produce the dissolvable MNAs. This fabrication methodology can also be used to fabricate MNAs that include bioactive cargo. 
To characterize and demonstrate the performance of the fabricated needles, (i) scanning electron microscope images are taken to show the accuracy of the fabricated geometries, and (ii) in-vitro piercing tests are performed on artificial skin. It is shown that optimized MN geometries can be precisely fabricated using the presented fabrication methodology and the fabricated MNAs effectively pierce the skin without failure.
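The core of NSGA-II is non-dominated sorting: partitioning candidate designs into Pareto fronts by dominance. The sketch below illustrates that step for two minimization objectives (for example, insertion force and maximum stress); it is a generic illustration, not the authors' actual objective functions or design variables.

```python
# Hedged sketch: the non-dominated sorting step at the core of NSGA-II,
# for minimization objectives. Objective vectors below are invented.

def dominates(p, q):
    """True if objective vector p Pareto-dominates q (all <=, at least one <)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    """Partition objective vectors into successive Pareto fronts of indices."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        # A point is in the current front if nothing remaining dominates it.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy objective vectors: (insertion force, max stress), both to be minimized.
designs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
fronts = non_dominated_sort(designs)
```

Here the first three designs are mutually non-dominated (front 0), while (3.0, 3.0) is dominated by (2.0, 2.0) and falls into front 1. NSGA-II then applies crowding-distance selection within fronts, which is omitted here for brevity.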

Keywords: microneedle, microneedle array fabrication, micro-manufacturing, structural optimization, finite element analysis

Procedia PDF Downloads 100