Search results for: thermal simulation
1414 Encoded Fiber Optic Sensors for Simultaneous Multipoint Sensing
Authors: C. Babu Rao, Pandian Chelliah
Abstract:
Owing to their reliability, a number of fluorescence-spectra-based fiber optic sensors have been developed for the detection and identification of hazardous chemicals such as explosives and narcotics. In high-security regions, such as airports, it is important to monitor multiple locations simultaneously, which calls for the deployment of a portable sensor at each location. However, the selectivity and sensitivity of these techniques depend on the spectral resolution of the spectral analyzer: the better the resolution, the larger the repertoire of chemicals that can be detected. A portable unit has limitations in meeting these requirements. Optical fibers can be employed to collect and transmit the spectral signal from the portable sensor head to a sensitive central spectral analyzer (CSA). For multipoint sensing, optical multiplexing of multiple sensor heads with the CSA has to be adopted. However, with multiplexing, when one sensor head is connected to the CSA, the rest remain unconnected for the turn-around period; the larger the number of sensor heads, the larger this turn-around time becomes. To circumvent this limitation, we propose in this paper an optical encoding methodology that allows multiple portable sensor heads to be connected to a single CSA. Each portable sensor head is assigned a unique address. Spectra of every chemical detected through a sensor head are encoded with its unique address and can be identified at the CSA end. The proposed methodology is demonstrated through a simulation using Matlab SIMULINK.
Keywords: optical encoding, fluorescence, multipoint sensing
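As a rough illustration of the address-encoding idea described above (the abstract does not specify the actual scheme), the following sketch assumes a simple one-time-slot-per-head binary address code; all names and values are placeholders, not the authors' method.

```python
# Illustrative sketch only: assumed time-slot (on/off) address code per sensor head.
import numpy as np

n_heads, n_slots, n_bins = 4, 4, 128          # sensor heads, time slots per frame, spectral bins
addresses = np.eye(n_heads, n_slots)          # assumed unique address: head k transmits in slot k

# Example fluorescence spectra detected at each head (synthetic Gaussian peaks)
bins = np.arange(n_bins)
spectra = np.array([np.exp(-0.5 * ((bins - 25 * (k + 1)) / 5.0) ** 2) for k in range(n_heads)])

# Encoding: each head's spectrum is modulated by its address pattern over the frame
frame = addresses.T @ spectra                 # what the central spectral analyzer receives

# Decoding at the CSA: correlate the received frame with each known address
decoded = addresses @ frame
print(np.allclose(decoded, spectra))          # True: every head's spectrum is recovered
```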
Procedia PDF Downloads 710
1413 Fermentation of Pretreated Herbaceous Cellulosic Wastes to Ethanol by Anaerobic Cellulolytic and Saccharolytic Thermophilic Clostridia
Authors: Lali Kutateladze, Tamar Urushadze, Tamar Dudauri, Besarion Metreveli, Nino Zakariashvili, Izolda Khokhashvili, Maya Jobava
Abstract:
Lignocellulosic waste streams from agriculture and the paper and wood industries are renewable, plentiful, and low-cost raw materials that can be used for large-scale production of liquid and gaseous biofuels. As opposed to the prevailing multi-stage biotechnological processes developed for bioconversion of cellulosic substrates to ethanol, in which high-cost cellulase preparations are used, Consolidated Bioprocessing (CBP) makes it possible to accomplish cellulose and xylan hydrolysis followed by fermentation of both C6 and C5 sugars to ethanol in a single-stage process. A syntrophic microbial consortium comprising anaerobic, thermophilic, cellulolytic, and saccharolytic bacteria of the genus Clostridium, with improved ethanol productivity and high tolerance to fermentation end-products, has been proposed for achieving CBP. Sixty-five new strains of anaerobic thermophilic cellulolytic and saccharolytic Clostridia were isolated from different wetlands and hot springs in Georgia. Using the new isolates, fermentation of mechanically pretreated wheat straw and corn stalks was carried out under an oxygen-free nitrogen environment in thermophilic conditions (T = 55 °C) and at pH 7.1. The process duration was 120 hours. Liquid and gaseous products of fermentation were analyzed on a daily basis using Perkin-Elmer gas chromatographs with flame ionization and thermal detectors. Residual cellulose, xylan, xylose, and glucose were determined using standard methods. The cellulolytic and saccharolytic bacterial strains degraded the mechanically pretreated herbaceous cellulosic wastes and fermented glucose and xylose to ethanol, acetic acid, and gaseous products such as hydrogen and CO2. Specifically, the maximum ethanol yield was reached at 96 h of fermentation and varied between 2.9 and 3.2 g per 10 g of substrate. The acetic acid content did not exceed 0.35 g/L. Other volatile fatty acids were detected in trace quantities.
Keywords: anaerobic bacteria, cellulosic wastes, Clostridia sp, ethanol
Procedia PDF Downloads 294
1412 Cupric Oxide Thin Films for Optoelectronic Application
Authors: Sanjay Kumar, Dinesh Pathak, Sudhir Saralch
Abstract:
Copper oxide is a semiconductor that has been studied for several reasons, such as the natural abundance of the starting material copper (Cu), the ease of production by Cu oxidation, its non-toxic nature, and its reasonably good electrical and optical properties. Copper oxide is well known in its cuprite form, a p-type semiconductor with a band gap energy of 1.21 to 1.51 eV. As a p-type semiconductor, conduction arises from the presence of holes in the valence band (VB) due to doping/annealing. CuO is attractive as a selective solar absorber since it has high solar absorptance and low thermal emittance. CuO is a very promising candidate for solar cell applications, as it is a suitable material for photovoltaic energy conversion. It has been demonstrated that the dip technique can be used to deposit CuO films in a simple manner using a metallic chloride (CuCl₂.2H₂O) as the starting material. Copper oxide films were prepared using a methanolic solution of cupric chloride (CuCl₂.2H₂O) at three baking temperatures. Three samples were made, which turn black after heating. XRD data confirm that the films are of CuO phases at a particular temperature. The optical band gap of the CuO films calculated from optical absorption measurements is 1.90 eV, which is quite comparable to the reported value. The dip technique is a very simple and low-cost method that requires no sophisticated specialized setup. Coating of a substrate with a large surface area can easily be obtained by this technique compared to physical evaporation techniques and spray pyrolysis. Another advantage of the dip technique is that it is very easy to coat both sides of the substrate instead of only one and to deposit on otherwise inaccessible surfaces. This method is well suited for applying coatings on the inner and outer surfaces of tubes of various diameters and shapes. The main advantage of the dip coating method lies in the fact that it is possible to deposit a variety of layers having good homogeneity and mechanical and chemical stability with a very simple setup. In this paper, the preparation of CuO thin films by the dip coating method and their characterization are presented.
Keywords: absorber material, cupric oxide, dip coating, thin film
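The band gap quoted above is typically extracted from absorption spectra via a Tauc analysis; the abstract does not state the exact procedure used, so the sketch below is only an assumed illustration on synthetic data of how (αhν)² versus hν is extrapolated to estimate a direct band gap.

```python
# Hedged illustration: Tauc-plot extraction of a direct band gap from absorption data.
# The spectrum and the direct-transition exponent (n = 2) are assumptions, not the paper's data.
import numpy as np

h_nu = np.linspace(1.2, 3.2, 200)                    # photon energy (eV)
Eg_true = 1.90
alpha = np.where(h_nu > Eg_true,                      # synthetic absorption coefficient
                 2e4 * np.sqrt(h_nu - Eg_true) / h_nu, 0.0)

tauc = (alpha * h_nu) ** 2                            # (alpha*h*nu)^2 for a direct allowed transition

# Fit the steep linear part of the Tauc plot and extrapolate to (alpha*h*nu)^2 = 0
mask = tauc > 0.2 * tauc.max()
slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
Eg_est = -intercept / slope
print(f"Estimated band gap: {Eg_est:.2f} eV")         # ~1.90 eV for this synthetic data
```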
Procedia PDF Downloads 309
1411 3D Steady and Transient Centrifugal Pump Flow within Ansys CFX and OpenFOAM
Authors: Clement Leroy, Guillaume Boitel
Abstract:
This paper presents a comparative benchmarking review of steady and transient three-dimensional (3D) flow computations in a centrifugal pump using commercial (Ansys CFX) and open-source (OpenFOAM) computational fluid dynamics (CFD) software. In a centrifugal rotor-dynamic pump, the fluid enters the impeller along the rotating axis and is accelerated in order to increase the pressure, flowing radially outward into another stage, a vaned diffuser or a volute casing, from where it finally exits into a downstream pipe. Simulations are carried out at the best efficiency point (BEP) and at part load, for single-phase flow with several turbulence models. The results are compared with the overall performance reported from experimental data. The use of CFD technology in industry is still limited by the high computational costs, and even more by the high cost of commercial CFD software and high-performance computing (HPC) licenses. The main objectives of the present study are to define an OpenFOAM methodology for high-quality 3D steady and transient turbomachinery CFD simulation and to conduct a thorough time-accurate performance analysis. In addition, a detailed comparison between the computational methods and features of the latest Ansys release 18 and OpenFOAM is carried out to assess the accuracy and industrial applicability of the two solvers. Finally, an automated connected (IoT) workflow for turbine blade applications is presented.
Keywords: benchmarking, CFX, internet of things, OpenFOAM, time-accurate, turbomachinery
Procedia PDF Downloads 205
1410 Swastika Shape Multiband Patch Antenna for Wireless Applications on Low Cost Substrate
Authors: Md. Samsuzzaman, M. T. Islam, J. S. Mandeep, N. Misran
Abstract:
In this article, a compact, simple-structure, modified Swastika-shaped multiband patch antenna on a substrate of readily available low-cost polymer resin composite material is designed for Wi-Fi and WiMAX applications. The substrate material consists of an epoxy matrix reinforced by woven glass. The designed microstrip-line-fed compact antenna comprises a planar wide square slot ground with four slits and a Swastika-shaped radiating patch with a rectangular slot. The effect of different substrate materials on the reflection coefficients of the proposed antennas was also analyzed. It can be clearly seen that the proposed antenna provides a wider bandwidth and an acceptable return loss value compared to other reported materials. The simulation results show that the antenna has an impedance bandwidth (-10 dB return loss) of 3.01-3.89 GHz and 4.88-6.10 GHz, which covers the WLAN, WiMAX, and public safety WLAN bands. The proposed Swastika-shaped antenna was designed and analyzed using the finite-element-method-based simulator HFSS on a low-cost FR4 (polymer resin composite) printed circuit board. The electrical performance and favourable frequency characteristics make the proposed antenna desirable for wireless communications.
Keywords: epoxy resin polymer, multiband, swastika shaped, wide slot, WLAN/WiMAX
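For reference, the quoted impedance bandwidths are read off the simulated reflection-coefficient curve where |S11| stays below -10 dB; the sketch below shows that extraction step on synthetic S11 data (the frequencies and values are placeholders, not the HFSS results).

```python
# Hedged sketch: extracting -10 dB impedance bandwidths from a table of S11 samples.
import numpy as np

freq_ghz = np.linspace(2.5, 6.5, 401)
# Synthetic |S11| in dB with two resonant dips (roughly mimicking a dual-band response)
s11_db = (-3.0
          - 15.0 * np.exp(-((freq_ghz - 3.45) / 0.35) ** 2)
          - 18.0 * np.exp(-((freq_ghz - 5.50) / 0.55) ** 2))

below = s11_db <= -10.0                              # samples inside a -10 dB band
edges = np.flatnonzero(np.diff(below.astype(int)))   # indices where bands start/stop
bands = freq_ghz[edges].reshape(-1, 2)               # pair up the band edges

for lo, hi in bands:
    print(f"-10 dB band: {lo:.2f}-{hi:.2f} GHz (BW = {hi - lo:.2f} GHz)")
```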
Procedia PDF Downloads 452
1409 Dust Particle Removal from Air in a Self-Priming Submerged Venturi Scrubber
Authors: Manisha Bal, Remya Chinnamma Jose, B.C. Meikap
Abstract:
Dust particles suspended in air are a major source of air pollution. A self-priming submerged venturi scrubber, which has proven very effective in handling nuclear power plant accidents, is an efficient device for removing dust particles from the air and thus aids pollution control. Venturi scrubbers are compact, have a simple mode of operation and no moving parts, are easy to install and maintain compared to other pollution control devices, and can handle high temperatures as well as corrosive and flammable gases and dust particles. In the present paper, fly ash, a highly polluting substance emitted mostly from thermal power plants, is considered as the dust particle. Exposure to it through skin contact, inhalation, and ingestion can lead to health risks and, in severe cases, even lung cancer. The main focus of this study is on the removal of fly ash particles from polluted air using a self-priming venturi scrubber in submerged conditions, with water as the scrubbing liquid. The venturi scrubber, comprising three sections (converging section, throat, and diverging section), is submerged inside a water tank. The liquid enters the throat due to the pressure difference composed of the hydrostatic pressure of the liquid and the static pressure of the gas. The high-velocity dust-laden gas atomizes the liquid into droplets at the throat, and this interaction leads to absorption of the fly ash into the water and thus its removal from the air. A detailed investigation of the scrubbing of fly ash has been carried out in this work. Experiments were conducted at different throat gas velocities, water levels, and fly ash inlet concentrations to study the fly ash removal efficiency. From the experimental results, the highest fly ash removal efficiency of 99.78% is achieved at a throat gas velocity of 58 m/s and a water level of 0.77 m, with a fly ash inlet concentration of 0.3 x 10⁻³ kg/Nm³ in the submerged condition. The effects of throat gas velocity, water level, and fly ash inlet concentration on the removal efficiency have also been evaluated. Furthermore, the experimental removal efficiencies are validated against the developed empirical model.
Keywords: dust particles, fly ash, pollution control, self-priming venturi scrubber
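The removal efficiency figures above follow from the simple inlet/outlet concentration balance; the sketch below restates that calculation, with the outlet concentration chosen as a placeholder so the result matches the reported 99.78%.

```python
# Hedged sketch: dust removal efficiency from inlet/outlet concentrations.
# The outlet concentration is a made-up placeholder, not measured data.
c_in = 0.3e-3                  # fly ash inlet concentration, kg/Nm^3 (from the abstract)
c_out = c_in * (1 - 0.9978)    # assumed outlet concentration

efficiency = (c_in - c_out) / c_in * 100.0
print(f"Removal efficiency: {efficiency:.2f} %")   # 99.78 %
```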
Procedia PDF Downloads 164
1408 Distraction from Pain: An fMRI Study on the Role of Age-Related Changes in Executive Functions
Authors: Katharina M. Rischer, Angelika Dierolf, Ana M. Gonzalez-Roldan, Pedro Montoya, Fernand Anton, Marian van der Meulen
Abstract:
Even though age has been associated with increased and prolonged episodes of pain, little is known about potential age-related changes in the 'top-down' modulation of pain, such as cognitive distraction from pain. The analgesic effects of distraction result from competition for attentional resources in the prefrontal cortex (PFC), a region that is also involved in executive functions. Given that the PFC shows pronounced age-related atrophy, distraction may be less effective in reducing pain in older compared to younger adults. The aim of this study was to investigate the influence of aging on task-related analgesia and the underpinning neural mechanisms, with a focus on the role of executive functions in distraction from pain. In a first session, 64 participants (32 young adults: 26.69 ± 4.14 years; 32 older adults: 68.28 ± 7.00 years) completed a battery of neuropsychological tests. In a second session, participants underwent a pain distraction paradigm while fMRI images were acquired. In this paradigm, participants completed a low (0-back) and a high (2-back) load condition of a working memory task while receiving either warm or painful thermal stimuli to their lower arm. To control for age-related differences in sensitivity to pain and perceived task difficulty, stimulus intensity and task speed were individually calibrated. Results indicate that both age groups showed significantly reduced activity in a network of regions involved in pain processing when completing the high-load distraction task; however, young adults showed a larger neural distraction effect in different parts of the insula and the thalamus. Moreover, better executive functions, in particular inhibitory control abilities, were associated with a larger behavioral and neural distraction effect. These findings clearly demonstrate that top-down control of pain is affected in older age and could explain older adults' higher vulnerability to developing chronic pain. Moreover, our findings suggest that the assessment of executive functions may be a useful tool for predicting the efficacy of cognitive pain modulation strategies in older adults.
Keywords: executive functions, cognitive pain modulation, fMRI, PFC
Procedia PDF Downloads 144
1407 Policy Recommendations for Reducing CO2 Emissions in Kenya's Electricity Generation, 2015-2030
Authors: Paul Kipchumba
Abstract:
Kenya is an East African country lying on the Equator. It had a population of 46 million in 2015 with an annual growth rate of 2.7%, implying a population of at least 65 million by 2030. Kenya's GDP in 2015 was about 63 billion USD, with a per capita GDP of about 1,400 USD. The rural population is 74%, whereas the urban population is 26%. Kenya grapples not only with access to energy but also with energy security. There is a direct correlation between economic growth, population growth, and energy consumption. Kenya's electricity generation is at least 74.5% renewable, with hydropower and geothermal forming the bulk of it; of overall energy use, 68% is from wood fuel, 22% from petroleum, 9% from electricity, and 1% from coal and other sources. Wood fuel is used by the majority of the rural and poor urban population. Electricity is mostly used for lighting. As of March 2015, Kenya had an installed electricity capacity of 2,295 MW, corresponding to a per capita capacity of 0.0499 kW. The overall retail cost of electricity in 2015 was 0.009915 USD/kWh (KES 19.85/kWh) for installed capacity over 10 MW. The actual demand for electricity in 2015 was 3,400 MW, and the projected demand in 2030 is 18,000 MW. Kenya is working on Vision 2030, which aims at making it a prosperous middle-income economy and targets 23 GW of generated electricity. However, cost and non-cost factors affect the generation and consumption of electricity in Kenya. Kenya currently prioritizes economic growth over reducing CO2 emissions. Carbon emissions are likely to be paid for through future carbon costs and penalties imposed on local generating companies for disregarding international law on CO2 emissions and climate change. The study methodology was a simulated application of a carbon tax on all carbon-emitting sources of electricity generation. A tax of only USD 30/tCO2 on all emitting sources of electricity generation would be sufficient to make solar the sole source of electricity generation in Kenya. The country has excellent, evenly distributed global horizontal irradiation. The solar potential, after accounting for technology efficiencies of 14-16% for solar PV and 15-22% for solar thermal, is 143.94 GW. Therefore, the paper recommends the adoption of solar power for generating all electricity in Kenya in order to attain zero-carbon electricity generation in the country.
Keywords: CO2 emissions, cost factors, electricity generation, non-cost factors
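The simulated carbon tax works by adding the tax times each technology's emission factor to its generation cost and re-ranking the options; the sketch below illustrates that mechanism with made-up cost and emission-factor numbers (only the USD 30/tCO2 level comes from the abstract).

```python
# Hedged sketch: how a carbon tax shifts the cost ranking of generation options.
# All cost and emission-factor values are illustrative placeholders, not the study's data.
generation_options = {
    #                LCOE (USD/kWh)   emission factor (tCO2/kWh)
    "coal":         (0.075,           0.90e-3),
    "diesel/HFO":   (0.180,           0.65e-3),
    "geothermal":   (0.070,           0.0),
    "hydro":        (0.060,           0.0),
    "solar PV":     (0.085,           0.0),
}
carbon_tax = 30.0   # USD per tonne of CO2 (from the abstract)

for name, (lcoe, ef) in generation_options.items():
    taxed = lcoe + carbon_tax * ef
    print(f"{name:12s}  untaxed: {lcoe:.3f}  with tax: {taxed:.3f} USD/kWh")
```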
Procedia PDF Downloads 365
1406 Development of Solid Electrolytes Based on Networked Cellulose
Authors: Boor Singh Lalia, Yarjan Abdul Samad, Raed Hashaikeh
Abstract:
Three different kinds of solid polymer electrolytes were prepared using polyethylene oxide (PEO) as the base polymer, networked cellulose (NC) as a physical support, and LiClO4 as the conductive salt for the electrolytes. Networked cellulose, a modified form of cellulose, is a biodegradable and environmentally friendly additive which provides a strong fibrous networked support for the structural stability of the electrolytes. Although the PEO/NC/LiClO4 electrolyte retains its structural integrity and mechanical properties at 100 °C, as compared to pristine PEO-based polymer electrolytes, it suffers from poor ionic conductivity. To improve the room-temperature conductivity of the electrolyte, PEO is replaced by polyethylene glycol (PEG), a liquid phase that provides high mobility for Li+ ion transport in the electrolyte. PEG/NC/LiClO4 shows improved ionic conductivity compared to PEO/NC/LiClO4 at room temperature, but it is brittle and tends to form cracks during processing. An advanced solid polymer electrolyte with optimum ionic conductivity and mechanical properties is developed by using a ternary system: TEGDME/PEO/NC+LiClO4. At room temperature, this electrolyte exhibits an ionic conductivity of the order of 10⁻⁵ S/cm, which is very high compared to that of the PEO/LiClO4 electrolyte. Pristine PEO electrolytes start melting at 65 °C and completely lose their mechanical strength. Dynamic mechanical analysis of TEGDME:PEO:NC (70:20:10 wt%) showed an improvement in storage modulus compared to pristine PEO in the 60-120 °C temperature range. Also, with the addition of NC, the electrolyte retains its mechanical integrity at 100 °C, which is beneficial for Li-ion battery operation at high temperatures. Differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) studies revealed that the ternary polymer electrolyte is thermally stable in the lithium-ion battery operational temperature range. The as-prepared polymer electrolyte was used to assemble LiFePO4/TEGDME/PEO/NC+LiClO4/Li half-cells, and their electrochemical performance was studied via cyclic voltammetry and charge-discharge cycling.
Keywords: solid polymer electrolyte, ionic conductivity, mechanical properties, lithium ion batteries, cyclic voltammetry
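Ionic conductivities like the 10⁻⁵ S/cm value above are commonly obtained from the measured bulk resistance and the cell geometry via σ = t / (R·A); the sketch below shows that conversion with invented placeholder dimensions (the abstract does not report them).

```python
# Hedged sketch: ionic conductivity from an impedance measurement, sigma = t / (R_b * A).
# Thickness, area, and bulk resistance are invented placeholders; only the order of
# magnitude (1e-5 S/cm) is taken from the abstract.
thickness_cm = 0.02             # electrolyte film thickness (cm), assumed
area_cm2 = 1.0                  # electrode contact area (cm^2), assumed
bulk_resistance_ohm = 2.0e3     # bulk resistance from the impedance spectrum (ohm), assumed

sigma = thickness_cm / (bulk_resistance_ohm * area_cm2)
print(f"Ionic conductivity: {sigma:.1e} S/cm")   # 1.0e-05 S/cm for these placeholders
```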
Procedia PDF Downloads 429
1405 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights
Authors: Nelson Bii, Christopher Ouma, John Odhiambo
Abstract:
Non-response is a potential source of error in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one technique for reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, with full auxiliary information assumed to be available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response. In particular, the auxiliary information is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error values compared to existing estimators of the finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths
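To make the mechanism concrete, the sketch below applies the standard (unimproved) Nadaraya-Watson kernel regression estimator to impute randomly missing survey values from an auxiliary variable before estimating the mean; the simulated data, kernel, and bandwidth are assumptions, and the paper's improved weighting is not reproduced here.

```python
# Hedged sketch: standard Gaussian-kernel Nadaraya-Watson imputation for random non-response.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 10, n)                    # auxiliary variable, fully observed
y = 2.0 + 0.8 * x + rng.normal(0, 1.0, n)    # survey variable
observed = rng.random(n) > 0.3               # ~30% random non-response in y

def nw_estimate(x0, x_obs, y_obs, h):
    """Gaussian-kernel Nadaraya-Watson estimate of E[y | x = x0]."""
    w = np.exp(-0.5 * ((x0[:, None] - x_obs[None, :]) / h) ** 2)
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)

# Impute the non-respondents, then estimate the finite population mean
y_filled = y.copy()
y_filled[~observed] = nw_estimate(x[~observed], x[observed], y[observed], h=0.5)
print(f"Estimated population mean: {y_filled.mean():.3f}  (complete-data mean: {y.mean():.3f})")
```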
Procedia PDF Downloads 139
1404 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder
Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen
Abstract:
Including data from previous studies (historical data) in the analysis of a current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial. However, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single control arm as well as for multiple historical control arms. Here, we examine the performance of the MAP and the MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and the negative binomial models. We conducted an extensive simulation study to assess the performance of the Bayesian approaches. Additionally, we illustrate our approaches on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to the statistical power. When the means across the control arms are different, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters are different, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
Keywords: count data, meta-analytic prior, negative binomial, Poisson
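As a simplified relative of the power-prior idea, the sketch below shows a fixed-a0 power prior for Poisson counts with a conjugate Gamma prior, where the historical likelihood is down-weighted by a0 before the current data are added; the data and a0 are invented, and this is not the modified power prior implementation studied in the paper.

```python
# Hedged sketch: fixed-a0 power prior for Poisson count data with a conjugate Gamma prior.
import numpy as np

# Historical and current control-arm counts (e.g., incontinence episodes) - invented data
y_hist = np.array([4, 6, 3, 5, 7, 4, 5])
y_curr = np.array([5, 4, 6, 5, 3, 6])
a0 = 0.5                      # power parameter: weight given to the historical data
alpha0, beta0 = 0.001, 0.001  # vague Gamma(alpha, beta) initial prior on the Poisson rate

# Power prior: raise the historical likelihood to a0, i.e. scale its sufficient statistics
alpha_pp = alpha0 + a0 * y_hist.sum()
beta_pp = beta0 + a0 * len(y_hist)

# Update with the current data to get the posterior Gamma(alpha_post, beta_post)
alpha_post = alpha_pp + y_curr.sum()
beta_post = beta_pp + len(y_curr)
print(f"Posterior mean rate: {alpha_post / beta_post:.3f} "
      f"(current-data-only mean: {y_curr.mean():.3f})")
```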
Procedia PDF Downloads 117
1403 Bioeconomic Modelling for Barramundi (Lates calcarifer) in Queensland: Implications for Recreational Fishing Following Recent Gill Netting Closures
Authors: Sabiha S. Marine, Nicole Flint, John Rolfe
Abstract:
The Queensland state government introduced commercial gill-net fishing closures in Cairns, Mackay, and Rockhampton in November 2015 to increase recreational fishing opportunities, nature-based tourism, and economic benefits in these three regional areas. This management change is likely to improve the potential for more desirable stock structures through natural recruitment. Barramundi (Lates calcarifer) is a popular target fish for recreational and commercial fishers in Northern Australia. This investigation examines the effects of reduced commercial fishing from both biological and economic perspectives, particularly on the local Barramundi population of the Fitzroy River in Rockhampton, the largest river catchment flowing to the eastern coast of Australia. Data on biological and economic parameters have been collated from secondary sources and analysed through a system simulation approach to identify the effectiveness of the commercial netting closures for recreational fishing effort, especially for the Barramundi population. The results have the potential to explain certain consequences of the netting closures in Queensland, which could serve to inform future fisheries management decisions. The study output as a whole will help in the better management of fisheries resources by evaluating recreational fishing opportunities in Queensland, where the potential for increases in recreation is high.
Keywords: Barramundi, bioeconomic model, fishery management, recreational fishing
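A system-simulation view of such a closure can be illustrated with a very simple Schaefer surplus-production model in which the commercial harvest rate is switched off from 2016; the sketch below uses invented growth, capacity, and harvest-rate values and is not the study's bioeconomic model.

```python
# Hedged sketch: Schaefer surplus-production model with a gill-net closure from late 2015.
import numpy as np

r, K = 0.4, 10_000.0          # intrinsic growth rate and carrying capacity (assumed)
years = np.arange(2010, 2031)
biomass = np.empty(len(years))
biomass[0] = 4_000.0          # assumed initial stock biomass (tonnes)

for i, year in enumerate(years[:-1]):
    commercial_h = 0.12 if year < 2016 else 0.0   # commercial harvest rate removed by closure
    recreational_h = 0.05                          # assumed recreational harvest rate
    b = biomass[i]
    surplus = r * b * (1 - b / K)
    catch = (commercial_h + recreational_h) * b
    biomass[i + 1] = max(b + surplus - catch, 0.0)

for year, b in zip(years[::5], biomass[::5]):
    print(f"{year}: biomass ~ {b:,.0f} t")
```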
Procedia PDF Downloads 166
1402 A Micro-Scale of Electromechanical System Micro-Sensor Resonator Based on UNO-Microcontroller for Low Magnetic Field Detection
Authors: Waddah Abdelbagi Talha, Mohammed Abdullah Elmaleeh, John Ojur Dennis
Abstract:
This paper focuses on the simulation and implementation of a resonator micro-sensor for low magnetic field sensing based on a U-shaped cantilever in a piezoresistive configuration, which works on the basis of the Lorentz force. The resonance frequency is an important parameter, as it corresponds to the highest response and sensitivity in the frequency response of any vibrating micro-electromechanical system (MEMS) device, and it is important for determining the direction of the detected magnetic field. The deflection of the cantilever is examined for vibration at different frequencies in the range 0 Hz to 7000 Hz in order to observe the frequency response. A simple electronic circuit based on polysilicon piezoresistors in a Wheatstone bridge configuration is used to transduce the response of the cantilever into electrical measurements at various voltages. An Arduino microcontroller program and the PROTEUS electronics software are used to analyze the output signals from the sensor. The highest output voltage amplitude, of about 4.7 mV, is observed at about 3 kHz in the frequency domain, indicating the highest sensitivity, which can be called the resonant sensitivity. Based on the resonant frequency value, the mode of vibration is determined (up-down vibration), and based on that, the vector of the magnetic field is also determined.
Keywords: resonant frequency, sensitivity, Wheatstone bridge, UNO-microcontroller
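The sketch below illustrates the same measurement idea: sweep the drive frequency of a cantilever modelled as a second-order resonator, convert the deflection into a quarter-bridge Wheatstone output, and locate the resonance as the frequency of maximum output. The resonance frequency, quality factor, gauge response, and bridge voltage are assumed placeholders, not the paper's design values.

```python
# Hedged sketch: frequency sweep of a second-order resonator read out by a Wheatstone bridge.
import numpy as np

f = np.linspace(1.0, 7000.0, 7000)        # drive frequency sweep (Hz)
f0, Q = 3000.0, 50.0                      # assumed resonance frequency and quality factor
amplitude = 1.0 / np.sqrt((1 - (f / f0) ** 2) ** 2 + (f / (f0 * Q)) ** 2)

# Piezoresistive Wheatstone bridge: delta-R/R proportional to deflection amplitude
v_supply = 5.0                            # bridge excitation voltage (V), assumed
dr_over_r = 2e-5 * amplitude              # assumed gauge response
v_out = v_supply * dr_over_r / 4.0        # small-signal output of a quarter bridge

i_res = np.argmax(v_out)
print(f"Resonance near {f[i_res]:.0f} Hz, peak bridge output {v_out[i_res]*1e3:.2f} mV")
```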
Procedia PDF Downloads 127
1401 Methylglyoxal Induced Glycoxidation of Human Low Density Lipoprotein: A Biophysical Perspective and Its Role in Diabetes and Periodontitis
Authors: Minhal Abidi, Moinuddin
Abstract:
Diabetes mellitus (DM)-induced metabolic abnormalities cause oxidative stress, which leads to the pathogenesis of complications associated with diabetes such as retinopathy, nephropathy, and periodontitis. The combination of glycation and oxidation, 'glycoxidation', occurs when oxidative reactions affect the early-state glycation products. Low-density lipoprotein (LDL) is prone to glycoxidative attack by sugars, and methylglyoxal (MGO), being a strong glycating agent, may have a severe impact on its structure and a consequent role in diabetes. Pro-inflammatory cytokines such as IL1β and TNFα, produced by the action of gram-negative bacteria in periodontitis (PD), can in turn lead to insulin resistance. This work discusses modifications to LDL as a result of glycoxidation. The changes in the protein molecule have been characterized by various physicochemical techniques, and the immunogenicity of the modified molecules was also evaluated, as they presented neo-epitopes. The binding of antibodies present in diabetes patients to the native and glycated LDL has been evaluated. The role of modified epitopes in the generation of antibodies in diabetes and periodontitis has been discussed. The structural perturbations induced in LDL were analyzed by UV-Vis, fluorescence, circular dichroism, and FTIR spectroscopy, molecular docking studies, thermal denaturation studies, the Thioflavin T assay, isothermal titration calorimetry, and the comet assay. MALDI-TOF analysis was carried out, and ketoamine moieties, carbonyl content, and HMF content were also quantitated in native and glycated LDL. IL1β and TNFα levels were also measured in the type 2 DM and PD patients. We report increased carbonyl content, ketoamine moieties, and HMF content in glycated LDL as compared to the native analogue. The results substantiate that, in the hyperglycemic state, MGO modification of LDL causes structural perturbations that make the protein antigenic, which could obstruct normal physiological functions and might contribute to the development of secondary complications, such as periodontitis, in diabetic patients.
Keywords: advanced glycation end products, diabetes mellitus, glycation, glycoxidation, low density lipoprotein, periodontitis
Procedia PDF Downloads 191
1400 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions
Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal
Abstract:
We present in this work our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows emission inventories to be generated from a reduced set of input parameters adapted to the existing conditions in Morocco and in other developing countries. While several simplifications are made, the performance of the model results is preserved. A further important advantage of the model is that it allows the uncertainty of the emission rates to be calculated with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented, and tested against a reference solution. It provides an improvement in accuracy over previous line-source Gaussian plume formulas without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors will be canceled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, the use of a combination of discretized sources and the analytical line source formula remarkably minimizes the error. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport
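The discretized-source approach mentioned above amounts to replacing the road by a row of Gaussian-plume point sources and summing their contributions at a receptor; the sketch below illustrates that construction with assumed emission rates, wind speed, and crude dispersion coefficients, and is not the DISPOLSPEM formulation itself.

```python
# Hedged sketch: a road (line source) approximated by discretized Gaussian point sources.
import numpy as np

def gaussian_point(q, u, x, y, z, h, sy, sz):
    """Ground-reflected Gaussian plume concentration from one point source (g/m^3)."""
    if x <= 0:                      # no concentration upwind of the source
        return 0.0
    lateral = np.exp(-0.5 * (y / sy) ** 2)
    vertical = np.exp(-0.5 * ((z - h) / sz) ** 2) + np.exp(-0.5 * ((z + h) / sz) ** 2)
    return q / (2 * np.pi * u * sy * sz) * lateral * vertical

# Road segment along the y-axis, discretized into point sources every 10 m
road_y = np.arange(-500.0, 500.0, 10.0)
q_per_source = 0.05           # g/s per 10 m segment (assumed traffic emission rate)
u = 3.0                       # wind speed (m/s), wind blowing along +x
receptor = (100.0, 0.0, 1.5)  # 100 m downwind, at 1.5 m breathing height

conc = 0.0
for ys in road_y:
    x_rel, y_rel = receptor[0], receptor[1] - ys
    sy, sz = 0.08 * x_rel, 0.06 * x_rel      # crude, assumed dispersion coefficients
    conc += gaussian_point(q_per_source, u, x_rel, y_rel, receptor[2], 0.5, sy, sz)

print(f"Receptor concentration ~ {conc*1e6:.1f} ug/m^3")
```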
Procedia PDF Downloads 442
1399 Investigating the Shear Behaviour of Fouled Ballast Using Discrete Element Modelling
Authors: Ngoc Trung Ngo, Buddhima Indraratna, Cholachat Rujikiathmakjornr
Abstract:
For several hundred years, the design of railway tracks has remained practically unchanged. Traditionally, rail tracks are placed on a ballast layer for several reasons, including economy, rapid drainage, and high load-bearing capacity. The primary function of ballast is to distribute dynamic track loads to the sub-ballast and subgrade layers, while also providing lateral resistance and allowing rapid drainage. Under repeated train loads, the ballast becomes fouled due to ballast degradation and the intrusion of fines, which adversely affects the strength and deformation behaviour of the ballast. This paper presents the use of the three-dimensional discrete element method (DEM) to study the shear behaviour of fouled ballast subjected to direct shear loading. Irregularly shaped ballast particles were modelled by grouping many spherical balls together in appropriate sizes to simulate representative ballast aggregates. Fouled ballast was modelled by injecting a specified number of miniature spherical particles into the void spaces. The DEM simulation highlights that the peak shear stress of the ballast assembly decreases and the dilation of fouled ballast increases with an increasing level of fouling. Additionally, the distributions of contact force chains and particle displacement vectors were captured as shearing progressed, explaining the formation of the shear band and the evolution of the volumetric change of the fouled ballast.
Keywords: railway ballast, coal fouling, discrete element modelling, discrete element method
Procedia PDF Downloads 451
1398 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches
Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.
Abstract:
A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test for us, excites the actuators based on feedback from the sensors in a periodic manner. The sensors should provide the feedback to the System Under Test (SUT) within a deterministic time after excitation of the actuators. Any delay or miss in the generation of the response or the acquisition of excitation pulses may lead to control-loop computation errors, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available on the market may be the best solutions for such simulations, but they pose limitations regarding the availability of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general-purpose operating system (bare Linux kernel) to achieve deterministic deadlines and hence gain the added advantages of a GPOS with real-time features. Techniques are discussed for making the time-critical application run uninterrupted at the highest priority, reducing network latency for a distributed architecture, and handling real-time data acquisition, data storage and retrieval, user interaction, etc.
Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency
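One common way to run a time-critical loop at the highest priority on a stock Linux kernel is to give the process a real-time FIFO scheduling class and pin it to a dedicated core; the Linux-only sketch below (requires root or CAP_SYS_NICE) illustrates that strategy together with a simple periodic loop that logs its worst wake-up jitter. It is an assumption-based illustration, not the test bed described in the paper.

```python
# Hedged sketch: real-time FIFO priority and CPU affinity for a periodic control loop (Linux).
import os
import time

def make_realtime(priority=80, cpu=1):
    os.sched_setaffinity(0, {cpu})                       # pin this process to one core
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))

def control_loop(period_s=0.001, cycles=1000):
    """Periodic hard-deadline loop; logs the worst observed wake-up jitter."""
    next_deadline = time.monotonic()
    worst_jitter = 0.0
    for _ in range(cycles):
        next_deadline += period_s
        # ... excite actuators / read sensors here ...
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        worst_jitter = max(worst_jitter, time.monotonic() - next_deadline)
    print(f"Worst wake-up jitter: {worst_jitter*1e6:.0f} us")

if __name__ == "__main__":
    make_realtime()        # needs root/CAP_SYS_NICE; raises PermissionError otherwise
    control_loop()
```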
Procedia PDF Downloads 147
1397 Numerical Investigation of the Bio-fouling Roughness Effect on Tidal Turbine
Authors: O. Afshar
Abstract:
Unlike other renewable energy sources, tidal current energy is an extremely reliable, predictable, and continuous energy source, as the current pattern and speed can be predicted throughout the year. A key concern associated with tidal turbines is their long-term reliability when operating in the hostile marine environment. Bio-fouling changes the physical shape and roughness of turbine components, hence altering the overall turbine performance. This paper seeks to employ the Computational Fluid Dynamics (CFD) method to quantify the effects of this problem based on the obtained flow field information. The simulation is carried out on a NACA 63-618 aerofoil. The Reynolds-Averaged Navier-Stokes (RANS) equations with the Shear Stress Transport (SST) turbulence model are used to simulate the flow around the model. Different levels of fouling are studied on the 2D aerofoil surface with quantified fouling height and density. In terms of the lift and drag coefficients, the numerical results show good agreement with the experiment, which was carried out in a wind tunnel. The numerical results indicate that an increase in fouling thickness causes an increase in the drag coefficient and a reduction in the lift coefficient. Moreover, the pressure gradient gradually becomes adverse as the fouling height increases. In addition, the turbulent kinetic energy contours reveal that it increases with fouling height and extends into the wake due to flow separation.
Keywords: tidal energy, lift coefficient, drag coefficient, roughness
Procedia PDF Downloads 382
1396 Evaluation of Deformation for Deep Excavations in the Greater Vancouver Area Through Case Studies
Authors: Boris Kolev, Matt Kokan, Mohammad Deriszadeh, Farshid Bateni
Abstract:
Due to the increasing demand for real estate and the need for efficient land utilization in Greater Vancouver, developers have increasingly been considering the construction of high-rise structures with multiple below-grade parking levels. The temporary excavations required to allow for the construction of underground levels have recently reached up to 40 meters in depth. One of the challenges with deep excavations is the prediction of wall displacements and ground settlements, due to their effect on the integrity of city utilities, infrastructure, and adjacent buildings. A large database of survey monitoring data has been collected for deep excavations in various soil conditions and shoring systems. The majority of the data collected is for tie-back anchor and shotcrete lagging systems. The data were categorized and analyzed, and the results were evaluated to find a relationship between the most dominant parameters controlling displacement, such as excavation depth, soil properties, and tie-back anchor loading and arrangement. For a select number of deep excavations, finite element modeling was used for the analyses. The lateral displacements from the simulation results were compared to the recorded survey monitoring data. The study concludes with a discussion and comparison of the available empirical and numerical modeling methodologies for evaluating lateral displacements in deep excavations.
Keywords: deep excavations, lateral displacements, numerical modeling, shoring walls, tieback anchors
Procedia PDF Downloads 181
1395 Analysis of Nonlinear Dynamic Systems Excited by Combined Colored and White Noise Excitations
Authors: Siu-Siu Guo, Qingxuan Shi
Abstract:
In this paper, single-degree-of-freedom (SDOF) systems subjected to combined white noise and colored noise excitations are investigated. By expressing the colored noise excitation as a second-order filtered white noise process and introducing the colored noise as an additional state variable, the equation of motion for the SDOF system under colored noise is artificially transformed into that of a multi-degree-of-freedom (MDOF) system under white noise excitations. As a consequence, the corresponding Fokker-Planck-Kolmogorov (FPK) equation governing the joint probability density function (PDF) of the state variables becomes four-dimensional (4-D). The solution procedure and computer programme become much more sophisticated. The exponential-polynomial closure (EPC) method, widely applied to SDOF systems under white noise excitations, is developed and improved for systems under colored noise excitations and for solving the complex 4-D FPK equation. In addition, the Monte Carlo simulation (MCS) method is performed to test the approximate EPC solutions. Two examples associated with Gaussian and non-Gaussian colored noise excitations are considered. The corresponding band-limited power spectral densities (PSDs) of the colored noise excitations are given separately. Numerical studies show that the developed EPC method provides relatively accurate estimates of the stationary probabilistic solutions. Moreover, the mean up-crossing rate (MCR), a statistical parameter important for reliability and failure analysis, is taken into account.
Keywords: filtered noise, narrow-banded noise, nonlinear dynamic, random vibration
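The augmented-state idea described above can also be checked directly by Monte Carlo: simulate the SDOF oscillator together with a second-order filter that turns white noise into the colored excitation, and collect statistics of the response. The sketch below does this with a plain Euler-Maruyama scheme and illustrative parameter values, not the systems analysed in the paper.

```python
# Hedged sketch: Monte Carlo (Euler-Maruyama) simulation of an SDOF oscillator driven by
# second-order filtered white (i.e. colored) noise, using the augmented 4-D state.
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, n_paths = 1e-3, 20_000, 200

# SDOF oscillator: x'' + 2*zeta*wn*x' + wn^2*x = u(t), u = colored noise
wn, zeta = 2 * np.pi * 1.0, 0.05
# Colored-noise filter: u'' + 2*zf*wf*u' + wf^2*u = S*w(t), w = Gaussian white noise
wf, zf, S = 2 * np.pi * 0.8, 0.3, 5.0

state = np.zeros((n_paths, 4))    # augmented state: [x, x_dot, u, u_dot]
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    xp, vp, up, dup = (state[:, i].copy() for i in range(4))
    state[:, 0] = xp + vp * dt
    state[:, 1] = vp + (-2 * zeta * wn * vp - wn ** 2 * xp + up) * dt
    state[:, 2] = up + dup * dt
    state[:, 3] = dup + (-2 * zf * wf * dup - wf ** 2 * up) * dt + S * dW

print(f"Stationary displacement std ~ {state[:, 0].std():.4f}")
```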
Procedia PDF Downloads 225
1394 GaN Nanowire-Based Sensor Array for the Detection of Cross-Sensitive Gases Using Principal Component Analysis
Authors: Ashfaque Hossain Khan, Brian Thomson, Ratan Debnath, Abhishek Motayed, Mulpuri V. Rao
Abstract:
Despite previous efforts, the problem of cross-sensitivity for a single metal-oxide-based sensor cannot be fully eliminated. In this work, a sensor array comprising platinum (Pt)-, copper (Cu)-, and silver (Ag)-decorated TiO2- and ZnO-functionalized GaN nanowires has been designed and fabricated using an industry-standard top-down fabrication approach. The metal/metal-oxide combinations within the array were determined from a prior molecular simulation study using first-principles calculations based on density functional theory (DFT). The gas responses were obtained for both single gases and mixtures of NO2, SO2, ethanol, and H2 in the presence of H2O and O2 under UV light at room temperature. Each gas leaves a unique response footprint across the array sensors, by which precise discrimination of cross-sensitive gases has been achieved. An unsupervised principal component analysis (PCA) technique has been applied to the array response. The results indicate that each of the target gases and their mixtures forms a distinct cluster in the score plot, indicating a clear separation among them. In addition, the developed array device consumes very low power because of ultraviolet (UV)-assisted sensing, as compared to commercially available metal-oxide sensors. The nanowire sensor array, in combination with PCA, is a potential approach for precise real-time gas monitoring applications.
Keywords: cross-sensitivity, gas sensor, principal component analysis (PCA), sensor array
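The PCA step itself can be summarized as projecting each multi-element array response onto a few principal components and inspecting the clusters in the score plot; the sketch below does this on a synthetic response matrix (the real study uses measured responses from the fabricated array).

```python
# Hedged sketch: PCA score-plot clustering of synthetic multi-sensor array responses.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
gases = ["NO2", "SO2", "ethanol", "H2"]
# Assumed mean response of 6 array elements to each gas (arbitrary units)
signatures = rng.uniform(0.2, 2.0, size=(len(gases), 6))

# 20 repeated exposures per gas with measurement noise
X = np.vstack([sig + rng.normal(0, 0.05, size=(20, 6)) for sig in signatures])
labels = np.repeat(gases, 20)

scores = PCA(n_components=2).fit_transform(X)
for gas in gases:
    centroid = scores[labels == gas].mean(axis=0)
    print(f"{gas:8s} cluster centre in PC space: ({centroid[0]:+.2f}, {centroid[1]:+.2f})")
```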
Procedia PDF Downloads 107
1393 Influential Health Care System Rankings Can Conceal Maximal Inequities: A Simulation Study
Authors: Samuel Reisman
Abstract:
Background: Comparative rankings are increasingly used to evaluate health care systems. These rankings combine discrete attribute rankings into a composite overall ranking. Health care equity is a component of overall rankings, but excelling in other categories can counterbalance poor equity grades. Highly ranked inequitable health care would commend systems that disregard human rights. We simulated the ranking of a maximally inequitable health care system using a published, influential ranking methodology. Methods: We used The Commonwealth Fund's ranking of eleven health care systems to simulate the rank of a maximally inequitable system. Eighty performance indicators were simulated, assuming maximal ineptitude in equity benchmarks. Maximal rankings in all non-equity subcategories were assumed. Subsequent stepwise simulations lowered all non-equity rank positions by one. Results: The maximally inequitable health care system ranked first overall. Three subsequent stepwise simulations, lowering non-equity rankings by one, each resulted in an overall ranking within the top three. Discussion: Our results demonstrate that grossly inequitable health care systems can rank highly in comparative health care system rankings. These findings challenge the validity of ranking methodologies that subsume equity under broader benchmarks. We advocate limiting the maximum overall ranking of a health care system to its individual equity ranking. Such limits are logical given the insignificance of health care system improvements to those lacking adequate health care.
Keywords: global health, health equity, healthcare systems, international health
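The basic mechanism can be reproduced with a toy composite ranking in which domain ranks are simply averaged; the sketch below gives a maximally inequitable system the best rank in every non-equity domain and the worst rank on equity, using invented domains and random competitors rather than the Commonwealth Fund data.

```python
# Hedged sketch: a toy composite-ranking simulation where averaging domain ranks lets a
# last-on-equity system still finish near the top overall. All numbers are invented.
import numpy as np

n_systems = 11
domains = ["care process", "access", "efficiency", "equity", "outcomes"]

# Simulated system: best possible rank (1) in every non-equity domain, worst (11) in equity
simulated = {d: (11 if d == "equity" else 1) for d in domains}

# Other systems get random ranks 2..11 in each domain (ties ignored for simplicity)
rng = np.random.default_rng(7)
others = rng.integers(2, 12, size=(n_systems - 1, len(domains)))

all_ranks = np.vstack([list(simulated.values()), others])
composite = all_ranks.mean(axis=1)                   # unweighted average of domain ranks
overall_rank = composite.argsort().argsort()[0] + 1  # overall position of the simulated system
print(f"Maximally inequitable system ranks {overall_rank} of {n_systems} overall")
```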
Procedia PDF Downloads 400
1392 Flame Volume Prediction and Validation for Lean Blowout of Gas Turbine Combustor
Authors: Ejaz Ahmed, Huang Yong
Abstract:
The operation of aero engines is of critical importance in the vicinity of lean blowout (LBO) limits. Lefebvre's empirical-correlation-based model of LBO has been extended by the authors to a flame volume concept. The flame volume takes into account the effects of the geometric configuration and the complex spatial interaction of mixing, turbulence, heat transfer, and combustion processes inside the gas turbine combustion chamber. For these reasons, flame-volume-based LBO predictions are more accurate. Although the LBO prediction accuracy has improved, the approach poses a challenge associated with Vf estimation in real gas turbine combustors. This work extends the flame volume prediction approach, previously based on fuel iterative approximation with cold-flow simulations, to reactive flow simulations. The flame volume for 11 combustor configurations has been simulated and validated against experimental data. To make the prediction methodology robust, as required at the preliminary design stage, reactive flow simulations were carried out with a combination of the probability density function (PDF) and discrete phase model (DPM) approaches in FLUENT 15.0. The criterion for flame identification was defined. Two important parameters, the critical injection diameter (Dp,crit) and the critical temperature (Tcrit), were identified, and their influence on the reactive flow simulation was studied for Vf estimation. The obtained results exhibit a ±15% error in Vf estimation relative to the experimental data.
Keywords: CFD, combustion, gas turbine combustor, lean blowout
Procedia PDF Downloads 267
1391 A Systematic Approach to Mitigate the Impact of Increased Temperature and Air Pollution in Urban Settings
Authors: Samain Sabrin, Joshua Pratt, Joshua Bryk, Maryam Karimi
Abstract:
Globally, extreme heat events have led to a surge in the number of heat-related mortalities. These incidents are further exacerbated in high-density population centers due to the Urban Heat Island (UHI) effect. A variety of anthropogenic activities, such as unsupervised land surface modification, expansion of impervious areas, and lack of vegetation, all contribute to an increase in the amount of heat flux trapped by the urban canopy, which intensifies the UHI effect. This project aims to propose a systematic approach to measure the impact of air quality and increased temperature based on urban morphology in selected metropolitan cities. This project will measure the impact of the built environment for urban and regional planning using human-biometeorological evaluations (mean radiant temperature, Tmrt). We utilized the RayMan model (capable of calculating the short- and long-wave radiation fluxes affecting the human body) to estimate Tmrt in an urban environment, incorporating the location and height of buildings and trees, as a supplemental tool in urban planning and street design. Our current results suggest a strong correlation between building height and increased surface temperature in megacities. This model will help to: 1. quantify the impacts of the built environment and surface properties on the surrounding temperature; 2. identify priority urban neighborhoods by analyzing Tmrt and air quality data at the pedestrian level; 3. characterize the need for urban green infrastructure or better urban planning, maximizing the cooling benefit from existing Urban Green Infrastructure (UGI); and 4. develop a hierarchy of streets for new UGI integration and propose new UGI based on site characteristics and cooling potential.
Keywords: air quality, heat mitigation, human-biometeorological indices, increased temperature, mean radiant temperature, radiation flux, sustainable development, thermal comfort, urban canopy, urban planning
Procedia PDF Downloads 141
1390 An Efficient Propensity Score Method for Causal Analysis With Application to Case-Control Study in Breast Cancer Research
Authors: Ms Azam Najafkouchak, David Todem, Dorothy Pathak, Pramod Pathak, Joseph Gardiner
Abstract:
Propensity score (PS) methods have recently become a standard tool for causal inference in observational studies, where exposure is not randomly assigned and confounding can therefore impact the estimation of the treatment effect on the outcome. For a binary outcome, the effect of treatment on the outcome can be estimated by odds ratios, relative risks, and risk differences. However, different PS methods may give different estimates of the treatment effect on the outcome. Several PS analysis methods have mainly been used, including matching, inverse probability weighting, stratification, and covariate adjustment on the PS. Due to the dangers of discretizing continuous variables (exposure, covariates), the focus of this paper is on how variation in cut-points or boundaries affects the average treatment effect (ATE) when utilizing the PS stratification method. Therefore, we try to avoid choosing arbitrary cut-points; instead, we continuously discretize the PS and accumulate information across all cut-points for inference. We use Monte Carlo simulation to evaluate the ATE, focusing on two PS methods: stratification and covariate adjustment on the PS. We then show how this can be observed in analyses of data from a case-control study of breast cancer, the Polish Women's Health Study.
Keywords: average treatment effect, propensity score, stratification, covariate adjustment, Monte Carlo estimation, breast cancer, case-control study
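For orientation, the conventional version of the stratification approach bins subjects into propensity-score quintiles and averages the within-stratum effect estimates; the sketch below does this on simulated data (the quintile cut-points and all simulation settings are conventional defaults, not the cut-point-free method this paper develops).

```python
# Hedged sketch: ATE by conventional propensity-score quintile stratification on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))                               # confounders
p_treat = 1 / (1 + np.exp(-(0.6 * x[:, 0] - 0.4 * x[:, 1])))
t = rng.binomial(1, p_treat)                              # treatment assignment
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.5 * t + 0.8 * x[:, 0]))))  # binary outcome

ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]  # estimated propensity score

# Stratify on PS quintiles and average the within-stratum risk differences
edges = np.quantile(ps, [0.2, 0.4, 0.6, 0.8])
stratum = np.digitize(ps, edges)
ate = 0.0
for s in range(5):
    m = stratum == s
    ate += (y[m & (t == 1)].mean() - y[m & (t == 0)].mean()) * m.mean()
print(f"Stratified ATE (risk difference): {ate:.3f}")
```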
Procedia PDF Downloads 105
1389 Experimental Assessment of a Grid-Forming Inverter in Microgrid Islanding Operation Mode
Authors: Dalia Salem, Detlef Schulz
Abstract:
As Germany pursues its ambitious plan towards a power system based on renewable energy sources, the necessity of establishing steady, robust microgrids becomes more evident. Inside the microgrid, there is at least one grid-forming inverter responsible for generating the coupling voltage and stabilizing the system frequency within the standardized accepted limits when the microgrid is forced to operate as a stand-alone power system. Grid-forming control for distributed inverters is required to enable steady control of a low-inertia power system. In this paper, a designed droop control technique is tested on the controller of an inverter, as a component of a hardware test bed, to understand the microgrid behavior in two modes of operation: i) grid-connected and ii) islanded. This droop technique includes several inner current and voltage control loops, where the Q-V and P-f droops provide the required terminal output voltage and frequency. The technique is tested first in a simulation model of the inverter in MATLAB/SIMULINK, and the results are compared to the results of the hardware laboratory test. The results of this experiment illuminate the pivotal role of the grid-forming inverter in facilitating microgrid resilience during grid disconnection events and show how microgrids could provide the functionality formerly provided by synchronous machinery, such as the black start process.
Keywords: microgrid, grid-forming inverters, droop-control, islanding-operation
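The P-f and Q-V droop laws referred to above have the simple form f = f0 - kp(P - P0) and V = V0 - kq(Q - Q0); the sketch below encodes them with generic textbook set-points and gains (not the gains of the tested controller) to show how the inverter derives its frequency and voltage references from the measured power.

```python
# Hedged sketch: basic P-f and Q-V droop laws of a grid-forming inverter.
def droop_references(p_kw, q_kvar,
                     f0=50.0, v0=230.0,        # nominal frequency (Hz) and phase voltage (V)
                     p0_kw=0.0, q0_kvar=0.0,   # power set-points
                     kp=0.01, kq=0.05):        # droop gains: Hz/kW and V/kvar (assumed)
    """Return (frequency, voltage) references from the measured active/reactive power."""
    f_ref = f0 - kp * (p_kw - p0_kw)           # P-f droop
    v_ref = v0 - kq * (q_kvar - q0_kvar)       # Q-V droop
    return f_ref, v_ref

# Example: the islanded inverter picks up 20 kW and 5 kvar of local load
print(droop_references(20.0, 5.0))             # -> (49.8, 229.75)
```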
Procedia PDF Downloads 70
1388 Characterization of Ethanol-Air Combustion in a Constant Volume Combustion Bomb Under Cellularity Conditions
Authors: M. Reyes, R. Sastre, P. Gabana, F. V. Tinaut
Abstract:
In this work, an optical characterization of ethanol-air laminar combustion is presented in order to investigate the origin of the instabilities developed during combustion, the onset of the cellular structure, and the laminar burning velocity. Experimental tests of ethanol-air mixtures have been carried out in an optical cylindrical constant-volume combustion bomb equipped with a Schlieren setup to record the flame development and the wrinkling of the flame front surface. With this procedure, it is possible to obtain the flame radius and to characterize the time at which the instabilities become visible through the appearance of cells and the development of the cellular structure. Ethanol is an aliphatic alcohol with interesting characteristics for use as a fuel in internal combustion engines, and it can be biologically synthesized from biomass. The laminar burning velocity is an important parameter used in simulations to obtain the turbulent flame speed, whereas the flame front structure and the instabilities developed during combustion are important for understanding the transition to turbulent combustion and for characterizing the increase in flame propagation speed in premixed flames. The cellular structure is spontaneously generated by volume forces and by diffusional-thermal and hydrodynamic instabilities. Many authors have studied the combustion of ethanol-air and of ethanol blended with other fuels. However, there is a lack of work investigating the instabilities and the development of a cellular structure in ethanol flames; only a few works have characterized ethanol-air combustion instabilities in spherical flames. In the present work, a parametric study is made by varying the fuel/air equivalence ratio (0.8-1.4), the initial pressure (0.15-0.3 MPa), and the initial temperature (343-373 K), using an I-optimal design of experiments. In rich mixtures, it is possible to distinguish the cellular structure formed by the hydrodynamic effect from that formed by the thermo-diffusive effect. Results show that ethanol-air flames tend to stabilize as the equivalence ratio decreases in lean mixtures, and that they develop a cellular structure with increasing initial pressure and temperature.
Keywords: ethanol, instabilities, premixed combustion, schlieren technique, cellularity
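From the Schlieren recordings, the stretched flame speed is the time derivative of the flame radius, the stretch rate follows as (2/r)(dr/dt), and a rough laminar burning velocity is obtained by dividing by the unburned-to-burned density ratio; the sketch below runs that chain on an invented radius history with an assumed density ratio.

```python
# Hedged sketch: flame speed, stretch rate, and rough laminar burning velocity from r(t).
# The radius samples and the density ratio are invented placeholders, not measured data.
import numpy as np

t = np.linspace(0.0, 0.010, 21)                 # time (s)
radius = 0.005 + 2.0 * t + 40.0 * t ** 2        # synthetic flame radius history (m)

flame_speed = np.gradient(radius, t)            # stretched flame propagation speed Sb (m/s)
stretch = 2.0 * flame_speed / radius            # flame stretch rate, kappa = (2/r) dr/dt

sigma = 7.5                                     # assumed unburned/burned density ratio
burning_velocity = flame_speed / sigma          # S_L ~ Sb / sigma (ignoring stretch correction)

i = len(t) // 2
print(f"At r = {radius[i]*1e3:.1f} mm: Sb = {flame_speed[i]:.2f} m/s, "
      f"kappa = {stretch[i]:.0f} 1/s, S_L ~ {burning_velocity[i]:.3f} m/s")
```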
Procedia PDF Downloads 66
1387 Study on the Stability of Large Space Expandable Parabolic Cylindrical Antenna
Authors: Chuanzhi Chen, Wenjing Yu
Abstract:
The parabolic cylindrical deployable antenna has the characteristics of a wide cutting width, strong directivity, high gain, and easy automatic beam scanning. However, due to its large size, high flexibility, and strong coupling, the deployment process of the parabolic cylindrical deployable antenna presents such problems as unsynchronized deployment speed, large local deformation, and discontinuous switching of the deployment state. A large deployable parabolic cylindrical antenna is taken as the research object, and the instability of its unfolding process, caused by multiple factors such as multiple closed loops, elastic deformation, motion friction, and gap collision, is studied in this paper. Firstly, a multi-flexible-body system dynamics model of the large-scale parabolic cylindrical antenna is established to study the influence of friction and elastic deformation on the stability of the large multi-closed-loop antenna. Secondly, an evaluation method for antenna deployment stability is studied, and a quantitative index for antenna configuration design is proposed to provide a theoretical basis for improving the overall performance of the antenna. Finally, through simulation analysis and experiment, the deployment dynamics and stability of large-scale parabolic cylindrical antennas are verified by in-depth analysis, and the principles for improving the stability of antenna deployment are summarized.
Keywords: multibody dynamics, expandable parabolic cylindrical antenna, stability, flexible deformation
Procedia PDF Downloads 146
1386 Experimental Study of the Efficacy and Emission Properties of a Compression Ignition Engine Running on Fuel Additives with Varying Engine Loads
Authors: Faisal Mahroogi, Mahmoud Bady, Yaser H. Alahmadi, Ahmed Alsisi, Sunny Narayan, Muhammad Usman Kaisan
Abstract:
The Kingdom of Saudi Arabia established Saudi Vision 2030, a government initiative with the goal of promoting greater socioeconomic and cultural diversity. The kingdom, which is dedicated to sustainable development and clean energy, uses cutting-edge approaches to address energy-related issues, including the circular carbon economy (CCE) and a more varied energy mix. Sustainability is essential for Saudi Arabia to achieve its Vision 2030 goal of a net-zero future by 2060. By addressing the energy and climate issues of the modern world with responsibility and innovation, Vision 2030 is turning into a global role model for the transition to a sustainable future. As per the ambitions of the National Environment Strategy of the Saudi Ministry of Environment, Agriculture, and Water (MEWA), raising environmental compliance across all sectors and reducing pollution and adverse environmental impacts are critical focus areas. Accordingly, the current study presents an experimental analysis of the performance and exhaust emissions of a diesel engine running on blends containing waste cooking oil (WCO). The engine used is a single-cylinder, naturally aspirated, constant-speed, direct-injection diesel engine. The engine performance and emission parameters were investigated when it was fueled with a blend of 70% diesel, 10% butanol, 10% WCO, and 10% diethyl ether (D70B10W10DD10). The study's findings demonstrate that the engine's emissions of nitrogen oxides (NOx) and carbon monoxide (CO) varied significantly depending on the applied load. The brake thermal efficiency, cylinder pressure, and brake power of the engine were all affected by the change in load.
Keywords: ICE, waste cooking oil, fuel additives, butanol, combustion, emission characteristics
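The brake thermal efficiency figures discussed above come from the standard relation BTE = brake power / (fuel mass flow × lower heating value); the sketch below shows that calculation with assumed operating-point numbers and an assumed heating value for the D70B10W10DD10 blend.

```python
# Hedged sketch: brake thermal efficiency and brake-specific fuel consumption.
# All operating-point values and the blend heating value are assumed placeholders.
brake_power_kw = 3.5            # measured brake power at a given load (kW), assumed
fuel_flow_kg_per_h = 1.1        # measured fuel consumption (kg/h), assumed
lhv_blend_mj_per_kg = 40.5      # assumed lower heating value of the D70B10W10DD10 blend

fuel_power_kw = fuel_flow_kg_per_h / 3600.0 * lhv_blend_mj_per_kg * 1000.0
bte = brake_power_kw / fuel_power_kw * 100.0
bsfc = fuel_flow_kg_per_h * 1000.0 / brake_power_kw     # brake-specific fuel consumption, g/kWh
print(f"BTE = {bte:.1f} %, BSFC = {bsfc:.0f} g/kWh")
```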
Procedia PDF Downloads 62
1385 Analysis of Cascade Control Structure in Train Dynamic Braking System
Authors: B. Moaveni, S. Morovati
Abstract:
In recent years, the increasing use of railway transportation, especially in developing countries, has drawn more attention to the control systems of railway vehicles. Consequently, designing and implementing modern control systems to improve the operating performance of trains and locomotives has become one of the main concerns of researchers. The dynamic braking system is an important safety system that controls the amount of braking torque generated by the traction motors, in order to keep the adhesion coefficient between the wheel-sets and the rail within an optimum bound. The adhesion force plays an important role in controlling the braking distance and preventing the wheels from slipping during the braking process. The cascade control structure is one of the best control methods for a wide range of industrial plants in the presence of disturbances and errors. This paper presents a cascade control structure based on two simple forward controllers with two feedback loops to control the slip ratio and the braking torque. In this structure, the inner loop controls the angular velocity, and the outer loop controls the longitudinal velocity of the locomotive, whose dynamics are slower than those of the angular velocity. By controlling the torque of the DC traction motors, this control structure tries to track the desired velocity profile, achieve the predefined braking distance, and control the slip ratio. Simulation results are used to show the effectiveness of the introduced methodology in the dynamic braking system.
Keywords: cascade control, dynamic braking system, DC traction motors, slip control
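The cascade structure can be summarized as two nested loops: a slow outer loop that tracks the longitudinal-velocity profile and converts its correction into a wheel angular-velocity reference (through the target slip ratio and wheel radius), and a fast inner loop that tracks that reference by commanding the traction-motor braking torque. The sketch below encodes this structure with assumed gains, wheel radius, and target slip; it is an illustration of the idea, not the controllers designed in the paper.

```python
# Hedged sketch: nested-loop (cascade) braking controller, outer velocity loop feeding an
# inner angular-velocity loop that commands the traction-motor braking torque.
class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0
    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

WHEEL_RADIUS = 0.5          # m, assumed
TARGET_SLIP = 0.1           # desired slip ratio during braking, assumed
dt = 1e-3
outer = PI(kp=0.8, ki=0.2, dt=dt)    # longitudinal-velocity loop (slow)
inner = PI(kp=400.0, ki=50.0, dt=dt) # angular-velocity loop (fast)

def braking_torque(v_ref, v_meas, omega_meas):
    """One controller update: returns the braking torque command (N*m, negative = braking)."""
    v_cmd = v_meas + outer.step(v_ref - v_meas)              # corrected vehicle-speed target
    omega_ref = (1.0 - TARGET_SLIP) * v_cmd / WHEEL_RADIUS   # wheel speed giving the target slip
    return inner.step(omega_ref - omega_meas)

print(braking_torque(v_ref=18.0, v_meas=20.0, omega_meas=40.0))
```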
Procedia PDF Downloads 365