Search results for: code blue simulation module
762 Analysis of Rockfall Hazard along Himalayan Road Cut Slopes
Authors: Sarada Prasad Pradhan, Vikram Vishal, Tariq Siddique
Abstract:
With a vast area of India comprising hilly terrain and road cut slopes, landslides and rockfalls are a common phenomenon. However, while landslide studies have received much attention in India in the past, very little literature and analysis are available on the rockfall hazard of many rockfall-prone areas, specifically in the Uttarakhand Himalaya, India. The resulting lack of knowledge and understanding of the rockfall phenomenon, together with frequent rockfall-led fatalities, underscores the necessity of conducting site-specific rockfall studies, both to highlight the importance of addressing this issue and to provide data for the safe design of preventive structures. The present study was conducted across 10 rockfall-prone road cut slopes over a distance of 15 km starting from Devprayag, India, along National Highway 58 (NH-58). In order to make a qualitative assessment of the rockfall hazard posed by these slopes, Rockfall Hazard Rating using standards for Indian rockmass was conducted at 10 locations under different slope conditions. Moreover, to accurately predict the characteristics of possible rockfall events, numerical simulation was carried out to calculate the maximum bounce heights, total kinetic energies, translational velocities, and trajectories of the falling rockmass blocks when simulated on each of these slopes under real-life conditions. As it was observed that varying slope geometry had a greater impact on rockfall hazard than the size of the rock masses, several optimizations have been suggested for each slope regarding the location of barriers and the modification of slope geometries in order to minimize damage by falling rocks. This study can be extremely useful in emphasizing the significance of rockfall studies and the construction of mitigative barriers and structures along NH-58 around Devprayag.
Keywords: rockfall, slope stability, rockmass, hazard
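The bounce heights such simulations report can be illustrated with a toy lumped-mass model: a block strikes the slope repeatedly and loses energy at each impact through a normal restitution coefficient. This is only a minimal sketch of the idea, not the study's simulation; the function name, the default coefficient, and the lossless-flight assumption between bounces are ours.

```python
import math

def bounce_heights(h0, rn=0.35, n=5, g=9.81):
    """Successive rebound heights of a rock dropped from height h0 onto a
    bench, with normal restitution coefficient rn. Flight between bounces is
    assumed lossless, so heights decay geometrically as h0 * rn**(2k)."""
    heights = []
    v = math.sqrt(2 * g * h0)   # impact speed just before the first bounce
    for _ in range(n):
        v *= rn                 # impact: normal velocity scaled by restitution
        heights.append(v**2 / (2 * g))  # rebound height from post-impact speed
    return heights
```

With rn = 0.5 and a 10 m drop, the rebound heights shrink by a factor of rn² = 0.25 per bounce, which is why barrier placement close to the source intercepts the highest trajectories.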
Procedia PDF Downloads 207
761 Influence of Infinite Elements in Vibration Analysis of High-Speed Railway Track
Authors: Janaki Rama Raju Patchamatla, Emani Pavan Kumar
Abstract:
The idea of increasing existing train speeds and introducing high-speed trains in India as a part of Vision-2020 is challenging in terms of both economic viability and technical feasibility. More than economic viability, technical feasibility has to be thoroughly checked for safe operation and execution. Trains moving at high speeds need a well-established, firm and safe track thoroughly tested against vibration effects. With increased train speeds, the track structure and layered soil-structure interaction have to be critically assessed for vibrations and displacements. Physical establishment of a track, testing, and experimentation is a costly and time-consuming process. Software-based modelling and simulation give a relatively reliable, cost-effective means of testing the effects of critical parameters like sleeper design and density, properties of the track and sub-grade, etc. The present paper reports the applicability of infinite elements in reducing unrealistic stress-wave reflections from the soil-structure interface. The influence of the infinite elements is quantified in terms of the displacement time histories of the adjoining soil and the deformation pattern in general. In addition, the railhead response histories at various locations show that the numerical model is realistic, without any aberrations at the boundaries. The numerical model is quite promising in its ability to simulate the critical parameters of track design.
Keywords: high-speed railway track, finite element method, infinite elements, vibration analysis, soil-structure interface
Procedia PDF Downloads 270
760 Compact Dual-band 4-MIMO Antenna Elements for 5G Mobile Applications
Authors: Fayad Ghawbar
Abstract:
The significance of the Multiple Input Multiple Output (MIMO) system in 5G wireless communication is essential to enhance channel capacity and provide a high data rate, resulting in a need for dual polarization in the vertical and horizontal planes. Furthermore, size reduction is critical in a MIMO system to deploy more antenna elements, requiring a compact, low-profile design. A compact dual-band 4-MIMO antenna system with pattern and polarization diversity is presented in this paper. The proposed single antenna structure was designed using two antenna layers, with a C shape in the front layer and a partial slot with a U-shaped cut in the ground to enhance isolation. The single antenna is printed on an FR4 dielectric substrate with an overall size of 18 mm × 18 mm × 1.6 mm. The 4-MIMO antenna elements were printed orthogonally on an FR4 substrate with dimensions of 36 × 36 × 1.6 mm³ and zero edge-to-edge separation distance. The proposed compact 4-MIMO antenna elements resonate at 3.4-3.6 GHz and 4.8-5 GHz. The S-parameter measurement and simulation results agree, especially in the lower band, with a slight frequency shift of the measured results in the upper band due to fabrication imperfection. The proposed design shows isolation above -15 dB and -22 dB across the 4-MIMO elements. The MIMO diversity performance has been evaluated in terms of efficiency, ECC, DG, TARC, and CCL. The total and radiation efficiencies were above 50% in both frequency bands. The ECC values were lower than 0.10, and the DG results were about 9.95 dB for all antenna elements. TARC results exhibited values lower than 0 dB, with values lower than -25 dB in all MIMO elements in the dual bands. Moreover, the channel capacity losses in the MIMO system were depicted using CCL, with values lower than 0.4 bits/s/Hz.
Keywords: compact antennas, MIMO antenna system, 5G communication, dual band, ECC, DG, TARC
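The ECC and DG figures quoted above are commonly derived from the measured S-parameters. A hedged sketch of the standard two-port formulas follows (the lossless-antenna approximation is assumed, and the function names are ours, not from the paper):

```python
import math

def ecc_from_sparams(s11, s21, s12, s22):
    """Envelope correlation coefficient of a two-port MIMO antenna computed
    from complex S-parameters (valid under the lossless-antenna assumption)."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = (1 - abs(s11)**2 - abs(s21)**2) * (1 - abs(s22)**2 - abs(s12)**2)
    return num / den

def diversity_gain(ecc):
    """Apparent diversity gain in dB from the ECC: 10 * sqrt(1 - ECC^2)."""
    return 10.0 * math.sqrt(1.0 - ecc**2)
```

Note how an ECC of 0.10 maps to a diversity gain of about 9.95 dB, matching the figures reported in the abstract.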
Procedia PDF Downloads 141
759 Assessment of Climate Change Impact on Meteorological Droughts
Authors: Alireza Nikbakht Shahbazi
Abstract:
There are various factors that affect climate change; drought is one of them. Efficient methods for estimating climate change impacts on drought should therefore be investigated. The aim of this paper is to investigate climate change impacts on drought in the Karoon3 watershed, located in south-western Iran, in future periods. Atmospheric general circulation model (GCM) data under Intergovernmental Panel on Climate Change (IPCC) scenarios were used for this purpose. In this study, watershed drought under climate change impacts is simulated for future periods (2011 to 2099). The standardized precipitation index (SPI) was selected as the drought index and calculated using mean monthly precipitation data in the Karoon3 watershed. SPI was calculated for 6-, 12- and 24-month periods. Statistical analysis of daily precipitation and minimum and maximum daily temperature was performed. LARS-WG5 was used to determine the feasibility of producing meteorological data for future periods. Model calibration and verification were performed for the base period (1980-2007). Meteorological data simulation for future periods under the general circulation models and IPCC climate change scenarios was performed, and then the drought status using SPI under climate change effects was analyzed. Results showed that differences between monthly maximum and minimum temperature will decrease under climate change, and spring precipitation will increase while summer and autumn rainfall will decrease. Precipitation occurs mainly between January and May in future periods, and the decline in summer or autumn precipitation leads to short-term drought in the study region. The normal and wet SPI categories are more frequent in the B1 and A2 emission scenarios than in A1B.
Keywords: climate change impact, drought severity, drought frequency, Karoon3 watershed
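The k-month SPI computation can be illustrated with a simplified sketch: precipitation is aggregated over a moving window and then standardized. The operational SPI fits a gamma distribution to the aggregated totals before transforming to a standard normal; the plain z-score shortcut below, and the function name, are our simplification for illustration only.

```python
import statistics

def spi_normal_approx(precip, window=6):
    """Simplified SPI: aggregate monthly precipitation over `window` months,
    then standardize the rolling sums. (The real SPI fits a gamma
    distribution before the normal transform; z-scores shown here.)"""
    sums = [sum(precip[i:i + window]) for i in range(len(precip) - window + 1)]
    mu, sd = statistics.fmean(sums), statistics.stdev(sums)
    return [(s - mu) / sd for s in sums]
```

Values below about -1 would then be classified as moderate drought, mirroring how the 6-, 12- and 24-month SPI series in the study flag short- and long-term drought.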
Procedia PDF Downloads 239
758 Multiscale Hub: An Open-Source Framework for Practical Atomistic-To-Continuum Coupling
Authors: Masoud Safdari, Jacob Fish
Abstract:
Despite the vast amount of existing theoretical knowledge, the implementation of a universal multiscale modeling, analysis, and simulation software framework remains challenging. Existing multiscale software and solutions are often domain-specific and closed-source, and demand a high level of experience and skill in both multiscale analysis and programming. Furthermore, tools currently available for Atomistic-to-Continuum (AtC) multiscaling are developed with assumptions such as the accessibility of high-performance computing facilities to the users. These and many other challenges have reduced the adoption of multiscale methods in academia and especially in industry. In the current work, we introduce Multiscale Hub (MsHub), an effort towards making AtC more accessible through cloud services. As a joint effort between academia and industry, MsHub provides a universal web-enabled framework for practical multiscaling. Developed on top of the widely adopted scientific programming language Python, the package currently provides an open-source, comprehensive, easy-to-use framework for AtC coupling. MsHub offers an easy-to-use interface to prominent molecular dynamics and multiphysics continuum mechanics packages such as LAMMPS and MFEM (a free, lightweight, scalable C++ library for finite element methods). In this work, we first report on the design philosophy of MsHub and the challenges and issues faced in its implementation. MsHub takes advantage of a comprehensive set of tools and algorithms developed for AtC that can be used for a variety of governing physics. We then briefly report key AtC algorithms implemented in MsHub. Finally, we conclude with a few examples illustrating the capabilities of the package and its future directions.
Keywords: atomistic, continuum, coupling, multiscale
Procedia PDF Downloads 175
757 Seepage Analysis through Earth Dam Embankment: Case Study of Batu Dam
Authors: Larifah Mohd Sidik, Anuar Kasa
Abstract:
In recent years, demand for raw water has been increasing along with the growth of the economy and population. Hence, the construction and operation of dams is one of the solutions to water resources management problems. The stability of the embankment should be taken into consideration to evaluate the safety of retaining water. The safety of a dam is mostly based on numerous measurable components, for instance, seepage flow rate, pore water pressure, and deformation of the embankment. Seepage and slope stability are the primary and most important indicators for ascertaining the overall safety behavior of dams. This research study was conducted to evaluate the static-condition seepage and slope stability performance of Batu dam, which is located in the capital city of Kuala Lumpur. The numerical software GeoStudio 2012 was employed to analyse seepage using the finite element method (SEEP/W) and slope stability using the limit equilibrium method (SLOPE/W) for different cases of reservoir level operation, including normal and flooded conditions. Results of the seepage analysis using SEEP/W were used as input for the SLOPE/W analysis. A sensitivity analysis on the hydraulic conductivity of the material was performed and calibrated to minimize the relative error of the SEEP/W simulation, and a comparison between observed field data and predicted values was also carried out. In the seepage analysis, quantities such as the leakage flow rate, pore water distribution, and location of the phreatic line are determined using SEEP/W. The results of the seepage analysis show that the clay core effectively lowers the phreatic surface, and no piping failure is indicated. Hence, the total seepage flux was acceptable and within the permissible limit.
Keywords: earth dam, dam safety, seepage, slope stability, pore water pressure
Procedia PDF Downloads 218
756 Bias-Corrected Estimation Methods for Receiver Operating Characteristic Surface
Authors: Khanh To Duc, Monica Chiogna, Gianfranco Adimari
Abstract:
With three diagnostic categories, assessment of the performance of diagnostic tests is achieved by the analysis of the receiver operating characteristic (ROC) surface, which generalizes the ROC curve for binary diagnostic outcomes. The volume under the ROC surface (VUS) is a summary index usually employed for measuring the overall diagnostic accuracy. When the true disease status can be exactly assessed by means of a gold standard (GS) test, unbiased nonparametric estimators of the ROC surface and VUS are easily obtained. In practice, unfortunately, disease status verification via the GS test could be unavailable for all study subjects, due to the expensiveness or invasiveness of the GS test. Thus, often only a subset of patients undergoes disease verification. Statistical evaluations of diagnostic accuracy based only on data from subjects with verified disease status are typically biased. This bias is known as verification bias. Here, we consider the problem of correcting for verification bias when continuous diagnostic tests for three-class disease status are considered. We assume that selection for disease verification does not depend on disease status, given test results and other observed covariates, i.e., we assume that the true disease status, when missing, is missing at random. Under this assumption, we discuss several solutions for ROC surface analysis based on imputation and re-weighting methods. In particular, verification bias-corrected estimators of the ROC surface and of VUS are proposed, namely, full imputation, mean score imputation, inverse probability weighting and semiparametric efficient estimators. Consistency and asymptotic normality of the proposed estimators are established, and their finite sample behavior is investigated by means of Monte Carlo simulation studies. Two illustrations using real datasets are also given.
Keywords: imputation, missing at random, inverse probability weighting, ROC surface analysis
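Under full gold-standard verification, the unbiased nonparametric VUS estimator mentioned above has a simple form: it is the proportion of correctly ordered measurement triples, one drawn from each diagnostic class. A minimal sketch (the function name is ours):

```python
from itertools import product

def vus(class1, class2, class3):
    """Nonparametric VUS estimate with fully verified disease status: the
    fraction of triples (x, y, z), one test value per class, with x < y < z."""
    triples = list(product(class1, class2, class3))
    concordant = sum(1 for x, y, z in triples if x < y < z)
    return concordant / len(triples)
```

A VUS of 1/6 corresponds to a useless test (random ordering of three classes), while 1.0 indicates perfect three-class separation; it is estimators of this quantity that the imputation and re-weighting corrections target when verification is incomplete.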
Procedia PDF Downloads 414
755 Determination of Optimum Conditions for the Leaching of Oxidized Copper Ores with Ammonium Nitrate
Authors: Javier Paul Montalvo Andia, Adriana Larrea Valdivia, Adolfo Pillihuaman Zambrano
Abstract:
The most common lixiviant for the leaching of copper minerals is H₂SO₄; however, the current situation calls for more environmentally friendly reagents, and in certain cases for reagents with lower consumption, since the presence of undesirable gangue minerals such as muscovite or kaolinite can make the process unfeasible. The present work studied the leaching of an oxidized copper mineral in an aqueous solution of ammonium nitrate in order to obtain the optimum leaching conditions for the copper contained in malachite mineral from Peru. The copper ore studied comes from a deposit in southern Peru and was characterized by X-ray diffraction, inductively coupled plasma optical emission spectrometry (ICP-OES), and atomic absorption spectrophotometry (AAS). The experiments were carried out in a 600 mL batch reactor in which parameters such as temperature, pH, ammonium nitrate concentration, particle size, and stirring speed were controlled according to the experimental plan. The sample solution was analyzed for copper by atomic absorption spectrophotometry (AAS). A simulation in the HSC Chemistry 6.0 program showed that the predominance of copper compounds in a Cu-H₂O aqueous system is altered by the presence of ammonium complexes, with Cu(NH₃)₄²⁺ being the thermodynamically most stable compound, predominating in the pH range from 8.5 to 10 at a temperature of 25 °C. The optimum conditions for copper leaching of the malachite mineral were a stirring speed of 600 rpm, an ammonium nitrate concentration of 4 M, a particle diameter of 53 µm, and a temperature of 62 °C. These results showed that the leaching of copper increases with increasing concentration of the ammonium solution, increasing stirring rate, increasing temperature, and decreasing particle diameter. Finally, the recovery of copper under optimum conditions was above 80%.
Keywords: ammonium nitrate, malachite, copper oxide, leaching
Procedia PDF Downloads 188
754 Numerical Assessment of Fire Characteristics with Bodies Engulfed in Hydrocarbon Pool Fire
Authors: Siva Kumar Bathina, Sudheer Siddapureddy
Abstract:
Fire accidents become even worse when hazardous equipment like reactors or radioactive waste packages are engulfed in fire. In this work, large-eddy numerical fire simulations are performed using a fire dynamics simulator to predict the thermal behavior of such bodies engulfed in hydrocarbon pool fires. A radiatively dominated 0.3 m circular burner with n-heptane as the fuel is considered. The numerical simulation results for the fire without any body inside it are validated against reported experimental data. The comparison shows good agreement for different flame properties: predicted mass burning rate, flame height, time-averaged centerline temperature, time-averaged centerline velocity, puffing frequency, irradiance at the surroundings, and radiative heat feedback to the pool surface. Casks of different sizes are simulated with SS304L material. The results are independent of the cask material, as the adiabatic surface temperature concept is employed in this study. It is observed that the mass burning rate increases with the blockage ratio (3% ≤ B ≤ 32%). However, this increment is reduced at higher blockage ratios (B > 14%). This is because the radiative heat feedback to the fuel surface comes not only from the flame but also from the cask volume. As B increases, the volume of the cask increases, thereby increasing the radiative contribution to the fuel surface. The radiative heat feedback in the case of a cask engulfed in fire is increased by 2.5% to 31% compared to the fire without a cask.
Keywords: adiabatic surface temperature, fire accidents, fire dynamics simulator, radiative heat feedback
Procedia PDF Downloads 124
753 Mathematical Modelling of Blood Flow with Magnetic Nanoparticles as Carrier for Targeted Drug Delivery in a Stenosed Artery
Authors: Sreeparna Majee, G. C. Shit
Abstract:
A study on targeted drug delivery is carried out for unsteady flow of blood infused with magnetic nanoparticles (NPs), with the aim of understanding the flow pattern and nanoparticle aggregation in a diseased arterial segment with stenosis. The magnetic NPs are guided by a magnetic field, which is significant for the therapeutic treatment of arterial diseases, tumors and cancer cells, and for removing blood clots. Coupled thermal energy has also been analyzed by considering the dissipation of energy due to the applied magnetic field and the viscosity of blood. The simulation technique used to solve the mathematical model is the vorticity-stream function formulation in the diseased artery. An elevation in SLP (specific loss power) is noted in the aortic bloodstream when the agglomeration of nanoparticles is higher. This phenomenon has potential application in the treatment by hyperthermia. The study focuses on the lowering of WSS (wall shear stress) with increasing particle concentration downstream of the stenosis, which indicates a vigorous flow recirculation zone. These low shear stress regions prolong the residence time of the nanoparticles carrying drugs that act on the LDL (low-density lipoprotein) deposition. Moreover, an increase in NP concentration enhances the Nusselt number, which marks an increase in heat transfer from the arterial wall to the surrounding tissues to destroy tumor and cancer cells without affecting healthy cells. The results have a significant bearing on the study of medicine, offering a way to treat arterial diseases such as atherosclerosis without the need for surgery, which can minimize the expenditure on cardiovascular treatments.
Keywords: magnetic nanoparticles, blood flow, atherosclerosis, hyperthermia
Procedia PDF Downloads 140
752 Clean Sky 2 Project LiBAT: Light Battery Pack for High Power Applications in Aviation – Simulation Methods in Early Stage Design
Authors: Jan Dahlhaus, Alejandro Cardenas Miranda, Frederik Scholer, Maximilian Leonhardt, Matthias Moullion, Frank Beutenmuller, Julia Eckhardt, Josef Wasner, Frank Nittel, Sebastian Stoll, Devin Atukalp, Daniel Folgmann, Tobias Mayer, Obrad Dordevic, Paul Riley, Jean-Marc Le Peuvedic
Abstract:
Electrical and hybrid aerospace technologies pose very challenging demands on the battery pack, especially with respect to weight and power. In the Clean Sky 2 research project LiBAT (funded by the EU), the consortium is currently building an ambitious prototype with state-of-the-art cells that shows the potential of an intelligent pack design with a high level of integration, especially with respect to thermal management and power electronics. For the latter, innovative multi-level inverter technology is used to realize the required power-converting functions with reduced equipment. In this talk, the key approaches and methods of the LiBAT project will be presented and central results shown. Special focus will be placed on the simulative methods used to support the early design and development stages from an overall system perspective. The applied methods can efficiently handle multiple domains and deal with different time and length scales, thus allowing the analysis and optimization of overall- or sub-system behavior. It will be shown how these simulations provide valuable information and insights for the efficient evaluation of concepts. As a result, the construction and iteration of hardware prototypes has been reduced and development cycles shortened.
Keywords: electric aircraft, battery, Li-ion, multi-level inverter, Novec
Procedia PDF Downloads 163
751 Confidence Intervals for Process Capability Indices for Autocorrelated Data
Authors: Jane A. Luke
Abstract:
Persistent pressure passed on to manufacturers from escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models. Academic and industrial circles are taking a keen interest in the field of manufacturing strategy. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability, and innovation. Process capability analysis is usually conducted assuming that the process under study is in statistical control and that independent observations are generated over time. In practice, however, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts. Even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts. When observations are autocorrelated, the classical control charts exhibit nonrandom patterns and lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. In this paper, the effect of autocorrelation on confidence intervals for different PCIs is examined. Stationary Gaussian processes are explained, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed for cases where the data are both independent and autocorrelated. Approximate lower confidence limits for various Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples are considered to demonstrate the results.
Keywords: autocorrelation, AR(1) model, Bissell's approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes
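The quantities involved can be sketched as follows: Cp and Cpk point estimates, Bissell's approximate lower confidence limit for Cpk, and an effective sample size for AR(1) data. Treating autocorrelation through an effective sample size is one common approach, shown here as an illustrative assumption rather than necessarily the paper's exact method; the function names are ours.

```python
import math
import statistics

def cp_cpk(data, lsl, usl):
    """Point estimates of Cp and Cpk from the sample mean and std, given
    lower (lsl) and upper (usl) specification limits."""
    mu, s = statistics.fmean(data), statistics.stdev(data)
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mu, mu - lsl) / (3 * s)
    return cp, cpk

def cpk_lower_bissell(cpk, n, z=1.645):
    """Bissell's approximate one-sided lower confidence limit for Cpk
    under the i.i.d. assumption, for sample size n and normal quantile z."""
    return cpk * (1 - z * math.sqrt(1 / (9 * n * cpk**2) + 1 / (2 * (n - 1))))

def effective_n(n, phi):
    """Approximate effective sample size for AR(1) data with parameter phi;
    substituting it for n widens the limit to reflect autocorrelation."""
    return n * (1 - phi) / (1 + phi)
```

For phi = 0.5, the effective sample size is one third of the nominal n, so the lower confidence limit computed with it is noticeably more conservative than the i.i.d. limit, which is the qualitative point the paper quantifies.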
Procedia PDF Downloads 387
750 Assessment of the Impacts of Climate Change on Climatic Zones over the Korean Peninsula for Natural Disaster Management Information
Authors: Sejin Jung, Dongho Kang, Byungsik Kim
Abstract:
Assessing the impact of climate change requires the use of a multi-model ensemble (MME) to quantify uncertainties between scenarios and produce downscaled outputs for simulating climate under the influence of different factors, including topography. This study downscales climate change scenarios from 13 global climate models (GCMs) to assess the impacts of future climate change. Unlike South Korea, North Korea lacks studies using climate change scenarios of the Coupled Model Intercomparison Project Phase 5 (CMIP5), and only recently did the country start the projection of extreme precipitation episodes. One of the main purposes of this study is to predict changes in the average climatic conditions of North Korea in the future. The result of comparing downscaled climate change scenarios with observation data for a reference period indicates high applicability of the multi-model ensemble (MME). Furthermore, the study classifies climatic zones by applying the Köppen-Geiger climate classification system to the MME, which is validated for future precipitation and temperature. The result suggests that the continental climate (D) that covers the inland area in the reference climate is expected to shift into the temperate climate (C). The coefficient of variation (CV) in the temperature ensemble is particularly low for the southern coast of the Korean peninsula, and accordingly, a high possibility of a shifting climatic zone along the coast is predicted. This research was supported by a grant (MOIS-DP-2015-05) of the Disaster Prediction and Mitigation Technology Development Program funded by the Ministry of the Interior and Safety (MOIS, Korea).
Keywords: MME, North Korea, Köppen-Geiger, climatic zones, coefficient of variation, CV
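The coefficient of variation used above to gauge ensemble agreement is simply the ratio of spread to mean across the MME members at a given location. A minimal sketch (the function name is ours):

```python
import statistics

def ensemble_cv(members):
    """Coefficient of variation across multi-model ensemble members at one
    location: a low CV means the models agree, so the projected change
    (e.g., a D-to-C climatic zone shift) is more robust."""
    mu = statistics.fmean(members)
    return statistics.stdev(members) / mu
```

A CV near zero, as reported for the southern coast, indicates that the 13 downscaled GCMs give nearly the same temperature projection there.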
Procedia PDF Downloads 110
749 Performance Evaluation of Routing Protocol in Cognitive Radio with Multi Technological Environment
Authors: M. Yosra, A. Mohamed, T. Sami
Abstract:
Over the past few years, mobile communication technologies have seen significant evolution. This fact has promoted the implementation of many systems in a multi-technological setting. From one system to another, the Quality of Service (QoS) provided to mobile consumers gets better. The growing number of normalized standards extends the services available to each consumer; moreover, most of the available radio frequencies have already been allocated to technologies such as 3G, Wi-Fi, WiMAX, and LTE. A study by the Federal Communications Commission (FCC) found that certain frequency bands are only partially occupied at particular locations and times. The idea of Cognitive Radio (CR) is therefore to share the spectrum between a primary user (PU) and a secondary user (SU). The main objective of this spectrum management is to achieve a maximum rate of exploitation of the radio spectrum. In general, CR can greatly improve the quality of service (QoS) and the reliability of the link. The problem resides in the possibility of proposing a technique to improve the reliability of the wireless link by using CR with some routing protocols. However, users have reported that links were unreliable and incompatible with the required QoS. In our case, we choose the QoS parameter "bandwidth" to perform a supervised classification. In this paper, we propose a comparative study between several routing protocols, taking into account the variation of different technologies over the existing spectral bandwidth: 3G, Wi-Fi, WiMAX, and LTE. From the simulation results, we observe that LTE has significantly higher available bandwidth compared with the other technologies. The performance of the OLSR protocol is better than that of the other routing protocols (DSR, AODV, and DSDV) under LTE technology because of proper packet reception, lower packet drop, and higher throughput. Numerous simulations of routing protocols have been carried out using simulators such as NS3.
Keywords: cognitive radio, multi technology, network simulator (NS3), routing protocol
Procedia PDF Downloads 59
748 O-LEACH: The Problem of Orphan Nodes in the LEACH Routing Protocol for Wireless Sensor Networks
Authors: Wassim Jerbi, Abderrahmen Guermazi, Hafedh Trabelsi
Abstract:
The optimum use of coverage in wireless sensor networks (WSNs) is very important. The LEACH protocol, Low-Energy Adaptive Clustering Hierarchy, presents a hierarchical clustering algorithm for wireless sensor networks. LEACH is a protocol that allows the formation of distributed clusters. In each cluster, LEACH randomly selects some sensor nodes called cluster heads (CHs). The selection of CHs is made with a probabilistic calculation. It is assumed that each non-CH node joins a cluster and becomes a cluster member. Nevertheless, some CHs can be concentrated in a specific part of the network, so that several sensor nodes cannot reach any CH. To solve this problem, we created the O-LEACH (Orphan nodes LEACH) protocol, whose role is to reduce the number of sensor nodes that do not belong to any cluster. A cluster member called a gateway receives messages from neighboring orphan nodes. The gateway informs the CH that there are neighboring nodes that do not belong to any group. The gateway, acting as a CH', then attaches the orphaned nodes to the cluster and collects their data. O-LEACH enables the formation of a new kind of cluster, leading to a long network lifetime and minimal energy consumption. Orphan nodes possess enough energy and seek to be covered by the network. The principal novel contribution of the proposed work is the O-LEACH protocol, which provides coverage of the whole network with a minimum number of orphaned nodes and a very high connectivity rate. As a result, the WSN application receives data from the entire network, including orphan nodes. The proper functioning of the application therefore requires intelligent management of the resources present within each network sensor. The simulation results show that O-LEACH performs better than LEACH in terms of coverage, connectivity rate, energy and scalability.
Keywords: WSNs, routing, LEACH, O-LEACH, orphan nodes, sub-cluster, gateway, CH'
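The probabilistic CH election that O-LEACH inherits from LEACH can be sketched as follows: in round r, each node that has not recently served as CH draws a uniform random number and becomes a CH if it falls below the rotating threshold T(n) = p / (1 - p (r mod 1/p)), where p is the desired CH fraction. The function names and the seeded RNG below are ours, for illustration.

```python
import random

def ch_threshold(p, r):
    """LEACH cluster-head election threshold T(n) in round r, for nodes that
    have not served as CH in the last 1/p rounds (desired CH fraction p)."""
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(node_ids, p, r, seed=0):
    """Each eligible node draws uniform(0, 1); those below the threshold
    become cluster heads for this round."""
    rng = random.Random(seed)
    t = ch_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]
```

Because the threshold grows over the 1/p-round cycle (reaching 1 in the last round), every eligible node eventually serves as CH; the clustering is random, though, which is exactly why CHs can cluster spatially and leave the orphan nodes that O-LEACH addresses.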
Procedia PDF Downloads 370
747 Multi-Criteria Optimal Management Strategy for in-situ Bioremediation of LNAPL Contaminated Aquifer Using Particle Swarm Optimization
Authors: Deepak Kumar, Jahangeer, Brijesh Kumar Yadav, Shashi Mathur
Abstract:
In-situ remediation is a technique that can remediate either surface water or groundwater at the site of contamination. In the present study, a simulation-optimization approach has been used to develop a management strategy for remediating LNAPL (Light Non-Aqueous Phase Liquid) contaminated aquifers. Benzene, toluene, ethylbenzene, and xylene are the main components of the LNAPL contaminant; collectively, these contaminants are known as BTEX. In the in-situ bioremediation process, a set of injection and extraction wells is installed. Injection wells supply oxygen and other nutrients, which help indigenous soil bacteria convert BTEX into carbon dioxide and water. Extraction wells, on the other hand, check the movement of the plume downstream. In this study, the optimal design of the system has been carried out using the PSO (Particle Swarm Optimization) algorithm. A comprehensive management strategy for the pumping of the injection and extraction wells has been developed to attain maximum allowable concentrations of 5 ppm and 4.5 ppm. The management strategy comprises determination of the pumping rates, the total pumping volume, and the total running cost incurred for each potential injection and extraction well. The results indicate a high pumping rate for the injection wells during the initial management period, since it facilitates the availability of oxygen and other nutrients necessary for biodegradation; however, it is low during the third year on account of sufficient oxygen availability. This is because the contaminant is assumed to have biodegraded by the end of the third year, when the concentration drops to a permissible level.
Keywords: groundwater, in-situ bioremediation, light non-aqueous phase liquid, BTEX, particle swarm optimization
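A minimal version of the particle swarm optimizer behind such a well-design search can be sketched as below. The cost function here is a stand-in (in the study it would embed the simulated BTEX concentrations, pumping volumes, and running costs, with penalties when the permissible concentration is exceeded), and the parameter values are common PSO defaults, not the paper's.

```python
import random

def pso(cost, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer: dim decision variables (e.g. well
    pumping rates) bounded to [lo, hi]; returns the best position and cost."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_f = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = cost(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

Swapping in a cost function that calls the fate-and-transport simulation at each evaluation turns this sketch into the simulation-optimization loop the abstract describes.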
Procedia PDF Downloads 443
746 The Effect of Photovoltaic Integrated Shading Devices on the Energy Performance of Apartment Buildings in a Mediterranean Climate
Authors: Jenan Abu Qadourah
Abstract:
With the depletion of traditional fossil resources and the growing human population, it is now more important than ever to reduce our energy usage and harmful emissions. In the Mediterranean region, intense solar radiation contributes to summertime overheating, which raises energy costs and building carbon footprints, while at the same time making the region well suited to the installation of solar energy systems. In urban settings, where multi-story structures predominate and roof space is limited, photovoltaic integrated shading devices (PVSD) are a clean solution for building designers. However, incorporating photovoltaic (PV) systems into a building's envelope is a complex procedure that, if not executed correctly, might result in the failure of the PV system. As a result, potential PVSD design solutions must be assessed on their overall energy performance from the project's early design stage. Therefore, this paper aims to investigate and compare the possible impact of various PVSDs on the energy performance of new apartments in the Mediterranean region, with a focus on Amman, Jordan. To achieve the research aim, computer simulations were performed to assess and compare the energy performance of different PVSD configurations. Furthermore, an energy index was developed by taking into account all energy aspects, including the building's primary energy demand and the PVSD systems' net energy production. According to the findings, the PVSD system can meet 12% to 43% of the apartment building's electricity needs. By highlighting the potential interest in PVSD systems, this study aids building designers in producing more energy-efficient buildings and encourages building owners to install PV systems on the façades of their buildings.
Keywords: photovoltaic integrated shading device, solar energy, architecture, energy performance, simulation, overall energy index, Jordan
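A minimal sketch of the kind of coverage figure and overall index described; the abstract does not give the exact weighting, so the `net_energy_index` formulation below is hypothetical and all numbers are illustrative.

```python
def pv_coverage(annual_demand_kwh, pv_net_production_kwh):
    """Share of the apartment's electricity demand met by the PVSD;
    the study reports 12%-43% across the configurations examined."""
    return 100.0 * pv_net_production_kwh / annual_demand_kwh

def net_energy_index(primary_demand_kwh, pv_net_production_kwh):
    """Hypothetical net index: primary energy demand minus the PVSD
    system's net production; lower values indicate better performance."""
    return primary_demand_kwh - pv_net_production_kwh

# Illustrative numbers only, not the paper's simulation results.
print(pv_coverage(12000.0, 3600.0))        # about 30 percent
print(net_energy_index(12000.0, 3600.0))   # 8400.0 kWh
```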
Procedia PDF Downloads 81
745 The Use of the Phytase in Aquaculture, Its Zootechnical Interests and the Possibilities of Incorporation in the Aquafeed
Authors: Niang Mamadou Sileye
Abstract:
The study concerns the use of phytase in aquaculture, its zootechnical interest and the possibilities of incorporating it in feed. The goal is to reduce the phosphorus waste linked to fish feeding, without any loss of zootechnical performance and with a decrease in feed costs. We studied the literature in order to evaluate the raw materials (total phosphorus, phytate and available phosphorus) used by a company to manufacture feed for rainbow trout; to determine the phosphorus requirements of aquaculture species; to determine the signs of phosphorus deficiency in fish; to study the antagonism between phosphorus and calcium; and to study the different forms of waste for the rainbow trout. The results found in the bibliography enabled us to test several hypotheses of feed formulation for rainbow trout with different raw materials. This simulation and the waste calculations allowed two feed formulations to be validated: a control feed (0.5% monocalcic phosphate) and a trial feed (supplemented with 0.002% of phytase Ronozyme PL and without inorganic phosphate). The feeds were produced and sent to an experimental facility (agricultural college of Brehoulou). The formulation results show a decrease in phosphorus waste of 28% for the trial feed compared with the control feed, and the supplementation enables a saving of 2.3 euros per tonne. The partial results of the ongoing test show no significant difference yet in the zootechnical parameters (growth rate, mortality, weight gain and apparent conversion rate) between the control feed and the trial one. The waste measurements do not show a significant difference either; however, the average difference would amount to a decrease in waste of 35.6% thanks to the use of phytase.
Keywords: phosphorus, phytic acid, phytase, need, digestibility, formulation, food, waste, rainbow trout
Procedia PDF Downloads 96
744 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry
Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood
Abstract:
The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment for a simple geometry. This type of fluid behaviour occurs in many practical engineering applications, hence the reason for investigating it. Historically, flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques, such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to such flows is investigated at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is rarely reported in the literature, and most evaluations of the Reynolds number effect in separated flows come from numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component.
The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness, and the errors obtained in the uncertainty analysis were in general relatively low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and a good agreement was found. The ADV technique proved able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flow. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels, thus decreasing the uncertainty.
Keywords: ADV, experimental data, multiple Reynolds number, post-processing
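The moving block bootstrap used for the uncertainty analysis can be sketched as follows; the synthetic velocity record, block length and replicate count are illustrative, not the study's settings.

```python
import random
import statistics

def moving_block_bootstrap(x, block_len, n_boot, seed=0):
    """Resample a time series by concatenating randomly chosen
    overlapping blocks of length block_len, which preserves short-range
    correlation; returns the bootstrap replicates of the sample mean."""
    rng = random.Random(seed)
    n = len(x)
    starts = n - block_len + 1
    reps = []
    for _ in range(n_boot):
        sample = []
        while len(sample) < n:
            s = rng.randrange(starts)
            sample.extend(x[s:s + block_len])
        reps.append(statistics.fmean(sample[:n]))
    return reps

# Synthetic stand-in for a filtered ADV velocity record (m/s).
rng = random.Random(1)
u = [0.3 + 0.02 * rng.gauss(0, 1) for _ in range(500)]
reps = moving_block_bootstrap(u, block_len=25, n_boot=200)
se = statistics.stdev(reps)  # bootstrap standard error of the mean
print(se)
```

The spread of the replicates gives the uncertainty of the mean velocity at each measurement point without assuming independent samples.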
Procedia PDF Downloads 146
743 Exploring 1,2,4-Triazine-3(2H)-One Derivatives as Anticancer Agents for Breast Cancer: A QSAR, Molecular Docking, ADMET, and Molecular Dynamics
Authors: Said Belaaouad
Abstract:
This study aimed to explore the quantitative structure-activity relationship (QSAR) of 1,2,4-triazine-3(2H)-one derivatives as potential anticancer agents against breast cancer. The electronic descriptors were obtained using the density functional theory (DFT) method, and a multiple linear regression technique was employed to construct the QSAR model. The model exhibited favorable statistical parameters, including R2=0.849, R2adj=0.656, MSE=0.056, R2test=0.710, and Q2cv=0.542, indicating its reliability. Among the descriptors analyzed, absolute electronegativity (χ), total energy (TE), number of hydrogen bond donors (NHD), water solubility (LogS), and shape coefficient (I) were identified as influential factors. Furthermore, leveraging the validated QSAR model, new derivatives of 1,2,4-triazine-3(2H)-one were designed, and their activity and pharmacokinetic properties were estimated. Subsequently, molecular docking and molecular dynamics (MD) simulations were employed to assess the binding affinity of the designed molecules. The tubulin colchicine binding site, which plays a crucial role in cancer treatment, was chosen as the target protein. Over a simulation trajectory spanning 100 ns, the binding affinity was calculated using the MMPBSA script. As a result, fourteen novel tubulin-colchicine inhibitors with promising pharmacokinetic characteristics were identified. Overall, this study provides valuable insights into the QSAR of 1,2,4-triazine-3(2H)-one derivatives as potential anticancer agents, along with the design of new compounds and their assessment through molecular docking and dynamics simulations targeting the tubulin-colchicine binding site.
Keywords: QSAR, molecular docking, ADMET, 1,2,4-triazin-3(2H)-ones, breast cancer, anticancer, molecular dynamics simulations, MMPBSA calculation
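The multiple linear regression step at the core of such a QSAR model can be sketched with ordinary least squares; the descriptor matrix and response below are synthetic stand-ins (the paper's five descriptors and training data are not reproduced here).

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares y ≈ intercept + X @ b; returns the
    coefficient vector and the coefficient of determination R^2."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    yhat = A @ beta
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot

# Synthetic set: 30 molecules, 5 descriptors (standing in for
# χ, TE, NHD, LogS and I), with a known linear response plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
true_b = np.array([0.8, -0.5, 0.3, -0.2, 0.1])
y = X @ true_b + 0.05 * rng.normal(size=30)   # pIC50-like response
beta, r2 = fit_mlr(X, y)
print(round(r2, 3))  # close to 1 for this low-noise synthetic set
```

In practice R² on a held-out test set and cross-validated Q², as reported in the abstract, are the meaningful figures, since the training R² alone is easily inflated.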
Procedia PDF Downloads 94
742 Integration of Hybrid PV-Wind in Three Phase Grid System Using Fuzzy MPPT without Battery Storage for Remote Area
Authors: Thohaku Abdul Hadi, Hadyan Perdana Putra, Nugroho Wicaksono, Adhika Prajna Nandiwardhana, Onang Surya Nugroho, Heri Suryoatmojo, Soedibjo
Abstract:
Access to electricity is now a basic requirement of mankind. Unfortunately, there are still many places around the world which have no access to electricity, such as small islands, where there could potentially be a factory, a plantation, a residential area, or resorts. Many of these places have substantial potential for energy generation, such as photovoltaic (PV) panels and wind turbines (WT), which can be used to generate electricity independently. Solar energy and wind power are renewable energy sources mostly found in nature and kinds of alternative energy that are still developing at a rapid pace to help meet the demand for electricity. The power delivered by PV and wind systems depends on the solar irradiation and wind speed of the geographical area. This paper presents a control methodology for a hybrid small-scale PV/wind energy system that uses a fuzzy logic controller (FLC) for maximum power point tracking (MPPT) under different solar irradiation and wind speed conditions. The paper discusses the simulation and analysis of the generation process of the hybrid resources at the MPP and of the power conditioning unit (PCU) of the photovoltaic (PV) and wind turbine (WT) systems connected to the three-phase low-voltage electricity grid (380 V) without battery storage. The capacities of the sources used are a 2.2 kWp PV array and a 2.5 kW PMSG (permanent magnet synchronous generator) wind turbine. The hybrid PV/wind model, as well as the integrated power electronics components in the grid-connected system, are simulated using MATLAB/Simulink.
Keywords: fuzzy MPPT, grid connected inverter, photovoltaic (PV), PMSG wind turbine
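The hill-climbing idea underlying MPPT can be sketched with the simpler perturb-and-observe scheme shown below; the paper instead uses a fuzzy logic controller, which in essence adapts the perturbation step from the measured changes in power and voltage. The PV curve and all numbers here are toy values.

```python
def pv_power(v):
    """Toy PV power curve with a single maximum at v = 17.0 V
    (illustrative, not a real module characteristic)."""
    return max(0.0, 120.0 - 0.9 * (v - 17.0) ** 2)

def perturb_and_observe(v0, step, iters):
    """Hill-climbing MPPT: keep perturbing the operating voltage in
    the direction that increased power, and reverse otherwise.
    A fuzzy controller would instead size the step from dP and dV."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction
        v, p = v_new, p_new
    return v, p

v, p = perturb_and_observe(v0=12.0, step=0.2, iters=100)
print(v, p)  # settles into a small oscillation around the 17 V peak
```

The residual oscillation around the peak is exactly what the fuzzy variable-step controller reduces, by shrinking the step as the operating point approaches the maximum.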
Procedia PDF Downloads 354
741 Study and Simulation of a Dynamic System Using Digital Twin
Authors: J.P. Henriques, E. R. Neto, G. Almeida, G. Ribeiro, J.V. Coutinho, A.B. Lugli
Abstract:
Industry 4.0, or the Fourth Industrial Revolution, is transforming the relationship between people and machines. In this scenario, technologies such as cloud computing, the Internet of Things, augmented reality, artificial intelligence and additive manufacturing, among others, are making industries and devices increasingly intelligent. One of the most powerful technologies of this new revolution is the digital twin, which allows the virtualization of a real system or process. In this context, the present paper addresses the linear and nonlinear dynamic study of a didactic level plant using a digital twin. In the first part of the work, the level plant is identified at a fixed operating point by using the least squares method. The linearized model is embedded in a digital twin using Automation Studio® from Famic Technologies. To validate the usage of the digital twin in the linearized study of the plant, the dynamic response of the real system is compared to that of the digital twin. Furthermore, in order to develop the nonlinear model on a digital twin, the didactic level plant is identified by using the method proposed by Hammerstein. Different steps are applied to the plant, and from the Hammerstein algorithm the nonlinear model is obtained for all operating ranges of the plant. As for the linear approach, the nonlinear model is embedded in the digital twin, and the dynamic response is compared to that of the real system at different operating points. Finally, from the practical results obtained, one can conclude that the usage of a digital twin to study dynamic systems is extremely useful in the industrial environment, taking into account that it is possible to develop and tune controllers by using the virtual model of the real system.
Keywords: industry 4.0, digital twin, system identification, linear and nonlinear models
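The least squares identification of the linearized plant can be sketched as follows; the true parameters, the input signal and the noise level are illustrative, not the didactic plant's actual values.

```python
import numpy as np

# Simulate a first-order discrete level plant y[k+1] = a*y[k] + b*u[k]
# with small measurement noise, then re-identify a and b by least
# squares, as done for the linearized digital-twin model.
a_true, b_true = 0.9, 0.5
rng = np.random.default_rng(3)
u = rng.integers(0, 2, 200).astype(float)   # persistently exciting input
y = np.zeros(201)
for k in range(200):
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.01 * rng.normal()

# Least squares on the regression y[k+1] = [y[k], u[k]] @ [a, b]
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(a_hat, b_hat)  # close to 0.9 and 0.5
```

A pure step input, as in a classical step test, excites the plant poorly near steady state; a varying input such as the binary sequence above keeps the regression well conditioned.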
Procedia PDF Downloads 146
740 Effects of the Air Supply Outlets Geometry on Human Comfort inside Living Rooms: CFD vs. ADPI
Authors: Taher M. Abou-deif, Esmail M. El-Bialy, Essam E. Khalil
Abstract:
The paper is devoted to numerically investigating the influence of the air supply outlet geometry on human comfort inside living rooms. A computational fluid dynamics model is developed to examine the air flow characteristics of a room with different supply air diffusers. The work focuses on air flow patterns and thermal behavior in a room with a small number of occupants. As an input to the full-scale 3-D room model, a 2-D air supply diffuser model that supplies the direction and magnitude of the air flow into the room is developed. The effect of the air distribution on the thermal comfort parameters was investigated by changing the air supply diffuser type, angles and velocity; the locations and numbers of the air supply diffusers were also investigated. The pre-processor Gambit is used to create the geometric model with parametric features. The commercially available simulation software Fluent 6.3 is used to solve the differential equations governing the conservation of mass, the three momentum components and energy in the computation of the air flow distribution. Turbulence effects are represented by the well-established standard k-ε model, one of the most widespread turbulence models for industrial applications. The basic parameters in this work are the air dry bulb temperature, air velocity, relative humidity and turbulence parameters, which are used for numerical predictions of indoor air distribution and thermal comfort. The thermal comfort predictions in this work were based on the ADPI (Air Diffusion Performance Index), the PMV (Predicted Mean Vote) model and the PPD (Percentage of People Dissatisfied) model; the PMV and PPD were estimated using Fanger's model.
Keywords: thermal comfort, Fanger's model, ADPI, energy efficiency
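The ADPI computation reduces to counting sample points whose effective draft temperature and velocity fall inside the comfort window; a minimal sketch, with a hypothetical probe grid of (temperature, velocity) pairs:

```python
def effective_draft_temperature(t_local, t_avg, v_local):
    """EDT (K): theta = (Tx - Tc) - 8.0 * (Vx - 0.15), with
    temperatures in deg C and velocity in m/s (ASHRAE form)."""
    return (t_local - t_avg) - 8.0 * (v_local - 0.15)

def adpi(points, t_avg):
    """Percentage of sampled points with -1.7 < EDT < 1.1 K and
    local velocity below 0.35 m/s."""
    ok = sum(1 for t, v in points
             if -1.7 < effective_draft_temperature(t, t_avg, v) < 1.1
             and v < 0.35)
    return 100.0 * ok / len(points)

# Hypothetical probe grid from a simulated room (deg C, m/s).
samples = [(24.0, 0.15), (24.5, 0.20), (23.2, 0.10),
           (25.5, 0.40), (23.8, 0.25)]
print(adpi(samples, t_avg=24.0))  # 80.0: one point fails on velocity
```

In the CFD study each diffuser configuration yields such a grid from the computed temperature and velocity fields, and the configuration with the highest ADPI gives the best air diffusion.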
Procedia PDF Downloads 408
739 Improvement Performances of the Supersonic Nozzles at High Temperature Type Minimum Length Nozzle
Authors: W. Hamaidia, T. Zebbiche
Abstract:
This paper presents the design of axisymmetric supersonic nozzles that accelerate the flow to a desired exit Mach number while having a small weight and, at the same time, a high thrust. The nozzle delivers a parallel and uniform flow at the exit section and is divided into subsonic and supersonic regions. The supersonic portion is independent of the upstream conditions of the sonic line, while the subsonic portion is used to produce a sonic flow at the throat. Such a nozzle is called a minimum length nozzle. The study is done at high temperature, below the dissociation threshold of the molecules, in order to improve the aerodynamic performance. Our aim is to improve the performance both by increasing the exit Mach number and the thrust coefficient and by reducing the nozzle's mass; the variation of the specific heats with temperature is taken into account. The design is made by the method of characteristics, and the finite difference method with a predictor-corrector algorithm is used for the numerical resolution of the resulting nonlinear algebraic equations. The application is for air. All the results obtained depend on three parameters: the exit Mach number, the stagnation temperature and the chosen characteristics mesh. A numerical simulation of the nozzle with the computational fluid dynamics code FASTRAN was carried out to determine and confirm the necessary design parameters.
Keywords: supersonic flow, axisymmetric minimum length nozzle, high temperature, method of characteristics, calorically imperfect gas, finite difference method, thrust coefficient, mass of the nozzle, specific heat at constant pressure, air, error
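The method of characteristics for such nozzles is built on the Prandtl-Meyer function; a minimal sketch for a calorically perfect gas (the paper relaxes this assumption to temperature-dependent specific heats), with its inverse obtained by bisection:

```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer angle nu(M) in radians for a calorically
    perfect gas with ratio of specific heats gamma."""
    g = (gamma + 1.0) / (gamma - 1.0)
    s = math.sqrt(M * M - 1.0)
    return math.sqrt(g) * math.atan(s / math.sqrt(g)) - math.atan(s)

def mach_from_nu(nu, gamma=1.4, lo=1.0 + 1e-9, hi=50.0):
    """Invert nu(M) by bisection (nu is monotonically increasing in M),
    as needed when marching along the characteristics."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if prandtl_meyer(mid, gamma) < nu:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nu2 = math.degrees(prandtl_meyer(2.0))
print(nu2)                                   # about 26.38 deg
print(mach_from_nu(math.radians(26.3798)))   # recovers M close to 2.0
```

For the minimum length nozzle, the expansion at the sharp throat corner through half the exit Prandtl-Meyer angle fixes the wall turning, which is what makes the design length minimal.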
Procedia PDF Downloads 149
738 Analysis of Noise Environment and Acoustics Material in Residential Building
Authors: Heruanda Alviana Giska Barabah, Hilda Rasnia Hapsari
Abstract:
Acoustic phenomena create an acoustic interpretation condition that describes the characteristics of the environment. In urban areas, heterogeneous and simultaneous human activities tend to form a soundscape that differs from other regions; one characteristic of urban areas that shapes this soundscape is the presence of vertical housing, i.e. residential buildings. Activities both within the building and in the surrounding environment produce a soundscape with particular characteristics. The acoustic comfort of residential buildings thus becomes an important aspect, and this demand leads building features to become more diverse. Mapping the acoustic conditions of a soundscape is an important first step for identifying uncomfortable conditions. Noise generated by road traffic, railways and aircraft is an important consideration, especially for urban people; the proper design of the building therefore becomes very important as an effort to provide appropriate acoustic comfort. In this paper, the authors developed a noise map of the residential building site. The mapping was done by taking measurement points referred to the noise sources, and its results became the basis for modelling how acoustic waves interact with the building model. Material selection was done based on a literature study and modelling simulation using the Insul software, considering the absorption coefficient and the Sound Transmission Class. The acoustic rays were analysed with a ray tracing method using the COMSOL simulation software, which shows the propagation of acoustic rays and their interaction with boundaries. The results of this study can be used to choose boundary materials in residential buildings, as well as to improve the acoustic quality in the acoustic zones that are formed.
Keywords: residential building, noise, absorption coefficient, sound transmission class, ray tracing
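Before consulting measured STC ratings or a tool such as Insul, a first screening estimate of a partition's sound insulation can be made with the field-incidence mass law; a minimal sketch with an illustrative surface density:

```python
import math

def mass_law_tl(surface_density, freq):
    """Field-incidence mass law: TL ≈ 20*log10(m*f) - 47 dB, with
    m in kg/m^2 and f in Hz. A screening estimate only: coincidence
    dips and real mounting conditions lower the measured values."""
    return 20.0 * math.log10(surface_density * freq) - 47.0

# A light panel of roughly 10 kg/m^2 (illustrative value).
for f in (125, 500, 2000):
    print(f, round(mass_law_tl(10.0, f), 1))  # TL rises ~6 dB/octave
```

The rating behind the Sound Transmission Class then comes from fitting a standard reference contour to such a transmission-loss spectrum measured in third-octave bands.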
Procedia PDF Downloads 245
737 High Pressure Torsion Deformation Behavior of a Low-SFE FCC Ternary Medium Entropy Alloy
Authors: Saumya R. Jha, Krishanu Biswas, Nilesh P. Gurao
Abstract:
Several recent investigations have revealed medium entropy alloys exhibiting better mechanical properties than their high entropy counterparts. This clearly establishes that although a higher entropy plays a vital role in stabilizing a particular phase over complex intermetallic phases, configurational entropy is not the primary factor responsible for the high inherent strengthening in these systems. Above and beyond a high contribution from friction stresses and solid solution strengthening, strain hardening is an important contributor to the strengthening in these systems. In this regard, researchers have developed severe plastic deformation (SPD) techniques like high pressure torsion (HPT) to impose very high shear strain on the material, thereby leading to ultrafine-grained (UFG) microstructures, which cause a manifold increase in strength. The presented work is a meticulous study of the variation in mechanical properties at different radial displacements from the center of HPT-tested equiatomic ternary FeMnNi synthesized by the casting route, a low stacking fault energy FCC alloy that shows significantly higher toughness than high entropy counterparts such as the Cantor alloy. The gradient in grain sizes along the radial direction of these specimens has been modeled using microstructure entropy to predict the mechanical properties, which have also been validated by indentation tests. The dislocation density is computed by FEM simulations for varying strains and validated by analyzing synchrotron diffraction data. Thus, the proposed model can be utilized to predict the strengthening behavior of similar systems deformed by HPT under varying loading conditions.
Keywords: high pressure torsion, severe plastic deformation, configurational entropy, dislocation density, FEM simulation
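The nominal shear strain imposed by HPT grows linearly with radius, which is why the properties are probed at different radial displacements; a minimal sketch with illustrative disc dimensions:

```python
import math

def hpt_shear_strain(n_turns, radius, thickness):
    """Nominal HPT shear strain gamma = 2*pi*N*r / t: zero at the
    disc centre and increasing linearly with radius."""
    return 2.0 * math.pi * n_turns * radius / thickness

def equivalent_strain(gamma):
    """von Mises equivalent strain for simple shear."""
    return gamma / math.sqrt(3.0)

# 5 turns on a 10 mm diameter, 1 mm thick disc (illustrative numbers).
for r_mm in (0.0, 2.5, 5.0):
    g = hpt_shear_strain(5, r_mm, 1.0)
    print(r_mm, round(g, 1), round(equivalent_strain(g), 1))
```

Equivalent strains of order 10-100 at the periphery versus zero at the centre explain the strong radial gradient in grain size and hardness that the study models.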
Procedia PDF Downloads 152
736 Numerical Investigation of Hydrodynamic and Parietal Heat Transfer to a Bingham Fluid Agitated in a Vessel by a Helical Ribbon Impeller
Authors: Mounir Baccar, Amel Gammoudi, Abdelhak Ayadi
Abstract:
The efficient mixing of highly viscous fluids is required by many industries, such as food, polymer or paint production. Homogenization is a challenging operation for this type of fluid, since mixing is carried out at low Reynolds number to limit the power required by the impellers. In particular, close-clearance impellers, mainly helical ribbons, are chosen for highly viscous fluids agitated in the laminar regime, which are commonly heated through the vessel wall. Such impellers generate high shear strains close to the vessel wall, which disturbs the thermal boundary layer, and they ensure the homogenization of the bulk volume by axial and radial vortices. The hydrodynamic and thermal behaviors of Newtonian fluids in vessels agitated by helical ribbon impellers have been widely studied. However, researchers have rarely investigated numerically the agitation of yield stress fluids by means of helical ribbon impellers. This paper aims to study the effect of double helical ribbon (DHR) stirrers on both the hydrodynamic and thermal behaviors of yield stress fluids treated in a cylindrical vessel by means of a numerical simulation approach. For this purpose, the continuity, momentum and thermal equations were solved with a 3D finite volume technique. The effect of the Oldroyd (Od) and Reynolds (Re) numbers on the power (Po) and Nusselt (Nu) numbers has been studied for the mentioned stirrer type. The velocity and thermal fields, the dissipation function and the apparent viscosity are also presented in different (r-z) and (r-θ) planes.
Keywords: Bingham fluid, hydrodynamic and thermal behavior, helical ribbon, mixing, numerical modelling
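In finite volume simulations of Bingham fluids, the yield stress is typically handled through a regularized apparent viscosity; a minimal sketch using the Papanastasiou regularization (the abstract does not state which regularization the paper uses, so this particular choice is an assumption), with illustrative rheological values:

```python
import math

def apparent_viscosity(shear_rate, tau_y, mu_p, m=1000.0):
    """Papanastasiou-regularized Bingham model:
    mu_app = mu_p + tau_y * (1 - exp(-m * gamma_dot)) / gamma_dot,
    which avoids the singularity at vanishing shear rate in CFD."""
    if shear_rate == 0.0:
        return mu_p + tau_y * m        # limit as gamma_dot -> 0
    return mu_p + tau_y * (1.0 - math.exp(-m * shear_rate)) / shear_rate

# Yield stress 5 Pa, plastic viscosity 1 Pa.s (illustrative values).
for gd in (0.001, 0.1, 10.0):
    print(gd, round(apparent_viscosity(gd, tau_y=5.0, mu_p=1.0), 2))
```

The very high apparent viscosity at low shear rate mimics the unyielded zones away from the ribbon, while near the wall, where shear is high, the viscosity tends to the plastic viscosity; the Oldroyd number quoted in the abstract is the dimensionless ratio of yield to viscous stresses.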
Procedia PDF Downloads 303
735 Enhancement of Primary User Detection in Cognitive Radio by Scattering Transform
Authors: A. Moawad, K. C. Yao, A. Mansour, R. Gautier
Abstract:
Detecting an occupied frequency band is a major issue in cognitive radio systems. The detection process becomes difficult if the signal occupying the band of interest has a faded amplitude due to multipath effects, which make it hard for an occupying user to be detected. This work mitigates the missed-detection problem in the context of cognitive radio in a frequency-selective fading channel by proposing a blind channel estimation method based on the scattering transform. By initially applying conventional energy detection, the missed-detection probability is evaluated, and if it is greater than or equal to 50%, channel estimation is applied to the received signal, followed by channel equalization to reduce the channel effects. In the proposed channel estimator, we modify the Morlet wavelet by using its first derivative for better frequency resolution. A mathematical description of the modified function and its frequency resolution is formulated in this work; the improved frequency resolution is required to follow the spectral variation of the channel. The channel estimation error is evaluated in the mean-square sense for different channel settings, and energy detection is applied to the equalized received signal. The simulation results show an improvement in the missed-detection probability compared to detection based on principal component analysis. This improvement is achieved at the expense of increased estimator complexity, which depends on the number of wavelet filters as related to the channel taps. The detection performance also shows an improvement in detection probability in low signal-to-noise scenarios over principal component analysis-based energy detection.
Keywords: channel estimation, cognitive radio, scattering transform, spectrum sensing
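The conventional energy detection applied as the first stage can be sketched as follows; the Gaussian threshold approximation, the signal model and the noise variance are illustrative assumptions, not the paper's settings.

```python
import math
import random

def energy_detect(x, noise_var, pfa=0.05):
    """Energy detector: declare the band occupied when the normalized
    energy statistic exceeds a threshold set for false-alarm rate pfa,
    using the Gaussian approximation of the chi-square statistic:
    threshold = 1 + Q^{-1}(pfa) * sqrt(2 / n), Q^{-1}(0.05) ≈ 1.645."""
    n = len(x)
    stat = sum(v * v for v in x) / (n * noise_var)
    thresh = 1.0 + 1.645 * math.sqrt(2.0 / n)
    return stat > thresh, stat

rng = random.Random(7)
noise_only = [rng.gauss(0, 1) for _ in range(1000)]
occupied = [math.sin(0.1 * k) + rng.gauss(0, 1) for k in range(1000)]
print(energy_detect(noise_only, 1.0)[0])  # False unless noise spikes
print(energy_detect(occupied, 1.0)[0])    # extra signal energy detected
```

When fading shrinks the received signal power, the statistic sinks toward the noise-only level, which is the missed-detection regime that the scattering-transform channel estimation and equalization are meant to correct.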
Procedia PDF Downloads 194
734 Management of the Experts in the Research Evaluation System of the University: Based on National Research University Higher School of Economics Example
Authors: Alena Nesterenko, Svetlana Petrikova
Abstract:
Research evaluation is one of the most important elements of self-regulation and development for researchers, as it is an impartial and independent assessment process. The method of expert evaluations, as a scientific instrument for solving complicated non-formalized problems, is, firstly, a scientifically sound way to conduct an assessment with maximum effectiveness at every step and, secondly, the use of quantitative methods for evaluating expert opinion and collectively processing the results. These two features distinguish the method of expert evaluations from the long-known expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues arise with these methods: the selection of experts, the management of the assessment procedure, the processing of the results and the remuneration of the experts. To address these issues, an online system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows: - To realize within one platform the independent activities of different workgroups (e.g. expert officers, managers). - To establish different workspaces for the corresponding workgroups, where custom user databases can be created according to particular needs. - To form the required output documents for each workgroup. - To configure information gathering for each workgroup (assessment forms, tests, inventories). - To create and operate personal databases of remote users. - To set up automatic notification through e-mail. The next stage is the development of quantitative and qualitative criteria to form a database of experts.
The inventory was made so that the experts may submit not only their personal data, place of work and scientific degree, but also keywords describing their expertise, academic interests, ORCID, Researcher ID, SPIN-code RSCI, Scopus AuthorID, knowledge of languages and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party's demands, in the form of appraised inventories, commentaries (50-250 characters) and an overall review (1500 characters) in which the expert states the absence of a conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two per application. The experts are selected according to the keywords; this method proved effective, unlike the OECD classifier. At the last stage, the choice of the experts is approved by the supervisor, and e-mails are sent to the experts inviting them to assess the project. An expert supervisor monitors the experts' report writing so that all formalities are in place (time frame, propriety, correspondence). If the difference between assessments exceeds four points, a third evaluation is appointed. As the expert finishes work on the expert opinion, the system shows a contract marked 'new'; the managers process the contract, and the expert receives an e-mail stating that the contract is formed and ready to be signed. Once all formalities are concluded, the expert receives remuneration for the work. The specifics of the interaction of the examination officer with the experts will be presented in the report.
Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation
Procedia PDF Downloads 204
733 Real-Time Radar Tracking Based on Nonlinear Kalman Filter
Authors: Milca F. Coelho, K. Bousson, Kawser Ahmed
Abstract:
To accurately track an aerospace vehicle in a time-critical situation and in a highly nonlinear environment is one of the strongest interests within the aerospace community. Tracking is achieved by accurately estimating the state of a moving target, which is composed of a set of variables that provide a complete status of the system at a given time. One of the main ingredients of good estimation performance is the use of efficient estimation algorithms. A well-known framework is Kalman filtering, designed for prediction and estimation problems. The success of the Kalman filter (KF) in engineering applications is mostly due to the extended Kalman filter (EKF), which is based on local linearization. Despite its popularity, the EKF presents several limitations. To address these limitations, and as a possible solution to tracking problems, this paper proposes the use of the ensemble Kalman filter (EnKF). Although the EnKF is extensively used in the context of weather forecasting and is recognized for producing accurate and computationally effective estimates for systems of very high dimension, it is almost unknown to the tracking community. The EnKF was initially proposed as an attempt to improve the error covariance calculation, which is difficult to implement in the classic Kalman filter. In the EnKF method, the prediction and analysis error covariances have ensemble representations; the ensemble size limits the number of degrees of freedom, so that the filter error covariance calculations become far more practical for modest ensemble sizes. In this paper, a realistic simulation of radar tracking was performed, in which the EnKF was applied and compared with the extended Kalman filter.
The results suggest that the EnKF is a promising tool for tracking applications, offering advantages in terms of performance.
Keywords: Kalman filter, nonlinear state estimation, optimal tracking, stochastic environment
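The EnKF analysis step for a scalar, directly observed state can be sketched as follows; the ensemble size, prior spread and observation values are illustrative, not the radar scenario's parameters.

```python
import random
import statistics

def enkf_update(ensemble, obs, obs_var, rng):
    """One EnKF analysis step for a scalar state observed directly:
    the gain K = P / (P + R) uses the sample (ensemble) variance P,
    and each member assimilates a perturbed observation, which keeps
    the analysis spread statistically consistent."""
    P = statistics.variance(ensemble)
    K = P / (P + obs_var)
    return [x + K * (obs + rng.gauss(0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(0)
ensemble = [rng.gauss(10.0, 2.0) for _ in range(100)]   # prior members
analysis = enkf_update(ensemble, obs=12.0, obs_var=1.0, rng=rng)
print(statistics.fmean(ensemble), statistics.fmean(analysis))
```

The ensemble mean moves toward the observation and the spread shrinks, exactly the covariance update that the classic KF computes analytically; for nonlinear radar measurement models each member is simply propagated through the full nonlinear dynamics, which is what makes the EnKF attractive here.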
Procedia PDF Downloads 144