Search results for: PSCAD simulation
1167 Optimal Dynamic Regime for CO Oxidation Reaction Discovered by Policy-Gradient Reinforcement Learning Algorithm
Authors: Lifar M. S., Tereshchenko A. A., Bulgakov A. N., Guda S. A., Guda A. A., Soldatov A. V.
Abstract:
Metal nanoparticles are widely used as heterogeneous catalysts to activate adsorbed molecules and reduce the energy barrier of the reaction. Reaction product yield depends on the interplay between elementary processes - adsorption, activation, reaction, and desorption. These processes, in turn, depend on the inlet feed concentrations, temperature, and pressure. At stationary conditions, the active surface sites may be poisoned by reaction byproducts or blocked by thermodynamically adsorbed gaseous reagents. Thus, the yield of reaction products can drop significantly. On the contrary, dynamic control accounts for the changes in the surface properties and adjusts the reaction parameters accordingly. Therefore, dynamic control may be more efficient than stationary control. In this work, a reinforcement learning algorithm has been applied to control a simulation of CO oxidation on a catalyst. The policy-gradient algorithm learns to maximize the CO₂ production rate based on the CO and O₂ flows at a given time step. Nonstationary solutions were found for the regime with surface deactivation. The maximal product yield was achieved for periodic variations of the gas flows, ensuring a balance between available adsorption sites and the concentration of activated intermediates. This methodology opens a perspective for the optimization of catalytic reactions under nonstationary conditions.
Keywords: artificial intelligence, catalyst, CO oxidation, reinforcement learning, dynamic control
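For illustration, a minimal sketch of the policy-gradient (REINFORCE) idea described in the abstract, assuming a toy surface-kinetics environment: the coverage update rules, rate constants, and Gaussian policy below are illustrative stand-ins and not the kinetic model or algorithm used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(theta_co, theta_o, u):
    """One toy kinetics step. u in [0, 1] is the CO fraction of the inlet feed.
    All rate constants are illustrative placeholders."""
    free = max(0.0, 1.0 - theta_co - theta_o)
    ads_co = 0.30 * u * free             # CO adsorption
    ads_o = 0.20 * (1.0 - u) * free      # lumped dissociative O2 adsorption
    rxn = 1.50 * theta_co * theta_o      # CO + O -> CO2 (reward per step)
    theta_co = float(np.clip(theta_co + ads_co - rxn, 0.0, 1.0))
    theta_o = float(np.clip(theta_o + ads_o - rxn, 0.0, 1.0))
    return theta_co, theta_o, rxn

def run_episode(mu, sigma=0.1, steps=100):
    """Roll out one episode with a Gaussian policy over the CO feed fraction."""
    theta_co = theta_o = 0.0
    grad_logp, total_co2 = 0.0, 0.0
    for _ in range(steps):
        a = rng.normal(mu, sigma)                # sample an action
        u = float(np.clip(a, 0.0, 1.0))
        grad_logp += (a - mu) / sigma**2         # d/dmu of log N(a | mu, sigma)
        theta_co, theta_o, co2 = step(theta_co, theta_o, u)
        total_co2 += co2
    return grad_logp, total_co2

# REINFORCE with a moving-average baseline: adjust the mean CO feed fraction
# to maximise the CO2 yield per episode.
mu, lr, baseline = 0.5, 1e-3, 0.0
for it in range(500):
    g, ret = run_episode(mu)
    baseline = 0.9 * baseline + 0.1 * ret
    mu = float(np.clip(mu + lr * (ret - baseline) * g, 0.0, 1.0))
print("learned mean CO feed fraction:", round(mu, 3))
```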
Procedia PDF Downloads 129
1166 2 Stage CMOS Regulated Cascode Distributed Amplifier Design Based On Inductive Coupling Technique in Submicron CMOS Process
Authors: Kittipong Tripetch, Nobuhiko Nakano
Abstract:
This paper proposes one-stage and two-stage CMOS Complementary Regulated Cascode Distributed Amplifier (CRCDA) designs based on inductive and transformer coupling techniques. Usually, a distributed amplifier is based on inductive coupling between the gates of the MOSFETs and between their drains. This paper proposes a new idea: the differential primary windings of a transformer couple the gates of the first and second stages of the regulated cascode amplifier, and the differential secondary windings couple the drains of the first and second stages. This paper also proposes a polynomial model of the passive equivalent circuit of a silicon transformer from Nanyang Technological University, which is used to extract the frequency response of the transformer. Cadence simulation results are used to verify the validity of the transformer polynomial model, which can then be used to design the distributed amplifier without Cadence. The four scattering-matrix parameters of the proposed two-port circuit are derived as functions of the four impedance-matrix parameters.
Keywords: CMOS regulated cascode distributed amplifier, silicon transformer modeling with polynomial, low power consumption, distributed amplification technique
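As background for the last step, a two-port impedance matrix can be converted to scattering parameters with the standard normalised relation S = (Z/Z0 - I)(Z/Z0 + I)^-1. The sketch below assumes an arbitrary example Z matrix and a 50-ohm reference impedance; the values are not taken from the proposed CRCDA circuit.

```python
import numpy as np

def z_to_s(Z, z0=50.0):
    """Convert an n-port impedance matrix Z (ohms) to scattering parameters,
    assuming the same real reference impedance z0 at every port."""
    I = np.eye(Z.shape[0])
    Zn = Z / z0                          # normalised impedance matrix
    return (Zn - I) @ np.linalg.inv(Zn + I)

# Example two-port (values are arbitrary placeholders, not the CRCDA circuit).
Z = np.array([[75.0 + 10.0j, 5.0 + 1.0j],
              [5.0 + 1.0j, 60.0 - 8.0j]])
S = z_to_s(Z)
for (i, j), s in np.ndenumerate(S):
    print(f"S{i+1}{j+1} = {abs(s):.3f} at {np.degrees(np.angle(s)):.1f} deg")
```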
Procedia PDF Downloads 512
1165 Development and Validation of Thermal Stability in Complex System ABDM has two ASIC by NISA and COMSOL Tools
Authors: A. Oukaira, A. Lakhssassi, O. Ettahri
Abstract:
To achieve good thermal management in an ABDM (Adapter Board Detector Module) card, we must first control the temperature and its gradient from the first step in the design of the integrated circuits (ASICs) of our complex system. In this paper, our main goal is to develop and validate the thermal stability in order to get an idea of the heat flow around the ASIC in the transient regime and thus address the thermal issues for integrated circuits on the ABDM card. However, we need heat-source simulations for the ABDM card to establish its thermal mapping. This led us to perform simulations at each ASIC, which allow us to understand the ABDM thermal map and find real solutions for each part of our complex system, which contains 36 ABDM cards, taking into account the different layers around the ASIC. To perform a transient simulation under NISA, we had to build a time-dependent power modulation function, TIMEAMP. The maximum power generated in the ASIC is 0.6 W. We divided the power uniformly over the volume of the ASIC. This power was applied for 5 seconds to visualize the evolution and distribution of heat around the ASIC. Dirichlet boundary conditions (DBC) were applied around the ABDM at 25°C, and just after these simulations in the NISA tool, we validate them with the COMSOL tool, which is a modular finite-element numerical simulation software for modeling a wide variety of physical phenomena characterizing a real problem. It will also serve as a design tool with its ability to handle 3D geometries for complex systems.
Keywords: ABDM, APD, thermal mapping, complex system
Procedia PDF Downloads 264
1164 3D Hybrid Multiphysics Lattice Boltzmann Model for Studying the Flow Behavior of Emulsions in Structured Rectangular Microchannels
Authors: Luma Al-Tamimi, Hassan Farhat, Wessam Hasan
Abstract:
A three-dimensional (3D) hybrid quasi-steady thermal lattice Boltzmann model is developed to couple the effects of surfactant, temperature, interfacial tension, and contact angle. This 3D model is an extended scheme of a previously introduced two-dimensional (2D) hybrid lattice Boltzmann model. The 3D model is used to study the combined multiphysics effects on emulsion systems flowing in rectangular microchannels with and without confinements, where the suspended phase is made of droplets, plugs, or a mixture of both. The simulation results show that emulsion systems with plugs as the suspended phase are more efficient than those with droplets, whereas mixed systems that form large plugs through coalescence have even greater efficiency. The 3D contact angle model generates results matching those of the 2D model, which were validated with experiments. Furthermore, the effects of various confinements on adhering single-drop systems are investigated to delineate their influence on the power required for transporting the suspended phase through the channel. It is shown that the deeper the constriction, the lower the system efficiency. Increasing the surfactant concentration or fluid temperature in a channel with confinement has a substantial positive effect on oil droplet transportation.
Keywords: lattice Boltzmann method, thermal, contact angle, surfactants, high viscosity ratio, porous media
Procedia PDF Downloads 175
1163 Numerical Determination of Transition of Cup Height between Hydroforming Processes
Authors: H. Selcuk Halkacı, Mevlüt Türköz, Ekrem Öztürk, Murat Dilmec
Abstract:
Various attempts concerning the low formability of lightweight materials such as aluminium and magnesium alloys are being investigated in many studies. Advanced forming processes such as hydroforming are one of these attempts. In recent decades, the sheet hydroforming process has attracted increasing interest, particularly in the automotive and aerospace industries. This process has many advantages, such as enhanced formability, the capability to form complex parts, higher dimensional accuracy and surface quality, reduced tool costs, and reduced die wear compared to conventional sheet metal forming processes. There are two types of sheet hydroforming. One of them is hydromechanical deep drawing (HDD), a special drawing process in which a pressurized fluid medium is used instead of one of the die halves of the conventional deep drawing (CDD) process. The other is sheet hydroforming with die (SHF-D), in which the blank is formed by the action of fluid pressure and takes the shape of the die half. In this study, the transition of cup height with respect to cup diameter between the processes was determined by simulating the processes with finite element analysis. First, the SHF-D process was simulated for a 40 mm cup diameter at different cup heights changing from 10 mm to 30 mm, and the cup height-to-diameter ratio at which a successful forming is no longer possible was determined. Then the same ratio was checked for a different cup diameter of 60 mm. Thickness distributions of the cups formed by the SHF-D and HDD processes were then compared for these cup heights. Consequently, it was found that the thickness distribution of the HDD process in the analyses was more uniform.
Keywords: finite element analysis, HDD, hydroforming, sheet metal forming, SHF-D
Procedia PDF Downloads 429
1162 Optimal Geothermal Borehole Design Guided By Dynamic Modeling
Authors: Hongshan Guo
Abstract:
Ground-source heat pumps (GSHPs) provide stable and reliable heating and cooling when designed properly. The confounding effect of the borehole depth on a GSHP system, however, is rarely taken into account in any optimization: the determination of the borehole depth usually comes prior to the selection of the corresponding system components and, thereafter, any optimization of the GSHP system. The depth of the borehole is important to any GSHP system because the shallower the borehole, the larger the fluctuation of the near-borehole soil temperature. This could lead to fluctuations of the coefficient of performance (COP) of the GSHP system in the long term when the heating/cooling demand is large. Yet the deeper the boreholes are drilled, the higher the drilling cost and the operational expenses for circulation. A controller is developed that reads different building load profiles, optimizes for the smallest costs and temperature fluctuation at the borehole wall, and eventually provides the borehole depth as the output. Due to the nonlinear dynamic nature of the GSHP system, it was found that, between the conventional optimal control problem and the model predictive control (MPC) problem, the latter was more feasible because both the trajectory during the iteration and the final output could be computed and compared against. Aside from a few scenarios with different weighting factors, the resulting system costs were verified with literature and reports and were found to be relatively accurate, while the temperature fluctuation at the borehole wall was also found to be within an acceptable range. It was therefore determined that MPC is adequate for optimizing the investment as well as the system performance for various outputs.
Keywords: geothermal borehole, MPC, dynamic modeling, simulation
Procedia PDF Downloads 287
1161 Power Allocation Algorithm for Orthogonal Frequency Division Multiplexing Based Cognitive Radio Networks
Authors: Bircan Demiral
Abstract:
Cognitive radio (CR) is a promising technology that addresses the spectrum scarcity problem for future wireless communications. Orthogonal Frequency Division Multiplexing (OFDM) technology provides more power band ratios for cognitive radio networks (CRNs). While CR is a solution to spectrum scarcity, it also brings up the capacity problem. In this paper, a novel power allocation algorithm that aims at maximizing the sum capacity in OFDM-based cognitive radio networks is proposed. The proposed allocation algorithm is based on the previously developed water-filling algorithm. To reduce the computational complexity of the water-filling algorithm, the proposed algorithm allocates the total power across the subcarriers. The power allocated to the subcarriers increases the sum capacity. To see this increase, a MATLAB program was used, and the proposed power allocation was compared with the average power allocation, water-filling, and general power allocation algorithms. The water-filling algorithm performed worse than the proposed algorithm, while it performed better than the other two algorithms. The proposed algorithm is better than the other algorithms in terms of capacity increase. In addition, the effect of a change in the number of subcarriers on capacity was discussed. Simulation results show that an increase in the number of subcarriers increases the capacity.
Keywords: cognitive radio network, OFDM, power allocation, water filling
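For reference, a minimal sketch of the classical water-filling allocation that the proposed algorithm builds on, assuming illustrative per-subcarrier channel gains and noise powers; the cognitive-radio interference constraints and the paper's reduced-complexity scheme are not reproduced here.

```python
import numpy as np

def water_filling(gains, noise, p_total, iters=60):
    """Allocate p_total over subcarriers to maximise sum log2(1 + g*p/n).
    The water level is found by bisection."""
    inv_cnr = noise / gains                      # inverse carrier-to-noise ratios
    lo, hi = inv_cnr.min(), inv_cnr.max() + p_total
    for _ in range(iters):
        mu = 0.5 * (lo + hi)                     # candidate water level
        if np.maximum(mu - inv_cnr, 0.0).sum() > p_total:
            hi = mu                              # too much power poured in
        else:
            lo = mu
    p = np.maximum(lo - inv_cnr, 0.0)
    return p, np.sum(np.log2(1.0 + gains * p / noise))

rng = np.random.default_rng(1)
n_sub = 16
gains = rng.exponential(1.0, n_sub)              # fading power gains (placeholder)
noise = np.full(n_sub, 1e-2)
p_wf, c_wf = water_filling(gains, noise, p_total=1.0)
p_eq = np.full(n_sub, 1.0 / n_sub)               # equal (average) allocation
c_eq = np.sum(np.log2(1.0 + gains * p_eq / noise))
print(f"water-filling capacity: {c_wf:.2f} bit/s/Hz, equal-power: {c_eq:.2f}")
```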
Procedia PDF Downloads 137
1160 Mechanical Characterization of Porcine Skin with the Finite Element Method Based Inverse Optimization Approach
Authors: Djamel Remache, Serge Dos Santos, Michael Cliez, Michel Gratton, Patrick Chabrand, Jean-Marie Rossi, Jean-Louis Milan
Abstract:
Skin tissue is an inhomogeneous and anisotropic material. Uniaxial tensile testing is one of the primary testing techniques for the mechanical characterization of skin at large scales. In order to predict the mechanical behavior of materials, direct or inverse analytical approaches are often used. However, in the case of an inhomogeneous and anisotropic material such as skin tissue, analytical approaches are not able to provide solutions. Numerical simulation is thus necessary. In this work, the uniaxial tensile test and the finite element method (FEM) based inverse method were used to identify the anisotropic mechanical properties of porcine skin tissue. The uniaxial tensile experiments were performed using an Instron 8800 tensile machine. The uniaxial tensile test was simulated with FEM, and then the inverse optimization approach (or inverse calibration) was used to identify the mechanical properties of the samples. Experimental results were compared to the finite element solutions. The results showed that the finite element model predictions of the mechanical behavior of the tested skin samples were well correlated with the experimental results.
Keywords: mechanical skin tissue behavior, uniaxial tensile test, finite element analysis, inverse optimization approach
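A minimal sketch of the inverse-calibration loop, with a closed-form anisotropic stress function standing in for the finite element solver; the model form, parameter names, and synthetic "experimental" data below are assumptions for illustration only and do not reproduce the constitutive model used in the study.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, strain, angle_deg):
    """Surrogate for the FEM prediction: exponential stress-strain response whose
    stiffness depends on the loading direction (illustrative, not a skin model)."""
    c_fibre, c_cross, k = params
    a = np.radians(angle_deg)
    c = c_fibre * np.cos(a) ** 2 + c_cross * np.sin(a) ** 2
    return c * (np.exp(k * strain) - 1.0)

strain = np.linspace(0.0, 0.3, 20)
true_params = np.array([2.0, 0.8, 6.0])          # "unknown" material parameters
rng = np.random.default_rng(2)
tests = {}                                       # tensile tests along two directions
for ang in (0.0, 90.0):
    tests[ang] = forward_model(true_params, strain, ang) + rng.normal(0, 0.02, strain.size)

def residuals(params):
    # Stack the misfit between model prediction and measured data for all tests.
    return np.concatenate([forward_model(params, strain, ang) - data
                           for ang, data in tests.items()])

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0], bounds=(0.0, np.inf))
print("identified parameters:", np.round(fit.x, 3))
```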
Procedia PDF Downloads 408
1159 Discrete Element Modeling of the Effect of Particle Shape on Creep Behavior of Rockfills
Authors: Yunjia Wang, Zhihong Zhao, Erxiang Song
Abstract:
Rockfills are widely used in civil engineering, such as in dams, railways, and airport foundations in mountain areas. A significant long-term post-construction settlement may affect the serviceability or even the safety of rockfill infrastructures. The creep behavior of rockfills is influenced by a number of factors, such as particle size, strength and shape, water condition, and stress level. However, the effect of particle shape on rockfill creep still remains poorly understood and deserves careful investigation. The particle-based discrete element method (DEM) was used to simulate the creep behavior of rockfills under different boundary conditions. Both angular and rounded particles were considered in this numerical study in order to investigate the influence of particle shape. The preliminary results showed that angular particles experience more breakage and larger creep strains under one-dimensional compression than rounded particles. On the contrary, larger creep strains were observed in the rounded specimens in the direct shear test. The mechanism responsible for this difference is that the possibility of key particles existing is higher in rounded-particle assemblies than in angular-particle assemblies. The above simulations demonstrate that the influence of particle shape on the creep behavior of rockfills can be properly simulated by DEM. The DEM simulation method may facilitate our understanding of the deformation properties of rockfill materials.
Keywords: rockfills, creep behavior, particle crushing, discrete element method, boundary conditions
Procedia PDF Downloads 313
1158 Analyzing Façade Scenarios and Daylight Levels in the Reid Building: A Reflective Case Study on the Designed Daylight under Overcast Sky
Authors: Eman Mayah, Raid Hanna
Abstract:
This study presents the use of daylight in the case study of the Reid Building at the Glasgow School of Art in the city of Glasgow, UK. In Nordic countries, daylight is one of the main considerations in building design, especially in the face of long, lightless winters. A shortage of daylight, contributing to dark and gloomy conditions, necessitates that designs incorporate strong daylight performance. As such, the building in question is designed to capture natural light for varying needs, with studios located on the North and South façades. The study's approach presents an analysis of different façade scenarios, where daylight from the North is observed, analyzed, and compared with daylight from the South façade for various design studios in the building. The findings are then correlated with the results of daylight levels from a daylight simulation program (Autodesk Ecotect Analysis) for the investigated studios. The study finds a dramatic difference in daylight nature and levels between the North and South façades, where orientation, obstructions, and the designed façade fenestrations have major effects on the findings. The study concludes that some of the studios positioned on the North façade do not have a desirable quality of diffused northern light, due to obstructions from surrounding buildings, the area and volume of the studio, and the shadow effect of the designed mezzanine floor in the studios.
Keywords: daylight levels, educational building, façade fenestration, overcast weather
Procedia PDF Downloads 404
1157 Dynamic Analysis of a Moderately Thick Plate on Pasternak Type Foundation under Impact and Moving Loads
Authors: Neslihan Genckal, Reha Gursoy, Vedat Z. Dogan
Abstract:
In this study, the dynamic responses of composite plates on elastic foundations subjected to impact and moving loads are investigated. First-order shear deformation theory (FSDT) is used for moderately thick plates. A Pasternak-type (two-parameter) elastic foundation is assumed, and the elastic foundation effects are integrated into the governing equations. It is assumed that the plate is first hit by a mass as an impact-type loading, and then the mass continues to move on the composite plate as a distributed moving load, which resembles an aircraft landing on airport pavements. The impact and moving loads are modeled by a mass-spring-damper system with a wheel. The wheel is assumed to be continuously in contact with the plate after impact. The governing partial differential equations of motion for the displacements are converted into ordinary differential equations in the time domain by using Galerkin's method. Then, these sets of equations are solved using the Runge-Kutta method. Several parameters, such as the vertical and horizontal velocities of the aircraft, the volume fractions of the steel rebar in the reinforced concrete layer, and the different touchdown locations of the aircraft tire on the runway, are considered in the numerical simulation. The results are compared with those of ABAQUS, a commercial finite element code.
Keywords: elastic foundation, impact, moving load, thick plate
Procedia PDF Downloads 313
1156 Intelligent Agent-Based Model for the 5G mmWave O2I Technology Adoption
Authors: Robert Joseph M. Licup
Abstract:
The deployment of the fifth-generation (5G) mobile system through mmWave frequencies is the new solution to the requirement of providing higher bandwidth readily available for all users. The usage pattern of mobile users has moved towards work-from-home or online-class set-ups because of the pandemic. Previous mobile technologies can no longer meet the high speed and bandwidth requirements needed, given the drastic shift of transactions to the home. The underutilized millimeter-wave (mmWave) frequencies are used by fifth-generation (5G) cellular networks to support multi-gigabit-per-second (Gbps) transmission. However, due to its short wavelengths, high path loss, directivity, blockage sensitivity, and narrow beamwidth are some of the technical challenges that need to be addressed. Different tools, technologies, and scenarios are explored to support network design, accurate channel modeling, implementation, and deployment effectively. However, there is a big challenge in how consumers will adopt this solution and maximize the benefits offered by 5G technology. This research proposes to study the intricacies of technology diffusion, individual attitudes, behaviors, and how technology adoption will be attained. An agent-based simulation model shaped by the actual applications, the technology solution, and the related literature was used to arrive at a computational model. The research examines the different attributes, factors, and intricacies that can affect each identified agent's move towards technology adoption.
Keywords: agent-based model, AnyLogic, 5G O2I, 5G mmWave solutions, technology adoption
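A minimal agent-based sketch of adoption dynamics of the kind described above, assuming a simple threshold/social-influence rule on a random contact network; the attribute names, weights, and parameter values are illustrative and are not taken from the AnyLogic model of the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n_agents, n_steps = 500, 30
attitude = rng.uniform(0.0, 1.0, n_agents)       # individual attitude toward 5G mmWave
threshold = rng.uniform(0.2, 0.8, n_agents)      # influence needed before adopting
adopted = rng.random(n_agents) < 0.02            # a few early adopters

# Random contact network: each agent observes k random peers.
k = 8
peers = rng.integers(0, n_agents, size=(n_agents, k))

history = []
for t in range(n_steps):
    peer_share = adopted[peers].mean(axis=1)     # fraction of adopted peers
    influence = 0.6 * peer_share + 0.4 * attitude
    adopted = adopted | (influence > threshold)  # adoption is absorbing
    history.append(adopted.mean())

print("adoption share every 5 steps:", [round(h, 2) for h in history[::5]])
```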
Procedia PDF Downloads 108
1155 Comparative Analysis of Single Versus Multi-IRS Assisted Multi-User Wireless Communication System
Authors: Ayalew Tadese Kibret, Belayneh Sisay Alemu, Amare Kassaw Yimer
Abstract:
Intelligent reflecting surfaces (IRSs) are considered to be a key enabling technology for sixth-generation (6G) wireless networks. IRSs are fabricated electromagnetic (EM) surfaces with integrated electronics and electronically controlled processes, designed particularly for wireless communication. IRSs improve the signal quality at the receiver without the need for complex signal processing or encoding and decoding steps. Improving vital performance parameters such as energy efficiency (EE) and spectral efficiency (SE) has frequently been the primary goal of research in order to meet the increasing requirements for advanced services in future 6G communications. In this research, we conduct a comparative analysis of single- and multi-IRS wireless communication networks in terms of energy and spectral efficiency. Energy efficiency versus user distance, energy efficiency versus signal-to-noise ratio, and spectral efficiency versus user distance are the basis of our results with 1, 2, 4, and 6 IRSs. According to the results of our simulation, in terms of energy and spectral efficiency, six IRSs perform better than four, two, or a single IRS. Overall, our results suggest that multi-IRS-assisted wireless communication systems outperform single-IRS systems in terms of communication performance.
Keywords: sixth-generation (6G), wireless networks, intelligent reflecting surfaces, energy efficiency, spectral efficiency
Procedia PDF Downloads 25
1154 Early Warning System of Financial Distress Based On Credit Cycle Index
Authors: Bi-Huei Tsai
Abstract:
Previous studies on financial distress prediction choose the conventional failing and non-failing dichotomy; however, the extent of distress differs substantially among different financial distress events. To address this problem, "non-distressed", "slightly-distressed" and "reorganization and bankruptcy" are used in our article to approximate the continuum of corporate financial health. This paper explains different financial distress events using a two-stage method. First, this investigation adopts firm-specific financial ratios, corporate governance and market factors to measure the probability of various financial distress events based on multinomial logit models. Specifically, a bootstrapping simulation is performed to examine the difference in estimated misclassification cost (EMC). Second, this work further applies macroeconomic factors to establish a credit cycle index and determines the distressed cut-off indicator of the two-stage models using this index. Two different models, a one-stage and a two-stage prediction model, are developed to forecast financial distress, and the results acquired from the different models are compared with each other and with the collected data. The findings show that the two-stage model incorporating financial ratios, corporate governance and market factors has the lowest misclassification error rate. The two-stage model is more accurate than the one-stage model, as its distressed cut-off indicators are adjusted according to the macroeconomic-based credit cycle index.
Keywords: multinomial logit model, corporate governance, company failure, reorganization, bankruptcy
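A minimal sketch of the first-stage multinomial logit on synthetic firm data, assuming three distress classes and generic feature names; the actual financial-ratio, governance, and market variables of the study, the EMC bootstrap, and the credit-cycle cut-off are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 1500
X = rng.normal(size=(n, 4))                      # placeholder firm-level predictors
logits = np.column_stack([                       # synthetic class scores
    0.0 * X[:, 0],
    1.2 * X[:, 0] - 0.8 * X[:, 1],
    2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.7 * X[:, 3],
])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])  # 0 non-distressed, 1 slight, 2 bankrupt

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000)        # fits a multinomial logit for 3 classes
model.fit(X_tr, y_tr)
print("hold-out misclassification rate:", round(1.0 - model.score(X_te, y_te), 3))
```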
Procedia PDF Downloads 377
1153 Understanding the Complexities of Consumer Financial Spinning
Authors: Olivier Mesly
Abstract:
This research presents a conceptual framework termed "Consumer Financial Spinning" (CFS) to analyze consumer behavior in the financial/economic markets. This phenomenon occurs when consumers of high-stakes financial products accumulate unsustainable debt, leading them to detach from their initial financial hierarchy of needs, wealth-related goals, and preferences regarding their household portfolio of assets. The daring actions of these consumers, forming a dark financial triangle, are characterized by three behaviors: overconfidence, the use of rationed rationality, and deceitfulness. We show that CFS can be incorporated into the traditional CAPM and Markowitz portfolio optimization models to create a framework that explains such market phenomena as the global financial crisis, highlighting the antecedents and consequences of ill-conceived speculation. Because this is a conceptual paper, there is no methodology with respect to ground studies. However, we apply modeling principles derived from the data percolation methodology, which contains tenets explicating how to structure concepts. A simulation test of the proposed framework is conducted; it demonstrates the conditions under which the relationship between expected returns and risk may deviate from linearity. The analysis and conceptual findings are particularly relevant both theoretically and pragmatically, as they shed light on the psychological conditions that drive intense speculation, which can lead to market turmoil. Armed with such understanding, regulators are better equipped to propose solutions before the economic problems become out of control.
Keywords: consumer financial spinning, rationality, deceitfulness, overconfidence, CAPM
Procedia PDF Downloads 48
1152 A Multi-Objective Programming Model to Supplier Selection and Order Allocation Problem in Stochastic Environment
Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh
Abstract:
This paper aims at developing a multi-objective model for the supplier selection and order allocation problem in a stochastic environment, where the purchasing cost, the percentage of items delivered late, and the percentage of rejected items provided by each supplier are assumed to be stochastic parameters following arbitrary probability distributions. In this regard, dependent chance programming is used, which maximizes the probability of the event that the total purchasing cost, the total number of items delivered late, and the total number of rejected items are less than or equal to pre-determined values given by the decision maker. The above-mentioned stochastic multi-objective programming problem is then transformed into a stochastic single-objective programming problem using the minimum deviation method. In the next step, the resulting problem is solved by applying a genetic algorithm, which performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, the impact of the stochastic parameters on the obtained solution is examined via a sensitivity analysis exploiting the coefficient of variation. The results show that the greater the coefficients of variation of the stochastic parameters, the worse the value of the objective function in the stochastic single-objective programming problem.
Keywords: supplier selection, order allocation, dependent chance programming, genetic algorithm
Procedia PDF Downloads 313
1151 Encoded Fiber Optic Sensors for Simultaneous Multipoint Sensing
Authors: C. Babu Rao, Pandian Chelliah
Abstract:
Owing to their reliability, a number of fluorescence-spectra-based fiber optic sensors have been developed for the detection and identification of hazardous chemicals such as explosives, narcotics, etc. In high-security areas, such as airports, it is important to monitor multiple locations simultaneously. This calls for the deployment of a portable sensor at each location. However, the selectivity and sensitivity of these techniques depend on the spectral resolution of the spectral analyzer: the better the resolution, the larger the repertoire of chemicals that can be detected. A portable unit will have limitations in meeting these requirements. Optical fibers can be employed for collecting and transmitting the spectral signal from the portable sensor head to a sensitive central spectral analyzer (CSA). For multipoint sensing, optical multiplexing of multiple sensor heads with the CSA has to be adopted. However, with multiplexing, when one sensor head is connected to the CSA, the rest may remain unconnected for the turn-around period; the larger the number of sensor heads, the larger this turn-around time will be. To circumvent this limitation, we propose in this paper an optical encoding methodology to use multiple portable sensor heads connected to a single CSA. Each portable sensor head is assigned a unique address. The spectra of every chemical detected through this sensor head are encoded by its unique address and can be identified at the CSA end. The proposed methodology is demonstrated through a simulation using MATLAB Simulink.
Keywords: optical encoding, fluorescence, multipoint sensing
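A minimal numerical sketch of the address-encoding idea, assuming each sensor head modulates its fluorescence spectrum with a unique orthogonal Walsh-Hadamard on/off code so the CSA can recover every head's spectrum from the summed signal by correlation; the code scheme and toy spectra below are illustrative and not the MATLAB Simulink implementation of the paper.

```python
import numpy as np
from scipy.linalg import hadamard

n_heads, n_chips, n_bins = 4, 8, 200
wavelength = np.linspace(400.0, 700.0, n_bins)   # nm

def spectrum(center, width):
    """Toy Gaussian fluorescence band standing in for a chemical signature."""
    return np.exp(-0.5 * ((wavelength - center) / width) ** 2)

spectra = np.array([spectrum(450 + 60 * i, 15 + 5 * i) for i in range(n_heads)])

# Unique +/-1 address code per head (rows of a Hadamard matrix, skipping the all-ones row).
codes = hadamard(n_chips)[1:n_heads + 1].astype(float)

# Received signal at the CSA: for each time chip, the sum of the encoded spectra.
received = codes.T @ spectra                     # shape (n_chips, n_bins)

# Decoding: correlate the received chips with each head's code.
decoded = (codes @ received) / n_chips           # recovers each head's spectrum
print("max reconstruction error:", float(np.max(np.abs(decoded - spectra))))
```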
Procedia PDF Downloads 710
1150 3D Steady and Transient Centrifugal Pump Flow within Ansys CFX and OpenFOAM
Authors: Clement Leroy, Guillaume Boitel
Abstract:
This paper presents a comparative benchmarking review of steady and transient three-dimensional (3D) flow computations in a centrifugal pump using commercial (Ansys CFX) and open source (OpenFOAM) computational fluid dynamics (CFD) software. In a centrifugal rotordynamic pump, the fluid enters the impeller along the rotation axis and is accelerated in order to increase the pressure, flowing radially outward into another stage, a vaned diffuser or volute casing, from where it finally exits into a downstream pipe. Simulations are carried out at the best efficiency point (BEP) and at part load, for single-phase flow with several turbulence models. The results are compared with the overall performance report from experimental data. The use of CFD technology in industry is still limited by the high computational costs, and even more by the high cost of commercial CFD software and high-performance computing (HPC) licenses. The main objectives of the present study are to define an OpenFOAM methodology for high-quality 3D steady and transient turbomachinery CFD simulation and to conduct a thorough time-accurate performance analysis. On the other hand, a detailed comparison between the computational methods and features of the latest Ansys release 18 and OpenFOAM is investigated to assess the accuracy and industrial applicability of those solvers. Finally, an automated connected workflow (IoT) for turbine blade applications is presented.
Keywords: benchmarking, CFX, internet of things, OpenFOAM, time-accurate, turbomachinery
Procedia PDF Downloads 204
1149 Swastika Shape Multiband Patch Antenna for Wireless Applications on Low Cost Substrate
Authors: Md. Samsuzzaman, M. T. Islam, J. S. Mandeep, N. Misran
Abstract:
In this article, a compact, simple-structure, modified swastika-shaped multiband patch antenna on a substrate of readily available low-cost polymer resin composite material is designed for Wi-Fi and WiMAX applications. The substrate material consists of an epoxy matrix reinforced by woven glass. The designed microstrip-line-fed compact antenna comprises a planar wide square slot ground with four slits and a swastika-shaped radiating patch with a rectangular slot. The effect of different substrate materials on the reflection coefficients of the proposed antennas was also analyzed. It can be clearly seen that the proposed antenna provides a wider bandwidth and an acceptable return loss value compared to other reported materials. The simulation results show that the antenna has an impedance bandwidth with -10 dB return loss at 3.01-3.89 GHz and 4.88-6.10 GHz, which covers the WLAN, WiMAX, and public safety WLAN bands. The proposed swastika-shaped antenna was designed and analyzed using the finite-element-method-based simulator HFSS and designed on a low-cost FR4 (polymer resin composite material) printed circuit board. The electrical performance and superior frequency characteristics make the proposed antenna material desirable for wireless communications.
Keywords: epoxy resin polymer, multiband, swastika shaped, wide slot, WLAN/WiMAX
Procedia PDF Downloads 452
1148 NOx Prediction by Quasi-Dimensional Combustion Model of Hydrogen Enriched Compressed Natural Gas Engine
Authors: Anas Rao, Hao Duan, Fanhua Ma
Abstract:
Dependency on fossil fuels can be minimized by using hydrogen-enriched compressed natural gas (HCNG) in transportation vehicles. However, the NOx emissions of HCNG engines are significantly higher, and this has turned out to be their major drawback. Therefore, the study of the NOx emissions of HCNG engines is a very important area of research. In this context, experiments have been performed at different hydrogen percentages, ignition timings, air-fuel ratios, manifold absolute pressures, loads, and engine speeds. Afterwards, the simulation has been accomplished with a quasi-dimensional combustion model of the HCNG engine. In order to investigate the NOx emissions, the NO mechanism has been coupled to the quasi-dimensional combustion model of the HCNG engine. Three NOx mechanisms, the thermal NOx, prompt NOx and N2O mechanisms, have been used to predict NOx emissions. For validation purposes, the NO curve has been transformed into NO packets based on a temperature difference of 100 K for lean-burn and 60 K for stoichiometric conditions, while the width of each packet has been taken as the ratio of the crank duration of the packet to the total burn duration. The combustion chamber of the engine has been divided into three zones, with each zone equal to the product of the summation of NO packets and space. In order to check the accuracy of the model, the percentage error of the NOx emissions has been evaluated, and it lies in the range of ±6% and ±10% for the lean-burn and stoichiometric conditions, respectively. Finally, the percentage contribution of each NO formation mechanism has been evaluated.
Keywords: quasi-dimensional combustion, thermal NO, prompt NO, NO packet
Procedia PDF Downloads 251
1147 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights
Authors: Nelson Bii, Christopher Ouma, John Odhiambo
Abstract:
Non-response is a potential source of errors in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques for reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, assuming full auxiliary information is available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response. In particular, the auxiliary information is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error values compared to existing estimators of the finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths
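For reference, a minimal sketch of the standard Nadaraya-Watson kernel regression estimator that the proposed estimator builds on, using a Gaussian kernel and synthetic data; the two-stage cluster-sampling and non-response machinery of the paper is not reproduced.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """m_hat(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h), Gaussian K."""
    u = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)                    # Gaussian kernel weights
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 10.0, 300)                  # auxiliary variable
y = np.sin(x) + 0.5 * x + rng.normal(0.0, 0.3, x.size)   # survey variable
grid = np.linspace(0.0, 10.0, 5)
print(np.round(nadaraya_watson(x, y, grid, bandwidth=0.5), 2))
```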
Procedia PDF Downloads 139
1146 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder
Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen
Abstract:
Including data from previous studies (historical data) in the analysis of a current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial. However, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single control arm and for multiple historical control arms. Here, we examine the performance of the MAP and MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and the negative binomial models. We conducted an extensive simulation study to assess the performance of the Bayesian approaches. Additionally, we illustrate our approaches on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to statistical power. When the means across the control arms are different, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters are different, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
Keywords: count data, meta-analytic prior, negative binomial, Poisson
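A minimal conjugate sketch of borrowing historical count data through a power prior with a fixed discounting weight delta, assuming a Gamma prior on the Poisson event rate; the modified power prior of the paper treats delta as random and handles over-dispersion, which is not reproduced here.

```python
import numpy as np

def poisson_power_prior_posterior(y_current, y_hist, delta, a0=0.5, b0=0.5):
    """Gamma(a0, b0) initial prior on the Poisson rate; the historical likelihood
    enters raised to the power delta in [0, 1], so the posterior is
    Gamma(a0 + delta*sum(y_hist) + sum(y_current),
          b0 + delta*len(y_hist) + len(y_current))."""
    a = a0 + delta * np.sum(y_hist) + np.sum(y_current)
    b = b0 + delta * len(y_hist) + len(y_current)
    return a, b                                   # shape and rate of the posterior

rng = np.random.default_rng(6)
y_hist = rng.poisson(3.0, size=120)               # historical control arm (synthetic)
y_curr = rng.poisson(3.2, size=40)                # current control arm (synthetic)
for delta in (0.0, 0.5, 1.0):
    a, b = poisson_power_prior_posterior(y_curr, y_hist, delta)
    mean, sd = a / b, np.sqrt(a) / b
    print(f"delta={delta:.1f}: posterior mean={mean:.2f}, sd={sd:.3f}")
```

Increasing delta borrows more historical information, which shrinks the posterior standard deviation but pulls the estimate toward the historical mean.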
Procedia PDF Downloads 117
1145 Bioeconomic Modelling for Barramundi (Lates calcarifer) in Queensland: Implications for Recreational Fishing Following Recent Gill Netting Closures
Authors: Sabiha S. Marine, Nicole Flint, John Rolfe
Abstract:
The Queensland state government introduced commercial gill net fishing closures in Cairns, Mackay, and Rockhampton in November 2015 to increase recreational fishing opportunities, nature-based tourism, and economic benefits in these three regional areas. This management change is likely to improve the potential for more desirable stock structures through natural recruitment. Barramundi (Lates calcarifer) is a popular target fish for recreational and commercial fishers in Northern Australia. This investigation examines the effects of reduced commercial fishing from both biological and economic perspectives, particularly on the local Barramundi population of the Fitzroy River in Rockhampton, the largest river catchment flowing to the eastern coast of Australia. Data on different biological and economic parameters have been collated from secondary sources for analysis through a system simulation approach to identify the effectiveness of the commercial netting closures on recreational fishing effort, especially for the Barramundi population. The results have the potential to explain certain consequences of the netting closures in Queensland, which could serve to inform future fisheries management decisions. The study output as a whole will help in the better management of fisheries resources by evaluating recreational fishing opportunities in Queensland, where the potential for increases in recreation is high.
Keywords: Barramundi, bioeconomic model, fishery management, recreational fishing
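A minimal sketch of a logistic surplus-production model of the kind used in bioeconomic simulations, comparing stock and recreational-harvest trajectories with and without a commercial fishing component; all parameter values are illustrative assumptions, not estimates for the Fitzroy River Barramundi stock.

```python
import numpy as np

def simulate(years, r, K, B0, f_commercial, f_recreational, q=1.0):
    """Schaefer-type dynamics: B[t+1] = B[t] + r*B*(1 - B/K) - harvest."""
    B = np.empty(years + 1)
    B[0] = B0
    catch_rec = np.empty(years)
    for t in range(years):
        effort = f_commercial + f_recreational
        harvest = min(q * effort * B[t], B[t])    # total harvest this year
        catch_rec[t] = harvest * (f_recreational / effort)
        B[t + 1] = max(B[t] + r * B[t] * (1.0 - B[t] / K) - harvest, 0.0)
    return B, catch_rec

# Illustrative parameters: intrinsic growth r, carrying capacity K (tonnes).
before = simulate(20, r=0.4, K=1000.0, B0=400.0, f_commercial=0.15, f_recreational=0.05)
after = simulate(20, r=0.4, K=1000.0, B0=400.0, f_commercial=0.00, f_recreational=0.05)
print("final biomass with nets:", round(before[0][-1], 1),
      "| after closure:", round(after[0][-1], 1))
print("mean recreational catch with nets:", round(before[1].mean(), 1),
      "| after closure:", round(after[1].mean(), 1))
```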
Procedia PDF Downloads 166
1144 A Micro-Scale of Electromechanical System Micro-Sensor Resonator Based on UNO-Microcontroller for Low Magnetic Field Detection
Authors: Waddah Abdelbagi Talha, Mohammed Abdullah Elmaleeh, John Ojur Dennis
Abstract:
This paper focuses on the simulation and implementation of a resonator micro-sensor for low magnetic field sensing based on a U-shaped cantilever and a piezoresistive configuration, which works on the basis of the Lorentz force. The resonance frequency is an important parameter because it corresponds to the highest response and sensitivity in the frequency domain (frequency response) of any vibrating micro-electromechanical system (MEMS) device, and it is important for determining the direction of the detected magnetic field. The deflection of the cantilever is considered for the vibration mode at different frequencies in the range of 0 Hz to 7000 Hz for the purpose of observing the frequency response. A simple electronic circuit based on polysilicon piezoresistors in a Wheatstone bridge configuration is used to transduce the response of the cantilever into electrical measurements at various voltages. An Arduino UNO microcontroller program and the PROTEUS electronics software are used to analyze the output signals from the sensor. The highest output voltage amplitude, of about 4.7 mV, is observed at about 3 kHz in the frequency domain, indicating the highest sensitivity, which can be called the resonant sensitivity. Based on the resonant frequency value, the mode of vibration is determined (up-down vibration), and based on that, the vector of the magnetic field is also determined.
Keywords: resonant frequency, sensitivity, Wheatstone bridge, UNO-microcontroller
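A small worked sketch of the Wheatstone-bridge transduction step, assuming a quarter bridge of nominally equal polysilicon piezoresistors in which the Lorentz-force deflection changes one resistance by a small fraction; the resistor values and excitation voltage are illustrative, not those of the fabricated device.

```python
def bridge_output(v_excite, r1, r2, r3, r4):
    """Differential output of a Wheatstone bridge with arms r1..r4:
    Vout = Vexc * (r3/(r3 + r4) - r2/(r1 + r2))."""
    return v_excite * (r3 / (r3 + r4) - r2 / (r1 + r2))

v_exc = 5.0          # V, bridge excitation (placeholder)
r0 = 1000.0          # ohms, nominal piezoresistor value (placeholder)
for ppm in (0, 500, 1000, 2000):           # strain-induced relative change in one arm
    dr = r0 * ppm * 1e-6
    vout = bridge_output(v_exc, r0, r0, r0 + dr, r0)
    # Quarter-bridge small-signal approximation: Vout ~ Vexc/4 * dR/R
    print(f"dR/R = {ppm} ppm -> Vout = {vout*1e3:.3f} mV "
          f"(approx {v_exc/4*ppm*1e-6*1e3:.3f} mV)")
```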
Procedia PDF Downloads 127
1143 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions
Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal
Abstract:
We present in this work our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows emission inventories to be generated from a reduced set of input parameters adapted to the existing conditions in Morocco and in other developing countries. While several simplifications are made, the full performance of the model is kept. A further important advantage of the model is that it allows the uncertainty of the emission rate to be calculated with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented and tested against a reference solution. It provides an improvement in accuracy over previous formulas of the line-source Gaussian plume model, without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors will be canceled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, the combined use of discretized-source and analytical line-source formulas remarkably minimizes the error. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport
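A minimal sketch of the discretized line-source approach mentioned above: the road is split into short elements, each treated as a point source with the standard Gaussian plume formula including ground reflection. The dispersion-coefficient power laws, emission rate, and geometry below are illustrative placeholders, not the DISPOLSPEM parameterisation.

```python
import numpy as np

def point_plume(q, u, x, y, z, h, sy, sz):
    """Gaussian plume concentration of a point source of strength q (g/s) at
    downwind distance x (m), with ground reflection; sy, sz are dispersion sigmas (m)."""
    lateral = np.exp(-0.5 * (y / sy) ** 2)
    vertical = np.exp(-0.5 * ((z - h) / sz) ** 2) + np.exp(-0.5 * ((z + h) / sz) ** 2)
    return q / (2.0 * np.pi * u * sy * sz) * lateral * vertical

def line_source(q_per_m, u, receptor, p0, p1, n_seg=200, h=0.5):
    """Approximate a finite line source (road from p0 to p1) by n_seg point elements."""
    xr, yr, zr = receptor
    seg_len = np.hypot(p1[0] - p0[0], p1[1] - p0[1]) / n_seg
    conc = 0.0
    for s in (np.arange(n_seg) + 0.5) / n_seg:   # element midpoints along the road
        xs = p0[0] + s * (p1[0] - p0[0])
        ys = p0[1] + s * (p1[1] - p0[1])
        x, y = xr - xs, yr - ys                  # wind assumed along +x
        if x <= 0.0:
            continue                             # receptor upwind of this element
        sy = 0.22 * x ** 0.9                     # illustrative sigma_y power law (m)
        sz = 0.20 * x ** 0.8                     # illustrative sigma_z power law (m)
        conc += point_plume(q_per_m * seg_len, u, x, y, zr, h, sy, sz)
    return conc

# Receptor 50 m downwind of a 200 m road emitting 0.001 g/s per metre.
c = line_source(q_per_m=1e-3, u=2.0, receptor=(50.0, 0.0, 1.5),
                p0=(0.0, -100.0), p1=(0.0, 100.0))
print(f"concentration at receptor: {c*1e6:.2f} microgram/m3")
```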
Procedia PDF Downloads 442
1142 Investigating the Shear Behaviour of Fouled Ballast Using Discrete Element Modelling
Authors: Ngoc Trung Ngo, Buddhima Indraratna, Cholachat Rujikiathmakjornr
Abstract:
For several hundred years, the design of railway tracks has remained practically unchanged. Traditionally, rail tracks are placed on a ballast layer for several reasons, including economy, rapid drainage, and high load bearing capacity. The primary function of ballast is to distribute dynamic track loads to the sub-ballast and subgrade layers, while also providing lateral resistance and allowing for rapid drainage. Under repeated train loads, the ballast becomes fouled due to ballast degradation and the intrusion of fines, which adversely affects the strength and deformation behaviour of the ballast. This paper presents the use of the three-dimensional discrete element method (DEM) in studying the shear behaviour of fouled ballast subjected to direct shear loading. Irregularly shaped ballast particles were modelled by grouping many spherical balls together in appropriate sizes to simulate representative ballast aggregates. Fouled ballast was modelled by injecting a specified number of miniature spherical particles into the void spaces. The DEM simulation highlights that the peak shear stress of the ballast assembly decreases and the dilation of the fouled ballast increases with an increasing level of fouling. Additionally, the distributions of the contact force chains and particle displacement vectors were captured during the shearing process, explaining the formation of the shear band and the evolution of the volumetric change of the fouled ballast.
Keywords: railway ballast, coal fouling, discrete element modelling, discrete element method
Procedia PDF Downloads 451
1141 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches
Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.
Abstract:
A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test for us, excites the actuators based on feedback from the sensors in a periodic manner. The sensors should provide the feedback to the System Under Test (SUT) within a deterministic time after the excitation of the actuators. Any delay or miss in the generation of the response or the acquisition of the excitation pulses may lead to control-loop computation errors, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available in the market may be the best solutions for such kinds of simulations, but they pose limitations such as the availability of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general purpose operating system (bare Linux kernel) to achieve deterministic deadlines and hence gain the added advantages of a GPOS with real-time features. Techniques are discussed for making the time-critical application run with the highest priority in an uninterrupted manner, reducing network latency for a distributed architecture, and handling real-time data acquisition, data storage and retrieval, user interactions, etc.
Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency
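A minimal sketch of one such strategy, assuming a Linux host: the time-critical loop requests a SCHED_FIFO real-time priority and paces itself against absolute deadlines so that scheduling jitter can be measured. Running it under SCHED_FIFO requires root or CAP_SYS_NICE; the period and workload are placeholders, not the test bed's actual configuration.

```python
import os
import time

PERIOD_NS = 5_000_000          # 5 ms control-loop period (placeholder)

def make_realtime(priority=80):
    """Request SCHED_FIFO so the loop preempts normal (SCHED_OTHER) tasks."""
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
    except PermissionError:
        print("warning: SCHED_FIFO needs root/CAP_SYS_NICE; running unprivileged")

def control_cycle():
    """Placeholder for reading sensors, computing the control law, driving actuators."""
    pass

make_realtime()
next_deadline = time.monotonic_ns() + PERIOD_NS
worst_overrun_ns = 0
for _ in range(1000):
    control_cycle()
    next_deadline += PERIOD_NS
    sleep_ns = next_deadline - time.monotonic_ns()
    if sleep_ns > 0:
        time.sleep(sleep_ns / 1e9)               # wait for the absolute deadline
    worst_overrun_ns = max(worst_overrun_ns, time.monotonic_ns() - next_deadline)
print("worst observed overrun:", worst_overrun_ns / 1e3, "microseconds")
```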
Procedia PDF Downloads 147
1140 Numerical Investigation of the Bio-fouling Roughness Effect on Tidal Turbine
Authors: O. Afshar
Abstract:
Unlike other renewable energy sources, tidal current energy is an extremely reliable, predictable and continuous energy source, as the current pattern and speed can be predicted throughout the year. A key concern associated with tidal turbines is their long-term reliability when operating in the hostile marine environment. Bio-fouling changes the physical shape and roughness of the turbine components, hence altering the overall turbine performance. This paper seeks to employ the Computational Fluid Dynamics (CFD) method to quantify the effects of this problem based on the obtained flow field information. The simulation is carried out on a NACA 63-618 aerofoil. The Reynolds-Averaged Navier-Stokes (RANS) equations with the Shear Stress Transport (SST) turbulence model are used to simulate the flow around the model. Different levels of fouling are studied on the 2D aerofoil surface, with quantified fouling height and density. In terms of lift and drag coefficients, the numerical results show good agreement with the experiment, which was carried out in a wind tunnel. The numerical results indicate that an increase in fouling thickness causes an increase in the drag coefficient and a reduction in the lift coefficient. Moreover, the pressure gradient gradually becomes adverse as the fouling height increases. In addition, the turbulent kinetic energy contours reveal that it increases with fouling height and extends into the wake due to flow separation.
Keywords: tidal energy, lift coefficient, drag coefficient, roughness
Procedia PDF Downloads 382
1139 Evaluation of Deformation for Deep Excavations in the Greater Vancouver Area Through Case Studies
Authors: Boris Kolev, Matt Kokan, Mohammad Deriszadeh, Farshid Bateni
Abstract:
Due to the increasing demand for real estate and the need for efficient land utilization in Greater Vancouver, developers have been increasingly considering the construction of high-rise structures with multiple below-grade parking levels. The temporary excavations required to allow for the construction of underground levels have recently reached up to 40 meters in depth. One of the challenges with deep excavations is the prediction of wall displacements and ground settlements due to their effect on the integrity of city utilities, infrastructure, and adjacent buildings. A large database of survey monitoring data has been collected for deep excavations in various soil conditions and shoring systems. The majority of the data collected is for tie-back anchor and shotcrete lagging systems. The data were categorized and analyzed, and the results were evaluated to find a relationship between the most dominant parameters controlling the displacement, such as the depth of excavation, the soil properties, and the tie-back anchor loading and arrangement. For a select number of deep excavations, finite element modeling was considered for the analyses. The lateral displacements from the simulation results were compared to the recorded survey monitoring data. The study concludes with a discussion and comparison of the available empirical and numerical modeling methodologies for evaluating lateral displacements in deep excavations.
Keywords: deep excavations, lateral displacements, numerical modeling, shoring walls, tieback anchors
Procedia PDF Downloads 181
1138 Analysis of Nonlinear Dynamic Systems Excited by Combined Colored and White Noise Excitations
Authors: Siu-Siu Guo, Qingxuan Shi
Abstract:
In this paper, single-degree-of-freedom (SDOF) systems subjected to white noise and colored noise excitations are investigated. By expressing the colored noise excitation as a second-order filtered white noise process and introducing the colored noise as an additional state variable, the equation of motion for the SDOF system under colored noise is artificially transformed into that of a multi-degree-of-freedom (MDOF) system under white noise excitation. As a consequence, the corresponding Fokker-Planck-Kolmogorov (FPK) equation governing the joint probability density function (PDF) of the state variables increases to four dimensions (4-D), and the solution procedure and computer programme become much more sophisticated. The exponential-polynomial closure (EPC) method, widely applied to SDOF systems under white noise excitations, is developed and improved for systems under colored noise excitations and for solving the complex 4-D FPK equation. On the other hand, the Monte Carlo simulation (MCS) method is performed to test the approximate EPC solutions. Two examples associated with Gaussian and non-Gaussian colored noise excitations are considered, and the corresponding band-limited power spectral densities (PSDs) for the colored noise excitations are given separately. Numerical studies show that the developed EPC method provides relatively accurate estimates of the stationary probabilistic solutions. Moreover, the statistical parameter of the mean up-crossing rate (MCR) is taken into account, which is important for reliability and failure analysis.
Keywords: filtered noise, narrow-banded noise, nonlinear dynamic, random vibration
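A minimal Monte Carlo sketch of the state-augmentation step described above: the colored excitation is generated as the output of a second-order filter driven by white noise, appended to the SDOF state, and the augmented system is integrated with the Euler-Maruyama scheme. The Duffing-type nonlinearity and all parameter values are illustrative assumptions, not the examples solved in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# SDOF oscillator: x'' + 2*zeta*w0*x' + w0^2*(x + eps*x^3) = f(t)
zeta, w0, eps = 0.05, 1.0, 0.5
# Second-order filter generating the colored excitation f from white noise w:
# f'' + 2*zf*wf*f' + wf^2*f = wf^2 * w(t)
zf, wf, S0 = 0.3, 2.0, 0.1                      # filter damping, frequency, noise level

dt, n_steps, n_samples = 2e-3, 50_000, 200
x = np.zeros((n_samples, 4))                    # augmented state: [x, x_dot, f, f_dot]

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(2.0 * np.pi * S0 * dt), n_samples)
    x0, x1, f0, f1 = x.T                        # views on the state columns
    dx = np.empty_like(x)
    dx[:, 0] = x1 * dt
    dx[:, 1] = (-2*zeta*w0*x1 - w0**2*(x0 + eps*x0**3) + f0) * dt
    dx[:, 2] = f1 * dt
    dx[:, 3] = (-2*zf*wf*f1 - wf**2*f0) * dt + wf**2 * dW
    x += dx                                     # Euler-Maruyama update

print("sample std of displacement:", round(float(x[:, 0].std()), 4))
print("sample std of velocity:    ", round(float(x[:, 1].std()), 4))
```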
Procedia PDF Downloads 225