Search results for: standard deviation (CSD)
3553 Model Predictive Control of Turbocharged Diesel Engine with Exhaust Gas Recirculation
Authors: U. Yavas, M. Gokasan
Abstract:
Control of the diesel engine air path has drawn a lot of attention due to its multi-input multi-output, closely coupled, non-linear dynamics. Today, precise control of the amount of air to be combusted is a must in order to meet tight emission limits and performance targets. In this study, a passenger-car-size diesel engine is modeled in AVL Boost RT and then simulated with standard, industry-level PID controllers. Finally, a linear model predictive controller is designed and simulated. This study shows the importance of modeling and control of diesel engines with flexible algorithm development in computer-based systems.
Keywords: predictive control, engine control, engine modeling, PID control, feedforward compensation
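The baseline controllers the study compares against can be illustrated with a minimal discrete PID loop. This is a sketch only: the gains, sample time, and first-order plant are assumptions for illustration, not the paper's AVL Boost RT engine model.

```python
# Minimal discrete PID sketch (illustrative; gains, dt, and the toy
# first-order plant are assumptions, not the paper's engine model).

def pid_step(error, state, kp, ki, kd, dt):
    """One PID update; state carries the integral and the previous error."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

def simulate(setpoint=1.0, steps=400, dt=0.01):
    """Track a setpoint with a toy first-order plant y' = (u - y) / tau."""
    y, tau = 0.0, 0.05
    state = (0.0, 0.0)
    for _ in range(steps):
        u, state = pid_step(setpoint - y, state, kp=2.0, ki=5.0, kd=0.01, dt=dt)
        y += dt * (u - y) / tau
    return y

print(round(simulate(), 2))
```

The integral term drives the steady-state error to zero, which is the behavior an MPC design is then benchmarked against.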
Procedia PDF Downloads 636
3552 Constraints on IRS Control: An Alternative Approach to Tax Gap Analysis
Authors: J. T. Manhire
Abstract:
A tax authority wants to take actions it knows will foster the greatest degree of voluntary taxpayer compliance to reduce the “tax gap.” This paper suggests that even if a tax authority could attain a state of complete knowledge, there are constraints on whether and to what extent such actions would reduce the macro-level tax gap. These limits are not merely a consequence of finite agency resources. They are inherent in the system itself. To show that this is one possible interpretation of the tax gap data, the paper formulates known results in a different way by analyzing tax compliance as a population with a single covariate. This leads to a standard use of the logistic map to analyze the dynamics of non-compliance growth or decay over a sequence of periods. This formulation gives the same results as the tax gap studies performed over the past fifty years in the U.S., given the published margins of error. Limitations and recommendations for future work are discussed, along with some implications for tax policy.
Keywords: income tax, logistic map, tax compliance, tax law
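The logistic-map dynamics the paper invokes can be sketched in a few lines. Here x_n is the non-compliance share of the taxpayer population in period n, and the growth parameter r is an assumed illustrative value, not the paper's fitted one.

```python
# Logistic-map sketch of non-compliance dynamics (r and x0 are assumptions).

def logistic_trajectory(x0, r, periods):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the whole path."""
    xs = [x0]
    for _ in range(periods):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# For 1 < r < 3 the share settles to the fixed point 1 - 1/r: a stable
# macro-level "tax gap" that agency action does not easily move.
traj = logistic_trajectory(x0=0.2, r=1.5, periods=100)
print(round(traj[-1], 4))  # → 0.3333
```

The stable fixed point is one way to read the paper's claim that the tax gap resists reduction even under complete knowledge.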
Procedia PDF Downloads 120
3551 Numerical Analysis of the Turbulent Flow around DTMB 4119 Marine Propeller
Authors: K. Boumediene, S. E. Belhenniche
Abstract:
This article presents a numerical analysis of the turbulent flow past the DTMB 4119 marine propeller by means of a RANS approach; the propeller was designed at the David Taylor Model Basin in the USA. The purpose of this study is to predict the hydrodynamic performance of the marine propeller and to compare the results obtained with experiments carried out in open-water tests. A periodic computational domain was created to reduce the size of the unstructured mesh generated. The standard k-ω turbulence model was selected for the simulation, and the results were in good agreement with the experiment: the errors were estimated at 1.3% and 5.9% for KT and KQ, respectively.
Keywords: propeller flow, CFD simulation, RANS, hydrodynamic performance
Procedia PDF Downloads 499
3550 Modeling of Turbulent Flow for Two-Dimensional Backward-Facing Step Flow
Authors: Alex Fedoseyev
Abstract:
This study investigates a generalized hydrodynamic equation (GHE) simplified model for the simulation of turbulent flow over a two-dimensional backward-facing step (BFS) at Reynolds number Re=132000. The GHE were derived from the generalized Boltzmann equation (GBE). The GBE was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions. The GHE has additional terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These terms have a timescale multiplier τ, and the GHE becomes the NSE when τ is zero. The nondimensional τ is a product of the Reynolds number and the squared length scale ratio, τ=Re*(l/L)², where l is the apparent Kolmogorov length scale, and L is a hydrodynamic length scale. The BFS flow modeling results obtained by 2D calculations cannot match the experimental data for Re>450. One or two additional equations are required for the turbulence model to be added to the NSE, which typically has two to five parameters to be tuned for specific problems. It is shown that the GHE does not require an additional turbulence model, whereas the turbulent velocity results are in good agreement with the experimental results. A review of several studies on the simulation of flow over the BFS from 1980 to 2023 is provided. Most of these studies used different turbulence models when Re>1000. In this study, the 2D turbulent flow over a BFS with height H=L/3 (where L is the channel height) at Reynolds number Re=132000 was investigated using numerical solutions of the GHE (by a finite-element method) and compared to the solutions from the Navier-Stokes equations, the k–ε turbulence model, and experimental results. The comparison included the velocity profiles at X/L=5.33 (near the end of the recirculation zone, available from the experiment), the recirculation zone length, and the velocity flow field.
The mean velocity of the NSE was obtained by averaging the solution over the number of time steps. The solution with a standard k–ε model shows a velocity profile at X/L=5.33 that has no backward flow. A standard k–ε model underpredicts the experimental recirculation zone length X/L=7.0±0.5 by a substantial 20-25%, and a more sophisticated turbulence model is needed for this problem. The obtained data confirm that the GHE results are in good agreement with the experimental results for turbulent flow over a two-dimensional BFS. A turbulence model was not required in this case. The computations were stable. The solution time for the GHE is the same or less than that for the NSE and significantly less than that for the NSE with a turbulence model. The proposed approach was limited to 2D and only one Reynolds number. Further work will extend this approach to 3D flow and higher Re.
Keywords: backward-facing step, comparison with experimental data, generalized hydrodynamic equations, separation, reattachment, turbulent flow
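The timescale multiplier that controls the GHE's extra fluctuation terms is a one-line computation. The length-scale ratio used below is an illustrative assumption, not a value reported in the abstract.

```python
# tau = Re * (l/L)^2 from the abstract; the GHE reduces to the NSE as tau -> 0.
# The scale ratio 1/500 below is an assumed illustrative value.

def tau_multiplier(re, l_over_L):
    """Nondimensional timescale multiplier of the GHE fluctuation terms."""
    return re * l_over_L ** 2

# At the study's Re = 132000, an apparent Kolmogorov-to-hydrodynamic scale
# ratio of 1/500 gives a small but non-negligible tau:
print(tau_multiplier(132000, 1 / 500))  # → 0.528
```

The point of the formula is that τ stays finite at high Re whenever the resolved length scale does not shrink as fast as 1/√Re, which is why the extra terms matter for this flow.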
Procedia PDF Downloads 61
3549 Evaluation of the Influence of Graphene Oxide on Spheroid and Monolayer Culture under Flow Conditions
Authors: A. Zuchowska, A. Buta, M. Mazurkiewicz-Pawlicka, A. Malolepszy, L. Stobinski, Z. Brzozka
Abstract:
In recent years, graphene-based materials have been finding more and more applications in biological science. As thin, tough, transparent and chemically resistant materials, they appear to be very good candidates for the production of implants and biosensors. Interest in graphene derivatives has also prompted research into their possible application in cancer therapy. Currently, most analyses address their potential use in photothermal therapy and as drug carriers; the direct anticancer properties of graphene-based materials are also being tested. Nowadays, cytotoxicity studies are conducted on in vitro cell cultures in standard culture vessels (macroscale). However, in this type of cell culture, the cells grow on a synthetic surface under static conditions. For this reason, cell culture in the macroscale does not reflect the in vivo environment. Microfluidic systems, called Lab-on-a-chip, are proposed as a solution for improving the cytotoxicity analysis of new compounds. Here, we present the evaluation of the cytotoxic properties of graphene oxide (GO) on breast, liver and colon cancer cell lines in a microfluidic system in two spatial models (2D and 3D). Before cell introduction, the microchamber surfaces were modified with a fibronectin (2D, monolayer) or poly(vinyl alcohol) (3D, spheroids) coating. After spheroid creation (3D) and cell attachment (2D, monolayer), selected concentrations of GO were introduced into the microsystems. Monolayer and spheroid viability/proliferation was then checked for three days using the alamarBlue® assay and a standard microplate reader. Moreover, on every day of the culture, the morphological changes of the cells were determined using microscopic analysis. Additionally, on the last day of the culture, differential staining with Calcein AM and propidium iodide was performed. We were able to note that GO has an influence on the viability of all tested cell lines in both the monolayer and spheroid arrangement.
We showed that GO caused a greater viability/proliferation decrease for spheroids than for monolayers (this was observed for all tested cell lines). The higher cytotoxicity of GO in spheroid culture can be caused by the different geometry of the microchambers for 2D and 3D cell cultures. Probably, GO was removed from the flat microchambers of the 2D culture. These results were also confirmed by differential staining. Comparing our results with studies conducted in the macroscale, we also showed that the cytotoxic properties of GO change depending on the cell culture conditions (static/flow).
Keywords: cytotoxicity, graphene oxide, monolayer, spheroid
Procedia PDF Downloads 125
3548 Enhancing the Dynamic Performance of Grid-Tied Inverters Using Manta Ray Foraging Algorithm
Authors: H. E. Keshta, A. A. Ali
Abstract:
Three-phase grid-tied inverters are widely employed in micro-grids (MGs) as the interface between DC and AC systems. These inverters are usually controlled through a standard decoupled d–q vector control strategy based on proportional-integral (PI) controllers. Recently, advanced meta-heuristic optimization techniques have been used instead of deterministic methods to obtain optimum PI controller parameters. This paper provides a comparative study between the performance of a global Porcellio Scaber algorithm (GPSA) based PI controller and a Manta Ray foraging optimization (MRFO) based PI controller.
Keywords: micro-grids, optimization techniques, grid-tied inverter control, PI controller
Procedia PDF Downloads 132
3547 A Knowledge-Based Development of Risk Management Approaches for Construction Projects
Authors: Masoud Ghahvechi Pour
Abstract:
Risk management is a systematic and regular process of identifying, analyzing and responding to risks throughout the project's life cycle in order to achieve the optimal level of elimination, reduction or control of risk. The purpose of project risk management is to increase the probability and effect of positive events and to reduce the probability and effect of unpleasant events on the project. Risk management is one of the most fundamental parts of project management; unmanaged or untransferred risks can be a primary factor of failure in a project. Effective risk management is not simple risk avoidance, even though avoidance is apparently the cheapest option. The main problem with this option is economic: what is potentially profitable is by definition risky, and what poses no risk is rarely economically interesting and brings no tangible benefits. Therefore, for the implemented project, effective risk management means finding a "middle ground": on the one hand, protection against risk through accurate identification and classification, leading to a comprehensive analysis; on the other hand, management that uses all available mathematical and analytical tools and checks the maximum benefits of these decisions. A detailed analysis, taking into account all aspects of the company, including stakeholder analysis, allows us to add what will become tangible future benefits of the project to effective risk management. Identifying project risk is based on the theory of which types of risk may affect the project, and also refers to specific parameters and to estimating the probability of their occurrence in the project.
These conditions can be divided into three groups: certainty, uncertainty, and risk, which in turn support three investor attitudes: risk preference, risk neutrality, and risk aversion, together with its measurement. The result of risk identification and project analysis is a list of events that indicates the cause and probability of each event, and a final assessment of its impact on the environment.
Keywords: risk, management, knowledge, risk management
Procedia PDF Downloads 66
3546 Predicting Stem Borer Density in Maize Using RapidEye Data and Generalized Linear Models
Authors: Elfatih M. Abdel-Rahman, Tobias Landmann, Richard Kyalo, George Ong’amo, Bruno Le Ru
Abstract:
Maize (Zea mays L.) is a major staple food crop in Africa, particularly in the eastern region of the continent. The maize growing area in Africa spans over 25 million ha, and 84% of rural households in Africa cultivate maize mainly as a means to generate food and income. Average maize yields in Sub-Saharan Africa are 1.4 t/ha, as compared to a global average of 2.5–3.9 t/ha, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In East Africa, yield losses due to stem borers are currently estimated at between 12% and 40% of the total production. The objective of the present study was therefore to predict stem borer larvae density in maize fields using RapidEye reflectance data and generalized linear models (GLMs). RapidEye images were captured for a test site in Kenya (Machakos) in January and in February 2015. Stem borer larva numbers were modeled using GLMs assuming Poisson (Po) and negative binomial (NB) error distributions with a log link. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were employed to assess model performance using a leave-one-out cross-validation approach. Results showed that NB models outperformed Po ones in all study sites. RMSE and RPD ranged between 0.95 and 2.70, and between 2.39 and 6.81, respectively. Overall, all models performed similarly when using the January and the February image data. We conclude that reflectance data from RapidEye can be used to estimate stem borer larvae density. The developed models could improve decision making regarding the control of maize stem borers using various integrated pest management (IPM) protocols.
Keywords: maize, stem borers, density, RapidEye, GLM
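The two accuracy statistics can be sketched from a set of leave-one-out predictions. RPD is taken here as the standard deviation of the observations divided by RMSE, one common definition; the counts and predictions below are invented for illustration.

```python
# RMSE and RPD from leave-one-out predictions (data below are hypothetical).
import math

def rmse(obs, pred):
    """Root mean square error of predictions against observations."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def rpd(obs, pred):
    """Sample standard deviation of the observations divided by RMSE."""
    mean = sum(obs) / len(obs)
    sd = math.sqrt(sum((o - mean) ** 2 for o in obs) / (len(obs) - 1))
    return sd / rmse(obs, pred)

obs = [2, 5, 7, 3, 9, 4, 6, 8]    # hypothetical larvae counts per plot
pred = [3, 4, 6, 3, 8, 5, 6, 7]   # hypothetical leave-one-out predictions
print(round(rmse(obs, pred), 3), round(rpd(obs, pred), 3))
```

A higher RPD means the model explains much more variation than its typical error, which is how the 2.39–6.81 range in the abstract should be read.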
Procedia PDF Downloads 496
3545 Quantum Mechanism Approach for Non-Ruin Probability and Comparison of Path Integral Method and Stochastic Simulations
Authors: Ahmet Kaya
Abstract:
The quantum mechanism is one of the most important approaches to calculating non-ruin probability. We apply standard Dirac notation to model the given Hamiltonians. Using the traditional method and an eigenvector basis, the non-ruin probability is found for several examples. Non-ruin probability is also calculated for two different Hamiltonians by using the tensor product. Finally, the path integral method is applied to the examples, and a comparison is made between stochastic simulations and the path integral calculation.
Keywords: quantum physics, Hamiltonian system, path integral, tensor product, ruin probability
Procedia PDF Downloads 334
3544 Ultra-Sensitive and Real Time Detection of ZnO NW Using QCM
Authors: Juneseok You, Kuewhan Jang, Chanho Park, Jaeyeong Choi, Hyunjun Park, Sehyun Shin, Changsoo Han, Sungsoo Na
Abstract:
Nanomaterials can have toxic effects on human beings and ecological systems. Sensors have been developed to detect toxic materials, and standards for toxic materials have been established. Zinc oxide nanowire (ZnO NW) is a known toxic material: by ionizing in the cell body, Zn ions overexpose cell components, causing critical damage or cell death. In this paper, we detect ZnO NW in water using a QCM (quartz crystal microbalance) and ssDNA (single-stranded DNA). We achieved a response time of 30 minutes for real-time detection and a limit of detection (LOD) of 100 pg/mL.
Keywords: zinc oxide nanowire, QCM, ssDNA, toxic material, biosensor
Procedia PDF Downloads 428
3543 Secondary Radiation in Laser-Accelerated Proton Beamline (LAP)
Authors: Seyed Ali Mahdipour, Maryam Shafeei Sarvestani
Abstract:
Radiation pressure acceleration (RPA) and target normal sheath acceleration (TNSA) are the most important methods in laser-accelerated proton (LAP) beam planning systems. LAP has inspired novel applications that can benefit from proton bunch properties different from those of conventionally accelerated proton beams. The secondary neutrons and photons produced in the collision of protons with beamline components are an important concern in proton therapy. Various published Monte Carlo studies have evaluated the beamline and shielding considerations for the TNSA method, but no studies directly address secondary neutron and photon production from the RPA method in LAP. The purpose of this study is to calculate the flux distribution of neutron and photon secondary radiation in the first area of the LAP and to determine the optimal thickness and radius of the energy selector in a LAP planning system based on the RPA method. We also present Monte Carlo calculations to determine the appropriate beam pipe for shielding a LAP planning system. The GEANT4 Monte Carlo toolkit has been used to simulate secondary radiation production in the LAP. A section of a new multifunctional LAP beamline, based on the pulsed power solenoid scheme, has been proposed and modeled in GEANT4. The results show that the energy selector is the most important source of secondary neutrons and photons in the LAP beamline. According to the calculations, a pure tungsten energy selector may not be the proper choice; using tungsten+polyethylene or tungsten+graphite composite selectors will reduce the neutron and photon intensities by approximately ~10% and ~25%, respectively. Also, the optimal radii of the energy selectors were found to be ~4 cm and ~6 cm for 3-degree and 5-degree proton deviation angles, respectively.
Keywords: neutron, photon, flux distribution, energy selector, GEANT4 toolkit
Procedia PDF Downloads 103
3542 Location Quotient Analysis: Case Study
Authors: Seyed Habib A. Rahmati, Mohamad Hasan Sadeghpour, Parsa Fallah Sheikhlari
Abstract:
The location quotient (LQ) is a comparison technique that contrasts the economic structure of a single zone with that of a reference area in order to identify each zone's specializations. In other words, exact calculation of this metric can show decision makers the main core competencies and critical capabilities of an area. This research focuses on the exact calculation of the LQ for the Iranian province of Qazvin and, within a case study, introduces the LQ of Qazvin's capable industries. Finally, through different graphs and tables, it creates an opportunity to compare the recognized capabilities.
Keywords: location quotient, case study, province analysis, core competency
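The metric itself is a one-line ratio, LQ = (e_i / e) / (E_i / E), where e_i is the zone's employment in industry i, e its total employment, and E_i, E the same quantities for the reference area. The figures below are invented for illustration, not Qazvin data.

```python
# Location quotient sketch: LQ = (e_i / e) / (E_i / E); data are hypothetical.

def location_quotient(zone_industry, zone_total, ref_industry, ref_total):
    """LQ > 1 suggests the zone is relatively specialized in the industry."""
    return (zone_industry / zone_total) / (ref_industry / ref_total)

# Hypothetical: 12,000 of 80,000 provincial jobs in an industry that holds
# 600,000 of 8,000,000 national jobs.
print(location_quotient(12_000, 80_000, 600_000, 8_000_000))  # → 2.0
```

An LQ of 2.0 would read as the zone having twice the national concentration of that industry, i.e. a candidate core competency.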
Procedia PDF Downloads 655
3541 Analytical Solutions for Geodesic Acoustic Eigenmodes in Tokamak Plasmas
Authors: Victor I. Ilgisonis, Ludmila V. Konovaltseva, Vladimir P. Lakhin, Ekaterina A. Sorokina
Abstract:
Analytical solutions for geodesic acoustic eigenmodes in tokamak plasmas with circular concentric magnetic surfaces are found. Within the framework of ideal magnetohydrodynamics, the dispersion relation taking into account the toroidal coupling between electrostatic perturbations and electromagnetic perturbations with poloidal mode number |m| = 2 is derived. In the absence of such a coupling, the dispersion relation gives the standard continuous spectrum of geodesic acoustic modes. The analysis of the existence of global eigenmodes is performed for plasma equilibria with both off-axis and on-axis maxima of the local geodesic acoustic frequency.
Keywords: tokamak, MHD, geodesic acoustic mode, eigenmode
Procedia PDF Downloads 734
3540 SQL Generator Based on MVC Pattern
Authors: Chanchai Supaartagorn
Abstract:
Structured Query Language (SQL) is the de facto standard language for accessing and manipulating data in a relational database. Although SQL is a simple and powerful language, most novice users have trouble with its syntax. Thus, we present an SQL generator tool capable of translating user actions and displaying SQL commands and data sets simultaneously. The tool was developed based on the Model-View-Controller (MVC) pattern. The MVC pattern is a widely used software design pattern that enforces the separation between the input, processing, and output of an application. Developers take full advantage of it to reduce the complexity of architectural design and to increase the flexibility and reuse of code. In addition, we use White-Box testing for code verification in the Model module.
Keywords: MVC, relational database, SQL, White-Box testing
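The MVC separation described above can be sketched in miniature: the Model builds the SQL text (the part subjected to White-Box testing), the View displays it, and the Controller translates a user action into calls on both. Class and method names here are illustrative, not the tool's actual implementation.

```python
# Minimal MVC sketch of an SQL generator (illustrative names only).

class Model:
    """Builds the SQL text from user-chosen parts (the tested core)."""
    def build_select(self, table, columns, where=None):
        sql = f"SELECT {', '.join(columns)} FROM {table}"
        return sql + (f" WHERE {where}" if where else "")

class View:
    """Displays the generated command; here it simply formats the string."""
    def render(self, sql):
        return f"Generated SQL: {sql}"

class Controller:
    """Translates a user action into a Model call and a View update."""
    def __init__(self):
        self.model, self.view = Model(), View()
    def on_select_action(self, table, columns, where=None):
        return self.view.render(self.model.build_select(table, columns, where))

print(Controller().on_select_action("students", ["id", "name"], "id > 10"))
```

Because the Model never touches presentation, its `build_select` output can be unit-tested in isolation, which is what makes the White-Box testing of that module practical.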
Procedia PDF Downloads 422
3539 Modulating Plasmon Induced Transparency in Terahertz Metamaterials
Authors: Gagan Kumar, Koijam M. Devi, Amarendra K. Sarma, Dibakar Roy Chowdhury
Abstract:
Research in metamaterials has been gaining momentum over the past decade owing to their ability to control electromagnetic wave properties through careful design at the sub-wavelength scale. Metamaterials have led to several important phenomena that are useful in a variety of applications. One such phenomenon is the electromagnetically induced transparency (EIT) effect, in which a narrow transparency region is created in an otherwise absorptive spectrum. In our work, we explore plasmon induced transparency (PIT) in terahertz metamaterials, which is analogous to the EIT effect. The PIT effect is achieved using plasmonic metamaterials in which a unit cell comprises two C-shaped (2C) resonators and a cut-wire (CW). When a terahertz wave of a particular polarization is normally incident on the proposed metamaterial geometry, it couples strongly with the cut wire, resulting in the excitation of the bright mode. However, due to the specific polarization of the incident beam, the fundamental modes of the C-shaped resonators are not excited by the incident terahertz wave; hence they are termed the dark mode. The PIT effect occurs as a result of interference between the bright and the dark mode. In order to observe the PIT effect, both the bright and dark modes should have similar resonant frequencies with a small deviation. We further show that the PIT window can be modulated by displacing the C-shaped resonators with respect to the cut-wire. The numerical observations for different coupling configurations can be explained through an equivalent lumped element circuit model. Moving ahead, the PIT effect is further explored in a metamaterial comprising a cross-like structure and four C-shaped resonators. For such a configuration, an equally strong PIT effect is observed for two orthogonally polarized lights. Therefore, such metamaterials demonstrate a polarization-independent PIT response with respect to the incident terahertz radiation.
The proposed study could be significant in the development of slow-light devices and polarization-independent sensing applications.
Keywords: terahertz, metamaterial, split ring resonator, plasmon
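The bright/dark-mode interference behind PIT is commonly captured by a two-coupled-oscillator model, consistent with the lumped-element picture mentioned above. Only the bright mode couples to the incident field; the resonance frequencies, damping rates, and coupling strength below are illustrative assumptions, not the paper's fitted values.

```python
# Coupled-oscillator sketch of PIT: a lossy bright mode coupled to a
# low-loss dark mode (all parameter values are assumed, in arbitrary units).

def absorption(w, wb=1.0, wd=1.0, gb=0.1, gd=0.002, g=0.04):
    """Bright-mode absorption ~ w * Im(x_b) at drive frequency w."""
    db = wb**2 - w**2 - 1j * gb * w   # bright-mode denominator term
    dd = wd**2 - w**2 - 1j * gd * w   # dark-mode denominator term
    xb = dd / (db * dd - g**2)        # bright-mode amplitude, unit drive
    return w * xb.imag

# At the shared resonance the dark mode interferes destructively with the
# bright mode, carving a transparency window into the absorption line:
print(absorption(1.0) < 0.2 * absorption(1.0, g=0.0))  # → True
```

Turning the coupling g off recovers the single broad absorption peak, which mirrors how displacing the C-shaped resonators modulates the PIT window.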
Procedia PDF Downloads 213
3538 Iron Yoke Dipole with High Quality Field for Collector Ring FAIR
Authors: Tatyana Rybitskaya, Alexandr Starostenko, Kseniya Ryabchenko
Abstract:
The collector ring (CR) of the FAIR project is a large-acceptance storage ring, and field quality plays a major role in the magnet design. The CR will use normal-conducting dipole magnets: 24 H-type sector magnets with a maximum field of 1.6 T. The field quality, integrated over the length of the magnet, as a function of radius is ∆B·l/B·l = ±1×10⁻⁴. Below 1.6 T the value ∆B·l/B·l can be higher, with a linear approximation up to ±2.5×10⁻⁴ at a field level of 0.8 T. An iron-dominated magnet with the required field quality is produced with standard technology, as the quality is dominated by the yoke geometry.
Keywords: conventional magnet, iron yoke dipole, harmonic terms, particle accelerators
Procedia PDF Downloads 146
3537 Refractory Cardiac Arrest: Do We Go beyond, Do We Increase the Organ Donation Pool or Both?
Authors: Ortega Ivan, De La Plaza Edurne
Abstract:
Background: Spain and other European countries have implemented uncontrolled Donation after Cardiac Death (uDCD) programs. After 15 years of experience in Spain, many things have changed. Recent evidence and technical breakthroughs achieved in resuscitation are relevant for uDCD programs and raise some ethical concerns related to these protocols. Aim: To rethink current uDCD programs in the light of recent evidence on available therapeutic procedures applicable to victims of out-of-hospital cardiac arrest (OHCA), and to address the following question: what is the current standard of treatment owed to victims of OHCA before including them in a uDCD protocol? Materials and Methods: Review of the scientific and ethical literature related to both uDCD programs and innovative resuscitation techniques. Results: 1) The standard of treatment received and the chances of survival of victims of OHCA depend on whether they are classified as Non-Heart-Beating Patients (NHBP) or Non-Heart-Beating Donors (NHBD). 2) Recent studies suggest that NHBPs are likely to survive, with good quality of life, if one or more of the following interventions are performed while ongoing CPR, guided by the suspected or known cause of OHCA, is maintained: a) direct access to a Cath Lab-H24 or/and to extra-corporeal life support (ECLS); b) transfer in induced hypothermia from the Emergency Medical Service (EMS) to the ICU; c) thrombolysis treatment; d) mobile extra-corporeal membrane oxygenation (mini ECMO) instituted as a bridge to ICU ECLS devices. 3) Victims of OHCA who cannot benefit from any of these therapies should be considered NHBDs. Conclusion: Current uDCD protocols do not take into account recent improvements in resuscitation and need to be adapted. Operational criteria to distinguish NHBDs from NHBPs should seek a balance between the technical imperative (to do whatever is possible), considerations about expected survival with quality of life, and distributive justice (costs/benefits).
Uncontrolled DCD protocols can be performed in a way that does not hamper the legitimate interests of patients, potential organ donors, their families, the organ recipients, and the health professionals involved in these processes. Families of NHBDs should receive information that conforms to the ethical principles of respect for autonomy and transparency.
Keywords: uncontrolled donation after cardiac death, resuscitation, refractory cardiac arrest, out-of-hospital cardiac arrest, ethics
Procedia PDF Downloads 237
3536 A Fault-Tolerant Full Adder in Double Pass CMOS Transistor
Authors: Abdelmonaem Ayachi, Belgacem Hamdi
Abstract:
This paper presents a fault-tolerant implementation of adder schemes using the dual duplication code. To prove the efficiency of the proposed method, the circuit is simulated in double-pass-transistor CMOS 32nm technology, and some transient faults are deliberately injected into the layout of the circuit. This fully differential implementation requires only 20 transistors, which means that the proposed design involves a 28.57% saving in transistor count compared to standard CMOS technology.
Keywords: digital electronics, integrated circuits, full adder, 32nm CMOS technology, double pass transistor technology, fault tolerance, self-checking
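The self-checking idea behind such a fully differential adder can be sketched at the logic level: each output is produced together with its complement on a second rail, and a checker flags any fault that breaks the complementary pairing. This is a behavioral sketch only; the transistor-level double-pass design and the paper's exact dual duplication code are not modeled.

```python
# Behavioral sketch of a dual-rail (complementary-output) full adder with a
# rail checker; the 32nm double-pass-transistor circuit is not modeled.

def full_adder_dual(a, b, cin):
    """Return ((sum, not_sum), (carry, not_carry)) as dual-rail pairs."""
    s = a ^ b ^ cin
    c = (a & b) | (cin & (a ^ b))
    return (s, 1 - s), (c, 1 - c)

def rails_ok(pair):
    """A fault-free dual-rail pair is always complementary (0/1 or 1/0)."""
    return pair[0] != pair[1]

out_sum, out_carry = full_adder_dual(1, 1, 1)
print(out_sum[0], out_carry[0], rails_ok(out_sum) and rails_ok(out_carry))
```

A transient fault that flips only one rail makes the pair equal (0/0 or 1/1), which the checker detects online: this is the self-checking property the abstract claims for the differential implementation.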
Procedia PDF Downloads 346
3535 Digestion Optimization Algorithm: A Novel Bio-Inspired Intelligence for Global Optimization Problems
Authors: Akintayo E. Akinsunmade
Abstract:
The digestion optimization algorithm is a novel biologically inspired metaheuristic method for solving complex optimization problems. The development of the algorithm was inspired by studying the human digestive system: the algorithm mimics the process of food ingestion, breakdown, absorption, and elimination to search effectively and efficiently for optimal solutions. The algorithm was tested on seven different types of optimization benchmark functions, and it produced optimal solutions with standard errors, which were compared with the exact solutions of the test functions.
Keywords: bio-inspired algorithm, benchmark optimization functions, digestive system in human, algorithm development
Procedia PDF Downloads 10
3534 An Excel-Based Educational Platform for Design Analyses of Pump-Pipe Systems
Authors: Mohamed M. El-Awad
Abstract:
This paper describes an educational platform for design analyses of pump-pipe systems using Microsoft Excel, its Solver add-in, and the associated VBA programming language. The paper demonstrates the capabilities of the Excel-based platform, which suits the iterative nature of the design process better than the use of design charts and data tables. While VBA is used to develop a user-defined function for determining the standard pipe diameter, Solver is used to optimize the pipe diameter of the pipeline and to determine the operating point of the selected pump.
Keywords: design analyses, pump-pipe systems, Excel, Solver, VBA
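The kind of user-defined function the paper implements in VBA can be sketched as follows (Python is used here for illustration; the size table is a small assumed subset of nominal pipe diameters, not a full standard table).

```python
# Sketch of a "standard pipe diameter" lookup: round a computed diameter up
# to the next nominal size. The size list is an assumed illustrative subset.

STANDARD_DIAMETERS_MM = [25, 32, 40, 50, 65, 80, 100, 125, 150, 200]

def next_standard_diameter(d_mm):
    """Smallest standard size >= the computed diameter, in mm."""
    for d in STANDARD_DIAMETERS_MM:
        if d >= d_mm:
            return d
    raise ValueError("computed diameter exceeds the size table")

# A hydraulic calculation returning 71.3 mm would be rounded up to the
# next nominal size:
print(next_standard_diameter(71.3))  # → 80
```

Wrapping this lookup as a worksheet function is what lets the iterative design loop (recompute losses, re-select diameter, re-find the pump operating point) stay inside one spreadsheet.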
Procedia PDF Downloads 166
3533 CPPI Method with Conditional Floor: The Discrete Time Case
Authors: Hachmi Ben Ameur, Jean Luc Prigent
Abstract:
We propose an extension of the CPPI method based on conditional floors. In this framework, we examine in particular the TIPP and margin-based strategies. These methods allow keeping part of the past gains and protecting the portfolio value against future high drawdowns of the financial market. At the same time, as with the standard CPPI method, the investor can benefit from potential market rises. To control the risk of such strategies, we introduce both Value-at-Risk (VaR) and Expected Shortfall (ES) risk measures. For each of these criteria, we show that the conditional floor must be higher than a lower bound. We illustrate these results for a quite general ARCH-type model, including the EGARCH(1,1) as a special case.
Keywords: CPPI, conditional floor, ARCH, VaR, expected shortfall
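The cushion-and-floor mechanics of the discrete-time strategy can be sketched as follows: the risky exposure is m times the cushion (portfolio value minus floor), and the TIPP variant ratchets the floor up to lock in a fixed share of the highest value reached. Parameters are illustrative, not calibrated to the paper's ARCH/EGARCH setting, and the VaR/ES floor bounds are not modeled.

```python
# Discrete-time CPPI with a TIPP-style ratcheted floor (illustrative
# parameters; no VaR/ES calibration). Note that with per-period risky losses
# below 1/multiplier the floor is preserved; larger gaps break the guarantee.

def tipp_path(returns, v0=100.0, multiplier=4.0, lock=0.8, rf=0.0):
    """Run the strategy over one path of risky-asset period returns."""
    value, floor = v0, lock * v0
    for r in returns:
        exposure = max(multiplier * (value - floor), 0.0)  # m * cushion
        safe = value - exposure                            # rest in safe asset
        value = exposure * (1 + r) + safe * (1 + rf)
        floor = max(floor, lock * value)   # TIPP ratchet: keep part of gains
    return value, floor

value, floor = tipp_path([0.05, 0.02, -0.20, -0.20])
print(value >= floor and floor >= 0.8 * 100.0)  # → True
```

The ratchet line is what distinguishes TIPP from plain CPPI: after gains, the floor rises and never falls back, so part of the past gains is kept even through the subsequent drawdown.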
Procedia PDF Downloads 305
3532 Methodological Analysis and Exploration of Feminist Planning Research in the Field of Urban and Rural Planning
Authors: Xi Zuo
Abstract:
Although women form a part of the urban population that cannot be ignored, they have long been less involved in urban planning due to socio-economic constraints. Urban planning and development have long been influenced by the mainstream "male standard," paying less attention to women's needs for space in the city. However, with economic and social development and the improvement of women's social status, their participation in urban life is gradually increasing, and their needs for the city are diversifying. Accordingly, different scholars, planning designers and governmental departments have explored this field to different degrees and in different directions. This paper summarizes the research on urban planning from women's perspectives, discusses its strengths, weaknesses, and methodology with specific case studies, and then considers directions for further research on this topic.
Keywords: urban planning, feminism, methodology, gender
Procedia PDF Downloads 80
3531 A Syntactic Approach to Applied and Socio-Linguistics in Arabic Language in Modern Communications
Authors: Adeyemo Abduljeeel Taiwo
Abstract:
This research is an attempt to create a phonological and morphological compendium of Arabic in Modern Standard Arabic (MSA) for modern-day communications. The research is carried out with the chief aim of grammatical analysis of two broad fields of Arabic linguistics, namely applied and socio-linguistics, and it draws a pictorial record of applied and socio-linguistics in Arabic phonology and morphology. Thematically, it postulates and contemplates, to a large degree, the theory of concord in contemporary modern Arabic language acquisition. It utilizes an analytical method while portraying Arabic as a Semitic language that promotes linguistics and syntax among scholars of these fields.
Keywords: Arabic language, applied linguistics, socio-linguistics, modern communications
Procedia PDF Downloads 331
3530 Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model
Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Gundula Bartzke, Hans-Peter Piepho
Abstract:
Rainfall is a critical component of climate governing vegetation growth and production, forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northern-most section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and spectacular migration of enormous herds of wildebeest, zebra and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria and minimum temperature. We assess the efficiency of the model by comparing it empirically with the established Gaussian process, Kriging, simple linear and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 * 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 * 5 km grid for both the wet and dry seasons. We show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as or better than four established and widely used models, on the Narok data set. 
The model produces rainfall predictions consistent with expectation and in good agreement with the blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall.
Keywords: non-stationary covariance function, Gaussian process, ungulate biomass, MCMC, Maasai Mara ecosystem
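As a rough illustration of the regression core of the approach described above, the sketch below fits a Bayesian linear model of rainfall on elevation-, distance- and temperature-like predictors with a random-walk Metropolis sampler. The data are synthetic, all coefficient values are hypothetical, and the spatial (non-stationary) covariance structure of the full hierarchical model is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the gauge data: rainfall as a linear function of
# three standardized predictors (e.g. elevation, distance to lake, min temp).
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # intercept + 3 predictors
beta_true = np.array([120.0, 15.0, -8.0, 5.0])              # hypothetical values
sigma_true = 10.0
y = X @ beta_true + rng.normal(0.0, sigma_true, n)

def log_posterior(beta, log_sigma):
    """Gaussian likelihood with weak normal priors on beta and log-sigma."""
    sigma = np.exp(log_sigma)
    resid = y - X @ beta
    log_lik = -n * log_sigma - 0.5 * np.sum(resid**2) / sigma**2
    log_prior = -0.5 * np.sum(beta**2) / 1e4 - 0.5 * log_sigma**2 / 1e2
    return log_lik + log_prior

# Random-walk Metropolis: propose a jittered state, accept with MH probability.
beta, log_sigma = np.zeros(4), 0.0
lp = log_posterior(beta, log_sigma)
samples = []
for it in range(20000):
    prop_beta = beta + rng.normal(0.0, 0.5, 4)
    prop_ls = log_sigma + rng.normal(0.0, 0.05)
    lp_prop = log_posterior(prop_beta, prop_ls)
    if np.log(rng.uniform()) < lp_prop - lp:
        beta, log_sigma, lp = prop_beta, prop_ls, lp_prop
    if it >= 10000:  # discard burn-in
        samples.append(beta.copy())

post_mean = np.mean(samples, axis=0)  # posterior means, close to beta_true
```

Posterior means of the coefficients recover the generating values; prediction at a new location, as in the county-wide grid, would simply be `x_new @ post_mean` with uncertainty taken from the sample spread.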
Procedia PDF Downloads 294
3529 Realization of a (GIS) for Drilling (DWS) through the Adrar Region
Authors: Djelloul Benatiallah, Ali Benatiallah, Abdelkader Harouz
Abstract:
Geographic Information Systems (GIS) encompass various methods and computer techniques to model, digitally capture, store, manage, view and analyze geographic data. They are characterized by their appeal to many scientific and technical fields and many methods. In this article we present a complete and operational geographic information system that follows the theoretical principles of data management and adapts them to spatial data, in particular data concerning the monitoring of drinking water supply (DWS) wells in the Adrar region. The system is expected, on the one hand, to offer standard consultation, updating and editing features for beneficiary and geographical data and, on the other hand, to provide specific functionality for contractor-entered data, parameterized calculations and statistics.
Keywords: GIS, DWS, drilling, Adrar
Procedia PDF Downloads 309
3528 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case
Authors: Lukas Reznak, Maria Reznakova
Abstract:
Recession of an economy has a profound negative effect on all involved stakeholders. It follows that timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is not suitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting research and verify the findings on real data of the Czech Republic and Germany. In the paper, the authors present a family of discrete choice probit models with parameters estimated by the method of maximum likelihood. In the basic form, the probits model a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating correlation of error terms and thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as this information is available in advance and does not undergo retroactive revisions.
As importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic and financial-market investor trends which influence the economic cycle. These theoretical approaches are applied to real data of the Czech Republic and Germany. Two models were identified for each country – one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, while three contained a dynamic component.
Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany
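A minimal univariate version of the dynamic probit described above can be sketched as follows. The bivariate extension with correlated error terms is omitted, and the data (a recession indicator driven by its own lag and a yield-spread-like leading indicator) are synthetic, with hypothetical parameter values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulate a recession indicator y[t] from a dynamic probit:
#   P(y[t] = 1) = Phi(alpha + beta * spread[t-1] + gamma * y[t-1])
T = 600
spread = rng.normal(size=T)                       # yield-spread-like leading indicator
alpha_true, beta_true, gamma_true = -1.0, -0.8, 1.5
y = np.zeros(T, dtype=int)
for t in range(1, T):
    idx = alpha_true + beta_true * spread[t - 1] + gamma_true * y[t - 1]
    y[t] = int(rng.uniform() < norm.cdf(idx))

def neg_log_lik(theta):
    """Negative log-likelihood of the dynamic probit."""
    alpha, beta, gamma = theta
    idx = alpha + beta * spread[:-1] + gamma * y[:-1]
    p = np.clip(norm.cdf(idx), 1e-10, 1 - 1e-10)  # guard against log(0)
    yt = y[1:]
    return -np.sum(yt * np.log(p) + (1 - yt) * np.log(1 - p))

# Maximum likelihood estimation, as in the paper's estimation strategy.
res = minimize(neg_log_lik, x0=np.zeros(3), method="BFGS")
alpha_hat, beta_hat, gamma_hat = res.x
```

The estimated signs reproduce the usual stylized facts: a flattening/inverting yield spread raises recession probability (negative beta), and the lagged recession state is strongly persistent (positive gamma).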
Procedia PDF Downloads 248
3527 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe
Authors: Vipul M. Patel, Hemantkumar B. Mehta
Abstract:
Technological innovations in the electronics world demand novel, compact, simple-in-design, low-cost and effective heat transfer devices. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. Thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, input heat, working fluid and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). Filling ratio and heat input are considered as input parameters, while thermal resistance is set as the target parameter. The types of neural networks considered are radial basis, generalized regression, linear layer, cascade forward back propagation, feed forward back propagation, feed forward distributed time delay, layer recurrent and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against experimental data reported by researchers in the open literature, in terms of the Mean Absolute Relative Deviation (MARD). The predictions of a generalized regression ANN model with a spread constant of 4.8 are found to agree with the experimental data, with MARD in the range of ±1.81%.
Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant
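The generalized regression network that performed best is, at its core, a kernel-weighted average of the training targets, with the spread constant setting the kernel width. A minimal sketch, using made-up filling-ratio / heat-input / thermal-resistance values (not the experimental data from the paper):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread):
    """Generalized regression neural network (GRNN): each prediction is a
    Gaussian-kernel-weighted average of the training targets; the spread
    constant controls how local the averaging is."""
    X_train = np.atleast_2d(np.asarray(X_train, dtype=float))
    X_query = np.atleast_2d(np.asarray(X_query, dtype=float))
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances
        w = np.exp(-d2 / (2.0 * spread**2))          # pattern-layer weights
        preds.append(np.sum(w * y_train) / np.sum(w))
    return np.array(preds)

# Hypothetical training points: [filling ratio, heat input (W)] -> thermal
# resistance (K/W). Values are illustrative only.
X = np.array([[0.3, 10], [0.5, 10], [0.7, 10],
              [0.3, 50], [0.5, 50], [0.7, 50]])
y = np.array([1.9, 1.5, 1.7, 0.9, 0.6, 0.8])

# Predict thermal resistance at an unseen operating point.
r_hat = grnn_predict(X, y, [[0.5, 30]], spread=4.8)
```

Because the output is a weighted average, predictions are always bounded by the training targets; a small spread reproduces the nearest training value, while a large spread smooths toward the overall mean.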
Procedia PDF Downloads 292
3526 The Application of Animal Welfare Certification System for Farm Animal in South Korea
Authors: Ahlyum Mun, Ji-Young Moon, Moon-Seok Yoon, Dong-Jin Baek, Doo-Seok Seo, Oun-Kyong Moon
Abstract:
There is growing public concern over the standards of farm animal welfare, alongside higher standards of food safety. In addition, the recent low incidence of Avian Influenza in laying hens among certified farms is receiving attention. In this study, we introduce the animal welfare systems covering the rearing, transport and slaughter of farm animals in South Korea. The concept of animal welfare farm certification is based on ensuring the five freedoms of animals. Animal welfare is also achieved by observing the condition of the environment, including shelter and resting areas, feeding and water, and the care for animal health. Certification of farm animal welfare is handled by the Animal Protection & Welfare Division of the Animal and Plant Quarantine Agency (APQA). Following the full amendment of the Animal Protection Law in 2011, the animal welfare farm certification program has been implemented since 2012. The certification system has expanded to cover laying hen, swine, broiler, beef cattle, dairy cow, goat and duck farms. Livestock farmers who want to be certified must apply for certification at the APQA. Upon receipt of the application, the APQA reviews the documents, notifies the applicant of the detailed schedule of the on-site examination, and conducts the on-site inspection according to the evaluation criteria of the welfare standard. If the on-site audit results meet the certification criteria, the APQA issues a certificate. The production process of certified farms is inspected at least once a year for follow-up management. As of 2017, a total of 145 farms have been certified (95 laying hen farms, 12 swine farms, 30 broiler farms and 8 dairy cow farms). In addition, animal welfare transportation vehicles and slaughterhouses have been designated since 2013, and currently 6 slaughterhouses have been certified.
The Animal Protection Law has been amended so that animal welfare certification marks can be affixed only to livestock products produced by animal welfare farms, transported in animal welfare vehicles and slaughtered at animal welfare slaughterhouses. This whole process, covering rearing, transportation and slaughter, completes the farm animal welfare system. The APQA established its second 5-year animal welfare plan (2014-2019), which includes setting a minimum standard of animal welfare applicable to all livestock farms, transportation vehicles and slaughterhouses. In accordance with this plan, we will promote the farm animal welfare policy in order to truly advance the Korean livestock industry.
Keywords: animal welfare, farm animal, certification system, South Korea
Procedia PDF Downloads 399
3525 Design & Development of a Static-Thrust Test-Bench for Aviation/UAV Based Piston Engines
Authors: Syed Muhammad Basit Ali, Usama Saleem, Irtiza Ali
Abstract:
Internal combustion engines have been pioneers in the aviation industry: piston engines have powered aircraft propulsion from propeller-driven bi-planes to turbo-prop, commercial and cargo airliners. To provide an adequate amount of thrust, a piston engine rotates the propeller at a specific rpm, allowing enough mass airflow. Thrust is the only forward-acting force of an aircraft and is what allows heavier-than-air bodies to fly. Test benches have been a benchmark in the aerospace industry for analysing results before a flight and hold paramount significance in reliability and safety engineering; their accuracy depends on the mathematical model, the variables included in it, and correct measurement. The calculated thrust of a piston engine also depends on environmental changes, the diameter of the propeller and the density of air. The project is centered on piston engines used in the aviation industry for light aircraft and UAVs. A static thrust test bench involves various units, each performing a designated purpose, to monitor and display. Static thrust tests are performed on the ground, where safety concerns hold paramount importance. The execution of this study involves research, design, manufacturing and results based on reverse engineering, initiating from virtual design, analytical analysis and simulations. The final evaluation compares results gathered from various methods, such as the correlation between a conventional mass-spring arrangement and a digital load cell. On average, we measured 17.5 kg of thrust (25+ engine run-ups, around 40 hours of engine running), with only 10% deviation from the analytically calculated thrust, i.e., 90% accuracy.
Keywords: aviation, aeronautics, static thrust, test bench, aircraft maintenance
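The analytical thrust figure against which the bench readings are compared typically comes from the standard propeller relation T = C_T ρ n² D⁴. A sketch with illustrative numbers (the thrust coefficient, rpm and propeller diameter below are assumptions, not the tested engine's actual values):

```python
def static_thrust_newtons(ct, rho, rpm, diameter_m):
    """Static thrust from T = Ct * rho * n^2 * D^4, where Ct is the
    dimensionless thrust coefficient, rho the air density (kg/m^3),
    n the rotational speed (rev/s) and D the propeller diameter (m)."""
    n = rpm / 60.0  # convert rpm to rev/s
    return ct * rho * n**2 * diameter_m**4

# Illustrative values: Ct = 0.1, sea-level density 1.225 kg/m^3,
# 4600 rpm, 0.7 m propeller.
thrust_n = static_thrust_newtons(0.1, 1.225, 4600, 0.7)
thrust_kgf = thrust_n / 9.81  # bench load cells typically read in kgf

# Deviation of an example bench reading from the analytical value,
# mirroring the accuracy check described above.
measured_kgf = 17.5
deviation_pct = abs(measured_kgf - thrust_kgf) / thrust_kgf * 100.0
```

The D⁴ and n² dependence explains why small changes in propeller diameter or rpm dominate the thrust budget, and why air density (an "environmental change") must be logged during every run-up.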
Procedia PDF Downloads 412
3524 Photovoltaic Cells Characteristics Measurement Systems
Authors: Rekioua T., Rekioua D., Aissou S., Ouhabi A.
Abstract:
The power provided by a photovoltaic array varies with solar radiation and temperature, since these parameters influence the electrical characteristic (Ipv-Vpv) of solar cells. In scientific research, there are different methods to obtain these characteristics. In this paper, we present three: a simulation method using Matlab/Simulink; the standard experimental voltage method; and a method using LabVIEW software. The latter is based on an electronic circuit to test PV modules. All details of this electronic scheme are presented, and the results obtained with the three methods are compared under different meteorological conditions. The proposed method is simple and very efficient for testing and measuring the electrical characteristic curves of photovoltaic panels.
Keywords: photovoltaic cells, measurement standards, temperature sensors, data acquisition
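The Ipv-Vpv characteristic that all three measurement methods target can be approximated with a simplified single-diode model (series and shunt resistances neglected). The module parameters below are illustrative, not values from the paper:

```python
import numpy as np

def pv_current(v, i_ph=8.0, i_0=1e-6, n_ideal=1.3, t_cell=298.15, n_series=36):
    """Simplified single-diode model of a PV module:
         I = Iph - I0 * (exp(V / (n * Ns * Vt)) - 1)
    with Iph the photo-current, I0 the diode saturation current,
    n the ideality factor, Ns the cells in series and Vt the thermal voltage."""
    k, q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, electron charge
    vt = k * t_cell / q                    # thermal voltage per cell (~25.7 mV)
    return i_ph - i_0 * (np.exp(v / (n_ideal * n_series * vt)) - 1.0)

# Sweep the voltage as a test bench would, and locate the maximum power point.
v = np.linspace(0.0, 20.0, 200)
i = pv_current(v)
p = v * i
v_mpp = v[np.argmax(p)]   # voltage at maximum power
```

Re-evaluating the curve at different irradiance (via `i_ph`) and temperature (via `t_cell`) values reproduces the shift of the characteristic with meteorological conditions that the measurement systems are designed to capture.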
Procedia PDF Downloads 461