Search results for: tachograph calibration
299 Modeling of the Biodegradation Performance of a Membrane Bioreactor to Enhance Water Reuse in Agri-food Industry - Poultry Slaughterhouse as an Example
Authors: Masmoudi Jabri Khaoula, Zitouni Hana, Bousselmi Latifa, Akrout Hanen
Abstract:
Mathematical modeling has become an essential tool for sustainable wastewater management, particularly for the simulation and optimization of the complex processes involved in activated sludge systems. In this context, the activated sludge model ASM3h was used to simulate a membrane bioreactor (MBR), as this process integrates biological wastewater treatment with physical separation by membrane filtration. In this study, an MBR with a useful volume of 12.5 L was fed continuously with poultry slaughterhouse wastewater (PSWW) for 50 days at a feed rate of 2 L/h, corresponding to a hydraulic retention time (HRT) of 6.25 h. Throughout its operation, high removal efficiency was observed for organic pollutants, with 84% COD removal. Moreover, the MBR generated a treated effluent that complies with the Tunisian limits for discharge into the public sewer, set in March 2018. For the nitrogenous compounds, average concentrations of nitrate and nitrite in the permeate reached 0.26±0.3 mg/L and 2.2±2.53 mg/L, respectively. The simulation of the MBR process was performed using SIMBA software v5.0. The state variables employed in the steady-state calibration of ASM3h were determined using physical and respirometric methods. The model calibration was performed using experimental data obtained during the first 20 days of MBR operation. Afterwards, the kinetic parameters of the model were adjusted, and the simulated values of COD, N-NH4+, and N-NOx were compared with those reported from the experiment. A good prediction was observed for the COD, N-NH4+, and N-NOx concentrations, with 467 g COD/m³, 110.2 g N/m³, and 3.2 g N/m³ compared to the experimental values of 436.4 g COD/m³, 114.7 g N/m³, and 3 g N/m³, respectively. For the validation of the model under dynamic simulation, the results of the experiments obtained during the second treatment phase of 30 days were used.
It was demonstrated that the model simulated the conditions accurately, yielding a similar pattern in the variation of the COD concentration. On the other hand, an underestimation of the N-NH4+ concentration was observed during the simulation compared to the experimental results, and the measured N-NO3 concentrations were lower than the predicted ones. This difference could be explained by the fact that the ASM models were mainly designed for the simulation of biological processes in activated sludge systems. In addition, more treatment time could be required by the autotrophic bacteria to achieve complete and stable nitrification. Overall, this study demonstrated the effectiveness of mathematical modeling in predicting the performance of MBR systems with respect to organic pollution; the model can be further improved for the simulation of nutrient removal over a longer treatment period.
Keywords: activated sludge model (ASM3h), membrane bioreactor (MBR), poultry slaughterhouse wastewater (PSWW), reuse
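As a quick consistency check on the operating point reported above, the hydraulic retention time follows directly from the reactor volume and feed rate (HRT = V/Q); the sketch below uses only the values quoted in the abstract.

```python
# Hydraulic retention time check for the MBR operating point reported above:
# HRT = working volume / feed rate (values taken from the abstract).
volume_l = 12.5     # useful reactor volume, L
feed_l_per_h = 2.0  # continuous feed rate, L/h

hrt_h = volume_l / feed_l_per_h
print(f"HRT = {hrt_h} h")  # 6.25 h, matching the reported value
```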
Procedia PDF Downloads 58
298 Numerical Investigation of the Electromagnetic Common Rail Injector Characteristics
Authors: Rafal Sochaczewski, Ksenia Siadkowska, Tytus Tulwin
Abstract:
The paper describes the modeling of a fuel injector for common rail systems. A one-dimensional model of a solenoid-valve-controlled injector with a valve-covered-orifice (VCO) nozzle was built in AVL Hydsim. This model captures the dynamic phenomena that occur in the injector. The accuracy of the calibration, based on a regulation of the control valve parameters and the nozzle needle lift, was verified by comparing the numerical results of the injector flow rate. Our model is capable of a precise simulation of injector operating parameters in relation to injection time and fuel pressure in the fuel rail. As a result, characteristics of the injector flow rate and backflow were obtained.
Keywords: common rail, diesel engine, fuel injector, modeling
Procedia PDF Downloads 412
297 Effect of Base Course Layer on Load-Settlement Characteristics of Sandy Subgrade Using Plate Load Test
Authors: A. Nazeri, R. Ziaie Moayed, H. Ghiasinejad
Abstract:
The present research was performed to investigate the effect of base course application on the load-settlement characteristics of a sandy subgrade using the plate load test. The main parameter investigated in this study was the subgrade reaction coefficient. The model tests were conducted in a 1.35 m long, 1 m wide, and 1 m deep steel test box at Imam Khomeini International University (IKIU calibration chamber). The base courses used in this research had three different thicknesses: 15 cm, 20 cm, and 30 cm. The test results indicated that, in the case of using a base course over a loose sandy subgrade, the subgrade reaction coefficient increased from 7 to 132, 224, and 396 in the presence of the 15 cm, 20 cm, and 30 cm base courses, respectively.
Keywords: modulus of subgrade reaction, plate load test, base course, sandy subgrade
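The subgrade reaction coefficient discussed above is, by definition, the applied plate pressure divided by the corresponding settlement (k = p/s). A minimal sketch of that calculation, with illustrative numbers rather than the paper's measurements:

```python
# Modulus of subgrade reaction from a plate load test: k = p / s,
# the applied plate pressure divided by the corresponding settlement.
# The input values below are illustrative, not the paper's data.
def subgrade_reaction(pressure_kpa: float, settlement_m: float) -> float:
    """Return k in kN/m^3 (pressure in kPa, settlement in m)."""
    return pressure_kpa / settlement_m

k = subgrade_reaction(pressure_kpa=70.0, settlement_m=0.00125)
print(f"k = {k:.0f} kN/m^3")
```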
Procedia PDF Downloads 247
296 Using High Performance Computing for Online Flood Monitoring and Prediction
Authors: Stepan Kuchar, Martin Golasowski, Radim Vavrik, Michal Podhoranyi, Boris Sir, Jan Martinovic
Abstract:
The main goal of this article is to describe the online flood monitoring and prediction system Floreon+, primarily developed for the Moravian-Silesian region in the Czech Republic, and the basic process it uses for running automatic rainfall-runoff and hydrodynamic simulations along with their calibration and uncertainty modeling. Executing such a process sequentially takes a long time, which is not acceptable in the online scenario, so the use of a high-performance computing environment is proposed for all parts of the process to shorten their duration. Finally, a case study on the Ostravice river catchment is presented that shows actual durations and the gain from the parallel implementation.
Keywords: flood prediction process, high performance computing, online flood prediction system, parallelization
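The parallelization idea can be sketched by dispatching independent simulations concurrently; `run_simulation`, the catchment names, and the dummy results below are placeholders for illustration, not part of Floreon+, and a real deployment would distribute work across HPC nodes rather than local threads.

```python
# Running independent rainfall-runoff simulations for several catchments in
# parallel, as the online scenario requires. run_simulation is a stand-in
# for the real model call; catchment names and results are illustrative.
from concurrent.futures import ThreadPoolExecutor

def run_simulation(catchment: str) -> tuple[str, float]:
    # placeholder for executing one calibrated rainfall-runoff /
    # hydrodynamic simulation and returning its peak discharge
    peak_discharge_m3s = 100.0 + len(catchment)  # dummy result
    return catchment, peak_discharge_m3s

catchments = ["Ostravice", "Opava", "Odra"]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(run_simulation, catchments))
print(results)
```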
Procedia PDF Downloads 493
295 An Active Rectifier with Time-Domain Delay Compensation to Enhance the Power Conversion Efficiency
Authors: Shao-Ku Kao
Abstract:
This paper presents an active rectifier with time-domain delay compensation to enhance the power conversion efficiency. A delay calibration circuit is designed to convert delay time to voltage and to adaptively control the on/off delay under variable input voltage. The circuit is designed in a 0.18 μm CMOS process. The input voltage range is from 2 V to 3.6 V, with an output voltage from 1.8 V to 3.4 V. The efficiency remains above 85% for loads from 50 Ω to 1500 Ω at a 3.6 V input voltage. The maximum efficiency is 92.4% at an output power of 38.6 mW for a 3.6 V input voltage.
Keywords: wireless power transfer, active diode, delay compensation, time to voltage converter, PCE
Procedia PDF Downloads 282
294 Facies Sedimentology and Astronomic Calibration of the Reineche Member (Lutetian)
Authors: Jihede Haj Messaoud, Hamdi Omar, Hela Fakhfakh Ben Jemia, Chokri Yaich
Abstract:
The Upper Lutetian alternating marl-limestone succession of the Reineche Member was deposited over a warm shallow carbonate platform that permitted Nummulites proliferation. High-resolution studies of the 30 m thick Nummulites-bearing Reineche Member, cropping out in Central Tunisia (Jebel Siouf), have been undertaken on its pronounced cyclical sedimentary sequences in order to investigate the periodicity of the cycles and the related orbital-scale oceanic and climatic changes. The palaeoenvironmental and palaeoclimatic data are preserved in several proxies obtainable through high-resolution sampling and laboratory measurements and analyses, such as magnetic susceptibility (MS) and carbonate content, in conjunction with wireline logging tools. Time series analysis of the proxies permits establishing the orders of cyclicity present in the studied intervals, which can be linked to the orbital cycles. MS records provide high-resolution proxies for relative sea level change in the Late Lutetian strata. Spectral analysis of the MS fluctuations confirmed the orbital forcing through the presence of the complete suite of orbital frequencies: precession (23 kyr), obliquity (41 kyr), and notably the two modes of eccentricity (100 and 405 kyr). Given the two periodic sedimentary cycles detected by wavelet analysis of the proxy fluctuations, which coincide with the long-term 405 kyr eccentricity cycle, the Reineche Member spanned 0.8 Myr. Wireline logging tools such as gamma ray and sonic were used as proxies to decipher cyclicity and trends in sedimentation and to help identify and correlate units. They were used to constrain the highest-frequency cyclicity, which is modulated by a longer-wavelength cycling apparently controlled by clay content. Interpreted as a result of variations in carbonate productivity, the marl-limestone couplets are suggested to represent the sedimentary response to orbital forcing.
The calculation of cycle durations through the Reineche Member serves as a geochronometer and permits the astronomical calibration of the geologic time scale. Furthermore, MS coupled with carbonate contents and fossil occurrences provides strong evidence for combined detrital-input and marine surface carbonate productivity cycles. These two synchronous processes were driven by the precession index and 'fingerprinted' in the basic marl-limestone couplets, modulated by orbital eccentricity.
Keywords: magnetic susceptibility, cyclostratigraphy, orbital forcing, spectral analysis, Lutetian
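The kind of spectral analysis described above can be sketched on a synthetic proxy record: a magnetic-susceptibility series is built from the 405 kyr and 100 kyr eccentricity cycles, and a simple periodogram recovers the dominant period. The sampling step and amplitudes are arbitrary choices for illustration.

```python
# Detecting orbital periodicities in a proxy series by spectral analysis:
# a synthetic magnetic-susceptibility record is built from 405 kyr and
# 100 kyr eccentricity cycles, and the periodogram recovers the long one.
import numpy as np

dt = 5.0  # sample spacing, kyr
t = np.arange(0, 4000, dt)
ms = np.sin(2 * np.pi * t / 405) + 0.5 * np.sin(2 * np.pi * t / 100)

freq = np.fft.rfftfreq(t.size, d=dt)        # cycles per kyr
power = np.abs(np.fft.rfft(ms)) ** 2
peak_period = 1.0 / freq[np.argmax(power[1:]) + 1]  # skip the zero frequency
print(f"dominant period ~ {peak_period:.0f} kyr")
```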
Procedia PDF Downloads 294
293 Lie Symmetry Treatment for Pricing Options with Transaction Costs under the Fractional Black-Scholes Model
Authors: B. F. Nteumagne, E. Pindza, E. Mare
Abstract:
We apply Lie symmetry analysis to price and hedge options in the fractional Brownian framework. Lie groups are well established in the mathematical sciences and, lately, in finance. In the presence of transaction costs and under fractional Brownian motion, analytical solutions become difficult to obtain. Lie symmetry analysis allows us to simplify the problem and obtain new analytical solutions. In this paper, we investigate the use of symmetries to reduce the partial differential equation obtained and derive its analytical solution. We then propose a hedging procedure and a calibration technique for these types of options and test the model on real market data. We show the robustness of our methodology by applying it to the pricing of digital options.
Keywords: fractional Brownian model, symmetry, transaction cost, option pricing
Procedia PDF Downloads 399
292 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines a method for developing a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using Solar Irradiation, Module Efficiency, and Performance Ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were subsequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, had the opposite effect. Sugeno-type models gave errors slightly lower than those of the Mamdani-type. While ANFIS is superior in terms of error minimization, it can generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models depends strongly on the range of cases over which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE.
In the absence of data that could be used for calibration, conventional FIS provides a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result than those generated by the life cycle methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
Procedia PDF Downloads 164
291 A Hybrid Particle Swarm Optimization-Nelder-Mead Algorithm (PSO-NM) for Nelson-Siegel-Svensson Calibration
Authors: Sofia Ayouche, Rachid Ellaia, Rajae Aboulaich
Abstract:
Today, insurers may use the yield curve as an indicator of the profit or performance of their portfolios; therefore, they model it with a class of models that has the ability to fit and forecast the future term structure of interest rates: the Nelson-Siegel-Svensson (NSS) model. Unfortunately, many authors have reported difficulties in calibrating the model because the optimization problem is not convex and has multiple local optima. In this context, we implement a hybrid Particle Swarm Optimization and Nelder-Mead algorithm to minimize, by the least squares method, the difference between the zero-coupon curve and the NSS curve.
Keywords: optimization, zero-coupon curve, Nelson-Siegel-Svensson, particle swarm optimization, Nelder-Mead algorithm
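A minimal sketch of the least-squares objective and the Nelder-Mead stage of such a hybrid scheme (the PSO stage is omitted here); the synthetic "market" yields are generated from known parameters rather than real data, and the maturities and starting point are illustrative choices.

```python
# Least-squares calibration of the Nelson-Siegel-Svensson yield curve,
# sketching only the Nelder-Mead refinement stage of a hybrid PSO-NM scheme.
import numpy as np
from scipy.optimize import minimize

def nss(t, b0, b1, b2, b3, tau1, tau2):
    """Nelson-Siegel-Svensson zero-coupon yield at maturity t (years)."""
    x1, x2 = t / tau1, t / tau2
    f1 = (1 - np.exp(-x1)) / x1
    return (b0 + b1 * f1 + b2 * (f1 - np.exp(-x1))
            + b3 * ((1 - np.exp(-x2)) / x2 - np.exp(-x2)))

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])
true = np.array([0.03, -0.01, 0.01, 0.005, 1.5, 8.0])  # b0..b3, tau1, tau2
market = nss(maturities, *true)  # synthetic observed zero-coupon curve

def sse(p):
    # sum of squared errors between the NSS curve and the observed yields
    if p[4] <= 0 or p[5] <= 0:  # keep decay parameters positive
        return 1e6
    return float(np.sum((nss(maturities, *p) - market) ** 2))

start = true + 0.3 * np.array([0.01, 0.01, 0.01, 0.01, 0.5, 1.0])
fit = minimize(sse, start, method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 5000})
print(f"residual SSE = {fit.fun:.2e}")
```

In the full hybrid scheme, the swarm would supply the starting point so that Nelder-Mead refines the best basin instead of a local optimum.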
Procedia PDF Downloads 430
290 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. Substantial numbers of cases were tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered in the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions under different ambient conditions. Its advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
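The dependence on burned-gas temperature and O2 concentration described above can be sketched with a thermal (Zeldovich-type) correlation; the constants `a` and `ea_k` and the square-root exponent below are illustrative placeholders, not the calibrated values from this work.

```python
# A minimal semi-empirical NOx correlation of the kind described above:
# thermal (Zeldovich-type) NO formation driven by burned-zone temperature
# and O2 concentration. All constants are illustrative placeholders.
import math

def nox_index(t_burn_k: float, o2_frac: float,
              a: float = 1.0e8, ea_k: float = 38000.0) -> float:
    """Relative NOx formation index ~ a * sqrt([O2]) * exp(-Ea/T)."""
    return a * math.sqrt(o2_frac) * math.exp(-ea_k / t_burn_k)

low = nox_index(t_burn_k=2200.0, o2_frac=0.08)
high = nox_index(t_burn_k=2500.0, o2_frac=0.08)
print(high > low)  # NOx rises steeply with burned-gas temperature
```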
Procedia PDF Downloads 114
289 Calibration of the Radial Installation Limit Error of the Accelerometer in the Gravity Gradient Instrument
Authors: Danni Cong, Meiping Wu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin
Abstract:
The gravity gradient instrument (GGI) is the core of the gravity gradiometer, so the structural error of the sensor has a great impact on the measurement results. In order not to compromise the target measurement accuracy, a limit error is required in the installation of the accelerometer. In this paper, based on the established measuring principle model, the radial installation limit error is calibrated; this is taken as an example to provide a method for calculating the other installation limit errors under the premise of ensuring the accuracy of the measurement result. This method provides an approach for deriving the limit errors of the geometric structure of the sensor, laying the foundation for mechanical precision design and physical design.
Keywords: gravity gradient sensor, radial installation limit error, accelerometer, uniaxial rotational modulation
Procedia PDF Downloads 422
288 Enhanced Weighted Centroid Localization Algorithm for Indoor Environments
Authors: I. Nižetić Kosović, T. Jagušt
Abstract:
Lately, with the increasing number of location-based applications, the demand for highly accurate and reliable indoor localization has become urgent. This is a challenging problem due to measurement variance, which is the consequence of various factors such as obstacles, equipment properties, and environmental changes in the complex nature of indoor environments. In this paper, we propose a low-cost custom-setup infrastructure solution and a localization algorithm based on the Weighted Centroid Localization (WCL) method. Localization accuracy is increased by several enhancements: calibration of the RSSI values obtained from the wireless nodes, repeated RSSI measurements to exclude deviating values from the position estimation, and consideration of the orientation of the device relative to the wireless nodes. We conducted several experiments to evaluate the proposed algorithm, achieving a high accuracy of approximately 1 m.
Keywords: indoor environment, received signal strength indicator, weighted centroid localization, wireless localization
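The core WCL estimate is a weighted mean of the known node positions, with weights growing as the RSSI strengthens; the node layout, RSSI values, and weight mapping below are illustrative, not the paper's calibrated setup.

```python
# Weighted Centroid Localization: the position estimate is the weighted
# mean of anchor-node coordinates. Stronger RSSI (closer to 0 dBm) gives
# a larger weight, pulling the estimate toward nearby nodes.
def wcl(anchors, rssi_dbm, g=1.0):
    # map RSSI (dBm, negative) to positive weights; the exponent g tunes
    # the contrast between near and far nodes
    weights = [(1.0 / -r) ** g for r in rssi_dbm]
    total = sum(weights)
    x = sum(w * ax for w, (ax, _) in zip(weights, anchors)) / total
    y = sum(w * ay for w, (_, ay) in zip(weights, anchors)) / total
    return x, y

anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]  # wireless node positions, m
rssi = [-40, -70, -70, -80]                     # device closest to (0, 0)
print(wcl(anchors, rssi))                       # estimate pulled toward (0, 0)
```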
Procedia PDF Downloads 232
287 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River
Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang
Abstract:
The analysis and study of open channel flow dynamics for river applications has been based on flow modelling using discrete numerical models built on the hydrodynamic equations. The overall spatial characteristics of rivers, i.e., the length to depth to width ratio, generally allow one to disregard processes occurring in the vertical and transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and extrapolation of various scenarios. The Magdalena River in Colombia is a large river basin draining the country from south to north over 1550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water level fluctuation and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons. As the meander is evolving at a steady pace, repeated flooding has endangered a number of neighborhoods. This study has been undertaken to correctly model the flow characteristics of the river in this region in order to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a RiverSurveyor ADCP. Also, in order to characterize the erosion process occurring through the meander, extensive suspended-sediment and river bed samples were retrieved, as well as soil perforations over the banks. Hence, based on the DEM ground digital mapping survey and field data, a 2DH flow model was prepared using the Iber freeware, based on the finite volume method in a non-structured mesh environment. The calibration process was carried out by comparing against available historical data from a nearby hydrologic gauging station.
Although the model was able to effectively predict overall flow processes in the region, its spatial characteristics and limitations related to the pressure conditions did not allow for an accurate representation of erosion processes occurring over specific bank areas and dwellings; in particular, a significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross section, as a consequence of severe erosion, has hindered the model's ability to provide decision makers with a valid, up-to-date planning tool.
Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander
Procedia PDF Downloads 319
286 Durable Phantom Production Identical to Breast Tissue for Use in Breast Cancer Detection Research Studies
Authors: Hayrettin Eroglu, Adem Kara
Abstract:
Recently, significant attention has been given to imaging of biological tissues via microwave imaging techniques. In this study, a phantom for the testing and calibration of microwave imaging systems used in detecting unhealthy breast structures or tumors was produced using the sol-gel method. The liquid and gel phantoms in use today are not durable, due to evaporation and their organic ingredients, hence a new design was proposed. This phantom was fabricated from materials that are widely available (water, salt, gelatin, and glycerol) and is easy to make. The phantom was intended to improve on those already proposed in the literature in terms of durability and stability. The S-parameters of the phantom were measured with a 1-18 GHz probe kit, and the permittivity was calculated via the Debye method in the commercial "85070" software. Measurements were taken at one, three, and five weeks. Finally, it was verified that the measurement results were very close to those of real biological tissue.
Keywords: phantom, breast tissue, cancer, microwave imaging
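The Debye method mentioned above rests on the single-pole relaxation model ε(ω) = ε∞ + (εs − ε∞)/(1 + jωτ); the parameter values below are illustrative round numbers for a high-water-content tissue mimic, not the phantom's fitted values.

```python
# Single-pole Debye model for the complex relative permittivity of a
# tissue-mimicking material. Parameter values are illustrative only.
import math

def debye(f_hz, eps_inf=4.0, eps_s=50.0, tau_s=10e-12):
    """Complex relative permittivity at frequency f_hz."""
    w = 2 * math.pi * f_hz
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * w * tau_s)

for f in (1e9, 6e9, 18e9):
    e = debye(f)
    print(f"{f/1e9:>4.0f} GHz: eps' = {e.real:5.1f}, eps'' = {-e.imag:5.1f}")
```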
Procedia PDF Downloads 355
285 An Overview of New Era in Food Science and Technology
Authors: Raana Babadi Fathipour
Abstract:
The strict prerequisites of scientific journals, which require authors to demonstrate whether experimental data are (in)significant from the statistical point of view, have driven a steep increase in the use and development of statistical software. Notably, the use of mathematical and statistical methods, including chemometrics and many other statistical methods/algorithms, in food science and technology has increased steeply within the last 20 years. The computational tools available can be used not only to run statistical analyses such as univariate and bivariate tests, as well as multivariate calibration and the development of complex models, but also to run simulations of different scenarios considering a set of inputs, or simply to make predictions for particular data sets or conditions. A quick search in the most reputable scientific databases (PubMed, ScienceDirect, Scopus) shows that statistical methods have gained an enormous place in many areas.
Keywords: food science, food technology, food safety, computational tools
Procedia PDF Downloads 67
284 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements
Authors: Denis A. Sokolov, Andrey V. Mazurkevich
Abstract:
In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft, shipbuilding, and rocket engineering. This has resulted in the development of measuring instruments capable of measuring the coordinates of objects within a range of up to 100 meters with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing their measurement results to a reference measurement on a linear or spatial basis. The reference used in such measurements can be a reference base or a reference range finder with the capability to measure angle increments (EDM); the base serves as a set of reference points for this purpose. The concept of the EDM for replicating the unit of measurement has been implemented on a mobile platform, which allows for angular changes in the direction of the laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of our own design is employed. The laser radiation travels to corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by the interference signals is calculated in accordance with the recommendations of the International Bureau of Weights and Measures for the indirect measurement of the time of passage of light, according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in vacuum, n is the refractive index of the medium, and F is the frequency of femtosecond pulse repetition.
The achieved Type A uncertainty of measurement of the distance to reflectors 64 m away (N·D/2, where N is an integer), spaced at a distance of 1 m relative to each other, does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of research in this study. The Type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where it is possible to achieve an advantage in terms of accuracy. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the Type A uncertainty of distance measurements to the reflectors can be less than 1 micrometer. The results of this research will be utilized to develop a highly accurate mobile absolute range finder designed for the calibration of high-precision laser trackers and laser rangefinders, as well as other equipment, using a 64 m laboratory comparator as a reference.
Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement
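The quoted interval D/2 = c/(2nF) is easy to check numerically. The pulse repetition frequency F is not stated in the abstract, so the 60 MHz below is an assumed value, chosen because it reproduces the quoted D/2 of roughly 2.5 m in air.

```python
# Interferometric distance increment D/2 = c / (2 n F) from the abstract.
# F is an assumption (not given in the abstract); n is approximate for air.
c = 299_792_458.0  # speed of light in vacuum, m/s (exact, SI definition)
n = 1.00027        # refractive index of air, approximate
f_rep = 60e6       # assumed femtosecond-pulse repetition frequency, Hz

half_d = c / (2 * n * f_rep)
print(f"D/2 = {half_d:.4f} m")  # close to the quoted ~2.5 m
```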
Procedia PDF Downloads 59
283 Potential Impacts of Climate Change on Hydrological Droughts in the Limpopo River Basin
Authors: Nokwethaba Makhanya, Babatunde J. Abiodun, Piotr Wolski
Abstract:
Climate change may intensify hydrological droughts and reduce water availability in river basins. Despite this, most research on climate change effects in southern Africa has focused exclusively on meteorological droughts. This thesis projects the potential impact of climate change on the future characteristics of hydrological droughts in the Limpopo River Basin (LRB). The study uses regional climate model (RCM) projections (from the Coordinated Regional Climate Downscaling Experiment, CORDEX) and a combination of hydrological simulations (using the Soil and Water Assessment Tool Plus model, SWAT+) to predict the impacts at four global warming levels (GWLs: 1.5℃, 2.0℃, 2.5℃, and 3.0℃) under the RCP8.5 future climate scenario. The SWAT+ model was calibrated and validated with a streamflow dataset observed over the basin, and the sensitivity of the model parameters was investigated. The performance of the SWAT+ LRB model was verified using the Nash-Sutcliffe efficiency (NSE), Percent Bias (PBIAS), Root Mean Square Error (RMSE), and the coefficient of determination (R²). The Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Precipitation Index (SPI) were used to detect meteorological droughts. The Standardized Soil Water Index (SSI) was used to define agricultural drought, while the Water Yield Drought Index (WYLDI), the Surface Run-off Index (SRI), and the Streamflow Index (SFI) were used to characterise hydrological drought. The performance of the SWAT+ model simulations over the LRB is sensitive to the parameters CN2 (initial SCS runoff curve number for moisture condition II) and ESCO (soil evaporation compensation factor). The best simulation generally performed better during the calibration period than during the validation period. In the calibration and validation periods, NSE is ≤ 0.8, while PBIAS is ≥ −80.3%, RMSE ≥ 11.2 m³/s, and R² ≤ 0.9.
The simulations project a future increase in temperature and potential evapotranspiration over the basin, but they do not project a significant future trend in precipitation or the hydrological variables. However, the spatial distribution of precipitation reveals a projected increase in precipitation in the southern part of the basin and a decline in the northern part, with the region of reduced precipitation projected to grow with increasing GWLs. A decrease in all hydrological variables is projected over most parts of the basin, especially the eastern part. The simulations predict that meteorological droughts (i.e., SPEI and SPI), agricultural droughts (i.e., SSI), and hydrological droughts (i.e., WYLDI and SRI) will become more intense and severe across the basin. The SPEI-drought shows a greater magnitude of increase than the SPI-drought, with agricultural and hydrological droughts in between. As a result, this research suggests that future hydrological droughts over the LRB could be more severe than the SPI-drought projection predicts, but less severe than the SPEI-drought projection. This research can be used to mitigate the effects of potential climate change on hydrological drought in the basin.
Keywords: climate change, CORDEX, drought, hydrological modelling, Limpopo River Basin
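The standardized drought indices above share one idea: a water-balance variable is standardized against its climatology so that values at or below −1 flag moderate drought. The plain z-score sketch below illustrates that idea (the operational SPI first fits a gamma distribution to the precipitation record; the monthly totals here are invented).

```python
# A simplified Standardized Precipitation Index: precipitation totals are
# standardized against their climatology. The operational SPI fits a gamma
# distribution first; this plain z-score is a rough sketch of the idea.
import statistics

monthly_precip = [62, 55, 71, 48, 90, 30, 66, 58, 75, 40, 52, 83]  # mm, invented
mu = statistics.fmean(monthly_precip)
sigma = statistics.stdev(monthly_precip)

spi_like = [(p - mu) / sigma for p in monthly_precip]
drought_months = sum(1 for s in spi_like if s <= -1.0)  # moderate drought or worse
print(f"mean = {mu:.1f} mm, months with index <= -1: {drought_months}")
```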
Procedia PDF Downloads 128
282 Augmented Tourism: Definitions and Design Principles
Authors: Eric Hawkinson
Abstract:
After designing and implementing several iterations of augmented reality (AR) in tourism, this paper takes a deep look into design principles and implementation strategies for using AR in destination tourism settings. The study seeks to define augmented tourism from past implementations as well as from several cases and uses designed and implemented for tourism. The discussion leads to the formation of frameworks and best practices for AR, as well as virtual reality (VR), in tourism settings. The main affordances include guest autonomy, customized experiences, visitor data collection, and increased electronic word-of-mouth generation for promotional purposes. The challenges found include the need for high levels of technology infrastructure, low adoption or ‘buy-in’ rates, high levels of calibration and customization, and the need for maintenance and support services. Some suggestions are given on how to leverage the affordances and meet the challenges of implementing AR for tourism.
Keywords: augmented tourism, augmented reality, eTourism, virtual tourism, tourism design
Procedia PDF Downloads 370
281 A Systematic Review of Situational Awareness and Cognitive Load Measurement in Driving
Authors: Aly Elshafei, Daniela Romano
Abstract:
With the development of autonomous vehicles, a human-machine interaction (HMI) system is needed for a safe transition of control when a takeover request (TOR) is required. An important part of the HMI system is the ability to monitor the level of situational awareness (SA) of any driver in real time, in different scenarios, and without any pre-calibration. The purpose of this systematic review is to present the state-of-the-art machine learning models used to measure SA, investigating the limitations of each type of sensor, the gaps, and the sensors and computational models best suited to driving applications. To the authors’ best knowledge, this is the first literature review identifying online and offline classification methods used to measure SA, explaining which measurements are subject- or session-specific, and how many classifications can be done with each classification model. This information can be very useful for researchers measuring SA in order to identify the most suitable model for different applications.
Keywords: situational awareness, autonomous driving, gaze metrics, EEG, ECG
Procedia PDF Downloads 119
280 Calibration of a Large Standard Step Height with Low Sampled Coherence Scanning Interferometry
Authors: Dahi Ghareab Abdelsalam Ibrahim
Abstract:
Scanning interferometry is commonly used for measuring the three-dimensional profile of surfaces. Here, we used a scanning stage calibrated with standard gauge blocks to measure a standard step height of 200 μm. The stage precisely measures the envelope of interference at the platen and at the surface of the step height; from the difference between the two envelopes, we obtain the step height of the sample. Experimental measurements show that the measured value matches the nominal value of the step height well. A light beam of 532 nm from a tungsten lamp is collimated and made incident on the interferometer. By scanning, two envelopes were produced. The envelope at the platen surface and the envelope at the object surface were located precisely by a custom program, and the difference between them was measured from the calibrated scanning stage. The difference was estimated to be in the range of 198 ± 2 μm.
Keywords: optical metrology, digital holography, interferometry, phase unwrapping
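One possible sketch of the envelope-difference evaluation (the abstract does not specify the algorithm of the program code): extract the coherence envelope with a Hilbert transform and subtract the two peak positions. The simulated signals below assume Gaussian envelopes and a 200 μm step; all numeric choices other than the 532 nm wavelength are placeholders:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_peak(z, signal):
    """Locate the coherence-envelope peak of a scanned interference signal.

    z      : scan positions (micrometres)
    signal : AC part of the interference intensity at each position
    The analytic-signal magnitude approximates the fringe envelope.
    """
    env = np.abs(hilbert(signal))
    return z[np.argmax(env)]

# Simulated scan: fringes under Gaussian envelopes at the platen (z = 0)
# and at the top of a nominal 200 um step (z = 200).
z = np.linspace(-50, 250, 6000)   # scan positions, um
wavelength = 0.532                # um, as in the experiment

def fringes(z0):
    # Double-pass interference fringes centred at z0 with a Gaussian envelope.
    return np.exp(-((z - z0) / 3.0) ** 2) * np.cos(4 * np.pi * (z - z0) / wavelength)

step = envelope_peak(z, fringes(200.0)) - envelope_peak(z, fringes(0.0))
print(round(step, 1))  # recovered step height, um
```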
Procedia PDF Downloads 73
279 Encapsulation of Volatile Citronella Essential oil by Coacervation: Efficiency and Release Kinetic Study
Authors: Rafeqah Raslan, Mastura AbdManaf, Junaidah Jai, Istikamah Subuki, Ana Najwa Mustapa
Abstract:
Volatile citronella essential oil was encapsulated by simple coacervation and by complex coacervation using gum Arabic and gelatin as wall materials, with glutaraldehyde as the crosslinking agent. A citronella standard calibration graph was developed with R² = 0.9523 for the accurate determination of encapsulation efficiency and for the release study. The release kinetics were analyzed based on Fick’s law of diffusion for polymeric systems, and a linear graph of log fraction released versus log time was constructed to determine the release rate constant k and the release exponent n. Both coacervation methods in the present study produced an encapsulation efficiency of around 94%. The capsule morphology analysis supported the release kinetic mechanisms of the capsules produced by both coacervation processes.
Keywords: simple coacervation, complex coacervation, encapsulation efficiency, release kinetic study
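The log-log fit described above is the standard power-law (Korsmeyer-Peppas-type) treatment, Mt/M∞ = k·tⁿ. A minimal sketch with hypothetical release data, not the measured citronella values:

```python
import numpy as np

# Hypothetical release data: time (h) and cumulative fraction released Mt/Minf.
# Only the early portion of the curve (fraction <= ~0.6) is normally used.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
frac = np.array([0.08, 0.12, 0.17, 0.25, 0.37])

# The power law Mt/Minf = k * t^n becomes linear in log space:
# log(Mt/Minf) = log(k) + n * log(t)
n, log_k = np.polyfit(np.log10(t), np.log10(frac), 1)
k = 10 ** log_k
print(f"n = {n:.3f}, k = {k:.3f}")
```

The slope n diagnoses the release mechanism (Fickian diffusion vs. anomalous transport), while k sets the release rate scale.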
Procedia PDF Downloads 316
278 Analysis of Hydraulic Velocity in Fishway Using CCHE2D Model
Authors: Amir Abbas Kamanbedast, Masood Mohammad Shafipor, Amir Ghotboddin
Abstract:
A fishway is a structure that fish generally use to migrate to the place where they spawn; it is built near the spillway. Preventing fish from spawning or migrating to their original habitat can affect their lives in the river or even eliminate their access to the intended environment, so the main objective of these structures is to establish a safe path for fish migration. In the present study, the hydraulic specifications of the Hamidieh diversion dam were first assessed and then its problems were evaluated. The dimensions of the fishway, including the velocities in the pools, were evaluated with the CCHE2D software. Then, by changing the slope of the structure, the streamlines and velocities in the pools were computed. The model can be calibrated by measuring local velocities in some of the pools. The results show that a fishway width of 0.3 m has the minimum rate of descent over the whole structure (pools and overflows).
Keywords: fishway, velocity, Hamidieh Diversion Dam, CCHE2D model
Procedia PDF Downloads 495
277 Interface Fracture of Sandwich Composite Influenced by Multiwalled Carbon Nanotube
Authors: Alak Kumar Patra, Nilanjan Mitra
Abstract:
Higher strength-to-weight ratio is the main advantage of sandwich composite structures, but interfacial delamination between the face sheet and the core is a major problem in them. Many research works are devoted to improving the interfacial fracture toughness of composites, the majority of which concern nano- and laminated composites. Work on the influence of a multiwalled carbon nanotube (MWCNT)-dispersed resin system on the interface fracture of glass-epoxy PVC-core sandwich composites is extremely limited. A finite element study is followed by an experimental investigation of the interface fracture toughness of glass-epoxy (G/E) PVC-core sandwich composites with and without MWCNT. Results demonstrate an improvement in interface fracture toughness values (GC) for samples with certain percentages of MWCNT. In addition, the dispersion of MWCNT in epoxy resin through sonication, followed by mixing of the hardener and vacuum resin infusion (VRI), as used in this study, is an easy and cost-effective methodology compared to the previously adopted methods, which are limited to laminated composites. The study also identifies the optimum weight percentage of MWCNT addition in the resin system for the maximum gain in interfacial fracture toughness. The results agree with the finite element study, high-resolution transmission electron microscopy (HRTEM) analysis, and fracture micrographs from field emission scanning electron microscopy (FESEM). The interface fracture toughness (GC) of the DCB sandwich samples is calculated using the compliance calibration (CC) method, considering the modification due to shear. Compliance (C) vs. crack length (a) data of the modified sandwich DCB specimens are fitted to a power function of crack length.
The calculated mean value of the exponent n from the plots of the experimental results is 2.22, which differs from the value (n = 3) prescribed in ASTM D5528-01 for mode I fracture toughness of laminated composites (the basis of the modified compliance calibration method). Differentiating C with respect to crack length a and substituting it into the expression for GC provides its value. The research demonstrates an improvement of 14.4% in peak load-carrying capacity and 34.34% in interface fracture toughness GC for samples with 1.5 wt% MWCNT (weight % taken with respect to the weight of resin) in comparison to samples without MWCNT. The paper focuses on the significant improvement in the experimentally determined interface fracture toughness of sandwich samples with MWCNT over samples without, using the much simpler method of sonication. Good dispersion of MWCNT was observed in HRTEM at 1.5 wt% MWCNT addition in comparison to the other percentages of MWCNT. FESEM studies also demonstrated good dispersion and fiber bridging of MWCNT in the resin system. Ductility is also observed to be higher for samples with MWCNT than for samples without.
Keywords: carbon nanotube, epoxy resin, foam, glass fibers, interfacial fracture, sandwich composite
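The power-law compliance calibration described above can be sketched numerically. The crack lengths, compliances, critical load Pc, width b, and crack-onset state below are illustrative placeholders, not the measured values:

```python
import numpy as np

# Hypothetical DCB compliance-calibration data: crack length a (mm) and
# compliance C (mm/N).
a = np.array([30.0, 35.0, 40.0, 45.0, 50.0])
C = np.array([0.020, 0.028, 0.039, 0.051, 0.065])

# Fit C = R * a^n in log space, as in the CC method used in the paper.
n, log_R = np.polyfit(np.log(a), np.log(C), 1)

# With dC/da = n*C/a, the energy release rate at critical load Pc is
#   G_C = Pc^2/(2*b) * dC/da = n * Pc^2 * C / (2 * b * a)
Pc = 120.0           # critical load, N (assumed)
b = 25.0             # specimen width, mm (assumed)
a_c, C_c = a[2], C[2]                      # state at crack onset (assumed)
G_C = n * Pc**2 * C_c / (2.0 * b * a_c)    # N/mm, i.e. kJ/m^2
print(f"n = {n:.2f}, G_C = {G_C:.3f} N/mm")
```

The fitted exponent here comes out near the paper's reported n = 2.22 rather than the ASTM-prescribed n = 3, by construction of the sample data.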
Procedia PDF Downloads 303
276 Chemometric Estimation of Inhibitory Activity of Benzimidazole Derivatives by Linear Least Squares and Artificial Neural Networks Modelling
Authors: Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević, Lidija R. Jevrić, Stela Jokić
Abstract:
The subject of this paper is to correlate the antibacterial behavior of benzimidazole derivatives with their molecular characteristics using a chemometric QSAR (Quantitative Structure-Activity Relationships) approach. QSAR analysis has been carried out on the inhibitory activity of benzimidazole derivatives against Staphylococcus aureus. The data were processed by linear least squares (LLS) and artificial neural network (ANN) procedures. The LLS mathematical models have been developed as calibration models for prediction of the inhibitory activity. The quality of the models was validated by the leave-one-out (LOO) technique and by using an external data set. High agreement between experimental and predicted inhibitory activities indicated the good quality of the derived models. These results are part of the CMST COST Action No. CM1306 "Understanding Movement and Mechanism in Molecular Machines".
Keywords: antibacterial, benzimidazoles, chemometrics, QSAR
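LOO validation of an LLS calibration model can be sketched as follows; the descriptor matrix and activity values are synthetic stand-ins, not the benzimidazole data:

```python
import numpy as np

# Hypothetical descriptor matrix X (e.g., two molecular descriptors) and
# measured activities y (e.g., log 1/MIC) for a small training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(15, 2))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.1, size=15)

def loo_q2(X, y):
    """Leave-one-out cross-validated Q^2 for a linear least-squares model."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    press = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        # Refit the LLS model without sample i, then predict sample i.
        beta, *_ = np.linalg.lstsq(X1[mask], y[mask], rcond=None)
        press += (y[i] - X1[i] @ beta) ** 2
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

print(f"Q^2(LOO) = {loo_q2(X, y):.3f}")
```

A Q² close to the fitted R² indicates the calibration model generalizes rather than overfits, which is the point of the LOO check.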
Procedia PDF Downloads 316
275 Energy Refurbishment of University Building in Cold Italian Climate: Energy Audit and Performance Optimization
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
Directive 2010/31/EU of the European Parliament and of the Council of 19 May 2010 on the energy performance of buildings moved beyond the targets of the previous version toward more ambitious ones, for instance by establishing that, by 31 December 2020, all new buildings should be nearly zero-energy. Moreover, the demonstrative role of public buildings is strongly affirmed, so that for them the nearly zero-energy target is anticipated to January 2019. On the other hand, given the very low turnover rate of buildings (in Europe, it ranges between 1-3% yearly), a policy that does not consider the renovation of the existing building stock cannot be effective in the short and medium term. Accordingly, this study provides a novel, holistic approach to designing the refurbishment of educational buildings in the colder cities of Mediterranean regions, enabling stakeholders to understand the uncertainty of numerical modelling and the real environmental and economic impacts of adopting certain energy efficiency technologies. The case study is a university building in the Molise region, in the centre of Italy. The proposed approach is based on the application of the cost-optimal methodology, as set out in Delegated Regulation (EU) No 244/2012 and the accompanying Guidelines of the European Commission, for evaluating the cost-optimal level of energy performance with a macroeconomic approach. This means that the refurbishment scenario should correspond to the configuration that leads to the lowest global cost over the estimated economic life cycle, taking into account not only the investment cost but also the operational costs linked to energy consumption and polluting emissions. The definition of the reference building has been supported by various in-situ surveys, investigations, and evaluations of indoor comfort.
Data collection can be divided into five categories: 1) geometrical features; 2) building envelope audit; 3) technical system and equipment characterization; 4) building use and thermal zone definition; 5) building energy data. For each category, the required measurements have been indicated, with some suggestions for identifying the spatial distribution and timing of the measurements. With reference to the case study, the collected data, together with a comparison with energy bills, allowed a proper calibration of a numerical model suitable for hourly energy simulation by means of EnergyPlus. Around 30 energy efficiency measures/packages have been taken into account, concerning both the envelope and the plant systems. Starting from the results, two points are examined exhaustively: (i) the importance of using validated models to simulate the present performance of the building under investigation; (ii) the environmental benefits and the economic implications of a deep energy refurbishment of educational buildings in cold climates.
Keywords: energy simulation, model calibration, cost-optimal retrofit, university building
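The abstract does not state its calibration acceptance criteria; one common check (for instance, the ASHRAE Guideline 14 monthly criteria) compares simulated energy with utility bills via NMBE and CV(RMSE). A sketch with invented monthly values, not the case-study data:

```python
import numpy as np

# Hypothetical monthly energy use (kWh): utility bills vs. EnergyPlus output.
billed = np.array([5200, 4800, 4100, 3000, 2200, 1800,
                   1700, 1900, 2500, 3300, 4400, 5100], float)
simulated = np.array([5000, 4900, 4000, 3100, 2300, 1750,
                      1650, 2000, 2450, 3200, 4500, 5200], float)

n = len(billed)
mean = billed.mean()
nmbe = 100.0 * np.sum(billed - simulated) / (n * mean)            # bias, %
cv_rmse = 100.0 * np.sqrt(np.mean((billed - simulated) ** 2)) / mean

# ASHRAE Guideline 14 monthly criteria: |NMBE| <= 5%, CV(RMSE) <= 15%.
print(f"NMBE = {nmbe:.2f}%, CV(RMSE) = {cv_rmse:.2f}%")
```

A model passing both thresholds is conventionally considered calibrated at monthly resolution; hourly calibration uses tighter data and looser thresholds.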
Procedia PDF Downloads 178
274 Liquid Chromatographic Determination of Alprazolam with ACE Inhibitors in Bulk, Respective Pharmaceutical Products and Human Serum
Authors: Saeeda Nadir Ali, Najma Sultana, Muhammad Saeed Arayne, Amtul Qayoom
Abstract:
The present study describes a simple and fast liquid chromatographic method with ultraviolet detection for the simultaneous determination of the anxiolytic alprazolam with the ACE inhibitors lisinopril, captopril, and enalapril, employing a Purospher STAR C18 column (25 cm × 0.46 cm, 5 µm). Separation was achieved within 5 min at ambient temperature using methanol:water (8:2, v/v) with the pH adjusted to 2.9, monitoring the detector response at 220 nm. The method parameters were established as per the ICH (2006) guidelines. The calibration range was 0.312-10 µg mL-1 for alprazolam and 0.625-20 µg mL-1 for all the ACE inhibitors, with correlation coefficients > 0.998 and detection limits of 85, 37, 68, and 32 ng mL-1 for lisinopril, captopril, enalapril, and alprazolam, respectively. Intra-day and inter-day precision and accuracy of the assay were in the acceptable ranges of 0.05-1.62% RSD and 98.85-100.76% recovery. The method was found to be robust and useful for the estimation of the studied drugs in dosage formulations and human serum without interference from excipients or serum components.
Keywords: alprazolam, ACE inhibitors, RP-HPLC, serum
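A minimal sketch of the calibration-curve and ICH-style detection-limit computation; the peak areas below are hypothetical, chosen only to fall within the reported alprazolam range:

```python
import numpy as np

# Hypothetical calibration data for alprazolam: concentration (ug/mL) within
# the reported 0.312-10 ug/mL range vs. detector peak area (arbitrary units).
conc = np.array([0.312, 0.625, 1.25, 2.5, 5.0, 10.0])
area = np.array([41.0, 80.5, 162.0, 330.0, 655.0, 1310.0])

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r = np.corrcoef(conc, area)[0, 1]

# ICH-style limits from the residual standard deviation of the regression:
# LOD = 3.3*sigma/S, LOQ = 10*sigma/S, where S is the calibration slope.
sigma = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))
lod = 3.3 * sigma / slope    # ug/mL
loq = 10.0 * sigma / slope   # ug/mL
print(f"r = {r:.4f}, LOD = {lod*1000:.0f} ng/mL, LOQ = {loq*1000:.0f} ng/mL")
```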
Procedia PDF Downloads 515
273 Highly Linear and Low Noise AMR Sensor Using Closed Loop and Signal-Chopped Architecture
Authors: N. Hadjigeorgiou, A. C. Tsalikidou, E. Hristoforou, P. P. Sotiriadis
Abstract:
During the last few decades, the continuously increasing demand for accurate and reliable magnetic measurements has paved the way for the development of different types of magnetic sensing systems as well as different measurement techniques. Sensor sensitivity and linearity, signal-to-noise ratio, measurement range, and cross-talk between sensors in multi-sensor applications are only some of the aspects that have been examined in the past. In this paper, a fully analog closed-loop system has been developed to optimize the performance of AMR (anisotropic magnetoresistive) sensors. The operation of the proposed system has been tested using a Helmholtz-coil calibration setup in order to control both the amplitude and the direction of the magnetic field in the vicinity of the AMR sensor. Experimental testing indicated that improved linearity of the sensor response, as well as low noise levels, can be achieved when the system is employed.
Keywords: AMR sensor, closed loop, memory effects, chopper, linearity improvement, sensitivity improvement, magnetic noise, electronic noise
Procedia PDF Downloads 362
272 Measuring Technology of Airship Propeller Thrust and Torque in China Academy of Aerospace Aerodynamics
Authors: Ma Hongqiang, Yang Hui, Wen Haoju, Feng Jiabo, Bi Zhixian, Nie Ying
Abstract:
In order to measure the thrust and torque of airship propellers, a two-component balance and data acquisition system was developed at the China Academy of Aerospace Aerodynamics (CAAA) at an early stage. During the development, some problems were encountered. First, the measuring system and its protective parts increased the weight of the whole system significantly. Second, more parts meant more possible failures, so the reliability of the system was decreased. In addition, the rigidity of the system was lowered, making the structure more prone to vibration. Therefore, CAAA and the Academy of Opto-Electronics, Chinese Academy of Sciences (AOECAS) developed a new technique: using the propeller support rack as a spring element, attaching strain gauges onto it, and summing the readings as a generalized balance. New mathematical models, calibration methods, and load-determination methods were also developed.
Keywords: airship, propeller, thrust and torque, flight test
Procedia PDF Downloads 356
271 Monitoring Three-Dimensional Models of Tree and Forest by Using Digital Close-Range Photogrammetry
Authors: S. Y. Cicekli
Abstract:
In this study, three-dimensional models of trees were created using terrestrial close-range photogrammetry, for which close-range photos were taken. PhotoModeler Pro 5 software was used for camera calibration and for creating the three-dimensional models of the trees. In the first test, a three-dimensional model of a single tree was created; in the second test, three-dimensional models of three trees were created. The aim of this study is to create three-dimensional models of trees and to demonstrate the use of close-range photogrammetry in forestry. At the end of the study, three-dimensional models of one tree and of three trees had been created. This study showed the usability of close-range photogrammetry for monitoring three-dimensional models of trees and forests.
Keywords: close-range photogrammetry, forest, tree, three-dimensional model
Procedia PDF Downloads 389
270 Performance Evaluation of Al Jame’s Roundabout Using SIDRA
Authors: D. Muley, H. S. Al-Mandhari
Abstract:
This paper evaluates the performance of a multi-lane, four-legged modern roundabout operating in Muscat using the SIDRA model. The performance measures include the degree of saturation (DOS), average delay, and queue lengths. Geometric and traffic data were used for model preparation, and the gap-acceptance parameters, critical gap and follow-up headway, were used for calibration of the SIDRA model. The results of the analysis showed that the roundabout currently experiences delays of up to 610 seconds, with a DOS of 1.67 during the peak hour. Further, a sensitivity analysis of general and roundabout parameters, among them lane width, cruise speed, inscribed diameter, entry radius, and entry angle, showed that the inscribed diameter is the most crucial factor affecting delay and DOS. Upgrading the roundabout to a fully signalized junction was found to be the suitable solution; it will serve future years at LOS C in the design year, with a DOS of 0.9 and an average control delay of 51.9 seconds per vehicle.
Keywords: performance analysis, roundabout, sensitivity analysis, SIDRA
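SIDRA's capacity model is proprietary, but the role of the calibrated critical gap and follow-up headway can be illustrated with a generic Siegloch-style gap-acceptance formula (an assumption for illustration, not SIDRA's actual equations); the parameter values are likewise assumed, not the Al Jame' calibration:

```python
import math

def entry_capacity_vph(q_c, t_c, t_f):
    """Gap-acceptance entry capacity (veh/h) for one lane, Siegloch-style:
        q_e = (3600 / t_f) * exp(-q_c * (t_c - t_f / 2) / 3600)

    q_c : circulating (conflicting) flow, veh/h
    t_c : critical gap, s
    t_f : follow-up headway, s
    """
    return (3600.0 / t_f) * math.exp(-q_c * (t_c - t_f / 2.0) / 3600.0)

# Assumed parameters for a single entry lane:
t_c, t_f = 4.0, 2.4           # seconds
for q_c in (400, 800, 1200):  # circulating flow, veh/h
    cap = entry_capacity_vph(q_c, t_c, t_f)
    print(f"q_c = {q_c:4d} veh/h -> capacity ~ {cap:.0f} veh/h")
```

Dividing the entry demand by this capacity gives the degree of saturation for the lane, which is how DOS values such as the reported 1.67 arise when demand exceeds capacity.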
Procedia PDF Downloads 382