Search results for: density estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5256

3336 Demand and Supply Management for Electricity Markets: Econometric Analysis of Electricity Prices

Authors: Ioana Neamtu

Abstract:

This paper investigates the potential for demand-side management for the system price in the Nordic electricity market and the price effects of introducing wind power into the system. The proposed model accounts for the micro-structure of the Nordic electricity market by modeling each hour individually, while still accounting for the relationship between the hours within a day. This flexibility allows us to explore the differences between peak and shoulder demand hours. Preliminary results show potential for demand response management, as indicated by the price elasticity of demand, as well as a small but statistically significant decrease in price attributable to wind power penetration. Moreover, our study shows that these effects are stronger during day-time and peak hours, compared to night-time and shoulder hours.

Keywords: structural model, GMM estimation, system of equations, electricity market

Procedia PDF Downloads 435
3335 External Sulphate Attack: Advanced Testing and Performance Specifications

Authors: G. Massaad, E. Roziere, A. Loukili, L. Izoret

Abstract:

Based on the monitoring of mass, hydrostatic weighing, and the amount of leached OH-, we deduced the nature of the leached and precipitated minerals, the amount of lost aggregates, and the evolution of porosity and cracking during the sulphate attack. Using this information, we are able to determine the volume/mass changes brought about by mineralogical variations and cracking of the cement matrix. We then defined a new performance indicator, the averaged density, capable of summarizing, over the course of the sulphate attack test, the physicochemical changes that occurred in the cementitious matrix and of highlighting the underlying mechanism of degradation.

Keywords: monitoring strategy, performance indicator, sulphate attack, mechanism of degradation

Procedia PDF Downloads 319
3334 On the Strong Solutions of the Nonlinear Viscous Rotating Stratified Fluid

Authors: A. Giniatoulline

Abstract:

A nonlinear model of the mathematical fluid dynamics which describes the motion of an incompressible viscous rotating fluid in a homogeneous gravitational field is considered. The model is a generalization of the known Navier-Stokes system with the addition of the Coriolis parameter and the equations for changeable density. An explicit algorithm for the solution is constructed, and the proof of the existence and uniqueness theorems for the strong solution of the nonlinear problem is given. For the linear case, the localization and the structure of the spectrum of inner waves are also investigated.

Keywords: Galerkin method, Navier-Stokes equations, nonlinear partial differential equations, Sobolev spaces, stratified fluid

Procedia PDF Downloads 307
3333 Low Cost Inertial Sensors Modeling Using Allan Variance

Authors: A. A. Hussen, I. N. Jleta

Abstract:

Micro-electromechanical system (MEMS) accelerometers and gyroscopes are suitable for the inertial navigation systems (INS) of many applications due to their low price, small dimensions and light weight. The main disadvantage, in comparison with classical sensors, is worse long-term stability. The estimation accuracy is mostly affected by the time-dependent growth of inertial sensor errors, especially the stochastic errors. In order to eliminate the negative effect of these random errors, they must be accurately modeled; the key to a successful implementation is how well the noise statistics of the inertial sensors are characterized. In this paper, the Allan variance technique is used to model the stochastic errors of the inertial sensors. By performing a simple operation on the entire length of data, a characteristic curve is obtained whose inspection provides a systematic characterization of the various random errors contained in the inertial-sensor output data.
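
As an illustration of the Allan variance computation described above, the following is a minimal sketch (not the authors' code) of the overlapping Allan variance for a sampled rate signal, using NumPy; the sample rate, noise level and averaging times are hypothetical.

```python
import numpy as np

def overlapping_allan_variance(omega, fs, m_list):
    """Overlapping Allan variance of a rate signal (e.g., gyro output).

    omega  : 1-D array of rate samples
    fs     : sampling frequency [Hz]
    m_list : cluster sizes (number of samples per averaging time)
    Returns (tau, avar) arrays.
    """
    theta = np.cumsum(omega) / fs          # integrate rate -> angle
    n = theta.size
    taus, avars = [], []
    for m in m_list:
        if 2 * m >= n:
            break
        tau = m / fs
        # second difference of the integrated signal over clusters of size m
        d = theta[2 * m:] - 2.0 * theta[m:n - m] + theta[:n - 2 * m]
        avar = np.sum(d ** 2) / (2.0 * tau ** 2 * (n - 2 * m))
        taus.append(tau)
        avars.append(avar)
    return np.array(taus), np.array(avars)

# Example with synthetic white noise (angle random walk only)
fs = 100.0                                  # hypothetical 100 Hz sensor
omega = 0.01 * np.random.randn(200_000)     # rate samples [deg/s]
m_list = np.unique(np.logspace(0, 4, 50).astype(int))
tau, avar = overlapping_allan_variance(omega, fs, m_list)
adev = np.sqrt(avar)                        # Allan deviation; the slope of
                                            # log(adev) vs log(tau) identifies
                                            # each random error term
```

Inspecting the slope of the resulting log-log curve (e.g., -1/2 for angle random walk, 0 for bias instability, +1/2 for rate random walk) is the systematic characterization referred to in the abstract.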

Keywords: Allan variance, accelerometer, gyroscope, stochastic errors

Procedia PDF Downloads 440
3332 Hydrogen Production by Photoreforming of n-Butanol and Structural Isomers over Pt-Doped Titanate Catalyst

Authors: Hristina Šalipur, Jasmina Dostanić, Davor Lončarević, Matej Huš

Abstract:

Photocatalytic water splitting/alcohol photoreforming has been used for the conversion of sunlight energy in the process of hydrogen production due to its sustainability, environmental safety, effectiveness and simplicity. Titanate nanotubes are frequently studied materials since they combine the properties of photo-active semiconductors with the properties of layered titanates, such as the ion-exchange ability. Platinum (Pt) doping into the titanate structure has been considered an effective strategy for better separation of electron-hole pairs and for lowering the overpotential for hydrogen production, which results in higher photocatalytic activity. In our work, Pt-doped titanate catalysts were synthesized via simple alkaline hydrothermal treatment, the incipient wetness impregnation method and temperature-programmed reduction. The structural, morphological and optical properties of the prepared catalysts were investigated using various characterization techniques such as X-ray diffraction (XRD), scanning electron microscopy (SEM), N2 physisorption, and diffuse reflectance spectroscopy (DRS). The activities of the prepared Pt-doped titanate photocatalysts were tested for hydrogen production via the photocatalytic water splitting/alcohol photoreforming process under simulated solar light irradiation. Characterization of the synthesized Pt-doped titanate catalysts showed a crystalline anatase phase, preserved nanotubular structure and high specific surface area. The results showed an enhancement of activity in photocatalytic water splitting/alcohol photoreforming in the order 2-butanol > 1-butanol > tert-butanol, with maximal hydrogen production rates of 7.5, 5.3 and 2 mmol g⁻¹ h⁻¹, respectively. Different factors possibly influencing the hole scavenging ability, such as the hole scavenger redox potential and diffusivity, the adsorption and desorption rate of the hole scavenger on the surface, and the stability of the alcohol radical species generated via hole scavenging, were investigated. A theoretical evaluation using density functional theory (DFT) further elucidated the reaction kinetics and detailed mechanism of photocatalytic water splitting/alcohol photoreforming.

Keywords: hydrogen production, platinum, semiconductor, water splitting, density functional theory

Procedia PDF Downloads 112
3331 Effect of Pioglitazone on Intracellular Na+ Homeostasis in Metabolic Syndrome-Induced Cardiomyopathy in Male Rats

Authors: Ayca Bilginoglu, Belma Turan

Abstract:

Metabolic syndrome is associated with impaired blood glucose levels, insulin resistance, and dyslipidemia caused by abdominal obesity. It is also related to cardiovascular risk accumulation and cardiomyopathy. The aim of this study was to examine the effect of thiazolidinediones, such as pioglitazone, which are widely used insulin-sensitizing agents that improve glycemic control, on intracellular Na+ homeostasis in metabolic syndrome-induced cardiomyopathy in male rats. Male Wistar-Albino rats were randomly divided into three groups, namely control (Con, n=7), metabolic syndrome (MetS, n=7) and pioglitazone-treated metabolic syndrome (MetS+PGZ, n=7). Metabolic syndrome was induced by providing drinking water containing 32% sucrose for 18 weeks. All of the animals were exposed to a 12 h light – 12 h dark cycle. Abdominal obesity and glucose intolerance were measured as markers of metabolic syndrome. Intracellular Na+ ([Na+]i) is an important modulator of excitation–contraction coupling in the heart. [Na+]i at rest and [Na+]i during pacing with electrical field stimulation at 0.2 Hz, 0.8 Hz and 2.0 Hz stimulation frequencies were recorded in cardiomyocytes. Also, the Na+ channel current (INa) density and I-V curve were measured to characterize [Na+]i homeostasis. In the results, high sucrose intake, in addition to the normal daily diet, significantly increased the body mass and blood glucose level of the rats in the metabolic syndrome group as compared with the untreated control group. In the MetS+PGZ group, the blood glucose level and body mass tended to decrease towards control values. There was a decrease in INa density, and there was a shift in both the activation and inactivation curves of INa. Pioglitazone reversed the shift towards the control side. Basal [Na+]i did not differ significantly between the MetS and Con groups, but there was a significant increase in [Na+]i in stimulated cardiomyocytes in the MetS group. Furthermore, pioglitazone had no effect on basal [Na+]i, but it restored the increase in [Na+]i in stimulated cardiomyocytes to that of the Con group. The results of the present study suggest that pioglitazone has a significant effect on Na+ homeostasis in metabolic syndrome-induced cardiomyopathy in rats. All animal procedures and experiments were approved by the Animal Ethics Committee of Ankara University Faculty of Medicine (2015-2-37).

Keywords: insulin resistance, intracellular sodium, metabolic syndrome, sodium current

Procedia PDF Downloads 284
3330 Evaluation of Parameters of Subject Models and Their Mutual Effects

Authors: A. G. Kovalenko, Y. N. Amirgaliyev, A. U. Kalizhanova, L. S. Balgabayeva, A. H. Kozbakova, Z. S. Aitkulov

Abstract:

It is known that statistical information on the operation of compound multisite systems is often far from describing the actual state of the system and does not allow drawing any conclusions about the correctness of its operation. For example, from the worldwide practice of operating water supply and water disposal systems, it is known that total measurements at consumers and at suppliers differ by 40-60%. This is connected with measurement inaccuracy as well as with ineffective operation of the corresponding systems. The analysis is more difficult for widely distributed systems, in which subjects that are self-maintained in decision-making carry out economic interactions in production, purchase and sale, resale and consumption. This work analyzed mathematical models of sellers, consumers and arbitragers, and the models of their interaction, in a dispersed single-product market under perfect competition. On the basis of these models, methods allowing estimation of every subject's operating options and of the system as a whole are given.

Keywords: dispersed systems, models, hydraulic network, algorithms

Procedia PDF Downloads 283
3329 Characteristics of Sorghum (Sorghum bicolor L. Moench) Flour on the Soaking Time of Peeled Grains and Particle Size Treatment

Authors: Sri Satya Antarlina, Elok Zubaidah, Teti Istiana, Harijono

Abstract:

Sorghum (Sorghum bicolor L. Moench) has potential as a flour for gluten-free food products. Sorghum flour production requires a grain soaking treatment. Soaking can reduce the tannin content, which is an anti-nutrient, and can therefore increase protein digestibility. A fine particle size decreases the yield of flour, so it is necessary to study various particle sizes to increase the yield. This study aims to determine the characteristics of sorghum flour under different peeled-grain soaking times and particle size treatments. The material was the white sorghum variety KD-4 from farmers in East Java, Indonesia. A factorial randomized design (two factors) with three replications was used: factor I was the grain soaking time (five levels: 0, 12, 24, 36, and 48 hours); factor II was the particle size of the flour, sieved to fineness levels of 40, 60, 80, and 100 mesh. The method of making sorghum flour consisted of grain peeling, soaking of the peeled grain, drying in an oven at 60ᵒC, milling, and sieving. Physicochemical analyses of the sorghum flour were performed. The results show that there is an interaction between grain soaking time and sorghum flour particle size for the yield of flour, L* color (brightness level), whiteness index, paste properties, amylose content, protein content, bulk density, and protein digestibility. The method of making sorghum flour through soaking of peeled grain and varying the particle size plays an important role in determining the physicochemical properties of the flour. Based on the characteristics of the sorghum flour produced, the recommended method is sorghum grain soaking for 24 hours with a flour particle size of 80 mesh. The resulting sorghum flour had a flour yield of 24.88%, color L* (brightness level) of 88.60, whiteness index of 69.95, viscosity of 3615 cP, bulk density of 584.10 g/l, protein digestibility of 24.27% db, starch content of 90.02% db, amylose content of 23.4% db, amylopectin content of 67.45% db, crude fiber content of 0.22% db, tannin content of 0.037% db, protein content of 5.30% db, ash content of 0.18% db, carbohydrate content of 92.88% db, and fat content of 1.94% db. This sorghum flour is recommended for cookie products.

Keywords: characteristic, sorghum (Sorghum bicolor L. Moench) flour, grain soaking, particle size, physicochemical properties

Procedia PDF Downloads 160
3328 A Comprehensive Planning Model for Amalgamation of Intensification and Green Infrastructure

Authors: Sara Saboonian, Pierre Filion

Abstract:

The dispersed-suburban model has been the dominant one across North America for the past seventy years, characterized by automobile reliance, low density, and land-use specialization. Two planning models have emerged as possible alternatives to address the ills inflicted by this development pattern. First, there is intensification, which promotes efficient infrastructure by connecting high-density, multi-functional, and walkable nodes with public transit services within the suburban landscape. Second is green infrastructure, which provides environmental health and human well-being by preserving and restoring ecosystem services. This research studies incompatibilities and the possibility of amalgamating the two alternatives in an attempt to develop a comprehensive alternative to the suburban model that advocates density, multi-functionality and transit- and pedestrian-conduciveness, with measures capable of mitigating the adverse environmental impacts of compactness. The research investigates three Canadian urban growth centres, where intensification is the current planning practice and where awareness of green infrastructure benefits is on the rise. These three centres are contrasted by their development stage, the presence or absence of protected natural land, their environmental approach, and their adverse environmental consequences according to the planning canons of different periods. The methods include reviewing the literature on green infrastructure planning, critiquing the Ontario provincial plans for intensification, surveying residents' preferences for alternative models, and interviewing officials who deal with local planning for the centres. Moreover, the research draws on the debates between New Urbanism and Landscape/Ecological Urbanism. The case studies expose the difficulties in creating urban growth centres that accommodate green infrastructure while adhering to intensification principles. First, the dominant status of intensification and the obstacles confronting intensification have monopolized planners' concerns. Second, the tension between green infrastructure and intensification explains the absence of green infrastructure typologies that correspond to intensification-compatible forms and dynamics. Finally, the lack of highlighted socio-economic benefits of green infrastructure reduces residents' participation. The results from the research also provide insight into the predominant urbanization theories, New Urbanism and Landscape/Ecological Urbanism. In order to understand the political, planning, and ecological dynamics of such a blending, dexterous context-specific planning is required. The findings suggest the influence of the following factors on amalgamating intensification and green infrastructure. Initially, producing ecosystem services-based justifications for green infrastructure development in the intensification context provides an expert-driven backbone for implementation programs. This knowledge base should be translated effectively to engage different urban stakeholders. Moreover, due to the limited greenfields in intensified areas, the spatial distribution and development of multi-level corridors, such as pedestrian-hospitable settings and transportation networks, along green infrastructure measures are required. Finally, to ensure the long-term integrity of implemented green infrastructure measures, significant investment in public engagement and education, as well as clarification of management responsibilities, is essential.

Keywords: ecosystem services, green infrastructure, intensification, planning

Procedia PDF Downloads 355
3327 Reducing the Incidence Rate of Pressure Sore in a Medical Center in Taiwan

Authors: Chang Yu Chuan

Abstract:

Background and Aim: A pressure sore is not only the consequence of gradual damage to the skin leading to tissue defects but also an important indicator of the quality of clinical care. If hospitalized patients develop pressure sores without proper care, the result is delayed healing, wound infection, increased physical pain for the patient, prolonged hospital stay and even death, which has a negative impact on the quality of care and also increases nursing manpower and medical costs. This project is aimed at decreasing the incidence of pressure sores in one internal medicine ward. Our data showed 53 cases (0.61%) of pressure sores in 2015, which exceeded the average (0.5%) of the Taiwan Clinical Performance Indicator (TCPI) for medical centers. The purpose of this project is to reduce the incidence rate of pressure sores in the ward. After data collection and analysis from January to December 2016, the following reasons for the development of pressure sores were found: 1. lack of pressure sore prevention knowledge among nursing staff; 2. no relevant courses on pressure ulcer prevention and pressure wound care held in this unit; 3. a low completion rate of the pressure sore care education that family members should receive from nursing staff; 4. insufficient decompression equipment; 5. lack of standard procedures for body-turning and positioning care. After brainstorming among team members, several strategies were proposed, including holding in-service education, pressure sore care seed training, purchasing decompression mattresses and memory pillows, designing more health education tools, such as health education pamphlets, posters and multimedia films demonstrating body-turning and positioning, and formulating and promoting standard operating procedures. In this way, nursing staff can understand the body-turning and positioning guidelines for pressure sore prevention and enhance the quality of care. After the implementation of this project, the pressure sore density significantly decreased from 0.61% (53 cases) to 0.45% (28 cases) in this ward. The project shows good results, sets a good example for nurses working in the ward, and helps to enhance the quality of care.

Keywords: body-turning and positioning, incidence density, nursing, pressure sore

Procedia PDF Downloads 266
3326 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups

Authors: Naushad Mamode Khan

Abstract:

The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs) and handles equi-, over- and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify and thus restricts likelihood-based estimation methodology. The joint generalized quasi-likelihood approach (GQL-I) was instead considered but is rather computationally intensive and may not even estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and with separate marginal GQLs (GQL-II) through some simulation experiments and is shown to yield estimates as efficient as those of GQL-I while being far more computationally stable.
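
For context, a generic generalized quasi-likelihood estimating equation of the kind referred to above can be written as follows; this is a standard textbook form, not the paper's exact GQL-III specification:

```latex
% Generic GQL estimating equation for the regression parameters \beta,
% solved iteratively (e.g., by Newton-Raphson):
\sum_{i=1}^{K} \frac{\partial \mu_i^{\top}}{\partial \beta}\,
\Sigma_i^{-1}\,\bigl(y_i - \mu_i\bigr) = 0,
\qquad
\hat{\beta}^{(r+1)} = \hat{\beta}^{(r)} +
\Bigl[\textstyle\sum_{i=1}^{K} \frac{\partial \mu_i^{\top}}{\partial \beta}
\Sigma_i^{-1} \frac{\partial \mu_i}{\partial \beta^{\top}}\Bigr]^{-1}
\sum_{i=1}^{K} \frac{\partial \mu_i^{\top}}{\partial \beta}
\Sigma_i^{-1}\bigl(y_i - \mu_i\bigr),
```

where y_i is the vector of repeated counts for subject i, μ_i its mean under the longitudinal CMP model, and Σ_i a working covariance matrix; the GQL variants compared in the paper differ mainly in how the score vector and Σ_i are constructed.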

Keywords: longitudinal, Com-Poisson, ill-conditioned, INAR(1), GLMs, GQL

Procedia PDF Downloads 353
3325 Anti-lipidemic and Hematinic Potentials of Moringa Oleifera Leaves: A Clinical Trial on Type 2 Diabetic Subjects in a Rural Nigerian Community

Authors: Ifeoma C. Afiaenyi, Elizabeth K. Ngwu, Rufina N. B. Ayogu

Abstract:

Diabetes has crept into the rural areas of Nigeria, causing devastating effects on its sufferers, most of whom cannot afford diabetic medications. Moringa oleifera has been used extensively in animal models to demonstrate its antilipidaemic and haematinic qualities; however, there is a scarcity of data on the effect of graded levels of Moringa oleifera leaves on the lipid profile and hematological parameters of human diabetic subjects. The study determined the effect of Moringa oleifera leaves on the lipid profile and hematological parameters of type 2 diabetic subjects in Ukehe, a rural Nigerian community. Twenty-four adult male and female diabetic subjects were purposively selected for the study. These subjects were divided into four groups of six subjects each. The diets used in the study were isocaloric. A control group (diabetics, group 1) was fed diets without Moringa oleifera leaves. Experimental groups 2, 3 and 4 received 20 g, 40 g and 60 g of Moringa oleifera leaves daily, respectively, in addition to the diets. The subjects' lipid profile and hematological parameters were measured prior to the feeding trial and at the end of the feeding trial. The feeding trial lasted fourteen days. The data obtained were analyzed using the Statistical Product and Service Solutions (SPSS) software for Windows, version 21. A paired-samples t-test was used to compare the means of values collected before and after the feeding trial within the groups, and significance was accepted at p < 0.05. There was a non-significant (p > 0.05) decrease in the mean total cholesterol of the subjects in groups 1, 2 and 3 after the feeding trial. There was a non-significant (p > 0.05) decrease in the mean triglyceride levels of the subjects in group 1 after the feeding trial. Subjects in groups 1 and 3 had a non-significant (p > 0.05) decrease in their mean low-density lipoprotein (LDL) cholesterol after the feeding trial. Groups 1, 2 and 4 had a significant (p < 0.05) increase in their mean high-density lipoprotein (HDL) cholesterol after the feeding trial. A significant (p < 0.05) decrease in the mean hemoglobin level was observed only in group 4 subjects. Similarly, there was a significant (p < 0.05) decrease in the mean packed cell volume of group 4 subjects. A significant (p < 0.05) decrease in the mean white blood cell count of the subjects was also observed only in group 4. The changes observed in the assessed parameters were not dose-dependent. Therefore, a similar study with a longer duration and a larger sample size is imperative to authenticate these results.

Keywords: anemia, diabetic subjects, lipid profile, moringa oleifera

Procedia PDF Downloads 199
3324 Robust Variable Selection Based on Schwarz Information Criterion for Linear Regression Models

Authors: Shokrya Saleh A. Alshqaq, Abdullah Ali H. Ahmadini

Abstract:

The Schwarz information criterion (SIC) is a popular tool for selecting the best variables in regression datasets. However, the SIC is defined using an unbounded estimator, namely the least-squares (LS) estimator, which is highly sensitive to outlying observations, especially bad leverage points. A method for robust variable selection based on the SIC for linear regression models is thus needed. This study investigates the robustness properties of the SIC by deriving its influence function and proposes a robust SIC based on the MM-estimation scale. The aim of this study is to produce a criterion that can effectively select accurate models in the presence of vertical outliers and high leverage points. The advantages of the proposed robust SIC are demonstrated through a simulation study and an analysis of a real dataset.
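
To make the construction concrete, the classical SIC and the type of robust modification discussed here can be sketched as follows (an illustrative form only; the paper's exact robust criterion is built on the MM-estimation scale):

```latex
% Classical SIC for a candidate model with p regressors and n observations:
\mathrm{SIC}(p) = n \,\log\!\bigl(\hat{\sigma}^2_{\mathrm{LS}}\bigr) + p \,\log n,
\qquad
\hat{\sigma}^2_{\mathrm{LS}} = \frac{1}{n}\sum_{i=1}^{n}
\bigl(y_i - x_i^{\top}\hat{\beta}_{\mathrm{LS}}\bigr)^2 .

% Robust variant: replace the LS scale by a robust scale estimate
% computed from MM-regression residuals:
\mathrm{SIC}_{R}(p) = n \,\log\!\bigl(\hat{\sigma}^2_{\mathrm{MM}}\bigr) + p \,\log n .
```

Because the MM scale is bounded in its reaction to outlying residuals, the robust criterion is not dominated by vertical outliers or bad leverage points when ranking candidate models.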

Keywords: influence function, robust variable selection, robust regression, Schwarz information criterion

Procedia PDF Downloads 139
3323 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The radiated sound can be related to the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol'yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, while being less accurate under other flow conditions. Despite this, recent research into the possibility of using alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
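
The study itself uses a stepwise regression algorithm in R; purely as an illustration of the idea, below is a minimal forward-stepwise sketch in Python (hypothetical predictor names, synthetic data, BIC as the selection criterion) showing how redundant or uncorrelated input variables are filtered out automatically.

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise(X, y, names):
    """Greedy forward selection of predictors by BIC (Schwarz criterion)."""
    selected, remaining = [], list(range(X.shape[1]))
    best_bic = np.inf
    while remaining:
        scores = []
        for j in remaining:
            cols = selected + [j]
            model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            scores.append((model.bic, j))
        bic, j = min(scores)
        if bic >= best_bic:        # stop when no candidate improves the BIC
            break
        best_bic = bic
        selected.append(j)
        remaining.remove(j)
    return [names[j] for j in selected], best_bic

# Hypothetical nondimensional TBL predictors (names are placeholders only)
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.standard_normal(500)
names = ["St", "Re_theta", "M", "Cf", "H"]
print(forward_stepwise(X, y, names))   # keeps only the informative predictors
```

A backward or bidirectional variant works the same way; the overfitting risk mentioned above is why the resulting model should be checked against independently sourced datasets.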

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 134
3322 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data

Authors: Nicola Colaninno, Eugenio Morello

Abstract:

The urban environment affects local-to-global climate and, in turn, suffers from global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. The physical and morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions at an interpolated point. Quantifying local UHI for extensive areas based on weather station observations only is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to enable NSAT estimation by accounting for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations and satellite-derived LST. The approach is structured in two main steps. First, a GWR model is set up to estimate NSAT at low resolution by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and the LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometre of spatial resolution, are employed. Two time periods are considered, according to the satellite revisit times, i.e., 10:30 am and 9:30 pm. Afterwards, the results are downscaled to 30 metres of spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable), the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 metres. Albedo and the DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km² and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 metres), have been validated through cross-validation relying on indicators such as R², Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of highly effective models. In addition, an alternative network of weather stations, available for the City of Milano only, has been employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
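
As a rough illustration of the first GWR step described above (station air temperature regressed on satellite LST with spatially varying coefficients), the following is a minimal, self-contained sketch with a fixed Gaussian kernel; the coordinates, bandwidth, and data are hypothetical, and the actual study additionally includes bandwidth selection and the 30 m downscaling step with albedo and the SRTM DEM.

```python
import numpy as np

def gwr_predict(coords_obs, X_obs, y_obs, coords_new, X_new, bandwidth):
    """Minimal geographically weighted regression (Gaussian kernel).

    coords_obs : (n, 2) station coordinates
    X_obs      : (n, k) predictors at the stations (e.g., MODIS LST)
    y_obs      : (n,)   observed air temperature
    coords_new : (m, 2) prediction locations (e.g., 1 km grid cells)
    X_new      : (m, k) predictors at the prediction locations
    """
    X_obs = np.column_stack([np.ones(len(X_obs)), X_obs])   # add intercept
    X_new = np.column_stack([np.ones(len(X_new)), X_new])
    y_hat = np.empty(len(coords_new))
    for i, (c, x) in enumerate(zip(coords_new, X_new)):
        d = np.linalg.norm(coords_obs - c, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)              # Gaussian weights
        W = np.diag(w)
        # local weighted least-squares fit, then local prediction
        beta = np.linalg.solve(X_obs.T @ W @ X_obs, X_obs.T @ W @ y_obs)
        y_hat[i] = x @ beta
    return y_hat

# Hypothetical example: 40 stations, LST as the single predictor
rng = np.random.default_rng(1)
coords = rng.uniform(0, 50_000, size=(40, 2))        # metres
lst = rng.uniform(15, 35, size=(40, 1))              # satellite LST [°C]
t_air = 0.8 * lst[:, 0] + 2.0 + rng.normal(0, 0.5, 40)
grid = rng.uniform(0, 50_000, size=(100, 2))
lst_grid = rng.uniform(15, 35, size=(100, 1))
nsat = gwr_predict(coords, lst, t_air, grid, lst_grid, bandwidth=10_000)
```

The second, high-resolution step has the same structure, with the low-resolution NSAT as the dependent variable and albedo plus elevation as the local predictors.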

Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing

Procedia PDF Downloads 193
3321 Combining the Dynamic Conditional Correlation and Range-GARCH Models to Improve Covariance Forecasts

Authors: Piotr Fiszeder, Marcin Fałdziński, Peter Molnár

Abstract:

The dynamic conditional correlation model of Engle (2002) is one of the most popular multivariate volatility models. However, this model is based solely on closing prices. It has been documented in the literature that the high and low prices of the day can be used for efficient volatility estimation. We therefore suggest a model which incorporates high and low prices into the dynamic conditional correlation framework. Empirical evaluation of this model is conducted on three datasets: currencies, stocks, and commodity exchange-traded funds. The utilisation of realized variances and covariances as proxies for the true variances and covariances allows us to reach a strong conclusion that our model outperforms not only the standard dynamic conditional correlation model but also a competing range-based dynamic conditional correlation model.
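
For reference, the standard DCC recursion of Engle (2002) on which the proposed range-based extension builds can be written as follows; the high-low (range) information enters through the univariate conditional variance equations, which are only sketched here:

```latex
% Univariate step: conditional variances h_{i,t} from a (range-)GARCH model,
% standardized residuals \varepsilon_{i,t} = r_{i,t} / \sqrt{h_{i,t}}.
% DCC step (Engle, 2002):
Q_t = (1 - a - b)\,\bar{Q} + a\,\varepsilon_{t-1}\varepsilon_{t-1}^{\top} + b\,Q_{t-1},
\qquad
R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t \,\operatorname{diag}(Q_t)^{-1/2},
\qquad
H_t = D_t R_t D_t ,
```

where D_t = diag(√h_{1,t}, …, √h_{N,t}), Q̄ is the unconditional covariance of the standardized residuals, and a, b ≥ 0 with a + b < 1.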

Keywords: volatility, DCC model, high and low prices, range-based models, covariance forecasting

Procedia PDF Downloads 181
3320 ANFIS Approach for Locating Faults in Underground Cables

Authors: Magdy B. Eteiba, Wael Ismael Wahba, Shimaa Barakat

Abstract:

This paper presents a fault identification, classification and fault location estimation method based on the Discrete Wavelet Transform and an Adaptive Network-Based Fuzzy Inference System (ANFIS) for medium voltage cables in the distribution system. Different faults and locations are simulated in ATP/EMTP, and then certain selected features of the wavelet-transformed signals are used as input for a training process on the ANFIS. An accurate fault classifier and locator algorithm was then designed, trained and tested using current samples only. The results obtained from the ANFIS output were compared with the real output. From the results, it was found that the percentage error between the ANFIS output and the real output is less than three percent. Hence, it can be concluded that the proposed technique is able to offer high accuracy in both fault classification and fault location.
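
As a rough sketch of the feature-extraction step described above (not the authors' code), the following decomposes a current signal with a discrete wavelet transform and computes per-band energy features of the kind typically fed to an ANFIS classifier/locator; the ANFIS itself is not shown, and the sampled waveform is synthetic.

```python
import numpy as np
import pywt

def dwt_energy_features(signal, wavelet="db4", level=5):
    """Decompose a current signal with the DWT and return the normalized
    energy of the approximation and each detail band as a feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# Synthetic stand-in for a fault current record (50 Hz component + transient)
fs = 10_000                                   # sampling frequency [Hz]
t = np.arange(0, 0.2, 1 / fs)
current = np.sin(2 * np.pi * 50 * t)
current[1000:1100] += 2.0 * np.exp(-np.arange(100) / 20.0)   # fault transient

features = dwt_energy_features(current)
# 'features' would be the input vector to the trained ANFIS fault
# classifier/locator described in the abstract.
print(features)
```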

Keywords: ANFIS, fault location, underground cable, wavelet transform

Procedia PDF Downloads 510
3319 Competitiveness of African Countries through Open Quintuple Helix Model

Authors: B. G. C. Ahodode, S. Fekkaklouhail

Abstract:

Following the triple helix theory, this study aims to evaluate the effect of the innovation system on African countries' competitiveness by taking into account external contributions, given that developing countries (and especially African countries) are characterized by weak innovation systems whose synergy operates more at the foreign level than at the domestic and global levels. To do this, we used correlation tests, parsimonious regression techniques, and panel estimation between 2013 and 2016. The results show that the degree of innovation synergy has a significant effect on competitiveness in Africa. Specifically, while the opening system (OPESYS) and the social system (SOCSYS) contribute, in order of importance, 0.634 and 0.284 points of increase in the GCI (significant at the 1% level), the political system (POLSYS) and the educational system (EDUSYS) increase it by only 0.322 and 0.169 points (significant at the 5% level), while the effect of the economic system (ECOSYS) on the Global Competitiveness Index is not significant.

Keywords: innovation system, innovation, competitiveness, Africa

Procedia PDF Downloads 68
3318 Shear Strength Characteristics of Sand Mixed with Particulate Rubber

Authors: Firas Daghistani, Hossam Abuel Naga

Abstract:

Waste tyres are a global problem with a negative effect on the environment; approximately one billion waste tyres are discarded worldwide yearly. Waste tyres are discarded in stockpiles, where they harm the environment in many ways. Finding applications for these materials can help in reducing this global problem. One such application is recycling these waste materials and using them in geotechnical engineering. Recycled waste tyre particulates can be mixed with sand to form a lightweight material with varying shear strength characteristics. Contradictory results have been reported in the literature on the addition of particulate rubber to sand: some experiments found that the inclusion of particulate rubber can increase the shear strength of the mixture, while other experiments stated that it decreases the shear strength of the mixture. This research further investigates the addition of particulate rubber to sand and whether it increases or decreases the shear strength characteristics of the mixture. For the experiment, a series of direct shear tests were performed on a poorly graded sand with a mean particle size of 0.32 mm mixed with recycled poorly graded particulate rubber with a mean particle size of 0.51 mm. The shear tests were performed at four normal stresses, 30, 55, 105 and 200 kPa, at a shear rate of 1 mm/minute. Different percentages of particulate rubber content were used in the mixture, i.e., 10%, 20%, 30% and 50% of the sand dry weight, at three density states, namely loose, slightly dense, and dense. The size ratio of the mixture, which is the mean particle size of the particulate rubber divided by the mean particle size of the sand, was 1.59. The results identified multiple parameters that can influence the shear strength of the mixture: normal stress, particulate rubber content, mixture gradation, mixture size ratio, and the mixture's density. The inclusion of particulate rubber in sand showed a decrease in the internal friction angle and an increase in the apparent cohesion. Overall, the inclusion of particulate rubber did not have a significant influence on the shear strength of the mixture. For all the dense states at the low normal stresses of 30 and 55 kPa, the inclusion of particulate rubber showed a slight increase in the shear strength, with the peak at 20% rubber content of the sand's dry weight. On the other hand, at the high normal stresses of 105 and 200 kPa, there was a slight decrease in the shear strength.
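
For readers less familiar with how the friction angle and apparent cohesion are obtained from such direct shear tests, the following is a small sketch (with made-up peak shear stresses, not the measured data) of the linear Mohr-Coulomb fit over the applied normal stresses:

```python
import numpy as np

# Hypothetical peak shear stresses [kPa] measured at the four normal stresses
normal_stress = np.array([30.0, 55.0, 105.0, 200.0])    # kPa
peak_shear = np.array([24.0, 41.0, 74.0, 138.0])         # kPa (illustrative)

# Mohr-Coulomb envelope: tau = c + sigma_n * tan(phi)
slope, cohesion = np.polyfit(normal_stress, peak_shear, 1)
phi = np.degrees(np.arctan(slope))

print(f"apparent cohesion c ≈ {cohesion:.1f} kPa, friction angle phi ≈ {phi:.1f}°")
```

Repeating this fit for each rubber content and density state yields the trends in friction angle and apparent cohesion reported above.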

Keywords: shear strength, direct shear, sand-rubber mixture, waste material, granular material

Procedia PDF Downloads 131
3317 Metallic-Diamond Tools with Increased Abrasive Wear Resistance for Grinding Industrial Floor Systems

Authors: Elżbieta Cygan-Bączek, Piotr Wyżga

Abstract:

This paper presents the results of research on the physical, mechanical, and tribological properties of materials constituting the matrix in sintered metallic-diamond tools. Ground powders based on the Fe-Mn-Cu-Sn-C system were modified with micro-sized ceramic phase particles (SiC, Al₂O₃) and consolidated using the SPS (spark plasma sintering) method to a relative density of over 98% at 850-950°C, at a pressure of 35 MPa and a holding time of 10 min. After sintering, an analysis of the microstructure was conducted using scanning electron microscopy. The resulting materials were tested for apparent density, determined by Archimedes' method, Rockwell hardness (scale B) and Young's modulus, as well as for technological properties. The performance of the obtained diamond composites was compared with the base material (Fe–Mn–Cu–Sn–C) and the commercial Co-20% WC alloy. The hardness of the composites reached its maximum at a sintering temperature of 900°C; it can therefore be considered that the optimal physical and mechanical properties of the studied composites were obtained at this temperature. Research on the tribological properties showed that the composites modified with micro-sized ceramic phase particles are characterized by more than twice the wear resistance of the base materials and the commercial Co-20% WC alloy. Composites containing Al₂O₃ phase particles in the matrix material were characterized by the lowest abrasion wear resistance. The manufacturing technology presented in the paper is economically justified and can be successfully used in the production process of the matrix in sintered diamond-impregnated tools used for the machining of industrial floor systems. Acknowledgment: The study was performed under LIDER IX Research Project No. LIDER/22/0085/L-9/17/NCBR/2018 entitled “Innovative metal-diamond tools without the addition of critical raw materials for applications in the process of grinding industrial floor systems” funded by the National Centre for Research and Development of Poland, Warsaw.

Keywords: abrasive wear resistance, metal matrix composites, sintered diamond tools, Spark Plasma Sintering

Procedia PDF Downloads 75
3316 Assessment of Seeding and Weeding Field Robot Performance

Authors: Victor Bloch, Eerikki Kaila, Reetta Palva

Abstract:

Field robots are an important tool for enhancing efficiency and decreasing the climatic impact of food production. A number of commercial field robots exist; however, since this technology is still new, the advantages and limitations of the robots, as well as methods for their optimal use, are still unclear. In this study, the performance of a commercial field robot for seeding and weeding was assessed. A 2-ha research sugar beet field with a 0.5 m row width was used for testing, which included robotic sowing of sugar beet and weeding five times during the first two months of growth. About three and five percent of the field were used as untreated and chemically weeded control areas, respectively. Plant detection was based on the exact plant location, without image processing. The robot was equipped with six seeding and weeding tools, including passive between-row harrow hoes and active hoes cutting inside the rows between the plants, and it moved with a maximal speed of 0.9 km/h. The robot's performance was assessed by image processing. The field images were collected by an action camera with a resolution of 27 Mpixels mounted on the robot at a height of 2 m, and by a drone with a 16 Mpixel camera flying at 4 m height. To detect plants and weeds, a YOLO model was trained with transfer learning from two available datasets. A preliminary analysis of the entire field showed that in the areas treated by the robot, the average weed density varied across the field from 6.8 to 9.1 weeds/m² (compared with 0.8 in the chemically treated area and 24.3 in the untreated area), the average weed density inside the rows was 2.0-2.9 weeds/m (compared with 0 in the chemically treated area), and the emergence rate was 90-95%. Information about the robot's performance is highly important for the application of robotics to field tasks. With the help of the developed method, the performance can be assessed several times during the growing season, according to the robotic weeding frequency. When the method is used by farmers, they can know the field condition and the efficiency of the robotic treatment all over the field. Farmers and researchers can develop optimal strategies for using the robot, such as seeding and weeding timing, robot settings, and plant and field parameters and geometry. The robot producers can obtain quantitative information from an actual working environment and improve the robots accordingly.

Keywords: agricultural robot, field robot, plant detection, robot performance

Procedia PDF Downloads 86
3315 Review on Quaternion Gradient Operator with Marginal and Vector Approaches for Colour Edge Detection

Authors: Nadia Ben Youssef, Aicha Bouzid

Abstract:

Gradient estimation is one of the most fundamental tasks in the field of image processing in general, and particularly for color images, since research on color image gradients remains limited. The most widely used gradient method is Di Zenzo's gradient operator, which is based on a measure of the squared local contrast of color images. The proposed gradient mechanism, presented in this paper, is based on the principle of Di Zenzo's approach using a quaternion representation. This edge detector is compared to a marginal approach based on the multiscale product of wavelet transforms and to another vector approach based on quaternion convolution and a vector gradient. The experimental results indicate that the proposed color gradient operator outperforms the marginal approach; however, it is less efficient than the second vector approach.
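
For completeness, Di Zenzo's multichannel gradient, on which both the quaternion-based operator and the vector approaches build, can be summarized as follows (standard formulation, restated here for the reader):

```latex
% Structure-tensor entries summed over the colour channels I_k (k = R, G, B):
g_{xx} = \sum_{k} \Bigl(\frac{\partial I_k}{\partial x}\Bigr)^{2},\quad
g_{yy} = \sum_{k} \Bigl(\frac{\partial I_k}{\partial y}\Bigr)^{2},\quad
g_{xy} = \sum_{k} \frac{\partial I_k}{\partial x}\,\frac{\partial I_k}{\partial y}.

% Gradient direction and squared magnitude (largest eigenvalue of the tensor):
\theta = \tfrac{1}{2}\arctan\!\Bigl(\frac{2\,g_{xy}}{g_{xx}-g_{yy}}\Bigr),
\qquad
\lambda_{\max} = \tfrac{1}{2}\Bigl(g_{xx}+g_{yy} +
\sqrt{(g_{xx}-g_{yy})^{2} + 4\,g_{xy}^{2}}\Bigr).
```

The quaternion formulation discussed in the paper encodes the three colour channels as the imaginary parts of a quaternion and applies the same local-contrast principle in that algebra.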

Keywords: gradient, edge detection, color image, quaternion

Procedia PDF Downloads 233
3314 Electrical Tortuosity across Electrokinetically Remediated Soils

Authors: Waddah S. Abdullah, Khaled F. Al-Omari

Abstract:

Electrokinetic remediation is one of the most influential and effective methods for decontaminating polluted soils. Electroosmosis and electromigration are the processes of electrochemical extraction of contaminants from soils. The driving force that removes contaminants from soils (by the electroosmosis or electromigration process) is the voltage gradient. Therefore, it is extremely important to investigate the electric field distribution throughout the soil domain and to determine the factors that help establish a uniform electric field distribution, so that the clean-up process works properly and efficiently. In this study, small passive electrodes (made of graphite) were placed at predetermined locations within the soil specimen, and the voltage drop between these passive electrodes was measured in order to observe the electrical distribution throughout the tested soil specimens. The electrokinetic test was conducted on two types of soils: a sandy soil and a clayey soil. The electrical distribution throughout the soil domain was measured under different test conditions, and the electric field distribution was mapped in a three-dimensional pattern within the soil domain. The effects of density, applied voltage, and degree of saturation on the electrical distribution within the remediated soil were investigated. The distributions of moisture content, sodium ion concentration, and calcium ion concentration were also determined and established in a three-dimensional scheme. The study has shown that the electrical conductivity within the soil domain depends on the moisture content and the concentration of electrolytes present in the pore fluid. The distribution of the electric field in the saturated soil was found not to be affected by its density. The study has also shown that a high voltage gradient leads to a non-uniform electric field distribution within the electroremediated soil. Very importantly, it was found that even when the electric field distribution is uniform globally (i.e., between the passive electrodes), local non-uniformity can be established within the remediated soil mass. Cracks or air gaps formed due to temperature rise (because of electric flow in low-conductivity regions) promote electrical tortuosity. Thus, fracturing or cracking in the remediated soil mass causes disconnection of the electric current and, hence, no removal of contaminants occurs within these areas.

Keywords: contaminant removal, electrical tortuousity, electromigration, electroosmosis, voltage distribution

Procedia PDF Downloads 417
3313 Observation of Inverse Blech Length Effect during Electromigration of Cu Thin Film

Authors: Nalla Somaiah, Praveen Kumar

Abstract:

Scaling of transistors and, hence, interconnects is very important for the enhanced performance of microelectronic devices. Scaling of devices creates significant complexity, especially in multilevel interconnect architectures, wherein current crowding occurs at the corners of interconnects. Such current crowding creates hot-spots at the respective corners, resulting in a non-uniform temperature distribution in the interconnect as well. This non-uniform temperature distribution, which is exacerbated with continued scaling of devices, creates a temperature gradient in the interconnect. In particular, the increased current density at corners and the associated temperature rise due to Joule heating accelerate electromigration-induced failures in interconnects, especially at corners. This has been the classic reliability issue associated with metallic interconnects. It is generally understood that electromigration-induced damage can be avoided if the length of the interconnect is smaller than a critical length, often termed the Blech length. Interestingly, the effect of the non-negligible temperature gradients generated at these corners, in terms of thermomigration and electromigration-thermomigration coupling, has not attracted enough attention. Accordingly, in this work, the interplay between electromigration and temperature gradient induced mass transport was studied using a standard Blech structure. In this particular sample structure, the majority of the current is forcefully directed into the low-resistivity metallic film from a high-resistivity underlayer film, resulting in current crowding at the edges of the metallic film. In this study, a 150 nm thick Cu metallic film was deposited on a 30 nm thick W underlayer film in the configuration of a Blech structure. A series of Cu thin strips, with lengths of 10, 20, 50, 100, 150 and 200 μm, were fabricated. A current density of ≈ 4 × 10¹⁰ A/m² was passed through the Cu and W films at a temperature of 250ºC. Along with the expected forward migration of Cu atoms from the cathode to the anode at the cathode end of the Cu film, backward migration from the anode towards the center of the Cu film was also observed. Interestingly, smaller-length samples consistently showed enhanced migration at the cathode end, thus indicating the existence of an inverse Blech length effect in the presence of a temperature gradient. A finite element based model capturing the interplay between the electromigration and thermomigration driving forces has been developed to explain this observation.
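
For reference, the classical Blech condition referred to above, which defines the critical current-density–length product below which electromigration drift is suppressed by the mechanical back stress, is usually written in the following standard form (not specific to the present Cu/W samples):

```latex
% Drift stops when the back-stress gradient balances the electron-wind force,
% giving the critical (Blech) product:
(j\,L)_{c} = \frac{\Omega\,\Delta\sigma_{\max}}{Z^{*} e\,\rho},
```

where j is the current density, L the interconnect length, Ω the atomic volume, Δσ_max the maximum sustainable stress difference, Z* the effective charge number, e the elementary charge, and ρ the resistivity. The "inverse Blech length effect" reported here refers to shorter strips showing enhanced, not suppressed, migration once the temperature gradient contributes an additional thermomigration driving force.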

Keywords: Blech structure, electromigration, temperature gradient, thin films

Procedia PDF Downloads 253
3312 Autonomous Strategic Aircraft Deconfliction in a Multi-Vehicle Low Altitude Urban Environment

Authors: Loyd R. Hook, Maryam Moharek

Abstract:

With the envisioned future growth of low-altitude urban aircraft operations for airborne delivery services and advanced air mobility, strategies to coordinate and deconflict aircraft flight paths must be prioritized. Autonomous coordination and planning of flight trajectories is the preferred approach to this future vision in order to increase safety, density, and efficiency over the manual methods employed today. Difficulties arise because any conflict resolution must be constrained by all other aircraft, all airspace restrictions, and all ground-based obstacles in the vicinity. These considerations make pair-wise tactical deconfliction difficult at best and unlikely to find a suitable solution for the entire system of vehicles. In addition, more traditional methods that rely on long time scales and large protected zones will artificially limit vehicle density and drastically decrease efficiency. Instead, strategic planning, which is able to respond to highly dynamic conditions and still account for high-density operations, will be required to coordinate multiple vehicles in the highly constrained low-altitude urban environment. This paper develops and evaluates such a planning algorithm, which can be implemented autonomously across multiple aircraft and situations. Data from this evaluation provide promising results, with simulations showing up to 10 aircraft deconflicted through a relatively narrow low-altitude urban canyon without any vehicle-to-vehicle or obstacle conflict. The algorithm achieves this level of coordination beginning with the assumption that each vehicle is controlled to follow an independently constructed flight path, which is itself free of obstacle conflict and restricted airspace. Then, by preferring speed-change deconfliction maneuvers constrained by the vehicle's flight envelope, vehicles can remain as close as possible to the originally planned path and prevent cascading vehicle-to-vehicle conflicts. Performing the search for a set of commands which can simultaneously ensure separation for each pair-wise aircraft interaction and optimize the total velocities of all the aircraft is further complicated by the fact that each aircraft's flight plan can contain multiple segments. This means that relative velocities will change whenever any aircraft reaches a waypoint and changes course. Additionally, the timing of when an aircraft will reach a waypoint (or, more directly, the order in which all of the aircraft will reach their respective waypoints) will change with the commanded speed. Put together, the continuous relative velocity of each vehicle pair and the discretized change in relative velocity at waypoints resemble a hybrid reachability problem - a form of control reachability. This paper proposes two methods for finding solutions to these multi-body problems. First, an analytical formulation of the continuous problem is developed with an exhaustive search of the combined state space. However, because of computational complexity, this technique is only computable for pairwise interactions. For more complicated scenarios, including the proposed 10-vehicle example, a discretized search space is used, and a depth-first search with early stopping is employed to find the first solution that satisfies the constraints.
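
To illustrate the discretized search described above, here is a minimal sketch (not the paper's implementation) of a depth-first search with early pruning over per-aircraft speed commands, checking pairwise separation along straight-line path segments; the geometry, candidate speeds, and separation distance are all hypothetical, and each path is reduced to a single segment for brevity.

```python
import itertools
import numpy as np

SEP = 50.0          # hypothetical required separation [m]
HORIZON = 120.0     # look-ahead time [s]
DT = 1.0            # time step for the separation check [s]

def positions(p0, heading, speed, times):
    """Straight-line positions for one aircraft (single segment, for brevity)."""
    return p0[None, :] + np.outer(times, speed * heading)

def conflict_free(starts, headings, speeds):
    """Check pairwise separation of all assigned aircraft over the horizon."""
    t = np.arange(0.0, HORIZON, DT)
    tracks = [positions(p, h, v, t) for p, h, v in zip(starts, headings, speeds)]
    for a, b in itertools.combinations(range(len(tracks)), 2):
        if np.min(np.linalg.norm(tracks[a] - tracks[b], axis=1)) < SEP:
            return False
    return True

def dfs_speeds(starts, headings, candidates, chosen=()):
    """Depth-first assignment of speed commands; returns the first feasible set."""
    i = len(chosen)
    if i == len(starts):
        return chosen
    for v in candidates[i]:                       # try the fastest options first
        trial = chosen + (v,)
        # prune early: only aircraft assigned so far need to be conflict-free
        if conflict_free(starts[: i + 1], headings[: i + 1], trial):
            result = dfs_speeds(starts, headings, candidates, trial)
            if result is not None:
                return result
    return None

# Hypothetical three-aircraft example converging on a corridor
starts = np.array([[0.0, 0.0], [0.0, 200.0], [0.0, 400.0]])
headings = np.array([[1.0, 0.0], [1.0, -0.2], [1.0, -0.4]])
headings /= np.linalg.norm(headings, axis=1, keepdims=True)
candidates = [sorted([10.0, 15.0, 20.0], reverse=True)] * 3   # m/s options
print(dfs_speeds(starts, headings, candidates))
```

In the full problem, each aircraft's path has multiple segments, so the segment geometry and waypoint arrival order are re-evaluated as speeds change, which is what makes the search a hybrid reachability problem rather than a simple enumeration.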

Keywords: strategic planning, autonomous, aircraft, deconfliction

Procedia PDF Downloads 94
3311 An Unified Model for Longshore Sediment Transport Rate Estimation

Authors: Aleksandra Dudkowska, Gabriela Gic-Grusza

Abstract:

Wind wave-induced sediment transport is an important multidimensional and multiscale dynamic process affecting coastal seabed changes and coastline evolution. Knowledge of the sediment transport rate is important for solving many environmental and geotechnical issues. There are many types of sediment transport models, but none of them is widely accepted, because the process is not fully defined. Another problem is the lack of sufficient measurement data to verify proposed hypotheses. There are different types of models for longshore sediment transport (LST, which is discussed in this work) and cross-shore transport, which are related to the different time and space scales of the processes. There are models describing bed-load transport (discussed in this work), suspended transport and total sediment transport. LST models use, among others, information about (i) the flow velocity near the bottom, which in the case of wave-current interaction in the coastal zone is a separate problem, and (ii) the critical bed shear stress, which strongly depends on the type of sediment and becomes complicated in the case of heterogeneous sediment. Moreover, the LST rate is strongly dependent on the local environmental conditions. To organize the existing knowledge, a series of sediment transport model intercomparisons was carried out as a part of the project “Development of a predictive model of morphodynamic changes in the coastal zone”. Four classical one-grid-point models were studied and intercompared over a wide range of bottom shear stress conditions, corresponding to wind-wave conditions appropriate for the coastal zone in Polish marine areas. The set of models comprises classical theories that assume a simplified influence of turbulence on the sediment transport (Du Boys, Meyer-Peter & Müller, Ribberink, Engelund & Hansen). It turned out that the estimated values of instantaneous longshore mass sediment transport are in general in agreement with earlier studies and measurements conducted in the area of interest. However, none of the formulas really stands out from the rest as being particularly suitable for the test location over the whole analyzed flow velocity range. Therefore, based on the models discussed, a new unified formula for longshore sediment transport rate estimation is introduced, which constitutes the main original result of this study. The sediment transport rate is calculated based on the bed shear stress and the critical bed shear stress. The dependence on environmental conditions is expressed by one coefficient (in the form of a constant or a function); thus, the model presented can be quite easily adjusted to local conditions. A discussion of the importance of each model parameter for specific velocity ranges is carried out. Moreover, it is shown that the value of the near-bottom flow velocity is the main determinant of longshore bed-load transport in storm conditions. Thus, the accuracy of the results depends less on the sediment transport model itself and more on appropriate modeling of the near-bottom velocities.
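
As an example of the classical one-grid-point formulas intercompared here, the Meyer-Peter & Müller bed-load relation can be written in non-dimensional (Shields) form; the last expression sketches only the generic shape of a unified shear-stress-based formula with a single environment-dependent coefficient, and is illustrative rather than the exact formula proposed in the paper:

```latex
% Shields parameter and critical value:
\theta = \frac{\tau_b}{(\rho_s - \rho)\,g\,d}, \qquad \theta_c \approx 0.047 .

% Meyer-Peter & Müller (1948) non-dimensional bed-load rate:
\Phi = 8\,(\theta - \theta_c)^{3/2}, \qquad
q_b = \Phi \sqrt{(s - 1)\,g\,d^{3}}, \quad s = \rho_s/\rho .

% Generic unified form with a single site-dependent coefficient \alpha
% (a constant or a function of local conditions):
q_b = \alpha \,(\theta - \theta_c)^{\beta}\sqrt{(s - 1)\,g\,d^{3}} .
```

Here τ_b is the bed shear stress, d the grain diameter, and ρ_s, ρ the sediment and water densities; the other classical formulas differ mainly in the functional dependence on (θ − θ_c).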

Keywords: bedload transport, longshore sediment transport, sediment transport models, coastal zone

Procedia PDF Downloads 385
3310 Flow Field Analysis of a Liquid Ejector Pump Using Embedded Large Eddy Simulation Methodology

Authors: Qasim Zaheer, Jehanzeb Masud

Abstract:

An understanding of the entrainment and mixing phenomena in the ejector pump is of pivotal importance for its design and performance estimation. In this paper, the turbulent vortical structures arising from the Kelvin-Helmholtz instability at the free surface between the motive and entrained fluid streams are simulated using an Embedded LES methodology. The efficacy of Embedded LES for simulating the complex flow field of an ejector pump is evaluated using ANSYS Fluent®. The enhanced mixing and entrainment process due to the breakdown of larger eddies into smaller ones, as a consequence of the vortex stretching phenomenon, is captured in this study. Moreover, flow field characteristics of the ejector pump, such as the pressure and velocity fields and mass flow rates, are analyzed and validated against experimental results.

Keywords: Kelvin Helmholtz instability, embedded LES, complex flow field, ejector pump

Procedia PDF Downloads 296
3309 Modelling High-Frequency Crude Oil Dynamics Using Affine and Non-Affine Jump-Diffusion Models

Authors: Katja Ignatieva, Patrick Wong

Abstract:

We investigate the dynamics of high-frequency energy prices, including crude oil and electricity prices. The returns of the underlying quantities are modelled using various parametric models, such as a stochastic volatility framework with jumps (SVCJ), as well as non-parametric alternatives, which are purely data-driven and do not require specification of the drift or the diffusion coefficient function. Using different statistical criteria, we investigate the performance of the considered parametric and non-parametric models in their ability to forecast price series and volatilities. Our models incorporate possible seasonalities in the underlying dynamics and utilise advanced estimation techniques for the dynamics of energy prices.
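
For reference, a standard affine stochastic-volatility-with-jumps specification of the SVCJ type mentioned above takes the following form (a generic textbook formulation, not the exact parameterization used in the study):

```latex
% Price S_t and variance V_t under SVCJ (jumps in returns and variance
% arriving with a common Poisson process N_t of intensity \lambda):
\frac{dS_t}{S_{t^-}} = \mu \, dt + \sqrt{V_{t^-}}\, dW_t^{S} + J^{S}\, dN_t ,
\qquad
dV_t = \kappa(\theta - V_{t^-})\, dt + \sigma_v \sqrt{V_{t^-}}\, dW_t^{V} + J^{V}\, dN_t ,
```

with corr(dW_t^S, dW_t^V) = ρ, exponentially distributed variance jumps J^V, and return jumps J^S that are normally distributed conditional on J^V; parameters of such models are typically estimated with Markov chain Monte Carlo methods.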

Keywords: stochastic volatility, affine jump-diffusion models, high frequency data, model specification, Markov chain Monte Carlo

Procedia PDF Downloads 103
3308 Direct Transient Stability Assessment of Stressed Power Systems

Authors: E. Popov, N. Yorino, Y. Zoka, Y. Sasaki, H. Sugihara

Abstract:

This paper discusses the performance of the critical trajectory method (CTrj) for power system transient stability analysis under various loading settings and heavy fault conditions. The method obtains the Controlling Unstable Equilibrium Point (CUEP), which is essential for the estimation of power system stability margins. The CUEP is computed by applying the CTrj to the boundary controlling unstable equilibrium point (BCU) method. The proposed method computes a trajectory on the stability boundary that starts from the exit point and reaches the CUEP under certain assumptions. The robustness and effectiveness of the method are demonstrated on six power system models and five loading conditions. A conventional simulation method is used as the benchmark, and the performance is also compared with the BCU Shadowing method.

Keywords: power system, transient stability, critical trajectory method, energy function method

Procedia PDF Downloads 384
3307 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions

Authors: Pirta Palola, Richard Bailey, Lisa Wedding

Abstract:

Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive image of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses this need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes. For example, ecological resilience value may, in some cases, be best understood as a binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights to the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy.
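
As a purely illustrative sketch of the two ideas mentioned above - a logistic value function for a binary-like value such as ecological resilience, and Monte Carlo sampling over uncertain value-function parameters - consider the following; all numbers, parameter ranges, and the coral-cover example are hypothetical:

```python
import numpy as np

def logistic_value(q, midpoint, steepness):
    """Logistic value function: value of an ecosystem component as a function
    of its quantity/quality q, approaching a 0/1 'exists or is lost' shape
    as steepness grows."""
    return 1.0 / (1.0 + np.exp(-steepness * (q - midpoint)))

# Hypothetical example: value of ecological resilience vs. live coral cover,
# with uncertain midpoint and steepness explored by Monte Carlo sampling.
rng = np.random.default_rng(42)
coral_cover = np.linspace(0.0, 1.0, 101)         # fraction of live cover
n_draws = 10_000
midpoints = rng.normal(0.30, 0.05, n_draws)       # uncertain tipping point
steepness = rng.uniform(15.0, 40.0, n_draws)

values = np.array([logistic_value(coral_cover, m, s)
                   for m, s in zip(midpoints, steepness)])

mean_value = values.mean(axis=0)                  # expected value function
lo, hi = np.percentile(values, [5, 95], axis=0)   # 90% uncertainty band

# Change in value relative to the current level (a marginalist reading),
# e.g. moving from 35% to 25% live cover:
delta = (np.interp(0.25, coral_cover, mean_value)
         - np.interp(0.35, coral_cover, mean_value))
print(f"expected change in resilience value: {delta:+.3f}")
```

Expressing values as change relative to the current level, as in the last step, is the marginalist reading described in the abstract, while the steep logistic shape itself illustrates the non-marginalist, threshold-like case.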

Keywords: economics of biodiversity, environmental valuation, natural capital, value function

Procedia PDF Downloads 192