Search results for: point estimate method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 23852

23702 Estimation of Leachate Generation from Municipal Solid Waste Landfills in Selangor

Authors: Tengku Nilam Baizura, Noor Zalina Mahmood

Abstract:

In Malaysia, landfilling is the most widely used disposal method, and most landfills lack a proper leachate treatment system, which can cause environmental problems. Leachate is a major contributor to river water pollution, since most landfills are located near rivers, the country's main water resource. The study aimed to estimate leachate production from landfills in Selangor. A simple mathematical model was used to calculate the annual leachate volume: the identified landfill area (A), estimated using Google Earth, was multiplied by the annual rainfall (R), and the product expressed as a volume (V). The data indicate that leachate production is high even when a landfill is fully closed. It is important to design efficient landfills and proper leachate treatment processes, especially for old or closed landfills. Extensive monitoring will be required to predict future impact.
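The volume calculation described above is essentially a one-line water balance, V = A × R. A minimal sketch, with illustrative area and rainfall values (not the study's data):

```python
def annual_leachate_volume(area_m2: float, rainfall_mm: float) -> float:
    """Annual leachate volume (m^3) = landfill area (m^2) * annual rainfall (m)."""
    rainfall_m = rainfall_mm / 1000.0  # convert mm/year to m/year
    return area_m2 * rainfall_m

# Example: a hypothetical 20-hectare landfill under 2400 mm/year of rainfall
volume = annual_leachate_volume(area_m2=20 * 10_000, rainfall_mm=2400)
```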

Keywords: landfill, leachate, municipal solid waste management, waste disposal

Procedia PDF Downloads 370
23701 Full-Spectrum Photo-thermal Conversion of Point-mode Cu₂O/TiN Plasmonic Nanofluids

Authors: Xiaoxiao Yu, Guodu He, Zihua Wu, Yuanyuan Wang, Huaqing Xie

Abstract:

A core-shell composite structure is a common way to regulate the spectral absorption of nanofluids, but it involves complex preparation processes that limit its application in fields such as photothermal utilization and catalysis. This work proposes point-mode Cu₂O/TiN plasmonic nanofluids to regulate the spectral capturing ability while simplifying the preparation process. Non-noble TiN nanoparticles exhibiting the localized surface plasmon resonance effect are dispersed among Cu₂O nanoparticles to form a multi-point resonance source that enhances spectral absorption. The experimental results indicate that the multiple resonance effect of TiN effectively improves optical absorption and expands the absorption region. Optical absorption of the point-mode Cu₂O/TiN plasmonic nanoparticles is best when the radius of the Cu₂O nanoparticles equals 150 nm. Moreover, the photothermal conversion efficiency of the Cu₂O/TiN plasmonic nanofluid can reach 97.5% at a volume fraction of 0.015% and an optical depth of 10 mm. The point-mode nanostructure effectively enhances the optical absorption properties and greatly simplifies the preparation of the composite nanoparticles, which can promote the application of multi-component photonic nanoparticles in the field of solar energy.

Keywords: solar energy, nanofluid, point-mode structure, Cu₂O/TiN, localized surface plasmon resonance effect

Procedia PDF Downloads 61
23700 Estimating of Groundwater Recharge Value for Al-Najaf City, Iraq

Authors: Hayder H. Kareem

Abstract:

Groundwater recharge is a crucial parameter for any groundwater management system. The variability of recharge rates and the difficulty of observing this factor directly make the recharge value complex to estimate. Various methods exist to estimate groundwater recharge, each with its own limitations. This paper focuses on a real study area, Al-Najaf City, Iraq. The city has a few groundwater aquifers; the one considered in this study is the closest to the ground surface, the Dibdibba aquifer. According to the Aridity Index estimated in the paper, Al-Najaf City lies in an arid climate, which indicates that the most appropriate method for estimating the groundwater recharge is Thornthwaite's method. From the calculations, the estimated average groundwater recharge over the period 1980-2014 for Al-Najaf City is 40.32 mm/year. Because groundwater recharge directly affects the groundwater table level (groundwater head), the MODFLOW program was used to check this value by comparing calculated and observed heads: a groundwater model of the Al-Najaf City study area was built in MODFLOW to simulate the area for several purposes, one of which is to simulate the groundwater recharge. The MODFLOW results show that this recharge value is far too high and needs to be reduced.
Therefore, a further sensitivity test was carried out for the Al-Najaf City study area with MODFLOW by varying the recharge value. The best estimate of the groundwater recharge for this city is 16.5 mm/year, which gives the best fit between the calculated and observed heads, with minimum RMSE (13.175%) and RSS (1454 m²).
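Thornthwaite's method referred to above can be sketched as follows. This is a generic implementation of the classic formula (unadjusted for day length), not the authors' code, and the crude "precipitation surplus over PET" recharge step and the temperature series are illustrative assumptions:

```python
def thornthwaite_pet(monthly_temp_c):
    """Unadjusted monthly potential evapotranspiration (mm) from
    Thornthwaite's formula, given 12 mean monthly temperatures (deg C)."""
    heat_index = sum((t / 5.0) ** 1.514 for t in monthly_temp_c if t > 0)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * t / heat_index) ** a if t > 0 else 0.0
            for t in monthly_temp_c]

def annual_recharge_mm(monthly_precip_mm, monthly_temp_c):
    """Crude recharge estimate: sum of monthly precipitation surplus over PET."""
    pet = thornthwaite_pet(monthly_temp_c)
    return sum(max(p - e, 0.0) for p, e in zip(monthly_precip_mm, pet))
```

In an arid climate, PET exceeds precipitation in most months, so the annual surplus (and hence the recharge estimate) stays small, consistent with the tens of mm/year reported above.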

Keywords: Al-Najaf City, groundwater modelling, recharge estimation, visual MODFLOW

Procedia PDF Downloads 135
23699 System Identification in Presence of Outliers

Authors: Chao Yu, Qing-Guo Wang, Dan Zhang

Abstract:

The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented that solves the resulting problem while preserving the solution's matrix structure, greatly reducing the computational cost compared with the standard interior-point method. The computational burden is further reduced by a proper construction of subsets of the raw data that does not violate the low-rank property of the involved matrix. The proposed method detects outliers exactly when there is no or little noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise while retaining the saliency of outliers; the filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Using the recovered “clean” data from the proposed method gives much better parameter estimation than using the raw data.

Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising

Procedia PDF Downloads 307
23698 Comparative Study of IC and Perturb and Observe Method of MPPT Algorithm for Grid Connected PV Module

Authors: Arvind Kumar, Manoj Kumar, Dattatraya H. Nagaraj, Amanpreet Singh, Jayanthi Prattapati

Abstract:

The purpose of this paper is to study and compare two maximum power point tracking (MPPT) algorithms in a photovoltaic simulation system: the perturb and observe algorithm and the incremental conductance algorithm. MPPT plays an important role in photovoltaic systems because it maximizes the power output from a PV system for a given set of conditions, thereby maximizing array efficiency and minimizing overall system cost. Since the maximum power point (MPP) varies with irradiation and cell temperature, appropriate algorithms must be used to track the MPP and keep the system operating at it. MATLAB/Simulink is used to build a model of a photovoltaic system with MPPT by combining models of a solar PV module and a DC-DC boost converter. The system is simulated under different climate conditions. Simulation results show that the photovoltaic simulation system can track the maximum power point accurately.
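The perturb and observe logic compared in the paper can be sketched in a few lines. This is a generic illustration with a toy power-voltage curve peaking at 17 V, not the authors' Simulink model:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O iteration: return the next reference voltage.
    If the last perturbation raised the power, keep perturbing in the
    same direction; otherwise reverse direction."""
    dP, dV = p - p_prev, v - v_prev
    if dP == 0:
        return v
    return v + step if (dP > 0) == (dV > 0) else v - step

# Toy PV power curve with its maximum at 17 V (illustrative only)
def pv_power(v):
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

v_prev, p_prev = 10.0, pv_power(10.0)
v = 10.5
for _ in range(100):
    p = pv_power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
# v now oscillates within one step of the maximum power point
```

The steady-state oscillation around the MPP is the classic drawback of P&O that incremental conductance tries to reduce.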

Keywords: incremental conductance algorithm, perturb and observe algorithm, photovoltaic system, simulation results

Procedia PDF Downloads 556
23697 A Comparison between Russian and Western Approach for Deep Foundation Design

Authors: Saeed Delara, Kendra MacKay

Abstract:

Varying methodologies are considered for pile design in the Russian and Western approaches. Although both rely on toe and side frictional resistance, different calculation methods are proposed to estimate pile capacity. The Western approach relies on soil compactness (internal friction angle) for cohesionless soils and undrained shear strength for cohesive soils, whereas the Russian approach relies on grain size for cohesionless soils and liquidity index for cohesive soils. While most methods recommended in the Western approach are relatively simple ways to predict pile settlement, the Russian approach provides a detailed method to estimate single-pile and pile-group settlement. Details of calculating pile axial capacity and settlement using the Russian and Western approaches are discussed and compared against field test results.

Keywords: pile capacity, pile settlement, Russian approach, western approach

Procedia PDF Downloads 166
23696 Burden of Dengue in Northern India

Authors: Ashutosh Biswas, Poonam Coushic, Kalpana Baruah, Paras Singla, A. C. Dhariwal, Pawana Murthy

Abstract:

Aim: This study was conducted to estimate the burden of dengue in the capital region of India. Methodology: Seropositivity of dengue for IgM Ab, NS1 Ag and IgG Ab was determined among samples from blood-bank donors who came to donate blood for patients admitted to the hospital. Blood samples were collected throughout the year to estimate the seroprevalence of dengue within and outside the outbreak season. All subjects were asymptomatic at the time of blood donation. Results: A total of 1558 donors were screened for the study. On the basis of the inclusion/exclusion criteria, 1531 subjects were enrolled. Twenty-seven donors were excluded: 6 were HIV-positive, 11 were positive for HBsAg and 10 were positive for HCV. Mean age was 30.51 ± 7.75 years. Of the 1531 subjects, 18 (1.18%) had a past history of typhoid fever, 28 (1.83%) of chikungunya fever, 9 (0.59%) of malaria and 43 (2.81%) of symptomatic dengue infection. About 2.22% (34) of subjects were seropositive for NS1 Ag, with a peak point prevalence of 7.14% in October, and about 5.49% (84) were seropositive for IgM Ab, with a peak point prevalence of 14.29% in October. Seroprevalence of IgG was detected in about 64.21% (983) of subjects. Conclusion: Acute asymptomatic dengue (NS1 Ag positive) was observed in up to 7.14% of donors, none of whom had symptoms at the time of sampling. This group poses a potential public health threat of transmitting dengue infection through blood transfusion (TTI) in the community, as the presence of NS1 Ag indicates active viral infection.
A policy of testing blood-bank samples for NS1 Ag may therefore be implemented to detect active dengue infection and prevent transfusion-transmitted dengue. Acute or subacute dengue infection (IgM Ab positive) was observed in 5.49% of donors, with a peak point prevalence of 14.29% in October. About 64.21% of the population had been immunized by natural dengue infection (IgG Ab positive) in this northern province of India, which may be helpful when planning dengue vaccine implementation in the region. Blood samples in blood banks should be tested for dengue before transfusion, as we estimated up to 7.14% NS1 Ag positivity among donor samples, indicating the presence of dengue virus in donated blood.

Keywords: dengue burden, seroprevalence, asymptomatic dengue, dengue transmission through blood transfusion

Procedia PDF Downloads 149
23695 Statistical Data Analysis of Migration Impact on the Spread of HIV Epidemic Model Using Markov Monte Carlo Method

Authors: Ofosuhene O. Apenteng, Noor Azina Ismail

Abstract:

Over the last several years, concern has grown over how to minimize the spread of the HIV/AIDS epidemic in many countries. The AIDS epidemic has greatly stimulated the development of mathematical models of infectious diseases, and the transmission dynamics of HIV infection that eventually develops into AIDS have played a pivotal role in model building. Since the initial HIV and AIDS models introduced in the 1980s, various improvements have been made to the way HIV/AIDS frameworks are modelled. In this paper, we present the impact of migration on the spread of HIV/AIDS. The epidemic model is a system of nonlinear differential equations that supplements the statistical approach. The model is calibrated using HIV incidence data from Malaysia between 1986 and 2011. Bayesian inference based on Markov chain Monte Carlo is used to validate the model by fitting it to the data and to estimate the unknown model parameters. The results suggest that migrants who stay for a long time contribute to the spread of HIV. The model also indicates that susceptible individuals become infected and move to the HIV compartment at a rate greater than the removal rate from the HIV compartment to the AIDS compartment. The disease-free steady state is unstable since the basic reproduction number is 1.627309. This is a big concern and not a good indicator from the public health point of view, since the aim is to stabilize the epidemic at the disease-free equilibrium.
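The Bayesian calibration step relies on MCMC sampling. A minimal random-walk Metropolis sketch on a toy Poisson incidence likelihood (the case counts and flat prior are illustrative assumptions, not the Malaysian data or the paper's model) shows the idea:

```python
import math
import random

def metropolis(log_post, theta0, n_iter=5000, step=0.1, seed=1):
    """Random-walk Metropolis sampler for a one-dimensional posterior."""
    rng = random.Random(seed)
    theta, lp = theta0, log_post(theta0)
    samples = []
    for _ in range(n_iter):
        proposal = theta + rng.gauss(0.0, step)
        lp_prop = log_post(proposal)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject step
            theta, lp = proposal, lp_prop
        samples.append(theta)
    return samples

# Toy data: weekly case counts assumed Poisson(beta); flat prior on beta > 0.
counts = [4, 7, 5, 6, 8, 5]

def log_post(beta):
    if beta <= 0:
        return -math.inf
    return sum(c * math.log(beta) - beta for c in counts)  # Poisson log-likelihood

samples = metropolis(log_post, theta0=1.0)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # drop burn-in
```

The same accept/reject mechanism, applied to the ODE model's likelihood, is what produces the parameter estimates reported in the abstract.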

Keywords: epidemic model, HIV, MCMC, parameter estimation

Procedia PDF Downloads 600
23694 Polysaccharides as Pour Point Depressants

Authors: Ali M. EL-Soll

Abstract:

The physical properties of Sarir waxy crude oil were investigated. The pour point was determined using the ASTM D-97 procedure, and the paraffin content and carbon number distribution of the paraffins were determined using gas-liquid chromatography (GLC). Polymeric additives were prepared, and their structures were confirmed by IR spectrophotometry. The molecular weight and molecular weight distribution of these additives were determined by gel permeation chromatography (GPC). The performance of the synthesized additives as pour-point depressants was evaluated for the mentioned crude oil.

Keywords: sarir, waxy, crude, pour point, depressants

Procedia PDF Downloads 452
23693 Topochemical Synthesis of Epitaxial Silicon Carbide on Silicon

Authors: Andrey V. Osipov, Sergey A. Kukushkin, Andrey V. Luk’yanov

Abstract:

A method is developed for the solid-phase synthesis of epitaxial layers in which the substrate itself takes part in a topochemical reaction and the reaction product grows in the interior of the substrate layer. It opens up new possibilities for relaxing elastic energy through the attraction of point defects formed during the topochemical reaction in anisotropic media. The presented method of silicon carbide (SiC) formation employs a topochemical reaction between a single-crystalline silicon (Si) substrate and gaseous carbon monoxide (CO). A corresponding theory of the interaction of point dilatation centers in anisotropic crystals is developed. It is established that the most advantageous location of the point defects is along the (111) direction in crystals with cubic symmetry. Single-crystal SiC films with thicknesses up to 200 nm have been grown on Si (111) substrates through the topochemical reaction with CO. The grown high-quality single-crystal SiC films contain no misfit dislocations despite the huge lattice mismatch of ~20%. The possibility of growing thick wide-gap semiconductor films on these SiC/Si(111) templates, and accordingly of integrating them into Si electronics, is also demonstrated. Finally, an ab initio theory of SiC formation via the topochemical reaction has been developed.

Keywords: epitaxy, silicon carbide, topochemical reaction, wide-bandgap semiconductors

Procedia PDF Downloads 458
23692 Mixtures of Length-Biased Weibull Distributions for Loss Severity Modelling

Authors: Taehan Bae

Abstract:

In this paper, a class of length-biased Weibull mixtures is presented to model loss severity data. The proposed model generalizes the Erlang mixtures with the common scale parameter, and it shares many important modelling features, such as flexibility to fit various data distribution shapes and weak-denseness in the class of positive continuous distributions, with the Erlang mixtures. We show that the asymptotic tail estimate of the length-biased Weibull mixture is Weibull-type, which makes the model effective to fit loss severity data with heavy-tailed observations. A method of statistical estimation is discussed with applications on real catastrophic loss data sets.

Keywords: Erlang mixture, length-biased distribution, transformed gamma distribution, asymptotic tail estimate, expectation-maximization (EM) algorithm

Procedia PDF Downloads 224
23691 Localization of Mobile Robots with Omnidirectional Cameras

Authors: Tatsuya Kato, Masanobu Nagata, Hidetoshi Nakashima, Kazunori Matsuo

Abstract:

Localization is an important task for developing autonomous mobile robots. This paper proposes a method to estimate the position of a mobile robot using an omnidirectional camera mounted on the robot. Landmarks serving as reference points are set up in the field where the robot works. The omnidirectional camera, which captures 360° images of the surroundings, photographs these landmarks. The robot's position is estimated from the directions of the landmarks, which are extracted from the images by image processing. This method obtains the robot's position without accumulating position errors. The accuracy of the positions estimated by the proposed method is evaluated through experiments; the results show that the positions are obtained with small standard deviations. The method therefore offers the possibility of more accurate localization through tuning of appropriate offset parameters.
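Estimating position from landmark directions can be sketched as a least-squares intersection of bearing lines: each absolute bearing constrains the robot to a line through the corresponding landmark. This generic triangulation is an assumption about the geometry, not the authors' implementation:

```python
import math

def locate_from_bearings(landmarks, bearings):
    """Least-squares robot position (x, y) from absolute bearings (radians)
    to landmarks at known positions. Each bearing t to landmark (xi, yi)
    gives the line constraint sin(t)*x - cos(t)*y = sin(t)*xi - cos(t)*yi."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), t in zip(landmarks, bearings):
        s, c = math.sin(t), math.cos(t)
        r = s * xi - c * yi
        # accumulate the 2x2 normal equations A^T A and A^T b
        a11 += s * s; a12 += -s * c; a22 += c * c
        b1 += s * r;  b2 += -c * r
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With two or more non-collinear landmarks, the 2×2 system has a unique solution, and adding landmarks averages out bearing noise.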

Keywords: mobile robots, localization, omnidirectional camera, estimating positions

Procedia PDF Downloads 442
23690 Assessment of an ICA-Based Method for Detecting the Effect of Attention in the Auditory Late Response

Authors: Siavash Mirahmadizoghi, Steven Bell, David Simpson

Abstract:

In this work, a new independent component analysis (ICA) based method for noise reduction in evoked potentials is evaluated on auditory late responses (ALR) captured with a 63-channel electroencephalogram (EEG) from 10 normal-hearing subjects. The performance of the new method is compared with a single-channel alternative in terms of signal-to-noise ratio (SNR), the number of channels with an SNR above an empirically derived statistical critical value, and an estimate of the effect of attention on the major components of the ALR waveform. The results show that the multichannel signal processing method can significantly enhance the quality of the ALR signal and detect the effect of attention on the ALR better than the single-channel alternative.

Keywords: auditory late response (ALR), attention, EEG, independent component analysis (ICA), multichannel signal processing

Procedia PDF Downloads 505
23689 Seismic Fragility Functions of RC Moment Frames Using Incremental Dynamic Analyses

Authors: Seung-Won Lee, JongSoo Lee, Won-Jik Yang, Hyung-Joon Kim

Abstract:

The capacity spectrum method (CSM), one of the methodologies used to evaluate the seismic fragility of building structures, has long been recognized as the most convenient method, even though it has several limitations in predicting the seismic response of structures of interest. This paper proposes a procedure to estimate seismic fragility curves using incremental dynamic analysis (IDA) rather than a CSM. To this end, the study compares the seismic fragility curves of a 5-story reinforced concrete (RC) moment frame obtained from both methods. The two sets of fragility curves are similar in the slight and moderate damage states, whereas the curves obtained from the IDA method show less variation (or uncertainty) in the extensive and complete damage states. This is because the IDA method captures the structural response beyond yielding better than the CSM and can directly account for higher-mode effects. From these observations, the CSM may overestimate the seismic vulnerability of the studied structure in the extensive and complete damage states.

Keywords: seismic fragility curve, incremental dynamic analysis, capacity spectrum method, reinforced concrete moment frame

Procedia PDF Downloads 422
23688 Quartic Spline Method for Numerical Solution of Self-Adjoint Singularly Perturbed Boundary Value Problems

Authors: Reza Mohammadi

Abstract:

Using quartic splines, we develop a method for the numerical solution of singularly perturbed two-point boundary-value problems. The proposed method is fourth-order accurate and applicable to problems in both singular and non-singular cases. The convergence analysis of the method is given. The resulting linear system of equations is solved using a tri-diagonal solver. We applied the presented method to test problems that have been solved by other existing methods in the literature, in order to compare it with those methods. Numerical results are given to illustrate the efficiency of our method.

Keywords: second-order ordinary differential equation, singularly-perturbed, quartic spline, convergence analysis

Procedia PDF Downloads 360
23687 Estimating Lost Digital Video Frames Using Unidirectional and Bidirectional Estimation Based on Autoregressive Time Model

Authors: Navid Daryasafar, Nima Farshidfar

Abstract:

In this article, we attempt to conceal errors in video, with an emphasis on the temporal use of autoregressive (AR) models. We assume that all information in one or more video frames is lost; the lost frames are then estimated using the temporal information of corresponding pixels in successive frames. After presenting autoregressive models and how they are applied to estimate lost frames, two general ways of using these models are presented. The first, the standard use of autoregressive models, estimates the lost frame unidirectionally: information from previous frames is used to estimate the lost frame. In the second method, information from both the previous and the following frames is used to estimate the lost frame, so this method is known as bidirectional estimation. A series of tests then assesses the performance of each method under different conditions, and the results are compared.
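The two estimation modes can be sketched with a first-order AR model per pixel. This simplified AR(1) version (the paper may use higher orders) illustrates unidirectional versus bidirectional prediction; frames are flattened pixel lists here for brevity:

```python
def ar1_coeff(series):
    """Least-squares AR(1) coefficient for one pixel's intensity over time."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

def estimate_lost_frame(prev_frames, next_frames=None):
    """Estimate a lost frame pixel by pixel with an AR(1) model.
    Unidirectional: predict forward from past frames only.
    Bidirectional: also predict backward from future frames and average."""
    estimate = []
    for i in range(len(prev_frames[0])):
        past = [frame[i] for frame in prev_frames]
        pred = ar1_coeff(past) * past[-1]
        if next_frames:
            future = [frame[i] for frame in reversed(next_frames)]
            pred = 0.5 * (pred + ar1_coeff(future) * future[-1])
        estimate.append(pred)
    return estimate
```

Averaging the forward and backward predictions is one simple way to combine the two directions; weighted combinations are equally plausible.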

Keywords: error steganography, unidirectional estimation, bidirectional estimation, AR linear estimation

Procedia PDF Downloads 538
23686 Determination of Mercury in Gold Ores by CVAAS Method

Authors: Ratna Siti Khodijah, Mirzam Abdurrachman

Abstract:

Gold is recovered from gold ores. Within the ores there is not only gold but also several other precious metals: copper, silver and the platinum group elements (ruthenium, rhodium, palladium, rhenium, osmium and iridium) are commonly found. These metals combine to form an ore because they have similar properties, owing to their positions near gold in the periodic table. However, the presence of mercury in gold ores has not been reported, even though mercury sits right next to gold in the periodic table and both are located in the same block, the d-block. It is therefore possible that mercury is contained in the ores; moreover, the elements in the same group as mercury, zinc and cadmium, can sometimes be found in them. We suspect that mercury goes undetected because gold ores are usually processed by the fire assay method: before the ore melts, the mercury evaporates, as it has the lowest boiling point of all the precious metals in the ore. Therefore, research on the presence of mercury in gold ores by the CVAAS method is suggested. The results of this study would give the amount of mercury in the gold ores to be purified, so that it could be produced economically where possible.

Keywords: boiling point, d-block, fire assay, precious metal

Procedia PDF Downloads 341
23685 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method

Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent

Abstract:

A method of modelling topography used in the simulation of riverbeds is proposed in this paper, which removes the need for datapoints and measurements of physical terrain. While complex scans of the contours of a surface can be achieved with other methods, this requires specialised tools, which the proposed method overcomes by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces resources spent modelling bed topography. This method also accounts for the change in topography over time due to erosion, sediment transport, and other external factors which could affect the topography of the ground by updating its parameters and generating a new bed. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over the generated bed topography using the proposed method and a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, which will allow an automatic generation of topography for the given situation in future research, removing the need for bed data to be specified.
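The FBM generation step can be illustrated with the classic midpoint-displacement recursion for a 1-D profile. The recursion depth, Hurst exponent and amplitude scaling below are illustrative assumptions, not the paper's parameters:

```python
import random

def fbm_bed(n_levels=8, hurst=0.8, scale=1.0, seed=42):
    """1-D fractional-Brownian-motion-like bed profile via midpoint
    displacement; roughness is controlled by the Hurst exponent."""
    rng = random.Random(seed)
    pts = [0.0, 0.0]
    amp = scale
    for _ in range(n_levels):
        refined = []
        for a, b in zip(pts, pts[1:]):
            refined.append(a)
            refined.append(0.5 * (a + b) + rng.gauss(0.0, amp))
        refined.append(pts[-1])
        pts = refined
        amp *= 0.5 ** hurst  # displacement variance falls off with scale
    return pts
```

Re-running the generator with updated parameters, as the abstract describes, yields a new bed realization without any survey data; a 2-D bed would apply the same idea via the diamond-square algorithm.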

Keywords: bed topography, FBM, LBM, shallow water, simulations

Procedia PDF Downloads 98
23684 MHD Stagnation-Point Flow over a Plate

Authors: H. Niranjan, S. Sivasankaran

Abstract:

Heat and mass transfer in the steady stagnation-point boundary-layer flow of a viscous incompressible fluid through a porous medium along a vertical plate are thoroughly studied in the presence of magnetohydrodynamic (MHD) effects. The fluid flow is steady, laminar, incompressible and two-dimensional. The nonlinear coupled parabolic partial differential equations of continuity, momentum, energy and species diffusion are converted into non-similar boundary layer equations using a similarity transformation and then solved numerically using the Runge-Kutta method with a shooting technique. The effects of the conjugate heat transfer parameter, the porous medium parameter, the permeability parameter, the mixed convection parameter, the magnetic parameter and thermal radiation on the velocity and temperature profiles, as well as on the local skin friction and local heat transfer, are presented and analyzed. The validity of the methodology and analysis is checked by comparing the results obtained for some specific cases with those available in the literature. The effects of the various parameters on the local skin friction and the heat and mass transfer rates are presented in tabular form.

Keywords: MHD, porous medium, slip, convective boundary condition, stagnation point

Procedia PDF Downloads 302
23683 Discussion on Dispersion Curves of Non-penetrable Soils from in-Situ Seismic Dilatometer Measurements

Authors: Angelo Aloisio Dag, Pasquale Pasca, Massimo Fragiacomo, Ferdinando Totani, Gianfranco Totani

Abstract:

The estimation of shear wave velocity (Vs) is essential in seismic engineering to characterize the dynamic response of soils. Various direct methods exist to estimate Vs. The authors report the results of a site characterization in Macerata, where they measured Vs using a seismic dilatometer in a 100 m deep borehole. The standard Vs estimation originates from the cross-correlation between the signals acquired by two geophones at increasing depths. This paper focuses on estimating the dependence of Vs on the wavenumber. The dispersion curves reveal an unexpected hyperbolic shape typical of Lamb waves. Interestingly, the contribution of Lamb waves may be notable down to 100 m depth. Although the amplitude of surface waves decreases rapidly with depth, their influence may still be essential at depths considered unusual for standard geotechnical investigations, where their effect is generally neglected. Accordingly, these waves may bias the outcomes of standard Vs estimations, which ignore frequency-dependent phenomena. The paper proposes an enhancement of the accepted procedure to estimate Vs and addresses the importance of Lamb waves in soil characterization.

Keywords: dispersion curve, seismic dilatometer, shear wave, soil mechanics

Procedia PDF Downloads 172
23682 An Adaptive Back-Propagation Network and Kalman Filter Based Multi-Sensor Fusion Method for Train Location System

Authors: Yu-ding Du, Qi-lian Bao, Nassim Bessaad, Lin Liu

Abstract:

The Global Navigation Satellite System (GNSS) is regarded as an effective way to replace the large number of track-side balises used in modern train localization systems. This paper describes a method based on fusing data from a GNSS receiver and an odometer that can significantly improve positioning accuracy. A digital track map is needed as an additional sensor to project the two-dimensional GNSS position onto a one-dimensional along-track distance, since the train's position is constrained to the track. A model trained with a BP neural network is used to estimate the trend positioning error, which is related to the specific location and the approximate processing of the digital track map. Because satellite signal failure can increase the GNSS positioning error, a GNSS signal detection step is applied. An adaptive weighted fusion algorithm is presented to reduce the standard deviation of the train speed measurement. Finally, an extended Kalman filter (EKF) fuses the projected 1-D GNSS position data and the 1-D train speed data to obtain the position estimate. Experimental results suggest that the proposed method performs well and can reduce the positioning error notably.
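In its simplest scalar form, the Kalman fusion stage reduces to a predict step driven by the odometer and a correct step driven by GNSS fixes. The process and measurement noise variances below are illustrative assumptions, not the paper's tuning:

```python
def kalman_fuse(gnss_positions, speeds, dt=1.0, q=0.5, r=9.0):
    """Minimal 1-D Kalman filter: predict along-track position from odometer
    speed (process noise variance q), correct with GNSS position fixes
    (measurement noise variance r). Returns the filtered track."""
    x, p = gnss_positions[0], r
    track = []
    for z, v in zip(gnss_positions, speeds):
        x, p = x + v * dt, p + q              # predict with odometer
        k = p / (p + r)                       # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p   # correct with GNSS
        track.append(x)
    return track
```

The full method extends this scalar loop to an EKF over the map-projected state and adds the neural-network correction and adaptive weighting described above.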

Keywords: multi-sensor data fusion, train positioning, GNSS, odometer, digital track map, map matching, BP neural network, adaptive weighted fusion, Kalman filter

Procedia PDF Downloads 252
23681 Reliability and Validity for Measurement of Body Composition: A Field Method

Authors: Ahmad Hashim, Zarizi Ab Rahman

Abstract:

Field methods for measuring body composition rely on several popular instruments to estimate the percentage of body fat: the body mass index (BMI), bioelectrical impedance analysis (BIA) and the skinfold test. All three are inexpensive, require no advanced technical skills, are portable, save time, and are suitable for use in large populations. Since all three can estimate the percentage of body fat, it is important to identify the most appropriate instrument with the highest reliability. Hence, this study was conducted to determine the reliability and convergent validity of the instruments. A total of 40 students, male and female, aged between 13 and 14 years, participated in the study. The study found that the test-retest Pearson correlation coefficient of reliability for the three instruments is very high, r = .99. The inter-class reliability is also high, with r = .99 for BMI and BIA and r = .96 for the skinfold test. The intra-class reliability coefficients are likewise high: r = .99 for BMI, r = .97 for BIA and r = .90 for the skinfold test. However, the standard error of measurement indicates that BMI is the most appropriate instrument, with a mean value of .000672, compared with the other instruments. The findings show that BMI is the most accurate and reliable instrument for estimating body fat percentage in the population studied.
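For illustration, BMI itself is a one-line formula, and published regressions such as Deurenberg's adult equation convert it to a body fat percentage. This equation is a well-known example, not necessarily the one used in this study, and the subject values below are made up:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def body_fat_deurenberg(weight_kg, height_m, age, is_male):
    """Deurenberg adult regression: %BF = 1.20*BMI + 0.23*age - 10.8*sex - 5.4,
    with sex = 1 for males and 0 for females. One of several published equations;
    child-specific equations differ."""
    sex = 1 if is_male else 0
    return 1.20 * bmi(weight_kg, height_m) + 0.23 * age - 10.8 * sex - 5.4

# Hypothetical 14-year-old male subject, 50 kg and 1.60 m
fat_pct = body_fat_deurenberg(50.0, 1.6, age=14, is_male=True)
```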

Keywords: reliability, validity, body mass index, bio impedance analysis, skinfold test

Procedia PDF Downloads 333
23680 Analysis of Potential Flow around Two-Dimensional Body by Surface Panel Method and Vortex Lattice Method

Authors: M. Abir Hossain, M. Shahjada Tarafder

Abstract:

This paper deals with the analysis of potential flow past a two-dimensional body by discretizing the body into panels and applying the Laplace equation to each panel. The Laplace equation was solved at each panel by applying the boundary conditions, which formulate the problem mathematically and convert it into a computer-solvable form. The Kutta condition was applied at both the leading and trailing edges to check whether it is satisfied. Another approach applied in the analysis is the Vortex Lattice Method (VLM), in which a vortex ring is considered at each control point. Using the Biot-Savart law, the vortex strength at each control point is calculated and hence the pressure differentials are obtained. For comparison of the analytical results with experiment, different NACA-section hydrofoils were used. The analytical results for NACA 0012 and NACA 0015 were compared with the experimental results of Abbott and von Doenhoff and show good agreement.
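The Biot-Savart building block of a VLM can be sketched as the induced velocity of one straight vortex segment; a vortex ring is then four such segments. This is a generic textbook formula, not the paper's code, and the circulation value is illustrative.

```python
import numpy as np

def segment_induced_velocity(p, a, b, gamma=1.0, eps=1e-10):
    """Velocity induced at point p by a straight vortex segment a->b
    with circulation gamma (Biot-Savart law, as used in a VLM)."""
    r1, r2, r0 = p - a, p - b, b - a
    cross = np.cross(r1, r2)
    denom = np.dot(cross, cross)
    if denom < eps:                      # p lies on the segment axis: no induction
        return np.zeros(3)
    k = gamma / (4.0 * np.pi * denom)
    dot = np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
    return k * dot * cross

# sanity check: an (effectively) infinite vortex line along x induces
# speed gamma / (2*pi*h) at perpendicular distance h, here h = 1
v = segment_induced_velocity(np.array([0.0, 1.0, 0.0]),
                             np.array([-1e6, 0.0, 0.0]),
                             np.array([ 1e6, 0.0, 0.0]))
```

Summing this contribution over all ring segments at each control point yields the influence matrix whose solution gives the vortex strengths and, from them, the pressure differentials.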

Keywords: Kutta condition, Law of Biot-Savart, pressure differentials, potential flow, vortex lattice method

Procedia PDF Downloads 190
23679 A Grid Synchronization Phase Locked Loop Method for Grid-Connected Inverters Systems

Authors: Naima Ikken, Abdelhadi Bouknadel, Nour-eddine Tariba, Ahmed Haddou, Hafsa El Omari

Abstract:

The operation of grid-connected inverters requires accurate grid synchronization; a single-phase phase-locked loop (PLL) is therefore proposed in this article to estimate and detect the grid phase angle quickly and accurately. This article presents an improved phase-locked-loop method. The novelty is a grid-synchronization PLL that combines a notch filter with adaptive fuzzy logic for inverter systems connected to the grid. The performance of the proposed method was tested under normal and abnormal operating conditions (amplitude, frequency, and phase-shift variations). In addition, simulation results in the ISPM software are developed to verify the effectiveness of the proposed strategy. Finally, experimental tests are used to extract results and discuss the validity of the proposed algorithm.
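The core loop of a single-phase PLL can be sketched as follows. This is a plain multiplier-based PLL, not the paper's design: the proposed notch filter and fuzzy adaptation (which would suppress the double-frequency ripple on the error signal) are omitted, and the gains are illustrative.

```python
import math

def pll_track(samples, fs, f0=50.0, kp=50.0, ki=1000.0):
    """Minimal single-phase PLL sketch: multiplier phase detector,
    PI loop filter, and an integrator as the VCO. The product of the
    grid voltage and cos(estimated phase) gives ~0.5*sin(phase error)
    plus a double-frequency ripple that a notch filter would remove."""
    theta, integ = 0.0, 0.0
    w0, dt = 2 * math.pi * f0, 1.0 / fs
    for v in samples:
        err = v * math.cos(theta)          # phase detector output
        integ += ki * err * dt             # integral path
        theta += (w0 + kp * err + integ) * dt
        theta %= 2 * math.pi
    return theta

fs, f = 10000, 50.0
n = 10 * fs // int(f)                      # 10 grid cycles at 50 Hz
grid = [math.sin(2 * math.pi * f * k / fs) for k in range(n)]
theta_hat = pll_track(grid, fs)
```

For a clean 50 Hz input starting in phase with the PLL, the estimated angle stays locked to the grid angle, up to a small ripple that the full method filters out.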

Keywords: phase locked loop, PLL, notch filter, fuzzy logic control, grid connected inverters

Procedia PDF Downloads 148
23678 Deterministic Modelling to Estimate Economic Impact from Implementation and Management of Large Infrastructure

Authors: Dimitrios J. Dimitriou

Abstract:

It is widely recognised that infrastructure asset development helps enhance economic growth, productivity, and competitiveness. While numerous studies and reports confirm the positive effect of large infrastructure investments on the local economy, the methodology for estimating their contribution to economic development remains a challenging issue for researchers and economists. The key question is how to estimate those economic impacts in each economic system. This paper provides a compact and applicable methodological framework delivering quantitative results in terms of the overall jobs and income generated over the project life cycle. Following a deterministic mathematical approach, the key variables and the modelling framework are presented. The numerical case study highlights key results for a new motorway project in Greece, a country that has experienced economic stress for many years, providing the opportunity for comparisons with similar cases.
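A deterministic job/income model of this kind can be sketched as a multiplier calculation over the annual capital expenditure profile. All coefficients below (jobs per EUR million, indirect/induced multipliers, income share) are illustrative placeholders, not the paper's estimated values.

```python
def project_impact(capex_by_year, jobs_per_meur=12.0,
                   indirect_mult=1.6, induced_mult=1.2,
                   income_share=0.35):
    """Deterministic sketch of life-cycle employment and income impact:
    direct jobs scale with annual capex, then indirect and induced
    multipliers expand them; income is a fixed share of spending."""
    total_jobs, total_income = 0.0, 0.0
    for capex in capex_by_year:            # EUR million per year
        direct = capex * jobs_per_meur
        total_jobs += direct * indirect_mult * induced_mult
        total_income += capex * income_share * indirect_mult
    return total_jobs, total_income

# hypothetical 5-year construction profile (EUR million)
jobs, income = project_impact([100, 250, 250, 150, 50])
```

Summing over the life cycle rather than a single year is what lets the framework report overall jobs and income for the whole project.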

Keywords: quantitative modelling, economic impact, large transport infrastructure, economic assessment

Procedia PDF Downloads 203
23677 Reservoir Properties Effect on Estimating Initial Gas in Place Using Flowing Material Balance Method

Authors: Yousef S. Kh. S. Hashem

Abstract:

Accurate estimation of the initial gas in place (IGIP) plays an important role in the decision to develop a gas field. One of the methods available in the industry to estimate the IGIP is material balance. This method requires that the well be shut in while pressure is measured as it builds up to the average reservoir pressure. Since gas demand is high and shut-in well surveys are very expensive, flowing gas material balance (FGMB) is sometimes used instead. This work investigated the effect of reservoir properties (pressure, permeability, and reservoir size) on the estimation of IGIP when using FGMB. A gas reservoir simulator that accounts for friction loss, wellbore storage, and the non-Darcy effect was used to simulate 165 different cases (3 pressures, 5 reservoir sizes, and 11 permeabilities). Both tubing pressure and bottom-hole pressure were analyzed using FGMB. The results showed that the FGMB method is very sensitive for tight reservoirs (k < 10 md). They also showed which pressure data are best used for different reservoir properties. This study can be used as a guideline for the application of the FGMB method.
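The material-balance estimate underlying both the shut-in and the flowing variants can be sketched as a p/z straight line: fit p/z against cumulative production Gp and extrapolate to p/z = 0 to read off G, the IGIP. The flowing variant builds the same line from flowing pressures corrected to average reservoir pressure, a step not shown here; the data below are synthetic.

```python
def estimate_igip(gp, p_over_z):
    """Straight-line material-balance sketch: least-squares fit of
    p/z = b - m*Gp, then extrapolation to p/z = 0 gives G (the IGIP)."""
    n = len(gp)
    mx, my = sum(gp) / n, sum(p_over_z) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(gp, p_over_z)) / \
            sum((x - mx) ** 2 for x in gp)
    intercept = my - slope * mx
    return -intercept / slope              # intercept with the Gp axis

# synthetic depletion data: G = 50 Bcf, initial p/z = 4000 psia
gp = [0, 5, 10, 15, 20]                    # cumulative production, Bcf
pz = [4000 - 80 * g for g in gp]           # linear p/z decline
g_est = estimate_igip(gp, pz)
```

The study's sensitivity finding can be read against this picture: in tight reservoirs the flowing-pressure correction is least reliable, so the extrapolated intercept (and hence IGIP) is most easily biased.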

Keywords: flowing material balance, gas reservoir, reserves, gas simulator

Procedia PDF Downloads 153
23676 Generalization of Zhou Fixed Point Theorem

Authors: Yu Lu

Abstract:

Fixed point theory is a basic tool for the study of the existence of Nash equilibria in game theory. This paper presents a significant generalization of the Veinott-Zhou fixed point theorem for increasing correspondences, which serves as an essential framework for investigating the existence of Nash equilibria in supermodular and quasisupermodular games. To establish our proofs, we explore different conceptions of multivalued increasingness and provide comprehensive results concerning the existence of the largest/least fixed point. We provide two distinct approaches to the proof, each offering unique insights and advantages. These advancements not only extend the applicability of the Veinott-Zhou theorem to a broader range of economic scenarios but also enhance the theoretical framework for analyzing equilibrium behavior in complex game-theoretic models. Our findings pave the way for future research in the development of more sophisticated models of economic behavior and strategic interaction.
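The constructive core behind Tarski-style results can be illustrated with the simplest single-valued case: Kleene iteration of a monotone map on a finite lattice converges to the least fixed point. This sketch does not cover the paper's setting of increasing correspondences (multivalued maps); it only shows the base mechanism.

```python
def least_fixed_point(f, bottom):
    """Kleene iteration: for a monotone map f on a finite lattice,
    iterating f from the bottom element converges to the least
    fixed point."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# monotone map on the powerset of {0, 1, 2, 3} ordered by inclusion:
# always include 0, and close the set under n -> n + 1 (capped at 3)
f = lambda s: frozenset({0} | {n + 1 for n in s if n + 1 <= 3})
lfp = least_fixed_point(f, frozenset())
```

Starting from the empty set, the iterates grow monotonically ({} ⊆ {0} ⊆ {0,1} ⊆ ...) and stop at the least set closed under f, mirroring how the largest fixed point would be reached from the top element.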

Keywords: fixed-point, Tarski’s fixed-point theorem, Nash equilibrium, supermodular game

Procedia PDF Downloads 54
23675 Non-Convex Multi Objective Economic Dispatch Using Ramp Rate Biogeography Based Optimization

Authors: Susanta Kumar Gachhayat, S. K. Dash

Abstract:

Multi-objective non-convex economic dispatch of a thermal power plant is of central concern for determining the cost of generation and for reducing emission levels to mitigate global warming. This paper incorporates ramp-rate constraints as tighter inequality constraints, together with valve-point loading in the generation cost of the thermal power plant, solved through ramp-rate biogeography-based optimization involving mutation and migration. In 50 out of 100 trials, the cost and emission objective functions outperformed classical methods such as the lambda iteration method and quadratic programming, as well as heuristic methods such as particle swarm optimization, weight-improved particle swarm optimization, constriction-factor-based particle swarm optimization, and moderate random particle swarm optimization. Ramp-rate biogeography-based optimization proves advantageous in solving non-convex multi-objective economic dispatch problems subject to nonlinear loads that pollute the source with third-harmonic distortion and similar disturbances.
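The non-convexity comes from the valve-point loading term, commonly written as a rectified sine added to the quadratic fuel cost: F(P) = a + bP + cP^2 + |e*sin(f*(Pmin - P))|. The sketch below evaluates that cost; the unit coefficients are placeholders, not the paper's data, and the optimizer itself (RRBBO) is not shown.

```python
import math

def fuel_cost(p, a, b, c, e, f, p_min):
    """Generator fuel cost with the valve-point loading effect: the
    rectified-sine term makes the cost non-convex and non-smooth."""
    return a + b * p + c * p * p + abs(e * math.sin(f * (p_min - p)))

def dispatch_cost(powers, units):
    """Total cost of a candidate dispatch; ramp-rate limits would
    additionally constrain how far each p may move from its previous
    value between dispatch intervals."""
    return sum(fuel_cost(p, *u) for p, u in zip(powers, units))

# illustrative 3-unit data: (a, b, c, e, f, p_min) per unit
units = [(150, 1.89, 0.0011, 300, 0.035, 50),
         (115, 2.00, 0.0012, 200, 0.042, 20),
         (40,  3.50, 0.0020, 150, 0.063, 30)]
total = dispatch_cost([200, 150, 100], units)
```

Because the sine ripple creates many local minima, gradient-based methods struggle here, which is the motivation for population-based searches such as biogeography-based optimization.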

Keywords: economic load dispatch, ELD, biogeography-based optimization, BBO, ramp rate biogeography-based optimization, RRBBO, valve-point loading, VPL

Procedia PDF Downloads 379
23674 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials

Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov

Abstract:

Modern commercials presented on billboards, on TV, and on the Internet contain a lot of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus-group studies often cannot reveal important features of how the information read in text messages is interpreted and understood. In addition, there is no reliable method to determine the degree of understanding of the information contained in a text: the mere fact of viewing a text does not mean that the consumer has perceived and understood its meaning. At the same time, tools based on marketing analysis allow only an indirect estimate of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method for recording objective indicators in real time to assess both the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during reading can form the basis of such an objective method. We studied the relationship between multimodal psychophysiological parameters and text comprehension during reading using correlation analysis. We used eye-tracking technology to record eye-movement parameters as an estimate of visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) that reflect the emotional state of the respondent during reading. We revealed reliable interrelations between perception of the information and the dynamics of psychophysiological parameters while reading the text in commercials. Eye-movement parameters reflected the difficulties respondents experienced in perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and the type of commercial. EEG and polygraph parameters together also reflected respondents' mental difficulties in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters across commercial types (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: Our methodology allows a multimodal evaluation of text perusal and reading quality in commercials. In general, our results indicate the possibility of designing an integral model that estimates comprehension of a commercial's text on a percentage scale from all of the observed markers.

Keywords: reading, commercials, eye movements, EEG, polygraphic indicators

Procedia PDF Downloads 165
23673 Breast Cancer Incidence Estimation in Castilla-La Mancha (CLM) from Mortality and Survival Data

Authors: C. Romero, R. Ortega, P. Sánchez-Camacho, P. Aguilar, V. Segur, J. Ruiz, G. Gutiérrez

Abstract:

Introduction: Breast cancer is a leading cause of death in CLM (2.8% of all deaths in women and 13.8% of deaths from tumors in women). It is the tumor with the highest incidence in the CLM region, accounting for 26.1% of all tumors excluding nonmelanoma skin cancer (Cancer Incidence in Five Continents, Volume X, IARC). Cancer registries are a good source of information for estimating cancer incidence; however, their data are usually available with a lag, which makes them difficult for health managers to use. By contrast, mortality and survival statistics are published with less delay. To support resource planning and address this problem, a method is presented to estimate incidence from mortality and survival data. Objectives: To estimate the incidence of breast cancer by age group in CLM over the period 1991-2013, comparing the data obtained from the model with current incidence data. Sources: Annual number of women by single year of age (National Statistics Institute); annual number of deaths from all causes and from breast cancer (Mortality Registry of CLM); breast cancer relative survival probability (EUROCARE, Spanish registry data). Methods: A Weibull parametric survival model is fitted to the EUROCARE data. From this survival model, the population data, and the mortality data, the Mortality and Incidence Analysis MODel (MIAMOD) regression model is used to estimate cancer incidence by age (1991-2013). Results: The resulting model is: logit(Ix,t) = const + age1*x + age2*x^2 + coh1*(t - x) + coh2*(t - x)^2, where Ix,t is the incidence at age x in year t, and the parameter estimates are: const (constant term) = -7.03; age1 = 3.31; age2 = -1.10; coh1 = 0.61; coh2 = -0.12. It is estimated that 662 cases of breast cancer were diagnosed in CLM in 1991 (81.51 per 100,000 women) and 1,152 cases in 2013 (112.41 per 100,000 women), an increase of 40.7% in the crude incidence rate (1.9% per year). The average annual increases in incidence by age group were 2.07% in women aged 25-44 years, 1.01% (45-54 years), 1.11% (55-64 years), and 1.24% (65-74 years). Spanish cancer registries that report to the IARC declared an average annual incidence rate of 98.6 cases per 100,000 women for 2003-2007; our model yields 100.7 cases per 100,000 women. Conclusions: A sharp and steady increase in the incidence of breast cancer is observed over the period 1991-2013. The increase is seen in all age groups considered, although it is most pronounced in young women (25-44 years). This method provides a good estimate of incidence.
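Evaluating a MIAMOD-style model of this form amounts to applying the inverse logit to the age-cohort polynomial. The sketch below uses the abstract's reported coefficients but assumes standardized age and cohort covariates; the abstract does not state the covariate scaling, so the absolute rates produced here are illustrative only.

```python
import math

def incidence_rate(x_std, cohort_std,
                   const=-7.03, age1=3.31, age2=-1.10,
                   coh1=0.61, coh2=-0.12):
    """Evaluate logit(I) = const + age1*x + age2*x^2 + coh1*c + coh2*c^2
    and return the incidence I via the inverse logit. x_std and
    cohort_std are assumed standardized covariates (an assumption,
    since the paper's scaling is not given)."""
    eta = (const + age1 * x_std + age2 * x_std ** 2
                 + coh1 * cohort_std + coh2 * cohort_std ** 2)
    return 1.0 / (1.0 + math.exp(-eta))   # incidence as a probability

rate = incidence_rate(0.5, 0.5)
```

With age1 > 0 and a moderate negative quadratic term, the fitted curve rises with age over the working range, matching the reported increases across age groups.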

Keywords: breast cancer, incidence, cancer registries, Castilla-La Mancha

Procedia PDF Downloads 311