Search results for: Maximum a Posteriori Probability (MAP) estimator

1804 Estimating Regression Effects in Com Poisson Generalized Linear Model

Authors: Vandna Jowaheer, Naushad A. Mamode Khan

Abstract:

The Com-Poisson distribution is capable of modeling count responses irrespective of their mean-variance relation, and its parameters, when fitted to simple cross-sectional data, can be estimated efficiently by the maximum likelihood (ML) method. In the regression setup, however, ML estimation of the parameters of the Com-Poisson-based generalized linear model is computationally intensive. In this paper, we propose a quasi-likelihood (QL) approach to estimate the effect of the covariates on the Com-Poisson counts and investigate the performance of this method with respect to the ML method. QL estimates are consistent and almost as efficient as ML estimates. The simulation studies show that the efficiency loss in the estimation of all the parameters using the QL approach as compared to the ML approach is negligible, whereas the QL approach is far less computationally involved than the ML approach.

Keywords: Com Poisson, Cross-sectional, Maximum Likelihood, Quasi likelihood
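
As a rough illustration of the ML side of this comparison (a generic sketch, not the authors' code), the Com-Poisson regression log-likelihood can be maximized numerically with the infinite normalizing series truncated; the design matrix `X`, response vector `y` and truncation limit `max_j` below are assumptions of the example.

```python
# ML fit of a COM-Poisson regression log(lambda_i) = x_i' beta with dispersion nu,
# using a truncated normalizing series and a generic optimizer.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, logsumexp

def com_poisson_negloglik(params, X, y, max_j=200):
    beta, nu = params[:-1], params[-1]
    if nu <= 0:
        return np.inf
    eta = X @ beta                                # linear predictor, log(lambda_i)
    j = np.arange(max_j)[:, None]                 # truncation of the infinite series
    logZ = logsumexp(j * eta - nu * gammaln(j + 1), axis=0)
    return -np.sum(y * eta - nu * gammaln(y + 1) - logZ)

def fit_ml(X, y):
    x0 = np.r_[np.zeros(X.shape[1]), 1.0]         # start from the Poisson case (nu = 1)
    res = minimize(com_poisson_negloglik, x0, args=(X, y), method="Nelder-Mead")
    return res.x                                  # (beta_hat..., nu_hat)
```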

1803 Modeling and Design of MPPT Controller Using Stepped P&O Algorithm in Solar Photovoltaic System

Authors: R. Prakash, B. Meenakshipriya, R. Kumaravelan

Abstract:

This paper presents the modeling and simulation of a grid-connected photovoltaic (PV) system using an improved mathematical model. The model is used to study the effect of different parameter variations on the PV array, including operating temperature and solar irradiation level. A stepped P&O algorithm is proposed for MPPT control. This algorithm identifies the duty ratio at which the DC-DC converter should be operated to maximize the power output. A photovoltaic array with the proposed stepped P&O MPPT controller can operate at the maximum power point over the whole range of solar data (irradiance and temperature).

Keywords: Photovoltaic (PV), Maximum Power Point Tracking (MPPT), Boost converter, Stepped Perturb & Observe method (Stepped P&O).
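
For orientation, a conventional perturb-and-observe update looks like the minimal sketch below (the paper's stepped variant, which adapts the perturbation size, is not reproduced); the boost-converter sign convention and the step size are assumptions.

```python
# Conventional P&O step: perturb the duty ratio and keep the perturbation
# direction that increased the measured PV power. The sign convention assumes
# a boost converter, where a larger duty ratio lowers the array voltage.
def perturb_and_observe(v, p, v_prev, p_prev, duty, step=0.005):
    """Return the next duty ratio for the DC-DC converter."""
    dP, dV = p - p_prev, v - v_prev
    if dP != 0:
        if (dP > 0) == (dV > 0):
            duty -= step      # keep raising the array voltage toward the MPP
        else:
            duty += step      # reverse: lower the array voltage
    return min(max(duty, 0.0), 1.0)
```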

1802 The Estimate Rate of Permanent Flow of a Liquid Simulating Blood by Doppler Effect

Authors: Malika.D Kedir-Talha, Mohammed Mehenni

Abstract:

To improve the characterization of blood flows, we propose a method that makes use of the spectral analysis of the Doppler signals. Our calculation involves a reasonable approximation; the error made on the estimated speed reflects the fact that the speed depends on the flow conditions as well as on measurement parameters such as the bore and the volume flow rate. The estimate of the Doppler signal frequency enables us to determine the maximum Doppler frequency Fd max as well as the maximum flow speed. The results show that the difference between the estimated frequencies (Fde) and the Doppler frequencies (Fd) is small; this deviation tends to zero for large θ angles and is proportional to the diameter D. The description of the friction velocity and the friction coefficient justifies the error rate obtained.

Keywords: Doppler frequency, Doppler spectrum, estimate speed, permanent flow.
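
For reference, the textbook continuous-wave Doppler relation that links the maximum Doppler shift to the maximum flow speed is fd = 2 f0 v cos(θ)/c; the sketch below simply inverts it (the default sound speed of 1540 m/s and the function name are assumptions, not values from the paper).

```python
# Convert a maximum Doppler shift into a maximum flow speed using the
# standard Doppler equation fd = 2 * f0 * v * cos(theta) / c.
import math

def speed_from_doppler(fd_max, f0, theta_deg, c=1540.0):
    """Maximum flow speed (m/s) from the maximum Doppler shift fd_max (Hz),
    emitted frequency f0 (Hz), beam angle theta (degrees) and sound speed c (m/s)."""
    return fd_max * c / (2.0 * f0 * math.cos(math.radians(theta_deg)))
```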

1801 Assessment of Nickel Concentration in Surface and Ground Water of the Kowsar Dam Basin

Authors: Fardin Boustani, M Hojati , S Ebrahimzadeh

Abstract:

The Kowsar dam supplies water for different uses such as drinking, industry, agriculture and aquaculture farms, and is located next to the city of Dehdasht in Kohgiluye and Boyerahmad province in southern Iran. There are some towns and villages on the Kowsar dam watershed, of which Dehdasht and Choram are the most important and most populated. The study was undertaken to assess the status of water quality in the urban areas of the Kowsar dam. A total of 28 water samples were collected, from 6 surface-water stations and 1 groundwater station on the watershed of the Kowsar dam. All the samples were analyzed for Ni concentration using standard procedures, and the results were compared with national and international standards. The maximum nickel value (0.01 mg/L) was observed at station 2 in autumn 2010, and all the samples analyzed were within the maximum admissible limits set by the United States Environmental Protection Agency, the EU, the WHO and the Iranian standard. In general, the results of the present study show that the Ni mean value of station 2, 0.006 mg/L, is higher than that of the other stations. Although the Ni levels of all samples and stations are within normal values, they indicate a pollution potential and hazard arising from human activity and municipal wastewater in the area, which could affect human health in the future. This research therefore recommends that the government and other responsible authorities take suitable remedial measures in the Kowsar dam watershed.

Keywords: Kowsar dam, Drinking water quality, Nickel, Maximum admissible limit, World Health Organization

1800 Skin Detection Using a Histogram Approximation Based on the Mean Shift Algorithm

Authors: Soo- Young Ye, Ki-Gon Nam, Ki-Won Byun

Abstract:

In this paper, we introduce a skin detection method using a histogram approximation based on the mean shift algorithm. The proposed method applies the mean shift procedure to a histogram of a skin map of the input image, generated by comparison with standard skin colors in the CbCr color space, and separates the background from the skin region by selecting the maximum value according to brightness level. The proposed method detects the skin region by using the mean shift procedure to determine the maximum value that becomes the dividing point, rather than using a manually selected threshold value as in existing techniques. Even when the skin color is contaminated by illumination, the procedure can accurately segment the skin region from the background. The proposed method may be useful for detecting facial regions as a preprocessing step for face recognition under various types of illumination.

Keywords: Skin region detection, mean shift, histogram approximation.
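
A minimal one-dimensional mean shift over a brightness histogram, the core ingredient this abstract relies on, might look like the sketch below; the flat kernel, bandwidth and stopping tolerance are assumptions rather than the authors' settings.

```python
# 1-D mean shift on a histogram: starting from `start`, repeatedly move to the
# weighted mean of the bins inside a window until convergence, which lands on
# a local density peak that can serve as the skin/background dividing point.
import numpy as np

def mean_shift_mode(hist, start, bandwidth=10, iters=100, tol=1e-3):
    bins = np.arange(len(hist), dtype=float)
    x = float(start)
    for _ in range(iters):
        w = hist * (np.abs(bins - x) <= bandwidth)   # flat kernel window
        if w.sum() == 0:
            break
        x_new = (w * bins).sum() / w.sum()           # shift to the local weighted mean
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x
```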

1799 A Method for 3D Mesh Adaptation in FEA

Authors: S. Sfarni, E. Bellenger, J. Fortin, M. Guessasma

Abstract:

The use of mechanical simulation (in particular, finite element analysis) requires the management of assumptions in order to analyse a real, complex system. In finite element analysis (FEA), two modeling steps require assumptions before the computations can be carried out and results obtained: the building of the physical model and the building of the simulation model. The simplifying assumptions made on the analysed system in these two steps can generate two kinds of errors: physical modeling errors (mathematical model, domain simplifications, material properties, boundary conditions and loads) and mesh discretization errors. This paper proposes a mesh adaptation method based on the use of an h-adaptive scheme in combination with an error estimator in order to choose the mesh of the simulation model. This method allows the mesh of the simulation model to be chosen so as to control both the cost and the quality of the finite element analysis.

Keywords: Finite element, discretization errors, adaptivity.
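
The overall control loop implied by such an h-adaptive scheme can be sketched as below; the `solve`, `estimate_error` and `mesh.refine` interfaces are hypothetical placeholders, not the authors' implementation.

```python
# Generic h-adaptivity loop: solve, estimate a per-element error, refine the
# elements whose indicator exceeds a fraction of the largest one, and repeat
# until the estimated global error meets the target.
def adapt_mesh(mesh, solve, estimate_error, target=0.05, frac=0.5, max_cycles=10):
    solution = None
    for _ in range(max_cycles):
        solution = solve(mesh)
        eta = estimate_error(mesh, solution)          # one indicator per element
        if sum(e * e for e in eta) ** 0.5 <= target:  # global error estimate
            break
        threshold = frac * max(eta)
        marked = [i for i, e in enumerate(eta) if e >= threshold]
        mesh = mesh.refine(marked)                    # h-refine the flagged elements
    return mesh, solution
```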

1798 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to calculate PF automatically. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgement, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permits. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods, obtaining markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.

Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.
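
As a generic illustration of one of the methods mentioned (not the case-study model), a Monte-Carlo PF estimate samples the uncertain inputs, evaluates FS for each sample, and reports the fraction of samples with FS below 1; the toy FS function, distributions and parameter values below are assumptions.

```python
# Monte-Carlo probability of failure for a toy planar-failure factor of safety.
import numpy as np

rng = np.random.default_rng(0)

def factor_of_safety(cohesion, friction_deg, unit_weight, height=30.0, slope_deg=40.0):
    """Placeholder FS expression; a real assessment would use the project's
    limit equilibrium or numerical model instead."""
    normal = unit_weight * height * np.cos(np.radians(slope_deg))
    driving = unit_weight * height * np.sin(np.radians(slope_deg))
    resisting = cohesion + normal * np.tan(np.radians(friction_deg))
    return resisting / driving

n = 100_000
fs = factor_of_safety(
    cohesion=rng.normal(100.0, 20.0, n),      # kPa (hypothetical)
    friction_deg=rng.normal(38.0, 3.0, n),    # degrees (hypothetical)
    unit_weight=rng.normal(27.0, 0.5, n),     # kN/m^3 (hypothetical)
)
print(f"mean FS = {fs.mean():.2f}, PF = {np.mean(fs < 1.0):.1%}")
```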

1797 Strain Based Evaluation of Dents in Pressurized Pipes

Authors: Maziar Ramezani, Thomas Neitzert

Abstract:

A dent is a gross distortion of the pipe cross-section. Dent depth is defined as the maximum reduction in the diameter of the pipe compared to the original diameter. Finite element (FE) simulation and theoretical analysis of pipeline dents are conducted in this paper to develop an understanding of the geometric characteristics and strain distribution in a pressurized dented pipe. Based on the results, the magnitude of the denting force increases significantly with increasing internal pressure, and the maximum circumferential and longitudinal strains increase with increasing internal pressure and dent depth. The results can be used for characterizing dents and ranking their risks to the integrity of a pipeline.

Keywords: dented steel pipelines, Finite element model, Internal pressure, Strain distribution

1796 Investigation of the Effect of Fine-Grained Content and Its Plastic Properties on Liquefaction Resistance of Sand

Authors: S. A. Naeini, M. Mortezaee

Abstract:

The purpose of this paper is to investigate the effect of the fine-grain content of soil and its plastic properties on the soil liquefaction potential. For this purpose, the conditions for considering the effect of fines and of the percentage of plastic fines on the liquefaction resistance of saturated sand, as presented by various researchers, have been investigated. Then, some comprehensive results on all the issues raised by these researchers are stated. From these investigations it was observed that by increasing the percentage of cohesive fines in the sandy soil (up to 20%), the maximum shear strength decreases, while adding a higher fines percentage increases the maximum shear strength of the resulting soil, which nevertheless never reaches that of clean sand.

Keywords: Fine-grained, liquefaction, plasticity, shear strength, sand.

1795 Effects of Market Share and Diversification on Nonlife Insurers' Performance

Authors: M. Pervan, T. Pavic Kramaric

Abstract:

The aim of this paper is to investigate the influence of market share and diversification on nonlife insurers' performance. The underlying relationships have been investigated in different industries and different disciplines (economics, management...), yet no consistency exists in either the magnitude or the statistical significance of the relationship between market share (and diversification as well) on one side and companies' performance on the other. Moreover, the direction of the relationship is also somewhat questionable: while some authors find this relationship to be positive, others reveal a negative association. In order to test the influence of market share and diversification on companies' performance in the Croatian nonlife insurance industry for the period from 1999 to 2009, we designed an empirical model in which we included the following independent variables: firms' profitability from previous years, market share, diversification and control variables (i.e. ownership, industrial concentration, GDP per capita, inflation). Using the two-step generalized method of moments (GMM) estimator, we found evidence of a positive and statistically significant influence of both market share and diversification on insurers' profitability.

Keywords: Diversification, market share, nonlife insurance
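
For readers unfamiliar with the estimator mentioned at the end, a bare-bones two-step linear GMM looks like the sketch below; the paper's dynamic panel specification, instrument set and controls are not reproduced, and the variable names are placeholders.

```python
# Two-step linear GMM with instruments Z for the moment conditions
# E[z_i (y_i - x_i' beta)] = 0: the first step uses the 2SLS weight matrix,
# the second step re-weights with the estimated moment covariance.
import numpy as np

def two_step_gmm(y, X, Z):
    n = len(y)

    def gmm_beta(W):
        A = X.T @ Z @ W @ Z.T @ X
        b = X.T @ Z @ W @ Z.T @ y
        return np.linalg.solve(A, b)

    W1 = np.linalg.inv(Z.T @ Z / n)                  # first-step weight matrix
    u = y - X @ gmm_beta(W1)                         # first-step residuals
    S = (Z * u[:, None]).T @ (Z * u[:, None]) / n    # heteroskedasticity-robust S
    return gmm_beta(np.linalg.inv(S))                # efficient second-step estimate
```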

1794 Removal of Chromium from Aqueous Solution using Synthesized Polyaniline in Acetonitrile

Authors: Majid Riahi Samani, Seyed Mehdi Borghei

Abstract:

The adsorptive characteristics of polyaniline synthesized in a mixture of water and acetonitrile at a 50/50 volume ratio were studied. The synthesized polyaniline, in powder form, is used as an adsorbent to remove toxic hexavalent chromium from aqueous solutions. Experiments were conducted in batch mode with different variables such as agitation time, solution pH and initial concentration of hexavalent chromium. The removal mechanism is a combination of surface adsorption and reduction. The equilibrium time for removal of Cr(T) and Cr(VI) was about 2 and 10 minutes, respectively. The optimum pH for total chromium removal was pH 7, and maximum hexavalent chromium removal took place under acidic conditions at pH 3. Investigation of the isothermal characteristics showed that the equilibrium adsorption data fitted both the Freundlich and Langmuir isotherms. The maximum adsorption of chromium was calculated as 36.1 mg/g of polyaniline.

Keywords: Polyaniline, Chromium, acetonitrile, Adsorption
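
Fitting the two isotherms named in the abstract to equilibrium data is a short nonlinear regression; the sketch below uses hypothetical data points, not the paper's measurements.

```python
# Fit Langmuir (qe = qmax*KL*Ce/(1+KL*Ce)) and Freundlich (qe = KF*Ce^(1/n))
# isotherms to equilibrium adsorption data with nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # equilibrium conc., mg/L (hypothetical)
qe = np.array([8.0, 15.0, 22.0, 29.0, 34.0])  # adsorbed amount, mg/g (hypothetical)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.1])
(KF, nF), _ = curve_fit(freundlich, Ce, qe, p0=[5.0, 2.0])
print(f"Langmuir qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg; Freundlich 1/n = {1/nF:.2f}")
```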

1793 Solar Tracking System Using a Refrigerant as Working Medium for Solar Energy Conversion

Authors: S. Sendhil Kumar, S. N. Vijayan

Abstract:

Utilization of solar energy is found in various domestic and industrial applications. The performance of any solar collector is largely affected by various parameters such as glazing, the absorber plate, top covers, and heating pipes. Technology improvements have brought another method for converting solar energy directly into electricity: the solar photovoltaic system. Effective utilization and extraction of solar energy is the biggest problem in these conversion methods. This paper aims to overcome these problems and to take advantage of the available solar energy by maximizing its utilization through a solar tracking system that uses a refrigerant as the working medium. The use of this tracking system can help increase the efficiency of conversion devices through maximum utilization of solar energy. A dual-axis tracking system gives maximum energy output compared to a single-axis tracking system.

Keywords: Refrigerant, solar collector, solar energy, solar panel, solar tracking.

1792 Unmanned Aerial Vehicle Selection Using Fuzzy Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

The selection of an Unmanned Aerial Vehicle (UAV) involves complex decision-making due to the evaluation of numerous alternatives and criteria simultaneously. This process necessitates the consideration of various factors such as payload capacity, maximum speed, endurance, altitude, avionics systems, price, economic life, and maximum range. This study aims to determine the most suitable UAV by taking these criteria into account. To achieve this, the standard fuzzy set methodology is employed, enabling decision-makers to define linguistic terms as references. A practical numerical example is provided to demonstrate the applicability of the proposed approach. Through a successful application, a comparison of different UAVs is conducted, culminating in the selection of the most appropriate vehicle during the final stage.

Keywords: Standard fuzzy sets (SFSs), Unmanned Aerial Vehicle (UAV) selection, multiple criteria decision making, MCDM

1791 Additional Considerations on a Sequential Life Testing Approach using a Weibull Model

Authors: D. I. De Souza, D. R. Fonseca, R. Rocha

Abstract:

In this paper we will develop further the sequential life test approach presented in a previous article by [1], using an underlying two-parameter Weibull sampling distribution. The minimum life will be considered equal to zero. We will again provide rules for making one of the three possible decisions as each observation becomes available, that is: accept the null hypothesis H0; reject the null hypothesis H0; or obtain additional information by making another observation. The product being analyzed is a new type of low-alloy, high-strength steel product. To estimate the shape and scale parameters of the underlying Weibull model we will use a maximum likelihood approach for censored failure data. A new example will further develop the proposed sequential life testing approach.

Keywords: Sequential Life Testing, Underlying Weibull Model, Maximum Likelihood Approach, Hypothesis Testing.
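
The censored-data ML step mentioned in the abstract can be sketched as below, where failures contribute the Weibull density and right-censored observations contribute the survival function; the optimizer choice and starting values are assumptions.

```python
# Maximum-likelihood estimation of Weibull shape and scale from right-censored
# failure data: maximize the sum of log-pdf terms (failures) and log-survival
# terms (censored units).
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, failed):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    z = t / scale
    log_pdf = np.log(shape / scale) + (shape - 1.0) * np.log(z) - z ** shape
    log_surv = -(z ** shape)                       # log of the survival function
    return -np.sum(failed * log_pdf + (1 - failed) * log_surv)

def fit_weibull_censored(t, failed):
    """t: observed times; failed: 1 if a failure was observed, 0 if censored."""
    res = minimize(neg_loglik, x0=[1.0, float(np.mean(t))], args=(t, failed),
                   method="Nelder-Mead")
    return res.x                                   # (shape, scale) estimates
```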

1790 A Novel Multiresolution based Optimization Scheme for Robust Affine Parameter Estimation

Authors: J.Dinesh Peter

Abstract:

This paper describes a new method for affine parameter estimation between image sequences. Usually, parameter estimation is done by least squares in a quadratic way; however, this technique is sensitive to the presence of outliers. Therefore, parameter estimation techniques for various image processing applications must be robust enough to withstand the influence of outliers. Progressively, robust estimation functions demanding non-quadratic and perhaps non-convex potentials, adopted from the statistics literature, have been used to solve this problem. Addressing the optimization of the error function in such a framework for finding a globally optimal solution, the minimization can begin with a convex estimator at the coarser level and gradually introduce non-convexity, i.e., move from soft to hard redescending non-convex estimators as the iteration reaches the finer levels of the multiresolution pyramid. A comparison has been made of the performance of the proposed method against the results obtained individually using two different estimators.

Keywords: Image Processing, Affine parameter estimation, Outliers, Robust Statistics, Robust M-estimators
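
At a single resolution level, a robust M-estimate of the six affine parameters is typically computed by iteratively reweighted least squares; the sketch below uses Huber weights and a MAD scale estimate as assumptions and omits the coarse-to-fine non-convex schedule described in the abstract.

```python
# IRLS M-estimation of an affine map (x, y) -> (a x + b y + c, d x + e y + f)
# from point correspondences, down-weighting outliers with Huber weights.
import numpy as np

def fit_affine_huber(src, dst, k=1.345, iters=20):
    """src, dst: (N, 2) arrays of corresponding points; returns 6 parameters."""
    A = np.hstack([src, np.ones((len(src), 1))])               # design matrix [x y 1]
    w = np.ones(len(src))
    for _ in range(iters):
        sw = np.sqrt(w)[:, None]
        p_x = np.linalg.lstsq(A * sw, dst[:, :1] * sw, rcond=None)[0].ravel()
        p_y = np.linalg.lstsq(A * sw, dst[:, 1:] * sw, rcond=None)[0].ravel()
        r = np.hypot(A @ p_x - dst[:, 0], A @ p_y - dst[:, 1])  # residual norms
        s = np.median(r) / 0.6745 + 1e-12                       # robust MAD scale
        w = np.where(r <= k * s, 1.0, k * s / (r + 1e-12))      # Huber weights
    return np.concatenate([p_x, p_y])
```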

1789 Linear Quadratic Gaussian/Loop Transfer Recovery Flight Control on a Nonlinear Model

Authors: T. Sanches, K. Bousson

Abstract:

As part of the development of a 4D autopilot system for unmanned aerial vehicles (UAVs), i.e. a time-dependent robust trajectory generation and control algorithm, this work addresses the problem of optimal path control based on flight sensor data output that may be unreliable due to noise in data acquisition and/or transmission under certain circumstances. Although several filtering methods, such as the Kalman-Bucy filter or the Linear Quadratic Gaussian/Loop Transfer Recovery (LQG/LTR) controller, are available, the sheer complexity of the control system, together with the robustness and reliability required of such a system on a UAV for airworthiness-certifiable autonomous flight, required the development of a proper robust filter for a nonlinear system as a way of further mitigating error propagation to the control system and improving its performance. As such, a nonlinear algorithm based upon the LQG/LTR, validated through computational simulation testing, is proposed in this paper.

Keywords: Autonomous flight, LQG/LTR, nonlinear state estimator, robust flight control and stability.
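
For the linear baseline that LQG/LTR builds on, the controller and estimator gains come from two algebraic Riccati equations; the following sketch shows that standard computation (the matrix names and noise covariances are generic placeholders, not the paper's UAV model).

```python
# Standard LQG gain computation: LQR state-feedback gain K from the control
# Riccati equation and Kalman-Bucy estimator gain L from its dual.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqg_gains(A, B, C, Q, R, W, V):
    """Q, R: state/input weights; W, V: process/measurement noise covariances."""
    P = solve_continuous_are(A, B, Q, R)          # control Riccati solution
    K = np.linalg.solve(R, B.T @ P)               # u = -K x_hat
    S = solve_continuous_are(A.T, C.T, W, V)      # filter Riccati (dual problem)
    L = S @ C.T @ np.linalg.inv(V)                # Kalman-Bucy estimator gain
    return K, L
```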

1788 Decolourization of Melanoidin Containing Wastewater Using South African Coal Fly Ash

Authors: V.O. Ojijo, M.S. Onyango, Aoyi Ochieng, F.A.O. Otieno

Abstract:

Batch adsorption of recalcitrant melanoidin using abundantly available coal fly ash was carried out. The ash had a low specific surface area (SBET) of 1.7287 m2/g and a pore volume of 0.002245 cm3/g, while qualitative evaluation of its predominant phases was done by XRD analysis. Colour removal efficiency was found to depend on the various factors studied. Maximum colour removal was achieved around pH 6, whereas increasing the sorbent mass from 10 g/L to 200 g/L enhanced colour reduction from 25% to 86% at 298 K. Spontaneity of the process was suggested by the negative Gibbs free energy, while positive values of the enthalpy change showed the endothermic nature of the process. Non-linear optimization of error functions resulted in the Freundlich and Redlich-Peterson isotherms describing the sorption equilibrium data best. The coal fly ash had a maximum sorption capacity of 53 mg/g and could thus be used as a low cost adsorbent for melanoidin removal.

Keywords: Adsorption, Isotherms, Melanoidin, South African coal fly ash.

1787 Variogram Fitting Based on the Wilcoxon Norm

Authors: Hazem Al-Mofleh, John Daniels, Joseph McKean

Abstract:

Within geostatistics research, effective estimation of the variogram points has been examined, particularly in developing robust alternatives. The parametric fit of these variogram points, which eventually defines the kriging weights, however, has not received the same attention from a robust perspective. This paper proposes the use of the non-linear Wilcoxon norm over weighted non-linear least squares as a robust variogram fitting alternative. First, we introduce the concept of variogram estimation and fitting. Then, as an alternative to non-linear weighted least squares, we discuss the non-linear Wilcoxon estimator. Next, the robustness properties of the non-linear Wilcoxon are demonstrated using a contaminated spatial data set. Finally, under simulated conditions, increasing levels of contaminated spatial processes have their variogram points estimated and fitted. In the fitting of these variogram points, both non-linear weighted least squares and non-linear Wilcoxon fits are examined for efficiency. At all levels of contamination (including 0%), using a robust estimation and robust fitting procedure, the non-weighted Wilcoxon outperforms weighted least squares.

Keywords: Non-Linear Wilcoxon, robust estimation, Variogram estimation.
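
A compact way to picture the proposed fit is to minimize the Wilcoxon pseudo-norm of the residuals between empirical variogram points and a spherical model; the sketch below is a generic illustration with assumed starting values, not the authors' code.

```python
# Fit a spherical variogram model to empirical variogram points by minimizing
# the Wilcoxon pseudo-norm of the residuals (a rank-based robust criterion).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def spherical(h, nugget, sill, a):
    h = np.asarray(h, dtype=float)
    inside = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, inside, nugget + sill)

def wilcoxon_norm(e):
    n = len(e)
    scores = np.sqrt(12.0) * (rankdata(e) / (n + 1.0) - 0.5)  # Wilcoxon scores
    return np.sum(scores * e)

def fit_variogram(h, gamma_hat, x0=(0.0, 1.0, 500.0)):
    obj = lambda p: wilcoxon_norm(gamma_hat - spherical(h, *p))
    return minimize(obj, x0, method="Nelder-Mead").x          # (nugget, sill, range)
```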

1786 Gaussian Particle Flow Bernoulli Filter for Single Target Tracking

Authors: Hyeongbok Kim, Lingling Zhao, Xiaohong Su, Junjie Wang

Abstract:

The Bernoulli filter is a precise Bayesian filter for single target tracking based on the random finite set theory. The standard Bernoulli filter often underestimates the number of the targets. This study proposes a Gaussian particle flow (GPF) Bernoulli filter employing particle flow to migrate particles from prior to posterior positions to improve the performance of the standard Bernoulli filter. By employing the particle flow filter, the computational speed of the Bernoulli filters is significantly improved. In addition, the GPF Bernoulli filter provides more accurate estimation compared with that of the standard Bernoulli filter. Simulation results confirm the improved tracking performance and computational speed in two- and three-dimensional scenarios compared with other algorithms.

Keywords: Bernoulli filter, particle filter, particle flow filter, random finite sets, target tracking.

1785 Low Cost Chip Set Selection Algorithm for Multi-way Partitioning of Digital System

Authors: Jae Young Park, Soongyu Kwon, Kyu Han Kim, Hyeong Geon Lee, Jong Tae Kim

Abstract:

This paper considers the problem of finding a low cost chip set for a minimum cost partitioning of large logic circuits. Chip sets are selected from a given library; each chip in the library has a different price, area, and I/O pin count. We propose a low cost chip set selection algorithm. The inputs to the algorithm are a netlist and the chip information in the library. The output is a list of chip sets that satisfy the area and maximum partitioning number constraints, sorted by cost. The algorithm finds the sorted list of chip sets from minimum cost to maximum cost. We used MCNC benchmark circuits for the experiments. The experimental results show that all of the chip sets found satisfy the multiple partitioning constraints.

Keywords: lowest cost chip set, MCNC benchmark, multi-way partitioning.

1784 Pomelo Peel: Agricultural Waste for Biosorption of Cadmium Ions from Aqueous Solutions

Authors: Wanna Saikaew, Pairat Kaewsarn, Wuthikorn Saikaew

Abstract:

The ability of pomelo peel, a natural biosorbent, to remove Cd(II) ions from aqueous solution by biosorption was investigated. The experiments were carried out by a batch method at 25 °C. The influences of solution pH, initial cadmium ion concentration and contact time were evaluated. Cadmium ion removal increased significantly as the pH of the solution increased from pH 1 to pH 5, reaching a maximum value at pH 5. The equilibrium process was described well by the Langmuir isotherm model, with a maximum biosorption capacity of 21.83 mg/g. The biosorption was relatively quick (approx. 20 min), and the biosorption kinetics followed a pseudo-second-order model. The results showed that pomelo peel was effective as a biosorbent for removing cadmium ions from aqueous solution. It is a low cost material that shows potential for application in wastewater technology for remediation of heavy metal contamination.

Keywords: Pomelo peel, biosorption, Cadmium ions.

1783 Efficient Hardware Implementation of an Elliptic Curve Cryptographic Processor Over GF (2 163)

Authors: Massoud Masoumi, Hosseyn Mahdizadeh

Abstract:

A new and highly efficient architecture for elliptic curve scalar point multiplication, which is optimized for a binary field recommended by NIST and is well-suited for elliptic curve cryptographic (ECC) applications, is presented. To achieve the maximum architectural and timing improvements, we have reorganized and reordered the critical path of the Lopez-Dahab scalar point multiplication architecture such that logic structures are implemented in parallel and operations in the critical path are diverted to non-critical paths. With G=41, the proposed design is capable of performing a multiplication over the extension field of degree 163 in 11.92 µs at the maximum achievable frequency of 251 MHz on a Xilinx Virtex-4 (XC4VLX200), while 22% of the chip area is occupied, where G is the digit size of the underlying digit-serial finite field multiplier.

Keywords: Elliptic curve cryptography, FPGA implementation, scalar point multiplication.
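
To make the operation concrete, scalar point multiplication is repeated point doubling and addition driven by the bits of the scalar; the sketch below shows the generic affine double-and-add over a small prime curve y^2 = x^3 + ax + b (mod p), whereas the paper's hardware works over GF(2^163) with the Lopez-Dahab formulation.

```python
# Generic double-and-add elliptic-curve scalar multiplication over a prime
# field (illustrative only; binary-field arithmetic as used in the paper
# differs). None represents the point at infinity.
def ec_add(P, Q, a, p):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def scalar_mult(k, P, a, p):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)                    # add when the scalar bit is set
        P = ec_add(P, P, a, p)                        # always double
        k >>= 1
    return R
```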

1782 Assessment of Cadmium Level in Water from Watershed of the Kowsar Dam

Authors: Fardin Boustani

Abstract:

The Kowsar dam supplies water for different uses such as drinking, industry, agriculture and aquaculture farms, and is located next to the city of Dehdasht in Kohgiluye and Boyerahmad province in southern Iran. There are some towns and villages on the Kowsar dam watershed, of which Dehdasht and Choram are the most important and most populated. The study was undertaken to assess the status of water quality in the urban areas of the Kowsar dam. A total of 28 water samples were collected, from 6 surface-water stations and 1 groundwater station on the watershed of the Kowsar dam. All the samples were analyzed for Cd concentration using standard procedures, and the results were compared with national and international standards. The maximum cadmium value (1.131 μg/L) was observed at station 2 in winter 2009, and all the samples analyzed were within the maximum admissible limits set by the United States Environmental Protection Agency, the EU, the WHO, and the New Zealand, Australian, Iranian and Indian standards. In general, the results of the present study show that the Cd mean values of stations 4, 1 and 2, at 0.5135, 0.4733 and 0.4573 μg/L respectively, are higher than those of the other stations. Although the Cd levels of all samples and stations are within normal values, this is an indication of pollution potential and hazards arising from human activity and municipal wastewater in the area, which could affect human health in the future. This research therefore recommends that the government and other responsible authorities take suitable remedial measures in the Kowsar dam watershed.

Keywords: Kowsar dam, Drinking water quality, Cadmium, Maximum admissible limit, World Health Organization

1781 Frequency Offset Estimation Schemes Based On ML for OFDM Systems in Non-Gaussian Noise Environments

Authors: Keunhong Chae, Seokho Yoon

Abstract:

In this paper, frequency offset (FO) estimation schemes robust to the non-Gaussian noise environments are proposed for orthogonal frequency division multiplexing (OFDM) systems. First, a maximum-likelihood (ML) estimation scheme in non-Gaussian noise environments is proposed, and then, the complexity of the ML estimation scheme is reduced by employing a reduced set of candidate values. In numerical results, it is demonstrated that the proposed schemes provide a significant performance improvement over the conventional estimation scheme in non-Gaussian noise environments while maintaining the performance similar to the estimation performance in Gaussian noise environments.

Keywords: Frequency offset estimation, maximum-likelihood, non-Gaussian noise environment, OFDM, training symbol.
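
Under a Gaussian-noise assumption, the grid-based ML estimate of the carrier frequency offset with a known training symbol reduces to picking the candidate that maximizes a correlation metric, as in the sketch below; the robust metric the paper develops for non-Gaussian noise is not reproduced here.

```python
# Grid-search ML carrier frequency offset (CFO) estimation with a known
# training symbol s: de-rotate the received block by each candidate offset and
# keep the candidate with the largest correlation magnitude (Gaussian-noise ML).
import numpy as np

def ml_cfo_estimate(r, s, candidates):
    N = len(r)
    n = np.arange(N)
    metrics = [np.abs(np.vdot(s, r * np.exp(-2j * np.pi * eps * n / N)))
               for eps in candidates]
    return candidates[int(np.argmax(metrics))]   # estimated normalized CFO
```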

1780 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations

Authors: Satyanadh Gundimada, Vijayan K Asari

Abstract:

A novel feature selection strategy to improve the recognition accuracy on faces that are affected by non-uniform illumination, partial occlusions and varying expressions is proposed in this paper. This technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration. This property allows the goal of lighting-invariant face recognition to be achieved. Phase congruency maps of the training samples are generated and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in order of increasing distance between the sub-regions involved in merging. The assumption behind the proposed implementation of the region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, which is the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a high improvement in classification performance compared to baseline algorithms.

Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.

1779 Signing the First Packet in Amortization Scheme for Multicast Stream Authentication

Authors: Mohammed Shatnawi, Qusai Abuein, Susumu Shibusawa

Abstract:

Signature amortization schemes have been introduced for authenticating multicast streams, in which a single signature is amortized over several packets. The hash value of each packet is computed, and some hash values are appended to other packets, forming what is known as a hash chain. These schemes divide the stream into blocks, each block being a number of packets; the signature packet in these schemes is either the first or the last packet of the block. Amortization schemes are efficient solutions in terms of computation and communication overhead, especially in real-time environments. The main effective factor of an amortization scheme is its hash chain construction. Some studies show that signing the first packet of each block reduces the receiver's delay and prevents DoS attacks, while other studies show that signing the last packet reduces the sender's delay. To our knowledge, there are no studies that show which is better, signing the first or the last packet, in terms of authentication probability and resistance to packet loss. In this paper we introduce another scheme for authenticating multicast streams that is robust against packet loss, reduces the overhead, and at the same time prevents the DoS attacks experienced by the receiver. Our scheme, the Multiple Connected Chain signing the First packet (MCF) scheme, appends the hash values of specific packets to other packets, then appends some hashes to the signature packet, which is sent as the first packet in the block. This scheme is especially efficient in terms of the receiver's delay. We discuss and evaluate the performance of our proposed scheme against those that sign the last packet of the block.

Keywords: multicast stream authentication, hash chain construction, signature amortization, authentication probability.
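
The basic idea of signing the first packet can be illustrated with a single backward-built hash chain, as sketched below; the MCF scheme connects multiple such chains for loss resilience, and `sign` is a placeholder for any digital signature primitive.

```python
# Build one block of an amortized-signature stream where the signature packet
# is sent first: each packet carries the hash of its (augmented) successor,
# and the head hash is signed once per block.
import hashlib

def build_block(packets, sign):
    augmented = [packets[-1]]
    h = hashlib.sha256(packets[-1]).digest()
    for p in reversed(packets[:-1]):
        aug = p + h                       # append the successor's hash to this packet
        augmented.insert(0, aug)
        h = hashlib.sha256(aug).digest()
    signature_packet = h + sign(h)        # signed head hash, sent first
    return [signature_packet] + augmented
```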

1778 Geostatistical Analysis and Mapping of Groundlevel Ozone in a Medium Sized Urban Area

Authors: F. J. Moral García, P. Valiente González, F. López Rodríguez

Abstract:

Ground-level tropospheric ozone is one of the air pollutants of most concern. It is mainly produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower parts of the atmosphere. Ozone levels become particularly high in regions close to high ozone precursor emissions and during summer, when stagnant meteorological conditions with high insolation and high temperatures are common. In this work, some results of a study of urban ozone distribution patterns in the city of Badajoz, which is the largest and most industrialized city in the Extremadura region (southwest Spain), are shown. Fourteen sampling campaigns, at least one per month, were carried out to measure ambient air ozone concentrations using an automatic portable analyzer, during periods that were selected according to conditions favourable to ozone production. Then, to evaluate the ozone distribution in the city, the measured ozone data were analyzed using geostatistical techniques. First, the exploratory analysis revealed that the data were normally distributed, which is a desirable property for the subsequent stages of the geostatistical study. Secondly, during the structural analysis of the data, theoretical spherical models provided the best fit for all monthly experimental variograms. The parameters of these variograms (sill, range and nugget) revealed that the maximum distance of spatial dependence is between 302-790 m and that the variable, air ozone concentration, is not evenly distributed over short distances. Finally, predictive ozone maps were derived for all points of the experimental study area by means of geostatistical algorithms (kriging). High prediction accuracy was obtained in all cases, as cross-validation showed. Useful information for hazard assessment was also provided by probability maps based on kriging interpolation and the kriging standard deviation.

Keywords: Kriging, map, tropospheric ozone, variogram.

1777 Microbial Oil Production by Monoculture and Mixed Cultures of Microalgae and Oleaginous Yeasts using Sugarcane Juice as Substrate

Authors: Thidarat Papone, Supaporn Kookkhunthod, Ratanaporn Leesing

Abstract:

Monoculture and mixed cultures of a microalga and oleaginous yeasts for microbial oil production were investigated using sugarcane juice as the carbon substrate. The monocultures of the yeasts Torulaspora maleeae Y30 and Torulaspora globosa YU5/2 grew faster than that of the microalga Chlorella sp. KKU-S2. In the monoculture of T. maleeae Y30, a biomass of 8.267 g/L with a lipid yield of 0.920 g/L was obtained, while 8.333 g/L of biomass with a lipid yield of 1.141 g/L was obtained for the monoculture of T. globosa YU5/2. A biomass of 1.933 g/L with a lipid yield of 0.052 g/L was found for the monoculture of Chlorella sp. KKU-S2. The biomass concentration in the mixed cultures of the oleaginous yeasts with the microalga increased faster and was higher compared with that in the monocultures. A biomass of 8.733 g/L with a lipid yield of 1.564 g/L was obtained for the mixed culture of T. maleeae Y30 with Chlorella sp. KKU-S2, while 8.010 g/L of biomass with a lipid yield of 2.424 g/L was found for the mixed culture of T. globosa YU5/2 with Chlorella sp. KKU-S2. The maximum cell yield coefficient (YX/S) of 0.323 was found in the monoculture of Chlorella sp. KKU-S2, but low values of both the specific lipid yield (YP/X, g lipid/g cells), 0.027, and the volumetric lipid production rate (QP, g/L/d), 0.003, were observed. In contrast, the maximum YP/X (0.303), QP (0.105) and maximum process product yield (YP/S, 0.061) were obtained in the mixed culture of T. globosa YU5/2 with Chlorella sp. KKU-S2. The results obtained from the study show that the mixed culture of yeast with microalgae is a desirable cultivation process for microbial oil production.

Keywords: Microbial oil, Chlorella sp. KKU-S2, Torulaspora maleeae Y30, Torulaspora globosa YU5/2, mixed culture, biodiesel.
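
As a quick arithmetic check of one of the reported coefficients (using only numbers from the abstract), the specific lipid yield of the T. globosa YU5/2 with Chlorella sp. KKU-S2 mixed culture is the lipid concentration divided by the biomass concentration.

```python
# Specific lipid yield Y_P/X = lipid produced per unit of cells, computed from
# the concentrations reported in the abstract for the mixed culture.
lipid = 2.424      # g/L
biomass = 8.010    # g/L
print(f"Y_P/X = {lipid / biomass:.3f} g lipid / g cells")   # 0.303, as reported
```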

1776 Combined Source and Channel Coding for Image Transmission Using Enhanced Turbo Codes in AWGN and Rayleigh Channel

Authors: N. S. Pradeep, M. Balasingh Moses, V. Aarthi

Abstract:

Any signal transmitted over a channel is corrupted by noise and interference. A host of channel coding techniques has been proposed to alleviate the effect of such noise and interference. Among these, Turbo codes are recommended because of their increased capacity at higher transmission rates and superior performance over convolutional codes. Multimedia elements, which are associated with large amounts of data, are best protected by Turbo codes. The Turbo decoder employs the Maximum A Posteriori Probability (MAP) and Soft Output Viterbi Algorithm (SOVA) decoding algorithms. Conventional Turbo coded systems employ Equal Error Protection (EEP), in which the protection of all the data in an information message is uniform. Some applications involve Unequal Error Protection (UEP), in which the level of protection is higher for important information bits than for other bits. In this work, the traditional Log MAP decoding algorithm is enhanced by using optimized scaling factors for both decoders. The error-correcting performance in the presence of UEP in the Additive White Gaussian Noise (AWGN) channel and in Rayleigh fading is analyzed for the transmission of an image with the Discrete Cosine Transform (DCT) as the source coding technique. This paper compares the performance of the Log MAP, Modified Log MAP (MlogMAP) and Enhanced Log MAP (ElogMAP) algorithms used for image transmission. The MlogMAP algorithm is found to be best for lower Eb/N0 values, but for higher Eb/N0 ElogMAP performs better with optimized scaling factors. The performance comparison of the AWGN and fading channels indicates the robustness of the proposed algorithm. According to the performance of the three different message classes, class 3 is more protected than the other two classes. From the performance analysis, it is observed that the ElogMAP algorithm with UEP is best for the transmission of an image, compared to the Log MAP and MlogMAP decoding algorithms.

Keywords: AWGN, BER, DCT, Fading, MAP, UEP.
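
The difference between the decoder variants compared in the paper comes down to how the Jacobian logarithm is computed and whether the extrinsic information is scaled; the sketch below shows these generic building blocks (the 0.75 scaling value is a commonly used default, not the paper's optimized factor).

```python
# Building blocks of MAP-family turbo decoding: the exact log-MAP max* operation,
# its max-log-MAP approximation, and scaling of the extrinsic LLRs passed
# between the two constituent decoders.
import math

def max_star(a, b):
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))   # exact Jacobian logarithm

def max_log(a, b):
    return max(a, b)                                       # max-log-MAP approximation

def scale_extrinsic(llr_ext, s=0.75):
    """Scale extrinsic LLRs to compensate for the optimism of the approximation."""
    return [s * L for L in llr_ext]
```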

1775 Computing Maximum Uniquely Restricted Matchings in Restricted Interval Graphs

Authors: Swapnil Gupta, C. Pandu Rangan

Abstract:

A uniquely restricted matching is defined to be a matching M whose matched vertices induce a subgraph that has only one perfect matching. In this paper, we make progress on the open question of the status of this problem on interval graphs (graphs obtained as the intersection graph of intervals on a line). We give an algorithm to compute maximum cardinality uniquely restricted matchings on certain sub-classes of interval graphs. We consider two sub-classes of interval graphs, the former contained in the latter, and give O(|E|^2) time algorithms for both of them. It is to be noted that both sub-classes are incomparable to proper interval graphs (graphs obtained as the intersection graph of intervals in which no interval completely contains another interval), on which the problem can be solved in polynomial time.

Keywords: Uniquely restricted matching, interval graph, design and analysis of algorithms, matching, induced matching, witness counting.
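
The defining property is easy to verify by brute force on small instances, which helps when sanity-checking an implementation; the exponential-time checker below only illustrates the definition and is not the paper's O(|E|^2) algorithm.

```python
# Check whether a matching M is uniquely restricted in a graph with edge list
# `edges`: the subgraph induced by M's saturated vertices must have exactly one
# perfect matching. Exponential time; intended for tiny test graphs only.
def count_perfect_matchings(vertices, edges):
    vertices = sorted(vertices)
    if not vertices:
        return 1
    v = vertices[0]
    total = 0
    for (a, b) in edges:
        if v in (a, b):
            rest = [x for x in vertices if x not in (a, b)]
            rest_edges = [e for e in edges if e[0] in rest and e[1] in rest]
            total += count_perfect_matchings(rest, rest_edges)
    return total

def is_uniquely_restricted(M, edges):
    saturated = {v for e in M for v in e}
    induced = [e for e in edges if e[0] in saturated and e[1] in saturated]
    return count_perfect_matchings(saturated, induced) == 1
```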
