Search results for: Discrete shifts
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 614


44 Analytical Study of Sedimentation Formation in Lined Canals using the SHARC Software- A Case Study of the Western Intake Structure in Dez Diversion Weir in Dezful, Iran

Authors: A.H. Sajedipoor, N. Hedayat, M. Mashal

Abstract:

Sedimentation is a hydraulic phenomenon that is emerging as a serious challenge in river engineering. When the flow gathers sufficient potential energy, it shifts the sediment load along the channel bed. Such material is transported as suspended load and bed load, and its movement along river courses and channels, together with the ways in which it can affect water intakes, is one of the major challenges for sustainable operation and maintenance of hydraulic structures. The problem is particularly serious in arid and semi-arid regions such as Iran, where inappropriate watershed management can shift a great deal of sediment into reservoirs and irrigation systems. This paper investigates sedimentation in the Western Canal of the Dez Diversion Weir in Iran, identifies the factors that influence the process, and proposes ways to mitigate its detrimental effects using the SHARC software. Data from the Dezful water authority and the Dezful hydrometric station, covering a river course of about 6 km, were used. The model estimated sand and silt bed load concentrations of 193 ppm and 827 ppm respectively. Compared with the available averages of 165 ppm for annual bed load and 837 ppm for suspended sediment load, there was a significant difference (16%) for the sand grains, whereas no significant difference (1.2%) was found for the silt grain sizes. One explanation is that the 6 km river course exhibits considerable meandering, which accounts for the recent shift in hydraulic behavior along the reach under investigation. The downstream sand concentration, relative to the present state of the canal, showed a steeply descending curve, while sediment trapping showed a steeply ascending curve; these trends arose because the diversion weir was not included in the simulation model.
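
A quick arithmetic check of the relative differences quoted above, using only the concentrations given in the abstract; treating the station averages as the reference values is an assumption, and the calculation roughly reproduces the reported 16% and 1.2% figures.

```python
# Values (ppm) taken from the abstract; using the measured station
# averages as the reference is an assumption for this check.
sand_estimated, sand_measured = 193.0, 165.0
silt_estimated, silt_measured = 827.0, 837.0

def relative_difference(estimated, measured):
    """Relative difference of the SHARC estimate versus the station average (%)."""
    return abs(estimated - measured) / measured * 100.0

print(f"sand: {relative_difference(sand_estimated, sand_measured):.1f} %")  # ~17 %
print(f"silt: {relative_difference(silt_estimated, silt_measured):.1f} %")  # ~1.2 %
```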

Keywords: SHARC model, sedimentation, Western canal, Dez diversion weir.

43 Financing Decision and Productivity Growth for the Venture Capital Industry Using High-Order Fuzzy Time Series

Authors: Shang-En Yu

Abstract:

Human society is full of uncertainties, such as forecasting economic growth rates or financial crises. Since Song and Chissom introduced the concept of fuzzy time series in 1993, many scholars have used different variants of the model to deal with such problems. Previous studies, however, usually do not consider the selection of relevant variables and base the fuzzification solely on subjective opinion, so they cannot objectively reflect the characteristics of the data set; in addition, forecasts often treat all fuzzy rules as equally important and fail to account for the importance of each rule. For these reasons, this study performs factor selection through a self-organizing map (SOM) and proposes a high-order weighted multivariate fuzzy time series model based on a fuzzy back-propagation neural network (Fuzzy-BPN), using the ordered weighted averaging (OWA) operator for weighted prediction. To verify the proposed method, the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) is used as the forecasting target, and the appropriate variables are filtered in the experiment. Finally, the proposed model is compared with models from other recent studies, and the results show that its predictive ability is further improved.
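
For readers unfamiliar with the ordered weighted averaging (OWA) operator used for the weighted prediction, the following is a minimal sketch of its mechanics; the weight vector and the forecast values are hypothetical and are not taken from the paper.

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: sort inputs in descending order,
    then take the weighted sum with the (normalized) weight vector."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return float(values @ weights)

# Hypothetical example: aggregate three model forecasts of an index level.
forecasts = [8123.5, 8098.0, 8140.2]
weights = [0.5, 0.3, 0.2]          # assumed weights, not from the paper
print(owa(forecasts, weights))
```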

Keywords: Heterogeneity, residential mortgage loans, foreclosure.

42 Diagnosis of the Heart Rhythm Disorders by Using Hybrid Classifiers

Authors: Sule Yucelbas, Gulay Tezel, Cuneyt Yucelbas, Seral Ozsen

Abstract:

In this study, several heart rhythm disorders were identified from electrocardiography (ECG) data taken from the MIT-BIH arrhythmia database by extracting the required features and presenting them to artificial neural network (ANN), artificial immune system (AIS), artificial neural network based on artificial immune system (AIS-ANN) and particle swarm optimization based artificial neural network (PSO-NN) classifier systems. The main purpose of this study is to evaluate the performance of the hybrid AIS-ANN and PSO-NN classifiers relative to ANN and AIS alone. For this purpose, RR-interval data were obtained for normal sinus rhythm (NSR), atrial premature contraction (APC), sinus arrhythmia (SA), ventricular trigeminy (VTI), ventricular tachycardia (VTK) and atrial fibrillation (AF). These data were then arranged in pairs (NSR-APC, NSR-SA, NSR-VTI, NSR-VTK and NSR-AF), the discrete wavelet transform was applied to each group, and two data sets with 9 and 27 features were obtained after data reduction. The data were first shuffled randomly within each class, and 4-fold cross validation was then applied to create the training and testing sets. The training and testing accuracy rates and the training times were compared.

As a result, the performance of the hybrid classification systems, AIS-ANN and PSO-NN, was seen to be close to that of the ANN system, and the results of the hybrid systems were much better than those of AIS. However, ANN had a much shorter training time than the other systems; in terms of training time, ANN was followed by PSO-NN, AIS-ANN and AIS respectively. The features extracted from the data also affected the classification results significantly.
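
As a rough illustration of the kind of pipeline described, the sketch below extracts discrete-wavelet features from synthetic RR-interval segments and evaluates an ANN with 4-fold cross-validation; PyWavelets and scikit-learn stand in for the tools actually used, and the data, wavelet choice and network size are assumptions.

```python
import numpy as np
import pywt
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def dwt_features(segment, wavelet="db4", level=3):
    """Summarize DWT sub-bands of an RR-interval segment by mean, std and
    energy (3 sub-bands x 3 statistics = 9 features)."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    feats = []
    for c in coeffs[:3]:
        feats += [c.mean(), c.std(), float(np.sum(c ** 2))]
    return np.array(feats)

# Synthetic stand-in data: two rhythm classes of 64-sample RR-interval segments.
X = np.array([dwt_features(rng.normal(0.8 + 0.2 * y, 0.05 + 0.05 * y, 64))
              for y in (0, 1) for _ in range(50)])
y = np.repeat([0, 1], 50)

accs = []
for train, test in KFold(n_splits=4, shuffle=True, random_state=0).split(X):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X[train], y[train])
    accs.append(clf.score(X[test], y[test]))
print(f"mean 4-fold accuracy: {np.mean(accs):.3f}")
```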

Keywords: AIS, ANN, ECG, hybrid classifiers, PSO.

41 Comparative Parametric Analysis on the Dynamic Response of Fibre Composite Beams with Debonding

Authors: Indunil Jayatilake, Warna Karunasena

Abstract:

Fiber Reinforced Polymer (FRP) composites enjoy an array of applications ranging from aerospace, marine and military to automotive, recreational and civil industry due to their outstanding properties. A structural glass fiber reinforced polymer (GFRP) composite sandwich panel made from E-glass fiber skins and a modified phenolic core has been manufactured in Australia for civil engineering applications. One of the major damage mechanisms in FRP composites is skin-core debonding. The presence of debonding is of great concern not only because it severely affects the strength but also because it modifies the dynamic characteristics of the structure, including the natural frequencies and vibration modes. This paper investigates the dynamic characteristics of a GFRP beam with single and multiple debonding through finite element based numerical simulations and analyses using the STRAND7 finite element (FE) software package. Three-dimensional computer models have been developed and numerical simulations were performed to assess the dynamic behavior. The FE model has been validated against published experimental, analytical and numerical results for fully bonded as well as debonded beams. A comparative analysis is carried out based on a comprehensive parametric investigation. It is observed that the natural frequency reduction is affected more by a single debonding than by equally sized multiple debonding regions located symmetrically about the single debonding position. Thus, a large single debonded area leads to more damage, in terms of natural frequency reduction, than isolated small debonded zones of equivalent total area in the GFRP beam. Furthermore, the extent of the natural frequency shifts appears to be mode-dependent and does not show a monotonic increasing trend with mode number.

Keywords: Debonding, dynamic response, finite element modelling, FRP beams.

40 Entropic Measures of a Probability Sample Space and Exponential Type (α, β) Entropy

Authors: Rajkumar Verma, Bhu Dev Sharma

Abstract:

Entropy is a key measure in studies related to information theory and its many applications. Campbell was the first to recognize that the exponential of Shannon's entropy is just the size of the sample space when the distribution is uniform. The idea here is to study exponentials of Shannon's entropy and of those other entropy generalizations that involve a logarithmic function, for a probability distribution in general. In this paper, we introduce a measure of the sample space, called the 'entropic measure of a sample space', with respect to the underlying distribution. It is shown in both the discrete and continuous cases that this new measure depends on the parameters of the distribution on the sample space - the same sample space having different 'entropic measures' depending on the distributions defined on it. It is noted that Campbell's idea also applies to Rényi's parametric entropy of a given order. Since parameters provide suitable choices and extended applications, the paper also studies parametric entropic measures of sample spaces. Exponential entropies related to Shannon's entropy and to those generalizations that involve logarithmic functions, i.e. that are additive, are studied for wider understanding and application. We propose and study exponential entropies corresponding to the non-additive entropies of type (α, β), which include the Havrda and Charvát entropy as a special case.
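
A small worked example of Campbell's observation cited above: for a uniform distribution over N outcomes, the exponential of Shannon's entropy (in nats) recovers N, while a non-uniform distribution on the same sample space yields a smaller 'entropic measure'.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in nats; zero-probability terms contribute nothing."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

uniform = np.full(8, 1 / 8)
skewed  = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01])

print(np.exp(shannon_entropy(uniform)))  # 8.0  -> the size of the sample space
print(np.exp(shannon_entropy(skewed)))   # < 8  -> smaller 'entropic measure'
```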

Keywords: Sample space, Probability distributions, Shannon's entropy, Rényi's entropy, Non-additive entropies.

39 Effect of Urea Deep Placement Technology Adoption on the Production Frontier: Evidence from Irrigation Rice Farmers in the Northern Region of Ghana

Authors: Shaibu Baanni Azumah, William Adzawla

Abstract:

Rice is an important staple crop, with current demand higher than the domestic supply in Ghana. This has led to a high and unfavourable import bill. Therefore, recent policies and interventions in the agricultural sub-sector aim at promoting various improved agricultural technologies in order to improve domestic production and reduce the importation of rice. In this study, we examined the effect of the adoption of Urea Deep Placement (UDP) technology by rice farmers on the position of the production frontier. This involved 200 farmers selected through a multi-stage sampling technique in the Northern region of Ghana. A Cobb-Douglas stochastic frontier model was fitted. The results showed that the adoption of UDP technology shifts the output frontier outward and also moves the farmers closer to the frontier. Farmers were also operating under diminishing returns to scale, which calls for redress. Other factors that significantly influenced rice production were farm size, labour, use of certified seeds and NPK fertilizer. Although there was an opportunity for improvement, the farmers were highly efficient (92%) compared to previous studies. Farmers' efficiency was improved through increased education, household size, experience, access to credit, and lack of extension service provision by MoFA. The study recommends the revision of Ghana's agricultural policy to include the UDP technology. Agricultural Extension officers of the Ministry of Food and Agriculture (MoFA) should be trained on the UDP technology to support IFDC's drive to improve adoption by rice farmers. Rice farmers are also encouraged to expand their farm lands, improve plant population, and increase the usage of fertilizer to improve yields. Mechanisms through which credit can be made easily accessible and effectively utilised should be identified and promoted.
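
As a sketch of the Cobb-Douglas form behind the frontier model, the snippet below fits the log-linear production function to synthetic data by ordinary least squares; a genuine stochastic frontier fit would add a one-sided inefficiency term, and the input names and coefficients are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical inputs: farm size (ha), labour (person-days), fertilizer (kg).
farm_size  = rng.uniform(0.5, 5.0, n)
labour     = rng.uniform(20, 200, n)
fertilizer = rng.uniform(10, 150, n)

# Cobb-Douglas data-generating process: ln(y) = b0 + sum(b_i * ln(x_i)) + noise.
true_beta = np.array([1.0, 0.35, 0.30, 0.25])
X = np.column_stack([np.ones(n), np.log(farm_size), np.log(labour), np.log(fertilizer)])
ln_output = X @ true_beta + rng.normal(0, 0.1, n)

# OLS fit; a stochastic frontier fit would add a one-sided inefficiency term.
beta_hat, *_ = np.linalg.lstsq(X, ln_output, rcond=None)
returns_to_scale = beta_hat[1:].sum()
print(beta_hat.round(3), f"returns to scale = {returns_to_scale:.2f}")
# A sum of input elasticities below 1 corresponds to the diminishing
# returns to scale reported in the abstract.
```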

Keywords: Efficiency, rice farmers, stochastic frontier, UDP technology.

38 Performance Evaluation of an ANC-based Hybrid Algorithm for Multi-target Wideband Active Sonar Echolocation System

Authors: Jason Chien-Hsun Tseng

Abstract:

This paper evaluates the performance of an adaptive noise cancelling (ANC) based target detection algorithm on a set of real test data provided by the Defense Evaluation Research Agency (DERA UK) for a multi-target wideband active sonar echolocation system. The proposed hybrid algorithm is a combination of an adaptive ANC neuro-fuzzy scheme in the first instance, followed by an iterative optimum target motion estimation (TME) scheme. The neuro-fuzzy scheme is based on the adaptive noise cancelling concept with an ANFIS (adaptive neuro-fuzzy inference system) core processor, which provides an effectively fine-tuned signal. The resultant output is then sent as input to the optimum TME scheme, which is composed of two-gauge trimmed-mean (TM) levelization, discrete wavelet denoising (WDeN) and optimal continuous wavelet transform (CWT) for further denoising and target identification. Its aim is to recover the contact signals in an effective and efficient manner and then determine the Doppler motion (radial range, velocity and acceleration) at very low signal-to-noise ratio (SNR). Quantitative results show that the hybrid algorithm has excellent performance in predicting the targets' Doppler motion over a range of target strengths, with a maximum false detection rate of 1.5%.

Keywords: Wideband Active Sonar Echolocation, ANC Neuro-Fuzzy, Wavelet Denoise, CWT, Hybrid Algorithm.

37 Computational Fluid Dynamics Simulation Approach for Developing a Powder Dispensing Device

Authors: Rallapalli Revanth, Shivakumar Bhavi, Vijay Kumar Turaga

Abstract:

Dispensing powders manually can be difficult, as the operator must gradually pour the powder and repeatedly check the amount on the scale. Current systems are manual and non-continuous in nature, are user dependent, and make it difficult to control powder dispensation. Recurrent dosing of powdered medicines in precise amounts, quickly and accurately, has been an all-time challenge, and various powder dispensing mechanisms are being designed to overcome it. A battery-operated screw conveyor mechanism is being developed to overcome the problems described above. Such concepts are evaluated numerically at the concept development stage by employing Computational Fluid Dynamics (CFD) of gas-solid multiphase flow systems. CFD has been very helpful in the development of such devices, saving time and money by reducing the number of prototypes and tests. In this study, powder dispensation from the trocar's end is simulated using the Dense Discrete Phase Model (DDPM) together with the Kinetic Theory of Granular Flow (KTGF), with the powder treated as a secondary phase in air. With a powder volume fraction of 50%, transport is driven by the rotation of the screw conveyor. The performance is calculated for a 1 s time frame in an unsteady (transient) computation. This methodology will help designers develop design concepts that improve dispensation and the effective area within a quick turnaround time.

Keywords: Multiphase flow, screw conveyor, transient, DDPM - KTGF.

36 Exploring Dimensionality, Systematic Mutations and Number of Contacts in Simple HP ab-initio Protein Folding Using a Blackboard-based Agent Platform

Authors: Hiram I. Beltrán, Arturo Rojo-Domínguez, Máximo Eduardo Sánchez Gutiérrez, Pedro Pablo González Pérez

Abstract:

A computational platform is presented in this contribution. It has been designed as a virtual laboratory for exploring optimization algorithms in biological problems and is built on a blackboard-based agent architecture. As a test case, the version of the platform presented here is devoted to the study of protein folding, initially with a bead-like description of the chain and with the widely used model of hydrophobic and polar residues (HP model). Some details of the platform design are presented, along with its capabilities, and some explorations of the protein folding problem with different types of discrete space are reviewed. The platform's capability to incorporate specific tools for the structural analysis of the runs, in order to understand and improve the optimization process, is also shown. The results obtained demonstrate that assembling these computational tools into a single platform is worthwhile in itself, since experiments developed on it can be designed to fulfill different levels of information in a self-consistent fashion. It is now being explored how an experiment design can be used to create a computational agent to be included within the platform. Such inclusions of designed agents (or software pieces) help the platform accomplish its tasks better. Clearly, as the number of agents increases, the new version of the virtual laboratory gains robustness and functionality.
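
For flavour, here is a minimal sketch of the standard 2D HP-lattice energy function that such a platform typically asks its agents to optimize (-1 per non-bonded H-H lattice contact); the sequence, the fold and the scoring convention are illustrative assumptions rather than the platform's exact implementation.

```python
def hp_energy(sequence, coords):
    """Standard 2D HP-model energy: -1 for every pair of hydrophobic (H)
    residues that are lattice neighbours but not adjacent along the chain."""
    occupied = {tuple(c): i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != "H":
            continue
        for nbr in ((x + 1, y), (x, y + 1)):      # count each contact once
            j = occupied.get(nbr)
            if j is not None and sequence[j] == "H" and abs(i - j) > 1:
                energy -= 1
    return energy

# Hypothetical 6-residue chain folded on the square lattice (self-avoiding walk).
seq = "HPHPPH"
fold = [(0, 0), (1, 0), (1, 1), (1, 2), (0, 2), (0, 1)]
print(hp_energy(seq, fold))   # -2: two non-bonded H-H contacts
```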

Keywords: genetic algorithms, multi-agent systems, bioinformatics, optimization, protein folding, structural biology.

35 Thin Bed Reservoir Delineation Using Spectral Decomposition and Instantaneous Seismic Attributes, Pohokura Field, Taranaki Basin, New Zealand

Authors: P. Sophon, M. Kruachanta, S. Chaisri, G. Leaungvongpaisan, P. Wongpornchai

Abstract:

Thick-bed hydrocarbon reservoirs are of primary interest because of their more prolific production. When the amount of petroleum in the thick beds starts decreasing, thin-bed reservoirs become alternative targets for maintaining reserves. Conventional interpretation of seismic data cannot delineate a thin bed whose thickness is less than the vertical seismic resolution. Therefore, spectral decomposition and instantaneous seismic attributes were used to delineate the thin bed in this study. Short Window Discrete Fourier Transform (SWDFT) spectral decomposition and the instantaneous frequency attribute were used to reveal the thin bed, while Continuous Wavelet Transform (CWT) spectral decomposition and the envelope (instantaneous amplitude) attribute were used to indicate the hydrocarbon bearing zone. The study area is located in the Pohokura Field, Taranaki Basin, New Zealand. The thin-bed target is the uppermost part of the Mangahewa Formation, the most productive unit for gas-condensate production in the Pohokura Field. According to the time-frequency analysis, SWDFT spectral decomposition can reveal the thin bed using a 72 Hz SWDFT isofrequency section and map, and this is confirmed by the instantaneous frequency attribute. The envelope attribute, showing a high anomaly, indicates the hydrocarbon accumulation area at the thin-bed target. Moreover, the CWT spectral decomposition shows a low-frequency shadow zone, and the abnormal seismic attenuation at higher isofrequencies below the thin bed confirms that it can be a prospective hydrocarbon zone.
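
A minimal sketch of extracting a single isofrequency section with a short-window DFT, in the spirit of the SWDFT workflow described above; the trace is synthetic and the sample rate and window length are assumptions.

```python
import numpy as np

def swdft_isofrequency(trace, fs, window_len, target_hz):
    """Slide a short window along the trace and return the spectral
    amplitude at the frequency bin closest to target_hz for each position."""
    win = np.hanning(window_len)
    freqs = np.fft.rfftfreq(window_len, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - target_hz)))
    half = window_len // 2
    amps = np.zeros(len(trace))
    for i in range(half, len(trace) - half):
        segment = trace[i - half:i - half + window_len] * win
        amps[i] = np.abs(np.fft.rfft(segment)[k])
    return amps

# Synthetic trace: a thin-bed-like 72 Hz burst buried in noise.
fs = 500.0                                   # assumed sample rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
trace = np.random.default_rng(2).normal(0, 0.2, t.size)
trace[400:460] += np.sin(2 * np.pi * 72 * t[400:460])
iso72 = swdft_isofrequency(trace, fs, window_len=64, target_hz=72.0)
print(int(np.argmax(iso72)))                 # peaks inside the 400-460 sample burst
```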

Keywords: Hydrocarbon indication, instantaneous seismic attribute, spectral decomposition, thin bed delineation.

34 Early Registration: Criterion to Improve Communication-Inter Agents in Mobile-IP Protocol

Authors: Hossam el-ddin Mostafa, Pavel Čičak

Abstract:

In IETF RFC 2002, Mobile-IP was developed to enable laptops to maintain Internet connectivity while moving between subnets. However, packet loss arises when switching subnets because network connectivity is lost while the mobile host registers with the foreign agent, and this incurs large end-to-end packet delays. The criterion for initiating a simple and fast full-duplex connection between the home agent and the foreign agent, in order to reduce the roaming duration, is the main issue considered in this paper. State-transition Petri nets of the scenario-based CIA (communication inter-agents) procedure, modeled as an extension of the basic Mobile-IP registration process, were designed and manipulated to describe the system in discrete events. The heuristic configuration file for the registration parameters was created during a practical setup session on a Cisco 1760 router running IOS 12.3(15)T with TFTP server software. Finally, stand-alone performance simulations in Simulink (Matlab), both within each subnet and between subnets, are presented, reporting better end-to-end packet delays. The results verify the effectiveness of our Mathcad analytical manipulation and experimental implementation: they show lower end-to-end packet delay for Mobile-IP using the CIA-procedure-based early registration, and the reported packet flow between subnets indicates improved losses between subnets.

Keywords: Cisco configuration, handoff, Mobile-IP, packet delay, Petri nets, registration process, Simulink.

33 Fast Factored DCT-LMS Speech Enhancement for Performance Enhancement of Digital Hearing Aid

Authors: Sunitha. S.L., V. Udayashankara

Abstract:

Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for patients with sensorineural loss. Several investigations of speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal hearing subjects. This paper describes a Discrete Cosine Transform power-normalized Least Mean Square (DCT-LMS) algorithm to improve the SNR and the convergence rate of the LMS for sensorineural loss patients. Since it requires only real arithmetic, it achieves a faster convergence rate than the time-domain LMS, and the transformation improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter. The DCT has good orthonormal, separable and energy compaction properties. Although the DCT does not separate frequencies, it is a powerful signal decorrelator; it is a real-valued function and thus can be used effectively in real-time operation. The advantages of DCT-LMS compared to the standard LMS algorithm are shown via SNR and eigenvalue ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [AN] can be factored into a series of ±1 butterflies and rotation angles. This factorization results in one of the fastest DCT implementations. There are different ways to obtain such factorizations; this work uses the fast factored DCT algorithm developed by Chen and colleagues. The computer simulation results show the superior convergence characteristics of the proposed algorithm: the SNR is improved by at least 10 dB for input SNRs less than or equal to 0 dB, with faster convergence speed and better time and frequency characteristics.
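
A minimal sketch of a transform-domain LMS update with per-bin power normalization, the core idea behind the DCT-LMS described above; the adaptive-noise-cancelling arrangement with a separate noise reference, the filter length and the step size are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.fft import dct

def dct_lms(primary, reference, n_taps=16, mu=0.05, eps=1e-6):
    """Transform-domain LMS: DCT-decorrelate the tap-input vector, normalize
    each bin by its running power, then apply the usual LMS weight update.
    Returns the enhanced (error) signal of the noise-cancelling structure."""
    w = np.zeros(n_taps)
    power = np.full(n_taps, eps)
    buf = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]                  # most recent reference sample
        u = dct(buf, type=2, norm="ortho")     # decorrelating transform
        power = 0.95 * power + 0.05 * u ** 2   # running per-bin power
        y = w @ u                              # noise estimate
        e = primary[n] - y                     # enhanced output sample
        w += mu * e * u / power                # power-normalized update
        out[n] = e
    return out

# Hypothetical use: slow "speech-like" tone plus correlated noise, with a
# separate noise reference channel available to the adaptive filter.
rng = np.random.default_rng(3)
speech = np.sin(2 * np.pi * 0.01 * np.arange(4000))
noise_ref = rng.normal(0, 1, 4000)
primary = speech + 0.5 * np.convolve(noise_ref, [0.6, 0.3, 0.1])[:4000]
enhanced = dct_lms(primary, noise_ref)
print(float(np.mean((enhanced[2000:] - speech[2000:]) ** 2)))  # residual after adaptation
```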

Keywords: Hearing Impairment, DCT Adaptive filter, Sensorineural loss patients, Convergence rate.

32 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work we use the discrete Proper Orthogonal Decomposition (POD) transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of this work will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to use computational mechanics to simulate the dynamics numerically. With numerical computational techniques, it is not necessary to over-simplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time-POD transform is a powerful tool for processing such databases and will be used to study the coupled dynamics of basic thin-walled structures. These structures are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
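
A minimal sketch of the snapshot-POD computation implied above: simulation snapshots are assembled as columns, the mean is removed, and an SVD ranks the modes by energy; the space-time field used here is synthetic.

```python
import numpy as np

def pod_modes(snapshots):
    """Proper Orthogonal Decomposition of a (n_dof x n_snapshots) database:
    returns spatial modes, their relative energies and temporal coefficients."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)           # relative energy per mode
    return U, energy, np.diag(s) @ Vt

# Synthetic database: two coupled space-time patterns plus noise.
rng = np.random.default_rng(4)
x = np.linspace(0, 1, 200)[:, None]
t = np.linspace(0, 10, 80)[None, :]
field = np.sin(np.pi * x) * np.cos(2 * t) + 0.3 * np.sin(3 * np.pi * x) * np.sin(5 * t)
modes, energy, coeffs = pod_modes(field + 0.01 * rng.normal(size=field.shape))
print(energy[:4].round(3))   # the first two modes carry almost all the energy
```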

Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin walled beams.

31 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation

Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar

Abstract:

The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has given more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. Erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multiphase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate for velocities varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the most dominant parameter compared to solid concentration, density and particle size. At low velocity, settling takes place in the pipe bend due to low inertia and the gravitational effect on the solid particulate, which leads to high erosion at the bottom side of the pipeline.

Keywords: Computational fluid dynamics, erosion, slurry transportation, k-ε Model.

30 The Integration Process of Non-EU Citizens in Luxembourg: From an Empirical Approach Toward a Theoretical Model

Authors: Angela Odero, Chrysoula Karathanasi, Michèle Baumann

Abstract:

Integration of foreign communities has been a forefront issue in Luxembourg for some time now, and the country's continued progress depends largely on the successful integration of immigrants. The aim of our study was to analyze the factors that intervene in the course of integration of non-EU citizens, through the discourse of non-EU citizens residing in Luxembourg who have signed the Welcome and Integration Contract (CAI). The two-year contract offers integration services to assist foreigners in getting settled in the country. Semi-structured focus group discussions with 50 volunteers were held in English, French, Spanish, Serbo-Croatian or Chinese, and participants were asked to talk about their integration experiences. The discussions were recorded and transcribed, and the transcriptions were analyzed with NVivo 10, a qualitative analysis software. A systematic and reiterative analysis of decomposing and reconstituting was realized through (1) the identification of predetermined categories (difficulties, challenges and integration needs), (2) initial coding, the grouping together of similar ideas, and (3) axial coding, the regrouping of items from the initial coding in new ways in order to create sub-categories and identify other core dimensions. Our results show that the intervening factors include language acquisition, professional career and socio-cultural activities or events. Each of these factors comprises different components whose weight shifts from person to person and from situation to situation. Connecting these three emergent factors are two elements essential to the success of the immigrant's integration: the role of time and deliberate effort from the immigrants, the community, and the formal institutions charged with helping immigrants integrate. We propose a theoretical model in which the factors described may be classified in terms of how they predispose, facilitate and/or reinforce the process towards a successful integration. Measures currently in place propose one-size-fits-all programs, yet integrative measures that target the family unit, and measures customized to target groups based on their needs, would work best.

Keywords: Integration, Integration Services, Non-EU citizens, Qualitative Analysis, Third Country Nationals.

29 Adaptive Kalman Filter for Noise Estimation and Identification with Bayesian Approach

Authors: Farhad Asadi, S. Hossein Sadati

Abstract:

The Bayesian approach can be used for parameter identification and extraction in state space models, and its ability to analyze sequences of data in dynamical systems has been demonstrated in the literature. In this paper, an adaptive Kalman filter with a Bayesian approach for identifying the variance of the measurement noise is developed and applied to the estimation of the dynamical state and the measurement data in a discrete linear dynamical system. At each time step, the algorithm estimates the measurement noise variance and the state of the system with a Kalman filter. An approximation is then designed at each step separately, and consequently the sufficient statistics of the state and the noise variances are computed with a fixed-point iteration of the adaptive Kalman filter. Several simulations are carried out to show the influence of the measurement noise variance on the algorithm. First, the effect of the noise variance and its distribution on detection and identification performance is simulated in a Kalman filter without the Bayesian formulation. Then, the simulation is applied to the adaptive Kalman filter with the ability to track the noise variance in the measurement data. In these simulations, the influence of the noise distribution of the measurement data at each step is estimated, and the true variance of the data obtained by the algorithm is compared across different scenarios. Afterwards, a typical nonlinear state space model with induced measurement noise is simulated with this approach. Finally, the performance and the important limitations of the algorithm in these simulations are explained.
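
A minimal sketch of a scalar Kalman filter that adapts its measurement-noise variance from the innovation sequence, in the spirit of the algorithm described; the exponential-forgetting update used here is a simple stand-in for the paper's Bayesian fixed-point iteration, and the signal model is synthetic.

```python
import numpy as np

def adaptive_kalman(z, q=1e-4, r0=1.0, alpha=0.05):
    """Scalar random-walk Kalman filter that also adapts its measurement-noise
    variance R from the innovation sequence (exponential forgetting)."""
    x, p, r = 0.0, 1.0, r0
    states, r_track = [], []
    for zk in z:
        p_prior = p + q                        # predict
        innov = zk - x                         # innovation
        s = p_prior + r                        # innovation variance
        k = p_prior / s                        # Kalman gain
        x = x + k * innov                      # state update
        p = (1 - k) * p_prior
        # Innovation-based adaptation: E[innov^2] = p_prior + R.
        r = (1 - alpha) * r + alpha * max(innov ** 2 - p_prior, 1e-8)
        states.append(x)
        r_track.append(r)
    return np.array(states), np.array(r_track)

# Synthetic data: constant signal, true measurement-noise variance = 0.25.
rng = np.random.default_rng(5)
z = 2.0 + rng.normal(0, 0.5, 2000)
xs, rs = adaptive_kalman(z)
print(round(float(xs[-1]), 3), round(float(rs[-1]), 3))  # state -> ~2.0, R -> ~0.25
```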

Keywords: Adaptive filtering, Bayesian approach, Kalman filtering, variance tracking.

28 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet

Authors: Amir Moslemi, Amir Movafeghi, Shahab Moradi

Abstract:

One of the most important challenging factors in medical images is noise. Image denoising refers to the improvement of a digital medical image that has been corrupted by additive white Gaussian noise (AWGN). Digital medical images or video can be affected by different types of noise: impulse noise, Poisson noise and AWGN. Computed tomography (CT) images are subject to low quality due to noise, and the quality of CT images depends directly on the dose absorbed by the patient, in the sense that increasing the absorbed radiation, and consequently the absorbed dose to the patient (ADP), enhances CT image quality. Accordingly, noise reduction techniques that enhance image quality without exposing the patient to excess radiation are one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two different directional two-dimensional (2D) transforms, namely the curvelet and the contourlet, and discrete wavelet transform (DWT) thresholding methods (BayesShrink and AdaptShrink), compared to each other. In addition, we propose a new threshold in the wavelet domain that not only reduces noise but also retains edges, so that the modified coefficients are retained to a significant degree and good visual quality results. The evaluation was carried out using two criteria: the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
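
A minimal sketch of BayesShrink-style soft thresholding of the wavelet detail sub-bands, one of the baseline methods compared above; the proposed edge-retaining threshold is not reproduced here, the image is a synthetic phantom, and PyWavelets is assumed as the wavelet library.

```python
import numpy as np
import pywt

def bayes_shrink_denoise(img, wavelet="db2", level=2):
    """Soft-threshold each detail sub-band with the BayesShrink threshold
    sigma_noise^2 / sigma_signal, estimating the noise from the finest HH band."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # robust noise estimate
    new_coeffs = [coeffs[0]]
    for details in coeffs[1:]:
        shrunk = []
        for band in details:
            sigma_y2 = max(np.mean(band ** 2) - sigma_n ** 2, 1e-12)
            thr = sigma_n ** 2 / np.sqrt(sigma_y2)
            shrunk.append(pywt.threshold(band, thr, mode="soft"))
        new_coeffs.append(tuple(shrunk))
    return pywt.waverec2(new_coeffs, wavelet)

# Synthetic "CT slice": smooth phantom plus additive white Gaussian noise.
rng = np.random.default_rng(6)
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
phantom = (x ** 2 + y ** 2 < 0.5).astype(float)
noisy = phantom + rng.normal(0, 0.1, phantom.shape)
denoised = bayes_shrink_denoise(noisy)[:128, :128]
print(round(float(np.mean((noisy - phantom) ** 2)), 4),
      round(float(np.mean((denoised - phantom) ** 2)), 4))  # MSE before vs after
```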

Keywords: Computed Tomography (CT), noise reduction, curvelet, contourlet, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), Absorbed Dose to Patient (ADP).

27 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG

Authors: Mamta Garg

Abstract:

While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the bandwidth commonly available to transmit them over the Internet. Image compression addresses the problem of reducing the amount of data required to represent a digital image, and the performance of any image compression method can be evaluated by measuring the root-mean-square error and the peak signal-to-noise ratio. The method analyzed in this paper is based on lossy JPEG image compression, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In JPEG, both chroma components are normally downsampled simultaneously, but in this paper we compare the results when compression is performed by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when the chrominance blue is downsampled rather than the chrominance red, whereas the peak signal-to-noise ratio is higher when the chrominance red is downsampled rather than the chrominance blue. In particular, we use hats.jpg as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visual difference with either method.
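
A minimal sketch of the mechanics of the comparison described: downsample only Cb or only Cr, restore it, and compare PSNR. The synthetic random image used here does not reproduce the paper's conclusion, which concerns natural images, and for simplicity the PSNR is computed in YCbCr space rather than after full JPEG reconstruction.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 conversion as used by JPEG."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def downsample_then_restore(plane):
    """2x2 average-downsample a chroma plane, then nearest-neighbour upsample."""
    h, w = plane.shape
    small = plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.kron(small, np.ones((2, 2)))

def psnr(ref, test, peak=255.0):
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(7)
rgb = rng.integers(0, 256, (128, 128, 3)).astype(float)
ycc = rgb_to_ycbcr(rgb)

for name, idx in (("Cb downsampled", 1), ("Cr downsampled", 2)):
    test = ycc.copy()
    test[..., idx] = downsample_then_restore(ycc[..., idx])
    print(name, round(psnr(ycc, test), 2), "dB")
```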

Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio & Compression Ratio.

26 FEA for Transient Responses of an S-Shaped Force Transducer with a Viscoelastic Absorber Using a Nonlinear Complex Spring

Authors: T. Yamaguchi, Y. Fujii, A. Takita, T. Kanai

Abstract:

To compute the dynamic characteristics of nonlinear viscoelastic springs attached to elastic structures with a huge number of degrees of freedom, Yamaguchi proposed a fast numerical method based on the finite element method [1]-[2]. In this method, the restoring forces of the springs are expressed as power series of their elongation, into which nonlinear hysteresis damping is introduced through nonlinear complex spring constants. A finite element for the nonlinear spring with complex coefficients is formulated and connected to the elastic structures, which are modeled with linear solid finite elements. Further, to save computational time, the discrete equations in physical coordinates are transformed into nonlinear ordinary coupled equations using normal coordinates corresponding to the linear natural modes. In this report, the proposed method is applied to simulating the impact response of a viscoelastic shock absorber attached to an elastic structure (an S-shaped structure) colliding with a concentrated mass. The concentrated mass has an initial velocity and collides with the shock absorber. The accelerations of the elastic structure and the concentrated mass are measured using the Levitation Mass Method proposed by Fujii [3]. The accelerations calculated with the proposed FEM correspond to the experimental ones. Moreover, using this method, we also investigate the dynamic errors of the S-shaped force transducer due to the elastic modes of the S-shaped structure.

Keywords: Transient response, Finite Element analysis, Numerical analysis, Viscoelastic shock absorber, Force transducer.

25 FPGA Hardware Implementation and Evaluation of a Micro-Network Architecture for Multi-Core Systems

Authors: Yahia Salah, Med Lassaad Kaddachi, Rached Tourki

Abstract:

This paper presents the design, implementation and evaluation of a micro-network, or Network-on-Chip (NoC), based on a generic pipelined router architecture. The router is designed to efficiently support the traffic generated by multimedia applications on embedded multi-core systems. It employs a simple routing mechanism and implements a round-robin scheduling strategy to resolve output port contention and minimize latency. Virtual channel flow control is applied to avoid the head-of-line blocking problem and enhance NoC performance. The hardware design of the router architecture has been implemented at the register transfer level; its functionality is evaluated for the two-dimensional Mesh/Torus topology, and performance results are derived from the ModelSim simulator and the Xilinx ISE 9.2i synthesis tool. An example multi-core image processing system utilizing the NoC structure has been implemented and validated to demonstrate the capability of the proposed micro-network architecture. To reduce the complexity of the image compression and decompression architecture, the system uses an image processing algorithm based on the classical discrete cosine transform with an efficient zonal processing approach. The experimental results confirm that both the proposed image compression scheme and the NoC architecture can achieve reasonable image quality with lower processing time.

Keywords: Generic Pipeline Network-on-Chip Router Architecture, JPEG Image Compression, FPGA Hardware Implementation, Performance Evaluation.

24 Opponent Color and Curvelet Transform Based Image Retrieval System Using Genetic Algorithm

Authors: Yesubai Rubavathi Charles, Ravi Ramraj

Abstract:

In order to retrieve images efficiently from a large database, a method integrating color and texture features using genetic programming has been proposed. An opponent color histogram, which provides invariance to shadow, shading and light intensity, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures more orientation information at different scales, is incorporated to represent curve-like edges. A current concern in image retrieval is reducing the semantic gap between the user's preference and low-level features; to address this, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve the user's preferred images. Extensive comparative experiments have been conducted to evaluate the proposed framework for content-based image retrieval on two databases, COIL-100 and Corel-1000. The experimental results clearly show that the proposed system surpasses existing systems in terms of precision and recall, achieving an average precision of 88.2% on COIL-100 and 76.3% on Corel, and an average recall of 69.9% on COIL and 76.3% on Corel. Thus, the experimental results confirm that the proposed content-based image retrieval architecture is a better solution for image retrieval.
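
A minimal sketch of an opponent color histogram, illustrating the illumination-offset invariance of the chromatic O1/O2 components mentioned above; the bin count and the test image are arbitrary assumptions, and only the chromatic channels are histogrammed.

```python
import numpy as np

def opponent_color_histogram(rgb, bins=8):
    """2D histogram of the chromatic opponent components
    O1 = (R - G)/sqrt(2) and O2 = (R + G - 2B)/sqrt(6); the intensity
    component O3 is dropped, so a global illumination offset cancels out."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2)
    o2 = (r + g - 2 * b) / np.sqrt(6)
    hist, _, _ = np.histogram2d(o1.ravel(), o2.ravel(), bins=bins)
    return hist / hist.sum()

rng = np.random.default_rng(8)
img = rng.integers(0, 256, (64, 64, 3)).astype(float)
shifted = img + 40.0                       # global illumination offset
h1 = opponent_color_histogram(img)
h2 = opponent_color_histogram(shifted)
print(float(np.abs(h1 - h2).sum()))        # 0.0 -> histogram unchanged by the offset
```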

Keywords: Content based image retrieval, Curvelet transform, Genetic algorithm, Opponent color histogram, Relevance feedback.

23 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e. to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the two-parameter (bi-)Weibull distribution function, whose trend is proportional to the bi-Weibull probability density function. In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions, which has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we begin by obtaining the probabilistic characteristics of the model, namely the explicit expression of the process, its trend functions, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. We then develop the statistical inference of the model using the maximum likelihood methodology. Finally, we analyse, with simulated data, the computational problems associated with the parameters, an issue of great importance for the application to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler: given the data available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trends functions, bi-parameters Weibull density function.

22 Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging

Authors: Sundararajan Sangeetha, Joseph Jesu Christopher, Swaminathan Ramakrishnan

Abstract:

In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N=50) is delineated by a semi-automatic image processing procedure. An auto-threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images, and qualitative parameters such as the apparent mineralization and the total area associated with the PC region are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is utilized to obtain appropriate features that quantify texture changes in the medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely the mean, median, mode, standard deviation, mean absolute deviation and median absolute deviation, are derived at level 4 decomposition for both the approximation and the horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with the normal and abnormal groups are estimated for both the approximation and horizontal coefficients. It is seen that in almost all cases the abnormal images show a higher degree of correlation than the normal ones, and the parameters derived from the approximation coefficients show more correlation than those derived from the horizontal coefficients. The mean and median computed at the output of the level 4 Haar wavelet channel were found to be useful predictors for delineating the normal and abnormal groups.

Keywords: Image processing, planar radiographs, trabecular bone and wavelet analysis.

21 Infrared Lightbox and iPhone App for Improving Detection Limit of Phosphate Detecting Dip Strips

Authors: H. Heidari-Bafroui, B. Ribeiro, A. Charbaji, C. Anagnostopoulos, M. Faghri

Abstract:

In this paper, we report the development of a portable and inexpensive infrared lightbox for improving the detection limits of paper-based phosphate devices. Commercial paper-based devices utilize the molybdenum blue protocol to detect phosphate in the environment. Although these devices are easy to use and have a long shelf life, their main deficiency is their low sensitivity based on the qualitative results obtained via a color chart. To improve the results, we constructed a compact infrared lightbox that communicates wirelessly with a smartphone. The system measures the absorbance of radiation for the molybdenum blue reaction in the infrared region of the spectrum. It consists of a lightbox illuminated by four infrared light-emitting diodes, an infrared digital camera, a Raspberry Pi microcontroller, a mini-router, and an iPhone to control the microcontroller. An iPhone application was also developed to analyze images captured by the infrared camera in order to quantify phosphate concentrations. Additionally, the app connects to an online data center to present a highly scalable worldwide system for tracking and analyzing field measurements. In this study, the detection limits for two popular commercial devices were improved by a factor of 4 for the Quantofix devices (from 1.3 ppm using visible light to 300 ppb using infrared illumination) and a factor of 6 for the Indigo units (from 9.2 ppm to 1.4 ppm) with repeatability of less than or equal to 1.2% relative standard deviation (RSD). The system also provides more granular concentration information compared to the discrete color chart used by commercial devices and it can be easily adapted for use in other applications.

Keywords: Infrared lightbox, paper-based device, phosphate detection, smartphone colorimetric analyzer.

20 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media

Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding

Abstract:

A mechanical wave or vibration propagating through granular media exhibits a specific signature in time. A coherent pulse or wavefront arrives first with multiply scattered waves (coda) arriving later. The coherent pulse is micro-structure independent i.e. it depends only on the bulk properties of the disordered granular sample, the sound wave velocity of the granular sample and hence bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. The pulse attenuation and broadening effects are affected by disorder (polydispersity; contrast in size of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The sizes of particles have been selected randomly from a Gaussian distribution, where the standard deviation of this distribution is the relevant parameter that quantifies the effect of disorder on the coherent wavefront. Since, the coherent wavefront is system configuration independent, ensemble averaging has been used for improving the signal quality of the coherent pulse and removing the multiply scattered waves. The results concerning the width of the coherent wavefront have been formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
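
A minimal sketch of a monodisperse Hertzian granular chain excited by an initial impulse and integrated with velocity Verlet; the material constants, time step and pulse amplitude are illustrative, and polydispersity would enter through per-contact stiffnesses drawn from the size distribution.

```python
import numpy as np

def hertz_chain(n=100, steps=10000, dt=1e-5, k=1.0e6, m=1.0e-3, v0=0.1):
    """1D chain of identical grains (unit diameter) with Hertzian contacts
    F = k * overlap**1.5; the first grain receives the imparted pulse v0.
    Integration is velocity Verlet; all constants are illustrative."""
    x = np.arange(n, dtype=float)          # grains initially just touching
    v = np.zeros(n)
    v[0] = v0
    a = np.zeros(n)
    snapshots = []
    for step in range(steps):
        x += v * dt + 0.5 * a * dt ** 2
        overlap = np.maximum(1.0 - np.diff(x), 0.0)   # contact compressions
        f = k * overlap ** 1.5
        force = np.zeros(n)
        force[:-1] -= f                               # reaction on left grain
        force[1:] += f                                # push on right grain
        a_new = force / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
        if step % 1000 == 0:
            snapshots.append(v.copy())
    return np.array(snapshots)

pulse = hertz_chain()
# Index of the fastest grain in successive snapshots: it increases as the
# pulse front travels along the chain (saturating if it reaches the free end).
print([int(np.argmax(np.abs(row))) for row in pulse])
```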

Keywords: Discrete elements, Hertzian Contact, polydispersity, weakly nonlinear, wave propagation.

19 Stereo Motion Tracking

Authors: Yudhajit Datta, Jonathan Bandi, Ankit Sethia, Hamsi Iyer

Abstract:

Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution that combines the two techniques: two-dimensional motion tracking using a Kalman filter, and depth detection of the object using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera; for stereo motion tracking, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. At discrete intervals, the estimator tracks the object's motion in the plane parallel to the camera plane and updates the perpendicular distance of the object from that plane as its depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios, from high-security scenes such as bank vaults, prisons or other detention facilities, to low-cost applications in supermarkets and car parking lots.
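
A minimal sketch of the depth computation behind the estimator described: with calibrated, rectified cameras, depth follows from the disparity between the two simultaneous measurements; the focal length, baseline and pixel coordinates are hypothetical.

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """For rectified stereo cameras, Z = f * B / disparity (disparity in pixels)."""
    disparity = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    return focal_px * baseline_m / disparity

# Hypothetical calibration: 800 px focal length, 12 cm baseline.
x_l = np.array([412.0, 415.5, 419.2])   # object x-coordinate in left frames
x_r = np.array([396.0, 399.8, 403.9])   # same object in right frames
print(depth_from_disparity(x_l, x_r, focal_px=800.0, baseline_m=0.12))
# The in-plane motion would be tracked by the Kalman filter, while this depth
# value updates the perpendicular-distance component of the estimator state.
```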

Keywords: Kalman Filter, Stereo Vision, Motion Tracking, Matlab, Object Tracking, Camera Calibration, Computer Vision System Toolbox.

18 Mechanical Behavior of Recycled Pet Fiber Reinforced Concrete Matrix

Authors: Comingstarful Marthong, Deba Kumar Sarma

Abstract:

Concrete is strong in compression but weak in tension. The tensile strength and ductility of concrete can be improved by adding short dispersed fibers. Polyethylene terephthalate (PET) fiber, obtained by hand cutting or mechanical slitting of plastic sheets, is generally used as discrete reinforcement in substitution of steel fiber. PET fiber obtained from the former process takes the form of straight slit-sheet strips, which provide weaker mechanical bonding in the concrete matrix. To overcome this limitation of straight slit-sheet fiber, the present study considered two additional fiber geometries: (a) flattened-end slit sheet and (b) deformed slit sheet. The plain concrete mix was designed for a compressive strength of 25 MPa at 28 days of curing with a water-cement ratio of 0.5. Cylindrical and beam specimens with a 0.5% fiber volume fraction, and without fibers, were cast to investigate the influence of geometry on the mechanical properties of the concrete. The performance parameters studied include flexural strength, splitting tensile strength, compressive strength and ultrasonic pulse velocity (UPV). Test results show that fiber geometry has a marginal effect on the workability of concrete but plays a significant role in achieving good compressive and tensile strength. Further, significant improvements in flexural strength and energy dissipation capacity were observed for the other fiber geometries compared to the straight slit-sheet pattern. The inclusion of PET fiber also improved the ability to absorb energy in the post-cracking state of the specimens, without producing significantly porous structures.

Keywords: Concrete matrix, polyethylene terephthalate (PET) fibers, mechanical bonding, mechanical properties, UPV.

17 Combination of Different Classifiers for Cardiac Arrhythmia Recognition

Authors: M. R. Homaeinezhad, E. Tavakkoli, M. Habibi, S. A. Atyabi, A. Ghaffari

Abstract:

This paper describes a new supervised fusion (hybrid) electrocardiogram (ECG) classification solution consisting of a new QRS complex geometrical feature extraction scheme and a new version of the learning vector quantization (LVQ) classification algorithm aimed at overcoming the stability-plasticity dilemma. Toward this objective, after detection and delineation of the major events of the ECG signal via an appropriate algorithm, each QRS region and its corresponding discrete wavelet transform (DWT) are treated as virtual images and each is divided into eight polar sectors. The curve length of each excerpted segment is then calculated and used as an element of the feature space. To increase the robustness of the proposed classification algorithm against noise, artifacts and arrhythmic outliers, a fusion structure consisting of five different classifiers, namely a Support Vector Machine (SVM), a Modified Learning Vector Quantization (MLVQ) classifier and three Multi-Layer Perceptron-Back Propagation (MLP-BP) neural networks with different topologies, was designed and implemented. The proposed algorithm was applied to all 48 MIT-BIH Arrhythmia Database records (within-record analysis) and the discrimination power of the classifier in isolating the different beat types of each record was assessed, yielding an average accuracy of Acc = 98.51%. The method was also applied to 6 arrhythmias (Normal, LBBB, RBBB, PVC, APB, PB) belonging to 20 different records of the aforementioned database (between-record analysis), achieving an average of Acc = 95.6%. To evaluate the performance quality of the new hybrid learning machine, the obtained results were compared with similar peer-reviewed studies in this area.
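
A minimal sketch of the curve-length feature idea described above: the samples of a QRS excerpt are assigned to eight polar sectors about the centroid, and the curve length falling in each sector becomes one feature; the synthetic beat and the sectoring details are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def curve_length_features(segment, n_sectors=8):
    """Treat the (time, amplitude) samples of a QRS excerpt as a planar curve,
    assign each sample to a polar sector about the centroid, and return the
    summed curve length falling in each sector."""
    t = np.linspace(-1.0, 1.0, len(segment))
    y = segment - segment.mean()
    angles = np.arctan2(y, t)                          # polar angle about the centroid
    sector = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    step_len = np.hypot(np.diff(t), np.diff(y))        # length of each curve segment
    feats = np.zeros(n_sectors)
    for s, l in zip(sector[:-1], step_len):
        feats[s] += l
    return feats

# Synthetic QRS-like waveform (a narrow biphasic spike).
t = np.linspace(-1, 1, 200)
qrs = 1.2 * np.exp(-(t / 0.08) ** 2) - 0.4 * np.exp(-((t - 0.2) / 0.1) ** 2)
print(curve_length_features(qrs).round(3))
```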

Keywords: Feature Extraction, Curve Length Method, Support Vector Machine, Learning Vector Quantization, Multi Layer Perceptron, Fusion (Hybrid) Classification, Arrhythmia Classification, Supervised Learning Machine.

16 Effect of Modification and Expansion on Emergence of Cooperation in Demographic Multi-Level Donor-Recipient Game

Authors: Tsuneyuki Namekata, Yoko Namekata

Abstract:

It is known that the mean investment evolves from a very low initial value to some high level in the Continuous Prisoner's Dilemma. We examine how the cooperation level evolves from a low initial level to a high level in our Demographic Multi-level Donor-Recipient situation. In the Multi-level Donor-Recipient game, one player is randomly selected as a Donor and the other as a Recipient. The Donor has multiple cooperative moves and one defective move. A cooperative move means that the Donor pays some cost for the Recipient to receive some benefit; the more cooperative the move, the higher the cost the Donor pays and the higher the benefit the Recipient receives. The defective move has no effect on either player. Two consecutive Multi-level Donor-Recipient games, one as a Donor and the other as a Recipient, can be viewed as a discrete version of the Continuous Prisoner's Dilemma. In the Demographic Multi-level Donor-Recipient game, players are initially distributed spatially. In each period, players play multiple Multi-level Donor-Recipient games against other players. A player leaves offspring if possible, and dies when his accumulated payoff becomes negative or his lifespan ends. Cooperative moves are necessary for the survival of the whole population, but initially only a low-level cooperative move is available in players' strategies besides the defective move. A player may modify and expand his strategy according to his recent experiences or practices, and we distinguish several types of player with respect to modification and expansion. We show, by agent-based simulation, that introducing the modification alone increases the emergence rate of cooperation, that introducing both the modification and the expansion increases it further, and that a high level of cooperation does emerge in our Demographic Multi-level Donor-Recipient Game.

Keywords: Agent-based simulation, donor-recipient game, emergence of cooperation, spatial structure, TFT, TF2T.

15 Combined Source and Channel Coding for Image Transmission Using Enhanced Turbo Codes in AWGN and Rayleigh Channel

Authors: N. S. Pradeep, M. Balasingh Moses, V. Aarthi

Abstract:

Any signal transmitted over a channel is corrupted by noise and interference, and a host of channel coding techniques has been proposed to alleviate their effect. Among these, Turbo codes are recommended because of their increased capacity at higher transmission rates and their superior performance over convolutional codes; multimedia content, which involves a large amount of data, is best protected by Turbo codes. The Turbo decoder employs the Maximum A-posteriori Probability (MAP) and Soft Output Viterbi Algorithm (SOVA) decoding algorithms. Conventional Turbo coded systems employ Equal Error Protection (EEP), in which all the data in an information message are protected uniformly, whereas some applications involve Unequal Error Protection (UEP), in which important information bits receive a higher level of protection than the other bits. In this work, the traditional Log-MAP decoding algorithm is enhanced by using optimized scaling factors for both decoders. The error-correcting performance in the presence of UEP over an Additive White Gaussian Noise (AWGN) channel and with Rayleigh fading is analyzed for the transmission of an image with the Discrete Cosine Transform (DCT) as the source coding technique. This paper compares the performance of the Log-MAP, Modified Log-MAP (MLogMAP) and Enhanced Log-MAP (ELogMAP) algorithms for image transmission. The MLogMAP algorithm is found to be best for lower Eb/N0 values, but for higher Eb/N0 the ELogMAP algorithm with optimized scaling factors performs better. The performance comparison between the AWGN and fading channels indicates the robustness of the proposed algorithm. Among the three message classes, class 3 is the most protected. From the performance analysis, it is observed that the ELogMAP algorithm with UEP is best for the transmission of an image, compared to the Log-MAP and MLogMAP decoding algorithms.

Keywords: AWGN, BER, DCT, Fading, MAP, UEP.
