Search results for: Inverse Probability Weighting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 919

649 Performance Analysis of the Time-Based and Periodogram-Based Energy Detector for Spectrum Sensing

Authors: Sadaf Nawaz, Adnan Ahmed Khan, Asad Mahmood, Chaudhary Farrukh Javed

Abstract:

Classically, an energy detector is implemented in the time domain (TD). However, a frequency domain (FD) based energy detector has demonstrated improved performance. This paper presents a comparison between the two approaches in order to analyze their pros and cons. A detailed performance analysis of the classical TD energy detector and the periodogram-based detector is performed. Exact and approximate mathematical expressions for the probability of false alarm (Pf) and probability of detection (Pd) are derived for both approaches. The derived expressions naturally lead to an analytical as well as intuitive explanation of the improved Pf and Pd in different scenarios. Our analysis suggests that the improvement depends on the buffer sizes: Pf is improved in the FD detector, whereas Pd is enhanced in the TD energy detector. Finally, Monte Carlo simulation results confirm the analysis obtained from the derived expressions.
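
The abstract does not reproduce the derived expressions; as a minimal illustration of the Monte Carlo comparison it describes, the sketch below estimates Pf and Pd for a time-domain energy detector (the periodogram variant would instead sum |FFT(x)|^2/N over frequency bins). The buffer size, SNR, and target Pf are assumed illustrative values, not the paper's settings.

```python
# Hedged sketch: Monte Carlo estimate of Pf and Pd for a time-domain energy detector.
import numpy as np

rng = np.random.default_rng(0)
N = 128            # buffer size (samples per sensing interval) -- illustrative
snr_db = -5.0      # received SNR of the primary signal -- illustrative
trials = 20000

noise_var = 1.0
sig_var = noise_var * 10 ** (snr_db / 10)

# Noise-only trials: pick the threshold that gives a target Pf of 0.05.
noise_energy = np.sum(rng.normal(0, np.sqrt(noise_var), (trials, N)) ** 2, axis=1)
threshold = np.quantile(noise_energy, 0.95)

# Signal-plus-noise trials: estimate Pd at that threshold.
x = rng.normal(0, np.sqrt(sig_var), (trials, N)) + rng.normal(0, np.sqrt(noise_var), (trials, N))
pd = np.mean(np.sum(x ** 2, axis=1) > threshold)

print(f"threshold = {threshold:.1f}, empirical Pf = 0.05, Pd = {pd:.3f}")
```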

Keywords: Cognitive radio, energy detector, periodogram, spectrum sensing.

648 Noise-Improved Signal Detection in Nonlinear Threshold Systems

Authors: Youguo Wang, Lenan Wu

Abstract:

We discuss signal detection through nonlinear threshold systems. The detection performance is assessed by the probability of error, Per. We establish that: (1) when the signal is completely suprathreshold, noise always degrades signal detection, both in the single threshold system and in the parallel array of threshold devices; (2) when the signal is slightly subthreshold, noise degrades signal detection in the single threshold system, but in the parallel array noise can improve signal detection, i.e., stochastic resonance (SR) exists in the array; (3) when the signal is predominantly subthreshold, noise can always improve signal detection, and SR exists not only in the single threshold system but also in the parallel array; (4) the array can further improve signal detection by increasing the number of threshold devices. These results further extend the applicability of SR in signal detection.
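
As a hedged sketch of the array effect described above (not the paper's exact setup), the following Monte Carlo estimates Per for a subthreshold binary signal applied to a parallel array of threshold devices with independent Gaussian noise; the non-monotonic behavior of Per versus the noise level is the stochastic resonance effect. Amplitude, threshold, and array size are assumed values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
A, theta, M, trials = 0.5, 1.0, 15, 20000   # amplitude A < threshold theta: subthreshold

for sigma in (0.2, 0.5, 1.0, 2.0):
    s = np.where(rng.random(trials) < 0.5, A, -A)           # equiprobable +/-A signal
    counts = np.sum(s[:, None] + rng.normal(0.0, sigma, (trials, M)) > theta, axis=1)
    # The count of firing devices is binomial under each hypothesis, with firing
    # probabilities p_plus / p_minus; the likelihood-ratio decision uses these.
    p_plus, p_minus = norm.sf((theta - A) / sigma), norm.sf((theta + A) / sigma)
    k = np.arange(M + 1)
    llr = k * np.log(p_plus / p_minus) + (M - k) * np.log((1 - p_plus) / (1 - p_minus))
    per = np.mean((llr[counts] > 0) != (s > 0))             # probability of error
    print(f"sigma = {sigma:.1f}  Per = {per:.3f}")
```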

Keywords: Probability of error, signal detection, stochastic resonance, threshold system.

647 Evaluating Mechanical Properties of CoNiCrAlY Coating from Miniature Specimen Testing at Elevated Temperature

Authors: W. Wen, G. Jackson, S. Maskill, D. G. McCartney, W. Sun

Abstract:

CoNiCrAlY alloys have been widely used as bond coats for thermal barrier coating (TBC) systems because of their low cost, improved control of composition, and the feasibility of tailoring the coating microstructures. Coatings are in general very thin structures, and it is therefore impossible to characterize the mechanical responses of the materials via conventional mechanical testing methods. For this reason, miniature specimen testing methods, such as the small punch test (SPT) technique, have been developed. This paper presents some recent research on evaluating the mechanical properties of CoNiCrAlY coatings at room and high temperatures, through small punch testing and a newly developed miniature specimen tensile testing method applicable over a range of temperatures, to investigate the elastic-plastic and creep behavior as well as the ductile-brittle transition temperature (DBTT) behavior. An inverse procedure was developed to derive the mechanical properties of the coating materials from such tests. A two-layer specimen test method is also described. The key findings include: 1) the temperature-dependent coating properties can be accurately determined by miniature tensile testing over a wide range of temperatures; 2) consistent DBTTs (~650 °C) can be identified by both the SPT and the miniature tensile tests; and 3) finite element modelling of the SPT has shown good capability of simulating the early local cracking. In general, the temperature-dependent material behavior of the CoNiCrAlY coating has been effectively characterized using miniature specimen testing and the inverse method.

Keywords: CoNiCrAlY coatings, mechanical properties, DBTT, miniature specimen testing.

646 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e., to be able to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the two-parameter (bi-)Weibull distribution function: its trend is proportional to the bi-Weibull probability density function. In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trend, and its distribution, by transforming the diffusion process into a Wiener process as shown in Ricciardi's theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyze, with simulated data, the computational problems associated with estimating the parameters, an issue of great importance for applying the model to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler; given the available data and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
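
The abstract does not give the exact drift and diffusion coefficients, so the following is only an assumed reading for illustration: an Euler-Maruyama simulation of a process whose drift (trend term) is proportional to the two-parameter Weibull density, dX_t = c f(t; k, lam) X_t dt + sigma X_t dW_t. All parameter values and the model form itself are assumptions, not the paper's specification.

```python
import numpy as np

def weibull_pdf(t, k, lam):
    # two-parameter Weibull density f(t; k, lam)
    return (k / lam) * (t / lam) ** (k - 1) * np.exp(-(t / lam) ** k)

rng = np.random.default_rng(2)
k, lam, c, sigma = 1.5, 2.0, 1.0, 0.2      # illustrative parameters
T, n_steps, x0 = 5.0, 500, 1.0
dt = T / n_steps

t = dt * np.arange(1, n_steps + 1)
x = np.empty(n_steps + 1)
x[0] = x0
for i in range(n_steps):                   # Euler-Maruyama discretization
    drift = c * weibull_pdf(t[i], k, lam) * x[i]
    x[i + 1] = x[i] + drift * dt + sigma * x[i] * np.sqrt(dt) * rng.normal()

print(x[::100])                            # a few points of one simulated path
```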

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trend functions, two-parameter Weibull density function.

645 Modulation Identification Algorithm for Adaptive Demodulator in Software Defined Radios Using Wavelet Transform

Authors: P. Prakasam, M. Madheswaran

Abstract:

A generalized digital modulation identification algorithm for an adaptive demodulator has been developed and is presented in this paper. The algorithm is verified using the wavelet transform and histogram computation to identify QPSK and QAM along with GMSK and M-ary FSK modulations. It has been found that the histogram peaks simplify the identification procedure. The simulated results show that correct modulation identification is possible down to a lower bound of 5 dB and 12 dB for GMSK and QPSK, respectively. When the SNR is above 5 dB, the throughput of the proposed algorithm is more than 97.8%. The receiver operating characteristics (ROC) have been computed to measure the performance of the proposed algorithm, and the analysis shows that the probability of detection (Pd) drops rapidly when the SNR is 5 dB and the probability of false alarm (Pf) is smaller than 0.3. The performance of the proposed algorithm has been compared with existing methods, and it was found to identify all digital modulation schemes at low SNR.

Keywords: Bit Error rate, Receiver Operating Characteristics, Software Defined Radio, Wavelet Transform.

644 Reliability Assessment of Bangladesh Power System Using Recursive Algorithm

Authors: Nahid-Al-Masood, Jubaer Ahmed, Amina Hasan Abedin, S. R. Deeba, Faeza Hafiz, Mahmuda Begum

Abstract:

An electric utility's main concern is to plan, design, operate and maintain its power supply so as to provide an acceptable level of reliability to its users. This clearly requires that standards of reliability be specified and used in all three sectors of the power system, i.e., generation, transmission and distribution. That is why the reliability of a power system is always a major concern to power system planners. This paper presents a reliability analysis of the Bangladesh Power System (BPS). The reliability index, loss of load probability (LOLP), of BPS is evaluated using a recursive algorithm, considering no de-rated states of generators. BPS has sixty-one generators and a total installed capacity of 5275 MW, and its maximum demand is about 5000 MW. The relevant generator data and hourly load profiles were collected from the National Load Dispatch Center (NLDC) of Bangladesh, and the reliability index LOLP is assessed for the period of the last ten years.
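
A minimal sketch of the standard recursive (unit-addition) algorithm behind this kind of LOLP evaluation: a capacity outage probability table is built by adding one generator at a time, and LOLP is the average probability that the capacity on outage exceeds the hourly reserve. The generator list and load profile below are illustrative placeholders, not the BPS data.

```python
import numpy as np

# (capacity in MW, forced outage rate) per unit -- illustrative values only
units = [(200, 0.05), (150, 0.04), (150, 0.04), (100, 0.02)]
total_cap = sum(c for c, _ in units)

# outage_prob[x] = P(exactly x MW of capacity is on forced outage)
outage_prob = np.zeros(total_cap + 1)
outage_prob[0] = 1.0
for cap, q in units:                          # recursive unit-addition step
    new = outage_prob * (1 - q)               # unit available
    new[cap:] += outage_prob[:-cap] * q       # unit on outage shifts the table
    outage_prob = new

cum_prob = np.cumsum(outage_prob[::-1])[::-1]  # P(outage >= x MW)

hourly_load = np.random.default_rng(3).uniform(250, 550, 24)   # placeholder load profile
reserve = total_cap - hourly_load                               # spare capacity each hour
lolp = np.mean([cum_prob[int(r) + 1] for r in reserve])         # P(outage > reserve), averaged
print(f"LOLP = {lolp:.4f}")
```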

Keywords: Recursive algorithm, LOLP, forced outage rate, cumulative probability.

643 Multiparametric Optimization of Water Treatment Process for Thermal Power Plants

Authors: B. Mukanova, N. Glazyrina, S. Glazyrin

Abstract:

This article considers the problem of optimizing the technological process of water treatment for thermal power plants. The problem is multiparametric in nature. To optimize the process, namely to reduce the amount of wastewater, a new technology was developed to reuse such water, and a mathematical model of this wastewater-reuse technology was constructed. Optimization parameters were determined. The model consists of a material balance equation, an equation describing the kinetics of ion exchange for the non-equilibrium case, and an equation for the ion exchange isotherm. The material balance equation includes a nonlinear term that depends on the kinetics of ion exchange. The direct problem of calculating the impurity concentration at the outlet of the water treatment plant was solved numerically, using an implicit point-to-point difference scheme. The inverse problem was formulated as the determination of the parameters of the mathematical model of a water treatment plant operating under non-equilibrium conditions, and this inverse problem was then solved. Based on the calculation results, the start time of the filter regeneration process, the duration of the regeneration process, and the amounts of regeneration and wash water were determined. Multiparametric optimization of the water treatment process for thermal power plants allowed the amount of wastewater to be decreased by 15%.

Keywords: Direct problem, multiparametric optimization, optimization parameters, water treatment.

642 Real Time Speed Estimation of Vehicles

Authors: Azhar Hussain, Kashif Shahzad, Chunming Tang

Abstract:

This paper gives a novel approach to real-time speed estimation of multiple traffic vehicles using fuzzy logic and image processing techniques with a proper arrangement of camera parameters. The described algorithm consists of several important steps. First, the background is estimated by computing the median over a time window of specific frames. Second, the foreground is extracted using a fuzzy similarity approach (FSA) between the estimated background pixels and the current frame pixels, which contain both foreground and background. Third, the traffic lanes are divided into two parts, one for each direction of travel, to allow parallel processing. Finally, the speeds of the vehicles are estimated by a maximum a posteriori probability (MAP) estimator. The true ground speed was measured using infrared sensors for three different vehicles, and comparison with the proposed algorithm shows an accuracy of ±0.74 km/h.
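
A hedged sketch of the first step only: per-pixel median background estimation over a window of frames, with a crude absolute-difference foreground mask standing in for the paper's fuzzy similarity approach. The frame source, window length, and threshold are placeholders.

```python
import numpy as np

def estimate_background(frames):
    """frames: array of shape (n_frames, height, width) -> per-pixel median background."""
    return np.median(frames, axis=0)

def foreground_mask(frame, background, threshold=25.0):
    """Simple absolute-difference mask; the paper uses an FSA instead of a fixed threshold."""
    return np.abs(frame.astype(float) - background) > threshold

# usage with synthetic frames
rng = np.random.default_rng(4)
frames = rng.integers(0, 255, size=(30, 120, 160)).astype(float)
bg = estimate_background(frames)
mask = foreground_mask(frames[0], bg)
print(bg.shape, mask.mean())
```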

Keywords: Defuzzification, fuzzy similarity approach, lane cropping, maximum a posteriori probability (MAP) estimator, speed estimation.

641 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography

Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi

Abstract:

Diffuse optical tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for the early detection of carcinoma cells in brain tissue. It is a form of optical tomography which produces a reconstructed image of human soft tissue using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes light propagation in a biological medium, while the inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue, such as the absorption coefficient, scattering coefficient and optical flux, are processed by the standard regularization technique called Levenberg-Marquardt regularization. Reconstruction algorithms such as the Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) methods are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE) and CPU time for reconstructing images are analyzed to compare performance.

Keywords: Diffuse optical tomography, ill-posedness, Levenberg-Marquardt method, Split Bregman, gradient projection for sparse reconstruction.

640 Solving Linear Matrix Equations by Matrix Decompositions

Authors: Yongxin Yuan, Kezheng Zuo

Abstract:

In this paper, a system of linear matrix equations is considered. A new necessary and sufficient condition for the consistency of the equations is derived by means of the generalized singular-value decomposition, and the explicit representation of the general solution is provided.
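A smaller, related illustration (not the paper's GSVD-based construction): for a single matrix equation A X B = C, consistency holds iff A A⁺ C B⁺ B = C, where A⁺, B⁺ are Moore-Penrose inverses, and X = A⁺ C B⁺ is then one solution. The matrices below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 3))
B = rng.normal(size=(2, 5))
X_true = rng.normal(size=(3, 2))
C = A @ X_true @ B                      # constructed so the equation is consistent

A_p, B_p = np.linalg.pinv(A), np.linalg.pinv(B)
consistent = np.allclose(A @ A_p @ C @ B_p @ B, C)   # consistency test
X = A_p @ C @ B_p                                    # particular solution when consistent
print(consistent, np.allclose(A @ X @ B, C))
```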

Keywords: Matrix equation, Generalized inverse, Generalized singular-value decomposition.

639 Conflation Methodology Applied to Flood Recovery

Authors: E. L. Suarez, D. E. Meeroff, Y. Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distribution’s means without the additional information provided by each individual distribution variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
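A minimal numerical sketch of the conflation operation described above: the conflated density is the normalized product of the input probability density functions. The two recovery-time distributions below are illustrative exponentials, not fitted &FR model inputs.

```python
import numpy as np

x = np.linspace(0.0, 60.0, 2001)                 # recovery time grid (days)
dx = x[1] - x[0]

f_severe = (1 / 20) * np.exp(-x / 20)            # severe-event recovery pdf (mean 20 days)
f_nuisance = (1 / 5) * np.exp(-x / 5)            # nuisance-event recovery pdf (mean 5 days)

product = f_severe * f_nuisance
conflated = product / (product.sum() * dx)       # normalized product of the pdfs

mean_recovery = (x * conflated).sum() * dx
print(f"conflated mean recovery time = {mean_recovery:.2f} days")
# For exponentials the conflation is again exponential with rate 1/20 + 1/5,
# i.e. mean 4 days, weighted toward the lower-variance input.
```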

Keywords: Community resilience, conflation, flood risk, nuisance flooding.

638 Continuous Wave Interference Effects on Global Position System Signal Quality

Authors: Fang Ye, Han Yu, Yibing Li

Abstract:

Radio interference is one of the major concerns in using the global positioning system (GPS) for civilian and military applications. Interference signals are produced not only by other electronic systems but also by illegal jammers. Among the different types of interference, continuous wave (CW) interference has a strong adverse impact on the quality of the received signal. In this paper, we present a more detailed analysis of the effects of CW interference on GPS signal quality. Based on the C/A code spectrum lines, the influence of CW interference on the acquisition performance of GPS receivers is further analysed, and this influence is supported by simulation results obtained with a GPS software receiver. The mathematical expression for the bit error probability, the most important user-level performance parameter of GPS receivers, is also derived in the presence of CW interference, and the expression is consistent with Monte Carlo simulation results. This research on CW interference provides a theoretical basis and new ideas for monitoring the radio noise environment and improving the anti-jamming ability of GPS receivers.

Keywords: GPS, CW interference, acquisition performance, bit error probability, Monte Carlo.

637 Performance Analysis of a Dynamic Channel Reservation-Like Technique for Low Earth Orbit Mobile Satellite Systems

Authors: W. Kiamouche, S. Lasmari, M. Benslama

Abstract:

In order to derive important parameters concerning mobile subscribers (MSs) with ongoing calls in Low Earth Orbit Mobile Satellite Systems (LEO MSSs), a positioning system would have to be integrated into the MSS in order to localize MSs and track them during the connection; such integration is regarded as a complex implementation. We propose in this paper a novel method, based on the advantages of the mobility model of LEO MSSs, which allows the instant of an MS's subsequent handover to be evaluated even if its location is unknown. This method is used to propose a Dynamic Channel Reservation (DCR)-like scheme based on the DCR scheme previously proposed in the literature. The results presented show that the DCR-like technique gives different QoS performance than DCR: an improvement in handover blocking probability and an increase in new call blocking probability are observed for the DCR-like technique.

Keywords: cellular layout, DCR, LEO mobile satellite system, mobility model, positioning system

636 Nonconforming Control Charts for Zero-Inflated Poisson Distribution

Authors: N. Katemee, T. Mayureesawan

Abstract:

This paper develops c-Charts for a Zero-Inflated Poisson (ZIP) process that is approximated by a geometric distribution with parameter p. The estimated p that fits the ZIP distribution is used to calculate the mean, median, and variance of the geometric distribution, from which the c-Chart is constructed by three different methods. For the cg-Chart, the control limits are constructed from the mean and variance of the geometric distribution. For the cmg-Chart, the mean is used to construct the control limits. For the cme-Chart, the control limits are developed from the median and variance of the geometric distribution. The performance of the charts is assessed by the average run length and the average coverage probability. We find that, for an in-control process, the cg-Chart is superior at low levels of the mean for all levels of the zero proportion. For an out-of-control process, the cmg-Chart and cme-Chart are best for means of 2, 3 and 4 at all levels of the parameter.
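
A hedged sketch of how Shewhart-style 3-sigma control limits can be built from the mean, median, and variance of a geometric distribution (the cg- and cme-Chart ideas above). The geometric is parameterized here on {0, 1, 2, ...} (number of failures) with success probability p, which is an assumption, since the abstract does not state the convention used.

```python
import math

def geometric_limits(p, use_median=False):
    mean = (1 - p) / p                              # mean of the {0,1,...} geometric
    var = (1 - p) / p ** 2                          # variance
    median = math.ceil(-1 / math.log2(1 - p)) - 1   # median of the {0,1,...} geometric
    center = median if use_median else mean
    ucl = center + 3 * math.sqrt(var)
    lcl = max(0.0, center - 3 * math.sqrt(var))     # counts cannot go below zero
    return lcl, center, ucl

print(geometric_limits(0.3))                    # mean-based limits (cg-style)
print(geometric_limits(0.3, use_median=True))   # median-based limits (cme-style)
```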

Keywords: average coverage probability, average run length, geometric distribution, zero-inflated Poisson distribution.

635 Mobile Robot Path Planning Utilizing Probability Recursive Function

Authors: Ethar H. Khalil, Bahaa I. Kazem

Abstract:

In this work, a software simulation model is proposed for path planning of a two-driven-wheel mobile robot that can navigate in a dynamic environment with statically distributed obstacles. The work utilizes the Bezier curve method, in a proposed N-th order matrix form, for engineering the mobile robot path. The drawbacks of the Bezier curve in this field have been diagnosed, and a two-direction (up and right) Probability Recursive Function (PRF) is proposed to overcome them. The PRF functionality is developed through a proposed obstacle detection function, an optimization function capable of predicting the optimum path without comparing all feasible paths, and an N-th order Bezier curve function that ensures the drawing of the obtained path. The simulation results show that the mobile robot travels successfully from its starting point to its goal point, avoiding all obstacles located along its way, using the proposed PRF technique.
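
As a small illustration of the N-th order Bezier curve evaluation used for drawing the path, the sketch below evaluates the curve in Bernstein polynomial form; the control points are illustrative waypoints, not PRF-selected ones.

```python
import numpy as np
from math import comb

def bezier_curve(control_points, n_samples=100):
    """control_points: (N+1, 2) array of waypoints; returns sampled curve points."""
    pts = np.asarray(control_points, dtype=float)
    n = len(pts) - 1                                    # curve order N
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    curve = np.zeros((n_samples, pts.shape[1]))
    for i, p in enumerate(pts):                         # sum of Bernstein basis terms
        bernstein = comb(n, i) * t ** i * (1 - t) ** (n - i)
        curve += bernstein * p
    return curve

path = bezier_curve([(0, 0), (2, 5), (6, 5), (8, 1)])   # start, two guide points, goal
print(path[0], path[-1])                                # endpoints match start and goal
```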

Keywords: Mobile robot, path planning, Bezier curve.

634 Fragility Analysis of Weir Structure Subjected to Flooding Water Damage

Authors: Oh Hyeon Jeon, WooYoung Jung

Abstract:

In this study, seepage analysis was performed for the water-level difference between the upstream and downstream sides of a weir structure, in order to evaluate the safety of the weir against flooding. The Monte Carlo simulation method was employed, considering the probability distribution of the adjacent ground parameter, i.e., the permeability coefficient of the weir structure. Modeling of the weir structure was carried out using a commercially available finite element program (ABAQUS). Based on this model, the characteristics of water seepage during flooding were determined at each water level, taking into account the uncertainty of the corresponding permeability coefficient. Subsequently, a fragility function was constructed from the responses obtained in the numerical analysis; this fragility function can be used to determine the weakness of a weir structure subjected to a flooding disaster. It can also serve as reference data for comprehensively predicting the probability of failure and the degree of damage of a weir structure.

Keywords: Weir structure, seepage, flood disaster fragility, probabilistic risk assessment, Monte-Carlo Simulation, permeability coefficient.

633 Video-On-Demand QoE Evaluation across Different Age-Groups and Its Significance for Network Capacity

Authors: Mujtaba Roshan, John A. Schormans

Abstract:

Quality of Experience (QoE) drives churn in the broadband networks industry, and good QoE plays a large part in the retention of customers. QoE is known to be affected by the Quality of Service (QoS) factors of packet loss probability (PLP), delay, and delay jitter caused by the network. Earlier results have shown that the relationship between these QoS factors and QoE is non-linear and may vary from application to application. We use the network emulator Netem as the basis for experimentation and evaluate how QoE varies as we change the emulated QoS metrics. Focusing on Video-on-Demand, we discovered that the reported QoE may differ widely for users of different age groups, and that the most demanding age group (the youngest) can require an order of magnitude lower PLP to achieve the same QoE as the most widely studied age group of users. We then used a bottleneck TCP model to evaluate the capacity cost of achieving an order-of-magnitude decrease in PLP, and found that (almost always) a roughly 3-fold increase in link capacity was required.
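
A hedged back-of-envelope consistent with the roughly 3-fold result, using a Mathis-style steady-state relation rather than the paper's specific bottleneck model: N TCP flows sharing a bottleneck of capacity C settle at C = N·MSS/(RTT·√p), i.e. p ∝ 1/C², so cutting the packet loss probability by 10x needs about √10 ≈ 3.2x the capacity. The flow count, packet size, and RTT below are illustrative assumptions.

```python
from math import sqrt

N, mss_bits, rtt = 50, 1500 * 8, 0.1             # flows, packet size (bits), RTT (s)

def loss_prob(capacity_bps):
    # invert C = N * MSS / (RTT * sqrt(p)) for p
    return (N * mss_bits / (rtt * capacity_bps)) ** 2

c1 = 100e6                                       # 100 Mbit/s bottleneck
c2 = c1 * sqrt(10)                               # ~3.16x capacity
print(f"p at {c1 / 1e6:.0f} Mbit/s: {loss_prob(c1):.4f}")
print(f"p at {c2 / 1e6:.0f} Mbit/s: {loss_prob(c2):.5f}  (10x lower)")
```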

Keywords: Quality of experience, quality of service, packet loss probability, network capacity.

632 Efficient Detection Using Sequential Probability Ratio Test in Mobile Cognitive Radio Systems

Authors: Yeon-Jea Cho, Sang-Uk Park, Won-Chul Choi, Dong-Jo Park

Abstract:

This paper proposes a smart design strategy for a sequential detector to reliably detect the primary user's signal, especially in fast fading environments. We study the computation of the log-likelihood ratio to cope with fast-changing received signal and noise sample variances, which are treated as random variables. First, we analyze the detectability of the conventional generalized log-likelihood ratio (GLLR) scheme when the statistics of the unknown parameters change rapidly due to fast fading effects. Secondly, we propose an efficient sensing algorithm for performing the sequential probability ratio test in a robust and efficient manner when the channel statistics are unknown. Finally, the proposed scheme is compared to the conventional method through simulation results with respect to the average number of samples required to reach a detection decision.
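
For reference, a minimal sketch of a basic Wald sequential probability ratio test for a Gaussian mean (H0: mean 0 vs H1: mean mu), illustrating the "average number of samples to a decision" metric; it does not include the paper's GLLR handling of unknown, fast-fading statistics, and the parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma = 0.5, 1.0
alpha, beta = 0.01, 0.01                              # target Pfa and Pmd
upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

def sprt(signal_present, max_samples=10_000):
    llr = 0.0
    for n in range(1, max_samples + 1):
        x = rng.normal(mu if signal_present else 0.0, sigma)
        llr += (mu * x - mu ** 2 / 2) / sigma ** 2    # per-sample Gaussian log-likelihood ratio
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", max_samples

decisions = [sprt(signal_present=True) for _ in range(2000)]
avg_n = np.mean([n for _, n in decisions])
pd = np.mean([d == "H1" for d, _ in decisions])
print(f"average samples = {avg_n:.1f}, detection rate = {pd:.3f}")
```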

Keywords: Cognitive radio, fast fading, sequential detection, spectrum sensing.

631 Sparsity-Aware Affine Projection Algorithm for System Identification

Authors: Young-Seok Choi

Abstract:

This work presents a new type of affine projection (AP) algorithm which incorporates the sparsity condition of a system. To exploit the sparsity of the system, a weighted l1-norm regularization is imposed on the cost function of the AP algorithm. By minimizing the cost function with a subgradient calculus and choosing two distinct weightings for the l1-norm, two stochastic gradient based sparsity-regularized AP (SR-AP) algorithms are developed. Experimental results show that the SR-AP algorithms outperform their conventional AP counterparts in identifying sparse systems.

Keywords: System identification, adaptive filter, affine projection, sparsity, sparse system.

630 Non-equilibrium Statistical Mechanics of a Driven Lattice Gas Model: Probability Function, FDT-violation, and Monte Carlo Simulations

Authors: K. Sudprasert, M. Precharattana, N. Nuttavut, D. Triampo, B. Pattanasiri, Y. Lenbury, W. Triampo

Abstract:

The study of non-equilibrium systems has attracted increasing interest in recent years, mainly due to the lack of theoretical frameworks, unlike their equilibrium counterparts. Studying the steady state and/or simple systems is thus one of the main interests. Hence, in this work we have focused our attention on the driven lattice gas (DLG) model, consisting of interacting particles subject to an external field E. The dynamics of the system are given by particles hopping to nearby empty sites, with rates biased for jumps in the direction of E. Using small two-dimensional DLG systems, the stochastic properties at the non-equilibrium steady state were studied analytically. To understand the non-equilibrium phenomena, we applied an analytic approach via the master equation to calculate the probability function and to analyze the violation of detailed balance in terms of the fluctuation-dissipation theorem. Monte Carlo simulations were performed to validate the analytic results.

Keywords: Non-equilibrium, lattice gas, stochastic process

629 A Degraded Practical MIMOME Channel: Issues in Secret Data Communications

Authors: Mohammad Rakibul Islam

Abstract:

In this paper, a Gaussian multiple input multiple output multiple eavesdropper (MIMOME) channel is considered, where a transmitter communicates with a receiver in the presence of an eavesdropper. We present a technique for determining the secrecy capacity of the multiple input multiple output (MIMO) channel under Gaussian noise. We transform the degraded MIMOME channel into multiple single input multiple output (SIMO) Gaussian wire-tap channels and then use a scalar approach to convert it into two equivalent multiple input single output (MISO) channels. The secrecy capacity model is then developed for the condition where the channel state information (CSI) of only the main channel is known to the transmitter. The results show that secret communication is possible when the eavesdropper channel noise is greater than a cutoff noise level. The outage probability of the secrecy capacity is also analyzed, along with the effect of fading.

Keywords: Secrecy capacity, MIMO, wiretap channel, covariance matrix, fading.

628 Iterative Methods for Computing the Weighted Minkowski Inverses of Matrices in Minkowski Space

Authors: Xiaoji Liu, Yonghui Qin

Abstract:

In this note, we consider a family of iterative formulas for computing the weighted Minkowski inverse A_{M,N} in Minkowski space, give two kinds of iterations, and establish the necessary and sufficient conditions for the convergence of the iterations.

Keywords: iterative method, the Minkowski inverse, A

627 Building Gabor Filters from Retinal Responses

Authors: Johannes Partzsch, Christian Mayr, Rene Schuffny

Abstract:

Starting from a biologically inspired framework, Gabor filters were built up from retinal filters via LMSE algorithms. A subset of retinal filter kernels was chosen to form a particular Gabor filter by using a weighted sum. One-dimensional optimization approaches were shown to be inappropriate for the problem. All model parameters were fixed using biological or image processing constraints. Detailed analysis of the optimization procedure led to the introduction of a minimization constraint. Finally, quantization of the weighting factors was investigated. This resulted in an optimized cascaded structure of a Gabor filter bank implementation with lower computational cost.

Keywords: Gabor filter, image processing, optimization

626 Carbon Disulfide Production via Hydrogen Sulfide Methane Reformation

Authors: H. Hosseini, M. Javadi, M. Moghiman, M. H. Ghodsi Rad

Abstract:

Carbon disulfide is widely used for the production of viscose rayon, rubber, and other organic materials, and it is a feedstock for the synthesis of sulfuric acid. The objective of this paper is to analyze possibilities for the efficient production of CS2 from sour natural gas reformation (H2SMR) (2H2S + CH4 = CS2 + 4H2). The effect of the H2S to CH4 feed ratio and the reaction temperature on carbon disulfide production is also investigated numerically in a reforming reactor. The chemical reaction model is based on an assumed probability density function (PDF) parameterized by the mean and variance of the mixture fraction and a β-PDF shape. The results show that the major factor influencing CS2 production is the reactor temperature. The yield of carbon disulfide increases with an increasing H2S to CH4 feed gas ratio (H2S/CH4 ≤ 4). The yield of C(s) also increases with increasing temperature until the temperature reaches 1000 K; then, due to the increase in CS2 production and the consumption of C(s), the yield of C(s) drops with further increases in temperature. The predicted CH4 and H2S conversion and the yield of carbon disulfide are in good agreement with the results of Huang and T-Raissi.

Keywords: Carbon disulfide, sour natural gas, H2SMR, probability density function.

625 Solution of The KdV Equation with Asymptotic Degeneracy

Authors: Tapas Kumar Sinha, Joseph Mathew

Abstract:

Recently, T. C. Au-Yeung, C. Au, and P. C. W. Fung [2] have given the solution of the KdV equation [1] subject to the boundary condition , where b is a constant. We have further extended the method of [2] to find the solution of the KdV equation with asymptotic degeneracy. Via simulations we find both bright and dark solitons (i.e., solitons with opposite phases).

Keywords: KdV equation, Asymptotic Degeneracy, Solitons, Inverse Scattering

624 Fault Localization and Alarm Correlation in Optical WDM Networks

Authors: G. Ramesh, S. Sundara Vadivelu

Abstract:

For several high-speed networks, providing resilience against failures is an essential requirement. A main feature in designing next generation optical networks is protecting and restoring high capacity WDM networks from failures. Quick detection, identification and restoration make networks more robust and reliable, even though failures cannot be avoided. Hence, it is necessary to develop fast, efficient and dependable fault localization or detection mechanisms. In this paper, we propose a new fault localization algorithm for WDM networks which can identify the location of a failure on a failed lightpath. Our algorithm detects the failed connection and then attempts to reroute the data stream through an alternate path. In addition, we develop an algorithm to analyze the information of the alarms generated by the components of an optical network in the presence of a fault; it uses alarm correlation in order to reduce the list of suspected components shown to the network operators. Our simulation results show that the proposed algorithms achieve lower blocking probability and delay while providing higher throughput.

Keywords: Alarm correlation, blocking probability, delay, fault localization, WDM networks.

623 Improving Order Quantity Model with Emergency Safety Stock (ESS)

Authors: Yousef Abu Nahleh, Alhasan Hakami, Arun Kumar, Fugen Daver

Abstract:

This study considers the problem of calculating safety stocks in inventory systems that face demand uncertainties in disaster situations. Safety stocks are essential for a supply chain, which is driven by forecasts of customer needs, to respond to demand uncertainties and to reach predefined goal service levels. To address the uncertainties caused by disaster situations affecting the industry sector, the concept of Emergency Safety Stock (ESS) is proposed. While there exists a huge body of literature on determining safety stock levels, this literature does not address the problems arising from disasters or how to deal with such situations. In this paper, the Order Quantity Model is improved to deal with demand uncertainty due to disasters by incorporating a new idea, called ESS, which is based on the probability of disaster occurrence and uses a probability matrix calculated from historical data.

Keywords: Emergency Safety Stocks, Safety stocks, Order Quantity Model, Supply chain.

622 Ship Detection Requirements Analysis for Different Sea States: Validation on Real SAR Data

Authors: Jaime Martín-de-Nicolás, David Mata-Moya, Nerea del-Rey-Maestre, Pedro Gómez-del-Hoyo, María-Pilar Jarabo-Amores

Abstract:

Ship detection is nowadays quite an important issue in tasks related to sea traffic control, fishery management, and ship search and rescue. Although it has traditionally been carried out by patrol ships or aircraft, coverage, weather conditions and sea state can become a problem. Synthetic aperture radars can overcome these coverage limitations and work under any climatological condition. A fast CFAR ship detector based on a robust statistical modeling of sea clutter with respect to sea states in SAR images is used. In this paper, the minimum SNR required to obtain a given detection probability with a given false alarm rate for any sea state is determined. A Gaussian target model using real SAR data is considered. Results show that the required SNR does not depend heavily on the class considered. Provided there is some variation in the backscattering of targets in SAR imagery, the detection probability is limited, and a post-processing stage based on morphology would be suitable.
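
As a hedged illustration of the "minimum SNR for a given (Pd, Pfa)" idea, the sketch below uses the textbook single-sample Gaussian detection relation √SNR = Q⁻¹(Pfa) − Q⁻¹(Pd); it is not the paper's full CFAR/sea-clutter model, and the (Pd, Pfa) values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def required_snr_db(pd, pfa):
    """Minimum SNR so a single-sample Gaussian threshold test reaches (pd, pfa)."""
    snr = (norm.isf(pfa) - norm.isf(pd)) ** 2
    return 10 * np.log10(snr)

for pfa in (1e-3, 1e-4, 1e-6):
    print(f"Pfa = {pfa:.0e}: SNR needed for Pd = 0.9 is {required_snr_db(0.9, pfa):.1f} dB")
```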

Keywords: SAR, generalized gamma distribution, detection curves, radar detection.

621 Landowners' Participation Behavior on the Payment for Environmental Service (PES): Evidence from Taiwan

Authors: Wan-Yu Liu

Abstract:

To respond to the Kyoto Protocol, a Payment for Environmental Service (PES) policy entitled the "Plain Landscape Afforestation Program (PLAP)" was certified by the Executive Yuan in Taiwan on 31 August 2001 and has been implemented for six years since 1 January 2002. Although the PLAP has received many positive comments, there are still many difficulties during the process of implementation, such as insufficient technology for afforestation, private landowners' low interest in participating in the PLAP, insufficient subsidies, and so on, which are potential threats that hinder the PLAP from moving forward in the future. In this paper, selecting Ping-Tung County in Taiwan as a sample region and targeting private landowners with and without the intention to participate in the PLAP, respectively, we conduct an empirical analysis based on a Logit model to investigate the factors that determine whether private landowners join the PLAP, so as to understand the incentive effects of the PLAP on the personal decision to afforest. The possible factors that might determine a private landowner's participation in the PLAP include landowner characteristics, cropland characteristics, and policy factors. Among them, the policy factors include the afforestation subsidy amount (+), the duration of the afforestation subsidy (+), the rules on adjoining and adjacent areas (+), and so on; these do not reach statistical significance, but the signs of the variables are consistent with the intuition behind the policy. As for the landowner characteristics, each of the age (+), education level (–), and annual household income (+) variables reaches the 10% significance level; as for the cropland characteristics, each of cropland area (+), cropland price (–), and number of cropland parcels (–) reaches the 1% significance level. In light of the above, the cropland characteristics are the dominant factors determining the probability of a landowner's participation in the PLAP. In the Logit model established in this paper, the probability of correctly classifying nonparticipants is 98%, the probability of correctly classifying participants is 71.8%, and the overall classification rate is 95%. In addition, the Hosmer-Lemeshow test and the omnibus test reveal that the Logit model provides good fit and good predictive power in forecasting private landowners' participation in this program. The empirical results of this paper are expected to help the implementation of afforestation programs in Taiwan.
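
A hedged sketch of a participation Logit model of the kind described above, fitted with statsmodels; the covariates and data here are synthetic placeholders loosely following the reported signs, not the Ping-Tung survey variables or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 300
df = pd.DataFrame({
    "age": rng.normal(55, 10, n),                 # landowner characteristic (placeholder)
    "cropland_area": rng.gamma(2.0, 1.0, n),      # cropland characteristic (placeholder)
    "cropland_price": rng.normal(100, 20, n),     # cropland characteristic (placeholder)
})
# synthetic participation outcome, loosely following the reported signs (+, +, -)
logit_lin = -3 + 0.04 * df["age"] + 0.8 * df["cropland_area"] - 0.01 * df["cropland_price"]
df["participate"] = (rng.random(n) < 1 / (1 + np.exp(-logit_lin))).astype(int)

X = sm.add_constant(df[["age", "cropland_area", "cropland_price"]])
model = sm.Logit(df["participate"], X).fit(disp=0)
print(model.summary())

# classification rate at a 0.5 cutoff, analogous to the paper's overall rate
pred = (model.predict(X) > 0.5).astype(int)
print("overall correct prediction rate:", (pred == df["participate"]).mean())
```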

Keywords: Forestry policy, logit, afforestation subsidy, afforestation policy.

620 Non-Rigid Registration of Medical Images Using an Automated Method

Authors: Panos Kotsas

Abstract:

This paper presents the application of a signal-intensity-independent registration criterion to non-rigid body registration of medical images. The criterion is defined as the weighted ratio image of two images. The ratio is computed on a voxel-by-voxel basis, and weighting is performed by setting the ratios between signal and background voxels to a standard high value. The mean squared value of the weighted ratio is computed over the union of the signal areas of the two images and is minimized using a Chebyshev polynomial approximation. The geometric transformation model adopted is a local cubic B-spline based model.

Keywords: Medical image, non-rigid, registration.
