Search results for: BLUE estimate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 866

506 Estimating an Optimal Neighborhood Size in the Spherical Self-Organizing Feature Map

Authors: Alexandros Leontitsis, Archana P. Sangole

Abstract:

This article presents a short discussion of optimum neighborhood size selection in a spherical self-organizing feature map (SOFM). Most of the literature on SOFMs has addressed the issue of selecting optimal learning parameters for Cartesian-topology SOFMs. However, experience with the spherical SOFM suggests that the learning aspects of the Cartesian-topology SOFM do not translate directly. This article presents an approach for estimating the neighborhood size of a spherical SOFM from the data. It adopts the L-curve criterion, previously suggested for choosing the regularization parameter in linear problems whose right-hand side is contaminated with noise. Simulation results are presented on two artificial 4D data sets of the coupled Hénon-Ikeda map.
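A minimal sketch of the L-curve idea referred to above, assuming the two quantities traded off against each other (for instance a quantization error versus a map-smoothness norm) have already been computed for a range of candidate neighborhood sizes; the corner is taken as the point of maximum curvature of the log-log curve. All variable names and the placeholder data are illustrative, not taken from the paper.

```python
import numpy as np

def l_curve_corner(param_values, residual_norms, solution_norms):
    """Pick the parameter at the point of maximum curvature of the
    log-log L-curve (residual norm vs. solution norm)."""
    x = np.log(np.asarray(residual_norms, dtype=float))
    y = np.log(np.asarray(solution_norms, dtype=float))
    # First and second derivatives with respect to the parameter index.
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # Signed curvature of the parametric curve (x(t), y(t)).
    curvature = (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)
    return param_values[int(np.argmax(np.abs(curvature)))]

# Hypothetical neighborhood sizes and placeholder norms exhibiting an L shape.
sizes = np.arange(1, 11)
residuals = np.exp(-0.8 * sizes) + 0.05   # placeholder: quantization error
norms = sizes.astype(float)               # placeholder: map "roughness"
print(l_curve_corner(sizes, residuals, norms))
```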

Keywords: Parameter estimation, self-organizing feature maps, spherical topology.

Downloads: 1496
505 Exploring the Combinatorics of Motif Alignments for Accurately Computing E-values from P-values

Authors: T. Kjosmoen, T. Ryen, T. Eftestøl

Abstract:

In biological and biomedical research, motif finding tools are important for locating regulatory elements in DNA sequences. Many such motif finding tools are available, often yielding position weight matrices and significance indicators. These indicators, p-values and E-values, describe the likelihood that a motif alignment is generated by the background process and the expected number of occurrences of the motif in the data set, respectively. The various tools often estimate these indicators differently, making them not directly comparable. One approach for comparing motifs from different tools is to compute the E-value as the product of the p-value and the number of possible alignments in the data set. In this paper we explore the combinatorics of the motif alignment models OOPS, ZOOPS, and ANR, and propose a generic algorithm for computing the number of possible combinations accurately. We also show that using the wrong alignment model can give E-values that diverge significantly from their true values.
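As a hedged illustration of the E-value construction described above, the sketch below counts the possible alignments under the OOPS model (one occurrence per sequence, so the per-sequence placements multiply) and scales the p-value by that count; the function names and example numbers are hypothetical, not taken from the paper.

```python
from math import prod

def alignments_oops(seq_lengths, motif_width):
    """Number of possible motif alignments under the OOPS model:
    exactly one site per sequence, so the placements multiply."""
    return prod(max(L - motif_width + 1, 0) for L in seq_lengths)

def e_value(p_value, n_alignments):
    """E-value as the p-value scaled by the number of possible alignments."""
    return p_value * n_alignments

# Hypothetical data set: three sequences and a motif of width 8.
lengths = [200, 150, 300]
print(e_value(1e-6, alignments_oops(lengths, 8)))
```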

Keywords: Motif alignment, combinatorics, p-value, E-value, OOPS, ZOOPS, ANR.

Downloads: 1191
504 Evaluation of Horizontal Seismic Hazard of Naghan, Iran

Authors: S. A. Razavian Amrei, G.Ghodrati Amiri, D. Rezaei

Abstract:

This paper presents a probabilistic horizontal seismic hazard assessment of Naghan, Iran. It provides probabilistic estimates of Peak Ground Horizontal Acceleration (PGHA) for return periods of 475, 950 and 2475 years. The output of the probabilistic seismic hazard analysis is expressed in terms of peak ground acceleration (PGA), the most common criterion in building design. A catalogue of seismic events that includes both historical and instrumental events was compiled, covering the period from 840 to 2009. The seismic sources that affect the hazard in Naghan were identified within a radius of 200 km, and the recurrence relationships of these sources were derived using the method of Kijko and Sellevoll. Finally, PGHA values indicating the earthquake hazard of Naghan for different hazard levels were obtained using the SEISRISK III software.
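The return periods quoted above follow from the usual Poisson-exceedance relation between return period and probability of exceedance over an exposure time. A small sketch under that standard assumption (not a detail stated in the abstract):

```python
import math

def return_period(p_exceed, exposure_years):
    """Return period implied by a probability of exceedance over an
    exposure time, assuming Poisson arrivals of exceeding events."""
    return -exposure_years / math.log(1.0 - p_exceed)

def p_exceedance(return_period_years, exposure_years):
    """Inverse relation: probability of at least one exceedance."""
    return 1.0 - math.exp(-exposure_years / return_period_years)

# 10% chance of exceedance in 50 years -> roughly the 475-year return period.
print(round(return_period(0.10, 50)))      # ~475
print(p_exceedance(2475, 50))              # ~0.02, i.e. 2% in 50 years
```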

Keywords: Seismic Hazard Assessment, Seismicity Parameters, PGA, Naghan, Iran

Downloads: 1671
503 Traffic Flow Prediction using Adaboost Algorithm with Random Forests as a Weak Learner

Authors: Guy Leshem, Ya'acov Ritov

Abstract:

Traffic Management and Information Systems, which rely on a network of sensors, aim to describe urban traffic in real time using a set of parameters and to estimate those parameters. Although the state of the art focuses on data analysis, little has been done in the sense of prediction. In this paper, we describe a machine learning system for traffic flow management and control, applied to the traffic flow prediction problem. The new algorithm is obtained by using the Random Forests algorithm as the weak learner within the AdaBoost algorithm. We show that our algorithm performs relatively well on real data and, according to the Traffic Flow Evaluation model, makes it possible to estimate and predict whether there is congestion at a given time at road intersections.
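A minimal sketch of the boosting setup described above, using scikit-learn's AdaBoostClassifier with a small random forest as the weak learner; the synthetic features stand in for the sensor-derived traffic data and are purely illustrative (the keyword is `estimator` in scikit-learn >= 1.2, `base_estimator` in older releases).

```python
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Placeholder data standing in for sensor-derived traffic features
# (the real study uses magnetic loop detector measurements).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# AdaBoost with a small random forest as the weak learner.
weak_learner = RandomForestClassifier(n_estimators=10, max_depth=3, random_state=0)
model = AdaBoostClassifier(estimator=weak_learner, n_estimators=50, random_state=0)
model.fit(X_tr, y_tr)
print("congestion-classification accuracy:", model.score(X_te, y_te))
```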

Keywords: Machine Learning, Boosting, Classification, Traffic Congestion, Data Collecting, Magnetic Loop Detectors, Signalized Intersections, Traffic Signal Timing Optimization.

Downloads: 3879
502 A New Approach to Design Policies for the Adoption of Alternative Fuel-Technology Powertrains

Authors: Reza Fazeli, Vitor Leal, Jorge Pinho de Sousa

Abstract:

Planning the transition period for the adoption of alternative fuel-technology powertrains is a challenging task that requires sophisticated analysis tools. In this study, a system dynamics approach was applied to analyze the bi-directional interaction between the development of the refueling station network and vehicle sales. In addition, the developed model was used to estimate the transition cost of reaching a predefined target (share of alternative fuel vehicles) under different scenarios. Several scenarios were analyzed to investigate the effectiveness and cost of incentives on the initial price of vehicles and on the evolution of fuel and refueling stations. The results show that a combined set of incentives is more effective than any single type of incentive.

Keywords: adoption of Alternative Fuel Vehicles, System Dynamic Analysis, Plug-in Hybrid Vehicles

Downloads: 1435
501 Modeling and Simulation of Position Estimation of Switched Reluctance Motor with Artificial Neural Networks

Authors: Oguz Ustun, Erdal Bekiroglu

Abstract:

In the present study, position estimation of a switched reluctance motor (SRM) has been achieved on the basis of artificial neural networks (ANNs). The ANN can estimate the rotor position without an extra rotor position sensor by measuring the phase flux linkages and phase currents. A flux linkage-phase current-rotor position data set and a supervised backpropagation learning algorithm are used to train the ANN-based position estimator. A 4-phase SRM has been used to verify the accuracy and feasibility of the proposed position estimator. Simulation results show that the proposed position estimator gives precise and accurate position estimates under both low and high reference speeds of the SRM.
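A hedged sketch of the kind of estimator described above: a small multilayer perceptron mapping (flux linkage, phase current) to rotor position, trained by backpropagation. The synthetic flux model and parameter choices are illustrative assumptions, not the paper's data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder training set: rotor position as a nonlinear function of
# phase flux linkage and phase current (real data would be measured).
rng = np.random.default_rng(0)
current = rng.uniform(0.0, 10.0, 5000)
position = rng.uniform(0.0, 45.0, 5000)          # electrical degrees
flux = (1.0 + 0.5 * np.cos(np.radians(4 * position))) * current * 0.1
X = np.column_stack([flux, current])

# Supervised backpropagation training of the position estimator.
estimator = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20, 20),
                                       max_iter=2000, random_state=0))
estimator.fit(X, position)
print("estimated position for one sample:", estimator.predict(X[:1]))
```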

Keywords: Artificial neural networks, modeling and simulation, position observer, switched reluctance motor.

Downloads: 2034
500 An Improved Scheduling Strategy in Cloud Using Trust Based Mechanism

Authors: D. Sumathi, P. Poongodi

Abstract:

Cloud computing refers to applications delivered as services over the internet, together with the datacenters that provide those services through hardware and systems software; such services were earlier referred to as Software as a Service (SaaS). Scheduling must deal with the job components (called tasks) under a lack of information; in fact, for a large fraction of jobs from the machine learning, bio-computing, and image processing domains, it is possible to estimate the maximum time required by a task in the job. This study focuses on trust-based scheduling to improve cloud security by modifying the Heterogeneous Earliest Finish Time (HEFT) algorithm. It proposes TR-HEFT (Trust Reputation HEFT), which is then compared to Dynamic Load Scheduling.

Keywords: Software as a Service (SaaS), Trust, Heterogeneous Earliest Finish Time (HEFT) algorithm, Dynamic Load Scheduling.

Downloads: 2176
499 Image Modeling Using Gibbs-Markov Random Field and Support Vector Machines Algorithm

Authors: Refaat M Mohamed, Ayman El-Baz, Aly A. Farag

Abstract:

This paper introduces a novel approach to estimating the clique potentials of Gibbs Markov random field (GMRF) models using the Support Vector Machines (SVM) algorithm and Mean Field (MF) theory. The proposed approach is based on modeling the potential function associated with each clique shape of the GMRF model as a Gaussian-shaped kernel. In turn, the energy function of the GMRF takes the form of a weighted sum of Gaussian kernels. This formulation of the GMRF model motivates the use of the SVM, with Mean Field theory applied in its learning, to estimate the energy function. The approach has been tested on synthetic texture images and is shown to provide satisfactory results in retrieving the synthesizing parameters.

Keywords: Image Modeling, MRF, Parameters Estimation, SVM Learning.

Downloads: 1611
498 Diagnostic Contribution of the MMSE-2:EV in the Detection and Monitoring of Cognitive Impairment: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

The goal of this paper is to present the diagnostic contribution that the screening instrument Mini-Mental State Examination-2: Expanded Version (MMSE-2:EV) brings to detecting cognitive impairment and to monitoring the progress of degenerative disorders. The diagnostic significance is underlined by the interpretation of MMSE-2:EV scores obtained from applying the test to patients with mild and major neurocognitive disorders. The cases were selected from current practice in order to cover a broad and significant neurocognitive pathology: mild cognitive impairment, Alzheimer's disease, vascular dementia, mixed dementia, Parkinson's disease, and conversion of mild cognitive impairment into Alzheimer's disease. The MMSE-2:EV version was applied one month after the initial assessment, three months after the first reevaluation and then every six months, alternating the blue and red forms. Adjusted for age and educational level, the raw scores were converted into T scores and then, with the mean and the standard deviation, the z scores were calculated. The differences in raw scores between evaluations were analyzed for statistical significance in order to establish the progression of the disease over time. The results indicated that the psycho-diagnostic approach for evaluating cognitive impairment with the MMSE-2:EV is safe and that the application interval is optimal. In clinical settings with a large flow of patients, the MMSE-2:EV is a safe and fast psychodiagnostic solution. Clinicians can make objective decisions, and for patients the test does not take too much time and energy, does not bother them, and does not force them to travel frequently.
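A small sketch of the score conversion mentioned above, assuming the conventional T-score scale (mean 50, standard deviation 10); the example value is hypothetical.

```python
def t_to_z(t_score, mean=50.0, sd=10.0):
    """Convert an age- and education-adjusted T score to a z score,
    assuming the conventional T-score mean of 50 and SD of 10."""
    return (t_score - mean) / sd

print(t_to_z(35))   # -1.5, i.e. 1.5 SD below the normative mean
```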

Keywords: MMSE-2, dementia, cognitive impairment, neuropsychology.

Downloads: 3618
497 Bail-in Capital: The New Box

Authors: Manu Krishnan, Phil Jacoby

Abstract:

In this paper, we discuss the paradigm shift in bank capital from the "gone concern" to the "going concern" mindset. We then propose a methodology for pricing a product of this shift called Contingent Capital Notes ("CoCos"). The Merton Model can determine a price for credit risk by treating the firm's equity as a call option on its assets. Our pricing methodology for CoCos also uses the credit spread implied by the Merton Model, in a subsequent derivative form developed by John Hull et al. Here, a market-implied asset volatility is calculated from observed market CDS spreads. This implied asset volatility is then used to estimate the probability of triggering a predetermined "contingency event" given the distance-to-trigger (DTT). The paper then investigates the effect of varying DTTs and recovery assumptions on the CoCo yield. We conclude with an investment rationale.
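A hedged sketch of the trigger-probability step, assuming a lognormal (Merton-style) asset process; the asset volatility argument stands in for the value implied from CDS spreads, and all numbers are hypothetical rather than the paper's inputs.

```python
from math import log, sqrt
from scipy.stats import norm

def trigger_probability(asset_value, trigger_level, sigma_asset, mu, horizon):
    """Probability that assets end below the trigger level at the horizon,
    under a lognormal (Merton-style) asset process. `sigma_asset` would be
    the volatility implied from observed CDS spreads."""
    dtt = (log(asset_value / trigger_level)
           + (mu - 0.5 * sigma_asset ** 2) * horizon)   # distance to trigger
    return norm.cdf(-dtt / (sigma_asset * sqrt(horizon)))

# Hypothetical inputs: 20% asset volatility, 1% drift, 5-year horizon.
print(trigger_probability(100.0, 70.0, 0.20, 0.01, 5.0))
```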

Keywords: CoCo, Contingent Capital, Bank Capital, Tier 1 Capital

Downloads: 1527
496 Evaluating Spectral Relationships between Signals by Removing the Contribution of a Common, Periodic Source: A Partial Coherence-Based Approach

Authors: Antonio Mauricio F. L. Miranda de Sá

Abstract:

Partial coherence between two signals, removing the contribution of a periodic, deterministic signal, is proposed for evaluating the interrelationship in multivariate systems. The estimator expression was derived and shown to be independent of such a periodic signal. Simulations were used to obtain its critical values, which were found to be the same as those for Gaussian signals, as well as to evaluate the technique. An illustration with electroencephalographic (EEG) signals during photic stimulation is also provided. The application of the proposed technique to both simulated and real EEG data indicates that it is very specific in removing the contribution of periodic sources. The estimate's independence of the periodic signal may widen the application of partial coherence to signal analysis, since it could be used together with simple coherence to test for contamination of signals by a common, periodic noise source.

Keywords: Partial coherence, periodic input, spectral analysis, statistical signal processing.

Downloads: 1440
495 Parameter Estimation for Viewing Rank Distribution of Video-on-Demand

Authors: Hyoup-Sang Yoon

Abstract:

Video-on-demand (VOD) systems are designed using content delivery networks (CDN) to minimize the overall operational cost and to maximize scalability. Estimation of the viewing pattern (i.e., the relationship between the number of viewings and the ranking of VOD contents) plays an important role in minimizing the total operational cost and maximizing the performance of VOD systems. In this paper, we have analyzed a large body of commercial VOD viewing data and found that the viewing rank distribution fits the parabolic fractal distribution well. A weighted linear model fitting function is used to estimate the parameters (coefficients) of the parabolic fractal distribution. This paper presents an analytical basis for designing an optimal hierarchical VOD content distribution system in terms of its cost and performance.
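A minimal sketch of the fitting step described above: the parabolic fractal form log10(views) = a + b·log10(rank) + c·log10(rank)^2, fitted by weighted least squares. The synthetic counts and the particular weighting are illustrative assumptions, not the commercial data set.

```python
import numpy as np

def fit_parabolic_fractal(view_counts, weights=None):
    """Fit log10(views) = a + b*log10(rank) + c*log10(rank)**2 by
    (weighted) least squares and return (a, b, c)."""
    views = np.sort(np.asarray(view_counts, dtype=float))[::-1]
    rank = np.arange(1, len(views) + 1)
    x, y = np.log10(rank), np.log10(views)
    c, b, a = np.polyfit(x, y, deg=2, w=weights)   # highest power first
    return a, b, c

# Placeholder viewing counts standing in for commercial VOD data.
rng = np.random.default_rng(1)
counts = rng.pareto(1.2, 500) * 1000 + 1
weights = 1.0 / np.arange(1, 501)     # one possible rank-based weighting
print(fit_parabolic_fractal(counts, weights))
```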

Keywords: VOD, CDN, parabolic fractal distribution, viewing rank, weighted linear model fitting

Downloads: 1769
494 Validation of the WAsP Model for a Terrain Surrounded by Mountainous Region

Authors: Mohammadamin Zanganeh, Vahid Khalajzadeh

Abstract:

The problems associated with the wind predictions of the WAsP model in complex terrain have been the target of several studies over the last decade. In this paper, the influence of surrounding orography on the accuracy of wind data analysis for a terrain is investigated. For the case study, a site with complex surrounding orography is considered. This site is located in Manjil, one of the windiest cities of Iran. For a precise evaluation of the wind regime at the site, one year of wind data measurements from two meteorological masts are used. To validate the results obtained from WAsP, cross-prediction between the two masts is performed. The analysis reveals that the WAsP model can estimate the wind speed behavior accurately. In addition, the results show that this software can be used for predicting the wind regime in flat sites with complex surrounding orography.

Keywords: Complex terrain, Meteorological mast, WAsP model, Wind prediction

Downloads: 1772
493 Distinction between Manifestations of Diabetic Retinopathy and Dust Artifacts Using Three-Dimensional HSV Color Space

Authors: Naoto Suzuki

Abstract:

Many ophthalmologists find it difficult to distinguish between small retinal hemorrhages and dust artifacts when using fundus photography for the diagnosis of diabetic retinopathy. Six patients with diabetic retinopathy underwent fundus photography, which revealed dust artifacts in the photographs of some patients. We constructed an experimental device similar to the optical system of the fundus camera and colored the fundi of artificial eyes with khaki, sunset, rose and sunflower colors. Using the experimental device, we photographed dust artifacts with each artificial eye. We used the Scilab 5.4.0 and SIVP 0.5.3 software to convert the red, green, and blue (RGB) color space to the hue, saturation, and value (HSV) color space. We calculated the differences between the areas of manifestations and perimanifestations and between the areas of dust artifacts and periartifacts using average HSV values. The V values in HSV for the manifestations were as follows: hemorrhages, 0.06 ± 0.03; hard exudates, −0.12 ± 0.06; and photocoagulation marks, 0.07 ± 0.02. For dust artifacts, visualized in the human and artificial eyes, the V values were as follows: human eye, 0.19 ± 0.03; khaki, 0.41 ± 0.02; sunset, 0.43 ± 0.04; rose, 0.47 ± 0.11; and sunflower, 0.59 ± 0.07. For the human and artificial eyes, we calculated two sensitivity values of dust artifacts compared to manifestation areas. V values of the HSV color space enabled the differentiation of small hemorrhages, hard exudates, and photocoagulation marks from dust artifacts.
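A small sketch of the HSV value (V) comparison described above, using the Python standard library colorsys conversion rather than the Scilab/SIVP tooling used in the study; the pixel samples are hypothetical.

```python
import numpy as np
import colorsys

def mean_value(rgb_pixels):
    """Mean V (value) of HSV for an array of RGB pixels scaled to [0, 1]."""
    return float(np.mean([colorsys.rgb_to_hsv(r, g, b)[2]
                          for r, g, b in rgb_pixels]))

def value_contrast(region_pixels, surround_pixels):
    """Difference of average V between a candidate region and its surround;
    per the abstract, small hemorrhages gave about 0.06 and dust artifacts
    in the human eye about 0.19."""
    return mean_value(region_pixels) - mean_value(surround_pixels)

# Hypothetical pixel samples (normalized RGB) for a region and its surround.
region = np.array([[0.45, 0.20, 0.18], [0.43, 0.22, 0.20]])
surround = np.array([[0.55, 0.30, 0.25], [0.52, 0.28, 0.24]])
print(value_contrast(region, surround))
```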

Keywords: Diabetic retinopathy, HSV color space, small hemorrhages, hard exudates, photocoagulation marks.

Downloads: 1180
492 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups

Authors: Naushad Mamode Khan

Abstract:

The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs), handling equi-, over- and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify and thus restricts likelihood-based estimation. The joint generalized quasi-likelihood approach (GQL-I) was considered instead, but it is rather computationally intensive and may even fail to estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments and is shown to yield estimates as efficient as those of GQL-I while being far more computationally stable.

Keywords: Longitudinal, Com-Poisson, Ill-conditioned, INAR(1), GLMs, GQL.

Downloads: 1755
491 Comparison of the Parameter Using ECG with the Bispectrum Parameter Using EEG during General Anesthesia

Authors: Seong-wan Baik, Soo-young Ye, Byeong-cheol Choi, Gye-rok Jeon

Abstract:

The measurement of anesthetic depth is necessary in anesthesiology. NN10 is a very simple method among RR-interval analysis methods: the NN10 parameter is the number of normal-to-normal RR-interval differences exceeding 10 ms. Bispectrum analysis is defined as a two-dimensional FFT. The EEG signal reflects nonlinear phenomena that follow changes in brain function. After the two-dimensional bispectrum analysis, the most significant power spectral density peaks appear abundantly in specific areas in the awake and anesthetized states. Since many peaks appear in these specific areas of the frequency plane, these points are used to create a new index. The index ranges from 0 to 100; it is 20-50 under anesthesia and 60-90 when awake. In this paper, the relation between the NN10 parameter from the ECG and the bispectrum index from the EEG is observed to estimate the depth of anesthesia, and the utility of the index during anesthesia is then assessed.
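A minimal sketch of the NN10 computation, interpreting NN10 as the count of successive normal-to-normal RR-interval differences exceeding 10 ms (one common reading of the definition above); the RR series is hypothetical.

```python
import numpy as np

def nn10(rr_intervals_ms):
    """Count successive normal-to-normal RR-interval differences > 10 ms,
    one common reading of the NN10 parameter."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    return int(np.sum(np.abs(np.diff(rr)) > 10.0))

# Hypothetical RR series (ms); deeper anesthesia tends to reduce variability.
print(nn10([812, 820, 835, 828, 829, 845, 846, 830]))   # 3
```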

Keywords: Anesthesia, Bispectrum index, ECG, EEG

Downloads: 1546
490 Aliasing Free and Additive Error in Spectra for Alpha Stable Signals

Authors: R. Sabre

Abstract:

This work focuses on the continuous-time symmetric alpha-stable process, frequently used to model signals with indefinitely growing variance and often observed with an unknown additive error. The objective of this paper is to estimate this error from discrete observations of the signal. To this end, we propose a method based on smoothing the observations via the Jackson polynomial kernel, taking into account the width of the interval where the spectral density is non-zero. This technique avoids the aliasing phenomenon encountered when the estimation is made from discrete observations of a continuous-time process. We have studied the convergence rate of the estimator and have shown that it improves when the spectral density is zero at the origin. Thus, we set up an estimator of the additive error that can be subtracted to approach the original, error-free signal.

Keywords: Spectral density, stable processes, aliasing, p-adic.

Downloads: 558
489 Criticality Assessment of Failures in Multipoint Communication Networks

Authors: Myriam Noureddine, Rachid Noureddine

Abstract:

Given current economic challenges and competition, all systems, whatever their field, must be efficient and operational throughout their activity. In this context, it is imperative to anticipate, identify, eliminate and estimate the failures of systems, which may lead to an interruption of their function. This need requires the management of possible risks, through an assessment of failure criticality following a dependability approach. On the other hand, with new information technologies and the evolution of the networking field, data transmission has evolved towards multipoint communication, which can simultaneously transmit information from a sender to multiple receivers. This article proposes a failure criticality assessment of a multipoint communication network, integrating a database of network failures and their quantifications. The proposed approach is validated on a case study, and the final result yields the criticality matrix associated with failures on the considered network, allowing the identification of acceptable risks.

Keywords: Dependability, failure, multipoint network, criticality matrix.

Downloads: 1587
488 Probabilistic Center Voting Method for Subsequent Object Tracking and Segmentation

Authors: Suryanto, Hyo-Kak Kim, Sang-Hee Park, Dae-Hwan Kim, Sung-Jea Ko

Abstract:

In this paper, we introduce a novel algorithm for object tracking in video sequences. In order to represent the object to be tracked, we propose a spatial color histogram model which encodes both the color distribution and spatial information. The object tracking from frame to frame is accomplished via a center voting and back projection method. The center voting method has every pixel in the new frame cast a vote on the whereabouts of the object center. The back projection method segments the object from the background. The segmented foreground provides information on object size and orientation, eliminating the need to estimate them separately. We do not make any assumption on camera motion; the proposed algorithm works equally well for object tracking in both static and moving camera videos.

Keywords: center voting, back projection, object tracking, size adaptation, non-stationary camera tracking.

Downloads: 1644
487 Real Time Video Based Smoke Detection Using Double Optical Flow Estimation

Authors: Anton Stadler, Thorsten Ike

Abstract:

In this paper, we present a video-based smoke detection algorithm based on TVL1 optical flow estimation. The main part of the algorithm is an accumulating system for the motion angles and upward motion speed of the flow field. We optimized the use of TVL1 flow estimation for detecting smoke with very low smoke density. To this end, we use adapted flow parameters and estimate the flow field on difference images. We show in theory and in evaluation that this improves the performance of smoke detection significantly. We evaluate the smoke algorithm using videos with different smoke densities and different backgrounds, and show that smoke detection is very reliable in varying scenarios. Furthermore, we verify that our algorithm is very robust in crowded scenes and disturbance videos.

Keywords: Low density, optical flow, upward smoke motion, video based smoke detection.

Downloads: 1398
486 Performance Analysis of Cellular Wireless Network by Queuing Priority Handoff calls

Authors: Raj Kumar Samanta, Partha Bhattacharjee, Gautam Sanyal

Abstract:

In this paper, a mathematical model is proposed to estimate the dropping probabilities of cellular wireless networks by queuing handoff calls instead of reserving guard channels. Usually, prioritized handling of handoff calls is done with the help of guard channel reservation. To evaluate the proposed model, gamma inter-arrival and general service time distributions have been considered. Prevention of some attempted calls from reaching the switching center, due to electromagnetic propagation failure or whimsical user behaviour (missed calls, prepaid balance, etc.), makes the inter-arrival time of the input traffic follow a gamma distribution. The performance is evaluated and compared with that of the guard channel scheme.

Keywords: Cellular wireless networks, non-classical traffic, mathematical model, guard channel, queuing, handoff.

Downloads: 2371
485 Numerical Simulation of Lightning Strike Direct Effects on Aircraft Skin Composite Laminate

Authors: Muhammad Khalil, Nader Abuelfoutouh, Gasser Abdelal, Adrian Murphy

Abstract:

Nowadays, the direct effects of lightning on aircraft are of great importance because of the massive use of composite materials. In comparison with metallic materials, composites present several weaknesses with respect to lightning strike direct effects. In particular, their low electrical and thermal conductivities lead to severe lightning strike damage. The lightning strike direct effects are burning, heating, magnetic force, sparking and arcing. As the problem is complex, we investigated it gradually. A magnetohydrodynamics (MHD) model is developed to simulate the lightning strike in order to estimate the damage to the composite materials. Then, a coupled thermal-electrical finite element analysis is used to study the interaction between the lightning arc and the composite laminate and to investigate the material degradation.

Keywords: Composite structures, lightning multiphysics, magnetohydrodynamics, coupled thermal-electrical analysis, thermal plasmas.

Downloads: 2567
484 On Bayesian Analysis of Failure Rate under Topp Leone Distribution using Complete and Censored Samples

Authors: N. Feroze, M. Aslam

Abstract:

The article is concerned with the analysis of the failure rate (shape parameter) under the Topp Leone distribution using a Bayesian framework. Different loss functions and a couple of noninformative priors have been assumed for posterior estimation. The posterior predictive distributions have also been derived. A simulation study has been carried out to compare the performance of different estimators. A real-life example has been used to illustrate the applicability of the results obtained. The findings of the study suggest that the precautionary loss function based on the Jeffreys prior and singly type II censored samples can effectively be employed to obtain the Bayes estimate of the failure rate under the Topp Leone distribution.

Keywords: loss functions, type II censoring, posterior distribution, Bayes estimators.

Downloads: 2530
483 A Dynamic Hybrid Option Pricing Model by Genetic Algorithm and Black-Scholes Model

Authors: Yi-Chang Chen, Shan-Lin Chang, Chia-Chun Wu

Abstract:

Unlike previous research, which paid attention mainly to model-driven option pricing, this study focuses extensively on the trading behavior of the option market. The Black-Scholes (B-S) model, for example, is one of the most famous option pricing models, but its assumptions have been questioned in several reviews of pricing models. This paper therefore stresses the importance of dynamic behavior in option pricing, which is also the reason for using the genetic algorithm (GA). Drawing on its mechanisms of natural selection and evolution, this study proposes a hybrid model, the Genetic-BS model, which combines the GA and the B-S model to estimate prices more accurately. In the final experiments, the results show that the estimated prices have a lower MAE than the prices calculated by either the B-S model or its enhancement, the Gram-Charlier GARCH (G-C GARCH) model. This work therefore concludes that the Genetic-BS pricing model is practical.
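A hedged sketch of the Black-Scholes building block and the MAE criterion used for the comparison; the GA itself is not shown, and the market quotes below are hypothetical placeholders that a GA would be tuned against.

```python
from math import log, sqrt, exp
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def mae(model_prices, market_prices):
    """Mean absolute error used to compare pricing models."""
    return sum(abs(m - p) for m, p in zip(model_prices, market_prices)) / len(market_prices)

# Hypothetical quotes; a GA would tune parameters (e.g. sigma) to reduce MAE.
market = [5.2, 3.1, 1.7]
model = [bs_call(100, K, 0.5, 0.02, 0.25) for K in (100, 105, 110)]
print(mae(model, market))
```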

Keywords: genetic algorithm, Genetic-BS, option pricing model.

Downloads: 2210
482 The Use of Degradation Measures to Design Reliability Test Plans

Authors: Stephen V. Crowder, Jonathan W. Lane

Abstract:

With short production development times, there is an increased need to demonstrate product reliability relatively quickly with minimal testing. In such cases there may be few, if any, observed failures. Thus it may be difficult to assess reliability using the traditional reliability test plans that measure only time (or cycles) to failure. For many components, degradation measures will contain important information about performance and reliability. These measures can be used to design a minimal test plan, in terms of the number of units placed on test and the duration of the test, necessary to demonstrate a reliability goal. In this work we present a case study involving an electronic component subject to degradation. The data, consisting of 42 degradation paths of cycles to failure, are first used to estimate a reliability function. Bootstrapping techniques are then used to perform power studies and develop a minimal reliability test plan for future production of this component.
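A minimal sketch of the bootstrap step described above: resampling cycles-to-failure to obtain a point estimate and a lower confidence bound on reliability at a target number of cycles. The Weibull placeholder data stand in for the 42 observed paths and are not the study's data.

```python
import numpy as np

def bootstrap_reliability(failure_cycles, t, n_boot=2000, seed=0):
    """Bootstrap estimate of reliability R(t) = P(cycles-to-failure > t)
    with a percentile lower confidence bound."""
    rng = np.random.default_rng(seed)
    data = np.asarray(failure_cycles, dtype=float)
    point = np.mean(data > t)
    boots = [np.mean(rng.choice(data, size=data.size, replace=True) > t)
             for _ in range(n_boot)]
    return point, np.percentile(boots, 5)   # estimate and 95% lower bound

# Placeholder cycles-to-failure standing in for the 42 degradation paths.
rng = np.random.default_rng(1)
cycles = rng.weibull(2.0, 42) * 10_000
print(bootstrap_reliability(cycles, t=5_000))
```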

Keywords: Degradation Measure, Time to Failure Distribution, Bootstrap.

Downloads: 1859
481 Model-Based Small Area Estimation with Application to Unemployment Estimates

Authors: Hichem Omrani, Philippe Gerber, Patrick Bousch

Abstract:

The problem of Small Area Estimation (SAE) is complex because of various information sources and insufficient data. In this paper, an approach to SAE is presented for decision-making at national, regional and local levels. We propose an Empirical Best Linear Unbiased Predictor (EBLUP) as an estimator in order to combine several information sources to evaluate various indicators. First, we present the urban audit project and its environmental, social and economic indicators. Secondly, we propose an approach for decision-making in order to estimate the indicators. An application is used to validate the theoretical proposal. Finally, a decision support system is presented, based on an open-source environment.
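As a hedged illustration of the EBLUP idea, the sketch below uses the Fay-Herriot area-level model, a standard small-area formulation (not necessarily the exact model of the paper): the predictor shrinks the direct survey estimate toward a regression-synthetic estimate according to the relative variances. All numbers are hypothetical.

```python
import numpy as np

def fay_herriot_eblup(direct_est, sampling_var, X, beta, sigma2_v):
    """EBLUP of small-area means under the Fay-Herriot area-level model:
    a shrinkage combination of the direct survey estimate and the
    regression-synthetic estimate X @ beta."""
    gamma = sigma2_v / (sigma2_v + sampling_var)     # shrinkage weights
    synthetic = X @ beta
    return gamma * direct_est + (1.0 - gamma) * synthetic

# Hypothetical unemployment rates for three areas with different precisions.
direct = np.array([0.081, 0.064, 0.102])      # direct survey estimates
var_d = np.array([0.0004, 0.0001, 0.0009])    # their sampling variances
X = np.array([[1.0, 0.30], [1.0, 0.25], [1.0, 0.40]])   # area covariates
beta = np.array([0.02, 0.18])                 # fitted regression coefficients
print(fay_herriot_eblup(direct, var_d, X, beta, sigma2_v=0.0003))
```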

Keywords: Small area estimation, statistical method, sampling, empirical best linear unbiased predictor (EBLUP), decision-making.

Downloads: 1686
480 Nonlinear Modeling of the PEMFC Based On NNARX Approach

Authors: Shan-Jen Cheng, Te-Jen Chang, Kuang-Hsiung Tan, Shou-Ling Kuo

Abstract:

The Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system, and traditional linear modeling approaches can hardly capture the structure of the PEMFC system correctly. For this reason, this paper presents nonlinear modeling of the PEMFC using the Neural Network Auto-Regressive model with eXogenous inputs (NNARX) approach. A multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accuracy of the NNARX model are tested by one-step-ahead prediction relating output voltage to input current, using experimental measurements of the PEMFC. The results show that the obtained nonlinear NNARX model can efficiently approximate the dynamic behavior of the PEMFC, with the model output and the measured system output in close agreement.
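A minimal sketch of the NNARX structure described above: lagged outputs and lagged exogenous inputs are stacked into regressor vectors and an MLP is trained for one-step-ahead prediction. The current/voltage series is a synthetic placeholder, not PEMFC data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_narx_regressors(u, y, n_lags=2):
    """Stack lagged inputs u and outputs y into NNARX regressor vectors
    [y(k-1)..y(k-n), u(k-1)..u(k-n)] with the one-step-ahead target y(k)."""
    X, target = [], []
    for k in range(n_lags, len(y)):
        X.append(np.r_[y[k - n_lags:k], u[k - n_lags:k]])
        target.append(y[k])
    return np.array(X), np.array(target)

# Placeholder current/voltage series standing in for PEMFC measurements.
rng = np.random.default_rng(0)
current = 10 + 5 * np.sin(np.linspace(0, 20, 1000)) + rng.normal(0, 0.2, 1000)
voltage = 0.9 - 0.02 * current + rng.normal(0, 0.005, 1000)

X, t = build_narx_regressors(current, voltage, n_lags=2)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
model.fit(X[:800], t[:800])
print("one-step-ahead prediction error:",
      np.mean(np.abs(model.predict(X[800:]) - t[800:])))
```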

Keywords: PEMFC, neural network, nonlinear identification, NNARX.

Downloads: 2174
479 Analyzing the Factors Effecting the Passenger Car Breakdowns using Com-Poisson GLM

Authors: N. Mamode Khan, V. Jowaheer

Abstract:

The number of breakdowns experienced by a machine is a highly under-dispersed count random variable, and its value can be attributed to factors related to the mechanical input and output of that machine. Analyzing such under-dispersed count observations as a function of the explanatory factors has been a challenging problem. In this paper, we aim at estimating the effects of various factors on the number of breakdowns experienced by a passenger car, based on a study performed in Mauritius over a year. We remark that the number of passenger car breakdowns is highly under-dispersed. These data are therefore modelled and analyzed using a Com-Poisson regression model. We use a quasi-likelihood estimation approach to estimate the parameters of the model. The under-dispersion parameter is estimated to be 2.14, justifying the appropriateness of the Com-Poisson distribution for modelling the under-dispersed count responses recorded in this study.

Keywords: Breakdowns, under-dispersion, Com-Poisson, generalized linear model, quasi-likelihood estimation

Downloads: 1526
478 Deterministic Method to Assess Kalman Filter Passive Ranging Solution Reliability

Authors: Ronald M. Yannone

Abstract:

For decades, the defense business has been plagued by not having a reliable, deterministic method to know when the Kalman filter solution for a passive ranging application is reliable enough for use by the fighter pilot. This has made it hard to accurately assess when the ranging solution can be used for situation awareness and weapons employment. To date, ad hoc rules of thumb have been used to assess when the estimate of the Kalman filter standard deviation on range is reliable. A reliable algorithm has been developed at BAE Systems Electronics & Integrated Solutions that monitors the Kalman gain matrix elements, and a patent is pending. The "settling" of the gain matrix elements relates directly to the time at which the passive ranging solution is within 10 percent of the true value. The focus of the paper is on surface-based passive ranging, but the method is applicable to airborne targets as well.

Keywords: Electronic warfare, extended Kalman filter (EKF), fighter aircraft, passive ranging, track convergence.

Downloads: 2045
477 The Role of Heat Pumps for the Decarbonization of European Regions

Authors: D. M. Mongelli, M. De Carli, L. Carnieletto, F. Busato

Abstract:

This research aims to provide a contribution to the reduction of fossil fuel use and the consequent reduction of CO2eq emissions for each European region. Simulations have been carried out in which fossil-fuel-fired heating boilers are replaced with air-to-water heat pumps whenever allowed by favorable environmental conditions (outdoor temperature, water temperature in emission systems, etc.). To estimate the potential coverage of high-temperature heat pumps in European regions, the energy profiles of buildings were considered together with the potential coefficient of performance (COP) of heat pumps operating with a flow temperature under variable climatic regulation. The electrification potential for heating buildings was estimated by dividing the 38 European countries examined into 179 territorial units. The results have been calculated in terms of energy savings and CO2eq emission reduction.
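A small sketch of the kind of substitution accounting implied above: heat delivered by a boiler versus the electricity a heat pump would draw at a given COP, converted to CO2-eq with illustrative emission factors (all numbers are assumptions, not the study's values).

```python
def co2_savings_kg(heat_demand_kwh, cop, boiler_efficiency=0.90,
                   ef_gas=0.20, ef_electricity=0.25):
    """CO2-eq saved by replacing a gas boiler with an air-to-water heat pump.
    Emission factors (kg CO2-eq per kWh) and efficiencies are illustrative."""
    boiler_emissions = heat_demand_kwh / boiler_efficiency * ef_gas
    heat_pump_emissions = heat_demand_kwh / cop * ef_electricity
    return boiler_emissions - heat_pump_emissions

# 12,000 kWh/year of heat with a seasonal COP of 3 for a mild region.
print(co2_savings_kg(12_000, cop=3.0))   # roughly 1,667 kg CO2-eq per year
```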

Keywords: Decarbonization, Space heating, Heat pumps, Energy policies.

Downloads: 147