Search results for: block type channel estimation.
1364 Evaluation of Model Evaluation Criterion for Software Development Effort Estimation
Authors: S. K. Pillai, M. K. Jeyakumar
Abstract:
Estimation of model parameters is necessary to predict the behavior of a system. Model parameters are estimated using optimization criteria, and most algorithms use historical data for this purpose. The known target (actual) values and the output produced by the model are compared, and the differences between the two form the basis for estimating the parameters. To compare different models developed from the same data, different criteria are used. Data obtained from small-scale projects are used here. We consider the software effort estimation problem using a radial basis function network. The accuracy comparison is made using various existing criteria for one and two predictors. We then propose a new criterion based on linear least squares for evaluation and compare the results for one and two predictors. We have considered another data set and evaluated prediction accuracy using the new criterion. The new criterion is easier to comprehend than a single statistic. Although software effort estimation is considered here, the method is applicable to any modeling and prediction task.
Keywords: Software effort estimation, accuracy, Radial Basis Function, linear least squares.
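A minimal Python sketch of the evaluation idea, assuming a Gaussian RBF regressor trained by linear least squares and a criterion obtained by regressing actual effort on predicted effort (a slope near 1 and an intercept near 0 indicating an unbiased model); the data, centre placement and width are hypothetical, not the authors' setup.

import numpy as np

def rbf_design(X, centers, width):
    # Gaussian radial basis functions evaluated at each sample.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Hypothetical small-project data: one predictor (size in KLOC) and actual effort.
rng = np.random.default_rng(0)
size = rng.uniform(1, 20, 40)[:, None]
effort = 3.0 * size[:, 0] ** 0.9 + rng.normal(0, 2, 40)

centers = np.linspace(1, 20, 8)[:, None]
Phi = rbf_design(size, centers, width=3.0)
w, *_ = np.linalg.lstsq(Phi, effort, rcond=None)   # train the RBF output weights
pred = Phi @ w

# Proposed-style criterion: linear least squares of actual effort on predicted effort.
A = np.column_stack([pred, np.ones_like(pred)])
(slope, intercept), *_ = np.linalg.lstsq(A, effort, rcond=None)
print(f"slope={slope:.3f}, intercept={intercept:.3f}  (ideal: 1 and 0)")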
1363 Performance Analysis of Quantum Cascaded Lasers
Authors: M. B. El_Mashade, I. I. Mahamoud, M. S. El_Tokhy
Abstract:
Improving the performance of the QCL through block diagram as well as mathematical models is the main scope of this paper. In order to enhance the performance of the device under study, the mathematical model parameters are used in a reliable manner so that optimum behavior is achieved. These parameters play the central role in specifying the optical characteristics of the considered laser source. Moreover, it is important to obtain a large amount of radiated power, where increasing the radiated power is the main outcome to be predicted from the behavior of quantum laser devices. It was found that there is good agreement between the values calculated from our mathematical model, those obtained with VisSim, and experimental results. These results demonstrate the strength of the implementation of both the mathematical and block diagram models.
Keywords: Quantum Cascaded Lasers (QCLs), Modeling, Block Diagram Programming, Intersubband transitions
1362 H∞ State Estimation of Neural Networks with Discrete and Distributed Delays
Abstract:
In this paper, using an improved Lyapunov-Krasovskii functional together with effective mathematical techniques, several sufficient conditions are derived to guarantee that the error system is globally asymptotically stable with H∞ performance, in which both the time delay and its time variation are fully considered. In order to obtain less conservative state estimation conditions, zero equalities and the reciprocally convex approach are employed. The estimator gain matrix can be obtained in terms of the solution to linear matrix inequalities. A numerical example is provided to illustrate the usefulness and effectiveness of the obtained results.
Keywords: H∞ performance, Neural networks, State estimation.
1361 A Discrete Filtering Algorithm for Impulse Wave Parameter Estimation
Authors: Khaled M. EL-Naggar
Abstract:
This paper presents a new method for estimating the mean curve of impulse voltage waveforms recorded during impulse tests. In practice, these waveforms are distorted by noise, oscillations and overshoot. The problem is formulated as an estimation problem, and estimation of the signal parameters is achieved using a fast and accurate technique based on a discrete dynamic filtering (DDF) algorithm. The main advantage of the proposed technique is its ability to produce estimates in a very short time and with a very high degree of accuracy. The algorithm uses sets of digital samples of the recorded impulse waveform. The proposed technique has been tested using simulated data of practical waveforms, and the effects of the number of samples and the data window size are studied. Results are reported and discussed.
Keywords: Digital Filtering, Estimation, Impulse wave, Stochastic filtering.
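The DDF algorithm itself is not reproduced here; as an illustrative stand-in, the sketch below fits the common double-exponential impulse model to a noisy, oscillation-distorted record using nonlinear least squares (SciPy curve_fit). The waveform constants and noise levels are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def impulse(t, v0, tau1, tau2):
    # Double-exponential model commonly used for impulse test waveforms.
    return v0 * (np.exp(-t / tau1) - np.exp(-t / tau2))

# Hypothetical noisy record of an impulse test (time in microseconds).
t = np.linspace(0, 100, 2000)
rng = np.random.default_rng(1)
clean = impulse(t, 1.0, 68.0, 0.4)
noisy = clean + 0.02 * rng.standard_normal(t.size) + 0.03 * np.sin(2 * np.pi * 0.5 * t)

p0 = (1.0, 50.0, 1.0)                      # rough initial guess
params, _ = curve_fit(impulse, t, noisy, p0=p0)
print("estimated v0, tau1, tau2:", params)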
1360 A New Criterion Pose and Shape of Objects for Collision Risk Estimation
Authors: Do Hyeung Kim, Dae Hee Seo, Byung Doo Kim, Byung Gil Lee
Abstract:
As many recent studies in the aviation and maritime domains have shown, strong doubts have been raised concerning the reliability of collision risk estimation. It has been shown that using only the position and velocity of objects can lead to imprecise results. In this paper, therefore, a new approach to collision risk estimation using the pose and shape of objects is proposed. Simulation results are presented validating the accuracy of the new criterion when incorporated into a fuzzy-logic-based collision risk algorithm.
Keywords: Collision risk, Pose and shape, Fuzzy logic.
1359 Estimation of Load Impedance in Presence of Harmonics
Authors: Khaled M. EL-Naggar
Abstract:
This paper presents a fast and efficient on-line technique for estimating the impedance of unbalanced loads in power systems. The proposed technique is an application of a discrete-time dynamic filter based on stochastic estimation theory, which is suitable for estimating parameters in a noisy environment. The algorithm uses sets of digital samples of the distorted voltage and current waveforms of the non-linear load to estimate the harmonic contents of these two signals. The non-linear load impedance is then calculated from these contents. The method is tested using practical data, and results are reported and compared with those obtained using the conventional least error squares technique. In addition to the very accurate results obtained, the method can detect and reject bad measurements. This is a very important advantage over conventional static estimation methods such as the least error squares method.
Keywords: Estimation, identification, Harmonics, Dynamic Filter.
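A sketch of the underlying calculation, assuming synchronously sampled voltage and current records: the harmonic contents are read from the FFT bins and the load impedance at each harmonic is the ratio of the corresponding phasors. The dynamic filter of the paper is not reproduced, and the signal parameters are hypothetical.

import numpy as np

f0, fs, n = 50.0, 6400.0, 1280        # fundamental, sampling rate, samples (10 cycles)
t = np.arange(n) / fs

# Hypothetical distorted voltage and current records (fundamental + 5th harmonic).
v = 230 * np.sin(2 * np.pi * f0 * t) + 12 * np.sin(2 * np.pi * 5 * f0 * t + 0.3)
i = 10 * np.sin(2 * np.pi * f0 * t - 0.5) + 1.5 * np.sin(2 * np.pi * 5 * f0 * t - 0.2)

V, I = np.fft.rfft(v), np.fft.rfft(i)

for h in (1, 5):                       # harmonic orders of interest
    k = int(round(h * f0 * n / fs))    # FFT bin of the h-th harmonic
    Z = V[k] / I[k]                    # complex load impedance at that harmonic
    print(f"h={h}: |Z|={abs(Z):.2f} ohm, angle={np.angle(Z, deg=True):.1f} deg")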
1358 Parameters Estimation of Double Diode Solar Cell Model
Authors: M. R. AlRashidi, K. M. El-Naggar, M. F. AlHajri
Abstract:
A new technique based on pattern search optimization is proposed in this paper for estimating different solar cell parameters. The estimated parameters are the generated photocurrent, saturation current, series resistance, shunt resistance, and ideality factor. The proposed approach is tested and validated using the double diode model to show its potential. The performance of the developed approach signifies its potential as a promising estimation tool.
Keywords: Solar Cell, Parameter Estimation, Pattern Search.
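A sketch under stated assumptions: the implicit double-diode equation is used as a residual over hypothetical I-V measurements, and the parameters are fitted with Nelder-Mead as a derivative-free stand-in for pattern search (pattern search itself is not available in SciPy). The "true" parameters and thermal voltage are illustrative only.

import numpy as np
from scipy.optimize import brentq, minimize

VT = 0.0259  # thermal voltage at roughly 300 K, in volts (assumed)

def implicit_residual(I, V, p):
    # Double-diode equation written as a residual that is zero at a consistent (V, I) point.
    iph, i01, i02, rs, rsh, n1, n2 = p
    vd = V + I * rs
    return (iph - i01 * np.expm1(vd / (n1 * VT))
                - i02 * np.expm1(vd / (n2 * VT)) - vd / rsh - I)

# Hypothetical I-V "measurements" generated from assumed true parameters.
true_p = (7.6, 2e-7, 3e-7, 0.02, 60.0, 1.0, 2.0)
V_meas = np.linspace(0.0, 0.45, 25)
I_meas = np.array([brentq(implicit_residual, -2.0, 8.0, args=(v, true_p)) for v in V_meas])

def objective(p):
    # Sum of squared implicit residuals; numerically invalid trial points are penalised.
    with np.errstate(over="ignore", divide="ignore", invalid="ignore"):
        r = implicit_residual(I_meas, V_meas, p)
    return float(np.sum(np.where(np.isfinite(r), r, 1e6) ** 2))

x0 = (7.0, 1e-7, 1e-7, 0.01, 50.0, 1.2, 1.8)
fit = minimize(objective, x0, method="Nelder-Mead",   # derivative-free stand-in for pattern search
               options={"maxiter": 20000, "fatol": 1e-12, "xatol": 1e-10})
print("fitted parameters:", fit.x, " residual:", fit.fun)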
1357 A Preliminary Study for Design of Automatic Block Reallocation Algorithm with Genetic Algorithm Method in the Land Consolidation Projects
Authors: Tayfun Çay, Yaşar İnceyol, Abdurrahman Özbeyaz
Abstract:
Land reallocation is one of the most important steps in land consolidation projects. Many different models have been proposed for land reallocation in the literature, such as fuzzy logic, block-priority-based land reallocation and spatial decision support systems. A model comprising four parts is considered for automatic block reallocation with the genetic algorithm method in land consolidation projects. These stages are: preparing data tables for a project land, determining the conditions and constraints of land reallocation, designing the command steps and logical flow chart of the reallocation algorithm, and finally writing the program code of the genetic algorithm. In this study, we designed the first three steps of the considered four-step model.
Keywords: Genetic algorithm, land consolidation, landholding, land reallocation.
1356 A Fuzzy Tumor Volume Estimation Approach Based On Fuzzy Segmentation of MR Images
Authors: Sara A. Yones, Ahmed S. Moussa
Abstract:
Quantitative measurements of tumors in general, and of tumor volume in particular, become more realistic with the use of magnetic resonance imaging, especially when the tumor's morphological changes become irregular and difficult to assess by clinical examination. However, tumor volume estimation strongly depends on the image segmentation, which is fuzzy by nature. In this paper a fuzzy approach is presented for tumor volume segmentation based on the fuzzy connectedness algorithm. The fuzzy affinity matrix resulting from segmentation is then used to estimate a fuzzy volume based on a certainty parameter, an alpha cut, defined by the user. The proposed method was shown to strongly affect treatment decisions. A statistical analysis was performed in this study to validate the results against a manual method for volume estimation, and the importance of using the alpha cut is further explained.
Keywords: Alpha Cut, Fuzzy Connectedness, Magnetic Resonance Imaging, Tumor volume estimation.
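A minimal sketch of the alpha-cut volume computation described above: voxels whose fuzzy membership reaches the user-defined alpha are counted and scaled by the voxel volume. The membership map and voxel spacing are hypothetical.

import numpy as np

def fuzzy_volume(membership, voxel_volume_mm3, alpha):
    # Alpha-cut volume: count voxels whose fuzzy membership is at least alpha.
    return np.count_nonzero(membership >= alpha) * voxel_volume_mm3

# Hypothetical fuzzy membership map (e.g. output of fuzzy connectedness), values in [0, 1].
rng = np.random.default_rng(2)
membership = rng.random((64, 64, 24))
voxel_volume = 0.9 * 0.9 * 3.0          # mm^3, assumed voxel spacing

for alpha in (0.5, 0.7, 0.9):
    print(f"alpha={alpha}: volume = {fuzzy_volume(membership, voxel_volume, alpha):.0f} mm^3")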
1355 Simultaneous Term Structure Estimation of Hazard and Loss Given Default with a Statistical Model using Credit Rating and Financial Information
Authors: Tomohiro Ando, Satoshi Yamashita
Abstract:
The objective of this study is to propose a statistical modeling method which enables simultaneous term structure estimation of the risk-free interest rate, hazard and loss given default, incorporating the characteristics of the bond issuing company such as credit rating and financial information. A reduced form model is used for this purpose. Statistical techniques such as spline estimation and the Bayesian information criterion are employed for parameter estimation and model selection. An empirical analysis is conducted using Japanese bond market data. The results of the empirical analysis confirm the usefulness of the proposed method.
Keywords: Empirical Bayes, Hazard term structure, Loss given default.
1354 Study on Scheduling of the Planning Method Using the Web-based Visualization System in a Shipbuilding Block Assembly Shop
Authors: Eui Koog Ahn, Gi-Nam Wang, Sang C. Park
Abstract:
Higher productivity and lower cost in the ship manufacturing process are required to maintain the international competitiveness of modern manufacturing industries. In shipbuilding, however, production follows the Engineering To Order (ETO) method and the production process is very complex, so designs change frequently and planning must be adjusted as the situation changes. Fixed production planning is therefore very difficult: a scheduler must first make rough plans and then revise them based on work progress and modifications. Data sharing in a shipbuilding block assembly shop is thus very important. In this paper, we propose a scheduling method applicable to the shipbuilding industry and a decision-making support system based on a web-based visualization system.
Keywords: Shipbuilding, Monitoring, Block assembly shop, Visualization
1353 An Optimized Multi-block Method for Turbulent Flows
Authors: M. Goodarzi, P. Lashgari
Abstract:
In many turbulent flows, a major part of the flow field involves no complicated turbulent behavior. In this research work, in order to reduce the required memory and CPU time, the flow field was decomposed into several blocks, each block with its own turbulence model. A two-dimensional backward-facing step was considered here, and four combinations of the Prandtl mixing length and standard k-ε models were implemented. Computer memory and CPU time consumption, in addition to numerical convergence and the accuracy of the obtained results, were investigated. Observations showed that a suitable combination of turbulence models in different blocks led to results with the same accuracy as using the higher-order turbulence model for all of the blocks, in addition to reductions in memory and CPU time consumption.
Keywords: Computer memory, CPU time, Multi-block method, Turbulence modeling.
1352 Sensitivity Analysis for Direction of Arrival Estimation Using Capon and Music Algorithms in Mobile Radio Environment
Authors: Mustafa Abdalla, Khaled A. Madi, Rajab Farhat
Abstract:
An array antenna system with innovative signal processing can improve the resolution of source direction of arrival (DoA) estimation. High-resolution techniques take advantage of array antenna structures to better process the incoming waves, and they also have the capability to identify the directions of multiple targets. This paper investigates the performance of two DoA estimation algorithms, namely Capon and MUSIC, on a uniform linear array (ULA). The simulation results show that for both the Capon and MUSIC algorithms the resolution of the DoA estimates improves as the number of snapshots, the number of array elements, the signal-to-noise ratio and the separation angle θ between the two sources increase.
Keywords: Antenna array, Capon, MUSIC, Direction-of-arrival estimation, signal processing, uniform linear arrays.
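A compact sketch of the two estimators on a ULA, assuming narrowband sources, half-wavelength spacing and a hypothetical scenario (8 elements, two sources, 200 snapshots); it reproduces the standard Capon and MUSIC pseudo-spectra, not the paper's specific simulation.

import numpy as np

def steering(theta_deg, m, d=0.5):
    # ULA steering vector for element spacing d (in wavelengths).
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(m) * np.sin(theta))

m, n_snap, sources = 8, 200, (-20.0, 25.0)
rng = np.random.default_rng(3)
A = np.column_stack([steering(t, m) for t in sources])
S = (rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
X = A @ S + noise
R = X @ X.conj().T / n_snap                  # sample covariance matrix
Rinv = np.linalg.inv(R)
eigval, eigvec = np.linalg.eigh(R)
En = eigvec[:, :m - len(sources)]            # noise subspace (smallest eigenvalues first)

grid = np.arange(-90, 90.5, 0.5)
p_capon, p_music = [], []
for th in grid:
    a = steering(th, m)
    p_capon.append(1.0 / np.real(a.conj() @ Rinv @ a))
    p_music.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
print("Capon peak at", grid[int(np.argmax(p_capon))], "deg;",
      "MUSIC peak at", grid[int(np.argmax(p_music))], "deg")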
1351 Two Lessons Learnt in Defining Intersections and Interfaces in Numerical Modeling with Plaxis
Authors: Mahdi Sadeghian, Somaye Sadeghian, Reza Dinarvand
Abstract:
This paper discusses two issues encountered in using PLAXIS. Both issues were observed while applying PLAXIS to estimate excavation-induced displacement. Column Soil Mixing (CSM) was applied to stabilise the excavation. It was found that the estimated excavation-induced deformation at the top of the CSM blocks depends strongly on the material type defining the pavement adjacent to the CSM blocks. A cohesive pavement material results in an unrealistic connection between the pavement and the CSM, even when an interface element is defined. To find the most realistic approach, the interface was defined in three different manners: (1) no interface elements were applied; (2) a non-cohesive soil layer was defined between the pavement and the CSM block to represent the friction between these materials; (3) the built-in interface elements in PLAXIS were used to define the boundary between the pavement and the CSM block. The results showed that option 2 gives the most realistic results. The second issue concerned the modelling of the contact line between the CSM block and an inclined layer underneath. The analysis results showed that the excavation-induced deformation depends strongly on how the PLAXIS user defines the contact area: if the contact area is defined as a point at which the CSM block intersects the layer underneath, the estimated lateral displacement of the CSM block is unrealistically lower than in the model in which the contact area is defined as a line.
Keywords: PLAXIS, FEM, CSM, excavation-induced deformation.
1350 Effect of Soil Corrosion in Failures of Buried Gas Pipelines
Authors: Saima Ali, Pathamanathan Rajeev, Imteaz A. Monzur
Abstract:
In this paper, a brief review of the corrosion mechanism in buried pipes and the modes of failure is provided together with the available corrosion models. A sensitivity analysis is performed to understand the influence of the corrosion model parameters on the remaining life estimation. Further, a probabilistic analysis is performed to propagate the uncertainty in the corrosion model to the estimation of the remaining life of the pipe. Finally, a comparison among the corrosion models on the basis of the remaining life estimation is provided to improve the renewal plan.
Keywords: Corrosion, pit depth, sensitivity analysis, exposure period.
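A sketch of the probabilistic remaining-life idea, assuming a power-law pit growth model d(t) = k·t^n with hypothetical parameter distributions, wall thickness and failure criterion; it only illustrates how uncertainty in the corrosion model propagates to the remaining-life estimate, not any specific model from the paper.

import numpy as np

rng = np.random.default_rng(4)
n_sim = 100_000
wall = 9.5                                  # pipe wall thickness, mm (assumed)
d_crit = 0.8 * wall                         # failure criterion: 80% wall loss (assumed)

# Power-law pit growth d(t) = k * t**n with uncertain parameters (hypothetical distributions).
k = rng.lognormal(mean=np.log(0.6), sigma=0.25, size=n_sim)   # mm / yr**n
n = rng.normal(loc=0.55, scale=0.05, size=n_sim)

# Time to reach the critical depth: t = (d_crit / k) ** (1 / n).
t_fail = (d_crit / k) ** (1.0 / n)
exposure = 20.0                             # years already in service (assumed)
remaining = np.clip(t_fail - exposure, 0.0, None)

print(f"mean remaining life: {remaining.mean():.1f} yr, "
      f"5th percentile: {np.percentile(remaining, 5):.1f} yr")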
1349 EML-Estimation of Multivariate t Copulas with Heuristic Optimization
Authors: Jin Zhang, Wing Lon Ng
Abstract:
In recent years, copulas have become very popular in financial research and actuarial science as they are more flexible in modelling the co-movements and relationships of risk factors than the conventional Pearson linear correlation coefficient. However, a precise estimation of the copula parameters is vital in order to correctly capture the (possibly nonlinear) dependence structure and joint tail events. In this study, we employ two optimization heuristics, namely Differential Evolution and Threshold Accepting, to tackle the parameter estimation of multivariate t distribution models in the EML approach. Since the evolutionary optimizer does not rely on gradient search, the EML approach can be applied to the estimation of more complicated copula models such as high-dimensional copulas. Our experimental study shows that the proposed method provides more robust and more accurate estimates than the IFM approach.
Keywords: Copula Models, Student t Copula, Parameter Inference, Differential Evolution, Threshold Accepting.
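A sketch of heuristic copula fitting, assuming a bivariate t copula whose pseudo-likelihood is maximised with SciPy's Differential Evolution; the pseudo-observations are synthetic and the procedure is a simplified stand-in for the EML approach, not the authors' implementation.

import numpy as np
from scipy.stats import t, multivariate_t, norm
from scipy.optimize import differential_evolution

rng = np.random.default_rng(5)

# Hypothetical pseudo-observations U in (0,1)^2, e.g. rank-transformed return series.
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
U = norm.cdf(z)

def neg_log_lik(params, U):
    rho, nu = params
    x = t.ppf(U, df=nu)                                  # map to t margins
    joint = multivariate_t.logpdf(x, loc=[0, 0],
                                  shape=[[1, rho], [rho, 1]], df=nu)
    margins = t.logpdf(x, df=nu).sum(axis=1)
    return -np.sum(joint - margins)                      # negative copula log-density

bounds = [(-0.95, 0.95), (2.1, 50.0)]                    # correlation and degrees of freedom
result = differential_evolution(neg_log_lik, bounds, args=(U,), seed=0)
print("estimated rho, nu:", result.x)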
1348 A Metric-Set and Model Suggestion for Better Software Project Cost Estimation
Authors: Murat Ayyıldız, Oya Kalıpsız, Sırma Yavuz
Abstract:
Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure, and you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection has a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural-network-based studies. In this study we have explored the reasons for those disappointing results and implemented different neural network models using augmented new metrics. The results obtained are compared with previous studies that use traditional metrics. To make these comparisons, two types of data have been used: the first part is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to the new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on a Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is the fact that data collection requires time and care. To make a more thorough use of the samples collected, k-fold cross validation is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied in software cost estimation studies with success.
Keywords: Software Metrics, Software Cost Estimation, Neural Network.
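A sketch of the modelling pipeline, assuming scikit-learn's MLPRegressor, standardised inputs and 5-fold cross validation scored with MMRE; the metric set and project data are hypothetical, not COCOMO'81 or the company data used in the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical project metrics (rows: projects, columns: size/complexity-style metrics).
rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(63, 5))
y = 10 + 20 * X[:, 0] + 8 * X[:, 1] ** 2 + rng.normal(0, 1.5, 63)   # effort, person-months

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))

mmre = []                                    # mean magnitude of relative error per fold
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    mmre.append(np.mean(np.abs(pred - y[test]) / np.abs(y[test])))
print(f"5-fold MMRE: {np.mean(mmre):.3f}")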
1347 Bayesian Inference for Phase Unwrapping Using Conjugate Gradient Method in One and Two Dimensions
Authors: Yohei Saika, Hiroki Sakaematsu, Shota Akiyama
Abstract:
We investigated the statistical performance of Bayesian inference using maximum entropy and MAP estimation for several models which approximate wave-fronts in remote sensing using SAR interferometry. Using Monte Carlo simulation for a set of wave-fronts generated by an assumed true prior, we found that the method of maximum entropy realized the optimal performance around the Bayes-optimal conditions when using the model of the true prior and the likelihood representing the optical measurement due to the interferometer. We also found that MAP estimation, regarded as a deterministic limit of maximum entropy, almost achieved the same performance as the Bayes-optimal solution for the set of wave-fronts. We then clarified that MAP estimation perfectly carries out phase unwrapping without using prior information, and that it realizes accurate phase unwrapping using the conjugate gradient (CG) method if the model of the true prior is assumed appropriately.
Keywords: Bayesian inference using maximum entropy, MAP estimation using conjugate gradient method, SAR interferometry.
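A one-dimensional sketch of the least-squares (MAP with a flat prior) limit: the unwrapped phase is obtained by solving the normal equations of the wrapped-difference fit with conjugate gradients. The wave-front is synthetic and the two-dimensional SAR case is not covered.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def wrap(p):
    # Wrap phase into (-pi, pi].
    return (p + np.pi) % (2 * np.pi) - np.pi

# Hypothetical 1-D wave-front: a smooth ramp observed modulo 2*pi.
n = 400
true_phase = np.linspace(0, 40, n) + 2.0 * np.sin(np.linspace(0, 6 * np.pi, n))
wrapped = wrap(true_phase)

# Least-squares unwrapping: minimise ||D*phi - wrap(D*psi)||^2,
# solved with conjugate gradients on the normal equations D^T D phi = D^T b.
D = diags([-np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n))
b = wrap(np.diff(wrapped))
phi, info = cg(D.T @ D, D.T @ b, atol=1e-10)
phi += true_phase[0] - phi[0]                # fix the arbitrary constant offset
print("cg converged:", info == 0, " max abs error:", np.abs(phi - true_phase).max())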
1346 Comparative Analysis of the Software Effort Estimation Models
Authors: Jaswinder Kaur, Satwinder Singh, Karanjeet Singh Kahlon
Abstract:
Accurate software cost estimates are critical to both developers and customers. They can be used for generating requests for proposals, contract negotiations, scheduling, monitoring and control. The exact relationship between the attributes used for effort estimation is difficult to establish, and a neural network is good at discovering relationships and patterns in the data. In this paper a comparative analysis among the existing Halstead Model, Walston-Felix Model, Bailey-Basili Model, Doty Model and a Neural Network Based Model is performed. The neural network outperformed the other considered models. Hence, we propose a neural network system as a soft computing approach to model the effort estimation of software systems.
Keywords: Effort Estimation, Neural Network, Halstead Model, Walston-Felix Model, Bailey-Basili Model, Doty Model.
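For reference, commonly cited forms of the four classical models (effort in person-months, size in KLOC) are sketched below; the coefficients are textbook values quoted from memory and should be checked against the original sources.

def halstead(kloc):      return 0.7 * kloc ** 1.50
def walston_felix(kloc): return 5.2 * kloc ** 0.91
def bailey_basili(kloc): return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):          return 5.288 * kloc ** 1.047     # form usually quoted for KLOC > 9

for kloc in (10, 50, 100):   # hypothetical project sizes in thousands of lines of code
    print(f"KLOC={kloc:4d}:  Halstead={halstead(kloc):7.1f}  "
          f"Walston-Felix={walston_felix(kloc):7.1f}  "
          f"Bailey-Basili={bailey_basili(kloc):7.1f}  Doty={doty(kloc):7.1f}")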
1345 Spread Spectrum Code Estimation by Particle Swarm Algorithm
Authors: Vahid R. Asghari, Mehrdad Ardebilipour
Abstract:
In the context of spectrum surveillance, a new method to recover the code of a spread spectrum signal is presented, where the receiver has no knowledge of the transmitter's spreading sequence. In our previous paper, we used a genetic algorithm (GA) to recover the spreading code. Although genetic algorithms are well known for their robustness in solving complex optimization problems, increasing the length of the code often leads to an unacceptably slow convergence speed. To solve this problem we introduce Particle Swarm Optimization (PSO) into code estimation in spread spectrum communication systems. In the search process for code estimation, the PSO algorithm has the merits of rapid convergence to the global optimum without being trapped in local suboptima, and good robustness to noise. In this paper we describe how to implement PSO as a component of a search algorithm for code estimation. Swarm intelligence offers a number of advantages due to the use of mobile agents, among them scalability, fault tolerance, adaptation, speed, modularity, autonomy, and parallelism. These properties make swarm intelligence very attractive for spread spectrum code estimation, and also make it suitable for a variety of other kinds of channels. Our results compare swarm-based algorithms with genetic algorithms and show the performance of the PSO algorithm in the code estimation process.
Keywords: Code estimation, Particle Swarm Optimization (PSO), Spread spectrum.
1344 Low-Complexity Channel Estimation Algorithm for MIMO-OFDM Systems
Authors: Ali Beydoun, Hamzé H. Alaeddine
Abstract:
One of the main challenges in MIMO-OFDM systems, in order to achieve the expected performance in terms of data rate and robustness against multi-path fading channels, is channel estimation. Several methods have been proposed in the literature based on either least squares (LS) or minimum mean squared error (MMSE) estimators. These methods have high implementation complexity as they require the inversion of large matrices. In order to overcome this problem and reduce the complexity, this paper presents a solution that benefits from the use of the STBC encoder and transforms the channel estimation process into a set of simple linear operations. The proposed method is evaluated via simulation in an AWGN-Rayleigh fading channel. Simulation results show a maximum reduction of 6.85% in the bit error rate (BER) compared to that obtained in the ideal case where the receiver has perfect knowledge of the channel.
Keywords: Channel estimation, MIMO, OFDM, STBC, CAZAC sequence.
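A sketch of the baseline such methods start from: block-type (full pilot symbol) least-squares channel estimation, which reduces to one complex division per subcarrier. The channel, pilot design and noise level are hypothetical, and the paper's STBC/CAZAC-based method itself is not reproduced.

import numpy as np

rng = np.random.default_rng(7)
n_sc = 64                                  # number of OFDM subcarriers

# Hypothetical 4-tap multipath channel and known unit-modulus pilot symbols.
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(8)
H_true = np.fft.fft(h, n_sc)
pilots = np.exp(1j * np.pi * rng.integers(0, 4, n_sc) / 2)

Y = H_true * pilots + 0.05 * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))

# Least-squares estimate: one division per subcarrier, no large matrix inversion needed
# when a whole pilot symbol (block-type pilot) is available.
H_ls = Y / pilots
print("mean squared estimation error:", np.mean(np.abs(H_ls - H_true) ** 2))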
1343 Bootstrap Confidence Intervals and Parameter Estimation for Zero Inflated Strict Arcsine Model
Authors: Y. N. Phang, E. F. Loh
Abstract:
The zero-inflated strict arcsine model is a newly developed model which is found to be appropriate for modeling overdispersed count data. In this study, the maximum likelihood estimation method is used to estimate the parameters of the zero-inflated strict arcsine model. Bootstrapping is then employed to compute confidence intervals for the estimated parameters.
Keywords: overdispersed count data, maximum likelihood estimation, simulated annealing, BCa confidence intervals.
1342 Classification of Extreme Ground-Level Ozone Based on Generalized Extreme Value Model for Air Monitoring Station
Authors: Siti Aisyah Zakaria, Nor Azrita Mohd Amin, Noor Fadhilah Ahmad Radi, Nasrul Hamidin
Abstract:
Higher ground-level ozone (GLO) concentrations adversely affect human health, vegetation and activities in the ecosystem. In Malaysia, most analyses of GLO concentration are carried out using the average GLO concentration, which refers to the centre of the distribution, to make a prediction or estimation. However, analysis which focuses on the higher or extreme values of GLO concentration is rarely explored. Hence, the objective of this study is to classify the tail behaviour of GLO using the generalized extreme value (GEV) distribution and to estimate the return level using the corresponding GEV sub-models (Gumbel, Weibull, and Fréchet). The results show that the Weibull distribution, also known as a short-tailed distribution and considered to have less extreme behaviour, is the best-fitted distribution for four selected air monitoring stations in Peninsular Malaysia, namely Larkin, Pelabuhan Kelang, Shah Alam, and Tanjung Malim, while the Gumbel distribution, considered a medium-tailed distribution, is the best-fitted distribution for the Nilai station. The return level of GLO concentration at the Shah Alam station is comparatively higher than at the other stations. Overall, return levels increase with increasing return periods, but the increment depends on the type of tail of the GEV distribution. We conduct this study using the maximum likelihood estimation (MLE) method to estimate the parameters at four selected stations in Peninsular Malaysia. Next, the fit of the block maxima series to the GEV distribution is validated using a probability plot, a quantile plot and a likelihood ratio test. A profile likelihood confidence interval is used to verify the type of GEV distribution. These results are important as a guide for early notification of future extreme ozone events.
Keywords: Extreme value theory, generalized extreme value distribution, ground-level ozone, return level.
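A sketch of the block-maxima workflow, assuming SciPy's genextreme (note that its shape parameter c equals −ξ): MLE fitting of synthetic annual ozone maxima followed by return-level computation from the fitted inverse survival function. The data and parameter values are hypothetical.

import numpy as np
from scipy.stats import genextreme

# Hypothetical annual (block) maxima of hourly ground-level ozone, in ppb.
block_maxima = genextreme.rvs(c=0.1, loc=95, scale=12, size=30, random_state=8)

c, loc, scale = genextreme.fit(block_maxima)          # MLE fit (scipy uses c = -xi)
print(f"shape c={c:.3f}, loc={loc:.1f}, scale={scale:.1f}")

for T in (10, 50, 100):                               # return periods in blocks (e.g. years)
    level = genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
    print(f"{T}-block return level: {level:.1f} ppb")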
1341 Effective Class of Discreet Programing Problems
Authors: Kaziyev G. Z., Nabiyeva G. S., Kalizhanova A.U.
Abstract:
We present herein a concise view of discrete programming models and methods, together with an analysis of these models and methods. On the basis of discrete programming models, a new class of problems is elaborated and offered, i.e. block-symmetry models and methods for stating and solving applied tasks.
Keywords: Discrete programming, block-symmetry, analysis methods, information systems development.
1340 Performances Comparison of Neural Architectures for On-Line Speed Estimation in Sensorless IM Drives
Authors: K.Sedhuraman, S.Himavathi, A.Muthuramalingam
Abstract:
The performance of a sensorless controlled induction motor drive depends on the accuracy of the estimated speed. Conventional estimation techniques, being mathematically complex, require more execution time, resulting in poor dynamic response. The nonlinear mapping capability and powerful learning algorithms of neural networks provide a promising alternative for on-line speed estimation. The on-line speed estimator requires the NN model to be accurate, simple in design, structurally compact and computationally light to ensure faster execution and effective control in real-time implementation, and this in turn depends to a large extent on the type of neural architecture. This paper investigates three types of neural architectures for on-line speed estimation and compares their performance in terms of accuracy, structural compactness, computational complexity and execution time. The suitable neural architecture for on-line speed estimation is identified and the promising results obtained are presented.
Keywords: Sensorless IM drives, rotor speed estimators, artificial neural network, feed-forward architecture, single neuron cascaded architecture.
1339 Estimation of Component Reusability through Reusability Metrics
Authors: Aditya Pratap Singh, Pradeep Tomar
Abstract:
Software reusability is an essential characteristic of Component-Based Software (CBS). Component reusability is an important measure for the effective reuse of components in CBS. The attributes of reusability proposed by various researchers are studied, and four of them are identified as potential factors affecting reusability. This paper proposes a metric for the reusability estimation of black-box software components, along with metrics for interface complexity, understandability, customizability and reliability. An experiment for reusability estimation is performed through a case study on a sample web application using a real-world component.
Keywords: Component-based software, component reusability, customizability, interface complexity, reliability, understandability.
1338 Distributed Estimation Using an Improved Incremental Distributed LMS Algorithm
Authors: Amir Rastegarnia, Mohammad Ali Tinati, Azam Khalili
Abstract:
In this paper we consider the problem of distributed adaptive estimation in wireless sensor networks for two different observation noise conditions. In the first case, we assume that there are some sensors with high observation noise variance (noisy sensors) in the network. In the second case, a different observation noise variance is assumed for each sensor, which is closer to a real scenario. In both cases, an initial estimate of each sensor's observation noise is obtained. For the first case, we show that when such sensors are present in the network, the performance of conventional distributed adaptive estimation algorithms such as the incremental distributed least mean square (IDLMS) algorithm decreases drastically, and that detecting and ignoring these sensors leads to better estimation performance. We then propose a simple algorithm to detect these noisy sensors and modify the IDLMS algorithm to deal with them. For the second case, we propose a new algorithm in which the step-size parameter is adjusted for each sensor according to its observation noise variance. As the simulation results show, the proposed methods outperform the IDLMS algorithm under the same conditions.
Keywords: Distributed estimation, sensor networks, adaptive filter, IDLMS.
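A simplified sketch of an incremental LMS pass over a ring of sensors in the second scenario, with each node's step size tied inversely to its (assumed known) observation noise variance; the network, data model and step-size rule are illustrative, not the authors' exact algorithm.

import numpy as np

rng = np.random.default_rng(9)
n_nodes, dim, n_iter = 10, 4, 2000
w_true = rng.standard_normal(dim)

# Different observation-noise variance at every sensor (the second scenario above).
noise_var = rng.uniform(0.01, 1.0, n_nodes)
mu = np.clip(0.02 / noise_var, None, 0.2)   # noisier sensors get a smaller step size

w = np.zeros(dim)                           # estimate circulated around the ring
for _ in range(n_iter):
    for k in range(n_nodes):                # one incremental pass over the cycle
        u = rng.standard_normal(dim)
        d = u @ w_true + np.sqrt(noise_var[k]) * rng.standard_normal()
        w = w + mu[k] * (d - u @ w) * u     # local LMS update at node k

print("estimation error norm:", np.linalg.norm(w - w_true))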
1337 Development of Cooling Demand by Computerize
Authors: Bobby Anak John, Zamri Noranai, Md. Norrizam Mohmad Jaat, Hamidon Salleh, Mohammad Zainal Md Yusof
Abstract:
Air conditioning is mainly used as a human comfort cooling medium. It is used more in high-temperature countries such as Malaysia. Proper estimation of the cooling load will achieve the ideal temperature, whereas improper estimation can lead to over-estimation or under-estimation. The ideal temperature should be comfortable enough. This study develops a program to calculate the ideal cooling load demand, matched with the heat gain, so that cooling load estimation becomes easy. The objective of this study is to develop a user-friendly and easily accessible cooling load program, so that the cooling load can be estimated by any individual rather than by rule of thumb. The software is developed using a MATLAB GUI and is valid only for common buildings in Malaysia. An office building was selected as a case study to verify the applicability and accuracy of the developed software. In conclusion, the main objective was successfully met: the developed software is user-friendly and easily estimates the cooling load demand.
Keywords: Cooling Load, Heat Gain, Building and GUI.
1336 Learning Block Memories with Metric Networks
Authors: Mario Gonzalez, David Dominguez, Francisco B. Rodriguez
Abstract:
An attractor neural network on the small-world topology is studied. A learning pattern is presented to the network, then a stimulus carrying local information is applied to the neurons, and the retrieval of the block-like structure is investigated. Synaptic noise decreases the memory capability. The change of stability from local to global attractors is shown to depend on the long-range character of the network connectivity.
Keywords: Hebbian learning, image recognition, small world, spatial information.
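A toy sketch of the setting, assuming a Watts-Strogatz small-world graph, Hebbian weights restricted to existing links, and retrieval of a single block-structured pattern from a noisy local stimulus; it illustrates the mechanism, not the paper's model or parameters.

import numpy as np
import networkx as nx

rng = np.random.default_rng(10)
n, k, p_rewire = 400, 10, 0.1               # neurons, neighbours, small-world rewiring

# Small-world connectivity and a single block-structured pattern to memorise.
G = nx.watts_strogatz_graph(n, k, p_rewire, seed=0)
A = nx.to_numpy_array(G)                    # 0/1 symmetric adjacency matrix
pattern = np.where(np.arange(n) % 100 < 50, 1, -1)   # alternating blocks of +1 / -1

W = A * np.outer(pattern, pattern) / k      # Hebbian weights restricted to existing links

# Stimulus: a noisy, locally informative version of the stored pattern.
state = np.where(rng.random(n) < 0.8, pattern, -pattern)
for _ in range(20):                         # synchronous retrieval dynamics
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = np.mean(state * pattern)          # 1.0 means perfect block retrieval
print("overlap with stored pattern:", overlap)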
1335 Craniometric Analysis of Foramen Magnum for Estimation of Sex
Authors: Tanuj Kanchan, Anadi Gupta, Kewal Krishan
Abstract:
The human skull exhibits numerous sexually dimorphic traits. Estimation of sex is a challenging task, especially when only a part of the skull is brought for medicolegal investigation. The present research was planned to evaluate the sexing potential of the dimensions of the foramen magnum in forensic identification by craniometric analysis. The length and breadth of the foramen magnum were measured using Vernier calipers and the area of the foramen magnum was calculated. The length, breadth and area of the foramen magnum were found to be larger in males than in females. The sexual dimorphism index was calculated to estimate the sexing potential of each variable. The study observations suggest the limited utility of craniometric analysis of the foramen magnum during the examination of the skull and its parts for estimation of sex.
Keywords: Forensic Anthropology, Skeletal remains, Identification, Sex estimation, Foramen magnum.