Search results for: interval estimation.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1288

1108 Aliasing Free and Additive Error in Spectra for Alpha Stable Signals

Authors: R. Sabre

Abstract:

This work focuses on continuous-time symmetric alpha-stable processes, which are frequently used to model signals with indefinitely growing variance and are often observed with an unknown additive error. The objective of this paper is to estimate this error from discrete observations of the signal. To that end, we propose a method based on smoothing the observations with the Jackson polynomial kernel while taking into account the width of the interval on which the spectral density is non-zero. This technique avoids the aliasing phenomenon encountered when the estimation is made from discrete observations of a continuous-time process. We study the convergence rate of the estimator and show that it improves when the spectral density vanishes at the origin. We thus obtain an estimator of the additive error that can be subtracted to approach the original, error-free signal.

Keywords: Spectral density, stable processes, aliasing, p-adic.

1107 System-Level Energy Estimation for SoC based on the Dynamic Behavior of Embedded Software

Authors: Yoshifumi Sakamoto, Kouichi Ono, Takeo Nakada, Yousuke Kubo, Hiroto Yasuura

Abstract:

This paper describes a system-level SoC energy consumption estimation method based on the dynamic behavior of embedded software in the early stages of SoC development. A major problem in SoC development is rework caused by unreliable energy consumption estimates at these early stages. The energy consumption of an SoC used in embedded systems is strongly affected by the dynamic behavior of the software. In the early stages of SoC development, modeling with a high level of abstraction is required for both the dynamic behavior of the software and the behavior of the SoC. We estimate the energy consumption by a UML model-based simulation. The proposed method is applied to an actual embedded system in an MFP. The energy consumption estimation of the SoC is more accurate than with conventional methods, and the proposed method is promising for reducing the chance of development rework in SoC development.

Keywords: SoC, Embedded System, Energy Consumption, Dynamic behavior, UML, Modeling, Model-based simulation

1106 Adaptive Extended Kalman Filter for Ballistic Missile Tracking

Authors: Gaurav Kumar, Dharmbir Prasad, Rudra Pratap Singh

Abstract:

In the current work, an adaptive extended Kalman filter (AEKF) is presented for the solution of a ground-radar-based ballistic missile (BM) tracking problem in the re-entry phase with unknown ballistic coefficient. The estimation of the trajectory of any BM in the re-entry phase is extremely difficult because of the highly non-linear motion of the BM. The estimation accuracy of the AEKF has been tested on a typical test target tracking problem adopted from the literature. Further, the AEKF approach is compared with the extended Kalman filter (EKF). The simulation results indicate the superiority of the AEKF in solving joint parameter and state estimation problems.
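
As a rough illustration of the filtering loop underlying the AEKF (not the authors' exact formulation; the state vector, dynamics, Jacobians and noise matrices below are placeholders to be supplied by the user), a minimal extended Kalman filter predict/update step could look like this:

    import numpy as np

    def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
        """One extended Kalman filter predict/update cycle.

        x, P        : prior state estimate and covariance
        z           : new measurement (e.g. radar range/bearing)
        f, h        : nonlinear state-transition and measurement functions
        F_jac, H_jac: their Jacobians evaluated at the current estimate
        Q, R        : process and measurement noise covariances
        """
        # Predict: propagate the state through the nonlinear dynamics.
        x_pred = f(x)
        F = F_jac(x)
        P_pred = F @ P @ F.T + Q

        # Update: correct the prediction with the radar measurement.
        H = H_jac(x_pred)
        S = H @ P_pred @ H.T + R             # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

An adaptive EKF typically rescales Q or R online from the innovation sequence, and the unknown ballistic coefficient would be appended to the state vector for joint parameter and state estimation.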

Keywords: Adaptive, AEKF, ballistic missile, EKF, re-entry phase, target tracking.

1105 A Family Cars' Life Cycle Cost (LCC)-Oriented Hybrid Modelling Approach Combining ANN and CBR

Authors: Xiaochuan Chen, Jianguo Yang, Beizhi Li

Abstract:

Design for cost (DFC) is a method that reduces life cycle cost (LCC) from the designer's point of view. The multiple domain features mapping (MDFM) methodology was introduced in DFC; using MDFM, design features can be used to estimate the LCC. From the DFC viewpoint, the design features of family cars were obtained, such as overall dimensions, engine power and emission volume. At the conceptual design stage, the cars' LCC was estimated using a back propagation (BP) artificial neural network (ANN) and case-based reasoning (CBR). Similarity among cases in the CBR method was measured in Hamming space. The Levenberg-Marquardt (LM) algorithm and a genetic algorithm (GA) were used in the ANN. The differences between the CBR and ANN LCC estimation models are discussed; each method on its own has shortcomings, and combining ANN and CBR improved the accuracy of the results. First, the ANN was used to select the design features that affect LCC. Second, the LCC estimates produced by the ANN were used to raise the accuracy of the LCC estimation in the CBR method. Third, the ANN was used to estimate LCC errors and to correct the CBR estimates when their accuracy was insufficient. Finally, economy family cars and a sport utility vehicle (SUV) were given as LCC estimation cases using this hybrid approach combining ANN and CBR.
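
As a minimal sketch of the CBR half of such a hybrid (assuming binary-coded design features so that similarity can be measured in Hamming space; the feature coding and LCC values below are purely illustrative), case retrieval could look like this:

    import numpy as np

    def hamming_similarity(a, b):
        """Similarity of two binary-coded feature vectors in Hamming space."""
        a, b = np.asarray(a), np.asarray(b)
        return 1.0 - np.count_nonzero(a != b) / a.size

    def cbr_estimate_lcc(query, case_features, case_lcc):
        """Return the LCC of the most similar stored case (nearest neighbour)."""
        sims = [hamming_similarity(query, c) for c in case_features]
        return case_lcc[int(np.argmax(sims))]

    # Hypothetical case base: binary-coded design features and known LCC values.
    cases = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 1]]
    lccs  = [21000.0, 17500.0, 26800.0]
    print(cbr_estimate_lcc([1, 0, 1, 0], cases, lccs))

In the hybrid scheme described above, the ANN would pre-select the features used in this encoding and provide a correction when the retrieved case is not accurate enough.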

Keywords: case-based reasoning, life cycle cost (LCC), artificial neural networks (ANN), family cars

1104 Methodology of Estimating Assembly Cost by MODAPTS

Authors: Heung Jae Cho, Jae Il Park

Abstract:

This paper presents the development of a MODAPTS-based cost estimating system to help designers estimate the manufacturing cost of assembly products using information that comes from workers in the field. Competition on manufacturing cost is getting harder because of the development of information and telecommunication technologies as well as globalization, so the accuracy of assembly cost estimation is becoming more important. DFA and MODAPTS are useful methods for measuring working hours, but both are typically used only as timetables. In this paper, we therefore suggest a process for measuring working hours with MODAPTS that incorporates accurate information from the working field, and we propose a method for estimating the assembly cost accurately from this real information. This research can help designers estimate the assembly cost more accurately and is also useful for companies concerned with reducing product cost.
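
As an illustration of the MODAPTS time build-up (a sketch only; the codes, repetition count and labour rate are hypothetical, and 0.129 s per MOD is the commonly quoted unit value rather than a figure taken from this paper):

    MOD_SECONDS = 0.129  # commonly used duration of one MOD unit

    def modapts_seconds(codes):
        """Sum the MOD values encoded in codes such as 'M3', 'G1', 'P2'.

        The trailing digit of each code is its duration in MODs.
        """
        mods = sum(int(code[-1]) for code in codes)
        return mods * MOD_SECONDS

    def assembly_cost(codes, labour_rate_per_hour):
        """Convert the MODAPTS time of an assembly task into a labour cost."""
        hours = modapts_seconds(codes) / 3600.0
        return hours * labour_rate_per_hour

    # Hypothetical task: move (M3), get (G1), put (P2), repeated 1000 times.
    task = ["M3", "G1", "P2"]
    print(modapts_seconds(task), "s per cycle")
    print(assembly_cost(task * 1000, labour_rate_per_hour=20.0), "per 1000 cycles")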

Keywords: Cost estimation, DFA, MODAPTS, Assembly cost

1103 Statistical Estimation of Spring-back Degree Using Texture Database

Authors: Takashi Sakai, Shinsaku Kikuta, Jun-ichi Koyama

Abstract:

Using a texture database, a statistical estimation of spring-back was conducted in this study. Both the spring-back in bending deformation and the experimental data related to crystal orientation show significant dispersion, so a probabilistic statistical approach was established for the proper quantification of these values. Correlation was examined among the distribution F(x) of the spring-back, F(x) of the buildup fraction of three orientations after 92° bending, and F(x) of the as-received part on the basis of the three-parameter Weibull distribution. The resulting spring-back estimation using the texture database yielded excellent estimates compared with experimental values.
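
A minimal sketch of the three-parameter Weibull fit used for F(x) (with synthetic spring-back values, not the paper's measurements) could rely on SciPy:

    import numpy as np
    from scipy import stats

    # Illustrative spring-back measurements in degrees (synthetic data).
    springback = np.array([2.1, 2.4, 2.2, 2.8, 2.6, 2.3, 2.9, 2.5, 2.7, 2.2])

    # Fit a three-parameter Weibull distribution: shape c, location, scale.
    c, loc, scale = stats.weibull_min.fit(springback)

    # F(x): probability that the spring-back does not exceed a given value.
    x = 2.5
    print("shape=%.2f loc=%.2f scale=%.2f" % (c, loc, scale))
    print("F(%.1f) = %.3f" % (x, stats.weibull_min.cdf(x, c, loc=loc, scale=scale)))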

Keywords: Bending, Spring-back, Database, Crystallographic Orientation, Texture, SEM-EBSD, Weibull distribution, Statistical analysis.

1102 Reachable Set Bounding Estimation for Distributed Delay Systems with Disturbances

Authors: Li Xu, Shouming Zhong

Abstract:

The reachable set bounding estimation for distributed delay systems with disturbances is a new problem. In this paper, we consider this problem subject not only to time-varying delay and polytopic uncertainties but also to distributed delays, which have not been studied fully until now. An improved non-ellipsoidal reachable set estimation for neural networks with time-varying delay is obtained by means of the maximal Lyapunov-Krasovskii functional, constructed as the pointwise maximum of a family of Lyapunov-Krasovskii functionals corresponding to the vertices of the uncertain polytope. On the other hand, matrix inequalities containing only one scalar, together with Matlab's LMI Toolbox, are utilized to give a non-ellipsoidal description of the reachable set. Finally, numerical examples are given to illustrate the results.

Keywords: Reachable set, Distributed delay, Lyapunov-Krasovskii function, Polytopic uncertainties.

1101 Reliability Analysis of Press Unit using Vague Set

Authors: S. P. Sharma, Monica Rani

Abstract:

In conventional reliability assessment, the reliability data of system components are treated as crisp values. The collected data contain uncertainties due to errors by human beings or machines or other sources, and these uncertainty factors limit the understanding of system component failure because the data are incomplete. In such situations, classical methods need to be generalized to a fuzzy environment for studying and analyzing the systems of interest. Fuzzy set theory has been proposed to handle such vagueness by generalizing the notion of membership in a set. Essentially, in a fuzzy set (FS) each element is associated with a point value selected from the unit interval [0, 1], which is termed the grade of membership in the set. A vague set (VS), like an intuitionistic fuzzy set (IFS), is a further generalization of an FS. Instead of the point-based membership used in an FS, interval-based membership is used in a VS, which is more expressive in capturing the vagueness of data. In the present paper, vague set theory coupled with the conventional Lambda-Tau method is presented for the reliability analysis of repairable systems. The methodology uses Petri nets (PN) to model the system instead of a fault tree because this allows efficient simultaneous generation of minimal cut and path sets. The presented method is illustrated with the press unit of a paper mill.

Keywords: Lambda-Tau methodology, Petri nets, repairable system, vague fuzzy set.

1100 Software Effort Estimation Models Using Radial Basis Function Network

Authors: E. Praynlin, P. Latha

Abstract:

Software effort estimation is the process of estimating the effort required to develop software. From the effort estimate, the cost and schedule required to develop the software can be determined. An accurate estimate helps the developer allocate resources appropriately in order to avoid cost and schedule overruns. Several methods are available to estimate the effort, among which soft-computing-based methods play a prominent role. Software cost estimation involves a great deal of uncertainty, and among soft computing methods neural networks are good at handling uncertainty. In this paper, a Radial Basis Function Network (RBFN) is compared with a back propagation network; the results are validated using six data sets, and it is found that the RBFN is better suited to estimating the effort. The results are validated using two tests: an error test and a statistical test.
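
The error test mentioned above is usually based on the Mean Magnitude of Relative Error (MMRE) listed in the keywords; a small sketch of that comparison (with made-up effort values rather than the paper's six data sets):

    import numpy as np

    def mmre(actual, predicted):
        """Mean Magnitude of Relative Error: mean(|actual - predicted| / actual)."""
        actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
        return np.mean(np.abs(actual - predicted) / actual)

    # Illustrative effort values in person-months (hypothetical data).
    actual    = [120.0, 60.0, 250.0, 33.0]
    rbfn_pred = [110.0, 66.0, 240.0, 30.0]
    bp_pred   = [ 95.0, 75.0, 210.0, 40.0]
    print("RBFN MMRE:", mmre(actual, rbfn_pred))
    print("BP   MMRE:", mmre(actual, bp_pred))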

Keywords: Software cost estimation, Radial Basis Function Network (RBFN), Back propagation function network, Mean Magnitude of Relative Error (MMRE).

1099 Recent Trends in Nonlinear Methods of HRV Analysis: A Review

Authors: Ramesh K. Sunkaria

Abstract:

Linear methods of heart rate variability (HRV) analysis, such as non-parametric methods (e.g. fast Fourier transform analysis) and parametric methods (e.g. autoregressive modeling), have become established non-invasive tools for assessing cardiac health, but their sensitivity and specificity were found to be lower than expected, with a positive predictive value below 30%. This may be due to treating the RR-interval series as stationary and re-sampling it prior to analysis, whereas in reality it is not stationary. This paper reviews non-linear methods of HRV analysis, such as correlation dimension, largest Lyapunov exponent, power-law slope, fractal analysis, detrended fluctuation analysis and complexity measures, which are currently becoming popular because they use the actual RR-interval series. These methods are expected to provide a highly accurate cardiac health prognosis.
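
As an example of one of the reviewed measures, a rough sample entropy sketch for an RR-interval series is given below (implementation details such as the exact template counts vary across definitions in the literature; the series here is synthetic):

    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """Sample entropy of a 1-D series such as an RR-interval series.

        m : template length, r : tolerance (default 0.2 * std of the series).
        """
        x = np.asarray(x, float)
        if r is None:
            r = 0.2 * np.std(x)

        def count_matches(length):
            templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
            count = 0
            for i in range(len(templates)):
                # Chebyshev distance to every other template; exclude the self-match.
                d = np.max(np.abs(templates - templates[i]), axis=1)
                count += np.count_nonzero(d <= r) - 1
            return count

        B = count_matches(m)       # matches of length m
        A = count_matches(m + 1)   # matches of length m + 1
        return -np.log(A / B)

    # Synthetic RR intervals in milliseconds (not real ECG data).
    rr = 800 + 40 * np.random.randn(300)
    print(sample_entropy(rr))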

Keywords: chaos, nonlinear dynamics, sample entropy, approximate entropy, detrended fluctuation analysis.

1098 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method

Authors: M. M. Qasaymeh, M. A. Khodeir

Abstract:

Subspace channel estimation methods have been studied widely; the subspace of the covariance matrix is decomposed to separate the signal subspace from the noise subspace. The decomposition is normally done using either the eigenvalue decomposition (EVD) or the singular value decomposition (SVD) of the auto-correlation matrix (ACM). However, the subspace decomposition process is computationally expensive. This paper considers the estimation of the multipath slow frequency hopping (FH) channel using a noise-subspace-based method. In particular, an efficient method is proposed to estimate the multipath time delays by applying the multiple signal classification (MUSIC) algorithm to the null space extracted by the rank-revealing LU (RRLU) factorization. The RRLU provides precise information about the numerical null space and the rank, which are important tools in linear algebra. The simulation results demonstrate the effectiveness of the proposed method, which approximately halves the computational complexity compared with RRQR-based methods while keeping the same performance.

Keywords: Time Delay Estimation, RRLU, RRQR, MUSIC, LS-ESPRIT, Frequency Hopping.

1097 Presentation of a Mix Algorithm for Estimating the Battery State of Charge Using Kalman Filter and Neural Networks

Authors: Amin Sedighfar, M. R. Moniri

Abstract:

Determination of the state of charge (SOC) has become an increasingly important issue in all applications that include a battery. In fact, estimation of the SOC is a fundamental need for batteries, the most important energy storage in hybrid electric vehicles (HEVs), smart grid systems, drones, UPS and so on. For those applications, the SOC estimation algorithm is expected to be precise and easy to implement. This paper presents an online method for the estimation of the SOC of Valve-Regulated Lead Acid (VRLA) batteries. The proposed method uses the well-known Kalman filter (KF) and neural networks (NNs), and all of the simulations have been done with MATLAB. The NN is trained offline using data collected from the battery discharging process. A generic cell model is used, whose underlying dynamic behavior comprises two capacitors (bulk and surface) and three resistors (terminal, surface, and end), where the SOC determined from the voltage represents the bulk capacitor. The aim of this work is to compare the performance of conventional integration-based SOC estimation methods with the mixed algorithm. Moreover, by including the effect of temperature, the final result becomes more accurate.

Keywords: Kalman filter, neural networks, state-of-charge, VRLA battery.

1096 Reliability Analysis of k-out-of-n : G System Using Triangular Intuitionistic Fuzzy Numbers

Authors: Tanuj Kumar, Rakesh Kumar Bajaj

Abstract:

In the present paper, we analyze the vague reliability of a k-out-of-n : G system (in particular, series and parallel systems) with independent and non-identically distributed components, where the reliabilities of the components are unknown. The reliability of each component has been estimated using a statistical confidence interval approach. These statistical confidence intervals are then converted into triangular intuitionistic fuzzy numbers. Based on these triangular intuitionistic fuzzy numbers, the reliability of the k-out-of-n : G system has been calculated. Further, in order to implement the proposed methodology and to analyze the results for the k-out-of-n : G system, a numerical example has been provided.
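
A heavily simplified sketch of the idea (membership triangles only; a full triangular intuitionistic fuzzy number also carries a non-membership triangle, which is omitted here, and the component reliabilities and confidence bounds are hypothetical):

    import numpy as np

    def tfn_from_ci(lower, point, upper):
        """Triangular fuzzy number (l, m, u) from a point estimate and its CI."""
        return (lower, point, upper)

    def series_reliability(tfns):
        """Series system: reliabilities multiply, applied vertex-wise
        (a common approximation for triangular fuzzy multiplication)."""
        return tuple(np.prod([t[k] for t in tfns]) for k in range(3))

    def parallel_reliability(tfns):
        """Parallel system: R = 1 - prod(1 - R_i), applied vertex-wise."""
        return tuple(1 - np.prod([1 - t[k] for t in tfns]) for k in range(3))

    components = [tfn_from_ci(0.90, 0.95, 0.98),
                  tfn_from_ci(0.85, 0.92, 0.96),
                  tfn_from_ci(0.88, 0.93, 0.97)]
    print("series  :", series_reliability(components))
    print("parallel:", parallel_reliability(components))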

Keywords: Vague set, vague reliability, triangular intuitionistic fuzzy number, k-out-of-n : G system, series and parallel system.

1095 The Ability of Forecasting the Term Structure of Interest Rates Based On Nelson-Siegel and Svensson Model

Authors: Tea Poklepović, Zdravka Aljinović, Branka Marasović

Abstract:

Due to the importance of the yield curve and its estimation, it is essential to have valid methods for yield curve forecasting in cases where there are scarce issues of securities and/or weak trading on a secondary market. Therefore in this paper, after estimating weekly yield curves on the Croatian financial market from October 2011 to August 2012 using the Nelson-Siegel and Svensson models, the yield curves are forecasted using a vector autoregressive model and neural networks. In general, both forecasting methods have good prediction abilities, where forecasting of yield curves based on the Nelson-Siegel estimation model gives better results, in the sense of lower mean squared error, than forecasting based on the Svensson model. In this case, neural networks also provide slightly better results. Finally, it can be concluded that the most appropriate way of predicting the yield curve is neural networks applied to the Nelson-Siegel estimates of the yield curves.
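
For reference, the Nelson-Siegel yield at maturity tau is beta0 + beta1*(1 - exp(-tau/lam))/(tau/lam) + beta2*[(1 - exp(-tau/lam))/(tau/lam) - exp(-tau/lam)]; the Svensson model adds a second curvature term. A small sketch with illustrative parameters (not estimates from the Croatian data):

    import numpy as np

    def nelson_siegel(tau, beta0, beta1, beta2, lam):
        """Nelson-Siegel yield at maturity tau (tau and lam in the same unit)."""
        x = tau / lam
        loading1 = (1 - np.exp(-x)) / x      # slope loading
        loading2 = loading1 - np.exp(-x)     # curvature loading
        return beta0 + beta1 * loading1 + beta2 * loading2

    maturities = np.array([0.25, 1, 2, 5, 10])   # years
    print(nelson_siegel(maturities, beta0=0.05, beta1=-0.02, beta2=0.01, lam=1.5))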

Keywords: Nelson-Siegel model, Neural networks, Svensson model, Vector autoregressive model, Yield curve.

1094 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination

Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan

Abstract:

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute a likelihood ratio (LR) for quantifying the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, wherein a set of assumptions and methods for a given data set is made. It is therefore important to know how repeatable and reproducible the estimated LR is. This paper evaluated the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented so as not to obtain an incorrect estimate that would be used to deliver a wrong judgment in a court of law. The estimate of the LR is fundamentally a Bayesian concept, and we used two LR estimators, namely logistic regression (LoR) and a kernel density estimator (KDE). The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, which are based on a handwriting dataset, show that the LR has different confidence intervals, which implies that the LR cannot be estimated with the same certainty everywhere. Although the LoR performed better than the KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in forensic handwriting examination.
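
A minimal sketch of a KDE-based LR (the score model and data below are synthetic stand-ins, not the handwriting features or dataset used in the paper):

    import numpy as np
    from scipy.stats import gaussian_kde

    # Synthetic comparison scores under the two competing hypotheses.
    same_writer = np.random.normal(0.8, 0.10, 200)   # prosecution hypothesis H_p
    diff_writer = np.random.normal(0.4, 0.15, 200)   # defence hypothesis H_d

    f_p = gaussian_kde(same_writer)   # score density under H_p
    f_d = gaussian_kde(diff_writer)   # score density under H_d

    def likelihood_ratio(score):
        """LR = p(score | H_p) / p(score | H_d), estimated with two KDEs."""
        return f_p(score)[0] / f_d(score)[0]

    print(likelihood_ratio(0.7))   # LR > 1 favours the prosecution hypothesis

A confidence interval for the LR can then be obtained, for example, by bootstrapping the two score samples and recomputing the ratio.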

Keywords: Logistic Regression LoR, Kernel Density Estimator KDE, Handwriting, Confidence Interval, Repeatability, Reproducibility.

1093 Solar Cell Parameters Estimation Using Simulated Annealing Algorithm

Authors: M. R. AlRashidi, K. M. El-Naggar, M. F. AlHajri

Abstract:

This paper presents a simulated annealing based approach to estimate solar cell model parameters. The single diode solar cell model is used in this study to validate the proposed approach. The developed technique is used to estimate model parameters such as the generated photocurrent, saturation current, series resistance, shunt resistance, and ideality factor that govern the current-voltage relationship of a solar cell. A practical case study is used to test and verify the consistency of the proposed approach in accurately estimating the various parameters of the single diode solar cell model. A comparative study among different parameter estimation techniques is presented to show the effectiveness of the developed approach.
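
A rough sketch of the fitting idea, using the implicit single-diode equation I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh and SciPy's dual_annealing as a stand-in for the authors' simulated annealing implementation (the I-V data and parameter bounds are illustrative):

    import numpy as np
    from scipy.optimize import dual_annealing

    k, q, T = 1.380649e-23, 1.602176634e-19, 300.0
    Vt = k * T / q                      # thermal voltage at 300 K

    # Illustrative measured I-V points (volts, amps), not the paper's case study.
    V = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
    I = np.array([0.76, 0.75, 0.74, 0.72, 0.65, 0.40])

    def residual(params):
        """Sum of squared errors of the implicit single-diode equation."""
        Iph, I0, Rs, Rsh, n = params
        f = Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1) - (V + I * Rs) / Rsh - I
        return np.sum(f ** 2)

    bounds = [(0, 1), (1e-12, 1e-5), (0, 0.5), (1, 500), (1, 2)]  # Iph, I0, Rs, Rsh, n
    result = dual_annealing(residual, bounds)
    print(result.x)   # estimated Iph, I0, Rs, Rsh, n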

Keywords: Simulated Annealing, Parameter Estimation, Solar Cell.

1092 A Pipelined FSBM Hardware Architecture for HTDV-H.26x

Authors: H. Loukil, A. Ben Atitallah, F. Ghozzi, M. A. Ben Ayed, N. Masmoudi

Abstract:

In the MPEG and H.26x standards, motion estimation is used to eliminate temporal redundancy. Given that the motion estimation stage is very complex in terms of computational effort, a hardware implementation on a re-configurable circuit is crucial for the requirements of different real-time multimedia applications. In this paper, we present a hardware architecture for motion estimation based on the Full Search Block Matching (FSBM) algorithm. This architecture achieves minimum latency, maximum throughput and full utilization of hardware resources such as embedded memory blocks by combining pipelining and parallel processing techniques. Our design is described in VHDL, verified by simulation and implemented on a Stratix II EP2S130F1020C4 FPGA. The experimental results show that the optimum operating clock frequency of the proposed design is 89 MHz, which achieves 160 Mpixels/s.
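
The software reference of the FSBM kernel that such hardware accelerates is a simple exhaustive SAD search; a sketch in Python (block size, search range and the test frames are arbitrary choices, not the paper's configuration):

    import numpy as np

    def full_search_block_matching(ref, cur, bx, by, block=16, search=7):
        """Motion vector for the block of `cur` whose top-left corner is (bx, by).

        Tests every displacement in [-search, +search] and keeps the one
        minimising the sum of absolute differences (SAD).
        """
        current = cur[by:by + block, bx:bx + block].astype(np.int32)
        best, best_mv = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                    continue
                candidate = ref[y:y + block, x:x + block].astype(np.int32)
                sad = np.abs(current - candidate).sum()
                if best is None or sad < best:
                    best, best_mv = sad, (dx, dy)
        return best_mv, best

    # Illustrative frames: the current frame is the reference shifted by (2, 1).
    ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=(1, 2), axis=(0, 1))
    print(full_search_block_matching(ref, cur, bx=16, by=16))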

Keywords: SAD, FSBM, Hardware Implementation, FPGA.

1091 Algebraic Approach for the Reconstruction of Linear and Convolutional Error Correcting Codes

Authors: Johann Barbier, Guillaume Sicot, Sebastien Houcke

Abstract:

In this paper we present a generic approach to the problem of the blind estimation of the parameters of linear and convolutional error correcting codes. In a non-cooperative context, an adversary has access only to the noisy transmission he has intercepted and has no knowledge of the parameters used by the legitimate users. So, before gaining access to the information, he first has to blindly estimate the parameters of the error correcting code of the communication. The main advantage of the presented approach is that the problem of reconstructing such codes can be expressed in a very simple way. This allows us to evaluate theoretical bounds on the complexity of the reconstruction process as well as bounds on the estimation rate. We show that some classical reconstruction techniques are optimal and also explain why some of them have theoretical complexities greater than those observed experimentally.

Keywords: Blind estimation parameters, error correcting codes, non-cooperative context, reconstruction algorithm

1090 Blind Source Separation based on the Estimation for the Number of the Blind Sources under a Dynamic Acoustic Environment

Authors: Takaaki Ishibashi

Abstract:

Independent component analysis can estimate unknown source signals from their mixtures under the assumption that the source signals are statistically independent. However, in a real environment, the separation performance often deteriorates because the number of source signals differs from the number of sensors. In this paper, we propose a method for estimating the number of sources based on the joint distribution of the observed signals under a two-sensor configuration. Several simulation results show that the number of sources coincides with the number of peaks in the histogram of that distribution. The proposed method can estimate the number of sources even when it is larger than the number of observed signals. The proposed method has been verified by several experiments.

Keywords: blind source separation, independent component analysis, estimation of the number of blind sources, voice activity detection, target extraction.

1089 Reasons for the Slow Uptake of Embodied Carbon Estimation in the Sri Lankan Building Sector

Authors: Amalka Nawarathna, Nirodha Fernando, Zaid Alwan

Abstract:

Global carbon reduction is not merely a responsibility of environmentally advanced developed countries, but also a responsibility of developing countries, regardless of their smaller contribution to global carbon emissions. In recognition of that, Sri Lanka, as a developing country, has begun promoting green building construction as one reduction strategy. However, notwithstanding the increasing attention to Embodied Carbon (EC) reduction in the global building sector, the focus is still mostly on Operational Carbon (OC) reduction (through improving operational energy), and adequate attention has not yet been given to EC estimation and reduction. Therefore, this study aims to identify the reasons for the slow uptake of EC estimation in the Sri Lankan building sector. To achieve this aim, 16 global barriers to estimating EC were identified from the existing literature. They were then subjected to a pilot survey to identify the significant reasons for the slow uptake of EC estimation in the Sri Lankan building sector. A questionnaire with a three-point Likert scale was used to this end, and the collected data were analysed using descriptive statistics. The findings revealed that 11 of the 16 challenges/barriers are highly relevant as reasons for the slow uptake of EC estimation in buildings in Sri Lanka, while the other five remain moderately relevant; there are no low-relevance reasons. The paper concludes that all of the known reasons are significant for the Sri Lankan building sector and that it is necessary to address them in order to increase attention to EC reduction.

Keywords: Embodied carbon emissions, embodied carbon estimation, global carbon reduction, Sri Lankan building sector.

1088 Thermal Modeling of Dry-Transformers and Estimating Temperature Rise

Authors: M. Ghareh, L. Sepahi

Abstract:

The temperature rise in a transformer depends on a variety of parameters such as ambient temperature, output current and type of core. Considering these parameters, temperature rise estimation is still a complicated procedure. In this paper, we present a new model based on a simple electrical equivalent circuit. This method avoids the complications associated with accurate estimation and is in very good agreement with practice.
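
A common first-order thermal equivalent circuit (losses driving a thermal resistance and capacitance) gives a temperature rise of P*R_th*(1 - exp(-t/tau)) with tau = R_th*C_th; the sketch below illustrates that analogy with assumed values, not the authors' circuit or data:

    import numpy as np

    def temperature_rise(P_loss, R_th, C_th, t, T_ambient=25.0):
        """Temperature of a first-order thermal equivalent circuit at time t.

        P_loss : losses in W, R_th : thermal resistance in K/W,
        C_th   : thermal capacitance in J/K, t : time in seconds.
        """
        tau = R_th * C_th                            # thermal time constant
        rise = P_loss * R_th * (1 - np.exp(-t / tau))
        return T_ambient + rise

    # Assumed dry-type transformer values, for illustration only.
    t = np.array([0, 600, 1800, 3600, 7200, 14400])  # seconds
    print(temperature_rise(P_loss=1200.0, R_th=0.05, C_th=4.0e4, t=t))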

Keywords: Thermal modeling, temperature rise, equivalent thermal circuit.

1087 Memory Estimation of Internet Server Using Queuing Theory: Comparative Study between M/G/1, G/M/1 and G/G/1 Queuing Model

Authors: L. K. Singh, Riktesh Srivastava

Abstract:

How to effectively allocate system resources so that gateway servers can process client requests is a challenging problem. In this paper, we propose an improved scheme for autonomous performance of gateway servers under highly dynamic traffic loads. We devise a methodology to calculate queue length and waiting time utilizing gateway server information in order to reduce response time variance in the presence of bursty traffic. The most widespread consideration is performance, because gateway servers must offer cost-effective and highly available services over long periods, and thus have to be scaled to meet the expected load. Performance measurements can be the basis for performance modeling and prediction; with the help of performance models, performance metrics (such as buffer estimation and waiting time) can be determined during the development process. This paper describes the queue models that can be applied to estimate the queue length and hence the final value of the memory size. Both simulation and experimental studies using synthesized workloads and analysis of real-world gateway servers demonstrate the effectiveness of the proposed system.
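
For the M/G/1 case, the mean queue length and waiting time follow from the Pollaczek-Khinchine formula; a small sketch (the load figures are hypothetical, and the G/M/1 and G/G/1 cases need further approximations not shown here):

    def mg1_metrics(arrival_rate, mean_service, scv_service):
        """Pollaczek-Khinchine results for an M/G/1 queue.

        arrival_rate : lambda (requests/s), mean_service : E[S] (s),
        scv_service  : squared coefficient of variation Var(S)/E[S]^2
                       (1.0 reproduces M/M/1, 0.0 gives M/D/1).
        """
        rho = arrival_rate * mean_service                    # utilisation, must be < 1
        lq = rho ** 2 * (1 + scv_service) / (2 * (1 - rho))  # mean queue length
        wq = lq / arrival_rate                               # mean waiting time (Little's law)
        return {"utilisation": rho, "queue_length": lq,
                "waiting_time": wq, "in_system": lq + rho}

    # Hypothetical gateway-server load: 80 req/s, 10 ms mean service time, SCV = 2.
    print(mg1_metrics(arrival_rate=80.0, mean_service=0.010, scv_service=2.0))

The memory (buffer) size can then be estimated from the mean or a high percentile of the queue length multiplied by the average request size.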

Keywords: M/M/1, M/G/1, G/M/1, G/G/1, Gateway Servers, Buffer Estimation, Waiting Time, Queuing Process.

1086 Effect of Channel Estimation on Capacity of MIMO System Employing Circular or Linear Receiving Array Antennas

Authors: Xia Liu, Marek E. Bialkowski

Abstract:

This paper reports on investigations into the capacity of a multiple-input multiple-output (MIMO) wireless communication system employing a uniform linear array (ULA) at the transmitter and either a uniform linear array (ULA) or a uniform circular array (UCA) antenna at the receiver. The transmitter is assumed to be surrounded by scattering objects, while the receiver is postulated to be free from scattering objects. A Laplacian distribution of the angle of arrival (AOA) of a signal reaching the receiver is postulated. Calculations of the MIMO system capacity are performed for two cases, without and with channel estimation errors. For estimating the MIMO channel, the scaled least squares (SLS) and minimum mean square error (MMSE) methods are considered.
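
For reference, with equal power allocation the capacity evaluated in such studies is C = log2 det(I + (SNR/Nt) * H * H^H); a small sketch with a synthetic Rayleigh channel and a crude additive model of the estimation error (the error level is an arbitrary illustration):

    import numpy as np

    def mimo_capacity(H, snr_linear):
        """MIMO capacity in bits/s/Hz for channel H (Nr x Nt), equal power allocation."""
        nr, nt = H.shape
        M = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
        return np.real(np.log2(np.linalg.det(M)))

    rng = np.random.default_rng(0)
    H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
    H_err = H + 0.1 * (rng.standard_normal((4, 4))
                       + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
    snr = 10 ** (10 / 10)                       # 10 dB
    print(mimo_capacity(H, snr))                # perfect channel knowledge
    print(mimo_capacity(H_err, snr))            # with a channel estimation error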

Keywords: MIMO, channel capacity, channel estimation, ULA, UCA, spatial correlation

1085 Detection and Pose Estimation of People in Images

Authors: Mousa Mojarrad, Amir Masoud Rahmani, Mehrab Mohebi

Abstract:

Detection, feature extraction and pose estimation of people in images and video are made challenging by the variability of human appearance, the complexity of natural scenes and the high dimensionality of articulated body models, and they have become an important field in image, signal and vision computing in recent years. In this paper, a system that handles four types of people in 2D images is proposed and tested. The system extracts a person's size and the corresponding type (tall fat, short fat, tall thin or short thin) from the image; whether the person is fat or thin is determined from the body measurements extracted from the image. The system also extracts body dimensions such as length and width and shows them in the output.

Keywords: Analysis of Image Processing, Canny Edge Detection, Human Body Recognition, Measurement, Pose Estimation, 2D Human Dimension.

1084 Random Access in IoT Using Naïve Bayes Classification

Authors: Alhusein Almahjoub, Dongyu Qiu

Abstract:

This paper deals with the random access procedure in next-generation networks and presents a solution to reduce the total service time (TST), which is one of the most important performance metrics in current and future Internet of Things (IoT) based networks. The proposed solution focuses on the calculation of the optimal transmission probability, which maximizes the success probability and reduces the TST. It uses the number of idle preambles observed in every time slot and, based on it, estimates the number of backlogged IoT devices using Naïve Bayes estimation, which is a type of supervised learning in the machine learning domain. The estimation of backlogged devices is necessary since the optimal transmission probability depends on it and the eNodeB does not have this information. Simulations carried out in MATLAB verify that the proposed solution gives excellent performance.

Keywords: Random access, LTE/LTE-A, 5G, machine learning, Naïve Bayes estimation.

1083 Real Time Video Based Smoke Detection Using Double Optical Flow Estimation

Authors: Anton Stadler, Thorsten Ike

Abstract:

In this paper, we present a video-based smoke detection algorithm based on TVL1 optical flow estimation. The main part of the algorithm is an accumulation system for motion angles and upward motion speed of the flow field. We optimized the usage of TVL1 flow estimation for the detection of smoke with very low smoke density by using adapted flow parameters and estimating the flow field on difference images. We show in theory and in evaluation that this improves the performance of smoke detection significantly. We evaluate the smoke detection algorithm using videos with different smoke densities and different backgrounds and show that it is very reliable in varying scenarios. Further, we verify that our algorithm is robust towards disturbances such as crowded scenes.
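
A rough sketch of the accumulation step only (the flow field would come from a TVL1 estimator run on difference images, which is not reproduced here; the thresholds and the synthetic flow field are placeholders):

    import numpy as np

    def upward_motion_score(flow, angle_tolerance_deg=45.0, min_speed=0.5):
        """Accumulate evidence of upward motion in a dense flow field.

        flow : array of shape (H, W, 2) holding (dx, dy) per pixel; dy < 0
               means upward motion in image coordinates.
        Returns the fraction of sufficiently fast vectors pointing upward
        and their mean upward speed.
        """
        dx, dy = flow[..., 0], flow[..., 1]
        speed = np.hypot(dx, dy)
        angle = np.degrees(np.arctan2(-dy, dx))          # 90 degrees = straight up
        upward = (np.abs(angle - 90.0) < angle_tolerance_deg) & (speed > min_speed)
        fraction = upward.mean()
        mean_up_speed = (-dy[upward]).mean() if upward.any() else 0.0
        return fraction, mean_up_speed

    # Synthetic flow field: noise plus an upward-moving patch (rising smoke).
    flow = np.random.randn(120, 160, 2) * 0.2
    flow[40:80, 60:100, 1] -= 2.0
    print(upward_motion_score(flow))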

Keywords: Low density, optical flow, upward smoke motion, video based smoke detection.

1082 Direction of Arrival Estimation Based on a Single Port Smart Antenna Using MUSIC Algorithm with Periodic Signals

Authors: Chen Sun, Nemai Chandra Karmakar

Abstract:

A novel direction-of-arrival (DOA) estimation technique, which uses a conventional multiple signal classification (MUSIC) algorithm with periodic signals, is applied to a single RF-port parasitic array antenna for direction finding. Simulation results show that the proposed method gives high resolution (1 degree) DOA estimation in an uncorrelated signal environment. The novelty lies in that the MUSIC algorithm is applied to a simplified antenna configuration. Only one RF port and one analogue-to-digital converter (ADC) are used in this antenna, which features low DC power consumption, low cost, and ease of fabrication. Modifications to the conventional MUSIC algorithm do not bring much additional complexity. The proposed technique is also free from the negative influence by the mutual coupling between elements. Therefore, the technique has great potential to be implemented into the existing wireless mobile communications systems, especially at the power consumption limited mobile terminals, to provide additional position location (PL) services.
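
For orientation, a conventional multi-element MUSIC sketch is shown below (a standard ULA formulation with one source, not the single-port reactance-domain processing of the ESPAR antenna described above; array size, source angle and noise level are illustrative):

    import numpy as np

    def music_spectrum(X, n_sources, angles_deg):
        """MUSIC pseudospectrum for a half-wavelength-spaced uniform linear array.

        X : snapshot matrix of shape (n_elements, n_snapshots).
        """
        m, n = X.shape
        R = X @ X.conj().T / n                        # sample covariance
        eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
        En = eigvecs[:, : m - n_sources]              # noise subspace
        spectrum = []
        for theta in np.deg2rad(angles_deg):
            a = np.exp(1j * np.pi * np.arange(m) * np.sin(theta))   # steering vector
            spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return np.array(spectrum)

    # Illustrative data: one periodic source at 20 degrees on a 6-element ULA.
    rng = np.random.default_rng(1)
    m, snapshots, theta0 = 6, 200, np.deg2rad(20.0)
    a0 = np.exp(1j * np.pi * np.arange(m) * np.sin(theta0))
    s = np.exp(1j * 2 * np.pi * 0.1 * np.arange(snapshots))
    X = np.outer(a0, s) + 0.1 * (rng.standard_normal((m, snapshots))
                                 + 1j * rng.standard_normal((m, snapshots)))
    angles = np.arange(-90, 91)
    print(angles[np.argmax(music_spectrum(X, n_sources=1, angles_deg=angles))])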

Keywords: Direction-of-arrival (DOA) estimation, electronically steerable parasitic array radiator (ESPAR), multiple signal classification (MUSIC), position location.

1081 A Modified Spiral Search Algorithm and Its Embedded System Architecture Design

Authors: Nikolaos Kroupis, Minas Dasygenis, Dimitrios Soudris, Antonios Thanailakis

Abstract:

One of the fastest-growing areas in the embedded community is multimedia devices, which incorporate a number of complicated functions such as motion estimation. A multitude of different implementations have been proposed to reduce motion estimation complexity, such as spiral search. We have studied existing implementations of spiral search and identified areas for improvement. We propose a modified spiral search algorithm with lower computational complexity compared to the original spiral search. We have implemented our algorithm on an embedded ARM-based architecture with a custom memory hierarchy. The resulting system yields an energy consumption reduction of up to 64% and a performance increase of up to 77%, with a small average penalty of 2.3 dB in video quality compared with the original spiral search algorithm.
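
The essence of a spiral search is the order in which candidate offsets are visited, which lets the search stop early once the matching error drops below a threshold; a small sketch of generating that order (the early-termination criterion itself is not shown, and the search range is arbitrary):

    def spiral_order(search_range):
        """Candidate motion-vector offsets ordered by increasing ring distance
        from the centre, as visited by a spiral search."""
        offsets = [(0, 0)]
        for ring in range(1, search_range + 1):
            for dy in range(-ring, ring + 1):
                for dx in range(-ring, ring + 1):
                    if max(abs(dx), abs(dy)) == ring:   # only the border of the ring
                        offsets.append((dx, dy))
        return offsets

    print(spiral_order(2)[:9])   # the centre followed by its first ring of 8 neighbours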

Keywords: Spiral Search, Motion Estimation, Embedded Systems, Low Power

1080 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis

Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen

Abstract:

Hepatitis is one of the most common and dangerous diseases that affect humankind, exposing millions of people to serious health risks every year, and its diagnosis has always been a challenge for physicians. This paper presents an effective method for the diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an "indirect approach" is used for fuzzy system modeling by implementing the exponential compactness and separation index for determining the number of rules in the fuzzy clustering approach. We first proposed a Type-I fuzzy system, which had an accuracy of approximately 90.9%. In the proposed system, the process of diagnosis faces vagueness and uncertainty in the final decision, so the imprecise knowledge was managed by using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy logic has the ability to diagnose hepatitis with an average accuracy of 93.94%, the highest classification accuracy reached thus far. This rate of accuracy demonstrates that the Type-II fuzzy system performs better than the Type-I system and indicates a higher capability of the Type-II fuzzy system for modeling uncertainty.

Keywords: Hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection.

1079 Using Interval Constrained Petri Nets for the Fuzzy Regulation of Quality: Case of Assembly Process Mechanics

Authors: Nabli L., Dhouibi H., Collart Dutilleul S., Craye E.

Abstract:

The imprecision of manufacturing processes means that parts cannot be realized in an absolutely exact way with respect to their dimensional specifications. It is thus necessary to ensure that the product actually produced lies strictly within intervals compatible with the correct functioning of the parts. In this paper we present an approach based on mixing two theories with different characteristics, fuzzy systems and Petri nets. This tool is proposed to model and control quality in an assembly system. A robust command of a mechanical assembly process is presented as an application; this command has to keep the parts within their specification intervals in spite of variations. It also illustrates how the technique reacts when the product quality is high, medium, or low.

Keywords: Petri nets, production rate, performance evaluation, tolerant system, fuzzy sets.
