Search results for: Naïve Bayes estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2018

1958 An Application to Predict the Best Study Path for Information Technology Students in Learning Institutes

Authors: L. S. Chathurika

Abstract:

Early prediction of student performance is an important factor in achieving academic excellence. Whatever the study stream in secondary education, students in Sri Lanka lay the foundation for higher studies during the first year of their degree or diploma program. The information technology (IT) field has introduced improvements in the education domain by offering specialization areas in which students can show their talents and skills. These specializations can be software engineering, network administration, database administration, multimedia design, etc. After completing the first year, students attempt to select the best path by considering numerous factors. The purpose of this experiment is to predict the best study path using machine learning algorithms. Five classification algorithms are selected and tested: decision tree, support vector machine, artificial neural network, Naïve Bayes, and logistic regression. The support vector machine obtained the highest accuracy, 82.4%. The most influential features are then identified to guide selection of the best study path.
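A comparison of this kind can be reproduced with standard tooling. The sketch below is illustrative only: the file name students.csv, its columns, and the study_path label are hypothetical stand-ins for the paper's data, which are not public.

```python
# Minimal sketch: comparing the five classifiers named in the abstract on a
# hypothetical students.csv file (numeric features plus a 'study_path' label).
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("students.csv")                     # hypothetical dataset
X, y = df.drop(columns="study_path"), df["study_path"]

models = {
    "decision tree": DecisionTreeClassifier(),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "ANN (MLP)": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000)),
    "Naive Bayes": GaussianNB(),
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold cross-validated accuracy
    print(f"{name:20s} accuracy = {acc:.3f}")
```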

Keywords: algorithm, classification, evaluation, features, testing, training

Procedia PDF Downloads 97
1957 Model for Introducing Products to New Customers through Decision Tree Using Algorithm C4.5 (J-48)

Authors: Komol Phaisarn, Anuphan Suttimarn, Vitchanan Keawtong, Kittisak Thongyoun, Chaiyos Jamsawang

Abstract:

This article analyzes insurance data that contain information on customer decisions when purchasing a life insurance pay package. The data were analyzed in order to present new customers with the Life Insurance Perfect Pay package that meets their needs as closely as possible. The basic insurance pay package data were collected for data mining, thus reducing the scattering of information. The data were then classified to obtain a decision model, or decision tree, using algorithm C4.5 (J-48). In the classification, WEKA tools were used to build the model, and testing datasets were used to check the accuracy of the decision tree. Validation showed that the model predicted 68.43% of cases accurately, while 31.25% were errors. The same data set was then tested with other models, i.e., Naive Bayes and ZeroR. The results showed that the J-48 method predicted more accurately. The decision tree was therefore applied in a program used to introduce the product to new customers and support their decision to purchase the insurance package that meets their needs as closely as possible.
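WEKA's J-48 is an implementation of C4.5; a rough Python analogue (a CART tree with the entropy criterion, which is close to but not identical to C4.5) is sketched below. The file insurance.csv and the purchased label column are hypothetical placeholders for the study's data.

```python
# Rough analogue of the WEKA J-48 workflow: a decision tree with the entropy
# criterion (CART, not true C4.5) trained on a hypothetical insurance.csv file.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("insurance.csv")                 # hypothetical customer data
X = pd.get_dummies(df.drop(columns="purchased"))  # one-hot encode categorical fields
y = df["purchased"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=5).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```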

Keywords: decision tree, data mining, customers, life insurance pay package

Procedia PDF Downloads 402
1956 A Predictive Machine Learning Model of the Survival of Female-Led and Co-Led Small and Medium Enterprises in the UK

Authors: Mais Khader, Xingjie Wei

Abstract:

This research sheds light on female entrepreneurs by providing new insights into the survival prediction of female-led companies in the UK. The study aims to build a predictive machine learning model of the survival of female-led and co-led small and medium enterprises (SMEs) in the UK over the period 2000-2020. The model utilises a combination of financial and non-financial features relating to both the companies and their directors to predict SME survival, and these features were studied in terms of their contribution to the resulting model. Five machine learning models are used: decision tree, AdaBoost, Naïve Bayes, logistic regression, and SVM. The AdaBoost model had the highest performance of the five, with an accuracy of 73% and an AUC of 80%. The results show that company size, management experience, financial performance, industry, region, and the percentage of females in management are highly important features for predicting company survival.
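A sketch of the best-performing configuration reported above (AdaBoost, evaluated by accuracy and ROC AUC, with feature importances) might look as follows; smes.csv and the survived label column are hypothetical placeholders for the UK SME data.

```python
# Sketch: AdaBoost survival classifier with accuracy, AUC and feature importances.
# 'smes.csv' is a hypothetical file; 'survived' is assumed to be coded 0/1.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

df = pd.read_csv("smes.csv")                        # hypothetical dataset
X = pd.get_dummies(df.drop(columns="survived"))     # encode categorical features
y = df["survived"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
# Ten most important features according to the fitted ensemble.
for name, imp in sorted(zip(X.columns, clf.feature_importances_), key=lambda t: -t[1])[:10]:
    print(f"{name:30s} {imp:.3f}")
```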

Keywords: company survival, entrepreneurship, females, machine learning, SMEs

Procedia PDF Downloads 56
1955 Reducing Crash Risk at Intersections with Safety Improvements

Authors: Upal Barua

Abstract:

Crash risk at intersections is a critical safety issue. This paper examines the effectiveness of removing an existing off-set at an intersection through realignment in reducing crashes. The Empirical Bayes method was applied in a before-and-after study to assess the effect of this safety improvement. The Transportation Safety Improvement Program in the Austin Transportation Department completed several safety improvement projects at high-crash intersections with a view to reducing crashes. One of the common techniques applied was the realignment of intersection approaches to remove an existing off-set. This paper illustrates how this technique was applied at a high-crash intersection from inception to completion and highlights the significant crash reduction achieved, quantified with the Empirical Bayes method in a before-and-after study. The results showed that realigning intersection approaches to remove an existing off-set can reduce crashes by 53%. The paper also describes the state-of-the-art techniques applied in the planning, engineering, design, and construction of this safety improvement, the key factors driving its success, and the lessons learned in the process.
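The core Empirical Bayes step combines crashes predicted by a safety performance function (SPF) with the observed count, then projects the expected after-period crashes had no treatment been applied. A minimal sketch of that calculation is given below; all input numbers are illustrative placeholders rather than values from the study, and the variance and standard-error parts of the full method are omitted.

```python
# Minimal sketch of the Empirical Bayes before-and-after calculation.
# All input numbers are illustrative placeholders, not values from the study.

def eb_expected(observed, predicted, overdispersion):
    """EB estimate: weighted mix of SPF-predicted and observed crash counts."""
    w = 1.0 / (1.0 + overdispersion * predicted)   # weight on the SPF prediction
    return w * predicted + (1.0 - w) * observed

obs_before, pred_before = 30, 22.0   # crashes observed / SPF-predicted, before period
pred_after = 20.0                    # SPF prediction for the after period
obs_after = 12                       # crashes observed after the improvement
k = 0.4                              # SPF overdispersion parameter

eb_before = eb_expected(obs_before, pred_before, k)
# Expected after-period crashes had no treatment been applied:
expected_after = eb_before * (pred_after / pred_before)
print("crash reduction: %.0f%%" % (100 * (1 - obs_after / expected_after)))
```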

Keywords: crash risk, intersection, off-set, safety improvement technique, before-and-after study, empirical Bayes method

Procedia PDF Downloads 210
1954 Residual Lifetime Estimation for Weibull Distribution by Fusing Expert Judgements and Censored Data

Authors: Xiang Jia, Zhijun Cheng

Abstract:

The residual lifetime of a product is the operating time between the current time and the time at which failure occurs. Residual lifetime estimation is important in reliability analysis. To predict the residual lifetime, it is necessary to assume or verify a particular distribution for the product lifetime, and the two-parameter Weibull distribution is frequently adopted to describe lifetimes in reliability engineering. Due to time constraints and cost considerations, a life testing experiment is usually terminated before all units have failed, so censored data are collected. Other information can also be used for reliability analysis; expert judgements are considered here, as experts can often provide useful information concerning reliability. Therefore, in this paper, the residual lifetime is estimated for the Weibull distribution by fusing censored data and expert judgements. First, closed-form expressions for the point estimate and confidence interval of the residual lifetime under the Weibull distribution are presented. Next, the expert judgements are treated as prior information, and a method for determining the prior distribution of the Weibull parameters is developed. For completeness, both the case of a single expert judgement and the case of more than two expert judgements are considered. The posterior distribution of the Weibull parameters is then derived. Because it is difficult to derive the posterior distribution of the residual lifetime directly, a sample-based method is proposed that generates posterior samples of the Weibull parameters using the Markov Chain Monte Carlo (MCMC) method. These samples are used to obtain the Bayes estimate and credible interval for the residual lifetime. Finally, an illustrative example is discussed to show the application. It demonstrates that the proposed method is simple, satisfactory, and robust.
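For a two-parameter Weibull lifetime, the conditional (residual) survival function has a closed form, so a quantile of the residual life given survival to the current age t can be computed directly; the Bayesian estimate described above would average this quantity over MCMC posterior samples of the parameters. The sketch below uses fixed, illustrative parameter values.

```python
# Sketch: p-quantile residual lifetime of a two-parameter Weibull(beta, eta),
# given survival up to the current age t. A Bayes estimate would average this
# over posterior samples of (beta, eta); the values below are illustrative only.
import numpy as np

def residual_life_quantile(beta, eta, t, p=0.5):
    """Time x such that P(T > t + x | T > t) = 1 - p for a Weibull lifetime."""
    return eta * ((t / eta) ** beta - np.log(1.0 - p)) ** (1.0 / beta) - t

beta, eta, t = 1.8, 1000.0, 400.0    # shape, scale, current operating time (illustrative)
print("median residual life:", residual_life_quantile(beta, eta, t))
```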

Keywords: expert judgements, information fusion, residual lifetime, Weibull distribution

Procedia PDF Downloads 110
1953 Time Delay Estimation Using Signal Envelopes for Synchronisation of Recordings

Authors: Sergei Aleinik, Mikhail Stolbov

Abstract:

In this work, a method of time delay estimation for dual-channel acoustic signals (speech, music, etc.) recorded under reverberant conditions is investigated. Standard methods based on cross-correlation of the signals show poor results in cases involving strong reverberation, large distances between microphones, and asynchronous recordings. Under such conditions, a method based on cross-correlation of the temporal envelopes of the signals delivers a delay estimate of acceptable quality. This method and its properties are described and investigated in detail, including its limits of applicability. Optimal parameter selection for the method and a comparison with other known time delay estimation methods are also provided.
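A minimal sketch of the envelope-based estimator: take the amplitude envelope of each channel with the Hilbert transform and pick the lag that maximises the envelope cross-correlation. The synthetic signals below are illustrative; real recordings would be longer and reverberant.

```python
# Sketch of envelope-based time delay estimation between two channels.
import numpy as np
from scipy.signal import hilbert, correlate

def envelope_delay(x, y):
    """Estimate the delay of y relative to x (in samples) from signal envelopes."""
    ex = np.abs(hilbert(x)) - np.abs(hilbert(x)).mean()
    ey = np.abs(hilbert(y)) - np.abs(hilbert(y)).mean()
    xcorr = correlate(ey, ex, mode="full")
    lags = np.arange(-len(ex) + 1, len(ey))
    return lags[np.argmax(xcorr)]

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
x = rng.standard_normal(len(t)) * np.exp(-3 * t)    # toy non-stationary signal
y = np.concatenate([np.zeros(40), x])[: len(x)]     # same signal delayed by 40 samples
print("estimated delay:", envelope_delay(x, y), "samples")
```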

Keywords: cross-correlation, delay estimation, signal envelope, signal processing

Procedia PDF Downloads 454
1952 A Scalable Model of Fair Socioeconomic Relations Based on Blockchain and Machine Learning Algorithms-1: On Hyperinteraction and Intuition

Authors: Merey M. Sarsengeldin, Alexandr S. Kolokhmatov, Galiya Seidaliyeva, Alexandr Ozerov, Sanim T. Imatayeva

Abstract:

This series of interdisciplinary studies is an attempt to investigate and develop a scalable model of fair socioeconomic relations based on blockchain, using positive psychology techniques and machine learning algorithms for data analytics. In this particular study, we use a hyperinteraction approach and intuition to investigate their influence on the 'wisdom of crowds' via a mobile application created for the purpose of this research. Along with a public blockchain and a private Decentralized Autonomous Organization (DAO), both built by us on the Ethereum blockchain, a model of fair financial relations among DAO members was developed. We developed a smart contract, the so-called Fair Price Protocol, and used it to implement the model. The data obtained from the mobile application were analyzed with ML algorithms, and the model was tested on football matches.

Keywords: blockchain, Naïve Bayes algorithm, hyperinteraction, intuition, wisdom of crowd, decentralized autonomous organization

Procedia PDF Downloads 139
1951 Confidence Intervals for Quantiles in the Two-Parameter Exponential Distributions with Type II Censored Data

Authors: Ayman Baklizi

Abstract:

Based on Type II censored data, we consider interval estimation of the quantiles of the two-parameter exponential distribution and of the difference between the quantiles of two independent two-parameter exponential distributions. We derive asymptotic intervals, Bayesian intervals, and intervals based on the generalized pivot variable, and we also include some bootstrap intervals in our comparisons. The performance of these intervals is investigated in terms of their coverage probabilities and expected lengths.
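As one concrete example of the intervals compared above, the sketch below computes the maximum likelihood estimate of a quantile from a Type II censored sample of a two-parameter exponential distribution, together with a parametric bootstrap confidence interval; the sample itself is simulated purely for illustration.

```python
# Sketch: quantile point estimate and parametric-bootstrap interval for a
# two-parameter exponential distribution under Type II censoring (only the
# r smallest of n lifetimes are observed). Data below are simulated.
import numpy as np

rng = np.random.default_rng(0)

def mle(x_obs, n):
    """MLEs of location mu and scale theta from the r smallest of n observations."""
    r = len(x_obs)
    mu = x_obs[0]
    theta = (np.sum(x_obs) + (n - r) * x_obs[-1] - n * mu) / r
    return mu, theta

def quantile(mu, theta, p):
    return mu - theta * np.log(1.0 - p)

n, r, p = 30, 20, 0.9
x = np.sort(rng.exponential(2.0, n) + 1.0)[:r]      # illustrative censored sample
mu_hat, theta_hat = mle(x, n)
print("point estimate of the 0.9 quantile:", quantile(mu_hat, theta_hat, p))

# Parametric bootstrap: resample Type II censored data from the fitted model.
boot = []
for _ in range(2000):
    xb = np.sort(rng.exponential(theta_hat, n) + mu_hat)[:r]
    boot.append(quantile(*mle(xb, n), p))
print("95% bootstrap interval:", np.percentile(boot, [2.5, 97.5]))
```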

Keywords: asymptotic intervals, Bayes intervals, bootstrap, generalized pivot variables, two-parameter exponential distribution, quantiles

Procedia PDF Downloads 385
1950 VaR Estimation Using the Informational Content of Futures Traded Volume

Authors: Amel Oueslati, Olfa Benouda

Abstract:

A new Value at Risk (VaR) estimation approach is proposed and investigated. The well-known two-stage GARCH-EVT approach uses conditional volatility to generate one-step-ahead forecasts of VaR. Using daily data for twelve stocks belonging to the Dow Jones Industrial Average (DJIA) index, this paper incorporates trading volume in the first-stage volatility estimation. The forecasting ability of this volume-augmented conditional volatility for VaR estimation is then compared to that of a basic volatility model that does not consider any trading component. The results are significant and bring out the importance of trading volume in the VaR measure.
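The baseline two-stage GARCH-EVT procedure (without the volume term this paper adds in the first stage) can be sketched as follows, assuming the Python arch and scipy packages and a hypothetical returns.csv file of daily percentage returns.

```python
# Sketch of the baseline two-stage GARCH-EVT VaR (no volume term).
# 'returns.csv' is a hypothetical input file of daily percentage returns.
import numpy as np
import pandas as pd
from arch import arch_model
from scipy.stats import genpareto

returns = pd.read_csv("returns.csv", index_col=0).squeeze("columns")

# Stage 1: conditional volatility from a GARCH(1,1) model with constant mean.
res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
sigma_next = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])

# Stage 2: EVT on the standardized residuals (losses = negated residuals).
z = -res.std_resid.dropna()
u = np.quantile(z, 0.90)                       # tail threshold
excess = z[z > u] - u
xi, _, beta = genpareto.fit(excess, floc=0)    # GPD fit to threshold excesses
alpha = 0.99
n, n_u = len(z), len(excess)
z_q = u + (beta / xi) * ((n / n_u * (1 - alpha)) ** (-xi) - 1)   # EVT loss quantile
print("one-day 99% VaR (loss):", -res.params["mu"] + sigma_next * z_q)
```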

Keywords: Garch-EVT, value at risk, volume, volatility

Procedia PDF Downloads 255
1949 Depth Estimation in DNN Using Stereo Thermal Image Pairs

Authors: Ahmet Faruk Akyuz, Hasan Sakir Bilge

Abstract:

Depth estimation using stereo images is a challenging problem in computer vision, and many different studies have been carried out to solve it. With advances in machine learning, the problem is often tackled with neural network-based solutions. The images used in these studies are mostly in the visible spectrum. However, the need to use the infrared (IR) spectrum for depth estimation has emerged, because it gives better results than the visible spectrum under some conditions. We therefore recommend using thermal-thermal (IR) image pairs for depth estimation. In this study, we used two well-known networks (PSMNet, FADNet) with minor modifications to demonstrate the viability of this idea.

Keywords: thermal stereo matching, deep neural networks, CNN, depth estimation

Procedia PDF Downloads 236
1948 Parameter Estimation of Induction Motors by PSO Algorithm

Authors: A. Mohammadi, S. Asghari, M. Aien, M. Rashidinejad

Abstract:

After the emergence and popularization of alternating current networks, asynchronous motors became more widespread than other kinds of industrial motors. In order to control and run these motors efficiently, an accurate estimation of motor parameters is needed. There are different methods to obtain these parameters, such as the locked-rotor test, no-load test, DC test, analytical methods, and so on. The most common drawback of these methods is their inaccuracy in estimating some motor parameters. To address this concern, a novel method for parameter estimation of induction motors using the particle swarm optimization (PSO) algorithm is proposed. In the proposed method, the transient state of the motor is used for parameter estimation. Comparison of the simulation results obtained with the PSO algorithm against other available methods demonstrates the effectiveness of the proposed method.
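A generic PSO minimiser is sketched below. The objective function is a placeholder: in the paper's setting it would be the error between the measured transient response of the motor and the response simulated from the candidate parameter vector.

```python
# Generic particle swarm optimisation sketch. The objective is a stand-in for
# the model-fit error between measured and simulated motor transient response.
import numpy as np

rng = np.random.default_rng(1)

def objective(params):
    # Placeholder (sum of squares); replace with the motor-model fit error.
    return np.sum((params - np.array([0.5, 1.2, 0.05, 0.3])) ** 2)

def pso(obj, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([obj(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, err = pso(objective, lo=np.zeros(4), hi=np.ones(4) * 2)
print("estimated parameters:", best, " error:", err)
```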

Keywords: induction motor, motor parameter estimation, PSO algorithm, analytical method

Procedia PDF Downloads 603
1947 Online Pose Estimation and Tracking Approach with Siamese Region Proposal Network

Authors: Cheng Fang, Lingwei Quan, Cunyue Lu

Abstract:

Human pose estimation and tracking aim to accurately identify and locate the positions of human joints in video. This is a computer vision task of great significance for human motion recognition, behavior understanding, and scene analysis. There has been remarkable progress on human pose estimation in recent years, but more research is needed on human pose tracking, especially online tracking. In this paper, a framework called PoseSRPN is proposed for online single-person pose estimation and tracking. We use a Siamese network with an attached pose estimation branch to incorporate Single-person Pose Tracking (SPT) and Visual Object Tracking (VOT) into one framework. The pose estimation branch has a simple network structure that replaces the complex upsampling and convolution network structure with deconvolution. By augmenting the loss of the fully convolutional Siamese network with the pose estimation task, pose estimation and tracking can be trained in one stage. Once trained, PoseSRPN relies only on a single bounding-box initialization to produce human joint locations. The experimental results show that, while maintaining good pose estimation accuracy on the COCO and PoseTrack datasets, the proposed method achieves a speed of 59 frames/s, which is superior to other pose tracking frameworks.

Keywords: computer vision, pose estimation, pose tracking, Siamese network

Procedia PDF Downloads 125
1946 Cross Project Software Fault Prediction at Design Phase

Authors: Pradeep Singh, Shrish Verma

Abstract:

Software fault prediction models are created using source code, metrics computed from the same or a previous version of the code, and related fault data. Some companies do not store and track all the artifacts required for software fault prediction. To construct a fault prediction model for such companies, training data from other projects is one potential solution. The earlier a fault is predicted, the less it costs to correct. The training data consist of metrics data and related fault data at the function/module level. This paper investigates fault prediction at an early stage using cross-project data, focusing on design metrics. An empirical analysis is carried out to validate design metrics for cross-project fault prediction. The machine learning technique used for evaluation is Naïve Bayes. Design-phase metrics from other projects can be used as an initial guideline for projects where no previous fault data are available. We analyze seven data sets from the NASA Metrics Data Program, which offer design as well as code metrics. Overall, the results of cross-project learning are comparable to within-company learning.
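A minimal sketch of the cross-project setup: train Gaussian Naïve Bayes on the design metrics of one project and evaluate on another. The file names and the defective label column are hypothetical stand-ins for the NASA MDP data sets.

```python
# Sketch of cross-project fault prediction with Gaussian Naive Bayes:
# train on one project's design metrics, test on another project.
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

train = pd.read_csv("project_A_design_metrics.csv")   # hypothetical source project
test = pd.read_csv("project_B_design_metrics.csv")    # hypothetical target project

features = [c for c in train.columns if c != "defective"]
model = GaussianNB().fit(train[features], train["defective"])
print(classification_report(test["defective"], model.predict(test[features])))
```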

Keywords: software metrics, fault prediction, cross project, within project

Procedia PDF Downloads 309
1945 Feature Weighting Comparison Based on Clustering Centers in the Detection of Diabetic Retinopathy

Authors: Kemal Polat

Abstract:

In this paper, three feature weighting methods have been used to improve the classification performance for diabetic retinopathy (DR). To classify diabetic retinopathy, features extracted from the output of several retinal image processing algorithms, such as image-level, lesion-specific, and anatomical components, have been used and fed into the classifier algorithms. The dataset used in this study was taken from the University of California, Irvine (UCI) machine learning repository. Feature weighting methods including fuzzy c-means clustering based feature weighting, subtractive clustering based feature weighting, and Gaussian mixture clustering based feature weighting have been used and compared with each other in the classification of DR. After feature weighting, five different classifier algorithms comprising multi-layer perceptron (MLP), k-nearest neighbor (k-NN), decision tree, support vector machine (SVM), and Naïve Bayes have been used. The hybrid method combining subtractive clustering based feature weighting with the decision tree classifier achieved a classification accuracy of 100% in the screening of DR. These results demonstrate that the proposed hybrid scheme is very promising for medical data set classification.

Keywords: machine learning, data weighting, classification, data mining

Procedia PDF Downloads 301
1944 Polarity Classification of Social Media Comments in Turkish

Authors: Migena Ceyhan, Zeynep Orhan, Dimitrios Karras

Abstract:

People in modern societies continuously share their experiences, emotions, and thoughts in different areas of life. The information reaches almost everyone in real time and can have an important impact on shaping people's way of living. This phenomenon is well recognized and exploited by market representatives trying to gain the most from this medium. Given the abundance of information, people and organizations are looking for efficient tools that filter the vast amount of data into important information ready for analysis. This paper is a modest contribution to this field, describing the process of automatically classifying social media comments in the Turkish language as positive or negative. Once the data are gathered and preprocessed, feature sets of selected single words or groups of words are built according to the characteristics of the language used in the texts. These features are later used to train and test a system with different machine learning algorithms (Naïve Bayes, Sequential Minimal Optimization, J48, and Bayesian Linear Regression). The resulting high accuracies can provide important feedback for decision-makers to improve their business strategies accordingly.
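One of the simpler configurations described above (bag-of-words features with Naïve Bayes) can be sketched as follows; the Turkish comments and labels below are tiny illustrative placeholders, not the study's corpus.

```python
# Sketch: bag-of-words (unigrams and bigrams) + multinomial Naive Bayes
# polarity classifier. Texts and labels are placeholder examples.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["harika bir ürün", "hiç beğenmedim", "çok kötü", "tavsiye ederim"]
labels = ["positive", "negative", "negative", "positive"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["ürün çok güzel"]))   # classify a new comment
```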

Keywords: feature selection, machine learning, natural language processing, sentiment analysis, social media reviews

Procedia PDF Downloads 120
1943 Characteristic Function in Estimation of Probability Distribution Moments

Authors: Vladimir S. Timofeev

Abstract:

In this article, the problem of estimating distributional moments is considered. A new approach to moment estimation based on the characteristic function is proposed. Using a statistical simulation technique, the author shows that the new approach has some robustness properties. Numerical differentiation is used to calculate the derivatives of the characteristic function. The results obtained confirm that the approach works efficiently and can be recommended for statistical applications.
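The mechanics of the approach can be sketched as follows: the k-th derivative of the characteristic function at the origin equals i^k times the k-th raw moment, so central finite differences of the empirical characteristic function yield moment estimates. The sample below is simulated purely for illustration.

```python
# Sketch: first two raw moments from numerical derivatives of the empirical
# characteristic function (ECF) at the origin.
import numpy as np

rng = np.random.default_rng(2)

def ecf(t, x):
    """Empirical characteristic function evaluated at point t."""
    return np.mean(np.exp(1j * t * x))

def moments_from_ecf(x, h=1e-3):
    d1 = (ecf(h, x) - ecf(-h, x)) / (2 * h)                  # phi'(0)  = i * m1
    d2 = (ecf(h, x) - 2 * ecf(0.0, x) + ecf(-h, x)) / h**2   # phi''(0) = -m2
    return d1.imag, -d2.real

x = rng.normal(5.0, 2.0, 100_000)            # illustrative sample
m1, m2 = moments_from_ecf(x)
print("mean estimate:", m1, " variance estimate:", m2 - m1**2)
```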

Keywords: characteristic function, distributional moments, robustness, outlier, statistical estimation problem, statistical simulation

Procedia PDF Downloads 464
1942 Considering the Reliability of Measurements Issue in Distributed Adaptive Estimation Algorithms

Authors: Wael M. Bazzi, Amir Rastegarnia, Azam Khalili

Abstract:

In this paper, we consider the issue of measurement reliability in the distributed adaptive estimation problem. To this end, we assume a sensor network in which the observation noise variance differs among sensors and propose a new estimation method based on the incremental distributed least mean-square (IDLMS) algorithm. The proposed method contains two phases: (I) estimation of each sensor's observation noise variance, and (II) estimation of the desired parameter using the estimated observation variances. To account for the reliability of measurements, in the second phase of the proposed algorithm, the step-size parameter is adjusted for each sensor according to its observation noise variance. As our simulation results show, the proposed algorithm considerably improves the performance of the IDLMS algorithm under the same conditions.

Keywords: adaptive filter, distributed estimation, sensor network, IDLMS algorithm

Procedia PDF Downloads 608
1941 State Estimation of a Biotechnological Process Using Extended Kalman Filter and Particle Filter

Authors: R. Simutis, V. Galvanauskas, D. Levisauskas, J. Repsyte, V. Grincas

Abstract:

This paper deals with advanced state estimation algorithms for estimating biomass concentration and specific growth rate in a typical fed-batch biotechnological process. The biotechnological process was represented by a nonlinear mass-balance based process model. An Extended Kalman Filter (EKF) and a Particle Filter (PF) were used to estimate the unmeasured state variables from oxygen uptake rate (OUR) and base consumption (BC) measurements. To obtain more general results, a simplified process model was used in the EKF and PF estimation algorithms. This model does not require any special growth kinetic equations and can be applied to state estimation in various bioprocesses. The investigation focused on comparing the estimation quality of the EKF and PF estimators under different measurement noise levels. The simulation results show that the Particle Filter algorithm requires significantly more computation time for state estimation but gives lower estimation errors for both biomass concentration and specific growth rate. The tuning procedure for the Particle Filter is also simpler than that for the EKF. Consequently, the Particle Filter should be preferred in real applications, especially for monitoring industrial bioprocesses, where simplified implementation procedures are always desirable.

Keywords: biomass concentration, extended Kalman filter, particle filter, state estimation, specific growth rate

Procedia PDF Downloads 399
1940 Estimation of Fuel Cost Function Characteristics Using Cuckoo Search

Authors: M. R. Al-Rashidi, K. M. El-Naggar, M. F. Al-Hajri

Abstract:

The fuel cost function describes the relationship between electric power generation and cost in thermal plants and hence sheds light on economic aspects of the power industry. Different models have been proposed to describe this relationship, with the quadratic function model being the most popular. In this paper, the parameters of the second-order fuel cost function are estimated using the cuckoo search algorithm, a recent population-based meta-heuristic optimization technique used here primarily as an accurate estimation tool. Its main features are flexibility, simplicity, and effectiveness compared to other estimation techniques. The parameter estimation problem is formulated as an optimization problem whose goal is to minimize the error associated with the estimated parameters. A case study is considered to illustrate the promising potential of cuckoo search as a valuable estimation and optimization technique.
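A minimal cuckoo-search sketch fitting the quadratic fuel cost F(P) = a + bP + cP² by minimising squared error is given below; the synthetic cost data, bounds, and algorithm settings are illustrative only and do not reproduce the paper's case study.

```python
# Minimal cuckoo-search sketch for estimating quadratic fuel cost parameters.
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(3)

P = np.linspace(100, 500, 20)                       # generation levels (MW), illustrative
cost = 240 + 7.0 * P + 0.004 * P**2                 # synthetic "measured" cost data

def error(params):
    a, b, c = params
    return np.sum((a + b * P + c * P**2 - cost) ** 2)

def levy_step(dim, beta=1.5):
    """Levy flight step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

n_nests, dim, pa, iters = 25, 3, 0.25, 500
lo, hi = np.array([0.0, 0.0, 0.0]), np.array([1000.0, 20.0, 0.1])
nests = rng.uniform(lo, hi, (n_nests, dim))
fitness = np.array([error(n) for n in nests])

for _ in range(iters):
    best = nests[fitness.argmin()]
    for i in range(n_nests):
        new = np.clip(nests[i] + 0.01 * levy_step(dim) * (nests[i] - best), lo, hi)
        f = error(new)
        j = rng.integers(n_nests)
        if f < fitness[j]:                          # replace a random nest if better
            nests[j], fitness[j] = new, f
    worst = fitness.argsort()[-int(pa * n_nests):]  # abandon the worst nests
    nests[worst] = rng.uniform(lo, hi, (len(worst), dim))
    fitness[worst] = [error(n) for n in nests[worst]]

best = nests[fitness.argmin()]
print("estimated (a, b, c):", best, " fit error:", error(best))
```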

Keywords: cuckoo search, parameter estimation, fuel cost function, economic dispatch

Procedia PDF Downloads 547
1939 ML-Based Blind Frequency Offset Estimation Schemes for OFDM Systems in Non-Gaussian Noise Environments

Authors: Keunhong Chae, Seokho Yoon

Abstract:

This paper proposes frequency offset (FO) estimation schemes robust to the non-Gaussian noise for orthogonal frequency division multiplexing (OFDM) systems. A maximum-likelihood (ML) scheme and a low-complexity estimation scheme are proposed by applying the probability density function of the cyclic prefix of OFDM symbols to the ML criterion. From simulation results, it is confirmed that the proposed schemes offer a significant FO estimation performance improvement over the conventional estimation scheme in non-Gaussian noise environments.

Keywords: frequency offset, cyclic prefix, maximum-likelihood, non-Gaussian noise, OFDM

Procedia PDF Downloads 447
1938 Design of Transmit Beamspace and DOA Estimation in MIMO Radar

Authors: S. Ilakkiya, A. Merline

Abstract:

Multiple-input multiple-output (MIMO) radar systems use modulated waveforms and directive antennas to transmit electromagnetic energy into a specific volume in space to search for targets. This paper deals with the design of the transmit beamspace matrix and direction-of-arrival (DOA) estimation for MIMO radar with collocated antennas. The design of the transmit beamspace matrix is based on minimizing the difference between a desired transmit beampattern and the actual one, while enforcing the constraint of uniform power distribution across the transmit array elements. The rotational invariance property is established at the transmit array by imposing a specific structure on the beamspace matrix. Semidefinite programming and spatial-division based design (SDD) approaches are also developed separately. In MIMO radar systems, DOA estimation is an essential process to determine the direction of incoming signals and thus to steer the beam of the antenna array towards the estimated direction. This estimation employs both non-adaptive and adaptive spectral estimation techniques. The design of the transmit beamspace matrix and the spectral estimation techniques are studied through simulation.

Keywords: adaptive and non-adaptive spectral estimation, direction of arrival estimation, MIMO radar, rotational invariance property, transmit and receive beamforming

Procedia PDF Downloads 481
1937 Comparative Analysis of Two Approaches to Joint Signal Detection, ToA and AoA Estimation in Multi-Element Antenna Arrays

Authors: Olesya Bolkhovskaya, Alexey Davydov, Alexander Maltsev

Abstract:

In this paper, two approaches to joint signal detection, time of arrival (ToA) estimation, and angle of arrival (AoA) estimation in multi-element antenna arrays are investigated. Two scenarios are considered: first, when the waveform of the useful signal is known a priori, and second, when the waveform of the desired signal is unknown. In the first scenario, antenna array signal processing based on multi-element matched filtering (MF), followed by a non-coherent detection scheme and maximum likelihood (ML) parameter estimation blocks, is exploited. In the second scenario, signal processing based on estimation of the covariance matrix of the antenna array elements, followed by eigenvector analysis and ML parameter estimation blocks, is applied. The performance characteristics of both signal processing schemes are thoroughly investigated and compared for different useful signal and noise parameters.

Keywords: antenna array, signal detection, ToA, AoA estimation

Procedia PDF Downloads 463
1936 A New IFO Estimation Scheme for Orthogonal Frequency Division Multiplexing Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

We address a new integer frequency offset (IFO) estimation scheme aided by pilots for orthogonal frequency division multiplexing (OFDM) systems. After correlating each continual pilot with a predetermined scattered pilot, the correlation value is correlated again to alleviate the influence of the timing offset. Numerical results demonstrate that the influence of the timing offset on the IFO estimation is significantly decreased.

Keywords: estimation, integer frequency offset, OFDM, timing offset

Procedia PDF Downloads 532
1935 6D Posture Estimation of Road Vehicles from Color Images

Authors: Yoshimoto Kurihara, Tad Gonsalves

Abstract:

Currently, in the field of object posture estimation, there is research on estimating the position and angle of an object by storing a 3D model of the object in advance on a computer and matching observations against that model. In this research, however, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks: a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy for each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°: the classification accuracy was about 87.3%, and the regression accuracy was about 98.9%.

Keywords: 6D posture estimation, image recognition, deep learning, AlexNet

Procedia PDF Downloads 117
1934 An Approach to Noise Variance Estimation in Very Low Signal-to-Noise Ratio Stochastic Signals

Authors: Miljan B. Petrović, Dušan B. Petrović, Goran S. Nikolić

Abstract:

This paper describes a method for AWGN (Additive White Gaussian Noise) variance estimation in noisy stochastic signals, referred to as Multiplicative-Noising Variance Estimation (MNVE). The aim was to develop an estimation algorithm with a minimal number of assumptions about the original signal structure. MATLAB simulations and analysis of the results for the method applied to speech signals showed higher accuracy than the standard AR (autoregressive) modeling noise estimation technique. In addition, strong performance was observed at very low signal-to-noise ratios, which in general represent the worst-case scenario for signal denoising methods. High execution time appears to be the only disadvantage of MNVE. After close examination of all the observed features of the proposed algorithm, it was concluded that the method is worth exploring and that, with some further adjustments and improvements, it can be remarkably powerful.

Keywords: noise, signal-to-noise ratio, stochastic signals, variance estimation

Procedia PDF Downloads 357
1933 A Mathematical Model of Power System State Estimation for Power Flow Solution

Authors: F. Benhamida, A. Graa, L. Benameur, I. Ziane

Abstract:

State estimation of the electrical power system operating state is very important for supervision tasks. Owing to the nonlinearity of the AC power flow model, the state estimation problem (SEP) is a nonlinear mathematical problem with many local optima. This paper treats the mathematical model of the SEP and the monitoring of large-scale nonlinear systems, with an application to electrical power systems, covering modelling, analysis, and state estimation synthesis in order to supervise power system behavior. In fact, it is very difficult, if not impossible (for reasons of accessibility, technique, and/or cost), to measure the excessive number of state variables in a large-sized system. It is thus important to develop software sensors able to produce a reliable estimate of the variables necessary for diagnosis and also for control.

Keywords: power system, state estimation, robustness, observability

Procedia PDF Downloads 487
1932 Detection and Classification of Myocardial Infarction Using New Extracted Features from Standard 12-Lead ECG Signals

Authors: Naser Safdarian, Nader Jafarnia Dabanloo

Abstract:

In this paper, we used four features, i.e., the Q-wave integral, QRS complex integral, T-wave integral, and total integral, extracted from normal and patient ECG signals, for the detection and localization of myocardial infarction (MI) in the left ventricle of the heart. Our research focused on the detection and localization of MI in the standard 12-lead ECG. We use the Q-wave and T-wave integrals because these features are important indicators in the detection of MI. We used pattern recognition methods such as the Artificial Neural Network (ANN) to detect and localize MI, because these methods have good accuracy for classifying normal and abnormal signals. We used one type of Radial Basis Function (RBF) network, called the Probabilistic Neural Network (PNN), because of its nonlinearity, and also used other classifiers such as k-Nearest Neighbors (KNN), Multilayer Perceptron (MLP), and Naive Bayes classification. We used the PhysioNet database as our training and test data. We reached over 80% accuracy on test data for localization and over 95% for detection of MI. The main advantages of our method are its simplicity and good accuracy, and classification accuracy can be improved further by adding more features. A simple method based on only four features extracted from the standard ECG is presented, which has good accuracy in MI localization.

Keywords: ECG signal processing, myocardial infarction, features extraction, pattern recognition

Procedia PDF Downloads 427
1931 Sialic Acid Profile and Sialidase Activity in HIV-Infected Individuals

Authors: Hadiza Abdullahi

Abstract:

Sialic acids and sialidases have been implicated in many disease states, particularly bacterial and viral infections, which are common opportunistic infections in HIV disease. Their role in HIV/AIDS is therefore contemplated. A study was carried out to determine the sialic acid profile and sialidase activity in HIV-infected and apparently healthy individuals, and to determine the relationship between sialic acid levels and sialidase activity. Blood samples were collected from 200 subjects (150 HIV-infected individuals and 50 apparently healthy individuals) divided into four groups: HIV ART-naïve, HIV stable (on ART and stable with no clinical episodes), HIV-OI (on ART with opportunistic infections), and apparently healthy. Complete blood count, erythrocyte surface sialic acid (ESSA) and free serum sialic acid (FSSA) concentrations, and sialidase activity were determined for all 200 subjects. Analysis of variance (ANOVA) was used to compare the results of the different groups of HIV-infected individuals as well as the controls. The mean haemoglobin (HGB), packed cell volume (PCV), and red blood cell (RBC) concentrations were significantly lower (P ≤ 0.05) in the HIV groups compared with the apparently healthy group. Anaemia and neutropaenia were the most common haematological abnormalities observed in this study, with the highest prevalence of anaemia found in the ART-naïve group. The mean FSSA was 0.4±0.4 mg/ml, with a significant difference (p ≤ 0.05) between some groups. The highest FSSA level was observed in the HIV ART-naïve group (0.65±0.5 mg/ml). The mean ESSA value for the study population was 0.54±0.35 mg/ml, with no significant difference between groups. The mean sialidase activity values were 0.52±0.1 µmol/min/µl, 0.40±0.1 µmol/min/µl, 0.45±0.1 µmol/min/µl, and 0.41±0.1 µmol/min/µl for the HIV ART-naïve, HIV stable, HIV-OI, and apparently healthy groups, respectively; no significant difference was found between groups, or by gender and age. The finding of higher mean sialidase activity and FSSA levels in the ART-naïve HIV group compared with the other groups indicates that the virus and other opportunistic pathogens may produce sialidases in vivo that cleave sialic acids from the erythrocyte surface, leading to the high FSSA levels, anaemia, and neutropaenia seen in this group. The higher ESSA concentration found in the HIV stable group, together with the lowest FSSA concentration in that group, suggests the presence of sialyltransferases.

Keywords: erythrocyte surface sialic acid, free serum sialic acid, HIV, sialidase

Procedia PDF Downloads 177
1930 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques

Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah

Abstract:

Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming, and error-prone. This has partly led to most cost estimates being unclear and riddled with inaccuracies that at times lead to over- or under-estimation of construction cost. The development of standard sets of measurement rules that are understandable by all those involved in a construction project has not totally solved these challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological and machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges encountered in this exploratory study are also reported, and recommendations for future studies are proposed.

Keywords: BIM, construction projects, cost estimation, NRM, ontology

Procedia PDF Downloads 512
1929 Cellular Senescence and Neuroinflammation Following Controlled Cortical Impact Traumatic Brain Injury in Juvenile Mice

Authors: Zahra F. Al-Khateeb, Shenel Shekerzade, Hasna Boumenar, Siân M. Henson, Jordi L. Tremoleda, A. T. Michael-Titus

Abstract:

Traumatic brain injury (TBI) is the leading cause of disability and death in young adults and also increases the risk of neurodegeneration. The mechanisms linking moderate to severe TBI to neurodegeneration are not known. It has been proposed that the induction of cellular senescence post-injury could amplify neuroinflammation and induce long-term changes. The impact of these processes after injury to an immature brain has not yet been characterised. We carried out a controlled cortical impact (CCI) injury in juvenile 1-month-old male CD1 mice. Animals were anaesthetised and received a unilateral CCI injury. The sham group received anaesthesia and had a craniotomy. A naïve group had no intervention. The brain tissue was analysed at 5 days and 35 days post-injury using immunohistochemistry and markers for microglia, astrocytes, and senescence. Compared to naïve animals, injured mice showed an increased microglial and astrocytic reaction early post-injury, as reflected in the Iba1 and GFAP markers, respectively; the GFAP increase persisted into the later phase. The senescence analysis showed a significant increase in γH2AX-53BP1 nuclear foci, 8-oxoguanine, p19ARF, p16INK4a, and p53 expression in naïve vs. sham groups and naïve vs. CCI groups at 5 dpi. At 35 days, the difference was no longer statistically significant for all markers. The injury induced a decrease in p21 expression vs. the naïve group at 35 dpi. These results indicate the induction of a complex senescence response after immature brain injury. Some changes occur early and may reflect the activation/proliferation of non-neuronal cells post-injury that had been hindered, whereas changes such as p21 downregulation may reflect a delayed response and pro-repair processes.

Keywords: cellular senescence, traumatic brain injury, brain injury, controlled cortical impact

Procedia PDF Downloads 112