Search results for: total vector error

11440 Support Vector Regression with Weighted Least Absolute Deviations

Authors: Kang-Mo Jung

Abstract:

Least squares support vector machine (LS-SVM) is a penalized regression method that considers both the fit and the generalization ability of a model. However, its squared loss function is very sensitive to even a single outlier. We propose a weighted absolute deviation loss function to make the estimates of the least absolute deviation support vector machine robust. The proposed estimates can be obtained by a quadratic programming algorithm. Numerical experiments on simulated datasets show that the proposed algorithm is competitive in terms of robustness to outliers.
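
A minimal sketch of a weighted least-absolute-deviation SVR objective of the kind described above, written as a convex program in cvxpy. The ridge-style regularizer, the linear (non-kernel) model, and the choice of sample weights are illustrative assumptions, not the paper's exact quadratic programming formulation.

```python
import numpy as np
import cvxpy as cp

def weighted_lad_svr(X, y, sample_weights, C=1.0):
    """Sketch: weighted least-absolute-deviation SVR solved as a convex program.
    The linear model and ridge penalty are simplifying assumptions."""
    n, d = X.shape
    w = cp.Variable(d)
    b = cp.Variable()
    residuals = y - (X @ w + b)
    # Weighted L1 (least absolute deviation) loss plus an L2 penalty on the coefficients.
    objective = cp.Minimize(0.5 * cp.sum_squares(w)
                            + C * cp.sum(cp.multiply(sample_weights, cp.abs(residuals))))
    cp.Problem(objective).solve()
    return w.value, b.value

# Toy usage: one gross outlier barely moves the weighted LAD fit.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=50)
y[0] += 50.0                                   # outlier
weights = np.ones(50); weights[0] = 0.1        # downweight the suspected outlier
print(weighted_lad_svr(X, y, weights)[0])
```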

Keywords: least absolute deviation, quadratic programming, robustness, support vector machine, weight

Procedia PDF Downloads 527
11439 Medical Error: Concept and Description According to Brazilian Physicians

Authors: Vitor S. Mendonca, Maria Luisa S. Schmidt

Abstract:

In Brazil, the medical profession is expected to be error-free, and healthcare professionals who commit an error are condemned. Medical errors occur frequently in the Brazilian healthcare system, so identifying better options for handling this issue has become of interest, primarily to physicians. The purpose of this study is to better understand the tensions involved in the fear of making an error, given the harm and risk this would represent for those involved. A qualitative study was performed by means of the narratives of the lived experiences of ten practicing physicians in the State of Sao Paulo. The concept and characterization of errors were discussed, together with the fear of making an error, near misses and errors themselves, how to deal with errors, and what to do to avoid them. The analysis indicates excessive pressure in the medical profession for error-free practice, with a well-established physician-patient relationship facilitating the management of medical errors. Errors occur, but a lack of information and discussion often leads to their concealment due to fear of possible judgment by society or peers. Establishing programs that encourage appropriate medical conduct in the event of an error requires coherent answers for humanization in Brazilian medicine. It is necessary to improve the discussion about medical errors and to disseminate models of communication and notification of errors in Brazil.

Keywords: medical error, narrative, physician-patient relationship, qualitative research

Procedia PDF Downloads 178
11438 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Descent Optimization

Authors: R. O. Osaseri, A. R. Usiobaifo

Abstract:

The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and oil-immersed transformers are widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient faults of the transformer. Accurate prediction of incipient faults from transformer oil is needed to facilitate prompt maintenance, reduce cost, and minimize error. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported the use of artificial intelligence to predict incipient transformer faults. In this study, a machine learning technique was employed, using gradient descent algorithms and a Support Vector Machine (SVM), to predict incipient transformer faults. The method focuses on creating a system that improves its performance based on previous results and historical data. The system design approach consists of two phases: a training phase and a testing phase. The gradient descent algorithm is trained with a training dataset, while the learned model is applied to a set of new data. These two datasets are used to prove the accuracy of the proposed model. In this study, a transformer fault diagnostic model based on a Support Vector Machine (SVM) and gradient descent algorithms is presented, with a satisfactory diagnostic capability and a higher percentage of correctly predicted incipient transformer faults than existing diagnostic methods.
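
A minimal sketch of the kind of pipeline described above: a linear SVM trained by stochastic gradient descent on dissolved-gas features. The feature layout (five DGA gas concentrations), the synthetic data, and the three fault classes are assumptions for illustration, not the paper's dataset or tuned model.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier   # hinge loss = linear SVM fit by gradient descent
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: columns = dissolved-gas concentrations (e.g., H2, CH4, C2H6, C2H4, C2H2),
# labels = assumed fault classes (normal / thermal / discharge).
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = rng.integers(0, 3, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      SGDClassifier(loss="hinge", max_iter=1000, tol=1e-3, random_state=0))
model.fit(X_train, y_train)                              # training phase
print("test accuracy:", model.score(X_test, y_test))     # testing phase
```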

Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault

Procedia PDF Downloads 322
11437 Comparisons of Surveying with Terrestrial Laser Scanner and Total Station for Volume Determination of Overburden and Coal Excavations in Large Open-Pit Mine

Authors: B. Keawaram, P. Dumrongchai

Abstract:

The volume of overburden and coal excavations in an open-pit mine is generally determined by conventional survey methods such as the total station. This study aimed to evaluate the accuracy of a terrestrial laser scanner (TLS) used to measure overburden and coal excavations, and to compare TLS survey data sets with total station data. Results revealed that the reference points measured with the total station showed 0.2 mm precision for both horizontal and vertical coordinates. When using TLS on the same points, standard deviations of 4.93 cm and 0.53 cm were achieved for horizontal and vertical coordinates, respectively. For volume measurements covering mining areas of 79,844 m2, TLS yielded a mean difference of about 1% and a surface error margin of 6 cm at the 95% confidence level when compared to the volume obtained by the total station.

Keywords: mine, survey, terrestrial laser scanner, total station

Procedia PDF Downloads 384
11436 Variation of Refractive Errors among Right and Left Eyes in Jos, Plateau State, Nigeria

Authors: F. B. Masok, S. S. Songdeg, R. R. Dawam

Abstract:

Vision is an important process for learning and communication, as man depends greatly on vision to sense his environment. A study of the prevalence and variation of refractive errors conducted between December 2010 and May 2011 in Jos revealed that 735 (77.50%) of 950 subjects examined had various refractive errors. Myopia was observed in 373 (49.79%) of the subjects; the error occurred in 263 (55.60%) right eyes and 210 (44.39%) left eyes. The mean myopic error was found to be -1.54 ± 3.32. Hyperopia was observed in 385 (40.53%) of the sampled population, comprising 203 (52.73%) right eyes and 182 (47.27%) left eyes. The mean hyperopic error was found to be +1.74 ± 3.13. Astigmatism accounted for 359 (38.84%) of the subjects, of which 193 (53.76%) were in the right eyes and 168 (46.79%) in the left eyes. Presbyopia was found in 404 (42.53%) of the subjects; of this figure, 164 (40.59%) were in the right eyes and 240 (59.41%) in the left eyes. In some age groups, the number of right and left eyes with refractive errors was observed to increase with age, peaking in the 60-69 age group. This pattern of refractive errors could be attributed to exposure to various forms of light, particularly ultraviolet rays (e.g., rays from television and computer screens). There were no remarkable differences between the mean myopic error and the mean hyperopic error in the right and left eyes, which suggests that the right and left eyes are similar.

Keywords: left eye, refractive errors, right eye, variation

Procedia PDF Downloads 433
11435 Error Correction Method for 2D Ultra-Wideband Indoor Wireless Positioning System Using Logarithmic Error Model

Authors: Phornpat Chewasoonthorn, Surat Kwanmuang

Abstract:

Indoor positioning technologies have evolved rapidly. They augment the Global Positioning System (GPS), which requires a line of sight to the sky, to track the location of people or objects. This study developed an error correction method for an indoor real-time location system (RTLS) based on an ultra-wideband (UWB) sensor from Decawave. Multiple stationary nodes (anchors) were installed throughout the workspace. The distance between stationary and moving nodes (tags) can be measured using a two-way-ranging (TWR) scheme. The results show that the uncorrected ranging error from the sensor system can be as large as 1 m. To reduce the ranging error and thus increase positioning accuracy, this study proposes an online correction algorithm using the Kalman filter. Experimental results show that the system can reduce the ranging error down to 5 cm.
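
A minimal one-dimensional Kalman-filter sketch of the kind of online ranging correction described above, smoothing a sequence of TWR range measurements. The constant-range process model and the noise variances are illustrative assumptions, not the paper's tuned values or its logarithmic error model.

```python
import numpy as np

def kalman_smooth_ranges(z, process_var=1e-4, meas_var=0.01):
    """Sketch: 1-D Kalman filter over TWR range measurements z (metres).
    process_var and meas_var are assumed values for illustration."""
    x, p = z[0], 1.0          # state estimate (range) and its covariance
    filtered = []
    for zi in z:
        p += process_var              # predict (constant-range model)
        k = p / (p + meas_var)        # Kalman gain
        x += k * (zi - x)             # update with the new measurement
        p *= (1.0 - k)
        filtered.append(x)
    return np.array(filtered)

# Toy usage: noisy measurements of a true 5.00 m range.
rng = np.random.default_rng(0)
raw = 5.0 + rng.normal(scale=0.3, size=100)
print("raw std: %.3f m, filtered std: %.3f m" % (raw.std(), kalman_smooth_ranges(raw)[20:].std()))
```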

Keywords: indoor positioning, ultra-wideband, error correction, Kalman filter

Procedia PDF Downloads 160
11434 Government Final Consumption Expenditure and Household Consumption Expenditure NPISHS in Nigeria

Authors: Usman A. Usman

Abstract:

Undeniably, unlike the Classical view, the Keynesian perspective on the aggregate demand side holds a significant position in the policy, growth, and welfare of Nigeria, due to government involvement and the ineffective demand of a population living on poor per capita income. This study investigates the effect of government final consumption expenditure and financial deepening on households' and NPISHs' final consumption expenditure, using data on Nigeria from 1981 to 2019. The study employed the ADF stationarity test, the Johansen cointegration test, and a Vector Error Correction Model. The results revealed that the coefficient of government final consumption expenditure has a positive effect on household consumption expenditure in the long run. There is a long-run and short-run relationship between gross fixed capital formation and household consumption expenditure. The coefficients of cpsgdp (financial deepening) and gross fixed capital formation have a negative impact on household final consumption expenditure. The coefficient of money supply (lm2gdp), another proxy for financial deepening, and the coefficient of FDI have positive effects on household final consumption expenditure in the long run. The study therefore concludes that gross fixed capital formation stimulates household consumption expenditure and recommends a legal framework to support investment as a panacea for increasing household income and consumption and reducing poverty in Nigeria; this should be a central component of policy.
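
A minimal sketch of the three-step workflow named in the abstract (ADF unit-root tests, Johansen cointegration test, VECM estimation) using statsmodels. The synthetic series and their column names stand in for the 1981-2019 Nigerian data, which are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# Synthetic stand-in for the annual series (column names are assumptions).
rng = np.random.default_rng(0)
n = 39
trend = np.cumsum(rng.normal(size=n))                 # shared stochastic trend
data = pd.DataFrame({
    "hh_consumption": trend + rng.normal(size=n),
    "gov_consumption": 0.8 * trend + rng.normal(size=n),
    "gfcf": 0.5 * trend + rng.normal(size=n),
})

# Step 1: ADF stationarity test on each series in levels.
for col in data:
    print(col, "ADF p-value:", round(adfuller(data[col])[1], 3))

# Step 2: Johansen cointegration test (trace statistics vs. 95% critical values).
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace stats:", jres.lr1, "95% critical values:", jres.cvt[:, 1])

# Step 3: estimate a VECM with one cointegrating relation.
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(res.summary())
```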

Keywords: government final consumption expenditure, household consumption expenditure, vector error correction model, cointegration

Procedia PDF Downloads 52
11433 Generalization of Tau Approximant and Error Estimate of Integral Form of Tau Methods for Some Class of Ordinary Differential Equations

Authors: A. I. Ma’ali, R. B. Adeniyi, A. Y. Badeggi, U. Mohammed

Abstract:

An error estimate for the integrated formulation of the Lanczos tau method for some classes of ordinary differential equations was reported. This paper is concerned with the generalization of tau approximants and their corresponding error estimates for some classes of ordinary differential equations (ODEs) characterized by m + s = 3 (i.e., m = 1, s = 2; m = 2, s = 1; and m = 3, s = 0), where m and s are the order of the differential equation and the number of overdetermination, respectively. The general results obtained were validated with some numerical examples.

Keywords: approximant, error estimate, tau method, overdetermination

Procedia PDF Downloads 606
11432 A Study on the Influence of Planet Pin Parallelism Error to Load Sharing Factor

Authors: Kyung Min Kang, Peng Mou, Dong Xiang, Yong Yang, Gang Shen

Abstract:

In this paper, planet pin parallelism error, which is one of the manufacturing errors of the planet carrier, is employed as the main variable influencing the planet load sharing factor. This error is categorized into two groups: (i) pin parallelism error with rotation about the axis perpendicular to the tangent of the gear base circle (x-axis rotation in this paper), and (ii) pin parallelism error with rotation about the tangent axis of the gear base circle (y-axis rotation in this paper). For this study, the planetary gear system of a 1.5 MW wind turbine is considered, and a purely torsional rigid-body model of the planetary gear is built using SolidWorks and MSC.ADAMS. Based on the quantified parallelism errors and the simulation model, dynamic simulations of the planetary gear are carried out to obtain dynamic mesh load results for each type of error, and the load sharing factor is calculated from the mesh load results. A load sharing factor formula and suggestions for planetary reliability design are proposed in the conclusion of this study.
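
A minimal sketch of how a planet load sharing factor can be computed from per-planet mesh loads, using the common definition of the largest planet load divided by the ideal equal share. This definition is an assumption for illustration; the formula proposed in the paper is derived from the ADAMS mesh-load results and may differ.

```python
import numpy as np

def load_sharing_factor(mesh_loads):
    """Sketch: load sharing factor = max planet mesh load / mean planet mesh load.
    A value of 1.0 means perfectly equal sharing; larger values mean worse sharing."""
    loads = np.asarray(mesh_loads, dtype=float)
    return loads.max() / loads.mean()

# Toy usage: three planets, one carrying more load because of a pin parallelism error.
print(load_sharing_factor([10.2e3, 9.6e3, 11.9e3]))   # roughly 1.13
```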

Keywords: planetary gears, planet load sharing, MSC. ADAMS, parallelism error

Procedia PDF Downloads 399
11431 Government Final Consumption Expenditure Financial Deepening and Household Consumption Expenditure NPISHs in Nigeria

Authors: Usman A. Usman

Abstract:

Undeniably, unlike the Classical view, the Keynesian perspective on the aggregate demand side holds a significant position in the policy, growth, and welfare of Nigeria, due to government involvement and the ineffective demand of a population living on poor per capita income. This study investigates the effect of government final consumption expenditure and financial deepening on households' and NPISHs' final consumption expenditure, using data on Nigeria from 1981 to 2019. The study employed the ADF stationarity test, the Johansen cointegration test, and a Vector Error Correction Model. The results revealed that the coefficient of government final consumption expenditure has a positive effect on household consumption expenditure in the long run. There is a long-run and short-run relationship between gross fixed capital formation and household consumption expenditure. The coefficients of cpsgdp (financial deepening) and gross fixed capital formation have a negative impact on household final consumption expenditure. The coefficient of money supply (lm2gdp), another proxy for financial deepening, and the coefficient of FDI have positive effects on household final consumption expenditure in the long run. The study therefore concludes that gross fixed capital formation stimulates household consumption expenditure and recommends a legal framework to support investment as a panacea for increasing household income and consumption and reducing poverty in Nigeria; this should be a central component of policy.

Keywords: household, government expenditures, vector error correction model, johansen test

Procedia PDF Downloads 61
11430 Solving Linear Systems Involved in Convex Programming Problems

Authors: Yixun Shi

Abstract:

Many interior point methods for convex programming solve an (n+m)×(n+m) linear system in each iteration. Many implementations solve this system by considering an equivalent m×m system (4), as listed in the paper, so that the job is reduced to solving system (4). However, system (4) has to be solved exactly, since otherwise the error would be passed entirely onto the last m equations of the original system. Often a Cholesky factorization is computed to obtain the exact solution of (4). One Cholesky factorization must then be done in every iteration, resulting in higher computational costs. In this paper, two iterative methods for solving linear systems using vector division are combined and embedded into interior point methods. Instead of computing one Cholesky factorization in each iteration, the approach requires only one Cholesky factorization in the entire procedure, thus significantly reducing the amount of computation needed to solve the problem. Based on this, a hybrid algorithm for solving convex programming problems is proposed.

Keywords: convex programming, interior point method, linear systems, vector division

Procedia PDF Downloads 402
11429 A Comparative Study of Series-Connected Two-Motor Drive Fed by a Single Inverter

Authors: A. Djahbar, E. Bounadja, A. Zegaoui, H. Allouache

Abstract:

In this paper, vector control of a series-connected two-machine drive system fed by a single inverter (CSI/VSI) is presented. The stator windings of the two machines are connected in series, while the rotors may be connected to different loads; this configuration is called a series-connected two-machine drive. Appropriate phase transposition is introduced when connecting the series stator windings in order to obtain decoupled control of the two machines. The dynamic decoupling of each machine from the group is obtained using the vector control algorithm. The independent control is demonstrated by analyzing the torque and speed characteristics of each machine obtained via simulation under the vector control scheme. The viability of the control technique is proved both analytically and through simulation.

Keywords: drives, inverter, multi-phase induction machine, vector control

Procedia PDF Downloads 480
11428 Diagonal Vector Autoregressive Models and Their Properties

Authors: Usoro Anthony E., Udoh Emediong

Abstract:

Diagonal vector autoregressive models are special classes of the general vector autoregressive models, identified under certain conditions, in which the parameters are restricted to the diagonal elements of the coefficient matrices. Variance, autocovariance, and autocorrelation properties of the upper and lower diagonal VAR models are derived. The new set of VAR models is verified with empirical data and is found to perform favourably compared with the general VAR models. The advantage of the diagonal models over the existing models is that they are parsimonious, given the reduction in the interactive coefficients of the general VAR models.
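
A minimal sketch of what the diagonal restriction means in practice for a VAR(1): with diagonal coefficient matrices, each series depends only on its own lag, so the coefficients can be estimated series by series. The demeaned least-squares fit below is a simplifying assumption for illustration, not the estimation procedure derived in the paper.

```python
import numpy as np

def fit_diagonal_var1(Y):
    """Sketch: fit a diagonal VAR(1) by estimating an AR(1) coefficient per series,
    since a diagonal coefficient matrix decouples the conditional means."""
    Y = np.asarray(Y, dtype=float)
    k = Y.shape[1]
    phi = np.zeros(k)
    for j in range(k):
        x, y = Y[:-1, j], Y[1:, j]
        xc, yc = x - x.mean(), y - y.mean()
        phi[j] = xc @ yc / (xc @ xc)       # own-lag coefficient (intercept handled by demeaning)
    return np.diag(phi)                    # diagonal coefficient matrix

# Toy usage: simulate three decoupled AR(1) series and recover the diagonal matrix.
rng = np.random.default_rng(0)
true_phi = np.array([0.7, 0.3, -0.5])
Y = np.zeros((500, 3))
for t in range(1, 500):
    Y[t] = true_phi * Y[t - 1] + rng.normal(size=3)
print(np.round(np.diag(fit_diagonal_var1(Y)), 2))
```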

Keywords: VAR models, diagonal VAR models, variance, autocovariance, autocorrelations

Procedia PDF Downloads 116
11427 Machine Learning Approach for Automating Electronic Component Error Classification and Detection

Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski

Abstract:

Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities help them obtain insight into and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs are being transformed into virtual learning environments. Aim: To address the limitations of the physical laboratory, this research study uses a Machine Learning (ML) algorithm that interfaces with the HoloLens augmented reality headset and analyzes the captured images to classify and detect electronic components. The automated error classification and detection system detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices carried out virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using the HoloLens 2 and stored in a database. The collected image dataset will then be used to train a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis for component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard; the images are forwarded to the server for detection in the background. A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm will be trained on the dataset for object recognition and classification: the convolution layers extract image features, which are then classified by the SVM. With adequately labeled training data, the model will predict and categorize component placements and assess whether students place components correctly. The data acquired through the HoloLens thus includes images of students assembling electronic components, and the system constantly checks whether students position components on the breadboard appropriately and connect them so that the circuit functions. When students misplace any component, the system predicts the error before the component is placed in the incorrect position and prompts students to correct their mistakes. This hybrid CNN-SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time. The study also assesses the accuracy with which machine learning algorithms can forecast whether students are making correct decisions and completing their laboratory tasks.
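
A minimal sketch of the hybrid CNN + SVM idea described above: a small convolutional network is used only as a feature extractor, and an SVM classifies the extracted features. The image size, the four assumed component classes, and the random stand-in data are illustrative assumptions, not the HoloLens dataset or the trained model from the study.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Stand-in for breadboard image patches captured by the headset (assumed 64x64 RGB crops).
rng = np.random.default_rng(0)
X = rng.random((100, 64, 64, 3)).astype("float32")
y = rng.integers(0, 4, size=100)          # assumed classes: resistor / capacitor / transistor / empty

# Small CNN used purely as a feature extractor (its weights would normally be trained first).
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])
features = cnn.predict(X, verbose=0)

# SVM classifies the CNN features (the hybrid CNN + SVM step named in the abstract).
clf = SVC(kernel="rbf").fit(features, y)
print("training accuracy:", clf.score(features, y))
```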

Keywords: augmented reality, machine learning, object recognition, virtual laboratories

Procedia PDF Downloads 134
11426 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring

Authors: Younghoon Kim, Seoung Bum Kim

Abstract:

One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of application of one-class classification techniques to statistical process control problems. In most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of the limitation of one-class classification techniques based on convex optimization, the proportion of abnormal observations cannot be made exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, which convex optimization cannot satisfy. This limitation is undesirable, from both theoretical and practical perspectives, for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on mixed integer programming, which can solve problems formulated with continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the constraint that the number of normal data points equals a constant value specified by users. By modifying this constant value, users can exactly control the proportion of normal data described by the spherically shaped boundary. Thus, the proportion of abnormal observations can be made theoretically equal to an expected Type I error rate in the Phase I process. Moreover, analogous to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart applying the algorithm is proposed. This chart uses a monitoring statistic to characterize the degree to which a point is abnormal, as obtained through the proposed one-class classification. The control limit of the proposed chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated data and real process data from a thin film transistor-liquid crystal display process.
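
A minimal sketch, in cvxpy, of the mixed-integer idea described above: minimize the squared radius of a ball that must contain exactly a user-specified number of training points, with binary indicators deciding which points lie inside. The big-M constant, the plain (non-kernel) feature space, and the availability of a mixed-integer conic solver are assumptions; the paper's kernelized formulation is not reproduced.

```python
import numpy as np
import cvxpy as cp

def mip_one_class_boundary(X, n_inside, big_m=1e3):
    """Sketch: spherical one-class boundary with an exact count constraint.
    Requires a mixed-integer conic solver (e.g., SCIP or MOSEK) to be installed."""
    n, d = X.shape
    center = cp.Variable(d)
    r2 = cp.Variable(nonneg=True)                 # squared radius
    inside = cp.Variable(n, boolean=True)         # 1 if the point must lie inside the ball
    constraints = [cp.sum(inside) == n_inside]
    for i in range(n):
        # Points flagged as inside must satisfy the radius constraint; others are relaxed by big_m.
        constraints.append(cp.sum_squares(X[i] - center) <= r2 + big_m * (1 - inside[i]))
    prob = cp.Problem(cp.Minimize(r2), constraints)
    prob.solve()
    return center.value, float(np.sqrt(r2.value))

# Toy usage: describe 95 of 100 points, leaving 5 outside (a 5% Type I error rate in Phase I).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
center, radius = mip_one_class_boundary(X, n_inside=95)
print("center:", np.round(center, 2), "radius:", round(radius, 2))
```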

Keywords: control chart, mixed integer programming, one-class classification, support vector data description

Procedia PDF Downloads 174
11425 Prediction of Formation Pressure Using Artificial Intelligence Techniques

Authors: Abdulmalek Ahmed

Abstract:

Formation pressure is a main factor affecting the economics and efficiency of drilling operations. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate formation pressure based on different parameters. Some of these models use only drilling parameters to estimate pore pressure; other models predict formation pressure based on log data. All of these models require identifying a trend, such as normal or abnormal pressure, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict formation pressure, and those who have used only one or at most two AI methods. The objective of this research is to predict pore pressure based on both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity, and delta sonic time. Real field data are used to predict formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM), and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity: it estimates pore pressure without the need for trend identification, unlike other models, which require distinguishing between normal and abnormal pressure trends. Moreover, comparing the AI tools with each other, the results indicate that SVM has the advantage in pore pressure prediction because of its fast processing speed and high performance (a correlation coefficient of 0.997 and an average absolute percentage error of 0.14%). Finally, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
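
A minimal sketch of one of the five AI methods named above, support vector regression, trained on the seven listed inputs. The synthetic data, the column order, and the hyperparameters are illustrative assumptions, not the field dataset or the tuned models from the study.

```python
import numpy as np
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Columns (assumed order): WOB, RPM, ROP, mud weight, bulk density, porosity, delta sonic time.
rng = np.random.default_rng(0)
X = rng.random((500, 7))
y = 3000 + 5000 * rng.random(500)            # synthetic pore pressure values, psi

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
aape = 100 * mean_absolute_percentage_error(y_te, model.predict(X_te))
print("average absolute percentage error: %.2f%%" % aape)
```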

Keywords: Artificial Intelligence (AI), Formation pressure, Artificial Neural Networks (ANN), Fuzzy Logic (FL), Support Vector Machine (SVM), Functional Networks (FN), Radial Basis Function (RBF)

Procedia PDF Downloads 149
11424 Improving Cheon-Kim-Kim-Song (CKKS) Performance with Vector Computation and GPU Acceleration

Authors: Smaran Manchala

Abstract:

Homomorphic Encryption (HE) enables computations on encrypted data without requiring decryption, mitigating data vulnerability during processing. Usable Fully Homomorphic Encryption (FHE) could revolutionize secure data operations across cloud computing, AI training, and healthcare, providing both privacy and functionality; however, the computational inefficiency of schemes like Cheon-Kim-Kim-Song (CKKS) hinders their widespread practical use. This study focuses on optimizing CKKS for faster matrix operations through vector computation parallelization and GPU acceleration. The variable effects of vector parallelization on GPUs were explored, recognizing that while parallelization typically accelerates operations, it can introduce overhead that results in slower runtimes, especially in smaller, less computationally demanding operations. To assess performance, two neural network models, an MLPN and a CNN, were tested on the MNIST dataset using both ARM and x86-64 architectures, with the CNN chosen for its higher computational demands. Each test was repeated 1,000 times, and outliers were removed via Z-score analysis to measure the effect of vector parallelization on CKKS performance. Model accuracy was also evaluated under CKKS encryption to ensure that the optimizations did not compromise results. According to the results of the trial runs, applying vector parallelization yielded a 2.63X efficiency increase overall, with a 1.83X performance increase for x86-64 over the ARM architecture. Overall, these results suggest that vector parallelization in tandem with GPU acceleration significantly improves the efficiency of CKKS even after accounting for parallelization overhead, providing impact for future zero-trust operations.

Keywords: CKKS scheme, runtime efficiency, fully homomorphic encryption (FHE), GPU acceleration, vector parallelization

Procedia PDF Downloads 23
11423 Using New Machine Algorithms to Classify Iranian Musical Instruments According to Temporal, Spectral and Coefficient Features

Authors: Ronak Khosravi, Mahmood Abbasi Layegh, Siamak Haghipour, Avin Esmaili

Abstract:

In this paper, a study on the classification of musical woodwind instruments was carried out using a small set of features selected, by the sequential forward selection method, from a broad range of extracted ones. First, we extract 42 features for each record in a music database of 402 sound files belonging to five different groups: flutes (end-blown and internal duct), single-reed, double-reed (exposed and capped), triple-reed, and quadruple-reed instruments. Then, the sequential forward selection method is adopted to choose the best feature set in order to achieve very high classification accuracy. Two classification techniques, support vector machines and relevance vector machines, have been tested, and an accuracy of up to 96% can be achieved by using 21 temporal, spectral, and coefficient features with a relevance vector machine and the Gaussian kernel function.
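
A minimal sketch of the sequential forward selection step described above, wrapping an SVM classifier in scikit-learn. The random stand-in data mirrors the stated dimensions (402 recordings, 42 features, 5 classes, 21 selected features); an SVM is used in place of the relevance vector machine, which gave the best reported accuracy but is not available in scikit-learn.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the 402-recording, 42-feature dataset (5 instrument families).
rng = np.random.default_rng(0)
X = rng.random((402, 42))
y = rng.integers(0, 5, size=402)

svm = SVC(kernel="rbf")                                    # Gaussian kernel
sfs = SequentialFeatureSelector(svm, n_features_to_select=21, direction="forward", cv=5)
pipeline = make_pipeline(StandardScaler(), sfs, svm)
pipeline.fit(X, y)

selected = pipeline.named_steps["sequentialfeatureselector"].get_support()
print("selected feature indices:", np.flatnonzero(selected))
```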

Keywords: coefficient features, relevance vector machines, spectral features, support vector machines, temporal features

Procedia PDF Downloads 320
11422 A Deletion-Cost Based Fast Compression Algorithm for Linear Vector Data

Authors: Qiuxiao Chen, Yan Hou, Ning Wu

Abstract:

As the classic Douglas-Peucker Algorithm (DPA) has deficiencies such as a high risk of deleting key nodes by mistake, high complexity, time consumption, and relatively slow execution speed, a new Deletion-Cost Based Compression Algorithm (DCA) for linear vector data is proposed. For each curve, the basic element of linear vector data, the deletion costs of all its middle nodes are calculated, and the minimum deletion cost is compared with a pre-defined threshold. If the former is greater than or equal to the latter, all remaining nodes are retained and the curve's compression process is finished. Otherwise, the node with the minimal deletion cost is deleted, the deletion costs of its two neighbors are updated, and the same loop is repeated on the compressed curve until termination. Through several comparative experiments using different types of linear vector data, DPA and DCA were compared in terms of compression quality and computing efficiency. The experimental results showed that DCA outperformed DPA in both compression accuracy and execution efficiency.
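
A minimal sketch of the deletion-cost loop described above. The deletion cost used here, the area of the triangle formed by a node and its two neighbors, is an assumption for illustration (the abstract does not define the cost), and all costs are simply recomputed each round instead of updating only the two neighbors.

```python
import numpy as np

def triangle_area(p, q, r):
    """Assumed deletion cost: area of the triangle spanned by node q and its neighbours p and r."""
    return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))

def deletion_cost_compress(points, threshold):
    """Sketch of the DCA loop: repeatedly delete the middle node with the smallest cost
    until the smallest cost reaches the threshold or only the endpoints remain."""
    pts = [tuple(map(float, p)) for p in points]
    while len(pts) > 2:
        costs = [triangle_area(pts[i - 1], pts[i], pts[i + 1]) for i in range(1, len(pts) - 1)]
        i_min = int(np.argmin(costs))
        if costs[i_min] >= threshold:
            break                      # every remaining node is significant enough to keep
        del pts[i_min + 1]             # remove the node with the minimal deletion cost
    return pts

# Toy usage: a polyline with one bend keeps its endpoints and the nodes around the bend.
line = [(0, 0), (1, 0.01), (2, -0.01), (3, 2.0), (4, 2.01), (5, 2.0)]
print(deletion_cost_compress(line, threshold=0.05))
```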

Keywords: Douglas-Peucker algorithm, linear vector data, compression, deletion cost

Procedia PDF Downloads 251
11421 Housing Price Prediction Using Machine Learning Algorithms: The Case of Melbourne City, Australia

Authors: The Danh Phan

Abstract:

House price forecasting is a main topic in real estate market research. Effective house price prediction models not only allow home buyers and real estate agents to make better data-driven decisions but may also benefit the property policymaking process. This study investigates the housing market by using machine learning techniques to analyze real historical house sale transactions in Australia. It seeks useful models that could be deployed as an application for house buyers and sellers. Data analytics show a high discrepancy between house prices in the most expensive and the most affordable suburbs of the city of Melbourne. In addition, experiments demonstrate that the combination of Stepwise and Support Vector Machine (SVM), evaluated by the Mean Squared Error (MSE) measurement, consistently outperforms other models in terms of prediction accuracy.

Keywords: house price prediction, regression trees, neural network, support vector machine, stepwise

Procedia PDF Downloads 230
11420 Artificial Neural Networks Based Calibration Approach for Six-Port Receiver

Authors: Nadia Chagtmi, Nejla Rejab, Noureddine Boulejfen

Abstract:

This paper presents a calibration approach based on artificial neural networks (ANN) to determine the envelope signal (I+jQ) of a six-port-based receiver (SPR). The memory effects, also called dynamic behavior, and the nonlinearity introduced by the diode-based power detector have been taken into consideration by the ANN. An experimental setup was built to validate the efficiency of this method, and the efficiency of the approach is confirmed by the obtained waveforms. Moreover, the error vector magnitude (EVM) and the mean absolute error (MAE) were calculated in order to confirm and test the ANN's ability to achieve I/Q recovery from the output voltage detected by the power detector. The baseband signal was recovered using the ANN with EVMs no higher than 1% and an MAE no higher than 17.26 when the SPR was excited with different types of signals, such as QAM (quadrature amplitude modulation) and LTE (Long Term Evolution).
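
A minimal sketch of the rms error vector magnitude computation mentioned above, comparing recovered I+jQ symbols against ideal reference symbols. Normalizing by the average reference power is a common convention but an assumption here; some standards normalize by peak power instead.

```python
import numpy as np

def evm_percent(measured, ideal):
    """RMS error vector magnitude in percent, normalised by average reference power."""
    measured = np.asarray(measured, dtype=complex)
    ideal = np.asarray(ideal, dtype=complex)
    return 100.0 * np.sqrt(np.mean(np.abs(measured - ideal) ** 2) / np.mean(np.abs(ideal) ** 2))

# Toy usage: ideal QPSK symbols recovered with a small additive error (EVM below 1%).
rng = np.random.default_rng(0)
ideal = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, size=1000)))
measured = ideal + 0.005 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
print("EVM = %.2f %%" % evm_percent(measured, ideal))
```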

Keywords: six-port based receiver, calibration, nonlinearity, memory effect, artificial neural network

Procedia PDF Downloads 76
11419 Using Cooperation Approaches at Different Levels of Artificial Bee Colony Method

Authors: Vahid Zeighami, Mohsen Ghsemi, Reza Akbari

Abstract:

In this work, a Multi-Level Artificial Bee Colony algorithm (called MLABC) is presented. In MLABC, two species are used. The first species employs n colonies, each of which optimizes the complete solution vector. The cooperation between these colonies is carried out by exchanging information through a leader colony, which contains a set of elite bees. The second species uses a cooperative approach in which the complete solution vector is divided into k sub-vectors, and each of these sub-vectors is optimized by a colony. The cooperation between these colonies is carried out by compiling the sub-vectors into the complete solution vector. Finally, the cooperation between the two species is obtained by exchanging information between them. The proposed algorithm is tested on a set of well-known test functions. The results show that MLABC algorithms provide efficiency and robustness in solving numerical functions.

Keywords: artificial bee colony, cooperative, multilevel cooperation, vector

Procedia PDF Downloads 446
11418 A Study on the Influence of Pin-Hole Position Error of Carrier on Mesh Load and Planet Load Sharing of Planetary Gear

Authors: Kyung Min Kang, Peng Mou, Dong Xiang, Gang Shen

Abstract:

For a planetary gear system, planet pin-hole position accuracy is one of the most influential factors in the efficiency and reliability of the system. This study considers planet pin-hole position error as the main input error of the model and builds a multibody dynamic simulation model of the planetary gear, including the pin-hole position error, using MSC.ADAMS. From this model, mesh load results between meshing gears are obtained for each pin-hole position error case, and based on these results, the planet load sharing factor, which reflects the equilibrium state of mesh load sharing among all meshing gear pairs, is calculated. The analysis results indicate that pin-hole position error in the tangential direction has a profound influence on the mesh load and the load sharing factor between meshing gear pairs.

Keywords: planetary gear, load sharing factor, multibody dynamics, pin-hole position error

Procedia PDF Downloads 578
11417 Estimation of Slab Depth, Column Size and Rebar Location of Concrete Specimen Using Impact Echo Method

Authors: Y. T. Lee, J. H. Na, S. H. Kim, S. U. Hong

Abstract:

In this study, experimental research on the estimation of slab depth, column size, and rebar location in concrete specimens was conducted using the impact echo (IE) method, a stress-wave-based non-destructive test method. The slab specimen for depth estimation had plan dimensions of 1800×300 and six different depths: 150 mm, 180 mm, 210 mm, 240 mm, 270 mm, and 300 mm. The concrete column specimens were manufactured in three sizes: 300×300×300 mm, 400×400×400 mm, and 500×500×500 mm. For the rebar-location case, ∅22 mm rebar was used in a 300×370×200 specimen and placed at 130 mm and 150 mm from the top surface to the top of the rebar. As a result, the error rate for slab depth had an overall mean of 3.1%, and the error rate for column size had an overall mean of 1.7%. The mean error rate for rebar location was 1.72% for the top, 1.19% for the bottom, and 1.5% overall, showing relative accuracy.
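
For background, the impact echo method infers thickness from the P-wave speed and the dominant resonance frequency of the reflected stress wave. The sketch below uses the standard plate relation d = beta*Cp/(2f) with the usual correction factor beta of about 0.96; the abstract reports measured error rates rather than this formula, so the relation and the example values are assumptions.

```python
def impact_echo_depth(p_wave_speed_m_s, peak_frequency_hz, beta=0.96):
    """Standard impact-echo thickness relation d = beta * Cp / (2 * f), in metres."""
    return beta * p_wave_speed_m_s / (2.0 * peak_frequency_hz)

def error_rate_percent(estimated, actual):
    """Error rate as reported in the abstract: relative deviation from the true dimension."""
    return 100.0 * abs(estimated - actual) / actual

# Toy usage: Cp = 4000 m/s and a 6.4 kHz thickness resonance suggest roughly a 0.30 m slab.
est = impact_echo_depth(4000.0, 6400.0)
print("estimated depth: %.3f m, error vs. 0.30 m: %.1f %%" % (est, error_rate_percent(est, 0.30)))
```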

Keywords: impact echo method, estimation, slab depth, column size, rebar location, concrete

Procedia PDF Downloads 351
11416 An Efficient Algorithm of Time Step Control for Error Correction Method

Authors: Youngji Lee, Yonghyeon Jeon, Sunyoung Bu, Philsu Kim

Abstract:

The aim of this paper is to construct a time step control algorithm for the error correction method recently developed by one of the authors for solving stiff initial value problems. It is achieved with the generalized Chebyshev polynomial and the corresponding error correction method. The main idea of the proposed scheme is the use of duplicated node points in generalized Chebyshev polynomials of two different degrees, adding the necessary sample points instead of re-sampling all points. At each integration step, the proposed method comprises two equations, one for the solution and one for the error. The constructed algorithm controls both the error and the time step size simultaneously and performs well in terms of computational cost compared to the original method. Two stiff problems are solved numerically to assess the effectiveness of the proposed scheme.

Keywords: stiff initial value problem, error correction method, generalized Chebyshev polynomial, node points

Procedia PDF Downloads 573
11415 Calculating Quantity of Steel Bar Placed in Mesh Form in a Circular Slab or Dome

Authors: Karam Chand Gupta

Abstract:

When steel reinforcement is placed in mesh form in a circular concrete slab at the base, or in a dome at the top, of an overhead service reservoir or any other structure, it is difficult to estimate or measure the total quantity of steel that will be needed or placed. At present, the practice for calculating the total length of the steel bars is to measure the length of each bar and then add them up. This is a tiresome and time-consuming process. I have derived a mathematical formula with the help of which the total quantity of steel needed can be calculated in one line. This not only makes the calculation easy and time-saving but also avoids errors in making entries and calculations.
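
For context, the quantity being summed is a set of chords of the circular slab: a bar at offset d from the centre has length 2*sqrt(R^2 - d^2). The numeric summation below illustrates that bookkeeping for a square mesh in both directions; it is an assumption-based illustration, not the closed-form formula derived in the paper.

```python
import math

def mesh_bar_length(radius, spacing):
    """Sketch: total length of straight bars laid in a square mesh inside a circle.
    Bars run in two perpendicular directions at the given centre-to-centre spacing."""
    n = int(radius // spacing)
    one_direction = sum(2.0 * math.sqrt(max(radius**2 - (i * spacing) ** 2, 0.0))
                        for i in range(-n, n + 1))
    return 2.0 * one_direction

# Toy usage: 4.0 m radius slab with bars at 0.15 m spacing in both directions.
print("total bar length: %.1f m" % mesh_bar_length(4.0, 0.15))
```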

Keywords: dome, mesh, slab, steel

Procedia PDF Downloads 681
11414 Review on Quaternion Gradient Operator with Marginal and Vector Approaches for Colour Edge Detection

Authors: Nadia Ben Youssef, Aicha Bouzid

Abstract:

Gradient estimation is one of the most fundamental tasks in image processing in general, and particularly for color images, since research on color image gradients remains limited. A widely used method is Di Zenzo's gradient operator, which is based on a measure of the squared local contrast of color images. The gradient mechanism proposed in this paper is based on the principle of Di Zenzo's approach using a quaternion representation. This edge detector is compared to a marginal approach based on the multiscale product of the wavelet transform and to another vector approach based on quaternion convolution and a vector gradient. The experimental results indicate that the proposed color gradient operator outperforms the marginal approach; however, it is less efficient than the second vector approach.
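
A minimal sketch of the classical Di Zenzo structure-tensor gradient that the proposal builds on: per-channel derivatives are combined into a 2x2 tensor whose largest eigenvalue gives the color gradient strength. Plain finite differences are used here; the quaternion variant proposed in the paper is not reproduced.

```python
import numpy as np

def di_zenzo_gradient(img):
    """Di Zenzo colour gradient magnitude for a float image of shape (H, W, 3)."""
    gx = np.gradient(img, axis=1)          # per-channel horizontal derivative
    gy = np.gradient(img, axis=0)          # per-channel vertical derivative
    gxx = np.sum(gx * gx, axis=2)          # structure tensor entries, summed over channels
    gyy = np.sum(gy * gy, axis=2)
    gxy = np.sum(gx * gy, axis=2)
    # Largest eigenvalue of [[gxx, gxy], [gxy, gyy]] at each pixel = squared local contrast.
    lam = 0.5 * (gxx + gyy + np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2))
    return np.sqrt(lam)

# Toy usage: a vertical colour edge produces a strong response along the middle columns.
img = np.zeros((8, 8, 3)); img[:, 4:] = [1.0, 0.2, 0.0]
print(np.round(di_zenzo_gradient(img)[4], 2))
```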

Keywords: gradient, edge detection, color image, quaternion

Procedia PDF Downloads 234
11413 The Tracking and Hedging Performances of Gold ETF Relative to Some Other Instruments in the UK

Authors: Abimbola Adedeji, Ahmad Shauqi Zubir

Abstract:

This paper examines the profitability and risk of investing in gold exchange-traded funds (ETFs) and gold mutual funds compared to gold prices. The main focus in determining whether there are similarities or differences between these financial products is the tracking error. The importance of understanding the similarities or differences between gold ETFs, gold mutual funds, and gold prices derives from the fact that gold ETFs and gold mutual funds are used as substitutes by investors who are looking to profit from gold prices although they are short of capital. Ten hypotheses were tested, using three types of tracking error. Tracking errors 1 and 3 give results that differentiate between the types of ETFs and mutual funds, hence providing answers to the hypotheses that were developed. However, tracking error 2 failed to give answers that could shed light on the questions raised in this study; its results only indicate that the ups and downs of the financial instruments are statistically similar to the movement of physical gold prices.
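
A minimal sketch of the most common tracking error definition, the standard deviation of the differences between fund and benchmark returns. The annualisation factor, the use of daily returns, and the synthetic series are assumptions; the paper's three tracking error variants are not reproduced here.

```python
import numpy as np

def tracking_error(fund_returns, benchmark_returns, periods_per_year=252):
    """Annualised tracking error: sample std. dev. of the return differences."""
    diff = np.asarray(fund_returns) - np.asarray(benchmark_returns)
    return np.std(diff, ddof=1) * np.sqrt(periods_per_year)

# Toy usage: a fund that follows gold returns closely but with small daily deviations.
rng = np.random.default_rng(0)
gold = rng.normal(0.0003, 0.01, size=252)          # daily gold returns (synthetic)
fund = gold + rng.normal(0.0, 0.001, size=252)     # fund tracks gold with small noise
print("annualised tracking error: %.2f%%" % (100 * tracking_error(fund, gold)))
```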

Keywords: gold etf, gold mutual funds, tracking error

Procedia PDF Downloads 422
11412 Forecasting of Grape Juice Flavor by Using Support Vector Regression

Authors: Ren-Jieh Kuo, Chun-Shou Huang

Abstract:

Research on juice flavor forecasting has become more important in China. Due to the country's fast economic growth, many different kinds of juice have been introduced to the market. If a beverage company understands its customers' preferences well, its juice can be marketed more attractively. Thus, this study introduces the basic theory and computing process of grape juice flavor forecasting based on support vector regression (SVR). Applying SVR, BPN, and LR to forecast the flavor of grape juice on real data, the results show that SVR is more suitable and effective in terms of prediction performance.

Keywords: flavor forecasting, artificial neural networks, Support Vector Regression, China

Procedia PDF Downloads 492
11411 Least-Square Support Vector Machine for Characterization of Clusters of Microcalcifications

Authors: Baljit Singh Khehra, Amar Partap Singh Pharwaha

Abstract:

Clusters of microcalcifications (MCCs) are the most frequent symptoms of ductal carcinoma in situ (DCIS) recognized by mammography. The least-square support vector machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for classifying MCCs as benign or malignant based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for classifying MCCs, a comparative evaluation of its relative performance for different kernel functions is made. For this comparative evaluation, a confusion matrix and ROC analysis are used. Experiments are performed on data extracted from mammogram images of the DDSM database. A total of 380 suspicious areas, containing 235 malignant and 145 benign samples, were collected from mammogram images of the DDSM database. A set of 50 features is calculated for each suspicious area. After this, an optimal subset of the 23 most suitable features is selected from the 50 features by Particle Swarm Optimization (PSO). The results of the proposed study are quite promising.

Keywords: clusters of microcalcifications, ductal carcinoma in situ, least-square support vector machine, particle swarm optimization

Procedia PDF Downloads 353