Search results for: Data cutting and sorting method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13784

13214 Data Embedding Based on Better Use of Bits in Image Pixels

Authors: Rehab H. Alwan, Fadhil J. Kadhim, Ahmad T. Al-Taani

Abstract:

In this study, a novel approach to image embedding is introduced. The proposed method consists of three main steps. First, the edges of the image are detected using Sobel mask filters. Second, the least significant bit (LSB) of each pixel is used. Finally, gray level connectivity is applied using a fuzzy approach, and the ASCII code is used for information hiding. The bit adjacent to the LSB represents the edge image after gray level connectivity, and the remaining six bits represent the original image with very little difference in contrast. The proposed method embeds three images in one image and includes, as a special case of data embedding, the hiding, identification, and authentication of text embedded within digital images. Image embedding is also a useful compression method in terms of saving memory space, and information hiding within a digital image can be used for secure information transfer. The creation and extraction of the three embedded images and the hiding of text information are discussed and illustrated.

Keywords: Image embedding, Edge detection, gray level connectivity, information hiding, digital image compression.
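
The LSB/ASCII step of the abstract can be illustrated with a minimal sketch. This is not the authors' full pipeline (Sobel edge detection and fuzzy gray level connectivity are omitted); it only shows hiding an ASCII string in the least significant bits of a grayscale image, with all function names being illustrative.

```python
import numpy as np

def embed_text_lsb(image: np.ndarray, text: str) -> np.ndarray:
    """Hide an ASCII string in the least significant bits of a grayscale image."""
    bits = []
    for ch in text.encode("ascii"):
        bits.extend((ch >> i) & 1 for i in range(8))  # LSB-first per byte
    flat = image.flatten()                            # flatten() returns a copy
    if len(bits) > flat.size:
        raise ValueError("message too long for this image")
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b                # overwrite the LSB only
    return flat.reshape(image.shape)

def extract_text_lsb(image: np.ndarray, n_chars: int) -> str:
    flat = image.flatten()
    chars = []
    for c in range(n_chars):
        byte = sum((int(flat[c * 8 + i]) & 1) << i for i in range(8))
        chars.append(chr(byte))
    return "".join(chars)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = embed_text_lsb(img, "hello")
assert extract_text_lsb(stego, 5) == "hello"
```

Since only the LSB plane changes, the visible contrast of the carrier image is essentially preserved, which is the property the abstract relies on.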

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2139
13213 Using Data Mining Methodology to Build the Predictive Model of Gold Passbook Price

Authors: Chien-Hui Yang, Che-Yang Lin, Ya-Chen Hsu

Abstract:

A gold passbook is an investment tool that is especially suitable for investors making small investments in physical gold. The gold passbook carries lower risk than other ways of investing in gold, but its price is still affected by the gold price, and many factors influence the gold price. Therefore, building a model to predict the gold passbook price can both reduce the risk of investment and increase the benefits. This study investigates the important factors that influence the gold passbook price and utilizes the Group Method of Data Handling (GMDH) to build the predictive model. This method not only identifies the significant variables but also performs well in prediction. The significant variables of the gold passbook price identified by GMDH are the US dollar exchange rate, international petroleum price, unemployment rate, wholesale price index, rediscount rate, foreign exchange reserves, misery index, prosperity coincident index, and industrial index.

Keywords: Gold price, Gold passbook price, Group Method of Data Handling (GMDH), Regression.
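
A minimal sketch of one GMDH generation, assuming numpy: each pair of candidate inputs (columns of X, e.g., the US dollar exchange rate and petroleum price) gets a quadratic partial model fitted by least squares, and only the best survive to the next layer. This is an illustrative reduction, not the authors' exact configuration.

```python
import itertools
import numpy as np

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    """One GMDH generation: fit a quadratic polynomial in every pair of
    inputs by least squares and keep the `keep` best on validation data."""
    candidates = []
    for i, j in itertools.combinations(range(X_tr.shape[1]), 2):
        def design(X):
            a, b = X[:, i], X[:, j]
            return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])
        coef, *_ = np.linalg.lstsq(design(X_tr), y_tr, rcond=None)
        err = np.mean((design(X_va) @ coef - y_va) ** 2)
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]  # surviving partial models and their input pairs

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(200)  # synthetic target
survivors = gmdh_layer(X[:100], y[:100], X[100:], y[100:])
print([(i, j, round(err, 4)) for err, i, j, _ in survivors])
```

Stacking such layers, with survivors' outputs as new inputs, is what lets GMDH select the significant variables while building the predictor.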

13212 Determination and Comparison of Fabric Pills Distribution Using Image Processing and Spatial Data Analysis Tools

Authors: Lenka Techniková, Maroš Tunák, Jiří Janáček

Abstract:

This work deals with the determination and comparison of pill patterns in two sets of fabric samples that differ in how the pills were created. The first set contains fabric samples with pills created by simulation on a Martindale abrasion machine, while the pills in the second set originated during normal wear and maintenance. The goal of the study is to determine whether the pattern of fabric pills created by simulation is the same as the pattern of naturally occurring pills. The system for determining and comparing the pills is based on image processing and spatial data analysis tools. First, a 3D reconstruction of the fabric surfaces with the pills is produced using a gradient fields method, which creates a 3D fabric surface from a set of four images. The pills are then detected in the 3D fabric surfaces using image processing tools in MATLAB. Finally, the pill patterns of the two sets of fabric samples are determined and compared using spatial data analysis tools in R.

Keywords: 3D reconstruction of the surface, image analysis tools, distribution of the pills, spatial data analysis tools.
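
The detection step can be illustrated with a hedged sketch: given the reconstructed height map, threshold protrusions and label connected blobs with scipy.ndimage. The threshold rule and minimum area here are hypothetical parameters, and the resulting centroids are the point pattern that would feed the spatial analysis in R.

```python
import numpy as np
from scipy import ndimage

def detect_pills(height_map: np.ndarray, rel_thresh: float = 2.0, min_area: int = 20):
    """Detect pill-like protrusions in a reconstructed 3D fabric surface:
    threshold heights above mean + rel_thresh * std, then label blobs."""
    mask = height_map > height_map.mean() + rel_thresh * height_map.std()
    labels, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    areas = ndimage.sum(mask, labels, range(1, n + 1))
    # keep only blobs large enough to be pills; return (row, col) positions
    return [c for c, a in zip(centroids, areas) if a >= min_area]

surface = np.random.rand(200, 200)
surface[50:60, 50:60] += 3.0    # synthetic pill
surface[120:128, 80:88] += 3.0  # synthetic pill
print(detect_pills(surface))
```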

13211 Sustainability Assessment of Agriculture and Biodiversity Issues through an Innovative Knowledge Mediation System Using Deliberation Support Tools and INTEGRAAL Method Based on Stakeholder Involvement

Authors: Ashiquer Rahman

Abstract:

The cutting-edge knowledge mediation system 'ePLANETe' provides a framework for building knowledge, tools, and methods for education, research, and sustainable practices, as well as deliberative assessment support for higher education, research institutions, and elsewhere, e.g., collaborative learning and research on the sustainability and biodiversity issues of territorial development sectors. This paper presents an analytical perspective on the 'ePLANETe' concept and its functionalities as an experimental platform contributing to sustainability assessment; 'ePLANETe' can now be seen as an experiment in the challenges of "ICT for Green". Its digital technologies are exploited (i) to facilitate collaborative research, learning tools, and knowledge for sustainability challenges, and (ii) as deliberation support tools for pursuing sustainability performance and practices in territorial governance, public policy, and business strategy, as well as in the higher education sector itself. The paper investigates the capacity for qualitative and quantitative assessment of agricultural sustainability through stakeholder-based integrated assessment. Specifically, it focuses on integrating system methodologies with Deliberation Support Tools (DST) and the INTEGRAAL method for collective assessment and decision-making in implementing regional plans. The aim is to identify effective knowledge and tools for deliberation methodologies on the sustainability of agriculture and biodiversity issues, societal responsibilities, and regional planning, concentrating on the question: how can resources (knowledge, tools, and methods) from different sources and at different scales be mobilized effectively to address the sustainability challenges of agriculture and biodiversity? This creates the scope for qualitative and quantitative assessments of sustainability as a new landmark of the agriculture sector.

Keywords: Biodiversity, Deliberation Support Tools, INTEGRAAL, stakeholder.

13210 A Kernel Based Rejection Method for Supervised Classification

Authors: Abdenour Bounsiar, Edith Grall, Pierre Beauseroy

Abstract:

In this paper, we are interested in classification problems with a performance constraint on error probability. In such problems, if the constraint cannot be satisfied, a rejection option is introduced. For binary classification, a number of SVM-based methods with a rejection option have been proposed over the past few years, all of which use two thresholds on the SVM output. However, in previous work we have shown on synthetic data that thresholding the output of the optimal SVM may lead to poor results for classification tasks with a performance constraint. In this paper, a new method for supervised classification with a rejection option is proposed. It consists of two classifiers jointly optimized to minimize the rejection probability subject to a given constraint on the error rate. The method uses a new kernel-based linear learning machine that we have recently presented, characterized by its simplicity and high training speed, which makes the simultaneous optimization of the two classifiers computationally reasonable. The proposed method is compared to an SVM-based rejection method from the recent literature, and experiments show the superiority of the proposed method.

Keywords: rejection, Chow's rule, error-reject tradeoff, Support Vector Machine.
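
The error-reject tradeoff named in the keywords can be illustrated with Chow's rule: reject a sample whenever its maximum posterior falls below a threshold. This is a baseline sketch of the constraint the paper targets, not the authors' jointly optimized pair of kernel machines; all names are illustrative.

```python
import numpy as np

def classify_with_reject(posteriors: np.ndarray, t: float):
    """Chow's rule: predict the argmax posterior, but reject when the
    maximum posterior falls below t (higher t -> more rejections)."""
    conf = posteriors.max(axis=1)
    labels = posteriors.argmax(axis=1).astype(float)
    labels[conf < t] = -1  # -1 marks the reject option
    return labels

def smallest_reject_rate(posteriors, y, max_error=0.05):
    """Scan thresholds and return the smallest rejection rate whose
    conditional error on accepted samples meets the constraint."""
    best = (1.0, None)
    for t in np.linspace(0.5, 1.0, 101):
        pred = classify_with_reject(posteriors, t)
        accepted = pred >= 0
        if accepted.sum() == 0:
            continue
        err = np.mean(pred[accepted] != y[accepted])
        rej = 1.0 - accepted.mean()
        if err <= max_error and rej < best[0]:
            best = (rej, t)
    return best

rng = np.random.default_rng(0)
p1 = rng.random(500)
posteriors = np.column_stack([1 - p1, p1])
y = (p1 + 0.2 * rng.standard_normal(500) > 0.5).astype(int)
print(smallest_reject_rate(posteriors, y, max_error=0.10))
```

Minimizing the rejection probability subject to an error constraint, as above, is exactly the objective the paper's two jointly trained classifiers optimize.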

13209 Effect of the Workpiece Position on the Manufacturing Tolerances

Authors: M. Rahou, F. Sebaa, A. Cheikh

Abstract:

Manufacturing tolerancing is intended to determine the intermediate geometrical and dimensional states of a part during its manufacturing process. These manufacturing dimensions serve to satisfy not only the functional requirements given in the definition drawing but also the manufacturing constraints, for example geometrical defects of the machine, vibration, and wear of the cutting tool. The choice of workpiece positioning has an important influence on the cost and quality of manufacture. To address this problem, a two-step approach has been developed: the first step determines the optimum position, and the second studies the effect of clamping (tightening) on the tolerance interval.

Keywords: Dispersion, tolerance, manufacturing, position.

13208 A New Preconditioned AOR Method for Z-matrices

Authors: Guangbin Wang, Ning Zhang, Fuping Tan

Abstract:

In this paper, we present a preconditioned AOR-type iterative method for solving linear systems Ax = b, where A is a Z-matrix, and give comparison theorems showing that the preconditioned AOR-type iterative method converges faster than the unpreconditioned AOR-type iterative method.

Keywords: Z-matrix, AOR-type iterative method, precondition, comparison.
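
A sketch of the plain AOR iteration, together with the classical I + S preconditioner (Gunawardena et al.) that much of the preconditioned-AOR literature for Z-matrices builds on. The paper's own preconditioner may differ; the matrix A below is an illustrative diagonally dominant Z-matrix.

```python
import numpy as np

def aor_solve(A, b, r=0.5, w=1.0, tol=1e-10, maxit=500):
    """Plain AOR iteration for Ax = b with the splitting A = D - L - U."""
    D = np.diag(np.diag(A))
    Lm = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = D - r * Lm
    N = (1 - w) * D + (w - r) * Lm + w * U
    x = np.zeros_like(b, dtype=float)
    for k in range(maxit):
        x_new = np.linalg.solve(M, N @ x + w * b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxit

def precondition(A, b):
    """Classical I + S preconditioner: S cancels the first superdiagonal."""
    n = A.shape[0]
    S = np.zeros_like(A, dtype=float)
    for i in range(n - 1):
        S[i, i + 1] = -A[i, i + 1]
    P = np.eye(n) + S
    return P @ A, P @ b

A = np.array([[4.0, -1, -1], [-2, 5, -1], [-1, -2, 6]])  # a Z-matrix example
b = np.array([1.0, 2.0, 3.0])
x0, it0 = aor_solve(A, b)
x1, it1 = aor_solve(*precondition(A, b))
print(it0, it1, np.allclose(x0, x1))  # same solution, fewer iterations expected
```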

13207 A Family of Improved Secant-Like Methods with Super-Linear Convergence

Authors: Liang Chen

Abstract:

A family of improved secant-like methods is proposed in this paper. Analysis of the convergence shows that these methods have super-linear convergence, and their efficiency is demonstrated by numerical experiments for suitable choices of the parameter α.

Keywords: Nonlinear equations, Secant method, Convergence order, Secant-like method.
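
For reference, a runnable version of the baseline secant update the family improves on; the paper's α-dependent modification is not reproduced here, since the abstract does not give its exact form.

```python
def secant(f, x0, x1, tol=1e-12, maxit=100):
    """Classical secant update:
    x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).
    The paper's family modifies this update with a parameter alpha."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        if f1 == f0:
            break  # flat secant line: cannot continue
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

print(secant(lambda x: x**3 - 2 * x - 5, 2.0, 3.0))  # root near 2.0945515
```

The classical method already converges with order (1 + sqrt(5))/2 ≈ 1.618 at simple roots; the proposed family aims to raise that super-linear order.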

13206 A Multi-Objective Evolutionary Algorithm of Neural Network for Medical Diseases Problems

Authors: Sultan Noman Qasem

Abstract:

This paper presents an evolutionary algorithm for solving multi-objective optimization problems based on an artificial neural network (ANN). The multi-objective evolutionary algorithm used in this study is a genetic algorithm, while the ANN used is a radial basis function network (RBFN). The proposed algorithm is named memetic elitist Pareto non-dominated sorting genetic algorithm-based RBFN (MEPGAN) and is applied to medical disease problems. The experimental results indicate that the proposed algorithm is viable and provides an effective means to design multi-objective RBFNs with good generalization capability and compact network structure. This study shows that MEPGAN generates RBFNs with an appropriate balance between accuracy and simplicity compared to the other algorithms found in the literature.

Keywords: Radial basis function network, Hybrid learning, Multi-objective optimization, Genetic algorithm.

13205 Arriving at an Optimum Value of Tolerance Factor for Compressing Medical Images

Authors: Sumathi Poobal, G. Ravindran

Abstract:

Medical imaging exploits digital technology in imaging and teleradiology. In teleradiology systems, large amounts of data are acquired, stored, and transmitted, and a major technology that may help solve the problems associated with massive data storage and transfer is data compression and decompression. Many image compression methods are available, classified as lossless or lossy; in a lossy method the decompressed image contains some distortion. Fractal image compression (FIC) is a lossy method in which an image is coded as a set of contractive transformations in a complete metric space, guaranteed to produce an approximation to the original image. In this paper, FIC is achieved by partitioned iterated function systems (PIFS) using quadtree partitioning. PIFS is applied to different image modalities: ultrasound, CT scan, angiogram, X-ray, and mammograms. In each modality approximately twenty images are considered, and the average compression ratio and PSNR values are computed. In this fractal encoding, the tolerance factor Tmax is varied from 1 to 10 while the other standard parameters are kept constant, and for all modalities the compression ratio and peak signal-to-noise ratio (PSNR) are computed and studied, with PSNR measuring the quality of the decompressed image. The results show that the compression ratio increases with the tolerance factor and that mammograms have the highest compression ratio. Because of the properties of fractal compression, image quality is not degraded up to an optimum tolerance factor of Tmax = 8.

Keywords: Fractal image compression, IFS, PIFS, PSNR, Quadtree partitioning.
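
A minimal sketch of quadtree partitioning, the range-block splitting step of the encoder. Pixel variance is used as the split test here as a common stand-in for the paper's tolerance criterion; the tolerance parameter plays the same role as Tmax, trading quality for compression.

```python
import numpy as np

def quadtree(block, x, y, t_max, min_size=4):
    """Split an image region into quadrants until its pixel variance falls
    below the tolerance t_max (or the block reaches min_size); returns the
    list of (x, y, height, width) range blocks to be coded by PIFS."""
    h, w = block.shape
    if h <= min_size or w <= min_size or block.var() <= t_max:
        return [(x, y, h, w)]
    h2, w2 = h // 2, w // 2
    leaves = []
    leaves += quadtree(block[:h2, :w2], x, y, t_max, min_size)
    leaves += quadtree(block[:h2, w2:], x, y + w2, t_max, min_size)
    leaves += quadtree(block[h2:, :w2], x + h2, y, t_max, min_size)
    leaves += quadtree(block[h2:, w2:], x + h2, y + w2, t_max, min_size)
    return leaves

img = np.random.rand(64, 64) * 255
print(len(quadtree(img, 0, 0, t_max=500.0)))  # larger t_max -> fewer, bigger blocks
```

Fewer, bigger blocks mean fewer transformations to store, which is why the compression ratio rises with the tolerance factor in the abstract.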

13204 Feature Subset Selection Approach Based on Maximizing Margin of Support Vector Classifier

Authors: Khin May Win, Nan Sai Moon Kham

Abstract:

Identifying cancer genes that might anticipate clinical behavior across different types of cancer is challenging because of the huge number of genes and the small number of patient samples. A new method is proposed based on supervised classification learning with support vector machines (SVMs). A new solution is described by introducing the Maximized Margin (MM) into the subset criterion, which permits approaching the lowest generalization error rate. In class prediction problems, gene selection is essential for improving accuracy and for identifying genes relevant to the cancer disease. The performance of the new method was evaluated in experiments with real-world data, and it gives better classification accuracy.

Keywords: Microarray data, feature selection, recursive feature elimination, support vector machines.
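
A sketch of SVM-based recursive feature elimination, the technique named in the keywords; the paper's Maximized Margin subset criterion itself is not reproduced. Assumes scikit-learn; the data shapes mimic a microarray setting (few samples, many genes) and are synthetic.

```python
import numpy as np
from sklearn.svm import LinearSVC

def svm_rfe(X, y, n_keep=10, step=1):
    """Recursive feature elimination with a linear SVM: repeatedly fit,
    rank genes by |weight|, and drop the lowest-ranked ones."""
    active = np.arange(X.shape[1])
    while active.size > n_keep:
        clf = LinearSVC(C=1.0, dual=False).fit(X[:, active], y)
        importance = np.abs(clf.coef_).sum(axis=0)
        order = np.argsort(importance)                 # least important first
        n_drop = min(step, active.size - n_keep)
        active = active[order[n_drop:]]                # drop the weakest genes
    return active

X = np.random.randn(40, 200)            # 40 samples, 200 "genes"
y = np.random.randint(0, 2, 40)
print(svm_rfe(X, y, n_keep=5))          # indices of the surviving features
```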

13203 New Newton's Method with Third-order Convergence for Solving Nonlinear Equations

Authors: Osama Yusuf Ababneh

Abstract:

In recent years, variants of Newton's method with cubic convergence have become popular iterative methods for approximating the roots of nonlinear equations. These methods enjoy cubic convergence at simple roots without requiring the evaluation of second-order derivatives. In this paper, we present a new Newton-type method based on the contraharmonic mean that is cubically convergent. Numerical examples show that the new method can compete with the classical Newton's method.

Keywords: Third-order convergence, non-linear equations, root finding, iterative method.
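
A hedged sketch of one plausible reading of the scheme: apply the contraharmonic mean C(a, b) = (a² + b²)/(a + b) to the derivative at the current point and at the ordinary Newton predictor. The abstract does not give the exact formula, so this is an assumption, not the paper's definitive method.

```python
def newton_contraharmonic(f, df, x0, tol=1e-12, maxit=50):
    """Newton variant replacing f'(x_n) with the contraharmonic mean of
    f'(x_n) and f'(y_n), where y_n is the classical Newton step."""
    x = x0
    for _ in range(maxit):
        d1 = df(x)
        y = x - f(x) / d1                     # classical Newton predictor
        d2 = df(y)
        c = (d1 * d1 + d2 * d2) / (d1 + d2)   # contraharmonic mean
        x_new = x - f(x) / c
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(newton_contraharmonic(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))  # sqrt(2)
```

Like other two-evaluation mean-based variants, this uses only first derivatives per step, matching the abstract's claim that no second derivatives are needed.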

13202 Ensembling Adaptively Constructed Polynomial Regression Models

Authors: Gints Jekabsons

Abstract:

The approach of subset selection in polynomial regression model building assumes that the chosen fixed full set of predefined basis functions contains a subset sufficient to describe the target relation well. However, in most cases the necessary set of basis functions is not known and needs to be guessed, a potentially non-trivial (and long) trial-and-error process. In our research we consider a potentially more efficient approach, Adaptive Basis Function Construction (ABFC), which lets the model building method itself construct the basis functions necessary for creating a model of arbitrary complexity with adequate predictive performance. However, two issues to some extent plague both subset selection and ABFC methods, especially when working with relatively small data samples: selection bias and selection instability. We try to correct these issues by model post-evaluation using cross-validation and by model ensembling. To evaluate the proposed method, we empirically compare it to ABFC methods without ensembling, to a widely used method of subset selection, and to some other well-known regression modeling methods, using publicly available data sets.

Keywords: Basis function construction, heuristic search, model ensembles, polynomial regression.

13201 Improved IDR(s) Method for Gaining Very Accurate Solutions

Authors: Yusuke Onoue, Seiji Fujino, Norimasa Nakashima

Abstract:

The IDR(s) method, based on an extended IDR theorem, was proposed by Sonneveld and van Gijzen. The original IDR(s) method has excellent properties compared with conventional iterative methods in terms of efficiency and small memory requirements. The IDR(s) method, however, has the unexpected property that the relative residual 2-norm stagnates at a level of less than 10^-12. In this paper, an effective strategy for stagnation detection, stagnation avoidance using adaptive information about the parameter s, and an improvement of the convergence rate of the IDR(s) method itself are proposed in order to obtain highly accurate approximate solutions. Through numerical experiments, the effectiveness of the adaptively tuned IDR(s) method is verified and demonstrated.

Keywords: Krylov subspace methods, IDR(s), adaptive tuning, stagnation of relative residual.
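
The abstract does not specify the detection strategy, so the following is only a generic residual plateau test of the kind an adaptive Krylov solver could use before switching s or restarting; the window and factor are illustrative.

```python
import numpy as np

def stagnated(res_history, window=30, factor=0.9):
    """Flag stagnation when the relative residual norm has not improved by
    at least `factor` over the last `window` iterations."""
    if len(res_history) <= window:
        return False
    return res_history[-1] > factor * res_history[-1 - window]

# e.g. inside the iteration loop of an IDR(s) solver:
history = list(np.logspace(0, -12, 200)) + [1e-12] * 60  # converge, then plateau
print(stagnated(history))  # True: trigger the adaptive-s / restart strategy
```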

13200 Road Accidents Big Data Mining and Visualization Using Support Vector Machines

Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma

Abstract:

Useful information has been extracted from road accident data in the United Kingdom (UK) using data analytics methods, with the goal of avoiding possible accidents in rural and urban areas. The analysis makes use of several methodologies such as data integration, support vector machines (SVM), correlation machines, and multinomial goodness. The entire datasets were imported from the UK traffic department with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn avoid unnecessary memory lapses. Since the data are expected to grow continuously over time, this work primarily proposes a new framework model that can be trained on and adapt itself to new data and make accurate predictions. The work also discusses the use of SVM methodology for text classification on the obtained traffic data, and emphasizes the uniqueness and adaptability of the SVM methodology for this kind of research work.

Keywords: Road accident, machine learning, support vector machines.

13199 Multi-Objective Optimization of Gas Turbine Power Cycle

Authors: Mohsen Nikaein

Abstract:

Because of the importance of energy, the optimization of power generation systems is necessary. Gas turbine cycles are suitable for fast power generation, but their efficiency is relatively low. To achieve higher efficiencies, several measures are employed, such as heat recovery from exhaust gases in a regenerator, the use of an intercooler in a multistage compressor, and steam injection into the combustion chamber. Even with these components, however, thermodynamic optimization of the gas turbine cycle is necessary. In this article, multi-objective genetic algorithms are employed for Pareto optimization of the Regenerative-Intercooling-Gas Turbine (RIGT) cycle. In multi-objective optimization, a number of conflicting objective functions are optimized simultaneously. The objective functions considered are the entropy generation of the RIGT cycle (Ns), derived using exergy analysis and the Gouy-Stodola theorem, the thermal efficiency, and the net output power of the RIGT cycle; these objectives usually conflict with each other. The design variables are thermodynamic parameters: compressor pressure ratio (Rp), excess air in combustion (EA), turbine inlet temperature (TIT), and inlet air temperature (T0). Single-objective optimization is investigated first, and the Non-dominated Sorting Genetic Algorithm (NSGA-II) is used for multi-objective optimization. Optimization is performed for two and three objective functions, and the results are compared for the RIGT cycle. To investigate the optimal thermodynamic behavior of two objectives, different sets, each including two of the output parameters, are considered individually, and for each set the Pareto front is depicted. The decision variables selected from this Pareto front yield the best possible combinations of the corresponding objective functions; no point on the Pareto front is superior to another, but all are superior to any other point. For the three-objective optimization, the results are given in tables.

Keywords: Exergy, Entropy Generation, Brayton Cycle, Design Parameters, Optimization, Genetic Algorithm, Multi-Objective.
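
Non-dominated sorting is the core of NSGA-II; a minimal Pareto-front extraction over a matrix of objective values, with illustrative numbers only (both columns cast as minimization, e.g., entropy generation Ns and negated thermal efficiency).

```python
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated rows of objective matrix F
    (rows = designs, columns = objectives, all to be minimized)."""
    n = F.shape[0]
    keep = []
    for i in range(n):
        # i is dominated if some row is <= everywhere and < somewhere
        dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

# columns: entropy generation Ns (minimize), -efficiency (minimize)
F = np.array([[0.90, -0.40],
              [0.70, -0.28],
              [0.80, -0.35],
              [0.75, -0.38]])
print(pareto_front(F))  # [0, 1, 3]: the mutually non-dominated designs
```

NSGA-II repeats this sorting over fronts and adds crowding-distance selection; the extracted front is exactly the set of "best possible combinations" the abstract describes.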

13198 Development of Decision Support System for House Evaluation and Purchasing

Authors: Chia-Yu Hsu, Julaimin Goh, Pei-Chann Chang

Abstract:

Home is important to Chinese people. Because information on house attributes and surrounding environments is incomplete at most real estate agencies, house buyers find it difficult to weigh all factors effectively and can only search candidates by a sorting-based approach. This study develops a decision support system for house purchasing in which the surrounding facilities of each house are quantified. All considered house factors and customer preferences are then incorporated into the Simple Multi-Attribute Ranking Technique (SMART) to support housing evaluation. To evaluate the validity of the proposed approach, an empirical study was conducted with a real estate agency. Based on customer requirements and preferences, the proposed approach can identify better candidate houses by considering overall house attributes and surrounding facilities.

Keywords: decision support system, real estate, decision analysis, housing evaluation, SMART
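
A minimal SMART-style scoring sketch: normalize each attribute to a 0-1 value scale, weight by customer preference, and sum into one utility per house. The attribute set and weights below are hypothetical, not taken from the paper's empirical study.

```python
import numpy as np

def smart_rank(scores, weights):
    """Simple Multi-Attribute Ranking Technique: min-max normalize each
    attribute into a value function, weight, and sum to a utility."""
    S = np.asarray(scores, dtype=float)
    lo, hi = S.min(axis=0), S.max(axis=0)
    V = (S - lo) / np.where(hi > lo, hi - lo, 1.0)  # 0-1 value functions
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                 # normalized weights
    return V @ w

# hypothetical attributes: affordability score, floor area (m^2),
# quantified surrounding-facility score
houses = [[0.8, 90, 0.6],
          [0.5, 120, 0.9],
          [0.9, 75, 0.4]]
weights = [0.5, 0.3, 0.2]
print(smart_rank(houses, weights))  # higher utility = better candidate
```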

13197 Fuzzy Metric Approach for Fuzzy Time Series Forecasting based on Frequency Density Based Partitioning

Authors: Tahseen Ahmed Jilani, Syed Muhammad Aqil Burney, C. Ardil

Abstract:

Over the last 15 years, a number of methods have been proposed for forecasting based on fuzzy time series, most of them evaluated on forecasting enrollments at the University of Alabama. However, the forecasting accuracy rates of the existing methods are not good enough. In this paper, we compare our proposed new method of fuzzy time series forecasting with existing methods. Our method is based on frequency-density-based partitioning of the historical enrollment data and belongs to the kth-order, time-variant class of methods. The proposed method achieves a better forecasting accuracy rate for enrollments than the existing methods.

Keywords: Fuzzy logical groups, fuzzified enrollments, fuzzy sets, fuzzy time series.
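
One common reading of frequency-density-based partitioning is that intervals of the universe of discourse are chosen so each holds roughly the same number of observations, rather than being equally wide. A sketch under that assumption, on synthetic enrollment-like data (not the actual Alabama figures):

```python
import numpy as np

def frequency_density_intervals(data, n_intervals=7):
    """Partition the universe of discourse into equal-frequency intervals:
    each interval covers roughly the same number of observations, so dense
    regions of the data get narrower fuzzy sets."""
    qs = np.linspace(0, 1, n_intervals + 1)
    edges = np.quantile(np.asarray(data, dtype=float), qs)
    return list(zip(edges[:-1], edges[1:]))

rng = np.random.default_rng(1)
data = 15000 + 1500 * rng.random(22)   # synthetic, enrollment-like magnitudes
for lo, hi in frequency_density_intervals(data, 4):
    print(f"[{lo:.0f}, {hi:.0f}]")
```

Each interval would then anchor one fuzzy set, and the fuzzified series feeds the kth-order forecasting rules.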

13196 Comparison of Detached Eddy Simulations with Turbulence Modeling

Authors: Muhammad Amjad Sohail, Prof. Yan Chao, Mukkarum Husain

Abstract:

The flow field around hypersonic vehicles is very complex and difficult to simulate. The boundary layers are squeezed between the shock layer and the body surface, and resolving the boundary layer, shock waves, and turbulent regions with high flow gradients is difficult. Detached eddy simulation (DES) is a modification of a RANS model in which the model switches to a subgrid-scale formulation in regions fine enough for LES calculations. Regions near solid boundaries, where the turbulent length scale is less than the maximum grid dimension, are assigned the RANS mode of solution; as the turbulent length scale exceeds the grid dimension, regions are solved in LES mode. The grid resolution is therefore not as demanding as for pure LES, considerably cutting the cost of the computation. In this study, hypersonic flow is simulated at Mach 8 and different angles of attack to resolve the boundary layers and discontinuities properly, and the flow is also simulated in the long wake regions. The mesh differs somewhat from that of RANS simulations: it is made dense near the boundary layers and in the wake regions to resolve them properly. Hypersonic blunt cone-cylinder bodies with frustum angles of 5° and 10° are simulated, and their aerodynamic characteristics are computed. The results are compared with experimental data as well as with a turbulence model (the SA model), and the DES results show very good resolution and excellent agreement with experimental and available data. Unsteady DES calculations are performed using dual (implicit) time stepping. All simulations are performed at Mach 8 and angles of attack from 0° to 10°, and the DES results and resolution are found to be much better than those of the SA turbulence model.

Keywords: Detached eddy simulation, dual time stepping, hypersonic flow, turbulence modeling
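
In the standard Spalart-Allmaras-based DES formulation (Spalart et al.), the RANS/LES switch the abstract describes is implemented by replacing the wall distance d in the turbulence model with a hybrid length scale that caps it by the local grid spacing:

```latex
\tilde{d} = \min\bigl(d,\; C_{\mathrm{DES}}\,\Delta\bigr), \qquad
\Delta = \max(\Delta_x,\,\Delta_y,\,\Delta_z), \qquad
C_{\mathrm{DES}} \approx 0.65
```

Near walls, d < C_DES Δ and the model behaves as RANS; away from walls, the grid dimension takes over and the model acts as an LES subgrid model, which is exactly the mode switch described above.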

13195 Topology-Based Character Recognition Method for Coin Date Detection

Authors: Xingyu Pan, Laure Tougne

Abstract:

For recognizing coins, the engraved release date is important information for precisely identifying the monetary type. However, reading characters on coins faces many more obstacles than traditional character recognition tasks in other fields, such as reading scanned documents or license plates. To address this challenging issue in a numismatic context, we propose a training-free approach dedicated to detecting and recognizing the release date of a coin. In the first step, the date zone is detected by comparing histogram features; in the second step, a topology-based algorithm is introduced to recognize coin numerals in various font types, represented by a binary gradient map. Our method obtained a recognition rate of 92% on synthetic data and of 44% on real noised data.

Keywords: Coin, detection, character recognition, topology.

13194 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks

Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone

Abstract:

Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by continuous electroencephalogram (EEG) monitoring. Seizure identification on EEG signals is performed manually by epileptologists, a process that is usually very long and error prone. This paper describes an automated method to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an artificial neural network classifier, trained with the multilayer perceptron algorithm, and on a software application, called Training Builder, developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% in tests on data of a single patient retrieved from a publicly available EEG dataset.

Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.

13193 Method for Tuning Level Control Loops Based on Internal Model Control and Closed Loop Step Test Data

Authors: Arnaud Nougues

Abstract:

This paper describes a two-stage methodology derived from IMC (Internal Model Control) for tuning a PID (Proportional-Integral-Derivative) controller for levels or other integrating processes in an industrial environment. The focus is on ease of use and implementation speed, which are critical for an industrial application: tuning can be done with minimum effort and without time-consuming open-loop step tests on the plant. The first stage applies to levels only: the vessel residence time is calculated from equipment dimensions and used to derive preliminary PI (Proportional-Integral) settings with IMC. The second stage, re-tuning in closed loop, applies to levels as well as other integrating processes: a tuning correction mechanism has been developed from a series of closed-loop simulations with model errors. The correction is derived from a simple closed-loop step test and a generic correlation between the observed overshoot and the integral time correction. A spin-off of the method is that an estimate of the vessel residence time (for levels) or the open-loop process gain (for other integrating processes) is obtained from the closed-loop data.

Keywords: closed-loop model identification, IMC-PID tuning method, integrating process control, on-line PID tuning adaptation
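
A sketch of the first stage under stated assumptions: the residence time comes from vessel geometry, and the textbook IMC PI rule for a pure integrator G = Kp/s (Kc = 2/(Kp·λ), Ti = 2λ) gives the preliminary settings. The paper's exact correlations, and the second-stage closed-loop correction, are not reproduced; the units and signal scaling below are illustrative.

```python
import math

def level_pi_imc(diameter_m, height_m, q_max_m3h, lam_h):
    """Preliminary PI settings for a level loop via IMC:
    residence time from vessel dimensions -> integrating gain Kp = 1/tau,
    then Kc = 2/(Kp*lam), Ti = 2*lam for the integrator Kp/s."""
    volume = math.pi * (diameter_m / 2) ** 2 * height_m   # vessel volume, m^3
    tau = volume / q_max_m3h                              # residence time, h
    kp = 1.0 / tau    # (fraction of level) / (fraction of flow * h)
    kc = 2.0 / (kp * lam_h)
    ti = 2.0 * lam_h  # integral time, h
    return kc, ti

print(level_pi_imc(diameter_m=2.0, height_m=3.0, q_max_m3h=20.0, lam_h=0.25))
```

The closed-loop time constant λ is the single tuning knob, which is what keeps the first stage usable without plant tests.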

13192 Automated Method Time Measurement System for Redesigning Dynamic Facility Layout

Authors: Salam Alzubaidi, G. Fantoni, F. Failli, M. Frosolini

Abstract:

The dynamic facility layout problem is a critical issue in the competitive industrial market; solving it requires robust design and effective simulation systems, and sustainable simulation requires reliable and accurate input data. This paper therefore describes an automated system, integrated into the real environment, that measures the duration of material handling operations, collects the data in real time, and determines the variances between the actual and estimated operation schedules in order to update the simulation software and redesign the facility layout periodically. The automated method-time measurement system collects the real data using Radio Frequency Identification (RFID) and Internet of Things (IoT) technologies: RFID antenna readers and RFID tags enable the system to identify the locations of objects and gather timing data. The gathered durations are processed by calculating the moving average duration of the material handling operations and choosing the shortest material handling path, and the simulation software is then updated to redesign the facility layout to match the shortest/actual operation schedule. Periodic simulation in real time is more sustainable and reliable than simulation relying on an analysis of historical data. The case study of this methodology was carried out in cooperation with a workshop team producing mechanical parts. Although there are some technical limitations, the methodology is promising and can be significantly useful in redesigning manufacturing layouts.

Keywords: Dynamic facility layout problem, internet of things, method time measurement, radio frequency identification, simulation.

13191 A Decision Matrix for the Evaluation of Triplestores for Use in a Virtual Research Environment

Authors: Tristan O’Neill, Trina Myers, Jarrod Trevathan

Abstract:

The Tropical Data Hub (TDH) is a virtual research environment that provides researchers with an e-research infrastructure to congregate significant tropical data sets for data reuse, integration, searching, and correlation. However, researchers often require data and metadata synthesis across disciplines for cross-domain analyses and knowledge discovery. A triplestore offers a semantic layer to achieve a more intelligent method of search to support the synthesis requirements by automating latent linkages in the data and metadata. Presently, the benchmarks to aid the decision of which triplestore is best suited for use in an application environment like the TDH are limited to performance. This paper describes a new evaluation tool developed to analyze both features and performance. The tool comprises a weighted decision matrix to evaluate the interoperability, functionality, performance, and support availability of a range of integrated and native triplestores to rank them according to requirements of the TDH.

Keywords: Virtual research environment, Semantic Web, performance analysis, tropical data hub.

13190 Denoising ECG Using Translation Invariant Multiwavelet

Authors: Jeong Yup Han, Su Kyung Lee, Hong Bae Park

Abstract:

In this paper, we propose a method to reduce the various kinds of noise introduced while gathering and recording the electrocardiogram (ECG) signal. Because of the shortcomings of previous ECG noise-elimination methods, we use a translation-invariant (TI) multiwavelet denoising method. The advantage of the proposed method is that it not only retains the geometrical characteristics of the original ECG signal and keeps the amplitudes of the various ECG waveforms, but also suppresses impulsive noise to some extent. Simulation results indicate that the proposed method is better than previous noise-removal methods at preserving the geometrical characteristics of the ECG signal and in signal-to-noise ratio (SNR).

Keywords: ECG, TI multiwavelet, denoise.
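
A hedged sketch of translation-invariant denoising by cycle spinning: soft-threshold the wavelet coefficients of several circular shifts of the signal and average the unshifted reconstructions. A scalar wavelet (db4 via PyWavelets) stands in for the multiwavelet used in the paper, and the noise estimate and threshold are the standard MAD/universal-threshold choices, not necessarily the authors'.

```python
import numpy as np
import pywt

def ti_denoise(x, wavelet="db4", level=4, n_shifts=8):
    """Cycle-spinning (translation-invariant) wavelet denoising."""
    x = np.asarray(x, dtype=float)
    cD1 = pywt.wavedec(x, wavelet, level=1)[1]       # finest detail coeffs
    sigma = np.median(np.abs(cD1)) / 0.6745          # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(x.size))        # universal threshold
    out = np.zeros_like(x)
    for s in range(n_shifts):
        coeffs = pywt.wavedec(np.roll(x, s), wavelet, level=level)
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        rec = pywt.waverec(coeffs, wavelet)[: x.size]
        out += np.roll(rec, -s)                      # undo the shift
    return out / n_shifts                            # average over shifts

t = np.linspace(0, 1, 1024)
noisy = np.sin(8 * np.pi * t) + 0.3 * np.random.randn(t.size)
clean = ti_denoise(noisy)
```

Averaging over shifts removes the Gibbs-like artifacts of ordinary thresholding near sharp waveform features, which is the property that matters for preserving QRS geometry.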

13189 DEA Method for Evaluation of EU Performance

Authors: M. Staníčková

Abstract:

The paper applies a quantitative analysis, the Data Envelopment Analysis (DEA) method, to the performance evaluation of the European Union Member States in the reference years 2000 and 2011. The main aim is to measure efficiency changes over the reference years, to analyze the level of productivity in individual countries using DEA, and to classify the EU Member States into homogeneous units (clusters) according to the efficiency results. The theoretical part is devoted to the fundamentals of performance theory and the methodology of DEA. The empirical part measures the degree of productivity and the efficiency changes of the evaluated countries with the basic CCR CRS DEA model and a specialized DEA approach, the Malmquist index, which measures the change of technical efficiency and the movement of the production possibility frontier. DEA is a suitable tool for establishing the competitive (or uncompetitive) position of each country because it evaluates not just one factor but a set of different factors that determine the degree of economic development.

Keywords: CCR CRS model, cluster analysis, DEA method, efficiency, EU, Malmquist index, performance.
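
The input-oriented CCR envelopment model is a small linear program per decision-making unit (DMU); a sketch with scipy.optimize.linprog and purely illustrative data (columns are DMUs, here standing in for countries).

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k:
    minimize theta  s.t.  X @ lam <= theta * X[:, k],  Y @ lam >= Y[:, k],
    lam >= 0.  X: inputs (m x n), Y: outputs (s x n)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lam_1..lam_n]
    A_ub = np.block([
        [-X[:, [k]], X],                        # X lam - theta x_k <= 0
        [np.zeros((s, 1)), -Y],                 # -Y lam <= -y_k
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]                             # theta = 1 means efficient

X = np.array([[2.0, 3, 6, 4],                   # two inputs, four DMUs
              [3.0, 2, 1, 5]])
Y = np.array([[1.0, 1, 1, 1]])                  # single unit output
print([round(ccr_efficiency(X, Y, k), 3) for k in range(4)])
```

Running the same model on two years' data per country is what the Malmquist index then decomposes into efficiency change and frontier movement.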

13188 An Evaluation Method of Accelerated Storage Life Test for Typical Mechanical and Electronic Products

Authors: Jinyong Yao, Hongzhi Li, Chao Du, Jiao Li

Abstract:

The reliability of long-term storage products is related to the availability of the whole system, so the evaluation of storage life is of great necessity. Such products are usually highly reliable, and little failure information can be collected. In this paper, an analytical method based on data from an accelerated storage life test is proposed to evaluate the reliability index of long-term storage products. First, singularities are eliminated by data normalization and residual analysis. Second, a degradation path model is built from the preprocessed data to obtain pseudo life values. Then, under a life distribution hypothesis, the parameters at the high stress levels are estimated and failure mechanism consistency is verified. Finally, the life distribution under the normal stress level is extrapolated via the acceleration model, making an evaluation of the actual average life available. An application example with a camera stabilization device illustrates the proposed methodology.

Keywords: Accelerated storage life test, failure mechanism consistency, life distribution, reliability.
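
A sketch of the pseudo-life and extrapolation steps on synthetic data, assuming linear degradation paths and an Arrhenius acceleration model; the abstract names neither the path form nor the acceleration law, so both are labeled assumptions, and all numbers are made up.

```python
import numpy as np

def pseudo_lives(times, paths, threshold):
    """Fit a linear degradation path y = a + b*t to each unit and solve for
    the time at which it crosses the failure threshold (pseudo life)."""
    lives = []
    for y in paths:
        b, a = np.polyfit(times, y, 1)
        lives.append((threshold - a) / b)
    return np.array(lives)

def arrhenius_extrapolate(mean_log_lives, temps_K, T_use_K):
    """Fit log(life) = c0 + c1 / T at the stress temperatures and
    extrapolate the mean log-life to the normal storage temperature."""
    c1, c0 = np.polyfit(1.0 / np.asarray(temps_K), mean_log_lives, 1)
    return np.exp(c0 + c1 / T_use_K)

t = np.arange(0, 10.0)
rng = np.random.default_rng(0)
stress = {330: 1.0, 350: 2.0, 370: 4.0}   # temperature (K) -> drift rate
mean_logs = []
for T, rate in stress.items():            # faster drift at higher stress
    paths = [rate * t * (1 + 0.1 * rng.standard_normal(t.size)) for _ in range(5)]
    mean_logs.append(np.log(pseudo_lives(t, paths, threshold=20.0)).mean())
print(arrhenius_extrapolate(mean_logs, list(stress), T_use_K=298.0))
```

Checking that the fitted slope c1 is stable across stress levels is one simple proxy for the failure mechanism consistency the abstract verifies.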

13187 The Comparison of Data Replication in Distributed Systems

Authors: Iman Zangeneh, Mostafa Moradi, Ali Mokhtarbaf

Abstract:

The necessity of the ever-increasing use of distributed data in computer networks is obvious to all. One technique performed on distributed data to increase efficiency and reliability is data replication. In this paper, after introducing this technique and its advantages, we examine some dynamic data replication strategies, analyze their characteristics under several usage scenarios, and propose suggestions for their improvement.

Keywords: data replication, data hiding, consistency, dynamic data replication strategy

13186 Direct Method for Converting FIR Filter with Low Nonzero Tap into IIR Filter

Authors: Jeong Hye Moon, Byung Hoon Kang, PooGyeon Park

Abstract:

In this paper, we propose a direct method for converting a Finite Impulse Response (FIR) filter with few nonzero taps into an Infinite Impulse Response (IIR) filter using a pre-determined table. The Prony method is used by the ghost canceller to obtain an IIR approximation to the FIR filter, which performs better than an IIR design but requires much more computation. The direct method is described for many ghost combinations with few nonzero taps in the NTSC (National Television System Committee) TV signal used in Korea, and the proposed method is illustrated with an example.

Keywords: NTSC, Ghost cancellation, FIR, IIR, Prony method.
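
The underlying Prony approximation step can be sketched directly: fit the IIR denominator by linear prediction on the tail of the FIR impulse response, then match the leading samples for the numerator. This is a generic Prony fit, not the paper's pre-determined-table method, and the ghost-like response below is illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def prony(h, p, q):
    """Prony's method: fit an IIR filter B(z)/A(z) (numerator order q,
    denominator order p) matching the FIR impulse response h."""
    h = np.asarray(h, dtype=float)
    N = len(h)
    # linear prediction on the tail: sum_k a_k h[n-k] = 0 for n > q, a_0 = 1
    H = np.zeros((N - q - 1, p))
    for n in range(q + 1, N):
        for k in range(1, p + 1):
            H[n - q - 1, k - 1] = h[n - k] if n - k >= 0 else 0.0
    a_tail, *_ = np.linalg.lstsq(H, -h[q + 1:], rcond=None)
    a = np.r_[1.0, a_tail]
    # numerator from the first q+1 samples: b_n = sum_k a_k h[n-k]
    b = np.array([sum(a[k] * h[n - k] for k in range(min(n, p) + 1))
                  for n in range(q + 1)])
    return b, a

h = np.zeros(32); h[0] = 1.0; h[8] = 0.4        # direct path plus delayed echo
b, a = prony(h, p=9, q=9)
h_iir = lfilter(b, a, np.r_[1.0, np.zeros(31)]) # IIR impulse response
print(np.max(np.abs(h_iir - h)))                # small approximation error
```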

13185 Holomorphic Prioritization of Sets within Decagram of Strategic Decision Making of POSM Using Operational Research (OR): Analytic Hierarchy Process (AHP) Analysis

Authors: Elias O. Tembe, Hussain A. Al-Salamin

Abstract:

There is a decagram of strategic decisions of operations and production/service management (POSM) within operational research (OR) which must collate, namely: design, inventory, quality, location, process and capacity, layout, scheduling, maintenance, and supply chain. This paper presents an architectural-configuration conceptual framework for this decagram of decision sets in the form of a mathematically complete graph and an abelian graph. Mathematically, a complete graph is an undirected graph (UDG) or directed graph (DG) in which every pair of vertices is connected, collated, confluent, and holomorphic. However, no study has so far prioritized the holomorphic sets of POSM within the OR field of study. This study utilizes a structured OR technique, the Analytic Hierarchy Process (AHP), for organizing, sorting, and prioritizing (ranking) the sets within the decagram of POSM according to their attribution (propensity), and provides an analysis of how the prioritization has real-world application in the 21st century.

Keywords: AHP analysis, Decagram, Decagon, Holomorphic.
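
A minimal AHP priority computation via the principal eigenvector, with Saaty's consistency ratio. The 3x3 pairwise comparison matrix below is hypothetical, comparing only three of the ten POSM decision areas for brevity; the full method would use the 10x10 decagram.

```python
import numpy as np

# Saaty's random consistency index by matrix size
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
      7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(M):
    """Priority vector of a pairwise comparison matrix via the principal
    eigenvector, plus the consistency ratio CR = CI / RI."""
    M = np.asarray(M, dtype=float)
    vals, vecs = np.linalg.eig(M)
    i = np.argmax(vals.real)            # Perron (largest) eigenvalue
    w = np.abs(vecs[:, i].real)
    w = w / w.sum()                     # normalized priorities
    n = M.shape[0]
    ci = (vals[i].real - n) / (n - 1)   # consistency index
    return w, ci / RI[n]

# hypothetical comparisons on Saaty's 1-9 scale, e.g. design vs quality vs layout
M = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(M)
print(np.round(w, 3), round(cr, 3))  # CR < 0.1 means acceptably consistent
```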
