Search results for: Heterogeneous Earliest Finish Time (HEFT) algorithm

7656 Discovery of Production Rules with Fuzzy Hierarchy

Authors: Fadl M. Ba-Alwi, Kamal K. Bharadwaj

Abstract:

In this paper, a novel algorithm is proposed that integrates the processes of fuzzy hierarchy generation and rule discovery for the automated discovery of Production Rules with Fuzzy Hierarchy (PRFH) in large databases. A frequency matrix (Freq) is introduced to summarize the large database; it helps in minimizing the number of database accesses and in identifying and removing irrelevant attribute values and weak classes during fuzzy hierarchy generation. Experimental results have established the effectiveness of the proposed algorithm.
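
As an illustration of the frequency-matrix idea, the sketch below builds a single-pass summary of (attribute, value) counts per class and prunes weak entries; the toy data, the pruning threshold, and the helper names are assumptions for this example, not details taken from the paper.

```python
from collections import defaultdict

def build_freq_matrix(records, class_attr):
    """Single-pass frequency matrix: counts of (attribute, value) pairs per class.

    `records` is an iterable of dicts; `class_attr` names the class column.
    This is only an illustrative summary structure, not the authors' exact Freq matrix.
    """
    freq = defaultdict(lambda: defaultdict(int))   # (attr, value) -> class -> count
    for row in records:
        cls = row[class_attr]
        for attr, value in row.items():
            if attr == class_attr:
                continue
            freq[(attr, value)][cls] += 1
    return freq

def prune_weak_entries(freq, min_support=2):
    """Drop attribute values whose total count never reaches min_support."""
    return {key: classes for key, classes in freq.items()
            if sum(classes.values()) >= min_support}

# Example: three toy records summarized without further database scans.
data = [
    {"colour": "red", "size": "small", "label": "A"},
    {"colour": "red", "size": "large", "label": "B"},
    {"colour": "blue", "size": "small", "label": "A"},
]
freq = build_freq_matrix(data, class_attr="label")
print(prune_weak_entries(freq, min_support=2))
```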

Keywords: Data Mining, Degree of subsumption, Freq matrix, Fuzzy hierarchy.

7655 Genetic Algorithms Multi-Objective Model for Project Scheduling

Authors: Elsheikh Asser

Abstract:

Time and cost are the main goals of construction project management. The first schedule developed may not be suitable for beginning or completing the project so as to achieve the target completion time at a minimum total cost. In general, there are trade-offs between time and cost (TCT) in completing the activities of a project. This research presents a genetic algorithm (GA) based multi-objective model for project scheduling that considers different scenarios such as least cost, least time, and target time.
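
A minimal sketch of a GA for the time-cost trade-off, assuming a serial chain of activities with alternative (duration, cost) modes and a weighted-sum fitness; the activity data, weights, and operators are illustrative, not the authors' model.

```python
import random

# Hypothetical activity modes: (duration in days, direct cost); real projects would
# use a precedence network, but a serial chain keeps the sketch short.
MODES = [
    [(6, 1000), (4, 1500), (3, 2200)],
    [(8, 1800), (5, 2600)],
    [(5, 900), (3, 1400)],
]
W_TIME, W_COST = 100.0, 1.0   # assumed weights for the target-time scenario

def fitness(chromosome):
    duration = sum(MODES[i][g][0] for i, g in enumerate(chromosome))
    cost = sum(MODES[i][g][1] for i, g in enumerate(chromosome))
    return W_TIME * duration + W_COST * cost   # lower is better

def random_individual():
    return [random.randrange(len(m)) for m in MODES]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.2):
    return [random.randrange(len(MODES[i])) if random.random() < rate else g
            for i, g in enumerate(ind)]

population = [random_individual() for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness)                 # keep the fittest mode selections
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children
print("best mode selection:", min(population, key=fitness))
```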

Keywords: Genetic algorithms, Time-cost trade-off.

7654 Feature Reduction of Nearest Neighbor Classifiers using Genetic Algorithm

Authors: M. Analoui, M. Fadavi Amiri

Abstract:

The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and for increasing classification efficiency, it does not necessarily reduce the number of features that must be measured, since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper, a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach, each feature value is first normalized by a linear equation and then scaled by the associated weight prior to training, testing, and classification. A k-NN classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. With this approach, the number of features used in classification can be substantially reduced.
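
The sketch below illustrates the general idea, assuming the Iris data, a small population, and Gaussian perturbation as the variation operator: a GA evolves a feature-weight vector, a k-NN classifier scores each candidate, and weights near zero effectively drop features.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)        # linear normalization of each feature
rng = np.random.default_rng(0)

def fitness(weights):
    # Scale each normalized feature by its weight and score a k-NN classifier.
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=3), X * weights, y, cv=3).mean()
    sparsity_bonus = 0.01 * np.sum(weights < 0.05)  # assumed reward for dropped features
    return acc + sparsity_bonus

pop = rng.random((30, X.shape[1]))
for _ in range(40):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]                       # keep the fittest
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, X.shape[1]))
    pop = np.vstack([parents, np.clip(children, 0, 1)])

best = pop[np.argmax([fitness(w) for w in pop])]
print("feature weights:", np.round(best, 2), "-> kept:", np.where(best >= 0.05)[0])
```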

Keywords: Feature reduction, genetic algorithm, pattern classification, nearest neighbor rule classifiers (k-NNR).

7653 Split-Pipe Design of Water Distribution Networks Using a Combination of Tabu Search and Genetic Algorithm

Authors: J. Tospornsampan, I. Kita, M. Ishii, Y. Kitamura

Abstract:

In this paper, a combination of two heuristic-based algorithms, a genetic algorithm and tabu search, is proposed. It has been developed to obtain the least cost based on the split-pipe design of looped water distribution networks. The proposed combination algorithm has been applied to three well-known water distribution networks taken from the literature. The combination of these two heuristic-based algorithms is aimed at enhancing their strengths and compensating for their weaknesses: tabu search is a rather systematic, deterministic technique that uses adaptive memory in the search process, while the genetic algorithm is a probabilistic, stochastic optimization technique in which the solution space is explored by generating candidate solutions. Split-pipe design may not be realistic in practice, but for optimization purposes the optimal solutions are always achieved with split-pipe design. The solutions obtained in this study confirm that the least-cost solutions obtained from the split-pipe design are always better than those obtained from the single-pipe design. The results obtained from the combination approach show its ability and effectiveness in solving combinatorial optimization problems. The solutions obtained are of very high quality, and the solutions for two of the networks are the lowest-cost solutions yet presented in the literature. The combination approach proposed in this study is expected to be useful in diverse problems.
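
A minimal sketch of such a GA/tabu-search hybrid, assuming a toy cost function in place of a hydraulic solver and discrete candidate diameters; the GA explores candidate designs while tabu search, with its short-term memory, intensifies the search around the current best.

```python
import random

DIAMETERS = [100, 150, 200, 250, 300]      # candidate pipe diameters (mm), assumed
N_PIPES = 8

def cost(design):
    # Stand-in cost: larger diameters cost more, undersized networks are penalized.
    # A real application would call a hydraulic solver to check head constraints.
    base = sum(d * 1.7 for d in design)
    penalty = 5000 if sum(design) < 1400 else 0
    return base + penalty

def tabu_search(design, iters=30):
    current, best, tabu = list(design), list(design), []
    for _ in range(iters):
        moves = [(i, d) for i in range(N_PIPES) for d in DIAMETERS
                 if d != current[i] and (i, d) not in tabu]
        i, d = min(moves, key=lambda m: cost(current[:m[0]] + [m[1]] + current[m[0] + 1:]))
        tabu.append((i, current[i])); tabu[:] = tabu[-7:]   # short-term adaptive memory
        current[i] = d
        if cost(current) < cost(best):
            best = list(current)
    return best

def crossover(a, b):
    cut = random.randrange(1, N_PIPES)
    return a[:cut] + b[cut:]

pop = [[random.choice(DIAMETERS) for _ in range(N_PIPES)] for _ in range(20)]
for _ in range(25):
    pop.sort(key=cost)
    pop[0] = tabu_search(pop[0])                        # TS intensifies the GA's best
    pop = pop[:10] + [crossover(random.choice(pop[:10]), random.choice(pop[:10]))
                      for _ in range(10)]
best = min(pop, key=cost)
print("least-cost design found:", best, cost(best))
```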

Keywords: GAs, Heuristics, Looped network, Least-cost design, Pipe network, Optimization, TS

7652 Verified Experiment: Intelligent Fuzzy Weighted Input Estimation Method to Inverse Heat Conduction Problem

Authors: Chen-Yu Wang, Tsung-Chien Chen, Ming-Hui Lee, Jen-Feng Huang

Abstract:

In this paper, an innovative intelligent fuzzy weighted input estimation method (FWIEM) is applied to the inverse heat conduction problem (IHCP) to estimate an unknown time-varying heat flux efficiently. The feasibility of the method is verified by a temperature measurement experiment. Attention is focused on heat flux estimation for three kinds of samples (copper, iron, and steel/AISI 304) with the same 3 mm thickness. The temperature measurements are then used as inputs to the FWIEM to estimate the heat flux. The experimental results show that the proposed algorithm can estimate the unknown time-varying heat flux on-line.

Keywords: Fuzzy Weighted Input Estimation Method, IHCP, and Heat Flux.

7651 P-ACO Approach to Assignment Problem in FMSs

Authors: I. Mahdavi, A. Jazayeri, M. Jahromi, R. Jafari, H. Iranmanesh

Abstract:

One of the most important problems in production planning of a flexible manufacturing system (FMS) is the machine tool selection and operation allocation problem, which directly influences production costs and times. In this paper, minimizing machining cost, set-up cost, and material handling cost is considered as a multi-objective problem in the flexible manufacturing system environment. We present a 0-1 integer linear programming model for the multi-objective machine tool selection and operation allocation problem. Due to the large-scale nature of the problem, solving it to optimality in a reasonable time is infeasible, so a Pareto ant colony optimization (P-ACO) approach for solving the multi-objective problem in reasonable time is developed. Experimental results indicate the effectiveness of the proposed algorithm for solving the problem.

Keywords: Flexible manufacturing system, Production planning, Machine tool selection, Operation allocation, Multiobjective optimization, Metaheuristic.

7650 Wireless Transmission of Big Data Using Novel Secure Algorithm

Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha

Abstract:

This paper presents a novel algorithm for secure, reliable, and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay, and destination nodes. Big data has to be transmitted from source to relay and from relay to destination by deploying security in the physical layer. The cooperative jamming scheme makes transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data transmitting region, segmenting the selected region, determining the probability ratio for each node (capture node, non-capture node, and eavesdropper node) in every segment, and evaluating the probability using a binary-based evaluation. If the transmission is secure, the two-hop transmission of big data resumes; otherwise, the attackers are prevented by the cooperative jamming scheme and the data is transmitted in two-hop transmission.

Keywords: Big data, cooperative jamming, energy balance, physical layer, two-hop transmission, wireless security.

7649 Comparative Study on Recent Integer DCTs

Authors: Sakol Udomsiri, Masahiro Iwahashi

Abstract:

This paper presents a comparative study on recent integer DCTs and a new method to construct a low-sensitivity structure of the integer DCT for colored input signals. The method uses the sensitivity of multiplier coefficients to finite word length as an indicator of how word length truncation affects the quality of the output signal. The sensitivity is also theoretically evaluated as a function of the auto-correlation and covariance matrix of the input signal. The structure of the integer DCT algorithm is optimized by combining the lower-sensitivity lifting structure types of IRT, evaluated by the sensitivity of multiplier coefficients to finite word length expressed as a function of the covariance matrix of the input signal. The effectiveness of the optimum combination of IRT in the integer DCT algorithm is confirmed by quality improvement compared with the existing case. As a result, the optimum combination of IRT in each integer DCT algorithm evidently improves output signal quality while remaining compatible with the existing one.

Keywords: DCT, sensitivity, lossless, wordlength.

7648 Revised PLWAP Tree with Non-frequent Items for Mining Sequential Pattern

Authors: R. Vishnu Priya, A. Vadivel

Abstract:

Sequential pattern mining is a challenging task in the data mining area with broad applications. One of those applications is mining patterns from weblogs. In recent times, weblogs are highly dynamic and some entries may become obsolete over time. In addition, users may frequently change the threshold value during the data mining process until acquiring the required output or mining interesting rules. Some recently proposed algorithms for mining weblogs build the tree with two scans and consume large amounts of time and space. In this paper, we build a Revised PLWAP with Non-frequent Items (RePLNI-tree) with a single scan for all items. While mining sequential patterns, the links related to the non-frequent items are not considered. Hence, it is not required to delete or maintain the information of nodes while revising the tree for mining updated transactions. The algorithm supports both incremental and interactive mining. It is not required to re-compute the patterns each time the weblog is updated or the minimum support is changed. The performance of the proposed tree is better even when the size of the incremental database is more than 50% of the existing one. For evaluation purposes, we have used benchmark weblog datasets and found that the performance of the proposed tree is encouraging compared to some recently proposed approaches.

Keywords: Sequential pattern mining, weblog, frequent and non-frequent items, incremental and interactive mining.

7647 Optimal Design of Composite Patch for a Cracked Pipe by Utilizing Genetic Algorithm and Finite Element Method

Authors: Mahdi Fakoor, Seyed Mohammad Navid Ghoreishi

Abstract:

Composite patching is a common way of reinforcing cracked pipes and cylinders. The effects of composite patch reinforcement on the fracture parameters of a cracked pipe depend on a variety of parameters such as the number of layers and the angle, thickness, and material of each layer. Therefore, stacking sequence optimization of the composite patch becomes crucial for applications to cracked pipes. In this study, in order to obtain the optimal stacking sequence for a composite patch that has minimum weight and maximum resistance to crack propagation, a coupled Multi-Objective Genetic Algorithm (MOGA) and Finite Element Method (FEM) process is proposed. This optimization has been done for longitudinal and transverse semi-elliptical cracks, and the optimal stacking sequences and Pareto fronts for each kind of crack are presented. The proposed algorithm is validated against results collected from the existing literature.

Keywords: Multi objective optimization, Pareto front, composite patch, cracked pipe.

7646 Self-adaptation of Ontologies to Folksonomies in Semantic Web

Authors: Francisco Echarte, José Javier Astrain, Alberto Córdoba, Jesús Villadangos

Abstract:

Ontologies and tagging systems are two different ways to organize the knowledge present in the current Web. In this paper we propose a simple method to model folksonomies, as tagging systems, with ontologies. We show the scalability of the method using real data sets. The modeling method is composed of a generic ontology that represents any folksonomy and an algorithm to transform the information contained in folksonomies to the generic ontology. The method allows representing folksonomies at any instant of time.
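
A small sketch of the transformation step, assuming rdflib and a hypothetical Annotation class with taggedBy/usesTag/annotates/atTime properties (the paper's own generic ontology is not reproduced here): each (user, tag, resource, time) posting becomes an individual in the ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/folk#")   # hypothetical namespace, not the paper's ontology
g = Graph()

def add_annotation(g, user, tag, resource, timestamp):
    """Transform one folksonomy posting (user, tag, resource) into ontology individuals."""
    ann = URIRef(EX[f"ann_{user}_{tag}_{timestamp}"])
    g.add((ann, RDF.type, EX.Annotation))
    g.add((ann, EX.taggedBy, URIRef(EX[user])))
    g.add((ann, EX.usesTag, URIRef(EX[tag])))
    g.add((ann, EX.annotates, URIRef(resource)))
    g.add((ann, EX.atTime, Literal(timestamp)))   # keeps the "any instant of time" aspect

# A few tagging-system postings rendered as ontology individuals.
add_annotation(g, "alice", "semanticweb", "http://example.org/doc1", "2008-01-15")
add_annotation(g, "bob", "owl", "http://example.org/doc1", "2008-02-03")
print(g.serialize(format="turtle"))
```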

Keywords: Folksonomies, ontologies, OWL, semantic web.

7645 Self-Organizing Maps in an Evolutionary Approach Meant for Dimensioning Routes to the Demand

Authors: J.-C. Créput, A. Koukam, A. Hajjam

Abstract:

We present a non-standard Euclidean vehicle routing problem that adds a level of clustering, and we revisit the use of self-organizing maps as a tool that naturally handles such problems. We show how they can be used as the main operator in an evolutionary algorithm to address the two conflicting objectives of minimizing route length and the distance from customers to bus stops, and to deal with capacity constraints. We apply the approach to a real-life case of combined clustering and vehicle routing for the transportation of the 780 employees of an enterprise. Based on a geographic information system, we discuss the influence of road infrastructure on the solutions generated.

Keywords: Evolutionary algorithm, self-organizing map, clustering and vehicle routing.

7644 A Subjective Scheduler Based on Backpropagation Neural Network for Formulating a Real-life Scheduling Situation

Authors: K. G. Anilkumar, T. Tanprasert

Abstract:

This paper presents a subjective job scheduler based on a 3-layer Backpropagation Neural Network (BPNN) and a greedy alignment procedure in order to formulate a real-life scheduling situation. The BPNN estimates critical values of jobs based on the given subjective criteria. The scheduler is formulated in such a way that, at each time period, the most critical job is selected from the job queue and is transferred onto a single machine before the next periodic job arrives. If the selected job is one of the oldest jobs in the queue and its deadline is less than the arrival time of the current job, then the deadline of the job is updated in order to prevent the critical job from being eliminated. The proposed satisfiability criteria indicate the satisfaction of the scheduler with respect to the performance of the BPNN, the validity of the jobs, and the feasibility of the scheduler.
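
A minimal sketch of the dispatch loop, assuming three invented subjective criteria per job and synthetic training targets for the network; a small backpropagation regressor estimates critical values and the most critical job in the queue is dispatched each period.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical subjective criteria per job: (urgency, customer priority, slack to deadline).
# The target "critical value" used for training is invented purely for illustration.
X_train = rng.random((200, 3))
y_train = 0.5 * X_train[:, 0] + 0.4 * X_train[:, 1] - 0.3 * X_train[:, 2]

critic = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
critic.fit(X_train, y_train)          # 3-layer backpropagation network: input/hidden/output

queue = [("job-A", [0.9, 0.2, 0.4]), ("job-B", [0.3, 0.8, 0.1]), ("job-C", [0.5, 0.5, 0.9])]
while queue:
    # Each period: estimate critical values and send the most critical job to the machine.
    scores = critic.predict(np.array([c for _, c in queue]))
    idx = int(np.argmax(scores))
    name, _ = queue.pop(idx)
    print(f"dispatch {name} (estimated critical value {scores[idx]:.3f})")
```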

Keywords: Backpropagation algorithm, Critical value, Greedy alignment procedure, Neural network, Subjective criteria, Satisfiability.

7643 Real-Time 3D City Generation using Shape Grammars with LOD Variations

Authors: Pearl Goswell, Jun Jo

Abstract:

Creating 3D environments, including characters and cities, is a significantly time-consuming process due to the large amount of work involved in designing and modelling. There have been a number of attempts to automatically generate 3D objects employing shape grammars. However, it is still too early to apply the mechanism to real problems such as real-time computer games. The purpose of this research is to introduce a time-efficient and cost-effective method to automatically generate various 3D objects for real-time 3D games. This shape grammar-based real-time City Generation (RCG) model is a conceptual model for generating 3D environments in real time and can be applied to 3D games or animations. The RCG system can generate even a large city by applying fundamental principles of shape grammars to building elements in various levels of detail in real time.

Keywords: real-time city generation, shape grammars, 3D games, 3D modelling.

7642 A Mobile Agent-based Clustering Data Fusion Algorithm in WSN

Authors: Xiangbin Zhu, Wenjuan Zhang

Abstract:

In wireless sensor networks, mobile agent technology is used for data fusion. We design a node clustering algorithm according to the node residual energy and the results of partial integration. The routing of the mobile agent within each cluster is then optimized to further reduce the amount of data transferred. Experiments show that using mobile agents in the fusion process within a cluster can reduce the path loss to some extent.

Keywords: wireless sensor networks, data fusion, mobile agent

7641 Simulation Based VLSI Implementation of Fast Efficient Lossless Image Compression System Using Adjusted Binary Code & Golumb Rice Code

Authors: N. Muthukumaran, R. Ravi

Abstract:

A simulation-based VLSI implementation of the FELICS (Fast Efficient Lossless Image Compression System) algorithm is proposed to provide lossless image compression, implemented in a simulation-oriented VLSI (Very Large Scale Integration) flow. The goal is to analyze the performance of lossless image compression, reduce the image size without losing image quality, and then implement the FELICS algorithm in VLSI. The FELICS algorithm uses a simplified adjusted binary code for image compression; the compressed image is processed pixel by pixel and then implemented in the VLSI domain. This is used to achieve high processing speed while minimizing area and power. The simplified adjusted binary code reduces the number of arithmetic operations and achieves high processing speed. A color difference preprocessing step is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI-based FELICS algorithm provides an effective hardware architecture with a regular pipelined data flow with four stages. With two-level parallelism, consecutive pixels can be classified into even and odd samples, with an individual hardware engine dedicated to each. This method can be further enhanced by multilevel parallelism.
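
For illustration, the sketch below follows the usual FELICS description: an in-range pixel is coded with a flag bit plus an adjusted (truncated) binary code, and out-of-range residuals are Golomb-Rice coded; the centre-first reordering and the Rice parameter k=2 are simplifying assumptions, not the paper's hardware design.

```python
def truncated_binary(t, n):
    """Truncated binary code for t in [0, n-1] over an alphabet of n symbols."""
    k = n.bit_length() - 1                 # floor(log2 n)
    u = (1 << (k + 1)) - n                 # symbols that get the shorter k-bit codes
    return format(t, f"0{k}b") if t < u else format(t + u, f"0{k + 1}b")

def adjusted_binary(t, n):
    # Reorder indices so values near the centre of the range get the shorter codewords.
    order = sorted(range(n), key=lambda v: abs(v - (n - 1) / 2))
    return truncated_binary(order.index(t), n)

def rice_code(value, k):
    """Golomb-Rice code: unary-coded quotient, then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def encode_pixel(p, n1, n2, k=2):
    lo, hi = min(n1, n2), max(n1, n2)      # range defined by the two neighbouring pixels
    if lo <= p <= hi:                      # in-range: flag bit + adjusted binary code
        return "0" + adjusted_binary(p - lo, hi - lo + 1)
    if p < lo:                             # below range: flag bits + Rice-coded residual
        return "10" + rice_code(lo - p - 1, k)
    return "11" + rice_code(p - hi - 1, k)

print(encode_pixel(118, 115, 120))   # in-range pixel
print(encode_pixel(140, 115, 120))   # above-range pixel
```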

Keywords: Image compression, Pixel, Compression Ratio, Adjusted Binary code, Golumb Rice code, High Definition display, VLSI Implementation.

7640 Kinetic Rate Comparison of Methane Catalytic Combustion of Palladium Catalysts Impregnated onto γ-Alumina and Bio-Char

Authors: Noor S. Nasri, Eric C. A. Tatt, Usman D. Hamza, Jibril Mohammed, Husna M. Zain

Abstract:

Catalytic combustion of methane is imperative due to the stability of methane at low temperature. Methane (CH4), therefore, remains unconverted in vehicle exhausts, causing a greenhouse gas (GHG) emission problem. In this study, heterogeneous catalysts of palladium on bio-char (2 wt% Pd/Bc) and Al2O3 (2 wt% Pd/Al2O3) supports were prepared by incipient wetness impregnation and subsequently tested for catalytic combustion of CH4. The support material for porous heterogeneous catalytic combustion (HCC) was selected based on factors such as surface area, porosity, thermal stability, thermal conductivity, reactivity with reactants or products, chemical stability, catalytic activity, and catalyst life. A sustainable and renewable support material, biomass char derived from palm shell waste, was compared with a conventional porous support material. The kinetic rate of reaction was determined for the combustion of methane on palladium (Pd) based catalysts with Al2O3 and bio-char (Bc) supports. Material characterization was done using TGA, SEM, and BET surface area. The performance test was accomplished using a tubular quartz reactor with a gas mixture ratio of 3% methane and 97% air. The methane porous-HCC conversion was measured using an online gas analyzer connected to the reactor that performed the porous HCC. The BET surface area of the prepared 2 wt% Pd/Bc is smaller than that of the prepared 2 wt% Pd/Al2O3 due to its low porosity between particles. The order of catalyst activity based on the kinetic rate of reaction at low temperature was 2 wt% Pd/Bc > calcined 2 wt% Pd/Al2O3 > 2 wt% Pd/Al2O3 > calcined 2 wt% Pd/Bc. Hence, agro-waste material can successfully be utilized as an inexpensive catalyst support material for enhanced CH4 catalytic combustion.

Keywords: Catalytic-combustion, Environmental, Support-bio-char material, Sustainable, Renewable material.

7639 Erosion in Abrasive Jet Nozzles: A Comprehensive Study

Authors: D. V. Sreekanth, M. Sreenivasa Rao

Abstract:

Abrasive jet machining is one of the promising non-traditional machining processes, which uses mechanical energy (pressure and velocity) for machining various materials. The process parameters that influence the metal removal rate are kerf, surface finish, depth of cut, air pressure, distance between nozzle and work piece, nozzle diameter, abrasive type, abrasive shape, and mass flow rate of abrasive particles. The abrasive particles coming out at high pressure not only hit the work surface but also pass through the nozzle, resulting in nozzle erosion. This paper focuses mainly on the effect of different parameters on the erosion of the nozzle in abrasive jet machining. Three different types of nozzles, made of sapphire, tungsten carbide, and high carbon high chromium steel (HCHCS), are used for machining glass, and the erosion of these nozzles is calculated. The results are shown in tabular and graphical form.

Keywords: AJM, nozzle, sapphire, tungsten carbide, chrome steel.

7638 Induction of Expressive Rules using the Binary Coding Method

Authors: Seyed R Mousavi

Abstract:

In most rule-induction algorithms, the only operator used against nominal attributes is the equality operator =. In this paper, we first propose the use of the inequality operator, ≠, in addition to the equality operator, to increase the expressiveness of induced rules. Then, we present a new method, Binary Coding, which can be used along with an arbitrary rule-induction algorithm to make use of the inequality operator without any need to change the algorithm. Experimental results suggest that the Binary Coding method is promising enough for further investigation, especially in cases where a minimum number of rules is desirable.
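
One way to realize such a coding, sketched below under the assumption of simple indicator columns (the paper's exact binary coding may differ): each (attribute, value) pair becomes a 0/1 column, so a rule learner restricted to '=' can still express 'attribute ≠ value' as 'column = 0'.

```python
import pandas as pd

def binary_code(df, nominal_columns):
    """One indicator column per (attribute, value) pair.

    A rule learner restricted to '=' tests on the coded columns can then express
    both 'colour = red' (colour_red = 1) and 'colour != red' (colour_red = 0),
    without modifying the rule-induction algorithm itself.
    """
    coded = df.copy()
    for col in nominal_columns:
        for value in sorted(df[col].unique()):
            coded[f"{col}_{value}"] = (df[col] == value).astype(int)
        coded = coded.drop(columns=col)
    return coded

df = pd.DataFrame({"colour": ["red", "blue", "green", "red"],
                   "size":   ["small", "large", "small", "large"],
                   "label":  ["yes", "no", "no", "yes"]})
print(binary_code(df, ["colour", "size"]))
```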

Keywords: Data mining, Inequality operator, Number of rules, Rule-induction.

7637 Template-Based Object Detection through Partial Shape Matching and Boundary Verification

Authors: Feng Ge, Tiecheng Liu, Song Wang, Joachim Stahl

Abstract:

This paper presents a novel template-based method to detect objects of interest from real images by shape matching. To locate a target object that has a similar shape to a given template boundary, the proposed method integrates three components: contour grouping, partial shape matching, and boundary verification. In the first component, low-level image features, including edges and corners, are grouped into a set of perceptually salient closed contours using an extended ratio-contour algorithm. In the second component, we develop a partial shape matching algorithm to identify the fractions of detected contours that partly match given template boundaries. Specifically, we represent template boundaries and detected contours using landmarks, and apply a greedy algorithm to search the matched landmark subsequences. For each matched fraction between a template and a detected contour, we estimate an affine transform that transforms the whole template into a hypothetic boundary. In the third component, we provide an efficient algorithm based on oriented edge lists to determine the target boundary from the hypothetic boundaries by checking each of them against image edges. We evaluate the proposed method on recognizing and localizing 12 template leaves in a data set of real images with cluttered backgrounds, illumination variations, occlusions, and image noise. The experiments demonstrate the high performance of our proposed method.

Keywords: Object detection, shape matching, contour grouping.

7636 Classifier Based Text Mining for Neural Network

Authors: M. Govindarajan, R. M. Chandrasekaran

Abstract:

Text mining applies knowledge discovery techniques to unstructured text and is also termed knowledge discovery in text (KDT) or text data mining. In neural networks that address classification problems, the training set, testing set, and learning rate are key elements: the collection of input/output patterns used to train the network and to assess its performance, and the rate of weight adjustment. This paper describes a proposed backpropagation neural net classifier that performs cross validation for the original neural network in order to reduce training time while preserving classification accuracy. The feasibility and benefits of the proposed approach are demonstrated by means of five data sets: contact-lenses, cpu, weather-symbolic, weather, and labor-neg-data. It is shown that, compared to the existing neural network, training is more than 10 times faster when the dataset is larger than cpu or the network has many hidden units, while accuracy ('percent correct') was the same for all datasets except contact-lenses, which is the only one with missing attributes. For contact-lenses, the accuracy with the proposed neural network was on average around 0.3% less than with the original neural network. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
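
A minimal sketch of cross-validating a backpropagation classifier on text features, assuming scikit-learn, a tiny invented corpus, and TF-IDF features; it is meant only to show the evaluation setup, not the paper's specific network or datasets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; real experiments would use datasets such as those named above.
docs = ["cloudy sky with light rain", "sunny and warm afternoon",
        "storm warning heavy rain", "clear sky warm evening",
        "rain showers expected", "warm sunny morning",
        "heavy rain and wind", "sunny clear warm day"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]          # 1 = rainy, 0 = sunny

clf = make_pipeline(
    TfidfVectorizer(),                                           # unstructured text -> features
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0),  # backprop net
)
scores = cross_val_score(clf, docs, labels, cv=4)                # 4-fold cross validation
print("percent correct per fold:", [round(100 * s, 1) for s in scores])
```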

Keywords: Backpropagation, classification accuracy, text mining, time complexity.

7635 Comparison of Evolutionary Algorithms and their Hybrids Applied to MarioAI

Authors: Hidehiko Okada, Yuki Fujii

Abstract:

Researchers have been applying artificial/computational intelligence (AI/CI) methods to computer games. In this research field, further studies are required to compare AI/CI methods with respect to each game application. In this paper, we report our experimental results on the comparison of an evolution strategy, a genetic algorithm, and their hybrids, applied to evolving controller agents for MarioAI. The GA revealed its advantage in our experiment, whereas the expected ability of the ES to exploit (fine-tune) solutions was not clearly observed. The blend crossover operator and the mutation operator of the GA might contribute well to exploring the vast search space.
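
For reference, a small sketch of the blend (BLX-alpha) crossover and Gaussian mutation mentioned above, applied to hypothetical controller weight vectors; alpha, sigma, and the mutation rate are assumed values.

```python
import random

def blend_crossover(parent_a, parent_b, alpha=0.5):
    """BLX-alpha: sample each child gene uniformly from the extended parental interval."""
    child = []
    for x, y in zip(parent_a, parent_b):
        lo, hi = min(x, y), max(x, y)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child

def gaussian_mutation(genome, sigma=0.1, rate=0.1):
    return [g + random.gauss(0, sigma) if random.random() < rate else g for g in genome]

# Two parent weight vectors for a hypothetical neural game controller.
a = [0.2, -0.5, 0.8, 0.1]
b = [0.6, -0.1, 0.4, 0.3]
print(gaussian_mutation(blend_crossover(a, b)))
```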

Keywords: Evolutionary algorithm, autonomous game controller agent, neuroevolutions, MarioAI

7634 Higher Order Statistics for Identification of Minimum Phase Channels

Authors: Mohammed Zidane, Said Safi, Mohamed Sabri, Ahmed Boumezzough

Abstract:

This paper describes a blind algorithm, compared with two other algorithms proposed in the literature, for estimating minimum-phase channel parameters. In order to identify the impulse response of these channels blindly, we have used Higher Order Statistics (HOS) to build our algorithm. The simulation results in a noisy environment demonstrate that the proposed method can estimate the phase and magnitude of these channels blindly and with high accuracy, without any information about the input except that the input excitation is independently and identically distributed (i.i.d.) and non-Gaussian.

Keywords: System Identification, Higher Order Statistics, Communication Channels.

7633 Face Detection in Color Images using Color Features of Skin

Authors: Fattah Alizadeh, Saeed Nalousi, Chiman Savari

Abstract:

Because of increasing demands for security in today's society, and also due to the growing attention paid to machine vision, biometric research, pattern recognition, and data retrieval in color images, face detection has gained more applications. In this article we present a scientific approach for modeling human skin color, and we also offer an algorithm that tries to detect faces within color images by combining skin features and thresholds determined from the model. The proposed model is based on statistical data in different color spaces. The offered algorithm, using specified color thresholds, first divides image pixels into two groups, a skin pixel group and a non-skin pixel group, and then decides which area belongs to a face based on some geometric features of the face. Two main results that we obtained from this research are as follows: first, the proposed model can be applied easily on different databases and color spaces to establish proper thresholds; second, our algorithm can adapt itself to runtime conditions, and its results demonstrate desirable progress in comparison with similar work.
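
A minimal sketch of the skin-thresholding step, assuming the YCbCr color space and commonly quoted Cb/Cr ranges rather than the thresholds derived in the paper; connected skin regions would then be filtered by geometric face features.

```python
import numpy as np

def skin_mask(rgb):
    """Binary skin mask from per-pixel Cb/Cr thresholds.

    The threshold ranges below are commonly quoted values for the YCbCr space,
    not the statistically derived thresholds of this particular paper.
    """
    r, g, b = rgb[..., 0].astype(float), rgb[..., 1].astype(float), rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# Candidate face regions are connected skin areas; geometric checks (aspect ratio,
# relative size) would then decide which regions are kept as faces.
image = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)  # stand-in image
mask = skin_mask(image)
print("skin pixels:", int(mask.sum()), "of", mask.size)
```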

Keywords: face detection, skin color modeling, color, color images, face recognition.

7632 Isolation and Classification of Red Blood Cells in Anemic Microscopic Images

Authors: Jameela Ali Alkrimi, Loay E. George, Azizah Suliman, Abdul Rahim Ahmad, Karim Al-Jashamy

Abstract:

Red blood cells (RBCs) are among the most commonly and intensively studied types of blood cells in cell biology. Anemia, a lack of RBCs, is characterized by the hemoglobin level compared to the normal hemoglobin level. In this study, an image processing based methodology was developed to localize and extract RBCs from microscopic images. Also, a machine learning approach is adopted to classify the localized anemic RBC images. Several textural and geometrical features are calculated for each extracted RBC. The training set of features was analyzed using principal component analysis (PCA). With the proposed method, RBCs were isolated in 4.3 seconds from an image containing 18 to 27 cells. The reasons for using PCA are its low computational complexity and its suitability for finding the most discriminating features, which can lead to accurate classification decisions. Our classifier algorithms yielded accuracy rates of 100%, 99.99%, and 96.50% for the K-nearest neighbor (K-NN) algorithm, support vector machine (SVM), and RBFNN neural network, respectively. Classification was evaluated in terms of sensitivity, specificity, and kappa statistical parameters. In conclusion, the classification results were obtained within a short time period, and the results improved when PCA was used.
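
A small sketch of the classification stage, assuming scikit-learn and a synthetic feature matrix in place of the extracted RBC features: the features are standardized, projected by PCA, and classified with K-NN under cross validation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for the textural/geometrical RBC features; real inputs would come from the
# segmented cell images rather than from this synthetic matrix.
rng = np.random.default_rng(0)
features = rng.normal(size=(120, 12))
labels = (features[:, 0] + 0.5 * features[:, 3] > 0).astype(int)   # e.g. anemic vs normal

pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),            # keep the most discriminating directions
    KNeighborsClassifier(n_neighbors=3),
)
print("K-NN accuracy with PCA features:",
      cross_val_score(pipeline, features, labels, cv=5).mean())
```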

Keywords: Red blood cells, pre-processing image algorithms, classification algorithms, principal component analysis PCA, confusion matrix, kappa statistical parameters, ROC.

7631 Power System Security Constrained Economic Dispatch Using Real Coded Quantum Inspired Evolution Algorithm

Authors: A. K. Al-Othman, F. S. Al-Fares, K. M. EL-Nagger

Abstract:

This paper presents a new optimization technique based on quantum computing principles to solve the security constrained power system economic dispatch (SCED) problem. The proposed technique is a population-based algorithm which uses some quantum computing elements in coding and evolving groups of potential solutions to reach the optimum, following a partially directed random approach. The SCED problem is formulated as a constrained optimization problem in a way that ensures secure and economic system operation. A Real Coded Quantum-Inspired Evolution Algorithm (RQIEA) is then applied to solve the constrained optimization formulation. Simulation results of the proposed approach are compared with those reported in the literature. The outcome is very encouraging and proves that RQIEA is very applicable for solving the SCED problem.

Keywords: State Estimation, Fuzzy Linear Regression, Fuzzy Linear State Estimator (FLSE), Measurements Uncertainty.

7630 Surrogate based Evolutionary Algorithm for Design Optimization

Authors: Maumita Bhattacharya

Abstract:

Optimization is often a critical issue for most system design problems. Evolutionary Algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, finding optimal solution to complex high dimensional, multimodal problems often require highly computationally expensive function evaluations and hence are practically prohibitive. The Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model presented in our earlier work [14] reduced computation time by controlled use of meta-models to partially replace the actual function evaluation by approximate function evaluation. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. Situations like model formation involving variable input dimensions and noisy data certainly can not be covered by this assumption. In this paper we present an enhanced version of DAFHEA that incorporates a multiple-model based learning approach for the SVM approximator. DAFHEA-II (the enhanced version of the DAFHEA framework) also overcomes the high computational expense involved with additional clustering requirements of the original DAFHEA framework. The proposed framework has been tested on several benchmark functions and the empirical results illustrate the advantages of the proposed technique.
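
A simplified single-model sketch of the surrogate idea, assuming an SVR meta-model and a toy multimodal test function: the surrogate screens offspring cheaply and only the most promising candidates receive the expensive true evaluation; DAFHEA-II's multiple-model learning and control strategy are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR

def expensive_fitness(x):
    # Stand-in for a costly simulation; Rastrigin-like multimodal test function.
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

rng = np.random.default_rng(0)
dim, pop_size = 5, 30
pop = rng.uniform(-5, 5, (pop_size, dim))
archive_x = [x.copy() for x in pop]
archive_y = [expensive_fitness(x) for x in pop]          # initial true evaluations

for gen in range(30):
    surrogate = SVR(kernel="rbf").fit(np.array(archive_x), np.array(archive_y))
    children = pop + rng.normal(0, 0.3, pop.shape)       # simple mutation-only EA step
    predicted = surrogate.predict(children)              # cheap approximate evaluation
    for i in np.argsort(predicted)[:5]:                  # only the best-looking children...
        archive_x.append(children[i].copy())
        archive_y.append(expensive_fitness(children[i])) # ...get the expensive evaluation
    # Survivor selection on surrogate scores (a simplified control strategy).
    merged = np.vstack([pop, children])
    pop = merged[np.argsort(surrogate.predict(merged))[:pop_size]]

best = min(zip(archive_y, archive_x), key=lambda t: t[0])
print("best truly-evaluated fitness:", round(best[0], 3))
```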

Keywords: Evolutionary algorithm, Fitness function, Optimization, Meta-model, Stochastic method.

7629 A Study of Structural Damage Detection for Spacecraft In-Orbit Based on Acoustic Sensor Array

Authors: Lei Qi, Rongxin Yan, Lichen Sun

Abstract:

With the increase in human space activities, the number of pieces of space debris has grown dramatically, and the possibility that spacecraft in orbit are impacted by space debris is growing. A method to detect and assess spacecraft damage in real time, determine gas leaks accurately, and effectively guarantee the life safety of the astronauts is therefore of vital significance. In this paper, an acoustic sensor array is used to detect the acoustic signal emitted from damage to a spacecraft in orbit. Then, we apply time difference of arrival (TDOA) and beamforming algorithms to locate the damage and leakage. Finally, the extent of the spacecraft damage is evaluated according to a nonlinear ultrasonic method. The results show that this method can detect debris impact and structural damage, locate the damage position, and identify the damage degree effectively. This method can meet the needs of structural damage detection for spacecraft in orbit.
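
A minimal sketch of TDOA-based localization on a 2-D panel, assuming four sensors at known positions, a constant wave speed, and a brute-force grid search in place of beamforming; the geometry and wave speed are illustrative values.

```python
import numpy as np

SPEED = 5000.0                    # assumed wave speed in the panel material, m/s
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # sensor positions (m)

def tdoa_measurements(source):
    """Arrival-time differences relative to sensor 0 for a hypothetical impact point."""
    t = np.linalg.norm(sensors - source, axis=1) / SPEED
    return t[1:] - t[0]

def locate(measured, resolution=200):
    """Grid search for the point whose predicted TDOAs best match the measurements."""
    xs = np.linspace(0, 1, resolution)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            err = np.sum((tdoa_measurements(np.array([x, y])) - measured) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

true_source = np.array([0.31, 0.74])
print("estimated impact location:", locate(tdoa_measurements(true_source)))
```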

Keywords: Acoustic sensor array, spacecraft, damage assessment, leakage location.

7628 Evaluation of the Accuracy of Time of Arrival Source Location Algorithm of Acoustic Emission in Concrete-Mortar Structure

Authors: Hisham A. Elfergani, Ayad A. Abdalla, Ahmed R. Ballil

Abstract:

Acoustic Emission (AE) is one of the most effective non-destructive tests that can be used to detect a defect process as it is occurring. AE techniques can be used to monitor a wide range of structures and materials, such as metals, non-metals, and combinations of these, when load is applied. The current work investigates the effectiveness and accuracy of the TOA method in AE tests involving reinforced composite concrete-mortar structures. A series of experimental tests was performed using the Hsu-Nielsen (H-N) source to study 2-D location accuracy with this method on concrete-mortar (400×400 mm) specimens. Four AE sensors (R3I, resonant frequency 30 kHz) were mounted on the mortar surface, and six H-N sources were performed at each of the preselected locations on the upper surface of the mortar. Results show that the TOA method can be used effectively to locate signals on composite concrete/mortar specimens and has high accuracy.

Keywords: Acoustic emission, time of arrival, composite materials, reinforced concrete.

7627 Self-evolving Artificial Immune System via Developing T and B Cell for Permutation Flow-shop Scheduling Problems

Authors: Pei-Chann Chang, Wei-Hsiu Huang, Ching-Jung Ting, Hwei-Wen Luo, Yu-Peng Yu

Abstract:

The Artificial Immune System has been applied as a heuristic algorithm for decades. Nevertheless, many of these applications took advantage of the algorithm but seldom proposed approaches for enhancing its efficiency. In this paper, a Self-evolving Artificial Immune System is proposed by developing the T and B cells of the immune system and building a self-evolving mechanism for the complexities of different problems. This research focuses on enhancing the efficiency of clonal selection, which is responsible for producing high-affinity antibodies to resist invading antigens. T and B cells are the main mechanisms by which clonal selection produces different combinations of antibodies. Therefore, the development of the T and B cells influences the efficiency of clonal selection in searching for better solutions. Furthermore, for better cooperation of the two cells, a co-evolutionary strategy is applied to coordinate them for more effective production of antibodies. This work finally adopts flow-shop scheduling instances from the OR-Library to validate the proposed algorithm.
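
A minimal sketch of clonal selection for the permutation flow shop, assuming a tiny made-up instance rather than an OR-Library one: antibodies are job permutations, affinity is low makespan, and better antibodies receive clones with lighter hypermutation; the T/B-cell co-evolution of the paper is not modeled.

```python
import random

# Processing times: jobs x machines (a tiny made-up instance, not an OR-Library one).
P = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8], [3, 5, 6]]

def makespan(perm):
    """Completion time of the last job on the last machine for a permutation flow shop."""
    m = len(P[0])
    completion = [0.0] * m
    for job in perm:
        for k in range(m):
            completion[k] = max(completion[k], completion[k - 1] if k else 0) + P[job][k]
    return completion[-1]

def hypermutate(perm, intensity):
    child = perm[:]
    for _ in range(intensity):                      # worse antibodies mutate more heavily
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

population = [random.sample(range(len(P)), len(P)) for _ in range(20)]
for _ in range(100):
    population.sort(key=makespan)                   # affinity = low makespan
    clones = [hypermutate(ab, 1 + rank // 3)        # clonal selection with rank-based mutation
              for rank, ab in enumerate(population[:10]) for _ in range(2)]
    population = sorted(population[:10] + clones, key=makespan)[:20]

best = min(population, key=makespan)
print("best schedule:", best, "makespan:", makespan(best))
```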

Keywords: Artificial Immune System, Clonal Selection, Flow-shop Scheduling Problems, Co-evolutional strategy
