Search results for: clopen sets.

153 ISC – Intelligent Subspace Clustering, A Density Based Clustering Approach for High Dimensional Dataset

Authors: Sunita Jahirabadkar, Parag Kulkarni

Abstract:

Many real-world data sets consist of a very high dimensional feature space. Most clustering techniques use the distance or similarity between objects as a measure to build clusters. But in high dimensional spaces, distances between points become relatively uniform. In such cases, density based approaches may give better results. Subspace clustering algorithms automatically identify lower dimensional subspaces of the higher dimensional feature space in which clusters exist. In this paper, we propose a new clustering algorithm, ISC – Intelligent Subspace Clustering, which tries to overcome three major limitations of the existing state-of-the-art techniques. First, ISC determines input parameters such as the ε-distance at various levels of subspace clustering, which helps in finding meaningful clusters. Second, since a uniform-parameters approach is not suitable for different kinds of databases, ISC implements dynamic and adaptive determination of meaningful clustering parameters based on a hierarchical filtering approach. The third and most important feature of ISC is its capacity for incremental learning and dynamic inclusion and exclusion of subspaces, which leads to better cluster formation.
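
The abstract does not reproduce the algorithm itself; the following is a minimal sketch of the core idea of density-based clustering in candidate subspaces with an adaptively estimated ε-distance (here via a k-distance heuristic), assuming scikit-learn. The subspace enumeration, `k`, and the median-based ε rule are illustrative choices, not the authors' published procedure.

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def adaptive_eps(points, k=4):
    """Estimate a per-subspace eps as the median k-th neighbor distance."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    dist, _ = nn.kneighbors(points)
    return np.median(dist[:, -1])  # distance to the k-th neighbor

def subspace_clusters(X, dim=2, min_samples=5):
    """Run density-based clustering in every `dim`-dimensional subspace."""
    results = {}
    for subspace in combinations(range(X.shape[1]), dim):
        P = X[:, subspace]
        eps = adaptive_eps(P)              # adaptive, not one global constant
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(P)
        if labels.max() >= 0:              # keep subspaces that contain clusters
            results[subspace] = labels
    return results

X = np.random.rand(300, 6)
print({s: l.max() + 1 for s, l in subspace_clusters(X).items()})
```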

Keywords: Density based clustering, high dimensional data, subspace clustering, dynamic parameter setting.

152 Simultaneous Optimization of Machining Parameters and Tool Geometry Specifications in Turning Operation of AISI1045 Steel

Authors: Farhad Kolahan, Mohsen Manoochehri, Abbas Hosseini

Abstract:

Machining is an important manufacturing process used to produce a wide variety of metallic parts. Among various machining processes, turning is one of the most important ones, employed to shape cylindrical parts. In turning, the quality of the finished product is measured in terms of surface roughness. In turn, surface quality is determined by machining parameters and tool geometry specifications. The main objective of this study is to simultaneously model and optimize machining parameters and tool geometry in order to improve the surface roughness for AISI1045 steel. Several levels of machining parameters and tool geometry specifications are considered as input parameters, and surface roughness is selected as the process output measure of performance. A Taguchi approach is employed to gather experimental data. Then, based on the signal-to-noise (S/N) ratio, the best sets of cutting parameters and tool geometry specifications are determined. Using these parameter values, the surface roughness of AISI1045 steel parts may be minimized. Experimental results are provided to illustrate the effectiveness of the proposed approach.
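
As a worked illustration of the S/N analysis step, the sketch below computes the smaller-the-better signal-to-noise ratio used in Taguchi analysis of surface roughness and picks the best level of one factor; the factor names and measured values are hypothetical, not data from the paper.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Hypothetical surface-roughness replicates (um) for three levels of one factor
roughness = {"feed_low": [1.8, 1.9], "feed_mid": [2.4, 2.6], "feed_high": [3.1, 3.3]}
sn = {level: sn_smaller_is_better(vals) for level, vals in roughness.items()}
best = max(sn, key=sn.get)  # larger S/N is better regardless of characteristic
print(sn, "-> best level:", best)
```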

Keywords: Taguchi method, turning parameters, tool geometry specifications, S/N ratio, statistical analysis.

151 Investigation of Corona Wind Effect on Heat and Mass Transfer Enhancement

Authors: R. Karami, B. Kamkari, K. Kashefi

Abstract:

Applying corona wind as a novel technique can produce a great level of heat and mass transfer augmentation using a very small amount of energy. Enhancement of the forced-flow evaporation rate by applying an electric field (corona wind) has been experimentally evaluated in this study. Corona wind, produced by a fine wire electrode charged with a positive high DC voltage, impinges on the water surface and enhances evaporation by disturbing the saturated air layer over the surface. The study focused on the effect of corona wind velocity, electrode spacing, and air flow velocity on the level of evaporation enhancement. Two sets of experiments, i.e. with and without electric field, were conducted; data obtained from the first experiment were used as the reference for evaluating the evaporation enhancement in the presence of the electric field. Applied voltages ranged from the corona threshold voltage to the sparkover voltage in 1 kV increments. The results showed that corona wind has a great enhancement effect on the water evaporation rate, but its effectiveness gradually diminishes with increasing air flow velocity. Maximum enhancements were 7.3 and 3.6 for air velocities of 0.125 and 1.75 m/s, respectively.

Keywords: Electrohydrodynamics (EHD), corona wind, high electric field, evaporation enhancement.

150 Using the Semantic Web in Ubiquitous and Mobile Computing: The Morfeo Experience

Authors: José M. Cantera, Miguel Jiménez, Genoveva López, Javier Soriano

Abstract:

With the advent of emerging personal computing paradigms such as ubiquitous and mobile computing, Web contents are becoming accessible from a wide range of mobile devices. Since these devices do not have the same rendering capabilities, Web contents need to be adapted for transparent access from a variety of client agents. Such content adaptation results in better rendering and faster delivery to the client device. Nevertheless, Web content adaptation sets new challenges for semantic markup. This paper presents an advanced components platform, called MorfeoSMC, enabling the development of mobility applications and services according to a channel model based on Service-Oriented Architecture (SOA) principles. It then goes on to describe the potential for integration with the Semantic Web through a novel framework of external semantic annotation of mobile Web contents. The role of semantic annotation in this framework is to describe the contents of individual documents themselves, assuring the preservation of the semantics during the process of adapting content rendering, and to exploit these semantic annotations in a novel user profile-aware content adaptation process. Semantic Web content adaptation adds value to Web contents and facilitates their repurposing (enhanced browsing, Web Service location and access, etc.).

Keywords: Semantic web, ubiquitous and mobile computing, web content transcoding, semantic markup, mobile computing middleware and services.

149 Students, Knowledge and Employability

Authors: James Moir

Abstract:

Citizens are increasingly provided with choice and customization in public services, and this has now also become a key feature of higher education in terms of policy roll-outs on personal development planning (PDP) and, more generally, as part of the employability agenda. The goal here is to transform people, in this case graduates, into active, responsible citizen-workers. A key part of this rhetoric and logic is the inculcation of graduate attributes within students. However, there has also been a concern with the issue of students' lack of engagement and perseverance with their studies. This paper sets out to explore some of the conceptions that link graduate attributes with citizenship, as well as the notion of how identity is forged through the higher education process. Examples are drawn from a quality enhancement project operating within the context of the Scottish higher education system. This is further framed within the wider context of competing and conflicting demands on higher education, exacerbated by the current worldwide economic climate. There are now pressures on students to develop their employability skills as well as their capacity to engage with global issues, such as behavioural change in the light of environmental concerns. It is argued that these pressures, in effect, lead to a form of personalization concerned with how graduates develop their sense of identity as something that is engineered and re-engineered to meet these demands.

Keywords: students, higher education, employability, knowledge, personal development

148 Improved Text-Independent Speaker Identification using Fused MFCC and IMFCC Feature Sets based on Gaussian Filter

Authors: Sandipan Chakroborty, Goutam Saha

Abstract:

A state-of-the-art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for speech-related applications. In a recent contribution by the authors, it was shown that the Inverted Mel-Frequency Cepstral Coefficients (IMFCC) are a useful feature set for SI, containing complementary information present in the high-frequency region. This paper introduces a Gaussian-shaped filter (GF) in place of the typical triangular-shaped bins when calculating MFCC and IMFCC. The objective is to introduce a higher amount of correlation between subband outputs. The performances of both MFCC and IMFCC improve with GF over the conventional triangular filter (TF) based implementation, individually as well as in combination. With GMM as the speaker modeling paradigm, the performances of the proposed GF-based MFCC and IMFCC, in individual and fused modes, have been verified on two standard databases, YOHO (microphone speech) and POLYCOST (telephone speech), each of which has more than 130 speakers.
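
A minimal sketch of the filterbank modification described here, assuming NumPy: triangular mel bins are replaced by Gaussian-shaped filters centered at the same mel-spaced frequencies. The sample rate, filter count, and the choice of sigma (tied to neighbor spacing) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hz_to_mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m):  return 700.0 * (10.0**(m / 2595.0) - 1.0)

def gaussian_mel_filterbank(n_filters=20, n_fft=512, sr=8000):
    """Gaussian filters on a mel-spaced grid (in place of triangular bins)."""
    freqs = np.linspace(0, sr / 2, n_fft // 2 + 1)
    centers = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2))[1:-1]
    sigma = np.diff(centers).mean() / 2.0   # wider sigma -> more subband correlation
    fb = np.exp(-0.5 * ((freqs[None, :] - centers[:, None]) / sigma) ** 2)
    return fb / fb.sum(axis=1, keepdims=True)

fb = gaussian_mel_filterbank()
# MFCCs then follow as DCT(log(fb @ power_spectrum)); inverting the frequency
# axis before building the filterbank gives the IMFCC counterpart.
print(fb.shape)  # (20, 257)
```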

Keywords: Gaussian Filter, Triangular Filter, Subbands, Correlation, MFCC, IMFCC, GMM.

147 Minimum-Fuel Optimal Trajectory for Reusable First-Stage Rocket Landing Using Particle Swarm Optimization

Authors: Kevin Spencer G. Anglim, Zhenyu Zhang, Qingbin Gao

Abstract:

Reusable launch vehicles (RLVs) present a more environmentally friendly approach to accessing space compared to traditional launch vehicles that are discarded after each flight. This paper studies the recyclable nature of RLVs by presenting a solution method for determining minimum-fuel optimal trajectories using principles from optimal control theory and particle swarm optimization (PSO). The problem is formulated as a minimum-landing-error powered descent problem, in which it is desired to move the RLV from a fixed set of initial conditions to three different sets of terminal conditions. However, unlike other powered descent studies, this paper considers the highly nonlinear effects caused by atmospheric drag, which are often ignored in studies on the Moon or on Mars. Rather than optimizing the controls directly, the throttle control is assumed to be bang-off-bang with a predetermined thrust direction for each phase of flight. The PSO method is verified in a one-dimensional comparison study, and it is then applied to the two-dimensional cases, the results of which are illustrated.
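
To make the optimization step concrete, here is a minimal particle swarm sketch in the spirit described: the decision variables are the two switch times of a bang-off-bang throttle profile, scored by a placeholder cost. The dynamics are deliberately stubbed out (`landing_cost` is hypothetical), since the paper's drag model and boundary conditions are not reproduced in the abstract.

```python
import numpy as np

def landing_cost(switch_times):
    """Placeholder: would integrate the descent dynamics for a bang-off-bang
    throttle with switches at t1 < t2 and return fuel use + landing error."""
    t1, t2 = switch_times
    return (t1 - 8.0)**2 + (t2 - 20.0)**2 + 5.0 * max(0.0, t1 - t2)

def pso(cost, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    x = np.random.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, len(lo))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                 # keep particles in bounds
        val = np.array([cost(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

print(pso(landing_cost, bounds=[(0, 30), (0, 40)]))
```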

Keywords: Minimum-fuel optimal trajectory, particle swarm optimization, reusable rocket, SpaceX.

146 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms

Authors: S. Nandagopalan, N. Pradeep

Abstract:

The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as the active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks like clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.

Keywords: Active Contour, Bayesian, Echocardiographic image, Feature vector.

145 Capacity of Overloaded DS-CDMA System on Rayleigh Fading Channel with Timing Error

Authors: Preetam Kumar

Abstract:

The number of users supported in a DS-CDMA cellular system is typically less than the spreading factor (N), and the system is said to be underloaded. Overloading is a technique to accommodate more users than the spreading factor N. In the O/O overloading scheme, the first set of codes is assigned to the N synchronous users and the second set is assigned to the additional synchronous users. An iterative multistage soft decision interference cancellation (SDIC) receiver is used to remove the high level of interference between the two sets. Performance is evaluated in terms of the maximum number of acceptable users such that the system performance is only slightly degraded compared to the single-user performance at a specified BER. In this paper, the capacity of the CDMA-based O/O overloading scheme is evaluated with the SDIC receiver. It is observed that the O/O scheme using orthogonal Gold codes provides 25% channel overloading (N=64) for a synchronous DS-CDMA system on an AWGN channel in the uplink at a BER of 1e-5. For a Rayleigh-faded channel, the critical capacity is 40% at a BER of 5e-5, assuming synchronous users. But in practical systems, perfect chip timing is very difficult to maintain in the uplink. We show that the overloading performance reduces to 11% for a timing synchronization error of 0.02 Tc at a BER of 1e-5.

Keywords: DS-CDMA, Interference Cancellation, Multiuser Detection, Orthogonal codes, Overloading.

144 Flood-Induced River Disruption: Geomorphic Imprints and Topographic Effects in Kelantan River Catchment from Kemubu to Kuala Besar, Kelantan, Malaysia

Authors: Mohamad Muqtada Ali Khan, Nor Ashikin Shaari, Donny Adriansyah Bin Nazaruddin, Hafzan Eva Bt Mansoor

Abstract:

Floods play a key role in the landform evolution of an area. This process is likely to alter the topography of the earth's surface. The present study area, Kota Bharu, is very prone to floods and extends from upstream of the Kelantan River near Kemubu to the downstream area near Kuala Besar. These flood events, which occur every year in the study area, exhibit a strong bearing on the river's morphological set-up. In the present study, three satellite imageries of different time periods have been used to manifest the post-flood landform changes. Pre-processing of the images, such as subsetting, geometric corrections, and atmospheric corrections, was carried out using ENVI 4.5, followed by the analysis processes. Twenty sets of cross-sections were plotted for all three images using ERDAS 9.2 and ArcGIS 10. The results show a significant change in the length of the cross-sections, which suggests that geomorphological processes play a key role in carving and shaping the river banks during floods.

Keywords: Flood-induced, geomorphic imprints, Kelantan River, Malaysia.

143 A Medical Images Based Retrieval System using Soft Computing Techniques

Authors: Pardeep Singh, Sanjay Sharma

Abstract:

Content-Based Image Retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large, varied databases with documents of differing sorts and with varying characteristics. Many questions with respect to speed, semantic descriptors, or objective image interpretation remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. In several articles, content-based access to medical images has been proposed to support clinical decision making and ease the management of clinical data, and scenarios for the integration of content-based access methods into Picture Archiving and Communication Systems (PACS) have been created. This paper gives an overview of soft computing techniques, and new research directions are defined that can prove to be useful. Still, there are very few systems that seem to be used in clinical practice. It should also be stated that the goal is not, in general, to replace the text-based retrieval methods that exist at the moment.

Keywords: CBIR, GA, Rough sets, CBMIR

142 Thermodynamic Optimization of Turboshaft Engine using Multi-Objective Genetic Algorithm

Authors: S. Farahat, E. Khorasani Nejad, S. M. Hoseini Sarvari

Abstract:

In this paper, multi-objective genetic algorithms are employed for Pareto-approach optimization of ideal turboshaft engines. In multi-objective optimization, a number of conflicting objective functions are to be optimized simultaneously. The important objective functions considered for optimization are the specific thrust (F/ṁ0), the specific fuel consumption (SP), the specific output shaft power (Ẇshaft/ṁ0), and the overall efficiency (ηO). These objectives usually conflict with each other. The design variables consist of thermodynamic parameters (compressor pressure ratio, turbine temperature ratio, and Mach number). At the first stage, single-objective optimization is investigated, and the NSGA-II method is then used for multi-objective optimization. Optimization procedures are performed for two and four objective functions, and the results are compared for the ideal turboshaft engine. In order to investigate the optimal thermodynamic behavior of two objectives, different sets, each including two of the output objectives, are considered individually. For each set, the Pareto front is depicted. The sets of selected decision variables based on this Pareto front give the best possible combinations of the corresponding objective functions. No point on the Pareto front is superior to another point on the front, but all of them are superior to any other point. In the case of four-objective optimization, the results are given in tables.
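
The Pareto-front notion used here can be made concrete with a small sketch: given candidate designs scored on two objectives to be minimized, keep exactly those not dominated by any other point. This is a generic non-dominated filter, not the authors' NSGA-II implementation, and the example values are hypothetical.

```python
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated rows of F (all objectives minimized)."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # some j dominates i if it is <= everywhere and strictly < somewhere
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

# Hypothetical (specific fuel consumption, -specific thrust) pairs
F = np.array([[0.30, -900], [0.28, -850], [0.35, -950], [0.33, -870]])
print(pareto_front(F))  # rows lying on the trade-off front
```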

Keywords: Multi-objective, Genetic algorithm, Turboshaft Engine.

141 Error Correction of Radial Displacement in Grinding Machine Tool Spindle by Optimizing Shape and Bearing Tuning

Authors: Khairul Jauhari, Achmad Widodo, Ismoyo Haryanto

Abstract:

In this article, the capability of correcting the radial displacement error of a high-precision grinding spindle caused by unbalance force is investigated. The spindle shaft is considered as a flexible rotor mounted on two sets of angular contact ball bearings. The finite element method (FEM) has been adopted to obtain the equation of motion of the spindle. Firstly, the natural frequencies, critical frequencies, and amplitude of the unbalance response caused by residual unbalance are determined in order to investigate the spindle's behavior. Furthermore, an optimization design algorithm is employed to minimize the radial displacement of the spindle; it considers the dimensions of the spindle shaft, the dynamic characteristics of the bearings, and the critical frequencies and amplitude of the unbalance response, and computes optimum spindle diameters and bearing stiffness and damping. Numerical simulation results show that by optimizing the spindle diameters and the stiffness and damping of the bearings, the radial displacement of the spindle can be reduced: a radial displacement error of about 4 μm can be compensated to within 2 μm. This can certainly improve the accuracy of the machined product.
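
The paper models the spindle with FEM; as a much simpler stand-in that still shows the role of bearing stiffness and damping in the unbalance response, the sketch below evaluates the steady-state amplitude of a single-mass (Jeffcott) rotor. All numerical values are hypothetical.

```python
import numpy as np

def unbalance_amplitude(omega, m, k, c, m_u_e):
    """Steady-state radial amplitude of a Jeffcott rotor under unbalance:
    r = m_u*e*omega^2 / sqrt((k - m*omega^2)^2 + (c*omega)^2)."""
    return m_u_e * omega**2 / np.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)

m, k, c = 12.0, 4.0e8, 2.0e3          # rotor mass [kg], stiffness [N/m], damping [N*s/m]
m_u_e = 1.0e-6                        # residual unbalance m_u*e [kg*m]
omega = np.linspace(100, 12000, 500)  # spin speed [rad/s]
r = unbalance_amplitude(omega, m, k, c, m_u_e)
print(f"peak radial displacement: {r.max()*1e6:.2f} um "
      f"near {omega[r.argmax()]:.0f} rad/s (critical ~ {np.sqrt(k/m):.0f} rad/s)")
```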

Keywords: Error correction, High precision grinding, Optimization, Radial displacement, Spindle.

140 Effect of Size of the Step in the Response Surface Methodology using Nonlinear Test Functions

Authors: Jesús Everardo Olguín Tiznado, Rafael García Martínez, Claudia Camargo Wilson, Juan Andrés López Barreras, Everardo Inzunza González, Javier Ordorica Villalvazo

Abstract:

The response surface methodology (RSM) is a collection of mathematical and statistical techniques useful in the modeling and analysis of problems in which the dependent variable is influenced by several independent variables, with the aim of determining the conditions under which these variables should operate to optimize a production process. RSM estimates a first-order regression model and sets the search direction using the method of maximum/minimum slope up/down (MMS U/D). However, this method selects the step size intuitively, which can affect the efficiency of the RSM. This paper assesses how the step size affects the efficiency of the methodology. The numerical examples are carried out through Monte Carlo experiments, evaluating three response variables: the efficiency of the gain function, the distance to the optimum, and the number of iterations. The simulation experiments showed that the gain-function efficiency and the distance to the optimum were not affected by the step size, while the number of iterations was affected by both the step size and the type of test function used.
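
A minimal sketch of the mechanism under study, assuming NumPy: fit a first-order model to local experiments and take a step of chosen size along the negative gradient. The test function, design spacing, and step size are illustrative, not the paper's experimental settings.

```python
import numpy as np

def first_order_step(f, x0, step_size, delta=0.1, noise=0.0, rng=None):
    """Fit y = b0 + b.x around x0 from a 2^k factorial and step downhill."""
    rng = rng or np.random.default_rng(0)
    k = len(x0)
    # 2^k factorial design centered at x0
    design = np.array(np.meshgrid(*[[-1, 1]] * k)).reshape(k, -1).T * delta + x0
    y = np.array([f(x) + noise * rng.standard_normal() for x in design])
    X = np.column_stack([np.ones(len(design)), design - x0])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1:]   # first-order coefficients
    return x0 - step_size * b / np.linalg.norm(b)  # minimum-slope-down step

f = lambda x: (x[0] - 3) ** 2 + 2 * (x[1] + 1) ** 2   # nonlinear test function
x = np.zeros(2)
for _ in range(25):
    x = first_order_step(f, x, step_size=0.3)
print(x)  # approaches the optimum (3, -1); iteration count depends on step size
```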

Keywords: RSM, dependent variable, independent variables, efficiency, simulation

139 Optimal Model Order Selection for Transient Error Autoregressive Moving Average (TERA) MRI Reconstruction Method

Authors: Abiodun M. Aibinu, Athaur Rahman Najeeb, Momoh J. E. Salami, Amir A. Shafie

Abstract:

An alternative approach to the use of the Discrete Fourier Transform (DFT) for Magnetic Resonance Imaging (MRI) reconstruction is the use of parametric modeling techniques. This method is suitable for problems in which the image can be modeled by explicit known source functions with a few adjustable parameters. Despite the success reported in the use of modeling as an alternative MRI reconstruction technique, two important problems constitute challenges to the applicability of this method: estimation of the model order and determination of the model coefficients. In this paper, five suggested methods of evaluating the model order are assessed: the Final Prediction Error (FPE), the Akaike Information Criterion (AIC), the Residual Variance (RV), the Minimum Description Length (MDL), and the Hannan and Quinn (HNQ) criterion. These criteria were evaluated on MRI data sets based on the Transient Error Reconstruction Algorithm (TERA). The result for each criterion is compared to the result obtained by a fixed-order technique, and three measures of similarity were evaluated. The results obtained show that MDL gives the highest measure of similarity to the fixed-order technique.
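
For reference, the sketch below computes three of the named criteria (FPE, AIC, MDL) from the residual variance of AR fits of increasing order and picks the minimizing order. The AR fitting is a plain least-squares stand-in rather than the paper's ARMA estimator, and the data are synthetic.

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares AR(p) fit; returns coefficients and residual variance."""
    X = np.column_stack([x[p - i - 1: len(x) - i - 1] for i in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, np.mean((y - X @ a) ** 2)

def order_criteria(x, max_order=12):
    N = len(x)
    scores = {"FPE": {}, "AIC": {}, "MDL": {}}
    for p in range(1, max_order + 1):
        _, s2 = ar_fit(x, p)
        scores["FPE"][p] = s2 * (N + p + 1) / (N - p - 1)
        scores["AIC"][p] = N * np.log(s2) + 2 * p
        scores["MDL"][p] = N * np.log(s2) + p * np.log(N)
    return {name: min(vals, key=vals.get) for name, vals in scores.items()}

rng = np.random.default_rng(1)
x = rng.standard_normal(512)
for t in range(2, 512):                      # synthetic AR(2) process
    x[t] += 0.75 * x[t - 1] - 0.5 * x[t - 2]
print(order_criteria(x))                     # each criterion's chosen order
```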

Keywords: Autoregressive Moving Average (ARMA), Magnetic Resonance Imaging (MRI), Parametric modeling, Transient Error.

138 Mathematical Model for Dengue Disease with Maternal Antibodies

Authors: Rujira Kongnuy, Puntani Pongsumpun, I-Ming Tang

Abstract:

Mathematical models can be used to describe the dynamics of the spread of infectious disease between susceptible and infectious populations. Dengue fever is a re-emerging disease in the tropical and subtropical regions of the world. Its incidence has increased fourfold since 1970, and outbreaks are now reported quite frequently from many parts of the world. In dengue endemic regions, more cases of dengue infection in pregnancy and infancy are being found due to the increasing incidence. It has been reported that dengue infection can be vertically transmitted to infants. Primary dengue infection is associated with mild to high fever, headache, muscle pain, and skin rash. The immune response includes IgM antibodies, produced by the 5th day of symptoms, which persist for 30-60 days, and IgG antibodies, which appear on the 14th day and persist for life. Secondary infections often result in high fever and, in many cases, hemorrhagic events and circulatory failure. In the present paper, a mathematical model is proposed to simulate the succession of dengue disease transmission in pregnancy and infancy. Stability analysis of the equilibrium points is carried out, and a simulation is given for different sets of parameters. Moreover, the bifurcation diagrams of our model are discussed. Control of this disease in infant cases is introduced in terms of the threshold condition.
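
The paper's exact compartmental equations are not reproduced in the abstract; as a generic stand-in, the sketch below integrates a basic SIR model with SciPy to show the kind of susceptible/infectious simulation meant here. Parameter values are hypothetical, and there is no vector or maternal-antibody compartment in this toy version.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Basic SIR dynamics: dS=-beta*S*I, dI=beta*S*I-gamma*I, dR=gamma*I."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

beta, gamma = 0.4, 0.1            # hypothetical transmission/recovery rates
sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0],
                args=(beta, gamma), dense_output=True)
t = np.linspace(0, 160, 5)
print(np.round(sol.sol(t).T, 3))  # S, I, R trajectories; R0 = beta/gamma = 4
```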

Keywords: Dengue infection, equilibrium states, maternal antibodies, pregnancy and infancy.

137 Detecting Email Forgery using Random Forests and Naïve Bayes Classifiers

Authors: Emad E. Abdallah, A. F. Otoom, Arwa Saqer, Ola Abu-Aisheh, Diana Omari, Ghadeer Salem

Abstract:

As email communications have no consistent authentication procedure to ensure authenticity, we present an investigative analysis approach for detecting forged emails based on Random Forests and Naïve Bayes classifiers. Instead of investigating the email headers, we use the body content to extract a unique writing style for each of the possible suspects. Our approach consists of four main steps: (1) the cybercrime investigator extracts different effective features, including structural, lexical, linguistic, and syntactic evidence, from previous emails of all the possible suspects; (2) the extracted feature vectors are normalized to increase the accuracy rate; (3) the normalized features are then used to train the learning engine; (4) upon receiving the anonymous email (M), we apply the feature extraction process to produce a feature vector. Finally, using the machine learning classifiers, the email is assigned to the suspect whose writing style most closely matches M. Experimental results on real data sets show the improved performance of the proposed method and its ability to identify the authors with a very limited number of features.
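
A minimal scikit-learn sketch of the classification stage, assuming stylometric feature vectors have already been extracted; the feature matrix here is random stand-in data, and a Gaussian Naïve Bayes is used as the Bayes variant alongside the Random Forest.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 25))                 # stand-in stylometric feature vectors
y = rng.integers(0, 4, 200)               # four possible suspects

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = MinMaxScaler().fit(X_tr)         # step (2): normalize the features
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

for clf in (RandomForestClassifier(n_estimators=200, random_state=0), GaussianNB()):
    clf.fit(X_tr, y_tr)                   # step (3): train the learning engine
    print(type(clf).__name__, clf.score(X_te, y_te))

# Step (4): an anonymous email M reduces to one feature vector, and
# clf.predict(scaler.transform(m_vector)) names the closest suspect.
# (Random labels give chance-level accuracy here; real features would not.)
```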

Keywords: Digital investigation, cybercrimes, email forensics, anonymous emails, writing style, authorship analysis.

136 Using the Combined Model of PROMETHEE and Fuzzy Analytic Network Process for Determining Question Weights in Scientific Exams through Data Mining Approach

Authors: Hassan Haleh, Amin Ghaffari, Parisa Farahpour

Abstract:

The need for an appropriate system of evaluating students' educational development is a key problem in achieving predefined educational goals. The volume of papers in recent years that try to prove or disprove the necessity and adequacy of student assessment corroborates this. Some of these studies have tried to increase the precision of determining question weights in scientific examinations, but in all of them the attempt has been to adjust the initial question weights while the accuracy and precision of those initial weights remain in question. Thus, in order to increase the precision of assessing students' educational development, the present study proposes a new method for determining the initial question weights by considering question factors such as difficulty, importance, and complexity, and by implementing a combined method of PROMETHEE and fuzzy analytic network process using a data mining approach to improve the model's inputs. The results of the implemented case study demonstrate the improved performance and precision of the proposed model.
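
As a sketch of the PROMETHEE part of the combination, the code below ranks alternatives (here, questions) by net outranking flow using a simple linear preference function; the criteria, weights, and evaluation matrix are hypothetical, and the fuzzy-ANP stage that would supply the weights is not shown.

```python
import numpy as np

def promethee_net_flows(A, weights, p=1.0):
    """PROMETHEE II net flows with a linear preference function.
    A[i, c] = score of alternative i on criterion c (higher is better)."""
    n = A.shape[0]
    pi = np.zeros((n, n))                       # aggregated preference matrix
    for i in range(n):
        for j in range(n):
            d = A[i] - A[j]
            pref = np.clip(d / p, 0.0, 1.0)     # linear preference, threshold p
            pi[i, j] = np.dot(weights, pref)
    phi_plus = pi.sum(axis=1) / (n - 1)         # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)        # negative (entering) flow
    return phi_plus - phi_minus

# Hypothetical question scores on (difficulty, importance, complexity)
A = np.array([[0.8, 0.6, 0.7], [0.5, 0.9, 0.4], [0.6, 0.7, 0.9]])
w = np.array([0.5, 0.3, 0.2])                   # weights, e.g. from fuzzy ANP
flows = promethee_net_flows(A, w, p=0.5)
print(np.argsort(-flows))                       # ranking, best first
```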

Keywords: Assessing students, Analytic network process, Clustering, Data mining, Fuzzy sets, Multi-criteria decision making, and Preference function.

135 MONPAR - A Page Replacement Algorithm for a Spatiotemporal Database

Authors: U. Kalay, O. Kalıpsız

Abstract:

For a spatiotemporal database management system, the I/O cost of queries and other operations is an important performance criterion. In order to optimize this cost, intense research on designing robust index structures has been done in the past decade. Beyond these major considerations, there are still other design issues that deserve addressing due to their direct impact on I/O cost; in particular, an efficient buffer management strategy plays a key role in reducing redundant disk accesses. In this paper, we propose an efficient buffer strategy for a spatiotemporal database index structure, specifically one indexing objects moving over a network of roads. The proposed strategy, namely MONPAR, is based on the data type (i.e. spatiotemporal data) and the structure of the index. For the purpose of an experimental evaluation, we set up a simulation environment that counts the number of disk accesses while executing a number of spatiotemporal range queries over the index. We reiterated the simulations with query sets of different distributions, such as uniform and skewed query distributions. Based on a comparison of our strategy with well-known page-replacement techniques, like LRU-based and priority-based buffers, we conclude that MONPAR behaves better than its competitors for small and medium-sized buffers under all the query distributions used.
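
MONPAR itself is specific to the index structure, but the evaluation setup is easy to reproduce in miniature: the sketch below counts disk accesses for the LRU baseline on a stream of page requests. The skewed request generator is an illustrative stand-in for the paper's query distributions.

```python
import random
from collections import OrderedDict

def lru_disk_accesses(requests, buffer_pages):
    """Simulate an LRU page buffer and count misses (disk accesses)."""
    buf, misses = OrderedDict(), 0
    for page in requests:
        if page in buf:
            buf.move_to_end(page)        # hit: mark as most recently used
        else:
            misses += 1                  # miss: fetch page from disk
            if len(buf) >= buffer_pages:
                buf.popitem(last=False)  # evict the least recently used page
            buf[page] = True
    return misses

random.seed(0)
uniform = [random.randrange(500) for _ in range(10_000)]
skewed = [int(random.paretovariate(1.2)) % 500 for _ in range(10_000)]
for name, reqs in [("uniform", uniform), ("skewed", skewed)]:
    print(name, lru_disk_accesses(reqs, buffer_pages=50))
```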

Keywords: Buffer Management, Spatiotemporal databases.

134 Reliability Analysis of Press Unit using Vague Set

Authors: S. P. Sharma, Monica Rani

Abstract:

In conventional reliability assessment, the reliability data of system components are treated as crisp values. The collected data have some uncertainties due to errors by human beings/machines or other sources. These uncertainty factors limit the understanding of system component failure because of incomplete data. In such situations, we need to generalize classical methods to a fuzzy environment for studying and analyzing the systems of interest. Fuzzy set theory has been proposed to handle such vagueness by generalizing the notion of membership in a set. Essentially, in a Fuzzy Set (FS), each element is associated with a point value selected from the unit interval [0, 1], termed the grade of membership in the set. A Vague Set (VS), like an Intuitionistic Fuzzy Set (IFS), is a further generalization of an FS: instead of the point-based membership used in FS, interval-based membership is used in VS. The interval-based membership in VS is more expressive in capturing the vagueness of data. In the present paper, vague set theory coupled with the conventional Lambda-Tau method is presented for reliability analysis of repairable systems. The methodology uses Petri nets (PN) to model the system instead of a fault tree because they allow efficient simultaneous generation of minimal cut and path sets. The presented method is illustrated with the press unit of a paper mill.
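
The interval-based membership of a vague set can be illustrated directly: each element carries [truth, 1 - false] bounds, and system-level measures combine by interval arithmetic. The sketch below multiplies membership intervals for components in series; this is a simplified illustration with hypothetical values, not the paper's full Lambda-Tau procedure.

```python
def vague(t, f):
    """Vague membership as the interval [t, 1 - f], requiring t + f <= 1."""
    assert 0 <= t and 0 <= f and t + f <= 1
    return (t, 1 - f)

def series(intervals):
    """Interval product: reliability-like measure of components in series."""
    lo, hi = 1.0, 1.0
    for a, b in intervals:
        lo, hi = lo * a, hi * b
    return (lo, hi)

# Hypothetical component memberships (truth, false) for a press unit
components = [vague(0.90, 0.05), vague(0.85, 0.10), vague(0.95, 0.02)]
print(series(components))  # interval-valued system measure, (0.7268, 0.8379)
```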

Keywords: Lambda-Tau methodology, Petri nets, repairable system, vague fuzzy set.

133 Intelligent Neural Network Based STLF

Authors: H. Shayeghi, H. A. Shayanfar, G. Azimi

Abstract:

Short-Term Load Forecasting (STLF) plays an important role in the economic and secure operation of power systems. In this paper, a Continuous Genetic Algorithm (CGA) is employed to evolve the optimum structure and connection weights of large neural networks for the one-day-ahead electric load forecasting problem. This study describes the process of developing three-layer feed-forward large neural networks for load forecasting and then presents a heuristic search algorithm for performing an important task of this process, i.e. optimal network structure design. The proposed method is applied to STLF of a local utility. Data are clustered due to the differences in their characteristics, and special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends, and special days. We find good performance for the large neural networks: the proposed methodology consistently gives lower percentage errors. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
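
As a toy version of the structure-evolution idea, the sketch below runs a small real-coded genetic algorithm over the hidden-layer size of a feed-forward network, scored by validation error on a synthetic hourly load series. The load model, population settings, and use of scikit-learn's MLPRegressor are all illustrative assumptions, far smaller than the paper's large networks.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
t = np.arange(2000)
load = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)  # toy hourly load
X = np.column_stack([load[i:i - 24] for i in range(24)])  # previous 24 hours
y = load[24:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=False)

def fitness(h):
    """Negative validation error of a 3-layer net with int(h) hidden units."""
    net = MLPRegressor(hidden_layer_sizes=(int(h),), max_iter=300, random_state=0)
    net.fit(X_tr, y_tr)
    return -np.mean(np.abs(net.predict(X_te) - y_te))

# Real-coded GA over the hidden-unit count (crossover = blend, mutation = noise)
pop = rng.uniform(5, 100, 8)
for _ in range(5):
    fit = np.array([fitness(h) for h in pop])
    parents = pop[np.argsort(fit)][-4:]                     # keep the best half
    kids = [(a + b) / 2 + rng.normal(0, 3)
            for a, b in zip(parents, np.roll(parents, 1))]
    pop = np.clip(np.concatenate([parents, kids]), 5, 100)
print("best hidden units:", int(pop[np.argmax([fitness(h) for h in pop])]))
```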

Keywords: Feed-forward Large Neural Network, Short-Term Load Forecasting, Continuous Genetic Algorithm.

132 Topographic Arrangement of 3D Design Components on 2D Maps by Unsupervised Feature Extraction

Authors: Stefan Menzel

Abstract:

As a result of the daily workflow in the design development departments of companies, databases containing huge numbers of 3D geometric models are generated. According to the given problem, engineers create CAD drawings based on their design ideas and evaluate the performance of the resulting design, e.g. by computational simulations. Usually, new geometries are built either by utilizing and modifying sets of existing components or by adding single newly designed parts to a more complex design. The present paper addresses the two facets of acquiring components from large design databases automatically and providing a reasonable overview of the parts to the engineer. A unified framework based on topographic non-negative matrix factorization (TNMF) is proposed which solves both aspects simultaneously. First, meaningful components are extracted from a given database into a parts-based representation in an unsupervised manner. Second, the extracted components are organized and visualized on square-lattice 2D maps. It is shown on the example of turbine-like geometries that these maps efficiently provide a well-structured overview of the database content and, at the same time, define a measure of spatial similarity, allowing easy access and reuse of components in the process of design development.
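
The topographic variant (TNMF) adds a 2D map constraint on top of ordinary non-negative matrix factorization; the sketch below shows only the plain NMF step with scikit-learn, which already yields the parts-based representation, applied to random stand-in geometry descriptors rather than real CAD data.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = rng.random((120, 400))          # stand-in: 120 designs x 400 shape features

# Parts-based decomposition V ~ W @ H: rows of H are extracted components,
# W[i] says how strongly each component appears in design i.
model = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)
H = model.components_

print(W.shape, H.shape)             # (120, 16), (16, 400)
# TNMF would additionally tie the 16 components to positions on a 4x4
# square-lattice map so that neighboring cells hold similar parts.
```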

Keywords: Design decomposition, topographic non-negative matrix factorization, parts-based representation, self-organization, unsupervised feature extraction.

131 Modeling of Electrokinetic Mixing in Lab on Chip Microfluidic Devices

Authors: Virendra J. Majarikar, Harikrishnan N. Unni

Abstract:

This paper sets out to demonstrate the modeling of electrokinetic mixing in stationary and time-dependent electroosmotic microchannel flow, using alternate zeta patches on the lower surface of the micromixer in a lab-on-chip microfluidic device. Electroosmotic flow is amplified using different 2D and 3D model designs with alternate zeta potential values of 25, 50, and 100 mV to achieve high concentration mixing in the electrokinetically-driven microfluidic system. The enhancement of electrokinetic mixing is studied using finite element modeling, and the simulation workflow is accomplished with defined integral steps. It can be observed that the presence of alternate zeta patches can help induce microvortex flows inside the channel, which in turn can improve mixing efficiency. Fluid flow and concentration fields are simulated by solving the Navier-Stokes equation (with the Helmholtz-Smoluchowski slip velocity boundary condition) and the convection-diffusion equation. The effects of the magnitude of the zeta potential, the number of alternate zeta patches, etc. are analysed thoroughly. The 2D simulation reveals a cumulative increase in concentration mixing, whereas the 3D simulation differs slightly from the 2D model at low zeta potential within the T-shaped micromixer for inlet concentrations of 1 mol/m3 and 0 mol/m3, respectively. Moreover, the 2D model results were compared with those of the 3D model to indicate the importance of the 3D model in a microfluidic design process.
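
The slip boundary condition named here has a closed form worth showing: the Helmholtz-Smoluchowski electroosmotic velocity u = -ε·ζ·E/μ. The short sketch below evaluates it for the zeta values quoted in the abstract, with an assumed field strength and standard water properties (the field and fluid constants are assumptions, not the paper's settings).

```python
EPS0 = 8.854e-12       # vacuum permittivity [F/m]
eps_r = 80.0           # relative permittivity of water (assumed)
mu = 1.0e-3            # dynamic viscosity of water [Pa*s] (assumed)
E = 1.0e4              # applied axial field [V/m] (assumed: 100 V over 1 cm)

def hs_slip_velocity(zeta_mV):
    """Helmholtz-Smoluchowski slip velocity u = -eps*zeta*E/mu."""
    return -EPS0 * eps_r * (zeta_mV * 1e-3) * E / mu

for zeta in (25, 50, 100):  # zeta patch values from the abstract [mV]
    print(f"zeta = {zeta:3d} mV -> u = {hs_slip_velocity(zeta)*1e6:8.1f} um/s")
# Alternating the sign of zeta along the wall flips the slip direction,
# which is what induces the microvortices that enhance mixing.
```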

Keywords: COMSOL, electrokinetic, electroosmotic, microfluidics, zeta potential.

130 Effects of Hidden Unit Sizes and Autoregressive Features in Mental Task Classification

Authors: Ramaswamy Palaniappan, Nai-Jen Huan

Abstract:

Classification of electroencephalogram (EEG) signals extracted during mental tasks is a technique that is actively pursued for Brain Computer Interface (BCI) designs. In this paper, we compared the classification performances of univariate autoregressive (AR) and multivariate autoregressive (MAR) models for representing EEG signals extracted during different mental tasks. A Multilayer Perceptron (MLP) neural network (NN) trained by the backpropagation (BP) algorithm was used to classify these features into the different categories representing the mental tasks. Classification performances were also compared across different mental task combinations and two sets of hidden units (HU): 2 to 10 HU in steps of 2, and 20 to 100 HU in steps of 20. Five different mental tasks from four subjects were used in the experimental study, and combinations of two different mental tasks were studied for each subject. Three different 6th-order feature extraction methods were used to extract features from these EEG signals: AR coefficients computed with Burg's algorithm (ARBG), AR coefficients computed with a stepwise least squares algorithm (ARLS), and MAR coefficients computed with a stepwise least squares algorithm. The best results were obtained with 20 to 100 HU using ARBG. It is concluded that i) it is important to choose suitable mental tasks for different individuals for a successful BCI design, ii) higher HU counts are more suitable, and iii) ARBG is the most suitable feature extraction method.
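
A minimal sketch of the AR feature-extraction step, using a plain least-squares fit rather than Burg's recursion; the synthetic signal stands in for an EEG channel, and only the 6th order follows the abstract.

```python
import numpy as np

def ar_features(signal, order=6):
    """6th-order AR coefficients of one channel via least squares
    (a simple stand-in for Burg's or stepwise least-squares algorithms)."""
    X = np.column_stack([signal[order - i - 1: len(signal) - i - 1]
                         for i in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs                       # one feature vector per channel

rng = np.random.default_rng(0)
eeg = rng.standard_normal(1000)         # stand-in for one EEG channel
for t in range(2, 1000):                # give it AR(2) structure
    eeg[t] += 1.2 * eeg[t - 1] - 0.6 * eeg[t - 2]
features = ar_features(eeg)
print(features)  # fed (per channel, per task window) to the MLP classifier
```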

Keywords: Autoregressive, Brain-Computer Interface, Electroencephalogram, Neural Network.

129 Mining Genes Relations in Microarray Data Combined with Ontology in Colon Cancer Automated Diagnosis System

Authors: A. Gruzdz, A. Ihnatowicz, J. Siddiqi, B. Akhgar

Abstract:

The MATCH project [1] entails the development of an automatic diagnosis system that aims to support the treatment of colon cancer by discovering mutations that occur in tumour suppressor genes (TSGs) and contribute to the development of cancerous tumours. The constitution of the system is based on a) colon cancer clinical data and b) biological information derived by data mining techniques from genomic and proteomic sources. The core mining module will consist of popular, well-tested hybrid feature extraction methods and new combined algorithms designed especially for the project. Elements of rough sets, evolutionary computing, cluster analysis, self-organizing maps, and association rules will be used to discover the annotations between genes and their influence on tumours [2]-[11]. The methods used to process the data have to address its high complexity and potential inconsistency, and the problems of dealing with missing values. They must integrate all the useful information necessary to solve the expert's question. For this purpose, the system has to learn from data, or allow a domain specialist to interactively specify, the part of the knowledge structure it needs to answer a given query. The program should also take into account the importance/rank of the particular parts of the data it analyses, and adjust the algorithms used accordingly.

Keywords: Bioinformatics, gene expression, ontology, self-organizing maps.

128 Rotation Invariant Face Recognition Based on Hybrid LPT/DCT Features

Authors: Rehab F. Abdel-Kader, Rabab M. Ramadan, Rawya Y. Rizk

Abstract:

The recognition of human faces, especially those with different orientations, is a challenging and important problem in image analysis and classification. This paper proposes an effective scheme for rotation-invariant face recognition using combined Log-Polar Transform and Discrete Cosine Transform features. The rotation-invariant feature extraction for a given face image involves applying the log-polar transform to eliminate the rotation effect and to produce a row-shifted log-polar image. The discrete cosine transform is then applied to eliminate the row-shift effect and to generate the low-dimensional feature vector. A PSO-based feature selection algorithm is utilized to search the feature vector space for the optimal feature subset. Evolution is driven by a fitness function defined in terms of maximizing the between-class separation (scatter index). Experimental results, based on the ORL face database using test data sets of images with different orientations, show that the proposed system outperforms other face recognition methods. The overall recognition rate for the rotated test images is 97%, demonstrating that the extracted feature vector is an effective rotation-invariant feature set with a minimal number of selected features.
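
A compact sketch of the two-transform feature pipeline, assuming NumPy/SciPy: resample the image onto a log-polar grid (rotation becomes a shift along the angular axis), then take a 2D DCT and keep low-frequency coefficients. The grid sizes and number of kept coefficients are illustrative, and the PSO feature-selection stage is not shown.

```python
import numpy as np
from scipy.fft import dctn

def log_polar(img, n_rho=64, n_theta=64):
    """Resample an image onto a log-polar grid (nearest neighbor)."""
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    rho = np.exp(np.linspace(0, np.log(min(cy, cx)), n_rho))
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys = np.clip((cy + rho[:, None] * np.sin(theta)).astype(int), 0, h - 1)
    xs = np.clip((cx + rho[:, None] * np.cos(theta)).astype(int), 0, w - 1)
    return img[ys, xs]      # rotation of img becomes a shift along columns here

def lpt_dct_features(img, keep=8):
    """Low-frequency DCT block of the log-polar image as the feature vector."""
    lp = log_polar(img)
    return dctn(lp, norm="ortho")[:keep, :keep].ravel()

img = np.random.rand(112, 92)       # ORL images are 112x92 grayscale
print(lpt_dct_features(img).shape)  # (64,)
```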

Keywords: Discrete Cosine Transform, Face Recognition, Feature Extraction, Log Polar Transform, Particle Swarm Optimization.

127 Application of a Similarity Measure for Graphs to Web-based Document Structures

Authors: Matthias Dehmer, Frank Emmert Streib, Alexander Mehler, Jürgen Kilian, Max Mühlhauser

Abstract:

Due to the tremendous amount of information provided by the World Wide Web (WWW), developing methods for mining the structure of web-based documents is of considerable interest. In this paper we present a similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as linear integer strings, whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. In this paper we apply the well-known technique of sequence alignment to solve a novel and challenging problem: measuring the structural similarity of generalized trees. In other words, we first transform our graphs, considered as high-dimensional objects, into linear structures; then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we transform a graph similarity problem into a string similarity problem to develop an efficient graph similarity measure. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based document structures.
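
A toy version of the pipeline, assuming only Python's standard library and a very crude property string (the out-degree sequence in BFS order); the paper derives richer structural properties and a dedicated alignment, so treat this only as the shape of the idea.

```python
from collections import deque
from difflib import SequenceMatcher

def property_string(adj, root=0):
    """Out-degree sequence in BFS order: a linear encoding of the graph."""
    seen, order, q = {root}, [], deque([root])
    while q:
        v = q.popleft()
        order.append(len(adj[v]))
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                q.append(u)
    return order

def graph_similarity(adj_a, adj_b):
    """Similarity of two graphs as the alignment score of their strings."""
    a, b = property_string(adj_a), property_string(adj_b)
    return SequenceMatcher(None, a, b).ratio()   # alignment-based, in [0, 1]

t1 = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}   # two small hypertext trees
t2 = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: []}
print(graph_similarity(t1, t2))
```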

Keywords: Graph similarity, hierarchical and directed graphs, hypertext, generalized trees, web structure mining.

126 Ranking of Inventory Policies Using Distance Based Approach Method

Authors: Gupta Amit, Kumar Ramesh, Tewari P. C.

Abstract:

Globalization is putting enormous pressure on business organizations, especially manufacturing ones, to rethink the supply chain in innovative ways. Inventory consumes a major portion of total sales revenue, so effective and efficient inventory management plays a vital role in the successful functioning of any organization. Selection of an inventory policy is one of the important purchasing activities, and improper selection could affect a company's competitiveness in terms of the productivity of its facilities and the quality of its products. This paper focuses on the selection and ranking of alternative inventory policies. A deterministic quantitative model based on the Distance Based Approach (DBA) method has been developed for evaluating and ranking inventory policies; we have employed this concept for the first time for this type of selection problem. Four inventory policies are considered: economic order quantity (EOQ), just in time (JIT), vendor managed inventory (VMI), and a monthly policy. The ranking of inventory policies is a multi-criteria problem: there is a need to first identify the selection criteria and then process the information with reference to the relative importance of the attributes for comparison. Criteria values for each inventory policy can be obtained either analytically or by using a simulation technique, or they may be subjective linguistic judgments defined by fuzzy sets. A methodology is developed and applied to rank the inventory policies.
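
A small sketch of a distance-based approach of the kind described, under common assumptions: normalize the decision matrix, form an optimal point from the best value per criterion, and rank policies by weighted Euclidean distance to it. The criteria, weights, and scores below are hypothetical, not the paper's data.

```python
import numpy as np

def dba_rank(scores, weights, benefit):
    """Rank alternatives by weighted Euclidean distance to the optimal point.
    scores[i, c]: alternative i on criterion c; benefit[c]: higher is better."""
    norm = scores / np.linalg.norm(scores, axis=0)          # vector normalization
    ideal = np.where(benefit, norm.max(axis=0), norm.min(axis=0))
    dist = np.sqrt(((weights * (norm - ideal)) ** 2).sum(axis=1))
    return np.argsort(dist), dist                           # smallest distance first

policies = ["EOQ", "JIT", "VMI", "Monthly"]
# Hypothetical criteria: (holding cost, stockout risk, service level)
scores = np.array([[4.0, 0.2, 0.90], [1.5, 0.4, 0.85],
                   [2.5, 0.2, 0.95], [5.0, 0.1, 0.88]])
weights = np.array([0.4, 0.3, 0.3])
benefit = np.array([False, False, True])
order, dist = dba_rank(scores, weights, benefit)
print([policies[i] for i in order])
```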

Keywords: Inventory Policy, Ranking, DBA, Selection criteria.

125 A Codebook-based Redundancy Suppression Mechanism with Lifetime Prediction in Cluster-based WSN

Authors: Huan Chen, Bo-Chao Cheng, Chih-Chuan Cheng, Yi-Geng Chen, Yu Ling Chou

Abstract:

A Wireless Sensor Network (WSN) comprises sensor nodes designed to sense the environment and transmit the sensed data back to the base station via multi-hop routing to reconstruct physical phenomena. Since sensed physical phenomena exhibit significant overlap between temporal redundancy and spatial redundancy, it is necessary for sensor nodes to use Redundancy Suppression Algorithms (RSAs) to lower energy consumption by reducing the transmission of redundant data. A conventional RSA is the threshold-based RSA, which sets a threshold to suppress redundant data. Although many temporal and spatial RSAs have been proposed, temporal-spatial RSAs are seldom proposed because it is difficult to determine when to utilize temporal or spatial suppression. In this paper, we propose a novel temporal-spatial redundancy suppression algorithm, the Codebook-based Redundancy Suppression Mechanism (CRSM). CRSM adopts vector quantization to generate a codebook, which is easily used to implement temporal-spatial RSA. CRSM not only achieves power saving and reliability for WSN, but also provides predictability of the network lifetime. Simulation results show that the network lifetime of CRSM exceeds that of other RSAs by at least 23%.
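
The codebook idea is easy to demonstrate with ordinary vector quantization: train a codebook on windows of past readings, then transmit only the index of the nearest codeword instead of the window itself. The sketch below uses scikit-learn's k-means as the quantizer; the window length, codebook size, and synthetic sensor signal are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.arange(4096)
readings = 20 + 5 * np.sin(2 * np.pi * t / 128) + rng.normal(0, 0.3, t.size)

W = 8                                          # samples per transmitted window
windows = readings[: t.size // W * W].reshape(-1, W)

codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(windows)
indices = codebook.predict(windows)            # what a node actually sends

reconstructed = codebook.cluster_centers_[indices].ravel()
err = np.abs(reconstructed - windows.ravel()).mean()
ratio = (windows.size * 32) / (len(indices) * 5)   # float32 samples vs 5-bit indices
print(f"mean abs error {err:.3f}, compression ~{ratio:.0f}x")
```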

Keywords: Redundancy Suppression Algorithm (RSA), Threshold-based RSA, Temporal RSA, Spatial RSA, Codebook-based Redundancy Suppression Mechanism (CRSM).

124 Integration of Seismic and Seismological Data Interpretation for Subsurface Structure Identification

Authors: Iftikhar Ahmed Satti, Wan Ismail Wan Yusoff

Abstract:

The structural interpretation of a part of the eastern Potwar (Missa Keswal) has been carried out with the available seismological, seismic, and well data. The seismological data contain both source parameters and fault plane solution (FPS) parameters, while the seismic data comprise ten seismic lines that were re-interpreted using well data. Structural interpretation depicts two broad types of fault sets, namely thrust and back-thrust faults. Together, these faults give rise to pop-up structures in the study area and are also responsible for many structural traps and for the seismicity. Seismic interpretation includes time and depth contour maps of the Chorgali Formation, while seismological interpretation includes focal mechanism solutions (FMS), depth, frequency and magnitude bar graphs, and renewal of the seismotectonic map. The focal mechanism solutions surrounding the study area are correlated with the different geological and structural maps of the area to determine the nature of the subsurface faults. Results of the structural interpretation from both seismic and seismological data show good correlation. It is hoped that the present work will help in better understanding the variations in the subsurface structure and can be a useful tool for earthquake prediction, oil field planning, and reservoir monitoring.

Keywords: Focal mechanism solution (FMS), fault plane solution (FPS), reservoir monitoring, earthquake prediction.
