Search results for: Deep approach metacognitive methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8444

8144 A Review and Comparative Analysis on Cluster Ensemble Methods

Authors: S. Sarumathi, P. Ranjetha, C. Saraswathy, M. Vaishnavi, S. Geetha

Abstract:

Clustering is an unsupervised learning technique in data mining for grouping data objects into meaningful classes so that intra-cluster similarity is maximized and inter-cluster similarity is minimized. However, no single clustering algorithm consistently produces the best result. The cluster ensemble approach has therefore emerged as a challenging new technique to address this problem, and it has proved a successful approach to the cluster analysis problem. The cluster ensemble's main goal is to combine multiple clustering solutions in a way that preserves precision while improving the quality of the individual data clusterings. Because of the massive and rapid creation of new approaches in the field of data mining, the ongoing interest in inventing novel algorithms necessitates a thorough examination of current techniques and of future innovation. This paper presents a comparative analysis of various cluster ensemble approaches, including their methodologies, formal working processes, and standard accuracy and error rates. Clustering practitioners should benefit from this exploratory and clear survey, which will aid in determining the most appropriate solution to the problem at hand.
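
As a concrete illustration of the consensus-function idea, the sketch below (a hedged example, not one of the specific methods reviewed) accumulates evidence from several base clusterings into a co-association matrix and extracts a consensus partition from it; the dataset and parameter choices are purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Illustrative data (assumption: any small numeric dataset works here).
X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

# 1. Generate diverse base clusterings (different k and seeds).
base_labels = [KMeans(n_clusters=k, n_init=10, random_state=s).fit_predict(X)
               for s, k in enumerate([2, 3, 4, 5])]

# 2. Accumulate evidence: coassoc[i, j] = fraction of base clusterings
#    that place samples i and j in the same cluster.
n = X.shape[0]
coassoc = np.zeros((n, n))
for labels in base_labels:
    coassoc += (labels[:, None] == labels[None, :]).astype(float)
coassoc /= len(base_labels)

# 3. Consensus function: hierarchical clustering on 1 - co-association.
dist = squareform(1.0 - coassoc, checks=False)
consensus = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
print("consensus cluster sizes:", np.bincount(consensus)[1:])
```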

Keywords: Clustering, cluster ensemble methods, consensus function, data mining, unsupervised learning.

8143 The Usefulness of Logical Structure in Flexible Document Categorization

Authors: Jebari Chaker, Ounalli Habib

Abstract:

This paper presents a new approach to automatic document categorization. Exploiting the logical structure of the document, our approach assigns an HTML document to one or more categories (thesis, paper, call for papers, email, ...). Using a set of training documents, the approach generates a set of rules used to categorize new documents. Flexibility is achieved by associating a weight with each rule, representing its importance in discriminating between the possible categories. This weight is dynamically updated each time a new document is categorized. Experiments with the proposed approach give satisfactory results.

Keywords: categorization rule, document categorization, flexible categorization, logical structure.

8142 Factors Affecting Weld Line Movement in Tailor Welded Blank

Authors: Shakil A. Kagzi, Sanjay Patil, Harit K. Raval

Abstract:

Tailor Welded Blanks (TWB) are widely utilized in the automotive industry because they reduce weight and cost while maintaining the required strength and structural integrity. A TWB consists of two or more sheets of similar or dissimilar material and thickness, welded together to form a single sheet before it is formed to the desired shape. Forming of the tailor welded blank is affected by the ratio of blank thicknesses, the ratio of their strengths, etc., mainly due to the inhomogeneity of the material. In the present work, the relative effect of these parameters on weld line movement is studied during deep drawing of TWB using FE simulation in HYPERWORKS. The simulation is validated against results from the literature. Simulations were then performed based on a Taguchi orthogonal array, followed by ANOVA analysis to determine the significance of these parameters for forming of TWB.
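
The significance-testing step can be outlined with standard tools. The sketch below runs an ANOVA over a small orthogonal-array-style design; the factor names, levels and weld line movement values are invented for illustration and are not the simulation results of this study.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical L9-style design: three factors at three levels each,
# with an invented weld line movement response in mm.
data = pd.DataFrame({
    "thickness_ratio": [1.0, 1.0, 1.0, 1.25, 1.25, 1.25, 1.5, 1.5, 1.5],
    "strength_ratio":  [1.0, 1.2, 1.4, 1.0, 1.2, 1.4, 1.0, 1.2, 1.4],
    "friction":        [0.05, 0.10, 0.15, 0.10, 0.15, 0.05, 0.15, 0.05, 0.10],
    "weld_line_movement": [0.8, 0.9, 1.0, 1.6, 1.7, 1.9, 2.5, 2.7, 3.0],
})

# Treat each factor as categorical and test its significance with ANOVA.
model = ols("weld_line_movement ~ C(thickness_ratio) + C(strength_ratio) + C(friction)",
            data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```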

Keywords: ANOVA, Deep drawing, Tailor Welded Blank, TWB, Weld line movement.

8141 A Combined Fuzzy Decision Making Approach to Supply Chain Risk Assessment

Authors: P. Moeinzadeh, A. Hajfathaliha

Abstract:

Many firms have implemented initiatives such as outsourced manufacturing that can make a supply chain (SC) more vulnerable to various types of disruptions, so managing risk has become a critical component of SC management. Different SC vulnerability management methodologies have been proposed for managing SC risk, but most offer only point-based solutions that deal with a limited set of risks. This research aims to reinforce SC risk management by proposing an integrated approach. SC risks are identified and a risk index classification structure is created. We then develop an SC risk assessment approach based on the analytic network process (ANP) and the VIKOR method under a fuzzy environment, where vagueness and subjectivity are handled with linguistic terms parameterized by triangular fuzzy numbers. Using FANP, risk weights are calculated and then fed into FVIKOR to rank the SC members and identify the riskiest partner.
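
To make the fuzzy machinery concrete, the sketch below shows basic triangular fuzzy number (TFN) arithmetic and a simple centroid defuzzification of linguistic risk judgments; the linguistic scale is an assumed example, not the paper's exact scale, and the FANP/FVIKOR steps themselves are not reproduced.

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number (l, m, u) with l <= m <= u."""
    l: float
    m: float
    u: float

    def __add__(self, other):
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)

    def scale(self, k):
        return TFN(self.l * k, self.m * k, self.u * k)

    def defuzzify(self):
        # Centroid of a triangular membership function.
        return (self.l + self.m + self.u) / 3.0

# Illustrative linguistic scale for risk judgments (assumed, not from the paper).
SCALE = {"low": TFN(0.0, 0.1, 0.3), "medium": TFN(0.3, 0.5, 0.7), "high": TFN(0.7, 0.9, 1.0)}

# Aggregate three experts' linguistic ratings of one risk and defuzzify.
ratings = [SCALE["medium"], SCALE["high"], SCALE["medium"]]
aggregate = TFN(0.0, 0.0, 0.0)
for r in ratings:
    aggregate = aggregate + r
aggregate = aggregate.scale(1.0 / len(ratings))
print("crisp risk score:", round(aggregate.defuzzify(), 3))
```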

Keywords: Analytic network process (ANP), Fuzzy sets, Supply chain risk management (SCRM), VIšekriterijumsko KOmpromisno Rangiranje (VIKOR)

8140 Fuzzy Time Series Forecasting Using Percentage Change as the Universe of Discourse

Authors: Meredith Stevenson, John E. Porter

Abstract:

Since the pioneering work of Zadeh, fuzzy set theory has been applied to a myriad of areas. Song and Chissom introduced the concept of fuzzy time series and applied several methods to the enrollments of the University of Alabama. In recent years, a number of forecasting techniques based on fuzzy set theory have been proposed. These methods have used either enrollment numbers or differences of enrollments as the universe of discourse. In this communication, the approach of Jilani, Burney, and Ardil is modified by using the year-to-year percentage change as the universe of discourse. We use enrollment figures for the University of Alabama to illustrate the proposed method, which yields better forecasting accuracy than existing models.
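
A minimal Chen-style fuzzy time series forecaster over a percentage-change universe of discourse is sketched below; the short series and the interval count are placeholders, and the weighting scheme of the modified Jilani-Burney-Ardil approach is not reproduced.

```python
import numpy as np

# Placeholder enrollment-like series (not the Alabama data).
series = np.array([13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861, 16807, 16919], float)
pct = 100.0 * np.diff(series) / series[:-1]          # universe of discourse

# 1. Partition the universe into equal-width intervals and fuzzify.
k = 5
edges = np.linspace(pct.min(), pct.max(), k + 1)
mids = (edges[:-1] + edges[1:]) / 2.0
fuzzify = lambda x: int(np.clip(np.searchsorted(edges, x, side="right") - 1, 0, k - 1))
states = [fuzzify(x) for x in pct]

# 2. Build fuzzy logical relationship groups A_i -> {A_j, ...}.
groups = {}
for a, b in zip(states[:-1], states[1:]):
    groups.setdefault(a, set()).add(b)

# 3. Forecast the next percentage change as the mean midpoint of the
#    successor group of the current state (Chen-style defuzzification).
cur = states[-1]
succ = groups.get(cur, {cur})
pct_forecast = np.mean([mids[j] for j in succ])
print("forecast next value:", series[-1] * (1 + pct_forecast / 100.0))
```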

Keywords: Fuzzy forecasting, fuzzy time series, fuzzified enrollments, time-invariant model

8139 Analyzing Defects with Failure Assessment Diagrams of Gas Pipelines

Authors: Alfred Hasanaj, Ardit Gjeta, Miranda Kullolli

Abstract:

Defects in different pipelines are analyzed using Failure Assessment Diagrams (FAD). These methods of analysis have been further extended in recent years. The approach is used to identify, and work out a solution for, the defects that randomly occur in gas pipes, such as corrosion defects, gouge defects, and combined gouge-and-dent defects. A few of these defects are analyzed in this paper, with the main focus on the fracture of cast iron pipes and on elastic-plastic failure and plastic collapse of X52 steel pipes for gas transport. The probability of such defects also needs to be calculated in order to predict and avoid these costly failures.
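
For readers unfamiliar with the FAD format, the sketch below checks a single assessment point against the commonly quoted Option 1 failure locus; the curve choice, the cut-off value and the example load ratios are illustrative assumptions and not the specific assessments performed in this paper.

```python
import math

def option1_locus(Lr):
    """Commonly quoted Option 1 FAD curve: the Kr limit as a function of Lr."""
    return (1.0 - 0.14 * Lr**2) * (0.3 + 0.7 * math.exp(-0.65 * Lr**6))

def assess(Kr, Lr, Lr_max=1.2):
    """Return True if the assessment point (Lr, Kr) lies inside the safe region."""
    if Lr > Lr_max:          # plastic collapse cut-off (material dependent, assumed here)
        return False
    return Kr <= option1_locus(Lr)

# Illustrative assessment point for a corrosion-type defect (made-up numbers):
# Kr = applied stress intensity / fracture toughness, Lr = applied load / limit load.
print(assess(Kr=0.55, Lr=0.80))   # -> True: the defect is acceptable in this sketch
```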

Keywords: Defects, Failure Assessment Diagrams, Safety Factor, Steel Pipes.

8138 Genetic Programming Approach for Multi-Category Pattern Classification Applied to Network Intrusion Detection

Authors: K.M. Faraoun, A. Boukelif

Abstract:

This paper describes a new approach to classification using genetic programming. The proposed technique consists of genetically coevolving a population of non-linear transformations of the input data to be classified, mapping them to a new space with reduced dimension in order to obtain maximum inter-class discrimination. The classification of new samples is then performed on the transformed data and thus becomes much easier. Contrary to existing GP-classification techniques, the proposed one uses a dynamic partition of the transformed data into separate intervals; the efficacy of a given partition is evaluated by the fitness criterion so as to maximize class discrimination. Experiments were first performed using Fisher's Iris dataset, and then the KDD-99 Cup dataset was used to study the intrusion detection and classification problem. The obtained results demonstrate that the proposed genetic approach outperforms the existing GP-classification methods [1], [2] and [3], and gives very acceptable results compared to the other existing techniques proposed in [4], [5], [6], [7] and [8].

Keywords: Genetic programming, patterns classification, intrusion detection

8137 Gate Tunnel Current Calculation for NMOSFET Based on Deep Sub-Micron Effects

Authors: Ashwani K. Rana, Narottam Chand, Vinod Kapoor

Abstract:

Aggressive scaling of MOS devices requires the use of ultra-thin gate oxides to maintain a reasonable short-channel effect and to take advantage of higher density, higher speed, lower cost, etc. Such thin oxides give rise to high electric fields, resulting in considerable gate tunneling current through the gate oxide in the nano regime. Consequently, accurate analysis of gate tunneling current is very important, especially in the context of low-power applications. In this paper, a simple and efficient analytical model has been developed for the channel and source/drain overlap region gate tunneling current through the ultra-thin gate oxide of an n-channel MOSFET with the inevitable deep sub-micron effects (DSME). The results obtained have been verified against simulated and reported experimental results for the purpose of validation. It is shown that the calculated tunnel current fits the measured one well over the entire oxide thickness range. The proposed model is simple enough to be used in a circuit simulator. It is observed that neglecting the deep sub-micron effects may lead to a large error in the calculated gate tunneling current, while temperature has an almost negligible effect on it. It is also reported that the gate tunneling current reduces with increasing gate oxide thickness. The impact of source/drain overlap length on gate tunneling current is also assessed.

Keywords: Gate tunneling current, analytical model, gate dielectrics, non-uniform poly gate doping, MOSFET, fringing field effect, image charges.

8136 Increased Capacity of Information Hiding in LSB's Method for Text and Image

Authors: H.B.Kekre, Archana Athawale, Pallavi N.Halarnkar

Abstract:

Steganography, derived from Greek, literally means "covered writing". It includes a vast array of secret communication methods that conceal the message's very existence, such as invisible inks, microdots, character arrangement, digital signatures, covert channels, and spread spectrum communications. This paper proposes a new, improved version of the Least Significant Bit (LSB) method. The proposed approach is simple to implement compared to the Pixel Value Differencing (PVD) method, yet achieves high embedding capacity and imperceptibility. The proposed method can also be applied to 24-bit color images, achieving an embedding capacity much higher than PVD.
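
A minimal sketch of plain LSB embedding, the baseline that the proposed method improves upon (the increased-capacity variant itself is not reproduced), is shown below for a grayscale image array.

```python
import numpy as np

def lsb_embed(cover, message_bits):
    """Replace the least significant bit of each pixel with one message bit."""
    flat = cover.flatten().astype(np.uint8)
    if len(message_bits) > flat.size:
        raise ValueError("message too long for this cover image")
    flat[:len(message_bits)] = (flat[:len(message_bits)] & 0xFE) | message_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    return stego.flatten()[:n_bits] & 1

# Illustrative 8x8 grayscale cover and a short message.
cover = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
bits = np.unpackbits(np.frombuffer(b"hi", dtype=np.uint8))      # 16 message bits
stego = lsb_embed(cover, bits)
recovered = np.packbits(lsb_extract(stego, bits.size)).tobytes()
print(recovered)                                                 # b'hi'
print(int(np.abs(stego.astype(int) - cover.astype(int)).max())) # per-pixel distortion <= 1
```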

Keywords: Information Hiding, LSB Matching, PVD Steganography.

8135 Cloud Computing Support for Diagnosing Researches

Authors: A. Amirov, O. Gerget, V. Kochegurov

Abstract:

One of the main biomedical problems lies in detecting dependencies in semi-structured data. The solution includes a biomedical portal and algorithms (integral health rating criteria, multidimensional data visualization methods). The biomedical portal allows diagnostic and research data to be processed in parallel using Microsoft System Center 2012 and Windows HPC Server cloud technologies. The service does not expose its internal calculations; instead it provides a practical interface. When data are sent for processing, the user may track the status of the task and obtains the results as soon as the computation is completed. The service includes its own algorithms and supports diagnosis and prediction of medical cases. The approved methods are based on complex-system entropy methods, algorithms for determining energy patterns of development and trajectory models of biological systems, and a logical-probabilistic approach with image blurring.

Keywords: Biomedical portal, cloud computing, diagnostic and prognostic research, mathematical data analysis.

8134 Mechanical Quadrature Methods and Their Extrapolations for Solving First Kind Boundary Integral Equations of Anisotropic Darcy's Equation

Authors: Xin Luo, Jin Huang, Chuan-Long Wang

Abstract:

Mechanical quadrature methods for solving the boundary integral equations of the anisotropic Darcy's equation with Dirichlet conditions in smooth domains are presented. By applying collectively compact theory, we prove the convergence and stability of the approximate solutions. The asymptotic expansions of the error show that the methods converge with order O(h³), where h is the mesh size. Based on this analysis, extrapolation methods can be introduced to achieve a higher convergence rate of O(h⁵). An a posteriori asymptotic error representation is derived in order to construct self-adaptive algorithms. Finally, numerical experiments show the efficiency of our methods.
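
The order-raising step rests on Richardson extrapolation between two mesh levels. A generic sketch, assuming an error expansion with leading term O(h³) rather than the paper's specific quadrature scheme, is given below.

```python
def richardson(A_h, A_h2, p=3, r=2.0):
    """Combine approximations at mesh sizes h and h/r assuming error ~ C*h^p.

    The leading C*h^p term cancels, so the combination is of higher order
    (O(h^5) when the expansion contains only odd powers, as in the paper).
    """
    return (r**p * A_h2 - A_h) / (r**p - 1.0)

# Toy check on a quantity with known value 1 and error 0.5*h^3 + 0.1*h^5.
exact = 1.0
approx = lambda h: exact + 0.5 * h**3 + 0.1 * h**5
h = 0.1
print(abs(approx(h) - exact))                              # ~5e-4, i.e. O(h^3)
print(abs(richardson(approx(h), approx(h / 2)) - exact))   # ~1e-7, much smaller
```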

Keywords: Darcy's equation, anisotropic, mechanical quadrature methods, extrapolation methods, a posteriori error estimate.

8133 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah

Abstract:

Hyperspectral imagery (HSI) typically provides a wealth of information captured over a wide range of the electromagnetic spectrum for each pixel in the image; hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution. Semantic interpretation is therefore a challenging task of HSI analysis. In this paper, we focus on object classification as HSI semantic interpretation. HSI classification still faces several issues, among them the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. The high number of spectral bands combined with the low number of training samples poses the problem of the curse of dimensionality. In order to resolve this problem, we introduce a dimensionality reduction process that aims to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships with different weights for the spatial neighbors of the centroid pixel. This semi-supervised band selection has been developed to select useful bands for object classification. The approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.

Keywords: Hyperspectral image, spatial hypergraph, dimensionality reduction, semantic interpretation, band selection, feature extraction.

8132 Two Fourth-order Iterative Methods Based on Continued Fraction for Root-finding Problems

Authors: Shengfeng Li, Rujing Wang

Abstract:

In this paper, we present two new one-step iterative methods based on Thiele's continued fraction for solving nonlinear equations. The iterative methods are obtained by applying the truncated Thiele's continued fraction twice. Analysis of convergence shows that the new methods are fourth-order convergent. Numerical tests verifying the theory are given, and two new one-step iterations are developed based on the methods.
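
The numerical verification mentioned above is usually done by estimating the computational order of convergence. The sketch below does this for Ostrowski's classical fourth-order one-step iteration, used here only as a stand-in because the Thiele-based formulas are not given in the abstract.

```python
import math

def ostrowski(f, df, x0, tol=1e-14, max_iter=25):
    """Ostrowski's classical fourth-order method; returns the iterate history."""
    xs = [x0]
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx
        fy = f(y)
        x = y - fy * fx / (dfx * (fx - 2.0 * fy))
        xs.append(x)
        if abs(f(x)) < tol:
            break
    return xs

f = lambda x: x**3 - 2.0          # root at 2**(1/3)
df = lambda x: 3.0 * x**2
xs = ostrowski(f, df, x0=1.5)
root = 2.0 ** (1.0 / 3.0)
errs = [abs(x - root) for x in xs if abs(x - root) > 0]
# Computational order of convergence from the first three errors,
# before floating-point rounding saturates the estimate:
# rho ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}).
coc = math.log(errs[2] / errs[1]) / math.log(errs[1] / errs[0])
print(round(coc, 2))              # close to 4 for a fourth-order method
```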

Keywords: Iterative method, Fixed-point iteration, Thiele's continued fraction, Order of convergence.

8131 Probe Selection for Pathway-Specific Microarray Probe Design Minimizing Melting Temperature Variance

Authors: Fabian Horn, Reinhard Guthke

Abstract:

In molecular biology, microarray technology is widely and successfully utilized to measure gene activity efficiently. For less studied organisms, methods to design custom-made microarray probes are available. One design criterion is to select probes with minimal melting temperature variance, thus ensuring similar hybridization properties. If the microarray application focuses on the investigation of metabolic pathways, it is not necessary to cover the whole genome; it is more efficient to cover each metabolic pathway with a limited number of genes. Firstly, an approach is presented which minimizes the overall melting temperature variance of the selected probes for all genes of interest. Secondly, the approach is extended to include the additional constraint of covering all pathways with a limited number of genes while minimizing the overall variance. The new optimization problem is solved by a bottom-up programming approach which reduces the complexity to make it computationally feasible. The new method is applied, as an example, to the selection of microarray probes covering all fungal secondary metabolite gene clusters of Aspergillus terreus.
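
The first criterion, one probe per gene with minimal overall melting temperature variance, can be approximated with a simple greedy pass, sketched below; this is a simplified heuristic for illustration, not the paper's bottom-up algorithm, and the candidate Tm values are invented.

```python
import statistics

# Hypothetical candidate probes per gene: gene -> list of melting temperatures (deg C).
candidates = {
    "geneA": [58.1, 60.2, 61.5],
    "geneB": [57.0, 59.8, 63.2],
    "geneC": [60.9, 62.4, 64.0],
}

# Greedy heuristic: start from each gene's median-Tm probe, then repeatedly
# move single probes toward the current mean while the overall variance decreases.
chosen = {g: sorted(tms)[len(tms) // 2] for g, tms in candidates.items()}
improved = True
while improved:
    improved = False
    mean_tm = statistics.mean(chosen.values())
    for gene, tms in candidates.items():
        best = min(tms, key=lambda t: abs(t - mean_tm))
        if best != chosen[gene]:
            trial = dict(chosen, **{gene: best})
            if statistics.pvariance(list(trial.values())) < statistics.pvariance(list(chosen.values())):
                chosen = trial
                improved = True

print(chosen, "variance:", round(statistics.pvariance(list(chosen.values())), 3))
```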

Keywords: bottom-up approach, gene clusters, melting temperature, metabolic pathway, microarray probe design, probe selection

8130 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets; CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets, with the best configuration yielding an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.

Keywords: lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record

8129 Protein Secondary Structure Prediction Using Parallelized Rule Induction from Coverings

Authors: Leong Lee, Cyriac Kandoth, Jennifer L. Leopold, Ronald L. Frank

Abstract:

Protein 3D structure prediction has always been an important research area in bioinformatics. In particular, the prediction of secondary structure has been a well-studied research topic. Despite recent breakthroughs in combining multiple sequence alignment information and artificial intelligence algorithms to predict protein secondary structure, the Q3 accuracy of various computational prediction algorithms has rarely exceeded 75%. In a previous paper [1], this research team presented a rule-based method called RT-RICO (Relaxed Threshold Rule Induction from Coverings) to predict protein secondary structure. The average Q3 accuracy on the sample datasets using RT-RICO was 80.3%, an improvement over comparable computational methods. Although this demonstrated that RT-RICO might be a promising approach for predicting secondary structure, the algorithm's computational complexity and program running time limited its use. Herein a parallelized implementation of a slightly modified RT-RICO approach is presented. This new version of the algorithm facilitated the testing of a much larger dataset of 396 protein domains [2]. Parallelized RT-RICO achieved a Q3 score of 74.6%, which is higher than the consensus prediction accuracy of 72.9% that was achieved for the same test dataset by a combination of four secondary structure prediction methods [2].

Keywords: data mining, protein secondary structure prediction, parallelization.

8128 Combining Diverse Neural Classifiers for Complex Problem Solving: An ECOC Approach

Authors: R. Ebrahimpour, M. Abbasnezhad Arabi, H. Babamiri Moghaddam

Abstract:

Combining classifiers is a useful method for solving complex problems in machine learning. The ECOC (Error Correcting Output Codes) method has been widely used for designing combinations of classifiers with an emphasis on their diversity. In this paper, in contrast to the standard ECOC approach in which individual classifiers are chosen homogeneously, classifiers are selected according to the complexity of the corresponding binary problem. We use the SATIMAGE database (containing 6 classes) for our experiments. The recognition error rate of our proposed method is 10.37%, which indicates a considerable improvement in comparison with the conventional ECOC and stacked generalization methods.
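
For reference, the homogeneous ECOC baseline that the proposed method is contrasted with can be run in a few lines with scikit-learn; the digits dataset below stands in for SATIMAGE, and the base learner is an arbitrary choice.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standard ECOC: every binary problem gets the same (homogeneous) base classifier.
ecoc = OutputCodeClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    code_size=2.0,          # code length = 2 * number of classes
    random_state=0,
)
ecoc.fit(X_train, y_train)
print("error rate: %.2f%%" % (100 * (1 - ecoc.score(X_test, y_test))))
```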

Keywords: Error correcting output code, combining classifiers, neural networks.

8127 Formal Verification of a Multicast Protocol in Mobile Networks

Authors: M. Matash Borujerdi, S.M. Mirzababaei

Abstract:

As computer network technology becomes increasingly complex, it becomes necessary to place greater requirements on the validity of developing standards and the resulting technology. Communication networks are based on large numbers of protocols, and the validity of these protocols has to be proved either individually or in an integrated fashion. One strategy for achieving this is to apply the growing field of formal methods. Formal methods research defines systems in higher-order logic so that automated reasoning can be applied for verification. In this research, we represent and implement a previously published multicast protocol in the Prolog language so that certain properties of the protocol can be verified. It is shown that by using this approach some minor faults in the protocol were found and repaired. Describing the protocol as facts and rules also has other benefits, i.e., it leads to processable knowledge. This knowledge can be transferred as an ontology between systems in KQML format. Since the Prolog knowledge base can be extended at any time, this method can also be used to build an intelligent, learning network.

Keywords: Formal methods, MobiCast, Mobile Network, Multicast.

8126 Non-Polynomial Spline Solution of Fourth-Order Obstacle Boundary-Value Problems

Authors: Jalil Rashidinia, Reza Jalilian

Abstract:

In this paper we use quintic non-polynomial spline functions to develop numerical methods for approximating the solution of a system of fourth-order boundary-value problems associated with obstacle, unilateral and contact problems. The convergence analysis of the methods is discussed, and it is shown that the given approximations are better than those of collocation and finite difference methods. Numerical examples are presented to illustrate the application of these methods and to compare the computed results with other known methods.

Keywords: Quintic non-polynomial spline, Boundary formula, Convergence, Obstacle problems.

8125 Advancing the Theory of Planned Behavior within Dietary and Physical Domains among Type 2 Diabetics: A Mixed Methods Approach

Authors: D.O. Omondi, M.K. Walingo, G.M. Mbagaya, L.O.A. Othuon

Abstract:

Many studies have applied the Theory of Planned Behavior (TPB) to predict health behaviors among unique populations. However, a new paradigm is emerging in which the focus is directed to modification and expansion of the TPB model rather than utilization of the traditional theory. This review proposes new models modified from the Theory of Planned Behavior and suggests an appropriate study design that can be used to test the models within the physical activity and dietary practice domains among Type 2 diabetics in Kenya. The review was conducted by means of a literature search in the fields of nutrition behavior, health psychology and mixed methods, using predetermined keywords. The results identify pre-intention and post-intention gaps within the TPB model that need to be filled. Additional psychosocial factors are proposed for inclusion in the TPB model to generate new models, whose efficacy should be tested using a mixed methods design.

Keywords: Physical activity, diet, Type 2 diabetes, behavior change theory, model.

8124 Modeling Language for Machine Learning

Authors: Tsuyoshi Okita, Tatsuya Niwa

Abstract:

For a given specific problem, finding an efficient algorithm has traditionally been the matter of study. However, there is an alternative, orthogonal approach called reduction. In general, for a given specific problem, the reduction approach studies how to convert the original problem into subproblems. This paper proposes a formal modeling language to support this reduction approach. We show three examples from the wide area of learning problems. The benefit is fast prototyping of algorithms for a given new problem.

Keywords: Formal language, statistical inference problem, reduction.

8123 Stagnation in Brownfield Redevelopment

Authors: B. Glumac, Q. Han, W. Schaefer

Abstract:

The purpose of this paper is twofold. First, it explains the major problems that are causing stagnation in brownfield redevelopment; in the context of the present multi-actor built environment, these problems are becoming more complex to observe. Second, it suggests a prospective decision-making approach that is the most appropriate for observing and reacting to the given stagnation problems. Such an approach should be regarded as a prescriptive-interactive decision-making approach, a barely established branch. It should offer models with both a prescriptive and an interactive component, enabling them to cope successfully with the multi-actor environment. Overall, this paper provides up-to-date insight into brownfield stagnation by gradually introducing today's major problems and offers a prospective decision-making approach to tackling them.

Keywords: BR, decision-making approach, stagnation, the Netherlands.

8122 Discovering User Behaviour Patterns from Web Log Analysis to Enhance the Accessibility and Usability of Website

Authors: Harpreet Singh

Abstract:

Finding relevant information on the World Wide Web is becoming increasingly challenging. Web usage mining is used for the extraction of relevant and useful knowledge, such as user behaviour patterns, from web access log records. The web access log records all the requests for individual files that users have made to the website. Web usage mining is important for Customer Relationship Management (CRM), as it can ensure customer satisfaction as far as the interaction between the customer and the organization is concerned. Web usage mining also helps improve website structure or design according to user requirements by analyzing the access log file of a website through a log analyzer tool. The focus of this paper is to enhance the accessibility and usability of a guitar-selling website by analyzing its access log through the Deep Log Analyzer tool. The results show that the largest number of users come from the United States and that they use the Opera 9.8 web browser and the Windows XP operating system.
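
Per-browser and per-operating-system tallies of the kind reported here can be computed from a raw access log with a short script. The sketch below assumes the common combined log format and a hypothetical file name, and is independent of the Deep Log Analyzer tool used in the paper.

```python
import re
from collections import Counter

# Combined log format: IP, identd, user, [time], "request", status, size, "referer", "user-agent"
LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} \S+ "[^"]*" "([^"]*)"')

browser_hits, os_hits = Counter(), Counter()
with open("access.log") as log:               # hypothetical log file path
    for line in log:
        m = LINE.match(line)
        if not m:
            continue
        agent = m.group(2)
        if "Opera" in agent:
            browser_hits["Opera"] += 1
        elif "Firefox" in agent:
            browser_hits["Firefox"] += 1
        elif "MSIE" in agent or "Trident" in agent:
            browser_hits["Internet Explorer"] += 1
        if "Windows NT 5.1" in agent:          # user-agent token for Windows XP
            os_hits["Windows XP"] += 1

print(browser_hits.most_common(3), os_hits.most_common(3))
```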

Keywords: Web usage mining, log file, web mining, data mining, Deep Log Analyzer.

8121 Interpolation of Geofield Parameters

Authors: A. Pashayev, C. Ardil, R. Sadiqov

Abstract:

Various methods of geofield parameter restoration (algebraic polynomials; filters; rational fractions; interpolation splines; geostatistical methods such as kriging; nearest-point search methods such as inverse distance, minimum curvature and local polynomial interpolation; and neural networks) have been analyzed, and some possible mistakes arising during geofield surface modeling have been presented.
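
One of the listed methods, inverse distance weighting, is simple enough to sketch directly; the sample points below are invented and the power parameter is the conventional value of 2.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of scattered geofield samples."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w @ values) / w.sum(axis=1)

# Invented sample points (x, y) with a measured parameter value at each.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([10.0, 12.0, 11.0, 15.0])
grid = np.array([[0.5, 0.5], [0.9, 0.9]])
print(idw(pts, vals, grid))   # values blended toward the nearest samples
```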

Keywords: interpolation methods, geofield parameters, neural networks.

8120 Comparative Safety Performance Evaluation of Profiled Deck Composite Slab from the Use of Slope-Intercept and Partial Shear Methods

Authors: Izian Abd. Karim, Kachalla Mohammed, Nora Farah A. A. Aziz, Law Teik Hua

Abstract:

The economy and ease of construction of profiled deck composite slabs are marred by the complex and uneconomic strength verification required for serviceability and general safety considerations. Besides this, factors such as shear span length, deck geometry and mechanical friction greatly influence the longitudinal shear strength that determines the ultimate strength of a profiled deck composite slab. Of the methods available for its determination, partial shear connection and slope-intercept (m-k) are the two methods provided by Eurocode 4. However, owing to the complexity of the shear behavior of profiled deck composite slabs, the use of these two methods to determine the load carrying capacity of such slabs yields different and conflicting values. This, coupled with the time and cost constraints associated with strength verification, is a source of growing concern, and the issue is critical. Treating some of these known shear-strength-influencing factors as random variables, the load carrying capacity violation of a profiled deck composite slab under each of the two Eurocode 4 methods is determined using a reliability approach, and the results are compared. The study reveals that safety values from the m-k method compare favorably with those from the partial shear method.

Keywords: Composite slab, first order reliability method, longitudinal shear, partial shear connection, slope-intercept.

8119 Modified Data Mining Approach for Defective Diagnosis in Hard Disk Drive Industry

Authors: S. Soommat, S. Patamatamkul, T. Prempridi, M. Sritulyachot, P. Ineure, S. Yimman

Abstract:

As the slider process in the hard disk drive industry becomes more complex, defect diagnosis for yield improvement becomes more complicated and time-consuming. Manufacturing data analysis with a data mining approach is widely used to solve this problem. The existing mining approach, which combines K-means clustering, the machine-oriented Kruskal-Wallis test and multivariate charts, has been applied to defect diagnosis, but it is still a semi-automatic diagnosis system. This article modifies the algorithm to support automatic decisions within the existing approach. Based on the research framework, the new approach performs diagnosis automatically and helps engineers find the defective factors about 50% faster than the existing approach.
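
The existing semi-automatic pipeline (K-means clustering followed by a machine-oriented Kruskal-Wallis test) can be outlined as below; the records and factor levels are synthetic placeholders, and the automatic decision rule proposed in the article is not reproduced.

```python
import numpy as np
import pandas as pd
from scipy.stats import kruskal
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic slider-process records: one quality metric and the machine that produced each unit.
df = pd.DataFrame({
    "quality": np.r_[rng.normal(1.0, 0.1, 60), rng.normal(0.6, 0.1, 40)],
    "machine": rng.choice(["M1", "M2", "M3"], size=100),
})

# 1. K-means separates good-like and defective-like units on the quality metric.
df["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(df[["quality"]])
defective = df[df["cluster"] == df.groupby("cluster")["quality"].mean().idxmin()]

# 2. Machine-oriented Kruskal-Wallis: does quality in the defective cluster differ by machine?
groups = [g["quality"].values for _, g in defective.groupby("machine")]
stat, p = kruskal(*groups)
print("H = %.2f, p = %.4f" % (stat, p))   # a small p flags the machine factor as suspect
```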

Keywords: Slider process, Defective diagnosis and Data mining.

8118 A Comparative Study of Web-pages Classification Methods using Fuzzy Operators Applied to Arabic Web-pages

Authors: Ahmad T. Al-Taani, Noor Aldeen K. Al-Awad

Abstract:

In this study, a fuzzy similarity approach for Arabic web page classification is presented. The approach uses a fuzzy term-category relation, manipulating membership degrees for the training data and degree values for a test web page. Six measures are used and compared in this study: the Einstein, algebraic, Hamacher, min-max, special-case fuzzy and bounded difference operators. These measures are applied and compared on 50 different Arabic web pages. The Einstein measure gave the best performance among the measures. An analysis of these measures and concluding remarks are also given.
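
The operators compared in the study correspond to well-known fuzzy conjunctions; the sketch below lists their textbook definitions applied to a pair of membership degrees (the special-case fuzzy measure and the full similarity aggregation over all terms are not reproduced).

```python
def algebraic(a, b):
    return a * b

def einstein(a, b):
    return (a * b) / (2.0 - (a + b - a * b))

def hamacher(a, b):
    return 0.0 if a == b == 0 else (a * b) / (a + b - a * b)

def min_max(a, b):            # "MinMax": conjunction by minimum
    return min(a, b)

def bounded_difference(a, b):
    return max(0.0, a + b - 1.0)

# Membership degree of a term in a category (training) vs. in the test page.
a, b = 0.7, 0.4
for op in (algebraic, einstein, hamacher, min_max, bounded_difference):
    print(f"{op.__name__:>18}: {op(a, b):.3f}")
```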

Keywords: Text classification, HTML, web pages, machine learning, fuzzy logic, Arabic web pages.

8117 Dialogue Meetings as an Arena for Collaboration and Reflection among Researchers and Practitioners

Authors: Kerstin Grunden, Ann Svensson, Berit Forsman, Christina Karlsson, Ayman Obeid

Abstract:

The research question of this article is whether the dialogue meeting method is relevant for reflective learning among researchers and practitioners when welfare technology is to be implemented in municipalities. A testbed was planned for a retirement home in a Swedish municipality, and the practitioners worked on a pre-study of that testbed. In the article, the dialogue between the researchers and the practitioners in the dialogue meetings is described and analyzed, and the potential of dialogue meetings as an arena for learning and reflection among researchers and practitioners is discussed. The research methodology is participatory action research with mixed methods (dialogue meetings, focus groups, participant observations). The main findings from the dialogue meetings were that the researchers learned more about the use of traditional research methods, while the practitioners learned more about how they could improve their use of the methods to facilitate change processes in their organization. These findings have the potential to lead both researchers and practitioners to a more relevant use of research methods in organizational change processes. It is concluded that dialogue meetings can be relevant for reflective learning among researchers and practitioners when welfare technology is to be implemented in a health care organization.

Keywords: Dialogue meetings, implementation, reflection, test bed, welfare technology, participatory action research.

8116 Numerical Modelling of Surface Waves Generated by Low Frequency Electromagnetic Field for Silicon Refinement Process

Authors: V. Geza, J. Vencels, G. Zageris, S. Pavlovs

Abstract:

One of the most promising methods to produce solar-grade silicon (SoG-Si) is refinement via the metallurgical route, and the most critical part of this route is the removal of boron and phosphorus. A new approach could address this problem: we propose creating surface waves on the silicon melt's surface in order to enlarge its area and accelerate the removal of boron via chemical reactions and of phosphorus via evaporation. A two-dimensional numerical model is created which couples electromagnetic and fluid dynamic simulations with free surface dynamics. First results show behaviour similar to experimental results from the literature.

Keywords: Numerical modelling, silicon refinement, surface waves, VOF method.

8115 Validation and Selection between Machine Learning Technique and Traditional Methods to Reduce Bullwhip Effects: a Data Mining Approach

Authors: Hamid R. S. Mojaveri, Seyed S. Mousavi, Mojtaba Heydar, Ahmad Aminian

Abstract:

The aim of this paper is to present a three-step methodology for forecasting supply chain demand. In the first step, various data mining techniques are applied in order to prepare the data for entering the forecasting models. In the second step, the modeling step, an artificial neural network and a support vector machine are presented, after defining the Mean Absolute Percentage Error (MAPE) index for measuring error. The structure of the artificial neural network is selected based on previous researchers' results, and in this article the accuracy of the network is increased by using sensitivity analysis. The best forecast from the classical forecasting methods (moving average, exponential smoothing, and exponential smoothing with trend) is obtained from the prepared data, and this forecast is compared with the results of the support vector machine and the proposed artificial neural network. The results show that the artificial neural network can forecast more precisely than the other methods. Finally, the stability of the forecasting methods is analyzed using raw data, and the effectiveness of the clustering analysis is also measured.
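
The error index and the classical baselines used in the comparison can be set up in a few lines, as sketched below on a synthetic demand series; the ANN and SVM models of the study are not reproduced.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def moving_average(y, window=3):
    """One-step-ahead forecasts from a trailing moving average."""
    return np.array([np.mean(y[t - window:t]) for t in range(window, len(y))])

def exp_smoothing(y, alpha=0.3):
    """One-step-ahead forecasts from simple exponential smoothing."""
    level, out = y[0], []
    for t in range(1, len(y)):
        out.append(level)
        level = alpha * y[t] + (1 - alpha) * level
    return np.array(out)

# Synthetic demand series standing in for supply chain orders.
rng = np.random.default_rng(0)
demand = 100 + 0.5 * np.arange(60) + rng.normal(0, 5, 60)

print("MA(3) MAPE: %.2f%%" % mape(demand[3:], moving_average(demand, 3)))
print("SES   MAPE: %.2f%%" % mape(demand[1:], exp_smoothing(demand, 0.3)))
```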

Keywords: Artificial Neural Networks (ANN), bullwhip effect, demand forecasting, Support Vector Machine (SVM).
