Search results for: Process mining
Paper Count: 5844

5484 Prediction Model for Leukemia Diseases Based on Data Mining Classification Algorithms with Best Accuracy

Authors: Fahd Sabry Esmail, M. Badr Senousy, Mohamed Ragaie

Abstract:

In recent years, there has been an explosion in the use of technologies that help discover diseases. For example, DNA microarrays allow us for the first time to obtain a "global" view of the cell, with great potential to provide accurate medical diagnoses and to help find the right treatment and cure for many diseases. Various classification algorithms can be applied to such microarray datasets to devise methods that predict the occurrence of leukemia. In this study, we compared classification accuracy and response time among eleven decision tree methods and six rule classifier methods using five performance criteria. The experimental results show that Random Tree produces the best results and takes the lowest time to build a model among the tree classifiers. Among the rule classifiers, the nearest-neighbor-like algorithm (NNge) performs best, with high accuracy and the lowest model-building time.
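
As a rough illustration of this kind of comparison, the minimal Python sketch below times and scores a few classifiers; the paper used Weka's Random Tree and NNge, which have no direct scikit-learn equivalents, so ExtraTreeClassifier and KNeighborsClassifier are assumed analogues and the dataset is synthetic:

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier

# Synthetic stand-in for a microarray dataset: few samples, many features.
X, y = make_classification(n_samples=72, n_features=1000, n_informative=40,
                           random_state=0)

classifiers = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "ExtraTree (Random Tree analogue)": ExtraTreeClassifier(random_state=0),
    "kNN (NNge analogue)": KNeighborsClassifier(n_neighbors=3),
}

for name, clf in classifiers.items():
    start = time.perf_counter()
    scores = cross_val_score(clf, X, y, cv=5)      # accuracy criterion
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy={scores.mean():.3f}, time={elapsed:.2f}s")
```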

Keywords: Data mining, classification techniques, decision tree, classification rule, leukemia diseases, microarray data.

5483 A Genetic Algorithm for Clustering on Image Data

Authors: Qin Ding, Jim Gasvoda

Abstract:

Clustering is the process of subdividing an input data set into a desired number of subgroups so that members of the same subgroup are similar and members of different subgroups have diverse properties. Many heuristic algorithms have been applied to the clustering problem, which is known to be NP-hard. Genetic algorithms have been used in a wide variety of fields to perform clustering; however, the technique's running time normally grows long with the size of the input set. This paper proposes an efficient genetic algorithm for clustering on very large data sets, especially image data sets. The genetic algorithm uses time-efficient techniques along with preprocessing of the input data set. We test our algorithm on both artificial and real image data sets, both of large size. The experimental results show that our algorithm outperforms the k-means algorithm in terms of running time as well as the quality of the clustering.
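
A minimal sketch of genetic-algorithm clustering, assuming a chromosome encodes k cluster centers and fitness is the sum of squared distances to the nearest center (lower is better); the paper's time-saving techniques and preprocessing are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 2))           # stand-in for (preprocessed) image data
k, pop_size, gens = 3, 20, 50

def sse(centers):
    # within-cluster scatter: each point charged to its nearest center
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1).sum()

pop = rng.random((pop_size, k, 2))                 # initial population
for _ in range(gens):
    fitness = np.array([sse(c) for c in pop])
    elite = pop[np.argsort(fitness)[: pop_size // 2]]   # selection
    partners = elite.copy()
    rng.shuffle(partners)
    # one-point crossover on the list of centers, then Gaussian mutation
    cut = rng.integers(1, k)
    children = np.concatenate([elite[:, :cut], partners[:, cut:]], axis=1)
    children += rng.normal(0, 0.02, children.shape)
    pop = np.concatenate([elite, children])

best = pop[np.argmin([sse(c) for c in pop])]
print("best centers:\n", best)
```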

Keywords: Clustering, data mining, genetic algorithm, image data.

5482 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory

Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan

Abstract:

Data fusion technology can be an effective way to extract useful information from multiple sources of data, and it has been widely applied in various applications. This paper presents a data fusion approach over multimedia data for event detection in Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. Two types of data enter the fusion. The first is features extracted from text with the bag-of-words method, weighted by term frequency-inverse document frequency (TF-IDF). The second is visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied to fuse the information from these two sources. Our experiments indicate that, compared to approaches using an individual data source, the proposed data fusion approach increases the prediction accuracy of event detection. The proposed method achieved a high accuracy of 0.97, compared with 0.93 for text only and 0.86 for images only.
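
A minimal sketch of Dempster's rule of combination for two mass functions over the frame {event, no_event}; the masses below are illustrative, not the TF-IDF/SIFT-derived values from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                    # mass on the empty set
    # normalize by the non-conflicting mass (Dempster's rule)
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

E, N = frozenset({"event"}), frozenset({"no_event"})
THETA = E | N                                      # total ignorance
m_text  = {E: 0.7, N: 0.2, THETA: 0.1}             # text-based evidence
m_image = {E: 0.6, N: 0.3, THETA: 0.1}             # image-based evidence
print(dempster_combine(m_text, m_image))
```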

Keywords: Data fusion, Dempster-Shafer theory, data mining, event detection.

5481 Using Ontology Search in the Design of Class Diagram from Business Process Model

Authors: Wararat Rungworawut, Twittie Senivongse

Abstract:

A business process model describes the process flow of a business and can be seen as the requirement for developing a software application. This paper discusses a BPM2CD guideline which complements the Model Driven Architecture concept by suggesting how to create a platform-independent software model in the form of a UML class diagram from a business process model. An important step is the identification of UML classes from the business process model. A technique for object-oriented analysis called domain analysis is borrowed: key concepts in the business process model are discovered and proposed as candidate classes for the class diagram. The paper enhances this step by using ontology search to help identify important classes for the business domain. As an ontology is a source of knowledge for a particular domain, and can itself link to ontologies of related domains, the search can give a refined set of candidate classes for the resulting class diagram.
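
A minimal sketch of ontology search for candidate classes using rdflib, assuming a local RDFS/OWL ontology file ("domain.owl") and a key concept URI; the paper's actual ontology source and search strategy are not specified here:

```python
from rdflib import Graph, RDFS, URIRef

g = Graph()
g.parse("domain.owl")                              # hypothetical ontology file

# Key concept discovered in the business process model (hypothetical URI).
concept = URIRef("http://example.org/ontology#Order")

candidates = set()
candidates.update(g.objects(concept, RDFS.subClassOf))   # more general classes
candidates.update(g.subjects(RDFS.subClassOf, concept))  # more specific classes

for c in candidates:                               # refined candidate classes
    print(c)
```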

Keywords: Business Process Model, Model Driven Architecture, Ontology, UML Class Diagram.

5480 Clinical Decision Support for Disease Classification Based on the Tests Association

Authors: Sung Ho Ha, Seong Hyeon Joo, Eun Kyung Kwon

Abstract:

In recent years, researchers have developed various tools and methodologies for effective clinical decision-making. Among those decisions, chest pain diseases have been one of the most important diagnostic issues, especially in an emergency department. To improve physicians' diagnostic ability, many researchers have developed diagnosis intelligence using machine learning and data mining. However, most conventional methodologies have been based on a single classifier for disease classification and prediction, which shows only moderate performance. This study utilizes an ensemble strategy that combines multiple different classifiers to help physicians diagnose chest pain diseases more accurately. Specifically, the ensemble integrates decision trees, neural networks, and support vector machines. The ensemble models are applied to real-world emergency data. This study shows that the performance of the ensemble models is superior to that of each single classifier.
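
A minimal sketch of the ensemble idea, combining a decision tree, a neural network, and an SVM with scikit-learn's VotingClassifier; the synthetic data is a stand-in for the paper's emergency-department records, and soft voting is an assumption about the combination scheme:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",                 # average predicted class probabilities
)
print("ensemble accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```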

Keywords: Diagnosis intelligence, ensemble approach, data mining, emergency department.

5479 Application of Feed Forward Neural Networks in Modeling and Control of a Fed-Batch Crystallization Process

Authors: Petia Georgieva, Sebastião Feyo de Azevedo

Abstract:

This paper focuses on nonlinear dynamic process modeling and model-based predictive control of a fed-batch sugar crystallization process, applying the concept of artificial neural networks as computational tools. The control objective is to force the operation to follow an optimal supersaturation trajectory. This is achieved by manipulating the feed flow rate of sugar liquor/syrup, which is considered the control input. A feed forward neural network (FFNN) model of the process is first built as part of the controller structure to predict the process response over a specified (prediction) horizon. The predictions are supplied to an optimization procedure that determines the values of the control action over a specified (control) horizon by minimizing a predefined performance index. The control task is rather challenging due to the strong nonlinearity of the process dynamics and variations in the crystallization kinetics. However, the simulation results demonstrated smooth behavior of the control actions and satisfactory reference tracking.
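
A minimal sketch of the predictive-control loop described above, assuming a trained one-step predictor f(state, u) (here a toy linear stand-in, not the paper's FFNN crystallization model) and a quadratic tracking index minimized over the control horizon:

```python
import numpy as np
from scipy.optimize import minimize

def f(state, u):
    # toy stand-in for the trained FFNN one-step predictor
    return 0.9 * state + 0.1 * u

def cost(u_seq, state, reference, horizon):
    # predicted tracking error plus a small control-effort penalty
    J = 0.0
    for i in range(horizon):
        state = f(state, u_seq[i])
        J += (state - reference) ** 2 + 0.01 * u_seq[i] ** 2
    return J

state, reference, horizon = 0.0, 1.2, 10
u0 = np.zeros(horizon)
res = minimize(cost, u0, args=(state, reference, horizon),
               bounds=[(0.0, 2.0)] * horizon)      # feed-rate limits
print("first control move:", res.x[0])             # receding-horizon step
```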

Keywords: Feed forward neural network, process modelling, model predictive control, crystallization process.

5478 A New Framework to Model a Secure E-Commerce System

Authors: A. Youseef, F. Liu

Abstract:

Existing information system (IS) development methods do not meet the requirements for resolving security-related IS problems, and they fail to integrate security and systems engineering successfully at all development process stages. Hence, security should be considered during the whole software development process and identified within the requirements specification. This paper proposes an integrated security and IS engineering approach across all software development process stages, using the i* language. The proposed framework is divided into three parts: modelling the business environment, modelling the information technology system, and modelling IS security. The results show that considering IS security goals throughout the system development process can have a positive influence on system implementation and better meet business expectations.

Keywords: Business Process Modelling (BPM), Information System Security, Software Development Process, Requirement Engineering.

5477 Carrying Out the Steps of the Decision Making Process in a Concrete Organization

Authors: Eva Štěpánková

Abstract:

The decision-making process is theoretically clearly defined. Generally, it includes problem identification and analysis, data gathering, goal and criteria setting, alternatives development, and optimal alternative choice and implementation. In practice, however, various modifications of the theoretical decision-making process can occur. Managers may consider some of the phases too complicated or unfeasible and thus not carry them out; conversely, some of the steps may be overestimated. The aim of the paper is to reveal and characterize how managers perceive the individual phases of the decision-making process. The research concerns managers in the military environment – commanders. A quantitative survey was conducted cross-sectionally across the individual levels of management of the Ministry of Defence of the Czech Republic. Based on a total of 135 respondents, the analysis focuses on which phases of the decision-making process are problematic or not carried out in practice and which are perceived to be the easiest. The reasons for these findings are then examined.

Keywords: Decision making, decision making process, decision problems.

5476 Parametric Studies of Ethylene Dichloride Purification Process

Authors: Sh. Arzani, H. Kazemi Esfeh, Y. Galeh Zadeh, V. Akbari

Abstract:

Ethylene dichloride (EDC) is a colorless liquid with a smell like chloroform. EDC is a simple chlorinated hydrocarbon obtained by chlorinating ethylene gas. Its chemical formula is C2H4Cl2, and it is used as the main intermediate in VCM production. The purification of EDC is therefore important in the petrochemical process. In this study, the purification unit of EDC was simulated and then validated. Finally, the impact of process parameters on the degree of EDC purity was studied. The results showed that increasing the feed flow increases the impure components in the reflux, resulting in a decrease in EDC purity.

Keywords: Ethylene dichloride, purification, EDC, simulation.

5475 Effect of Impurities in the Chlorination Process of TiO2

Authors: Seok Hong Min, Tae Kwon Ha

Abstract:

With increasing interest in Ti alloys, the process of extracting Ti from its typical ore, TiO2, has long been and will remain an important issue. As an intermediate product for the production of pigment or titanium metal sponge, titanium tetrachloride (TiCl4) is produced in a fluidized bed using high-grade TiO2 feedstock. The purity of TiCl4 after chlorination depends on the quality of the titanium feedstock. Since impurities in the crude TiCl4 are carried through to the final products, a purification process is required. The purification process includes fractional distillation and chemical treatment, which depend on the nature of the impurities present and the required quality of the final product. In this study, a thermodynamic analysis of the impurity effect in the chlorination process, which is the first step in extracting Ti from TiO2, has been conducted. All thermodynamic calculations were performed using the FactSage thermochemical software.

Keywords: Rutile, titanium, chlorination process, impurities, thermodynamic calculation, FactSage.

5474 Granularity Analysis for Spatio-Temporal Web Sensors

Authors: Shun Hattori

Abstract:

In recent years, much research has been done to mine the exploding Web world, especially User Generated Content (UGC) such as weblogs, for knowledge about various phenomena and events in the physical world, and Web services built on Web-mined knowledge have begun to be developed for the public. However, there are few detailed investigations of how accurately Web-mined data reflect physical-world data. It is problematic to utilize Web-mined data in public Web services without sufficiently ensuring their accuracy. Therefore, this paper introduces the simplest Web Sensor and a spatiotemporally-normalized Web Sensor to extract spatiotemporal data about a target phenomenon from weblogs retrieved by keyword(s) representing that phenomenon, and tries to validate the potential and reliability of the Web-sensed spatiotemporal data through four kinds of granularity analyses of correlation coefficients against per-day, per-region temperature, rainfall, snowfall, and earthquake statistics from the Japan Meteorological Agency as physical-world data: spatial granularity (a region's population density), temporal granularity (time period, e.g., per day vs. per week), representation granularity (e.g., "rain" vs. "heavy rain"), and media granularity (weblogs vs. microblogs such as Tweets).
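
A minimal sketch of one such granularity analysis: correlating a daily Web-sensed signal with physical-world statistics at per-day vs. per-week temporal granularity; both series here are synthetic stand-ins, not JMA or weblog data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("2012-01-01", periods=365, freq="D")
rainfall = pd.Series(rng.gamma(2.0, 3.0, 365), index=days)   # per-day statistic
# Web sensor: weblog hit counts loosely tracking rainfall, plus noise
web_sensor = 5.0 * rainfall + rng.normal(0.0, 20.0, 365)

print("per day :", web_sensor.corr(rainfall))
print("per week:", web_sensor.resample("W").sum()
                             .corr(rainfall.resample("W").sum()))
```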

Keywords: Granularity analysis, knowledge extraction, spatiotemporal data mining, Web credibility, Web mining, Web sensor.

5473 A Comprehensive Review on Different Mixed Data Clustering Ensemble Methods

Authors: S. Sarumathi, N. Shanthi, S. Vidhya, M. Sharmila

Abstract:

An extensive amount of data clustering research has been done under the unsupervised learning branch of data mining during the past two decades. Several approaches and methods have emerged focusing on clustering diverse data types, features of cluster models, and similarity rates of clusters. However, no single clustering algorithm performs best at extracting efficient clusters across all data. Consequently, to rectify this issue, a challenging new technique called the cluster ensemble method emerged. This approach serves as an alternative for the cluster analysis problem. The main objective of a cluster ensemble is to aggregate diverse clustering solutions so as to attain accuracy and to improve on the quality of individual clustering algorithms. Given the massive and rapid development of new methods in data mining, it is essential to critically analyze existing techniques and future directions. This paper shows a comparative analysis of different cluster ensemble methods along with their methodologies and salient features. This analysis should be very useful to the community of clustering experts and should also help in deciding the most appropriate method to resolve the problem at hand.
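
A minimal sketch of one family of methods the review covers, the co-association matrix: several k-means runs vote on pairwise co-membership, and a consensus partition is recovered from the resulting similarity; the data, run count, and linkage choice are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
n, runs = len(X), 10

coassoc = np.zeros((n, n))
for seed in range(runs):
    labels = KMeans(n_clusters=3, n_init=1, random_state=seed).fit_predict(X)
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= runs                                    # fraction of co-membership

# Consensus function: average-linkage clustering of the co-association
# similarity, expressed as a precomputed distance (1 - similarity).
consensus = AgglomerativeClustering(
    n_clusters=3, metric="precomputed", linkage="average"
).fit_predict(1.0 - coassoc)
print(np.bincount(consensus))
```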

Keywords: Clustering, Cluster Ensemble Methods, Co-association matrix, Consensus Function, Median Partition.

5472 Determining the Width and Depths of Cut in Milling on the Basis of a Multi-Dexel Model

Authors: Jens Friedrich, Matthias A. Gebele, Armin Lechler, Alexander Verl

Abstract:

Chatter vibrations and process instabilities are the most important factors limiting the productivity of the milling process. Chatter can lead to damage of the tool, the part, or the machine tool. Therefore, the estimation and prediction of process stability are very important. The process stability depends on the spindle speed, the depth of cut, and the width of cut. In milling, the process conditions are defined in the NC program. While the spindle speed is directly coded in the NC program, the depth and width of cut are unknown. This paper presents a new simulation-based approach for predicting the depth and width of cut of a milling process. The prediction is based on a material removal simulation with an analytically represented tool shape and a multi-dexel approach for the workpiece. The new calculation method allows the direct estimation of the depth and width of cut, which are the parameters influencing process stability, instead of the removed volume as existing approaches do. This knowledge can be used to predict the stability of new, unknown parts. Moreover, with an additional vibration sensor, the stability lobe diagram of a milling process can be estimated and improved based on the estimated depth and width of cut.
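
A minimal sketch of the underlying idea on a single-direction dexel (height-field) workpiece with a flat-end tool: per simulation step, the engaged dexels yield the depth of cut and an approximate width of cut. The paper's multi-dexel model and analytic tool shape are far richer; grid resolution, tool radius, and path are invented here:

```python
import numpy as np

res = 0.1                                          # dexel spacing [mm]
xs, ys = np.meshgrid(np.arange(0, 50, res), np.arange(0, 30, res))
heights = np.full(xs.shape, 20.0)                  # stock top at z = 20 mm

def mill_step(cx, cy, z, r=5.0):
    """Sink a flat-end tool of radius r to height z at (cx, cy)."""
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
    engaged = inside & (heights > z)
    depth_of_cut = (heights[engaged] - z).max() if engaged.any() else 0.0
    # extent of engaged dexels perpendicular to the x feed approximates
    # the width of cut
    width_of_cut = engaged.any(axis=1).sum() * res
    heights[engaged] = z                           # remove material
    return depth_of_cut, width_of_cut

for cx in np.arange(5.0, 45.0, 2.0):               # straight feed in x
    ap, ae = mill_step(cx, 15.0, 18.0)
print("last step: depth %.1f mm, width %.1f mm" % (ap, ae))
```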

Keywords: Dexel, process stability, material removal, milling.

5471 New Product Development Process on High-Tech Innovation Life Cycle

Authors: Gonçalo G. Aleixo, Alexandra B. Tenera

Abstract:

This work provides a new perspective on the innovation theme. It reveals that radical and incremental innovations are complementary during the innovation life cycle and are accomplished through distinct ways of developing new products. Each new product development process is constructed according to the nature of the innovation and the state of the product development. This paper proposes including, in the new product development process, the organizational functional areas that influence new product development.

Keywords: Cross-functional, Incremental Innovation, New Product Development Process, Radical Innovation.

5470 Phenomenological and Theoretical Analysis of Relativistic Temperature Transformation and Relativistic Entropy

Authors: Marko Popovic

Abstract:

There are three possible effects of the Special Theory of Relativity (STR) on a thermodynamic system. Planck and Einstein looked upon the process as isobaric; Ott, on the other hand, saw it as adiabatic. However, many logical reasons show that the process is isothermal. Our phenomenological consideration demonstrates that temperature is invariant under Lorentz transformation. In that case the process is isothermal, so volume and pressure are Lorentz covariant. If the process is isothermal, Boyle's law is Lorentz invariant. The equilibrium constant, Gibbs energy, activation energy, enthalpy, entropy, and extent of reaction also become Lorentz invariant.
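
For reference, the three competing temperature transformations under discussion can be written in their standard forms:

```latex
% \gamma = 1/\sqrt{1 - v^2/c^2} is the Lorentz factor
\begin{align*}
  T' &= T/\gamma  && \text{(Planck--Einstein: moving bodies appear cooler)}\\
  T' &= \gamma T  && \text{(Ott: moving bodies appear hotter)}\\
  T' &= T         && \text{(isothermal view argued here: $T$ is invariant)}
\end{align*}
```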

Keywords: STR, relativistic temperature transformation, Boyle's law, equilibrium constant, Gibbs energy.

5469 SUPAR: System for User-Centric Profiling of Association Rules in Streaming Data

Authors: Sarabjeet Kaur Kochhar

Abstract:

With the surge of stream processing applications, novel techniques are required for the generation and analysis of association rules in streams. Traditional rule mining solutions cannot handle streams because they generally require multiple passes over the data and do not guarantee results within a predictable, small time. Though researchers have proposed algorithms for generating rules from streams, there has not been much focus on their analysis. We propose association rule profiling, a user-centric process for analyzing association rules and attaching suitable profiles to them depending on their changing frequency behavior over a previous snapshot of time in a data stream. Association rule profiles provide insights into the changing nature of associations and can be used to characterize them. We discuss the importance of characteristics such as the predictability of linkages present in the data and propose a metric to quantify it. We also show how association rule profiles can aid in generating user-specific, more understandable, and actionable rules. The framework is implemented as SUPAR: System for User-centric Profiling of Association Rules in streaming data. The proposed system offers the following capabilities: i) continuous monitoring of the frequency of streaming item-sets and detection of significant changes therein for association rule profiling; ii) computation of metrics for quantifying the predictability of associations present in the data; iii) user-centric control of the characterization process: the user can control the framework through a) constraint specification and b) non-interesting rule elimination.
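
A minimal sketch in the spirit of capability (i): tracking item frequencies over successive snapshots of a stream and flagging significant changes between snapshots; the window size, threshold, and transactions are illustrative assumptions, not SUPAR's:

```python
from collections import Counter

WINDOW, THRESHOLD = 100, 0.05          # snapshot size, change trigger

def profile_stream(transactions):
    prev_freq = {}
    for start in range(0, len(transactions) - WINDOW + 1, WINDOW):
        snapshot = transactions[start:start + WINDOW]
        counts = Counter(item for t in snapshot for item in t)
        for item in prev_freq.keys() - counts.keys():
            counts[item] = 0           # item vanished from the stream
        for item, c in counts.items():
            freq = c / WINDOW
            if abs(freq - prev_freq.get(item, freq)) > THRESHOLD:
                print(f"snapshot {start // WINDOW}: "
                      f"'{item}' changed to {freq:.2f}")
            prev_freq[item] = freq

# 'bread' drops out of the stream after the first snapshot
profile_stream([("milk", "bread")] * 100 + [("milk",)] * 100)
```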

Keywords: Data Streams, User subjectivity, Change detection, Association rule profiles, Predictability.

5468 The Use of Process-Oriented Methods of Calculation to Determine the Costs of Logistics Processes

Authors: Tomas Cechura, Michal Simon

Abstract:

The aim of this paper is to propose a way of determining the costs of logistics processes using process-oriented calculation methods. The traditional approach treats logistics costs as part of manufacturing overhead, usually calculated as a percentage surcharge. In the traditional approach it is therefore not obvious where and in which activities costs are incurred, and it is impossible to trace logistics costs to products. Our approach tries to fix, or at least improve, this issue. Another benefit of applying the process approach is the identification of logistics processes that are otherwise buried in manufacturing overhead. The first part of this paper describes the development of process-oriented methods over time. The next part shows how the process-oriented method called Prozesskostenrechnung can be applied to logistics processes. The conclusion summarizes the advantages and disadvantages of using this method in logistics.
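
A minimal sketch of the process-oriented idea: each logistics sub-process gets a cost rate per cost-driver unit, so costs can be traced to products instead of hiding in a percentage overhead surcharge; the processes, drivers, and figures below are invented for illustration:

```python
processes = {                       # process: (period cost, driver quantity)
    "goods receipt": (40_000.0, 2_000),    # driver: incoming deliveries
    "picking":       (90_000.0, 30_000),   # driver: picked order lines
    "shipping":      (60_000.0, 5_000),    # driver: outgoing shipments
}
rates = {p: cost / qty for p, (cost, qty) in processes.items()}

# Logistics cost traced to one product batch: the driver units it consumed.
consumption = {"goods receipt": 4, "picking": 120, "shipping": 10}
product_cost = sum(rates[p] * n for p, n in consumption.items())
print(f"traceable logistics cost for the batch: {product_cost:.2f}")
```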

Keywords: Cost, logistics, calculation, process-oriented method.

5467 A Social Decision Support Mechanism for Group Purchasing

Authors: Lien-Fa Lin, Yung-Ming Li, Fu-Shun Hsieh

Abstract:

With the advancement of information technology and the development of group commerce, people's lifestyles have changed noticeably. However, group commerce faces some challenging problems. The products or services provided by vendors do not satisfactorily reflect customers' opinions, so the sales and revenue of group commerce gradually decline. On the other hand, the process for a formed customer group to reach group-purchasing consensus is time-consuming, and the final decision is not the best choice for every group member. In this paper, we design a social decision support mechanism that uses group discussion messages to recommend suitable options to group members, and we consider social influence and personal preference to generate an option ranking list. The proposed mechanism makes group-purchasing decision making more efficient and effective, and vendors can provide group products or services according to the group option ranking list.

Keywords: Social network, group decision, text mining, group commerce.

5466 Modelling for Roof Failure Analysis in an Underground Cave

Authors: M. Belén Prendes-Gero, Celestino González-Nicieza, M. Inmaculada Alvarez-Fernández

Abstract:

Roof collapse remains one of the most frequent problems in mines worldwide. There are many reasons why a roof may collapse, namely the stress activities of the mining process, lack of vigilance and carelessness, or the complexity of the geological structure and irregular operations. This work results from the analysis of an accident in the "Mary" coal exploitation located in northern Spain, in which the roof of a crossroads of galleries excavated to exploit the "Morena" layer, 700 m deep, collapsed. The paper collects the work done by the forensic team to determine the causes of the incident, its conclusions, and its recommendations. Initially, the available documentation (geology, geotechnics, mining, etc.) and the accident area were reviewed. After that, laboratory and on-site tests were carried out to characterize the behaviour of the rock materials and the support used (metal frames and shotcrete). With this information, different failure hypotheses were simulated to find the one that best fits reality. For this work, the three-dimensional finite-difference software FLAC 3D was employed. The results of the study confirmed that the detachment originated from sliding in the layer wall, due to the large roof span at the accident location, and was probably triggered by an insufficient protection pillar. The results allowed corrective measures to be established to avoid future risks: for example, the dimensions of the protection zones that must remain unexploited and their interaction with the crossing areas between galleries, or the use of supports more adequate for these conditions, in which significant deformations may discourage rigid supports such as shotcrete. Finally, a seismic monitoring grid was proposed as a predictive system. Its efficiency was tested during the investigation period using three monitoring units, which detected new (although smaller) incidents in other similar areas of the mine. These new incidents show that the use of explosives produces vibrations, which are a new risk factor to analyse in the near future.

Keywords: Forensic analysis, hypothesis modelling, roof failure, seismic monitoring.

5465 Prediction of Solidification Behavior of Al Alloy in a Cube Mold Cavity

Authors: N. P. Yadav, Deepti Verma

Abstract:

This paper focuses on mathematical modeling of the solidification of an Al alloy in a cube mold cavity to study the solidification behavior of the casting process. A parametric investigation of the solidification process inside the cavity was performed using a computational solidification/melting model coupled with a volume-of-fluid (VOF) model. An implicit filling algorithm is used to understand the overall process from the filling stage to solidification in a model metal casting process. The model is validated against past studies under the same conditions. The solidification process is analyzed by including the effect of pouring velocity as well as natural convection from the wall and the geometry of the cavity. These studies show the possibility of various defects arising during the solidification process.

Keywords: Buoyancy driven flow, natural convection driven flow, residual flow, secondary flow, volume of fluid.

5464 Cluster Algorithm for Genetic Diversity

Authors: Manpreet Singh, Keerat Kaur, Bhavdeep Singh

Abstract:

With hardware technology advancing, the cost of storage is decreasing, so there is an urgent need for new techniques and tools that can intelligently and automatically assist us in transforming this data into useful knowledge. Different data mining techniques have been developed that are helpful for handling these large databases [7]. Data mining is also finding its role in the field of biotechnology. Pedigree means the associated ancestry of a crop variety. Genetic diversity is the variation in the genetic composition of individuals within or among species, and it depends on the pedigree information of the varieties. Parents at lower hierarchic levels carry more weight for predicting genetic diversity than those at upper hierarchic levels; the weight decreases as the level increases. For crossbreeding, the two varieties should be as genetically diverse as possible so as to incorporate the useful characters of both varieties into the newly developed variety. This paper discusses searching and analyzing the possible pairs of varieties, selected on the basis of morphological characters, climatic conditions, and nutrients, so as to obtain the most optimal pair that can produce the required crossbred variety. An algorithm was developed to determine the genetic diversity between the selected wheat varieties, and a cluster analysis technique is used to retrieve the results.
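
A minimal sketch of level-weighted pedigree diversity in the sense described above, where ancestors shared at deeper levels contribute less similarity; the halving weights and toy pedigrees are assumptions, not the paper's scheme or data:

```python
def ancestors_by_level(pedigree, variety, max_level=3):
    levels, current = [], {variety}
    for _ in range(max_level):
        current = {p for v in current for p in pedigree.get(v, ())}
        levels.append(current)
    return levels

def diversity(pedigree, a, b, max_level=3):
    sim = 0.0
    for lvl, (anc_a, anc_b) in enumerate(
            zip(ancestors_by_level(pedigree, a, max_level),
                ancestors_by_level(pedigree, b, max_level))):
        weight = 1.0 / 2 ** lvl        # weight decreases as level increases
        union = anc_a | anc_b
        if union:
            sim += weight * len(anc_a & anc_b) / len(union)
    return 1.0 - sim / sum(1.0 / 2 ** l for l in range(max_level))

pedigree = {                           # variety -> its parents
    "W1": ("P1", "P2"), "W2": ("P1", "P3"), "W3": ("P4", "P5"),
    "P1": ("G1", "G2"), "P3": ("G2", "G3"), "P4": ("G4", "G5"),
}
print("W1-W2:", round(diversity(pedigree, "W1", "W2"), 3))  # related pair
print("W1-W3:", round(diversity(pedigree, "W1", "W3"), 3))  # diverse pair
```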

Keywords: Genetic diversity, pedigree, nutrients.

5463 Correlation-based Feature Selection using Ant Colony Optimization

Authors: M. Sadeghzadeh, M. Teshnehlab

Abstract:

Feature selection has recently been the subject of intensive research in data mining, especially for datasets with a large number of attributes. Recent work has shown that feature selection can have a positive effect on the performance of machine learning algorithms. The success of many learning algorithms, in their attempts to construct models of data, hinges on the reliable identification of a small set of highly predictive attributes. The inclusion of irrelevant, redundant, and noisy attributes in the model building phase can result in poor predictive performance and increased computation. In this paper, a novel feature search procedure that utilizes Ant Colony Optimization (ACO) is presented. ACO is a metaheuristic inspired by the behavior of real ants in their search for the shortest paths to food sources. It looks for optimal solutions by considering both local heuristics and previous knowledge. When applied to two different classification problems, the proposed algorithm achieved very promising results.
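
A minimal sketch of ACO-style feature selection: ants sample feature subsets guided by pheromone and a correlation-based local heuristic, and pheromone is reinforced on the best subset found; the parameters and update rule are illustrative, not the paper's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           random_state=0)
n_feat, n_ants, n_iter, subset_size, rho = X.shape[1], 10, 15, 8, 0.1

# local heuristic: absolute correlation of each feature with the class
heuristic = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_feat)])
pheromone = np.ones(n_feat)

def score(subset):
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, subset], y, cv=3).mean()

best_subset, best_score = None, -np.inf
for _ in range(n_iter):
    prob = pheromone * heuristic
    prob /= prob.sum()
    for _ in range(n_ants):
        subset = rng.choice(n_feat, size=subset_size, replace=False, p=prob)
        s = score(subset)
        if s > best_score:
            best_subset, best_score = subset, s
    pheromone *= (1 - rho)                        # evaporation
    pheromone[best_subset] += rho * best_score    # reinforce best-so-far
print("best subset:", sorted(best_subset), "accuracy:", round(best_score, 3))
```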

Keywords: Ant colony optimization, Classification, Data mining, Feature selection.

5462 Increasing the Capacity of Plant Bottlenecks by Improving the Ratio of Mean Time between Failures to Mean Time to Repair

Authors: Jalal Soleimannejad, Mohammad Asadizeidabadi, Mahmoud Koorki, Mojtaba Azarpira

Abstract:

Maintenance costs account for a significant percentage of production costs, and analyzing them can lead to greater productivity and competitiveness. With this in mind, the maintenance of machines and installations is considered an essential part of organizational functions, and applying effective strategies creates significant added value in manufacturing activities. Organizations are trying to achieve performance levels on a global scale, with an emphasis on creating competitive advantage, by methods such as RCM (Reliability-Centered Maintenance) and TPM (Total Productive Maintenance). In this study, increasing the capacity of the Concentration Plant of Golgohar Iron Ore Mining & Industrial Company (GEG) was examined using reliability and maintainability analyses. The results showed that, instead of increasing the number of machines to solve the bottleneck problems, improving reliability and maintainability would solve the bottleneck problems in the best way. It should be mentioned that the data set of the Concentration Plant of GEG was applied and analyzed as a case study.
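
A minimal sketch of the underlying arithmetic: inherent availability is MTBF / (MTBF + MTTR), so raising the MTBF:MTTR ratio lifts bottleneck capacity without adding machines; the figures below are illustrative, not GEG plant data:

```python
def availability(mtbf_hours, mttr_hours):
    # inherent availability: fraction of time the machine can produce
    return mtbf_hours / (mtbf_hours + mttr_hours)

nominal_rate = 500.0                   # t/h, bottleneck design throughput
before = availability(mtbf_hours=80.0, mttr_hours=20.0)    # ratio 4:1
after = availability(mtbf_hours=120.0, mttr_hours=10.0)    # ratio 12:1

print(f"capacity before: {nominal_rate * before:.0f} t/h "
      f"(availability {before:.0%})")
print(f"capacity after : {nominal_rate * after:.0f} t/h "
      f"(availability {after:.0%})")
```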

Keywords: Bottleneck, Golgohar Iron Ore Mining and Industrial Company, maintainability, maintenance costs, reliability.

5461 Analysis of Diverse Cluster Ensemble Techniques

Authors: S. Sarumathi, N. Shanthi, P. Ranjetha

Abstract:

Data mining is the procedure of discovering interesting patterns in huge amounts of data, and one of the most important supporting processes for accessing data faster is clustering. Clustering is the process of identifying similarity between data according to the characteristics present in the data and grouping associated data objects into clusters. A cluster ensemble is a technique that combines various runs of different clustering algorithms to obtain a general partition of the original dataset, aiming to consolidate the outcomes of a collection of individual clusterings. The performance of a clustering ensemble is mainly affected by two principal factors: diversity and quality. This paper presents an overview of different cluster ensemble algorithms, along with the methods they use to improve diversity and quality as reported in several cluster-ensemble-related papers, shows a comparative analysis of the different cluster ensembles, and summarizes various cluster ensemble methods. This clear analysis should be very useful to clustering experts and should also help in deciding the most appropriate method for the problem at hand.

Keywords: Cluster Ensemble, Consensus Function, CSPA, Diversity, HGPA, MCLA.

5460 Optimization of Surface Roughness in Additive Manufacturing Processes via Taguchi Methodology

Authors: Anjian Chen, Joseph C. Chen

Abstract:

This paper studies a case in which the targeted surface roughness of a fused deposition modeling (FDM) additive manufacturing process is improved. The aim is to reduce or eliminate defects and improve the process capability indices Cp and Cpk for an FDM additive manufacturing process. The baseline Cp is 0.274 and the baseline Cpk is 0.654. This research utilizes the Taguchi methodology to eliminate defects and improve the process. The Taguchi method is used to optimize the additive manufacturing process and the printing parameters that affect the targeted surface roughness of FDM additive manufacturing. The Taguchi L9 orthogonal array is used to organize the effects of the parameters (four controllable and one non-controllable) on the FDM additive manufacturing process. The four controllable parameters are nozzle temperature [°C], layer thickness [mm], nozzle speed [mm/s], and extruder speed [%]; the non-controllable parameter is the environmental temperature [°C]. After the optimization of the parameters, a confirmation print was made to prove that the results reduce the number of defects and improve the process capability index Cp from 0.274 to 1.605 and the Cpk from 0.654 to 1.233 for the FDM additive manufacturing process. The final results confirmed that the Taguchi methodology is sufficient to improve the surface roughness of the FDM additive manufacturing process.
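
A minimal sketch of the capability indices used above: Cp compares the specification width to the process spread, while Cpk also penalizes off-center runs; the roughness readings and specification limits below are invented for illustration:

```python
import numpy as np

def cp_cpk(samples, lsl, usl):
    # Cp = (USL - LSL) / 6*sigma; Cpk = min(USL - mu, mu - LSL) / 3*sigma
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

roughness = np.array([6.1, 6.4, 5.9, 6.2, 6.0, 6.3, 6.1, 5.8])  # Ra [um]
cp, cpk = cp_cpk(roughness, lsl=5.0, usl=7.5)
print(f"Cp = {cp:.3f}, Cpk = {cpk:.3f}")
```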

Keywords: Additive manufacturing, fused deposition modeling, surface roughness, Six-Sigma, Taguchi method, 3D printing.

5459 Line Balancing in the Hard Disk Drive Process Using Simulation Techniques

Authors: Teerapun Saeheaw, Nivit Charoenchai, Wichai Chattinnawat

Abstract:

Simulation modeling is an easy way to build models that represent real-life scenarios, identify bottlenecks, and enhance system performance. Using a valid simulation model may give several advantages in creating better manufacturing designs to improve system performance. This paper presents the results of implementing a simulation model to design a hard disk drive manufacturing process, applying line balancing to improve both productivity and quality. The balanced line showed an 86% decrease in work in process; output increased by an average of 80%, average time in the system decreased by 86%, and waiting time decreased by 90%.
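
A minimal sketch of the standard line-balance efficiency metric behind such a study: the ratio of total task time to (number of stations × cycle time); the station times below are invented, not the hard disk drive line's:

```python
def line_balance_efficiency(station_times):
    cycle_time = max(station_times)    # the slowest station paces the line
    return sum(station_times) / (len(station_times) * cycle_time)

before = [55, 40, 62, 35, 58]          # seconds per station
after = [50, 49, 52, 48, 51]           # after rebalancing tasks

print(f"efficiency before: {line_balance_efficiency(before):.0%}")
print(f"efficiency after : {line_balance_efficiency(after):.0%}")
```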

Keywords: Line balancing, Arena, hard disk drive process, simulation, work in process (WIP).

5458 Integration Process of Industrial Design and Engineering Design

Authors: Kazuhide Sugiyama, Hiroshi Osada

Abstract:

Lately, a management strategy that puts Industrial Design (ID) at its core has been recognized as more important, as technology and price alone cannot differentiate a product. The need to shorten product development time also shortens the ID development period, which necessitates ID process management. This research analyzes the status of the integration process of ID and Engineering Design (ED) for office equipment, which requires ID-ED collaboration, in order to clarify the issues affecting development efficiency and to propose solutions.

Keywords: Industrial Design (ID), Engineering Design (ED), Integration process, Office equipment.

5457 Using Perspective Schemata to Model the ETL Process

Authors: Valeria M. Pequeno, Joao Carlos G. M. Pires

Abstract:

Data warehouses (DWs) are repositories that contain the unified history of an enterprise for decision support. The data must be Extracted from information sources, Transformed, and integrated to be Loaded (ETL) into the DW, using ETL tools. These tools focus on data movement, and the models are only used as a means to this aim. From a conceptual viewpoint, the authors want to innovate the ETL process in two ways: 1) making the compatibility between models explicit in a declarative fashion, using correspondence assertions, and 2) identifying the instances from different sources that represent the same real-world entity. This paper presents an overview of the proposed framework for modeling the ETL process, which is based on the use of a reference model and perspective schemata. This approach provides the designer with a better understanding of the semantics associated with the ETL process.
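
A minimal sketch of the declarative flavor of both ideas: source-to-DW field mappings are expressed as data rather than procedural code, and a matching key identifies source instances that denote the same real-world entity; the schema and field names are invented, not the authors' reference model:

```python
SOURCE_TO_DW = [                 # (source, source field, DW field, transform)
    ("crm", "client_name", "customer_name", str.strip),
    ("erp", "cust_nm",     "customer_name", str.strip),
    ("crm", "tax_id",      "tax_id",        str.upper),
    ("erp", "vat_no",      "tax_id",        str.upper),
]
MATCH_KEY = "tax_id"             # same key value => same real-world entity

def load(records_by_source):
    entities = {}
    for source, rows in records_by_source.items():
        for row in rows:
            # apply the correspondence assertions relevant to this source
            mapped = {dw: fn(row[src_f])
                      for s, src_f, dw, fn in SOURCE_TO_DW
                      if s == source and src_f in row}
            entities.setdefault(mapped[MATCH_KEY], {}).update(mapped)
    return list(entities.values())

print(load({
    "crm": [{"client_name": " Acme Ltd ", "tax_id": "de123"}],
    "erp": [{"cust_nm": "Acme Ltd", "vat_no": "DE123"}],
}))
```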

Keywords: conceptual data model, correspondence assertions, data warehouse, data integration, ETL process, object relational database.

5456 Assessing Community Participation in Decision-Making Process under Co-Management: A Case Study on Hail Haor, Bangladesh

Authors: R. Ferdous

Abstract:

Power, responsibility sharing, and democratic decision-making are central to the ethos of co-management. It is assumed that involving the local community in the decision-making process can create a sense of ownership and responsibility in that community and motivate it towards collective action. But this paper demonstrates that the process of involving the local community is not simple and straightforward, as it is influenced by structural aspects, power relations among the actors, and socially embedded institutions. These factors shape who participates, how they participate, and how the local community maneuvers its agency in the decision-making process. To grasp the complexities that materialize in the process of participation, and to understand its inclusionary and exclusionary nature, this paper examines the subjective understandings of different stakeholders concerning participation and, furthermore, observes the enabling or constraining factors that affect the community's exercise of its agency.

Keywords: Participation, social embeddedness, power, structure.

5455 An Improved K-Means Algorithm for Gene Expression Data Clustering

Authors: Billel Kenidra, Mohamed Benmohammed

Abstract:

Data mining techniques used in the field of clustering are a subject of active research and assist in biological pattern recognition and the extraction of new knowledge from raw data. Clustering means the act of partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering. This category attempts to decompose the dataset directly into a set of disjoint clusters, leading to an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers, and a poor choice may lead to a local optimum that is quite inferior to the global optimum, we propose a strategy for initializing the K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results show that the efficiency has been significantly improved.
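
A minimal sketch of one well-known center-initialization strategy (k-means++-style seeding, which spreads initial centers apart in proportion to squared distance); the paper's own initialization strategy may differ:

```python
import numpy as np

def init_centers(X, k, rng):
    centers = [X[rng.integers(len(X))]]        # first center: random point
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()                  # far points are more likely
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0.0, 3.0, 6.0)])
print(init_centers(X, k=3, rng=rng))
```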

Keywords: Microarray data mining, biological pattern recognition, partitional clustering, k-means algorithm, centroid initialization.
