Search results for: process developed data warehouse.
13938 Developments for "Virtual" Monitoring and Process Simulation of the Cryogenic Pilot Plant
Authors: Carmen Maria Moraru, Iuliana Stefan, Ovidiu Balteanu, Ciprian Bucur, Liviu Stefan, Anisia Bornea, Ioan Stefanescu
Abstract:
The implementation of new software and hardware technologies in tritium processing nuclear plants, especially those of an experimental character or based on new technological developments, involves considerable complexity because of the issues raised by integrating high-performance instrumentation and equipment into a unitary monitoring system for the nuclear technological process of tritium removal. Preserving the system's flexibility is a requirement of experimental nuclear plants, for which changes of configuration, process, and parameters are routine. The large amount of data that must be processed, stored, and accessed for real-time simulation and optimization calls for a virtual technological platform in which the data acquisition, control, and analysis systems of the technological process can be integrated with a developed technological monitoring system. Integrated computing and monitoring systems needed for supervising the technological process will thus be implemented, followed by an optimization system based on new, high-performance methods appropriate to the technological processes within tritium removal processing nuclear plants. The software applications are developed with the support of program packages dedicated to industrial processes and include "virtual" acquisition and monitoring sub-modules, as well as a storage sub-module for the process data later required by the optimization and simulation software for the tritium removal process. The system plays an important role in environmental protection and sustainable development through new technologies, namely the reduction of and fight against industrial accidents in tritium processing nuclear plants. Research on monitoring optimisation of nuclear processes is also a major driving force for economic and social development.
Keywords: Monitoring system, process simulation.
13937 Decision Support System for Flood Crisis Management using Artificial Neural Network
Authors: Muhammad Aqil, Ichiro Kita, Akira Yano, Nishiyama Soichi
Abstract:
This paper presents an alternative approach that uses an artificial neural network to simulate flood level dynamics in a river basin. The algorithm was developed in a decision support system environment in order to enable users to process the data. The decision support system is found to be useful due to its interactive nature, flexibility of approach, and evolving graphical features, and it can be adopted for any similar situation to predict the flood level. The main data processing includes gauging station selection, input generation, lead-time selection/generation, and length of prediction. The program enables users to process the flood level data, to train/test the model using various inputs, and to visualize the results. The program code consists of a set of files, which can also be modified to suit other purposes. The results indicate that the decision support system applied to the flood level has reached encouraging results for the river basin under examination. The comparison of the model predictions with the observed data was satisfactory, and the model is able to forecast the flood level up to 5 hours in advance with reasonable prediction accuracy. Finally, the program may also serve as a tool for real-time flood monitoring and process control.
Keywords: Decision Support System, Neural Network, Flood Level
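The abstract does not include code; as a rough illustration of the kind of lagged-input neural network it describes (gauging-station levels as inputs, flood level several hours ahead as output), the following sketch uses scikit-learn's MLPRegressor. The data file, column names, lag structure, and 5-hour lead time are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: ANN flood-level forecasting with lagged inputs and a lead time.
# Assumed: a CSV with hourly 'upstream_level' and 'downstream_level' columns (hypothetical).
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def build_dataset(df, lags=3, lead=5):
    """Use the last `lags` hours of both stations to predict the downstream level `lead` hours ahead."""
    X, y = [], []
    for t in range(lags, len(df) - lead):
        past = df.iloc[t - lags:t][["upstream_level", "downstream_level"]].to_numpy().ravel()
        X.append(past)
        y.append(df["downstream_level"].iloc[t + lead])
    return np.array(X), np.array(y)

df = pd.read_csv("river_levels.csv")          # hypothetical input file
X, y = build_dataset(df, lags=3, lead=5)      # 5-hour-ahead forecast, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```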
13936 Optimal Bayesian Control of the Proportion of Defectives in a Manufacturing Process
Authors: Viliam Makis, Farnoosh Naderkhani, Leila Jafari
Abstract:
In this paper, we present a model and an algorithm for calculating the optimal control limit, average cost, sample size, and sampling interval for an optimal Bayesian chart to control the proportion of defective items produced, using a semi-Markov decision process approach. The traditional p-chart has been widely used for controlling the proportion of defectives in various kinds of production processes for many years. It is well known that traditional non-Bayesian charts are not optimal, but very few optimal Bayesian control charts have been developed in the literature, mostly considering a finite horizon. The objective of this paper is to develop a fast computational algorithm to obtain the optimal parameters of a Bayesian p-chart. The decision problem is formulated in the partially observable framework, and the developed algorithm is illustrated with a numerical example.
Keywords: Bayesian control chart, semi-Markov decision process, quality control, partially observable process.
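The paper's semi-Markov decision process formulation is not reproduced in the abstract; the sketch below only illustrates the Bayesian updating that such a chart monitors, signalling when the posterior probability of an excessive defective proportion crosses a limit. The Beta prior, acceptable proportion, sample sizes, and control limit are illustrative assumptions, not the optimal values the paper computes.

```python
# Minimal sketch of the Bayesian updating behind a Bayesian p-chart: after each sample,
# the posterior of the defective proportion p is updated, and the chart signals when
# the posterior probability that p exceeds p_max becomes too high.
from scipy import stats

a, b = 1.0, 19.0          # assumed Beta(a, b) prior: mean defective rate 5%
p_max = 0.08              # assumed acceptable proportion of defectives
control_limit = 0.90      # signal if P(p > p_max | data) exceeds this

samples = [(50, 2), (50, 3), (50, 7), (50, 9)]   # (sample size, defectives) per interval

for n, d in samples:
    a, b = a + d, b + n - d                      # conjugate Beta-Binomial update
    prob_out = 1.0 - stats.beta.cdf(p_max, a, b) # posterior P(p > p_max)
    print(f"n={n}, defectives={d}, P(p>{p_max})={prob_out:.3f}",
          "-> signal" if prob_out > control_limit else "-> in control")
```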
13935 A Quantitative Approach to Strategic Design of Component-Based Business Process Models
Authors: Eakong Atiptamvaree, Twittie Senivongse
Abstract:
A new paradigm for software design and development models software by its business process, translates the model into a process execution language, and has it run by a supporting execution engine. This process-oriented paradigm promotes modeling of software by less technical users or business analysts, as well as rapid development. Since business process models may be shared by different organizations, and sometimes even by different business domains, it is interesting to apply a technique from traditional software component technology to design reusable business processes. This paper discusses an approach that applies a technique for software component fabrication to the design of process-oriented software units, called process components. These process components result from decomposing a business process of a particular application domain into subprocesses, with the aim that the process components can be reused in different process-based software models. The approach is quantitative because the quality of a process component design is measured from technical features of the process components. The approach is also strategic because the measured quality is judged against business-oriented component management goals. A software tool has been developed to measure how good a process component design is, according to the required managerial goals and in comparison with other designs. We also discuss how we benefit from reusable process components.
Keywords: Business process model, process component, component management goals, measurement
13934 Perception-Oriented Model Driven Development for Designing Data Acquisition Process in Wireless Sensor Networks
Authors: K. Indra Gandhi
Abstract:
Wireless Sensor Networks (WSNs) have always been characterized by application-specific sensing, relaying, and collection of information for further analysis. However, software development was not considered a separate entity in this data collection process, which has posed severe limitations on software development for WSNs. Software development for WSNs is a complex process, since the components involved are data-driven, network-driven, and application-driven in nature. This implies a strong need for separation of concerns from the software development perspective. A layered approach to developing the data acquisition design based on Model Driven Development (MDD) is proposed, as the sensed data collection process itself varies depending upon the application under consideration. This work focuses on the layered view of the data acquisition process so as to ease software development. A metamodel is proposed that enables reusability and realization of the software as an adaptable component for WSN systems. Further, observations of user perception indicate that the proposed model helps improve programmer productivity by realizing the collaborative system involved.
Keywords: Model-driven development, wireless sensor networks, data acquisition, separation of concern, layered design.
13933 DEA ANN Approach in Supplier Evaluation System
Authors: Dilek Özdemir, Gül Tekin Temur
Abstract:
In Supply Chain Management (SCM), strengthening partnerships with suppliers is a significant factor for enhancing competitiveness. Hence, firms increasingly emphasize supplier evaluation processes. Supplier evaluation systems are basically developed in terms of criteria such as quality, cost, delivery, and flexibility. Because there are many variables to be analyzed, this process becomes hard to execute and needs expertise. Accordingly, this study aims to develop an expert system for the supplier evaluation process by designing an Artificial Neural Network (ANN) supported with Data Envelopment Analysis (DEA). The methods are applied to the data of 24 suppliers that have long-term relationships with a medium-sized company in the German iron and steel industry. The supplier data consist of variables such as material quality (MQ), discount of amount (DOA), discount of cash (DOC), payment term (PT), delivery time (DT), and annual revenue (AR). The efficiency scores generated by DEA are then added to the supplier evaluation system in order to be used as system outputs.
Keywords: Artificial Neural Network (ANN), Data Envelopment Analysis (DEA), Supplier Evaluation System.
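As a hedged sketch of the DEA stage only (the ANN that consumes the efficiencies is not reproduced), the following computes input-oriented CCR efficiency scores with a linear program. The supplier inputs and outputs below are made up for illustration and do not correspond to the 24 suppliers in the study.

```python
# Minimal sketch of input-oriented CCR DEA efficiency via linear programming (multiplier form):
# maximize u·y_o subject to v·x_o = 1 and u·y_j - v·x_j <= 0 for every supplier j.
import numpy as np
from scipy.optimize import linprog

# rows = suppliers (DMUs); inputs e.g. payment term, delivery time; outputs e.g. quality, revenue
X = np.array([[30, 5], [45, 7], [30, 4], [60, 10]], dtype=float)                  # inputs (assumed)
Y = np.array([[0.95, 1.2], [0.90, 2.0], [0.99, 0.8], [0.85, 2.5]], dtype=float)   # outputs (assumed)

def ccr_efficiency(o):
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])                     # maximize u·y_o -> minimize -u·y_o
    A_ub = np.hstack([Y, -X])                                    # u·y_j - v·x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)    # normalization v·x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

for o in range(len(X)):
    print(f"supplier {o}: CCR efficiency = {ccr_efficiency(o):.3f}")
```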
13932 Approximate Frequent Pattern Discovery Over Data Stream
Authors: Kittisak Kerdprasop, Nittaya Kerdprasop
Abstract:
Frequent pattern discovery over a data stream is a hard problem because the continuously generated nature of a stream does not allow revisiting each data element. Furthermore, the pattern discovery process must be fast to produce timely results. Based on these requirements, we propose an approximate approach to tackle the problem of discovering frequent patterns over a continuous stream. Our approximation algorithm is intended to be applied to a stream prior to the pattern discovery process. The results of approximate frequent pattern discovery are reported in the paper.
Keywords: Frequent pattern discovery, Approximate algorithm, Data stream analysis.
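The authors' own approximation algorithm is not given in the abstract; as a generic illustration of one-pass approximate frequency counting over a stream, the sketch below implements the classic lossy counting idea for single items (itemset extensions work similarly). The error bound epsilon and the example stream are assumptions.

```python
# Lossy-counting sketch: one pass over the stream, periodic pruning, approximate counts
# such that any item with true frequency >= epsilon * N is retained.
from collections import defaultdict

def lossy_counting(stream, epsilon=0.01):
    counts, bucket_width, current_bucket, n = defaultdict(int), int(1 / epsilon), 1, 0
    deltas = {}
    for item in stream:
        n += 1
        if item not in counts:
            deltas[item] = current_bucket - 1       # maximum possible undercount for this item
        counts[item] += 1
        if n % bucket_width == 0:                   # end of bucket: prune infrequent entries
            for it in list(counts):
                if counts[it] + deltas[it] <= current_bucket:
                    del counts[it], deltas[it]
            current_bucket += 1
    return counts, n

counts, n = lossy_counting(["a", "b", "a", "c", "a", "b"] * 1000, epsilon=0.05)
print({item: c for item, c in counts.items() if c >= 0.1 * n})   # items with support >= 10%
```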
13931 Project Selection Using Fuzzy Group Analytic Network Process
Authors: Hamed Rafiei, Masoud Rabbani
Abstract:
This paper deals with the project selection problem. The project selection problem first arose in the field of operations research, following production concepts from the primary product mix problem. Later, the introduction of managerial considerations into the project selection problem brought in qualitative factors and criteria alongside quantitative ones. To handle both kinds of criteria, an analytic network process is developed in this paper, enhanced with fuzzy sets theory to tackle the vagueness of experts' comments when evaluating the alternatives. Additionally, a modified least-squares method, formulated as a non-linear programming model, is added to the developed group decision-making structure in order to elicit the final weights from the comparison matrices. Finally, a case study is presented to validate the structure developed in this paper. Moreover, a sensitivity analysis is performed to examine the response of the model to changes in conditions.
Keywords: Analytic network process, Fuzzy sets theory, Nonlinear programming, Project selection.
13930 Yield Prediction Using Support Vectors Based Under-Sampling in Semiconductor Process
Authors: Sae-Rom Pak, Seung Hwan Park, Jeong Ho Cho, Daewoong An, Cheong-Sool Park, Jun Seok Kim, Jun-Geol Baek
Abstract:
It is important to predict yield in the semiconductor test process in order to increase it. In this study, yield prediction means effectively finding defective dies, wafers, or lots. The semiconductor test process consists of several test steps, and each test includes various test items. In other words, test data are large and complicated. They are also disproportionately distributed, as the number of records belonging to the FAIL class is extremely low. For yield prediction, general data mining techniques are limited without data preprocessing, owing to the inherent properties of test data. Therefore, this study proposes an under-sampling method using a support vector machine (SVM) to mitigate the imbalanced characteristic. To evaluate performance, a random under-sampling method is compared with the proposed method using actual semiconductor test data. As a result, the SVM-based sampling method is effective in generating a robust model for yield prediction.
Keywords: Yield Prediction, Semiconductor Test Process, Support Vector Machine, Under Sampling
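The abstract does not specify the exact selection rule; one plausible reading of SVM-based under-sampling is sketched below: fit an SVM on the imbalanced PASS/FAIL data, then keep only the PASS (majority) samples closest to the decision boundary together with all FAIL samples. The synthetic data, class ratio, and 3:1 retention ratio are assumptions for illustration.

```python
# Hedged sketch of SVM-based under-sampling for imbalanced PASS/FAIL test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, weights=[0.97, 0.03], random_state=0)  # 1 = FAIL (rare)

svm = SVC(kernel="rbf", class_weight="balanced", random_state=0).fit(X, y)
margin = np.abs(svm.decision_function(X))          # distance-like score to the decision boundary

fail_idx = np.where(y == 1)[0]
pass_idx = np.where(y == 0)[0]
keep = pass_idx[np.argsort(margin[pass_idx])[: 3 * len(fail_idx)]]   # borderline PASS samples only

X_bal = np.vstack([X[fail_idx], X[keep]])
y_bal = np.concatenate([y[fail_idx], y[keep]])
print("original class counts:", np.bincount(y), "-> balanced:", np.bincount(y_bal))
```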
13929 Using Data Mining in Automotive Safety
Authors: Carine Cridelich, Pablo Juesas Cano, Emmanuel Ramasso, Noureddine Zerhouni, Bernd Weiler
Abstract:
Safety is one of the most important considerations when buying a new car. While active safety aims at avoiding accidents, passive safety systems such as airbags and seat belts protect the occupant in case of an accident. In addition to legal regulations, organizations like Euro NCAP provide consumers with an independent assessment of the safety performance of cars and drive the development of safety systems in the automobile industry. Those ratings are mainly based on injury assessment reference values derived from physical parameters measured in dummies during a car crash test. The components and sub-systems of a safety system are designed to achieve the required restraint performance. Sled tests and other types of tests are then carried out by car makers and their suppliers to confirm the protection level of the safety system. A Knowledge Discovery in Databases (KDD) process is proposed in order to minimize the number of tests. The KDD process is based on data from sled tests performed according to Euro NCAP specifications. About 30 parameters of the passive safety systems from different data sources (crash data, dummy protocol) are first analysed together with experts' opinions. A procedure is proposed to manage missing data and is validated on real data sets. Finally, a procedure is developed to estimate a set of rough initial parameters of the passive system before testing, aiming at reducing the number of tests.
Keywords: KDD process, passive safety systems, sled test, dummy injury assessment reference values, frontal impact
13928 Zero Truncated Strict Arcsine Model
Authors: Y. N. Phang, E. F. Loh
Abstract:
Zero truncated models are usually used in modeling count data without zeros; they are the opposite of zero inflated models. Zero truncated Poisson and zero truncated negative binomial models have been discussed and used by researchers in analyzing the abundance of rare species and hospital stays. Zero truncated models are also used as the base in developing hurdle models. In this study, we developed a new model, the zero truncated strict arcsine model, which can be used as an alternative model for count data without zeros and with extra variation. Two simulated data sets and one real-life data set are fitted to the developed model. The results show that the model provides a good fit to the data. The maximum likelihood estimation method is used to estimate the parameters.
Keywords: Hurdle models, maximum likelihood estimation method, positive count data.
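The strict arcsine probability mass function is not reproduced in the abstract, so the sketch below only illustrates the zero-truncation and maximum likelihood machinery on the simpler zero-truncated Poisson, whose truncated pmf is P(X = k | X > 0) = e^(-λ) λ^k / (k! (1 - e^(-λ))) for k = 1, 2, .... The data are synthetic.

```python
# Maximum likelihood estimation for a zero-truncated Poisson (stand-in for the strict arcsine).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

data = np.array([1, 1, 2, 1, 3, 2, 1, 4, 2, 1, 1, 2, 5, 1, 2])   # positive counts only (synthetic)

def neg_loglik(lam):
    if lam <= 0:
        return np.inf
    # log pmf of the zero-truncated Poisson, summed over the sample
    ll = -lam + data * np.log(lam) - gammaln(data + 1) - np.log1p(-np.exp(-lam))
    return -ll.sum()

res = minimize_scalar(neg_loglik, bounds=(1e-6, 50), method="bounded")
print("MLE of lambda for the zero-truncated Poisson:", round(res.x, 4))
```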
13927 Combining Fuzzy Logic and Neural Networks in Modeling Landfill Gas Production
Authors: Mohamed Abdallah, Mostafa Warith, Roberto Narbaitz, Emil Petriu, Kevin Kennedy
Abstract:
Heterogeneity of solid waste characteristics, as well as the complex processes taking place within the landfill ecosystem, motivated the implementation of soft computing methodologies such as artificial neural networks (ANN), fuzzy logic (FL), and their combination. The present work uses a hybrid ANN-FL model that employs knowledge-based FL to describe the process qualitatively and implements the learning algorithm of ANN to optimize model parameters. The model was developed to simulate and predict the landfill gas production at a given time based on operational parameters. The experimental data used were compiled from a lab-scale experiment that involved various operating scenarios. The developed model was validated and statistically analyzed using the F-test, linear regression between actual and predicted data, and mean squared error measures. Overall, the simulated landfill gas production rates demonstrated reasonable agreement with the actual data. The discussion focuses on the effect of the size of the training datasets and the number of training epochs.
Keywords: Adaptive neural fuzzy inference system (ANFIS), gas production, landfill
13926 Powerful Tool to Expand Business Intelligence: Text Mining
Authors: Li Gao, Elizabeth Chang, Song Han
Abstract:
With the extensive inclusion of documents, especially text, in business systems, data mining does not cover the full scope of Business Intelligence. Data mining cannot deliver its impact on extracting useful details from the large collections of unstructured and semi-structured written material based on natural language. The most pressing issue is to draw the potential business intelligence from text. In order to gain competitive advantages for the business, it is necessary to develop a new powerful tool, text mining, to expand the scope of business intelligence. In this paper, we work out the strong points of text mining in extracting business intelligence from the huge amount of textual information sources within business systems. We apply text mining to each stage of Business Intelligence systems to show that text mining is a powerful tool for expanding the scope of BI. After reviewing basic definitions and some related technologies, we discuss their relationship to, and their benefits for, text mining. Some examples and applications of text mining are also given. The motivation is to develop a new approach to effective and efficient textual information analysis. Thus, we can expand the scope of Business Intelligence using the powerful tool of text mining.
Keywords: Business intelligence, document warehouse, text mining.
13925 Digital Twin of Real Electrical Distribution System with Real Time Recursive Load Flow Calculation and State Estimation
Authors: Anosh Arshad Sundhu, Francesco Giordano, Giacomo Della Croce, Maurizio Arnone
Abstract:
A Digital Twin (DT) is a technology that generates a virtual representation of a physical system or process, enabling real-time monitoring, analysis, and simulation. A DT of an Electrical Distribution System (EDS) can perform online analysis by integrating static and real-time data in order to show the current grid status, and predictions about the future status, to the Distribution System Operator (DSO), producers, and consumers. DT technology for an EDS also offers the DSO the opportunity to test hypothetical scenarios. This paper discusses the development of a DT of an EDS through a Smart Grid Controller (SGC) application, which is developed using open-source libraries and languages. The developed application can be integrated with the Supervisory Control and Data Acquisition (SCADA) system of any EDS to create the DT. The paper shows the performance of the developed tools inside the application, tested on a real EDS for grid observability, Smart Recursive Load Flow (SRLF) calculation, and state estimation of loads in MV feeders.
Keywords: Digital Twin, Distribution System Operator, Electrical Distribution System, Smart Grid Controller, Supervisory Control and Data Acquisition System, Smart Recursive Load Flow.
13924 Fault Detection of Drinking Water Treatment Process Using PCA and Hotelling's T2 Chart
Authors: Joval P George, Dr. Zheng Chen, Philip Shaw
Abstract:
This paper deals with the application of Principal Component Analysis (PCA) and Hotelling's T2 chart, using data collected from a drinking water treatment process. PCA is applied primarily for the dimensional reduction of the collected data. Hotelling's T2 control chart is used for fault detection in the process. The data were taken from a United Utilities multistage water treatment works and downloaded from an Integrated Program Management (IPM) dashboard system. The analysis of the results shows that Multivariate Statistical Process Control (MSPC) techniques such as PCA, and control charts such as Hotelling's T2, can be effectively applied for the early fault detection of continuous multivariable processes such as drinking water treatment. The software package SIMCA-P was used to develop the MSPC models and the Hotelling's T2 chart from the collected data.
Keywords: Principal component analysis, Hotelling's T2 chart, multivariate statistical process control, drinking water treatment.
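The paper uses SIMCA-P; as a language-agnostic illustration of the same monitoring scheme, the sketch below reduces the process variables with PCA, computes Hotelling's T2 on the retained scores, and flags points above the standard F-distribution control limit. The placeholder data, number of components, and confidence level are assumptions, not the plant data.

```python
# PCA + Hotelling's T2 monitoring sketch: train on in-control data, flag out-of-control samples.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.stats import f as f_dist

X_train = np.random.default_rng(0).normal(size=(200, 10))                 # placeholder in-control data
X_new = X_train + np.r_[np.zeros((190, 10)), 3 * np.ones((10, 10))]       # last 10 samples drift (fault)

scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=3).fit(scaler.transform(X_train))

def t2(X):
    scores = pca.transform(scaler.transform(X))
    return np.sum(scores**2 / pca.explained_variance_, axis=1)            # Hotelling's T2 on PC scores

n, a = X_train.shape[0], pca.n_components_
limit = a * (n - 1) * (n + 1) / (n * (n - a)) * f_dist.ppf(0.99, a, n - a)  # 99% control limit

alarms = np.where(t2(X_new) > limit)[0]
print("T2 control limit:", round(limit, 2), "| out-of-control samples:", alarms)
```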
13923 Use of Bayesian Network in Information Extraction from Unstructured Data Sources
Authors: Quratulain N. Rajput, Sajjad Haider
Abstract:
This paper applies Bayesian networks to support information extraction from unstructured, ungrammatical, and incoherent data sources for semantic annotation. A tool has been developed that combines ontologies, machine learning, information extraction, and probabilistic reasoning techniques to support the extraction process. Data acquisition is performed with the aid of knowledge specified in the form of an ontology. Due to the variable amount of information available on different data sources, it is often the case that the extracted data contain missing values for certain variables of interest. It is desirable in such situations to predict the missing values. The methodology presented in this paper first learns a Bayesian network from the training data and then uses it to predict missing data and to resolve conflicts. Experiments have been conducted to analyze the performance of the presented methodology. The results look promising, as the methodology achieves a high degree of precision and recall for information extraction and reasonably good accuracy for predicting missing values.
Keywords: Information Extraction, Bayesian Network, Ontology, Machine Learning
13922 Artificial Intelligence Model to Predict Surface Roughness of Ti-15-3 Alloy in EDM Process
Authors: Md. Ashikur Rahman Khan, M. M. Rahman, K. Kadirgama, M.A. Maleque, Rosli A. Bakar
Abstract:
Conventionally, the selection of parameters depends heavily on the operator's experience or on conservative technological data provided by the EDM equipment manufacturers, which yields inconsistent machining performance. The parameter settings given by the manufacturers are only relevant to common steel grades. A single parameter change influences the process in a complex way. Hence, the present research proposes artificial neural network (ANN) models for the prediction of surface roughness of Ti-15-3 alloy in the electrical discharge machining (EDM) process. The proposed models use peak current, pulse on time, pulse off time, and servo voltage as input parameters. Multilayer perceptron (MLP) feedforward networks with three hidden layers are applied. An assessment is carried out between models with distinct hidden-layer configurations. Training of the models is performed with data from an extensive series of experiments using a copper electrode with positive polarity. The predictions based on the developed models have been verified with another set of experiments and are found to be in good agreement with the experimental results. Besides this, they can be used as valuable tools for process planning in EDM.
Keywords: Ti-15-3, surface roughness, copper, positive polarity, multi-layered perceptron.
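A hedged sketch of the kind of MLP the abstract describes is given below: the four named EDM parameters as inputs, surface roughness as output, three hidden layers. The data file, column names, and layer sizes are assumptions for illustration; the authors' exact network and training data differ.

```python
# Three-hidden-layer MLP for surface roughness prediction from EDM parameters (illustrative).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("edm_experiments.csv")   # hypothetical file of EDM experiments
X = df[["peak_current", "pulse_on_time", "pulse_off_time", "servo_voltage"]]
y = df["surface_roughness"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(12, 8, 4), max_iter=5000, random_state=0),  # 3 hidden layers
)
model.fit(X_tr, y_tr)
print("R^2 on held-out experiments:", model.score(X_te, y_te))
```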
13921 A System for Analyzing and Eliciting Public Grievances Using Cache Enabled Big Data
Authors: P. Kaladevi, N. Giridharan
Abstract:
The system for analyzing and eliciting public grievances serves its main purpose of receiving and processing all sorts of complaints from the public and responding to users. Due to the large number of complaints, the data become big data, which are difficult to store and process. The proposed system uses HDFS to store the big data and MapReduce to process them. The concept of caching is applied in the system to provide immediate response and timely action using big data analytics; cache-enabled big data reduces the response time of the system. The unstructured data provided by users are efficiently handled through the MapReduce algorithm. The complaints are processed in the order of the hierarchy of authority. The drawbacks of the traditional database used in the existing system are addressed by our system through a cache-enabled Hadoop Distributed File System. MapReduce framework code can leak sensitive data through the computation process; we therefore propose a system that adds noise to the output of the reduce phase to avoid signaling the presence of sensitive data. If complaints are not processed within the allotted time, they are automatically forwarded to the higher authority, which ensures that processing is assured. A copy of the filed complaint is sent as a digitally signed PDF document to the user's e-mail address, which serves as proof. The system reports serve as essential data when making important decisions based on legislation.
Keywords: Big Data, Hadoop, HDFS, caching, MapReduce, web personalization, e-governance.
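The Hadoop/HDFS deployment itself is not reproduced here; the plain-Python sketch below only shows the map/reduce structure and the idea of adding noise to the reduce output so that exact sensitive counts are not revealed. The complaint records and the Laplace noise scale are illustrative assumptions.

```python
# Plain-Python map/reduce sketch with noise added to the reduce-phase output.
import random
from collections import defaultdict

complaints = [
    {"department": "water", "text": "no supply for two days"},
    {"department": "roads", "text": "large pothole near school"},
    {"department": "water", "text": "leaking pipeline"},
]

def map_phase(records):
    for rec in records:                       # map: emit (department, 1) per complaint
        yield rec["department"], 1

def reduce_phase(pairs, noise_scale=1.0):
    totals = defaultdict(int)
    for key, value in pairs:                  # shuffle + reduce: sum per department
        totals[key] += value
    rng = random.Random(0)
    # Laplace noise drawn as sign * exponential, masking exact sensitive counts
    return {k: v + rng.choice([1, -1]) * rng.expovariate(1 / noise_scale)
            for k, v in totals.items()}

print(reduce_phase(map_phase(complaints)))
```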
13920 Evaluation of Low-Reducible Sinter in Blast Furnace Technology by Mathematical Model Developed at Centre ENET, VSB – Technical University of Ostrava
Authors: S. Jursová, P. Pustějovská, S. Brožová, J. Bilík
Abstract:
The paper deals with possibilities for interpreting iron ore reducibility tests. It presents a mathematical model developed at Centre ENET, VŠB – Technical University of Ostrava, Czech Republic, for the evaluation of metallurgical blast furnace feedstock such as iron ore, sinter, or pellets. Based on the data from the test, the model predicts the material's use in blast furnace technology and its effects on the production parameters of the shaft aggregate. The paper first sums up the general concepts and experience in mathematical modelling of iron ore reduction. It presents the basic equations for the calculation and the main parts of the developed model. The experimental part gives an example of the use of the mathematical model. The paper describes the use of the data for predictive calculations and presents the material and the method of the iron ore reducibility test performed. The effects of the material used on carbon consumption, the rate of direct reduction, and the overall reduction process are then interpreted graphically.
Keywords: Blast furnace technology, iron ore reduction, mathematical model, prediction of iron ore reduction.
13919 Risk Classification of SMEs by Early Warning Model Based on Data Mining
Authors: Nermin Ozgulbas, Ali Serhan Koyuncugil
Abstract:
One of the biggest problems of SMEs is their tendency toward financial distress because of an insufficient financial background. In this study, an Early Warning System (EWS) model based on data mining for financial risk detection is presented. The CHAID algorithm has been used for the development of the EWS. With its automated nature, the developed EWS can serve as a tailor-made financial advisor in the decision-making process of firms that have an inadequate financial background. In addition, an application of the model was implemented covering 7,853 SMEs, based on Turkish Central Bank (TCB) 2007 data. Using the EWS model, 31 risk profiles, 15 risk indicators, 2 early warning signals, and 4 financial road maps have been determined for financial risk mitigation.
Keywords: Early Warning Systems, Data Mining, Financial Risk, SMEs.
13918 A Critical Study of Neural Networks Applied to the Ion-Exchange Process
Authors: John Kabuba, Antoine Mulaba-Bafubiandi, Kim Battle
Abstract:
This paper presents a critical study of the application of neural networks to the ion-exchange process. Ion exchange is a complex non-linear process involving many factors that influence the uptake of ions from the pregnant solution; the following step is elution. Published data present empirical isotherm equations with definite shortcomings, resulting in unreliable predictions. Although the neural network simulation technique has a number of disadvantages, including its 'black box' nature and a limited ability to explicitly identify possible causal relationships, it has the advantage of implicitly handling complex nonlinear relationships between dependent and independent variables. In the present paper, a neural network model based on the Levenberg-Marquardt back-propagation algorithm was developed using a three-layer approach, with a tangent sigmoid transfer function (tansig) at the hidden layer of 11 neurons and a linear transfer function (purelin) at the output layer. This approach has been used to test the effectiveness of the model in simulating ion-exchange processes. The modeling results showed excellent agreement between the experimental data and the predicted values of copper ions removed from aqueous solutions.
Keywords: Copper, ion-exchange process, neural networks, simulation
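The described architecture (11 tansig hidden neurons, purelin output) can be written out as a plain NumPy forward pass, sketched below. The input variables and weights are placeholders; the paper trains the weights with Levenberg-Marquardt back-propagation, which is not reproduced here.

```python
# Forward pass of a 3-layer network: tansig hidden layer (11 neurons), purelin output layer.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden = 4, 11                 # e.g. pH, concentration, temperature, contact time (assumed)
W1, b1 = rng.normal(size=(n_hidden, n_inputs)), rng.normal(size=n_hidden)
W2, b2 = rng.normal(size=(1, n_hidden)), rng.normal(size=1)

def tansig(x):
    return np.tanh(x)                      # MATLAB's tansig is equivalent to tanh

def predict(x):
    hidden = tansig(W1 @ x + b1)           # hidden layer: tangent sigmoid transfer function
    return (W2 @ hidden + b2)[0]           # output layer: linear (purelin) transfer function

x = np.array([5.5, 0.02, 25.0, 60.0])      # one assumed operating condition
print("predicted removal (untrained placeholder weights):", predict(x))
```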
13917 Information Technologies in Automotive Assembly Industry in Thailand
Authors: Jirarat Teeravaraprug, Usawadee Inklay
Abstract:
This paper attempts to prioritize the information technologies on which organizations should concentrate. The case study covers organizations in the automotive assembly industry in Thailand. Data were first collected to gather all information technologies known and used in the automotive assembly industry in Thailand. Five experts from the industry were then surveyed based on the fuzzy DEMATEL concept. The information technologies were categorized into six groups: communication, transaction, planning, organization management, warehouse management, and transportation. The cause group of information technologies for each group was analyzed and presented. Moreover, the relationship between the information technologies in use and the significant ones is given. Discussions based on the information technologies in use and the research results are provided.
Keywords: Information technology, automotive assembly industry, fuzzy DEMATEL.
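The experts' fuzzy ratings are not reproduced here; the sketch below shows only the crisp DEMATEL step that separates cause and effect groups once a (defuzzified) direct-relation matrix A is available: normalize A, compute the total-relation matrix T = D(I - D)^(-1), and classify each factor by prominence (D+R) and relation (D-R). The matrix A and the four group names used are illustrative.

```python
# Crisp DEMATEL sketch: total-relation matrix and cause/effect classification.
import numpy as np

A = np.array([[0, 3, 2, 1],        # assumed averaged influence scores between 4 IT groups
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())     # normalized direct-relation matrix
T = D @ np.linalg.inv(np.eye(len(A)) - D)                  # total-relation matrix T = D (I - D)^-1

row = T.sum(axis=1)                                        # influence given (D)
col = T.sum(axis=0)                                        # influence received (R)
for i, name in enumerate(["communication", "transaction", "planning", "warehouse"]):
    group = "cause" if row[i] - col[i] > 0 else "effect"
    print(f"{name:>13}: prominence={row[i] + col[i]:.2f}, relation={row[i] - col[i]:.2f} -> {group} group")
```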
13916 Contact Drying Simulation of Particulate Materials: A Comprehensive Approach
Authors: Marco Intelvi, Apolinar Picado, Joaquín Martínez
Abstract:
In this work, simulation algorithms for contact drying of agitated particulate materials under vacuum and at atmospheric pressure were developed. The implemented algorithms give predictive estimates of drying rate curves and bulk bed temperature during contact drying. The calculations are based on the penetration model to describe the drying process, where all process parameters, such as heat and mass transfer coefficients, effective bed properties, and gas and liquid phase properties, are estimated with proper correlations. Simulation results were compared with experimental data from the literature. In both cases, the simulation results were in good agreement with the experimental data. A few deviations were identified, and the limitations of the predictive capabilities of the models are discussed. The programs give good insight into the drying behaviour of the analysed powders.
Keywords: Agitated bed, atmospheric pressure, penetration model, vacuum
13915 Introducing Fast Robot Roller Hemming Process in Automotive Industry
Authors: Babak Saboori, Behzad Saboori, Johan S. Carlson, Rikard Söderberg
Abstract:
As product life cycles become ever shorter, flexible manufacturing processes are increasingly in demand for any company. In the assembly of closures, i.e., the opening parts of a car body, the hemming process is the one that needs the most attention. This paper focuses on the robot roller hemming process and on how to reduce its cycle time by introducing a fast roller hemming process. A robot roller hemming process for the tailgate of the Saab 93 SportCombi model is investigated as a case study. By applying task separation, robot coordination, and robot cell configuration principles to the roller hemming process, three alternatives are proposed and developed, and a remarkable reduction in cycle time is achieved [1].
Keywords: Cell configuration, cycle time, robot coordination, roller hemming.
13914 Research of Data Cleaning Methods Based on Dependency Rules
Authors: Yang Bao, Shi Wei Deng, Wang Qun Lin
Abstract:
This paper introduces the concept and principles of data cleaning, analyzes the types and causes of dirty data, and proposes several key steps of a typical cleaning process. It puts forward a data cleaning framework with good scalability and versatility. For data with attribute dependency relations, it designs several violation-data discovery algorithms, expressed by formal formulas, which can find data inconsistent with the target columns of a conditional attribute dependency regardless of whether the data are structured (SQL) or unstructured (NoSQL), and gives six data cleaning methods based on these algorithms.
Keywords: Data cleaning, dependency rules, violation data discovery, data repair.
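The paper's formal algorithms are not reproduced in the abstract; the pandas sketch below only shows the core idea of violation-data discovery for an attribute dependency: rows where the same determinant value maps to more than one dependent value are flagged as inconsistent, regardless of whether they were loaded from SQL or NoSQL sources. The column names and data are illustrative.

```python
# Detect rows violating the functional dependency zip_code -> city.
import pandas as pd

df = pd.DataFrame({
    "zip_code": ["10115", "10115", "20095", "20095", "20095"],
    "city":     ["Berlin", "Berlin", "Hamburg", "Hamburg", "Hamburgg"],   # typo -> violation
})

def dependency_violations(data, determinant, dependent):
    """Return the rows that violate the dependency determinant -> dependent."""
    groups = data.groupby(determinant)[dependent].nunique()
    bad_keys = groups[groups > 1].index          # determinant values mapping to >1 dependent value
    return data[data[determinant].isin(bad_keys)]

print(dependency_violations(df, "zip_code", "city"))
```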
13913 A Spatial Point Pattern Analysis to Recognize Fail Bit Patterns in Semiconductor Manufacturing
Authors: Youngji Yoo, Seung Hwan Park, Daewoong An, Sung-Shick Kim, Jun-Geol Baek
Abstract:
The yield management system is very important for producing high-quality semiconductor chips in the semiconductor manufacturing process. In order to improve the quality of semiconductors, various tests are conducted in the post-fabrication (FAB) process. During the test process, a large amount of data is collected, and the data include much information about defects. In general, defects on the wafer are the main cause of yield loss. Therefore, analyzing the defect data is necessary to improve the performance of yield prediction. The wafer bin map (WBM) is one of the data sets collected in the test process and includes defect information such as the fail bit patterns. Fail bits have the characteristics of spatial point patterns. Therefore, this paper proposes a feature extraction method using spatial point pattern analysis. Actual data obtained from the semiconductor process are used for the experiments, and the experimental results show that the proposed method recognizes the fail bit patterns more accurately.
Keywords: Semiconductor, wafer bin map (WBM), feature extraction, spatial point patterns, contour map.
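The paper's exact feature set is not given in the abstract; as one example of a spatial point pattern feature, the sketch below computes a Clark-Evans style nearest-neighbour ratio from fail-bit coordinates on a wafer bin map: values well below 1 indicate clustering, values near 1 indicate spatial randomness. The coordinates are synthetic.

```python
# Nearest-neighbour (Clark-Evans) ratio as a spatial point pattern feature for fail bits.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
random_bits = rng.uniform(0, 100, size=(200, 2))                   # spatially random fails
clustered_bits = rng.normal(loc=(30, 70), scale=3, size=(200, 2))  # clustered fails (localized defect)

def clark_evans_ratio(points, area):
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)                 # k=2: nearest neighbour other than the point itself
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(len(points) / area)   # expected mean NN distance under randomness
    return observed / expected

print("random pattern   :", round(clark_evans_ratio(random_bits, 100 * 100), 2))
print("clustered pattern:", round(clark_evans_ratio(clustered_bits, 100 * 100), 2))
```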
13912 An Improved Data Mining Method Applied to the Search of Relationship between Metabolic Syndrome and Lifestyles
Authors: Yi Chao Huang, Yu Ling Liao, Chiu Shuang Lin
Abstract:
A data cutting and sorting method (DCSM) is proposed to optimize the performance of data mining. DCSM reduces the calculation time by getting rid of redundant data during the data mining process. In addition, DCSM minimizes the computational units by splitting the database and by sorting data with support counts. In the process of searching for the relationship between metabolic syndrome and lifestyles with the health examination database of an electronics manufacturing company, DCSM demonstrates higher search efficiency than the traditional Apriori algorithm in tests with different support counts.
Keywords: Data mining, data cutting and sorting method, Apriori algorithm, metabolic syndrome
13911 Flexible, Adaptable and Scalable Business Rules Management System for Data Validation
Authors: Kashif Kamran, Farooque Azam
Abstract:
The policies governing the business of any organization are well reflected in its business rules. The business rules are implemented by data validation techniques coded during the software development process. Any change in business policies results in a change to the code written for the data validation used to enforce them. Implementing a change in business rules without changing the code is the objective of this paper. The proposed approach enables users to create rule sets at run time, once the software has been developed. The rule sets newly defined by end users are associated with the data variables for which validation is required. The proposed approach allows users to define business rules using all the comparison operators and Boolean operators. Multithreading is used to validate the data entered by the end user against the applied business rules. The evaluation of the data is performed by a newly created thread using an enhanced form of the RPN (Reverse Polish Notation) algorithm.
Keywords: Business rules, data validation, multithreading, Reverse Polish Notation
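The paper's enhanced RPN algorithm and its multithreaded evaluation are not reproduced; below is a minimal sketch of stack-based evaluation of a business rule expressed in Reverse Polish Notation, with comparison and Boolean operators applied to one data record. The rule, field names, and record are illustrative.

```python
# Stack-based evaluation of a business rule written in Reverse Polish Notation.
import operator

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt, "<=": operator.le,
       "==": operator.eq, "!=": operator.ne,
       "AND": lambda a, b: a and b, "OR": lambda a, b: a or b}

def evaluate_rpn(tokens, record):
    stack = []
    for tok in tokens:
        if tok in OPS:                              # operator: pop two operands, push result
            right, left = stack.pop(), stack.pop()
            stack.append(OPS[tok](left, right))
        elif tok in record:                         # field name: push the record's value
            stack.append(record[tok])
        else:                                       # literal constant
            stack.append(float(tok))
    return stack.pop()

# infix rule "(age >= 18 AND age <= 65) AND salary > 1000" written in RPN:
rule = ["age", "18", ">=", "age", "65", "<=", "AND", "salary", "1000", ">", "AND"]
print(evaluate_rpn(rule, {"age": 34, "salary": 2500.0}))   # -> True
```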
13910 Prediction of Compressive Strength of SCC Containing Bottom Ash using Artificial Neural Networks
Authors: Yogesh Aggarwal, Paratibha Aggarwal
Abstract:
The paper presents a performance comparison of models developed to predict 28-day compressive strength using neural network techniques, for data taken from the literature (ANN-I) and data developed experimentally for SCC containing bottom ash as a partial replacement of fine aggregates (ANN-II). The data used in the models are arranged in the format of six and eight input parameters, respectively, covering the contents of cement, sand, coarse aggregate, fly ash as a partial replacement of cement, bottom ash as a partial replacement of sand, water and water/powder ratio, and superplasticizer dosage, with the output parameter being the 28-day compressive strength for ANN-I and the compressive strengths at 7, 28, 90, and 365 days for ANN-II. The importance of the different input parameters for predicting the strengths at various ages using the neural network is also given. The model developed from literature data could easily be extended to the experimental data, with bottom ash as a partial replacement of sand, with some modifications.
Keywords: Self compacting concrete, bottom ash, strength, prediction, neural network, importance factor.
13909 Thailand National Biodiversity Database System with webMathematica and Google Earth
Authors: W. Katsarapong, W. Srisang, K. Jaroensutasinee, M. Jaroensutasinee
Abstract:
A National Biodiversity Database System (NBIDS) has been developed for collecting Thai biodiversity data. The goal of this project is to provide advanced tools for querying, analyzing, modeling, and visualizing patterns of species distribution for researchers and scientists. NBIDS records two types of datasets: biodiversity data and environmental data. Biodiversity data comprise species presence data and species status. The attributes of biodiversity data can be further classified into two groups: universal and project-specific attributes. Universal attributes are common to all of the records, e.g., X/Y coordinates, year, and collector name. Project-specific attributes are unique to one or a few projects, e.g., flowering stage. Environmental data include atmospheric, hydrology, soil, and land cover data collected using GLOBE protocols. We have developed web-based tools for data entry. Google Earth KML and ArcGIS were used as tools for map visualization. webMathematica was used for simple data visualization and also for advanced data analysis and visualization, e.g., spatial interpolation and statistical analysis. NBIDS will be used by park rangers at Khao Nan National Park and by researchers.
Keywords: GLOBE protocol, biodiversity, database system, ArcGIS, Google Earth, webMathematica.