Search results for: Predictive Data Mining
7319 Comparative Study of Universities’ Web Structure Mining
Authors: Z. Abdullah, A. R. Hamdan
Abstract:
This paper analyzes the ranking of the University of Malaysia Terengganu (UMT) website in the World Wide Web. Few studies have compared the rankings of universities' websites, so this research can determine whether the existing UMT website is serving its purpose, which is to introduce UMT to the world. The ranking is based on hub and authority values, which are derived from the structure of the website. These values are computed using two web-searching algorithms, HITS and SALSA. Three other universities' websites are used as benchmarks: UM, Harvard and Stanford. The results clearly show that more work has to be done on the existing UMT website, where important pages present in the benchmarks do not exist in UMT's pages. The ranking of UMT's website will act as a guideline for the web developer to develop a more efficient website.
Keywords: Algorithm, ranking, website, web structure mining.
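The hub and authority scores the paper ranks pages by can be computed with the HITS algorithm in a few lines. Below is a minimal, illustrative Python sketch using networkx's HITS implementation on a toy link graph; the page names are hypothetical, not UMT's actual site structure.

    import networkx as nx

    # Toy directed link graph; nodes are hypothetical university pages.
    G = nx.DiGraph()
    G.add_edges_from([
        ("home", "admissions"), ("home", "research"),
        ("research", "publications"), ("admissions", "home"),
        ("publications", "home"),
    ])

    # Power iteration until convergence; returns normalized scores.
    hubs, authorities = nx.hits(G, max_iter=1000, normalized=True)
    for page in sorted(authorities, key=authorities.get, reverse=True):
        print(f"{page}: authority={authorities[page]:.3f}, hub={hubs[page]:.3f}")

Pages that many good hubs point to get high authority; pages that point to many good authorities get high hub scores. SALSA, the paper's second algorithm, replaces the mutual reinforcement with random walks but consumes the same link structure.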
7318 Modeling and Control Design of a Centralized Adaptive Cruise Control System
Authors: Markus Mazzola, Gunther Schaaf
Abstract:
A vehicle driving with an Adaptive Cruise Control system (ACC) is usually controlled decentrally, based on information from radar systems and, in some publications, on C2X communication (CACC), to guarantee stable platoons. In this paper we present a Model Predictive Control (MPC) design for a centralized, server-based ACC system, whereby the vehicular platoon is modeled and controlled as a whole. It is then proven that the proposed MPC design guarantees asymptotic stability and hence string stability of the platoon. The networked MPC design is chosen so as to integrate system constraints optimally as well as to reduce the effects of communication delay and packet loss. The performance of the proposed controller is then simulated and analyzed in an LTE communication scenario using the LTE/EPC network simulator LENA, which is based on the ns-3 network simulator.
Keywords: Adaptive Cruise Control, Centralized Server, Networked Model Predictive Control, String Stability.
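To illustrate the receding-horizon optimization that an MPC design of this kind performs, here is a minimal Python sketch using cvxpy for a single follower modeled as a double integrator; it is not the paper's centralized platoon formulation, and the matrices, horizon and bounds are illustrative assumptions.

    import numpy as np
    import cvxpy as cp

    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [gap error, relative speed]
    B = np.array([[0.0], [dt]])             # input: follower acceleration
    N = 20                                  # prediction horizon
    x0 = np.array([5.0, -1.0])              # 5 m too far, closing at 1 m/s

    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = 0
    constraints = [x[:, 0] == x0]
    for k in range(N):
        # Penalize gap/speed error and control effort at every step.
        cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.abs(u[:, k]) <= 3.0]      # |a| <= 3 m/s^2

    cp.Problem(cp.Minimize(cost), constraints).solve()
    print("first control move:", float(u.value[0, 0]))

In closed loop only the first move is applied and the problem is re-solved at the next step; the centralized variant stacks all followers' states into one such program.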
7317 Modelling of Multi-Agent Systems for the Scheduling of Multi-EV Charging from Power Limited Sources
Authors: Manan’Iarivo Rasolonjanahary, Chris Bingham, Nigel Schofield, Masoud Bazargan
Abstract:
This paper presents research on, and the application of, model predictive scheduled charging of electric vehicles (EVs) subject to a limited available power resource. To focus on the algorithm and its operational characteristics, the EV interface to the source is modelled as a battery state equation during the charging operation. The researched methods allow for priority scheduling of EV charging in a multi-vehicle regime when subject to limited source power availability. Priority attribution for each connected EV is described. The validity of the developed methodology is shown through the simulation of different charging scenarios for multiple connected EVs, including non-scheduled and scheduled operation with various numbers of vehicles. The performance of the developed algorithms is also reported, with recommendations for the choice of suitable parameters.
Keywords: Model predictive control, non-scheduled, power limited sources, scheduled and stop-start battery charging.
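A minimal Python sketch of priority-based charging under a source power limit is given below; the priority rule (lowest state of charge first) and all numbers are illustrative assumptions, not the paper's algorithm.

    def schedule(evs, p_available):
        """evs: dicts with 'id', 'soc' (state of charge, 0-1), 'p_max' (kW)."""
        plan = {}
        # Higher priority for vehicles with a lower state of charge.
        for ev in sorted(evs, key=lambda e: e["soc"]):
            p = min(ev["p_max"], p_available)
            plan[ev["id"]] = p    # 0 kW means charging is paused (stop-start)
            p_available -= p
        return plan

    evs = [{"id": "EV1", "soc": 0.2, "p_max": 7.0},
           {"id": "EV2", "soc": 0.8, "p_max": 7.0},
           {"id": "EV3", "soc": 0.5, "p_max": 11.0}]
    print(schedule(evs, p_available=12.0))  # EV1 gets 7 kW, EV3 5 kW, EV2 0 kW

Re-running the allocator every control interval as vehicles arrive, depart, or change state of charge gives the stop-start behaviour named in the keywords.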
7316 Operational risks Classification for Information Systems with Service-Oriented Architecture (Including Loss Calculation Example)
Authors: Irina Pyrlina
Abstract:
This article presents the results of a study conducted to identify operational risks for information systems (IS) with service-oriented architecture (SOA). Analysis of current approaches to risk and system error classifications revealed that system error classes have never been used for SOA risk estimation. Additionally, system error classes are not normally supported experimentally with real enterprise error data. In the study, several categories from various existing error classification systems are applied, and three new error categories with sub-categories are identified. As part of the operational risk analysis, a new error classification scheme is proposed for SOA applications. It is based on errors of real information systems that act as service providers for applications with service-oriented architecture. The proposed classification approach has been used to classify SOA system errors for two different enterprises (oil and gas industry, metal and mining industry). In addition, we have conducted research to identify possible losses from operational risks.
Keywords: Enterprise architecture, error classification, oil & gas and metal & mining industries, operational risks, service-oriented architecture.
7315 Join and Meet Block Based Default Definite Decision Rule Mining from IDT and an Incremental Algorithm
Authors: Chen Wu, Jingyu Yang
Abstract:
Using maximal consistent blocks of the tolerance relation on the universe of an incomplete decision table, the concepts of join block and meet block are introduced and studied. Besides the tolerance class, other blocks such as the tolerant kernel and the compatible kernel of an object are also discussed. Upper and lower approximations based on these blocks are defined. Default definite decision rules acquired from an incomplete decision table are proposed in the paper. An incremental algorithm to update default definite decision rules is suggested for effective mining from an incomplete decision table into which data is appended. Through an example, we demonstrate how default definite decision rules based on maximal consistent blocks, join blocks and meet blocks are acquired, and how optimization is done with the support of the discernibility matrix and discernibility function in the incomplete decision table.
Keywords: Rough set, incomplete decision table, maximal consistent block, default definite decision rule, join and meet block.
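To make the tolerance relation concrete, here is a minimal Python sketch computing tolerance classes on a tiny incomplete decision table, where '*' marks a missing value; the table itself is a made-up example, and maximal consistent blocks are then the maximal sets of mutually tolerant objects.

    STAR = "*"

    table = {                     # object -> attribute values
        "u1": ("a", "b", STAR),
        "u2": ("a", STAR, "c"),
        "u3": ("d", "b", "c"),
    }

    def tolerant(x, y):
        # Two objects tolerate each other if every *known* attribute agrees.
        return all(vx == STAR or vy == STAR or vx == vy
                   for vx, vy in zip(table[x], table[y]))

    tolerance_class = {u: {v for v in table if tolerant(u, v)} for u in table}
    print(tolerance_class)   # u1 and u2 tolerate each other via the '*' slots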
7314 Plant Varieties Selection System
Authors: Kitti Koonsanit, Chuleerat Jaruskulchai, Poonsak Miphokasap, Apisit Eiumnoh
Abstract:
Meteorological and environmental data are becoming widely used in applications such as plant variety selection. Selecting the right variety for a planted area is of utmost importance for all crops, including sugarcane, which has many varieties. A variety not suited to the climate or soil conditions of the planted area can lead to poor growth, bloom drop, poor fruit and low prices. This paper presents a plant variety selection system for planted areas in Thailand that uses meteorological and environmental data together with decision tree techniques. Developed as an environmental data analysis tool, the software makes analysis easier and faster. Our software is a front end to WEKA that provides fundamental data mining functions such as classification, clustering and analysis. It also supports pre-processing, analysis and decision tree output with result export. The software can then export and display results through the Google Maps API in order to present them and plot plant icons effectively.
Keywords: Plant varieties selection system, decision tree, expert recommendation.
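A minimal Python sketch of the decision tree idea is shown below, using scikit-learn rather than WEKA; the feature columns (rainfall, temperature, soil pH), the variety labels and all values are hypothetical stand-ins for the meteorological and environmental data described above.

    from sklearn.tree import DecisionTreeClassifier

    # columns: [annual rainfall (mm), mean temperature (C), soil pH]
    X = [[1200, 27, 5.5], [900, 30, 6.5], [1500, 25, 5.0], [800, 31, 7.0]]
    y = ["variety_A", "variety_B", "variety_A", "variety_B"]

    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(clf.predict([[1100, 28, 5.8]]))  # recommended variety for a new area

The learned split thresholds (e.g. on rainfall) are human-readable, which is what makes decision trees a natural fit for expert recommendation systems of this kind.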
7313 Design of a Stable GPC for Nonminimum Phase LTI Systems
Authors: Mahdi Yaghobi, Mohammad Haeri
Abstract:
Current predictive controllers are utilized for processes in which the rate of output variation is not high. For such processes, stability can be achieved by implementing a constrained predictive controller or by applying an infinite prediction horizon. When the rate of output growth is high (e.g. for an unstable nonminimum phase process), stabilization becomes problematic. To avoid this, it is suggested to modify the method in two ways: first, the growth of the prediction error should be decreased at the early stage of the prediction horizon, and second, the rate of error variation should be penalized. The growth of the error is decreased by adjusting its weighting coefficients in the cost function. The error variation is reduced by adding the first-order derivative of the error to the cost function. By studying different examples, it is shown that using these two remedies together, closed-loop stability of an unstable nonminimum phase process can be achieved.
Keywords: GPC, Stability, Varying Weighting Coefficients.
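A minimal numeric sketch of the modified cost function is given below: the predicted error is weighted per step, and its first difference is penalized; the weights, penalty and error sequence are illustrative assumptions, not values from the paper.

    import numpy as np

    e = np.array([4.0, 2.5, 1.5, 1.0, 0.8])   # predicted errors over horizon
    w = np.array([5.0, 3.0, 1.0, 1.0, 1.0])   # heavier weights early on
    lam = 0.5                                 # penalty on error variation

    # Weighted squared error plus a penalty on the first difference of e.
    J = np.sum(w * e**2) + lam * np.sum(np.diff(e)**2)
    print(f"cost J = {J:.2f}")

Raising the early weights w[0], w[1] suppresses the initial error growth, while lam penalizes fast error variation, the two remedies the abstract combines.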
7312 A Survey on Lossless Compression of Bayer Color Filter Array Images
Authors: Alina Trifan, António J. R. Neves
Abstract:
Although most digital cameras acquire images in a raw format, based on a Color Filter Array that arranges RGB color filters on a square grid of photosensors, most image compression techniques do not use the raw data; instead, they use the RGB result of an interpolation algorithm applied to the raw data. This approach is inefficient: by performing lossless compression of the raw data, followed by pixel interpolation, digital cameras could be more power efficient and provide images with increased resolution, given that the interpolation step could be shifted to an external processing unit. In this paper, we survey the use of lossless compression algorithms with raw Bayer images. Moreover, in order to reduce the effect of the transitions between colors, which increase the entropy of the raw Bayer image, we split the image into three new images corresponding to each channel (red, green and blue) and study the same compression algorithms applied to each one individually. This simple pre-processing stage allows an improvement of more than 15% in predictive-based methods.
Keywords: Bayer images, CFA, lossless compression, image coding standards.
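The channel-splitting idea can be sketched as follows in Python: deinterleave an RGGB Bayer mosaic into per-color planes and compare losslessly compressed sizes against compressing the whole mosaic. zlib stands in for the surveyed coders, and the synthetic gradient image is only for illustration.

    import zlib
    import numpy as np

    # Synthetic smooth image standing in for a raw Bayer mosaic (RGGB layout).
    raw = np.add.outer(np.arange(64), np.arange(64)).astype(np.uint8)

    r = raw[0::2, 0::2]                                     # red sites
    g = np.concatenate([raw[0::2, 1::2], raw[1::2, 0::2]])  # both green sites
    b = raw[1::2, 1::2]                                     # blue sites

    whole = len(zlib.compress(raw.tobytes(), 9))
    split = sum(len(zlib.compress(c.tobytes(), 9)) for c in (r, g, b))
    print(f"mosaic: {whole} B, per-channel total: {split} B")

Within one plane, neighbouring samples come from the same color filter, so the color-transition entropy the abstract mentions is removed before coding.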
7311 Finding an Optimized Discriminate Function for Internet Application Recognition
Authors: E. Khorram, S.M. Mirzababaei
Abstract:
Every day, usage of the Internet increases and a world of data becomes accessible. Network providers do not want the services they provide to be used for harmful or terrorist purposes, so they use a variety of methods to protect particular regions from harmful data. One of the most important of these is the firewall. A firewall stops the transfer of such packets in several ways, but in some cases firewalls are not used because of their blind packet blocking, the high processing power they need and their expense. Here we propose a method to find a discriminant function that distinguishes between usual packets and harmful ones through statistical processing of network router logs, so that an administrator can alert the user. This method is very fast and can be deployed simply alongside Internet routers.
Keywords: Data Mining, Firewall, Optimization, Packet Classification, Statistical Pattern Recognition.
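A minimal Python sketch of fitting a linear discriminant function to router-log statistics follows; the two features (packet size, inter-arrival time) and the labels are hypothetical stand-ins for the statistics used in the paper.

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # [bytes, seconds between packets]; labels: 1 = harmful, 0 = usual traffic
    X = [[1500, 0.01], [1400, 0.02], [60, 0.50], [80, 0.40]]
    y = [1, 1, 0, 0]

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print(lda.coef_, lda.intercept_)      # the learned discriminant function
    print(lda.predict([[1450, 0.015]]))   # alert if classified as harmful

Because classification reduces to one dot product per packet record, such a function is cheap enough to run beside a router, which is the speed argument the abstract makes.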
7310 An Automatic Bayesian Classification System for File Format Selection
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the classification of unstructured format descriptions for the identification of file formats. The main contribution of this work is the employment of data mining techniques to support file format selection from just an unstructured text description that comprises the most important format features for a particular organisation. The file format identification method then employs a file format classifier and associated configurations to provide digital preservation experts with an estimate of the required file format. Our goal is to make use of a format specification knowledge base aggregated from different Web sources in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends a file format for the expert's institution. The proposed methods improve the selection of file formats and the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and specifications of file formats. To facilitate decision-making, the aggregated information about the file formats is presented as a file format vocabulary that comprises the most common terms characteristic of all researched formats. The goal is to suggest a particular file format based on this vocabulary for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
Keywords: Data mining, digital libraries, digital preservation, file format.
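A minimal Python sketch of the naive Bayes step is shown below, treating format descriptions as bags of words; the descriptions and format labels are invented examples of the vocabulary-based approach described above, not the paper's knowledge base.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    descriptions = [
        "lossless raster image open specification widely adopted",
        "proprietary word processing binary container",
        "plain text markup human readable open standard",
    ]
    formats = ["PNG", "DOC", "XML"]

    # Bag-of-words vocabulary feeding a multinomial naive Bayes classifier.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(descriptions, formats)
    print(model.predict(["open human readable text standard"]))

The classifier's per-class probabilities are what the decision support system would surface to the preservation expert alongside its recommendation.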
7309 Comparison of Adsorbents for Ammonia Removal from Mining Wastewater
Authors: Farooq A. Al-Sheikh, Carol Moralejo, Mark Pritzker, William A. Anderson, Ali Elkamel
Abstract:
Ammonia in mining wastewater is a significant problem, and treatment can be especially difficult in cold climates where biological treatment is not feasible. An adsorption process is one of the alternative processes that can be used to reduce ammonia concentrations to acceptable limits, and therefore a strongly acidic H+-form LEWATIT ion exchange resin and a Na-form Bowie chabazite AZLB-Na zeolite were tested to assess their effectiveness. For these adsorption tests, two packed bed columns (a mini-column constructed from a 32-cm long x 1-cm diameter piece of glass tubing, and a 60-cm long x 2.5-cm diameter Ace Glass chromatography column) were used, containing varying quantities of the adsorbents. A mining wastewater with an ammonia concentration of 22.7 mg/L was fed through the columns at controlled flowrates. In the experimental work, maximum capacities of the LEWATIT ion exchange resin were 0.438, 0.448, and 1.472 mg/g for 3, 6, and 9 g respectively in the mini-column and 1.739 mg/g for 141.5 g in the larger Ace column, while the capacities for the AZLB-Na zeolite were 0.424 and 0.784 mg/g for 3 and 6 g respectively in the mini-column and 1.1636 mg/g for 38.5 g in the Ace column. In the theoretical work, Thomas, Adams-Bohart, and Yoon-Nelson models were constructed to describe the breakthrough curves of the adsorption process and to find the constants of the above-mentioned models. In the regeneration tests, 5% hydrochloric acid, HCl (v/v), and 10% sodium hydroxide, NaOH (w/v), were used to regenerate the LEWATIT resin and AZLB-Na zeolite with 44% and 63.8% recovery, respectively. In conclusion, continuous flow adsorption using a LEWATIT ion exchange resin and an AZLB-Na zeolite is efficient when using a co-flow technique for removal of ammonia from wastewater. The Thomas, Adams-Bohart, and Yoon-Nelson models satisfactorily fit the data, with R2 close to 1 in all cases.
Keywords: AZLB-Na zeolite, continuous adsorption, LEWATIT resin, models, regeneration.
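As an illustration of the theoretical work, here is a minimal Python sketch fitting the Thomas model, C_t/C_0 = 1 / (1 + exp(k_Th*q_0*m/Q - k_Th*C_0*t)), to a breakthrough curve with scipy; the breakthrough data points are invented for illustration, not the measurements reported above.

    import numpy as np
    from scipy.optimize import curve_fit

    C0, m, Q = 22.7, 6.0, 0.01     # feed (mg/L), bed mass (g), flow (L/min)

    def thomas(t, k, q0):
        # Thomas breakthrough model: C_t/C_0 as a sigmoid in time.
        return 1.0 / (1.0 + np.exp(k * q0 * m / Q - k * C0 * t))

    t = np.array([10, 30, 60, 90, 120, 180], dtype=float)   # minutes
    ct_c0 = np.array([0.02, 0.10, 0.35, 0.60, 0.80, 0.95])  # invented data

    (k, q0), _ = curve_fit(thomas, t, ct_c0, p0=[0.002, 2.0], maxfev=10000)
    print(f"k_Th = {k:.5f} L/(mg*min), q0 = {q0:.3f} mg/g")

The fitted q0 is the maximum adsorption capacity, the quantity the experimental section reports per adsorbent mass; the Adams-Bohart and Yoon-Nelson models are fit in the same curve-fitting manner with their own rate expressions.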
7308 Post Pandemic Mobility Analysis through Indexing and Sharding in MongoDB: Performance Optimization and Insights
Authors: Karan Vishavjit, Aakash Lakra, Shafaq Khan
Abstract:
The COVID-19 pandemic has pushed healthcare professionals to use big data analytics as a vital tool for tracking and evaluating the effects of contagious viruses. To analyse huge datasets effectively, efficient NoSQL databases are needed. By integrating several datasets, this research enables the analysis of post-COVID-19 health and well-being outcomes and the evaluation of the effectiveness of government efforts during the pandemic, cutting down query processing time and creating predictive visual artifacts. We recommend applying sharding and indexing technologies to improve query effectiveness and scalability as the dataset expands. Spreading the datasets into a sharded database and indexing individual shards enables effective data retrieval and analysis. The key goal is the analysis of connections between governmental activities, poverty levels, and post-pandemic well-being. We evaluate the effectiveness of governmental initiatives to improve health and lower poverty levels by utilising advanced data analysis and visualisations. The findings provide relevant data that support the advancement of the UN sustainable development goals, future pandemic preparation, and evidence-based decision-making. This study shows how big data and NoSQL databases may be used to address problems in global health.
Keywords: COVID-19, big data, data analysis, indexing, NoSQL, sharding, scalability, poverty.
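A minimal pymongo sketch of the sharding and indexing setup follows; the database, collection and field names are hypothetical, and a running sharded cluster reached through a mongos router is assumed.

    from pymongo import MongoClient, ASCENDING

    client = MongoClient("mongodb://localhost:27017")  # the mongos router
    client.admin.command("enableSharding", "covid")
    client.admin.command("shardCollection", "covid.mobility",
                         key={"region": "hashed"})     # spread writes evenly

    # Secondary index to support date-range analysis queries on each shard.
    coll = client["covid"]["mobility"]
    coll.create_index([("date", ASCENDING)])
    n = coll.count_documents({"region": "ON", "date": {"$gte": "2021-01-01"}})
    print(n)

A hashed shard key distributes documents uniformly across shards, while the per-shard date index keeps the range scans used by the analysis queries from degenerating into full collection scans.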
7307 Customer Churn Prediction: A Cognitive Approach
Authors: Damith Senanayake, Lakmal Muthugama, Laksheen Mendis, Tiroshan Madushanka
Abstract:
Customer churn prediction is one of the most useful areas of study in customer analytics. Due to the enormous amount of data available for such predictions, machine learning and data mining have been heavily used in this domain. Many machine learning algorithms are directly applicable to the problem of customer churn prediction; here, we experiment with a novel approach that combines supervised learning methods with cognitive, unsupervised learning methods in an attempt to improve on the results of supervised learning alone.
Keywords: Growing Self Organizing Maps, Kernel Methods, Churn Prediction.
7306 Injury Prediction for Soccer Players Using Machine Learning
Authors: Amiel Satvedi, Richard Pyne
Abstract:
Injuries in professional sports occur on a regular basis. Some may be minor, while others can have a huge impact on a player's career and earning potential. In soccer, there is a high risk of players picking up injuries during game time. This research work seeks to help soccer players reduce the risk of injury by predicting its likelihood while playing in the near future and then providing recommendations for intervention. The injury prediction tool uses a soccer player's number of minutes played on the field, number of appearances, distance covered and performance data for the current and previous seasons as variables, conducting statistical analysis and providing injury predictions using a machine learning linear regression model.
Keywords: Injury predictor, soccer injury prevention, machine learning in soccer, big data in soccer.
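A minimal Python sketch of the described linear regression is given below; the workload features and risk scores are fabricated for illustration only, not data from the study.

    from sklearn.linear_model import LinearRegression

    # [minutes played, appearances, km covered, previous-season minutes]
    X = [[2700, 30, 310, 2500],
         [ 900, 12, 100, 2900],
         [3200, 36, 350, 3100],
         [1500, 20, 160, 1400]]
    y = [0.35, 0.10, 0.60, 0.20]   # likelihood of injury (0-1)

    model = LinearRegression().fit(X, y)
    print(model.predict([[2400, 28, 280, 2600]]))  # risk for a new player

The fitted coefficients indicate which workload variables drive the predicted risk, which is what an intervention recommendation (e.g. rotation or reduced minutes) would act on.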
7305 Distributed Detection and Optimal Traffic-blocking of Network Worms
Authors: Zoran Nikoloski, Narsingh Deo, Ludek Kucera
Abstract:
Despite the recent surge of research in control of worm propagation, there is currently no effective defense system against such cyber attacks. We first design a distributed detection architecture called Detection via Distributed Blackholes (DDBH). Our novel detection mechanism could be implemented via virtual honeypots or honeynets. Simulation results show that a worm can be detected with virtual honeypots on only 3% of the nodes. Moreover, the worm is detected when less than 1.5% of the nodes are infected. We then develop two control strategies: (1) optimal dynamic traffic-blocking, for which we determine the condition that guarantees a minimum number of removed nodes when the worm is contained, and (2) predictive dynamic traffic-blocking, a realistic deployment of the optimal strategy on scale-free graphs. Predictive dynamic traffic-blocking, coupled with DDBH, ensures that more than 40% of the network is unaffected by the propagation at the time the worm is contained.
Keywords: Network worms, distributed detection, optimal traffic-blocking, individual-based simulation.
7304 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation and classification of institutional risk profiles, supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Risk profiles then employ a risk factor classifier and associated configurations to provide digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors along a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
Keywords: Linked open data, information integration, digital libraries, data mining.
7303 Network Anomaly Detection using Soft Computing
Authors: Surat Srinoy, Werasak Kurutach, Witcha Chimphlee, Siriporn Chimphlee
Abstract:
One main drawback of intrusion detection systems is their inability to detect new attacks that do not have known signatures. In this paper we discuss an intrusion detection method that proposes independent component analysis (ICA)-based feature selection heuristics and uses rough fuzzy methods for clustering the data. ICA separates independent components (ICs) from the monitored variables. Rough sets are used to decrease the amount of data and get rid of redundancy, while fuzzy methods allow objects to belong to several clusters simultaneously, with different degrees of membership. Our approach allows us not only to recognize known attacks but also to detect activity that may be the result of a new, unknown attack. Experimental results are reported on the Knowledge Discovery and Data Mining (KDD Cup 1999) dataset.
Keywords: Network security, intrusion detection, rough set, ICA, anomaly detection, independent component analysis, rough fuzzy.
7302 A Comparative Study of GTC and PSP Algorithms for Mining Sequential Patterns Embedded in Database with Time Constraints
Authors: Safa Adi
Abstract:
This paper considers the problem of mining sequential patterns embedded in a database while handling the time constraints as defined in the GSP algorithm (a level-wise algorithm). We compare two previous approaches, GTC and PSP, that take up the general principles of GSP. Furthermore, this paper discusses the PG-hybrid algorithm, which uses both PSP and GTC. The results show that PSP and GTC are more efficient than GSP. On the other hand, the GTC algorithm performs better than PSP. The PG-hybrid algorithm uses the PSP algorithm for the first two passes over the database and the GTC approach for the following scans. Experiments show that the hybrid approach is very efficient for short, frequent sequences.
Keywords: Database, GTC algorithm, PSP algorithm, sequential patterns, time constraints.
7301 Response of the Residential Building Structure on Load Technical Seismicity due to Mining Activities
Authors: V. Salajka, Z. Kaláb, J. Kala, P. Hradil
Abstract:
In territories where high-intensity earthquakes are frequent, attention is paid to solving seismic problems. The paper describes two computational model variants of the structure, based on the finite element method, with different subsoil simulations (rigid or elastic subsoil). The ANSYS finite element program system was used for the simulations and calculations. Seismic response calculations of the residential building structure were carried out for loading characterized by an accelerogram, for comparison with the response spectra method.
Keywords: Accelerogram, ANSYS, mining-induced seismicity, residential building structure, spectra, subsoil.
7300 Improving Classification in Bayesian Networks using Structural Learning
Authors: Hong Choon Ong
Abstract:
Naïve Bayes classifiers are simple probabilistic classifiers. Classification extracts patterns from a data file with a set of labeled training examples and is currently one of the most significant areas in data mining. However, Naïve Bayes assumes independence among the features. Structural learning among the features thus helps in the classification problem. In this study, the use of structural learning in Bayesian networks is proposed for cases where there are relationships between the features that Naïve Bayes would ignore. An improvement in classification using structural learning is shown when relationships exist between the features, i.e., when they are not independent.
Keywords: Bayesian Network, Classification, Naïve Bayes, Structural Learning.
7299 Topic Modeling Using Latent Dirichlet Allocation and Latent Semantic Indexing on South African Telco Twitter Data
Authors: Phumelele P. Kubheka, Pius A. Owolawi, Gbolahan Aiyetoro
Abstract:
Twitter is one of the most popular social media platforms where users share their opinions on different subjects. Twitter can be considered a great source for mining text due to the high volumes of data generated through the platform daily. Many industries, such as telecommunication companies, can leverage the availability of Twitter data to better understand their markets and make appropriate business decisions. This study performs topic modeling on Twitter data using Latent Dirichlet Allocation (LDA). The obtained results are benchmarked against another topic modeling technique, Latent Semantic Indexing (LSI). The study aims to retrieve topics from a Twitter dataset containing user tweets on South African Telcos. Results from this study show that LSI is much faster than LDA. However, LDA yields better results, with higher topic coherence by 8% for the best-performing model in this experiment. A higher topic coherence score indicates better performance of the model.
Keywords: Big data, latent Dirichlet allocation, latent semantic indexing, Telco, topic modeling, Twitter.
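A minimal Python sketch of the two topic models follows, run on a few made-up Telco tweets; scikit-learn's LatentDirichletAllocation stands in for LDA and TruncatedSVD on tf-idf for LSI, rather than the exact toolchain used in the study.

    from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

    tweets = ["network down again no signal",
              "great data bundle price this month",
              "customer care never answers calls",
              "coverage in my area is terrible"]

    # LDA works on raw term counts.
    counts = CountVectorizer().fit_transform(tweets)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

    # LSI is a truncated SVD of the (here tf-idf weighted) term matrix.
    tfidf = TfidfVectorizer().fit_transform(tweets)
    lsi = TruncatedSVD(n_components=2, random_state=0).fit(tfidf)

    print(lda.components_.shape, lsi.components_.shape)  # topic-term matrices

LSI's single matrix factorization explains its speed advantage, while LDA's generative model tends to produce the more coherent topics the study reports.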
7298 Cumulative Learning based on Dynamic Clustering of Hierarchical Production Rules (HPRs)
Authors: Kamal K.Bharadwaj, Rekha Kandwal
Abstract:
An important structuring mechanism for knowledge bases is building clusters based on the content of their knowledge objects. The objects are clustered based on the principle of maximizing the intraclass similarity and minimizing the interclass similarity. Clustering can also facilitate taxonomy formation, that is, the organization of observations into a hierarchy of classes that group similar events together. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy to understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality <generality information> Specificity <specificity information>.
Keywords: Cumulative learning, clustering, data mining, hierarchical production rules.
7297 Research on Reservoir Lithology Prediction Based on Residual Neural Network and Squeeze-and-Excitation Neural Network
Authors: Li Kewen, Su Zhaoxin, Wang Xingmou, Zhu Jian Bing
Abstract:
Conventional reservoir prediction methods are not sufficient to explore the implicit relations between seismic attributes, so data utilization is low. In order to improve the accuracy of predictive classification of reservoir lithology, this paper proposes a deep learning lithology prediction method based on ResNet (Residual Neural Network) and SENet (Squeeze-and-Excitation Neural Network). The neural network model is built and trained using seismic attribute data and lithology data from the Shengli oilfield, and a nonlinear mapping relationship between seismic attributes and lithology markers is established. The experimental results show that this method can significantly improve the classification of reservoir lithology, with a classification accuracy close to 70%. This study can effectively predict the lithology of undrilled areas and provide support for exploration and development.
Keywords: Convolutional neural network, lithology, prediction of reservoir lithology, seismic attributes.
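A minimal PyTorch sketch of the squeeze-and-excitation block that SENet contributes is shown below: global average pooling (squeeze), a two-layer bottleneck with sigmoid gating (excitation), and channel-wise rescaling. The tensor sizes are illustrative, and this is not the authors' full ResNet-SENet model.

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                 # x: (N, C, H, W)
            n, c, _, _ = x.shape
            w = x.mean(dim=(2, 3))            # squeeze: global average pool
            w = self.fc(w).view(n, c, 1, 1)   # excitation: channel weights
            return x * w                      # rescale each feature map

    features = torch.randn(8, 64, 16, 16)     # e.g. a ResNet stage output
    print(SEBlock(64)(features).shape)        # torch.Size([8, 64, 16, 16])

Inserted after each residual block, the gate lets the network emphasize the seismic-attribute channels most informative for the lithology classes.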
7296 Increasing the Capacity of Plant Bottlenecks by Improving the Ratio of Mean Time between Failures to Mean Time to Repair
Authors: Jalal Soleimannejad, Mohammad Asadizeidabadi, Mahmoud Koorki, Mojtaba Azarpira
Abstract:
A significant percentage of production costs are maintenance costs, and analysis of maintenance costs can help achieve greater productivity and competitiveness. With this in mind, the maintenance of machines and installations is considered an essential part of organizational functions, and applying effective strategies creates significant added value in manufacturing activities. Organizations are trying to achieve world-class performance levels, with emphasis on creating competitive advantage, through different methods such as RCM (Reliability-Centered Maintenance) and TPM (Total Productive Maintenance). In this study, increasing the capacity of the Concentration Plant of Golgohar Iron Ore Mining & Industrial Company (GEG) was examined using reliability and maintainability analyses. The results of this research showed that instead of increasing the number of machines (in order to solve the bottleneck problems), improving reliability and maintainability would solve the bottleneck problems in the best way. It should be mentioned that in the above-mentioned study, the data set of the Concentration Plant of GEG, as a case study, was applied and analyzed.
Keywords: Bottleneck, Golgohar Iron Ore Mining and Industrial Company, maintainability, maintenance costs, reliability.
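The ratio in the title drives steady-state availability, A = MTBF / (MTBF + MTTR). A minimal Python sketch of how raising MTBF or cutting MTTR lifts effective bottleneck capacity follows; the hour and throughput figures are invented, not GEG plant data.

    def availability(mtbf, mttr):
        # Steady-state availability from mean time between failures/to repair.
        return mtbf / (mtbf + mttr)

    nominal_tph = 1000                            # bottleneck throughput, t/h
    before = availability(mtbf=80.0, mttr=8.0)
    after = availability(mtbf=120.0, mttr=5.0)    # improved maintenance

    print(f"before: A={before:.3f}, capacity={nominal_tph * before:.0f} t/h")
    print(f"after:  A={after:.3f}, capacity={nominal_tph * after:.0f} t/h")

The same nominal equipment delivers more effective capacity purely through the improved MTBF/MTTR ratio, which is the study's argument against buying additional machines.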
7295 Field Trial of Resin-Based Composite Materials for the Treatment of Surface Collapses Associated with Former Shallow Coal Mining
Authors: Philip T. Broughton, Mark P. Bettney, Isla L. Smail
Abstract:
Effective treatment of ground instability is essential when managing the impacts associated with historic mining. A field trial was undertaken by the Coal Authority to investigate the geotechnical performance and potential use of composite materials comprising resin and fill or stone to safely treat surface collapses, such as crown-holes, associated with shallow mining. Test pits were loosely filled with various granular fill materials. The fill material was injected with commercially available silicate and polyurethane resin foam products. In situ and laboratory testing was undertaken to assess the geotechnical properties of the resultant composite materials. The test pits were subsequently excavated to assess resin permeation. Drilling and resin injection was easiest through clean limestone fill materials. Recycled building waste fill material proved difficult to inject with resin; this material is thus considered unsuitable for use in resin composites. Incomplete resin permeation in several of the test pits created irregular ‘blocks’ of composite. Injected resin foams significantly improve the stiffness and resistance (strength) of the un-compacted fill material. The stiffness of the treated fill material appears to be a function of the stone particle size, its associated compaction characteristics (under loose tipping) and the proportion of resin foam matrix. The type of fill material is more critical than the type of resin to the geotechnical properties of the composite materials. Resin composites can effectively support typical design imposed loads. Compared to other traditional treatment options, such as cement grouting, the use of resin composites is potentially less disruptive, particularly for sites with limited access, and thus likely to achieve significant reinstatement cost savings. The use of resin composites is considered a suitable option for the future treatment of shallow mining collapses.
Keywords: Composite material, ground improvement, mining legacy, resin.
7294 Opinion Mining Framework in the Education Domain
Authors: A. M. H. Elyasir, K. S. M. Anbananthen
Abstract:
The internet is growing larger and becoming the most popular platform for people to share their opinions on different interests. We choose the education domain, specifically comparing some Malaysian universities against each other. This comparison produces a benchmark based on different criteria shared by online users in various online resources, including Twitter, Facebook and web pages. The comparison is accomplished using an opinion mining framework to extract and process the unstructured text and classify the result as positive, negative or neutral (polarity). We divide our framework into three main stages: opinion collection (extraction), unstructured text processing and polarity classification. The extraction stage includes web crawling, HTML parsing, sentence segmentation for punctuation classification and Part of Speech (POS) tagging. The second stage processes the unstructured text with stemming and stop-word removal and finally prepares the raw text for classification using Named Entity Recognition (NER). The last phase classifies the polarity and presents an overall result for the comparison among the Malaysian universities. The final result is useful for those who are interested in studying in Malaysia, in which our final output declares clear winners based on public opinions all over the web.
Keywords: Entity Recognition, Education Domain, Opinion Mining, Unstructured Text.
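A minimal Python sketch of the text-processing stages follows, using NLTK; it covers sentence segmentation, POS tagging, stop-word removal, stemming and NER on one invented sentence, not the framework's full crawler and classifier.

    import nltk
    from nltk.corpus import stopwords
    from nltk.stem import PorterStemmer

    # One-time downloads of the required NLTK models and corpora.
    for pkg in ["punkt", "averaged_perceptron_tagger", "stopwords",
                "maxent_ne_chunker", "words"]:
        nltk.download(pkg, quiet=True)

    stemmer = PorterStemmer()
    text = "The library at Universiti Malaya is excellent. Students love it."
    for sentence in nltk.sent_tokenize(text):        # sentence segmentation
        tokens = nltk.word_tokenize(sentence)
        tagged = nltk.pos_tag(tokens)                # POS tagging
        stems = [stemmer.stem(t) for t in tokens
                 if t.lower() not in stopwords.words("english")]
        print(nltk.ne_chunk(tagged))                 # named entity recognition
        print(stems)                                 # stemmed, stop words removed

The NER step is what lets the framework attribute each opinionated sentence to a specific university before the polarity classifier aggregates the results.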
7293 An Efficient and Generic Hybrid Framework for High Dimensional Data Clustering
Authors: Dharmveer Singh Rajput, P. K. Singh, Mahua Bhattacharya
Abstract:
Clustering in high dimensional space is a difficult problem that recurs in many fields of science and engineering, e.g., bioinformatics, image processing, pattern recognition and data mining. In high dimensional space some of the dimensions are likely to be irrelevant, thus hiding the possible clustering. In very high dimensions it is common for all the objects in a dataset to be nearly equidistant from each other, completely masking the clusters. Hence, the performance of clustering algorithms decreases. In this paper, we propose an algorithmic framework that combines the reduct concept of rough set theory with the k-means algorithm to remove the irrelevant dimensions in a high dimensional space and obtain appropriate clusters. Our experiments on test data show that this framework increases the efficiency of the clustering process and the accuracy of the results.
Keywords: High dimensional clustering, sub-space, k-means, rough set, discernibility matrix.
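A minimal Python sketch of the two steps follows, under strong simplifying assumptions: a greedy approximation of a rough-set reduct on discretized, labeled data (retain attributes needed to discern differently-labeled objects), then k-means restricted to the retained dimensions. The tiny dataset is invented, and the greedy search only approximates a full discernibility-matrix reduct.

    from itertools import combinations
    import numpy as np
    from sklearn.cluster import KMeans

    X = np.array([[0, 1, 7], [0, 1, 3], [1, 0, 5], [1, 0, 9]])  # discretized
    y = np.array([0, 0, 1, 1])                                  # labels

    # Object pairs that must be discerned (different decisions).
    pairs = [(i, j) for i, j in combinations(range(len(X)), 2) if y[i] != y[j]]
    reduct, uncovered = [], set(pairs)
    while uncovered:
        # Greedily pick the attribute discerning the most remaining pairs.
        best = max(range(X.shape[1]),
                   key=lambda a: sum(X[i, a] != X[j, a] for i, j in uncovered))
        reduct.append(best)
        uncovered = {(i, j) for i, j in uncovered if X[i, best] == X[j, best]}

    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X[:, reduct])
    print("reduct dimensions:", reduct, "clusters:", labels)

Dropping the dimensions outside the reduct is what counters the near-equidistance problem the abstract describes before k-means runs.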
7292 A Note on Metallurgy at Khanak: An Indus Site in Tosham Mining Area, Haryana
Authors: Ravindra N. Singh, Dheerendra P. Singh
Abstract:
Recent discoveries of Bronze Age artefacts, tin slag, furnaces and crucibles, together with new geological evidence on tin deposits in the Tosham area of Bhiwani district in Haryana (India), provide the opportunity to survey the evidence for possible sources of tin and the use of bronze in the Harappan sites of northwestern India. Earlier, Afghanistan emerged as the most promising eastern source of tin utilized by Indus Civilization copper-smiths. Our excavations conducted at Khanak, near the Tosham mining area, during 2014 and 2016 revealed ample evidence of metallurgical activity, as attested by the occurrence of slag, ores, ash deposits and furnace fragments, in addition to bronze objects. We have conducted petrological, XRD, EDAX, TEM and SEM analyses and metallography on the slag, ore, crucible fragment and bronze object samples recovered from the Khanak excavations. This has given a positive indication of the mining and metallurgy of polymetallic tin at the site; however, this can only be ascertained after the detailed scientific examination of the materials, which is underway. In view of the importance of the site, we intend to excavate it horizontally in future so as to obtain more samples for scientific studies.
Keywords: Archaeometallurgy, problem of tin, metallography, Indus civilization.
7291 DCBOR: A Density Clustering Based on Outlier Removal
Authors: A. M. Fahim, G. Saake, A. M. Salem, F. A. Torkey, M. A. Ramadan
Abstract:
Data clustering is an important data exploration technique with many applications in data mining. We present an enhanced version of the well-known single link clustering algorithm, which we refer to as DCBOR. The proposed algorithm alleviates the chain effect by removing the outliers from the given dataset, so it provides outlier detection and data clustering simultaneously. The algorithm does not need to update the distance matrix, since it depends on merging the k-nearest objects in one step, and a cluster continues to grow as long as possible under a specified condition. The algorithm consists of two phases: in the first phase, it removes the outliers from the input dataset; in the second phase, it performs the clustering process. The algorithm discovers clusters of different shapes, sizes and densities, and requires only one input parameter, a threshold for outlier points whose value ranges from 0 to 1; the algorithm supports the user in determining an appropriate value for it. We have tested this algorithm on different datasets containing outliers and clusters connected by chains of density points, and the algorithm discovers the correct clusters. The results of our experiments demonstrate the effectiveness and the efficiency of DCBOR.
Keywords: Data Clustering, Clustering Algorithms, Handling Noise, Arbitrary Shape of Clusters.
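A minimal Python sketch of the two-phase idea follows: flag outliers by k-nearest-neighbor distance, then single-link cluster the remaining points. The 2-D points and the 0-to-1 threshold are illustrative, and the merging step uses scipy's single linkage rather than the authors' exact procedure.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (30, 2)),
                   rng.normal(4, 0.3, (30, 2)),
                   [[2.0, 8.0]]])                 # one obvious outlier

    # Phase 1: drop points whose k-NN distance is in the top (1 - q) fraction.
    k, q = 4, 0.95
    d, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    keep = d[:, -1] <= np.quantile(d[:, -1], q)

    # Phase 2: single-link clustering of the cleaned data.
    Z = linkage(X[keep], method="single")
    labels = fcluster(Z, t=2, criterion="maxclust")
    print(f"removed {np.sum(~keep)} outliers, clusters: {set(labels)}")

Removing the sparse points first is what prevents the chain effect: single link can no longer bridge the two dense groups through stray intermediate points.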
7290 Literature-Based Discoveries in Lupus Treatment
Authors: Oluwaseyi Jaiyeoba, Vetria Byrd
Abstract:
Systemic lupus erythematosus (aka lupus) is a chronic disease known for its chameleon-like ability to mimic symptoms of other diseases, rendering it hard to detect, diagnose and treat. The heterogeneous nature of the disease generates disparate data that are often multifaceted and multi-dimensional. Musculoskeletal manifestation is one of the most common clinical manifestations of lupus. This research links disparate literature on the treatment of lupus as it affects the musculoskeletal system, using discoveries from literature-based research articles available in the PubMed database. Several Natural Language Processing (NLP) tools exist to connect disjointed but related literature, such as Connected Papers, Bitola, and Gopalakrishnan. Literature-based discovery (LBD) has been used to bridge unconnected disciplines based on text mining procedures. The technical/medical literature consists of many technical/medical concepts, each having its own sub-literature. This approach has been used to link Parkinson's, Raynaud's, and multiple sclerosis treatments within works of literature. Literature-based discovery methods can connect two or more related but disjointed literature concepts to produce a novel and plausible approach to solving a research problem. Data visualization techniques, with the help of natural language processing tools, are used to visually represent the results of literature-based discoveries. Literature search results can be voluminous, but data visualization processes can provide insight and detect subtle patterns in large data. These insights and patterns can lead to discoveries that would otherwise have been hidden in disjointed literature. In this research, literature data are mined and combined with visualization techniques for heterogeneous data to discover viable treatments reported in the literature for lupus expression in the musculoskeletal system. This research answers the question of how literature-based discovery can identify potential treatments for a multifaceted disease like lupus. A three-pronged methodology is used: text mining, natural language processing, and data visualization. These three research fields are employed to identify patterns in lupus-related data that, when visually represented, could aid research into the treatment of lupus. This work introduces a method for visually representing interconnections among various lupus-related literature. The methodology outlined here is the first step toward literature-based research and treatment planning for the musculoskeletal manifestation of lupus. The results also outline the interconnection of complex, disparate data associated with the manifestation of lupus in the musculoskeletal system. The societal impact of this work is broad. Advances in this work will improve the quality of life for millions of persons in the workforce currently diagnosed and silently living with a musculoskeletal disease associated with lupus.
Keywords: Systemic lupus erythematosus, LBD, data visualization, musculoskeletal system, treatment.
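The bridging step of literature-based discovery can be sketched with Swanson's ABC pattern: if concept A co-occurs with B in one set of papers and B with C in another, then A-B-C is a candidate hidden link. Below is a minimal Python illustration; the paper titles and terms are invented stand-ins for PubMed records.

    # Each record: (title, set of indexed concepts); all entries are invented.
    papers = [
        ("lupus arthritis", {"lupus", "joint inflammation"}),
        ("inflammation pathways", {"joint inflammation", "IL-6"}),
        ("cytokine blockers", {"IL-6", "tocilizumab"}),
    ]

    def cooccurring(term):
        # All concepts that appear in any paper alongside the given term.
        return {t for _, terms in papers if term in terms for t in terms} - {term}

    a = "lupus"
    for b in cooccurring(a):
        for c in cooccurring(b) - {a}:
            print(f"candidate bridge: {a} -> {b} -> {c}")

Real LBD systems add concept normalization, co-occurrence statistics and expert review on top of this pattern, and the visualization techniques described above are what make the resulting candidate links reviewable at scale.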