Search results for: agent based web content mining
33420 Determination of the Risks of Heart Attack at the First Stage as Well as Their Control and Resource Planning with the Method of Data Mining
Authors: İbrahi̇m Kara, Seher Arslankaya
Abstract:
Frequently used in engineering, data mining has now begun to be applied in healthcare as well, since data in the health sector have reached enormous dimensions. Data mining aims to reveal purposeful models from large amounts of raw data and to discover rules and relationships that enable predictions about the future from large data sets. It helps the decision-maker find the relationships among the data that arise at the decision-making stage. This study aims to determine the risk of heart attack at the first stage, to control it, and to plan its resources using data mining methods. Through early and correct diagnosis of heart attacks, it aims to reveal the factors that affect the disease, to protect health and choose the right treatment methods, to reduce health expenditures, and to shorten patients' hospital stays. In this way, the diagnosis and treatment costs of a heart attack will be scrutinized, which will be useful for determining the risk of the disease at the first stage, controlling it, and planning its resources.
Keywords: data mining, decision support systems, heart attack, health sector
Procedia PDF Downloads 356
33419 From Text to Data: Sentiment Analysis of Presidential Election Political Forums
Authors: Sergio V Davalos, Alison L. Watkins
Abstract:
User generated content (UGC), such as website posts, has data associated with it: time of the post, gender, location, type of device, and number of words. The text entered in UGC can provide a valuable dimension for analysis. In this research, each user post is treated as a collection of terms (words). In addition to the number of words per post, the frequency of each term is determined per post and summed across all posts. This research focuses on one specific aspect of UGC: sentiment. Sentiment analysis (SA) was applied to the content (user posts) of two sets of political forums related to the 2012 and 2016 US presidential elections. Sentiment analysis derives data from the text, which enables the subsequent application of data analytic methods. The SASA (SAIL/SAI Sentiment Analyzer) model was used for sentiment analysis, yielding a sentiment score for each post. Based on the sentiment scores for the posts, there are significant differences between the content and sentiment of the 2012 and 2016 presidential election forums. In the 2012 forums, 38% of the forums started with positive sentiment and 16% with negative sentiment. In the 2016 forums, 29% started with positive sentiment and 15% with negative sentiment. There were also changes in sentiment over time: for both elections, as the election got closer, the cumulative sentiment score became negative. The candidate who won each election appeared in more posts than the losing candidate. In Trump's case, the number of negative posts exceeded Clinton's highest number of posts, which were positive. KNIME topic modeling was used to derive topics from the posts. There were also changes in topics and keyword emphasis over time: initially, the political parties were the most referenced, and as the election got closer, the emphasis shifted to the candidates.
The SASA method proved to predict sentiment better than four other methods in the SentiBench benchmark. The research resulted in deriving sentiment data from text; in combination with other data, the sentiment data provided insight and discovery about user sentiment in the 2012 and 2016 US presidential elections.
Keywords: sentiment analysis, text mining, user generated content, US presidential elections
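SASA itself is a published model whose internals are not described here, but the per-post scoring and the cumulative trend the abstract reports can be sketched with a minimal lexicon-based scorer. The lexicons and posts below are invented for illustration; this is not the SASA method.

```python
# Minimal lexicon-based sentiment scoring sketch (NOT the SASA model):
# each post gets a score = (#positive terms - #negative terms), and forum
# sentiment over time is tracked as a running cumulative sum of post scores.
POSITIVE = {"great", "win", "support", "hope"}   # hypothetical lexicon
NEGATIVE = {"bad", "lose", "attack", "fear"}

def post_score(post: str) -> int:
    terms = post.lower().split()
    return sum(t in POSITIVE for t in terms) - sum(t in NEGATIVE for t in terms)

def cumulative_sentiment(posts):
    total, trajectory = 0, []
    for p in posts:
        total += post_score(p)
        trajectory.append(total)
    return trajectory

posts = ["Great debate, real hope", "Fear and bad policy", "They will lose"]
print(cumulative_sentiment(posts))  # [2, 0, -1]
```

A trajectory drifting below zero, as in the last element here, corresponds to the abstract's finding that cumulative sentiment turned negative as each election approached.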
Procedia PDF Downloads 192
33418 Phillips Curve Estimation in an Emerging Economy: Evidence from Sub-National Data of Indonesia
Authors: Harry Aginta
Abstract:
Using the Phillips curve framework, this paper seeks new empirical evidence on the relationship between inflation and output in a major emerging economy. By exploiting sub-national data, the contribution of this paper is threefold. First, it resolves the issue of using on-target national inflation rates, which potentially weakens the inflation-output nexus. This is very relevant for Indonesia, as its central bank has adopted an inflation targeting framework based on national consumer price index (CPI) inflation. Second, the study tests the relevance of the mining sector in output gap estimation. The test for the mining sector is important to control for the effects of mining regulation and the nominal effects of coal prices on real economic activity. Third, the paper applies panel econometric methods that incorporate regional variation, which helps improve model estimation. The results confirm the strong presence of a Phillips curve in Indonesia. A positive output gap, reflecting an excess demand condition, raises inflation rates. In addition, the elasticity of the output gap is higher if the mining sector is excluded from output gap estimation. Besides inflation adaptation, the dynamics of the exchange rate and international commodity prices are also found to affect inflation significantly. The results are robust to alternative measurements of the output gap.
Keywords: Phillips curve, inflation, Indonesia, panel data
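The abstract does not state the estimated equation, but a hybrid backward-looking panel specification with the regressors it names (lagged inflation for inflation adaptation, the regional output gap, exchange rate changes, and international commodity price changes) would look like the following; the exact functional form is an assumption:

```latex
\pi_{it} = \alpha_i + \beta\,\pi_{i,t-1} + \gamma\,\tilde{y}_{it}
         + \delta\,\Delta e_{t} + \theta\,\Delta p^{\mathrm{com}}_{t} + \varepsilon_{it}
```

where $\pi_{it}$ is inflation in region $i$ at time $t$, $\tilde{y}_{it}$ the regional output gap (estimated with and without the mining sector), $\Delta e_t$ the change in the exchange rate, and $\Delta p^{\mathrm{com}}_t$ the change in international commodity prices. The abstract's findings correspond to $\gamma > 0$, with a larger $\gamma$ when mining is excluded from the output gap.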
Procedia PDF Downloads 122
33417 Text Mining of Veterinary Forums for Epidemiological Surveillance Supplementation
Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves
Abstract:
Web scraping and text mining are popular computer science methods deployed by public health researchers to augment traditional epidemiological surveillance. However, within veterinary disease surveillance, such techniques are still in the early stages of development and have not yet been fully utilised. This study explores the utility of incorporating internet-based data to better understand smallholder farming communities within Scotland through online text extraction and subsequent mining of the data. Web scraping of livestock fora was conducted in conjunction with text mining of the data in search of common themes, words, and topics found within the text. Results from bi-grams and topic modelling uncover four main topics of interest pertaining to aspects of livestock husbandry: feeding, breeding, slaughter, and disposal. These topics were found in both the poultry and pig sub-forums. Topic modelling appears to be a useful method of unsupervised classification for this form of data, as it produced clusters relating to biosecurity and animal welfare. Internet data can be a very effective tool in aiding traditional veterinary surveillance methods, but human validation of the data is crucial. This opens avenues of research via the incorporation of other dynamic social media data, namely Twitter and Facebook/Meta, in addition to time series analysis to highlight temporal patterns.
Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, smallholding, social media, web scraping, sentiment analysis, geolocation, text mining, NLP
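The bi-gram step mentioned above can be sketched in a few lines: count adjacent word pairs across scraped posts to surface recurring themes. The posts below are invented examples, not data from the study.

```python
# Sketch of bi-gram extraction over forum posts: tokenize each post,
# pair each word with its successor, and count pairs across all posts.
from collections import Counter

def bigrams(text):
    words = text.lower().split()
    return list(zip(words, words[1:]))

posts = [
    "pig feeding schedule advice",
    "pig feeding and breeding tips",
]
counts = Counter(bg for p in posts for bg in bigrams(p))
print(counts.most_common(1))  # [(('pig', 'feeding'), 2)]
```

In practice, stop-word removal and stemming would precede the counting, and the most frequent bi-grams feed the thematic interpretation described in the abstract.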
Procedia PDF Downloads 98
33416 Application of Acid Base Accounting to Predict Post-Mining Drainage Quality in Coalfields of the Main Karoo Basin and Selected Sub-Basins, South Africa
Authors: Lindani Ncube, Baojin Zhao, Ken Liu, Helen Johanna Van Niekerk
Abstract:
Acid Base Accounting (ABA) is a tool used to assess the total amount of acidity or alkalinity contained in a specific rock sample, based on the total S concentration and the carbonate content of the sample. A preliminary ABA test was conducted on 14 sandstone and 5 coal samples taken from coalfields representing the Main Karoo Basin (Highveld, Vryheid and Molteno/Indwe Coalfields) and the Sub-basins (Witbank and Waterberg Coalfields). The results indicate that sandstone and coal from the Main Karoo Basin have the potential to generate Acid Mine Drainage (AMD), as they contain sufficient pyrite to generate acid, with the final pH of the samples relatively low upon complete oxidation of pyrite. Sandstone from collieries representing the Main Karoo Basin is characterised by elevated contents of reactive S%. All the studied samples were characterised by an Acid Potential (AP) less than the Neutralizing Potential (NP), except for two samples. The results further indicate that sandstone from the Main Karoo Basin is more prone to acid generation than sandstone from the Sub-basins, whereas the coal has a relatively low potential for generating acid. The application of ABA in this study contributes to an understanding of the complexities governing water-rock interactions. In general, the coalfields of the Main Karoo Basin have a much higher potential to produce AMD during mining than the coalfields in the Sub-basins.
Keywords: Main Karoo Basin, sub-basin, coal, sandstone, acid base accounting (ABA)
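The AP-versus-NP comparison the abstract relies on follows standard ABA bookkeeping, which can be sketched as below. The conversion factor 31.25 (kg CaCO3 equivalent per tonne per %S, assuming all sulfur occurs as pyrite) is the conventional ABA value; the sample numbers are invented, not the paper's measurements.

```python
# Standard ABA bookkeeping sketch:
#   AP  = 31.25 * total_S%        (kg CaCO3 equivalent per tonne)
#   NNP = NP - AP                 (net neutralizing potential)
#   NPR = NP / AP                 (neutralizing potential ratio)
# A sample with NPR < 1 (i.e. AP > NP) is flagged as potentially acid generating.
def acid_base_account(total_s_pct, np_kg_per_t):
    ap = 31.25 * total_s_pct
    nnp = np_kg_per_t - ap
    npr = np_kg_per_t / ap if ap > 0 else float("inf")
    return {"AP": ap, "NNP": nnp, "NPR": npr, "acid_generating": npr < 1.0}

sample = acid_base_account(total_s_pct=0.8, np_kg_per_t=20.0)
print(sample)  # AP = 25.0, NNP = -5.0, NPR = 0.8 -> potentially acid generating
```

This is the comparison behind the statement that AP was less than NP for all but two samples: those two would be the ones with NPR below 1.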
Procedia PDF Downloads 433
33415 Comparative Study of Universities’ Web Structure Mining
Authors: Z. Abdullah, A. R. Hamdan
Abstract:
This paper analyzes the ranking of the website of the University of Malaysia Terengganu (UMT) in the World Wide Web. Only a few studies have compared the rankings of university websites, so this research can determine whether the existing UMT website is serving its purpose, which is to introduce UMT to the world. The ranking is based on hub and authority values, which reflect the structure of the website. These values are computed using two web-searching algorithms, HITS and SALSA. Three other universities' websites are used as benchmarks: UM, Harvard and Stanford. The results clearly show that more work has to be done on the existing UMT website, where pages that are important according to the benchmarks do not exist among UMT's pages. The ranking of the UMT website will act as a guideline for web developers to build a more efficient website.
Keywords: algorithm, ranking, website, web structure mining
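The hub and authority values the abstract computes come from the HITS iteration: a page's authority is the sum of the hub scores of pages linking to it, and its hub score is the sum of the authority scores of the pages it links to, with normalization each round. The toy link graph below is invented; SALSA differs in using a random-walk normalization but produces scores of the same kind.

```python
# Minimal HITS iteration over a toy link graph (page names are invented).
# auth(p) = sum of hub scores of pages linking to p;
# hub(p)  = sum of authority scores of pages p links to; normalize each round.
links = {
    "home": ["research", "admissions"],
    "research": ["admissions"],
    "admissions": [],
}

def hits(links, iters=50):
    pages = list(links)
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iters):
        auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {p: v / norm for p, v in auth.items()}
        hub = {p: sum(auth[q] for q in links[p]) for p in pages}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {p: v / norm for p, v in hub.items()}
    return hub, auth

hub, auth = hits(links)
print(max(auth, key=auth.get))  # "admissions" has the highest authority
```

Comparing such scores across UMT and the benchmark sites is what reveals which structurally important pages UMT's site lacks.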
Procedia PDF Downloads 517
33414 Highly Realistic Facial Expressions of Anthropomorphic Social Agent as a Factor in Solving the 'Uncanny Valley' Problem
Authors: Daniia Nigmatullina, Vlada Kugurakova, Maxim Talanov
Abstract:
We present a methodology and our plans for anthropomorphic social agent visualization, including the creation of a three-dimensional model of the virtual companion's head and its facial expressions. Talking Head is a cross-disciplinary project developing a human-machine interface with cognitive functions. When creating a realistic humanoid robot or character, the 'uncanny valley' problem may arise. We consider this phenomenon and its possible causes, and we aim to overcome the 'uncanny valley' by increasing realism. This article discusses issues that should be considered when creating highly realistic characters (particularly the head), their facial expressions, and speech visualization.
Keywords: anthropomorphic social agent, facial animation, uncanny valley, visualization, 3D modeling
Procedia PDF Downloads 290
33413 Reinforcement Learning for Self Driving Racing Car Games
Authors: Adam Beaunoyer, Cory Beaunoyer, Mohammed Elmorsy, Hanan Saleh
Abstract:
This research aims to create a reinforcement learning agent capable of racing in challenging simulated environments with a low collision count. We present a reinforcement learning agent that can navigate challenging tracks using both a Deep Q-Network (DQN) and a Soft Actor-Critic (SAC) method. A challenging track includes curves, jumps, and varying road widths throughout. The environment used in this research is based on open-source code from GitHub implementing the 1995 racing game WipeOut. The proposed reinforcement learning agent can navigate challenging tracks rapidly while maintaining low race completion times and collision counts. The results show that the SAC model outperforms the DQN model by a large margin. We also propose an alternative multiple-car model that can navigate the track without colliding with other vehicles on the track. The SAC model is the basis for the multiple-car model, which completes laps more quickly than the single-car model but has a higher collision rate with the track wall.
Keywords: reinforcement learning, soft actor-critic, deep q-network, self-driving cars, artificial intelligence, gaming
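The DQN half of the comparison learns Q-values that bootstrap toward the target y = r + γ·max_a' Q(s', a'). The update rule can be illustrated with a tabular Q function on an invented three-state "track"; this sketches only the update, not the paper's network, game, or the SAC method.

```python
# Tabular illustration of the Q-learning / DQN target rule:
#   y = r + gamma * max_a' Q(s', a')
#   Q(s, a) <- Q(s, a) + alpha * (y - Q(s, a))
# States and actions below are invented for illustration.
ACTIONS = ("left", "right")
Q = {(s, a): 0.0 for s in range(3) for a in ACTIONS}

def td_update(Q, s, a, r, s_next, gamma=0.99, alpha=0.5, terminal=False):
    best_next = 0.0 if terminal else max(Q[(s_next, b)] for b in ACTIONS)
    target = r + gamma * best_next
    Q[(s, a)] += alpha * (target - Q[(s, a)])

# One transition: steering "right" in state 0 avoids a wall (+1 reward).
td_update(Q, 0, "right", 1.0, 1)
print(Q[(0, "right")])  # 0.5
```

In DQN the table is replaced by a neural network trained on minibatches of such targets; SAC instead maximizes an entropy-regularized objective, which is one reason it can explore the track more effectively.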
Procedia PDF Downloads 46
33412 Decision Making System for Clinical Datasets
Authors: P. Bharathiraja
Abstract:
Computer-aided decision-making systems are used to enhance the diagnosis and prognosis of diseases and to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning into decision-making systems. Fuzzy rule-based inference can be used for classification in order to incorporate human reasoning into the decision-making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system handles the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease data set from the University of California at Irvine (UCI) Machine Learning Repository; the data was split into training and testing sets using a hold-out approach. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule-based classification approaches. Furthermore, the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making.
Keywords: decision making, data mining, normalization, fuzzy rule, classification
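The two pre-processing steps named above, min-max normalization and equal-width partitioning into fuzzy intervals, can be sketched as follows. The attribute values are invented, not rows from the UCI heart-disease data.

```python
# Min-max normalization rescales an attribute to [0, 1]:
#   x' = (x - min) / (max - min)
# Equal-width partitioning then splits the value range into k bins of
# identical width, giving the support intervals for the fuzzy sets.
def min_max(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def equal_width_bins(values, k):
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    # bin index in 0..k-1; the maximum value falls into the last bin
    return [min(int((v - lo) / width), k - 1) for v in values]

ages = [29, 41, 53, 65, 77]
print(min_max(ages))              # [0.0, 0.25, 0.5, 0.75, 1.0]
print(equal_width_bins(ages, 3))  # [0, 0, 1, 2, 2]
```

In the full system, triangular or trapezoidal membership functions would be placed over these equal-width intervals before the fuzzy rules are mined.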
Procedia PDF Downloads 517
33411 Preparation and Properties of Chloroacetated Natural Rubber Rubber Foam Using Corn Starch as Curing Agent
Authors: Ploenpit Boochathum, Pitchayanad Kaolim, Phimjutha Srisangkaew
Abstract:
In general, rubber foam is produced using a sulfur curing system. However, the sulfur remaining in rubber product waste burns to sulfur dioxide gas, causing environmental pollution. To avoid using sulfur as the curing agent in rubber foam products, this work proposes a non-sulfur curing system using corn starch as the curing agent. Ether crosslinks are produced via bonding between the hydroxyl groups of the starch molecules and chloroacetate groups added to the natural rubber molecules. The chloroacetated natural rubber (CNR) latex was prepared via the epoxidation reaction of concentrated natural rubber latex; subsequently, the epoxy rings were attacked by chloroacetic acid to produce hydroxyl groups and chloroacetate groups on the rubber molecules. NaHCO3 was selected as the foaming agent in the CNR latex because of its low decomposition temperature of about 50°C. The curing temperature was set at 90°C, which is above the gelatinization temperature (60-70°C) of starch. The effect of the starch loading (0 phr, 3 phr and 5 phr) on the physical properties of CNR rubber foam was investigated. Density decreased from 0.81 g/cm3 at 0 phr to 0.75 g/cm3 at 3 phr and 0.79 g/cm3 at 5 phr. The ability to return to the original thickness after prolonged compressive stress of the CNR rubber foam cured with 5 phr starch was considerably better than that of the foams cured with 3 phr starch or without starch, as the compression set decreased from 66.67% to 40% and 26.67% with increasing starch loading. The mechanical properties, including tensile strength and modulus, of the CNR rubber foams cured using starch increased, while the elongation at break decreased.
In addition, all mechanical properties of the CNR rubber foams cured with 3 phr and 5 phr starch were only slightly different from each other and drastically higher than those of the CNR rubber foam without starch. This work indicates that starch is applicable as a curing agent for CNR rubber, as confirmed by the increase in the elastic modulus (G') of the CNR rubber foams cured with starch over the CNR rubber foam without a curing agent. This type of rubber foam is believed to be a biodegradable and environment-friendly product that can be cured at a low temperature of 90°C.
Keywords: chloroacetated natural rubber, corn starch, non-sulfur curing system, rubber foam
Procedia PDF Downloads 318
33410 Distributed Perceptually Important Point Identification for Time Series Data Mining
Authors: Tak-Chung Fu, Ying-Kit Hung, Fu-Lai Chung
Abstract:
In the field of time series data mining, the Perceptually Important Point (PIP) identification process was first introduced in 2001. It originally served financial time series pattern matching and was later found suitable for time series dimensionality reduction and representation. Its strength lies in preserving the overall shape of the time series by identifying its salient points. With the rise of Big Data, time series data contributes a major proportion, especially data generated by sensors in the Internet of Things (IoT) environment. Given the nature of PIP identification and its successful applications, it is worth further exploring the opportunity to apply PIP to time series 'Big Data'. However, the performance of PIP identification has always been considered a limitation when dealing with 'Big' time series data. In this paper, two distributed versions of PIP identification based on the Specialized Binary (SB) Tree are proposed. The proposed approaches remove the bottleneck of running the PIP identification process on a standalone computer, and the distributed versions achieve an improvement in speed.
Keywords: distributed computing, performance analysis, Perceptually Important Point identification, time series data mining
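The sequential PIP process that the paper distributes can be sketched as follows: start with the first and last points, then repeatedly add the point farthest from the line joining its two neighbouring PIPs. Vertical distance is one of the standard PIP distance measures; the series below is invented, and this naive O(n²) sketch is exactly the bottleneck the SB-Tree versions are designed to avoid.

```python
# Sequential PIP identification sketch using vertical distance:
# repeatedly add the point with the largest vertical distance to the
# line segment between its adjacent already-selected PIPs.
def vertical_distance(p, a, b):
    (x1, y1), (x2, y2), (x, y) = a, b, p
    if x2 == x1:
        return abs(y - y1)
    y_on_line = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
    return abs(y - y_on_line)

def find_pips(series, n):
    pts = list(enumerate(series))
    pips = [pts[0], pts[-1]]
    while len(pips) < n:
        pips.sort()
        best = None
        for a, b in zip(pips, pips[1:]):
            for p in pts[a[0] + 1 : b[0]]:
                d = vertical_distance(p, a, b)
                if best is None or d > best[0]:
                    best = (d, p)
        if best is None:
            break
        pips.append(best[1])
    return sorted(x for x, _ in pips)

series = [1.0, 1.1, 4.0, 1.2, 1.0, 3.0, 1.1]
print(find_pips(series, 4))  # [0, 2, 3, 6]: the spike at index 2 is kept
```

Selecting a small n this way preserves the overall shape of the series, which is the property that makes PIP useful for dimensionality reduction.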
Procedia PDF Downloads 433
33409 Preparation and Evaluation of Calcium Fluorosilicate (CaSiF₆) as a Fluorinating Agent
Authors: Natsumi Murakami, Jae-Ho Kim, Susumu Yonezawa
Abstract:
Calcium fluorosilicate (CaSiF₆) was prepared from calcium silicate (CaSiO₃) with fluorine gas at 25-200 ℃ and 760 Torr for 1-24 h. In particular, X-ray diffraction showed that pure CaSiF₆ could be prepared with F₂ gas at 25 ℃ in 24 h. When the temperature was raised above 100 ℃, the prepared CaSiF₆ decomposed into CaF₂ and SiF₄; the release of SiF₄ gas was confirmed by gas-phase infrared spectroscopy. In this study, we modified the surface of polycarbonate (PC) resin using the SiF₄ gas released from CaSiF₆ particles. Using the prepared CaSiF₆, the surface roughness of the fluorinated PC samples was approximately four times larger than that of the untreated sample (1.4 nm). X-ray photoelectron spectroscopy indicated the formation of fluorinated bonds (e.g., -CFx) on the PC surface after surface fluorination. Consequently, CaSiF₆ particles can be useful as a new fluorinating agent.
Keywords: calcium fluorosilicate, fluorinating agent, polycarbonate, surface fluorination
Procedia PDF Downloads 123
33408 Feature Based Unsupervised Intrusion Detection
Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein
Abstract:
The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps improve the efficiency, performance and prediction rate of the proposed approach. This paper applies the unsupervised k-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we used the new NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) was implemented. The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and takes less learning time compared with using the full feature set of the dataset with the same algorithm.
Keywords: information gain (IG), intrusion detection system (IDS), k-means clustering, Weka
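The information-gain ranking used here for feature selection measures how much knowing a feature reduces uncertainty about the class: IG(class, feature) = H(class) - H(class | feature). A minimal sketch follows; the records are invented toy rows, not NSL-KDD data.

```python
# Information gain for nominal features:
#   IG = H(labels) - sum over feature values v of P(v) * H(labels | v)
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

labels  = ["normal", "normal", "attack", "attack"]
flag    = ["SF", "SF", "REJ", "REJ"]      # perfectly separates the classes
service = ["http", "ftp", "http", "ftp"]  # carries no class information
print(info_gain(flag, labels), info_gain(service, labels))  # 1.0 0.0
```

Ranking the 41 NSL-KDD features by this score and keeping the top-scoring ones is what reduces the feature space before the k-means clustering step.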
Procedia PDF Downloads 296
33407 Microwave Assisted Foam-Mat Drying of Guava Pulp
Authors: Ovais S. Qadri, Abhaya K. Srivastava
Abstract:
The present experiments were carried out to study the drying kinetics and quality of microwave foam-mat dried guava powder. Guava pulp was microwave foam-mat dried using 8% egg albumin as the foaming agent, at microwave powers of 480 W, 560 W, 640 W, 720 W and 800 W, foam thicknesses of 3 mm, 5 mm and 7 mm, and inlet air temperatures of 40˚C and 50˚C. Weight loss was used to estimate the change in drying rate over time. Powdered samples were analysed for various physicochemical quality parameters, viz. acidity, pH, TSS, colour change and ascorbic acid content. Statistical analysis using three-way ANOVA revealed that the sample of 5 mm foam thickness dried at 800 W and 50˚C was the best, with 0.3584% total acid, pH 3.98, 14 min drying time, 8˚Brix TSS, 3.263 colour change and 154.762 mg/100 g ascorbic acid content.
Keywords: foam mat drying, foam mat guava, guava powder, microwave drying
Procedia PDF Downloads 332
33406 Enhance the Power of Sentiment Analysis
Authors: Yu Zhang, Pedro Desouza
Abstract:
Since big data has become substantially more accessible and manageable due to the development of powerful tools for dealing with unstructured data, people are eager to mine information from social media resources that could not be handled in the past. Sentiment analysis, as a novel branch of text mining, has in the last decade become increasingly important in marketing analysis, customer risk prediction and other fields. Scientists and researchers have undertaken significant work in creating and improving their sentiment models. In this paper, we present a concept of selecting appropriate classifiers based on the features and qualities of data sources by comparing the performances of five classifiers on three popular social media data sources: Twitter, Amazon Customer Reviews, and Movie Reviews. We introduce a couple of innovative models that outperform traditional sentiment classifiers for these data sources, and provide insights on how to further improve the predictive power of sentiment analysis. The modelling and testing work was done in R and Greenplum in-database analytic tools.
Keywords: sentiment analysis, social media, Twitter, Amazon, data mining, machine learning, text mining
Procedia PDF Downloads 353
33405 Human Immunodeficiency Virus (HIV) Test Predictive Modeling and Identify Determinants of HIV Testing for People with Age above Fourteen Years in Ethiopia Using Data Mining Techniques: EDHS 2011
Authors: S. Abera, T. Gidey, W. Terefe
Abstract:
Introduction: Testing for HIV is the key entry point to HIV prevention, treatment, and care and support services. Hence, predictive data mining techniques can greatly help to analyze and discover new patterns from huge datasets like the EDHS 2011 data. Objectives: The objective of this study is to build a predictive model for HIV testing and identify determinants of HIV testing for adults above fourteen years of age using data mining techniques. Methods: The Cross-Industry Standard Process for Data Mining (CRISP-DM) was used to build the predictive model for HIV testing and explore association rules between HIV testing and the selected attributes among adult Ethiopians. Decision tree, Naïve Bayes, logistic regression and artificial neural network data mining techniques were used to build the predictive models. Results: The target dataset contained 30,625 study participants, of whom 16,515 (53.9%) were women. Nearly three-fifths, 17,719 (58%), had never been tested for HIV, while the remaining 12,906 (42%) had been tested. Ethiopians with a higher wealth index, higher educational level, age of 20 to 29 years, no stigmatizing attitude towards HIV-positive persons, urban residence, HIV-related knowledge, exposure to family planning information in the mass media, and knowledge of a place to get tested for HIV showed higher rates of HIV testing. Conclusion and Recommendation: Public health interventions should consider the identified determinants to encourage people to get tested for HIV.
Keywords: data mining, HIV, testing, Ethiopia
Procedia PDF Downloads 496
33404 Characteristic Study of Polymer Sand as a Potential Substitute for Natural River Sand in Construction Industry
Authors: Abhishek Khupsare, Ajay Parmar, Ajay Agarwal, Swapnil Wanjari
Abstract:
The extreme demand for aggregate leads to the exploitation of river beds for fine aggregates, adversely affecting the environment. Therefore, a suitable alternative to natural river sand (NRS) is essentially required. This study focuses on preventing environmental impact by developing polymer sand to replace NRS. Polymer sand (P-sand) was developed by mixing high-volume fly ash, bottom ash, cement, natural river sand, and a locally purchased high-solid-content polycarboxylate ether-based superplasticizer (HS-PCE). All the physical and chemical properties of P-sand were measured and satisfied the requirements of the Indian Standard code. P-sand has a good specific gravity of 2.31 and is classified as zone-I sand, with a satisfactory friction angle (37˚) compared to NRS and geopolymer fly ash sand (GFS). Though the water absorption (6.83%) and pH (12.18) are slightly higher than those of GFS and NRS, the alkali-silica reaction and soundness are well within the permissible limits as per Indian Standards. Chemical analysis by X-ray fluorescence showed the presence of high amounts of SiO2 and Al2O3, with magnitudes of 58.879% and 26.77%, respectively. Finally, the compressive strength of M-25 grade concrete using P-sand and GFS was observed to be 87.51% and 83.82%, respectively, of that using NRS after 28 days. The results of this study indicate that P-sand can be a good alternative to NRS for construction work, as it not only reduces the environmental effects of sand mining but also utilises fly ash and bottom ash.
Keywords: polymer sand, fly ash, bottom ash, HSPCE plasticizer, river sand mining
Procedia PDF Downloads 77
33403 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large data sample sizes and dimensions undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting knowledge from a variety of databases; they provide supervised learning in the form of classification, designing models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often greatly influenced by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly masks its quality analysis and leaves quite few practical approaches to use. To our knowledge, we present for the first time a new approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Machine learning relies on feature selection as a pre-processing step, which selects a subset of features and reduces the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features that may result in good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is used, with an external classifier for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. A sample dataset has been used to demonstrate the proposed idea effectively. The proposed method improved the average accuracy on different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
Keywords: data mining, genetic algorithm, KNN algorithms, wrapper based feature selection
Procedia PDF Downloads 316
33402 Analysis Mechanized Boring (TBM) of Tehran Subway Line 7
Authors: Shahin Shabani, Pouya Pourmadadi
Abstract:
Tunnel boring machines (TBMs) have been used for the construction of various tunnels for mining projects for the purposes of access, conveyance of ore and waste, drainage, exploration, water supply and water diversion. Several mining projects have seen the successful and economically beneficial use of TBMs, and there is an increasing awareness of their benefits for mining projects. Key technical considerations for the use of TBMs to construct tunnels for mining projects include geological issues (rock type, rock alteration, rock strength, rock abrasivity, durability, groundwater inflows), depth of cover and the potential for overstressing/rockbursts, site access and terrain, portal locations, TBM constraints, minimum tunnel size, tunnel support requirements, contractor and labor experience, and project schedule demands. This study focuses on tunnelling for mining, with the goal of developing methods and tools for understanding these processes, and analyzes Line 7 of the Tehran metro. Metro Line 7 of Tehran is one of the longest (26 km) and deepest (27 m) projects under implementation. Because of major differences, such as passing under all geotechnical layers of the city, encountering the underground water table along part of its route, and using a mechanized excavation system, it is one of the special metro projects.
Keywords: TBM, tunnel boring machines, economic, metro, line 7
Procedia PDF Downloads 384
33401 Gamification Teacher Professional Development: Engaging Language Learners in STEMS through Game-Based Learning
Authors: Karen Guerrero
Abstract:
Kindergarten-12th grade teachers engaged in teacher professional development (PD) on game-based learning techniques and strategies to support teaching STEMSS (STEM + Social Studies, with an emphasis on geography across the curriculum) to language learners. Ten effective strategies have supported teaching content and language in tandem. To provide engaging teacher PD during summer and spring breaks, gamification integrated these strategies to engage linguistically diverse student populations and provide informal language practice while students engage with the content. Teachers brought a STEMSS lesson to the PD; engaged in a wide variety of games (dice, cards, board, physical, digital, etc.); critiqued the games based on gaming elements; and then developed, brainstormed, presented, piloted, and published their game-based STEMSS lessons to share with their colleagues. Pre- and post-surveys and focus groups demonstrated an increase in knowledge, skills, and self-efficacy in using gamification to teach content in the classroom. The study provides an engaging strategy (gamification) to support teaching content and language to linguistically diverse students in the K-12 classroom. Game-based learning supports informal language practice while developing the academic vocabulary used in the game elements and content focus, building both content knowledge through play and language development through practice. The study also investigated teachers' increase in knowledge, skills, and self-efficacy in using games to teach language learners. Mixed methods were used to investigate knowledge, skills, and self-efficacy before and after the gamification teacher training (pre/post) and to understand the content and application of developing and utilizing game-based learning to teach.
This study will contribute to the body of knowledge in applying game-based learning theories to the K-12 classroom to support English learners in developing English skills and STEMSS content knowledge.Keywords: gamification, teacher professional development, STEM, English learners, game-based learning
Procedia PDF Downloads 91
33400 An Optimized Association Rule Mining Algorithm
Authors: Archana Singh, Jyoti Agarwal, Ajay Rana
Abstract:
Data mining is an efficient technology for discovering patterns in large databases. Association rule mining techniques are used to find correlations between the various item sets in a database, and these correlations are used in decision making and pattern analysis. In recent years, the problem of finding association rules in large datasets has been addressed by many researchers. Various research papers on association rule mining (ARM) were first studied and analyzed to understand the existing algorithms. The Apriori algorithm is the basic ARM algorithm, but it requires many database scans. The DIC algorithm needs fewer database scans but uses a complex lattice data structure. The main focus of this paper is to propose a new optimized algorithm (the Friendly Algorithm) and compare its performance with the existing algorithms. A data set is used to find frequent itemsets and association rules with the help of the existing and proposed algorithms, and it has been observed that the proposed algorithm finds all the frequent itemsets and essential association rules with fewer database scans than the existing algorithms. The proposed algorithm uses optimized data structures, namely a graph and an adjacency matrix.Keywords: association rules, data mining, dynamic item set counting, FP-growth, friendly algorithm, graph
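For context, the Apriori baseline discussed above can be sketched in a few lines; each pass of the `while` loop is one full database scan, which is exactly the cost the proposed Friendly Algorithm aims to reduce. This is a minimal illustrative sketch, not the paper's algorithm; the transactions and support threshold are invented:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Find all frequent itemsets; each loop pass is one database scan."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    candidates = [frozenset([i]) for i in items]
    frequent = {}
    k = 1
    while candidates:
        # One scan: count how many transactions contain each candidate.
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Join frequent k-itemsets to build (k+1)-itemset candidates.
        keys = list(level)
        candidates = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
        k += 1
    return frequent

txns = [{"bread", "milk"}, {"bread", "butter"},
        {"milk", "butter", "bread"}, {"milk"}]
freq = apriori(txns, min_support=2)
print(sorted(tuple(sorted(s)) for s in freq))
```

Association rules are then derived from the frequent itemsets by comparing the support of an itemset with the support of its subsets.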
Procedia PDF Downloads 420
33399 Production, Quality Control and Biodistribution Assessment of 166Ho-BPAMD as a New Bone Seeking Agent
Authors: H. Yousefnia, N. Amraee, M. Hosntalab, S. Zolghadri, A. Bahrami-Samani
Abstract:
The aim of this study was the preparation of a new agent for bone marrow ablation in patients with multiple myeloma. 166Ho was produced at the Tehran research reactor via the 165Ho(n,γ)166Ho reaction. Complexation of 166Ho with BPAMD was carried out by adding about 200 µg of BPAMD in absolute water to 1 mCi of 166HoCl3 and warming the mixture at 90 °C for 1 h. 166Ho-BPAMD was prepared successfully with a radiochemical purity of 95%, as measured by the ITLC method. The final solution was injected into wild-type mice, and the biodistribution was determined up to 48 h. SPECT images were acquired at 2 and 48 h post-injection. Both the biodistribution studies and SPECT imaging indicated high bone uptake, while accumulation in other organs was approximately negligible. The results show that 166Ho-BPAMD has suitable characteristics and can be used as a new bone marrow ablative agent.Keywords: bone marrow ablation, BPAMD, 166Ho, SPECT
Procedia PDF Downloads 506
33398 Troubleshooting Petroleum Equipment Based on Wireless Sensors Based on Bayesian Algorithm
Authors: Vahid Bayrami Rad
Abstract:
In this research, common methods and techniques have been investigated with a focus on intelligent fault-finding and monitoring systems in the oil industry. Remote and intelligent control methods are considered a necessity for implementing various operations in the oil industry, and benefiting from the knowledge extracted, with the help of data mining algorithms, from the vast amounts of data generated is a practical way to speed up monitoring and troubleshooting operations in today's large oil companies. By comparing data mining algorithms and examining their efficiency, structure, and behaviour under different conditions, the proposed Bayesian algorithm, which uses data clustering and analysis together with data evaluation via a colored Petri net, provides an applicable and dynamic model in terms of reliability and response time. Using this method, it is possible to achieve a dynamic and consistent model of the remote control system, prevent leakage in oil pipelines and refineries, and reduce costs as well as human and financial errors. The statistical data obtained from the evaluation process show an increase in reliability, availability, and speed compared to previous methods.Keywords: wireless sensors, petroleum equipment troubleshooting, Bayesian algorithm, colored Petri net, rapid miner, data mining-reliability
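As a rough illustration of the kind of Bayesian fault classification described above: the abstract does not specify the exact algorithm, so this is a generic Gaussian naive Bayes sketch, and the wireless-sensor readings and fault labels are entirely invented:

```python
import math
from collections import defaultdict

# Toy wireless-sensor readings: (pressure, temperature) -> fault label.
# All values are invented for illustration only.
TRAIN = [
    ((30.1, 80.2), "normal"), ((29.8, 79.5), "normal"), ((30.5, 81.0), "normal"),
    ((12.0, 95.3), "leak"),   ((11.4, 97.1), "leak"),   ((13.2, 94.8), "leak"),
]

def fit(data):
    """Estimate per-class feature means/variances and class priors."""
    groups = defaultdict(list)
    for x, y in data:
        groups[y].append(x)
    model = {}
    for y, xs in groups.items():
        n = len(xs)
        means = [sum(col) / n for col in zip(*xs)]
        var = [sum((v - m) ** 2 for v in col) / n + 1e-6
               for col, m in zip(zip(*xs), means)]
        model[y] = (means, var, n / len(data))
    return model

def predict(model, x):
    """Return the class with the highest Gaussian log-likelihood + log prior."""
    def score(y):
        means, var, prior = model[y]
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, var):
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return ll
    return max(model, key=score)

model = fit(TRAIN)
print(predict(model, (12.5, 96.0)))  # low pressure, high temp -> likely "leak"
```

In a deployed system the classifier's output would feed the colored Petri net mentioned in the abstract, which models how a detected fault propagates through the pipeline network.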
Procedia PDF Downloads 66
33397 The Impact of Gold Mining on Disability: Experiences from the Obuasi Municipal Area
Authors: Mavis Yaa Konadu Agyemang
Abstract:
Despite provisions to uphold and safeguard the rights of persons with disability in Ghana, there is evidence that they still encounter several challenges which limit their full and effective involvement in mainstream society, including in the gold mining sector. The study sought to explore how persons with physical disability (PWPDs) experience gold mining in the Obuasi Municipal Area. A qualitative research design was used to discover and understand the experiences of PWPDs regarding mining. The purposive sampling technique was used to select five key informants (aged 24-52 years), while snowball sampling aided the selection of 16 persons with various forms of physical disability (aged 24-60 years). In-depth interviews were used to gather data; the interviews lasted from forty-five minutes to an hour. Regarding the setting, thirteen (13) of the participants with disability were interviewed in their houses, two (2) on the phone, and one (1) in the office, whereas the five (5) key informants were all interviewed in their offices. Data were analyzed using Creswell's (2009) concept of thematic analysis. The findings suggest that even though land degradation affected everyone in the area, persons with mobility and visual impairments experienced many difficulties trekking the undulating land for long distances in search of arable land. Also, although mining activities are mostly labour-intensive, PWPDs were not employed even in areas where they could work. Further, the cost of items in general was high, affecting PWPDs more due to their economic immobility, and they had to pay for alternative sources of water because of land degradation and water pollution. The study also found that the peculiar conditions of PWPDs were not factored into compensation payments, nor were females with physical disability engaged in compensation negotiations.
Also, although some of the infrastructure provided by the gold mining companies in the area was physically accessible to some extent, it was not accessible in terms of information delivery. There is a need to educate the public on the effects of mining on PWPDs, their needs, and disability issues in general. The Minerals and Mining Act (Act 703) should be amended to include provisions that consider the peculiar needs of PWPDs in compensation payment.Keywords: mining, resettlement, compensation, environmental, social, disability
Procedia PDF Downloads 55
33396 Analysis of Scholarly Communication Patterns in Korean Studies
Authors: Erin Hea-Jin Kim
Abstract:
This study aims to investigate scholarly communication patterns in Korean studies, which focus on all aspects of Korea, including history, culture, literature, politics, society, economics, religion, and so on. Within Korea the field is called a 'national study' or 'home study', since the subject is the researchers' own country, whereas abroad it is called an 'area study', since the subject lies outside the researchers' own country. Understanding the structure of scholarly communication in Korean studies is important, since the motivations, procedures, results, or outcomes of individual studies may be affected by the cooperative relationships that appear in the communication structure. To this end, we collected 1,798 articles with the (author or index) keyword 'Korean' published in 2018 from the Scopus database and extracted the institution and country of the authors using a text mining technique. A total of 96 countries, including South Korea, were identified. We then constructed a co-authorship network based on the countries identified. The indicators of social network analysis (SNA), co-occurrences, and cluster analysis were used to measure the activity and connectivity of participation in collaboration in Korean studies. As a result, the highest frequency of collaboration appears in the following order: S. Korea with the United States (603), S. Korea with Japan (146), S. Korea with China (131), S. Korea with the United Kingdom (83), and China with the United States (65). This means that the most active participants are S. Korea and the USA. The highest ranks in the role of mediator, measured by betweenness centrality, appear in the following order: United States (0.165), United Kingdom (0.045), China (0.043), Japan (0.037), Australia (0.026), and South Africa (0.023). These results show that these countries contribute to connectivity in Korean studies. We found two major communities in the co-authorship network.
Asian countries and America belong to one community, while the United Kingdom and European countries belong to the other. Korean studies have a long history, the field having emerged as a subject of study since the Japanese colonial period; however, Korean studies have never been investigated through digital content analysis. The contributions of this study are an analysis of co-authorship in Korean studies from a global perspective based on digital content, which to our knowledge has not been attempted so far, and suggestions on how to analyze humanities disciplines such as history, literature, or Korean studies through text mining. The limitation of this study is that the scholarly data we collected did not cover all domestic journals, because we only gathered scholarly data from Scopus. There are thousands of domestic journals not indexed in Scopus that are relevant to national studies but could not be collected.Keywords: co-authorship network, Korean studies, Koreanology, scholarly communication
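The country-level co-authorship network described above can be built from affiliation data in a few lines: count how often each pair of countries appears together on an article, and use the counts as edge weights. A minimal sketch; the affiliation records below are invented stand-ins for the Scopus export:

```python
from itertools import combinations
from collections import Counter

# Hypothetical affiliation records: one list of author countries per article.
articles = [
    ["South Korea", "United States"],
    ["South Korea", "United States", "Japan"],
    ["South Korea", "China"],
    ["United Kingdom", "South Korea"],
    ["China", "United States"],
]

def coauthorship_edges(records):
    """Count how often each (sorted) pair of countries co-occurs on an article."""
    edges = Counter()
    for countries in records:
        for a, b in combinations(sorted(set(countries)), 2):
            edges[(a, b)] += 1
    return edges

edges = coauthorship_edges(articles)
for pair, weight in edges.most_common(3):
    print(pair, weight)
```

Given such a weighted edge list, SNA measures like betweenness centrality and community detection can then be computed with a graph library such as networkx.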
Procedia PDF Downloads 158
33395 A Three Tier Secure KQML Interface with Novel Performatives
Authors: Dimple Juneja, Aarti Singh, Renu Hooda
Abstract:
Knowledge Query and Manipulation Language (KQML) and FIPA ACL are the two prime communication languages in multi-agent systems (MAS). Both languages are broadly similar in terms of semantics (both are based on speech act theory) and compete closely in establishing agent communication across the Internet. Although software agents operating on the Internet need to be safeguarded from their counter-peers, both protocols lack security performatives. The paper proposes a three-tier security interface with a few novel security-related performatives that enhances the basic architecture of KQML. The three levels are attestation, certification, and trust establishment, which together enforce tight security and hence reduce security breaches.Keywords: multiagent systems, KQML, FIPA ACL, performatives
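A KQML message is an s-expression consisting of a performative plus keyword parameters. The sketch below shows how a security performative from the proposed attestation tier might be serialized; the performative name `attest` and its parameters are hypothetical illustrations, not taken from the paper:

```python
# A KQML message is an s-expression: (performative :param value ...).
# The performative "attest" and its content below are hypothetical examples
# of the attestation-tier messages the paper proposes.

def kqml(performative, **params):
    """Serialize a KQML-style message as an s-expression string."""
    fields = " ".join(f":{k} {v}" for k, v in params.items())
    return f"({performative} {fields})"

msg = kqml("attest",
           sender="agentA",
           receiver="agentB",
           language="KIF",
           content='"(identity-proof agentA cert-1234)"')
print(msg)
```

Standard KQML performatives such as `ask-one` or `tell` would be serialized the same way; the security tiers wrap this exchange so that a message is only acted on after attestation and certification succeed.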
Procedia PDF Downloads 411
33394 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the processes, technology, and specific procedures used in digital investigation are not keeping up with criminal developments; criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence is invaluable in identifying crime. It has been observed that algorithms based on artificial intelligence (AI) are highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that will assist in developing a plausible theory which can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented using the Java Agent Development Framework, with Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute.
As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.Keywords: artificial intelligence, computer science, criminal investigation, digital forensics
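The integrity check benchmarked above is essentially a cryptographic hash computed over the acquired image and compared against the acquisition-time value. A minimal sketch using SHA-256; the small temporary file here stands in for a multi-gigabyte disk image:

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Stream a (possibly large) image file through SHA-256 in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Demo on a small temporary file standing in for a forensic disk image.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"lone wolf image bytes")
    path = f.name

baseline = sha256_of(path)           # hash recorded at acquisition time
print(sha256_of(path) == baseline)   # True: image unmodified since acquisition
os.unlink(path)
```

Any single-bit change to the image would produce a completely different digest, which is why hash verification is the standard way to demonstrate that evidence has not been altered between acquisition and analysis.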
Procedia PDF Downloads 212
33393 Using Mining Methods of WEKA to Predict Quran Verb Tense and Aspect in Translations from Arabic to English: Experimental Results and Analysis
Authors: Jawharah Alasmari
Abstract:
In verb inflection, tense marks past/present/future action, and aspect marks progressive/continuous or perfect/completed actions. The usage and meaning of tense and aspect differ between Arabic and English. In this research, we applied data mining methods to test the predictive power of candidate features, using our dataset of Arabic verbs in context and their 7 translations. Weka machine learning classifiers are used in this experiment in order to examine the key features that can guide a translator towards the appropriate English translation of the Arabic verb tense and aspect.Keywords: Arabic verb, English translations, mining methods, Weka software
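Candidate features like those described are typically fed to Weka as an ARFF file, with one nominal attribute per feature and the target tense/aspect as the class attribute. A minimal sketch of that export step; the feature names and instances below are invented for illustration, not taken from the paper's dataset:

```python
# Minimal sketch: export in-context Arabic verb features to Weka's ARFF format.
# The features (verb_form, particle, person) and instances are invented.

instances = [
    # (verb_form, particle_before, subject_person, english_tense)
    ("perfect",   "qad",  "3rd", "past-perfect"),
    ("imperfect", "sa",   "3rd", "future"),
    ("imperfect", "none", "1st", "present"),
]

def to_arff(rows, relation="arabic_verbs"):
    """Build an ARFF document with nominal attributes derived from the data."""
    forms  = sorted({r[0] for r in rows})
    parts  = sorted({r[1] for r in rows})
    pers   = sorted({r[2] for r in rows})
    tenses = sorted({r[3] for r in rows})
    lines = [f"@RELATION {relation}",
             f"@ATTRIBUTE verb_form {{{','.join(forms)}}}",
             f"@ATTRIBUTE particle {{{','.join(parts)}}}",
             f"@ATTRIBUTE person {{{','.join(pers)}}}",
             f"@ATTRIBUTE tense {{{','.join(tenses)}}}",
             "@DATA"]
    lines += [",".join(r) for r in rows]
    return "\n".join(lines)

print(to_arff(instances))
```

The resulting file can be opened directly in the Weka Explorer, where classifiers such as J48 or naive Bayes can then be trained with the `tense` attribute as the class.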
Procedia PDF Downloads 272
33392 Experimental Study of Moisture Effect on the Mechanical Behavior of Flax Fiber Reinforcement
Authors: Marwa Abida, Florian Gehring, Jamel Mars, Alexandre Vivet, Fakhreddine Dammak, Mohamed Haddar
Abstract:
The demand for bio-based materials in semi-structural and structural applications is constantly growing to conform to new environmental policies. Among them, plant fiber reinforced composites (PFRCs) are attractive to the scientific community as well as the industrial world. Due to their relatively low densities and low environmental impact, vegetal fibers appear to be suitable as reinforcing materials for polymers. However, the major issue of plant fibers, and PFRCs in general, is their hydrophilic behavior (high affinity for water molecules). Indeed, when absorbed, water causes fiber swelling and a loss of mechanical properties. Thus, environmental loadings (moisture, temperature, UV) can strongly affect their mechanical properties and therefore play a critical role in the service life of PFRCs. In order to analyze the influence of conditioning at relative humidity on the behavior of flax fiber reinforced composites, a preliminary study on flax fabrics was conducted. Conditioning the fabrics in different humid atmospheres made it possible to study the influence of water content on the hygro-mechanical behavior of the flax reinforcement through mechanical tensile tests. This work shows that increasing the relative humidity of the atmosphere induces an increase in the water content of the samples. It also reveals the significant influence of water content on the stiffness and elongation at break of the fabric, while no significant change in the breaking load is detected. A non-linear decrease in flax fabric rigidity and an increase in its elongation at maximal force are observed with increasing water content. It is concluded that water molecules act as a softening agent on flax fabrics. Two kinds of typical tensile curves are identified. Most of the tensile curves show a single linear region, where the behavior appears to be linear prior to the first yarn failure.
For some samples in which the water content is between 2.7% and 3.7% (regardless of the conditioning atmosphere), a two-linear-region behavior emerges. This phenomenon could be explained by local heterogeneities of water content, which could induce premature local plasticity in some regions of the flax fabric sample.Keywords: hygro-mechanical behavior, hygroscopy, flax fabric, relative humidity, mechanical properties
Procedia PDF Downloads 188
33391 Interaction between Space Syntax and Agent-Based Approaches for Vehicle Volume Modelling
Authors: Chuan Yang, Jing Bie, Panagiotis Psimoulis, Zhong Wang
Abstract:
Modelling and understanding vehicle volume distribution over the urban network are essential for urban design and transport planning. The space syntax approach has been widely applied as the main conceptual and methodological framework for contemporary vehicle volume models, with the help of the statistical method of multiple regression analysis (MRA). However, the MRA model with space syntax variables shows a limitation in predicting vehicle volume: it cannot account for the crossed effect of urban configurational characteristics and socio-economic factors. The aim of this paper is to construct models that capture the combined impact of street network structure and socio-economic factors. We present a multilevel linear (ML) and an agent-based (AB) vehicle volume model at an urban scale, both grounded in the space syntax theoretical framework. The ML model allows random effects of urban configurational characteristics in different urban contexts, and the AB model incorporates transformed space syntax components of the MRA models into the agents' spatial behaviour. The three models were implemented in the same urban environment. The ML model exhibits superiority over the original MRA model in identifying the relative impacts of the configurational characteristics and macro-scale socio-economic factors that shape vehicle movement distribution over the city. Compared with the ML model, the suggested AB model demonstrated the ability to estimate vehicle volume in the urban network considering the combined effects of configurational characteristics and land-use patterns at the street segment level.Keywords: space syntax, vehicle volume modeling, multilevel model, agent-based model
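The AB model's core idea, agents whose movement choices are weighted by space syntax measures of the street segments, can be sketched as a toy simulation. The street network, the integration values, and all parameters below are invented for illustration; the paper's actual model incorporates the transformed MRA components rather than raw integration:

```python
import random
from collections import Counter

# Toy street network: segment -> neighbouring segments, plus an invented
# space-syntax "integration" value per segment.
NETWORK = {
    "A": ["B", "C"], "B": ["A", "C", "D"],
    "C": ["A", "B", "D"], "D": ["B", "C"],
}
INTEGRATION = {"A": 0.4, "B": 1.2, "C": 1.0, "D": 0.3}

def simulate(n_agents=1000, n_steps=20, seed=42):
    """Agents move between segments, preferring high-integration neighbours.
    Returns the visit count per segment, a proxy for vehicle volume."""
    rng = random.Random(seed)
    volume = Counter()
    for _ in range(n_agents):
        here = rng.choice(list(NETWORK))
        for _ in range(n_steps):
            nbrs = NETWORK[here]
            weights = [INTEGRATION[n] for n in nbrs]
            here = rng.choices(nbrs, weights=weights)[0]
            volume[here] += 1
    return volume

vol = simulate()
print(vol.most_common())  # high-integration segments attract more traffic
```

Extending the choice weights with land-use attractors at each segment is what lets an AB model of this kind capture the combined configurational and socio-economic effects that a single-level regression misses.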
Procedia PDF Downloads 145