Search results for: Data Mining
24973 Building Transparent Supply Chains through Digital Tracing
Authors: Penina Orenstein
Abstract:
In today's world, particularly with COVID-19 posing a constant worldwide threat, organizations need greater visibility over their supply chains more than ever before, in order to find areas for improvement and greater efficiency, reduce the chances of disruption and stay competitive. The concept of supply chain mapping is one where every process and route between each vendor and supplier is mapped in detail. The simplest method of mapping involves sourcing publicly available data, including news and financial information concerning relationships between suppliers. An additional layer of information would be disclosed by large, direct suppliers about their production and logistics sites. While this method has the advantage of not requiring any input from suppliers, it also doesn't allow for much transparency beyond the first supplier tier and may generate irrelevant data—noise—that must be filtered out to find the actionable data. The primary goal of this research is to build data maps of supply chains by focusing on a layered approach. Using these maps, the secondary goal is to address the question of whether the supply chain can be re-engineered to make improvements, for example, to lower the carbon footprint. Using a drill-down approach, the end result is a comprehensive map detailing the linkages between tier-one, tier-two, and tier-three suppliers superimposed on a geographical map. The driving force behind this idea is to be able to trace individual parts to the exact site where they are manufactured. In this way, companies can ensure sustainability practices from the production of raw materials through the finished goods. The approach allows companies to identify and anticipate vulnerabilities in their supply chain. It unlocks predictive analytics capabilities and enables them to act proactively. The research is particularly compelling because it unites network science theory with empirical data and presents the results in a visual, intuitive manner.
Keywords: data mining, supply chain, empirical research, data mapping
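A minimal sketch of the kind of layered supplier map the abstract describes, assuming a small hypothetical relationship table and invented site coordinates (none of the names or numbers come from the paper); networkx stands in for the network-science layer.

```python
# Minimal sketch (hypothetical data): a tiered supplier map as a directed graph
# with site coordinates attached for a geographical overlay.
import networkx as nx

# Hypothetical relationships: (buyer, supplier, tier of the supplier)
relationships = [
    ("OEM", "Supplier_A", 1),
    ("OEM", "Supplier_B", 1),
    ("Supplier_A", "Supplier_C", 2),
    ("Supplier_C", "RawMaterial_D", 3),
]
# Hypothetical site locations (lat, lon) for the geographical overlay
sites = {"Supplier_A": (52.52, 13.40), "Supplier_C": (31.23, 121.47)}

G = nx.DiGraph()
for buyer, supplier, tier in relationships:
    G.add_edge(supplier, buyer)            # material flows supplier -> buyer
    G.nodes[supplier]["tier"] = tier
for name, (lat, lon) in sites.items():
    if name in G:
        G.nodes[name].update(lat=lat, lon=lon)

# Drill-down: trace every upstream site feeding the OEM across all tiers
upstream = nx.ancestors(G, "OEM")
print(sorted(upstream))
```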
Procedia PDF Downloads 174
24972 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data
Authors: K. Sathishkumar, V. Thiagarasu
Abstract:
Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low costs. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as the Support Vector Machine (SVM), the K-means algorithm and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations. The performance evaluation of the existing algorithms is carried out to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior and processing time, a hybrid clustering-based optimization approach has been proposed.
Keywords: microarray technology, gene expression data, clustering, gene selection
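A minimal sketch of one of the clustering steps the abstract compares, assuming a synthetic expression matrix in place of real microarray data; this is not the authors' pipeline.

```python
# Minimal sketch (synthetic data): K-means clustering of a gene expression
# matrix with scikit-learn, rows = genes, columns = samples.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
expression = rng.normal(size=(500, 40))           # 500 genes x 40 samples (stand-in)

X = StandardScaler().fit_transform(expression)    # normalise expression profiles
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

print("cluster sizes:", np.bincount(kmeans.labels_))
print("silhouette:", round(silhouette_score(X, kmeans.labels_), 3))
```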
Procedia PDF Downloads 323
24971 A Study of the Performance Parameter for Recommendation Algorithm Evaluation
Authors: C. Rana, S. K. Jain
Abstract:
The enormous amount of Web data has challenged its efficient usage in the past few years. As such, a range of techniques are applied to tackle this problem; prominent among them are personalization and recommender systems. In fact, these are the tools that assist users in finding relevant information on the web. Most e-commerce websites apply such tools in one way or the other. In the past decade, a large number of recommendation algorithms have been proposed to tackle such problems. However, there has not been much research into the evaluation criteria for these algorithms. As such, the traditional accuracy and classification metrics are still used for evaluation purposes, which provide only a static view. This paper studies how the evolution of user preference over a period of time can be mapped in a recommender system using a new evaluation methodology that explicitly uses the time dimension. We have also presented different types of experimental set-ups that are generally used for recommender system evaluation. Furthermore, an overview of major accuracy metrics and metrics that go beyond the scope of accuracy, as researched in the past few years, is also discussed in detail.
Keywords: collaborative filtering, data mining, evolutionary, clustering, algorithm, recommender systems
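A minimal sketch of what an evaluation that "explicitly uses the time dimension" can look like, assuming a hypothetical ratings log; the 80% time cut-off is an illustrative choice, not the paper's protocol.

```python
# Minimal sketch (hypothetical log): a chronological train/test split for
# recommender evaluation, so that preference evolution over time is respected.
import pandas as pd

ratings = pd.DataFrame({
    "user": [1, 1, 2, 2, 2, 3],
    "item": ["a", "b", "a", "c", "b", "c"],
    "rating": [5, 3, 4, 2, 5, 4],
    "ts": pd.to_datetime(["2023-01-05", "2023-02-10", "2023-01-20",
                          "2023-03-01", "2023-03-15", "2023-04-02"]),
})

ratings = ratings.sort_values("ts")
cutoff = ratings["ts"].quantile(0.8)       # train on the first 80% of the timeline
train = ratings[ratings["ts"] <= cutoff]
test = ratings[ratings["ts"] > cutoff]
print(len(train), "train interactions,", len(test), "test interactions")
```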
Procedia PDF Downloads 411
24970 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico
Authors: Ismene Ithai Bras-Ruiz
Abstract:
Learning Analytics (LA) is employed by universities not only as a tool but as a specialized ground to support students and professors. However, not all academic programs apply LA with the same goal or use the same tools. In fact, LA is formed by five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not just to inform academic authorities about the situation of the program, but also to detect at-risk students, professors with needs, or general problems. At the highest level, Artificial Intelligence techniques are applied to support learning practices. LA has adopted different techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. It is expected that each academic program decides which field it wants to utilize on the basis of its academic interests as well as its capacities in terms of professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED in Spanish) of the National Autonomous University of Mexico (UNAM) has been working for forty years as an alternative to traditional programs; one of its main supports has been the use of new information and communications technologies (ICT). Today, UNAM has one of the largest networked higher education programs, with twenty-six academic programs in different faculties. This situation means that every faculty works with heterogeneous populations and academic problems. In this sense, every program has developed its own Learning Analytics techniques to improve academic issues. In this context, an investigation was carried out to determine the state of the application of LA in the academic programs of the different faculties. The premise of the study was that not all faculties have utilized advanced LA techniques, and it is probable that they do not know which field of study is closest to their program goals. Consequently, not all programs know about LA, but this does not mean they do not work with LA in a veiled or less explicit sense. It is very important to know the degree of knowledge about LA for two reasons: 1) it allows an appreciation of the work of the administration to improve the quality of teaching, and 2) it indicates whether it is possible to introduce other LA techniques. For this purpose, three instruments were designed to determine the experience and knowledge of LA. These were applied to ten faculty coordinators and their personnel; thirty members were consulted (academic secretary, systems manager or data analyst, and coordinator of the program). The final report made it possible to understand that almost all programs work with basic statistical tools and techniques, which helps the administration only to know what is happening inside the academic program, but they are not ready to move up to the next level, that is, applying Artificial Intelligence or recommender systems to reach a personalized learning system. This situation is not related to the knowledge of LA, but to the clarity of the long-term goals.
Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise
Procedia PDF Downloads 127
24969 A New DIDS Design Based on a Combination Feature Selection Approach
Authors: Adel Sabry Eesa, Adnan Mohsin Abdulazeez Brifcani, Zeynep Orman
Abstract:
Feature selection has been used in many fields, such as classification, data mining and object recognition, and has proven to be effective for removing irrelevant and redundant features from the original data set. In this paper, a new design of a distributed intrusion detection system is proposed, using a combination feature selection model based on the bees algorithm and decision trees. The bees algorithm is used as the search strategy to find the optimal subset of features, whereas the decision tree is used as a judgment for the selected features. Both the produced features and the generated rules are used by a Decision Making Mobile Agent to decide whether or not there is an attack in the network. The Decision Making Mobile Agent migrates through the network, moving from node to node; if it finds that there is an attack on one of the nodes, it alerts the user through the User Interface Agent or takes some action through the Action Mobile Agent. The KDD Cup 99 data set is used to test the effectiveness of the proposed system. The results show that even if only four features are used, the proposed system gives a better performance when compared with the results obtained using all 41 features.
Keywords: distributed intrusion detection system, mobile agent, feature selection, bees algorithm, decision tree
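A minimal sketch of the "judgment" half of the approach described above, i.e. scoring candidate feature subsets with a decision tree. The search half (the bees algorithm) is reduced to random candidate subsets for brevity, and the data is synthetic rather than KDD Cup 99, so treat it as an illustration only.

```python
# Minimal sketch (synthetic data, random search standing in for the bee search):
# wrapper-style evaluation of 4-feature subsets with a decision tree.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=41, n_informative=6,
                           random_state=0)
rng = np.random.default_rng(0)

def score_subset(features):
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, features], y, cv=3).mean()

best_subset, best_score = None, -1.0
for _ in range(20):                                  # candidate subsets
    subset = rng.choice(41, size=4, replace=False)   # 4 features, as in the abstract
    s = score_subset(subset)
    if s > best_score:
        best_subset, best_score = sorted(subset), s

print("best 4-feature subset:", best_subset, "accuracy:", round(best_score, 3))
```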
Procedia PDF Downloads 407
24968 Remediation of Heavy Metal Contaminated Soil with Vivianite Nanoparticles
Authors: Shinen B., Bavor J., Dorjkhand B., Suvd B., Maitsetseg B.
Abstract:
A number of remediation techniques are available for the treatment of soils and sediments contaminated by heavy metals. However, some of these techniques are expensive and environmentally disruptive. Nanomaterials are used in the environment as catalysts to convert toxic substances in water, soil, and sediment into environmentally benign compounds. This study was carried out to scrutinize the feasibility of vivianite nanoparticles for the remediation of soils contaminated with heavy metals. Column experiments were performed in the laboratory to examine nanoparticle sequestration of metal in soil amended with vivianite nanoparticle suspension. The effect of environmental parameters such as temperature, pH and redox potential on metal leachability and bioavailability of soil amended with nanoparticle suspension was examined and compared with non-amended soils. The vivianite was effective in reducing the leachability of metals in soils. It is suggested that vivianite nanoparticles could be applied for the remediation of contaminated sites polluted by heavy metals due to mining activities, particularly in Mongolia, where mining industries have been developing rapidly in the last decade.
Keywords: bioavailability, heavy metals, nanoparticles, remediation
Procedia PDF Downloads 188
24966 Charting Sentiments with Naive Bayes and Logistic Regression
Authors: Jummalla Aashrith, N. L. Shiva Sai, K. Bhavya Sri
Abstract:
The swift progress of web technology has not only amassed a vast reservoir of internet data but also triggered a substantial surge in data generation. The internet has metamorphosed into one of the dynamic hubs for online education, idea dissemination, as well as opinion-sharing. Notably, the widely utilized social networking platform Twitter is experiencing considerable expansion, providing users with the ability to share viewpoints, participate in discussions spanning diverse communities, and broadcast messages on a global scale. The upswing in online engagement has sparked a significant curiosity in subjective analysis, particularly when it comes to Twitter data. This research is committed to delving into sentiment analysis, focusing specifically on the realm of Twitter. It aims to offer valuable insights into deciphering information within tweets, where opinions manifest in a highly unstructured and diverse manner, spanning a spectrum from positivity to negativity, occasionally punctuated by neutrality expressions. Within this document, we offer a comprehensive exploration and comparative assessment of modern approaches to opinion mining. Employing a range of machine learning algorithms such as Naive Bayes and Logistic Regression, our investigation plunges into the domain of Twitter data streams. We delve into overarching challenges and applications inherent in the realm of subjectivity analysis over Twitter.
Keywords: machine learning, sentiment analysis, visualisation, python
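A minimal sketch comparing the two classifiers named above on a handful of invented tweets (the labelled examples are assumptions, not the authors' corpus).

```python
# Minimal sketch (toy data): Naive Bayes vs. Logistic Regression for tweet
# sentiment on a bag-of-words representation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = ["love this new phone", "worst service ever", "not bad at all",
          "absolutely terrible experience", "great value and fast delivery",
          "I hate waiting in line"]
labels = ["pos", "neg", "pos", "neg", "pos", "neg"]

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    model = make_pipeline(CountVectorizer(), clf).fit(tweets, labels)
    print(name, "->", model.predict(["the delivery was great"]))
```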
Procedia PDF Downloads 54
24965 Research on the Landscape of Xi'an Ancient City Based on the Poetry Text of Tang Dynasty
Authors: Zou Yihui
Abstract:
The integration of the traditional landscape of the ancient city and the poet's emotions and symbolism into ancient poetry is the unique cultural gene and spiritual core of the historical city, and re-understanding the historical landscape pattern from the poetry is conducive to continuing the historical city context and improving the current situation of the gradual decline of the poetic quality of the modern historical urban landscape. Starting from Tang poetry and using semantic analysis methods combined with text mining technology, entry mining, word frequency analysis, and cluster analysis of the landscape information of Tang Chang'an City were carried out, and a method framework for analyzing urban landscape form based on poetry texts was constructed. Nearly 160 poems describing the landscape of Tang Chang'an City were screened, and the poetic landscape characteristics of Tang Chang'an City were sorted out locally in order to combine them with modern urban spatial development and continue the urban spatial context.
Keywords: Tang Chang'an City, poetic texts, semantic analysis, historical landscape
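A minimal sketch of the word-frequency step that precedes the clustering described above, using a few invented English lines in place of the Tang corpus.

```python
# Minimal sketch (invented lines, not the Tang corpus): term-frequency mining
# of landscape words in a small set of poem lines.
from collections import Counter

poems = ["palace gardens by the Qujiang pool", "willows along the city moat",
         "spring wind over the Qujiang pool", "lanterns above the palace towers"]

terms = Counter(word for line in poems for word in line.lower().split())
print(terms.most_common(5))
```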
Procedia PDF Downloads 61
24964 Delineating Concern Ground in Block Caving – Underground Mine Using Ground Penetrating Radar
Authors: Eric Sitorus, Septian Prahastudhi, Turgod Nainggolan, Erwin Riyanto
Abstract:
Mining by block or panel caving is a mining method that takes advantage of fractures within an ore body, coupled with gravity, to extract material from a predetermined column of ore. The caving column is weakened from beneath through the use of undercutting, after which the ore breaks up and is extracted from below in a continuous cycle. The nature of this method induces cyclical stresses on the pillars of excavations as stress is built up and released over time, which has a detrimental effect on both the installed ground support and the rock mass itself. Ground support capacity, especially on the production level where the excavation void ratio is highest, is subjected to heavy loading. Strain above the elongation threshold of the support capacity can yield, resulting in damage to excavations. Geotechnical engineers must not only evaluate the remnant capacity of ground support systems but also investigate the depth of rock mass yield within pillars, backs and floors. Ground Penetrating Radar (GPR) is a geophysical method that has the ability to evaluate rock mass damage using electromagnetic waves. This paper illustrates a case study from the Grasberg mining complex where non-invasive information on the depth of damage and the condition of the remaining rock mass was required. GPR with a 100 MHz antenna resolution was used to obtain images of the subsurface to determine rehabilitation requirements prior to recommencing production activities. The GPR surveys were used to calibrate the reflection coefficient response of varying rock mass conditions to known Rock Quality Designation (RQD) parameters observed at the mine. The calibrated GPR survey allowed site engineers to map subsurface conditions and plan rehabilitation accordingly.
Keywords: block caving, ground penetrating radar, reflectivity, RQD
Procedia PDF Downloads 133
24963 Cotton Crops Vegetative Indices Based Assessment Using Multispectral Images
Authors: Muhammad Shahzad Shifa, Amna Shifa, Muhammad Omar, Aamir Shahzad, Rahmat Ali Khan
Abstract:
Many applications of remote sensing to vegetation and crop response depend on the spectral properties of individual leaves and plants. Vegetation indices are usually determined to estimate crop biophysical parameters, such as crop canopies and crop leaf area indices, with the help of remote sensing. In this study, cotton crop assessment is performed with the help of vegetative indices, using remotely sensed images from an optical multispectral radiometer (MSR5). The interpretation is based on the fact that different materials reflect and absorb light differently at different wavelengths. Non-normalized and normalized forms of these datasets are analyzed using two complementary data mining algorithms: K-means and K-nearest neighbor (KNN). Our analysis shows that the use of normalized reflectance data and vegetative indices is suitable for automated assessment and decision making.
Keywords: cotton, condition assessment, KNN algorithm, clustering, MSR5, vegetation indices
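A minimal sketch of one normalized vegetation index plus a KNN classifier, using made-up reflectance values rather than MSR5 field data; the index choice (NDVI) and the labels are assumptions for illustration.

```python
# Minimal sketch (synthetic numbers): NDVI from red and near-infrared
# reflectance, then K-nearest neighbours to label crop condition.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical per-plot reflectance: [red, near-infrared]
reflectance = np.array([[0.08, 0.52], [0.12, 0.45], [0.30, 0.33],
                        [0.07, 0.60], [0.28, 0.35], [0.10, 0.50]])
condition = ["good", "good", "stressed", "good", "stressed", "good"]

red, nir = reflectance[:, 0], reflectance[:, 1]
ndvi = (nir - red) / (nir + red)                 # NDVI = (NIR - R) / (NIR + R)

knn = KNeighborsClassifier(n_neighbors=3).fit(ndvi.reshape(-1, 1), condition)
print(knn.predict(np.array([[0.70], [0.05]])))   # a high-NDVI and a low-NDVI plot
```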
Procedia PDF Downloads 332
24962 Semantic Based Analysis in Complaint Management System with Analytics
Authors: Francis Alterado, Jennifer Enriquez
Abstract:
Semantic Based Analysis in Complaint Management System with Analytics is an enhanced tool for submitting complaints by clients as well as a mechanism for Palawan Polytechnic College to gather, process, and monitor the status of these complaints. The study includes a mobile application that serves as a remote facility of communication between the students and the school management on the issues encountered by the students and the solution of every complaint received. In processing the complaints, text mining and clustering algorithms were utilized. Every module of the system was tested and, based on the results, found to be 100% free from error before integration was done. System testing was also carried out by checking the expected functionality of the system, which was 100% functional. The system was tested by 10 students forwarding complaints to 10 departments. Based on the results, the students were able to submit complaints, the system was able to process them accordingly by identifying the department for which each complaint was intended, and the concerned department was able to give feedback on the complaint received to the student. With this, the system gained a 4.7 rating, which means Excellent.
Keywords: technology adoption, emerging technology, issues challenges, algorithm, text mining, mobile technology
Procedia PDF Downloads 198
24961 Influence of Dynamic Loads in the Structural Integrity of Underground Rooms
Authors: M. Inmaculada Alvarez-Fernández, Celestino González-Nicieza, M. Belén Prendes-Gero, Fernando López-Gayarre
Abstract:
Among the many factors affecting the stability of mining excavations, rock-bursts and tremors play a special role. These dynamic loads occur practically always and have different sources of generation. The most important of them is the commonly used mining technique, which disintegrates a certain area of the rock mass not only in the area of the planned mining, but also creates waves that significantly exceed this area, affecting the structural elements. In this work, the consequences of dynamic loads on the structural elements of an underground room-and-pillar mine are analysed in order to avoid roof instabilities. To this end, dynamic loads were evaluated through in situ and laboratory tests and simulated with numerical modelling. Initially, the geotechnical characterization of all materials was carried out by means of large-scale tests. Then, drill holes were made in the roof of the mine and monitored to determine possible discontinuities in it. Three seismic stations and a triaxial accelerometer were employed to measure the vibrations from blasting tests, establish the dynamic behaviour of the roof and pillars and develop the transmission laws. Finally, computer simulations with the FLAC3D software were carried out to check the effect of vibrations on the stability of the roofs. The study shows that in-situ tests have greater reliability than laboratory samples because they eliminate the effect of heterogeneities, that the pillars work by decreasing the amplitude of the vibration around them, and that the tensile strength of a beam, depending on its span, is exceeded by waves in phase and delayed. The obtained transmission law allows designing a blasting scheme which guarantees safety and prevents the risk of future failures.
Keywords: dynamic modelling, long term instability risks, room and pillar, seismic collapse
Procedia PDF Downloads 138
24960 Multi-Criteria Inventory Classification Process Based on Logical Analysis of Data
Authors: Diana López-Soto, Soumaya Yacout, Francisco Ángel-Bello
Abstract:
Although inventories are considered as stocks of money sitting on shelves, they are needed in order to secure constant and continuous production. Therefore, companies need to have control over the amount of inventory in order to find the balance between excess and shortage of inventory. The classification of items according to certain criteria, such as the price, the usage rate and the lead time before arrival, allows any company to concentrate its investment in inventory according to a certain ranking or priority of items. This makes the decision-making process for inventory management easier and more justifiable. The purpose of this paper is to present a new approach for the classification of new items based on already existing criteria. This approach is called Logical Analysis of Data (LAD). It is used in this paper to assist the process of ABC item classification based on multiple criteria. LAD is a data mining technique based on Boolean theory that is used for pattern recognition. This technique has been tested in medicine, industry, credit risk analysis, and engineering with remarkable results. An application to ABC inventory classification is presented for the first time, and the results are compared with those obtained when using the well-known AHP technique and the ANN technique. The results show that LAD presented very good classification accuracy.
Keywords: ABC multi-criteria inventory classification, inventory management, multi-class LAD model, multi-criteria classification
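A minimal sketch of only the binarization step that precedes LAD pattern generation, turning the numerical inventory criteria into Boolean attributes via cut-points; the items and cut-points are invented, and a full LAD model would derive the cut-points and patterns from the data rather than assume them.

```python
# Minimal sketch (hypothetical items and cut-points): binarizing multi-criteria
# inventory data into Boolean attributes, the input format a LAD model works on.
import pandas as pd

items = pd.DataFrame({
    "item": ["I1", "I2", "I3", "I4"],
    "price": [120.0, 15.5, 64.0, 310.0],
    "usage_rate": [800, 120, 430, 60],
    "lead_time": [4, 12, 7, 20],
})

cutpoints = {"price": [50, 150], "usage_rate": [100, 500], "lead_time": [5, 10]}

binary = {"item": items["item"]}
for crit, cuts in cutpoints.items():
    for c in cuts:
        binary[f"{crit}>{c}"] = (items[crit] > c).astype(int)

print(pd.DataFrame(binary))
```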
Procedia PDF Downloads 880
24959 Comparison of Different Methods of Microorganism's Identification from a Copper Mining in Pará, Brazil
Authors: Louise H. Gracioso, Marcela P.G. Baltazar, Ingrid R. Avanzi, Bruno Karolski, Luciana J. Gimenes, Claudio O. Nascimento, Elen A. Perpetuo
Abstract:
Introduction: Higher copper concentrations exert a selection pressure on organisms such as plants, fungi and bacteria, which allows only the organisms resistant to the contaminated site to survive. This selective pressure keeps only the organisms most resistant to a specific condition and subsequently increases their bioremediation potential. Despite the importance of bacteria for biosphere maintenance, it is estimated that only a small fraction of living microbial species has been described and characterized. Due to developments in molecular biology, tools based on the analysis of 16S ribosomal RNA or other specific genes are creating a new scenario for the characterization and identification of microorganisms in the environment. New methods for the identification of microorganisms have also emerged, such as Biotyper (MALDI/TOF); this mass spectrometry method relies on the recognition of spectroscopic patterns of conserved and characteristic proteins for different microbial species. In view of this, this study aimed to isolate copper-resistant bacteria present in a copper processing area (Sossego Mine, Canaan, PA) and identify them with two different methods: a recent one (mass spectrometry) and a conventional one. The aim was to use them for future bioremediation of this mining area. Material and Methods: Samples were collected at fifteen different sites over five periods of time. Microorganisms were isolated from mining wastes by the culture enrichment technique; this procedure was repeated 4 times. The isolates were inoculated into MJS medium containing different concentrations of copper chloride (1 mM, 2.5 mM, 5 mM, 7.5 mM and 10 mM) and incubated in plates for 72 h at 28 ºC. These isolates were subjected to mass spectrometry identification (Biotyper – MALDI/TOF) and 16S gene sequencing. Results: A total of 105 strains were isolated in this area. Bacterial identification by the mass spectrometry method (MALDI/TOF) achieved 74% agreement with the conventional identification method (16S), 31% were unsuccessful in MALDI-TOF and 2% did not yield an identification sequence for the 16S. These results show that Biotyper can be a very useful tool in the identification of bacteria isolated from environmental samples, since it offers better value for money (cheap and simple sample preparation, and MALDI plates are reusable). Furthermore, this technique is more cost-effective because it saves time and has a high throughput (the mass spectra are compared to the database and it takes less than 2 minutes per sample).
Keywords: copper mining area, bioremediation, microorganisms, identification, MALDI/TOF, RNA 16S
Procedia PDF Downloads 374
24958 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering
Authors: Emiel Caron
Abstract:
Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about scientific references cited by patents. For example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references might be used to connect patent databases with bibliographic databases, e.g. to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references, i.e. they are often ambiguous, unstructured, and incomplete. Moreover, a complete bibliographic reference is stored in only one attribute. Therefore, a computerized cleaning and disambiguation method for large patent databases is developed in this work. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string similarity measures, are used to detect pairs of records that are potential duplicates. Due to the scoring, different rules can be combined to join scientific references, i.e. the rules reinforce each other. The scores are based on expert knowledge and initial method evaluation. After the scoring, pairs of scientific references that are above a certain threshold are clustered by means of the single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. The performance evaluation of the clustering method, on a large golden set with highly cited papers, shows on average a 99% precision and a 95% recall. The method is therefore accurate but careful, i.e. it weights precision over recall. Consequently, separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g. in the case of missing year and journal information for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as the Web of Science or Scopus.
Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics
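A minimal sketch of the score-then-cluster idea described above, not the Patstat implementation: two simple rules score candidate pairs, and pairs above a threshold become edges whose connected components form single-linkage clusters. The rules, weights and threshold are assumptions for illustration.

```python
# Minimal sketch (toy references): rule-based pair scoring followed by
# connected-component clustering of potential duplicate references.
from difflib import SequenceMatcher
import networkx as nx

refs = [
    {"id": 1, "text": "Smith J, Nature, 2001, vol 410, p 50"},
    {"id": 2, "text": "J. Smith (2001) Nature 410:50"},
    {"id": 3, "text": "Brown A, Science, 1999"},
]

def score(a, b):
    s = 0.0
    s += 3.0 * SequenceMatcher(None, a["text"], b["text"]).ratio()   # string-similarity rule
    if ("2001" in a["text"]) and ("2001" in b["text"]):              # toy "same year" rule
        s += 1.0
    return s

G = nx.Graph()
G.add_nodes_from(r["id"] for r in refs)
for i in range(len(refs)):
    for j in range(i + 1, len(refs)):
        if score(refs[i], refs[j]) >= 2.0:        # assumed threshold
            G.add_edge(refs[i]["id"], refs[j]["id"])

print(list(nx.connected_components(G)))           # clusters of duplicate references
```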
Procedia PDF Downloads 193
24957 Developing Sustainable Tourism Practices in Communities Adjacent to Mines: An Exploratory Study in South Africa
Authors: Felicite Ann Fairer-Wessels
Abstract:
There has always been a disparity between mining and tourism, mainly due to the socio-economic and environmental impacts of mines on both the adjacent resident communities and the areas taken up by the mining operation. Although heritage mining tourism has been actively and successfully pursued and developed in the UK, largely Wales, and in Scandinavian countries, the debate as to whether active mining and tourism can have a mutually beneficial relationship remains open. This pilot study explores the relationship between the 'to be developed' future Nokeng Mine and its adjacent community, the rural community of Moloto, and investigates whether sustainable tourism and livelihood activities can potentially be developed with the support of the mine. Concepts such as the social entrepreneur, corporate social responsibility, sustainable development and the triple bottom line are discussed. Within the South African context, as a mineral-rich developing country, the government has a statutory obligation to empower disenfranchised communities through social and labour plans and policies. All South African mines must have a Social and Labour Plan in accordance with the Mineral and Petroleum Resources Development Act, No 28 of 2002. The 'social' component refers to the 'social upliftment' of communities within or adjacent to any mine, whereas the 'labour' component refers to the mine workers sourced from the specific community. A qualitative methodology is followed using the case study as research instrument for the Nokeng Mine and the Moloto community, with interviews and focus group discussions. The target population comprised the Moloto Tribal Council members (8 in-depth interviews), the Moloto community members (17: focus groups), and the Nokeng Mine representatives (4 in-depth interviews). In this pilot study two disparate 'worlds' are potentially linked: on the one hand, the mine as social entrepreneur that is searching for feasible and sustainable ideas; and on the other hand, the community adjacent to the mine, with potentially sustainable tourism entrepreneurs that can tap into the resources of the mine should their ideas prove feasible enough to build their businesses on. Being an exploratory study, the findings are limited but indicate that the possible success of tourism and sustainable livelihood activities lies in the fact that both the Mine and the Community are keen to work together: the mine in terms of obtaining labour and profit, and the community in terms of improved and sustainable social and economic conditions, with both parties realizing the importance of mitigating negative environmental impacts. In conclusion, a relationship of trust between a mine and a community is imperative before a long-term liaison is possible. However, whether tourism is a viable solution for the community to engage in is debatable. The community could initially pursue the sustainable livelihoods approach and focus on life-supporting activities such as building, gardening, etc. that, once established, could feed into possible sustainable tourism activities.
Keywords: community development, mining tourism, sustainability, South Africa
Procedia PDF Downloads 301
24956 Artificial Intelligence as a User of Copyrighted Work: Descriptive Study
Authors: Dominika Collett
Abstract:
AI applications, such as machine learning, require access to vast amounts of data in the training phase, data which can often be the subject of copyright protection. During later usage, the various content with which the application works can be recorded or made available, on the basis of which it produces the resulting output. The EU has recently adopted new legislation to secure machine access to protected works under the DSM Directive; however, the issue of machine use of copyright works is not clearly addressed. Such clarity is nevertheless needed given the increasing importance of AI and its development. Therefore, this paper provides a basic background of the technology used in the development of applications in the field of computer creativity. The second part of the paper then focuses on a legal analysis of machine use of authors' works from the perspective of existing European and Czech legislation. The main results of the paper discuss the potential collision of existing legislation with regard to machine use of works, with a special focus on exceptions and limitations. The legal regulation of machine use of copyright works will impact the development of AI technology.
Keywords: copyright, artificial intelligence, legal use, infringement, Czech law, EU law, text and data mining
Procedia PDF Downloads 122
24955 Educational Leadership and Artificial Intelligence
Authors: Sultan Ghaleb Aldaihani
Abstract:
- The environment in which educational leadership takes place is becoming increasingly complex due to factors like globalization and rapid technological change.
- This is creating a "leadership gap" where the complexity of the environment outpaces the ability of leaders to effectively respond.
- Educational leadership involves guiding teachers and the broader school system towards improved student learning and achievement.
2. Implications of Artificial Intelligence (AI) in Educational Leadership:
- AI has great potential to enhance education, such as through intelligent tutoring systems and automating routine tasks to free up teachers.
- AI can also have significant implications for educational leadership by providing better information and data-driven decision-making capabilities.
- Computer-adaptive testing can provide detailed, individualized data on student learning that leaders can use for instructional decisions and accountability.
3. Enhancing Decision-Making Processes:
- Statistical models and data mining techniques can help identify at-risk students earlier, allowing for targeted interventions.
- Probability-based models can diagnose students likely to drop out, enabling proactive support.
- These data-driven approaches can make resource allocation and decision-making more effective.
4. Improving Efficiency and Productivity:
- AI systems can automate tasks and change processes to improve the efficiency of educational leadership and administration.
- Integrating AI can free up leaders to focus more on their role's human, interactive elements.
Keywords: Education, Leadership, Technology, Artificial Intelligence
Procedia PDF Downloads 40
24954 Assessment of Chromium Concentration and Human Health Risk in the Steelpoort River Sub-Catchment of the Olifants River Basin, South Africa
Authors: Abraham Addo-Bediako
Abstract:
Many freshwater ecosystems are facing immense pressure from anthropogenic activities such as agriculture, industry and mining. Trace metal pollution in freshwater ecosystems has become an issue of public health concern due to its toxicity and persistence in the environment. Trace elements pose a serious risk not only to the environment and aquatic biota but also to humans. Chromium is one such trace element, and its pollution of surface waters and groundwaters represents a serious environmental problem. In South Africa, agricultural, mining, industrial and domestic wastes are the main contributors to chromium discharge into rivers. The common forms of chromium are chromium (III) and chromium (VI). The latter is the most toxic because it can cause damage to human health. The aim of the study was to assess the contamination of chromium in the water and sediments of two rivers in the Steelpoort River sub-catchment of the Olifants River Basin, South Africa, and the associated human health risk. The concentration of Cr was analyzed using inductively coupled plasma–optical emission spectrometry (ICP-OES). The concentration of the metal was found to exceed the threshold limit, mainly in areas of high human activity. The hazard quotient through ingestion exposure did not exceed the threshold limit of 1 for adults and children, and the computed cancer risk for adults and children did not exceed the threshold limit of 10⁻⁴. Thus, there is no potential health risk from chromium through ingestion of drinking water for now. However, with increasing human activities, especially mining, the concentration could increase and become harmful to humans who depend on the rivers for drinking water. It is recommended that proper management strategies be adopted to minimize the impact of chromium on the rivers, and water from the rivers should be properly treated before domestic use.
Keywords: land use, health risk, metal pollution, water quality
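A minimal sketch of the standard ingestion-based hazard quotient and cancer risk calculation the abstract refers to; every parameter value here (concentration, intake rate, reference dose, slope factor) is an illustrative assumption, not a value from the study.

```python
# Minimal sketch (illustrative parameters only): USEPA-style intake equations
# for hazard quotient and cancer risk from chromium in drinking water.
def chronic_daily_intake(c_water, ir, ef, ed, bw, at):
    """CDI (mg/kg/day) = (C * IR * EF * ED) / (BW * AT)."""
    return (c_water * ir * ef * ed) / (bw * at)

# Assumed adult exposure parameters (for illustration only)
cdi = chronic_daily_intake(c_water=0.005,  # mg/L assumed Cr concentration
                           ir=2.0,         # L/day water intake
                           ef=365,         # days/year exposure frequency
                           ed=30,          # years exposure duration
                           bw=70,          # kg body weight
                           at=30 * 365)    # days averaging time

rfd = 0.003            # mg/kg/day, assumed oral reference dose for Cr(VI)
sf = 0.5               # (mg/kg/day)^-1, assumed oral slope factor

print("HQ =", round(cdi / rfd, 3))          # > 1 would indicate non-cancer concern
print("cancer risk =", f"{cdi * sf:.1e}")   # compared against the 1e-4 threshold
```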
Procedia PDF Downloads 86
24953 Three-Stage Mining Metals Supply Chain Coordination and Product Quality Improvement with Revenue Sharing Contract
Authors: Hamed Homaei, Iraj Mahdavi, Ali Tajdin
Abstract:
One of the main concerns of miners is to increase the quality level of their products, because the price of mined metals depends on their quality level; however, increasing the quality level of these products has different costs at different levels of the supply chain. These costs usually increase after the extractor level. This paper studies the coordination issue of a decentralized three-level supply chain with one supplier (extractor), one mineral processor and one manufacturer, in which the cost of increasing the product quality level is higher at the processor level than at the supplier level, and higher at the manufacturer level than at the processor level. We identify the optimal product quality level for each supply chain member by designing a revenue sharing contract. Finally, numerical examples show that the designed contract not only increases the final product quality level but also provides a win-win condition for all supply chain members and increases the whole supply chain profit.
Keywords: three-stage supply chain, product quality improvement, channel coordination, revenue sharing
Procedia PDF Downloads 182
24952 Evaluation of the Urban Regeneration Project: Land Use Transformation and SNS Big Data Analysis
Authors: Ju-Young Kim, Tae-Heon Moon, Jung-Hun Cho
Abstract:
Urban regeneration projects have been actively promoted in Korea. In particular, Jeonju Hanok Village is evaluated as one of the representative cases in terms of utilizing local cultural heritage sites in an urban regeneration project. Recently, however, there has been growing concern in this area due to 'gentrification', caused by excessive commercialization and surging numbers of tourists. This trend has been changing land and building use and has resulted in the loss of the identity of the region. In this regard, this study analyzed the land use transformation between 2010 and 2016 to identify the commercialization trend in Jeonju Hanok Village. In addition, it conducted SNS big data analysis on Jeonju Hanok Village from February 14th, 2016 to March 31st, 2016 to identify visitors' awareness of the village. The study results demonstrate that rapid commercialization was underway, contrary to the initial intention, so planners and officials in the city government should reconsider the project direction and rebuild deliberate management strategies. This study is meaningful in that it analyzed the land use transformation and SNS big data to identify the current situation in an urban regeneration area. Furthermore, it is expected that the study results will contribute to the vitalization of the regeneration area.
Keywords: land use, SNS, text mining, urban regeneration
Procedia PDF Downloads 291
24951 A Hybrid Feature Selection and Deep Learning Algorithm for Cancer Disease Classification
Authors: Niousha Bagheri Khulenjani, Mohammad Saniee Abadeh
Abstract:
Learning from very big datasets is a significant problem for most present data mining and machine learning algorithms. MicroRNA (miRNA) data is one of the important big genomic and non-coding datasets representing genome sequences. In this paper, a hybrid method for the classification of miRNA data is proposed. Due to the variety of cancers and the high number of genes, analyzing miRNA datasets has been a challenging problem for researchers. The number of features is high relative to the number of samples, and the data suffer from being imbalanced. A feature selection method is used to select the features with more ability to distinguish classes and to eliminate obscuring features. Afterward, a Convolutional Neural Network (CNN) classifier for the classification of cancer types is utilized, which employs a Genetic Algorithm to tune the hyper-parameters of the CNN. In order to make the CNN classification process faster, a Graphics Processing Unit (GPU) is recommended for performing the mathematical calculations in parallel. The proposed method is tested on a real-world dataset with 8,129 patients, 29 different types of tumors, and 1,046 miRNA biomarkers, taken from The Cancer Genome Atlas (TCGA) database.
Keywords: cancer classification, feature selection, deep learning, genetic algorithm
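A minimal sketch of the genetic-algorithm loop over CNN hyper-parameters described above; it is a stub, not the paper's model. The search space is an assumption, and the fitness function is a placeholder that would normally train the CNN on the selected miRNA features and return its validation accuracy.

```python
# Minimal sketch (stubbed fitness): a tiny genetic algorithm over CNN
# hyper-parameter choices.
import random
random.seed(0)

SEARCH_SPACE = {"filters": [16, 32, 64], "kernel": [3, 5, 7],
                "dropout": [0.2, 0.4], "lr": [1e-2, 1e-3, 1e-4]}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(ind):
    # Placeholder: train and evaluate the CNN here; a synthetic score stands in.
    return random.random()

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind):
    k = random.choice(list(SEARCH_SPACE))
    ind[k] = random.choice(SEARCH_SPACE[k])
    return ind

population = [random_individual() for _ in range(8)]
for generation in range(5):
    ranked = sorted(population, key=fitness, reverse=True)   # evaluate and rank
    parents = ranked[:4]                                     # selection (elitism)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(4)]
    population = parents + children

print("best configuration found:", max(population, key=fitness))
```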
Procedia PDF Downloads 110
24950 Automatic Lexicon Generation for Domain Specific Dataset for Mining Public Opinion on China Pakistan Economic Corridor
Authors: Tayyaba Azim, Bibi Amina
Abstract:
The increasing popularity of opinion mining, together with the rapid growth in the availability of social networks, has attracted a lot of opportunities for research in the various domains of Sentiment Analysis and Natural Language Processing (NLP) using Artificial Intelligence approaches. The latest trend allows the public to actively use the internet for analyzing an individual's opinion and exploring the effectiveness of published facts. The main theme of this research is to assess public opinion on one of the most crucial and extensively discussed development projects, the China Pakistan Economic Corridor (CPEC), considered a game changer due to its promise of bringing economic prosperity to the region. So far, to the best of our knowledge, the theme of CPEC has not been analyzed for sentiment determination through a machine learning approach. This research aims to demonstrate the use of ML approaches to automatically analyze public sentiment in Twitter tweets, particularly about CPEC. A Support Vector Machine (SVM) is used for the classification task, classifying tweets into positive, negative and neutral classes. Word2vec and TF-IDF features are used with the SVM model, and a comparison of the model trained on manually labelled tweets and on an automatically generated lexicon is performed. The contributions of this work are: the development of a sentiment analysis system for public tweets on the subject of CPEC, the automatic generation of a lexicon of public tweets on CPEC, and the identification of different themes among tweets with sentiments assigned to each theme. It is worth noting that the applications of web mining that empower e-democracy by improving political transparency and public participation in decision making via social media have not yet been explored and practised in the Pakistan region with respect to CPEC.
Keywords: machine learning, natural language processing, sentiment analysis, support vector machine, Word2vec
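A minimal sketch of the TF-IDF + SVM feature setting mentioned above, using a handful of invented tweets rather than the CPEC corpus.

```python
# Minimal sketch (toy data): TF-IDF features with a linear SVM for
# three-class tweet sentiment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = ["CPEC will bring jobs and growth", "another delayed project, waste of money",
          "construction update released today", "great progress on the corridor",
          "costs keep rising, very disappointing", "meeting scheduled next week"]
labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(tweets, labels)
print(model.predict(["huge boost for the local economy"]))
```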
Procedia PDF Downloads 148
24949 Patterns in Fish Diversity and Abundance of an Abandoned Gold Mine Reservoirs
Authors: O. E. Obayemi, M. A. Ayoade, O. O. Komolafe
Abstract:
A fish survey was carried out over an annual cycle covering both the rainy and dry seasons, using cast nets, gill nets and traps at two different reservoirs. The objective was to examine the fish assemblages of the reservoirs and provide additional information on them. The fish species in the reservoirs comprised twelve species from six families. The results of the study also showed that five species of fish were caught in reservoir five, while ten fish species were captured in reservoir six. Species such as Malapterurus electricus, Ctenopoma kingsleyae, Mormyrus rume, Parachanna obscura, Sarotherodon galilaeus, Tilapia mariae, C. guntheri, Clarias macromystax, Coptodon zilii and Clarias gariepinus were caught during the sampling period. There was a significant difference (p=0.014, t = 1.711) in the abundance of fish species between the two reservoirs. Seasonally, reservoirs five (p=0.221, t = 1.859) and six (p=0.453, t = 1.734) showed no significant difference in their fish populations. Also, despite being impacted by gold mining, the reservoirs recorded diversity indices that were high when compared to less disturbed waterbodies. The study concluded that the environments recorded low fish abundance, which suggests the influence of mining on the abundance and diversity of fish species.
Keywords: Igun, fish, Shannon-Wiener Index, Simpson index, Pielou index
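A minimal sketch of the three diversity indices listed in the keywords, computed from hypothetical catch counts rather than the survey data.

```python
# Minimal sketch (hypothetical counts): Shannon-Wiener, Simpson, and Pielou
# indices from species abundances.
import math

catch = {"Sarotherodon galilaeus": 34, "Coptodon zilii": 21,
         "Clarias gariepinus": 12, "Tilapia mariae": 8, "Mormyrus rume": 3}

n_total = sum(catch.values())
proportions = [n / n_total for n in catch.values()]

shannon = -sum(p * math.log(p) for p in proportions)   # H' = -sum(p_i ln p_i)
simpson = 1 - sum(p * p for p in proportions)          # Simpson's diversity, 1 - D
pielou = shannon / math.log(len(catch))                # J' = H' / ln(S)

print(f"H' = {shannon:.3f}, 1-D = {simpson:.3f}, J' = {pielou:.3f}")
```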
Procedia PDF Downloads 105
24948 The Structure and Function Investigation and Analysis of the Automatic Spin Regulator (ASR) in the Powertrain System of Construction and Mining Machines with the Focus on Dump Trucks
Authors: Amir Mirzaei
Abstract:
The powertrain system is one of the most basic and essential components of a machine. The occurrence of motion is practically impossible without the presence of this system. When power is generated by the engine, it is transmitted by the powertrain system to the wheels, which are the last parts of the system. The powertrain system has different components according to the type of use and the design. When the force generated by the engine reaches the wheels, the amount of frictional force between the tire and the ground determines the amount of traction and the amount of slip. On various surfaces, such as icy, muddy, and snow-covered ground, the friction coefficient between the tire and the ground decreases dramatically, which in turn increases the amount of force lost, and the vehicle traction decreases drastically. This condition is caused by the phenomenon of slipping, which, in addition to wasting the energy produced, causes premature wear of the driving tires. It also causes the temperature of the transmission oil to rise excessively, which in turn degrades and contaminates the oil and also reduces the useful life of the clutch disks and plates inside the transmission. This issue is much more important in road construction and mining machinery than in passenger vehicles, and overcoming it is always one of the most significant challenges in the design discussion. One of the methods used to overcome it is the automatic spin regulator system, abbreviated as ASR. The structure and function of this method, which has addressed one of the biggest challenges of the powertrain system in the field of construction and mining machinery, are examined in this research.
Keywords: automatic spin regulator, ASR, methods of reducing slipping, methods of preventing the reduction of the useful life of clutch disks and plates, methods of preventing the premature contamination of transmission oil, methods of preventing the reduction of the useful life of tires
Procedia PDF Downloads 78
24947 Geochemical Baseline and Origin of Trace Elements in Soils and Sediments around Selibe-Phikwe Cu-Ni Mining Town, Botswana
Authors: Fiona S. Motswaiso, Kengo Nakamura, Takeshi Komai
Abstract:
Heavy metals may occur naturally in rocks and soils, but elevated quantities of them are being gradually released into the environment by anthropogenic activities such as mining. In order to address issues of heavy metal water and soil pollution, a distinction needs to be made between natural and anthropogenic anomalies. The current study aims at characterizing the spatial distribution of trace elements, evaluating site-specific geochemical background concentrations of trace elements in the mine soils examined, and discriminating between lithogenic and anthropogenic sources of enrichment around the copper-nickel mining town of Selibe-Phikwe, Botswana. A total of 20 soil samples, 11 river sediment samples, and 9 river water samples were collected from an area of 625 m² within the precincts of the mine and the smelter. The concentrations of metals (Cu, Ni, Pb, Zn, Cr, Mn, As, and Co) were determined using ICP-MS after digestion with aqua regia. Major elements were also determined using ED-XRF. Water pH and EC were measured on site and recorded, while soil pH and EC were also determined in the laboratory after performing water elution tests. The highest Cu and Ni concentrations in soil are 593 mg/kg and 453 mg/kg respectively, which is 3 times higher than the crustal composition values and 2 times higher than the South African minimum allowable levels of heavy metals in soils. The level of copper contamination was higher than that of nickel and the other contaminants. Water pH levels ranged from basic (9) to very acidic (3) in areas closer to the mine/smelter. There is high variation in heavy metal concentrations, e.g. Cu, suggesting that some sites reflect regional natural background concentrations while others reflect anthropogenic sources.
Keywords: contamination, geochemical baseline, heavy metals, soils
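A minimal sketch of two indices commonly used to separate lithogenic from anthropogenic enrichment, as discussed above: the enrichment factor with an Fe reference and the geo-accumulation index. The background values below are assumed crustal averages, not the study's site-specific baseline.

```python
# Minimal sketch (illustrative numbers): enrichment factor and geo-accumulation
# index for copper, with iron as the reference element.
import math

cu_sample, fe_sample = 593.0, 42000.0            # mg/kg in a soil sample (example)
cu_background, fe_background = 25.0, 46700.0     # assumed crustal background values

# EF = (C_metal / C_ref)_sample / (C_metal / C_ref)_background
ef = (cu_sample / fe_sample) / (cu_background / fe_background)

# Igeo = log2( C_n / (1.5 * B_n) )
igeo = math.log2(cu_sample / (1.5 * cu_background))

print(f"EF = {ef:.1f}, Igeo = {igeo:.2f}")
```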
Procedia PDF Downloads 158
24946 Development of Innovative Islamic Web Applications
Authors: Farrukh Shahzad
Abstract:
The rich Islamic resources related to religious texts, Islamic sciences, and history are widely available in print and in electronic format online. However, most of these works are only available in the Arabic language. In this research, an attempt is made to utilize these resources to create interactive web applications in Arabic, English and other languages. The system utilizes pattern recognition, knowledge management, data mining, information retrieval and management, indexing, storage and data-analysis techniques to parse, store, convert and manage the information from authentic Arabic resources. These interactive web apps provide smart multi-lingual search, tree-based search, and on-demand information matching and linking. In this paper, we provide details of the application architecture, design, implementation and technologies employed. We also present a summary of the web applications already developed and include some screenshots from the corresponding websites. These web applications provide innovative online learning systems (eLearning and computer-based education).
Keywords: Islamic resources, Muslim scholars, hadith, narrators, history, fiqh
Procedia PDF Downloads 281
24945 Risk Assessment of Trace Metals in the Soil Surface of an Abandoned Mine, El-Abed Northwestern Algeria
Authors: Farida Mellah, Abdelhak Boutaleb, Bachir Henni, Dalila Berdous, Abdelhamid Mellah
Abstract:
Context/Purpose: The site of one of the largest lead and zinc mining operations in northwestern Algeria for more than thirty years, El Abed is now an abandoned mine that has been inactive since 2004, leaving large amounts of accumulated mining waste exposed to wind, erosion and rain, and located near agricultural lands. Materials & Methods: This study aims to verify the concentrations and sources of heavy metals in randomly taken surface soil samples. Chemical analyses were performed using an iCAP 7000 Series ICP optical emission spectrometer, and a set of environmental quality indicators was applied by calculating the enrichment factor (using iron and aluminium references) and the geo-accumulation index, together with a geographic information system (GIS) based on the spatial distribution. Results: The results indicated that the average metal concentrations were As = 30.82, Pb = 1219.27, Zn = 2855.94 and Cu = 5.3 mg/kg. Based on these results, all metals except Cu exceeded the geochemical background values (GBV) of the Earth's crust. Environmental quality indicators were calculated based on the concentrations of trace metals such as lead, arsenic, zinc, copper, iron and aluminium. Interpretation: This study investigated the concentrations and sources of trace metals; using quality indicators and statistical methods, lead, zinc and arsenic were attributed to anthropogenic sources, while copper was attributed to a natural source. Based on the GIS spatial analysis, several hot spots were identified in the El Abed region. Conclusion: These results could help in the development of future treatment strategies aimed primarily at eliminating materials from mining waste.
Keywords: soil contamination, trace metals, geochemical indices, El Abed mine, Algeria
Procedia PDF Downloads 69
24944 Applications of Big Data in Education
Authors: Faisal Kalota
Abstract:
Big Data and analytics have gained huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA), which may allow academic institutions to better understand learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in education. Additionally, it discusses some of the concerns related to Big Data and current research trends. While Big Data can provide big benefits, it is important that institutions understand their own needs, infrastructure, resources, and limitations before jumping on the Big Data bandwagon.
Keywords: big data, learning analytics, analytics, big data in education, Hadoop
Procedia PDF Downloads 423
24943 An Intelligent Search and Retrieval System for Mining Clinical Data Repositories Based on Computational Imaging Markers and Genomic Expression Signatures for Investigative Research and Decision Support
Authors: David J. Foran, Nhan Do, Samuel Ajjarapu, Wenjin Chen, Tahsin Kurc, Joel H. Saltz
Abstract:
The large-scale data and computational requirements of investigators throughout the clinical and research communities demand an informatics infrastructure that supports both existing and new investigative and translational projects in a robust, secure environment. In some subspecialties of medicine and research, the capacity to generate data has outpaced the methods and technology used to aggregate, organize, access, and reliably retrieve this information. Leading health care centers now recognize the utility of establishing an enterprise-wide, clinical data warehouse. The primary benefits that can be realized through such efforts include cost savings, efficient tracking of outcomes, advanced clinical decision support, improved prognostic accuracy, and more reliable clinical trials matching. The overarching objective of the work presented here is the development and implementation of a flexible Intelligent Retrieval and Interrogation System (IRIS) that exploits the combined use of computational imaging, genomics, and data-mining capabilities to facilitate clinical assessments and translational research in oncology. The proposed System includes a multi-modal, Clinical & Research Data Warehouse (CRDW) that is tightly integrated with a suite of computational and machine-learning tools to provide insight into the underlying tumor characteristics that are not apparent by human inspection alone. A key distinguishing feature of the System is a configurable Extract, Transform and Load (ETL) interface that enables it to adapt to different clinical and research data environments. This project is motivated by the growing emphasis on establishing Learning Health Systems in which cyclical hypothesis generation and evidence evaluation become integral to improving the quality of patient care. To facilitate iterative prototyping and optimization of the algorithms and workflows for the System, the team has already implemented a fully functional Warehouse that can reliably aggregate information originating from multiple data sources including EHRs, Clinical Trial Management Systems, Tumor Registries, Biospecimen Repositories, Radiology PACS systems, Digital Pathology archives, Unstructured Clinical Documents, and Next Generation Sequencing services. The System enables physicians to systematically mine and review the molecular, genomic, image-based, and correlated clinical information about patient tumors, individually or as part of large cohorts, to identify patterns that may influence treatment decisions and outcomes. The CRDW core system has facilitated peer-reviewed publications and funded projects, including an NIH-sponsored collaboration to enhance the cancer registries in Georgia, Kentucky, New Jersey, and New York with machine-learning based classifications and quantitative pathomics feature sets. The CRDW has also resulted in a collaboration with the Massachusetts Veterans Epidemiology Research and Information Center (MAVERIC) at the U.S. Department of Veterans Affairs to develop algorithms and workflows to automate the analysis of lung adenocarcinoma. Those studies showed that combining computational nuclear signatures with traditional WHO criteria through the use of deep convolutional neural networks (CNNs) led to improved discrimination among tumor growth patterns. The team has also leveraged the Warehouse to support studies to investigate the potential of utilizing a combination of genomic and computational imaging signatures to characterize prostate cancer. The results of those studies show that integrating image biomarkers with genomic pathway scores is more strongly correlated with disease recurrence than using standard clinical markers.
Keywords: clinical data warehouse, decision support, data-mining, intelligent databases, machine-learning.
Procedia PDF Downloads 126