Search results for: mining big data

24320 Optimized Approach for Secure Data Sharing in Distributed Database

Authors: Ahmed Mateen, Zhu Qingsheng, Ahmad Bilal

Abstract:

In the current age of technology, information is the most precious asset of a company. Today, companies hold large amounts of data, and as the data grow larger, access to particular pieces of information becomes slower day by day. Faster processing of data to shape it into information is the biggest issue. The major problems in distributed databases are the efficiency of data distribution and its response time; the security of data distribution is also a significant issue. To address these problems, we propose a strategy that maximizes the efficiency of data distribution and improves its response time. The technique gives better results for secure data distribution from multiple heterogeneous sources and enables companies to share data securely, efficiently, and quickly.

Keywords: ER-schema, electronic record, P2P framework, API, query formulation

Procedia PDF Downloads 322
24319 Optimum Drilling States in Down-the-Hole Percussive Drilling: An Experimental Investigation

Authors: Joao Victor Borges Dos Santos, Thomas Richard, Yevhen Kovalyshen

Abstract:

Down-the-hole (DTH) percussive drilling is an excavation method that is widely used in the mining industry due to its high efficiency in fragmenting hard rock formations. A DTH hammer system consists of a fluid-driven (air or water) piston and a drill bit; the reciprocating movement of the piston transmits its kinetic energy to the drill bit by means of stress waves that propagate through the drill bit towards the rock formation. In the percussive drilling literature, the existence of an optimum drilling state (Sweet Spot) is reported in some laboratory and field experimental studies: an optimum rate of penetration is achieved for a specific range of axial thrust (or weight-on-bit), beyond which the rate of penetration decreases. Several authors advance different explanations as possible root causes of the Sweet Spot, but a universal explanation or consensus does not yet exist. The experimental investigation in this work was initiated with drilling experiments conducted at a mining site. A full-scale drilling rig (equipped with a DTH hammer system) was instrumented with high-precision sensors sampled at a very high rate (kHz). Data were collected while two boreholes were being excavated, and an in-depth analysis of the recorded data confirmed that optimum performance can be achieved for specific ranges of input thrust (weight-on-bit). The high sampling rate made it possible to identify the bit penetration at each single impact (of the piston on the drill bit) as well as the impact frequency. These measurements provide a direct way to identify when the hammer does not fire and drilling occurs without percussion, with the bit advancing the borehole by shearing the rock. The second stage of the experimental investigation was conducted in a laboratory environment with custom-built equipment dubbed Woody. Woody allows the drilling of shallow holes, a few centimetres deep, by successive discrete impacts from a piston. After each individual impact, the bit angular position is incremented by a fixed amount, the piston is moved back to its initial position at the top of the barrel, and the air pressure and thrust are set back to their pre-set values. The goal is to explore whether the observed optimum drilling state stems from the interaction between the drill bit and the rock (during impact) or is governed by the overall system dynamics (between impacts). The experiments were conducted on samples of Calca Red, with a drill bit of 74 millimetres (outside diameter) and with weight-on-bit ranging from 0.3 kN to 3.7 kN. Results show that, under the same piston impact energy and a constant angular displacement of 15 degrees between impacts, the average drill bit rate of penetration is independent of the weight-on-bit, which suggests that the Sweet Spot is not caused by intrinsic properties of the bit-rock interface.
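
As an illustration of how per-impact quantities might be extracted from such high-rate recordings, the following is a minimal Python sketch (not the instrumentation pipeline used in the study); the signal names, the peak threshold, and the minimum impact gap are assumptions for the example.

```python
import numpy as np
from scipy.signal import find_peaks

def per_impact_metrics(time_s, accel, bit_depth_m, min_impact_gap_s=0.01):
    """Identify piston impacts from an acceleration channel and derive
    per-impact bit penetration and impact frequency (illustrative only)."""
    # Impacts are assumed to appear as large acceleration peaks.
    threshold = 5.0 * np.std(accel)
    gap = int(min_impact_gap_s / np.median(np.diff(time_s)))
    peaks, _ = find_peaks(np.abs(accel), height=threshold, distance=max(gap, 1))

    impact_times = time_s[peaks]
    impact_freq = 1.0 / np.diff(impact_times)   # Hz, between successive impacts
    penetration = np.diff(bit_depth_m[peaks])   # metres advanced per impact
    return impact_times, impact_freq, penetration

# Intervals with no detected peaks (hammer not firing) would show near-zero
# impact frequency while bit depth still increases, i.e. drilling by shearing.
```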

Keywords: optimum drilling state, experimental investigation, field experiments, laboratory experiments, down-the-hole percussive drilling

Procedia PDF Downloads 81
24318 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling

Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal

Abstract:

Datasets or collections are becoming important assets by themselves, and they can now be accepted as a primary intellectual output of research. The quality and usage of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of these data is a notoriously tedious, time-consuming process that also requires domain experts, who are mostly unavailable. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, some features have been selected, and preliminary exploratory data analysis has been performed to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors, and Back-Propagation Multi-Label Learning). The techniques have been compared using well-known measures (Accuracy, Hamming Loss, Micro-F, and Macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods examined, while the Classifier Chains method showed the worst performance. To recap, the benchmark achieved promising results, utilizing the preliminary exploratory data analysis performed on the collection, proposing new directions for research, and providing a baseline for future studies.
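
As a hedged illustration of this benchmarking style, the sketch below uses scikit-learn's built-in Binary Relevance (one-vs-rest) and Classifier Chains implementations on a synthetic multi-label dataset standing in for the vectorised collection; the study itself evaluated nine methods on the real PEO-to-outcome data, so the models, features, and label count here are assumptions.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, hamming_loss, f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier   # Binary Relevance
from sklearn.multioutput import ClassifierChain      # Classifier Chains

# Synthetic stand-in for the PEO-to-outcome collection (text would first be
# vectorised, e.g. with TF-IDF, into the feature matrix X).
X, Y = make_multilabel_classification(n_samples=500, n_features=60,
                                      n_classes=11, n_labels=3, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

models = {
    "Binary Relevance": OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    "Classifier Chains": ClassifierChain(LogisticRegression(max_iter=1000), random_state=0),
}

for name, model in models.items():
    Y_pred = model.fit(X_tr, Y_tr).predict(X_te)
    print(name,
          "acc=%.3f" % accuracy_score(Y_te, Y_pred),
          "hamming=%.3f" % hamming_loss(Y_te, Y_pred),
          "micro-F=%.3f" % f1_score(Y_te, Y_pred, average="micro"),
          "macro-F=%.3f" % f1_score(Y_te, Y_pred, average="macro"))
```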

Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-class classification, text mining

Procedia PDF Downloads 162
24317 Predicting Success and Failure in Drug Development Using Text Analysis

Authors: Zhi Hao Chow, Cian Mulligan, Jack Walsh, Antonio Garzon Vico, Dimitar Krastev

Abstract:

Drug development is resource-intensive, time-consuming, and increasingly expensive with each developmental stage. The success rates of drug development are also relatively low, and the resources committed are wasted with each failed candidate. As such, a reliable method of predicting the success of drug development is in demand. The hypothesis was that some failed drug candidates are pushed through developmental pipelines on the basis of false confidence and may possess common linguistic features identifiable through sentiment analysis. Here, the concept of using text analysis to discover such features in research publications and investor reports as predictors of success was explored. RStudio was used to perform text mining and lexicon-based sentiment analysis to identify affective phrases and determine their frequency in each document, and SPSS was then used to determine the relationship between the defined variables and the accuracy of predicting outcomes. A total of 161 publications were collected and categorised into 4 groups: (i) cancer treatment, (ii) neurodegenerative disease treatment, (iii) vaccines, and (iv) others (all other drugs that do not fit into the first 3 categories). Text analysis was then performed on each document in R, within each drug category, using 2 separate sentiment lexicons (BING and AFINN) to determine the frequency of positive or negative phrases in each document. Relative positivity and negativity values were then calculated by dividing the frequency of affective phrases by the word count of each document. Regression analysis was then performed in SPSS on each dataset (the values obtained using the BING or AFINN lexicon during text analysis), using a random selection of 61 documents to construct a model. The remaining documents were then used to determine the predictive power of the models. The model constructed from BING predicts the outcome of drug performance in clinical trials with an overall accuracy of 65.3%. The AFINN model had a lower accuracy at predicting outcomes than the BING model, at 62.5%, and was not effective at predicting the failure of drugs in clinical trials. Overall, the study did not show significant efficacy of the models at predicting the outcomes of drugs in development, and many improvements may need to be made in later iterations of the model to sufficiently increase the accuracy.
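
The following is a minimal Python sketch of the same idea (the study itself used R with the BING and AFINN lexicons and regression in SPSS); the toy lexicon entries, documents, and outcome labels are placeholders, not the study's data.

```python
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy lexicon standing in for BING/AFINN entries (placeholders, not the real lists).
POSITIVE = {"significant", "improved", "efficacy", "promising", "robust"}
NEGATIVE = {"failed", "adverse", "toxicity", "terminated", "inconclusive"}

def relative_sentiment(text):
    """Relative positivity/negativity = lexicon-phrase frequency / word count."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    pos = sum(w in POSITIVE for w in words) / n
    neg = sum(w in NEGATIVE for w in words) / n
    return pos, neg

docs = ["The candidate showed promising and significant efficacy in phase II.",
        "The trial was terminated after adverse toxicity findings.",
        "Robust and improved response rates were reported by investigators.",
        "Results were inconclusive and the programme later failed."]
outcomes = np.array([1, 0, 1, 0])            # 1 = success, 0 = failure (placeholder labels)

X = np.array([relative_sentiment(d) for d in docs])
model = LogisticRegression().fit(X, outcomes)  # analogue of the regression step done in SPSS
print(model.predict(X))
```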

Keywords: data analysis, drug development, sentiment analysis, text-mining

Procedia PDF Downloads 145
24316 Models of State Organization and Influence over Collective Identity and Nationalism in Spain

Authors: Victor Manuel Muñoz-Sanchez, Antonio Manuel Perez-Flores

Abstract:

The main objective of this paper is to establish the relationship between models of state organization and the various types of collective identity expressed by Spaniards. The question of nationalism and identity ascription in Spain has always been a topic of special importance due to the presence of territories whose populations express very different nationalist sentiments from the rest of Spain. The current sovereignty challenge of Catalonia to the central government exemplifies the importance of the subject matter. In order to analyze this process of interrelation, we mine secondary data using the multiple correspondence analysis (MCA) technique. As the main result, a typology of four types of expression of collective identity based on models of state organization is presented, each connected with party positions on this issue.
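
As a rough illustration, the sketch below computes MCA row coordinates by applying correspondence analysis to the one-hot indicator matrix of a tiny hypothetical survey; the column names, categories, and responses are invented for the example, and the implementation is a bare-bones approximation of what a statistics package would produce.

```python
import numpy as np
import pandas as pd

def mca_row_coordinates(df, n_components=2):
    """Bare-bones MCA: correspondence analysis of the one-hot indicator matrix."""
    Z = pd.get_dummies(df).to_numpy(dtype=float)          # indicator (disjunctive) matrix
    P = Z / Z.sum()
    r = P.sum(axis=1)                                     # row masses
    c = P.sum(axis=0)                                     # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))    # standardised residuals
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    return (U[:, :n_components] * s[:n_components]) / np.sqrt(r)[:, None]

# Hypothetical survey categories: preferred state model and identity ascription.
survey = pd.DataFrame({
    "state_model": ["centralist", "federal", "autonomic", "federal", "centralist"],
    "identity":    ["only Spanish", "more regional", "dual", "more regional", "only Spanish"],
})
print(mca_row_coordinates(survey))
```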

Keywords: models of organization of the state, nationalism, collective identity, Spain, political parties

Procedia PDF Downloads 430
24315 A Study on the Nostalgia Contents Analysis of Hometown Alumni in the Online Community

Authors: Heejin Yun, Juanjuan Zang

Abstract:

This study aims to analyze the text terms posted on an online community of people from the same hometown and to understand the topics and trends of nostalgia expressed online. For this purpose, the study collected 144 posts written by natives of Yeongjong Island, Incheon, South Korea, on an online community and analyzed the association relations among their terms. The results show, first, that defining nostalgia simply as ‘a mind longing for hometown’ is not a sufficient explanation of the online community texts. Second, texts composed online tend toward abstraction rather than individuals’ personal stories. Through literature research and association rule mining concerning nostalgia, this study identified the relationships with the most critical and closest mutual associations among the terms that constitute nostalgia. The result of this study summarises the core terms and emotions related to nostalgia.
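
A minimal sketch of association rule mining in this spirit is shown below, assuming the mlxtend implementation of Apriori; the term columns and posts are placeholders, not the collected Yeongjong Island texts.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row: presence/absence of nostalgia-related terms in one online post (placeholder terms).
posts = pd.DataFrame([
    {"hometown": 1, "sea": 1, "childhood": 1, "school": 0, "ferry": 1},
    {"hometown": 1, "sea": 0, "childhood": 1, "school": 1, "ferry": 0},
    {"hometown": 1, "sea": 1, "childhood": 0, "school": 0, "ferry": 1},
    {"hometown": 0, "sea": 1, "childhood": 1, "school": 1, "ferry": 0},
], dtype=bool)

frequent = apriori(posts, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```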

Keywords: nostalgia, cultural memory, data mining, association rule

Procedia PDF Downloads 222
24314 GIS-Based Spatial Distribution and Evaluation of Selected Heavy Metals Contamination in Topsoil around Ecton Mining Area, Derbyshire, UK

Authors: Zahid O. Alibrahim, Craig D. Williams, Clive L. Roberts

Abstract:

The study area (Ecton mining area) is located in the southern part of the Peak District in Derbyshire, England, and is bounded by the River Manifold to the west. The area has been mined for a long period; as a result, large amounts of potentially toxic metals were released into the surroundings and are most likely a significant source of heavy metal contamination of the local soil, water, and vegetation. In order to appraise the potential heavy metal pollution in this area, 37 topsoil samples (5-20 cm depth) were collected and analysed for their total content of Cu, Pb, Zn, Mn, Cr, Ni and V using ICP (inductively coupled plasma) optical emission spectroscopy. Multivariate and geospatial analyses using GIS were used to draw geochemical maps of the metals of interest over the study area. A few hotspots, areas of elevated metal concentrations, were identified, which are presumed to be the result of anthropogenic activities. In addition, the soil’s environmental quality was evaluated by calculating Müller’s geoaccumulation index (Igeo), which suggests that the degree of contamination of the investigated heavy metals has the following trend: Pb > Zn > Cu > Mn > Ni = Cr = V. Furthermore, the potential ecological risk, using the enrichment factor (EF), was also assessed. On the basis of the calculated EF, the levels of pollution for the studied metals in the study area have the following order: Pb > Zn > Cu > Cr > V > Ni > Mn.
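
For reference, a small sketch of the two indices is given below, assuming the standard Müller formula for Igeo and an enrichment factor normalised to a conservative reference element; the concentrations shown are placeholders, not the study's measurements.

```python
import numpy as np

def igeo(c_sample, c_background):
    """Müller geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return np.log2(c_sample / (1.5 * c_background))

def enrichment_factor(c_metal, c_ref, b_metal, b_ref):
    """EF = (C_metal / C_ref)_sample / (C_metal / C_ref)_background,
    with C_ref a conservative reference element (commonly Fe, Al or Mn)."""
    return (c_metal / c_ref) / (b_metal / b_ref)

# Placeholder values (mg/kg) for one topsoil sample; not the study's data.
pb_sample, pb_background = 850.0, 20.0
fe_sample, fe_background = 28000.0, 35000.0

print("Igeo(Pb) =", round(igeo(pb_sample, pb_background), 2))
print("EF(Pb)   =", round(enrichment_factor(pb_sample, fe_sample,
                                            pb_background, fe_background), 2))
```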

Keywords: enrichment factor, geoaccumulation index, GIS, heavy metals, multivariate analysis

Procedia PDF Downloads 348
24313 Application of Acid Base Accounting to Predict Post-Mining Drainage Quality in Coalfields of the Main Karoo Basin and Selected Sub-Basins, South Africa

Authors: Lindani Ncube, Baojin Zhao, Ken Liu, Helen Johanna Van Niekerk

Abstract:

Acid Base Accounting (ABA) is a tool used to assess the total amount of acidity or alkalinity contained in a specific rock sample and is based on the total S concentration and the carbonate content of the sample. A preliminary ABA test was conducted on 14 sandstone and 5 coal samples taken from coalfields representing the Main Karoo Basin (Highveld, Vryheid and Molteno/Indwe Coalfields) and the Sub-basins (Witbank and Waterberg Coalfields). The results indicate that sandstone and coal from the Main Karoo Basin have the potential to generate Acid Mine Drainage (AMD), as they contain sufficient pyrite to generate acid, with the final pH of the samples relatively low upon complete oxidation of pyrite. Sandstone from collieries representing the Main Karoo Basin is characterised by elevated contents of reactive S%. All the studied samples, except two, were characterised by an Acid Potential (AP) that is less than the Neutralizing Potential (NP). The results further indicate that sandstone from the Main Karoo Basin is more prone to acid generation than sandstone from the Sub-basins, whereas the coal has a relatively low potential for generating acid. The application of ABA in this study contributes to an understanding of the complexities governing water-rock interactions. In general, the coalfields of the Main Karoo Basin have a much higher potential to produce AMD during mining than the coalfields in the Sub-basins.
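
A minimal sketch of the screening-level ABA arithmetic is shown below, assuming the common convention AP = 31.25 × total S% (in kg CaCO₃ equivalent per tonne) and a separately measured NP; the sample values are placeholders, not the Karoo data.

```python
def acid_base_account(total_s_pct, np_kg_caco3_per_t):
    """Screening-level ABA: acid potential from total sulphur, then NNP and NPR."""
    ap = 31.25 * total_s_pct              # kg CaCO3 equivalent per tonne
    nnp = np_kg_caco3_per_t - ap          # net neutralising potential
    npr = np_kg_caco3_per_t / ap if ap > 0 else float("inf")
    return {"AP": ap, "NP": np_kg_caco3_per_t, "NNP": nnp, "NPR": npr}

# Placeholder samples, not the study's measurements.
samples = {"Karoo sandstone": (0.9, 12.0), "Sub-basin coal": (0.2, 18.0)}
for name, (s_pct, np_val) in samples.items():
    acct = acid_base_account(s_pct, np_val)
    verdict = "potentially acid generating" if acct["NNP"] < 0 else "likely non-acid generating"
    print(name, acct, "->", verdict)
```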

Keywords: Main Karoo Basin, sub-basin, coal, sandstone, acid base accounting (ABA)

Procedia PDF Downloads 425
24312 Application of Blockchain Technology in Geological Field

Authors: Mengdi Zhang, Zhenji Gao, Ning Kang, Rongmei Liu

Abstract:

Management and application of geological big data is an important part of China's national big data strategy. With the implementation of the national big data strategy, geological big data management becomes more and more critical. At present, there are still many technological barriers, as well as conceptual confusion, in many aspects of geological big data management and application, such as data sharing, intellectual property protection, and application technology. Therefore, making better use of new technologies for deeper analysis and wider application of geological big data is a key task. In this paper, we briefly introduce the basic principles of blockchain technology and then analyse the dilemmas in the application of geological data. Based on this analysis, we put forward some feasible patterns and scenarios for applying blockchain to geological big data and offer several suggestions for future work in geological big data management.

Keywords: blockchain, intellectual property protection, geological data, big data management

Procedia PDF Downloads 75
24311 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data

Authors: Huinan Zhang, Wenjie Jiang

Abstract:

Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery, and research into, and utilization of, the inner thermal core structure of tropical cyclones still poses challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse- and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. Firstly, the thermal core information in the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. In the first stage, these images provide a coarse-grained wind speed range estimate. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and are fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage in place of traditional single-value regression. The selected tropical cyclones span 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021. Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper groups tropical cyclones into three major categories: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with these data.
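
A minimal numpy sketch of the coarse-grained step is given below: collapsing a (hypothetical) retrieved brightness-temperature cube along the pressure axis by maximal intensity projection and slicing example view profiles; the array shape and values are assumptions, not actual ATMS retrievals.

```python
import numpy as np

# Hypothetical retrieved brightness-temperature cube for one cyclone:
# (pressure_levels, y, x) — standing in for an ATMS-derived thermal-core retrieval.
bt_cube = np.random.default_rng(0).normal(250.0, 8.0, size=(20, 64, 64))

# Maximal intensity projection (MIP) along the pressure axis collapses the
# vertical structure into one coarse-grained image used by the first stage.
mip_image = bt_cube.max(axis=0)             # shape (64, 64)

# Profiles along several viewing directions (e.g. slices through the core)
# could then feed the fine-grained, second-stage branch.
center = mip_image.shape[0] // 2
ew_profile = bt_cube[:, center, :]          # pressure x longitude slice
ns_profile = bt_cube[:, :, center]          # pressure x latitude slice
print(mip_image.shape, ew_profile.shape, ns_profile.shape)
```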

Keywords: artificial intelligence, deep learning, data mining, remote sensing

Procedia PDF Downloads 47
24310 Embodying the Ecological Validity in Creating the Sustainable Public Policy: A Study in Strengthening the Green Economy in Indonesia

Authors: Gatot Dwi Hendro, Hayyan ul Haq

Abstract:

This work aims to explore strategies for embodying ecological validity in the creation of sustainable public policy, particularly in strengthening the green economy in Indonesia. The green economy plays an important role in supporting national development in Indonesia, as it is part of the national policy that holds primary priority in Indonesian governance. The green economy refers to national development covering strategic natural resources, such as mining, gold, oil, coal, forests, water, and marine resources, and the supporting infrastructure for production and distribution, such as factories, roads, bridges, and so forth. Thus, all activities in this national development should consider sustainability. This sustainability requires the strong commitment of the national and regional governments, as well as local governments, to make ecology the main requirement for issuing any policy, such as mining production licences, and for developing and building new production and supporting infrastructure to optimise national resources. For that reason, this work focuses on the strategy of how to embody ecological values and norms in public policy. In detail, it offers a method, i.e., legal techniques, for visualising and embodying ecologically valid norms and public policy. This ecological validity is required in order to maintain and sustain our collective life.

Keywords: ecological validity, sustainable development, coherence, Indonesian Pancasila values, environment, marine

Procedia PDF Downloads 475
24309 Using Rainfall Simulators to Design and Assess the Post-Mining Erosional Stability

Authors: Ashraf M. Khalifa, Hwat Bing So, Greg Maddocks

Abstract:

Changes to the mining environmental approvals process in Queensland have been rolled out under the MERFP Act (2018), including requirements for a Progressive Rehabilitation and Closure Plan (PRC Plan). Key considerations of the landform design report within the PRC Plan must include: (i) identification of materials available for landform rehabilitation, including their ability to achieve the required landform design outcomes, (ii) erosion assessments to determine landform heights, gradients, profiles, and material placement, (iii) slope profile design considering the interactions between soil erodibility, rainfall erosivity, landform height, gradient, and vegetation cover to identify acceptable erosion rates over a long-term average, and (iv) an analysis of future stability based on the factors described above, e.g., erosion and/or landform evolution modelling. ACARP funded an extensive and thorough erosion assessment program using rainfall simulators from 1998 to 2010. The ACARP program included laboratory assessment of 35 soil and spoil samples from 16 coal mines and samples from a gold mine in Queensland using a 3 × 0.8 m laboratory rainfall simulator. The reliability of the laboratory rainfall simulator was verified through field measurements using larger flumes (20 × 5 m) and catchment-scale measurements at three sites (three different catchments with an average area of 2.5 ha each). Soil cover systems are a primary component of a constructed mine landform. Their primary functions are to sustain vegetation and limit the infiltration of water and oxygen into underlying reactive mine waste. If the external surface of the landform erodes, the functions of the cover system cannot be maintained, and the cover system will most likely fail. Assessing a constructed landform’s potential ‘long-term’ erosion stability requires defensible erosion rate thresholds below which rehabilitation landform designs are considered acceptably erosion-resistant or ‘stable’. The process used to quantify erosion rates with rainfall simulators (flumes), measuring rill and inter-rill erosion on bulk samples under laboratory conditions or on in-situ material under field conditions, will be explained.

Keywords: open-cut, mining, erosion, rainfall simulator

Procedia PDF Downloads 96
24308 Characteristic Study of Polymer Sand as a Potential Substitute for Natural River Sand in Construction Industry

Authors: Abhishek Khupsare, Ajay Parmar, Ajay Agarwal, Swapnil Wanjari

Abstract:

The extreme demand for aggregates leads to the exploitation of river beds for fine aggregates, affecting the environment adversely. Therefore, a suitable alternative to natural river sand is essentially required. This study focuses on preventing environmental impact by developing polymer sand to replace natural river sand (NRS). Polymer sand was developed by mixing high-volume fly ash, bottom ash, cement, natural river sand, and a locally purchased high-solid-content polycarboxylate ether-based superplasticizer (HS-PCE). All the physical and chemical properties of the polymer sand (P-sand) were determined and satisfied the requirements of the Indian Standard code. P-sand has a good specific gravity of 2.31 and is classified as zone-I sand with a satisfactory friction angle (37˚) compared to natural river sand (NRS) and geopolymer fly ash sand (GFS). Though the water absorption (6.83%) and pH (12.18) are slightly higher than those of GFS and NRS, the alkali-silica reaction and soundness are well within the permissible limits as per Indian Standards. Chemical analysis by X-ray fluorescence showed the presence of high amounts of SiO2 and Al2O3, with magnitudes of 58.879% and 26.77%, respectively. Finally, the compressive strength of M-25 grade concrete using P-sand and geopolymer sand (GFS) was observed to be 87.51% and 83.82% of that with natural river sand (NRS) after 28 days, respectively. The results of this study indicate that P-sand can be a good alternative to NRS for construction work, as it not only reduces the environmental effect of sand mining but also makes use of fly ash and bottom ash.

Keywords: polymer sand, fly ash, bottom ash, HSPCE plasticizer, river sand mining

Procedia PDF Downloads 66
24307 To Handle Data-Driven Software Development Projects Effectively

Authors: Shahnewaz Khan

Abstract:

Machine learning (ML) techniques are often used in projects for creating data-driven applications. These tasks typically demand additional research and analysis. The proper technique and strategy must be chosen to ensure the success of data-driven projects; otherwise, even with a lot of effort, the necessary development might not be possible. This paper examines the workflow of data-driven software development projects and its implementation process in order to describe how to manage such a project successfully, which will assist in minimizing the added workload.

Keywords: data, data-driven projects, data science, NLP, software project

Procedia PDF Downloads 69
24306 Relationship between the Ability of Accruals and Non-Systematic Risk of Shares for Companies Listed in Stock Exchange: Case Study, Tehran

Authors: Lina Najafian, Hamidreza Vakilifard

Abstract:

The present study focused on the relationship between the quality of accruals and non-systematic risk. The independent variables included the ability of accruals, the information content of accruals, and the amount of discretionary accruals, considered as accruals quality measures. The dependent variable was non-systematic risk based on the Fama and French three-factor model (FFTFM) and the capital asset pricing model (CAPM). The control variables were firm size, financial leverage, stock return, cash flow fluctuations, and book-to-market ratio. Data collection was based on library research and document mining, including financial statements. Multiple regression analysis was used to analyze the data. The results showed that there is a significant direct relationship between financial leverage and discretionary accruals and non-systematic risk based on both FFTFM and CAPM. There is also a significant direct relationship between the ability of accruals, the information content of accruals, firm size, and stock return and non-systematic risk based on both models. It was also found that there is no relationship between book-to-market ratio or cash flow fluctuations and non-systematic risk.
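
As a hedged illustration of how non-systematic risk is typically operationalised, the sketch below estimates it as the standard deviation of residuals from a CAPM regression on simulated returns; the FFTFM variant would simply add the SMB and HML factors as extra regressors. All numbers are placeholders, not Tehran Stock Exchange data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 250                                        # daily observations
mkt_excess = rng.normal(0.0004, 0.01, n)       # market excess returns (placeholder)
stock_excess = 1.2 * mkt_excess + rng.normal(0, 0.015, n)   # simulated stock

# CAPM regression: R_i - R_f = alpha + beta * (R_m - R_f) + eps
X = np.column_stack([np.ones(n), mkt_excess])
coef, *_ = np.linalg.lstsq(X, stock_excess, rcond=None)
residuals = stock_excess - X @ coef

idiosyncratic_risk = residuals.std(ddof=2)     # non-systematic risk proxy
print("alpha=%.5f beta=%.3f idio risk=%.4f" % (coef[0], coef[1], idiosyncratic_risk))
# The FFTFM version adds SMB and HML columns to X before the regression.
```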

Keywords: accruals quality, non-systematic risk, CAPM, FFTFM

Procedia PDF Downloads 152
24305 From Sampling to Sustainable Phosphate Recovery from Mine Waste Rock Piles

Authors: Hicham Amar, Mustapha El Ghorfi, Yassine Taha, Abdellatif Elghali, Rachid Hakkou, Mostafa Benzaazoua

Abstract:

Phosphate mine waste rock (PMWR) generated during ore extraction is continuously increasing, resulting in a significant environmental footprint. The main objectives of this study are i) elaboration of a sampling strategy for PMWR piles, ii) mineralogical and chemical characterization of the PMWR piles, and iii) creation of a 3D block model to evaluate the potential valorization of the existing PMWR. Destructive reverse circulation drilling of 13 drill holes was used to collect samples for chemical (X-ray fluorescence) and mineralogical assays. The 3D block model was created from the resulting dataset, including the chemical data of the drill holes, using Datamine RM software. Optical microscopy observations showed that the sandy phosphate from the drill holes in the PMWR piles is characterized by an abundance of carbonate fluorapatite with the presence of calcite, dolomite, and quartz. The mean grade of the composite samples was around 19.5±2.7% P₂O₅, and the P₂O₅ grade tended to increase along the depth profile from the bottom to the top of the PMWR piles. The 3D block model generated with the chemical data confirmed this grade variation and may allow selective extraction according to %P₂O₅. The 3D block model of P₂O₅ grade, built on the sampling approach, thus proved an efficient way of confirming the variation of the P₂O₅ grade. This integrated approach to PMWR management will be a helpful decision-making tool for recovering the residual phosphate, supporting the circular economy and sustainability in the phosphate mining industry.

Keywords: 3D modelling, reverse circulation drilling, circular economy, phosphate mine waste rock, sampling

Procedia PDF Downloads 66
24304 Cardiovascular Disease Prediction Using Machine Learning Approaches

Authors: P. Halder, A. Zaman

Abstract:

It is estimated that heart disease accounts for one in ten deaths worldwide and, according to the World Health Organization, is among the leading causes of death in the United States. Cardiovascular diseases (CVDs) account for one in four U.S. deaths, according to the Centers for Disease Control and Prevention (CDC). According to statistics, women are more likely than men to die from heart disease as a result of strokes, while a 50% increase in men's mortality was reported by the World Health Organization in 2009. The consequences of cardiovascular disease are severe, and its causes include diabetes, high blood pressure, high cholesterol, abnormal pulse rates, etc. Machine learning (ML) can be used to make predictions and decisions in the healthcare industry, and scientists have therefore turned to modern technologies such as machine learning and data mining to predict diseases. In this study, disease prediction is based on four algorithms; compared with the others, AdaBoost is much more accurate.
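
A minimal sketch of comparing the four algorithm families named above on a synthetic stand-in dataset is shown below; the features, sample size, and hyperparameters are assumptions, not the study's setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a heart-disease table (age, blood pressure, cholesterol, ...).
X, y = make_classification(n_samples=600, n_features=13, n_informative=8, random_state=0)

models = {
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
    "AdaBoost": AdaBoostClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:13s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```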

Keywords: heart disease, cardiovascular disease, coronary artery disease, feature selection, random forest, AdaBoost, SVM, decision tree

Procedia PDF Downloads 142
24303 Mineral Deposits in Spatial Planning Systems – Review of European Practices

Authors: Alicja Kot-Niewiadomska

Abstract:

Securing sustainable access to raw materials is vital for the growth of the European economy and for the goals laid down in the Europe 2020 Strategy. One of the most important sources of mineral raw materials is primary deposits, and their efficient management, including extraction, will ensure the competitiveness of the European economy. A critical element of this approach is the safeguarding of mineral deposits, and its most important tool is spatial planning. Deposit safeguarding should be understood as safeguarding of land access and safeguarding of the area against development which may (potentially) prevent use of the deposit and the necessary mining activities. Many European Union countries have successfully integrated their mineral policy and spatial policy, which has ensured the proper place of mineral deposits in their spatial planning systems. These systems, in turn, are widely recognized as the most important mineral deposit safeguarding tool, the essence of which is to ensure long-term access to deposit resources. The examples of Austria, Portugal, Slovakia, the Czech Republic, Sweden, and the United Kingdom, discussed in the paper, are often mentioned as examples of good practice in this area. Although none of these countries has managed to avoid social and environmental conflicts related to mining activities, the solutions they implement certainly deserve special attention, and for many countries, including Poland, they can be a potential source of solutions aimed at improving the protection of mineral deposits.

Keywords: mineral deposits, land use planning, mineral deposit safeguarding, European practices

Procedia PDF Downloads 164
24302 Ecotourism Sites in Central Visayas, Philippines: A Green Business Profile

Authors: Ivy Jumao-As, Randy Lupango, Clifford Villaflores, Marites Khanser

Abstract:

Alongside inadequate implementation of ecotourism standards and other pressing issues in sustainable development is the lack of business plans and formal business structures at various ecotourism sites in Central Visayas, Philippines, and other parts of the country. Addressing these issues plays a key role in boosting ecotourism, which is a sustainability tool for the country's economic development. A three-phase research project was designed to investigate the green business practices of selected ecotourism sites in the region in order to propose a business model for ecotourism destinations within and beyond the region. This paper reports the initial phase of the study, which profiled the sites and their operators at the following selected destinations: Cebu City Protected Landscape and Olango Island Wildlife Bird Sanctuary in Cebu, and Rajah Sikatuna Protected Landscape in Bohol. Interviews, self-administered questionnaires with key informants, and data mining were employed in the data collection. The findings highlighted similarities and differences in terms of ecotourism products, type and number of visitors, manpower composition, cultural and natural resources, complementary services and products, awards and accreditation, and peak and off-peak seasons, among others. Recommendations based on common issues identified in this initial study are also highlighted.

Keywords: ecotourism, ecotourism sites, green business, sustainability

Procedia PDF Downloads 251
24301 The Relationship Between Artificial Intelligence, Data Science, and Privacy

Authors: M. Naidoo

Abstract:

Artificial intelligence often requires large amounts of good-quality data. In important fields such as healthcare, the training of AI systems predominantly relies on health and personal data; however, the usage of these data is complicated by various layers of law and ethics that seek to protect individuals’ privacy rights. This research seeks to establish the challenges AI and data science pose to (i) informational rights, (ii) privacy rights, and (iii) data protection. To solve some of the issues presented, various methods are suggested, such as embedding values in technological development, proper balancing of rights and interests, and others.

Keywords: artificial intelligence, data science, law, policy

Procedia PDF Downloads 96
24300 Simulation Data Summarization Based on Spatial Histograms

Authors: Jing Zhao, Yoshiharu Ishikawa, Chuan Xiao, Kento Sugiura

Abstract:

In order to analyze large-scale scientific data, research on data exploration and visualization has gained popularity. In this paper, we focus on the exploration and visualization of scientific simulation data, and define a spatial V-Optimal histogram for data summarization. We propose histogram construction algorithms based on a general binary hierarchical partitioning as well as a more specific one, the l-grid partitioning. For effective data summarization and efficient data visualization in scientific data analysis, we propose an optimal algorithm as well as a heuristic algorithm for histogram construction. To verify the effectiveness and efficiency of the proposed methods, we conduct experiments on the massive evacuation simulation data.
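
As a rough, greedy illustration of binary hierarchical partitioning (not the optimal or l-grid algorithms proposed in the paper), the sketch below recursively splits 2-D simulation points at the median of the better axis to reduce within-bucket squared error; the data are synthetic placeholders.

```python
import numpy as np

def build_histogram(points, values, depth=0, max_depth=4, min_pts=8):
    """Greedy binary hierarchical partition of 2-D simulation points.
    Each leaf stores a count and mean value, i.e. one histogram bucket."""
    if depth == max_depth or len(values) < min_pts:
        return {"count": int(len(values)), "mean": float(values.mean())}
    best = None
    for axis in (0, 1):                       # try splitting on x, then y
        cut = np.median(points[:, axis])
        left = points[:, axis] <= cut
        if left.all() or not left.any():      # skip degenerate splits
            continue
        sse = ((values[left] - values[left].mean()) ** 2).sum() + \
              ((values[~left] - values[~left].mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, axis, cut, left)
    if best is None:
        return {"count": int(len(values)), "mean": float(values.mean())}
    _, axis, cut, left = best
    return {"axis": axis, "cut": float(cut),
            "low": build_histogram(points[left], values[left], depth + 1, max_depth, min_pts),
            "high": build_histogram(points[~left], values[~left], depth + 1, max_depth, min_pts)}

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(1000, 2))          # e.g. evacuee positions
vals = rng.poisson(3, size=1000).astype(float)     # e.g. a local quantity per point
summary = build_histogram(pts, vals)
```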

Keywords: simulation data, data summarization, spatial histograms, exploration, visualization

Procedia PDF Downloads 169
24299 A Practical and Theoretical Study on the Electromotor Bearing Defect Detection in a Wet Mill Using the Vibration Analysis Method and Defect Length Calculation in the Bearing

Authors: Mostafa Firoozabadi, Alireza Foroughi Nematollahi

Abstract:

Wet mills are among the most important pieces of equipment in the mining industry, and any defect in them can stop the production line and cause irrecoverable damage to the system. Electromotors are significant parts of a mill, and monitoring them is necessary to prevent unwanted defects. The purpose of this study is to investigate electromotor bearing defects, theoretically and practically, using the vibration analysis method. When a defect occurs in a bearing, it can be transferred to other parts of the component, such as the inner ring, outer ring, balls, and bearing cage. The source of electromotor defects can be electrical or mechanical; sometimes, the electrical and mechanical defect frequencies are modulated, and bearing defect detection becomes difficult. In this paper, to detect electromotor bearing defects, the electrical and mechanical defect frequencies are first extracted. Then, by calculating the bearing defect frequencies and analysing the spectrum and time signal, the bearing defects are detected. In addition, the obtained frequency indicates the bearing component in which the defect has occurred, and by comparing its level to the standards, the remaining bearing lifetime is determined. Finally, the defect length is calculated from theoretical equations to demonstrate that there is no need to replace the bearing. The results of the proposed method, which has been implemented on the wet mills of the Golgohar mining and industrial company in Iran, show that it is capable of detecting electromotor bearing defects accurately and on time.
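
For reference, the classic kinematic bearing defect frequencies can be computed as in the sketch below, assuming the standard formulas for the cage, outer-race, inner-race, and ball-spin frequencies; the bearing geometry and shaft speed are placeholders, not the Golgohar machines.

```python
import math

def bearing_defect_frequencies(shaft_hz, n_balls, ball_d, pitch_d, contact_angle_deg=0.0):
    """Classic kinematic defect frequencies for a rolling-element bearing."""
    ratio = (ball_d / pitch_d) * math.cos(math.radians(contact_angle_deg))
    ftf  = 0.5 * shaft_hz * (1 - ratio)                              # cage (fundamental train)
    bpfo = 0.5 * n_balls * shaft_hz * (1 - ratio)                    # outer race
    bpfi = 0.5 * n_balls * shaft_hz * (1 + ratio)                    # inner race
    bsf  = (pitch_d / (2 * ball_d)) * shaft_hz * (1 - ratio ** 2)    # ball spin
    return {"FTF": ftf, "BPFO": bpfo, "BPFI": bpfi, "BSF": bsf}

# Placeholder geometry (mm) and shaft speed for an electromotor bearing:
freqs = bearing_defect_frequencies(shaft_hz=24.75, n_balls=9, ball_d=7.94, pitch_d=39.04)
print({k: round(v, 1) for k, v in freqs.items()})
# Peaks at these frequencies (and their harmonics/sidebands) in the vibration
# spectrum indicate which bearing component carries the defect.
```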

Keywords: bearing defect length, defect frequency, electromotor defects, vibration analysis

Procedia PDF Downloads 492
24298 Development of Knowledge Discovery Based Interactive Decision Support System on Web Platform for Maternal and Child Health System Strengthening

Authors: Partha Saha, Uttam Kumar Banerjee

Abstract:

Maternal and Child Healthcare (MCH) has always been regarded as one of the most important issues globally. Reduction of maternal and child mortality rates and increased healthcare service coverage were declared among the targets of the Millennium Development Goals until 2015 and thereafter became important components of the Sustainable Development Goals. Over the last decade, worldwide MCH indicators have improved but have not matched the expected levels. The progress of both maternal and child mortality rates has been monitored by several researchers, and each study has stated that fewer than 26% of low- and middle-income countries (LMICs) were on track to achieve the targets prescribed by MDG4. The average worldwide annual rates of reduction of the under-five mortality rate and the maternal mortality rate were 2.2% and 1.9% as of 2011, respectively, whereas rates of at least 4.4% and 5.5% annually are required to achieve the targets. In spite of proven healthcare interventions for both mothers and children, these could not be scaled up to the required volume due to fragmented health systems, especially in developing and under-developed countries. In this research, a knowledge discovery based interactive Decision Support System (DSS) has been developed on a web platform to assist healthcare policy makers in developing evidence-based policies. To achieve desirable results in MCH, efficient resource planning is required, and in most LMICs resources are a major constraint. Knowledge generated through this system would help healthcare managers develop strategic resource plans to combat issues such as large inequities and low coverage in MCH. The system would help healthcare managers accomplish four tasks: a) comprehending region-wise conditions of variables related to MCH, b) identifying relationships among variables, c) segmenting regions based on variable status, and d) finding segment-wise key influential variables which have a major impact on healthcare indicators. The whole system development process was divided into three phases: i) identifying contemporary issues related to MCH services and policy making; ii) development of the system; and iii) verification and validation of the system. More than 90 variables under three categories, namely a) educational, social, and economic parameters; b) MCH interventions; and c) health system building blocks, have been included in this web-based DSS, and five separate modules have been developed. The first module is designed for analysing the current healthcare scenario; the second helps healthcare managers understand correlations among variables; the third reveals frequently occurring incidents along with different MCH interventions; the fourth segments regions based on the three categories mentioned above; and in the fifth module, segment-wise key influential interventions are identified. India has been considered as the case study area in this research, and data from 601 districts of India have been used to inspect the effectiveness of the developed modules. The system has been developed by incorporating different statistical and data mining techniques on a web platform. Policy makers would be able to generate different scenarios from the system before drawing any inference, aided by its interactive capability.
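
As a hedged sketch of tasks c) and d) — segmenting regions and finding segment-wise influential variables — the example below clusters synthetic district-level indicators with k-means and ranks variables by random forest importance within each segment; the variable names and relationships are invented, not the DSS's actual 90+ variables.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for district-level indicators (601 districts, a few placeholder variables).
districts = pd.DataFrame({
    "female_literacy": rng.uniform(40, 95, 601),
    "institutional_delivery": rng.uniform(30, 99, 601),
    "full_immunisation": rng.uniform(35, 95, 601),
    "anc_visits": rng.uniform(20, 90, 601),
})
districts["u5_mortality"] = (80 - 0.4 * districts["full_immunisation"]
                             - 0.2 * districts["institutional_delivery"]
                             + rng.normal(0, 3, 601))

X = StandardScaler().fit_transform(districts.drop(columns="u5_mortality"))
districts["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Segment-wise key influential variables via feature importance on the outcome.
for seg, grp in districts.groupby("segment"):
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(grp.drop(columns=["u5_mortality", "segment"]), grp["u5_mortality"])
    ranking = pd.Series(rf.feature_importances_,
                        index=grp.drop(columns=["u5_mortality", "segment"]).columns)
    print(f"segment {seg}:", ranking.sort_values(ascending=False).head(2).round(2).to_dict())
```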

Keywords: maternal and child heathcare, decision support systems, data mining techniques, low and middle income countries

Procedia PDF Downloads 248
24297 Using Multi-Arm Bandits to Optimize Game Play Metrics and Effective Game Design

Authors: Kenny Raharjo, Ramon Lawrence

Abstract:

Game designers have the challenging task of building games that engage players so that they spend their time and money on the game. There is an infinite number of game variations and design choices, and it is hard to systematically determine which design choices will produce positive experiences for players. In this work, we demonstrate how multi-arm bandits can be used to automatically explore game design variations to achieve improved player metrics. The advantage of multi-arm bandits is that they allow continuous experimentation and variation, intrinsically converge to the best solution, and require no special infrastructure beyond allowing minor game variations to be deployed to users for evaluation. A user study confirms that applying multi-arm bandits was successful in determining the preferred game variation with the highest play-time metrics and can be a useful technique in a game designer's toolkit.
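
A minimal epsilon-greedy sketch of the idea is shown below: each new player is assigned to a game variation, and the observed play time updates that variation's running mean. The variation count, epsilon, and simulated play-time distributions are assumptions for the example.

```python
import random

class EpsilonGreedyBandit:
    """Assign each new player to a game variation; update with observed play time."""
    def __init__(self, n_variations, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_variations
        self.means = [0.0] * n_variations

    def choose(self):
        if random.random() < self.epsilon or all(c == 0 for c in self.counts):
            return random.randrange(len(self.means))                    # explore
        return max(range(len(self.means)), key=self.means.__getitem__)  # exploit

    def update(self, arm, play_time):
        self.counts[arm] += 1
        self.means[arm] += (play_time - self.means[arm]) / self.counts[arm]

# Simulated deployment: variation 2 truly yields the longest play time.
true_means = [12.0, 15.0, 21.0, 9.0]
bandit = EpsilonGreedyBandit(len(true_means))
for _ in range(5000):
    arm = bandit.choose()
    bandit.update(arm, random.gauss(true_means[arm], 4.0))
print("estimated best variation:", bandit.means.index(max(bandit.means)))
```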

Keywords: game design, multi-arm bandit, design exploration and data mining, player metric optimization and analytics

Procedia PDF Downloads 506
24296 Analysis of the Development of Mining Companies Social Corporate Responsibility Based on the Rating Score

Authors: Tatiana Ponomarenko, Oksana Marinina, Marina Nevskaya

Abstract:

Modern corporate social responsibility (CSR) is a sphere of multilevel responsibility of a company toward society, represented by various stakeholders. The relevance of CSR management is growing due to the active development of socially responsible investing (principles for responsible investment) that takes into account environmental, social, and corporate governance (ESG) factors, and the growing attention of the investment community in general to the long-term stability of companies and the quality of control of non-financial risks. The modern approach to strategic CSR management is aimed at creating trustful relationships with stakeholders, on the basis of which a contribution to the sustainable development of companies, regions, and national economies is ensured. However, practical concepts of social responsibility in mining companies differ, which leads to varying degrees of CSR application. A number of companies implement CSR using a traditional (limited) understanding of responsibility toward employees and counteragents; others understand CSR much more broadly and try to use the levers of efficient cooperation. As the scope of CSR measures in large mining companies is diverse and characterized by different indices, the study aimed to evaluate CSR efficiency on the basis of a proprietary methodology and to determine the level of development of CSR management in terms of anti-crisis, reactive, and proactive development. The research methodology includes analysis of the integrated Global Reporting Initiative (GRI) reports of large mining companies; selection of the most representative sectoral agents by the criterion of regular issuance and publication of reports; and calculation of indices for evaluating the CSR level of the selected companies over time. The evaluation of the CSR level is based on a rating score of changes in the standard indices of GRI reports along the economic, environmental, and social directions. Results: by the results of the analysis, companies of the fuel and energy and metallurgical complexes, the overwhelming majority of which reflect only three indices out of the wide range of possible SDG (Sustainable Development Goals) indicators, were selected for the study. The evaluation of the CSR scope of the companies Gazprom, LUKOIL, Metalloinvest, Nornikel, Rosneft, Severstal, SIBUR, and SUEK corresponds to the reactive type of development on the scale of strategic CSR management, which is the middle value among the possible levels. The chief drawback is that companies, in the process of analyzing global goals, often choose the goals that relate to their own activities, paying insufficient attention to the interests of stakeholders inside the country. This fact evidences the necessity of searching for more effective mechanisms of CSR control. Acknowledgment: This article was prepared with grant support from the RFBR, project 19-510-44013 'Development of the concept of mineral resources value formation in the context of sustainable development in resource-oriented economies'.

Keywords: sustainable development, corporate social responsibility, development strategies, efficiency assessment

Procedia PDF Downloads 127
24295 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine

Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li

Abstract:

Machine learning and data mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique for predicting qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study introduces a hybrid classification model based on information theory and the Support Vector Machine (SVM), using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai and Tianjin, from January 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection classifies daily air quality into 6 levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent, based on the respective Air Quality Index (AQI) values. Using information theory, information gain (IG) is calculated and feature selection is performed for both categorical features and continuous numeric features. The SVM machine learning algorithm is then applied to the selected features with cross-validation. The final evaluation reveals that the hybrid IG and SVM model performs better than the SVM (alone), Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
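
A small sketch of the hybrid idea is given below, using scikit-learn's mutual information estimate as a stand-in for information gain, followed by a cross-validated SVM; the synthetic features and the pipeline settings are assumptions, not the study's AQI data or exact procedure.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for daily pollutant/meteorology features and the 6 AQI levels.
X, y = make_classification(n_samples=800, n_features=20, n_informative=8,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=8),   # information-gain-style feature selection
    StandardScaler(),
    SVC(kernel="rbf", C=10, gamma="scale"),
)
scores = cross_val_score(model, X, y, cv=5)
print("IG + SVM cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```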

Keywords: machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation

Procedia PDF Downloads 229
24294 Computational Chemical-Composition of Carbohydrates in the Context of Healthcare Informatics

Authors: S. Chandrasekaran, S. Nandita, M. Shivathmika, Srikrishnan Shivakumar

Abstract:

The objective of this research work is to analyze the computational chemical composition of carbohydrates in the context of healthcare informatics. The computation involves representing the complex molecular structure of carbohydrates using graph theory and a deployable Chemical Markup Language (CML). The parallel molecular structures of the chemical molecules, with or without adulterants added for business profit, can be analyzed in terms of robustness and derivatization measures. Rural healthcare programs should create awareness of malnutrition, reduce the ill effects of decomposition, and help consumers know the level of such energy-storage mixtures in a quantitative way. Earlier works were based on empirical and wet-laboratory data, which can vary from time to time and whose mining results cannot be reused. This work carries out quantitative computational chemistry on carbohydrates to support a safe and secure right-to-food act and its regulations.

Keywords: carbohydrates, chemical-composition, chemical markup, robustness, food safety

Procedia PDF Downloads 370
24293 Big Data: Concepts, Technologies and Applications in the Public Sector

Authors: A. Alexandru, C. A. Alexandru, D. Coardos, E. Tudora

Abstract:

Big Data (BD) is associated with a new generation of technologies and architectures which can harness the value of extremely large volumes of very varied data through real-time processing and analysis. It involves changes in (1) data types, (2) accumulation speed, and (3) data volume. This paper presents the main concepts related to the BD paradigm and introduces architectures and technologies for BD and BD sets. The integration of BD with the Hadoop framework is also underlined. BD has attracted a lot of attention in the public sector due to newly emerging technologies that make network access widely available, and the volume of different types of data has increased exponentially. Some applications of BD in the public sector in Romania are briefly presented.

Keywords: big data, big data analytics, Hadoop, cloud

Procedia PDF Downloads 300
24292 The Regulation of Reputational Information in the Sharing Economy

Authors: Emre Bayamlıoğlu

Abstract:

This paper aims to provide an account of the legal and regulatory aspects of algorithmic reputation systems, with a special emphasis on the sharing economy (i.e., Uber, Airbnb, Lyft) business model. The first section starts with an analysis of the legal and commercial nature of the tripartite relationship among the parties, namely the host platform, the individual sharers/service providers, and the consumers/users. The section further examines to what extent an algorithmic system of reputational information could serve as an alternative to legal regulation. Shortcomings are explained and analyzed with specific examples from the Airbnb platform, a pioneering success in the sharing economy. The following section focuses on the governance and control of reputational information. It first analyzes the legal consequences of algorithmic filtering systems that detect undesired comments and how a delicate balance could be struck between competing interests such as freedom of speech, privacy, and the integrity of commercial reputation. The third section deals with the problem of manipulation by users. Indeed, many sharing economy businesses employ data mining and natural language processing techniques to verify the consistency of feedback, while software agents referred to as "bots" are employed by users to "produce" fake reputation values. Such automated techniques are deceptive and have significant negative effects, undermining the trust upon which the reputational system is built. The fourth section explores concerns regarding data mobility, data ownership, and privacy. Reputational information provided by consumers in the form of textual comments may be regarded as a writing eligible for copyright protection, and algorithmic reputational systems also contain personal data pertaining to both the individual entrepreneurs and the consumers. The final section starts with an overview of the notion of reputation as a communitarian and collective form of referential trust and then evaluates the above legal arguments from the perspective of the public interest in the integrity of reputational information. The paper concludes with guidelines and design principles for algorithmic reputation systems to address the legal implications raised above.

Keywords: sharing economy, design principles of algorithmic regulation, reputational systems, personal data protection, privacy

Procedia PDF Downloads 458
24291 Semantic Data Schema Recognition

Authors: Aïcha Ben Salem, Faouzi Boufares, Sebastiao Correia

Abstract:

The subject covered in this paper aims at assisting users in their data quality approach. The goal is to better extract, mix, interpret, and reuse data. It deals with the semantic schema recognition of a data source, which enables the extraction of data semantics from all the available information, including the data and the metadata. Firstly, it consists of categorizing the data by assigning it to a category and possibly a sub-category, and secondly, of establishing relations between columns and possibly discovering the semantics of the manipulated data source. The links detected between columns offer a better understanding of the source and of the alternatives for correcting data. This approach allows automatic detection of a large number of syntactic and semantic anomalies.
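
As a toy illustration of the column categorisation step, the sketch below assigns a semantic category to each column when enough of its values match a pattern; the categories, regular expressions, and threshold are invented for the example.

```python
import re

PATTERNS = {
    "email":  re.compile(r"^[^@\s]+@[^@\s]+\.[a-z]{2,}$", re.I),
    "date":   re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "phone":  re.compile(r"^\+?\d[\d\s().-]{6,}$"),
    "number": re.compile(r"^-?\d+([.,]\d+)?$"),
}

def categorise_column(values, min_ratio=0.8):
    """Assign the dominant semantic category if enough values match its pattern."""
    non_empty = [v for v in values if v]
    for category, pattern in PATTERNS.items():
        hits = sum(bool(pattern.match(v)) for v in non_empty)
        if non_empty and hits / len(non_empty) >= min_ratio:
            return category
    return "unknown (free text)"

columns = {
    "contact": ["ana@example.org", "b.farid@example.com", "c.silva@example.net"],
    "created": ["2021-04-01", "2021-05-17", "2021-06-03"],
}
for name, values in columns.items():
    print(name, "->", categorise_column(values))
```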

Keywords: schema recognition, semantic data profiling, meta-categorisation, semantic dependencies inter columns

Procedia PDF Downloads 412