Search results for: ArcGIS data analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 42037

41347 Generation of Automated Alarms for Plantwide Process Monitoring

Authors: Hyun-Woo Cho

Abstract:

Early detection of incipient abnormal operations is essential to plant-wide process management, improving both product quality and process safety. Generating warning signals or alarms for operating personnel plays an important role in process automation and intelligent plant health monitoring. Various methodologies have been developed and applied in this area, including expert systems, mathematical model-based approaches, and multivariate statistical approaches. This work presents a nonlinear empirical monitoring methodology based on real-time analysis of massive process data. Such big data, however, contains measurement noise and unwanted variations unrelated to true process behavior, so these unnecessary patterns are removed in a data preprocessing step to enhance detection speed and accuracy. The performance of the methodology was demonstrated using simulated process data. The case study showed that detection speed and accuracy improved significantly irrespective of the size and location of abnormal events.
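
As a rough illustration only (the abstract does not specify its algorithm), the following Python sketch stands in moving-average denoising and a PCA T² alarm statistic for the paper's empirical monitoring method; all data and thresholds are assumed:

```python
# Illustrative stand-in, not the paper's method: moving-average denoising
# followed by a PCA T^2 monitoring statistic with an empirical alarm limit.
import numpy as np
from sklearn.decomposition import PCA

def denoise(X, window=5):
    """Smooth each measurement channel to suppress noise unrelated to the process."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, X)

rng = np.random.default_rng(0)
X_train = denoise(rng.normal(size=(500, 10)))          # normal operating data
X_test = denoise(rng.normal(size=(100, 10)) + 2.0)     # strongly shifted (abnormal) data

pca = PCA(n_components=3).fit(X_train)
def t2(X):
    scores = pca.transform(X)
    return np.sum(scores**2 / pca.explained_variance_, axis=1)

limit = np.percentile(t2(X_train), 99)                  # empirical control limit
alarms = t2(X_test) > limit                             # True where an alarm is raised
print(f"{alarms.mean():.0%} of test samples trigger an alarm")
```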

Keywords: detection, monitoring, process data, noise

Procedia PDF Downloads 252
41346 lncRNA Gene Expression Profiling Analysis by TCGA RNA-Seq Data of Breast Cancer

Authors: Xiaoping Su, Gabriel G. Malouf

Abstract:

Introduction: Breast cancer is a heterogeneous disease that can be classified into 4 subgroups using transcriptional profiling. The role of lncRNA expression in human breast cancer biology, prognosis, and molecular classification remains unknown. Methods and results: Using an integrative comprehensive analysis of lncRNA, mRNA, and DNA methylation in 900 breast cancer patients from The Cancer Genome Atlas (TCGA) project, we unraveled the molecular portraits of 1,700 expressed lncRNAs. Some of those lncRNAs (e.g., HOTAIR) were previously reported, and others are novel (e.g., HOTAIRM1, MAPT-AS1). The lncRNA classification correlated well with the PAM50 classification for the basal-like, Her-2 enriched, and luminal B subgroups, in contrast to the luminal A subgroup, which behaved differently. Importantly, estrogen receptor (ESR1) expression was associated with distinct lncRNA networks in lncRNA clusters III and IV. Gene set enrichment analysis for cis- and trans-acting lncRNAs showed enrichment for breast cancer signatures driven by breast cancer master regulators. Almost two-thirds of those lncRNAs were marked by enhancer chromatin modifications (i.e., H3K27ac), suggesting that lncRNA expression may result in increased activity of neighboring genes. Differential analysis of gene expression profiling data showed that lncRNA HOTAIRM1 was significantly down-regulated in the basal-like subtype, and DNA methylation profiling data showed that HOTAIRM1 was highly methylated in that subtype. Thus, our integrative analysis of gene expression and DNA methylation strongly suggests that HOTAIRM1 acts as a tumor suppressor in the basal-like subtype. Conclusion and significance: Our study depicts the first lncRNA molecular portrait of breast cancer and shows that lncRNA HOTAIRM1 might be a novel tumor suppressor.
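
The TCGA pipeline itself is far more involved, but one step (flagging subtype-specific down-regulation) can be sketched with a per-gene t-test and Benjamini-Hochberg correction; the simulated data and sample counts below are assumptions:

```python
# Minimal, hedged sketch of differential expression testing between subtypes.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
basal = rng.normal(loc=4.0, scale=1.0, size=(120, 1700))    # basal-like samples x lncRNAs
other = rng.normal(loc=5.0, scale=1.0, size=(780, 1700))    # remaining subtypes

t, p = stats.ttest_ind(basal, other, axis=0)
reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")  # BH correction
down = (t < 0) & reject                                      # down-regulated in basal-like
print(f"{down.sum()} lncRNAs significantly down-regulated in the basal-like subtype")
```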

Keywords: lncRNA profiling, breast cancer, HOTAIRM1, tumor suppressor

Procedia PDF Downloads 104
41345 A Model Architecture Transformation with Approach by Modeling: From UML to Multidimensional Schemas of Data Warehouses

Authors: Ouzayr Rabhi, Ibtissam Arrassen

Abstract:

To provide a complete analysis of the organization and to support decision-making, leaders need relevant data; Data Warehouses (DW) are designed to meet such needs. However, designing a DW is not trivial, and there is no formal method to derive a multidimensional schema from heterogeneous databases. In this article, we present a model-driven approach to the design of data warehouses. We describe a multidimensional meta-model and specify a set of transformations starting from a Unified Modeling Language (UML) metamodel. In this approach, the UML metamodel and the multidimensional one are both considered platform-independent models (PIM). The first meta-model is mapped into the second through transformation rules expressed in the Query/View/Transformation (QVT) language. The proposal is validated by applying our approach to generate a multidimensional schema for a Balanced Scorecard (BSC) DW. We are particularly interested in the BSC perspectives, which are closely linked to the vision and strategies of an organization.
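
The paper expresses its rules in QVT; purely to convey the flavor of the mapping, here is a hypothetical Python sketch in which classes carrying numeric measures and associations become facts, and the rest become dimensions (rule and names are invented, not the authors'):

```python
# Hypothetical illustration of a UML -> star-schema transformation rule.
from dataclasses import dataclass, field

@dataclass
class UMLClass:
    name: str
    attributes: dict                                    # attribute name -> type
    associations: list = field(default_factory=list)   # names of associated classes

def to_star_schema(classes):
    schema = {"facts": {}, "dimensions": {}}
    for c in classes:
        measures = [a for a, t in c.attributes.items() if t in ("int", "float")]
        if measures and c.associations:                 # rule: measures + links => fact
            schema["facts"][c.name] = {"measures": measures, "dims": c.associations}
        else:                                           # rule: otherwise => dimension
            schema["dimensions"][c.name] = list(c.attributes)
    return schema

sales = UMLClass("Sale", {"amount": "float", "qty": "int"}, ["Product", "Date"])
product = UMLClass("Product", {"name": "str", "category": "str"})
print(to_star_schema([sales, product]))
```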

Keywords: data warehouse, meta-model, model-driven architecture, transformation, UML

Procedia PDF Downloads 158
41344 The Influence of the Vocational Teachers Empowerment toward the Vocational High Schools’ Performance Based on the Education National Standards of Indonesia

Authors: Abdul Haris Setiawan

Abstract:

Teacher empowerment is one of the important factors considered to contribute significantly to the achievement of national education goals. This study was conducted to determine the influence of vocational teacher empowerment on the performance of vocational high schools based on the Education National Standards of Indonesia. The population of the study was all vocational teachers at the state vocational high schools in Surakarta, Central Java Province, Indonesia, sampled using proportional random sampling. The study used quantitative descriptive statistical analysis techniques. Data were collected using questionnaires, tested against the assumptions required for regression analysis, and then processed using regression analysis between the independent and dependent variables to determine the effect and the regression equation. The results found that the level of vocational high school performance based on the Education National Standards of Indonesia was 74.29%, in the high category; the level of vocational teacher empowerment was 76.20%, also in the high category; and there was a positive influence of vocational teacher empowerment on vocational high school performance, with a correlation coefficient of 0.886 and a contribution of 78.50%, giving the regression equation Y = 79.431 + 0.534X.
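
The reported figures cohere: the contribution equals the squared correlation, and 0.886² ≈ 0.785, i.e. 78.5%. A short sketch on synthetic data (the study's raw data are not available) shows how these quantities are computed:

```python
# Sketch of the reported regression analysis on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(50, 100, size=200)                       # teacher empowerment scores
y = 79.431 + 0.534 * x + rng.normal(scale=3, size=200)   # school performance + noise

slope, intercept = np.polyfit(x, y, 1)                   # ordinary least squares fit
r = np.corrcoef(x, y)[0, 1]
print(f"Y = {intercept:.3f} + {slope:.3f} X")
print(f"r = {r:.3f}, contribution (r^2) = {r**2:.1%}")
print(f"check: 0.886^2 = {0.886**2:.3f}  # matches the reported 78.50%")
```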

Keywords: vocational teachers, empowerment, vocational high school, the education national standards

Procedia PDF Downloads 392
41343 Secured Embedding of Patient’s Confidential Data in Electrocardiogram Using Chaotic Maps

Authors: Butta Singh

Abstract:

This paper presents a chaotic map based approach for secured embedding of a patient's confidential data in an electrocardiogram (ECG) signal. The chaotic map generates predefined embedding locations through the use of selected control parameters. The sample value difference method then effectively hides the confidential data in ECG sample pairs at these predefined locations. Evaluation of the proposed method on all 48 records of the MIT-BIH arrhythmia ECG database demonstrates that the embedding does not alter the diagnostic features of the cover ECG. The imperceptibility of the secret data in the stego-ECG is evident through various statistical and clinical performance measures. The statistical metrics comprise Percentage Root Mean Square Difference (PRD) and Peak Signal-to-Noise Ratio (PSNR). Further, a comparative analysis between the proposed method and existing approaches was performed; the results clearly demonstrate the superiority of the proposed method.
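
A simplified sketch of the scheme: a logistic map picks embedding locations, a stand-in perturbation rule hides the bits (the paper's exact sample value difference rule is not reproduced here), and PRD/PSNR quantify the distortion. Signal, bit payload, and map parameters are all assumed:

```python
# Hedged sketch: chaotic-map location selection + toy embedding + PRD/PSNR.
import numpy as np

def logistic_locations(n_samples, n_bits, x0=0.7, r=3.99, skip=100):
    """Logistic map -> n_bits distinct sample indices, reproducible from (x0, r)."""
    x, idx, seen = x0, [], set()
    for _ in range(skip):
        x = r * x * (1 - x)
    while len(idx) < n_bits:
        x = r * x * (1 - x)
        i = int(x * n_samples)
        if i not in seen:
            seen.add(i); idx.append(i)
    return np.array(idx)

rng = np.random.default_rng(3)
ecg = np.cumsum(rng.normal(size=5000))              # toy ECG-like cover signal
bits = rng.integers(0, 2, size=64)                  # patient's confidential bits
loc = logistic_locations(len(ecg), len(bits))

stego = ecg.copy()
stego[loc] += 0.01 * (2 * bits - 1)                 # tiny +/- perturbation per bit

prd = 100 * np.sqrt(np.sum((ecg - stego) ** 2) / np.sum(ecg ** 2))
psnr = 10 * np.log10(np.max(ecg) ** 2 / np.mean((ecg - stego) ** 2))
print(f"PRD = {prd:.4f}%, PSNR = {psnr:.1f} dB")
```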

Keywords: chaotic maps, ECG steganography, data embedding, electrocardiogram

Procedia PDF Downloads 193
41342 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network

Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson

Abstract:

The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods of analysis of the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come with a cost. Little research has been undertaken on how to specify optimally what data to capture, transmit, process, and store at various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the required attributes and classification to take manufacturing digital data from various sources to determine the most suitable location for data processing on the edge-cloud network. The proposed classification framework will minimise overhead in terms of network bandwidth/cost and processing time of machine tool data via efficient decision making on which dataset should be processed at the ‘edge’ and what to send to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
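
A fast-and-frugal heuristic asks a short, ordered sequence of questions, each with an immediate exit. The attributes and thresholds in this sketch are assumptions for illustration, not the paper's actual classification framework:

```python
# Illustrative fast-and-frugal tree for the edge-vs-cloud placement decision.
def place_dataset(latency_critical: bool, size_mb: float, compute_heavy: bool) -> str:
    if latency_critical:          # real-time machine signals must stay local
        return "edge"
    if size_mb > 500:             # large raw datasets are costly to transmit
        return "edge"
    if compute_heavy:             # heavy analytics benefit from cloud resources
        return "cloud"
    return "cloud"                # default: archive and process remotely

print(place_dataset(latency_critical=True, size_mb=10, compute_heavy=False))   # edge
print(place_dataset(latency_critical=False, size_mb=5, compute_heavy=True))    # cloud
```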

Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0

Procedia PDF Downloads 175
41341 Denoising Transient Electromagnetic Data

Authors: Lingerew Nebere Kassie, Ping-Yu Chang, Hsin-Hua Huang, Chaw-Son Chen

Abstract:

Transient electromagnetic (TEM) data plays a crucial role in hydrogeological and environmental applications, providing valuable insights into geological structures and resistivity variations. However, the presence of noise often hinders the interpretation and reliability of these data. Our study addresses this issue by utilizing a FASTSNAP system for the TEM survey, which operates at different modes (low, medium, and high) with continuous adjustments to discretization, gain, and current. We employ a denoising approach that processes the raw data obtained from each acquisition mode to improve signal quality and enhance data reliability. We use a signal-averaging technique for each mode, increasing the signal-to-noise ratio. Additionally, we utilize wavelet transform to suppress noise further while preserving the integrity of the underlying signals. This approach significantly improves the data quality, notably suppressing severe noise at late times. The resulting denoised data exhibits a substantially improved signal-to-noise ratio, leading to increased accuracy in parameter estimation. By effectively denoising TEM data, our study contributes to a more reliable interpretation and analysis of underground structures. Moreover, the proposed denoising approach can be seamlessly integrated into existing ground-based TEM data processing workflows, facilitating the extraction of meaningful information from noisy measurements and enhancing the overall quality and reliability of the acquired data.
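
The two stages named above (signal averaging across repeats, then wavelet-based noise suppression) can be sketched with PyWavelets; the decay model, wavelet choice, and threshold are illustrative assumptions, not the FASTSNAP processing chain itself:

```python
# Sketch: stack-averaging repeated transients, then soft-threshold wavelet denoising.
import numpy as np
import pywt

rng = np.random.default_rng(4)
t = np.linspace(1e-5, 1e-2, 1024)
clean = t ** -1.5 / 1e6                                     # idealized TEM decay curve
stacks = clean + rng.normal(scale=0.02, size=(50, t.size))  # 50 noisy repeats

averaged = stacks.mean(axis=0)                       # stage 1: signal averaging

coeffs = pywt.wavedec(averaged, "db4", level=5)      # stage 2: wavelet denoising
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate (finest level)
thr = sigma * np.sqrt(2 * np.log(t.size))            # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: t.size]

print(f"residual noise std: {np.std(denoised - clean):.4f}")
```

Note how the late-time part of the decay, where the signal is smallest, is exactly where averaging and thresholding help most, matching the severe late-time noise described above.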

Keywords: data quality, signal averaging, transient electromagnetic, wavelet transform

Procedia PDF Downloads 81
41340 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya

Authors: Abdel Rahman Khider Hassan

Abstract:

This paper describes a multi-criteria geospatial model for prediction of landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of intrinsic location parameters (slope stability factors) and external landslide-triggering factors (natural and man-made). The intrinsic dataset includes lithology, slope geometry (inclination, aspect, elevation, and curvature), and land use/land cover. The triggering factors include rainfall as the climatic factor, plus the destructive effects reflected by the proximity of roads and the drainage network to areas susceptible to landslides. No published landslide study was found for this area. Thus, digital datasets of the above spatial parameters were acquired, stored, manipulated, and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in an ArcGIS 10.2.2 environment). Landslide hazard zonation is deduced by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids are overlaid to generate a map of potential landslide hazard zonation (LHZ) for the lake catchment. Of the catchment's total surface of 3200 km², most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazards, whilst about 13% (416 km²) is under high hazard. Only 1.0% (32 km²) of the catchment displays very high landslide hazard, and the remaining area (7.3%; 233.6 km²) displays a low probability of landslide hazards. This result confirms the importance of steep slope angles, lithology, vegetation cover, and slope orientation (aspect) as the major determining factors of slope failure. The information provided by the produced LHZ map could lay the basis for decision-making, as well as mitigation and applications aimed at avoiding potential losses caused by landslides in the Lake Naivasha catchment in the Kenyan Highlands.
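
The weighted grid overlay itself is simple to sketch outside ArcGIS: normalize each factor raster to 0-1, multiply by its weight, sum, and slice into hazard classes. The weights, grids, and class breaks below are invented for illustration, not the study's values:

```python
# Minimal weighted-overlay sketch of an LHZ computation (the study ran in ArcGIS).
import numpy as np

rng = np.random.default_rng(5)
shape = (200, 200)                                   # raster grid over the catchment
factors = {                                          # already normalized to 0-1
    "slope": rng.random(shape),
    "lithology": rng.random(shape),
    "landcover": rng.random(shape),
    "rainfall": rng.random(shape),
    "road_proximity": rng.random(shape),
}
weights = {"slope": 0.30, "lithology": 0.25, "landcover": 0.20,
           "rainfall": 0.15, "road_proximity": 0.10}

lhz = sum(weights[k] * factors[k] for k in factors)  # weighted grid overlay
classes = np.digitize(lhz, bins=[0.35, 0.55, 0.75])  # 0=low ... 3=very high

for i, name in enumerate(["low", "moderate", "high", "very high"]):
    print(f"{name}: {np.mean(classes == i):.1%} of catchment area")
```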

Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha

Procedia PDF Downloads 204
41339 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges

Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch

Abstract:

Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data make it possible to gain insights into the molecular differences in tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints specifically measuring a mechanism is relatively straightforward, the interpretation of big data is more complex and would benefit from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions to verify systems biology data, methods, and conclusions. Computational challenges leveraging the wisdom of the crowd allow benchmarking of methods for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been successfully conducted and confirmed that the aggregation of predictions often leads to better results than individual predictions and that methods perform best in specific contexts. Whenever the scientific question of interest does not have a gold standard, but would greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework to navigate through different applications, databases, and services in biology and medicine. We will present the results we obtained when analyzing data with our network-based method, and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods to allow for the consolidation of conclusions.

Keywords: big data interpretation, datathon, systems toxicology, verification

Procedia PDF Downloads 277
41338 Use of Machine Learning in Data Quality Assessment

Authors: Bruno Pinto Vieira, Marco Antonio Calijorne Soares, Armando Sérgio de Aguiar Filho

Abstract:

Nowadays, a massive amount of information is produced by different data sources, including mobile devices and transactional systems. In this scenario, concerns arise over how to maintain or establish data quality, which is now treated as a product to be defined, measured, analyzed, and improved to meet the needs of consumers, who use these data in decision making and company strategy. Information that reaches low levels of quality can lead to issues that consume time and money, such as missed business opportunities, inadequate decisions, and bad risk management actions. Identifying, evaluating, and selecting data sources of sufficient quality for a given need has become a costly task for users, since the sources do not provide information about their quality. Traditional data quality control methods are based on user experience or business rules, which limits performance and slows down the process with less than desirable accuracy. Using advanced machine learning algorithms, it is possible to take advantage of computational resources to overcome these challenges and add value for companies and users. In this study, machine learning is applied to data quality analysis on different datasets, comparing the performance of the techniques across the dimensions of quality assessment. As a result, we create a ranking of the approaches used, as well as a system able to carry out data quality assessment automatically.

Keywords: machine learning, data quality, quality dimension, quality assessment

Procedia PDF Downloads 146
41337 Improving Security in Healthcare Applications Using Federated Learning System With Blockchain Technology

Authors: Aofan Liu, Qianqian Tan, Burra Venkata Durga Kumar

Abstract:

Data security is of the utmost importance in healthcare, as sensitive patient information is constantly transmitted and analyzed by many different parties. Federated learning, which enables data to be evaluated locally on devices rather than transferred to a central server, has emerged as a potential solution for protecting the privacy of user information. However, federated learning alone might not be adequate to protect against data breaches and unauthorized access; in this context, blockchain technology can provide the system with extra protection. This study proposes a distributed federated learning system built on blockchain technology to enhance security in healthcare, making it possible for a wide variety of healthcare providers to collaborate on data analysis without raising concerns about data confidentiality. The technical aspects of the system, including the design and implementation of the distributed learning algorithms, consensus mechanisms, and smart contracts, are also investigated. The proposed technique is a workable solution that addresses healthcare security concerns while also fostering collaborative research and the exchange of data.
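
The federated-averaging step at the heart of such a system can be sketched as follows: each provider trains locally and only model weights leave the site. The local model, data, and round counts below are assumptions, and the blockchain layer (consensus, smart contracts) is out of scope for this fragment:

```python
# Hedged sketch of federated averaging (FedAvg-style) across three providers.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Plain logistic-regression gradient steps on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fed_avg(client_weights, client_sizes):
    """Aggregate client models weighted by their local sample counts."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(6)
global_w = np.zeros(8)
for round_ in range(10):                             # federated training rounds
    updates, sizes = [], []
    for _ in range(3):                               # three healthcare providers
        X = rng.normal(size=(100, 8))
        y = (X[:, 0] + rng.normal(scale=0.5, size=100) > 0).astype(float)
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    global_w = fed_avg(updates, sizes)
print("global model weights:", np.round(global_w, 2))
```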

Keywords: data privacy, distributed system, federated learning, machine learning

Procedia PDF Downloads 130
41336 A Non-parametric Clustering Approach for Multivariate Geostatistical Data

Authors: Francky Fouedjio

Abstract:

Multivariate geostatistical data have become omnipresent in the geosciences and pose substantial analysis challenges. One of them is the grouping of data locations into spatially contiguous clusters, so that data locations within the same cluster are more similar to each other than to those in other clusters. Spatially contiguous clusters can significantly improve interpretation by turning the resulting clusters into meaningful geographical subregions. In this paper, we develop an agglomerative hierarchical clustering approach that takes into account the spatial dependency between observations. It relies on a dissimilarity matrix built from a non-parametric kernel estimator of the spatial dependence structure of the data, and integrates existing methods to find the optimal number of clusters and to evaluate the contribution of variables to the clustering. The capability of the proposed approach to provide spatially compact, connected, and meaningful clusters is assessed using a bivariate synthetic dataset and a multivariate geochemical dataset. The proposed clustering method gives satisfactory results compared to other similar geostatistical clustering methods.
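
The general idea (not the paper's estimator) can be sketched by mixing attribute distance with spatial separation before hierarchical clustering; the Gaussian spatial kernel below is an assumed stand-in for the non-parametric estimate of spatial dependence:

```python
# Hedged sketch: spatially-aware dissimilarity + agglomerative clustering.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
coords = rng.uniform(0, 100, size=(300, 2))          # data locations
values = rng.normal(size=(300, 4))                   # multivariate observations

d_attr = squareform(pdist(values))                   # attribute dissimilarity
d_geo = squareform(pdist(coords))                    # spatial separation
spatial_weight = 1 - np.exp(-(d_geo / 30.0) ** 2)    # grows with distance (assumed kernel)
dissim = d_attr * (0.5 + 0.5 * spatial_weight)       # penalize spatially distant pairs

Z = linkage(squareform(dissim, checks=False), method="average")
clusters = fcluster(Z, t=4, criterion="maxclust")    # cut into 4 clusters
print("cluster sizes:", np.bincount(clusters)[1:])
```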

Keywords: clustering, geostatistics, multivariate data, non-parametric

Procedia PDF Downloads 476
41335 Exploring Data Leakage in EEG Based Brain-Computer Interfaces: Overfitting Challenges

Authors: Khalida Douibi, Rodrigo Balp, Solène Le Bars

Abstract:

In the medical field, applications related to human experiments frequently involve reduced sample sizes, which makes the training of machine learning models quite sensitive and therefore not very robust or generalizable. This is notably the case in Brain-Computer Interface (BCI) studies, where the sample size rarely exceeds 20 subjects or a small number of trials. To address this problem, several resampling approaches are often used during the data preparation phase, a critical step in any data science analysis process. A naive approach commonly applied by data scientists is to transform the entire database before the resampling phase. However, this can cause a model's performance to be incorrectly estimated when making predictions on unseen data. In this paper, we explore the effect of data leakage observed during our BCI experiments for device control through the real-time classification of SSVEPs (Steady State Visually Evoked Potentials). We also study potential ways to ensure optimal validation of the classifiers during the calibration phase to avoid overfitting. The results show that the scaling step is crucial for some algorithms, and that it should be applied after the resampling phase to avoid data leakage and improve results.
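
The remedy described above can be sketched directly with scikit-learn: putting the scaler inside a Pipeline ensures that, during cross-validation, it is fit on each training fold only and never sees the held-out data. (Fitting the scaler on the full dataset first is exactly the leakage pattern warned against; the dataset here is a synthetic stand-in for EEG features.)

```python
# Leakage-free scaling: the scaler is refit inside each cross-validation fold.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=80, n_features=30, random_state=0)  # small, BCI-like

leak_free = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(leak_free, X, y, cv=5)
print(f"leakage-free accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```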

Keywords: data leakage, data science, machine learning, SSVEP, BCI, overfitting

Procedia PDF Downloads 152
41334 Exploring the Capabilities of Sentinel-1A and Sentinel-2A Data for Landslide Mapping

Authors: Ismayanti Magfirah, Sartohadi Junun, Samodra Guruh

Abstract:

Landslides are among the most frequent and devastating natural disasters in Indonesia. Many studies have been conducted on this phenomenon; however, landslide inventory mapping has received little attention. Natural conditions (dense forest cover) and limited human and economic resources are among the major obstacles to building a landslide inventory in Indonesia. Considering the importance of landslide inventory data in susceptibility, hazard, and risk analysis, it is essential to generate a landslide inventory from the available resources, and the first step is to identify the landslides' locations. The availability of Sentinel-1A and Sentinel-2A data gives new insights into land-monitoring investigations: free access, high spatial resolution, and short revisit times make these data among the most widely used open-source data in landslide mapping. Sentinel-1A and Sentinel-2A data have been used broadly for landslide detection and land use/land cover mapping. This study aims to generate a landslide map by integrating Sentinel-1A and Sentinel-2A data using a change detection method. The result will be validated by field investigation to produce a preliminary landslide inventory for the study area.
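
One common change-detection variant in this spirit, sketched on mock arrays: difference pre- and post-event NDVI (from Sentinel-2 red/NIR bands) and flag strong vegetation losses as candidate landslide scars. The threshold and simulated scene are assumptions; the study's actual method may combine Sentinel-1 SAR as well:

```python
# Hedged NDVI-differencing sketch for landslide candidate detection.
import numpy as np

def ndvi(red, nir):
    return (nir - red) / (nir + red + 1e-9)

rng = np.random.default_rng(8)
red_pre = rng.uniform(0.05, 0.10, (400, 400))
nir_pre = rng.uniform(0.30, 0.50, (400, 400))
red_post, nir_post = red_pre.copy(), nir_pre.copy()
red_post[100:140, 200:260] = 0.25                    # simulated bare landslide scar
nir_post[100:140, 200:260] = 0.20

delta = ndvi(red_post, nir_post) - ndvi(red_pre, nir_pre)
candidates = delta < -0.3                            # strong NDVI drop (assumed threshold)
print(f"flagged {candidates.sum()} pixels as candidate landslide areas")
```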

Keywords: change detection method, landslide inventory mapping, Sentinel-1A, Sentinel-2A

Procedia PDF Downloads 169
41333 3D Frictionless Contact Case between the Structure of E-Bike and the Ground

Authors: Lele Zhang, Hui Leng Choo, Alexander Konyukhov, Shuguang Li

Abstract:

China is currently the world's largest producer and distributor of electric bicycles (e-bikes). The increasing number of e-bikes on the road is accompanied by rising injuries and even deaths of e-bike riders; therefore, there is a growing need to improve the safety structure of e-bikes. This 3D frictionless contact analysis is a preliminary but necessary step toward further structural design improvement of an e-bike. The contact analysis between the e-bike and the ground was carried out as follows: first, the penalty method was illustrated and derived from the simplest spring-mass system, as it is one of the most common methods to enforce the frictionless contact condition; second, ANSYS static analysis was carried out to verify finite element (FE) models with a contact pair (without friction) between the e-bike and the ground; finally, ANSYS transient analysis was used to obtain the penetration p(u) of the e-bike with respect to the ground. Results obtained from the simulation agree with theoretical estimates. In future work, a protective shell will be designed following stability criteria and added to the frame of the e-bike, and simulations of side falls of the improved safety structure will be validated against experimental data.
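
The spring-mass derivation mentioned above is easy to reproduce: when the mass penetrates the ground, a stiff penalty spring pushes back, and the equilibrium penetration p(u) shrinks as the penalty stiffness grows. Parameter values here are illustrative:

```python
# Penalty method on the simplest spring-mass contact problem.
def equilibrium_penetration(m=10.0, k=100.0, g=9.81, gap=0.5, eps=1e6):
    """Mass on a spring hanging above a rigid floor at distance `gap`."""
    u_free = m * g / k                      # displacement without any contact
    if u_free <= gap:
        return 0.0                          # no contact, no penetration
    # equilibrium with contact: k*u + eps*(u - gap) = m*g
    u = (m * g + eps * gap) / (k + eps)
    return u - gap                          # penetration p(u) > 0, but tiny

for eps in (1e3, 1e5, 1e7):
    print(f"eps={eps:.0e}: penetration = {equilibrium_penetration(eps=eps):.2e} m")
```

The output shows the penetration falling by orders of magnitude as the penalty parameter increases, which is the defining trade-off of the method: exact non-penetration is only recovered in the limit of infinite stiffness.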

Keywords: frictionless contact, penalty method, e-bike, finite element

Procedia PDF Downloads 277
41332 Education in the Constitutions: The Comparison of Turkey with Indonesia, France, Japan, South Africa, and the United States of America

Authors: Mehmet Durnali

Abstract:

The main purpose of this study is to identify, analyze, and discuss the basic principles of education and training in the constitutions, including the latest amendments, of France, Indonesia, Japan, South Africa, the United States of America, and Turkey. This research specifically aims at establishing a framework for comparing educational values, such as the right to education, the responsibilities of states and of citizens, and other issues pertaining to education in the Constitution of Turkey, with those of the other countries. Additionally, it emphasizes the meaning of education in a constitution, the reasons for references to education in constitutions, and why education is important for people, states, nations, and state organs. A qualitative analysis technique is used to accomplish the aim of this study, with maximum variation sampling. The main data source is the official organic laws of those countries. The data are examined using descriptive and content analysis methods.

Keywords: education in the constitution, education law, legal principles of education, right to education

Procedia PDF Downloads 315
41331 Multi-Criteria Inventory Classification Process Based on Logical Analysis of Data

Authors: Diana López-Soto, Soumaya Yacout, Francisco Ángel-Bello

Abstract:

Although inventories are considered stocks of money sitting on shelves, they are needed in order to secure constant and continuous production. Therefore, companies need control over the amount of inventory in order to find the balance between excess and shortage. Classifying items according to criteria such as price, usage rate, and lead time before arrival allows a company to concentrate its inventory investment according to a ranking or priority of items. This makes the decision-making process for inventory management easier and more justifiable. The purpose of this paper is to present a new approach for the classification of new items based on already existing criteria. This approach, called Logical Analysis of Data (LAD), is used here to assist the process of ABC item classification based on multiple criteria. LAD is a data mining technique based on Boolean theory that is used for pattern recognition. It has been tested in medicine, industry, credit risk analysis, and engineering with remarkable results. An application to ABC inventory classification is presented for the first time, and the results are compared with those obtained using the well-known AHP and ANN techniques. The results show that LAD achieved very good classification accuracy.

Keywords: ABC multi-criteria inventory classification, inventory management, multi-class LAD model, multi-criteria classification

Procedia PDF Downloads 880
41330 Extent of Derivative Usage, Firm Value and Risk: An Empirical Study on Pakistan Non-Financial Firms

Authors: Atia Alam

Abstract:

Growing liberalisation and intense market competition increase firms' risk exposure and induce corporations to use derivatives extensively as risk management instruments, resulting in decreased firm risk and increased value. The present study contributes to the existing literature by providing an in-depth analysis of the effect of the extent of derivative usage on firm risk and value, using panel data models and the seemingly unrelated regression technique. New evidence is added to the literature by dividing the sample data based on firms' Exchange Rate (ER) and Interest Rate (IR) exposure. The analysis examines the effect of the extent of derivative usage on firm risk and value and its variation with respect to ER and IR exposure. The sample consists of 166 Pakistani firms listed on the Pakistan stock exchange for the period 2004-2010. Results show that extensive usage of derivative instruments significantly increases firm value and reduces firm risk. Furthermore, the analysis shows that Pakistani corporations with higher exchange rate exposure, with respect to foreign sales, and higher interest rate exposure, on the basis of industry-adjusted leverage, have higher firm value and lower risk. Findings from seemingly unrelated regression provide robustness to the results obtained through panel data analysis. The study also highlights the role of derivative usage as a risk management instrument under high and low ER and IR risk, and helps practitioners understand how the value-increasing effect of derivative usage varies with the intensity of a firm's risk exposure.

Keywords: extent of derivative usage, firm value, risk, Pakistan, non-financial firms

Procedia PDF Downloads 355
41329 Adopting Structured Mini Writing Retreats as a Tool for Undergraduate Researchers

Authors: Clare Cunningham

Abstract:

Whilst there is a strong global research base on the benefits of structured writing retreats and similar provisions, such as Shut Up and Write events, for academic staff and postgraduate researchers, very little has been published about the worth of such events for undergraduate students. This is despite the fact that, internationally, undergraduate student researchers experience similar pressures, distractions and feelings towards writing as those who are at more senior levels within the academy. This paper reports on a mixed-methods study with cohorts of third-year undergraduate students over the course of four academic years. This involved a range of research instruments adopted over the four years of the study. They include the administration of four questionnaires across three academic years, a collection of ethnographic recordings in the second year, and the collation of reflective journal entries and evaluations from all four years. The final two years of data collection took place during the period of Covid-19 restrictions when writing retreats moved to the virtual space which adds an additional dimension of interest to the analysis. The analysis involved the collation of quantitative questionnaire data to observe patterns in expressions of attitudes towards writing. Qualitative data were analysed thematically and used to corroborate and support the quantitative data when appropriate. The resulting data confirmed that one of the biggest challenges for undergraduate students mirrors those reported in the findings of studies focused on more experienced researchers. This is not surprising, especially given the number of undergraduate students who now work alongside their studies, as well as the increasing number who have caring responsibilities, but it has, as yet, been under-reported. The data showed that the groups of writing retreat participants all had very positive experiences, with accountability, a sense of community and procrastination avoidance some of the key aspects. The analysis revealed the sometimes transformative power of these events for a number of these students in terms of changing the way they viewed writing and themselves as writers. The data presented in this talk will support the proposal that retreats should much more widely be offered to undergraduate students across the world.

Keywords: academic writing, students, undergraduates, writing retreat

Procedia PDF Downloads 197
41328 Development of Automatic Laser Scanning Measurement Instrument

Authors: Chien-Hung Liu, Yu-Fen Chen

Abstract:

This study used a triangulation laser probe and a three-axis mobile platform for surface measurement, and developed software for real-time statistical analysis of the measured data. The system integration program works as follows: the laser probe performs non-contact measurement by scattering or reflection, the captured signals are transferred to the computer through RS-232, and RS-485 is used to control the three-axis platform for wide-range measurement. The data captured by the laser probe are formed into a 3D surface. The optical measurement application was constructed using a visual programming language. First, the signals are transmitted to the computer through RS-232/RS-485; then they are stored and recorded in a graphical interface in real time. The program analyzes the incoming messages, presents appropriate graphs, and processes the data to provide users with a friendly graphical interface, data-processing state monitoring, and a graphical indication of whether the current data are normal. The major functions of the measurement system developed in this study are thickness measurement, SPC, surface smoothness analysis, and trend line calculation. A result report can be generated and printed promptly. This study successfully measured different heights and surfaces, performed on-line data analysis and processing effectively, and developed a man-machine interface for users to operate.

Keywords: laser probe, non-contact measurement, triangulation measurement principle, statistical process control, LabVIEW

Procedia PDF Downloads 359
41327 A New Approach for Improving Accuracy of Multi Label Stream Data

Authors: Kunal Shah, Swati Patel

Abstract:

Many real-world problems involve data that can be considered as multi-label data streams. Efficient methods exist for multi-label classification in non-streaming scenarios. However, learning in evolving streaming scenarios is more challenging, as the learner must be able to adapt to change using limited time and memory. Classification is used to predict the class of an unseen instance as accurately as possible. Multi-label classification is a variant of single-label classification in which a set of labels is associated with a single instance; it is used by modern applications such as text classification, functional genomics, image classification, and music categorization. This paper introduces the task of multi-label classification, methods for multi-label classification, and evaluation measures for multi-label classification. A comparative analysis of multi-label classification methods was also done, first on the basis of theoretical study and then through simulation on various data sets.
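
Binary relevance, the baseline named in the keywords, trains one independent binary classifier per label. The static scikit-learn sketch below illustrates the idea on synthetic data; stream-specific concerns (concept drift, windowing) are not modelled here:

```python
# Binary relevance sketch: one classifier per label via MultiOutputClassifier.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import hamming_loss

X, Y = make_multilabel_classification(n_samples=1000, n_classes=5, n_labels=3,
                                      random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
print(f"Hamming loss: {hamming_loss(Y_te, br.predict(X_te)):.3f}")
```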

Keywords: binary relevance, concept drift, data stream mining, MLSC, multiple window with buffer

Procedia PDF Downloads 583
41326 Security Analysis and Implementation of Achterbahn-128 for Images Encryption

Authors: Aissa Belmeguenai, Oulaya Berrak, Khaled Mansouri

Abstract:

This work presents an efficient implementation and security evaluation of the Achterbahn-128 keystream generator for image encryption and decryption. The implementation for this simulated project is written in MATLAB 7.5. First, two different original images are used to validate the proposed design; the developed program transforms the original image data into a digital image file. Finally, the program is applied to encrypt and decrypt the image data. Several tests are done to prove the design's performance, including visual tests and a security evaluation.

Keywords: Achterbahn-128, keystream generator, stream cipher, image encryption, security analysis

Procedia PDF Downloads 312
41325 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data

Authors: Martin Pellon Consunji

Abstract:

Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is an ever-growing demand for automated trading tools, such as bots, in order to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to gain a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be attributed to the inherent advantages that bots have over humans: processing large amounts of data, freedom from emotions of fear or greed, and predicting market prices using past data and artificial intelligence. Hence, a growing number of approaches have been brought forward to tackle this task. However, these approaches share a general limitation: limited historical data do not always determine the future, and many market participants are still emotion-driven human traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most well-established markets. Because of this, some human traders have gone back to tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method that uses neuroevolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data, in order to obtain a more accurate forecast of future market behavior and account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies. The approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario, a real Bitcoin market live trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results during a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
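
A toy evolutionary loop in the spirit of this approach: a population of linear policies over technical and sentiment features is mutated and selected on simulated-backtest profit. Everything below (features, data, fitness) is a synthetic stand-in, not the system's actual inputs:

```python
# Heavily simplified sketch of evolving trading policies on synthetic data.
import numpy as np

rng = np.random.default_rng(9)
T, F = 500, 4                                         # timesteps, features
features = rng.normal(size=(T, F))                    # e.g. RSI, MACD, sentiment...
returns = 0.02 * features[:, 0] + rng.normal(scale=0.01, size=T)  # hidden signal

def fitness(w):
    position = np.tanh(features @ w)                  # position in [-1, 1]
    return float(np.sum(position * returns))          # backtested profit

pop = rng.normal(size=(50, F))                        # initial population
for gen in range(30):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]             # keep the 10 best
    children = elite[rng.integers(0, 10, size=40)] \
        + rng.normal(scale=0.1, size=(40, F))         # mutate copies of the elite
    pop = np.vstack([elite, children])                # elitism + mutation

best = pop[np.argmax([fitness(w) for w in pop])]
print(f"best evolved profit: {fitness(best):.3f}")
```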

Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms

Procedia PDF Downloads 122
41324 Review of Different Machine Learning Algorithms

Authors: Syed Romat Ali Shah, Bilal Shoaib, Saleem Akhtar, Munib Ahmad, Shahan Sadiqui

Abstract:

Classification is a data mining technique based on Machine Learning (ML) algorithms. It is used to classify individual items in a body of information into a set of predefined classes or groups. Web mining is also a part of this family of data mining methods. The main purpose of this paper is to analyze and compare the performance of the Naïve Bayes algorithm, Decision Tree, K-Nearest Neighbor (KNN), Artificial Neural Network (ANN), and Support Vector Machine (SVM). The paper discusses these ML algorithms with their advantages and disadvantages, and also identifies open research issues.

Keywords: data mining, web mining, classification, ML algorithms

Procedia PDF Downloads 301
41323 Trends of Conservation and Development in Mexican Biosphere Reserves: Spatial Analysis and Linear Mixed Model

Authors: Cecilia Sosa, Fernanda Figueroa, Leonardo Calzada

Abstract:

Biosphere reserves (BR) are considered the main strategy for biodiversity and ecosystem conservation. Mexican BR are mainly inhabited by rural communities that strongly depend on forests and their resources. Even though the dual objective of conservation and development has been sought in BR, land cover change is a common process in these areas, while most rural communities remain highly marginalized, partly as a result of the restrictions conservation imposes on access to and use of resources. Achieving ecosystem conservation and social development thus faces serious challenges. Factors such as financial support for development projects (public or private), environmental conditions, infrastructure, and regional economic conditions might influence both land use change and wellbeing. Examining the temporal trends of conservation and development in BR is central to evaluating the outcomes of these conservation strategies. In this study, we analyzed changes in primary vegetation cover (as a proxy for conservation) and the marginalization index (as a proxy for development) in Mexican BR (2000-2015); we also explored the influence of various factors on these trends, such as financial support for conservation-development projects (public or private), geographical distribution across ecoregions (as a proxy for shared environmental conditions), and across economic zones (as a proxy for regional economic conditions). We developed a spatial analysis at the municipal scale (2,458 municipalities nationwide) in ArcGIS to obtain road densities, geographical distribution across ecoregions and economic zones, the financial support received, and the percent of each municipality's area under protection by protected areas and, particularly, by BR. Municipalities with less than 25% of their area under protection were regarded as part of the protected area. We obtained marginalization indexes for all municipalities and, using MODIS in Google Earth Engine, the number of pixels covered by primary vegetation. We used a linear mixed model in RStudio for the analysis. We found a positive correlation between the marginalization index and the percent of primary vegetation cover per year (r=0.49-0.5); i.e., municipalities with higher marginalization also show a higher percent of primary vegetation cover. Municipalities with a larger area under protection have more development projects (r=0.46), and some environmental conditions were relevant to the percent of vegetation cover. Time, economic zone, and marginalization index were all important; 2005 in particular was a year in which both marginalization and deforestation decreased. Road densities and financial support for conservation-development projects were irrelevant as factors in the general correlation. Marginalization is still shaped by the conservation strategies applied in BR, even though this management category takes both conservation and the development of local communities as its objectives. Our results suggest that road densities and support for conservation-development projects have not been factors in poverty alleviation. As better conservation is being attained in the most impoverished areas, we face the dilemma of how to improve wellbeing in rural communities under conservation, since current strategies have not been able to move beyond the conservation-development contraposition.
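
The study fit its linear mixed model in RStudio; an equivalent structure can be sketched in Python's statsmodels on mock data, with vegetation cover as the response, marginalization and year as fixed effects, and ecoregion as the grouping (random) factor. All variable names and values are placeholders:

```python
# Hedged sketch of a linear mixed model of the kind described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 400
df = pd.DataFrame({
    "ecoregion": rng.integers(0, 8, n),               # grouping factor
    "year": rng.integers(2000, 2016, n),
    "marginalization": rng.normal(size=n),
})
df["veg_pct"] = (40 + 8 * df["marginalization"]       # higher marginalization,
                 - 0.3 * (df["year"] - 2000)          # more vegetation; slow loss
                 + rng.normal(scale=5, size=n))

model = smf.mixedlm("veg_pct ~ marginalization + year", df,
                    groups=df["ecoregion"]).fit()     # random intercept per ecoregion
print(model.summary())
```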

Keywords: deforestation, local development, marginalization, protected areas

Procedia PDF Downloads 134
41322 The Potential Threat of Cyberterrorism to the National Security: Theoretical Framework

Authors: Abdulrahman S. Alqahtani

Abstract:

The revolution in computing and networks could transform terrorism in the same way that it has brought about changes in other aspects of life. The modern technological era has confronted countries with a new set of security challenges. Many states and potential adversaries have the potential and capacity in cyberspace to carry out cyber-attacks in the future; some of them are currently conducting surveillance, gathering and analyzing technical information, and mapping the networks, nodes, and infrastructure of opponents, all of which may be exploited in future conflicts. This poster presents the results of a quantitative study (survey) testing the validity of the proposed theoretical framework for cyberterrorist threats. The framework will help develop an in-depth understanding of these new digital terrorist threats. It may also serve as a practical guide for managers and technicians in critical infrastructure to understand and assess the threats they face, and could form the foundation for building a national strategy to counter cyberterrorism. The paper first provides basic information about the data. To purify the data, reliability analysis and exploratory factor analysis, as well as confirmatory factor analysis (CFA), were performed. Then, Structural Equation Modelling (SEM) was utilised to test the final model of the theory and to assess the overall goodness-of-fit between the proposed model and the collected data set.

Keywords: cyberterrorism, critical infrastructure, national security, theoretical framework, terrorism

Procedia PDF Downloads 402
41321 Frequent Item Set Mining for Big Data Using MapReduce Framework

Authors: Tamanna Jethava, Rahul Joshi

Abstract:

Frequent item sets play an essential role in many data mining tasks that try to find interesting patterns in databases. Typically, a frequent item set is a set of items that frequently appear together in a transaction dataset. Several mining algorithms are used for frequent item set mining, yet most do not scale to the type of data we are presented with today, so-called 'Big Data': collections of very large data sets. Our approach is to perform frequent item set mining over large datasets in a scalable and speedy way. MapReduce, along with HDFS, is used to find frequent item sets from Big Data on a large cluster. This paper focuses on using pre-processing and a mining algorithm as a hybrid approach for Big Data on the Hadoop platform.
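
The MapReduce flow for frequent-pair mining can be simulated on a single machine: mappers emit (item-pair, 1) per transaction, a shuffle groups keys, and reducers sum the counts. On a real cluster the same map/reduce logic runs over HDFS via Hadoop; the transactions and support threshold below are illustrative:

```python
# Single-machine simulation of MapReduce frequent-pair counting.
from collections import defaultdict
from itertools import combinations

transactions = [
    {"bread", "milk"}, {"bread", "diaper", "beer"}, {"milk", "diaper", "beer"},
    {"bread", "milk", "diaper"}, {"bread", "milk", "beer"},
]

def mapper(transaction):
    for pair in combinations(sorted(transaction), 2):   # candidate 2-itemsets
        yield pair, 1

def reducer(key, counts):
    return key, sum(counts)

shuffled = defaultdict(list)                            # the shuffle/sort phase
for t in transactions:
    for key, value in mapper(t):
        shuffled[key].append(value)

min_support = 3
frequent = dict(reducer(k, v) for k, v in shuffled.items())
print({k: c for k, c in frequent.items() if c >= min_support})
```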

Keywords: frequent item set mining, big data, Hadoop, MapReduce

Procedia PDF Downloads 432
41320 Analysis of Production Forecasting in Unconventional Gas Resources Development Using Machine Learning and Data-Driven Approach

Authors: Dongkwon Han, Sangho Kim, Sunil Kwon

Abstract:

Unconventional gas resources have dramatically changed the future energy landscape. Unlike conventional gas resources, the key challenge in unconventional gas is that production forecasting requires advanced approaches, owing to the uncertainty and complexity of fluid flow. In this study, an artificial neural network (ANN) model integrating machine learning and a data-driven approach was developed to predict productivity in shale gas. A database of 129 wells from the Eagle Ford shale basin was used for training and testing the ANN model. Input data related to hydraulic fracturing, well completion, and shale gas productivity were selected, and the output is cumulative production. The performance of the ANN using all data sets, clustering, and variable importance (VI) models was compared using the mean absolute percentage error (MAPE). The MAPE for the ANN model using all data sets was 44.22%; with clustering it was 10.08% (cluster 1), 5.26% (cluster 2), and 6.35% (cluster 3); and with variable importance it was 32.23% (ANN VI) and 23.19% (SVM VI). The results showed that the pre-trained ANN model provides more accurate results than the ANN model using all data sets.
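
The cluster-then-train idea behind the improved per-cluster MAPE figures can be sketched as follows: k-means groups similar wells, then a small network is fit per cluster. The data are synthetic placeholders for the 129 Eagle Ford wells:

```python
# Sketch of cluster-wise ANN training with MAPE evaluation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def mape(y_true, y_pred):
    return 100 * np.mean(np.abs((y_true - y_pred) / y_true))

rng = np.random.default_rng(11)
X = rng.uniform(size=(129, 6))                        # frac/completion parameters
y = 50 + 100 * X[:, 0] * X[:, 1] + rng.normal(scale=5, size=129)  # cum. production

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for c in range(3):
    Xc, yc = X[clusters == c], y[clusters == c]
    X_tr, X_te, y_tr, y_te = train_test_split(Xc, yc, test_size=0.3, random_state=0)
    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                       random_state=0).fit(X_tr, y_tr)
    print(f"cluster {c + 1}: MAPE = {mape(y_te, ann.predict(X_te)):.2f}%")
```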

Keywords: unconventional gas, artificial neural network, machine learning, clustering, variable importance

Procedia PDF Downloads 194
41319 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning

Authors: Saahith M. S., Sivakami R.

Abstract:

In the realm of football analytics, particularly focusing on predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, conducting data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis within this study will involve delving into feature significance using methodologies like Select Best and Recursive Feature Elimination (RFE) to pinpoint pertinent attributes for predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), will be explored to develop predictive models. The evaluation of each model's performance utilizing metrics such as Mean Squared Error (MSE) and R-squared will be executed to gauge their efficacy in predicting player performance. Furthermore, this investigation will encompass a top player analysis to recognize the top-performing players based on the anticipated overall performance scores. Nationality analysis will entail scrutinizing the player distribution based on nationality and investigating potential correlations between nationality and player performance. Positional analysis will concentrate on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis will evaluate the influence of age on player performance and identify any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
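
The feature-selection-plus-model step described above can be sketched with scikit-learn: RFE keeps the most relevant attributes, then a random forest predicts the overall rating, evaluated with MSE and R². The attribute names and data are invented placeholders for a player dataset:

```python
# Hedged sketch: RFE feature selection + random forest rating prediction.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(12)
cols = ["pace", "shooting", "passing", "dribbling", "defending", "physic", "age"]
X = pd.DataFrame(rng.uniform(40, 95, size=(1000, len(cols))), columns=cols)
y = (0.35 * X["passing"] + 0.35 * X["dribbling"] + 0.2 * X["shooting"]
     + 0.1 * X["pace"] + rng.normal(scale=2, size=1000))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rfe = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
          n_features_to_select=4).fit(X_tr, y_tr)
print("selected:", list(X.columns[rfe.support_]))

pred = rfe.predict(X_te)                               # refit model on kept features
print(f"MSE = {mean_squared_error(y_te, pred):.2f}, R^2 = {r2_score(y_te, pred):.2f}")
```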

Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis

Procedia PDF Downloads 36
41318 The Benefits of Using Hijab Syar'i against Female Sexual Abuse

Authors: Catur Sigit Hartanto, Anggraeni Anisa Wara Rahmayanti

Abstract:

Objective: This research aims to assess the benefits of wearing hijab syar'i against female sexual abuse. Method: This is a quantitative study. The population is students at Semarang State University who wear hijab syar'i, sampled using the conformity method. Data were collected via a questionnaire administered to a sample of 30 female students and analyzed using descriptive analysis. Result: Wearing hijab syar'i provides benefits in preventing and minimizing female sexual abuse. Limitation: Respondents were limited to only 30 people.

Keywords: hijab syar’i, female, sexual abuse, student of Semarang State University

Procedia PDF Downloads 281