Search results for: process data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 34854

34734 Robust and Dedicated Hybrid Cloud Approach for Secure Authorized Deduplication

Authors: Aishwarya Shekhar, Himanshu Sharma

Abstract:

Data deduplication is one of the most important data compression techniques for eliminating duplicate copies of repeated data, and it has been widely used in cloud storage to reduce storage space and save bandwidth. In this process, duplicate data is expunged, leaving only a single instance of the data to be stored, although an index of each data item is still maintained. Data deduplication is an approach for minimizing the amount of storage space an organization requires to retain its data. In most companies, the storage systems carry identical copies of numerous pieces of data. Deduplication eliminates these additional copies by saving just one copy of the data and replacing the other copies with pointers that lead back to the primary copy. To avoid this duplication of data and to preserve confidentiality in the cloud, we apply the concept of a hybrid cloud, which is a fusion of at least one public and one private cloud. As a proof of concept, we implement Java code that provides security as well as removing duplicated data from the cloud.
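To make the pointer mechanism concrete, here is a minimal sketch of block-level deduplication in Python (an illustration, not the authors' Java implementation): each unique block is stored once under its SHA-256 digest, and each logical file keeps a list of digests as pointers back to the primary copy.

```python
import hashlib

class DedupStore:
    """Content-addressed store: one physical copy per unique block;
    logical files keep pointers (hashes) back to the primary copy."""

    def __init__(self):
        self.blocks = {}   # SHA-256 digest -> data (single instance)
        self.index = {}    # file name -> list of digests (pointers)

    def put(self, name, data, block_size=4096):
        pointers = []
        for i in range(0, len(data), block_size):
            chunk = data[i:i + block_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(digest, chunk)  # store only if new
            pointers.append(digest)
        self.index[name] = pointers

    def get(self, name):
        return b"".join(self.blocks[d] for d in self.index[name])

store = DedupStore()
store.put("a.txt", b"hello world" * 1000)
store.put("b.txt", b"hello world" * 1000)   # duplicate: no new blocks stored
assert store.get("b.txt") == b"hello world" * 1000
print(len(store.blocks), "unique blocks for 2 logical files")
```

Writing the duplicate file adds only pointers, not blocks, which is exactly the storage saving the abstract describes.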

Keywords: confidentiality, deduplication, data compression, hybridity of cloud

Procedia PDF Downloads 356
34733 Analysis of Different Classification Techniques Using WEKA for Diabetic Disease

Authors: Usama Ahmed

Abstract:

Data mining is the process of analyzing data in order to extract helpful information, and it is a field of research that solves various types of problems. In data mining, classification is an important technique for classifying different kinds of data. Diabetes is one of the most common diseases. This paper implements different classification techniques using the Waikato Environment for Knowledge Analysis (WEKA) on a diabetes dataset and determines which algorithm works best. The best classification algorithm on the diabetic data is Naïve Bayes, which achieves an accuracy of 76.31% and takes 0.06 seconds to build the model.
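The same kind of experiment can be reproduced outside WEKA; the sketch below uses Python and scikit-learn (an assumed equivalent, not the authors' setup), with random placeholder data standing in for the diabetes dataset.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# X: feature matrix (e.g., glucose, BMI, age, ...); y: diabetic or not.
# Random placeholder data stands in for the real diabetes dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(768, 8))
y = rng.integers(0, 2, size=768)

model = GaussianNB()                          # Naive Bayes classifier
scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.2%}")
```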

Keywords: data mining, classification, diabetes, WEKA

Procedia PDF Downloads 122
34732 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain's subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality's algorithm. The subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like 'time is relative,' but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers, and these processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge the observers, the thalamus, the fastest of the processes, maintains a synchronous state and entangles the different components of the brain's physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain's linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain's synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds the eyes dump their sensory data into the thalamus, and the thalamus performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation, basically a frozen moment in time (flat 4D). That single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all simply occurs in the time available, because the other observation times are slower than the thalamic measurement time. For life to exist in the physical universe, a linear measurement process is required; it just hides by operating at a faster time relativity. What is interesting is that time dilation is not the problem; it is the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 102
34731 An AK-Chart for the Non-Normal Data

Authors: Chia-Hau Liu, Tai-Yue Wang

Abstract:

Traditional multivariate control charts assume that measurements from manufacturing processes follow a multivariate normal distribution. However, this assumption may not hold, or may be difficult to verify, because in practice not all measurements from manufacturing processes are normally distributed. This study develops a new multivariate control chart for monitoring processes with non-normal data. We propose a mechanism that integrates the one-class classification method and an adaptive technique. The adaptive technique is used to improve the sensitivity of one-class classification to small shifts in statistical process control. In addition, the design provides an easy way to allocate the type I error rate, so it is easier to implement. Finally, a simulation study and real data from industry are used to demonstrate the effectiveness of the proposed control charts.
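The basic monitoring idea can be sketched with a generic one-class classifier (an illustration of one-class SPC, not the authors' AK-chart): train on in-control data, then flag new observations that fall outside the learned region. In scikit-learn's OneClassSVM, the nu parameter bounds the fraction of training points flagged, which gives one simple handle on the type I error.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Phase I: multivariate in-control measurements, non-normal (lognormal here)
in_control = rng.lognormal(mean=0.0, sigma=0.3, size=(500, 3))

# nu bounds the fraction of training points flagged as outliers,
# i.e., it roughly allocates the chart's type I error rate
chart = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(in_control)

# Phase II: monitor new observations; -1 marks an out-of-control signal
new_obs = rng.lognormal(mean=0.2, sigma=0.3, size=(20, 3))  # small shift
signals = chart.predict(new_obs)
print("out-of-control points:", int((signals == -1).sum()))
```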

Keywords: multivariate control chart, statistical process control, one-class classification method, non-normal data

Procedia PDF Downloads 394
34730 Improved K-Means Clustering Algorithm Using RHadoop with Combiner

Authors: Ji Eun Shin, Dong Hoon Lim

Abstract:

Data clustering is a common technique used in data analysis, with applications in artificial intelligence, pattern recognition, economics, ecology, psychiatry and marketing. K-means clustering is a well-known clustering algorithm that aims to cluster a set of data points into a predefined number of clusters. In this paper, we implement the K-means algorithm on the MapReduce framework with RHadoop to make the clustering method applicable to large-scale data. RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. The main idea is to introduce a combiner as a function of our map output to decrease the amount of data that must be processed by the reducers. The experimental results demonstrate that the K-means algorithm using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also show that our K-means algorithm using RHadoop with a combiner is faster than the regular algorithm without a combiner as the size of the data set increases.
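The combiner idea is independent of RHadoop. A minimal Python sketch of one MapReduce-style K-means iteration (illustrative, not the authors' R code) shows why it helps: each mapper locally combines its points into per-cluster (sum, count) records, so only a handful of small records, rather than every point, reach the reducer.

```python
import numpy as np
from collections import defaultdict

def assign(point, centroids):
    """Map step: index of the nearest centroid for one point."""
    return int(np.argmin(((centroids - point) ** 2).sum(axis=1)))

def kmeans_iteration(splits, centroids):
    """One MapReduce-style K-means pass with a combiner."""
    partials = []
    for split in splits:                       # each split = one mapper
        acc = defaultdict(lambda: [0.0, 0])    # combiner: cluster -> [sum, n]
        for p in split:
            k = assign(p, centroids)           # map: emit (cluster, point)
            acc[k][0] += p                     # combine locally, so only
            acc[k][1] += 1                     # per-cluster sums leave the node
        partials.append(acc)
    new = centroids.copy()                     # reduce: merge combined records
    for k in range(len(centroids)):
        s = sum(a[k][0] for a in partials if k in a)
        n = sum(a[k][1] for a in partials if k in a)
        if n:
            new[k] = s / n
    return new

rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 2))
centroids = points[:3].copy()
for _ in range(10):                            # repeat until converged
    centroids = kmeans_iteration(np.array_split(points, 4), centroids)
```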

Keywords: big data, combiner, K-means clustering, RHadoop

Procedia PDF Downloads 401
34729 [Keynote Talk]: Machining Parameters Optimization with Genetic Algorithm

Authors: Dejan Tanikić, Miodrag Manić, Jelena Đoković, Saša Kalinović

Abstract:

This paper deals with the determination of the optimum machining parameters, according to the measured and modelled data of the cutting temperature and surface roughness, during the turning of AISI 4140 steel. High cutting temperatures are unwanted occurrences in the metal cutting process, as they negatively impact the quality of the machined part. The machining experiments were performed using different cutting regimes (cutting speed, feed rate and depth of cut) and different values of workpiece hardness, which yield different values of the measured cutting temperature as well as the measured surface roughness. The temperature and surface roughness data were then modelled using Response Surface Methodology (RSM). The obtained RSM models are used in the process of optimizing the cutting regimes with a Genetic Algorithm (GA) tool, which enables the metal cutting process to run under optimum conditions.
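The coupling of an RSM model and a GA can be sketched as follows (a minimal Python illustration; the second-order model coefficients and the bounds are made up, not the fitted values from the paper): the GA searches the cutting-regime space [speed, feed, depth] for the point that minimizes the RSM-predicted temperature.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical second-order RSM model of cutting temperature as a function
# of x = [cutting speed, feed rate, depth of cut]
def rsm_temperature(x):
    b = np.array([350.0, 1.2, 90.0, 40.0])
    return b[0] + b[1]*x[0] + b[2]*x[1] + b[3]*x[2] \
           + 0.002*x[0]**2 + 15.0*x[1]*x[2]

lo = np.array([100.0, 0.05, 0.5])   # lower bounds of the cutting regime
hi = np.array([300.0, 0.30, 2.0])   # upper bounds

pop = rng.uniform(lo, hi, size=(40, 3))
for _ in range(100):
    fit = np.array([rsm_temperature(x) for x in pop])
    parents = pop[np.argsort(fit)[:20]]                 # truncation selection
    idx = rng.integers(0, 20, size=(40, 2))
    alpha = rng.uniform(size=(40, 1))
    children = alpha*parents[idx[:, 0]] + (1-alpha)*parents[idx[:, 1]]  # blend crossover
    children += rng.normal(scale=0.02*(hi-lo), size=children.shape)     # mutation
    pop = np.clip(children, lo, hi)

best = pop[np.argmin([rsm_temperature(x) for x in pop])]
print("optimum regime [v, f, d]:", np.round(best, 3))
```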

Keywords: genetic algorithms, machining parameters, response surface methodology, turning process

Procedia PDF Downloads 155
34728 Students with Disabilities in Today's College Classrooms

Authors: Ashwini Tiwari

Abstract:

This qualitative case study examines students' perceptions of accommodations in higher education institutions. The data were collected from focus groups and one-to-one interviews with 15 students enrolled in a 4-year state university in the southern United States. The data were analyzed using a thematic analysis process. The findings suggest that students perceived that their instructors were willing to accommodate their educational needs. However, the participants expressed concerns about the lack of a formal labeling process in higher education settings, creating a barrier to receiving adequate services to gain meaningful educational experiences.

Keywords: disability, accommodation, services, higher education

Procedia PDF Downloads 66
34727 Integrating Time-Series and High-Spatial Remote Sensing Data Based on Multilevel Decision Fusion

Authors: Xudong Guan, Ainong Li, Gaohuan Liu, Chong Huang, Wei Zhao

Abstract:

Due to the low spatial resolution of MODIS data, the accuracy of extracting small patches in landscapes with a high degree of fragmentation is greatly limited. To this end, this study combines Landsat data, with its higher spatial resolution, and MODIS data, with its higher temporal resolution, for decision-level fusion. Considering the importance of land heterogeneity in the fusion process, it is incorporated as a weighting factor used to linearly weight the Landsat classification result and the MODIS classification result. Three levels were used to complete the data fusion process: the MODIS pixel level, the Landsat pixel level, and an object level that connects the two. The multilevel decision fusion scheme was tested at two sites in the lower Mekong basin. A comparison test showed that the classification accuracy was improved over the single-data-source classification results in terms of overall accuracy. The method was also compared with the two-level combination results and a weighted-sum decision rule-based approach. The decision fusion scheme is extensible to other multi-resolution data decision fusion applications.
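The linear weighting step can be illustrated with a short sketch (a schematic Python illustration under assumed inputs, not the authors' implementation): given per-pixel class-probability maps from the two classifiers and a heterogeneity map in [0, 1], the fused label favours the Landsat result where the landscape is fragmented.

```python
import numpy as np

def fuse(p_landsat, p_modis, heterogeneity):
    """Linearly weight two per-pixel class-probability maps.

    p_landsat, p_modis: arrays of shape (rows, cols, n_classes)
    heterogeneity: array in [0, 1]; 1 = highly fragmented landscape,
    where the higher-spatial-resolution Landsat result should dominate.
    """
    w = heterogeneity[..., None]          # broadcast over the class axis
    fused = w * p_landsat + (1.0 - w) * p_modis
    return fused.argmax(axis=-1)          # fused class label per pixel

rng = np.random.default_rng(3)
pl = rng.dirichlet(np.ones(4), size=(10, 10))   # placeholder probability maps
pm = rng.dirichlet(np.ones(4), size=(10, 10))
het = rng.uniform(size=(10, 10))
labels = fuse(pl, pm, het)
```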

Keywords: image classification, decision fusion, multi-temporal, remote sensing

Procedia PDF Downloads 99
34726 The Impact of Management Competency, Project Team, and Process Design to Corporate Performance through Implementing the Self-Development ERP

Authors: Zeplin Jiwa Husada Tarigan, Sautma Ronni Basana, Widjojo Suprapto

Abstract:

Manufacturing companies in East Java develop their own ERP systems or alter ERP systems developed by other companies to suit their needs. To make their own system, the companies mostly assign several employees from various departments to create a project team; the employees come from the departments that are going to utilize the ERP system as the integrated data source. The project team directs the making of the ERP system from the preparation stage until the going-live implementation process. In designing the business process, top management works together with the project team until the project is accomplished. The completion of an ERP project depends on the project itself, the strategy chosen to complete it, the selected work method, the measurement system used to monitor the project, the evaluation system of the project, and, in the end, the declaration of 'going live' of the ERP project. There is an increase in business performance for companies that have implemented information technology or ERP, as they manage to integrate all management functions within their companies. To investigate this, questionnaires were distributed to 100 manufacturing companies, and 90 were returned; however, only 46 of these companies develop their own ERP system, so the effective response rate is 46%. The result of the data analysis using PLS shows that management competency impacts the project team and the process design. The process design is adjusted to the real process in order to implement the ERP, but it does not directly impact business performance. The implementation of ERP brings positive impacts to the company's business performance.

Keywords: management competency, project team, process design, ERP implementation, business performance

Procedia PDF Downloads 184
34725 Developing Indicators in System Mapping Process Through Science-Based Visual Tools

Authors: Cristian Matti, Valerie Fowles, Eva Enyedi, Piotr Pogorzelski

Abstract:

The system mapping process can be defined as a knowledge service in which a team of facilitators, experts and practitioners facilitates a guided conversation, enables the exchange of information and supports an iterative curation process. System mapping processes rely on science-based tools to introduce and simplify a variety of components and concepts of socio-technical systems through metaphors while facilitating an interactive dialogue process to enable the design of co-created maps. System maps then work as 'artifacts' that provide information and focus the conversation on specific areas around the defined challenge and the related decision-making process. Knowledge management facilitates the curation of the data gathered during the system mapping sessions through practices of documentation and subsequent knowledge co-production, for which common practices from data science are applied to identify new patterns, hidden insights, recurrent loops and unexpected elements. This study presents empirical evidence on the application of these techniques to explore the mechanisms by which visual tools provide guiding principles to portray system components, key variables and types of data through the lens of climate change. In addition, data science facilitates the structuring of elements that allow the analysis of layers of information through affinity and clustering analysis and, therefore, the development of simple indicators for supporting the decision-making process. This paper addresses methodological and empirical elements of the horizontal learning process that integrates system mapping through visual tools, interpretation, cognitive transformation and analysis. The process is designed to introduce practitioners to simple, iterative and inclusive processes that create actionable knowledge and enable a shared understanding of the system in which they are embedded.

Keywords: indicators, knowledge management, system mapping, visual tools

Procedia PDF Downloads 161
34724 Router 1X3 - RTL Design and Verification

Authors: Nidhi Gopal

Abstract:

Routing is the process of moving a packet of data from source to destination; it enables messages to pass from one computer to another and eventually reach the target machine. A router is a networking device that forwards data packets between computer networks. It is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device and its top-level architecture, and how the various sub-modules of the router, i.e. the register, FIFO, FSM and synchronizer, are synthesized, simulated and finally connected to the top module.

Keywords: data packets, networking, router, routing

Procedia PDF Downloads 760
34723 Potential of Mineral Composition Reconstruction for Monitoring the Performance of an Iron Ore Concentration Plant

Authors: Maryam Sadeghi, Claude Bazin, Daniel Hodouin, Laura Perez Barnuevo

Abstract:

The performance of a separation process is usually evaluated using performance indices calculated from elemental assays readily available from the chemical analysis laboratory. However, the separation process performance is essentially related to the properties of the minerals that carry the elements, not those of the elements themselves. Since elements or metals can be carried by both valuable and gangue minerals in the ore, and since each mineral responds differently to a mineral processing method, the use of elemental assays alone could lead to erroneous or uncertain conclusions about the process performance. This paper discusses the advantages of using performance indices calculated from mineral content, such as mineral recovery, for process performance assessment. A method is presented that uses elemental assays to estimate the mineral content of the solids in various process streams. The method combines the stoichiometric composition of the minerals and mass conservation constraints for the minerals through the concentration process to estimate the mineral content from elemental assays. The advantage of assessing a concentration process using mineral-based performance indices is illustrated for an iron ore concentration circuit.
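The core estimation step can be sketched as a small constrained linear inversion (an illustration with assumed minerals and made-up assays; it omits the mass-conservation reconciliation across streams that the paper adds): the elemental assays b are explained by the stoichiometric element contents A of the assumed minerals, and non-negative least squares recovers the mineral contents x.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: hematite (Fe2O3), quartz (SiO2); rows: Fe and Si.
# Stoichiometric element contents (wt%) of each assumed mineral:
A = np.array([[69.9,  0.0],    # Fe: 69.9% of hematite, none in quartz
              [ 0.0, 46.7]])   # Si: none in hematite, 46.7% of quartz

b = np.array([45.0, 15.0])     # hypothetical elemental assays (wt% Fe, Si)

x, residual = nnls(A, b)       # non-negative mineral contents
print("hematite %.1f%%, quartz %.1f%%"
      % (100 * x[0] / x.sum(), 100 * x[1] / x.sum()))
```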

Keywords: data reconciliation, iron ore concentration, mineral composition, process performance assessment

Procedia PDF Downloads 177
34722 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0

Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini

Abstract:

Nowadays, companies face many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which creates challenges around data management and governance. Companies are also challenged to integrate data from multiple systems and technologies. Despite these pains, companies still pursue digitalization because, by embracing advanced technologies, they can improve efficiency, quality, decision-making and customer experience while also creating different business models and revenue streams. This paper focuses on the issue of data being stored in silos with different schemas and structures. The conventional approaches to addressing this issue involve data warehousing, data integration tools, data standardization and business intelligence tools. However, these approaches primarily focus on the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. In this work, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models as an efficient standard for communication in Industry 4.0. The paper highlights how our approach can facilitate the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates Asset Administration Shell technology to model and map the company's data and utilizes a knowledge graph for data storage and exploration.

Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling

Procedia PDF Downloads 66
34721 Development and Implementation of Curvature Dependent Force Correction Algorithm for the Planning of Forced Controlled Robotic Grinding

Authors: Aiman Alshare, Sahar Qaadan

Abstract:

A curvature-dependent force correction algorithm for planning a force-controlled grinding process with off-line programming flexibility is designed for an ABB industrial robot, in order to avoid manual intervention during the process. The machining path utilizes a spline curve fit constructed from the CAD data of the workpiece. The fitted spline has second-order continuity to assure path smoothness. The implemented algorithm computes uniform forces normal to the grinding surface of the workpiece by constructing a curvature path in the spatial coordinates using the spline method.
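The two ingredients, a C2 spline fit of the CAD contour and a curvature-based force correction, can be sketched as follows (a planar Python illustration with hypothetical contour points and a made-up correction law, not the authors' algorithm):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Planar workpiece contour points taken from CAD data (hypothetical)
x = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
y = np.array([0.0,  5.0,  3.0,  8.0,  4.0])

# A cubic spline (k=3) has C2 continuity, assuring path smoothness
tck, _ = splprep([x, y], k=3, s=0)
u = np.linspace(0, 1, 200)
dx, dy = splev(u, tck, der=1)    # first derivatives along the path
ddx, ddy = splev(u, tck, der=2)  # second derivatives

# Curvature of the parametric planar path
kappa = np.abs(dx*ddy - dy*ddx) / (dx**2 + dy**2) ** 1.5

# Hypothetical correction: reduce the commanded normal force where the
# path curves sharply, so the applied grinding force stays uniform
F_nominal = 20.0                          # N
F_corrected = F_nominal / (1.0 + 5.0 * kappa)
```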

Keywords: ABB industrial robot, grinding process, offline programming, CAD data extraction, force correction algorithm

Procedia PDF Downloads 336
34720 A Survey of 2nd Year Students' Frequent Writing Error and the Effects of Participatory Error Correction Process

Authors: Chaiwat Tantarangsee

Abstract:

The purposes of this study are 1) to study the effects of a participatory error correction process and 2) to find out the students' satisfaction with such an error correction process. This study is quasi-experimental research with a single group, in which data were collected five times, preceding and following four experimental treatments of the participatory error correction process, including providing coded indirect corrective feedback on the students' texts together with error treatment activities. The sample comprises 28 second-year English major students of the Faculty of Humanities and Social Sciences, Suan Sunandha Rajabhat University. The tool for the experimental study is the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tools for data collection are five writing tests of short texts and a questionnaire. Based on formative evaluation of the students' writing ability prior to and after each of the four experiments, the research findings disclose higher student scores, with a statistically significant difference at the 0.05 level. Moreover, in terms of the effect size of the process, the means of the students' scores prior to and after the four experiments give d equal to 1.0046, 1.1374, 1.297 and 1.0065, respectively. It can be concluded that the participatory error correction process enables all of the students to learn equally well and that their ability to write short texts improves. Finally, the students' overall satisfaction with the participatory error correction process is at a high level (Mean = 4.32, S.D. = 0.92).
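For readers who want to reproduce the effect sizes, a common pre/post formulation of Cohen's d is sketched below (the paper does not state which variant it used, so the pooled-standard-deviation form here is an assumption):

```python
import numpy as np

def cohens_d(pre, post):
    """Effect size of a pre/post comparison using the pooled SD.
    (Assumed formula; the paper does not specify its variant.)"""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    n1, n2 = len(pre), len(post)
    pooled = np.sqrt(((n1 - 1) * pre.var(ddof=1) +
                      (n2 - 1) * post.var(ddof=1)) / (n1 + n2 - 2))
    return (post.mean() - pre.mean()) / pooled

# e.g., made-up scores for illustration (not the study's data)
print(cohens_d([10, 12, 11, 9], [13, 15, 14, 12]))
```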

Keywords: coded indirect corrective feedback, participatory error correction process, error treatment, humanities and social sciences

Procedia PDF Downloads 496
34719 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features

Authors: Bo Wang

Abstract:

The geometric processing of multi-source remote sensing data using control data of different scales and different accuracies is an important research direction for multi-platform earth observation systems. In existing block bundle adjustment methods, the controlling information in the adjustment system is treated at a single observation scale and precision, which makes it impossible to screen the control information and assign reasonable, effective weights, and this reduces the convergence and reliability of the adjustment results. Referring to the relevant theory and technology of quotient space, this project researches several subjects. A multi-layer quotient space of multi-geometric features is constructed to describe and filter control data. A normalized granularity merging mechanism for multi-layer control information is studied and, based on the normalized scale factor, a strategy to optimize the weight selection of control data that is less relevant to the adjustment system can be realized. At the same time, geometric positioning experiments are conducted using multi-source remote sensing data, aerial images and multiple classes of control data to verify the theoretical research results. This research is expected to break through the limitation of single-scale, single-accuracy control data in the adjustment process and to expand the theory and technology of photogrammetry, so that the problem of processing multi-source remote sensing data is solved both theoretically and practically.

Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection

Procedia PDF Downloads 256
34718 A Review on Existing Challenges of Data Mining and Future Research Perspectives

Authors: Hema Bhardwaj, D. Srinivasa Rao

Abstract:

Technology for analysing, processing, and extracting meaningful data from enormous and complicated datasets can be termed 'big data.' The techniques of big data mining and big data analysis are extremely helpful for business activities such as decision-making, building organisational plans, researching the market efficiently and improving sales, because typical management tools cannot handle such complicated datasets. Big data brings special computational and statistical issues, such as measurement errors, noise accumulation, spurious correlation, and storage and scalability limitations. These unique problems call for new computational and statistical paradigms. This research paper offers an overview of the literature on big data mining and its process, along with its problems and difficulties, with a focus on the unique characteristics of big data. Organizations face several difficulties when undertaking data mining, which has an impact on their decision-making. Every day, terabytes of data are produced, yet only around 1% of that data is actually analyzed. The ideas of data mining and analysis and the knowledge discovery techniques that have recently been developed, together with practical application systems, are presented in this study. The article's conclusion also includes a list of issues and directions for further research in the area. The report discusses management's main big data and data mining challenges.

Keywords: big data, data mining, data analysis, knowledge discovery techniques, data mining challenges

Procedia PDF Downloads 81
34717 Character and Evolution of Electronic Waste: A Technologically Developing Country's Experience

Authors: Karen C. Olufokunbi, Odetunji A. Odejobi

Abstract:

This paper examines the generation, accumulation and growth of e-waste in a developing country. Images and other data about computer e-waste were collected using a digital camera, 290 copies of a questionnaire and three structured interviews, with the Obafemi Awolowo University (OAU), Ile-Ife, Nigeria environment as a case study. The numerical data were analysed using the R data analysis and processing tool. Automata-based techniques and a Petri net modelling tool were used to design and simulate a computational model for the recovery of saleable materials from e-waste. The R analysis showed that, at a 95 percent confidence level, 417 units of computer equipment will be disposed of by 2020. Compared to the 800 units in circulation in 2014, this means 50 percent of personal computer components will become e-waste. This indicates that personal computer components were in high demand due to their low costs and will be disposed of more rapidly when replaced by new computer equipment. Also, 57 percent of the respondents discarded their computer e-waste by throwing it into the garbage bin or by dumping it. The model simulated with the Coloured Petri net modelling tool showed that the e-waste dynamics form a forward sequential process in the form of a pipeline, meaning that the recovery of saleable materials from e-waste occurs in identifiable discrete stages, and indicating that e-waste will continue to accumulate and grow in volume with time.

Keywords: Coloured Petri net, computational modelling, electronic waste, electronic waste process dynamics

Procedia PDF Downloads 136
34716 Using Gaussian Process in Wind Power Forecasting

Authors: Hacene Benkhoula, Mohamed Badreddine Benabdella, Hamid Bouzeboudja, Abderrahmane Asraoui

Abstract:

Wind is a random variable that is difficult to master; for this reason, we developed mathematical and statistical methods to model and forecast wind power. Gaussian Processes (GP) are one of the most widely used families of stochastic processes for modeling dependent data observed over time, over space, or over time and space. A GP models an underlying process that is not observed directly but is used to solve a problem. The purpose of this paper is to present how to forecast wind power using a GP. The Gaussian process method for forecasting is presented. To validate the presented approach, a simulation in the MATLAB environment is given.
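A minimal GP regression forecast can be sketched outside MATLAB as well; the following Python/scikit-learn example (an assumed equivalent with placeholder data, not the authors' simulation) fits a GP with an RBF-plus-noise kernel to a power series and predicts ahead with uncertainty:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 100)[:, None]                   # time, hours
power = np.sin(t).ravel() + 0.1 * rng.normal(size=100) # placeholder wind power

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel).fit(t, power)

t_future = np.linspace(10, 12, 20)[:, None]
mean, std = gp.predict(t_future, return_std=True)      # forecast + uncertainty
```

The predictive standard deviation is the practical benefit of the GP here: the forecast comes with a confidence band rather than a bare point estimate.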

Keywords: wind power, Gaussian process, modelling, forecasting

Procedia PDF Downloads 374
34715 Classification of Poverty Level Data in Indonesia Using the Naïve Bayes Method

Authors: Anung Style Bukhori, Ani Dijah Rahajoe

Abstract:

Poverty poses a significant challenge in Indonesia, requiring an effective analytical approach to understand and address the issue. In this research, we applied the Naïve Bayes classification method to examine and classify poverty data in Indonesia. The main focus is on classifying data using RapidMiner, a powerful data analysis platform. The analysis process involves splitting the data to train and test the classification model. First, we collected and prepared a poverty dataset that includes factors such as education, employment, and health. The experimental results indicate that the Naïve Bayes classification model can provide accurate predictions regarding the risk of poverty. The use of RapidMiner in the analysis process offers flexibility and efficiency in evaluating the model's performance. The classification produces several values that serve as the standard for classifying poverty data in Indonesia using Naïve Bayes. The accuracy obtained is 40.26%, with a recall of 35.94% for the moderate class, 63.16% for the high class, and 38.03% for the low class. The precision is 58.97% for the moderate class, 17.39% for the high class, and 58.70% for the low class.

Keywords: poverty, classification, naïve bayes, Indonesia

Procedia PDF Downloads 30
34714 Importance of New Policies of Process Management for Internet of Things Based on Forensic Investigation

Authors: Venkata Venugopal Rao Gudlur

Abstract:

The proposed policies, referred to as 'SOP' (standard operating procedures), for Internet of Things (IoT) based forensic investigation into process management are the latest revolution, saving time and providing quick solutions for investigators. The forensic investigation process has been developed over many years and has, from time to time, provided the required information, but with no policies governing the investigation processes. This research reveals that current IoT-based forensic investigation into process management is increasingly connected to devices, which is the latest revolution, and to policies. All future development in real-time information gathering and monitoring evolves with smart sensor-based technologies connected directly to the IoT. This paper presents a conceptual framework for process management. Smart devices are leading the way in terms of the automated forensic models and frameworks established by different scholars. These models and frameworks were mostly focused on offering a roadmap for performing forensic operations, with no policies in place. The proposed policies would bring a tremendous benefit to process management and to IoT forensic investigators. The forensic investigation process may thus enhance security and reduce data losses and vulnerabilities.

Keywords: Internet of Things, Process Management, Forensic Investigation, M2M Framework

Procedia PDF Downloads 77
34713 Secure E-Voting Using Blockchain Technology

Authors: Barkha Ramteke, Sonali Ridhorkar

Abstract:

An election is an important event in every country. Traditional voting has several drawbacks, including the time and effort required for tallying and counting results, and the cost of paper, arrangements, and everything else required to complete a voting process. Many countries are now considering online e-voting systems, but traditional e-voting systems suffer from a lack of trust: it is not known whether a vote is counted correctly or has been tampered with. This lack of transparency means that voters have no assurance that their votes will be counted as cast. As blockchain technology grows in popularity, electronic voting systems increasingly use it as an underlying storage mechanism to make the voting process more transparent and to assure data immutability. The transparency, on the other hand, may reveal critical information about candidates, because all system users have the same entitlement to the data. Furthermore, because of blockchain's pseudo-anonymity, voters' privacy may be compromised, and third parties involved in the voting process, such as registration institutions, may be able to tamper with data. To overcome these difficulties, we apply Ethereum smart contracts to blockchain-based voting systems.

Keywords: blockchain, AMV chain, electronic voting, decentralized

Procedia PDF Downloads 110
34712 An Empirical Evaluation of Performance of Machine Learning Techniques on Imbalanced Software Quality Data

Authors: Ruchika Malhotra, Megha Khanna

Abstract:

The development of change prediction models can help software practitioners plan testing and inspection resources at early phases of software development. However, a major challenge faced during the training process of any classification model is the imbalanced nature of software quality data. Data with very few instances of the minority outcome categories lead to an inefficient learning process, and a classification model developed from imbalanced data generally does not predict these minority categories correctly. Thus, for a given dataset, a minority of classes may be change prone whereas a majority of classes may be non-change prone. This study explores various alternatives for adeptly handling imbalanced software quality data using different sampling methods and effective MetaCost learners. The study also analyzes and justifies the use of different performance metrics when dealing with imbalanced data. In order to empirically validate the different alternatives, the study uses change data from three application packages of an open-source Android data set and evaluates the performance of six different machine learning techniques. The results of the study indicate extensive improvement in the performance of the classification models when using resampling methods and robust performance measures.
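One of the simplest alternatives, random oversampling of the minority (change-prone) class combined with an imbalance-robust metric, can be sketched as follows (a Python/scikit-learn illustration with placeholder data; the paper's own sampling methods and MetaCost learners are not reproduced here):

```python
import numpy as np
from sklearn.utils import resample
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
# Placeholder OO-metrics data: roughly 10% of classes are change prone
X = rng.normal(size=(1000, 12))
y = (rng.uniform(size=1000) < 0.10).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random oversampling: replicate minority instances until classes balance
minority = y_tr == 1
X_up, y_up = resample(X_tr[minority], y_tr[minority],
                      n_samples=int((~minority).sum()), random_state=0)
X_bal = np.vstack([X_tr[~minority], X_up])
y_bal = np.concatenate([y_tr[~minority], y_up])

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
# ROC-AUC is robust to class imbalance, unlike plain accuracy
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```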

Keywords: change proneness, empirical validation, imbalanced learning, machine learning techniques, object-oriented metrics

Procedia PDF Downloads 393
34711 Secured Transmission and Reserving Space in Images Before Encryption to Embed Data

Authors: G. R. Navaneesh, E. Nagarajan, C. H. Rajam Raju

Abstract:

Nowadays, multimedia data are used to store secure information. All previous methods allocate space in the image for data embedding after encryption. In this paper, we propose a novel method that reserves room in the image, enclosed by a boundary, before encryption with a traditional RDH algorithm, which makes it easy for the data hider to reversibly embed data in the encrypted images. The proposed method achieves real-time performance; that is, data extraction and image recovery are free of any error. A secure transmission process is also discussed in this paper, which improves efficiency tenfold compared to the other processes discussed.
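The embedding step itself typically writes the payload into least significant bits. A minimal sketch follows (illustrative only; in the proposed scheme, reversibility comes from emptying out the reserved room before encryption, which is not shown here):

```python
import numpy as np

def embed_lsb(pixels, message_bits):
    """Write payload bits into the LSBs of the first len(bits) pixels."""
    flat = pixels.ravel().copy()
    flat[:len(message_bits)] &= 0xFE            # clear the reserved LSBs
    flat[:len(message_bits)] |= message_bits    # write the payload
    return flat.reshape(pixels.shape)

def extract_lsb(pixels, n_bits):
    """Read the payload back from the LSBs."""
    return pixels.ravel()[:n_bits] & 1

img = np.random.default_rng(6).integers(0, 256, size=(8, 8), dtype=np.uint8)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(img, bits)
assert np.array_equal(extract_lsb(stego, 8), bits)   # error-free extraction
```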

Keywords: secure communication, reserving room before encryption, least significant bits, image encryption, reversible data hiding

Procedia PDF Downloads 377
34710 A Further Insight to Foaming in Anaerobic Digester

Authors: Ifeyinwa Rita Kanu, Thomas Aspray, Adebayo J. Adeloye

Abstract:

As a result of the ambiguity and complexity surrounding anaerobic digester foaming, efforts have been made by various researchers to understand the process so as to proffer a solution that can be applied universally rather than being site specific. All attempts, ranging from experimental analysis to comparative reviews of other processes, have failed to explain explicitly the conditions and process of foaming in anaerobic digesters. By studying the available knowledge on foam formation and relating it to anaerobic digester processes and operating conditions, this study presents a succinct and enhanced understanding of foaming in anaerobic digesters, as well as introducing a simple and novel method to identify the onset of anaerobic digester foaming based on the analysis of historical data from a field-scale system.

Keywords: anaerobic digester, foaming, biogas, surfactant, wastewater

Procedia PDF Downloads 421
34709 Stealth Laser Dicing Process Improvement via Shuffled Frog Leaping Algorithm

Authors: Pongchanun Luangpaiboon, Wanwisa Sarasang

Abstract:

In this paper, the performance of the shuffled frog leaping algorithm on the stealth laser dicing process was investigated. The effect of the problem on the performance of the algorithm was assessed based on the tolerance of the meandering data: according to the customer specification, meandering must be less than five microns, with a target of zero microns. Currently, the meandering levels are unsatisfactory when compared to the customer specification. Firstly, a two-level factorial design was applied in a preliminary study of the statistically significant effects of five process variables; in this study, one influential process variable is integer-valued. From the experimental results, the new operating condition found by the algorithm was superior to the current manufacturing condition.
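For reference, a minimal shuffled frog leaping algorithm can be sketched as below (a generic Python implementation with a toy surrogate objective standing in for the fitted meandering model; it does not handle the integer-valued variable, which the study treats specially):

```python
import numpy as np

def sfla(objective, bounds, n_frogs=30, n_memeplexes=5,
         memetic_steps=10, shuffles=50, rng=None):
    """Minimal shuffled frog leaping algorithm (minimization)."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds[:, 0], bounds[:, 1]
    frogs = rng.uniform(lo, hi, size=(n_frogs, len(lo)))
    for _ in range(shuffles):
        fitness = np.array([objective(f) for f in frogs])
        frogs = frogs[np.argsort(fitness)]            # shuffle: best first
        best_global = frogs[0].copy()
        for m in range(n_memeplexes):                 # partition round-robin
            idx = np.arange(m, n_frogs, n_memeplexes)
            mem = frogs[idx]
            for _ in range(memetic_steps):
                fit = np.array([objective(f) for f in mem])
                b, w = np.argmin(fit), np.argmax(fit)
                # leap the worst frog toward the memeplex best ...
                cand = np.clip(mem[w] + rng.uniform() * (mem[b] - mem[w]), lo, hi)
                if objective(cand) >= fit[w]:
                    # ... else toward the global best ...
                    cand = np.clip(mem[w] + rng.uniform() * (best_global - mem[w]), lo, hi)
                if objective(cand) >= fit[w]:
                    cand = rng.uniform(lo, hi)        # ... else random reset
                mem[w] = cand
            frogs[idx] = mem
    fitness = np.array([objective(f) for f in frogs])
    return frogs[np.argmin(fitness)], fitness.min()

# Toy surrogate for meandering (microns) over five process variables;
# purely illustrative, not the fitted model from the study
bounds = np.array([[0.0, 10.0]] * 5)
meander = lambda x: ((x - 3.7) ** 2).sum()
best_x, best_f = sfla(meander, bounds, rng=0)
```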

Keywords: stealth laser dicing process, meandering, meta-heuristics, shuffled frog leaping algorithm

Procedia PDF Downloads 317
34708 ANN Modeling for Cadmium Biosorption from Potable Water Using a Packed-Bed Column Process

Authors: Dariush Jafari, Seyed Ali Jafari

Abstract:

The recommended limit for the cadmium concentration in potable water is less than 0.005 mg/L. A continuous biosorption process using an indigenous red seaweed, Gracilaria corticata, was performed to remove cadmium from potable water. The process was conducted under fixed conditions, and breakthrough curves were obtained for three consecutive sorption-desorption cycles. A model based on an Artificial Neural Network (ANN) was employed to fit the experimental breakthrough data. In addition, the Thomas model, a simplified semi-empirical model, was employed for the same purpose. It was found that the ANN described the experimental data well (R2 > 0.99), while the Thomas predictions were slightly less successful, with R2 > 0.97. The design parameters adjusted using the nonlinear form of the Thomas model were in good agreement with the experimentally obtained ones. The results confirm the capability of ANN to predict the cadmium concentration in potable water.
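Fitting the nonlinear Thomas model to a breakthrough curve can be sketched with a standard least-squares routine (a Python illustration using the common form of the model, Ct/C0 = 1/(1 + exp(k_Th·q0·m/Q − k_Th·C0·t)); all numbers are hypothetical, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fixed column conditions (hypothetical values, not from the paper)
C0 = 0.5      # feed cadmium concentration, mg/L
Q  = 10.0     # flow rate, mL/min
m  = 5.0      # mass of biosorbent, g

def thomas(t, k_th, q0):
    """Thomas model: Ct/C0 = 1 / (1 + exp(k_th*q0*m/Q - k_th*C0*t))."""
    return 1.0 / (1.0 + np.exp(k_th * q0 * m / Q - k_th * C0 * t))

# t: sampling times (min); y: measured Ct/C0 from the breakthrough curve
t = np.array([0, 60, 120, 180, 240, 300, 360], dtype=float)
y = np.array([0.01, 0.05, 0.15, 0.40, 0.70, 0.88, 0.96])

(k_th, q0), _ = curve_fit(thomas, t, y, p0=[0.01, 10.0])
print(f"k_Th = {k_th:.4f}, q0 = {q0:.2f} mg/g")
```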

Keywords: ANN, biosorption, cadmium, packed-bed, potable water

Procedia PDF Downloads 400
34707 Designing Information Systems in Education as Prerequisite for Successful Management Results

Authors: Vladimir Simovic, Matija Varga, Tonco Marusic

Abstract:

This research paper presents matrix technology models and examples of information systems in education (in the Republic of Croatia and in Germany) that support business, education (learning and teaching) and e-learning. We researched and described the aims and objectives of the main processes in education and technology, together with the main matrix classes of data. In this paper, we present an example of matrix technology with a detailed description of the processes related to specific data classes in the processes of education, and an example module that supports the processes 'filling in the directory and the diary of work' and 'evaluation'. At the lower level of the processes, we also researched and described all the activities which take place within the lower processes in education, as well as the characteristics and functioning of the modules 'fill in the directory and the diary of work' and 'evaluation'. For the analysis of the affinity between the aforementioned processes and/or sub-processes, we used our application model created in Visual Basic, which is based on an algorithm for analyzing the affinity between the observed processes and/or sub-processes.

Keywords: designing, education management, information systems, matrix technology, process affinity

Procedia PDF Downloads 417
34706 Real-Time Visualization Using GPU-Accelerated Filtering of LiDAR Data

Authors: Sašo Pečnik, Borut Žalik

Abstract:

This paper presents a real-time visualization technique and filtering of classified LiDAR point clouds. The visualization is capable of displaying filtered information organized in layers by the classification attribute saved within LiDAR data sets. We explain the used data structure and data management, which enables real-time presentation of layered LiDAR data. Real-time visualization is achieved with LOD optimization based on the distance from the observer without loss of quality. The filtering process is done in two steps and is entirely executed on the GPU and implemented using programmable shaders.

Keywords: filtering, graphics, level-of-details, LiDAR, real-time visualization

Procedia PDF Downloads 275
34705 Vibration-Based Data-Driven Model for Road Health Monitoring

Authors: Guru Prakash, Revanth Dugalam

Abstract:

A road's condition often deteriorates due to harsh loading, such as overloaded trucks, and severe environmental conditions, such as heavy rain, snow load, and cyclic loading. In the absence of proper maintenance planning, this results in potholes, wide cracks, bumps, and increased road roughness. In this paper, a data-driven model is developed to detect these damages using vibration and image signals. The key idea of the proposed methodology is that road anomalies manifest in these signals and can be detected by training a machine learning algorithm. The use of various machine learning techniques, such as the support vector machine and the Random Forest method, will be investigated. The proposed model will first be trained and tested with artificially simulated data, and the model architecture will be finalized by comparing the accuracies of the various models. Once a model is fixed, a field study will be performed and data will be collected. The field data will be used to validate the proposed model and to predict the road's future health condition. The proposed approach will help to automate the road condition monitoring, repair cost estimation, and maintenance planning processes.
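The vibration branch of such a pipeline can be sketched simply (a Python illustration with simulated accelerometer traces and an assumed feature set of RMS, peak and standard deviation; the paper's actual features and data are not specified):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def window_features(signal, n_windows=50):
    """Per-window features from an accelerometer trace:
    RMS, peak and standard deviation (assumed feature set)."""
    windows = np.array_split(signal, n_windows)
    return np.array([[np.sqrt((w**2).mean()), np.abs(w).max(), w.std()]
                     for w in windows])

rng = np.random.default_rng(7)
smooth = rng.normal(0, 0.1, size=5000)           # simulated: healthy road
rough = rng.normal(0, 0.1, size=5000)
rough[rng.integers(0, 5000, 40)] += 3.0          # spikes mimic potholes/bumps

X = np.vstack([window_features(smooth), window_features(rough)])
y = np.array([0]*50 + [1]*50)                    # 0 = healthy, 1 = damaged

clf = make_pipeline(StandardScaler(), SVC()).fit(X, y)
print("training accuracy:", clf.score(X, y))
```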

Keywords: SVM, data-driven, road health monitoring, pot-hole

Procedia PDF Downloads 56