Search results for: internet data center.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8184

7554 A Two Level Load Balancing Approach for Cloud Environment

Authors: Anurag Jain, Rajneesh Kumar

Abstract:

Cloud computing is an outcome of the rapid growth of the Internet. Due to the elastic nature of cloud computing and the unpredictable behavior of users, load balancing is a major issue in the cloud computing paradigm. An efficient load balancing technique can improve performance in terms of efficient resource utilization and higher customer satisfaction. Load balancing can be implemented through task scheduling, resource allocation and task migration. Parameters used to analyze the performance of a load balancing approach are response time, cost, data processing time and throughput. This paper demonstrates a two-level load balancing approach that combines the join idle queue and join shortest queue approaches. The authors used the Cloud Analyst simulator to test the proposed two-level load balancer. The results are analyzed and compared with existing algorithms and, as observed, the proposed work is one step ahead of existing techniques.
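
For illustration only, here is a minimal Python sketch of how the two levels could interact: an idle queue is consulted first (join idle queue) and, when no server is idle, the task is sent to the server with the shortest queue (join shortest queue). The server names, counts and toy workload are invented; this is not the authors' Cloud Analyst implementation.

    import random
    from collections import deque

    class Server:
        def __init__(self, name):
            self.name = name
            self.queue_len = 0                   # tasks currently assigned to this server

    class TwoLevelBalancer:
        """Level 1: join idle queue; level 2: join shortest queue as fallback."""
        def __init__(self, servers):
            self.servers = servers
            self.idle_queue = deque(servers)     # all servers start idle

        def dispatch(self, task_id):
            if self.idle_queue:                  # level 1: an idle server is available
                server = self.idle_queue.popleft()
            else:                                # level 2: pick the shortest queue
                server = min(self.servers, key=lambda s: s.queue_len)
            server.queue_len += 1
            return server

        def task_finished(self, server):
            server.queue_len -= 1
            if server.queue_len == 0:            # server became idle again
                self.idle_queue.append(server)

    # Toy usage: dispatch 10 tasks while randomly completing earlier ones.
    balancer = TwoLevelBalancer([Server(f"vm{i}") for i in range(3)])
    running = []
    for t in range(10):
        s = balancer.dispatch(t)
        running.append(s)
        print(f"task {t} -> {s.name} (queue length {s.queue_len})")
        if running and random.random() < 0.5:
            balancer.task_finished(running.pop(0))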

Keywords: Cloud Analyst, Cloud Computing, Join Idle Queue, Join Shortest Queue, Load balancing, Task Scheduling.

7553 Latent Topic Based Medical Data Classification

Authors: Jian-hua Yeh, Shi-yi Kuo

Abstract:

This paper discusses the classification process for medical data. We use the data from the ACM KDD Cup 2008 to demonstrate our classification process based on latent topic discovery. In this data set, the target set and the outliers are quite different in nature: the target set makes up only 0.6% of the total, while the outliers constitute 99.4% of the data set. We use this data set as an example to show how we dealt with an extremely biased data set using latent topic discovery and noise reduction techniques. Our experiment faces two major challenges: (1) extremely distributed outliers, and (2) far fewer positive samples than negative ones. We propose a suitable process flow to deal with these issues and obtain a best AUC result of 0.98.
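
As a rough, hedged illustration of one common way to cope with such an extreme class imbalance (class weighting plus AUC evaluation), the following Python sketch uses a synthetic data set with about 0.6% positives as a stand-in for the KDD Cup 2008 data; it is not the authors' latent-topic pipeline.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Toy stand-in for the KDD Cup 2008 setting: roughly 0.6% positives among many negatives.
    X, y = make_classification(n_samples=20000, n_features=20, weights=[0.994, 0.006],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Class weighting counteracts the bias toward the overwhelming negative class.
    clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]
    print("AUC:", round(roc_auc_score(y_te, scores), 3))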

Keywords: classification, latent topics, outlier adjustment, feature scaling

7552 A Force-directed Graph Drawing based on the Hierarchical Individual Timestep Method

Authors: T. Matsubayashi, T. Yamada

Abstract:

In this paper, we propose a fast and efficient method for drawing very large-scale graph data. The conventional force-directed method proposed by Fruchterman and Reingold (FR method) is well known. It defines repulsive forces between every pair of nodes and attractive forces between nodes connected by an edge, and calculates the corresponding potential energy. An optimal layout is obtained by iteratively updating node positions to minimize the potential energy. Here, the positions of all nodes are updated simultaneously at every global timestep. In the proposed method, each node has its own individual time and timestep, and nodes are updated at different frequencies depending on the local situation. The proposed method is inspired by the hierarchical individual timestep method used for high-accuracy calculations of dense particle fields, such as star clusters, in astrophysical dynamics. Experiments show that the proposed method outperforms the original FR method in both speed and accuracy. We implement the proposed method on the MDGRAPE-3 PCI-X special-purpose parallel computer and realize a speed enhancement of several hundred times.
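
A minimal Python sketch of the idea follows: Fruchterman-Reingold style forces are computed, but each node keeps an individual update time and timestep, so nodes under large forces are moved in smaller, more frequent steps. The graph, constants and the timestep rule are illustrative assumptions, not the authors' MDGRAPE-3 implementation.

    import numpy as np

    def fr_forces(pos, edges, k):
        """Fruchterman-Reingold forces: repulsion between all pairs, attraction on edges."""
        n = len(pos)
        force = np.zeros_like(pos)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = pos[i] - pos[j]
                dist = max(np.linalg.norm(d), 1e-9)
                force[i] += (k * k / dist**2) * d      # repulsive, magnitude k^2 / dist
        for i, j in edges:
            d = pos[i] - pos[j]
            dist = max(np.linalg.norm(d), 1e-9)
            f = (dist / k) * d                         # attractive, magnitude dist^2 / k
            force[i] -= f
            force[j] += f
        return force

    # Tiny graph: a 4-cycle with random initial positions.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    rng = np.random.default_rng(0)
    pos = rng.random((4, 2))
    k = 0.5
    next_update = np.zeros(4)          # each node's next individual update time
    clock, end, base_dt = 0.0, 2.0, 0.05

    while clock < end:
        force = fr_forces(pos, edges, k)
        for i in range(len(pos)):
            if clock >= next_update[i]:                # only nodes whose time has come are moved
                dt = base_dt / (1.0 + np.linalg.norm(force[i]))   # large force -> small step
                pos[i] += np.clip(force[i], -1, 1) * dt
                next_update[i] = clock + dt
        clock += base_dt
    print(pos)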

Keywords: visualization, graph drawing, Internet Map

7551 Data Collection in Hospital Emergencies: A Questionnaire Survey

Authors: Nouha Mhimdi, Wahiba Ben Abdessalem Karaa, Henda Ben Ghezala

Abstract:

Many methods are used to collect data, such as questionnaires, surveys, and focus group interviews. However, the collection of poor-quality data, resulting for example from poorly designed questionnaires, the absence of good translators or interpreters, or the incorrect recording of data, can lead to conclusions that are not supported by the data or to a focus only on the average effect of a program or policy. There are several solutions to avoid or minimize the most frequent errors, including obtaining expert advice on the design or adaptation of data collection instruments, or using technologies that allow better "anonymity" in the responses. In this context, and to overcome the aforementioned problems, we suggest in this paper an approach for collecting relevant data by carrying out a large-scale questionnaire-based survey. We have been able to collect good quality, consistent and practical data on hospital emergencies to improve emergency services in hospitals, especially in the case of epidemics or pandemics.

Keywords: Data collection, survey, database, data analysis, hospital emergencies.

7550 Attribution Theory and Perceived Reliability of Cellphones for Teaching and Learning

Authors: Mayowa A. Sofowora, Seraphim D. Eyono Obono

Abstract:

The use of information and communication technologies such as computers, mobile phones and the Internet is becoming prevalent in today's world, facilitating access to a vast amount of data, services and applications for the improvement of people's lives. However, this prevalence of ICTs is hampered by the problem of low income levels in developing countries, to the point where people cannot timeously replace or repair their ICT devices when damaged or lost; this problem serves as the motivation for this study, whose aim is to examine the perceptions of teachers on the reliability of cellphones when used for teaching and learning purposes. The research objectives supporting this aim are of two types: objectives on the selection and design of theories and models, and objectives on the empirical testing of these theories and models. The first type of objective is achieved using content analysis in an extensive literature survey; the second type is achieved through a survey of high school teachers from the ILembe and UMgungundlovu districts in the KwaZulu-Natal province of South Africa. Data collected from this questionnaire-based survey are analysed in SPSS using descriptive statistics and Pearson correlations after checking the reliability and validity of the questionnaires. The main hypothesis driving this study is that there is a relationship between the demographics and the attribution identity of teachers on the one hand, and their perceptions of the reliability of cellphones on the other hand, as suggested by existing literature; except that attribution identities are considered in this study from three angles: intention, knowledge and ability, and action. The results of this study confirm that the perceptions of teachers on the reliability of cellphones for teaching and learning are affected by the school location of these teachers, and by their perceptions of learners' cellphone usage intentions and actual use.

Keywords: Attribution, Cellphones, E-learning, Reliability

7549 Data Transformation Services (DTS): Creating Data Mart by Consolidating Multi-Source Enterprise Operational Data

Authors: J. D. D. Daniel, K. N. Goh, S. M. Yusop

Abstract:

Trends in business intelligence, e-commerce and remote access make it necessary and practical to store data in different ways on multiple systems with different operating systems. As businesses evolve and grow, they require efficient computerized solutions to perform data updates and to access data from diverse enterprise business applications. The objective of this paper is to demonstrate the capability of DTS [1] as a database solution for automatic data transfer and update in solving business problems. The DTS package is developed for a business selling a variety of plants that eventually expanded into commercial supply and landscaping. Dimensional data modeling is used in the DTS package to extract, transform and load data from heterogeneous database systems such as MySQL, Microsoft Access and Oracle, consolidating it into a Data Mart residing in SQL Server. The data transfer from the various databases is scheduled to run automatically every quarter of the year to support sales analysis. DTS is therefore an attractive solution for automatic data transfer and update that meets today's business needs.
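
The following Python sketch illustrates the general extract-transform-load flow described here, with in-memory SQLite databases standing in for the MySQL, Access, Oracle and SQL Server systems and with invented table and column names; DTS itself is a SQL Server tool, so this is only an analogy of the consolidation step.

    import sqlite3
    import pandas as pd

    # Stand-ins for the heterogeneous sources (MySQL, Access, Oracle in the paper);
    # here each source is an in-memory SQLite database with an invented schema.
    def make_source(rows):
        con = sqlite3.connect(":memory:")
        pd.DataFrame(rows).to_sql("sales", con, index=False)
        return con

    src_retail    = make_source([{"plant": "fern", "qty": 10, "price": 2.5,  "sold_on": "2009-01-15"}])
    src_landscape = make_source([{"plant": "palm", "qty": 3,  "price": 40.0, "sold_on": "2009-02-03"}])

    # Extract and transform: harmonise the sources into one fact table with a quarter key.
    frames = [pd.read_sql("SELECT * FROM sales", con) for con in (src_retail, src_landscape)]
    fact = pd.concat(frames, ignore_index=True)
    fact["revenue"] = fact["qty"] * fact["price"]
    fact["quarter"] = pd.to_datetime(fact["sold_on"]).dt.to_period("Q").astype(str)

    # Load into the data mart (SQL Server in the paper; SQLite here as a stand-in).
    mart = sqlite3.connect(":memory:")
    fact[["plant", "quarter", "qty", "revenue"]].to_sql("sales_fact", mart, index=False)
    print(pd.read_sql("SELECT quarter, SUM(revenue) AS revenue FROM sales_fact GROUP BY quarter", mart))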

Keywords: Data Transformation Services (DTS), Object Linking and Embedding Database (OLE DB), Data Mart, Online Analytical Processing (OLAP), Online Transaction Processing (OLTP).

7548 Extraction of Data from Web Pages: A Vision Based Approach

Authors: P. S. Hiremath, Siddu P. Algur

Abstract:

With the explosive growth of information sources available on the World Wide Web, it has become increasingly difficult to identify the relevant pieces of information, since web pages are often cluttered with irrelevant content such as advertisements, navigation panels and copyright notices surrounding the main content of the page. Hence, tools for mining data regions, data records and data items need to be developed in order to provide value-added services. Currently available automatic techniques to mine data regions from web pages are still unsatisfactory because of their poor performance and tag dependence. In this paper, a novel method to automatically extract data items from web pages is proposed. It comprises two steps: (1) identification and extraction of data regions based on visual clue information, and (2) identification of data records and extraction of data items from a data region. For step 1, a novel and more effective method based on visual clues is proposed, which finds the data regions formed by all types of tags. For step 2, a more effective method, namely Extraction of Data Items from web Pages (EDIP), is adopted to mine data items. The EDIP technique is a list-based approach in which the list is a linear data structure. The proposed technique is able to mine non-contiguous data records and can correctly identify data regions, irrespective of the type of tag in which they are bound. Our experimental results show that the proposed technique performs better than the existing techniques.

Keywords: Web data records, web data regions, web mining.

7547 Visual-Graphical Methods for Exploring Longitudinal Data

Authors: H. W. Ker

Abstract:

Longitudinal data typically have the characteristics of changes over time, nonlinear growth patterns, between-subject variability, and within-subject errors exhibiting heteroscedasticity and dependence. Data exploration is more complicated than for cross-sectional data. The purpose of this paper is to organize and integrate various visual-graphical techniques for exploring longitudinal data. By applying the proposed methods, investigators can answer research questions that include characterizing or describing growth patterns at both the group and the individual level, identifying the time points where important changes occur and identifying unusual subjects, selecting suitable statistical models, and suggesting possible within-error variance structures.
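
As one example of such a visual-graphical technique, the Python sketch below draws a "spaghetti" plot of individual profiles together with the group mean trend for synthetic longitudinal data; the data-generating choices are invented for illustration.

    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic longitudinal data: 20 subjects, 6 time points, nonlinear growth
    # with between-subject variability and heteroscedastic within-subject error.
    rng = np.random.default_rng(1)
    time = np.arange(6)
    subjects = []
    for _ in range(20):
        intercept = rng.normal(10, 2)             # between-subject variability
        slope = rng.normal(1.5, 0.4)
        noise = rng.normal(0, 0.3 * (1 + time))   # error variance grows with time
        subjects.append(intercept + slope * np.sqrt(time + 1) + noise)
    data = np.array(subjects)

    fig, ax = plt.subplots()
    for y in data:                                # individual profiles ("spaghetti" plot)
        ax.plot(time, y, color="grey", alpha=0.4)
    ax.plot(time, data.mean(axis=0), color="black", linewidth=2, label="group mean")
    ax.set_xlabel("time point")
    ax.set_ylabel("response")
    ax.legend()
    plt.show()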

Keywords: Data exploration, exploratory analysis, HLMs/LMEs, longitudinal data, visual-graphical methods.

7546 Separating Permanent and Induced Magnetic Signature: A Simple Approach

Authors: O. J. G. Somsen, G. P. M. Wagemakers

Abstract:

Magnetic signature detection provides sensitive detection of metal objects, especially in the natural environment. Our group is developing a tabletop setup for measuring the magnetic signatures of various small and model objects. A particular issue is the separation of permanent and induced magnetization. While the latter depends only on the composition and shape of the object, the former also depends on the magnetization history. With common deperming techniques, a significant permanent signature may still remain, which confuses measurements of the induced component. We investigate a basic technique for separating the two. Measurements were done by moving the object along an aluminum rail while the three field components are recorded by a detector attached near the center. This is done first with the rail parallel to the Earth's magnetic field and then in the anti-parallel orientation. The reversal changes the sign of the induced, but not the permanent, magnetization, so that the two can be separated. Our preliminary results on a small iron block show excellent reproducibility. A considerable permanent magnetization was indeed present, resulting in a complex asymmetric signature. After separation, a much more symmetric induced signature was obtained that can be studied in detail and compared with theoretical calculations.
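
The separation step amounts to a half-sum and half-difference of the two runs, as in the following Python sketch; the field values are invented, and the real setup records the three components along the full length of the rail.

    import numpy as np

    # Synthetic three-component field records from the two runs (arbitrary units):
    # the parallel run measures permanent + induced, the anti-parallel run measures
    # permanent - induced, because the reversal flips only the induced part.
    signature_parallel     = np.array([[1.2, 0.4, -0.1], [1.5, 0.6, -0.2]])
    signature_antiparallel = np.array([[0.2, -0.2, 0.3], [0.1, -0.4, 0.4]])

    permanent = 0.5 * (signature_parallel + signature_antiparallel)
    induced   = 0.5 * (signature_parallel - signature_antiparallel)
    print("permanent component:\n", permanent)
    print("induced component:\n", induced)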

Keywords: Magnetic signature, data analysis, magnetization, deperming techniques.

7545 A Materialized Approach to the Integration of XML Documents: the OSIX System

Authors: H. Ahmad, S. Kermanshahani, A. Simonet, M. Simonet

Abstract:

The data exchanged on the Web are different in nature from those handled by classical database management systems; these data are called semi-structured data since they do not have a regular and static structure like the data found in a relational database; their schema is dynamic and may contain missing data or types. Therefore, the need has arisen to develop further techniques and algorithms to exploit and integrate such data and to extract relevant information for the user. In this paper we present the OSIX system (Osiris-based System for Integration of XML Sources). This system has a data warehouse model designed for the integration of semi-structured data, and more precisely for the integration of XML documents. The architecture of OSIX relies on the Osiris system, a DL-based model designed for the representation and management of databases and knowledge bases. Osiris is a view-based data model whose indexing system supports semantic query optimization. We show that the problem of query processing on an XML source is optimized by the indexing approach proposed by Osiris.

Keywords: Data integration, semi-structured data, views, XML.

7544 Parental Attitudes as a Predictor of Cyber Bullying among Primary School Children

Authors: Bülent Dilmaç, Didem Aydoğan

Abstract:

Problem Statement: Rapid technological developments of the 21st century have advanced our daily lives in various ways. Particularly in education, students frequently utilize technological resources to aid their homework, to access information, to listen to the radio or watch television (26.9%), and to use e-mail (34.2%) [26]. Not surprisingly, the increase in the use of technologies also resulted in an increase in the use of e-mail, instant messaging, chat rooms, mobile phones, mobile phone cameras and web sites by adolescents to bully peers. As cyber bullying occurs in cyberspace, less access to technologies would mean less cyber-harm. Therefore, the frequency of technology use is a significant predictor of cyber bullying and cyber victimization. Cyber bullies try to harm the victim using various media. These tools include sending derogatory texts via mobile phones, sending threatening e-mails and forwarding confidential e-mails to everyone on the contacts list. Another way of cyber bullying is to set up a humiliating website and invite others to post comments. In other words, cyber bullies use e-mail, chat rooms, instant messaging, pagers, mobile texts and online voting tools to humiliate and frighten others and to create a sense of helplessness. No matter what type of bullying it is, it negatively affects its victims. Children who bully exhibit more emotional inhibition and attribute more negative self-statements to themselves compared to non-bullies. Students whose families are not sympathetic and who receive less emotional support are more prone to bully their peers. Bullies have authoritarian families and do not get along well with them. The family is the place where children's physical, social and psychological needs are satisfied and where their personalities develop. As the use of the Internet became prevalent, so did parents' restrictions on their children's Internet use. However, parents are unaware of the real harm. Studies that explain the relationship between parental attitudes and cyber bullying are scarce in the literature. Thus, this study aims to investigate the relationship between cyber bullying and parental attitudes at the primary school level. Purpose of Study: This study aimed to investigate the relationship between cyber bullying and parental attitudes. A second aim was to determine whether parental attitudes could predict cyber bullying and, if so, which variables could predict it significantly. Methods: The study had a cross-sectional and relational survey model. A demographic information form, questions about cyber bullying and a Parental Attitudes Inventory were administered to a total of 346 students (189 females and 157 males) registered at various primary schools. Data were analysed by multiple regression analysis using the software package SPSS 16.

Keywords: Cyber bullying, cyber victim, parental attitudes, primary school students.

7543 Some Issues with Extension of an HPC Cluster

Authors: Pil Seong Park

Abstract:

Homemade HPC clusters are widely used in many small labs because they are easy to build and cost-effective. Even though incremental growth is an advantage of clusters, it nevertheless results in heterogeneous systems. Instead of adding new nodes to the cluster, we can extend clusters to include other Internet servers working independently on the same LAN, so that we can make use of their idle times, especially during the night. However, extension across a firewall raises some security problems with NFS. In this paper, we propose a method to solve such problems using SSH tunneling, and suggest a modified structure of the cluster that implements it.
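
As a rough sketch of the general SSH-tunneling idea (not necessarily the authors' modified cluster structure), the following Python snippet wraps the usual commands: it forwards a local port to the NFS server's port 2049 through a gateway host and then mounts the export over the tunnel. It assumes NFSv4, where only port 2049 is needed, and the host names, user and paths are hypothetical.

    import subprocess

    # Hypothetical hosts: "gateway" is the head node reachable through the firewall,
    # "nfsserver" exports /export over NFSv4 (single port 2049).
    LOCAL_PORT = 3049

    # 1. Open a background SSH tunnel: local port 3049 -> nfsserver:2049 via the gateway.
    subprocess.run([
        "ssh", "-f", "-N",
        "-L", f"{LOCAL_PORT}:nfsserver:2049",
        "user@gateway",
    ], check=True)

    # 2. Mount the export through the tunnel (requires root; NFSv4 needs only port 2049).
    subprocess.run([
        "sudo", "mount", "-t", "nfs4",
        "-o", f"port={LOCAL_PORT},proto=tcp",
        "localhost:/export", "/mnt/cluster",
    ], check=True)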

Keywords: Extension of HPC clusters, Security, NFS, SSH tunneling.

7542 Data-Driven Decision-Making in Digital Entrepreneurship

Authors: Abeba Nigussie Turi, Xiangming Samuel Li

Abstract:

Data-driven business models are more typical of established businesses than of early-stage startups that strive to penetrate a market. This paper provides an extensive discussion of the principles of data analytics for early-stage digital entrepreneurial businesses. We develop a data-driven decision-making (DDDM) framework that applies to startups prone to multifaceted barriers in the form of poor data access and technical and financial constraints, to name a few. The startup DDDM framework proposed in this paper is novel in that it encompasses startup data analytics enablers and metrics aligned with startups' business models, ranging from customer-centric product development to servitization, which is the future of modern digital entrepreneurship.

Keywords: Startup data analytics, data-driven decision-making, data acquisition, data generation, digital entrepreneurship.

7541 Classifying Bio-Chip Data using an Ant Colony System Algorithm

Authors: Minsoo Lee, Yearn Jeong Kim, Yun-mi Kim, Sujeung Cheong, Sookyung Song

Abstract:

Bio-chips are used for experiments on genes and contain various information such as genes, samples and so on. Two-dimensional bio-chips, in which one axis represents genes and the other represents samples, are widely used these days. Instead of experimenting with real genes, which costs a lot of money and takes much time to obtain results, bio-chips are used for biological experiments. Extracting data from bio-chips with high accuracy and finding patterns or useful information in such data is very important. Bio-chip analysis systems extract data from various kinds of bio-chips and mine the data in order to obtain useful information. One of the commonly used methods to mine the data is classification. The algorithm used to classify the data can vary depending on the data types, numerical characteristics and so on. Considering that bio-chip data are extremely large, an algorithm that imitates an ecosystem, such as the ant algorithm, is suitable for classification. This paper focuses on finding classification rules from bio-chip data using the Ant Colony algorithm, which imitates an ecosystem. The developed system takes the accuracy of the discovered rules into consideration when applying them to the bio-chip data in order to predict the classes.
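
For illustration, here is a toy Ant-Miner-flavored sketch in Python: each ant assembles a candidate rule from "feature == 1" terms chosen with pheromone-proportional probability, rule quality reinforces the pheromone of the used terms, and the best rule is kept. The data, the quality measure (sensitivity times specificity) and all parameters are invented; this is not the authors' system.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(60, 5))             # toy "bio-chip" matrix: 60 samples, 5 binary features
    y = (X[:, 0] & X[:, 2]).astype(int)              # hidden rule: class 1 when features 0 and 2 are on

    n_features = X.shape[1]
    pheromone = np.ones(n_features)                  # one pheromone value per candidate term "feature == 1"

    def rule_quality(terms):
        """Quality of the rule 'all terms == 1 => class 1', as sensitivity * specificity."""
        covered = np.all(X[:, terms] == 1, axis=1)
        tp = np.sum(covered & (y == 1)); fp = np.sum(covered & (y == 0))
        fn = np.sum(~covered & (y == 1)); tn = np.sum(~covered & (y == 0))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        return sens * spec

    best_terms, best_q = [], 0.0
    for ant in range(200):                           # each ant builds one candidate rule
        probs = pheromone / pheromone.sum()
        n_terms = rng.integers(1, n_features + 1)
        terms = list(rng.choice(n_features, size=n_terms, replace=False, p=probs))
        q = rule_quality(terms)
        pheromone *= 0.95                            # evaporation
        for t in terms:
            pheromone[t] += q                        # reinforce terms used in good rules
        if q > best_q:
            best_terms, best_q = terms, q

    print("best rule: features", sorted(int(t) for t in best_terms),
          "== 1 => class 1, quality", round(best_q, 3))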

Keywords: Ant Colony System, DNA chip data, Classification.

7540 Trust and Reliability for Public Sector Data

Authors: Klaus Stranacher, Vesna Krnjic, Thomas Zefferer

Abstract:

The public sector holds large amounts of data in various areas such as social affairs, economy, or tourism. Various initiatives such as Open Government Data or the EU Directive on public sector information aim to make these data available for public and private service providers. Requirements for the provision of public sector data are defined by legal and organizational frameworks. Surprisingly, the defined requirements hardly cover security aspects such as integrity or authenticity. In this paper we discuss the importance of these missing requirements and present a concept to assure the integrity and authenticity of provided data based on electronic signatures. We show that our concept is well suited for the provisioning of unaltered data. We also show that the concept can be extended to data that need to be anonymized before provisioning, by incorporating redactable signatures. Our proposed concept enhances the trust and reliability of provided public sector data.
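
A toy Python sketch of the redactable-signature idea follows: each field is hashed separately, the sorted list of field hashes is signed, and a redacted field can later be dropped without breaking verification of the remaining fields. An HMAC stands in for the electronic signature, and the field names and values are invented; real redactable signature schemes (and the authors' concept) are more involved.

    import hashlib, hmac, json

    SIGNING_KEY = b"authority-secret"      # stand-in for the data provider's signing key

    def field_hash(name, value):
        return hashlib.sha256(f"{name}={value}".encode()).hexdigest()

    def sign_record(record):
        """Sign the list of per-field hashes, so single fields can later be redacted."""
        hashes = {k: field_hash(k, v) for k, v in record.items()}
        payload = json.dumps(sorted(hashes.values())).encode()
        signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return hashes, signature

    def verify(record, hashes, signature):
        """Verify a possibly redacted record: redacted fields keep only their hash."""
        for k, v in record.items():
            if v is not None and field_hash(k, v) != hashes[k]:
                return False                                   # a revealed field was altered
        payload = json.dumps(sorted(hashes.values())).encode()
        return hmac.compare_digest(
            hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(), signature)

    record = {"name": "Jane Doe", "municipality": "Graz", "benefit": "1200"}
    hashes, sig = sign_record(record)
    redacted = {"name": None, "municipality": "Graz", "benefit": "1200"}   # name anonymized
    print(verify(redacted, hashes, sig))    # True: remaining data are authentic and unaltered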

Keywords: Trusted Public Sector Data, Integrity, Authenticity, Reliability, Redactable Signatures.

7539 Analysis of Relation between Unlabeled and Labeled Data to Self-Taught Learning Performance

Authors: Ekachai Phaisangittisagul, Rapeepol Chongprachawat

Abstract:

Obtaining labeled data in supervised learning is often difficult and expensive, and thus the trained learning algorithm tends to overfit due to the small number of training examples. As a result, some researchers have focused on using unlabeled data, which need not follow the same generative distribution as the labeled data, to construct high-level features for improving performance on supervised learning tasks. In this paper, we investigate the impact of the relationship between unlabeled and labeled data on classification performance. Specifically, we apply different unlabeled data sets that have different degrees of relation to the labeled data to a handwritten digit classification task based on the MNIST dataset. Our experimental results show that the higher the degree of relation between unlabeled and labeled data, the better the classification performance. Although unlabeled data drawn from a generative distribution completely different from that of the labeled data provide the lowest classification performance, we still achieve high classification performance. This expands the applicability of supervised learning algorithms that make use of unsupervised learning.
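
A compact Python sketch of the self-taught learning pipeline is shown below: a one-hidden-layer autoencoder (an MLPRegressor trained to reconstruct its input) is fitted on unlabeled images, and its hidden activations are used as high-level features for a supervised classifier. The scikit-learn digits data set stands in for MNIST, and the layer sizes and splits are illustrative assumptions, not the authors' setup.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    # Digits stands in for MNIST; the "unlabeled" pool is simply held-out images whose
    # labels are ignored, so the degree of relation to the labeled data is maximal here.
    X, y = load_digits(return_X_y=True)
    X = X / 16.0
    X_unlab, X_lab, _, y_lab = train_test_split(X, y, test_size=0.3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X_lab, y_lab, test_size=0.5, random_state=0)

    # One-hidden-layer autoencoder: train the network to reconstruct its own input.
    ae = MLPRegressor(hidden_layer_sizes=(32,), activation="relu", max_iter=800, random_state=0)
    ae.fit(X_unlab, X_unlab)

    def encode(X):
        # High-level features = hidden-layer activations of the trained autoencoder.
        return np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

    clf = LogisticRegression(max_iter=2000).fit(encode(X_train), y_train)
    print("accuracy with self-taught features:", clf.score(encode(X_test), y_test))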

Keywords: Autoencoder, high-level feature, MNIST dataset, self-taught learning, supervised learning.

7538 Towards Development of Solution for Business Process-Oriented Data Analysis

Authors: M. Klimavicius

Abstract:

This paper proposes a modeling methodology for the development of a data analysis solution. The author introduces an approach to address data warehousing issues at the enterprise level. The methodology covers the requirements elicitation and analysis stage as well as the initial design of the data warehouse. The paper reviews an extended business process model that satisfies the needs of data warehouse development. The author considers the use of business process models necessary, as they reflect both enterprise information systems and business functions, which are important for data analysis. The described approach divides development into three steps with different levels of model elaboration, and makes it possible to gather requirements and present them to business users in an easy manner.

Keywords: Data warehouse, data analysis, business process management.

7537 Assessment of the Efficacy of Oral Vaccination of Wild Canids and Stray Dogs against Rabies in Azerbaijan

Authors: E. N. Hasanov, K. Y. Yusifova, M. A. Ali

Abstract:

Rabies is a zoonotic disease that causes acute encephalitis in domestic and wild carnivores. The goal of this investigation was to analyze the data on oral vaccination of wild canids and stray dogs in Azerbaijan. Before the start of the vaccination campaign conducted by the IDEA (International Dialogue for Environmental Action) Animal Care Center (IACC), all rabies cases in Azerbaijan for the period 2017-2020 were analyzed, and 30 regions were selected for oral immunization with the Rabadrop vaccine. In total, 95.9 thousand doses of baits were scattered over the 30 regions, of which 970 (0.97%) remained intact. In addition, a campaign to sterilize and vaccinate stray dogs and cats undoubtedly had a positive impact on reducing the dynamics of rabies incidence. During the period 2017-2020, 2,339 dogs and 2,962 cats were sterilized and vaccinated under this program. It can be noted that the risk of rabies infection can be reduced through special preventive measures against disease reservoirs, including oral immunization of wild and stray animals.

Keywords: Rabies, vaccination, oral immunization, wild canids, stray dogs, vaccine, disease reservoirs.

7536 Real E-Government, Real Convenience

Authors: M. Kargar, F.Fartash, T. Saderi, M. Abdar-e Bakhshayesh

Abstract:

In this paper we suggest a new system for e-government. In this method, a government can design a precise and complete system to control people and organizations by using five major documents. These documents contain the important information of each member of a society and help all organizations to perform their informatics tasks through them. This information would be available via a single national code, supported by a secure program. The suggested system can give good awareness to the society and help it be managed correctly.

Keywords: E-Government, Internet, Web-Based System, Society.

7535 A CT-based Monte Carlo Dose Calculations for Proton Therapy Using a New Interface Program

Authors: A. Esmaili Torshabi, A. Terakawa, K. Ishii, H. Yamazaki, S. Matsuyama, Y. Kikuchi, M. Nakhostin, H. Sabet, A. Ishizaki, W. Yamashita, T. Togashi, J. Arikawa, H. Akiyama, K. Koyata

Abstract:

The purpose of this study is to introduce a new interface program to calculate dose distributions with the Monte Carlo method in complex heterogeneous systems such as organs or tissues in proton therapy. This interface program was developed in MATLAB and includes a friendly graphical user interface with several tools, such as image property adjustment and result display. The quadtree decomposition technique was used as an image segmentation algorithm to create optimum geometries from Computed Tomography (CT) images for proton beam dose calculations. The result of this technique is a set of non-overlapping squares of different sizes in every image. In this way, the resolution of the image segmentation is high enough in and near heterogeneous areas to preserve the precision of the dose calculations, and low enough in homogeneous areas to directly reduce the number of cells. Furthermore, a cell reduction algorithm can be used to combine neighboring cells of the same material. The validation of this method has been done in two ways: first, by comparison with experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) at Tohoku University, and second, by comparison with data based on the polybinary tissue calibration method, also performed at CYRIC. These results are presented in this paper. The program can read the output file of the Monte Carlo code while the region of interest is selected manually, and gives a plot of the proton beam dose distribution superimposed onto the CT images.
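
The following Python sketch shows a minimal recursive quadtree decomposition of a 2-D intensity array: a square is split into four quadrants until its intensity range falls below a threshold or the minimum cell size is reached. The toy image, threshold and minimum size are invented and do not reflect the authors' exact homogeneity criterion.

    import numpy as np

    def quadtree(img, x, y, size, max_range=10, min_size=2, cells=None):
        """Split the square at (x, y) of side `size` until its intensity range is small."""
        if cells is None:
            cells = []
        block = img[y:y + size, x:x + size]
        if size <= min_size or block.max() - block.min() <= max_range:
            cells.append((x, y, size, float(block.mean())))   # homogeneous cell: keep mean value
            return cells
        half = size // 2
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            quadtree(img, x + dx, y + dy, half, max_range, min_size, cells)
        return cells

    # Toy "CT slice": a 64x64 background with a denser square insert (heterogeneous region).
    img = np.full((64, 64), 40, dtype=float)
    img[20:40, 20:40] = 200.0
    cells = quadtree(img, 0, 0, 64)
    print(len(cells), "cells; the smallest cells cluster around the insert boundary")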

Keywords: Monte Carlo, CT images, Quadtree decomposition, Interface program, Proton beam

7534 Easy Shopping by Electronic Credit

Authors: M. Kargar, A. Isazadeh, F. Fartash, T. Saderi

Abstract:

In this paper we suggest a method for setting up electronic credits for customers. In this method, banks and market-sites help each other to make large purchases through the Internet easy. By developing this system, people who have too little money to buy most of the things they want become able to buy them through a credit. This credit is given by market-sites under banking control. The suggested method can help prevent imprisonment because of banking debts.

Keywords: E-Business, E-Credit, Market-site, Buy-site, Bank, E-Commerce.

7533 Preliminary Overview of Data Mining Technology for Knowledge Management System in Institutions of Higher Learning

Authors: Muslihah Wook, Zawiyah M. Yusof, Mohd Zakree Ahmad Nazri

Abstract:

Data mining has been integrated into application systems to enhance the quality of the decision-making process. This study focuses on the integration of data mining technology and Knowledge Management Systems (KMS), due to the ability of data mining technology to create useful knowledge from large volumes of data, while KMS vitally support the creation and use of knowledge. The integration of data mining technology and KMS is popularly used in business for enhancing and sustaining organizational performance. However, there is a lack of studies that apply data mining technology and KMS in the education sector, particularly to students' academic performance, even though this could reflect the performance of Institutions of Higher Learning (IHLs). Realizing its importance, this study seeks to integrate data mining technology and KMS to promote effective management of knowledge within IHLs. Several concepts from the literature are adapted to propose a new integrative data mining technology and KMS framework for an IHL.

Keywords: Data mining, Institutions of Higher Learning, Knowledge Management System, Students' academic performance.

7532 Towards a Secure Storage in Cloud Computing

Authors: Mohamed Elkholy, Ahmed Elfatatry

Abstract:

Cloud computing has emerged as a flexible computing paradigm that has reshaped the Information Technology map. However, cloud computing brought about a number of security challenges as a result of the physical distribution of computational resources and the limited control that users have over the physical storage. This situation raises many security challenges for data integrity and confidentiality as well as authentication and access control. This work proposes a security mechanism for data integrity that allows a data owner to be aware of any modification that takes place to his data. The data integrity mechanism is integrated with an extended Kerberos authentication that ensures authorized access control. The proposed mechanism protects data confidentiality even if the data are stored on untrusted storage. The proposed mechanism has been evaluated against different types of attacks and proved efficient in protecting cloud data storage from malicious attacks.

Keywords: Access control, data integrity, data confidentiality, Kerberos authentication, cloud security.

7531 Thailand National Biodiversity Database System with webMathematica and Google Earth

Authors: W. Katsarapong, W. Srisang, K. Jaroensutasinee, M. Jaroensutasinee

Abstract:

The National Biodiversity Database System (NBIDS) has been developed for collecting Thai biodiversity data. The goal of this project is to provide advanced tools for querying, analyzing, modeling, and visualizing patterns of species distribution for researchers and scientists. NBIDS records two types of datasets: biodiversity data and environmental data. Biodiversity data comprise species presence data and species status. The attributes of biodiversity data can be further classified into two groups: universal and project-specific attributes. Universal attributes are attributes common to all records, e.g. X/Y coordinates, year, and collector name. Project-specific attributes are attributes unique to one or a few projects, e.g. flowering stage. Environmental data include atmospheric data, hydrology data, soil data, and land cover data collected using GLOBE protocols. We have developed web-based tools for data entry. Google Earth KML and ArcGIS were used as tools for map visualization. webMathematica was used for simple data visualization and also for advanced data analysis and visualization, e.g. spatial interpolation and statistical analysis. NBIDS will be used by park rangers at Khao Nan National Park and by researchers.

Keywords: GLOBE protocol, Biodiversity, Database System, ArcGIS, Google Earth and webMathematica.

7530 Evaluation of Clustering Based on Preprocessing in Gene Expression Data

Authors: Seo Young Kim, Toshimitsu Hamasaki

Abstract:

Microarrays have become effective, broadly used tools in biological and medical research to address a wide range of problems, including the classification of disease subtypes and tumors. Many statistical methods are available for analyzing and systematizing these complex data into meaningful information, and one of the main goals in analyzing gene expression data is the detection of samples or genes with similar expression patterns. In this paper, we assess and compare the performance of several clustering methods based on data preprocessing, including normalization and noise reduction strategies. We also evaluate each of these clustering methods with validation measures for both simulated data and real gene expression data. Consequently, clustering methods commonly used in microarray data analysis are affected by normalization and by the degree of noise in the datasets.
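
As a small, hedged illustration of the effect of preprocessing on clustering, the Python sketch below clusters a synthetic expression matrix with k-means and compares silhouette values computed on the raw data and on log-transformed, z-score normalized data; the data and preprocessing choices are assumptions, not the authors' exact pipeline.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    # Synthetic expression matrix: 20 samples x 100 genes, two sample groups with a shift.
    rng = np.random.default_rng(0)
    expr = rng.lognormal(mean=3.0, sigma=1.0, size=(20, 100))
    expr[10:, :30] *= 4.0                       # group 2 over-expresses the first 30 genes

    def cluster_quality(X):
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        return silhouette_score(X, labels)

    raw = expr
    preprocessed = StandardScaler().fit_transform(np.log2(expr + 1))   # log transform + z-score
    print("silhouette, raw data:         ", round(cluster_quality(raw), 3))
    print("silhouette, preprocessed data:", round(cluster_quality(preprocessed), 3))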

Keywords: Gene expression, clustering, data preprocessing.

7529 Validity of Universe Structure Conception as Nested Vortexes

Authors: Khaled M. Nabil

Abstract:

This paper introduces the Nested Vortexes conception of the universe structure and interprets the physical phenomena according to this conception. The paper first reviews recent physics theories, on both microscopic and macroscopic scales, to collect evidence that space is not empty. However, these theories describe the properties of the space medium without determining its structure. Determining the structure of the space medium is essential to understand the mechanism that leads to its properties. Without determining the space medium structure, many phenomena, such as electric and magnetic fields, gravity, or wave-particle duality, remain uninterpreted. Thus, this paper introduces a conception of the structure of the universe. It assumes that the universe is a medium of ultra-tiny homogeneous particles which are still undiscovered. As in any medium with certain movements, vortexes have occurred, possibly because of a great asymmetric explosion. A vortex condenses the ultra-tiny particles in its center, forming a bigger particle; the bigger particles, in turn, could be trapped in a bigger vortex and condense in its center, forming a much bigger particle, and so on. This conception describes galaxies, stars, and protons as particles at different levels. The existence of the particles' vortexes implies that the constancy of the speed of light postulate does not hold. This conception shows that the vortex motion dynamics agree with the motion of all the universe's particles at any level. An experiment has been carried out to detect the orbiting effect of aggregated vortexes of aligned atoms of a permanent magnet. Based on the described particle structure, the gravity force of a particle and the attraction between particles, as well as charge, electric and magnetic fields, and quantum mechanics characteristics, are interpreted, and the aforementioned physical phenomena are thereby resolved.

Keywords: Astrophysics, cosmology, particles’ structure model, particles’ forces, vortex dynamics.

7528 A Noble Flow Rate Control based on Leaky Bucket Method for Multi-Media OBS Networks

Authors: Kentaro Miyoko, Yoshihiko Mori, Yugo Ikeda, Yoshihiro Nishino, Yong-Bok Choi, Hiromi Okada

Abstract:

Optical burst switching (OBS) has been proposed to realize the next-generation Internet based on wavelength division multiplexing (WDM) network technologies. In OBS, burst contention is one of the major problems. Deflection routing has been designed to resolve this problem. However, deflection routing has difficulty preventing burst contentions as the network load becomes high. In this paper, we introduce flow rate control methods to reduce burst contentions. We propose new flow rate control methods based on the leaky bucket algorithm and deflection routing, i.e., the separate leaky bucket deflection method and the dynamic leaky bucket deflection method. In the proposed methods, edge nodes that generate data bursts carry out the flow rate control protocols. In order to verify the effectiveness of flow rate control in OBS networks, we show through computer simulations that the proposed methods improve network utilization and reduce the burst loss probability.
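
For reference, a classic leaky bucket meter at an edge node can be sketched in a few lines of Python, as below: a burst is conforming if it fits into the bucket, which drains at a constant rate, while non-conforming bursts would be held back or deflected. The rates and burst sizes are invented, and this is neither the separate nor the dynamic variant proposed in the paper.

    import time

    class LeakyBucket:
        """Classic leaky-bucket meter: bursts that would overflow the bucket are non-conforming."""
        def __init__(self, rate_bytes_per_s, capacity_bytes):
            self.rate = rate_bytes_per_s
            self.capacity = capacity_bytes
            self.level = 0.0
            self.last = time.monotonic()

        def admit(self, burst_bytes):
            now = time.monotonic()
            self.level = max(0.0, self.level - (now - self.last) * self.rate)  # leak since last check
            self.last = now
            if self.level + burst_bytes <= self.capacity:
                self.level += burst_bytes
                return True            # conforming: send the burst on the primary path
            return False               # non-conforming: hold the burst or deflect it

    # Toy edge node: 1 kB/s drain rate, 4 kB bucket, bursts of 1.5 kB every 0.5 s.
    bucket = LeakyBucket(rate_bytes_per_s=1000, capacity_bytes=4000)
    for i in range(8):
        print(f"burst {i}: {'sent' if bucket.admit(1500) else 'deflected'}")
        time.sleep(0.5)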

Keywords: Optical burst switching, OBS, flow rate control.

7527 Analysis of Organizational Factors Effect on Performing Electronic Commerce Strategy: A Case Study of the Namakin Food Industry

Authors: Seyed Hamidreza Hejazi Dehghani, Neda Khounsari

Abstract:

The quick growth of electronic commerce in developed countries means that developing nations must fundamentally change their commerce strategies. Most organizations are aware of the impact of the Internet and e-commerce on the future of their firm, and thus they have to focus on the organizational factors that affect the deployment of an e-commerce strategy. In this situation, it is essential to identify organizational factors such as organizational culture, human resources, size, structure and product/service that impact an e-commerce strategy. Accordingly, this research examines the effects of organizational factors on applying an e-commerce strategy in the Namakin food industry. The statistical population of this research is 95 managers and employees. Cochran's formula is used to determine the sample size, which is 77 of the statistical population. SPSS and SmartPLS software were utilized for analyzing the collected data. The results of hypothesis testing show that organizational factors have positive and significant effects on applying an e-commerce strategy. On the other hand, the sub-hypotheses for the organizational culture and size criteria were rejected, while the other sub-hypotheses were accepted.

Keywords: Electronic commerce, organizational factors, attitude of managers, organizational readiness.

7526 A Network Traffic Prediction Algorithm Based On Data Mining Technique

Authors: D. Prangchumpol

Abstract:

This paper describes an approach to predicting incoming and outgoing data rates in a network system by using association rule discovery, one of the data mining techniques. Information on incoming and outgoing data at each time, together with the network bandwidth, constitutes the network performance parameters needed to address the traffic problem, since congestion and data loss are important network problems. The result of this technique can predict future network traffic. In addition, this research is useful for network routing selection and network performance improvement.
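
The sketch below illustrates the flavor of such rule discovery in Python: incoming and outgoing rates are discretized into HIGH/LOW levels, and simple rules of the form "incoming level at time t implies outgoing level at t+1" are scored by support and confidence. The synthetic traffic and the one-step lag are assumptions, not the authors' method.

    import numpy as np
    from collections import Counter

    # Synthetic per-minute traffic: outgoing tends to follow incoming with a one-step lag.
    rng = np.random.default_rng(0)
    incoming = rng.gamma(shape=2.0, scale=50.0, size=500)
    outgoing = np.roll(incoming, 1) * 0.8 + rng.normal(0, 10, size=500)

    def level(x, series):
        return "HIGH" if x > np.median(series) else "LOW"

    # Build transactions "(incoming level at t, outgoing level at t+1)" and count them.
    pairs = Counter(
        (level(incoming[t], incoming), level(outgoing[t + 1], outgoing))
        for t in range(len(incoming) - 1)
    )
    total = sum(pairs.values())
    for in_lvl in ("HIGH", "LOW"):
        antecedent = sum(c for (a, _), c in pairs.items() if a == in_lvl)
        for out_lvl in ("HIGH", "LOW"):
            c = pairs[(in_lvl, out_lvl)]
            print(f"incoming={in_lvl} -> next outgoing={out_lvl}: "
                  f"support={c / total:.2f}, confidence={c / antecedent:.2f}")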

Keywords: Traffic prediction, association rule, data mining.

7525 Fuzzy Processing of Uncertain Data

Authors: Petr Morávek, Miloš Šeda

Abstract:

In practice, we often come across situations where it is necessary to make decisions based on incomplete or uncertain data. In control systems this may be due to an unknown exact mathematical model, or to its excessive complexity (e.g. nonlinearity), when it is necessary to simplify it or to solve it using a rule base. In the case of databases, when searching data we compare, by means of a similarity measure, the requirements of the selection with the stored data, where both the select query and the data itself may contain vague terms, for example in the form of linguistic qualifiers. In this paper, we focus on the processing of uncertain data in databases and demonstrate it on an example of multi-criteria decision making in the selection of variants specified by a larger number of technical parameters.
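
A small Python sketch of this kind of fuzzy matching follows: linguistic qualifiers are modeled with triangular membership functions, each stored variant receives a degree of match per criterion, and the degrees are aggregated with a minimum (fuzzy AND) to rank the variants. The terms, membership shapes and data are invented for illustration.

    def triangular(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Linguistic terms for two technical parameters (shapes are illustrative).
    cheap = lambda price: triangular(price, 0, 200, 600)     # degree of "cheap"
    fast  = lambda speed: triangular(speed, 50, 120, 130)    # degree of "fast"

    variants = {
        "variant A": {"price": 250, "speed": 110},
        "variant B": {"price": 550, "speed": 125},
        "variant C": {"price": 150, "speed": 70},
    }

    # Vague query "cheap AND fast": aggregate per-criterion degrees with the minimum (fuzzy AND).
    scores = {name: min(cheap(v["price"]), fast(v["speed"])) for name, v in variants.items()}
    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name}: degree of match {score:.2f}")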

Keywords: fuzzy logic, linguistic variable, multicriteria decision
