Search results for: Data collection
7312 Identifying E-Learning Components at North-West University, Mafikeng Campus
Authors: Sylvia Tumelo Nthutang, Nehemiah Mavetera
Abstract:
Educational institutions are under pressure from their competitors, while regulators and community groups expect them to adopt appropriate business and organizational practices. Globally, educational institutions now use e-learning as a central teaching and learning approach, and e-learning is becoming the center of attention for learning institutions, educational systems and software developers. North-West University (NWU) currently uses eFundi, a Learning Management System (LMS). An LMS comprises the information systems and procedures that add value to students' learning and support the learning material in text or any multimedia files. With various e-learning tools, students are able to access all the materials related to a course in electronic form. The study was tasked with identifying the e-learning components at the NWU Mafikeng campus. A quantitative research methodology was used for data collection, and descriptive statistics were used for data analysis. Activity Theory (AT) was used to guide the study. AT outlines the tensions between e-learning at the macro-organizational level (plans, guiding principles, campus-wide solutions) and the micro-organizational level (daily working practice, collaborative transformation, specific adaptation). In a technological environment, AT gives people an opportunity to move beyond concentrating on computers as the sole area of concern and to understand that technology is part of human activities. The findings identify the university's current IT tools and its knowledge of e-learning elements. It was recommended that the university consider buying computer resources that consume less power and practice e-learning effectively.
Keywords: E-learning, information and communication technology, teaching, and virtual learning environment.
7311 Integration of Multi-Source Data to Monitor Coral Biodiversity
Authors: K. Jitkue, W. Srisang, C. Yaiprasert, K. Jaroensutasinee, M. Jaroensutasinee
Abstract:
This study aims to use multi-source data to monitor coral biodiversity and coral bleaching. The coral reefs at the Racha Islands, Phuket, were used as the study area. Three sources of data were used: coral diversity data, sensor-based data and satellite data.
Keywords: Coral reefs, remote sensing, sea surface temperature, satellite imagery.
7310 Tourism Competitiveness Survey Analysis of Serbian Ski Resorts
Authors: Marijana Pantić, Saša Milijić
Abstract:
In Serbia, a continental country, the tourism industry relies on city-break, spa and mountain tourism, with ski resorts taking primacy during the winter season. Even though the number of tourists has recently increased, the share of domestic tourists remains predominant. It has also been noticed that tourists from Serbia eagerly travel abroad, which has so far been researched in the context of summer holidays but not in the context of ski resorts. Therefore, this paper examines the competitiveness of ski resorts in Serbia from the perspective of domestic tourists. A survey was used as the data collection method, covering various dimensions of competitiveness. The aim is to identify the main motives of consumers when choosing a ski resort in Serbia or abroad. The results showed that the choices of Serbian tourists are predominantly shaped by the cost of an offer, above all the cost of accommodation. Tourists pay close attention to value for money, which is the most common reason for choosing a ski resort abroad over a domestic one. Crowding at ski resorts and on ski runs appears to result from an imbalance between accommodation capacity on the one hand and ski infrastructure on the other, which is currently the most notable competitiveness drawback of ski resorts in Serbia.
Keywords: Mountain tourism, Serbia, ski resorts, tourism competitiveness.
7309 Decision Support System Based on Data Warehouse
Authors: Yang Bao, LuJing Zhang
Abstract:
A typical intelligent decision support system rests on four components: a data warehouse, online analytical processing (OLAP), data mining and model-based decision support; this combination is called a Decision Support System Based on Data Warehouse (DSSBDW). This approach takes ETL, OLAP and data mining as its implementation means and integrates traditional model-driven DSS and data-driven DSS into a whole. This paper analyzes the DSSBDW architecture and the data warehouse model and discusses the following key issues: ETL design and realization; metadata management using XML; and SQL implementation, performance optimization and data mapping in OLAP. Finally, it illustrates the design principles and methods of the data warehouse in DSSBDW.
Keywords: Decision Support System, Data Warehouse, Data Mining.
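As a rough illustration of the ETL step discussed in this abstract (not the authors' implementation), the following minimal Python sketch extracts operational records, derives a small fact table and loads it into a warehouse store; the table and column names, and the in-memory source data, are hypothetical.

```python
# Minimal ETL sketch: extract -> transform -> load (illustrative only).
import sqlite3
import pandas as pd

# Extract: in practice this would read from an operational source system;
# here a tiny in-memory stand-in is used so the sketch runs as-is.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "order_date": pd.to_datetime(["2024-01-05", "2024-01-20",
                                  "2024-02-03", "2024-02-17"]),
    "product_id": ["P1", "P1", "P2", "P1"],
    "amount": [120.0, 80.0, 200.0, 150.0],
})

# Transform: derive a monthly revenue fact table.
orders["month"] = orders["order_date"].dt.to_period("M").astype(str)
fact_revenue = (orders.groupby(["month", "product_id"], as_index=False)
                      .agg(total_revenue=("amount", "sum"),
                           order_count=("order_id", "count")))

# Load: write the fact table into the warehouse (here, a local SQLite file).
with sqlite3.connect("warehouse.db") as conn:
    fact_revenue.to_sql("fact_monthly_revenue", conn,
                        if_exists="replace", index=False)
print(fact_revenue)
```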
7308 The Relationship of Building Information Modeling (BIM) Capability in Quantity Surveying Practice and Project Performance
Authors: P. F. Wong, H. Salleh, F. A. Rahim
Abstract:
The adoption of building information modeling (BIM) is increasing in the construction industry. However, quantity surveyors have been slower to adopt it than other professions due to a lack of awareness of BIM's potential in their profession. It is still unclear how BIM application can enhance quantity surveyors' work performance and project performance. The aim of this research is to identify the capabilities of BIM in quantity surveying practice and to examine the relationship between BIM capabilities and project performance. A questionnaire survey and interviews were adopted for data collection. The literature review identified eleven BIM capabilities in quantity surveying practice. The questionnaire results showed that several BIM capabilities are significantly correlated with project performance in terms of time, cost and quality, and these results were validated through interviews. The findings show that BIM has the capability to enhance quantity surveyors' performance and subsequently improve project performance.
Keywords: Building information modeling (BIM), quantity surveyors, capability, project performance.
7307 Judicial Review of Indonesia's Position as the First Archipelagic State to Implement the Traffic Separation Scheme to Establish Maritime Safety and Security
Authors: Rosmini Yanti, Safira Aviolita, Marsetio
Abstract:
Indonesia has several straits that are very important as shipping lanes, including the Sunda Strait and the Lombok Strait, which are part of the Indonesian Archipelagic Sea Lanes (IASL). An increase in traffic in the archipelagic waters makes the task of monitoring sea routes increasingly difficult. Indonesia has proposed the establishment of a Traffic Separation Scheme (TSS) in the Sunda Strait and the Lombok Strait, and the country now has the right to conceptualize the TSS as well as the obligation to regulate it. Indonesia has the right to maintain national safety and sovereignty. In establishing the TSS, Indonesia needs to issue national regulations that are in accordance with international law; the general provisions of the International Maritime Organization (IMO) can then be used as guidelines for maritime safety and security in the Sunda Strait and the Lombok Strait. The research method used is a qualitative method based on the collection of linguistic and visual data, with documents and regulations as the data sources. The results show that the determination of the TSS is justified under international law, in accordance with Articles 22, 41 and 53 of the United Nations Convention on the Law of the Sea (UNCLOS) 1982. The determination of the TSS by the Indonesian government would also be in accordance with Rule 10 of the International Regulations for Preventing Collisions at Sea (COLREGs) and has been designed to follow the IASL. Thus, the TSS can function as a safety and monitoring medium to minimize ship accidents or collisions, including those involving warships and aircraft of other countries crossing the IASL.
Keywords: Archipelago State, maritime law, maritime security, traffic separation scheme.
7306 A New History Based Method to Handle the Recurring Concept Shifts in Data Streams
Authors: Hossein Morshedlou, Ahmad Abdollahzade Barforoush
Abstract:
Recent developments in storage technology and networking architectures have made it possible for broad areas of application to rely on data streams for quick responses and accurate decision making. Data streams are generated from real-world events, so it is logical that associations among the occurrences of these events appear as associations among the concepts of a data stream. Extracting these hidden associations can be useful for predicting subsequent concepts in concept-shifting data streams. In this paper we present a new method for learning associations among the concepts of a data stream and predicting what the next concept will be. Knowing the next concept makes an informed update of the data model possible. The results of the conducted experiments show that the proposed method is well suited to classifying concept-shifting data streams.
Keywords: Data stream, classification, concept shift, history.
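This is not the authors' algorithm, but the core idea of keeping a history of observed concepts and predicting the next one can be sketched with a simple transition-count model; the concept labels below are made up.

```python
# Illustrative sketch: learn transition counts between observed concepts
# and predict the most likely next concept from the history.
from collections import defaultdict

class ConceptHistory:
    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last_concept = None

    def observe(self, concept):
        """Record that `concept` followed the previously observed concept."""
        if self.last_concept is not None:
            self.transitions[self.last_concept][concept] += 1
        self.last_concept = concept

    def predict_next(self):
        """Return the concept that most often followed the current one."""
        followers = self.transitions.get(self.last_concept)
        if not followers:
            return None
        return max(followers, key=followers.get)

history = ConceptHistory()
for concept in ["A", "B", "A", "B", "C", "A", "B"]:
    history.observe(concept)
# After "B", concepts "A" and "C" are tied; max() returns "A" here.
print(history.predict_next())
```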
7305 Incremental Learning of Independent Topic Analysis
Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda
Abstract:
In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of documents. The amount of document data has been increasing since the spread of the Internet, and ITA was proposed as one method for analyzing such data. ITA extracts independent topics from document data by using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing number of documents, because ITA must use all the document data, so its temporal and spatial cost is very high. We therefore present Incremental ITA, which extracts independent topics from an increasing number of documents by updating the topics extracted from the previous data whenever new documents are added, instead of recomputing them from scratch. We show the results of applying Incremental ITA to benchmark datasets.
Keywords: Text mining, topic extraction, independent topics, incremental learning, independent component analysis.
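The ICA step at the core of ITA can be illustrated, in a simplified and non-incremental form that is only a sketch of the general idea rather than the authors' method, with scikit-learn's FastICA applied to a TF-IDF term-document matrix; the example documents are made up.

```python
# Sketch: extract "independent topics" from documents with TF-IDF + FastICA.
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "stock market prices fell sharply today",
    "the team won the championship game",
    "new vaccine trial shows promising results",
    "investors worry about interest rates",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents).toarray()   # documents x terms

ica = FastICA(n_components=2, random_state=0)
doc_topic = ica.fit_transform(X)           # document-topic activations
topic_term = ica.components_               # topic-term weights

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(topic_term):
    top = weights.argsort()[::-1][:3]      # three strongest terms per topic
    print(f"topic {k}:", [terms[i] for i in top])
```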
7304 A Framework for Data Mining Based Multi-Agent: An Application to Spatial Data
Authors: H. Baazaoui Zghal, S. Faiz, H. Ben Ghezala
Abstract:
Data mining is an extraordinarily demanding field concerned with the extraction of implicit knowledge and relationships that are not explicitly stored in databases. A wide variety of data mining methods have been introduced (classification, characterization, generalization, etc.), and each of these methods includes more than one algorithm. A data mining system involves different user categories, which means that the user's behavior must be a component of the system. The problem at this level is to know which algorithm of which method to employ for an exploratory purpose, which one for a decisional purpose, and how they can collaborate and communicate. The agent paradigm presents a new way of conceiving and realizing a data mining system. The purpose is to combine different data mining algorithms to prepare elements for decision-makers, benefiting from the possibilities offered by multi-agent systems. In this paper the agent framework for data mining is introduced, and its overall architecture and functionality are presented. The validation is carried out on spatial data, and the principal results are presented.
Keywords: Databases, data mining, multi-agent, spatial data mart.
7303 Latent Topic Based Medical Data Classification
Authors: Jian-hua Yeh, Shi-yi Kuo
Abstract:
This paper discusses the classification process for medical data, using the data from the ACM KDD Cup 2008 to demonstrate a classification process based on latent topic discovery. In this data set, the target set and the outliers are quite different in nature: the target set makes up only 0.6% of the data, while the outliers constitute the remaining 99.4%. We use this data set as an example to show how we dealt with an extremely biased data set using latent topic discovery and noise reduction techniques. Our experiment faces two major challenges: (1) extremely scattered outliers, and (2) positive samples that are far fewer than negative ones. We propose a suitable process flow to deal with these issues and obtain a best AUC result of 0.98.
Keywords: Classification, latent topics, outlier adjustment, feature scaling.
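Independently of the authors' pipeline, the evaluation setting described above (a heavily imbalanced data set scored with AUC) can be sketched in a few lines with scikit-learn; the synthetic data and the class-weighted classifier are illustrative assumptions, not the paper's method.

```python
# Sketch: class-weighted classifier evaluated with ROC AUC on imbalanced data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an extremely imbalanced data set (~1% positives).
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=0)

# class_weight="balanced" counteracts the rarity of positive samples.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, scores))
```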
7302 Mobile Collaboration Learning Technique on Students in Developing Nations
Authors: Amah Nnachi Lofty, Oyefeso Olufemi, Ibiam Udu Ama
Abstract:
New and more powerful communication technologies continue to emerge at a rapid pace; their uses in education are widespread and their impact remarkable in developing societies. This study investigates the effect of the Mobile Collaboration Learning Technique (MCLT) on learners' outcomes among students in tertiary institutions of developing nations (a case study of Nigerian students). It examines the significance of retention and achievement scores of students taught using mobile collaboration versus the conventional method. The sample consisted of 120 students selected by stratified random sampling. Five research questions and hypotheses were formulated and tested at the 0.05 level of significance. A student achievement test (SAT) consisting of 40 multiple-choice items was developed and validated by professionals for data collection, and was administered to students as a pre-test and a post-test. The data were analyzed using the t-test statistic to test the hypotheses. The results indicated that students taught using MCLT performed significantly better than their counterparts taught using the conventional method of instruction. Also, there was no significant difference between the post-test performance scores of male and female students taught using MCLT. Based on the findings, it is recommended that mobile collaboration systems be encouraged in institutions to boost knowledge sharing among learners, that workshops and training be organized to train teachers in the use of this technique, and that schools and government consistently align curriculum standards with technological trends and formulate policies and procedures for the responsible use of MCLT.
Keywords: Education, communication, learning, mobile collaboration, technology.
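For illustration only, an independent-samples t-test of the kind described above can be run with SciPy; the scores below are made up and are not the study's data.

```python
# Sketch: independent-samples t-test on post-test scores of two groups.
from scipy import stats

# Hypothetical post-test scores (not the study's data).
mclt_scores = [32, 35, 30, 33, 36, 31, 34, 37, 33, 35]
conventional_scores = [27, 29, 26, 30, 28, 25, 29, 27, 28, 26]

t_stat, p_value = stats.ttest_ind(mclt_scores, conventional_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis at the 0.05 level of significance.")
else:
    print("Fail to reject the null hypothesis.")
```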
7301 Data Transformation Services (DTS): Creating Data Mart by Consolidating Multi-Source Enterprise Operational Data
Authors: J. D. D. Daniel, K. N. Goh, S. M. Yusop
Abstract:
Trends in business intelligence, e-commerce and remote access make it necessary and practical to store data in different ways on multiple systems with different operating systems. As businesses evolve and grow, they require efficient computerized solutions to perform data updates and to access data from diverse enterprise business applications. The objective of this paper is to demonstrate the capability of DTS [1] as a database solution for automatic data transfer and update in solving a business problem. The DTS package is developed for a business selling a variety of plants that eventually expanded into commercial supply and landscaping. Dimensional data modeling is used in the DTS package to extract, transform and load data from heterogeneous database systems such as MySQL, Microsoft Access and Oracle, consolidating it into a data mart residing in SQL Server. The data transfer from the various databases is scheduled to run automatically every quarter of the year to support efficient sales analysis. DTS is therefore an attractive solution for automatic data transfer and update that meets today's business needs.
Keywords: Data Transformation Services (DTS), Object Linking and Embedding Database (OLE DB), data mart, Online Analytical Processing (OLAP), Online Transaction Processing (OLTP).
7300 Extraction of Data from Web Pages: A Vision Based Approach
Authors: P. S. Hiremath, Siddu P. Algur
Abstract:
With the explosive growth of information sources available on the World Wide Web, it has become increasingly difficult to identify the relevant pieces of information, since web pages are often cluttered with irrelevant content such as advertisements, navigation panels and copyright notices surrounding the main content of the page. Hence, tools for mining data regions, data records and data items need to be developed in order to provide value-added services. Currently available automatic techniques to mine data regions from web pages are still unsatisfactory because of their poor performance and tag dependence. In this paper a novel method to extract data items from web pages automatically is proposed. It comprises two steps: (1) identification and extraction of data regions based on visual clues, and (2) identification of data records and extraction of data items from a data region. For step 1, a novel and more effective method based on visual clues is proposed, which finds the data regions formed by all types of tags. For step 2, a method named Extraction of Data Items from web Pages (EDIP) is adopted to mine data items; EDIP is a list-based approach in which the list is a linear data structure. The proposed technique is able to mine non-contiguous data records and can correctly identify data regions irrespective of the type of tag in which they are bound. Our experimental results show that the proposed technique performs better than existing techniques.
Keywords: Web data records, web data regions, web mining.
7299 Visual-Graphical Methods for Exploring Longitudinal Data
Authors: H. W. Ker
Abstract:
Longitudinal data typically exhibit changes over time, nonlinear growth patterns, between-subject variability, and within-subject errors showing heteroscedasticity and dependence. Their exploration is more complicated than that of cross-sectional data. The purpose of this paper is to organize and integrate various visual-graphical techniques for exploring longitudinal data. By applying the proposed methods, investigators can answer research questions that include characterizing or describing growth patterns at both the group and the individual level, identifying the time points where important changes occur as well as unusual subjects, selecting suitable statistical models, and suggesting possible within-error variance structures.
Keywords: Data exploration, exploratory analysis, HLMs/LMEs, longitudinal data, visual-graphical methods.
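As one concrete example of the visual-graphical tools referred to above (illustrative only, with simulated data rather than any data from the paper), a spaghetti plot of individual trajectories overlaid with the group mean can be drawn with Matplotlib.

```python
# Sketch: a "spaghetti plot" of individual trajectories plus the group mean,
# one of the standard visual-graphical tools for longitudinal data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
time = np.arange(0, 6)                        # measurement occasions
n_subjects = 20

fig, ax = plt.subplots()
trajectories = []
for _ in range(n_subjects):
    # Hypothetical nonlinear growth with subject-specific variation.
    y = 10 + 3 * np.sqrt(time) + rng.normal(0, 1, size=time.size)
    trajectories.append(y)
    ax.plot(time, y, color="grey", alpha=0.4)  # individual-level pattern

ax.plot(time, np.mean(trajectories, axis=0),
        color="black", linewidth=2, label="group mean")
ax.set_xlabel("Time")
ax.set_ylabel("Outcome")
ax.legend()
plt.show()
```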
7298 Developing Efficient Testing and Unloading Procedures for a Local Sewage Holding Pit
Authors: Esra E. Aleisa
Abstract:
A local municipality has decided to build a sewage pit to receive residential sewage waste arriving by tank trucks. The daily accumulated waste is to be pumped to a nearby wastewater treatment facility and reused for agricultural and construction projects. A discrete-event simulation model built with Arena software was constructed to assist in defining the capacity of the system in cubic meters, the number of tank trucks using the system, the number of unloading docks required, the number of standby areas needed, and the manpower required for data collection at the entrance checkpoint and for truck-load toxicity testing. The results of the model are statistically validated. Simulation turned out to be an excellent tool in the facility planning effort for the pit project, as it ensured smooth flow of tank-truck load discharge and the best utilization of facilities on site.
Keywords: Discrete-event simulation, facilities planning, layout, pit, sewage management.
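The original model was built in Arena; purely as an illustration of the same modeling style, a comparable discrete-event sketch of trucks queuing for a limited number of unloading docks can be written in Python with SimPy. All parameters (arrival rate, service times, number of docks) are invented, not those of the study.

```python
# Sketch: tank trucks arriving, being tested at a checkpoint, then unloading
# at a limited number of docks (illustrative parameters, not the study's).
import random
import simpy

N_DOCKS = 2
SIM_HOURS = 12

def truck(env, docks, wait_times):
    arrival = env.now
    yield env.timeout(random.uniform(0.05, 0.15))    # toxicity test at checkpoint
    with docks.request() as req:
        yield req                                     # queue for a free dock
        wait_times.append(env.now - arrival)
        yield env.timeout(random.uniform(0.3, 0.6))   # discharge the load

def arrivals(env, docks, wait_times):
    while True:
        yield env.timeout(random.expovariate(1 / 0.25))  # ~4 trucks per hour
        env.process(truck(env, docks, wait_times))

random.seed(42)
env = simpy.Environment()
docks = simpy.Resource(env, capacity=N_DOCKS)
wait_times = []
env.process(arrivals(env, docks, wait_times))
env.run(until=SIM_HOURS)
print(f"served {len(wait_times)} trucks, "
      f"mean wait {sum(wait_times) / len(wait_times):.2f} h")
```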
7297 Potentials of Raphia hookeri Wine in Livelihood Sustenance among Rural and Urban Populations in Nigeria
Authors: A. A. Aiyeloja, A.T. Oladele, O. Tumulo
Abstract:
Raphia wine is an important forest product with cultural significance besides its use as medicine and food in southern Nigeria. This work aims to evaluate the profitability of Raphia wine production and marketing in Sapele Local Government Area, Nigeria. Four communities (Sapele, Ogiede, Okuoke and Elume) were randomly selected for data collection via questionnaires among producers and marketers. A total of 50 producers and 34 marketers were randomly selected for interview. Data were analyzed using descriptive statistics, profit margins, multiple regression and the rate of returns on investment (RORI). Annual average profit was highest in Okuoke (producers: N90,000.00; marketers: N70,000.00) and lowest in Sapele (producers: N50,000.00; marketers: N45,000.00). The calculated RORI for marketers was 40.0% in Elume, 25.0% in Okuoke, 33.3% in Ogiede and 50.0% in Sapele. Regression results showed that location has significant effects (p = 0.000, p ≤ 0.05) on profit margins. Males (58.8%) and females (41.2%) invest in Raphia wine marketing, while production is dominated exclusively by males (100.0%). The results showed that Raphia wine has the potential to generate household income, enhance food security and improve quality of life in rural, semi-urban and urban communities. Improved marketing channels, storage facilities and credit facilities via cooperative groups are recommended for producers and marketers, to be provided by the agencies concerned.
Keywords: Raphia wine, Profit margin, RORI, Livelihood, Nigeria.
7296 Automatic Enhanced Update Summary Generation System for News Documents
Authors: S. V. Kogilavani, C. S. Kanimozhiselvi, S. Malliga
Abstract:
Fast-changing knowledge systems on the Internet can be accessed more efficiently with the help of automatic document summarization and updating techniques. The aim of multi-document update summary generation is to construct a summary reflecting the mainstream of information in a collection of documents, based on the hypothesis that the user has already read a set of previous documents. In order to draw more semantic information from the documents, deeper linguistic and semantic analysis of the source documents is used instead of relying only on document word frequencies to select important concepts. Producing a responsive summary requires meaning-oriented structural analysis. To address this issue, the proposed system presents a document summarization approach based on sentence annotation with aspects, prepositions and named entities. A semantic element extraction strategy is used to select important concepts from the documents, which are then used to generate an enhanced semantic summary.
Keywords: Aspects, named entities, prepositions, update summary.
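The abstract does not name a specific NLP toolkit; as an assumption for illustration only, sentence-level annotation with named entities and prepositions of the kind described could be sketched with spaCy as follows.

```python
# Sketch: annotate each sentence with its named entities and prepositions.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
text = ("The ministry released an updated report on Monday. "
        "Officials in Jakarta expect further announcements next week.")

doc = nlp(text)
for sent in doc.sents:
    entities = [(ent.text, ent.label_) for ent in sent.ents]
    prepositions = [tok.text for tok in sent if tok.pos_ == "ADP"]
    print(sent.text.strip())
    print("  entities:    ", entities)
    print("  prepositions:", prepositions)
```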
7295 A Materialized Approach to the Integration of XML Documents: the OSIX System
Authors: H. Ahmad, S. Kermanshahani, A. Simonet, M. Simonet
Abstract:
The data exchanged on the Web are of a different nature from those handled by classical database management systems; these data are called semi-structured data, since they do not have a regular and static structure like data found in a relational database, and their schema is dynamic and may contain missing data or types. This raises the need to develop further techniques and algorithms to exploit and integrate such data and to extract information relevant to the user. In this paper we present the OSIX system (Osiris-based System for Integration of XML Sources). This system has a data warehouse model designed for the integration of semi-structured data, and more precisely for the integration of XML documents. The architecture of OSIX relies on the Osiris system, a DL-based model designed for the representation and management of databases and knowledge bases. Osiris is a view-based data model whose indexing system supports semantic query optimization. We show that query processing on an XML source is optimized by the indexing approach proposed by Osiris.
Keywords: Data integration, semi-structured data, views, XML.
7294 A Survey of 2nd Year Students’ Frequent English Writing Errors and the Effects of Participatory Error Correction Process
Authors: Chaiwat Tantarangsee
Abstract:
The purposes of this study are 1) to study the effects of the participatory error correction process and 2) to find out the students’ satisfaction with such an error correction process. This study is quasi-experimental research with a single group, in which data were collected five times before and after four experimental sessions of the participatory error correction process, which included providing coded indirect corrective feedback in the students’ texts together with error treatment activities. The sample comprised 52 second-year English major students of the Faculty of Humanities and Social Sciences, Suan Sunandha Rajabhat University. The tool for the experimental study was the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tools for data collection were five writing tests of short texts and a questionnaire. Based on formative evaluation of the students’ writing ability before and after each of the four experiments, the research findings show higher student scores, with a statistically significant difference at the 0.00 level. Moreover, in terms of effect size, comparison of the means of the students’ scores before and after the four experiments gives d = 0.6801, 0.5093, 0.5071 and 0.5296 respectively. It can be concluded that the participatory error correction process enables all of the students to learn equally well and improves their ability to write short texts. Finally, the students’ overall satisfaction with the participatory error correction process is at a high level (mean = 4.39, S.D. = 0.76).
Keywords: Coded indirect corrective feedback, participatory error correction process, error treatment.
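The effect sizes d reported above are of the Cohen's d type; for illustration only, a small sketch of computing d from two sets of scores (made-up numbers, not the study's data) is given below.

```python
# Sketch: Cohen's d effect size for pre-test vs. post-test scores.
import statistics

def cohens_d(group_a, group_b):
    """Effect size using the pooled standard deviation."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_b - mean_a) / pooled_sd

pre = [12, 14, 11, 13, 15, 12, 14, 13]    # hypothetical pre-test scores
post = [15, 16, 14, 15, 18, 14, 17, 16]   # hypothetical post-test scores
print(f"d = {cohens_d(pre, post):.4f}")
```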
7293 Data-Driven Decision-Making in Digital Entrepreneurship
Authors: Abeba Nigussie Turi, Xiangming Samuel Li
Abstract:
Data-driven business models are more typical of established businesses than of early-stage startups that are striving to penetrate a market. This paper provides an extensive discussion of the principles of data analytics for early-stage digital entrepreneurial businesses. We develop a data-driven decision-making (DDDM) framework that applies to startups prone to multifaceted barriers such as poor data access and technical and financial constraints, to name a few. The startup DDDM framework proposed in this paper is novel in that it encompasses startup data analytics enablers and metrics aligned with startups' business models, ranging from customer-centric product development to servitization, which is the future of modern digital entrepreneurship.
Keywords: Startup data analytics, data-driven decision-making, data acquisition, data generation, digital entrepreneurship.
7292 Classifying Bio-Chip Data using an Ant Colony System Algorithm
Authors: Minsoo Lee, Yearn Jeong Kim, Yun-mi Kim, Sujeung Cheong, Sookyung Song
Abstract:
Bio-chips are used for experiments on genes and contain various information such as genes, samples and so on. Two-dimensional bio-chips, in which one axis represents genes and the other represents samples, are widely used these days. Instead of experimenting with real genes, which costs a lot of money and time, bio-chips are used for biological experiments. Extracting data from bio-chips with high accuracy and finding patterns or useful information in such data is therefore very important. Bio-chip analysis systems extract data from various kinds of bio-chips and mine the data in order to obtain useful information. One of the commonly used mining methods is classification. The algorithm used to classify the data can vary depending on the data types, numerical characteristics and so on. Considering that bio-chip data are extremely large, an algorithm that imitates an ecosystem, such as the ant algorithm, is suitable for classification. This paper focuses on finding classification rules in bio-chip data using the Ant Colony algorithm, which imitates an ecosystem. The developed system takes into consideration the accuracy of the discovered rules when applying them to the bio-chip data in order to predict the classes.
Keywords: Ant Colony System, DNA chip data, classification.
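As a strongly simplified, hypothetical sketch of the ant-colony idea for rule discovery (single-term rules only, an invented toy data set, and not the developed system described above), pheromone-weighted term selection with quality-based reinforcement can be illustrated as follows.

```python
# Sketch of the ant-colony idea for rule discovery: ants pick attribute=value
# terms with probability proportional to pheromone, and terms that lead to
# accurate rules get their pheromone reinforced.
import random

# Tiny hypothetical training set: (attributes, class label).
data = [({"gene_a": "high", "gene_b": "low"}, "tumor"),
        ({"gene_a": "high", "gene_b": "high"}, "tumor"),
        ({"gene_a": "low", "gene_b": "low"}, "normal"),
        ({"gene_a": "low", "gene_b": "high"}, "normal")]

terms = [("gene_a", "high"), ("gene_a", "low"),
         ("gene_b", "high"), ("gene_b", "low")]
pheromone = {t: 1.0 for t in terms}

def rule_quality(term, label):
    """Accuracy of the rule 'IF term THEN label' on the covered examples."""
    covered = [cls for attrs, cls in data if attrs[term[0]] == term[1]]
    return covered.count(label) / len(covered) if covered else 0.0

random.seed(0)
for _ in range(50):                                   # each iteration = one ant
    weights = [pheromone[t] for t in terms]
    term = random.choices(terms, weights=weights, k=1)[0]
    covered = [cls for attrs, cls in data if attrs[term[0]] == term[1]]
    label = max(set(covered), key=covered.count)      # majority class covered
    q = rule_quality(term, label)
    pheromone[term] = 0.9 * pheromone[term] + q       # evaporate, then reinforce

best = max(pheromone, key=pheromone.get)
print("best single-term rule:", f"IF {best[0]} = {best[1]} THEN ...")
```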
7291 Energy Consumption, Emission Absorption and Carbon Emission Reduction on Semarang State University Campus
Authors: Dewi Liesnoor Setyowati, Puji Hardati, Tri Marhaeni Puji Astuti, Muhammad Amin
Abstract:
Universitas Negeri Semarang (UNNES) is a university with a vision of conservation. One impact of UNNES conservation is a positive response from the community to the effort of greening the campus and instilling conservation values in the academic community. In reality, however, energy consumption on the UNNES campus tends to increase. The objectives of the study were to analyze energy consumption in the campus area, the absorption of emissions by trees, and the awareness of UNNES citizens in reducing emissions. The research focuses on energy consumption, carbon emissions, and citizens' awareness of emission reduction. The research subjects are UNNES citizens (lecturers, students and employees), and the research area covers six faculties and one administrative center building. Data collection was done by observation, interviews and documentation, and the data were analyzed with a quantitative descriptive method. The number of trees at UNNES is 10,264. The total emission on the UNNES campus is 7,862,281.56 kg/year, while the absorption by trees is 6,289,250.38 kg/year, so 1,575,031.18 kg/year of emissions in the UNNES campus area is still not absorbed by trees. Only two faculty areas have trees capable of absorbing their emissions. The awareness of UNNES citizens in reducing energy consumption is seen in changed habits: using energy-saving equipment (65%), reducing energy consumption per unit (68%), and carrying out energy literacy activities for UNNES citizens (74%). UNNES leaders continually motivate UNNES citizens to reduce and change their patterns of energy consumption.
Keywords: Energy consumption, carbon emission absorption, emission reduction, energy literacy.
7290 A Query Optimization Strategy for Autonomous Distributed Database Systems
Authors: Dina K. Badawy, Dina M. Ibrahim, Alsayed A. Sallam
Abstract:
A distributed database is a collection of logically related databases that cooperate in a transparent manner. Query processing, which uses a communication network for transmitting data between sites, is one of the challenges in the database world. The development of sophisticated query optimization technology is a reason for the commercial success of database systems, but its complexity and cost increase with the number of relations in a query. Mariposa, query trading, and query trading with processing task-trading are strategies developed for autonomous distributed database systems, but they incur a high optimization cost because all nodes are involved in generating an optimal plan. In this paper, we propose a modification of the autonomous strategy K-QTPT that gives the seller nodes with the lowest cost gradually higher priorities in order to reduce the optimization time. We implement the proposed strategy and present the results and an analysis based on them.
Keywords: Autonomous strategies, distributed database systems, high priority, query optimization.
7289 Trust and Reliability for Public Sector Data
Authors: Klaus Stranacher, Vesna Krnjic, Thomas Zefferer
Abstract:
The public sector holds large amounts of data in various areas such as social affairs, the economy and tourism. Various initiatives, such as Open Government Data or the EU Directive on public sector information, aim to make these data available to public and private service providers. Requirements for the provision of public sector data are defined by legal and organizational frameworks. Surprisingly, the defined requirements hardly cover security aspects such as integrity or authenticity. In this paper we discuss the importance of these missing requirements and present a concept, based on electronic signatures, to assure the integrity and authenticity of the provided data. We show that our concept is well suited to the provisioning of unaltered data, and that it can be extended to data that need to be anonymized before provisioning by incorporating redactable signatures. Our proposed concept enhances the trust and reliability of provided public sector data.
Keywords: Trusted public sector data, integrity, authenticity, reliability, redactable signatures.
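For illustration only, and leaving out the redactable-signature extension, the basic sign-and-verify flow for a published data set can be sketched with the Python cryptography package; the key handling and the sample record are invented.

```python
# Sketch: sign published data and verify integrity/authenticity on receipt.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: sign the data set before provisioning it.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
dataset = b'{"region": "Styria", "tourism_arrivals_2023": 4210000}'
signature = private_key.sign(dataset)

# Consumer side: verify that the data were not altered in transit.
try:
    public_key.verify(signature, dataset)
    print("Signature valid: data are authentic and unaltered.")
except InvalidSignature:
    print("Signature invalid: data were modified or the signer is not trusted.")
```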
7288 Analysis of Relation between Unlabeled and Labeled Data to Self-Taught Learning Performance
Authors: Ekachai Phaisangittisagul, Rapeepol Chongprachawat
Abstract:
Obtaining labeled data in supervised learning is often difficult and expensive, so the trained learning algorithm tends to overfit due to the small amount of training data. As a result, some researchers have focused on using unlabeled data, which need not follow the same generative distribution as the labeled data, to construct high-level features for improving performance on supervised learning tasks. In this paper, we investigate the impact of the relationship between unlabeled and labeled data on classification performance. Specifically, we apply different unlabeled data sets with different degrees of relation to the labeled data to a handwritten digit classification task based on the MNIST dataset. Our experimental results show that the higher the degree of relation between unlabeled and labeled data, the better the classification performance. Although unlabeled data drawn from a completely different generative distribution than the labeled data yields the lowest classification performance, we still achieve high classification performance. This expands the applicability of supervised learning algorithms by making use of unsupervised learning.
Keywords: Autoencoder, high-level feature, MNIST dataset, self-taught learning, supervised learning.
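The self-taught learning pipeline described above (learn features from unlabeled data with an autoencoder, then reuse them for a supervised task) can be sketched as follows; this is a minimal PyTorch illustration on synthetic MNIST-sized tensors, not the authors' experimental setup.

```python
# Sketch of self-taught learning: train an autoencoder on unlabeled data,
# then reuse its encoder as a feature extractor for a supervised classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-ins: 1000 unlabeled and 100 labeled 784-d samples.
unlabeled = torch.rand(1000, 784)
labeled_x = torch.rand(100, 784)
labeled_y = torch.randint(0, 10, (100,))

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())
decoder = nn.Sequential(nn.Linear(64, 784), nn.Sigmoid())
autoencoder = nn.Sequential(encoder, decoder)

# 1) Unsupervised phase: learn high-level features by reconstruction.
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
mse = nn.MSELoss()
for _ in range(20):
    opt.zero_grad()
    loss = mse(autoencoder(unlabeled), unlabeled)
    loss.backward()
    opt.step()

# 2) Supervised phase: classify using the learned high-level features.
classifier = nn.Linear(64, 10)
opt2 = torch.optim.Adam(classifier.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
for _ in range(50):
    opt2.zero_grad()
    features = encoder(labeled_x).detach()   # frozen encoder features
    loss = ce(classifier(features), labeled_y)
    loss.backward()
    opt2.step()

print("done: encoder features reused for the supervised task")
```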
7287 User Intention Generation with Large Language Models Using Chain-of-Thought Prompting
Authors: Gangmin Li, Fan Yang
Abstract:
Personalized recommendation is crucial for any recommendation system, and one of the techniques for personalization is to identify user intention. Traditional user intention identification uses the user’s selections when facing multiple items. This modeling relies primarily on historical behavior data, resulting in challenges such as cold start, unintended choices, and failure to capture intention when items are new. Motivated by recent advancements in Large Language Models (LLMs) like ChatGPT, we present an approach to user intention identification that embraces LLMs with Chain-of-Thought (CoT) prompting. We use the initial user profile as input to the LLM and design a collection of prompts to align the LLM's response across various recommendation tasks encompassing rating prediction, search and browse history, user clarification, etc. Our tests on real-world datasets demonstrate improvements in recommendation from explicit user intention identification and from merging that intention into a user model.
Keywords: Personalized recommendation, generative user modeling, user intention identification, large language models, chain-of-thought prompting.
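A hedged sketch of what a chain-of-thought prompt for intention identification might look like is given below; the prompt wording, the profile fields and the call_llm function are hypothetical placeholders, not the prompts used in the paper.

```python
# Sketch: building a chain-of-thought prompt for user intention identification.
# `call_llm` is a hypothetical placeholder for whatever LLM client is used.

def build_intention_prompt(user_profile: dict, recent_actions: list[str]) -> str:
    return (
        "You are a recommendation assistant.\n"
        f"User profile: {user_profile}\n"
        f"Recent actions: {recent_actions}\n"
        "Let's think step by step:\n"
        "1. Summarize what the user seems to be looking for.\n"
        "2. List candidate intentions with brief reasoning for each.\n"
        "3. Conclude with the single most likely intention on the last line,\n"
        "   in the form: INTENTION: <short phrase>\n"
    )

profile = {"age_range": "25-34", "favorite_genres": ["sci-fi", "thriller"]}
actions = ["searched 'space opera novels'", "rated 'Dune' 5 stars"]
prompt = build_intention_prompt(profile, actions)

# response = call_llm(prompt)   # hypothetical LLM call
# intention = response.splitlines()[-1].removeprefix("INTENTION:").strip()
print(prompt)
```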
7286 Towards Development of Solution for Business Process-Oriented Data Analysis
Authors: M. Klimavicius
Abstract:
This paper proposes a modeling methodology for the development of a data analysis solution. The author introduces an approach to address data warehousing issues at the enterprise level. The methodology covers the requirements elicitation and analysis stage as well as the initial design of the data warehouse. The paper reviews an extended business process model that satisfies the needs of data warehouse development. The author considers the use of business process models necessary, as they reflect both enterprise information systems and business functions, which are important for data analysis. The described approach divides development into three steps with different levels of model elaboration, and makes it possible to gather requirements and present them to business users in an easy manner.
Keywords: Data warehouse, data analysis, business process management.
7285 Preliminary Overview of Data Mining Technology for Knowledge Management System in Institutions of Higher Learning
Authors: Muslihah Wook, Zawiyah M. Yusof, Mohd Zakree Ahmad Nazri
Abstract:
Data mining has been integrated into application systems to enhance the quality of the decision-making process. This study focuses on the integration of data mining technology and Knowledge Management Systems (KMS), motivated by the ability of data mining technology to create useful knowledge from large volumes of data, while a KMS vitally supports the creation and use of knowledge. The integration of data mining technology and KMS is widely used in business for enhancing and sustaining organizational performance. However, there is a lack of studies applying data mining technology and KMS in the education sector, particularly to students' academic performance, even though this could reflect the performance of Institutions of Higher Learning (IHLs). Realizing its importance, this study seeks to integrate data mining technology and KMS to promote effective management of knowledge within IHLs. Several concepts from the literature are adapted to propose a new integrative data mining technology and KMS framework for an IHL.
Keywords: Data mining, Institutions of Higher Learning, Knowledge Management System, Students' academic performance.
7284 Privacy Concerns and Law Enforcement Data Collection to Tackle Domestic and Sexual Violence
Authors: Francesca Radice
Abstract:
It has been observed that violent or coercive behaviour can be apparent from the initial conversations on dating apps such as Tinder. Child pornography, stalking and coercive control are among the criminal offences associated with dating apps, and women have been murdered after finding partners through Tinder. Police databases and predictive policing are novel approaches taken to prevent crime before harm is done. This research will investigate how police databases can be used in a privacy-preserving way to characterise users in terms of their potential for violent crime. Using the COPS database of the NSW Police, we will explore how a past criminal record can be interpreted to yield a category of potential danger for each dating app user; it is then up to the judgement of each subscriber what degree of potential danger they are prepared to accept. Sentiment analysis is an area where research in natural language processing has made great progress over the last decade. This research will investigate how sentiment analysis can be used to interpret exchanges between dating app users to detect manipulative or coercive sentiments, which can be used to alert law enforcement if they continue for a defined number of communications. One potential problem of this approach is the prejudice a categorisation can cause; another drawback is the possibility of misinterpreting communications and involving law enforcement without reason. The approach will be thoroughly tested with cross-checks by human readers who verify both the level of danger predicted from the criminal record and the sentiment detected in personal messages. Even if only a few violent crimes can be prevented, the approach will have tangible value for real people.
Keywords: Sentiment analysis, data mining, predictive policing, virtual manipulation.
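The abstract does not name a sentiment analysis tool; purely as an assumed illustration, message-level polarity scoring of the kind described could be sketched with NLTK's VADER analyzer. The threshold and the idea of flagging strongly negative messages are illustrative; sentiment scores alone do not establish coercion.

```python
# Sketch: scoring message sentiment with NLTK's VADER analyzer.
# Requires: pip install nltk, then nltk.download("vader_lexicon") once.
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
messages = [
    "You looked lovely in that photo, hope your week is going well!",
    "Don't ignore me again. You know what happens when you make me angry.",
]

for text in messages:
    scores = analyzer.polarity_scores(text)   # neg/neu/pos plus compound in [-1, 1]
    flag = "REVIEW" if scores["compound"] <= -0.5 else "ok"
    print(f"[{flag}] compound={scores['compound']:+.2f}  {text}")
```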
7283 Towards a Secure Storage in Cloud Computing
Authors: Mohamed Elkholy, Ahmed Elfatatry
Abstract:
Cloud computing has emerged as a flexible computing paradigm that has reshaped the information technology map. However, it has brought about a number of security challenges as a result of the physical distribution of computational resources and the limited control that users have over the physical storage. This situation raises many security challenges for data integrity and confidentiality as well as authentication and access control. This work proposes a security mechanism for data integrity that allows a data owner to be aware of any modification that takes place to his or her data. The data integrity mechanism is integrated with an extended Kerberos authentication scheme that ensures authorized access control. The proposed mechanism protects data confidentiality even if the data are stored on untrusted storage. The mechanism has been evaluated against different types of attacks and has proved its efficiency in protecting cloud data storage from malicious attacks.
Keywords: Access control, data integrity, data confidentiality, Kerberos authentication, cloud security.
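The paper's own mechanism combines integrity protection with an extended Kerberos authentication; as a much simpler stand-in for illustration only, a keyed integrity tag that lets a data owner detect modification of data held on untrusted storage can be sketched as follows.

```python
# Sketch: keyed integrity tag so a data owner can detect modification of
# data held on untrusted storage (illustrative only, not the paper's scheme).
import hmac
import hashlib

owner_key = b"owner-secret-key-kept-locally"   # never stored with the data
original = b"quarterly financial records v1"

# Owner computes and keeps the tag before uploading the data.
tag = hmac.new(owner_key, original, hashlib.sha256).hexdigest()

# Later: re-download the data and verify it against the stored tag.
downloaded = b"quarterly financial records v1 (tampered)"
check = hmac.new(owner_key, downloaded, hashlib.sha256).hexdigest()

if hmac.compare_digest(tag, check):
    print("Data unchanged since upload.")
else:
    print("Integrity check failed: data were modified on the storage.")
```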