Search results for: web usage data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26370

24450 Regulation and Transparency: The Case of Corporate Governance Disclosure on the Internet in the United Arab Emirates

Authors: Peter Oyelere, Fernando Zanella

Abstract:

Corporate governance is one of the most discussed and researched issues in recent times in countries around the world, with different countries developing and adopting different governance structures, models and mechanisms. While codes of corporate governance have been woven into the regulatory fabric of most countries, it is equally critical that their mechanisms, procedures and practices be transparent, and be transparently communicated to all stakeholders. The Internet can be a very useful and cost-effective tool for the timely and voluntary communication of corporate governance matters to stakeholders. This paper details the results of an investigation into the extent to which companies listed in the UAE are using the Internet for communicating corporate governance issues, matters and procedures. We surveyed the websites of companies listed on the two UAE stock exchanges – the Abu Dhabi Stock Exchange (ADX) and the Dubai Financial Market (DFM) – to determine the level and nature of their use of the Internet for corporate governance disclosures. Regulatory and policy implications of the results of our investigation, as well as areas for further study, are also presented in the paper.

Keywords: corporate governance, internet financial reporting, regulation, transparency, United Arab Emirates

Procedia PDF Downloads 365
24449 Data Collection in Protected Agriculture for Subsequent Big Data Analysis: Methodological Evaluation in Venezuela

Authors: Maria Antonieta Erna Castillo Holly

Abstract:

During the last decade, data analysis, strategic decision making, and the use of artificial intelligence (AI) tools in Latin American agriculture have been a challenge. In some countries, the availability, quality, and reliability of historical data, in addition to the current data recording methodology in the field, make it difficult to use information systems, complete data analysis, and rely on them for making the right strategic decisions. This is essential in Agriculture 4.0, where increasing global demand for fresh agricultural products of tropical origin throughout the year requires a change in the production model and greater agility in responding to consumer market demands for quality, quantity, traceability, and sustainability – all of which require extensive data. Having quality information available and updated in real time on what, how much, how, when, where, at what cost, and in compliance with production quality standards represents the greatest challenge for sustainable and profitable agriculture in the region. The objective of this work is to present a methodological proposal for the collection of georeferenced data from the protected agriculture sector, specifically in production units (UP) with tall structures (greenhouses), initially for Venezuela, taking the state of Mérida as the geographical framework and horticultural products as target crops. The document presents some background information and explains the methodology and tools used in the three phases of the work: diagnosis, data collection, and analysis. As a result, an evaluation of the process is carried out, relevant data and dashboards are displayed, and the first satellite maps integrated with layers of information in a geographic information system are presented. Finally, some improvement proposals and tentatively recommended applications are added to the process, understanding that their objective is to provide better qualified and traceable georeferenced data for subsequent analysis of the information and more agile and accurate strategic decision making. One of the main points of this study is the lack of quality data treatment in Latin America, and especially in the Caribbean basin; a central question is how to manage the absence of complete official data. The methodology has been tested with horticultural products, but it can be extended to other tropical crops.

Keywords: greenhouses, protected agriculture, data analysis, geographic information systems, Venezuela

Procedia PDF Downloads 131
24448 Reliable Consensus Problem for Multi-Agent Systems with Sampled-Data

Authors: S. H. Lee, M. J. Park, O. M. Kwon

Abstract:

In this paper, reliable consensus of multi-agent systems with sampled data is investigated. By using a suitable Lyapunov-Krasovskii functional together with techniques such as the Wirtinger inequality, the Schur complement, and the Kronecker product, consensus criteria for these systems are obtained by solving a set of linear matrix inequalities (LMIs). One numerical example is included to show the effectiveness of the proposed criteria.
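
To make the LMI machinery concrete, the following is a minimal sketch, not the authors' formulation: it uses CVXPY to check a Lyapunov-type consensus LMI for four agents on a ring graph, with the Kronecker product lifting the single-agent dynamics to the network level. The dynamics matrix, the graph, and the tolerance are illustrative assumptions, and the sampled-data and reliability aspects are omitted.

```python
# A minimal sketch (not the paper's formulation) of checking a
# consensus-type Lyapunov LMI with CVXPY; all matrices are illustrative.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # single-agent dynamics (assumed)
n = A.shape[0]
L = np.array([[2, -1, 0, -1],              # Laplacian of a 4-agent ring graph
              [-1, 2, -1, 0],
              [0, -1, 2, -1],
              [-1, 0, -1, 2]], dtype=float)

# The Kronecker product lifts the agent-level condition to the network level
Abar = np.kron(np.eye(4), A) - np.kron(L, np.eye(n))

P = cp.Variable((4 * n, 4 * n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(4 * n),
               Abar.T @ P + P @ Abar << -eps * np.eye(4 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("LMI feasible:", prob.status == cp.OPTIMAL)
```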

Keywords: multi-agent, linear matrix inequalities (LMIs), Kronecker product, sampled-data, Lyapunov method

Procedia PDF Downloads 528
24447 Establishing Ministerial Social Media Handles for Public Grievances Redressal and Reciprocation System

Authors: Ashish Kumar Dwivedi

Abstract:

Uttar Pradesh is the largest state of the Indian federal system, home to a population of over two hundred million with huge cultural, economic, and religious diversity. The newly elected state leadership of Uttar Pradesh, eighteen months in office, has envisaged and initiated various proactive strides in public grievance redressal and inclusive development schemes for all sections of the population from the very day Chief Minister Shri Yogi Adityanath assumed office. These initiatives include departmental responses via social media handles such as Twitter, Facebook pages, and web interaction. In the same course, every department of the state government has been guided in the correct usage of a verified social media handle, separately and in coordination with other departments. These guidelines included creating new WhatsApp groups to connect technocrats and politicians on a common platform. The Minister for the Department of Infrastructure and Industrial Development, Shri Satish Mahana, a very popular leader and intuitive statesman, has thousands of followers on social media, and his accounts receive almost three hundred individually mentioned notifications from various parts of Uttar Pradesh. These notifications primarily concern problems related to livelihood and grievances within the department's remit. To address these communications, a body of five experts has been set up, responding actively on various levels and increasing bureaucratic engagement with marginalized sections of society. Against this background, this research was conducted to analyze and categorize these communications and to derive effective implementation of public policies via social media platforms. This responsiveness has brought a positive change in the population's perception of the government, which was missing earlier. The Department of Industrial Development is also working to attract investors, aiming for Uttar Pradesh to become India's first trillion-dollar economy, and the department organized two major successful events in the last year. These events were also framed on social media platforms to update the 2.5 million people of the state who actively use social media in many ways. To analyze this change scientifically, big data was collected from October 2017 to September 2018 from the departmental social media handles – Twitter, Facebook, and email. A statistical study was conducted on these data to analyze sentiments and expectations, the specific and common requirements of communities, and the nature of grievances and their effective resolution within government policies. A control sample was also taken from previous government activities to analyze the change. The statistical study used tools such as correlation analysis and principal component analysis. This research communication also discusses the modus operandi of grievance redressal, the proliferation of government policies, connections to their beneficiaries, and the quick-response procedure.
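
As a rough illustration of the principal component analysis step, the sketch below assumes the collected grievances have already been encoded as a numeric feature matrix; the features, districts, and values are hypothetical stand-ins, not the study's data.

```python
# A minimal sketch of the PCA step; feature names and values are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical feature matrix: rows = districts, columns = indicators
# (mention volume, negative-sentiment share, livelihood-related share, ...)
X = rng.random((50, 6))

X_std = StandardScaler().fit_transform(X)   # PCA is scale-sensitive
pca = PCA(n_components=2).fit(X_std)

print("explained variance ratio:", pca.explained_variance_ratio_)
print("component loadings:\n", pca.components_)
```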

Keywords: correlation study, principal component analysis, bureaucratic engagements, social media

Procedia PDF Downloads 125
24446 An AK-Chart for Non-Normal Data

Authors: Chia-Hau Liu, Tai-Yue Wang

Abstract:

Traditional multivariate control charts assume that measurements from manufacturing processes follow a multivariate normal distribution. However, this assumption may not hold, or may be difficult to verify, because not all measurements from manufacturing processes are normally distributed in practice. This study develops a new multivariate control chart for monitoring processes with non-normal data. We propose a mechanism based on integrating a one-class classification method with an adaptive technique. The adaptive technique is used to improve the sensitivity of one-class classification to small shifts in statistical process control. In addition, this design provides an easy way to allocate the value of the type I error, making it easier to implement. Finally, a simulation study and real data from industry are used to demonstrate the effectiveness of the proposed control chart.
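
One way to read the one-class-classification idea in code, as a rough sketch rather than the authors' AK-chart, is to fit a one-class SVM on in-control (non-normal) data and flag the points it rejects; here the parameter nu loosely plays the role of the allocated type I error.

```python
# A rough sketch of one-class classification for process monitoring
# (illustrative only; not the authors' AK-chart design).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
in_control = rng.lognormal(mean=0.0, sigma=0.3, size=(500, 2))  # non-normal
new_batch = np.vstack([rng.lognormal(0.0, 0.3, (95, 2)),
                       rng.lognormal(0.8, 0.3, (5, 2))])        # small shift

# nu roughly plays the role of the allocated type I error rate
monitor = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(in_control)
flags = monitor.predict(new_batch)   # -1 marks out-of-control signals
print("signals at samples:", np.where(flags == -1)[0])
```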

Keywords: multivariate control chart, statistical process control, one-class classification method, non-normal data

Procedia PDF Downloads 423
24445 Text Mining of Veterinary Forums for Epidemiological Surveillance Supplementation

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Web scraping and text mining are popular computer science methods deployed by public health researchers to augment traditional epidemiological surveillance. However, within veterinary disease surveillance, such techniques are still in the early stages of development and have not yet been fully utilised. This study presents an exploration into the utility of incorporating internet-based data to better understand the smallholder farming communities within Scotland, using online text extraction and the subsequent mining of this data. Web scraping of livestock fora was conducted in conjunction with text mining of the data in search of common themes, words, and topics found within the text. Results from bi-grams and topic modelling uncover four main topics of interest within the data pertaining to aspects of livestock husbandry: feeding, breeding, slaughter, and disposal. These topics were found in both the poultry and pig sub-forums. Topic modelling appears to be a useful method of unsupervised classification for this form of data, as it produced clusters that relate to biosecurity and animal welfare. Internet data can be a very effective tool in aiding traditional veterinary surveillance methods, but human validation of the data is crucial. This opens avenues of research via the incorporation of other dynamic social media data, namely Twitter and Facebook/Meta, in addition to time series analysis to highlight temporal patterns.
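
A minimal sketch of the topic-modelling step is shown below, using scikit-learn's LDA on a few invented forum posts; the corpus, the number of topics, and the preprocessing are illustrative assumptions, not the study's pipeline.

```python
# A minimal sketch of topic modelling on forum posts (invented corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "best feed mix for laying hens in winter",
    "weaner pigs off their feed after moving pens",
    "home slaughter rules and carcass disposal advice",
    "breeding gilts for the first time any tips",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(posts)                  # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```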

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, smallholding, social media, web scraping, sentiment analysis, geolocation, text mining, NLP

Procedia PDF Downloads 99
24444 Panel Application for Determining Impact of Real Exchange Rate and Security on Tourism Revenues: Countries with Middle and High Level Tourism Income

Authors: M. Koray Cetin, Mehmet Mert

Abstract:

The purpose of this study is to examine the impact of the exchange rate and a country's overall security level on tourism revenues. There are numerous studies that examine the bidirectional relation between macroeconomic factors and tourism revenues or tourism demand. Most studies support an impact of tourism revenues on the growth rate, but not vice versa. Few studies examine the impact of factors like the real exchange rate or purchasing power parity on tourism revenues. In this context, the first aim is to examine the impact of the real exchange rate on tourism revenues, because the exchange rate is one of the main determinants of the price of international tourism services in the guest's currency unit. Another determinant of tourism demand for a country is the country's overall security level. This issue can be handled in the context of the relationship between tourism revenues and overall security, including turmoil, terrorism, border problems, and political violence. In this study, these factors are examined for several countries whose tourism revenues exceed a certain level. The resulting data set has at least two dimensions, one of them time, so it constitutes a panel and is evaluated with panel data analysis techniques applied to data gathered from the World Bank data web page. The study is expected to find impacts of the real exchange rate and security factors on tourism revenues for countries that have noteworthy tourism revenues.
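
A toy sketch of such a panel regression follows, estimating entity (country) fixed effects with dummy variables in statsmodels; the variable names, the synthetic data, and the fixed-effects specification are assumptions for illustration.

```python
# A toy sketch of a fixed-effects panel regression of tourism revenues on
# the real exchange rate and a security index (all names hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
countries, years = ["A", "B", "C"], range(2000, 2020)
df = pd.DataFrame([(c, y) for c in countries for y in years],
                  columns=["country", "year"])
df["reer"] = rng.normal(100, 10, len(df))      # real effective exchange rate
df["security"] = rng.uniform(0, 1, len(df))    # overall security level
df["tourism_rev"] = (5 + 0.02 * df["reer"] + 2 * df["security"]
                     + rng.normal(0, 1, len(df)))

# Country dummies implement entity fixed effects
model = smf.ols("tourism_rev ~ reer + security + C(country)", data=df).fit()
print(model.params[["reer", "security"]])
```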

Keywords: exchange rate, panel data analysis, security, tourism revenues

Procedia PDF Downloads 351
24443 Longitudinal Analysis of Internet Speed Data in the Gulf Cooperation Council Region

Authors: Musab Isah

Abstract:

This paper presents a longitudinal analysis of Internet speed data in the Gulf Cooperation Council (GCC) region, focusing on the most populous cities of each of the six countries – Riyadh, Saudi Arabia; Dubai, UAE; Kuwait City, Kuwait; Doha, Qatar; Manama, Bahrain; and Muscat, Oman. The study utilizes data collected from the Measurement Lab (M-Lab) infrastructure over a five-year period from January 1, 2019, to December 31, 2023. The analysis includes downstream and upstream throughput data for the cities, covering significant events such as the launch of 5G networks in 2019, COVID-19-induced lockdowns in 2020 and 2021, and the subsequent recovery period and return to normalcy. The results showcase substantial increases in Internet speeds across the cities, highlighting improvements in both download and upload throughput over the years. All the GCC countries have achieved above-average Internet speeds that can conveniently support various online activities and applications with excellent user experience.
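
A small sketch of the longitudinal aggregation might look like the following, assuming the NDT measurements have already been exported to a CSV file; the file name and column names are assumptions, not M-Lab's exact schema.

```python
# A small sketch of the monthly aggregation; the CSV file and its
# column names are hypothetical, not M-Lab's exact schema.
import pandas as pd

df = pd.read_csv("ndt_gcc_2019_2023.csv",       # hypothetical export
                 parse_dates=["test_date"])
df = df[df["city"].isin(["Riyadh", "Dubai", "Kuwait City",
                         "Doha", "Manama", "Muscat"])]

# Median monthly download throughput per city, cities as columns
monthly = (df.set_index("test_date")
             .groupby("city")["download_mbps"]
             .resample("M")
             .median()
             .unstack(level=0))
print(monthly.tail())
```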

Keywords: internet data science, internet performance measurement, throughput analysis, internet speed, measurement lab, network diagnostic tool

Procedia PDF Downloads 62
24442 A Generalized Space-Efficient Algorithm for Quantum Bit String Comparators

Authors: Khuram Shahzad, Omar Usman Khan

Abstract:

Quantum bit string comparators (QBSC) operate on two sequences of n qubits, enabling the determination of their relationship: equality, greater than, or less than. This is analogous to the way conditional statements are used in programming languages. Consequently, QBSCs play a crucial role in various algorithms that can be executed or adapted for quantum computers. The development of efficient and generalized comparators for any n-qubit length has long posed a challenge, as comparators have a high cost footprint and introduce quantum delays. Efficient comparators tend to be tied to inputs of fixed length; as a result, comparators without a generalized circuit cannot be employed at a higher level, though they are well-suited for problems with limited size requirements. In this paper, we introduce a generalized design for the comparison of two n-qubit logic states using just two ancillary bits. The design is examined on the basis of qubit requirements, ancillary bit usage, quantum cost, quantum delay, gate operations, and circuit complexity, and is tested comprehensively on various input lengths. The work allows for sufficient flexibility in the design of quantum algorithms, which can accelerate quantum algorithm development.
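
As a toy illustration of bit-string comparison on a quantum computer, and not the paper's two-ancilla design, the sketch below builds a 2-qubit equality comparator in Qiskit: XOR each qubit pair into the second register, flip it so that all-ones marks equality, and drive a single ancilla with a multi-controlled X before uncomputing.

```python
# A toy equality comparator for two 2-qubit registers (one ancilla;
# not the paper's two-ancilla generalized design).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 2
qc = QuantumCircuit(2 * n + 1)   # a: 0..n-1, b: n..2n-1, ancilla: 2n
a, b, anc = range(n), range(n, 2 * n), 2 * n

# Example inputs |a> = |11>, |b> = |11>
for q in a: qc.x(q)
for q in b: qc.x(q)

# b_i <- a_i XOR b_i, then flip so b_i = 1 exactly when a_i == b_i
for i in range(n):
    qc.cx(a[i], b[i])
    qc.x(b[i])

qc.mcx(list(b), anc)             # ancilla flips to 1 iff all pairs matched

# Uncompute the scratch register
for i in range(n):
    qc.x(b[i])
    qc.cx(a[i], b[i])

probs = Statevector(qc).probabilities([anc])
print("P(equal) =", probs[1])    # -> 1.0 for these inputs
```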

Keywords: quantum comparator, quantum algorithm, space-efficient comparator, comparator

Procedia PDF Downloads 15
24441 A Web Service Based Sensor Data Management System

Authors: Rose A. Yemson, Ping Jiang, Oyedeji L. Inumoh

Abstract:

The deployment of wireless sensor networks has increased rapidly; however, the increased capacity and diversity of sensors, with applications ranging from biological and environmental to military, generate tremendous volumes of data, and much more attention has been placed on distributed sensing than on how to manage, analyze, retrieve, and understand the data generated. This makes it quite difficult to process live sensor data and to run concurrent control and updates, because sensor data are heavyweight, complex, and slow to process. This work focuses on developing a web service platform for automatic detection of sensors, acquisition of sensor data, storage of sensor data in a database, and processing of sensor data using reconfigurable software components. This work also creates a web service based sensor data management system to monitor the physical movement of an individual wearing wireless network sensor technology (SunSPOT). The sensor detects the movement of the individual by sensing the acceleration along the X, Y, and Z axes and then sends the sensed readings to a database interfaced with an internet platform. The collected data determine the posture of the person, such as standing, sitting, or lying down. The system is designed using the Unified Modeling Language (UML) and implemented using Java, JavaScript, HTML, and MySQL. The system allows close real-time monitoring of an individual and obtains their physical activity details without the need to be physically present for in-situ measurement, which enables remote observation instead of time-consuming in-person checks. These details can help in evaluating an individual's physical activity and generating feedback on medication. The system can also help in keeping track of any mandatory physical activities required of the individual. These evaluations and this feedback can help in maintaining a better health status for the individual and providing improved health care.
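
A simplified sketch of the posture rule is given below: decide standing, sitting, or lying down from the axis along which gravity dominates. The axis conventions and thresholds are assumptions for illustration, not the deployed SunSPOT logic (which was implemented in Java).

```python
# A simplified posture rule from 3-axis acceleration; axis conventions
# and thresholds are illustrative assumptions, not the deployed values.
def classify_posture(ax: float, ay: float, az: float) -> str:
    g = (ax * ax + ay * ay + az * az) ** 0.5
    if g == 0:
        return "unknown"
    ax, ay, az = ax / g, ay / g, az / g   # normalize to unit gravity
    if abs(ay) > 0.8:                     # gravity along the body's long axis
        return "standing"
    if abs(az) > 0.8:                     # gravity through chest/back
        return "lying down"
    return "sitting"                      # intermediate tilt

print(classify_posture(0.1, 9.6, 1.2))   # -> standing
print(classify_posture(0.2, 1.0, 9.7))   # -> lying down
```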

Keywords: HTML, java, javascript, MySQL, sunspot, UML, web-based, wireless network sensor

Procedia PDF Downloads 212
24440 Usage of Military Continuity Management System for Supporting of Emergency Management

Authors: Radmila Hajkova, Jiri Palecek, Hana Malachova, Alena Oulehlova

Abstract:

Ensuring continuity of business is a basic strategy of every company. Continuity of an organization's activities includes comprehensive procedures that help in solving unexpected situations of natural and anthropogenic character (for example, flood, fire, or economic crisis). Continuity planning is a process that helps identify critical processes and implement plans for the security and recovery of key processes. The aim of this article is to demonstrate the application of a systematic approach to managing business continuity, called a business continuity management system, in a military setting. The article describes the life cycle of business continuity management, which is based on the established PDCA (plan-do-check-act) cycle. This cycle is then applied to the activities carried out by the University of Defence during the activation of the forces and means of the Integrated Rescue System in case of emergencies, such as an accident at a nuclear power plant in the Czech Republic. The activities of the various stages of deploying the earmarked forces and resources are managed and evaluated using the MCMS (military continuity management system) application.

Keywords: business continuity management system, emergency management, military, nuclear safety

Procedia PDF Downloads 455
24439 The Usefulness of Financial Certification in Taiwan

Authors: Chih-Mei Wang, Jon-Chao Hong, Jian-Hong Ye, Jing-Yun Fan, Chiao-Fei Lin

Abstract:

The value of a certificate lies in providing criteria for evaluating work ability. Some professional certificates may make people feel good, but they are not useful in the workplace. To address this issue, this study draws on the expectancy-value model and takes financial certificates as an example to explore how participants perceived the value of obtaining certification in relation to career promotion and salary increase. A total of 339 valid samples were subjected to confirmatory factor analysis and structural equation modeling; the results showed that the number of professional certificates was not significantly correlated with career promotion and was negatively related to salary and benefits (S&B), while career promotion and S&B were positively related to job performance. The results show that the number of professional certificates does not play a significant role in the expectancy-value model. Therefore, certifications covering only a basic level of finance do not appear worth pursuing in Taiwan's financial industry, and it is important to study the usefulness of other certificates in other competitive industries.

Keywords: career promotion, certificate, compensation and benefits, goal-directed behaviors, Job performance

Procedia PDF Downloads 192
24438 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve outcomes for patients, and it assists in early disease detection and customized treatment planning for every person. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require for the number of patients they expect. In this project, models like logistic regression, random forests, and neural networks were used for predicting diseases and analyzing medical images. Patients were grouped by algorithms such as k-means, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and simplify hospital operations for all patients.
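
As a toy sketch of the k-means step mentioned above, the following groups patients by a few numeric features; the features and data are hypothetical.

```python
# A toy sketch of k-means patient grouping; features are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# columns: age, number of admissions, average length of stay (days)
patients = np.column_stack([rng.integers(20, 90, 300),
                            rng.poisson(2, 300),
                            rng.exponential(4, 300)])

X = StandardScaler().fit_transform(patients)   # put features on one scale
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
```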

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 76
24437 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting knowledge from a variety of databases; they provide supervised learning in the form of classification to design models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set can mask its quality and leave few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Feature selection is a key preprocessing step in machine learning: it selects a few features from the full set as a subset, reducing the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of a given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is employed, with an external classifier for discriminative feature selection; features are selected based on their number of occurrences in the chosen chromosomes. A sample dataset has been used to demonstrate the proposed idea. The proposed method improved the average accuracy across different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
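
A compact sketch of wrapper-based feature selection with a genetic algorithm is shown below, scoring each chromosome (a boolean feature mask) by cross-validated KNN accuracy; the population size, mutation rate, and synthetic data are illustrative choices, not the paper's settings.

```python
# A compact GA wrapper for feature selection; parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

def fitness(mask):
    # Cross-validated KNN accuracy on the selected feature subset
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask], y, cv=5).mean()

pop = rng.random((30, X.shape[1])) < 0.5            # random chromosomes
for generation in range(25):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]    # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        p1, p2 = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([p1[:cut], p2[cut:]])   # one-point crossover
        child ^= rng.random(X.shape[1]) < 0.02         # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.where(best)[0],
      "accuracy:", round(float(fitness(best)), 3))
```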

Keywords: data mining, genetic algorithm, KNN algorithm, wrapper-based feature selection

Procedia PDF Downloads 316
24436 A Review of Digital Twins to Reduce Emission in the Construction Industry

Authors: Zichao Zhang, Yifan Zhao, Samuel Court

Abstract:

The carbon emission problem of the traditional construction industry has long been a pressing issue. With the growing emphasis on environmental protection and the advancement of science and technology, the organic integration of digital technology and emission reduction has gradually become a mainstream solution. Among various sophisticated digital technologies, digital twins, which involve creating virtual replicas of physical systems or objects, have gained enormous attention in recent years as tools to improve productivity, optimize management, and reduce carbon emissions. However, the relatively high implementation costs of digital twins, in money, time, and manpower, have limited their widespread adoption, and most current applications are concentrated within a few industries. In addition, the creation of digital twins relies on a large amount of data and requires designers to possess exceptional skills in information collection, organization, and analysis; these capabilities are often lacking in the traditional construction industry. Furthermore, as a relatively new concept, digital twins have different expressions and usage methods across different industries. This lack of standardized practice poses a challenge in creating a high-quality digital twin framework for construction. This paper first reviews the current academic studies and industrial practices focused on reducing greenhouse gas emissions in the construction industry using digital twins. It then identifies the challenges that may be encountered during the design and implementation of a digital twin framework specific to this industry and proposes potential directions for future research. The study shows that digital twins possess substantial potential for enhancing the working environment within the traditional construction industry, particularly in their ability to support decision-making processes, and that they can improve the work efficiency and energy utilization of related machinery while helping the industry save energy and reduce emissions. This work will help scholars in the field better understand the relationship between digital twins and energy conservation and emission reduction, and it also serves as a conceptual reference for practitioners implementing related technologies.

Keywords: digital twins, emission reduction, construction industry, energy saving, life cycle, sustainability

Procedia PDF Downloads 101
24435 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education

Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue

Abstract:

In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in educational data mining techniques to find new, hidden information in students' learning behavior, particularly to uncover early symptoms of at-risk students. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for higher education institutions. Data was gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of Boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are the supervised learning techniques. Hyperparameters of the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
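
A minimal sketch of the majority-vote ensemble follows, using scikit-learn's VotingClassifier over the same base learner families named above (decision tree, SVM, neural network); the synthetic data and hyperparameters are placeholders, not the study's tuned values.

```python
# A minimal majority-vote ensemble sketch; data and settings are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

vote = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=5)),
                ("svm", SVC()),
                ("mlp", MLPClassifier(max_iter=1000))],
    voting="hard")   # majority vote over the base learners' predictions
vote.fit(Xtr, ytr)
print("accuracy:", round(vote.score(Xte, yte), 3))
```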

Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education

Procedia PDF Downloads 108
24434 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data

Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad

Abstract:

Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data we acquire through satellites, radars, and sensors consist of important geographical information that can be used for remote sensing applications such as regional planning and disaster management. Spatial data classification and object recognition are important tasks for many applications, but classifying and identifying objects manually from images is difficult. Object recognition is often considered a classification problem, and this task can be performed using machine-learning techniques. Among the many machine-learning algorithms available, classification is done here using supervised classifiers such as Support Vector Machines (SVM), as the area of interest is known. We propose a classification method that considers neighboring pixels in a region for feature extraction and evaluates classifications precisely according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset has been created for training and testing purposes; we generated the attributes by considering pixel intensity values and mean values of reflectance. We demonstrate the benefits of using knowledge discovery and data mining techniques, which can be applied to image data for accurate information extraction and classification from high spatial resolution remote sensing imagery.
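
A simplified sketch of the neighborhood-feature idea is given below: augment each pixel's intensity with the mean of its 3x3 neighborhood before training an SVM. The image and labels are synthetic stand-ins for satellite data.

```python
# Neighborhood-mean features for pixel-wise SVM classification
# (synthetic image and labels stand in for satellite data).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

rng = np.random.default_rng(5)
img = rng.random((64, 64))
img[20:40, 20:40] += 1.0                  # a brighter "object" region
labels = np.zeros((64, 64), dtype=int)
labels[20:40, 20:40] = 1                  # ground truth: object vs background

neigh_mean = uniform_filter(img, size=3)  # 3x3 neighborhood mean per pixel
X = np.column_stack([img.ravel(), neigh_mean.ravel()])
y = labels.ravel()

clf = SVC().fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```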

Keywords: remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction

Procedia PDF Downloads 340
24433 Automated Multisensory Data Collection System for Continuous Monitoring of Refrigerating Appliances Recycling Plants

Authors: Georgii Emelianov, Mikhail Polikarpov, Fabian Hübner, Jochen Deuse, Jochen Schiemann

Abstract:

Recycling refrigerating appliances plays a major role in protecting the Earth's atmosphere from ozone depletion and emissions of greenhouse gases. The performance of refrigerator recycling plants in terms of material retention is the subject of strict environmental certifications and is reviewed periodically through specialized audits. However, the continuous collection of the refrigerator data required for input-output analysis is still mostly manual, error-prone, and not digitalized. In this paper, we propose an automated data collection system for recycling plants in order to deduce the expected material contents of individual end-of-life refrigerating appliances. The system utilizes laser scanner measurements and optical data to extract attributes of individual refrigerators by applying transfer learning with pre-trained vision models and optical character recognition. Based on the recognized features, the system automatically provides material categories and target values of the contained material masses, especially foaming and cooling agents. The presented data collection system paves the way for continuous performance monitoring and efficient control of refrigerator recycling plants.
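
The transfer-learning step can be sketched as follows: reuse a pretrained vision backbone and retrain only a small head to predict refrigerator attributes. The backbone choice, the number of classes, and the weights argument are illustrative assumptions, not the deployed system's configuration.

```python
# A minimal transfer-learning sketch: frozen pretrained backbone plus a new
# classification head; class count and weights are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

num_attribute_classes = 8                      # hypothetical label set
model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                    # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, num_attribute_classes)

x = torch.randn(4, 3, 224, 224)                # a dummy batch of images
logits = model(x)
print(logits.shape)                            # torch.Size([4, 8])
```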

Keywords: automation, data collection, performance monitoring, recycling, refrigerators

Procedia PDF Downloads 164
24432 Sales Patterns Clustering Analysis on Seasonal Product Sales Data

Authors: Soojin Kim, Jiwon Yang, Sungzoon Cho

Abstract:

As a seasonal product is only in demand for a short time, inventory management is critical to profits. Both markdowns and stockouts decrease the return on perishable products; therefore, researchers have been interested in the distribution of seasonal products with the aim of maximizing profits. In this study, we propose a data-driven seasonal product sales pattern analysis method for individual retail outlets based on observed sales data clustering; the proposed method helps in determining distribution strategies.
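
A brief sketch of one way to cluster outlets by their seasonal sales patterns: normalize each outlet's weekly sales into a shape profile, then apply k-means. The data and season length are synthetic assumptions, not the authors' method or data.

```python
# Clustering outlets by the shape of their seasonal sales curves
# (synthetic 12-week season; illustrative only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
weeks = np.arange(12)
early = np.exp(-0.5 * ((weeks - 3) / 2.0) ** 2)   # early-peaking outlets
late = np.exp(-0.5 * ((weeks - 9) / 2.0) ** 2)    # late-peaking outlets
sales = np.vstack([early + rng.normal(0, 0.05, (40, 12)),
                   late + rng.normal(0, 0.05, (40, 12))])

profiles = sales / sales.sum(axis=1, keepdims=True)   # shape, not volume
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
print("outlets per pattern:", np.bincount(km.labels_))
```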

Keywords: clustering, distribution, sales pattern, seasonal product

Procedia PDF Downloads 595
24431 EduEasy: Smart Learning Assistant System

Authors: A. Karunasena, P. Bandara, J. A. T. P. Jayasuriya, P. D. Gallage, J. M. S. D. Jayasundara, L. A. P. Y. P. Nuwanjaya

Abstract:

Usage of smart learning concepts has increased rapidly all over the world recently, as they offer better teaching and learning methods, and most educational institutes, such as universities, are experimenting with these concepts with their students. Smart learning concepts are especially useful for students in large classes, where the lecture method is the most popular method of teaching: the lecturer presents the content mostly using lecture slides, and the students make their own notes based on the content presented. However, some students may find difficulties with this method due to various issues, such as the speed of delivery. The purpose of this research is to assist students in large classes in following the content. The research proposes a solution with four components, namely a note-taker, a slide matcher, a reference finder, and a question presenter, which help students obtain a summarized version of the lecture notes, navigate easily to the content and find resources, and revise the content using questions.
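
As an illustration of the sentence-scoring idea behind the note-taker component, and not EduEasy's actual formula, the sketch below scores sentences by summed term frequency and keeps the top ones in their original order.

```python
# A minimal extractive summarizer via term-frequency sentence scoring
# (an illustration, not EduEasy's actual scoring formula).
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    tf = Counter(re.findall(r"[a-z']+", text.lower()))
    scores = [sum(tf[w] for w in re.findall(r"[a-z']+", s.lower()))
              for s in sentences]
    top = sorted(range(len(sentences)), key=lambda i: scores[i],
                 reverse=True)[:n_sentences]
    return " ".join(sentences[i] for i in sorted(top))  # keep original order

lecture = ("Sorting algorithms order data. Quicksort picks a pivot and "
           "partitions the data around it. Quicksort runs in n log n time "
           "on average. The weather was nice during the lecture.")
print(summarize(lecture))
```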

Keywords: automatic summarization, extractive text summarization, speech recognition library, sentence extraction, automatic web search, automatic question generator, sentence scoring, the term weight

Procedia PDF Downloads 148
24430 Probability Sampling in Matched Case-Control Study in Drug Abuse

Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell

Abstract:

Background: Although random sampling is generally considered the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified “friend controls” and the other using a random sample of non-drug users (controls) who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using the bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for the random-sample data vs. 0.91 for the snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be statistically superior to snowball sampling and may represent a viable alternative to it.
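
The two comparisons described above can be sketched in a condensed form. For brevity, plain (unconditional) logistic regression on synthetic data stands in for the study's conditional logistic regression on matched sets; the sketch bootstraps the coefficient standard errors and computes a ROC AUC.

```python
# Bootstrap SEs of regression coefficients and a ROC AUC; plain logistic
# regression on synthetic data stands in for the study's conditional model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 3))                   # risk-factor covariates
y = (X @ np.array([1.0, 0.5, 0.0])
     + rng.logistic(size=400)) > 0              # synthetic case/control status

coefs = []
for b in range(100):                            # 100 bootstrap samples
    Xb, yb = resample(X, y, random_state=b)
    coefs.append(LogisticRegression().fit(Xb, yb).coef_[0])
print("bootstrap SEs of betas:", np.std(coefs, axis=0).round(3))

model = LogisticRegression().fit(X, y)
print("AUC:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
```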

Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling

Procedia PDF Downloads 493
24429 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful instances. The paper also indicates solutions to optimization problems, discusses the benefits of Big Data for computational biology, and illustrates the current state of the art and future generations of HPC computing with Big Data.

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 364
24428 Evaluating the Effectiveness of Science Teacher Training Programme in National Colleges of Education: a Preliminary Study, Perceptions of Prospective Teachers

Authors: A. S. V Polgampala, F. Huang

Abstract:

This is an overview of what is entailed in an evaluation and of the issues to be aware of when class observation is being done. The study examined the effects of evaluating teaching practice in a 7-day ‘block teaching’ session in a pre-service science teacher training program at a reputed National College of Education in Sri Lanka. Effects were assessed in three areas: evaluation of the training process, evaluation of the training impact, and evaluation of the training procedure. Data for this study were collected by class observation of 18 trainee teachers from 9 to 16 February 2017. The prospective science teachers who participated in the study were evaluated using a format newly introduced by the NIE. The data collected were analyzed qualitatively using the Miles and Huberman procedure for analyzing qualitative data: data reduction, data display, and conclusion drawing/verification. It was observed that the trainees showed confidence in teaching the targeted competencies and skills, and that teacher educators' dissatisfaction had a great impact on the evaluation process.

Keywords: evaluation, perceptions & perspectives, pre-service, science teaching

Procedia PDF Downloads 315
24427 Detecting Venomous Files in IDS Using an Approach Based on Data Mining Algorithm

Authors: Sukhleen Kaur

Abstract:

In security groundwork, the Intrusion Detection System (IDS) has become an important component and has received increasing attention in recent years. An IDS is an effective way to detect different kinds of attacks and malicious code in a network and helps to secure it. Data mining techniques can be applied to an IDS to analyze the large amount of data involved and give better results; data mining can contribute to improving intrusion detection by adding a level of focus to anomaly detection. So far, studies have concentrated on finding attacks, but this paper detects malicious files. Some intruders do not attack directly; instead, they hide harmful code inside files, or corrupt files, and attack the system through them. These files are detected according to defined parameters, which produce two lists of files: normal files and harmful files. After that, data mining is performed. In this paper, a hybrid classifier is used, combining the Naive Bayes and RIPPER classification methods. The results show how an uploaded file in the database is tested against the parameters and then characterised as either a normal or a harmful file, after which mining is performed. Moreover, when a user tries to mine a harmful file, the system generates an exception stating that mining cannot be performed on corrupted or harmful files.
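
A small sketch of the classification stage is shown below, labelling files as normal or harmful from simple numeric file parameters before any mining is allowed; the parameters and data are hypothetical, and since RIPPER is not available in scikit-learn, only the Naive Bayes half of the hybrid is illustrated.

```python
# Naive Bayes file screening before mining; parameters and data are
# hypothetical, and only the NB half of the hybrid classifier is shown.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(8)
# columns: file size (KB), entropy of contents, count of suspicious strings
normal = np.column_stack([rng.normal(200, 50, 200),
                          rng.normal(4.5, 0.5, 200),
                          rng.poisson(0.2, 200)])
harmful = np.column_stack([rng.normal(350, 80, 200),
                           rng.normal(7.2, 0.4, 200),
                           rng.poisson(6.0, 200)])
X = np.vstack([normal, harmful])
y = np.array([0] * 200 + [1] * 200)              # 0 = normal, 1 = harmful

nb = GaussianNB().fit(X, y)
candidate = np.array([[340.0, 7.0, 5.0]])
if nb.predict(candidate)[0] == 1:
    raise RuntimeError("mining cannot be performed on corrupted or harmful files")
```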

Keywords: data mining, association, classification, clustering, decision tree, intrusion detection system, misuse detection, anomaly detection, naive Bayes, ripper

Procedia PDF Downloads 414
24426 Generalized Approach to Linear Data Transformation

Authors: Abhijith Asok

Abstract:

This paper presents a generalized approach to the simple linear data transformation, Y = bX, through an integration of multidimensional coordinate geometry, vector space theory, and polygonal geometry. The scaling is performed by adding an additional ‘dummy dimension’ to the n-dimensional data, which makes it possible to plot two-dimensional, component-wise straight lines on pairs of dimensions. The end result is a set of scaled extensions of the observations in any of the 2n spatial divisions, where n is the total number of applicable dimensions/dataset variables, created by shifting the n-dimensional plane along the ‘dummy axis’. The derived scaling factor was found to depend on the coordinates of the common point of origin of the diverging straight lines and on the plane of extension, chosen on, and perpendicular to, the ‘dummy axis’, respectively. This result gives a geometrical interpretation of a linear data transformation and hence opens opportunities for a more informed choice of the factor b, based on a better choice of these coordinate values. The paper goes on to identify the effect of this transformation on certain popular distance metrics, wherein for many, the distance metric retains the same scaling factor as that of the features.
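
The closing claim about distance metrics is easy to verify numerically: under Y = bX with b > 0, common metrics such as the Euclidean and Manhattan distances scale by exactly the factor b. A short check follows; the data is random and illustrative.

```python
# Numeric check: under Y = bX, Euclidean and Manhattan distances scale by b.
import numpy as np

rng = np.random.default_rng(9)
X = rng.normal(size=(5, 3))        # five observations in three dimensions
b = 2.5
Y = b * X

d_X = np.linalg.norm(X[0] - X[1])              # Euclidean distance in X
d_Y = np.linalg.norm(Y[0] - Y[1])              # and after the transformation
print(d_Y / d_X)                               # -> 2.5, i.e. the factor b

manhattan_ratio = np.abs(Y[0] - Y[1]).sum() / np.abs(X[0] - X[1]).sum()
print(manhattan_ratio)                         # Manhattan distance scales too
```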

Keywords: data transformation, dummy dimension, linear transformation, scaling

Procedia PDF Downloads 297
24425 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health

Authors: Minna Pikkarainen, Yueqiang Xu

Abstract:

The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the strictly regulated field of official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum: ‘correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions’ – what we call the connected health approach. Currently, issues related to security, privacy, consumer consent, and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today's unprecedented challenges of enabling and stimulating multiple healthcare data providers and stakeholders to participate more actively in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. In this research, the work targets health and wellness ecosystems. Each ecosystem consists of key actors: 1) the individual (citizen or professional controlling/using the services), i.e., the data subject; 2) services providing personal data (e.g., startups providing data collection apps or devices); 3) health and wellness services utilizing the aforementioned data; and 4) services authorizing access to this data under the individual's explicit consent. Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models for the healthcare industry and proposes a fifth type of healthcare data model, the MyData blockchain platform. This new architecture is developed using the Action Design Research approach, a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain – thus the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.

Keywords: blockchain, health data, platform, action design

Procedia PDF Downloads 100
24424 Study on Beta-Ray Detection System in Water Using a MCNP Simulation

Authors: Ki Hyun Park, Hye Min Park, Jeong Ho Kim, Chan Jong Park, Koan Sik Joo

Abstract:

In modern times, the use of radioactive substances is on the rise in areas like chemical weaponry, industrial processes, and power plants. Although various technologies are available to detect and monitor radioactive substances in the air, technologies to detect underwater radioactive substances are scarce. In this study, a computer simulation of an underwater detection system measuring beta rays has been carried out using MCNP. CaF₂, YAP(Ce), and YAG(Ce) were used as scintillators in the simulation. The sources used are Sr-90 and Y-90, both of which emit only pure beta rays. The distance between the source and the detector was varied from 1 mm to 10 mm in steps of 1 mm. The results indicated that Sr-90 could not be measured beyond 1 mm, since its emission energy is low, while Y-90 could be measured up to 10 mm underwater. In addition, the detector designed with CaF₂ had the highest efficiency among the three scintillators used in the simulation. Since the detectable range and the detection efficiency of the modeled system could be verified through the MCNP simulation, such results are expected to reduce the time and cost of building an actual beta-ray detector and evaluating its performance, thereby contributing to research and development.

Keywords: Beta-ray, CaF₂, detector, MCNP simulation, scintillator

Procedia PDF Downloads 510
24423 Chinese Language Teaching as a Second Language: Immersion Teaching

Authors: Lee Bih Ni, Kiu Su Na

Abstract:

This paper discusses the teaching of Chinese as a second language, focusing on immersion teaching. The researchers used a narrative literature review to describe the current state of both art and science in the focused areas of inquiry. Immersion teaching comes with a standard that teachers must reliably meet. Chinese language-immersion instruction consists of language and content lessons, including functional usage of the language, academic language, authentic language, and correct Chinese sociocultural language. The researchers used narrative literature reviews to build a scientific knowledge base, collecting all the important points of discussion and presenting them here with reference to the specific field on which this paper is based. The findings show that a Chinese immersion classroom is not like a standard foreign language classroom; the immersion setting provides more opportunities to teach students colloquial language than academic language. Immersion techniques also introduce the language's cultural and social contexts in a meaningful and memorable way. It is particularly important that immersion teachers connect classwork with real-life experiences. Immersion also includes more elements of discovery and inquiry-based learning than other kinds of instructional practices, with students constantly interpreting conclusions and context clues.

Keywords: a second language, Chinese language teaching, immersion teaching, instructional strategies

Procedia PDF Downloads 452
24422 An Empirical Study of Determinants Influencing Telemedicine Services Acceptance by Healthcare Professionals: Case of Selected Hospitals in Ghana

Authors: Jonathan Kissi, Baozhen Dai, Wisdom W. K. Pomegbe, Abdul-Basit Kassim

Abstract:

Protecting patients' digital information is a growing concern for healthcare institutions, as people nowadays increasingly live their lives through telemedicine services. These telemedicine services are confronted with several determinants that hinder their successful implementation, especially in developing countries, and identifying the determinants that influence the acceptance of telemedicine services is also a problem for healthcare professionals. Despite the tremendous increase in telemedicine services, their adoption and use have been quite slow in some healthcare settings. It is generally accepted in today's globalizing world that the success of telemedicine services relies on users' satisfaction, and satisfying health professionals and patients is one of the crucial objectives of telemedicine success. This study investigates the determinants that influence health professionals' intention to utilize telemedicine services in clinical activities in a sub-Saharan African country in West Africa (Ghana). A hybridized model comprising health adoption models – the technology acceptance model, diffusion of innovation theory, and protection motivation theory – was used to investigate these questions. The study was carried out in four government health institutions that apply and regulate telemedicine services in their clinical activities. A structured questionnaire was developed and used for data collection, and purposive and convenience sampling methods were used to select healthcare professionals from different medical fields. The collected data were analyzed using a structural equation modeling (SEM) approach. All selected constructs showed a significant relationship with health professionals' behavioral intention in the direction expected from prior literature, including perceived usefulness, perceived ease of use, management strategies, financial sustainability, communication channels, patients' security threat, patients' privacy risk, self-efficacy, actual service use, user satisfaction, and telemedicine system security threat. Surprisingly, user characteristics and response efficacy of health professionals were not significant in the hybridized model. The findings and insights from this research show that health professionals are pragmatic when making choices about technology applications and about their willingness to use telemedicine services; they are, however, anxious about threats and coping appraisals. The identified significant constructs may help to increase efficiency, quality of service, quality of patient care delivery, and user satisfaction among healthcare professionals. The implementation and effective utilization of telemedicine services in the selected hospitals will serve as a strategy to eradicate hardships in healthcare service delivery and help attain universal health coverage for the whole population. This study contributes to empirical knowledge by identifying the vital factors influencing health professionals' behavioral intention to adopt telemedicine services, and it will help healthcare stakeholders formulate better policies on telemedicine service usage.

Keywords: telemedicine service, perceived usefulness, perceived ease of use, management strategies, security threats

Procedia PDF Downloads 140
24421 Urban Heat Island Intensity Assessment through Comparative Study on Land Surface Temperature and Normalized Difference Vegetation Index: A Case Study of Chittagong, Bangladesh

Authors: Tausif A. Ishtiaque, Zarrin T. Tasin, Kazi S. Akter

Abstract:

The current trend of urban expansion, especially in developing countries, has caused significant changes in land cover, which is generating great concern due to widespread environmental degradation. Energy consumption of cities is also increasing with the aggravated heat island effect. The distribution of land surface temperature (LST) is one of the most significant climatic parameters affected by urban land cover change, and the recent increasing trend in LST is producing an elevated temperature profile in built-up areas with little vegetative cover. Gradual change in land cover, especially the decrease in vegetative cover, is enhancing the Urban Heat Island (UHI) effect in developing cities around the world, and increasing the amount of urban vegetation cover can be a useful way to reduce UHI intensity. LST and the Normalized Difference Vegetation Index (NDVI) are widely accepted as reliable indicators of UHI and vegetation abundance, respectively. Chittagong, the second largest city of Bangladesh, has been a growth center undergoing rapid urbanization over the last several decades. This study assesses the intensity of UHI in Chittagong city by analyzing the relationship between LST and NDVI by type of land use/land cover (LULC) in the study area, applying an integrated approach of Geographic Information Systems (GIS), remote sensing (RS), and regression analysis. A land cover map is prepared through interactive supervised classification of remotely sensed data from a Landsat ETM+ image, along with NDVI differencing, using ArcGIS; LST and NDVI values are extracted from the same image. The regression analysis between LST and NDVI indicates that within the study area, UHI intensity increases with LST and decreases with NDVI: surface temperature falls as vegetation cover increases, reducing UHI intensity. Moreover, there are noticeable differences in the relationship between LST and NDVI depending on the type of LULC; in other words, depending on the land use, an increase in vegetation cover has a varying impact on UHI intensity. This analysis will contribute to the formulation of sustainable urban land use planning decisions and suggest suitable actions for mitigating UHI intensity within the study area.
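
The two computational steps can be sketched compactly: NDVI from the red and near-infrared bands, then a linear regression of LST on NDVI. The raster values below are synthetic stand-ins for the Landsat ETM+ bands, and the coefficients are illustrative.

```python
# NDVI from red/NIR bands, then pixel-wise regression of LST on NDVI
# (synthetic rasters stand in for the Landsat ETM+ bands).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
red = rng.uniform(0.05, 0.4, (100, 100))
nir = rng.uniform(0.1, 0.6, (100, 100))

ndvi = (nir - red) / (nir + red)               # standard NDVI definition
lst = 45.0 - 12.0 * ndvi + rng.normal(0, 1.5, ndvi.shape)  # synthetic LST (°C)

reg = LinearRegression().fit(ndvi.reshape(-1, 1), lst.ravel())
print("slope:", round(reg.coef_[0], 2), "°C per unit NDVI")  # negative slope
print("R^2:", round(reg.score(ndvi.reshape(-1, 1), lst.ravel()), 3))
```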

Keywords: land cover change, land surface temperature, normalized difference vegetation index, urban heat island

Procedia PDF Downloads 272