Search results for: Information and data requirements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 31850

31400 Analysis of Citation Rate and Data Reuse for Openly Accessible Biodiversity Datasets on Global Biodiversity Information Facility

Authors: Nushrat Khan, Mike Thelwall, Kayvan Kousha

Abstract:

Making research data openly accessible has been mandated by most funders over the last five years, as it promotes reproducibility in science and reduces duplication of effort to collect the same data. There is evidence that articles that publicly share research data have higher citation rates in the biological and social sciences. However, how and whether shared data are being reused is not always apparent, as such information is not easily accessible from the majority of research data repositories. This study aims to understand the practice of data citation and how data are reused over the years, focusing on biodiversity, since research data are frequently reused in this field. Metadata for 38,878 datasets, including citation counts, were collected through the Global Biodiversity Information Facility (GBIF) API for this purpose. GBIF was used as a data source because it provides citation counts for datasets, a feature not commonly available in most repositories. Analysis of dataset types, citation counts, and the creation and update times of datasets suggests that citation rates vary across dataset types: occurrence datasets, which contain more granular information, have higher citation rates than checklist and metadata-only datasets. Another finding is that biodiversity datasets on GBIF are frequently updated, which is unique to this field. The majority of datasets from the earliest year, 2007, were updated after 11 years, and no dataset remained unchanged since creation. For each year between 2007 and 2017, we compared the correlations between update time and citation rate for four different types of datasets. While recent datasets do not show any correlation, datasets that are three to four years old show a weak correlation, with more recently updated datasets receiving higher citations. The results suggest that it takes several years for research datasets to accumulate citations.
However, this investigation found that when the same datasets are searched on Google Scholar or Scopus, the number of citations often differs from the GBIF count. Hence, a future aim is to further explore the citation count system adopted by GBIF to evaluate its reliability and whether it can be applied to other fields of study as well.
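The core analysis the abstract describes, correlating dataset update time with citation counts, can be sketched as follows. The records, field names, and values below are illustrative stand-ins, not actual GBIF API responses:

```python
from datetime import date

# Toy stand-ins for GBIF dataset metadata; in the study these fields were
# retrieved through the GBIF API (field names here are assumptions).
datasets = [
    {"type": "OCCURRENCE", "modified": date(2021, 6, 1), "citations": 14},
    {"type": "OCCURRENCE", "modified": date(2019, 3, 1), "citations": 6},
    {"type": "CHECKLIST",  "modified": date(2020, 1, 1), "citations": 3},
    {"type": "OCCURRENCE", "modified": date(2022, 9, 1), "citations": 20},
    {"type": "METADATA",   "modified": date(2018, 5, 1), "citations": 1},
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Days since last update (relative to a fixed reference date) vs. citations:
ref = date(2023, 1, 1)
staleness = [(ref - d["modified"]).days for d in datasets]
cites = [d["citations"] for d in datasets]
r = pearson(staleness, cites)  # negative r: recently updated -> more cited
```

In the study itself the metadata would come from the GBIF dataset API rather than a hard-coded list, and a rank-based correlation may be preferable given skewed citation counts.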

Keywords: data citation, data reuse, research data sharing, webometrics

Procedia PDF Downloads 170
31399 Value Relevance of Accounting Information: A Study of Steel Sector in India

Authors: Pradyumna Mohanty

Abstract:

The paper aims to explore whether the accounting information of Indian companies in the steel sector is value relevant. Ohlson's model, which usually takes into consideration book value per share (BV) and earnings per share (EARN), has been used and expanded to include two more variables: cash flow from operations (CFO) and return on equity (ROE). The data were collected from the CMIE-Prowess database for BSE-listed steel companies, and the time frame spans from 2010 to 2014. OLS regression has been used to test the value relevance of these accounting numbers. Results indicate that both CFO and BV have a significant influence on the stock price in two of the five years of study. However, BV emerges as the most significant and highly value relevant of all four variables over the entire period of study.
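The extended Ohlson regression described above can be sketched with ordinary least squares; the firm-year figures below are invented for illustration, not CMIE-Prowess data:

```python
import numpy as np

# Toy firm-year observations (illustrative values only):
# columns: BV, EARN, CFO (per share), ROE (ratio)
X = np.array([
    [120.0, 14.0, 18.0, 0.11],
    [ 95.0,  9.0, 11.0, 0.09],
    [150.0, 21.0, 25.0, 0.14],
    [ 80.0,  6.0,  7.0, 0.07],
    [110.0, 12.0, 15.0, 0.10],
    [135.0, 17.0, 20.0, 0.12],
])
price = np.array([310.0, 240.0, 405.0, 195.0, 285.0, 355.0])

# Extended Ohlson model: P = b0 + b1*BV + b2*EARN + b3*CFO + b4*ROE + e
A = np.column_stack([np.ones(len(X)), X])            # add intercept column
coef, residuals, rank, _ = np.linalg.lstsq(A, price, rcond=None)
fitted = A @ coef
r_squared = 1 - np.sum((price - fitted) ** 2) / np.sum((price - price.mean()) ** 2)
```

The study would additionally inspect per-coefficient t-statistics (e.g. via statsmodels) to judge which of the four variables is value relevant in a given year.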

Keywords: value relevance, accounting information, book value per share, earnings per share

Procedia PDF Downloads 155
31398 Ethiopian Textile and Apparel Industry: Study of the Information Technology Effects in the Sector to Improve Their Integrity Performance

Authors: Merertu Wakuma Rundassa

Abstract:

Global competition and rapidly changing customer requirements are forcing major changes in the production styles and configuration of manufacturing organizations. Increasingly, traditional centralized and sequential manufacturing planning, scheduling, and control mechanisms are proving insufficiently flexible to respond to changing production styles and highly dynamic variations in product requirements. The traditional approaches limit the expandability and reconfiguration capabilities of manufacturing systems. Thus, many businesses face increasing pressure to lower production costs, improve product quality, and increase responsiveness to customers. In textile and apparel manufacturing, globalization has led to increased competition and quality awareness, and these industries have changed tremendously in the last few years. So, to sustain competitive advantage, companies must re-examine and fine-tune their business processes to deliver high-quality goods at very low cost, and it has become very important for the textile and apparel industries to integrate themselves with information technology to survive. IT can create competitive advantages for companies by improving coordination and communication among trading partners, increasing the availability of information for intermediaries and customers, and providing added value at various stages along the entire chain. Ethiopia is in the process of realizing its potential as a future sourcing location for the global textile and garment industry. With a population of over 90 million people and the fastest growing non-oil economy in Africa, Ethiopia today represents limitless opportunities for international investors. For the textile and garment industry, Ethiopia promises a low-cost production location with natural resources such as cotton to enable the setup of vertically integrated textile and garment operations.
However, due to a lack of integration of its business activities, the textile and apparel industry of Ethiopia faces the problem that it cannot compete in the global market. On the other hand, the textile and apparel industries of other countries have changed tremendously in the last few years, and globalization has led to increased competition and quality awareness. The aim of this paper is therefore to study the trend of the Ethiopian textile and apparel industry in applying different IT systems to integrate it into the global market.

Keywords: information technology, business integrity, textile and apparel industries, Ethiopia

Procedia PDF Downloads 355
31397 Analysis and Forecasting of Bitcoin Price Using Exogenous Data

Authors: J-C. Leneveu, A. Chereau, L. Mansart, T. Mesbah, M. Wyka

Abstract:

Extracting and interpreting information from Big Data will remain a challenge for years to come in several sectors, such as finance. Currently, numerous methods (such as technical analysis) are used to try to understand and anticipate market behavior, with mixed results, because it still seems impossible to exactly predict a financial trend. The increase in data available on the Internet and its diversity represent a great opportunity for the financial world. Indeed, it is possible, alongside these standard financial data, to focus on exogenous data to take more macroeconomic factors into account. Coupling the interpretation of these data with standard methods could allow more precise trend predictions. In this paper, in order to observe the influence of exogenous data on price independently of other usual effects occurring in classical markets, the behavior of Bitcoin users is introduced into a model reconstituting Bitcoin value, which is elaborated and tested for prediction purposes.

Keywords: big data, bitcoin, data mining, social network, financial trends, exogenous data, global economy, behavioral finance

Procedia PDF Downloads 352
31396 Relationship among Teams' Information Processing Capacity and Performance in Information System Projects: The Effects of Uncertainty and Equivocality

Authors: Ouafa Sakka, Henri Barki, Louise Cote

Abstract:

Uncertainty and equivocality are defined in the information processing literature as two task characteristics that require different information processing responses from managers. As uncertainty often stems from a lack of information, addressing it is thought to require the collection of additional data. On the other hand, as equivocality stems from ambiguity and a lack of understanding of the task at hand, addressing it is thought to require rich communication between those involved. Past research has provided weak to moderate empirical support for these hypotheses. The present study contributes to this literature by defining uncertainty and equivocality at the project level and investigating their moderating effects on the association between several project information processing constructs and project performance. The information processing constructs considered are the amount of information collected by the project team and the richness and frequency of formal communications among team members to discuss the project’s follow-up reports. Data on 93 information system development (ISD) project managers were collected in a questionnaire survey and analyzed via Fisher's test for correlation differences. The results indicate that the highest project performance levels were observed in projects characterized by high uncertainty and low equivocality, in which project managers were provided with detailed and updated information on project costs and schedules. In addition, our findings show that information about user needs and the technical aspects of the project is less useful for managing projects where uncertainty and equivocality are high. Further, while the strongest positive effect of interactive use of follow-up reports on performance occurred in projects where both uncertainty and equivocality levels were high, its weakest effect occurred when both were low.
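The Fisher test for correlation differences mentioned above compares two correlation coefficients after the z-transformation; a minimal sketch, with invented correlations and sample sizes, follows:

```python
import math

def fisher_z(r):
    """Fisher's z-transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_correlations(r1, n1, r2, n2):
    """Z statistic for H0: rho1 == rho2, for two independent samples."""
    z1, z2 = fisher_z(r1), fisher_z(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of z1 - z2
    return (z1 - z2) / se

# e.g. correlation of interactive report use with performance in high- vs
# low-uncertainty projects (values are illustrative, not the study's data):
z = compare_correlations(0.55, 48, 0.20, 45)
```

A |z| above roughly 1.96 would indicate a difference significant at the 5% level.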

Keywords: uncertainty, equivocality, information processing model, management control systems, project control, interactive use, diagnostic use, information system development

Procedia PDF Downloads 286
31395 Self-Efficacy Perceptions of Pre-Service Art and Music Teachers towards the Use of Information and Communication Technologies

Authors: Agah Tugrul Korucu

Abstract:

Information and communication technologies have become an important part of our daily lives, with significant investments in technology in the 21st century. Individuals with higher computer self-efficacy are more willing to design and implement computer-related activities and tend to carry out such activities more successfully. Self-efficacy level is a significant factor that determines how individuals act in events, situations, and difficult processes. It is observed that individuals with a higher perception of computer self-efficacy overcome problems related to computer use more easily. Therefore, this study aimed to examine the self-efficacy perceptions of pre-service art and music teachers towards the use of information and communication technologies in terms of different variables. The research group consists of 60 pre-service teachers studying in the Art and Music department of Necmettin Erbakan University, Ahmet Keleşoğlu Faculty of Education. As data collection tools, a “personal information form” developed by the researcher to collect demographic data and “the perception scale related to self-efficacy of informational technology” were used. The scale is a 5-point Likert-type scale consisting of 27 items. The Kaiser-Meyer-Olkin (KMO) sample adequacy value was found to be 0.959, and the Cronbach's alpha reliability coefficient of the scale was found to be 0.97. The statistical software package SPSS 21.0 was used to analyze the collected data; descriptive statistics, t-tests, and analysis of variance were used as statistical techniques.

Keywords: self-efficacy perceptions, teacher candidate, information and communication technologies, art teacher

Procedia PDF Downloads 320
31394 Analysis of Different Classification Techniques Using WEKA for Diabetic Disease

Authors: Usama Ahmed

Abstract:

Data mining is the process of analyzing data to extract useful, predictive information. It is a field of research that addresses various types of problems. In data mining, classification is an important technique for classifying different kinds of data. Diabetes is one of the most common diseases. This paper implements different classification techniques using the Waikato Environment for Knowledge Analysis (WEKA) on a diabetes dataset and determines which algorithm performs best. The best classification algorithm for the diabetes data is Naïve Bayes, with an accuracy of 76.31% and a model build time of 0.06 seconds.
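WEKA's NaiveBayes classifier models numeric attributes with per-class Gaussians; a minimal from-scratch sketch of that idea, on invented glucose/BMI values rather than the actual diabetes dataset, might look like this:

```python
import math
from collections import defaultdict

# Toy records: (glucose, BMI) -> class label (illustrative values only).
train = [
    ((85, 26.6), 0), ((89, 28.1), 0), ((78, 31.0), 0), ((97, 23.2), 0),
    ((183, 23.3), 1), ((168, 43.1), 1), ((166, 25.8), 1), ((189, 30.1), 1),
]

def fit(data):
    """Per-class mean/variance for each feature, plus class priors."""
    grouped = defaultdict(list)
    for x, y in data:
        grouped[y].append(x)
    model = {}
    for y, rows in grouped.items():
        stats = []
        for col in zip(*rows):               # one column per feature
            m = sum(col) / len(col)
            v = sum((c - m) ** 2 for c in col) / (len(col) - 1)
            stats.append((m, v))
        model[y] = (len(rows) / len(data), stats)
    return model

def predict(model, x):
    """Class with the highest log-posterior under Gaussian likelihoods."""
    best, best_lp = None, -math.inf
    for y, (prior, stats) in model.items():
        lp = math.log(prior)
        for xi, (m, v) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

model = fit(train)
```

WEKA adds refinements (e.g. optional kernel density estimation per attribute), but the posterior computation is the same naive-independence idea shown here.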

Keywords: data mining, classification, diabetes, WEKA

Procedia PDF Downloads 138
31393 Big Data and Health: An Australian Perspective Which Highlights the Importance of Data Linkage to Support Health Research at a National Level

Authors: James Semmens, James Boyd, Anna Ferrante, Katrina Spilsbury, Sean Randall, Adrian Brown

Abstract:

‘Big data’ is a relatively new concept that describes data so large and complex that it exceeds the storage or computing capacity of most systems to perform timely and accurate analyses. Health services generate large amounts of data from a wide variety of sources such as administrative records, electronic health records, health insurance claims, and even smart phone health applications. Health data is viewed in Australia and internationally as highly sensitive. Strict ethical requirements must be met for the use of health data to support health research. These requirements differ markedly from those imposed on data use from industry or other government sectors and may reduce the capacity of health data to be incorporated into the real-time demands of the Big Data environment. This ‘big data revolution’ is increasingly supported by national governments, who have invested significant funds into initiatives designed to develop and capitalize on big data and methods for data integration using record linkage. The benefits to health following research using linked administrative data are recognised internationally and by the Australian Government through the National Collaborative Research Infrastructure Strategy Roadmap, which outlined a multi-million dollar investment strategy to develop national record linkage capabilities. This led to the establishment of the Population Health Research Network (PHRN) to coordinate and champion this initiative. The purpose of the PHRN was to establish record linkage units in all Australian states, to support the implementation of secure data delivery and remote access laboratories for researchers, and to develop the Centre for Data Linkage for the linkage of national and cross-jurisdictional data.
The Centre for Data Linkage has been established within Curtin University in Western Australia; it provides the record linkage infrastructure necessary for large-scale, cross-jurisdictional linkage of health-related data in Australia and uses a best-practice ‘separation principle’ to support data privacy and security. Privacy-preserving record linkage technology is also being developed to link records without the use of names, to overcome important legal and privacy constraints. This paper will present the findings of the first ‘Proof of Concept’ project selected to demonstrate the effectiveness of increased record linkage capacity in supporting nationally significant health research. This project explored how cross-jurisdictional linkage can inform the nature and extent of cross-border hospital use and hospital-related deaths. The technical challenges associated with national record linkage, and the extent of cross-border population movements, were explored as part of this pioneering research project. Access to person-level data linked across jurisdictions identified geographical hot spots of cross-border hospital use and hospital-related deaths in Australia. This has implications for the planning of health service delivery and for longitudinal follow-up studies, particularly those involving mobile populations.
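Privacy-preserving record linkage of the kind described is commonly built on Bloom-filter encodings of name n-grams; the following is a minimal sketch, with parameters chosen for illustration and no claim to match the Centre for Data Linkage's actual protocol:

```python
import hashlib

def bigrams(name):
    """Character bigrams of a name, padded with boundary markers."""
    s = f"_{name.lower()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom(name, m=64, k=3):
    """Encode a name's bigrams into an m-bit Bloom filter with k hashes."""
    bits = set()
    for g in bigrams(name):
        for i in range(k):
            h = hashlib.sha256(f"{i}:{g}".encode()).hexdigest()
            bits.add(int(h, 16) % m)
    return bits

def dice(a, b):
    """Dice coefficient between two Bloom filters (as sets of set bits)."""
    return 2 * len(a & b) / (len(a) + len(b))

# Records can be compared for approximate matches without ever
# exchanging the underlying names:
score_match = dice(bloom("jonathan"), bloom("jonathon"))
score_diff = dice(bloom("jonathan"), bloom("elizabeth"))
```

Production systems use much larger filters, keyed hashes, and additional hardening against frequency attacks; the point here is only that similarity survives the encoding.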

Keywords: data integration, data linkage, health planning, health services research

Procedia PDF Downloads 214
31392 Best Practices to Enhance Patient Security and Confidentiality When Using E-Health in South Africa

Authors: Lethola Tshikose, Munyaradzi Katurura

Abstract:

Information and Communication Technology (ICT) plays a critical role in improving daily healthcare processes. South African healthcare organizations have adopted information systems to integrate their patient records. This has made things much easier for healthcare organizations because patient information can now be accessed at any time. The primary purpose of this research study was to investigate the best practices that can be applied to enhance patient security and confidentiality when using e-health systems in South Africa. Security and confidentiality are critical in healthcare organizations as they ensure the safety of EHRs. The research study used an inductive research approach that included a thorough literature review; therefore, no data was collected. The research paper’s scope included patient data and the possible security threats associated with healthcare systems. According to the study, South African healthcare organizations face various patient data security and confidentiality issues. The study also revealed that when it comes to handling patient data, health professionals sometimes make mistakes. Some are not computer literate, which has posed issues and caused data to be tampered with. The research paper recommends that healthcare organizations ensure that security measures are adequately supported and promoted by their IT departments. This will ensure that adequate resources are allocated to keep patient data secure and confidential. Healthcare organizations must correctly apply the standards set up by IT specialists to solve patient data security and confidentiality issues, and must ensure that their organizational structures are adaptable enough to improve security and confidentiality.

Keywords: E-health, EHR, security, confidentiality, healthcare

Procedia PDF Downloads 48
31391 GraphNPP: A Graphormer-Based Architecture for Network Performance Prediction in Software-Defined Networking

Authors: Hanlin Liu, Hua Li, Yintan AI

Abstract:

Network performance prediction (NPP) is essential for the management and optimization of software-defined networking (SDN) and contributes to improving the quality of service (QoS) in SDN to meet users' requirements. Although current deep learning-based methods can achieve high effectiveness, they still suffer from several problems, such as difficulty in capturing global information about the network, inefficiency in modeling end-to-end network performance, and inadequate graph feature extraction. To cope with these issues, our proposed Graphormer-based architecture for NPP leverages the powerful graph representation ability of Graphormer to effectively model graph-structured data, and a node-edge transformation algorithm is designed to transfer the feature extraction object from nodes to edges, thereby effectively extracting the end-to-end performance characteristics of the network. Moreover, routing-oriented centrality coefficients are proposed for nodes and edges, respectively, to assess their importance and influence within the graph. Based on these coefficients, an enhanced feature extraction method and an advanced centrality encoding strategy are derived to fully extract the structural information of the graph. Experimental results on three public datasets demonstrate that the proposed GraphNPP architecture achieves state-of-the-art results compared to current NPP methods.
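Graphormer's centrality encoding adds a per-node embedding, indexed by a centrality measure, to the input node features; the following minimal sketch uses plain degree centrality on a toy topology (the paper's routing-oriented coefficient would take degree's place, and the embedding table here is a random stand-in for learned weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 5-node topology (adjacency matrix); in GraphNPP the graph would be
# the SDN topology.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
])

d_model = 8
node_feats = rng.normal(size=(5, d_model))   # input node features
degree = A.sum(axis=1)                        # simple degree centrality

# Graphormer-style centrality encoding: look up an embedding per degree
# value and add it to the node features before the attention layers.
max_degree = int(degree.max())
centrality_table = rng.normal(size=(max_degree + 1, d_model))
encoded = node_feats + centrality_table[degree]
```

In training, `centrality_table` would be a learnable parameter updated by backpropagation along with the rest of the model.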

Keywords: software-defined networking, network performance prediction, Graphormer, graph neural network

Procedia PDF Downloads 35
31390 Online Information Seeking: A Review of the Literature in the Health Domain

Authors: Sharifah Sumayyah Engku Alwi, Masrah Azrifah Azmi Murad

Abstract:

The development of information technology and the Internet has been transforming the healthcare industry. The Internet is continuously accessed to seek health information from a variety of sources, including search engines, health websites, and social networking sites. Providing more and better information on health may empower individuals; however, ensuring high-quality, trusted health information can pose a challenge. Moreover, there is an ever-increasing amount of information available, but it is not necessarily accurate and up to date. Thus, this paper aims to provide insight into the models and frameworks related to consumers' online health information seeking. It begins by exploring the definitions of information behavior and information seeking to provide a better understanding of the concept of information seeking. In this study, critical factors such as performance expectancy, effort expectancy, and social influence will be studied in relation to the value of seeking health information. It also aims to analyze the effects of age, gender, and health status as moderators of the factors that influence online health information seeking, i.e., trust and information quality. A preliminary survey will be carried out among health professionals to clarify the research problems that exist in the real world and to produce a conceptual framework. A final survey will be distributed in five states of Malaysia to solicit feedback on the framework. Data will be analyzed using the SPSS and SmartPLS 3.0 analysis tools. It is hoped that by the end of this study, a novel framework that can improve online health information seeking will have been developed. Finally, this paper concludes with some suggestions on the models and frameworks that could improve online health information seeking.

Keywords: information behavior, information seeking, online health information, technology acceptance model, the theory of planned behavior, UTAUT

Procedia PDF Downloads 266
31389 Data Management and Analytics for Intelligent Grid

Authors: G. Julius P. Roy, Prateek Saxena, Sanjeev Singh

Abstract:

Two decades ago, power distribution utilities would collect data from their customers no more than once a month. The advent of SmartGrid and AMI has since increased the sampling frequency, leading to a 1,000- to 10,000-fold increase in data quantity. This increase is notable and led to the term Big Data being coined in the utilities sector. The power distribution industry is one of the largest handlers of huge and complex data, both for keeping history and for turning the data into insight. The majority of utilities around the globe are adopting SmartGrid technologies in mass implementations and are primarily focusing on the strategic interdependence and synergies of the big data coming from new information sources such as AMI and intelligent SCADA. There is a rising need for new models of data management and a renewed focus on analytics to dissect data into descriptive, predictive, and prescriptive subsets. The goal of this paper is to bring load disaggregation into the smart energy toolkit for commercial usage.

Keywords: data management, analytics, energy data analytics, smart grid, smart utilities

Procedia PDF Downloads 774
31388 Secure Intelligent Information Management by Using a Framework of Virtual Phones-On Cloud Computation

Authors: Mohammad Hadi Khorashadi Zadeh

Abstract:

Many new applications and internet services have emerged since the innovation of mobile networks and devices. However, these applications have problems of security, management, and performance in business environments. Cloud systems provide information transfer, management facilities, and security for virtual environments. Therefore, an innovative internet service and a business model are proposed in the present study for creating a secure and consolidated environment for managing the mobile information of organizations based on cloud virtual phone (CVP) infrastructures. Using this method, users can run Android and web applications in the cloud, which enhances performance by connecting to other CVP users and increases privacy. It is possible to combine the CVP with distributed protocols and central control, which mimics the behavior of human societies. This mix helps in dealing with sensitive data on mobile devices and facilitates data management with less application overhead.

Keywords: BYOD, mobile cloud computing, mobile security, information management

Procedia PDF Downloads 311
31387 Enhancing Healthcare Data Protection and Security

Authors: Joseph Udofia, Isaac Olufadewa

Abstract:

Every day, the size of Electronic Health Record data keeps increasing as new patients visit health practitioners and returning patients fulfil their appointments. As these data grow, so does their susceptibility to cyber-attacks from criminals waiting to exploit them. In the US, the damages from cyberattacks were estimated at $8 billion (2018), $11.5 billion (2019), and $20 billion (2021). These attacks usually involve the exposure of PII. Health data is considered PII, and its exposure carries significant impact. To this end, an enhancement of health policy and standards in relation to data security, especially among patients and their clinical providers, is critical to ensure ethical practices, confidentiality, and trust in the healthcare system. As clinical accelerators and applications that contain user data come into use, it is expedient to review and revamp policies like the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), and the Fast Healthcare Interoperability Resources (FHIR) standard, all aimed at ensuring data protection and security in healthcare. FHIR caters to healthcare data interoperability, as data is shared across different systems from customers to health insurers and care providers. The astronomical cost of implementation has deterred players in the space from ensuring compliance, leading to susceptibility to data exfiltration and data loss. Though HIPAA homes in on the security of protected health information (PHI) and PCI DSS on the security of payment card data, they intersect in the shared goal of protecting sensitive information in line with industry standards.
With rapid technological advancement, it is necessary to revamp these policies to address their complexity and ambiguity, cost barriers, and the ever-increasing threats in cyberspace. Healthcare data in the wrong hands is a recipe for disaster, and we must enhance its protection and security to protect the mental health of current and future generations.

Keywords: cloud security, healthcare, cybersecurity, policy and standard

Procedia PDF Downloads 79
31386 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records

Authors: Sara ElElimy, Samir Moustafa

Abstract:

Mobile network operators are starting to face many challenges in the digital era, especially with high demands from customers. Mobile network operators are a major source of big data, and traditional techniques are not effective in the new era of big data, the Internet of Things (IoT), and 5G; as a result, handling different big datasets effectively becomes a vital task for operators, given the continuous growth of data and the move from long term evolution (LTE) to 5G. There is therefore an urgent need for effective big data analytics to predict future demand, traffic, and network performance in order to fulfill the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and recurrent neural networks (RNN) are employed in a data-driven application for mobile network operators. The framework for each model includes parameter identification, estimation, prediction, and the final data-driven application of these predictions to business and network performance. The models are applied to the Telecom Italia Big Data Challenge call detail record (CDR) datasets. The performance of the models, evaluated using well-known criteria, shows that ARIMA (the machine learning-based model) is more accurate as a predictive model on this dataset than the RNN (the deep learning model).
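The ARIMA component reduces, in its simplest form, to an autoregressive fit; below is a minimal AR(1) sketch on synthetic traffic. A full ARIMA fit (e.g. via statsmodels) would add differencing and moving-average terms, and the study itself used the Telecom Italia CDRs rather than simulated data:

```python
import numpy as np

# Synthetic traffic series standing in for CDR-derived load: an AR(1)
# process with known coefficient 0.8, so we can sanity-check the fit.
rng = np.random.default_rng(42)
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

# Minimal AR(1) fit by least squares -- the "AR" core of ARIMA(1,0,0):
# regress y[t] on a constant and y[t-1].
X = np.column_stack([np.ones(n - 1), y[:-1]])
(intercept, phi), *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# One-step-ahead forecast from the last observation:
forecast = intercept + phi * y[-1]
```

The estimated `phi` should land close to the true 0.8, which is the identification/estimation step the framework describes before prediction.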

Keywords: big data analytics, machine learning, CDRs, 5G

Procedia PDF Downloads 135
31385 Missing Link Data Estimation with Recurrent Neural Network: An Application Using Speed Data of Daegu Metropolitan Area

Authors: JaeHwan Yang, Da-Woon Jeong, Seung-Young Kho, Dong-Kyu Kim

Abstract:

In ITS, information on link characteristics is an essential factor for planning and operation. But in practice, not every link has sensors installed on it. A link that has no data is called a "missing link". The purpose of this study is to impute the data of these missing links. To obtain these data, this study applies machine learning, and in particular deep learning, so that missing link data can be estimated from the data of present links. For the deep learning process, this study uses a recurrent neural network to handle time-series road data. As input data, Dedicated Short-Range Communications (DSRC) data from Dalgubul-daero in the Daegu Metropolitan Area were fed into the learning process. The neural network takes the 17 links with present data as input and uses two hidden layers to estimate the data of one missing link. As a result, the forecast data for the target link show about 94% accuracy compared with the actual data.
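The estimation step can be pictured as an Elman-style recurrent pass over the time series of observed link speeds; in the sketch below the weights are random stand-ins for trained parameters, and only the 17-input/1-output shape follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(7)

# Speeds of 17 observed links in at each time step, one estimated speed
# for the missing link out (weights are illustrative, not trained).
n_in, n_hidden, n_out = 17, 32, 1
Wx = rng.normal(scale=0.1, size=(n_hidden, n_in))
Wh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
Wy = rng.normal(scale=0.1, size=(n_out, n_hidden))

def rnn_forward(series):
    """Elman RNN forward pass over a (T, n_in) time series of link speeds."""
    h = np.zeros(n_hidden)
    for x_t in series:                 # carry hidden state across time steps
        h = np.tanh(Wx @ x_t + Wh @ h)
    return Wy @ h                      # estimate for the missing link

speeds = rng.uniform(20, 60, size=(12, n_in))   # 12 time steps of 17 links
estimate = rnn_forward(speeds)
```

Training would adjust `Wx`, `Wh`, and `Wy` by backpropagation through time against links that do have ground-truth speeds.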

Keywords: data estimation, link data, machine learning, road network

Procedia PDF Downloads 507
31384 Data Management System for Environmental Remediation

Authors: Elizaveta Petelina, Anton Sizo

Abstract:

Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Therefore, appropriate data management is required as a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decision making in relation to required mitigation measures and the assessment of remediation success. The EDMS is a combination of enterprise- and desktop-level data management and Geographic Information System (GIS) tools assembled to assist environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. The EDMS consists of seven main components: a Geodatabase that contains a spatial database to store and query spatially distributed data; a GIS and Web GIS component that combines desktop and server-based GIS solutions; a Field Data Collection component that contains tools for field work; a Quality Assurance (QA)/Quality Control (QC) component that combines operational procedures for QA with measures for QC; a Data Import and Export component that includes tools and templates to support project data flow; a Lab Data component that provides a connection between the EDMS and laboratory information management systems; and a Reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada.
The EDMS has effectively facilitated integrated decision-making for CLEANS project managers and transparency amongst stakeholders.

Keywords: data management, environmental remediation, geographic information system, GIS, decision making

Procedia PDF Downloads 150
31383 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education

Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue

Abstract:

In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in Educational Data Mining techniques to find new hidden information in students' learning behavior, particularly to uncover early symptoms of at-risk students. On the other hand, data containing noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for higher education institutions. Data were gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this study. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of Boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are the supervised base learners. Hyperparameters of the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
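The majority-vote combination rule the abstract builds on can be sketched as follows: each base classifier predicts a label, and the ensemble outputs the most frequent label per sample. The base predictions and pass/fail labels below are illustrative placeholders, not the study's data or models.

```python
# Minimal sketch of hard majority voting over base classifiers' predictions.
from collections import Counter

def majority_vote(predictions):
    """predictions: list of per-classifier label lists, one per base model.
    Returns the per-sample label that received the most votes."""
    n_samples = len(predictions[0])
    combined = []
    for i in range(n_samples):
        votes = Counter(p[i] for p in predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Three hypothetical base learners (e.g. Decision Tree, SVM, ANN) voting
# on five students' pass/fail predictions:
tree = ["pass", "fail", "pass", "pass", "fail"]
svm  = ["pass", "fail", "fail", "pass", "fail"]
ann  = ["fail", "fail", "pass", "pass", "pass"]
print(majority_vote([tree, svm, ann]))  # -> ['pass', 'fail', 'pass', 'pass', 'fail']
```

In practice the base learners named in the abstract would each be trained and their predictions fed to this combiner; with an odd number of binary classifiers, ties cannot occur.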

Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education

Procedia PDF Downloads 100
31382 Provision of Basic Water and Sanitation Services in South Africa through the Municipal Infrastructure Grant Programme

Authors: Elkington Sibusiso Mnguni

Abstract:

Although South Africa has made good progress in providing basic water and sanitation services to its citizens, there is still a large section of the population that has no access to these services. This paper reviews the performance of the government's municipal infrastructure grant programme in providing basic water and sanitation services, which are part of the constitutional requirements to citizens. The method used to gather data and information was a desktop study, which reviewed the progress made in rolling out the programme. The successes and challenges were highlighted, and possible solutions were identified that can accelerate the elimination of the remaining backlogs and improve the level of service to citizens. Currently, approximately 6.5 million citizens are without access to basic water services and approximately 10 million are without access to basic sanitation services.

Keywords: grant, municipal infrastructure, sanitation, services, water

Procedia PDF Downloads 134
31381 Image-Based (RGB) Technique for Estimating Phosphorus Levels of Different Crops

Authors: M. M. Ali, Ahmed Al-Ani, Derek Eamus, Daniel K. Y. Tan

Abstract:

In this glasshouse study, we developed a new image-based, non-destructive technique for detecting the leaf P status of different crops such as cotton, tomato and lettuce. Plants were allowed to grow on nutrient media containing different P concentrations, i.e. 0%, 50% and 100% of the recommended P concentration (P0 = no P; P1 = 2.5 mL 10 L-1 of P and P2 = 5 mL 10 L-1 of P as NaH2PO4). After 10 weeks of growth, plants were harvested and data on leaf P contents were collected using the standard destructive laboratory method, and at the same time leaf images were collected by a handheld crop image sensor. We calculated the leaf area, leaf perimeter and RGB (red, green and blue) values of these images. These data were then used in linear discriminant analysis (LDA) to estimate leaf P contents, which successfully classified the plants on the basis of leaf P contents. The data indicated that P deficiency in crop plants can be predicted using image and morphological data. Our proposed non-destructive imaging method is precise in estimating the P requirements of different crop species.
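The classification step the abstract describes can be illustrated with a two-class Fisher linear discriminant on per-leaf features (mean R, G, B, leaf area, leaf perimeter). The feature values below are synthetic stand-ins, not the study's measurements, and the two-class setup is a simplification of the three P treatments.

```python
# Sketch of Fisher LDA separating P-deficient from adequately fertilised leaves.
import numpy as np

def fisher_lda_fit(X0, X1):
    """Fisher discriminant direction and midpoint threshold for two classes."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)                          # discriminant direction
    threshold = w @ (m0 + m1) / 2                             # midpoint decision rule
    return w, threshold

rng = np.random.default_rng(0)
# Synthetic leaves: columns = [mean R, mean G, mean B, area, perimeter]
deficient = rng.normal([150, 120, 60, 30, 22], 5, size=(30, 5))  # P0-like
adequate  = rng.normal([110, 150, 60, 45, 28], 5, size=(30, 5))  # P2-like

w, t = fisher_lda_fit(deficient, adequate)
scores = np.concatenate([deficient, adequate]) @ w
pred = (scores > t).astype(int)   # 0 = deficient, 1 = adequate
true = np.repeat([0, 1], 30)
print((pred == true).mean())      # training accuracy on the synthetic data
```

Because the synthetic class means are well separated relative to the noise, the discriminant recovers the treatment labels almost perfectly, mirroring the successful classification reported in the abstract.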

Keywords: image-based techniques, leaf area, leaf P contents, linear discriminant analysis

Procedia PDF Downloads 374
31380 Awareness about Authenticity of Health Care Information from Internet Sources among Health Care Students in Malaysia: A Teaching Hospital Study

Authors: Renjith George, Preethy Mary Donald

Abstract:

Use of internet sources to retrieve health care related information among health care professionals has increased tremendously as access to the internet has been made easier through smart phones and tablets. Though a huge amount of data is available at a finger touch, it is doubtful whether all the sources providing health care information adhere to evidence based practice. The objective of this survey was to study the prevalence of the use of internet sources to get health care information, to assess attitudes towards the authenticity of health care information available via internet sources, and to study awareness of evidence based practice in health care among medical and dental students in Melaka-Manipal Medical College. The survey was proposed as there is a limited number of studies reported in the literature, and this is the first of its kind in Malaysia. A cross sectional survey was conducted among the medical and dental students of Melaka-Manipal Medical College. A total of 521 students, including medical and dental students in the clinical years of undergraduate study, participated in the survey. A questionnaire consisting of 14 questions was constructed based on data available from the published literature and focused group discussion, and was pre-tested for validation. Data analysis was done using SPSS. The statistical analysis of the results showed that internet resources for health care information are preferred as much as conventional resources among health care students. Though the majority of the participants verify the authenticity of information from internet sources, a considerable percentage of candidates felt that all information from the internet can be utilised for clinical decision making, or were not aware of the need to verify the authenticity of such information. 63.7% of the participants rely on evidence based practice in health care for clinical decision making, while 34.2% were not aware of it. A minority of 2.1% did not agree with the concept of evidence based practice. The observations of the survey reveal the increasing use of internet resources for health care information among health care students. The results warrant a move towards evidence based practice in health care, as not all health care information available online may be reliable. Health care personnel should be judicious when utilising information from such resources for clinical decision making.

Keywords: authenticity, evidence based practice, health care information, internet

Procedia PDF Downloads 442
31379 Predictive Analytics in Oil and Gas Industry

Authors: Suchitra Chnadrashekhar

Abstract:

Once regarded merely as a support function within an organization, information technology has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data that would have been unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. IT has given the oil and gas industry the leverage to store, manage and process data in the most efficient way possible, thus deriving economic value in day-to-day operations. Proper synchronization between operational data systems and information technology systems is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic or systems approach towards asset optimization and thus have the functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of this sector. The paper is also supported by real-time data and evaluation of the data for a given oil production asset on an application tool, SAS. The reason for using SAS as the application for our analysis is that SAS provides an analytics-based framework to improve uptimes, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operation disruptions. With state-of-the-art analytics and reporting, maintenance problems can be predicted before they happen and root causes determined in order to update processes for future prevention.

Keywords: hydrocarbon, information technology, SAS, predictive analytics

Procedia PDF Downloads 353
31378 Appraisal of Parents' Views and Supervision of Their Children's Use of Information Communication Technology

Authors: Olabisi Adedigba

Abstract:

It is a fundamental truth that Information Communication Technology (ICT) lies at the very heart of today's society and determines its development. The use of ICT has boosted the educational and mental development of the average pupil of this age far above that of their counterparts who lived centuries ago. Nevertheless, present-day children stand the risk of the scourge of this technology if proactive measures are not taken urgently to arrest the damage of its negative use on them. One of the measures that can be taken is supervision of children's use of ICT. This research therefore investigated parents' views and supervision of their children's use of Information Communication Technology. A descriptive design was adopted for this study, and 300 parents were randomly selected. A questionnaire titled "Parents' Views and Supervision of Children's Use of ICT" was used to collect data for the study. Data collected were analyzed using percentages, means, standard deviations and t-tests. The results revealed that parents' views of their children's use of ICT are negative, while supervision of their children's use of ICT is low. It is thus recommended that schools and other stakeholders educate parents on children's proper utilization of ICT, and parents are urged to maintain adequate supervision of their children's use of ICT.
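The analysis the abstract names (means, standard deviations, t-test) can be sketched as below. The group scores are synthetic illustrations, and the Welch form of the two-sample t statistic is an assumption on our part, not necessarily the variant the study applied.

```python
# Sketch of an independent-samples t statistic comparing two groups of
# parents' supervision scores (illustrative values, not the study's data).
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

# Hypothetical supervision scores for two parent groups:
group_a = [2.1, 2.4, 1.9, 2.6, 2.2, 2.0]
group_b = [2.8, 3.1, 2.6, 3.0, 2.7, 2.9]
print(round(welch_t(group_a, group_b), 2))  # negative: group_a scores lower
```

A full analysis would compare the statistic against the t distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value; the sketch shows only the test statistic itself.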

Keywords: appraisal of parents’ views and supervision, children’s use, information communication technology, t-test

Procedia PDF Downloads 502
31377 Design and Validation of Different Steering Geometries for an All-Terrain Vehicle

Authors: Prabhsharan Singh, Rahul Sindhu, Piyush Sikka

Abstract:

The steering system is an integral part of the vehicle and the medium through which the driver communicates with the vehicle and the terrain; hence, the steering geometry most suitable for the requirements must be chosen. The function of the chosen steering geometry of an All-Terrain Vehicle (ATV) is to provide the desired understeer gradient, minimum tire slippage, and expected weight transfer during turning, as these are the requirements for a good steering geometry of a BAJA ATV. This research paper focuses on choosing the most suitable steering geometry for BAJA ATV tracks by reasoning from the working principle and using fundamental trigonometric functions to obtain these geometries on the same vehicle, namely Ackermann, Anti-Ackermann and Parallel Ackermann. Full vehicle analysis was carried out in the Adams Car analysis software, and graphical results were obtained for various parameters. The steering geometries were achieved by using a single versatile knuckle allowing frontward and rearward tie-rod placement, and were practically tested with the help of data acquisition systems set up on the ATV. Each geometry exhibited certain characteristics; its setup and parameters were observed for the BAJA ATV, and correlations were created between analytical and practical values.
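The trigonometric basis of the Ackermann geometry the abstract compares can be sketched from the ideal Ackermann condition: for a given turn, cot(outer angle) - cot(inner angle) = track / wheelbase. The track and wheelbase values below are illustrative, not the team's ATV dimensions.

```python
# Ideal Ackermann inner/outer steer angles for a given turn radius.
import math

def ackermann_angles(turn_radius_m, wheelbase_m, track_m):
    """Ideal inner/outer front-wheel steer angles (radians) for a turn
    radius measured to the centre of the rear axle."""
    inner = math.atan(wheelbase_m / (turn_radius_m - track_m / 2))
    outer = math.atan(wheelbase_m / (turn_radius_m + track_m / 2))
    return inner, outer

wheelbase, track = 1.4, 1.2  # metres (illustrative ATV dimensions)
inner, outer = ackermann_angles(4.0, wheelbase, track)
print(math.degrees(inner), math.degrees(outer))
# The Ackermann condition holds: cot(outer) - cot(inner) == track / wheelbase
```

Anti-Ackermann reverses this relation (outer wheel steers more than the inner), and Parallel Ackermann sets both angles equal; both can be expressed as departures from the ideal angles computed here.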

Keywords: all-terrain vehicle, Ackermann, Adams car, Baja Sae, steering geometry, steering system, tire slip, traction, understeer gradient

Procedia PDF Downloads 146
31376 Evaluation of Symptoms, Laboratory Findings, and Natural History of IgE Mediated Wheat Allergy

Authors: Soudeh Tabashi, Soudabeh Fazeli Dehkordy, Masood Movahedi, Nasrin Behniafard

Abstract:

Introduction: Food allergy has increased over the last three decades. Since wheat is one of the major constituents of the daily meal in many regions throughout the world, wheat allergy is one of the most important allergies, ranking among the 8 most common types of food allergy. Our information about the epidemiology and etiology of food allergies is limited. Therefore, in this study we sought to evaluate the symptoms and laboratory findings in children with wheat allergy. Materials and methods: Twenty-three patients aged up to 18 years with a diagnosis of IgE-mediated wheat allergy were enrolled in this study. Using a questionnaire, we collected their information and organized it into 4 categories: demographic data, signs and symptoms, comorbidities, and laboratory data. Patients were then followed up for 6 months and their lab data were compared. Results: Most of the patients (82%) presented the symptoms of wheat allergy in the first year of life. The skin and the respiratory system were the most commonly involved organs, with incidences of 86% and 78% respectively. Most of the patients with wheat allergy were also sensitive to other types of food, with sensitivity to egg being the most common (47%). In 57% of patients, IgE levels decreased during the 6-month follow-up period. Conclusion: We do not have enough data on the epidemiology of and response to therapy for wheat allergy, and to the best of our knowledge no study has addressed this issue in Iran so far. This study is the first source of information about IgE-mediated wheat allergy in Iran, and it can provide an opening for future studies on wheat allergy and its treatments.

Keywords: wheat allergy, food allergy, IgE

Procedia PDF Downloads 189
31375 The Use Management of the Knowledge Management and the Information Technologies in the Competitive Strategy of a Self-Propelling Industry

Authors: Guerrero Ramírez Sandra, Ramos Salinas Norma Maricela, Muriel Amezcua Vanesa

Abstract:

This article presents the beginning of a wider study that intends to demonstrate how, within organizations of the automotive industry in the city of Querétaro, knowledge management and technological management are required, together with people's initiative and the interaction embedded within the organization, in an appropriate environment that facilitates information conversion across a wide range of information technologies management (ITM). A company was identified for the pilot study of this research, from which descriptive and inferential research information was obtained. The results of the pilot suggest that some respondents did not identify with the knowledge management topic even though staff have access to information technology (IT) that serves to enhance access to knowledge (through the internet, email, databases, and external and internal company personnel, suppliers, customers and competitors); this implies that there are Knowledge Management (KM) problems. The data show that academically well-prepared organizations often do not recognize the importance of knowledge in the business, nor its implementation, which in the end greatly influences how knowledge is managed; this should guide the company towards greater insight in the search for a competitive strategy, given that the company has an excellent technological infrastructure and yet KM was not exploited. Cultural diversity is another factor that was observed among the staff.

Keywords: Knowledge Management (KM), Technological Knowledge Management (TKM), Technology Information Management (TI), access to knowledge

Procedia PDF Downloads 494
31374 Inadequacy and Inefficiency of the Scoping Requirements in the Preparation of Environmental Impact Assessment Reports for Dam and Reservoir Projects in Thailand

Authors: Natsuda Rattamanee

Abstract:

Like other countries, Thailand continually experiences strong protests against dam and reservoir proposals, especially large-scale projects. The protestors are constantly worried about the potential significant adverse impacts of the projects on the environment and society. Although project proponents are required by law to assess the environmental and social impacts of dam proposals by preparing environmental impact assessment (EIA) reports and identifying mitigation measures before implementing their plans, the outcomes of the assessments often do not lessen the concerns of the affected people and the public about the potential negative effects of the projects. One of the main reasons is that Thailand does not have a proper and efficient law to regulate project proponents when determining the scope of environmental impact assessments. Scoping is the crucial second stage of the preparation of an EIA report. An appropriate scope allows EIA studies to focus only on the significant effects of the proposed project on particular resources, areas, and communities, and offers crucial and sufficient information to decision-makers and the public. A decision to implement a dam and reservoir project that is based on assessments with proper scoping will eventually be more widely accepted by the public and reduce community opposition. This research work seeks to identify flaws in the current requirements of the scoping step under Thai laws and regulations and proposes recommendations to improve the legal scheme. The paper explores the well-established United States laws and relevant rules regulating how lead agencies determine the scope of their environmental impact assessments, as well as guidelines concerning scoping published by leading institutions. Policymakers and the legislature will find the results of the study helpful in improving the scoping-step requirements of EIA for dam and reservoir projects and reducing the level of anti-dam protests in Thailand.

Keywords: dam and reservoir, EIA, environmental impact assessment, law, scoping, Thailand

Procedia PDF Downloads 83
31373 Realization of Autonomous Guidance Service by Integrating Information from NFC and MEMS

Authors: Dawei Cai

Abstract:

In this paper, we present an autonomous guidance service that combines position information from NFC with orientation information from a six-axis acceleration and terrestrial magnetism sensor. We developed an algorithm to calculate the device orientation based on the data from the acceleration and terrestrial magnetism sensor. If a visitor wants an explanation of the exhibit in front of them, all they have to do is lift their mobile device. The identification program automatically identifies the status based on the information from the NFC and MEMS sensors and starts playing the explanation content. This service may be especially convenient for elderly people, people with disabilities, and children.
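The kind of orientation calculation the abstract describes can be sketched as a tilt-compensated compass heading derived from one accelerometer and magnetometer sample. The axis conventions and sensor values below are illustrative assumptions, not the paper's algorithm.

```python
# Sketch of a tilt-compensated heading from 6-axis accel + magnetometer data.
import math

def heading_deg(ax, ay, az, mx, my, mz):
    """Yaw in degrees from magnetic north for one accel + mag sample."""
    # Roll and pitch estimated from the gravity direction
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetic vector back into the horizontal plane
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch)
           + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(-myh, mxh)) % 360.0

# Device lying flat (gravity on +z), horizontal field component along +x:
print(heading_deg(0.0, 0.0, 9.8, 30.0, 0.0, -40.0))
```

Combining this heading with the exhibit position read over NFC is what lets the service infer which exhibit the lifted device is pointing at.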

Keywords: NFC, ubiquitous computing, guide system, MEMS

Procedia PDF Downloads 402
31372 An Improved Parallel Algorithm of Decision Tree

Authors: Jiameng Wang, Yunfei Yin, Xiyu Deng

Abstract:

Parallel optimization is currently one of the important research topics in data mining. Taking Classification and Regression Tree (CART) parallelization as an example, this paper proposes a parallel data mining algorithm based on SSP-OGini-PCCP. Aiming at the problem of choosing the best CART segmentation point, this paper designs an S-SP model without data association; and in order to calculate the Gini index efficiently, a parallel OGini calculation method is designed. In addition, in order to improve the efficiency of the pruning algorithm, a synchronous PCCP pruning strategy is proposed in this paper. The optimal segmentation calculation, Gini index calculation, and pruning algorithm are studied in depth; these are important components of parallel data mining. By constructing a distributed cluster simulation system based on SPARK, data mining methods based on SSP-OGini-PCCP were tested. Experimental results show that this method can increase the search efficiency of the best segmentation point by an average of 89%, increase the search efficiency of the Gini segmentation index by 3853%, and increase the pruning efficiency by 146% on average; and as the size of the data set increases, the performance of the algorithm remains stable, which meets the requirements of contemporary massive data processing.
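The quantity the paper's parallel OGini method accelerates is the Gini impurity of candidate splits; a serial sketch of that computation is shown below. The SSP/OGini/PCCP parallel machinery itself is not reproduced here.

```python
# Serial sketch of the Gini impurity that CART minimises over split points.
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_gini(left, right):
    """Weighted Gini impurity of a binary split; CART picks the candidate
    split point that minimises this value."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

pure = split_gini([0, 0, 0], [1, 1, 1])   # a perfectly separating split
mixed = split_gini([0, 1, 0], [1, 0, 1])  # an uninformative split
print(pure, mixed)
```

Because every candidate split requires these class counts, distributing and reusing the counts across workers, as OGini does, is where the reported speed-up comes from.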

Keywords: classification, Gini index, parallel data mining, pruning ahead

Procedia PDF Downloads 116
31371 Continuance Intention to Use E-administration Information Portal by Non-teaching Staff in Selected Universities, Southwest, Nigeria

Authors: Adebayo Muritala Adegbore

Abstract:

E-administration is increasingly recognized as an important phenomenon in the 21st century, and its place in society at both the public and private levels cannot be downplayed. Of particular interest is how these platforms are adopted and used in academia, given academia's role in shaping the overall development of society, and particularly in the administrative activities of non-teaching staff in universities, since little has been done to examine the continuance intention of non-teaching staff to use e-administration information portals. This study therefore investigates the continuance intention of senior non-teaching staff to use e-administration information portals in selected universities in southwest Nigeria. The study adopted a correlational survey design, using simple random sampling to select three hundred and fifty-two (352) senior non-teaching staff in the selected universities. A standardized questionnaire was used for data capture, while data were analyzed using the descriptive statistics of frequency counts, percentages, means, and standard deviations for the research questions and the Pearson Product Moment Correlation for the hypothesis. Findings revealed that the continuance intention of senior non-teaching staff to use the e-administration information portal is positive (x = 3.13), that the university portal is one of the most utilized e-administration tools (83.4%), and that there was a significant inverse relationship between continuance intention to use and actual use of the e-administration information portal (r = -.254; p < 0.05; N = 320).
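The hypothesis test used in the study, the Pearson product-moment correlation, can be sketched as follows. The intention and usage scores below are synthetic illustrations constructed to show an inverse relationship like the one reported (r = -.254); they are not the study's data.

```python
# Sketch of a Pearson product-moment correlation between two score lists.
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of paired samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical continuance-intention vs portal-usage scores (inverse by design):
intention = [4.1, 3.8, 3.5, 3.9, 3.2, 4.4, 3.0, 3.6]
usage     = [2.1, 2.6, 3.0, 2.4, 3.3, 1.9, 3.5, 2.8]

r = pearson_r(intention, usage)
print(round(r, 3))  # negative r mirrors the inverse relationship reported
```

In the study itself, the coefficient would be computed over all sampled staff and compared against its critical value at p < 0.05 to decide the hypothesis.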

Keywords: e-administration, e-portal, non-teaching staff, information systems, continuance intention, use of e-administration portals

Procedia PDF Downloads 185