Search results for: open source data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30070


26620 Dialysis Access Surgery for Patients in Renal Failure: A 10-Year Institutional Experience

Authors: Daniel Thompson, Muhammad Peerbux, Sophie Cerutti, Hansraj Bookun

Abstract:

Introduction: Dialysis access is a key component of the care of patients with end stage renal failure. In our institution, a combined service of vascular surgeons and nephrologists is responsible for the creation and maintenance of arteriovenous fistulas (AVF), Tenckhoff catheters and Hickman/Permcath lines. This poster investigates the last 10 years of dialysis access surgery conducted at St. Vincent’s Hospital Melbourne. Method: A cross-sectional retrospective analysis was conducted of patients of St. Vincent’s Hospital Melbourne (Victoria, Australia) utilising data collected from the Australasian Vascular Audit (Australian and New Zealand Society for Vascular Surgery). Descriptive demographic analysis was carried out, as well as analysis of operation type, length of hospital stay, postoperative deaths and need for reoperation. Results: 2085 patients with renal failure were operated on between the years of 2011 and 2020. 1315 were male (63.1%) and 770 were female (36.9%). The mean age was 58 (SD 13.8). 92% of patients scored three or greater on the American Society of Anesthesiologists classification system. Almost half had a history of ischaemic heart disease (48.4%), more than half had a history of diabetes (64%), and a majority had hypertension (88.4%). 1784 patients had a creatinine over 150 mmol/L (85.6%); the rest were on dialysis (14.4%). The most common access procedure was AVF creation, with 474 autologous AVFs and 64 prosthetic AVFs. There were 263 Tenckhoff insertions. We performed 160 cadaveric renal transplants. The most common location for AVF formation was brachiocephalic (43.88%), followed by radiocephalic (36.7%) and brachiobasilic (16.67%). Fistulas that required re-intervention were most commonly angioplastied (n=163), followed by thrombectomy (n=136). There were 107 local fistula repairs. Average length of stay was 7.6 days (SD 12). There were 106 unplanned returns to theatre, most commonly for fistula creation, Tenckhoff insertion or Permcath removal (71.7%). There were 8 deaths in the immediate postoperative period. Discussion: Access to dialysis is vital for patients with end stage kidney disease and requires a multidisciplinary approach from nephrologists, vascular surgeons and allied health practitioners. Our service provides a variety of dialysis access methods, predominantly fistula creation and Tenckhoff insertion. Patients with renal failure are heavily comorbid, and prolonged hospital admission following surgery is a source of significant healthcare expenditure. AVFs require careful monitoring and maintenance for ongoing utility, and our data reflect the multitude of operations required to maintain usable access. The requirement for dialysis is growing worldwide, and our data demonstrate a local experience in access, with preferred methods, common complications and the associated surgical interventions.

Keywords: dialysis, fistula, nephrology, vascular surgery

Procedia PDF Downloads 113
26619 Grammatical Forms and Functions in Selected Political Interviews of Nigerian Presidential Aspirants in 2015 General Election

Authors: Temitope Abiodun Balogun

Abstract:

Political interviews are one of the ways by which political office-seekers in Nigeria sell themselves to the electorate. Extant studies have examined the discourse of political interviews from conversational, philosophical, rhetorical, stylistic and pragmatic perspectives, with insufficient attention paid to the grammatical forms and communicative intentions of the interviews granted by the two presidential aspirants in the 2015 Nigerian general election. This study fills this scholarly gap to unmask their grammatical forms, communicative styles, intention and credibility. The paper adopts Halliday’s Systemic Functional Grammar, specifically the interpersonal function, coupled with Searle’s model of speech act theory as a theoretical framework. A total of six interviews granted by the two presidential aspirants in the media serve as the source of data. It is discovered that, in most cases, the politicians’ communicative intention is to “pull down” their political opponents. While the declaratives and interrogatives are simple, direct and straightforward, the intention is to condemn, lambast and castigate their opponents. This communicative style does not allow the general populace to decipher the political manifestoes of the political aspirants and the parties they represent. The paper recommends that before Nigeria can boast of any sustainable growth and development, there is the need for her political office-seekers to adopt effective communication strategies and styles to unveil their intentions and manifestoes so that the electorate can evaluate their performance after their tenure of office.

Keywords: general election, grammatical forms and function, political interviews, presidential aspirants

Procedia PDF Downloads 161
26618 DCASH: Dynamic Cache Synchronization Algorithm for Heterogeneous Reverse Y Synchronizing Mobile Database Systems

Authors: Gunasekaran Raja, Kottilingam Kottursamy, Rajakumar Arul, Ramkumar Jayaraman, Krithika Sairam, Lakshmi Ravi

Abstract:

The synchronization server maintains a dynamically changing cache, which contains the data items that were requested and collected by the mobile node from the server. The order and presence of tuples in the cache change dynamically according to the frequency of updates performed on the data by the server and client. To synchronize, the data which have been modified by the client and the server at an instant are collected, batched together by the type of modification (insert/update/delete), and sorted according to their update frequencies. This ensures that DCASH (Dynamic Cache Synchronization Algorithm for Heterogeneous Reverse Y Synchronizing Mobile Database Systems) gives priority to frequently accessed data with high usage. An optimal memory management algorithm is proposed to manage data items according to their frequency, theorems are presented to show that current mobile data activity is reverse Y in nature, and experiments were run on 2G and 3G networks for various mobile devices to show the reduced response time and energy consumption.
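
As a rough illustration of the batching and prioritisation step described above, the Python sketch below groups pending modifications by operation type and sorts each batch by update frequency; the record layout and function name are assumptions made for the example, not part of the DCASH specification.

```python
from collections import defaultdict

# Hypothetical change record: (item_id, operation, update_count).
def batch_and_prioritise(changes):
    """Group pending modifications by operation type and sort each batch
    so that frequently updated items are synchronised first."""
    batches = defaultdict(list)
    for item_id, op, update_count in changes:
        batches[op].append((item_id, update_count))
    for op in batches:
        batches[op].sort(key=lambda rec: rec[1], reverse=True)
    return dict(batches)

changes = [("a", "update", 12), ("b", "insert", 3), ("c", "update", 7), ("d", "delete", 1)]
print(batch_and_prioritise(changes))
```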

Keywords: mobile databases, synchronization, cache, response time

Procedia PDF Downloads 405
26617 Identifying Risk Factors for Readmission Using Decision Tree Analysis

Authors: Sıdıka Kaya, Gülay Sain Güven, Seda Karsavuran, Onur Toka

Abstract:

This study is part of an ongoing research project supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 114K404, and participation in this conference was supported by the Hacettepe University Scientific Research Coordination Unit under Project Number 10243. Evaluation of hospital readmissions is gaining importance in terms of quality and cost and is becoming the target of national policies. In Turkey, the topic of hospital readmission is relatively new on the agenda, and very few studies have been conducted on it. The aim of this study was to determine 30-day readmission rates and risk factors for readmission. Whether a readmission was planned, related to the prior admission and avoidable or not was also assessed. The study was designed as a ‘prospective cohort study’. 472 patients hospitalized in the internal medicine departments of a university hospital in Turkey between February 1, 2015 and April 30, 2015 were followed up. Analyses were conducted using IBM SPSS Statistics version 22.0 and SPSS Modeler 16.0. The average age of the patients was 56, and 56% of the patients were female. Among these patients, 95 were readmitted. The overall readmission rate was calculated as 20% (95/472). However, only 31 readmissions were unplanned; the unplanned readmission rate was 6.5% (31/472). Of the 31 unplanned readmissions, 24 were related to the prior admission, and only 6 related readmissions were avoidable. To determine risk factors for readmission, we constructed a Chi-square automatic interaction detection (CHAID) decision tree. CHAID decision trees are nonparametric procedures that make no assumptions about the underlying data. The algorithm determines how independent variables best combine to predict a binary outcome based on ‘if-then’ logic by partitioning each independent variable into mutually exclusive subsets based on homogeneity of the data. The independent variables included in the analysis were: clinic of the department, occupied beds/total number of beds in the clinic at the time of discharge, age, gender, marital status, educational level, distance to residence (km), number of people living with the patient, any person to help with his/her care at home after discharge (yes/no), regular source (physician) of care (yes/no), day of discharge, length of stay, ICU utilization (yes/no), total comorbidity score, means for each of the 3 dimensions of the Readiness for Hospital Discharge Scale (patient’s personal status, patient’s knowledge, and patient’s coping ability) and number of daycare admissions within 30 days of discharge. In the analysis, we included all 95 readmitted patients (46.12%) but only 111 (53.88%) of the 377 non-readmitted patients, to balance the data. The risk factors for readmission were found to be total comorbidity score, gender, patient’s coping ability, and patient’s knowledge. The strongest identifying factor for readmission was the comorbidity score: if a patient’s comorbidity score was higher than 1, the risk of readmission increased. The results of this study need to be validated on other data sets with more patients. However, we believe that this study will guide further studies of readmission and that CHAID is a useful tool for identifying risk factors for readmission.
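
The study grew its tree in SPSS Modeler; the sketch below shows the same idea with a CART tree from scikit-learn, used here only as an illustrative stand-in for CHAID. The synthetic data, column names and value ranges are assumptions, not the study's coding scheme.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the balanced study data: the four risk factors named in
# the abstract plus a binary readmission outcome.
rng = np.random.default_rng(0)
n = 206                                   # 95 readmitted + 111 non-readmitted
df = pd.DataFrame({
    "total_comorbidity_score": rng.integers(0, 6, n),
    "gender": rng.integers(0, 2, n),
    "coping_ability": rng.uniform(1, 10, n),
    "knowledge": rng.uniform(1, 10, n),
})
df["readmitted"] = (df["total_comorbidity_score"] > 1).astype(int)

# A shallow CART tree (stand-in for the CHAID tree) keeps the splits readable
# as 'if-then' rules, mirroring how the study interprets its output.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
X, y = df.drop(columns="readmitted"), df["readmitted"]
print(cross_val_score(tree, X, y, cv=5).mean())
```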

Keywords: decision tree, hospital, internal medicine, readmission

Procedia PDF Downloads 256
26616 Unified Structured Process for Health Analytics

Authors: Supunmali Ahangama, Danny Chiang Choon Poo

Abstract:

Health analytics (HA) is used in healthcare systems for effective decision-making, management, and planning of healthcare and related activities. However, user resistance, the unique nature of medical data content and structure (including heterogeneous and unstructured data), and impromptu HA projects have held up progress in HA applications. Notably, the accuracy of outcomes depends on the skills and the domain knowledge of the data analyst working on the healthcare data. The success of HA depends on having a sound process model, effective project management and the availability of supporting tools. Thus, to overcome these challenges through an effective process model, we propose an HA process model with features from the rational unified process (RUP) model and agile methodology.

Keywords: agile methodology, health analytics, unified process model, UML

Procedia PDF Downloads 506
26615 Use of Life Cycle Data for State-Oriented Maintenance

Authors: Maximilian Winkens, Matthias Goerke

Abstract:

State-oriented maintenance enables preventive intervention before the failure of a component and guarantees the avoidance of expensive breakdowns. Because the timing of the maintenance is defined by the component’s state, the remaining service life can be exhausted to the limit. The basic requirement for state-oriented maintenance is the ability to determine the component’s state. New potential for this is offered by gentelligent components, developed at the Collaborative Research Centre 653 of the German Research Foundation (DFG). Because of their sensory ability, they enable the registration of stresses during the component’s use. The data is gathered and evaluated. The methodology developed determines the current state of the gentelligent component based on the gathered data. This article presents this methodology as well as current research. The main focus of the current scientific work is to improve the quality of the state determination based on life-cycle data analysis. The methodology developed so far evaluates the data of the usage phase and, based on it, predicts the timing of the gentelligent component’s failure. The real failure timing, though, deviates from the predicted one because the effects of the production phase are not considered. The goal of the current research is to develop a methodology for state determination which considers both production and usage data.

Keywords: state-oriented maintenance, life-cycle data, gentelligent component, preventive intervention

Procedia PDF Downloads 495
26614 A Hybrid System for Boreholes Soil Sample

Authors: Ali Ulvi Uzer

Abstract:

Data reduction is an important topic in the field of pattern recognition applications. The basic concept is the reduction of multitudinous amounts of data down to the meaningful parts. The Principal Component Analysis (PCA) method is frequently used for data reduction. The Support Vector Machine (SVM) method is a discriminative classifier formally defined by a separating hyperplane; in other words, given labeled training data, the algorithm outputs an optimal hyperplane which categorizes new examples. This study offers a hybrid approach that uses PCA for data reduction and Support Vector Machines (SVM) for classification. In order to assess the accuracy of the suggested system, soil samples taken from two boreholes were used. The classification accuracies for this dataset were obtained using the ten-fold cross-validation method. As the results suggest, this system, which performs dimensionality reduction before classification, allows faster recognition of the dataset, so our study results appear to be very promising.
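
A minimal sketch of such a PCA-plus-SVM pipeline with ten-fold cross-validation is shown below using scikit-learn; the random placeholder data, component count and kernel choice are assumptions for illustration, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder features and labels standing in for the borehole soil samples.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(120, 20)), rng.integers(0, 2, size=120)

model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=10)   # ten-fold cross-validation
print(scores.mean())
```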

Keywords: feature selection, sequential forward selection, support vector machines, soil sample

Procedia PDF Downloads 455
26613 Predicting Customer Purchasing Behaviour in Retail Marketing: A Research for a Supermarket Chain

Authors: Sabri Serkan Güllüoğlu

Abstract:

Market analysis can be defined as the process of gathering, recording and researching data related to products and services in order to learn something. For marketers, however, analysis is not only used for learning but is also an essential and critical part of the business, because it allows companies to offer products or services which are focused and well targeted. Market analysis also identifies market trends, demographics, customers’ buying habits and important information on the competition. Data mining is used instead of traditional research because it extracts predictive information about customers and sales from large databases. In contrast to traditional research, data mining relies on information that is already available. Simply put, the goal is to improve the efficiency of supermarkets. In this study, the purpose is to find dependencies between products, for instance, which items are bought together, using association rules in data mining. Moreover, this information will be used for improving the profitability of customers, such as increasing shopping time and the sales of less frequently sold items.
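
A compact market basket example of the association rule mining step is sketched below with the mlxtend library; the toy baskets and the support and confidence thresholds are placeholders, not the supermarket chain's data or tuned values.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules
from mlxtend.preprocessing import TransactionEncoder

# Toy transactions standing in for the supermarket's point-of-sale data.
baskets = [["bread", "milk"], ["bread", "diapers", "beer"],
           ["milk", "diapers", "beer"], ["bread", "milk", "diapers"]]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(baskets), columns=te.columns_)
itemsets = apriori(onehot, min_support=0.5, use_colnames=True)      # frequent itemsets
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```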

Keywords: data mining, association rule mining, market basket analysis, purchasing

Procedia PDF Downloads 483
26612 Social Enterprises in India: Conceptualization and Challenges

Authors: Prajakta Khare

Abstract:

A huge number of social enterprises operate in India, across all enterprise sizes and forms, addressing diverse social issues. Some cases, such as Aravind eye care, Narayana Hridalaya and SEWA, have been studied extensively in the management literature and are well-known cases in social entrepreneurship. But there are several smaller social enterprises in India that are not called so per se, due to the lack of understanding of the concept. There is a lack of academic research on social entrepreneurship in India, and the term ‘social entrepreneurship’ is not yet widely known in the country, even by people working in this field, as this study found. The present study aims to identify the most prominent forms of social enterprises in India, the profile of the entrepreneurs, the challenges faced, the lessons (theory and practices) emerging from their functioning and, finally, the factors contributing to the enterprises’ success. This is a preliminary exploratory study using primary data from 30 social enterprises in India. The study used snowball sampling and a qualitative analysis. Data were collected from founders of social enterprises through written structured questionnaires, open-ended interviews and field visits to enterprises. The sample covered enterprises across sectors such as environment, affordable education, children’s rights, rainwater harvesting, women’s empowerment, etc. The interview questions focused on the founder’s background and motivation, qualifications, funding, challenges, the founder’s understanding and perspectives on social entrepreneurship, government support and linkages with other organizations, among several other topics. The interviews were conducted in three languages (Hindi, Marathi and English) and were then translated and transcribed. 50% of the founders were women, and 65% of the founders were highly qualified, with an MBA, PhD or MBBS. The most important challenge faced by these entrepreneurs is recruiting skilled people. When asked about their understanding of the terms social enterprise and social entrepreneur, founders had extremely varied perspectives: some identified the terms with doing something good for society, while some thought that every business can be called a social enterprise. 35% of the founders were not aware of the term social entrepreneur/social entrepreneurship; they said that they could identify themselves as social entrepreneurs after discussions with the researcher. The general perception in India is that ‘NGOs are corrupt’; fighting against this perception to secure funds is another problem, as pointed out by some founders. There are unique challenges that social entrepreneurs in India face, as the political, social and economic environment around them is rapidly changing, and getting adequate support from the government is a problem. The research, in its subsequent stages, aims to clarify existing, missing and new definitions of the term to provide deeper insights into the terminology and issues relating to social entrepreneurship in India.

Keywords: challenges, India, social entrepreneurship, social entrepreneurs

Procedia PDF Downloads 467
26611 Hamiltonian Related Properties with and without Faults of the Dual-Cube Interconnection Network and Their Variations

Authors: Shih-Yan Chen, Shin-Shin Kao

Abstract:

In this paper, a thorough review of dual-cubes, DCn, the related studies and their variations is given. DCn was introduced as a network which retains the pleasing properties of the hypercube Qn but uses vertices of much smaller degree than a hypercube of comparable size. In fact, it is constructed so that the number of vertices of DCn is equal to the number of vertices of Q2n+1. However, each vertex in DCn is adjacent to only n + 1 neighbors, so DCn has (n + 1) × 2^(2n) edges in total, which is roughly half the number of edges of Q2n+1. In addition, the diameter of any DCn is 2n + 2, which is of the same order as that of Q2n+1. For self-completeness, basic definitions, construction rules and symbols are provided. We chronicle the results, presenting eleven significant theorems, and include some open problems at the end.
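
The vertex, edge and degree figures quoted above can be checked numerically; the short sketch below assumes the standard dual-cube parameters (2^(2n+1) vertices of degree n + 1) and compares the resulting edge count against that of the hypercube Q2n+1.

```python
# |V(DCn)| = 2^(2n+1), degree n+1, so |E(DCn)| = (n+1)*2^(2n);
# |E(Q_{2n+1})| = (2n+1)*2^(2n). The ratio approaches 1/2 as n grows.
for n in range(1, 6):
    vertices = 2 ** (2 * n + 1)            # same vertex count as Q_{2n+1}
    edges_dc = (n + 1) * 2 ** (2 * n)      # dual-cube edge count
    edges_q = (2 * n + 1) * 2 ** (2 * n)   # hypercube Q_{2n+1} edge count
    print(n, vertices, edges_dc, edges_q, round(edges_dc / edges_q, 3))
```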

Keywords: dual-cubes, dual-cube extensive networks, dual-cube-like networks, hypercubes, fault-tolerant hamiltonian property

Procedia PDF Downloads 470
26610 Predicting Medical Check-Up Patient Re-Coming Using Sequential Pattern Mining and Association Rules

Authors: Rizka Aisha Rahmi Hariadi, Chao Ou-Yang, Han-Cheng Wang, Rajesri Govindaraju

Abstract:

As medical check-ups increase in popularity, a huge amount of medical check-up data is stored in databases without being put to use. These data can actually be very useful for future strategic planning if we mine them correctly. On the other hand, many patients arrive unpredictably, and the limited available facilities keep the medical check-up service offered by the hospital from being maximal. To solve that problem, this study used these medical check-up data to predict patient re-coming. Sequential pattern mining (SPM) and the association rules method were chosen because these methods are suitable for predicting patient re-coming using sequential data. First, based on patient personal information, the data were grouped into … groups, and then discriminant analysis was done to check the significance of the grouping. Second, for each group, frequent patterns were generated using the SPM method. Third, based on the frequent patterns of each group, pairs of variables were extracted using association rules to get a general pattern of re-coming patients. Last, a discussion and conclusion give some implications of the results.
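
As a rough illustration of the sequential pattern mining step, the sketch below counts the support of ordered item pairs across toy visit histories; the package codes, minimum support and data layout are assumptions, not the hospital's records or the study's parameters.

```python
from collections import Counter
from itertools import combinations

# Toy visit histories: each list is one patient's ordered sequence of check-up
# package codes. Real input would come from the hospital database.
histories = [["basic", "cardio", "basic"], ["basic", "basic"],
             ["cardio", "basic", "cardio"], ["basic", "cardio"]]

def frequent_sequences(sequences, min_support=0.5):
    """Count ordered pairs (a occurs before b) and keep those whose support,
    i.e. the fraction of sequences containing the pair in order, is high enough."""
    counts = Counter()
    for seq in sequences:
        seen = set()
        for i, j in combinations(range(len(seq)), 2):
            seen.add((seq[i], seq[j]))
        counts.update(seen)
    n = len(sequences)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

print(frequent_sequences(histories))
```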

Keywords: patient re-coming, medical check-up, health examination, data mining, sequential pattern mining, association rules, discriminant analysis

Procedia PDF Downloads 640
26609 Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification

Authors: Samiah Alammari, Nassim Ammour

Abstract:

When a massive number of tasks is provided successively to a deep learning process, good model performance requires preserving the data of previous tasks to retrain the model for each upcoming classification. Otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for remote sensing hyperspectral image region classification. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes during the new task learning, and the second module tries to learn how to replicate the data of the previous tasks by discovering the latent data structure of the new task dataset. We conduct experiments on the Indian Pines HSI dataset. The results confirm the capability of the proposed method.
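
A minimal PyTorch sketch of the two-subnetwork idea is given below, pairing a classification branch with an autoencoder-style reconstruction branch that can later replay past data; the layer sizes, band and class counts, and loss weighting are placeholders, not the architecture reported here.

```python
import torch
import torch.nn as nn

class ContinualHSIModel(nn.Module):
    """Classifier head for the current task plus an autoencoder that learns to
    reconstruct (and later replay) the task data."""
    def __init__(self, n_bands=200, n_classes=16, latent=32):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(), nn.Linear(128, n_classes))
        self.encoder = nn.Sequential(nn.Linear(n_bands, latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent, n_bands))

    def forward(self, x):
        logits = self.classifier(x)             # discrimination branch
        recon = self.decoder(self.encoder(x))   # generative replay branch
        return logits, recon

x = torch.randn(8, 200)                         # 8 pixels, 200 spectral bands
logits, recon = ContinualHSIModel()(x)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 16, (8,))) \
       + nn.functional.mse_loss(recon, x)       # joint classification + replay loss
print(float(loss))
```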

Keywords: continual learning, data reconstruction, remote sensing, hyperspectral image segmentation

Procedia PDF Downloads 266
26608 Configuration as a Service in Multi-Tenant Enterprise Resource Planning System

Authors: Mona Misfer Alshardan, Djamal Ziani

Abstract:

Enterprise resource planning (ERP) systems are organizations’ tickets to the global market. With the implementation of ERP, organizations can manage and coordinate all functions, processes, resources and data from different departments with a single piece of software. However, many organizations consider the cost of traditional ERP to be expensive and look for alternative affordable solutions within their budget. One of these alternative solutions is providing ERP over a software as a service (SaaS) model, which could be considered a cost-effective solution compared to the traditional ERP system. A key feature of any SaaS system is the multi-tenancy architecture, where multiple customers (tenants) share the system software. However, different organizations have different requirements. Thus, SaaS developers accommodate each tenant’s unique requirements by allowing tenant-level customization or configuration. While customization requires source code changes and, in most cases, programming experience, the configuration process allows users to change many features within a predefined scope in an easy and controlled manner. The literature provides many techniques to accomplish the configuration process in different SaaS systems. However, the nature and complexity of SaaS ERP need more attention to the details of the configuration process, which is only briefly described in previous research. Thus, this research builds on strong knowledge regarding configuration in SaaS to define specifically the configuration borders in SaaS ERP and to design a configuration service that considers the different configuration aspects. The proposed architecture ensures the ease of the configuration process by using wizard technology, while privacy and performance are guaranteed by adopting the database isolation technique.
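
To make the tenant-level configuration idea concrete, here is a small, purely illustrative Python sketch of a per-tenant configuration record that restricts changes to a predefined scope and keeps one isolated database per tenant; the field names and allowed options are assumptions, not the paper's design.

```python
from dataclasses import dataclass, field

# Predefined configurable scope: only these keys/values may be changed by a tenant.
ALLOWED_OPTIONS = {"currency": {"USD", "SAR", "EUR"},
                   "fiscal_year_start": {"01", "04", "07"}}

@dataclass
class TenantConfig:
    tenant_id: str
    database_url: str                       # one isolated database per tenant
    options: dict = field(default_factory=dict)

    def set_option(self, key, value):
        """Wizard-style step: reject anything outside the predefined scope."""
        if key not in ALLOWED_OPTIONS or value not in ALLOWED_OPTIONS[key]:
            raise ValueError(f"{key}={value} is outside the configurable scope")
        self.options[key] = value

acme = TenantConfig("acme", "postgresql://erp-acme-db/erp")
acme.set_option("currency", "SAR")
print(acme)
```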

Keywords: configuration, software as a service, multi-tenancy, ERP

Procedia PDF Downloads 393
26607 Local Differential Privacy-Based Data-Sharing Scheme for Smart Utilities

Authors: Veniamin Boiarkin, Bruno Bogaz Zarpelão, Muttukrishnan Rajarajan

Abstract:

The manufacturing sector is a vital component of most economies, which makes it the target of a large number of cyberattacks on organisations, where disruption in operation may lead to significant economic consequences. Adversaries aim to disrupt the production processes of manufacturing companies, gain financial advantages, and steal intellectual property by getting unauthorised access to sensitive data. Access to sensitive data helps organisations to enhance their production and management processes. However, the majority of existing data-sharing mechanisms are either susceptible to different cyber attacks or heavy in terms of computation overhead. In this paper, a privacy-preserving data-sharing scheme for smart utilities is proposed. First, a customer’s privacy adjustment mechanism is proposed to make sure that end-users have control over their privacy, which is required by the latest government regulations, such as the General Data Protection Regulation. Secondly, a local differential privacy-based mechanism is proposed to ensure the privacy of the end-users by hiding real data based on the end-user preferences. The proposed scheme may be applied to different industrial control systems, whereas in this study, it is validated for energy utility use cases consisting of smart, intelligent devices. The results show that the proposed scheme may guarantee the required level of privacy with an expected relative error in utility.
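
As an illustration of a local differential privacy step, the sketch below perturbs numeric readings on the end-user side with the Laplace mechanism before they are shared; the epsilon, sensitivity and smart-meter values are illustrative assumptions, not the scheme's actual mechanism or parameters.

```python
import numpy as np

def ldp_perturb(readings, epsilon=1.0, sensitivity=1.0):
    """Each end-user adds Laplace noise locally, so only noisy data leaves the device."""
    rng = np.random.default_rng()
    scale = sensitivity / epsilon            # smaller epsilon -> more noise, more privacy
    return readings + rng.laplace(loc=0.0, scale=scale, size=len(readings))

true_readings = np.array([3.2, 3.4, 3.1, 3.3])      # e.g. smart-meter values
shared = ldp_perturb(true_readings, epsilon=0.5)
print(shared, abs(shared.mean() - true_readings.mean()))   # utility error of the shared data
```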

Keywords: data-sharing, local differential privacy, manufacturing, privacy-preserving mechanism, smart utility

Procedia PDF Downloads 76
26606 Changes in the Subjective Interpretation of Poverty Due to COVID-19: The Case of a Peripheral County of Hungary

Authors: Eszter Siposne Nandori

Abstract:

The paper describes how the subjective interpretation of poverty changed during the COVID-19 pandemic. The results of data collection at the end of 2020 are compared to the results of a similar survey from 2019. Systematic data collection methods are used to gather the population's beliefs about poverty. The analysis is carried out in Borsod-Abaúj-Zemplén County, one of the most backward areas in Hungary. The paper concludes that poverty is mainly linked to material values, and this did not change from 2019 to 2020. Some slight changes, however, highlight the effect of the pandemic: poverty was increasingly seen as a generational problem in 2020, and another important change is that isolation became more closely related to poverty.

Keywords: Hungary, interpretation of poverty, pandemic, systematic data collection, subjective poverty

Procedia PDF Downloads 126
26605 An Encapsulation of a Navigable Tree Position: Theory, Specification, and Verification

Authors: Nicodemus M. J. Mbwambo, Yu-Shan Sun, Murali Sitaraman, Joan Krone

Abstract:

This paper presents a generic data abstraction that captures a navigable tree position. The mathematical modeling of the abstraction encapsulates the current tree position, which can be used to navigate and modify the tree. The encapsulation of the tree position in the data abstraction specification avoids the use of explicit references and aliasing, thereby simplifying verification of (imperative) client code that uses the data abstraction. To ease the tasks of such specification and verification, a general tree theory, rich with mathematical notations and results, has been developed. The paper contains an example to illustrate automated verification ramifications. With sufficient tree theory development, automated proving seems plausible even in the absence of a special-purpose tree solver.
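
For intuition only, here is a small Python sketch of a navigable tree position: a cursor object that lets client code move down, move up and modify the tree without handling explicit node references. The class and method names are illustrative and do not come from the paper's formal specification.

```python
class TreePosition:
    """Cursor over a rose tree; navigation returns positions, never raw node refs."""
    def __init__(self, label, children=(), parent=None):
        self.label = label
        self.parent = parent
        self.children = [TreePosition(lbl, kids, self) for lbl, kids in children]

    def down(self, i):
        return self.children[i]           # advance to the i-th child

    def up(self):
        return self.parent or self        # retreat toward the root

    def relabel(self, new_label):
        self.label = new_label            # modify the tree at the current position
        return self

t = TreePosition("root", [("a", []), ("b", [("c", [])])])
print(t.down(1).down(0).relabel("c2").up().label)   # navigate, modify, navigate back -> "b"
```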

Keywords: automation, data abstraction, maps, specification, tree, verification

Procedia PDF Downloads 166
26604 Accurate Position Electromagnetic Sensor Using Data Acquisition System

Authors: Z. Ezzouine, A. Nakheli

Abstract:

This paper presents a high position electromagnetic sensor system (HPESS) that is applicable to moving object detection. The authors have developed a high-performance position sensor prototype dedicated to a students’ laboratory. The challenge was to obtain a highly accurate, real-time sensor that is able to calculate position, length or displacement. An electromagnetic solution based on a two-coil induction principle was adopted. The HPESS converts mechanical motion to electric energy with direct contact. The output signal can then be fed to an electronic circuit. The voltage output change from the sensor is captured by a data acquisition system using LabVIEW software, and the displacement of the moving object is determined. The measured data are transmitted to a PC in real time via a DAQ (NI USB-6281). This paper also describes the data acquisition analysis and the conditioning card developed specially for sensor signal monitoring. The data are then recorded and viewed using a user interface written in National Instruments LabVIEW software. On-line displays of the time and voltage of the sensor signal provide a user-friendly data acquisition interface. The sensor provides an uncomplicated, accurate, reliable, inexpensive transducer for highly sophisticated control systems.
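
The acquisition itself was built in LabVIEW; for readers working in Python, the sketch below shows an analogous voltage read with the nidaqmx package and a hypothetical calibration factor converting volts to displacement. The device name, channel, sampling settings and calibration constant are assumptions for illustration, not the paper's configuration.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

VOLTS_TO_MM = 12.5      # hypothetical calibration factor, not the paper's value

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")         # assumed device/channel
    task.timing.cfg_samp_clk_timing(1000, sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=200)      # 1 kHz, 200 samples
    voltages = task.read(number_of_samples_per_channel=200)

displacement_mm = [v * VOLTS_TO_MM for v in voltages]        # volts -> displacement
print(max(displacement_mm) - min(displacement_mm))
```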

Keywords: electromagnetic sensor, accurately, data acquisition, position measurement

Procedia PDF Downloads 285
26603 The Quality of the Presentation Influence the Audience Perceptions

Authors: Gilang Maulana, Dhika Rahma Qomariah, Yasin Fadil

Abstract:

Purpose: This research was meant to measure the magnitude of the influence of presentation quality on the targeted audience’s perception in catching the information presented. Design/Methodology/Approach: This research uses a quantitative research method and primary data. The population in this research is students of the economics faculty of Semarang State University. The sampling technique used in this research is purposive sampling. Data were collected with a questionnaire administered to 30 respondents, and the data analysis uses descriptive analysis. Result: The quality of the presentation has a positive influence on the perception of the audience; this shows that a higher-quality presentation will increase the audience’s perception. Limitation: Respondents were limited to only 30 people.

Keywords: quality of presentation, presentation, audience, perception, semarang state university

Procedia PDF Downloads 392
26602 Collaborative Team Work in Higher Education: A Case Study

Authors: Swapna Bhargavi Gantasala

Abstract:

If teamwork is the key to organizational learning, productivity, and growth, then why do some teams succeed in achieving these while others falter at different stages? Building teams in higher education institutions has been a challenge, and an open-ended constructivist approach was considered on an experimental basis for this study to address this challenge. For this research, teams of students from the MBA program were chosen to study the effect of teamwork on learning, the motivation levels among student team members, and the effect of collaboration in achieving team goals. The teams were built on shared vision and goals, cohesion was ensured, positive induction in the form of faculty mentoring was provided for each participating team, and the results are presented with conclusions and suggestions.

Keywords: teamwork, leadership, motivation and reinforcement, collaboration

Procedia PDF Downloads 377
26601 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights

Authors: Julian Wise

Abstract:

Newcrest Mining is one of the world’s top five gold and rare earth mining organizations by production, reserves and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 listed organization Insight Enterprises, to standardize machine learning solutions which process data from over a hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the utilization of cloud software architecture and edge computing, these technological developments enable standardized machine learning applications to influence the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings on mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired and utilized for predictive modelling is processed through edge computing and collectively stored within a data lake. Being involved in the digital transformation has necessitated standardizing the software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe, for the purposes of improved worker safety and production efficiency through big data applications.

Keywords: mineral technology, big data, machine learning operations, data lake

Procedia PDF Downloads 112
26600 Examining Statistical Monitoring Approach against Traditional Monitoring Techniques in Detecting Data Anomalies during Conduct of Clinical Trials

Authors: Sheikh Omar Sillah

Abstract:

Introduction: Monitoring is an important means of ensuring the smooth implementation and quality of clinical trials. For many years, traditional site monitoring approaches have been critical in detecting data errors but not optimal in identifying fabricated and implanted data, as well as non-random data distributions, that may significantly invalidate study results. The objective of this paper was to provide recommendations, based on best statistical monitoring practices, for detecting data-integrity issues suggestive of fabrication and implantation early in the study conduct to allow implementation of meaningful corrective and preventive actions. Methodology: Electronic bibliographic databases (Medline, Embase, PubMed, Scopus, and Web of Science) were used for the literature search, and both qualitative and quantitative studies were sought. Search results were uploaded into Eppi-Reviewer software, and only publications written in the English language from 2012 onwards were included in the review. Gray literature not considered to present reproducible methods was excluded. Results: A total of 18 peer-reviewed publications were included in the review. The publications demonstrated that traditional site monitoring techniques are not efficient in detecting data anomalies. By specifying project-specific parameters such as laboratory reference range values, visit schedules, etc., with appropriate interactive data monitoring, statistical monitoring can offer early signals of data anomalies to study teams. The review further revealed that statistical monitoring is useful for identifying unusual data patterns that might reveal issues impacting data integrity or potentially impacting study participants’ safety. However, subjective measures may not be good candidates for statistical monitoring. Conclusion: The statistical monitoring approach requires a combination of education, training, and experience sufficient to implement its principles in detecting data anomalies for the statistical aspects of a clinical trial.
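
The pandas sketch below illustrates two simple checks of the kind mentioned above: flagging lab values outside a project-specified reference range and flagging sites whose values show suspiciously low variability, which can be a signal of implanted data. The toy values, column names and thresholds are assumptions for the sketch, not recommendations from the reviewed publications.

```python
import pandas as pd

# Toy central-lab listing; real checks would run on the trial database.
labs = pd.DataFrame({
    "site": ["A", "A", "B", "B", "B", "C"],
    "haemoglobin": [12.5, 14.0, 13.2, 13.2, 13.2, 22.5],
})
low, high = 11.0, 18.0                          # project-specific reference range
labs["out_of_range"] = ~labs["haemoglobin"].between(low, high)

variability = labs.groupby("site")["haemoglobin"].std()
print(labs[labs["out_of_range"]])               # possible data error or fabrication
print(variability[variability < 0.1])           # near-zero spread may signal implanted data
```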

Keywords: statistical monitoring, data anomalies, clinical trials, traditional monitoring

Procedia PDF Downloads 76
26599 Usability Evaluation of a Self-Report Mobile App for COVID-19 Symptoms: Supporting Health Monitoring in the Work Context

Authors: Kevin Montanez, Patricia Garcia

Abstract:

The confinement and restrictions adopted to avoid an exponential spread of COVID-19 have negatively impacted the Peruvian economy. In this context, industries offering essential products could continue operating, but they had to follow safety protocols and implement strategies to ensure employee health. In view of increasing internet access and mobile phone ownership, “Alerta Temprana”, a mobile app, was developed for self-reporting COVID-19 symptoms in the work context. In this study, the usability of the mobile app “Alerta Temprana” was evaluated from the perspective of health monitors and workers. In addition to reporting the metrics related to the usability of the application, the utility of the system is also evaluated from the monitors’ perspective. In this descriptive study, the participants used the mobile app for two months. Afterwards, the System Usability Scale (SUS) questionnaire was answered by the workers and monitors. A usefulness questionnaire with open questions was also used for the monitors. The data related to the use of the application were collected over one month. Furthermore, descriptive statistics and bivariate analysis were used. The workers rated the application as good (70.39), while for the monitors, usability was excellent (83.0). The most important feature for the monitors was the emails generated by the application. The average interaction per user was 30 seconds, and a total of 6172 self-reports were sent. Finally, a statistically significant association was found between the acceptability scale and the work area. The results of this study suggest that Alerta Temprana has the potential to be used for surveillance and health monitoring in any face-to-face context. Participants reported a high degree of ease of use. However, from the perspective of the workers, SUS cannot diagnose usability issues, and we suggest using another standard usability questionnaire to improve “Alerta Temprana” for future use.
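
The SUS values quoted above (70.39 and 83.0) lie on the standard 0-100 SUS scale; the sketch below applies the usual scoring rule (odd items contribute response - 1, even items contribute 5 - response, and the sum is scaled by 2.5) to a made-up set of answers used purely as an example.

```python
def sus_score(responses):                 # responses: ten answers on a 1-5 scale
    """Standard System Usability Scale score on a 0-100 range."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # even index -> odd-numbered item
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))     # -> 80.0
```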

Keywords: public health in informatics, mobile app, usability, self-report

Procedia PDF Downloads 117
26598 An ALM Matrix Completion Algorithm for Recovering Weather Monitoring Data

Authors: Yuqing Chen, Ying Xu, Renfa Li

Abstract:

The development of matrix completion theory provides new approaches for data gathering in Wireless Sensor Networks (WSN). The existing matrix completion algorithms for WSN mainly consider how to reduce the number of samples without considering the real-time performance when recovering the data matrix. In order to guarantee the recovery accuracy and simultaneously reduce the recovery time, we propose a new augmented Lagrange multiplier (ALM) algorithm to recover weather monitoring data. Extensive experiments have been carried out to investigate the performance of the proposed ALM algorithm using different parameter settings, sampling rates and sampling models. In addition, we compare the proposed ALM algorithm with some existing algorithms in the literature. Experimental results show that the ALM algorithm can obtain better overall recovery accuracy with less computing time, which demonstrates that the ALM algorithm is an effective and efficient approach for recovering real-world weather monitoring data in WSN.
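
For intuition, the sketch below runs a soft-impute-style singular value thresholding loop, a close relative of the ALM/SVT family referenced in the keywords, on a synthetic low-rank matrix with half of the entries sampled; the threshold, iteration count and test matrix are illustrative assumptions, not the paper's algorithm or data.

```python
import numpy as np

def complete(observed, mask, tau=5.0, n_iter=200):
    """Recover a low-rank matrix from sampled entries by alternating a data-fit
    step with singular value soft-thresholding."""
    X = np.zeros_like(observed)
    for _ in range(n_iter):
        Y = X + mask * (observed - X)                 # enforce the sampled entries
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
    return X

rng = np.random.default_rng(0)
truth = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))   # rank-2 "sensor" matrix
mask = rng.random(truth.shape) < 0.5                          # 50% sampling
recovered = complete(truth * mask, mask)
print(np.linalg.norm(recovered - truth) / np.linalg.norm(truth))   # relative error
```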

Keywords: wireless sensor network, matrix completion, singular value thresholding, augmented Lagrange multiplier

Procedia PDF Downloads 384
26597 A Novel Method for Face Detection

Authors: H. Abas Nejad, A. R. Teymoori

Abstract:

Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised-learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby bypassing those frames in emotion classification, would save computational power. In this work, we propose a light-weight neutral vs. emotion classification engine, which acts as a preprocessor to the traditional supervised emotion classification approaches. It dynamically learns neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on a textural statistical model. Robustness to dynamic shift of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. The proposed method, as a result, improves ER accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
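
The Local Binary Pattern Histogram keyword points to the kind of textural statistic involved; the sketch below computes a uniform-LBP histogram over a patch around a hypothetical key emotion point with scikit-image, using random pixels as a stand-in for a real face crop. Parameters and the comparison step are illustrative assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, points=8, radius=1):
    """Uniform-LBP histogram of a grayscale patch, usable as a neutral-appearance statistic."""
    lbp = local_binary_pattern(patch, P=points, R=radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

patch = np.random.default_rng(0).integers(0, 256, size=(32, 32)).astype(np.uint8)
neutral_model = lbp_histogram(patch)          # reference statistics for the neutral state
print(neutral_model.round(3))
```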

Keywords: neutral vs. emotion classification, Constrained Local Model, procrustes analysis, Local Binary Pattern Histogram, statistical model

Procedia PDF Downloads 338
26596 Investigation into the Socio-ecological Impact of Migration of Fulani Herders in Anambra State of Nigeria Through a Climate Justice Lens

Authors: Anselm Ego Onyimonyi, Maduako Johnpaul O.

Abstract:

The study was designed to investigate the socio-ecological impact of the migration of Fulani herders in Anambra State of Nigeria through a climate justice lens. Nigeria is one of the world’s most densely populated countries, with a population of over 284 million people, half of whom are considered to be in abject poverty. There is no doubt that livestock production provides sustainable contributions to food security and poverty reduction in the Nigerian economy, but not without some environmental implications, like any other economic activity. Nigeria is recognized as being vulnerable to climate change. Climate change and global warming, if left unchecked, will cause adverse effects on livelihoods in Nigeria, such as livestock production, crop production, fisheries, forestry and post-harvest activities: rainfall regimes and patterns will be altered; floods which devastate farmlands will occur; increases in temperature and humidity, which increase pests and disease, will occur; and other natural disasters like desertification, drought, floods, and ocean and storm surges, which not only damage Nigerians’ livelihoods but also cause harm to life and property, will occur. These and other climatic issues, as they affect Fulani herdsmen, are what this study investigated. In carrying out this research, a survey research design was adopted and a simple sampling technique was used. One local government area (LGA) was selected purposively from each of the four agricultural zones in the state, based on its predominance of Fulani herders. For appropriate sampling, 25 respondents from each of the four agricultural zones in the state were randomly selected, making up the 100 respondents sampled. Primary data were generated using a set of structured 5-point Likert scale questionnaires. The data generated were analyzed using SPSS, and the results are presented using descriptive statistics. From the data analyzed, the study identified unpredictable rainfall (mean = 3.56), forest fire (mean = 4.63), drying water sources (mean = 3.99), dwindling grazing land (mean = 4.43), desertification (mean = 4.44) and fertile land scarcity (mean = 3.42) as major factors predisposing Fulani herders to migrate southward, while rejecting a natural inclination to migrate (mean = 2.38) and migration to cause trouble as factors. On the reasons why Fulani herders are trying to establish a permanent camp in Anambra State, moderate temperature (mean = 3.60), avoiding overgrazing (mean = 4.42), the search for fodder (mean = 4.81) and water (mean = 4.70), the need for a market (mean = 4.28), a favorable environment (mean = 3.99) and access to fertile land (mean = 3.96) were identified. It was concluded that changing climatic variables necessitated the migration of herders from Northern Nigeria to areas in the South where the variables are most favorable to the herders and their animals.

Keywords: socio-ecological, migration, fulani, climate, justice, lens

Procedia PDF Downloads 44
26595 Effect of Highway Construction on Soil Properties and Soil Organic Carbon (Soc) Along Lagos-Badagry Expressway, Lagos, Nigeria

Authors: Fatai Olakunle Ogundele

Abstract:

Road construction is increasingly common in today's world as human development expands and people increasingly rely on cars for daily transportation. The construction of a large network of roads has dramatically altered the landscape and impacted well-being in a number of deleterious ways. In addition, roads can also shift population demographics and be a source of pollution in the environment. Road construction activities normally result in alteration of the soil's physical properties, through soil compaction on the road itself and on adjacent areas, as well as its chemical and biological properties, among other effects. Understanding roadside soil properties that are influenced by road construction activities can serve as a basis for formulating conservation-based management strategies. Therefore, this study examined the effects of road construction on soil properties and soil organic carbon along the Lagos-Badagry Expressway, Lagos, Nigeria. The study adopted purposive sampling techniques, and 40 soil samples were collected at a depth of 0-30 cm from each of the identified road intersections and infrastructures using a soil auger. The soil samples collected were taken to the laboratory for analysis of soil properties and carbon stock using standard methods. Both descriptive and inferential statistical techniques were applied to analyze the data obtained. The results revealed that soil compaction inhibits ecological succession on roadsides, in that increased compaction suppresses plant growth as well as causing changes in soil quality.

Keywords: highway, soil properties, organic carbon, road construction, land degradation

Procedia PDF Downloads 80
26594 Field Production Data Collection, Analysis and Reporting Using Automated System

Authors: Amir AlAmeeri, Mohamed Ibrahim

Abstract:

Various data points, such as pressure, temperature and water cut, are constantly being measured in the production system, and due to the nature of the wells these data points fluctuate constantly, which requires high-frequency monitoring and collection. It is a very difficult task to analyze these parameters manually using spreadsheets and email. An automated system greatly enhances efficiency, reduces errors, removes the need for constant emails which take up disk space, and frees up time for the operator to perform other critical tasks. A large volume of production data is recorded in an oil field, and it can be seen as irrelevant to some, especially when viewed on its own with no context. In order to fully utilize all this information, it needs to be properly collected, verified, stored in one common place and analyzed for surveillance and monitoring purposes. This paper describes how data are recorded by different parties and departments in the field and verified numerous times as they are loaded into a repository. Once loaded, a final check is done before the data are entered into a production monitoring system. Once all this is collected, various calculations are performed to report allocated production. The calculated production data are used to report field production automatically and to monitor well and surface facility performance. Engineers can use them for their studies and analyses to ensure the field is performing as it should, to predict and forecast production, and to monitor any changes in wells that could affect field performance.

Keywords: automation, oil production, Cheleken, exploration and production (E&P), Caspian Sea, allocation, forecast

Procedia PDF Downloads 156
26593 Time Series Regression with Meta-Clusters

Authors: Monika Chuchro

Abstract:

This paper presents a preliminary attempt to apply classification of time series using meta-clusters in order to improve the quality of regression models. In this case, clustering was performed as a method to obtain subgroups of time series data with a normal distribution from wastewater treatment plant inflow data, which are composed of several groups differing by mean value. Two simple algorithms, K-means and EM, were chosen as clustering methods, and the Rand index was used to measure similarity. After simple meta-clustering, a regression model was built for each subgroup, and the final model was a sum of the subgroup models. The quality of the obtained model was compared with a regression model built using the same explanatory variables but with no clustering of the data. Results were compared by the coefficient of determination (R²), the mean absolute percentage error (MAPE) as a measure of prediction accuracy, and comparison on a line chart. Preliminary results allow us to foresee the potential of the presented technique.
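
A toy version of the cluster-then-regress idea is sketched below with scikit-learn: K-means separates the series into subgroups by level, one linear regression is fitted per subgroup, and predictions come from the matching subgroup model. The synthetic inflow-like data and feature choice are assumptions, not the plant's data or the paper's explanatory variables.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Synthetic series mixing two regimes with different mean levels plus a trend.
rng = np.random.default_rng(1)
t = np.arange(300).reshape(-1, 1)
y = np.where(rng.random(300) < 0.5, 50, 120) + 0.1 * t.ravel() + rng.normal(0, 3, 300)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(y.reshape(-1, 1))
models = {k: LinearRegression().fit(t[labels == k], y[labels == k]) for k in set(labels)}

# Predict each point with its own subgroup model and report MAPE.
pred = np.array([models[k].predict(t[i:i + 1])[0] for i, k in enumerate(labels)])
print(round(np.mean(np.abs((y - pred) / y)) * 100, 2), "% MAPE")
```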

Keywords: clustering, data analysis, data mining, predictive models

Procedia PDF Downloads 466
26592 Mathematical Anxiety and Misconceptions in Algebra of Grade Vii Students in General Emilio Aguinaldo National High School

Authors: Nessa-Amie T. Peñaflor, Antonio Cinto

Abstract:

This is a descriptive research study on the level of math anxiety and mathematics misconceptions in algebra. The research is composed of four parts: (1) analysis of the level of anxiety of the respondents; (2) analysis of the common mathematical misconceptions in algebra; (3) the relationship of the socio-demographic profile to math anxiety and mathematical misconceptions; and (4) analysis of the relationship of math anxiety and misconceptions in algebra. Through the demographic profile questionnaire, it was found that most of the respondents were female. The majority were aged 13-15. Most of them had parents who finished secondary education. The biggest portion of Grade Seven students were from families with an annual family income ranging from PhP 100,000 to PhP 299,999. Most of them came from public schools. The Mathematics Anxiety Scale for Secondary and Senior Secondary School Students (MAS) and a set of 10 open-ended items on algebraic expressions and polynomials were also administered to determine the anxiety level and the common misconceptions in algebra. Data analysis revealed that the respondents had high anxiety in mathematics. Likewise, the common mathematical misconceptions of the Grade Seven students were: combining unlike terms; multiplying the base and exponents; regarding the variable x as 0; squaring only the first and second terms in the product of two binomials; wrong meaning attached to brackets; writing the terms next to each other but not simplifying when using the FOIL method; writing the literal coefficient even if the numerical coefficient is 0; and dividing the denominator by the numerator when the numerical coefficient in the numerator is smaller than the numerical coefficient of the denominator. Results of the study show that the socio-demographic characteristics were not related to mathematics anxiety and misconceptions. Furthermore, students from the higher sections had higher anxiety than students in the lower sections. Thus, belonging to a higher or lower section may affect the mathematical misconceptions of the respondents.

Keywords: algebra, grade 7 math, math anxiety, math misconceptions

Procedia PDF Downloads 411
26591 Python Implementation for S1000D Applicability Depended Processing Model - SALERNO

Authors: Theresia El Khoury, Georges Badr, Amir Hajjam El Hassani, Stéphane N’Guyen Van Ky

Abstract:

The widespread adoption of machine learning and artificial intelligence across different domains can be attributed to the digitization of data over several decades, resulting in vast amounts of data, types, and structures. Thus, data processing and preparation turn out to be a crucial stage. However, applying these techniques to S1000D standard-based data poses a challenge due to its complexity and the need to preserve logical information. This paper describes SALERNO, an S1000D AppLicability dEpended pRocessiNg mOdel. This Python-based model analyzes the XML S1000D-based files and converts them into an easier data format that can be used in machine learning techniques while preserving the different logic and relationships in the files. The model parses the files in a given folder, filters them, and extracts the required information to be saved in appropriate data frames and Excel sheets. Its main idea is to group the extracted information by applicability. In addition, it extracts the full text by replacing internal and external references while maintaining the relationships between files, as well as the necessary requirements. The resulting files can then be saved in databases and used in different models. Documents in both English and French were tested, and special characters were decoded. Updates to the technical manuals were taken into consideration as well. The model was tested on different versions of S1000D, and the results demonstrated its ability to effectively handle the applicability, requirements, references, and relationships across all files and on different levels.
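
To give a feel for the parse-extract-group workflow described above, here is a small, purely illustrative Python sketch that walks a folder of XML files, collects element text together with an applicability attribute, and writes one Excel sheet per applicability group. The folder, tag and attribute names are placeholders; real S1000D data modules follow their own schema, which SALERNO handles.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

import pandas as pd

rows = []
for xml_file in Path("data_modules").glob("*.xml"):
    root = ET.parse(xml_file).getroot()
    for elem in root.iter():
        applic = elem.get("applicRefId")      # assumed applicability marker
        if applic and elem.text and elem.text.strip():
            rows.append({"file": xml_file.name, "applicability": applic,
                         "text": elem.text.strip()})

df = pd.DataFrame(rows)
for applic, group in df.groupby("applicability"):
    # One workbook per applicability group (requires an Excel writer such as openpyxl).
    group.to_excel(f"applicability_{applic}.xlsx", index=False)
```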

Keywords: aeronautics, big data, data processing, machine learning, S1000D

Procedia PDF Downloads 157