Search results for: healthcare data
23943 Microarray Data Visualization and Preprocessing Using R and Bioconductor
Authors: Ruchi Yadav, Shivani Pandey, Prachi Srivastava
Abstract:
Microarrays provide a rich source of data on the molecular workings of cells. Each microarray reports on the abundance of tens of thousands of mRNAs. Virtually every human disease is being studied using microarrays in the hope of finding the molecular mechanisms of disease. Bioinformatics analysis plays an important part in processing the information embedded in large-scale expression profiling studies and in laying the foundation for biological interpretation. A basic, yet challenging task in the analysis of microarray gene expression data is the identification of changes in gene expression that are associated with particular biological conditions. Careful statistical design and analysis are essential to improve the efficiency and reliability of microarray experiments throughout the data acquisition and analysis process. One of the most popular platforms for microarray analysis is Bioconductor, an open-source and open-development software project based on the R programming language. This paper describes specific procedures for conducting quality assessment, visualization and preprocessing of Affymetrix GeneChip data, details the different Bioconductor packages used to analyze Affymetrix microarray data, and describes the analysis and the outcome of each plot.
Keywords: microarray analysis, R language, Affymetrix visualization, Bioconductor
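A central preprocessing step in such pipelines is normalization across arrays. As a rough illustration only (the paper itself works in R/Bioconductor, and the intensity matrix below is an invented stand-in), here is a minimal Python sketch of quantile normalization, one of the core steps inside RMA-style preprocessing:

```python
import numpy as np

def quantile_normalize(intensities):
    """Force every array (column) to share the same intensity distribution."""
    # rank of each probe within its own array
    ranks = np.argsort(np.argsort(intensities, axis=0), axis=0)
    # reference distribution: mean across arrays at each rank
    reference = np.sort(intensities, axis=0).mean(axis=1)
    return reference[ranks]

# toy matrix: 5 probes x 3 arrays of raw intensities
X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0],
              [1.0, 3.0, 1.0]])
print(np.log2(quantile_normalize(X) + 1.0))   # log2 scale, as usual in expression analysis
```

After this step, between-array intensity differences due to technical variation are removed, so the quality-assessment plots the paper describes compare like with like.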
Procedia PDF Downloads 480
23942 Social Assistive Robots, Reframing the Human Robotics Interaction Benchmark of Social Success
Authors: Antonio Espingardeiro
Abstract:
It is likely that robots will cross the boundaries of industry into households over the next decades. With demographic challenges worldwide, future ageing populations will require the introduction of assistive technologies capable of providing care, human dignity and quality of life through the ageing process. Robotics technology has high potential for use in social care and healthcare, supporting a wide range of activities such as entertainment, companionship, supervision, and cognitive and physical assistance. However, such close Human Robotics Interactions (HRIs) encompass a rich set of ethical scenarios that need to be addressed before Socially Assistive Robots (SARs) reach the global markets. Such interactions with robots may seem a worthy goal for many technical and financial reasons, but they inevitably require close attention to their ethical dimensions. This article investigates the current HRI benchmark of social success and revises it according to the ethical principles of beneficence, non-maleficence and justice, aligned with the social care ethos. An extension of this benchmark is proposed based on an empirical study of HRIs with elderly groups.
Keywords: HRI, SARs, social success, benchmark, elderly care
Procedia PDF Downloads 523
23941 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution
Authors: Najrullah Khan, Athar Ali Khan
Abstract:
The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone generalized exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS code has been provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data. Emphasis is placed on the modelling of data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete or 'censored' data encountered. Analytic approximation and simulation tools are covered here, but most of the emphasis is on Markov chain Monte Carlo methods, including the independence Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among the various optimization algorithms considered, the trust region method is found to be the best. In this paper, the TPGE model is also used to analyze lifetime data in the Bayesian paradigm. Results are evaluated on the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using several software packages. It is clear from our findings that simulation tools provide better results than those obtained by asymptotic approximation.
Keywords: Bayesian inference, JAGS, Laplace approximation, LaplacesDemon, posterior, R software, simulation
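To make the simulation side concrete, here is a minimal Python sketch of an independence Metropolis sampler. This is an illustration, not the paper's R/JAGS implementation: the log-posterior below is a placeholder, and in the real model it would encode the TPGE likelihood, with each censored observation contributing its log-survival term:

```python
import numpy as np
from scipy import stats

def independence_metropolis(log_post, proposal, theta0, n_iter=5000):
    """Independence Metropolis: candidates come from a fixed proposal distribution."""
    theta = float(theta0)
    lp, lq = log_post(theta), proposal.logpdf(theta)
    samples = []
    for _ in range(n_iter):
        cand = proposal.rvs()
        lp_c, lq_c = log_post(cand), proposal.logpdf(cand)
        # accept with probability min(1, [p(cand)/q(cand)] / [p(theta)/q(theta)])
        if np.log(np.random.rand()) < (lp_c - lq_c) - (lp - lq):
            theta, lp, lq = cand, lp_c, lq_c
        samples.append(theta)
    return np.array(samples)

# placeholder log-posterior standing in for the TPGE survival likelihood + prior
log_post = lambda th: stats.norm.logpdf(th, loc=1.0, scale=0.5)
draws = independence_metropolis(log_post, stats.norm(1.0, 1.0), theta0=0.5)
print(draws.mean(), draws.std())
```

The acceptance ratio corrects for the mismatch between the fixed proposal and the target posterior, which is the defining feature of the independence sampler relative to a random-walk Metropolis.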
Procedia PDF Downloads 535
23940 Machine Learning Application in Shovel Maintenance
Authors: Amir Taghizadeh Vahed, Adithya Thaduri
Abstract:
Shovels are the main components in the mining transportation system. The productivity of a mine depends on the availability of its shovels, given their high capital and operating costs. Unplanned failures or shutdowns of a shovel result in higher repair costs, increased downtime, and higher indirect costs (i.e., loss of production and damage to the company's reputation). In order to mitigate these failures, predictive maintenance based on failure prediction can be a useful approach. Modern mining machinery such as shovels automatically collects huge datasets consisting of reliability and maintenance data. However, the gathered datasets are useless until information and knowledge are extracted from them. Machine learning and data mining, which play a major role in recent studies, have been used for this knowledge discovery process. In this study, data mining and machine learning approaches are implemented to detect not only anomalies but also patterns in a dataset, and subsequently to detect failures.
Keywords: maintenance, machine learning, shovel, condition-based monitoring
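The abstract does not name a specific algorithm, so as one plausible sketch of the anomaly detection step, the following Python snippet fits an Isolation Forest to synthetic condition-monitoring records; the feature names and values are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# stand-in for shovel condition-monitoring features:
# hydraulic pressure (bar), motor temperature (C), cycle time (s) per record
rng = np.random.default_rng(0)
normal = rng.normal(loc=[150.0, 70.0, 35.0], scale=[5.0, 2.0, 3.0], size=(500, 3))
faulty = rng.normal(loc=[180.0, 95.0, 55.0], scale=[5.0, 2.0, 3.0], size=(10, 3))
X = np.vstack([normal, faulty])

model = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = model.predict(X)                 # -1 = anomaly, 1 = normal
print(f"{(flags == -1).sum()} records flagged for inspection")
```

Records flagged this way would be the trigger for a maintenance inspection before an unplanned shutdown occurs.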
Procedia PDF Downloads 220
23939 Standard Languages for Creating a Database to Display Financial Statements on a Web Application
Authors: Vladimir Simovic, Matija Varga, Predrag Oreski
Abstract:
XHTML and XBRL are the standard languages for creating a database for the purpose of displaying financial statements on web applications. Today, XBRL is one of the most popular languages for business reporting. A large number of countries recognize the role of the XBRL language in financial reporting and the benefits that the reporting format provides in the collection, analysis, preparation, publication and exchange of data (information), which is the positive side of this language. Here we present the advantages and opportunities that a company may gain by using the XBRL format for business reporting. This paper also presents XBRL and the other languages used for creating the database, such as XML, XHTML, etc. The role of AJAX technology in the exchange of financial data between the web client and the web server is explained in detail. The basic network layers involved in data exchange via the web are also described.
Keywords: XHTML, XBRL, XML, JavaScript, AJAX technology, data exchange
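As a flavour of what loading XBRL into a database involves, here is a minimal Python sketch that pulls reported facts out of a simplified XBRL instance document (in XBRL, the fact elements are the ones carrying a contextRef attribute). The element names and values below are invented for illustration:

```python
from lxml import etree

SAMPLE = b"""<xbrl xmlns="http://www.xbrl.org/2003/instance">
  <context id="FY2014"><entity><identifier scheme="x">acme</identifier></entity>
    <period><instant>2014-12-31</instant></period></context>
  <Assets contextRef="FY2014" unitRef="EUR" decimals="0">1250000</Assets>
  <Liabilities contextRef="FY2014" unitRef="EUR" decimals="0">700000</Liabilities>
</xbrl>"""

root = etree.fromstring(SAMPLE)
facts = []
for el in root.iter():
    # reported facts are the elements that carry a contextRef attribute
    if el.get("contextRef") is not None and el.text:
        facts.append((etree.QName(el).localname, el.get("contextRef"), el.text.strip()))

print(facts)   # e.g. [('Assets', 'FY2014', '1250000'), ('Liabilities', 'FY2014', '700000')]
```

Triples like these map naturally onto database rows, which a web application can then render as an XHTML financial statement and refresh via AJAX without reloading the page.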
Procedia PDF Downloads 394
23938 Analyze and Visualize Eye-Tracking Data
Authors: Aymen Sekhri, Emmanuel Kwabena Frimpong, Bolaji Mubarak Ayeyemi, Aleksi Hirvonen, Matias Hirvonen, Tedros Tesfay Andemichael
Abstract:
Fixation identification, which involves isolating and identifying fixations and saccades in eye-tracking protocols, is an important aspect of eye-movement data processing that can have a big impact on higher-level analyses. However, fixation identification techniques are frequently discussed informally and rarely compared in any meaningful way. In this work, we implement fixation detection and analysis with two state-of-the-art algorithms. The first is the velocity-threshold fixation algorithm, which identifies fixations based on a velocity threshold value. The second approach is U'n'Eye, a deep neural network algorithm for eye-movement detection. The goal of this project is to analyze and visualize eye-tracking data from a provided eye-gaze dataset. The data were collected in a scenario in which individuals were shown photos and asked whether or not they recognized them. The results of the two fixation detection approaches are contrasted and visualized in this paper.
Keywords: human-computer interaction, eye-tracking, CNN, fixations, saccades
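The velocity-threshold algorithm (often called I-VT) is simple enough to sketch in full. Below is a minimal Python version; the 100 deg/s threshold, the 60 ms minimum duration and the synthetic gaze trace are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def ivt_fixations(x, y, t, v_threshold=100.0, min_duration=0.060):
    """Velocity-threshold (I-VT) fixation identification.

    x, y: gaze position in degrees of visual angle; t: timestamps in seconds.
    Returns a list of (start_time, end_time) pairs, one per fixation.
    """
    v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)   # point-to-point velocity
    below = v < v_threshold                              # True = fixation sample
    fixations, start = [], None
    for i, is_fix in enumerate(below):
        if is_fix and start is None:
            start = i
        elif not is_fix and start is not None:
            if t[i] - t[start] >= min_duration:
                fixations.append((t[start], t[i]))
            start = None
    if start is not None and t[-1] - t[start] >= min_duration:
        fixations.append((t[start], t[-1]))
    return fixations

t = np.arange(0, 1.0, 0.004)                             # 250 Hz sampling
x = np.where(t < 0.5, 1.0, 6.0) + 0.05 * np.random.randn(t.size)   # one saccade at 0.5 s
y = 0.05 * np.random.randn(t.size)
print(ivt_fixations(x, y, t))                            # two fixations around the saccade
```

Everything below the velocity threshold is grouped into fixations and everything above it is treated as saccadic, which is exactly the behaviour the paper contrasts with the learned U'n'Eye detector.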
Procedia PDF Downloads 135
23937 Privacy Rights of Children in the Social Media Sphere: The Benefits and Challenges Under the EU and US Legislative Framework
Authors: Anna Citterbergova
Abstract:
This study explores the safeguards and guarantees for children's personal data protection under the current EU and US legislative frameworks, namely the GDPR (2018) and COPPA (2000). Considering that children are online for the majority of their free time, one cannot overlook the negative side effects that may be associated with online participation, which may put children's wellbeing and their fundamental rights at risk. Whether the current legislative framework governing the responsibilities of internet service providers (ISPs) provides adequate safeguards and guarantees for children's personal data protection has been an evolving debate both in the US and in the EU. From a children's rights perspective, processors of personal data have certain obligations that must meet international human rights principles (e.g. the CRC, ECHR), which require taking into account the best interests of the child. Accordingly, the need to protect children's privacy online remains strong and relevant with the expansion of the number and importance of social media platforms in human life. At the same time, the landscape of the internet is rapidly evolving, and commercial interests are taking a more targeted approach in seeking children's data. Therefore, it is essential to constantly evaluate the evolving, newly adopted market policies of ISPs that may misuse gaps in the current letter of the law. Previous studies in the field have already pointed out that both the GDPR and COPPA may, in theory, be insufficient to protect children's personal data. With a focus on social media platforms, this study uses the doctrinal-descriptive method to identify the mechanisms enshrined in the GDPR and COPPA designed to protect children's personal data. In its second part, the study includes a data-gathering phase involving the national data protection authorities responsible for monitoring and supervising the GDPR in relation to children's personal data protection, who monitor the enforcement of the data protection rules throughout the European Union and contribute to their consistent application. This gathered primary-source data will later be used to outline the benefits of and challenges to children's personal data protection faced by these institutions, and to support an analysis that aims to suggest if and/or how to hold ISPs accountable while striking a fair balance between commercial rights and the right to protection of children's personal data. The preliminary results can be divided into two categories: first, conclusions from the doctrinal-descriptive part of the study; second, specific cases and situations from the practice of national data protection authorities. While concrete conclusions can already be presented for the first part, the second part is currently still in the data-gathering phase. The result of this research is a comprehensive analysis of the safeguards and guarantees for children's personal data protection under the current EU and US legislative frameworks, based on a doctrinal-descriptive approach and original empirical data.
Keywords: personal data of children, personal data protection, GDPR, COPPA, ISPs, social media
Procedia PDF Downloads 96
23936 Modelling the Education Supply Chain with Network Data Envelopment Analysis
Authors: Sourour Ramzi, Claudia Sarrico
Abstract:
Little has been done on network DEA in education, and nobody has attempted to model the whole education supply chain using network DEA. As such, the contribution of the present paper is to propose a model for measuring the efficiency of education supply chains using network DEA. First, we use a general survey of data envelopment analysis (DEA) to establish the emergent themes for research in DEA, and focus on the theme of network DEA. Second, we use a survey of two-stage DEA models and network DEA to write a state of the art on network DEA, particularly as applied to supply chain management. Third, we use a survey of DEA applications to establish the most influential papers on DEA education applications, in order to establish the state of the art on applications of DEA in education in general, and applications of DEA to education using network DEA in particular. Finally, we propose a model for measuring the performance of the education supply chains of different education systems (countries, or states within a country, for instance). We then apply this model to some empirical data.
Keywords: supply chain, education, data envelopment analysis, network DEA
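For readers unfamiliar with DEA, the single-stage building block underlying such models is a small linear program solved once per decision-making unit (DMU). Below is a minimal Python sketch of the standard input-oriented CCR envelopment model with invented data; the paper's contribution is a network (multi-stage) extension of this, which is not shown here:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o. X: inputs (m x n), Y: outputs (s x n)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]            # minimise theta; variables = [theta, lambda]
    A_in = np.c_[-X[:, o], X]              # X @ lam <= theta * x_o
    A_out = np.c_[np.zeros(s), -Y]         # Y @ lam >= y_o
    res = linprog(c,
                  A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                         # efficiency score in (0, 1]

# toy data: two inputs and one output for four education units
X = np.array([[2.0, 4.0, 8.0, 4.0],
              [3.0, 1.0, 1.0, 2.0]])
Y = np.array([[1.0, 2.0, 3.0, 1.0]])
for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {dea_efficiency(X, Y, o):.3f}")
```

A network DEA model links several such stages so that the outputs of one stage (e.g. primary education) become inputs to the next, rather than scoring each stage in isolation.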
Procedia PDF Downloads 368
23935 Elderly Home Care: The Need of the Hour in India
Authors: Varsha Reddy Jayar
Abstract:
Background: Our elderly family members deserve our best care, and it is our responsibility to ensure they are healthy and safe. The population of India is increasing rapidly; people are literally being born in the streets, and taxes and healthcare costs are growing quickly. Indian families are challenged with taking care of everyone: with elderly parents and a demanding job, it can be difficult to find the time to care for them, especially while also working or dealing with emotional difficulties. Living alone in old age exposes older individuals to many health risks, and many seniors find living by themselves and caring for themselves challenging. This study explored the factors that affect whether or not elderly people choose to live in old age homes. Methods: This study was carried out on 123 elderly people living in different old age homes in Karnataka, India. The reasons for their residence at the home were explored using an interview. Results: It was found that the most common reason for living in an old age home is abuse from children and grandchildren; the majority reported daughter-in-law issues in the family, specifically concerning adjustment and mutual understanding. Conclusion: More and more elderly people in India are choosing to stay in old age homes as they get older. The government and voluntary agencies must make arrangements for institutional support.
Keywords: old age home, elderly, ageing, challenges of ageing
Procedia PDF Downloads 281
23934 Knowledge Management (KM) Practices: A Study of KM Adoption among Doctors in Kuwait
Authors: B. Alajmi, L. Marouf, A. S. Chaudhry
Abstract:
In recent years, increasing emphasis has been placed upon issues concerning the evaluation of health care. In this regard, knowledge management has also been considered an important component of the evaluation process. KM facilitates the transfer of existing knowledge and the development of new knowledge among healthcare staff and patients. This research aimed to examine how hospitals in Kuwait employ knowledge management practices, including capturing, sharing, and generating knowledge, and the perceived impact of KM practices on the performance of hospitals in Kuwait. Adopting a quantitative survey method with a sample of 277 doctors, the study found that the adoption of the three major knowledge management practices – knowledge capturing, sharing, and generating – was rated very low in the sampled hospitals in Kuwait. Hospitals paid little attention to the main activities that support the transfer of expertise among doctors. However, as predicted by previous studies, knowledge management practices were perceived to have an impact on hospitals' performance. Through knowledge capturing, sharing, and generating, hospitals could improve the services they provide by documenting best practices and transforming themselves into learning organizations in which lessons learned are captured, stored, and made available for others to learn from.
Keywords: knowledge management, hospitals, knowledge management practices, knowledge management tools, performance
Procedia PDF Downloads 503
23933 Secure Transmission Scheme in Device-to-Device Multicast Communications
Authors: Bangwon Seo
Abstract:
In this paper, we consider multicast device-to-device (D2D) direct communication systems in cellular networks. In multicast D2D communications, nearby mobile devices exchange their data directly without going through a base station, and a D2D transmitter sends its data to the multiple D2D receivers that compose a D2D multicast group. We consider a wiretap channel where an eavesdropper attempts to overhear the transmitted data of the D2D transmitter. In this paper, we propose a secure transmission scheme for D2D multicast communications in cellular networks. In order to prevent the eavesdropper from overhearing the transmitted data of the D2D transmitter, a precoding vector is employed at the D2D transmitter in the proposed scheme. We perform computer simulations to evaluate the performance of the proposed scheme. Through the simulations, we show that the secrecy rate performance can be improved by selecting an appropriate precoding vector.
Keywords: device-to-device communications, wiretap channel, secure transmission, precoding
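The abstract does not specify how the precoding vector is selected, so the following Python sketch shows one textbook option rather than the paper's scheme: projecting a beamforming direction onto the null space of the eavesdropper's channel, so that the eavesdropper's received signal is (ideally) zero. The channel vectors are random stand-ins:

```python
import numpy as np

def null_steering_precoder(h_eve, H_rx):
    """Unit-norm precoding vector orthogonal to the eavesdropper's channel.

    h_eve: eavesdropper channel, shape (N,)
    H_rx:  channels of the K multicast receivers, shape (K, N)
    """
    N = h_eve.size
    # projector onto the orthogonal complement of the eavesdropper channel
    P = np.eye(N) - np.outer(h_eve, h_eve.conj()) / np.vdot(h_eve, h_eve).real
    w = P @ H_rx.conj().sum(axis=0)          # steer toward the multicast group
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
h_eve = rng.normal(size=4) + 1j * rng.normal(size=4)
H_rx = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
w = null_steering_precoder(h_eve, H_rx)
print(abs(h_eve.conj() @ w))                 # ~0: the eavesdropper sees (almost) nothing
print([abs(h.conj() @ w) for h in H_rx])     # nonzero gains at the intended receivers
```

Because the eavesdropper's effective channel gain is driven to zero while the multicast receivers retain nonzero gains, the secrecy rate improves, which is the same qualitative effect the simulations above report.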
Procedia PDF Downloads 291
23932 Online Shopping vs Privacy – Results of an Experimental Study
Authors: Andrzej Poszewiecki
Abstract:
The presented paper contributes to the experimental current of research on privacy. The question of privacy is currently being discussed at length, primarily among lawyers and politicians; however, the matter of privacy has been of interest to economists for some time as well, and how people value their privacy is now of great importance. This article is about how people value their privacy. An experimental method was utilised in the research: a survey was carried out among customers of an online store, and the studied issue was whether their readiness to sell their data (willingness to accept, WTA) differed from their willingness to buy the data back (willingness to pay, WTP). The basic aim of this article is to analyse whether people shopping on the Internet value their privacy differently depending on whether they are protecting or selling it. The results indicate major differences in this respect, which did not always match the original expectations. The obtained results support the hypothesis that people are more willing to sell their data than to repurchase them. However, the hypothesis that the value of the proposed remuneration affects the willingness to sell or buy back personal data (one's privacy) was not supported.
Keywords: privacy, experimental economics, behavioural economics, internet
Procedia PDF Downloads 293
23931 Prediction of Anticancer Potential of Curcumin Nanoparticles by Means of Quasi-QSAR Analysis Using Monte Carlo Method
Authors: Ruchika Goyal, Ashwani Kumar, Sandeep Jain
Abstract:
The experimental data for the anticancer potential of curcumin nanoparticles were compiled from eclectic data sources. The optimal descriptors were examined using the Monte Carlo method-based CORAL SEA software. The statistical quality of the model is as follows: n = 14, R² = 0.6809, Q² = 0.5943, s = 0.175, MAE = 0.114, F = 26 (sub-training set); n = 5, R² = 0.9529, Q² = 0.7982, s = 0.086, MAE = 0.068, F = 61, Av Rm² = 0.7601, ∆R²m = 0.0840, k = 0.9856 and kk = 1.0146 (test set); and n = 5, R² = 0.6075 (validation set). These data can be used to build predictive QSAR models for anticancer activity.
Keywords: anticancer potential, curcumin, model, nanoparticles, optimal descriptors, QSAR
Procedia PDF Downloads 318
23930 Static vs. Stream Mining Trajectories Similarity Measures
Authors: Musaab Riyadh, Norwati Mustapha, Dina Riyadh
Abstract:
Trajectory similarity can be defined as the cost of transforming one trajectory into another based on a certain similarity method. It is the core of numerous mining tasks such as clustering, classification, and indexing. Various approaches have been suggested to measure similarity based on the geometric and dynamic properties of trajectories, the overlap between trajectory segments, and the confined area between entire trajectories. In this article, these approaches are evaluated in terms of computational cost, memory usage, accuracy, and the amount of data needed in advance, in order to determine their suitability for stream mining applications. The evaluation results show that stream mining applications favour similarity methods that have low computational cost and memory usage, require only a single scan over the data, and are free of mathematical complexity, owing to the high speed at which stream data are generated.
Keywords: global distance measure, local distance measure, semantic trajectory, spatial dimension, stream data mining
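As a concrete example of a classic "static" measure that needs both trajectories in full (and is therefore a poor fit for streams), here is a minimal Python sketch of dynamic time warping; the abstract does not name DTW specifically, so this is an illustration of the category rather than of the paper's own method:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping cost between two trajectories (arrays of 2-D points)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # point-to-point distance
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[n, m]

t1 = [(0, 0), (1, 1), (2, 2), (3, 3)]
t2 = [(0, 0), (1, 2), (2, 3), (4, 4)]
print(dtw_distance(t1, t2))
```

The O(nm) table and the requirement to see both trajectories completely before computing anything are exactly the costs that the evaluation above penalises for stream mining.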
Procedia PDF Downloads 396
23929 Chronic Hepatitis C Virus Screening: The Role, Strategies and Challenging of Primary Healthcare Faced to Augment and Identify Asymptomatic Infected Patients
Authors: Tarek K. Jalouta, Jolietta R. Holliman, Kathryn R. Burke, Kathleen M. Bewley-Thomas
Abstract:
Background: Chronic hepatitis C virus (HCV) infection is one of the leading causes of liver cirrhosis and hepatocellular carcinoma. In the United States, HCV screening awareness, treatment, and linkage to care are steadily improving. However, millions of people remain asymptomatically infected and undiagnosed. Through this community mission, we sought to identify the best and newest strategies for finding those infected people, educating them, linking them to care and curing them. Methods: We identified patients without a prior HCV screening in our electronic medical record (EMR) across our different hospital locations (south suburban Chicago and northern, western and central Indiana). We provided education to all primary care, gastroenterology and infectious diseases providers and clinic staff to increase awareness of HCV screening. Health-related quality of life, chronic clinical complications, and demographic data were collected for each patient. All HCV antibody-reactive and HCV RNA-positive results were identified and statistically analyzed. Results: From July 2016 to July 2018, we screened 35,720 individuals of the birth cohort in our different Franciscan health medical centers. Of the screened population, 986 (2.7%) individuals were HCV antibody-reactive. Of those, 319 (1%) patients were HCV RNA-positive, and 264 patients were counseled and linked to providers. Thirty-four patients initiated anti-HCV therapy with successful treatment. Conclusions: Our HCV screening augmentation project is considered the largest screening program in the Midwest. Augmenting the HCV screening process by creating a Best Practice Alert (BPA) in the EMR (Epic Systems) and point-of-care testing could be helpful. Although continued work is required, our team is working on increasing screening by adding an HCV test to CBC panels in emergency department settings and by phoning all birth-cohort individuals through a robo-calling system, aiming to reach 75,000 individuals by 2019. However, a better linkage-to-care and referral monitoring system for all HCV RNA-positive patients is still needed, and access to therapy, especially for uninsured patients, remains challenging.
Keywords: chronic hepatitis C, chronic hepatitis C treatment, chronic hepatitis C screening, chronic hepatitis C prevention, liver cancer
Procedia PDF Downloads 125
23928 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data
Authors: Sara Bonetti
Abstract:
The use of quantitative data to support policies and justify investments has become imperative in many fields, including education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy who use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed at analyzing these experiences in accessing, using, and producing quantitative data. The study utilized semi-structured interviews to capture the differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department, included to allow for a broader, systems-level perspective. The study followed Núñez's multilevel model of intersectionality. The key to Núñez's model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of the institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of different influences at the individual, organizational, and system levels that might intersect and affect ECE professionals' experiences with quantitative data. At the individual level, an important element identified was the participants' educational background, as it was possible to observe a relationship between that and their positionality, both with respect to working with data and with respect to their power within an organization and at the policy table. For example, those with a background in child development were aware of how their formal education had failed to train them in the skills necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants' position within the organization, their organization's position with respect to others, and their degree of access to quantitative data. This, in turn, affected their sense of empowerment and agency in dealing with data, such as in shaping what data are collected and made available. These differences were reflected in the interviewees' perceptions of and expectations for the ECE workforce. For example, one interviewee pointed out that many ECE professionals happen to use data out of the necessity of the moment; this lack of intentionality is a cause of, and at the same time translates into, missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students' training in working with data. In conclusion, Núñez's model helped in understanding the different elements that affect ECE professionals' experiences with quantitative data. In particular, it is clear that these professionals are not being provided with the necessary support and that data literacy skills are not being intentionally developed for them, despite what is asked of them and their work.
Keywords: data literacy, early childhood professionals, intersectionality, quantitative data
Procedia PDF Downloads 253
23927 Data and Spatial Analysis for Economy and Education of 28 E.U. Member-States for 2014
Authors: Alexiou Dimitra, Fragkaki Maria
Abstract:
The objective of the paper is the study of geographic, economic and educational variables and their contribution to determining the position of each member state among the EU-28 countries, based on the values of seven variables as given by Eurostat. The data analysis methods of Multiple Factorial Correspondence Analysis (MFCA), Principal Component Analysis and Factor Analysis have been used. The cross-tabulation tables consist of the values of the seven variables for the 28 countries for 2014. The data are processed using the CHIC Analysis V 1.1 software package. The results of this program, using MFCA and Ascending Hierarchical Classification, are given in numerical and graphical form. For comparison, the Factor procedure of the statistical package IBM SPSS 20 has been applied to the same data. The numerical and graphical results, presented in tables and graphs, demonstrate the agreement between the two methods. The most important result is the study of the relations between the 28 countries and the position of each country in the groups or clouds formed according to the values of the corresponding variables.
Keywords: Multiple Factorial Correspondence Analysis, Principal Component Analysis, Factor Analysis, E.U.-28 countries, Statistical package IBM SPSS 20, CHIC Analysis V 1.1 Software, Eurostat.eu Statistics
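To illustrate the kind of computation involved, here is a minimal Python sketch of the Principal Component Analysis step on a standardised 28 x 7 table; the matrix below is a random placeholder for the Eurostat data, and MFCA and the hierarchical classification performed in CHIC Analysis are not reproduced:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(28, 7))            # placeholder for the 28 x 7 Eurostat table

Z = StandardScaler().fit_transform(X)   # put all seven variables on a common scale
pca = PCA(n_components=2).fit(Z)
coords = pca.transform(Z)               # each country's position on the first two axes
print(pca.explained_variance_ratio_)    # share of variance captured per axis
```

Plotting `coords` gives exactly the kind of two-dimensional map of countries, with groups or "clouds" of similar member states, that the paper reports in graphical form.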
Procedia PDF Downloads 512
23926 Evaluating the Factors That Influence Caries Reduction During Pregnancy
Authors: Mimoza Canga, Irene Malagnino, Vergjini Mulo, Alketa Qafmolla, Vito Antonio Malagnino
Abstract:
Background: Dental caries is the most common dental disease, and pregnancy represents a special process of physical, hormonal and metabolic changes in pregnant women, which is accompanied by an imbalance in the oral cavity. Objective: The objective of this study is to evaluate caries reduction in relation to dental visits, scaling of the teeth, fluoridated water, toothbrushing and the use of fluoride toothpaste before and during pregnancy. Materials and methods: This study was conducted from March 2018 to September 2021; the age range of the participants was 18-41 years. The sample under observation comprised 84 pregnant women. The questionnaire included the demographic characteristics of the sample, such as age and education level. Based on the women's education level, our analysis found that 25.9% of the pregnant women had completed primary education, 35.2% had secondary education, and 38.9% had higher education. The descriptive and analytical research is designed as a longitudinal study. Statistical analysis was performed using IBM SPSS Statistics 23.0. The significance level (α) was set at 0.05, and P-values and analysis of variance (ANOVA) were used to analyze the data. Results: In the present study, a strong relationship was observed between dental visits and scaling of the teeth (P < .0001). The number of teeth with caries before pregnancy and fluoridated water gave a P-value of 0.002; comparing the same factor with the number of teeth with dental caries during pregnancy, the P-value was 0.0001. The number of teeth with caries before pregnancy was strongly related to carbohydrate consumption (P = 0.05). According to the present research, the number of teeth with dental caries before pregnancy in relation to toothbrushing gave a P-value < 0.05. Furthermore, it was established that using fluoride toothpaste did not affect the number of teeth with caries before pregnancy (P = .314). Conclusion: According to the results of this study performed in Albania, periodic dental visits, scaling of the teeth, fluoridated water and toothbrushing influenced caries reduction before and during pregnancy, whereas the use of fluoride toothpaste had no effect on dental caries reduction over the same period. The recommendations are as follows: maintain oral hygiene, use fluoridated water and brush the teeth regularly. Healthcare providers should inform pregnant women about the importance of oral health and the implementation of measures to manage dental caries.
Keywords: brushing of the teeth, dental visits, dental scaling, fluoridated water, pregnancy
Procedia PDF Downloads 194
23925 Development of a Spatial Data Model for a Renal Registry in the Nigerian Health Sector
Authors: Adekunle Kolawole Ojo, Idowu Peter Adebayo, Egwuche Sylvester O.
Abstract:
Chronic Kidney Disease (CKD) is a significant cause of morbidity and mortality across developed and developing nations and is associated with increased health risks. There are no existing electronic means of capturing and monitoring CKD in Nigeria. This work is aimed at developing a spatial data model that can be used to implement the renal registries required for tracking and monitoring the spatial distribution of renal diseases by public health officers and patients. In this study, we have developed a spatial data model for a functional renal registry.
Keywords: renal registry, health informatics, chronic kidney disease, interface
Procedia PDF Downloads 214
23924 Interactive Virtual Patient Simulation Enhances Pharmacology Education and Clinical Practice
Authors: Lyndsee Baumann-Birkbeck, Sohil A. Khan, Shailendra Anoopkumar-Dukie, Gary D. Grant
Abstract:
Technology-enhanced education tools are being rapidly integrated into health programs globally. These tools provide an interactive platform for students and can be used to deliver topics in various modes, including games and simulations. Simulations are of particular interest to healthcare education, where they are employed to enhance clinical knowledge and help to bridge the gap between theory and practice. Simulations will often assess competencies for practical tasks, yet limited research examines the effects of simulation on student perceptions of their learning. The aim of this study was to determine the effects of an interactive virtual patient simulation for pharmacology education and clinical practice on student knowledge, skills and confidence. Ethics approval for the study was obtained from the Griffith University Research Ethics Committee (PHM/11/14/HREC). The simulation was intended to replicate the pharmacy environment and patient interaction. The content was designed to enhance knowledge of proton-pump inhibitor pharmacology, its role in therapeutics and safe supply to patients. The tool was deployed into a third-year clinical pharmacology and therapeutics course. A number of core practice areas were examined, including the competency domains of questioning, counselling, referral and product provision. Baseline measures of student self-reported knowledge, skills and confidence were taken prior to the simulation using a specifically designed questionnaire. A more extensive questionnaire was deployed following the virtual patient simulation, which also included measures of student engagement with the activity. A quiz assessing students' factual and conceptual knowledge of proton-pump inhibitor pharmacology and related counselling information was also included in both questionnaires. Sixty-one students (response rate >95%) from two cohorts (2014 and 2015) participated in the study. Chi-square analyses were performed, and data were analysed using Fisher's exact test. The results demonstrate that student knowledge, skills and confidence within the competency domains of questioning, counselling, referral and product provision improved following the implementation of the virtual patient simulation. Statistically significant (p<0.05) improvement occurred in ten of the possible twelve self-reported measurement areas. The greatest magnitude of improvement occurred in the area of counselling (student confidence, p<0.0001). Student confidence in all domains (questioning, counselling, referral and product provision) showed a marked increase. Student performance in the quiz also improved, demonstrating a 10% overall improvement in pharmacology knowledge and clinical practice following the simulation. Overall, 85% of students reported the simulation to be engaging, and 93% of students felt the virtual patient simulation enhanced learning. The data suggest that the interactive virtual patient simulation developed for clinical pharmacology and therapeutics education enhanced students' knowledge, skills and confidence with respect to the competency domains of questioning, counselling, referral and product provision. These self-reported measures appear to translate to learning outcomes, as demonstrated by the improved student performance in the quiz assessment item. Future research on education using virtual simulation should seek to incorporate modern quantitative measures of student learning and engagement, such as eye tracking.
Keywords: clinical simulation, education, pharmacology, simulation, virtual learning
Procedia PDF Downloads 338
23923 Embodied Empowerment: A Design Framework for Augmenting Human Agency in Assistive Technologies
Authors: Melina Kopke, Jelle Van Dijk
Abstract:
Persons with cognitive disabilities, such as Autism Spectrum Disorder (ASD), are often dependent on some form of professional support. Recent transformations in Dutch healthcare have spurred institutions to apply new, empowering methods and tools to enable their clients to cope (more) independently in daily life. Assistive Technologies (ATs) seem promising as empowering tools. While ATs can, functionally speaking, help people to perform certain activities without human assistance, we hold that, from a design-theoretical perspective, such technologies often fail to empower in a deeper sense. Most technologies serve either to prescribe or to monitor users' actions, which in some sense objectifies them rather than strengthening their agency. This paper proposes that theories of embodied interaction could help formulate a design vision in which interactive assistive devices augment, rather than replace, human agency and thereby add to a person's empowerment in daily life settings. It aims to close the gap between empowerment theory and the opportunities provided by assistive technologies by showing how embodiment and empowerment theory can be applied in practice in the design of new, interactive assistive devices. Taking a research-through-design approach, we conducted a case study of designing to support independently living people with ASD in structuring daily activities. In three iterations, we interlaced design action, active involvement and prototype evaluations with future end-users and healthcare professionals, and theoretical reflection. Our co-design sessions revealed that handling daily activities is a multidimensional issue: not having the ability to self-manage one's daily life has immense consequences for one's self-image and also has major effects on the relationship with professional caregivers. Over the course of the project, relevant theoretical principles of both embodiment and empowerment theory, together with user insights, informed our design decisions. This resulted in a system of wireless light units that users can program as reminders for tasks, but also use to record and reflect on their actions. The iterative process helped to gradually refine and reframe our growing understanding of what it concretely means for a technology to empower a person in daily life. Drawing on the case study insights, we propose a set of concrete design principles that together form what we call the embodied empowerment design framework. The framework includes four main principles: enabling 'reflection-in-action'; making information 'publicly available' in order to enable co-reflection and social coupling; enabling the implementation of shared reflections in an 'endurable external feedback loop' embedded in the person's familiar 'lifeworld'; and nudging situated actions with self-created action affordances. In essence, the framework aims for the self-development of a suitable routine, or 'situated practice', by building on a growing shared insight into what works for the person. The framework, we propose, may serve as a starting point for AT designers to create truly empowering interactive products. In a set of follow-up projects involving the participation of persons with ASD, intellectual disabilities, dementia and acquired brain injury, the framework will be applied, evaluated and further refined.
Keywords: assistive technology, design, embodiment, empowerment
Procedia PDF Downloads 278
23922 Environmental Evaluation of Two Kinds of Drug Production (Syrup and Pomade Forms) Using Life Cycle Assessment Methodology
Authors: H. Aksas, S. Boughrara, K. Louhab
Abstract:
The goal of this study was to use life cycle assessment (LCA) methodology to assess the environmental impact of pharmaceutical products (four kinds of syrup and three kinds of pomade) produced by a leading manufacturer in Algeria, the SAIDAL Company. The impacts generated were evaluated using SimaPro 7.1 with the CML92 method for the syrup forms and EPD 2007 for the pomade forms. All evaluated impacts were compared with one another, and the compound contributing to each impact was determined in each case. The data needed to conduct the Life Cycle Inventory (LCI) came from this factory: theoretical data were collected from the responsible technicians and engineers of the company, while practical data resulted from assays of the pharmaceutical liquids obtained at the university laboratories. These input data represent the different raw materials, imported from European and Asian countries, necessary to formulate the drugs. The energy used as input comes from Algerian resources. The outputs are the results of analyses of the factory's effluents in their different forms (liquid, solid and gas). Together, these input and output data represent the ecobalance.
Keywords: pharmaceutical product, drug residues, LCA methodology, environmental impacts
Procedia PDF Downloads 246
23921 Multi Cloud Storage Systems for Resource Constrained Mobile Devices: Comparison and Analysis
Authors: Rajeev Kumar Bedi, Jaswinder Singh, Sunil Kumar Gupta
Abstract:
Cloud storage is a model of online data storage where data are stored in a virtualized pool of servers hosted by third-party cloud storage providers (CSPs) and located in different geographical locations. Cloud storage has revolutionized the way users access their data online: anywhere, anytime and using any device, such as a tablet, mobile phone or laptop. However, issues such as vendor lock-in, frequent service outages, data loss and performance problems exist in single-cloud storage systems, and the concept of multi-cloud storage was introduced to evade them. Many multi-cloud storage systems for mobile devices exist in the market. In this article, we compare four of them - Otixo, Unclouded, Cloud Fuze, and Clouds - and evaluate their performance in terms of CPU usage, battery consumption, time consumption and data usage on three devices (a Nexus 5, a Moto G and a Nexus 7 tablet) over a Wi-Fi network. Finally, open research challenges and future scope are discussed.
Keywords: cloud storage, multi cloud storage, vendor lock-in, mobile devices, mobile cloud computing
Procedia PDF Downloads 408
23920 The Relationship between Emotional Intelligence and Leadership Performance
Authors: Omar Al Ali
Abstract:
The current study aimed to explore the relationships between emotional intelligence, cognitive ability, and leader performance. Data were collected from 260 senior managers from the UAE. The results showed a significant relationship between emotional intelligence and leadership performance, as measured by the annual internal evaluations of each participant (r = .42, p < .01). Regression analysis revealed that both emotional intelligence (beta = .31, p < .01) and cognitive ability (beta = .29, p < .01) predicted leadership competencies, and together explained 26% of the variance. The data suggest that EI and cognitive ability are significantly correlated with leadership performance. The implications of the present findings for human resource development theory and practice are discussed in depth.
Keywords: emotional intelligence, cognitive ability, leadership, performance
Procedia PDF Downloads 477
23919 Comparison of Irradiance Decomposition and Energy Production Methods in a Solar Photovoltaic System
Authors: Tisciane Perpetuo e Oliveira, Dante Inga Narvaez, Marcelo Gradella Villalva
Abstract:
Installations of solar photovoltaic systems have increased considerably in the last decade. Monitoring of meteorological data (solar irradiance, air temperature, wind velocity, etc.) is therefore important for predicting the solar energy production potential of a given geographical area. In this sense, the present work compares two computational tools capable of estimating the energy generation of a photovoltaic system through correlation analyses of solar radiation data: the PVsyst software and an algorithm based on the PVlib package implemented in MATLAB. To achieve this objective, it was necessary to obtain solar radiation data (both measured and from a solarimetric database), to analyze the decomposition of global solar irradiance into its direct normal and diffuse horizontal components, and to analyze the modeling of the devices of a photovoltaic system (solar modules and inverters) for energy production calculations. Simulated results were compared with experimental data in order to evaluate the performance of the studied methods. Errors in the estimation of energy production were less than 30% for the MATLAB algorithm and less than 20% for the PVsyst software.
Keywords: energy production, meteorological data, irradiance decomposition, solar photovoltaic system
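As a flavour of the decomposition step, the sketch below uses pvlib-python (the Python sibling of the MATLAB PVlib toolbox the paper used) with the Erbs model, one common way to split global horizontal irradiance (GHI) into direct normal (DNI) and diffuse horizontal (DHI) components. The site coordinates and GHI values are invented placeholders:

```python
import pandas as pd
import pvlib

# hypothetical site and measured GHI series; replace with real monitored data
times = pd.date_range("2023-06-01 08:00", periods=8, freq="1h", tz="America/Sao_Paulo")
site = pvlib.location.Location(latitude=-22.82, longitude=-47.07)   # roughly Campinas

solpos = site.get_solarposition(times)
ghi = pd.Series([150, 320, 480, 610, 680, 650, 540, 380], index=times, dtype=float)

# Erbs model: split GHI into direct-normal (dni) and diffuse-horizontal (dhi)
components = pvlib.irradiance.erbs(ghi, solpos["zenith"], times)
print(components[["dni", "dhi"]])
```

The decomposed components are then fed to the module and inverter models to compute energy production, which is the stage where the 20-30% estimation errors reported above arise.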
Procedia PDF Downloads 142
23918 Social Media Data Analysis for Personality Modelling and Learning Styles Prediction Using Educational Data Mining
Authors: Srushti Patil, Preethi Baligar, Gopalkrishna Joshi, Gururaj N. Bhadri
Abstract:
In designing learning environments, instructional strategies can be tailored to suit the learning style of an individual to ensure effective learning. In this study, the information shared on social media such as Facebook is used to predict the learning style of a learner. Previous research has shown that Facebook data can be used to predict user personality, as users with a particular personality exhibit an inherent pattern in their digital footprint on Facebook. The proposed work aims to correlate users' personalities, predicted from Facebook data, with their learning styles, predicted through questionnaires. For millennial learners, Facebook has become a primary means of information sharing and interaction with peers; thus, it can serve as a rich bed for research and direct the design of learning environments. The authors conducted this study in an undergraduate freshman engineering course. Data from 320 freshman Facebook users were collected; the same users also participated in the learning style and personality prediction survey. Kolb's learning style questionnaire and the Big Five personality inventory were adopted for the survey. The users agreed to participate in this research and signed individual consent forms. A specific page was created on Facebook to collect user data such as personal details, status updates, comments, demographic characteristics and egocentric network parameters. These data were captured by an application written in Python. The data captured from Facebook were subjected to a text analysis process using the Linguistic Inquiry and Word Count dictionary. An analysis of the data collected from the questionnaires reveals each student's personality and learning style. The results obtained from the analysis of the Facebook, learning style and personality data were then fed into an automatic classifier trained using data mining techniques such as rule-based classifiers and decision trees, which predicts personality and learning styles by analysing common patterns. Rule-based classifiers applied for text analysis help to categorize Facebook data as positive, negative or neutral. In total, two models were trained: one to predict personality from Facebook data, and another to predict learning styles from the personalities. The results show that the classifier model has high accuracy, which makes the proposed method a reliable one for predicting user personality and learning styles.
Keywords: educational data mining, Facebook, learning styles, personality traits
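A minimal sketch of the shape of the second model, predicting a learning-style label from personality-trait scores with a decision tree, is given below. The arrays are random placeholders (so the printed accuracy is scaffolding only, not the paper's result), and the feature/label encodings are assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((320, 5))            # Big Five trait scores per student (placeholder)
y = rng.integers(0, 4, size=320)    # Kolb learning-style label, 0..3 (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.2f}")
```

With real trait-label pairs in place of the random arrays, the learned tree also exposes which personality dimensions drive each learning-style prediction, which is useful when tailoring instructional strategies.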
Procedia PDF Downloads 231
23917 Talent-to-Vec: Using Network Graphs to Validate Models with Data Sparsity
Authors: Shaan Khosla, Jon Krohn
Abstract:
In a recruiting context, machine learning models are valuable for recommendations: to predict the best candidates for a vacancy, to match the best vacancies for a candidate, and to compile a set of similar candidates for any given candidate. While such models are useful to build, validating their accuracy in a recommendation context is difficult due to data sparsity. In this report, we use network graph data to generate useful representations for candidates and vacancies. We use candidates and vacancies as network nodes and designate a bidirectional link between them when the candidate has interviewed for the vacancy. After applying node2vec, the embeddings are used to construct a ranked validation dataset, which helps validate new recommender systems.
Keywords: AI, machine learning, NLP, recruiting
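A minimal sketch of this pipeline is shown below using the open-source `node2vec` package (one of several node2vec implementations; the abstract does not say which the authors used). The graph, node names and hyperparameters are invented for illustration:

```python
import networkx as nx
from node2vec import Node2Vec   # pip install node2vec

# bipartite interview graph: an edge means the candidate interviewed for the vacancy
G = nx.Graph()
G.add_edges_from([("cand_1", "vac_a"), ("cand_1", "vac_b"),
                  ("cand_2", "vac_a"), ("cand_3", "vac_c")])

# biased random walks + skip-gram, producing one embedding per node
n2v = Node2Vec(G, dimensions=32, walk_length=10, num_walks=50, workers=1)
model = n2v.fit(window=5, min_count=1)     # returns a gensim Word2Vec model

# nearest neighbours in embedding space: a ranked list usable for validation
print(model.wv.most_similar("cand_1"))
```

The ranked neighbour lists produced this way are exactly the kind of ordered validation set the report describes for checking new recommender systems despite sparse interaction data.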
Procedia PDF Downloads 84
23916 A Web Service-Based Framework for Mining E-Learning Data
Authors: Felermino D. M. A. Ali, S. C. Ng
Abstract:
E-learning is an evolutionary form of distance learning that has improved over time as new technologies have emerged. Today, efforts are still being made to embrace E-learning systems with emerging technologies in order to make them better. Among these advancements, Educational Data Mining (EDM) is gaining huge and increasing popularity due to its wide application in improving the teaching-learning process in online practices. However, even though EDM promises to bring many benefits to the educational industry in general and to E-learning environments in particular, its principal drawback is the lack of easy-to-use tools: current EDM tools usually require users to have additional technical expertise to perform EDM tasks effectively. In response to these limitations, this study designs and implements an EDM application framework that aims at automating and simplifying the development of EDM in E-learning environments. The framework introduces a Service-Oriented Architecture (SOA) that hides the complexity of the technical details and enables users to perform EDM in an automated fashion. The framework was designed based on the principles of abstraction, extensibility, and interoperability, and its implementation is made up of three major modules. The first module provides an abstraction for data gathering, achieved by extending the Moodle LMS (Learning Management System) source code. The second module provides data mining methods and techniques as services, achieved by converting the Weka API into a set of web services. The third module acts as an intermediary between the first two: it contains a user-friendly interface that allows users to dynamically locate data provider services and run knowledge discovery tasks on the data mining services. An experiment was conducted to evaluate the overhead of the proposed framework through a combination of simulation and implementation. The experiments showed that the overhead introduced by the SOA mechanism is relatively small; therefore, it was concluded that a service-oriented architecture can be effectively used to facilitate educational data mining in E-learning environments.
Keywords: educational data mining, e-learning, distributed data mining, Moodle, service-oriented architecture, Weka
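The practical upshot of the SOA design is that a client needs nothing more than an HTTP call to run a mining task. The endpoint URL and payload fields below are hypothetical (the paper does not publish its service interface); only the Weka class name and option are real Weka identifiers:

```python
import requests

# hypothetical request to the framework's intermediary module: point a Weka
# classifier (module 2) at a Moodle data provider (module 1)
payload = {
    "data_provider": "moodle://course-42/activity-logs",   # hypothetical URI scheme
    "algorithm": "weka.classifiers.trees.J48",             # real Weka J48 classifier
    "options": {"confidenceFactor": 0.25},                 # real J48 pruning option
}
resp = requests.post("http://edm.example.org/api/run-task", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())   # e.g. the induced model and its evaluation metrics
```

Hiding Weka behind such a call is what lets instructors without data mining expertise run knowledge discovery tasks, at the cost of the (measured, small) SOA overhead.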
Procedia PDF Downloads 236
23915 AI-Enabled Smart Contracts for Reliable Traceability in Industry 4.0
Authors: Harris Niavis, Dimitra Politaki
Abstract:
The manufacturing industry collects vast amounts of data for monitoring product quality thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. On the other hand, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion into the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism combines artificial intelligence models to effectively detect unusual values, such as outliers and extreme deviations, in the incoming data. Specifically, autoregressive integrated moving average (ARIMA), long short-term memory (LSTM) and dense autoencoder models, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques which ensure that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network's actors. The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs that expose the functionality to end users. The results of this work demonstrate that such a system can increase the quality of the end products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify quality records through the entire production chain and take advantage of the multitude of monitoring records in their databases.
Keywords: blockchain, data quality, Industry 4.0, product quality
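To illustrate one of the named detectors, here is a minimal Keras sketch of an LSTM autoencoder trained to reconstruct windows of sensor readings, with reconstruction error used to flag anomalies before ingestion. The window shape, training data and three-sigma threshold are assumptions for illustration; the ARIMA, dense-autoencoder and GAN detectors the scheme also uses are not shown:

```python
import numpy as np
from tensorflow import keras

timesteps, n_features = 30, 4        # e.g. 30-step windows of 4 sensor channels

model = keras.Sequential([
    keras.layers.Input(shape=(timesteps, n_features)),
    keras.layers.LSTM(32),                                    # encode the window
    keras.layers.RepeatVector(timesteps),                     # expand back in time
    keras.layers.LSTM(32, return_sequences=True),             # decode
    keras.layers.TimeDistributed(keras.layers.Dense(n_features)),
])
model.compile(optimizer="adam", loss="mse")

windows = np.random.rand(256, timesteps, n_features).astype("float32")  # placeholder
model.fit(windows, windows, epochs=3, verbose=0)   # learn to reconstruct normal data

errors = np.mean((model.predict(windows, verbose=0) - windows) ** 2, axis=(1, 2))
threshold = errors.mean() + 3 * errors.std()       # flag poorly reconstructed windows
print(f"{(errors > threshold).sum()} suspect windows held back from the ledger")
```

Only windows whose reconstruction error falls below the threshold would be passed on to the smart contracts, which is how the scheme keeps corrupted sensor values off the ledger.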
Procedia PDF Downloads 189
23914 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction
Authors: Qais M. Yousef, Yasmeen A. Alshaer
Abstract:
Over the last few years, the amount of data available worldwide has increased rapidly. This coincided with the emergence of concepts such as big data and the Internet of Things, which have helped make data available all over the world. However, managing this massive amount of data remains a challenge due to the large variety of its types and distribution. Therefore, locating a required file, particularly on the first attempt, is not an easy task, owing to the similarity of names among different files distributed on the web. Consequently, the accuracy and speed of search have been negatively affected. This work presents a method that uses electroencephalography (EEG) signals to locate files based on their contents. Building on the concept of natural brain-wave processing, this work analyses the brain-wave signals of different people, extracting their most appropriate features using a multi-objective metaheuristic algorithm, and then classifying them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and to retrieve them as the first choice for the user.
Keywords: artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization
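A minimal sketch of such an EEG classification pipeline is shown below. For simplicity it uses fixed band-power features in place of the paper's multi-objective metaheuristic feature selection, and the recordings, labels, sampling rate and band limits are all placeholders:

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

def bandpower_features(eeg, fs=256):
    """Mean spectral power per channel in the theta, alpha and beta bands."""
    freqs, psd = welch(eeg, fs=fs, axis=-1)        # eeg: (n_channels, n_samples)
    bands = [(4, 8), (8, 13), (13, 30)]            # theta, alpha, beta (Hz)
    return np.hstack([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
                      for lo, hi in bands])

rng = np.random.default_rng(0)
trials = rng.normal(size=(40, 8, 1024))            # 40 recordings, 8 channels each
labels = rng.integers(0, 2, size=40)               # which of two files was intended
X = np.array([bandpower_features(t) for t in trials])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, labels)
print(clf.score(X, labels))                        # training accuracy on the toy data
```

In the paper's approach, the hand-picked frequency bands above would instead be replaced by the feature subset chosen by the multi-objective metaheuristic, trading off classification accuracy against feature-vector size.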
Procedia PDF Downloads 175