Search results for: cardio data analysis
38575 Personal Data Protection: A Legal Framework for Health Law in Turkey
Authors: Veli Durmus, Mert Uydaci
Abstract:
Every patient who needs medical treatment must share health-related personal data with healthcare providers. Personal health data therefore plays an important role in making health decisions and identifying health threats during every encounter between a patient and caregivers. In other words, health data can be defined as private and sensitive information that is protected by various health laws and regulations. In many cases, the data are an outcome of the confidential relationship between patients and their healthcare providers. Globally, almost all nations have their own laws, regulations or rules to protect personal data. There is a variety of instruments that allow authorities to use health data or to set barriers to data sharing across international borders. For instance, Directive 95/46/EC of the European Union (EU) (also known as the EU Data Protection Directive) establishes harmonized rules within European borders. In addition, the General Data Protection Regulation (GDPR) will set further common principles in 2018. Because of Turkey's close policy relationship with the EU, this study provides not only information on regulations and directives but also on how they play a role during the legislative process in Turkey. Even if the decision is controversial, the Board has recently stated that private and public healthcare institutions are responsible for the patient call systems used by doctors to call people waiting outside a consultation room, so as to prevent unlawful processing of, and unlawful access to, personal data during treatment. In Turkey, the vast majority of private and public health organizations provide a service that uses personal data (i.e., the patient's name and ID number) to call the patient. According to the Board's decision, hospitals and other healthcare institutions are obliged to take all necessary administrative precautions and provide technical support to protect patient privacy. However, this requirement is not performed effectively and efficiently in most health services. For this reason, it is important to draw a legal framework for personal health data by stating the main purpose of this regulation and how to deal with complicated issues concerning personal health data in Turkey. The research is descriptive, covering data protection law for healthcare settings in Turkey. Primary as well as secondary data have been used for the study. The primary data include the information collected under current national and international regulations or laws. Secondary data include publications, books, journals, and empirical legal studies. Consequently, privacy and data protection regimes in health law show that there are obligations, principles and procedures which shall be binding upon natural or legal persons who process health-related personal data. A comparative approach shows that there are significant differences among some EU member states due to different legal competencies, policies, and cultural factors. This study provides theoretical and practitioner implications by highlighting the need to illustrate the relationship between privacy and confidentiality in personal data protection in health law. Furthermore, this paper would help to define the legal framework for health law case studies on data protection and privacy.
Keywords: data protection, personal data, privacy, healthcare, health law
Procedia PDF Downloads 224
38574 Formalizing a Procedure for Generating Uncertain Resource Availability Assumptions Based on Real Time Logistic Data Capturing with Auto-ID Systems for Reactive Scheduling
Authors: Lars Laußat, Manfred Helmus, Kamil Szczesny, Markus König
Abstract:
As one result of the project “Reactive Construction Project Scheduling using Real Time Construction Logistic Data and Simulation”, a procedure for using data about uncertain resource availability assumptions in reactive scheduling processes has been developed. Prediction data about resource availability are generated in a formalized way using real-time monitoring data, e.g., from auto-ID systems on the construction site and in the supply chains. The paper focuses on the formalization of the procedure for monitoring construction logistic processes, for the detection of disturbances, and for the generation of new and uncertain scheduling assumptions for the reactive resource-constrained simulation procedure that is, and will be, further described in other papers.
Keywords: auto-ID, construction logistic, fuzzy, monitoring, RFID, scheduling
Procedia PDF Downloads 514
38573 Public Policy as a Component of Entrepreneurship Ecosystems: Challenges of Implementation
Authors: José Batista de Souza Neto
Abstract:
This research project has as its theme the implementation of public policies to support micro and small enterprises (MSEs). The research problem defined was how public policies for access to markets that drive the entrepreneurial ecosystem of MSEs are implemented. The general objective of this research is to understand the process of implementing a public policy to support the entrepreneurial ecosystem of MSEs by the Support Service for Micro and Small Enterprises of the State of São Paulo (SEBRAESP). Public policies are constituent elements of entrepreneurship ecosystems that influence the creation and development of ventures arising from the action of the entrepreneur. At the end of the research, the following specific objectives are expected to be achieved: (a) understand how the entrepreneurial ecosystem of MSEs is constituted; (b) understand how market-access public policies for MSEs are designed and implemented; (c) understand SEBRAE's role in the entrepreneurship ecosystem; and (d) offer an action plan and monitor its execution up to March 2023. The field research will be conducted based on action research, with a qualitative and longitudinal approach to the data. Data collection will be based on narratives produced since 2019, when the decision was made to implement the Comércio Brasil program, a public policy focused on generating market access for 4,280 MSEs yearly. The narratives will be analyzed by the methods of document analysis and narrative analysis. It is expected that the research will consolidate the relevance of public policies for market access for MSEs and demonstrate the role of SEBRAE as a protagonist in the implementation of these public policies in the entrepreneurship ecosystem. As action research is recognized as an intervention method, it is expected that this research will corroborate its role in supporting management processes.
Keywords: entrepreneurship, entrepreneurship ecosystem, public policies, SEBRAE, action research
Procedia PDF Downloads 187
38572 Frequency of Alloimmunization in Sickle Cell Disease Patients in Africa: A Systematic Review with Meta-analysis
Authors: Theresa Ukamaka Nwagha, Angela Ogechukwu Ugwu, Martins Nweke
Abstract:
Background and Objectives: Blood transfusion is an effective and proven treatment for some severe complications of sickle cell disease. Recurrent transfusions have put patients with sickle cell disease at risk of developing antibodies against the various antigens they are exposed to. This study aims to investigate the frequency of red blood cell alloimmunization in patients with sickle cell disease in Africa. Materials and Methods: This is a systematic review of peer-reviewed literature published in English. The review was conducted consistent with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist. Data sources for the review included MEDLINE, PubMed, CINAHL, and Academic Search Complete. Included in this review are articles that reported the frequency/prevalence of red blood cell alloimmunization in sickle cell disease patients in Africa. Eligible studies were subjected to independent full-text screening and data extraction. Risk-of-bias assessment was conducted with the aid of the mixed methods appraisal tool. We employed a random-effects model of meta-analysis to estimate the pooled prevalence. We computed Cochran's Q statistic, I², and the prediction interval to quantify heterogeneity in effect size. Results: The prevalence estimates ranged from 2.6% to 29%. The pooled prevalence was estimated to be 10.4% (CI 7.7-13.8%; PI = 3.0-34.0%), with significant heterogeneity (I² = 84.62; PI = 2.0-32.0%) and publication bias (Egger's t-test = 1.744, p = 0.0965). Conclusion: The frequency of red cell alloantibodies varies considerably in Africa. The alloantibodies appeared most frequently in this order: Rhesus, Kell, Lewis, Duffy, MNS, and Lutheran.
Keywords: frequency, red blood cell, alloimmunization, sickle cell disease, Africa
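As a concrete illustration of the pooling step described above, the following Python sketch applies a DerSimonian-Laird random-effects model to logit-transformed prevalences and reports Cochran's Q and I². The per-study event counts are hypothetical placeholders, not the review's extracted data.

```python
import numpy as np
from scipy.special import logit, expit

# Hypothetical study data: number alloimmunized (events) and sample size per study.
events = np.array([12, 30, 8, 25, 40, 15])
n = np.array([150, 300, 90, 200, 450, 120])

p = events / n
y = logit(p)                            # logit-transformed prevalence per study
v = 1.0 / events + 1.0 / (n - events)   # approximate variance of logit(p)

# Fixed-effect weights and Cochran's Q
w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100       # heterogeneity as a percentage

# DerSimonian-Laird between-study variance and random-effects pooling
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)
w_star = 1.0 / (v + tau2)
y_re = np.sum(w_star * y) / np.sum(w_star)
se_re = np.sqrt(1.0 / np.sum(w_star))

pooled = expit(y_re)
ci = expit([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
print(f"Pooled prevalence: {pooled:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f}), "
      f"Q = {Q:.2f}, I^2 = {I2:.1f}%, tau^2 = {tau2:.3f}")
```

Working on the logit scale keeps the pooled estimate and its confidence limits inside the 0-1 range before back-transforming.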
Procedia PDF Downloads 100
38571 Enhancing Academic Achievement of University Student through Stress Management Training: A Study from Southern Punjab, Pakistan
Authors: Rizwana Amin, Afshan Afroze Bhatti
Abstract:
The study used a quasi-experimental pre-post test design with two groups. Data were collected from 127 students of Bahaudin Zakariya University, Multan, through non-probability sampling. The groups were given a pre-test using the Perceived Stress Scale, and information about academic achievement was collected by self-report. After screening, 27 participants did not meet the criterion. The remaining 100 participants were divided into two groups (experimental and control). Further, 4 students of the experimental group declined the intervention. The remaining 46 students of the experimental group were then divided into three subgroups (16, 15 and 15) for training. The experimental subgroups were given the stress management training, each subgroup attending one 3-hour training session separately, while the control group was only given the pre-post assessment. The data were analyzed using analysis of covariance (ANCOVA) and t-tests. Results of the study indicate that stress management training leads to increased emotional intelligence and academic achievement of students.
Keywords: stress, stress management, academic achievement, students
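A minimal sketch of the ANCOVA step described above, using simulated pre- and post-test scores; the group labels, score ranges, and the size of the training effect are assumptions for illustration only, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Hypothetical pre/post perceived-stress scores for a trained and a control group
n = 50
pre = rng.normal(22, 4, 2 * n)
group = np.repeat(["training", "control"], n)
effect = np.where(group == "training", -4.0, 0.0)   # assumed training effect
post = 5 + 0.8 * pre + effect + rng.normal(0, 3, 2 * n)

df = pd.DataFrame({"pre": pre, "post": post, "group": group})

# ANCOVA: post-test scores by group, adjusting for the pre-test covariate
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(anova_lm(model, typ=2))
print(model.params)   # the C(group) coefficient is the adjusted group difference
```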
Procedia PDF Downloads 340
38570 Authorship Patterns in the Literature on English and Literary Studies of Bayero University, Kano: 2007 – 2017
Authors: Murtala Musa
Abstract:
The purpose of this study was to examine the authorship patterns of Master's degree dissertations submitted to the Department of English and Literary Studies at Bayero University, Kano, between 2007 and 2017, with the goal of determining the pattern and degree of collaboration between authors. The study was conducted utilizing quantitative research methods and an ex post facto research design. A total of 176 copies of Master's dissertations were examined, yielding a total of 12,061 citations. The data collection instrument was a citation analysis checklist created by the researcher. Subramanyam's Law of Collaboration of Authors was used to determine the degree of collaboration among authors, and descriptive statistics such as tables, frequency distributions, percentages, and charts were used to present the data. Single-authored publications, followed by double-authored publications, accounted for the majority of the contributions.
Keywords: authorship patterns, bibliometrics, English and Literary studies, citation analysis
Procedia PDF Downloads 76
38569 Artificial Intelligence in Management Simulators
Authors: Nuno Biga
Abstract:
Artificial Intelligence (AI) has the potential to transform management into several impactful ways. It allows machines to interpret information to find patterns in big data and learn from context analysis, optimize operations, make predictions sensitive to each specific situation and support data-driven decision making. The introduction of an 'artificial brain' in organization also enables learning through complex information and data provided by those who train it, namely its users. The "Assisted-BIGAMES" version of the Accident & Emergency (A&E) simulator introduces the concept of a "Virtual Assistant" (VA) sensitive to context, that provides users useful suggestions to pursue the following operations such as: a) to relocate workstations in order to shorten travelled distances and minimize the stress of those involved; b) to identify in real time existing bottleneck(s) in the operations system so that it is possible to quickly act upon them; c) to identify resources that should be polyvalent so that the system can be more efficient; d) to identify in which specific processes it may be advantageous to establish partnership with other teams; and e) to assess possible solutions based on the suggested KPIs allowing action monitoring to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed and demonstrated through a pilot project (BIG-AI). Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of different alternative management strategic actions. The pilot project developed incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on the compilation of data that allows establishing causal relationships between decisions taken and results obtained. The systemic analysis and interpretation of data is powered in the Assisted-BIGAMES through a computer application called "BIGAMES Virtual Assistant" (VA) that players can use during the Game. Each participant in the VA permanently asks himself about the decisions he should make during the game to win the competition. To this end, the role of the VA of each team consists in guiding the players to be more effective in their decision making, through presenting recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the managers of each team, as they gain a better understanding of the issues along time, reflect on good practice and rely on their own experience, capability and knowledge to support their own decisions. Preliminary results show that the introduction of the VA provides a faster learning of the decision-making process. The facilitator designated as “Serious Game Controller” (SGC) is responsible for supporting the players with further analysis. The recommended actions by the SGC may differ or be similar to the ones previously provided by the VA, ensuring a higher degree of robustness in decision-making. 
Additionally, all the information should be jointly analyzed and assessed by each player, who is expected to add “Emotional Intelligence”, an essential component absent from the machine learning process.
Keywords: artificial intelligence, gamification, key performance indicators, machine learning, management simulators, serious games, virtual assistant
Procedia PDF Downloads 105
38568 Elementary Education Outcome Efficiency in Indian States
Authors: Jyotsna Rosario, K. R. Shanmugam
Abstract:
Since elementary education is a merit good, considerable public resources are allocated to universalise it. However, elementary education outcomes vary across the Indian states. Evidence indicates that while some states lag in elementary education outcomes primarily due to a lack of resources and poor schooling infrastructure, others lag despite resource abundance and well-developed schooling infrastructure. Addressing the issue of efficiency, the study employs stochastic frontier analysis on panel data of 27 Indian states from 2012-13 to 2017-18 to estimate the technical efficiency of state governments in generating enrolment. The mean efficiency of the states was estimated to be 58%. Punjab, Meghalaya, and West Bengal were found to be the most efficient states, whereas Jammu and Kashmir, Nagaland, Madhya Pradesh, and Odisha were among the most inefficient states. This study emphasizes the efficient utilisation of public resources and helps in the identification of best practices.
Keywords: technical efficiency, public expenditure, elementary education outcome, stochastic frontier analysis
Procedia PDF Downloads 186
38567 The Effectiveness of Psychodrama on Anxiety Enhancement in Adolescent Boys
Authors: Saeed Dehnavi, Marjan Pooee
Abstract:
Background - Psychodrama, as a form of art therapy, helps people to enact and use role-plays for a specific problem, rather than just talking about it, in an effort to review the problem, gain feedback from group members, find appropriate solutions, and practice them in their lives. This paper evaluated the effectiveness of psychodrama on enhancing the anxiety of young adolescent boys. Methodology - This is a quasi-experimental research study, using a pre-post testing plan with a control group. From four secondary schools in Kermanshah, Iran, 210 adolescent boys (aged 13 and 14 years) were asked to complete Coopersmith's self-esteem scale. Given their low self-esteem scores (less than the cut-off of 23), 20 individuals were selected and randomly placed into two groups, control and experimental. The experimental group participated in a twelve-session psychodrama therapy plan over 6 weeks, while the control group received no intervention. Data analysis was carried out by analysis of covariance (ANCOVA). Results - The results of the ANCOVA showed an increase in the post-test scores for anxiety, and this increase was statistically significant. Conclusion - The findings indicated the effectiveness of psychodrama on anxiety enhancement in young boys. During the psychodrama sessions, the adolescents learned to take the initiative, communicate with others in an excited state, and improve their anxiety through positive and constructive experiences.
Keywords: anxiety, art therapy, psychodrama, young adolescents
Procedia PDF Downloads 544
38566 Data Integrity: Challenges in Health Information Systems in South Africa
Authors: T. Thulare, M. Herselman, A. Botha
Abstract:
Poor system use, including inappropriate design of health information systems, causes difficulties in communication with patients and increases the time spent by healthcare professionals in recording the necessary health information for medical records. System features like pop-up reminders, complex menus, and poor user interfaces can make medical records far more time consuming than paper cards, as well as affect decision-making processes. Although errors associated with health information, and their real and likely effects on the quality of care and patient safety, have been documented for many years, more research is needed to measure the occurrence of these errors and determine their causes in order to implement solutions. Therefore, the purpose of this paper is to identify data integrity challenges in hospital information systems through a scoping review and, based on the results, provide recommendations on how to manage these. Only 34 papers were found to be suitable out of the 297 publications initially identified in the field. The results indicated that human and computerized systems are the most common challenges associated with data integrity, and that factors such as policy, environment, health workforce, and lack of awareness contribute to these challenges; however, if measures are taken, the data integrity challenges can be managed.
Keywords: data integrity, data integrity challenges, hospital information systems, South Africa
Procedia PDF Downloads 181
38565 An Improved Heat Transfer Prediction Model for Film Condensation inside a Tube with Interphacial Shear Effect
Authors: V. G. Rifert, V. V. Gorin, V. V. Sereda, V. V. Treputnev
Abstract:
An analysis of heat transfer design methods for condensation inside plain tubes under the influence of shear stress is presented in this paper. An existing discrepancy of more than 30-50% between predicted heat transfer coefficients and experimental data has been noted. An analysis of existing theoretical and semi-empirical methods of heat transfer prediction is given. The influence on the heat transfer design of a precise definition of the boundaries of the phase flow (especially important for condensation inside horizontal tubes), of the shear stress (friction coefficient), and of the heat flux is shown. The substantiation of the boundary conditions for the values of the parameters influencing the accuracy of the rated relationships is given. More correct relationships for heat transfer prediction, which showed good convergence with experiments made by different authors, are substantiated in this work.
Keywords: film condensation, heat transfer, plain tube, shear stress
Procedia PDF Downloads 245
38564 Detection of Keypoint in Press-Fit Curve Based on Convolutional Neural Network
Authors: Shoujia Fang, Guoqing Ding, Xin Chen
Abstract:
The quality of press-fit assembly is closely related to the reliability and safety of the product. This paper proposes a keypoint detection method based on a convolutional neural network to improve the accuracy of keypoint detection in press-fit curves. It provides an auxiliary basis for judging the quality of press-fit assembly. The press-fit curve is a curve of press-fit force versus displacement. Both the force data and the displacement data are time-series data; therefore, a one-dimensional convolutional neural network is used to process the press-fit curve. After the obtained press-fit data are filtered, a multi-layer one-dimensional convolutional neural network is used to perform automatic learning of press-fit curve features, which are then sent to a multi-layer perceptron that finally outputs the keypoint of the curve. We used data from press-fit assembly equipment in the actual production process to train the CNN model, and we used different data from the same equipment to evaluate the detection performance. Compared with existing research results, the detection performance was significantly improved. This method can provide a reliable basis for the judgment of press-fit quality.
Keywords: keypoint detection, curve feature, convolutional neural network, press-fit assembly
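A minimal sketch of the kind of model described above: a multi-layer one-dimensional CNN feeding a multi-layer perceptron that regresses the keypoint position of a press-fit curve. The channel sizes, kernel widths, and the fixed curve length of 256 samples are assumptions for illustration, not the authors' architecture, and the training data here are random placeholders.

```python
import torch
import torch.nn as nn

class PressFitKeypointNet(nn.Module):
    """1-D CNN feature extractor followed by an MLP that regresses the
    keypoint position (normalized displacement index) of a press-fit curve."""
    def __init__(self, curve_len: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (curve_len // 8), 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),     # keypoint location in [0, 1]
        )

    def forward(self, x):   # x: (batch, 2 channels: force & displacement, curve_len)
        return self.head(self.features(x))

# Minimal training loop on random stand-in data
model = PressFitKeypointNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

curves = torch.randn(32, 2, 256)   # placeholder filtered press-fit curves
keypoints = torch.rand(32, 1)      # placeholder normalized keypoint labels
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(curves), keypoints)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.4f}")
```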
Procedia PDF Downloads 230
38563 Municipal Solid Waste (MSW) Composition and Generation in Nablus City, Palestine
Authors: Issam A. Al-Khatib
Abstract:
In order to achieve a significant reduction in the amount of waste flowing into landfills, it is important to first understand the composition of the municipal solid waste generated. Hence, a detailed analysis of municipal solid waste composition has been conducted in Nablus city. The aim is to provide data on the potential recyclable fractions in the actual waste stream, with a focus on the plastic fraction. To this end, waste-sorting campaigns were conducted on mixed waste containers from five districts in Nablus city. The districts vary in terms of infrastructure and average income. The target is to obtain representative data about the potential quantity and quality of household plastic waste. The study measured the composition of municipal solid waste collected/transported by Nablus municipality. The analysis was done by categorizing the samples into eight primary fractions (organic and food waste, paper and cardboard, glass, metals, textiles, plastic, a fine fraction (<10 mm), and others). The study results reveal that the MSW stream in Nablus city has a significant bio- and organic waste fraction (about 68% of the total MSW). The second largest fraction is paper and cardboard (13.6%), followed by plastics (10.1%), textiles (3.2%), glass (1.9%), metals (1.8%), a fine fraction (0.5%), and other waste (0.3%). After this complete and detailed characterization of the MSW collected in Nablus, and taking into account the content of biodegradable organic matter, composting could be a solution for the city, as the surrounding areas of Nablus have agricultural activities and could be a natural outlet for the compost product. Different waste management options could be practiced in the future in addition to composting, such as energy recovery and recycling, which would make it possible to reduce the substantial amounts of waste disposed of at landfills.
Keywords: developing countries, composition, management, recyclable, waste
Procedia PDF Downloads 90
38562 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra
Authors: Bitewulign Mekonnen
Abstract:
Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression, partial least squares regression, extra tree regression, random forest regression, extreme gradient boosting, and principal component analysis-neural network, are employed to predict glucose concentration. The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²)> 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. 
Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network
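A minimal sketch of the regression comparison described above, run on synthetic spectra rather than measured NIR data; the data generator, hyperparameters, and the subset of models shown (SVR, PLS, extra trees, random forest) are assumptions for illustration only, not the paper's measurements or settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic stand-in for NIR spectra: 600 samples x 200 wavelengths, with the
# glucose concentration (20 mg/dl steps) driving a broad absorption band.
conc = rng.choice(np.arange(20, 420, 20), size=600).astype(float)
wavelengths = np.linspace(0, 1, 200)
band = np.exp(-((wavelengths - 0.6) ** 2) / 0.01)   # Gaussian absorption band
X = conc[:, None] * band[None, :] / 400 + rng.normal(0, 0.02, (600, 200))

X_tr, X_te, y_tr, y_te = train_test_split(X, conc, test_size=0.3, random_state=0)

models = {
    "SVR": SVR(C=100, gamma="scale"),
    "PLS": PLSRegression(n_components=5),
    "ExtraTrees": ExtraTreesRegressor(n_estimators=200, random_state=0),
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = np.ravel(model.predict(X_te))   # PLS returns a 2-D array
    print(f"{name:>12}: R^2 = {r2_score(y_te, pred):.3f}")
```

Repeating the random split several times, as the paper does, gives a more stable picture of each model's generalization ability.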
Procedia PDF Downloads 95
38561 The Role of Demographics and Service Quality in the Adoption and Diffusion of E-Government Services: A Study in India
Authors: Sayantan Khanra, Rojers P. Joseph
Abstract:
Background and Significance: This study is aimed at analyzing the role of demographic and service quality variables in the adoption and diffusion of e-government services among the users in India. The study proposes to examine the users' perception about e-Government services and investigate the key variables that are most salient to the Indian populace. Description of the Basic Methodologies: The methodology to be adopted in this study is Hierarchical Regression Analysis, which will help in exploring the impact of the demographic variables and the quality dimensions on the willingness to use e-government services in two steps. First, the impact of demographic variables on the willingness to use e-government services is to be examined. In the second step, quality dimensions would be used as inputs to the model for explaining variance in excess of prior contribution by the demographic variables. Present Status: Our study is in the data collection stage in collaboration with a highly reliable, authentic and adequate source of user data. Assuming that the population of the study comprises all the Internet users in India, a massive sample size of more than 10,000 random respondents is being approached. Data is being collected using an online survey questionnaire. A pilot survey has already been carried out to refine the questionnaire with inputs from an expert in management information systems and a small group of users of e-government services in India. The first three questions in the survey pertain to the Internet usage pattern of a respondent and probe whether the person has used e-government services. If the respondent confirms that he/she has used e-government services, then an aggregate of 15 indicators are used to measure the quality dimensions under consideration and the willingness of the respondent to use e-government services, on a five-point Likert scale. If the respondent reports that he/she has not used e-government services, then a few optional questions are asked to understand the reason(s) behind the same. Last four questions in the survey are dedicated to collect data related to the demographic variables. An indication of the Major Findings: Based on the extensive literature review carried out to develop several propositions; a research model is prescribed to start with. A major outcome expected at the completion of the study is the development of a research model that would help to understand the relationship involving the demographic variables and service quality dimensions, and the willingness to adopt e-government services, particularly in an emerging economy like India. Concluding Statement: Governments of emerging economies and other relevant agencies can use the findings from the study in designing, updating, and promoting e-government services to enhance public participation, which in turn, would help to improve efficiency, convenience, engagement, and transparency in implementing these services.Keywords: adoption and diffusion of e-government services, demographic variables, hierarchical regression analysis, service quality dimensions
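A minimal sketch of the two-step hierarchical regression described above, using simulated survey responses; the variable names, quality dimensions, and effect sizes are assumptions for illustration, not the study's data or findings.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500

# Simulated survey data: demographics plus three illustrative quality dimensions
df = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "female": rng.integers(0, 2, n),
    "urban": rng.integers(0, 2, n),
    "reliability": rng.normal(3.5, 0.8, n),
    "ease_of_use": rng.normal(3.2, 0.9, n),
    "responsiveness": rng.normal(3.0, 1.0, n),
})
df["willingness"] = (0.01 * df.age + 0.1 * df.urban + 0.4 * df.reliability
                     + 0.3 * df.ease_of_use + 0.2 * df.responsiveness
                     + rng.normal(0, 0.5, n))

# Step 1: demographics only
step1 = smf.ols("willingness ~ age + female + urban", data=df).fit()
# Step 2: add the service-quality dimensions
step2 = smf.ols("willingness ~ age + female + urban + reliability"
                " + ease_of_use + responsiveness", data=df).fit()

delta_r2 = step2.rsquared - step1.rsquared
f_change = step2.compare_f_test(step1)   # F-test for the R^2 increment
print(f"Step 1 R^2 = {step1.rsquared:.3f}, Step 2 R^2 = {step2.rsquared:.3f}, "
      f"Delta R^2 = {delta_r2:.3f}, F-change p = {f_change[1]:.4g}")
```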
Procedia PDF Downloads 268
38560 Iron Deficiency and Iron Deficiency Anaemia/Anaemia as a Diagnostic Indicator for Coeliac Disease: A Systematic Review With Meta-Analysis
Authors: Sahar Shams
Abstract:
Coeliac disease (CD) is a widely reported disease, particularly in countries with predominantly Caucasian populations. It presents with many signs and symptoms, including iron deficiency (ID) and iron deficiency anaemia/anaemia (IDA/A). The exact association between ID, IDA/A and CD, and how accurate these signs are in diagnosing CD, is not fully known. This systematic review was conducted to investigate the accuracy of both ID and IDA/A as diagnostic indicators for CD and whether they warrant point-of-care testing. A systematic review was performed looking at studies published in MEDLINE, Embase, the Cochrane Library, and Web of Science. The QUADAS-2 tool was used to assess the risk of bias in each study. An ROC curve and forest plots were generated as part of the meta-analysis after data extraction. 16 studies were identified in total, of which 13 were IDA/A studies and 3 were ID studies. The prevalence of CD, regardless of diagnostic indicator, was assumed to be 1%. The QUADAS-2 tool indicated that most of the studies had a high risk of bias. The PPV for CD was higher in those with ID than in those with IDA/A. The meta-analysis showed that the overall odds of having CD are 5 times higher in individuals with ID and IDA/A. The ROC curve showed that, while there is definitely an association between both diagnostic indicators and CD, the association is not a particularly strong one due to great heterogeneity between studies. Whilst an association between IDA/A and ID and coeliac disease was evident, the results were not deemed significant enough to prompt coeliac disease testing in those with IDA/A and ID.
Keywords: anemia, iron deficiency anemia, coeliac disease, point of care testing
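The positive-predictive-value comparison reported above follows directly from Bayes' rule once the 1% prevalence is assumed. The sketch below uses placeholder sensitivity and specificity figures, not the review's pooled estimates.

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value from Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prevalence = 0.01  # assumed population prevalence of coeliac disease (1%)
# Placeholder accuracy figures for the two candidate indicators
for name, sens, spec in [("ID", 0.40, 0.95), ("IDA/A", 0.35, 0.90)]:
    print(f"PPV of {name} for CD at 1% prevalence: {ppv(sens, spec, prevalence):.3%}")
```

At such a low prevalence, even a fairly specific indicator yields a modest PPV, which is why the review weighs the PPV against the case for point-of-care testing.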
Procedia PDF Downloads 131
38559 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principals in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences; two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder
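A minimal sketch of the matching-pursuit step that maps a signal frame onto a sparse vector of dictionary weights; here the dictionary consists of random unit-norm atoms rather than the learned Gabor/autoencoder features used in the paper, and the test frame is synthetic.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit: repeatedly pick the dictionary atom most
    correlated with the residual and subtract its contribution."""
    residual = signal.astype(float).copy()
    weights = np.zeros(dictionary.shape[0])
    for _ in range(n_atoms):
        corr = dictionary @ residual          # correlation with every (unit-norm) atom
        k = np.argmax(np.abs(corr))
        weights[k] += corr[k]                 # amplitude weight of the chosen atom
        residual -= corr[k] * dictionary[k]
    return weights, residual

rng = np.random.default_rng(0)
dict_size, frame_len = 256, 128
D = rng.normal(size=(dict_size, frame_len))
D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms

frame = 0.8 * D[10] - 0.5 * D[200] + 0.05 * rng.normal(size=frame_len)
w, res = matching_pursuit(frame, D, n_atoms=5)
print("selected atom indices:", np.nonzero(w)[0])
print(f"residual energy: {np.linalg.norm(res)**2:.4f} "
      f"(original {np.linalg.norm(frame)**2:.4f})")
```

The nonzero weights and their indices are the sparse "weight space" representation that the classifier then compares across sentences.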
Procedia PDF Downloads 290
38558 Self-Supervised Learning for Hate-Speech Identification
Authors: Shrabani Ghosh
Abstract:
Automatic offensive language detection in social media has become a stirring task in today's NLP. Manual offensive language detection is tedious and laborious work, for which automatic methods based on machine learning are the only alternatives. Previous works have performed sentiment analysis over social media in different ways, such as in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised way has also been explored in NLP, where the source domain and the target domain are different. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers like BERT and RoBERTa are fine-tuned to perform text classification after further unsupervised pre-training on masked language modeling (MLM) tasks. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting extremism in varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain, and Gab data is used as the target domain. The performance of domain adaptation also depends on the cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity. Certainly, in-domain distances are small, and between-domain distances are expected to be large. Findings from previous work show that a pretrained masked language model (MLM) fine-tuned with a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, and the out-of-domain performance of the hate classifier on Gab data goes down to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is more applicable when labeled data are scarce. A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits the extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. This framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier and also with optimized outcomes obtained from different optimization techniques.
Keywords: attention learning, language model, offensive language detection, self-supervised learning
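A minimal sketch of one of the domain-similarity measures listed above, Maximum Mean Discrepancy with an RBF kernel, computed on random stand-in embeddings; in practice the inputs would be sentence embeddings of Twitter and Gab posts from the fine-tuned transformer, and the shapes shown here are assumptions.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=None):
    """Squared Maximum Mean Discrepancy between two samples using an RBF kernel."""
    if gamma is None:
        # median heuristic for the kernel bandwidth
        Z = np.vstack([X, Y])
        d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
        gamma = 1.0 / np.median(d2[d2 > 0])
    def k(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
twitter_emb = rng.normal(0.0, 1.0, (200, 64))   # stand-in source-domain embeddings
gab_emb = rng.normal(0.5, 1.0, (200, 64))       # stand-in target-domain embeddings

print(f"MMD^2 Twitter vs Gab    : {rbf_mmd2(twitter_emb, gab_emb):.4f}")
print(f"MMD^2 Twitter vs Twitter: "
      f"{rbf_mmd2(twitter_emb[:100], twitter_emb[100:]):.4f}")
```

As the abstract notes, the in-domain value should come out small and the cross-domain value larger, which is what makes MMD usable as a gauge of how hard the Twitter-to-Gab transfer will be.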
Procedia PDF Downloads 106
38557 Comparative Study of Dynamic Effect on Analysis Approaches for Circular Tanks Using Codal Provisions
Authors: P. Deepak Kumar, Aishwarya Alok, P. R. Maiti
Abstract:
Liquid storage tanks have become widespread during recent decades due to their extensive usage. The analysis of liquid-containing tanks is known to be complex due to the hydrodynamic force exerted on the tank. The objective of this research is to carry out an analysis of the liquid domain along with the structural interaction for various geometries of circular tanks considering seismic effects. An attempt has been made to determine the hydrodynamic pressure distribution on the tank wall considering the impulsive and convective components of the liquid mass. To get a better picture, a comparative study of Draft IS 1893 Part 2, ACI 350.3 and Eurocode 8 for circular-shaped tanks has been performed. Further, the differences in the magnitude of shear and moment at the base as obtained from static (IS 3370 Part IV) and dynamic (Draft IS 1893 Part 2) analysis of a ground-supported circular tank highlight the need to mature from the old code to a newer code, which is more accurate and reliable.
Keywords: liquid filled containers, circular tanks, IS 1893 (part 2), seismic analysis, sloshing
Procedia PDF Downloads 353
38556 Immobilization of Lipase Enzyme by Low Cost Material: A Statistical Approach
Authors: Md. Z. Alam, Devi R. Asih, Md. N. Salleh
Abstract:
The immobilization of lipase enzyme produced from palm oil mill effluent (POME) onto activated carbon (AC), chosen from among low-cost support materials, was optimized. The results indicated that an immobilization of 94% was achieved with AC as the most suitable support material. A sequential optimization strategy based on a statistical experimental design, including the one-factor-at-a-time (OFAT) method, was used to determine the equilibrium time. Three components influencing lipase immobilization were optimized by response surface methodology (RSM) based on a face-centered central composite design (FCCCD). From the statistical analysis of the results, the optimum enzyme concentration loading, agitation rate and activated carbon dosage were found to be 30 U/ml, 300 rpm and 8 g/L, respectively, with a maximum immobilization activity of 3732.9 U/g-AC after 2 hrs of immobilization. Analysis of variance (ANOVA) showed a high regression coefficient (R2) of 0.999, which indicated a satisfactory fit of the model with the experimental data. The parameters were statistically significant at p<0.05.
Keywords: activated carbon, POME based lipase, immobilization, adsorption
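A minimal sketch of the design-and-fit step described above: a face-centered central composite design for the three coded factors and a second-order response-surface fit. The simulated response below stands in for the measured immobilization activity and is an assumption for illustration only.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Face-centered central composite design (alpha = 1) in coded units for
# enzyme loading (E), agitation rate (A) and activated-carbon dosage (C).
factorial = list(itertools.product([-1, 1], repeat=3))
axial = [p for i in range(3) for p in
         (tuple(-1 if j == i else 0 for j in range(3)),
          tuple(1 if j == i else 0 for j in range(3)))]
center = [(0, 0, 0)] * 6
design = pd.DataFrame(factorial + axial + center, columns=["E", "A", "C"])

# Simulated activity (U/g-AC) with an assumed quadratic optimum near the centre
rng = np.random.default_rng(3)
design["activity"] = (3500 + 150 * design.E + 100 * design.A - 120 * design.E ** 2
                      - 90 * design.C ** 2 + 60 * design.E * design.A
                      + rng.normal(0, 25, len(design)))

# Full second-order (quadratic) response-surface model
model = smf.ols("activity ~ E + A + C + I(E**2) + I(A**2) + I(C**2)"
                " + E:A + E:C + A:C", data=design).fit()
print(f"R^2 = {model.rsquared:.3f}")
print(model.params.round(1))
```

The fitted quadratic surface is what an RSM optimization then interrogates to locate the stationary point in real (uncoded) units of loading, agitation and dosage.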
Procedia PDF Downloads 243
38555 A Gap Analysis of Attitude Towards Sustainable Sportswear Product Development between Consumers and Suppliers
Authors: Y. N. Fung, R. Liu, T. M. Choi
Abstract:
Over the past decades, studies have explored different consumers' attitudes towards sustainable fashion and how these attitudes affect consumer behaviors. Researchers have attempted to provide solutions for product suppliers (e.g., retailers, designers, developers, and manufacturers) by studying consumers' attitudes towards sustainable fashion. However, beyond studies of consumer attitudes, investigations of the sales and market share of sustainable sportswear products remain under-explored. Gaps may exist between consumers' expectations and the sustainable sportswear products being developed. In this paper, a novel study has been carried out to examine the attitude gaps existing between sustainable sportswear suppliers (SSSs) and sustainable sportswear consumers (SSCs). This study first identifies the key attitudes towards sustainable sportswear product development. It analyses how sustainability attitudes affect the products being developed, as well as the effects of the attitude differences between the SSSs and the SSCs on consumers' satisfaction with sportswear product consumption. A gap analysis research framework is adopted with the use of collected questionnaire survey data. The results indicate that a significant difference exists between SSSs' and SSCs' attitudes towards sustainable design, manufacture, product features, and branding. Based on in-depth interviews, the major causes of the difference in attitudes are studied to provide managerial insights for sustainable sportswear product management and business development.
Keywords: sustainability, sportswear, attitude, gap analysis, suppliers, consumers
Procedia PDF Downloads 114
38554 GIS for Simulating Air Traffic by Applying Different Multi-radar Positioning Techniques
Authors: Amara Rafik, Bougherara Maamar, Belhadj Aissa Mostefa
Abstract:
Radar data is one of the many data sources used by Air Traffic Management (ATM) systems. These data come from air navigation radar antennas. These radars intercept signals emitted by the various aircraft crossing the controlled airspace, calculate the positions of these aircraft and retransmit their positions to the Air Traffic Management system. For greater reliability, these radars are positioned in such a way as to allow their coverage areas to overlap. An aircraft will therefore be detected by at least one of these radars. However, the position coordinates of the same aircraft sent by these different radars are not necessarily identical. Therefore, the ATM system must calculate a single position (the radar track), which will ultimately be sent to the control position and displayed on the air traffic controller's monitor. There are several techniques for calculating the radar track. Furthermore, the geographical nature of the problem requires the use of a Geographic Information System (GIS), i.e., a geographical database on the one hand and geographical processing on the other. The objective of this work is to propose a GIS for traffic simulation that reconstructs the evolution over time of aircraft positions from a multi-source radar data set by applying these different techniques.
Keywords: ATM, GIS, radar data, air traffic simulation
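One common technique for combining overlapping radar plots of the same aircraft into a single track point is inverse-variance (accuracy-weighted) averaging. The sketch below illustrates that idea with placeholder positions and accuracies; it is not the specific set of techniques evaluated in the paper, and operational trackers typically add filtering (e.g., Kalman-style smoothing) on top.

```python
import numpy as np

def fuse_plots(positions, sigmas):
    """Fuse several radar plots of the same aircraft into one track point
    using inverse-variance weighting (less accurate radars count for less)."""
    positions = np.asarray(positions, dtype=float)   # (n_radars, 2) x/y in metres
    weights = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    track = (weights[:, None] * positions).sum(axis=0) / weights.sum()
    track_sigma = np.sqrt(1.0 / weights.sum())
    return track, track_sigma

# Placeholder plots of one aircraft reported by three overlapping radars
plots = [(10_120.0, 54_310.0), (10_180.0, 54_260.0), (10_150.0, 54_330.0)]
accuracies = [60.0, 90.0, 75.0]   # assumed 1-sigma position accuracy per radar (m)

track, sigma = fuse_plots(plots, accuracies)
print(f"fused track position: x = {track[0]:.1f} m, y = {track[1]:.1f} m "
      f"(1-sigma ~ {sigma:.1f} m)")
```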
Procedia PDF Downloads 86
38553 Analysis of the Effects of Vibrations on Tractor Drivers by Measurements With Wearable Sensors
Authors: Gubiani Rino, Nicola Zucchiatti, Da Broi Ugo, Bietresato Marco
Abstract:
The problem of vibrations in agriculture is very important due to the different types of machinery used and the different types of soil on which work is carried out. One of the most commonly used machines is the tractor, where the phenomenon has been studied for a long time through whole-body vibration measurements with the sensor placed on the seat. However, this measurement system does not take into account the characteristics of the drivers, such as their body mass index (BMI), their gender (male, female) or the muscle fatigue they are subjected to, which depends strongly, for example, on their age. The aim of the research was therefore to place sensors not only on the seat but along the spinal column, to check the transmission of vibration to drivers with different BMI and of different genders, on different tractors and at different travel speeds. The tests were also done using wearable sensors, such as a dynamometer applied to the muscles, whose data were correlated with the vibrations produced by the tractor. Initial data show that even on new tractors with pneumatic seats, the vibrations attenuate little and are still correlated with the roughness of the track travelled and the forward speed. Other important data are the root-mean-square values referred to 8 hours (A(8)x,y,z) and the maximum transient vibration values (MTVVx,y,z); of the latter, the MTVVz values were problematic (a limiting factor in most cases) and were always aggravated by the speed. The MTVVx values can be lowered by having a tyre-pressure adjustment system, able to properly adjust the tire pressure according to the specific situation (ground, speed) in which a tractor is operating.
Keywords: fatigue, effect vibration on health, tractor driver vibrations, vibration, muscle skeleton disorders
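A minimal sketch of how the two exposure metrics named above are typically derived from a frequency-weighted acceleration record: A(8) rescales the measured RMS to an eight-hour reference duration, and MTVV is the maximum of a one-second running RMS. The signal below is synthetic and assumed to be already frequency-weighted; the sample rate and exposure duration are assumptions.

```python
import numpy as np

fs = 200                              # sample rate (Hz), assumed
t = np.arange(0, 600, 1 / fs)         # 10 minutes of driving
rng = np.random.default_rng(0)
# Synthetic frequency-weighted seat acceleration (m/s^2): background + one bump
a_w = 0.4 * rng.normal(size=t.size)
a_w[60 * fs:62 * fs] += 2.5 * np.sin(2 * np.pi * 4 * t[:2 * fs])   # transient event

# Overall frequency-weighted RMS and A(8), the 8-hour-equivalent value
rms = np.sqrt(np.mean(a_w ** 2))
exposure_hours = t[-1] / 3600
A8 = rms * np.sqrt(exposure_hours / 8.0)

# MTVV: maximum of the running RMS computed over 1-second windows
window = fs
running_ms = np.convolve(a_w ** 2, np.ones(window) / window, mode="valid")
MTVV = np.sqrt(running_ms.max())

print(f"rms = {rms:.2f} m/s^2, A(8) = {A8:.2f} m/s^2, MTVV = {MTVV:.2f} m/s^2")
```

Because MTVV keys on the worst one-second window, a single jolt can dominate it even when the whole-shift A(8) value looks acceptable, which matches the abstract's observation that the vertical MTVV is usually the limiting factor.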
Procedia PDF Downloads 71
38552 Identification of Significant Genes in Rheumatoid Arthritis, Melanoma Metastasis, Ulcerative Colitis and Crohn’s Disease
Authors: Krishna Pal Singh, Shailendra Kumar Gupta, Olaf Wolkenhauer
Abstract:
Background: Our study aimed to identify common genes and potential targets across the four diseases, which include rheumatoid arthritis, melanoma metastasis, ulcerative colitis, and Crohn’s disease. We used a network and systems biology approach to identify the hub gene, which can act as a potential target for all four disease conditions. The regulatory network was extracted from the PPI using the MCODE module present in Cytoscape. Our objective was to investigate the significance of hub genes in these diseases using gene ontology and KEGG pathway enrichment analysis. Methods: Our methodology involved collecting disease gene-related information from DisGeNET databases and performing protein-protein interaction (PPI) network and core genes screening. We then conducted gene ontology and KEGG pathway enrichment analysis. Results: We found that IL6 plays a critical role in all disease conditions and in different pathways that can be associated with the development of all four diseases. Conclusions: The theoretical importance of our research is that we employed various systems and structural biology techniques to identify a crucial protein that could serve as a promising target for treating multiple diseases. Our data collection and analysis procedures involved rigorous scrutiny, ensuring high-quality results. Our conclusion is that IL6 plays a significant role in all four diseases, and it can act as a potential target for treating them. Our findings may have important implications for the development of novel therapeutic interventions for these diseases.Keywords: melanoma metastasis, rheumatoid arthritis, inflammatory bowel diseases, integrated bioinformatics analysis
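A minimal sketch of the hub-screening step on a toy protein-protein interaction edge list using networkx; the edges below are illustrative only, not DisGeNET or STRING data, and a full analysis would use MCODE cluster scores rather than plain degree and betweenness.

```python
import networkx as nx

# Toy PPI edge list centred on a few inflammation-related genes (illustrative only)
edges = [
    ("IL6", "IL6R"), ("IL6", "STAT3"), ("IL6", "TNF"), ("IL6", "JAK2"),
    ("IL6", "IL1B"), ("TNF", "NFKB1"), ("STAT3", "JAK2"), ("IL1B", "NFKB1"),
    ("TNF", "IL1B"), ("IL6", "SOCS3"), ("NFKB1", "RELA"),
]
ppi = nx.Graph(edges)

# Rank candidate hub genes by degree, breaking ties with betweenness centrality
degree = dict(ppi.degree())
betweenness = nx.betweenness_centrality(ppi)
for gene in sorted(ppi.nodes, key=lambda g: (-degree[g], -betweenness[g]))[:5]:
    print(f"{gene:6s} degree={degree[gene]:2d} betweenness={betweenness[gene]:.3f}")
```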
Procedia PDF Downloads 90
38551 Macroeconomic Implications of Artificial Intelligence on Unemployment in Europe
Authors: Ahmad Haidar
Abstract:
Modern economic systems are characterized by growing complexity, and addressing their challenges requires innovative approaches. This study examines the implications of artificial intelligence (AI) on unemployment in Europe from a macroeconomic perspective, employing data modeling techniques to understand the relationship between AI integration and labor market dynamics. To understand the AI-unemployment nexus comprehensively, this research considers factors such as sector-specific AI adoption, skill requirements, workforce demographics, and geographical disparities. The study utilizes a panel data model, incorporating data from European countries over the last two decades, to explore the potential short-term and long-term effects of AI implementation on unemployment rates. In addition to investigating the direct impact of AI on unemployment, the study also delves into the potential indirect effects and spillover consequences. It considers how AI-driven productivity improvements and cost reductions might influence economic growth and, in turn, labor market outcomes. Furthermore, it assesses the potential for AI-induced changes in industrial structures to affect job displacement and creation. The research also highlights the importance of policy responses in mitigating potential negative consequences of AI adoption on unemployment. It emphasizes the need for targeted interventions such as skill development programs, labor market regulations, and social safety nets to enable a smooth transition for workers affected by AI-related job displacement. Additionally, the study explores the potential role of AI in informing and transforming policy-making to ensure more effective and agile responses to labor market challenges. In conclusion, this study provides a comprehensive analysis of the macroeconomic implications of AI on unemployment in Europe, highlighting the importance of understanding the nuanced relationships between AI adoption, economic growth, and labor market outcomes. By shedding light on these relationships, the study contributes valuable insights for policymakers, educators, and researchers, enabling them to make informed decisions in navigating the complex landscape of AI-driven economic transformation.Keywords: artificial intelligence, unemployment, macroeconomic analysis, european labor market
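A minimal sketch of the kind of panel specification described above, here a two-way fixed-effects regression of unemployment on AI adoption estimated on simulated country-year data; the variable names, effect sizes, and country set are assumptions for illustration, not the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
countries = [f"C{i:02d}" for i in range(20)]
years = range(2003, 2023)

rows = []
for c in countries:
    country_effect = rng.normal(0, 2)               # unobserved country heterogeneity
    for y in years:
        ai_adoption = rng.uniform(0, 1) + 0.02 * (y - 2003)   # slowly rising AI use
        gdp_growth = rng.normal(2, 1)
        unemployment = (8 + country_effect - 1.5 * ai_adoption
                        - 0.4 * gdp_growth + rng.normal(0, 0.5))
        rows.append((c, y, ai_adoption, gdp_growth, unemployment))
panel = pd.DataFrame(rows, columns=["country", "year", "ai", "gdp", "unemp"])

# Two-way fixed effects via country and year dummies (LSDV estimator)
fe = smf.ols("unemp ~ ai + gdp + C(country) + C(year)", data=panel).fit()
print(fe.params[["ai", "gdp"]].round(3))
print(f"R^2 = {fe.rsquared:.3f}")
```

The fixed effects absorb time-invariant country differences and common year shocks, so the coefficient on AI adoption reflects within-country variation over time rather than cross-country level differences.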
Procedia PDF Downloads 77
38550 Microfacies and Sedimentary Environment of Potentially Hydrocarbon-Bearing Ordovician and Silurian Deposits of Selected Boreholes in the Baltic Syneclise (NE Poland)
Authors: Katarzyna Sobczak
Abstract:
Over the last few years extensive research on the Lower Palaeozic of the Baltic region has been carried out, associated with growing interest in the unconventional hydrocarbon resources of the area. The present study contributes to this investigation by providing relevant microfacies analysis of Ordovician and Silurian carbonate and clastic deposits of the Polish part of the Baltic Syneclise, using data from the Kętrzyn IG-1, Henrykowo 1 and Babiak 1 boreholes. The analytical data, encompassing sedimentological, palaeontological, and petrographic indicators enables the interpretation of the sedimentary environments and their control factors. The main microfacies types distinguished within the studied interval are: bioclastic wackestone, bioclastic packstone, carbonate-rich mudstone, marlstone, nodular limestone and bituminous claystone. The Ordovician is represented by redeposited carbonate rocks formed in a relatively high-energy environment (middle shelf setting). The Upper Ordovician-Lower Silurian rocks of the studied basin represent sedimentary succession formed during a distinctive marine transgression. Considering the sedimentological and petrological data from the Silurian, a low-energy sedimentary environment (offshore setting) with intermittent high-energy events (tempestites) can be inferred for the sedimentary basin of NE Poland. Slow sedimentation of carbonate ooze and fine-grained siliciclastic rocks, formed under oxygen-deficient conditions of the seabed, favoured organic matter preservation. The presence of the storm beds suggests an episodic nature of seabed oxygenation. A significant part of the analysed depositional successions shows characteristics indicative of deposition from gravity flows, but lacks evidence of its turbidity origins. There is, however, evidence for storms acting as a mechanism of flow activation. The discussed Ordovician-Silurian transition of depositional environments in the Baltic area fits well to the global environmental changes encompassing the Upper Ordovician and the Lower Silurian.Keywords: Baltic Syneclise, microfacies analysis, Ordovician, Silurian, unconventional hydrocarbons
Procedia PDF Downloads 433
38549 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in the intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of utilization. In all these fields, the amount of the collected data is increasing quickly, but with the increase of the data, the computation speed becomes the critical factor. Data reduction is one of the solutions to this problem. Removing the redundancy in the rough sets can be achieved with the reduct. A lot of algorithms of generating the reduct were developed, but most of them are only software implementations, therefore have many limitations. Microprocessor uses the fixed word length, consumes a lot of time for either fetching as well as processing of the instruction and data; consequently, the software based implementations are relatively slow. Hardware systems don’t have these limitations and can process the data faster than a software. Reduct is the subset of the decision attributes that provides the discernibility of the objects. For the given decision table there can be more than one reduct. Core is the set of all indispensable condition attributes. None of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct consists of all the attributes from the core. In this paper, the hardware implementation of the two-stage greedy algorithm to find the one reduct is presented. The decision table is used as an input. Output of the algorithm is the superreduct which is the reduct with some additional removable attributes. First stage of the algorithm is calculating the core using the discernibility matrix. Second stage is generating the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table. Described above algorithm has two disadvantages: i) generating the superreduct instead of reduct, ii) additional first stage may be unnecessary if the core is empty. But for the systems focused on the fast computation of the reduct the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block, and thus add respectively little time to the whole process. Algorithm presented in this paper was implemented in Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by the comparators connected to the block called 'singleton detector', which detects if the input word contains only single 'one'. Calculating the number of occurrences of the attribute is performed in the combinational block made up of the cascade of the adders. The superreduct generation process is iterative and thus needs the sequential circuit for controlling the calculations. For the research purpose, the algorithm was also implemented in C language and run on a PC. The times of execution of the reduct calculation in a hardware and software were considered. Results show increase in the speed of data processing.Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
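A software reference of the two-stage algorithm described above, written in Python as a sketch: the core is taken from the singleton entries of the discernibility matrix, and the superreduct is then grown greedily using attribute frequency over the entries not yet covered (one reasonable reading of the frequency-based second stage). The toy decision table is illustrative; the FPGA design realizes the same two stages in hardware.

```python
from itertools import combinations

def discernibility_matrix(table, decision):
    """For each pair of objects with different decisions, the set of condition
    attributes that discern them."""
    entries = []
    for i, j in combinations(range(len(table)), 2):
        if decision[i] != decision[j]:
            diff = {a for a in range(len(table[0])) if table[i][a] != table[j][a]}
            if diff:
                entries.append(diff)
    return entries

def two_stage_superreduct(table, decision):
    entries = discernibility_matrix(table, decision)
    # Stage 1: the core = attributes that appear as singleton entries
    core = {next(iter(e)) for e in entries if len(e) == 1}
    # Stage 2: greedily add the most frequent attribute until every entry is covered
    reduct = set(core)
    uncovered = [e for e in entries if not (e & reduct)]
    while uncovered:
        counts = {}
        for e in uncovered:
            for a in e:
                counts[a] = counts.get(a, 0) + 1
        best = max(counts, key=counts.get)
        reduct.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return core, reduct

# Toy decision table: rows = objects, columns = condition attributes a0..a3
table = [
    [1, 0, 2, 1],
    [1, 1, 2, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 0, 1, 1],
]
decision = [1, 1, 0, 0, 1]
core, superreduct = two_stage_superreduct(table, decision)
print("core:", sorted(core), " superreduct:", sorted(superreduct))
```

As the abstract notes, the result may be a superreduct rather than a minimal reduct, since the greedy second stage can keep attributes that a later removal pass would show to be redundant.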
Procedia PDF Downloads 219
38548 The Risk and Prevention of Peer-To-Peer Network Lending in China
Authors: Zhizhong Yuan, Lili Wang, Chenya Zheng, Wuqi Yang
Abstract:
How to encourage and support peer-to-peer (P2P) network lending, and effectively monitor the risk of P2P network lending, has become the focus of the Chinese government departments, industrialists, experts and scholars in recent years. The reason is that this convenient online micro-credit service brings a series of credit risks and other issues. Avoiding the risks brought by the P2P network lending model, it can better play a benign role and help China's small and medium-sized private enterprises with vigorous development to solve the capital needs; otherwise, it will bring confusion to the normal financial order. As a form of financial services, P2P network lending has injected new blood into China's non-government finance in the past ten years, and has found a way out for idle funds and made up for the shortage of traditional financial services in China. However, it lacks feasible measures in credit evaluation and government supervision. This paper collects a large amount of data about P2P network lending of China. The data collection comes from the official media of the Chinese government, the public achievements of existing researchers and the analysis and collation of correlation data by the authors. The research content of this paper includes literature review; the current situation of China's P2P network lending development; the risk analysis of P2P network lending in China; the risk prevention strategy of P2P network lending in China. The focus of this paper is to try to find a specific program to strengthen supervision and avoid risks from the perspective of government regulators, operators of P2P network lending platform, investors and users of funds. These main measures include: China needs to develop self-discipline organization of P2P network lending industry and formulate self-discipline norms as soon as possible; establish a regular information disclosure system of P2P network lending platform; establish censorship of credit rating of borrowers; rectify the P2P network lending platform in compliance through the implementation of bank deposition. The results and solutions will benefit all the P2P network lending platforms, creditors, debtors, bankers, independent auditors and government agencies of China and other countries.Keywords: peer-to-peer(P2P), regulation, risk prevention, supervision
Procedia PDF Downloads 166
38547 Knowledge of Trauma-Informed Practice: A Mixed Methods Exploratory Study with Educators of Young Children
Authors: N. Khodarahmi, L. Ford
Abstract:
Decades of research on the impact of trauma in early childhood suggest severe risks to the mental health, emotional, social and physical development of a young child. Trauma-exposed students can pose a variety of different levels of challenges to schools and educators of young children and to date, few studies have addressed ECE teachers’ role in providing trauma support. The present study aims to contribute to this literature by exploring the beliefs of British Columbia’s (BC) early childhood education (ECE) teachers in their level of readiness and capability to work within a trauma-informed practice (TIP) framework to support their trauma-exposed students. Through a sequential, mix-methods approach, a self-report questionnaire and semi-structured interviews will be used to gauge BC ECE teachers’ knowledge of TIP, their preparedness, and their ability in using this framework to support their most vulnerable students. Teacher participants will be recruited through the ECEBC organization and various school districts in the Greater Vancouver Area. Questionnaire data will be primarily collected through an online survey tool whereas interviews will be taking place in-person and audio-recorded. Data analysis of survey responses will be largely descriptive, whereas interviews, once transcribed, will be employing thematic content analysis to generate themes from teacher responses. Ultimately, this study hopes to highlight the necessity of utilizing the TIP framework in BC ECE classrooms in order to support both trauma-exposed students and provide essential resources to compassionate educators of young children.Keywords: early childhood education, early learning classrooms, refugee students, trauma-exposed students, trauma-informed practice
Procedia PDF Downloads 141
38546 Insulin Resistance in Children and Adolescents in Relation to Body Mass Index, Waist Circumference and Body Fat Weight
Authors: E. Vlachopapadopoulou, E. Dikaiakou, E. Anagnostou, I. Panagiotopoulos, E. Kaloumenou, M. Kafetzi, A. Fotinou, S. Michalacos
Abstract:
Aim: To investigate the relation and impact of body mass index (BMI), waist circumference (WC) and body fat weight (BFW) on insulin resistance (Matsuda index < 2.5) in children and adolescents. Methods: Data from 95 overweight and obese children (47 boys and 48 girls) with a mean age of 10.7 ± 2.2 years were analyzed. ROC analysis was used to investigate the predictive ability of BMI, WC and BFW for insulin resistance and to find the optimal cut-offs. The overall performance of the ROC analysis was quantified by computing the area under the curve (AUC). Results: ROC curve analysis indicated that the optimal cut-off of WC for the prediction of insulin resistance was 97 cm, with sensitivity equal to 75% and specificity equal to 73.1%. The AUC was 0.78 (95% CI: 0.63-0.92, p=0.001). The sensitivity and specificity of obesity for discriminating participants with insulin resistance from those without were equal to 58.3% and 75%, respectively (AUC=0.67). BFW had a borderline predictive ability for insulin resistance (AUC=0.58, 95% CI: 0.43-0.74, p=0.101). The predictive ability of WC was equivalent to the corresponding predictive ability of BMI (p=0.891). Obese subjects had 4.2 times greater odds of having insulin resistance (95% CI: 1.71-10.30, p < 0.001), while subjects with a WC of more than 97 cm had 8.1 times greater odds of having insulin resistance (95% CI: 2.14-30.86, p=0.002). Conclusion: BMI and WC are important clinical factors that have a significant clinical relationship with insulin resistance in children and adolescents. The cut-off of 97 cm for WC can identify children with a greater likelihood of insulin resistance.
Keywords: body fat weight, body mass index, insulin resistance, obese children, waist circumference
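A minimal sketch of the type of analysis reported above, run on simulated data: the ROC curve and AUC for waist circumference, the Youden-optimal cut-off, and the odds ratio of insulin resistance above versus below that cut-off. The numbers produced are simulated placeholders, not the study's results.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
n = 95
# Simulated waist circumference (cm) and insulin-resistance status (Matsuda < 2.5)
insulin_resistant = rng.integers(0, 2, n)
wc = np.where(insulin_resistant == 1,
              rng.normal(100, 9, n), rng.normal(90, 9, n))

fpr, tpr, thresholds = roc_curve(insulin_resistant, wc)
auc = roc_auc_score(insulin_resistant, wc)
best = np.argmax(tpr - fpr)                 # Youden's J statistic
cutoff = thresholds[best]
print(f"AUC = {auc:.2f}, optimal WC cut-off = {cutoff:.1f} cm "
      f"(sens {tpr[best]:.2f}, spec {1 - fpr[best]:.2f})")

# Odds ratio of insulin resistance for WC above vs. below the cut-off
high = wc >= cutoff
a = np.sum(high & (insulin_resistant == 1))
b = np.sum(high & (insulin_resistant == 0))
c = np.sum(~high & (insulin_resistant == 1))
d = np.sum(~high & (insulin_resistant == 0))
odds_ratio = (a * d) / (b * c)
print(f"odds ratio (WC >= {cutoff:.0f} cm): {odds_ratio:.1f}")
```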
Procedia PDF Downloads 320