Search results for: data recognition
25486 The Impact of an Educational Program on Knowledge, Attitude and Practices of Healthcare Professionals towards Family Presence during Resuscitation in an Emergency Department at a Tertiary Care Setting, in Karachi, Pakistan
Authors: Shaista Meghani, Rozina Karmaliani, Khairulnissa Ajani, Shireen Shahzad, Nadeem Ullah Khan
Abstract:
Background: The concept of Family Presence During Resuscitation (FPDR) is gradually gaining recognition in western countries; however, it is rarely considered in South Asian countries, including Pakistan. Over time, patients’ and families’ rights have gained recognition, and healthcare has progressed to become more patient-family centered. Objectives: The objective of this study was to evaluate the impact of an educational program on the Knowledge, Attitude, and Practices (KAP) of healthcare professionals (HCPs) towards FPDR in the Emergency Department (ED) at a tertiary care setting in Karachi, Pakistan. Methods: The study used a pre-test and post-test design. Convenience universal sampling was done, and all ED nurses and physicians with more than one year of experience were eligible. The intervention included one-hour training sessions for physicians (three sessions) and nurses (eight sessions). The KAP of nurses and physicians was assessed immediately after (post-test I) and two weeks after (post-test II) the intervention using a pretested questionnaire. Results: The findings of the study revealed that the changes in the mean knowledge and attitude scores of HCPs at both time points were statistically significant (p < 0.001); however, no significant difference was found in the practice of FPDR (p > 0.05). Conclusion: The study findings recommend that the educational program on FPDR for HCPs be offered on an ongoing basis. Moreover, training modules need to be developed for the staff, and formal guidelines need to be proposed for FPDR through a multidisciplinary team approach. Keywords: family presence, cardiopulmonary resuscitation, attitude, education, practices, health care professionals
Procedia PDF Downloads 185
25485 Raising Forest Voices: A Cross-Country Comparative Study of Indigenous Peoples’ Engagement with Grassroots Climate Change Mitigation Projects in the Initial Pilot Phase of Community-Based Reducing Emissions from Deforestation and Forest Degradation
Authors: Karl D. Humm
Abstract:
The United Nations’ Community-based REDD+ (Reducing Emissions from Deforestation and Forest Degradation) (CBR+) is a programme that directly finances grassroots climate change mitigation strategies that uplift Indigenous Peoples (IPs) and other marginalised groups. A pilot in six countries was developed in response to criticism of the REDD+ programme for excluding IPs from dialogues about climate change mitigation strategies affecting their lands and livelihoods. Despite the pilot’s conclusion in 2017, no complete report has yet been produced on the results of CBR+. To fill this gap, this study investigated the experiences of involving IPs in the CBR+ programmes and local projects across all six pilot countries. A literature review of official UN reports and academic articles identified challenges and successes with IP participation in REDD+, which became the basis for a framework guiding data collection. A mixed methods approach was used to collect and analyse qualitative and quantitative data from CBR+ documents and written interviews with CBR+ National Coordinators in each country for a cross-country comparative analysis. The study found that the most frequent challenges were lack of organisational capacity, illegal forest activities, and historically based contentious relationships in IP and forest-dependent communities. Successful programmes included IPs and incorporated respect and recognition of IPs as major stakeholders in managing sustainable forests. Findings are summarised and shared with a set of recommendations for the improvement of future projects. Keywords: climate change, forests, indigenous peoples, REDD+
Procedia PDF Downloads 123
25484 A Study on Big Data Analytics, Applications and Challenges
Authors: Chhavi Rana
Abstract:
The aim of the paper is to highlight the existing developments in the field of big data analytics. Applications like bioinformatics, smart infrastructure projects, healthcare, and business intelligence contain voluminous and incremental data, which is hard to organise and analyse and can be dealt with using the frameworks and models in this field of study. An organization's decision-making strategy can be enhanced by using big data analytics and applying different machine learning techniques and statistical tools on such complex data sets, which will consequently benefit society. This paper reviews the current state of the art in this field of study as well as different application domains of big data analytics. It also elaborates on various frameworks in the process of analysis using different machine-learning techniques. Finally, the paper concludes by stating different challenges and issues raised in existing research. Keywords: big data, big data analytics, machine learning, review
Procedia PDF Downloads 81
25483 A Study on Big Data Analytics, Applications, and Challenges
Authors: Chhavi Rana
Abstract:
The aim of the paper is to highlight the existing developments in the field of big data analytics. Applications like bioinformatics, smart infrastructure projects, healthcare, and business intelligence contain voluminous and incremental data which is hard to organise and analyse and can be dealt with using the frameworks and models in this field of study. An organisation's decision-making strategy can be enhanced by using big data analytics and applying different machine learning techniques and statistical tools to such complex data sets, which will consequently benefit society. This paper reviews the current state of the art in this field of study as well as different application domains of big data analytics. It also elaborates on various frameworks in the process of analysis using different machine learning techniques. Finally, the paper concludes by stating different challenges and issues raised in existing research. Keywords: big data, big data analytics, machine learning, review
Procedia PDF Downloads 93
25482 Developing Rice Disease Analysis System on Mobile via iOS Operating System
Authors: Rujijan Vichivanives, Kittiya Poonsilp, Canasanan Wanavijit
Abstract:
This research aims to create mobile tools to analyze rice disease quickly and easily. The principles of object-oriented software engineering and the Objective-C language were used as the software development methodology, and the decision tree technique was used as the analysis method. Application users can select the features of a rice disease or the color that appears on the rice leaves to obtain recognition analysis results on the iOS mobile screen. After completing the software development, unit testing and integration testing methods were used to check program validity. In addition, three plant experts and forty farmers assessed the usability and benefit of this system. Overall user satisfaction was found to be at a good level, 57%. The plant experts commented on the addition of various disease symptoms to the database for more precise analysis results. For further research, it is suggested that an image processing system should be developed as a tool that allows users to search for and analyze rice diseases more conveniently and with greater accuracy. Keywords: rice disease, data analysis system, mobile application, iOS operating system
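The decision-tree analysis described in this abstract can be pictured with a short sketch. The app itself is written in Objective-C; the Python snippet below is only a stand-in, and its symptom features, training rows, and disease labels are hypothetical.

```python
# A minimal sketch (hypothetical features/labels, not the paper's Objective-C implementation):
# user-selected symptom and leaf-colour features are mapped to a disease label by a decision tree.
from sklearn.tree import DecisionTreeClassifier

# Features: [has_lesions, lesion_is_brown, leaf_is_yellowing] encoded as 0/1 (illustrative only)
X = [[1, 1, 0], [1, 0, 1], [0, 0, 1], [1, 1, 1]]
y = ["brown_spot", "blast", "nutrient_deficiency", "brown_spot"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([[1, 1, 0]]))   # -> ['brown_spot']
```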
Procedia PDF Downloads 286
25481 Managing Linguistic Diversity in Teaching and in Learning in Higher Education Institutions: The Case of the University of Luxembourg
Authors: Argyro-Maria Skourmalla
Abstract:
Today’s reality is characterized by diversity in different levels and aspects of everyday life. Focusing on the aspect of language and communication in Higher Education (HE), the present paper draws on the example of the University of Luxembourg as a multilingual and international setting. The University of Luxembourg, which is located between France, Germany, and Belgium, adopted its new multilingualism policy in 2020, establishing English, French, German, and Luxembourgish as the official languages of the Institution. In addition, with around 10,000 students and staff coming from various countries around the world, linguistic diversity in this university is seen as both a resource and a challenge that calls for an inclusive and multilingual approach. The present paper includes data derived from semi-structured interviews with lecturing staff from different disciplines and an online survey with undergraduate students at the University of Luxembourg. Participants shared their experiences and points of view regarding linguistic diversity in this context. Findings show that linguistic diversity in this university is seen as an asset but comes with challenges, and even though there is progress in the use of multilingual practices, a lot needs to be done towards the recognition of staff and students’ linguistic repertoires for inclusion and education equity. Keywords: linguistic diversity, higher education, Luxembourg, multilingual practices, teaching, learning
Procedia PDF Downloads 74
25480 Key Principles and Importance of Applied Geomorphological Maps for Engineering Structure Placement
Authors: Sahar Maleki, Reza Shahbazi, Nayere Sadat Bayat Ghiasi
Abstract:
Applied geomorphological maps are crucial tools in engineering, particularly for the placement of structures. These maps provide precise information about the terrain, including landforms, soil types, and geological features, which are essential for making informed decisions about construction sites. The importance of these maps is evident in risk assessment, as they help identify potential hazards such as landslides, erosion, and flooding, enabling better risk management. Additionally, these maps assist in selecting the most suitable locations for engineering projects. Cost efficiency is another significant benefit, as proper site selection and risk assessment can lead to substantial cost savings by avoiding unsuitable areas and minimizing the need for extensive ground modifications. Ensuring the maps are accurate and up-to-date is crucial for reliable decision-making. Detailed information about various geomorphological features is necessary to provide a comprehensive overview. Integrating geomorphological data with other environmental and engineering data to create a holistic view of the site is one of the most fundamental steps in engineering. In summary, the preparation of applied geomorphological maps is a vital step in the planning and execution of engineering projects, ensuring safety, efficiency, and sustainability. In the Geological Survey of Iran, the preparation of these applied maps has enabled the identification and recognition of areas prone to geological hazards such as landslides, subsidence, earthquakes, and more. Additionally, areas with problematic soils, potential groundwater zones, and safe construction sites are identified and made available to the public. Keywords: geomorphological maps, geohazards, risk assessment, decision-making
Procedia PDF Downloads 17
25479 Framework for Integrating Big Data and Thick Data: Understanding Customers Better
Authors: Nikita Valluri, Vatcharaporn Esichaikul
Abstract:
With the popularity of data-driven decision making on the rise, this study focuses on providing an alternative outlook towards the process of decision-making. Combining quantitative and qualitative methods rooted in the social sciences, an integrated framework is presented with a focus on delivering a much more robust and efficient approach towards the concept of data-driven decision-making with respect to not only Big data but also 'Thick data', a new form of qualitative data. In support of this, an example from the retail sector has been illustrated where the framework is put into action to yield insights and leverage business intelligence. An interpretive approach has been used to analyze findings from both kinds of data, quantitative and qualitative, and to glean insights. Using traditional point-of-sale data as well as an understanding of customer psychographics and preferences, techniques of data mining along with qualitative methods (such as grounded theory, ethnomethodology, etc.) are applied. This study’s final goal is to establish the framework as a basis for providing a holistic solution encompassing both the Big and Thick aspects of any business need. The proposed framework is a modified enhancement of the traditional data-driven decision-making approach, which is mainly dependent on quantitative data for decision-making. Keywords: big data, customer behavior, customer experience, data mining, qualitative methods, quantitative methods, thick data
Procedia PDF Downloads 161
25478 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome
Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler
Abstract:
Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to the multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the impact of the presence/value of clinical conditions on the predicted outcome. Like the (log) odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman’s rho = 0.74), which helped validate our explanation. Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model
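The two evaluation steps reported in this abstract, the AUC comparison and the Spearman correlation between impact scores and LR log odds ratios, can be sketched on synthetic stand-in data; every array below is an assumption, not the study's data.

```python
# A minimal sketch with synthetic data: compare AUCs and correlate explanation scores.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                          # synthetic 1-year mortality labels
dnn_scores = np.clip(y_true * 0.3 + rng.random(1000) * 0.7, 0, 1)
lr_scores = np.clip(y_true * 0.2 + rng.random(1000) * 0.8, 0, 1)

print("DNN AUC:", roc_auc_score(y_true, dnn_scores))
print("LR  AUC:", roc_auc_score(y_true, lr_scores))

impact_scores = rng.normal(size=50)                              # per-feature impact scores (synthetic)
log_odds = impact_scores * 0.8 + rng.normal(scale=0.5, size=50)  # correlated LR log odds (synthetic)
rho, p = spearmanr(impact_scores, log_odds)
print("Spearman rho:", rho, "p-value:", p)
```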
Procedia PDF Downloads 152
25477 Behavioral and EEG Reactions in Native Turkic-Speaking Inhabitants of Siberia and Siberian Russians during Recognition of Syntactic Errors in Sentences in Native and Foreign Languages
Authors: Tatiana N. Astakhova, Alexander E. Saprygin, Tatyana A. Golovko, Alexander N. Savostyanov, Mikhail S. Vlasov, Natalia V. Borisova, Alexandera G. Karpova, Urana N. Kavai-ool, Elena D. Mokur-ool, Nikolay A. Kolchanov, Lubomir I. Aftanas
Abstract:
The aim of the study is to compare behavioral and EEG reactions of Turkic-speaking inhabitants of Siberia (Tuvinians and Yakuts) and Russians during the recognition of syntax errors in native and foreign languages. 63 healthy aboriginals of the Tyva Republic, 29 inhabitants of the Sakha (Yakutia) Republic, and 55 Russians from Novosibirsk participated in the study. All participants completed a linguistic task in which they had to find a syntax error in written sentences. Russian participants completed the task in Russian and in English. Tuvinian and Yakut participants completed the task in Russian, English, and Tuvinian or Yakut, respectively. EEGs were recorded while the tasks were being solved. For Russian participants, EEGs were recorded using 128 channels. The electrodes were placed according to the extended International 10-10 system, and the signals were amplified using Neuroscan (USA) amplifiers. For Tuvinians and Yakuts, EEGs were recorded using 64 channels and Brain Products (Germany) amplifiers. In all groups, 0.3-100 Hz analog filtering and a 1000 Hz sampling rate were used. Response speed and the accuracy of error recognition were used as parameters of behavioral reactions. Event-related potential (ERP) responses P300 and P600 were used as indicators of brain activity. The accuracy of solving tasks and response speed in Russians were higher for Russian than for English. The P300 amplitudes in Russians were higher for English; the P600 amplitudes in the left temporal cortex were higher for the Russian language. Both Tuvinians and Yakuts showed no difference in the accuracy of solving tasks in Russian and in their respective national languages (Tuvinian and Yakut). However, the response speed was faster for tasks in Russian than for tasks in their national language. Tuvinians and Yakuts showed poor accuracy in English, but the response speed was higher for English than for Russian and the national languages. In Tuvinians, there were no differences in the P300 and P600 amplitudes or in cortical topology between Russian and Tuvinian, but there was a difference for English. In Yakuts, the P300 and P600 amplitudes and the topology of ERPs for Russian were the same as those Russians had for Russian. In Yakuts, brain reactions during Yakut and English comprehension did not differ and reflected foreign-language comprehension, while Russian comprehension reflected native-language comprehension. We found that the Tuvinians recognized both Russian and Tuvinian as native languages and English as a foreign language. The Yakuts recognized both English and Yakut as foreign languages and only Russian as a native language. According to the questionnaire, both Tuvinians and Yakuts use the national language as a spoken language, whereas they do not use it for writing. This may well be a reason that Yakuts perceive written Yakut as a foreign language while perceiving written Russian as native. Keywords: EEG, language comprehension, native and foreign languages, Siberian inhabitants
Procedia PDF Downloads 531
25476 Incremental Learning of Independent Topic Analysis
Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda
Abstract:
In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of document data. The amount of document data has been increasing since the spread of the Internet, and ITA was presented as one method to analyze such data. ITA is a method for extracting independent topics from document data by using Independent Component Analysis (ICA). ICA is a technique from signal processing; however, it is difficult to apply ITA to a growing number of documents because ITA must use all of the document data, so the temporal and spatial costs are very high. Therefore, we present Incremental ITA, which extracts independent topics from an increasing number of documents. Incremental ITA updates the independent topics when new document data is added, starting from the topics extracted from the previous data. We also show the results of applying Incremental ITA to benchmark datasets. Keywords: text mining, topic extraction, independent, incremental, independent component analysis
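The batch ITA step, topic extraction via ICA, can be approximated with a short sketch. The snippet below uses FastICA on a TF-IDF matrix as a stand-in with illustrative documents; it does not reproduce the paper's incremental update.

```python
# A minimal sketch (assumption: FastICA over TF-IDF stands in for the ITA batch step).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import FastICA
import numpy as np

docs = ["stocks fell as markets reacted to rates",
        "the team won the championship final",
        "new vaccine trial shows promising results",
        "central bank raises interest rates again",
        "injury forces star player out of the final",
        "clinical study confirms treatment efficacy"]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs).toarray()        # document-by-term matrix

ica = FastICA(n_components=3, random_state=0)
doc_topics = ica.fit_transform(X)            # document-by-topic representation
topic_terms = ica.components_                # topic-by-term mixing directions

vocab = vec.get_feature_names_out()
for k, comp in enumerate(np.abs(topic_terms)):
    print("topic", k, [vocab[i] for i in comp.argsort()[-3:][::-1]])  # top terms per topic
```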
Procedia PDF Downloads 307
25475 Open Data for e-Governance: Case Study of Bangladesh
Authors: Sami Kabir, Sadek Hossain Khoka
Abstract:
Open Government Data (OGD) refers to all data produced by government which are accessible in a reusable way by common people with access to the Internet, free of cost. In line with the “Digital Bangladesh” vision of the Bangladesh government, the concept of open data has been gaining momentum in the country. Opening all government data in a digital and customizable format from a single platform can enhance e-governance, which will make the government more transparent to the people. This paper presents a work-in-progress case study on the OGD portal by the Bangladesh Government intended to link decentralized data. The initiative aims to facilitate e-services for citizens through this one-stop web portal. The paper further discusses ways of collecting data in digital format from relevant agencies with a view to making it publicly available through this single point of access. Further, a possible layout of this web portal is presented. Keywords: e-governance, one-stop web portal, open government data, reusable data, web of data
Procedia PDF Downloads 354
25474 ARABEX: Automated Dotted Arabic Expiration Date Extraction using Optimized Convolutional Autoencoder and Custom Convolutional Recurrent Neural Network
Authors: Hozaifa Zaki, Ghada Soliman
Abstract:
In this paper, we introduce ARABEX, an approach for Automated Dotted Arabic Expiration Date Extraction using an Optimized Convolutional Autoencoder with a bidirectional LSTM. This approach is used for translating Arabic dot-matrix expiration dates into their corresponding filled-in dates. A custom lightweight Convolutional Recurrent Neural Network (CRNN) model is then employed to extract the expiration dates. Due to the lack of available dataset images for Arabic dot-matrix expiration dates, we generated synthetic images by creating an Arabic dot-matrix True Type Font (TTF) to address this limitation. Our model was trained on a realistic synthetic dataset of 3287 images, covering the period from 2019 to 2027, represented in the format yyyy/mm/dd. We then trained our custom CRNN model using the generated synthetic images to assess the performance of ARABEX by extracting expiration dates from the translated images. Our proposed approach achieved an accuracy of 99.4% on the test dataset of 658 images, while also achieving a Structural Similarity Index (SSIM) of 0.46 for image translation on our dataset. The ARABEX approach demonstrates its ability to be applied to various downstream learning tasks, including image translation and reconstruction. Moreover, this pipeline (ARABEX+CRNN) can be seamlessly integrated into automated sorting systems to extract expiry dates and sort products accordingly during the manufacturing stage. By eliminating the need for manual entry of expiration dates, which can be time-consuming and inefficient for merchants, our approach offers significant gains in efficiency and accuracy for Arabic dot-matrix expiration date recognition. Keywords: computer vision, deep learning, image processing, character recognition
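The CRNN stage can be pictured with a minimal sketch. The PyTorch block below is an assumption-level stand-in, not the authors' architecture: it only shows the general convolution-then-bidirectional-LSTM layout that maps a date-strip image to per-timestep character scores suitable for CTC decoding.

```python
# A minimal CRNN sketch (illustrative layer sizes and character set, not the paper's model).
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, num_classes):  # num_classes includes the CTC blank symbol
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),       # H/2, W/2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)), # H/4, W/2
        )
        self.rnn = nn.LSTM(input_size=64 * 8, hidden_size=128,
                           num_layers=1, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 128, num_classes)

    def forward(self, x):              # x: (batch, 1, 32, W) grayscale date strip
        f = self.cnn(x)                # (batch, 64, 8, W/2)
        f = f.permute(0, 3, 1, 2)      # treat width as the time axis
        f = f.flatten(2)               # (batch, W/2, 64*8)
        out, _ = self.rnn(f)           # (batch, W/2, 256)
        return self.fc(out)            # per-timestep class scores for CTC decoding

model = TinyCRNN(num_classes=12)       # e.g., digits 0-9, '/', plus the CTC blank
logits = model(torch.randn(4, 1, 32, 128))
print(logits.shape)                    # torch.Size([4, 64, 12])
```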
Procedia PDF Downloads 81
25473 Dwindling the Stability of DNA Sequence by Base Substitution at Intersection of COMT and MIR4761 Gene
Authors: Srishty Gulati, Anju Singh, Shrikant Kukreti
Abstract:
The manifestation of structural polymorphism in DNA depends on the sequence and the surrounding environment. Numerous folded DNA structures have been found in the cellular system, of which DNA hairpins are very common and indispensable due to their role in replication initiation sites, recombination, transcription regulation, and protein recognition. We follow this approach in our study, where two base substitutions and a change in temperature bring about destabilization of the DNA structure and disturb the equilibrium between two structures of a sequence present at the overlapping region of the human COMT and MIR4761 genes. The COMT and MIR4761 genes encode the catechol-O-methyltransferase (COMT) enzyme and microRNAs (miRNAs), respectively. Environmental changes and errors during cell division lead to genetic abnormalities. The COMT gene, involved in dopamine regulation, is implicated in neurological diseases like Parkinson's disease, schizophrenia, velocardiofacial syndrome, etc. A 19-mer deoxyoligonucleotide sequence 5'-AGGACAAGGTGTGCATGCC-3' (COMT19) is located at exon 4 on chromosome 22, band q11.2, at the intersection of the COMT and MIR4761 genes. Bioinformatics studies suggest that this sequence is conserved in humans and a few other organisms and is involved in the recognition of transcription factors in the vicinity of the 3'-end. Non-denaturing gel electrophoresis and CD spectroscopy of COMT sequences indicate the formation of hairpin-type DNA structures. Temperature-dependent CD studies revealed an unusual shift in the slipped DNA-hairpin DNA equilibrium with the change in temperature. Also, UV thermal melting techniques suggest that the two base substitutions on the complementary strand of COMT19 did not affect the structure but reduced the stability of the duplex. This study gives insight into the possibility of structurally polymorphic transient states existing within DNA segments present at the intersection of the COMT and MIR4761 genes. Keywords: base-substitution, catechol-o-methyltransferase (COMT), hairpin-DNA, structural polymorphism
Procedia PDF Downloads 119
25472 Migration Law in Republic of Panama
Authors: Ronel Solis, Leonardo Collado
Abstract:
Migration law in the Republic of Panama has been regulated mainly by the executive branch. This has created not only an institutional but also a social crisis, because the evolution of these norms has depended greatly on the discretion of the government in office. This has created instability in immigration regulation, even more so now with the migration crisis of which Panama is also a part. Different migration policies have been established. The most recent is that of the controlled migration flow, in which, for humanitarian reasons, migrants move from the border with Colombia to the border with Costa Rica. Unfortunately, such control is not enough, and in some cases, unprotected migrants have been confined for months, their passports have been withheld, and no recognition of their rights is offered. The Inter-American Court of Human Rights has condemned Panama for the unfair detention of an irregular migrant, who was detained for two years in Panamanian prisons without having committed a crime and without access to a fair defense. This is the case of Vélez Loor vs. the Republic of Panama. Uncontrollable migration has been putting pressure on Panamanian public health services. The recent warning by HIV-related NGOs that hundreds of foreigners, several of them irregular migrants, receive expensive antiretroviral therapy in Panama is a serious matter. On the other hand, there are no border control posts with the Republic of Colombia, because it is a jungle area, and migrants are exposed to arms and drug trafficking and, unfortunately, also to prostitution. Government entities such as the border police service have provided humanitarian support to migrants on the border with Colombia, although it is not their administrative function, and various entities debate who should address this crisis. However, few economic resources are allocated by the government to solve this problem, especially with the recent mass migration of Venezuelans who have fled their country. The establishment of a normative migration code is necessary to establish uniformity in the recognition and application of migratory rights. In this way, dependence on the changing migration policies of the different Panamanian governments would be eliminated, and the rights of migrants and nationals would be guaranteed. Keywords: executive branch, irregular migration, migration code, Republic of Panama
Procedia PDF Downloads 121
25471 Resource Framework Descriptors for Interestingness in Data
Authors: C. B. Abhilash, Kavi Mahesh
Abstract:
Human beings are the most advanced species on earth, largely because of the ability to communicate and share information via human language. In today's world, a huge amount of data is available on the web in text format. This has also resulted in the generation of big data in structured and unstructured formats. In general, the data is in textual form, which is highly unstructured. To get insights and actionable content from this data, we need to incorporate the concepts of text mining and natural language processing. In our study, we mainly focus on interesting data, through which interesting facts are generated for the knowledge base. The approach is to derive the analytics from the text via the application of natural language processing. Using the semantic web Resource Description Framework (RDF), we generate triples from the given data and derive interesting patterns. The methodology also illustrates data integration using RDF for reliable, interesting patterns. Keywords: RDF, interestingness, knowledge base, semantic data
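Triple generation of the kind described here can be sketched briefly. The snippet below is illustrative only: the namespace, facts, and query are assumptions rather than the paper's data, and rdflib stands in for whichever RDF tooling the authors used.

```python
# A minimal sketch: store a text-derived fact as an RDF triple and query it back.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/interesting/")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Each fact extracted from text becomes a (subject, predicate, object) triple.
g.add((EX["Tokyo"], EX["isCapitalOf"], EX["Japan"]))
g.add((EX["Tokyo"], EX["hasLabel"], Literal("Tokyo")))

# Interesting patterns can then be retrieved with SPARQL over the triple store.
q = """SELECT ?city WHERE { ?city <http://example.org/interesting/isCapitalOf> ?country . }"""
for row in g.query(q):
    print(row.city)

print(g.serialize(format="turtle"))                 # human-readable dump of the knowledge base
```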
Procedia PDF Downloads 162
25470 Data Mining Practices: Practical Studies on the Telecommunication Companies in Jordan
Authors: Dina Ahmad Alkhodary
Abstract:
This study aimed to investigate the practices of data mining in the telecommunication companies in Jordan from the viewpoint of the respondents. In order to achieve the goal of the study and test the validity of the hypotheses, the researcher designed a questionnaire to collect data from managers and staff members in the main departments of the researched companies. The results show the improvement stages of the telecommunications companies toward data mining. Keywords: data, mining, development, business
Procedia PDF Downloads 494
25469 A Biologically Inspired Approach to Automatic Classification of Textile Fabric Prints Based On Both Texture and Colour Information
Authors: Babar Khan, Wang Zhijie
Abstract:
Machine vision has been playing a significant role in industrial automation, imitating a wide variety of human functions, providing improved safety, reduced labour cost, the elimination of human error and/or subjective judgments, and the creation of timely statistical product data. Despite the intensive research, there have not been any attempts to classify fabric prints based on printed texture and colour; most of the research so far encompasses only black-and-white or greyscale images. We propose a biologically inspired processing architecture to classify fabrics with respect to the fabric print texture and colour. We created a texture descriptor based on the HMAX model for machine vision and incorporated a colour descriptor based on opponent colour channels, simulating the single-opponent and double-opponent neuronal functions of the brain. We found that our algorithm not only outperformed the original HMAX algorithm on classification of fabric print texture and colour but also achieved a recognition accuracy of 85-100% on fabrics of different colours and textures. Keywords: automatic classification, texture descriptor, colour descriptor, opponent colour channel
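The opponent-colour part of the descriptor can be illustrated with a minimal sketch; the snippet below assumes a standard single-opponent formulation (red-green, blue-yellow, intensity), which may differ in detail from the authors' channels.

```python
# A minimal sketch of single-opponent colour channels computed from an RGB patch.
import numpy as np

def single_opponent_channels(rgb):
    """rgb: float array of shape (H, W, 3) scaled to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g                     # red-green opponency
    by = b - (r + g) / 2.0         # blue-yellow opponency
    intensity = (r + g + b) / 3.0  # luminance channel
    return np.stack([rg, by, intensity], axis=-1)

img = np.random.rand(64, 64, 3)    # stand-in for a fabric print patch
opp = single_opponent_channels(img)
print(opp.shape)                   # (64, 64, 3)
```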
Procedia PDF Downloads 482
25468 The Impact of System and Data Quality on Organizational Success in the Kingdom of Bahrain
Authors: Amal M. Alrayes
Abstract:
Data and system quality play a central role in organizational success, and the quality of any existing information system has a major influence on the effectiveness of overall system performance. Given the importance of system and data quality to an organization, it is relevant to highlight their importance for organizational performance in the Kingdom of Bahrain. This research aims to discover whether system quality and data quality are related and to study the impact of system and data quality on organizational success. A theoretical model based on previous research is used to show the relationship between data and system quality and organizational impact. We hypothesize, first, that system quality is positively associated with organizational impact; second, that system quality is positively associated with data quality; and finally, that data quality is positively associated with organizational impact. A questionnaire was conducted among public and private organizations in the Kingdom of Bahrain. The results show that there is a strong association between data and system quality that affects organizational success. Keywords: data quality, performance, system quality, Kingdom of Bahrain
Procedia PDF Downloads 492
25467 Cloud Computing in Data Mining: A Technical Survey
Authors: Ghaemi Reza, Abdollahi Hamid, Dashti Elham
Abstract:
Cloud computing poses a diversity of challenges for data mining operations, arising from the dynamic structure of data distribution as opposed to the typical database scenarios of conventional architectures. Due to the immense number of users seeking data on a daily basis, there are serious security concerns for cloud providers as well as for data providers who put their data in the cloud computing environment. Big data analytics uses compute-intensive data mining algorithms (Hidden Markov models, MapReduce parallel programming, the Mahout project, the Hadoop Distributed File System, K-Means and K-Medoids, Apriori) that require efficient high-performance processors to produce timely results, and data mining algorithms are used to solve or optimize the model parameters. The challenges the operation has to encounter are establishing successful transactions with the existing virtual machine environment and keeping the databases under control. Several factors have led to the shift from normal or centralized mining to distributed data mining. The approach is offered as SaaS, which uses multi-agent systems for implementing the different tasks of the system. There are still some problems with data mining based on cloud computing, including the design and selection of data mining algorithms. Keywords: cloud computing, data mining, computing models, cloud services
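Among the algorithms listed in this survey, K-Means maps naturally onto the MapReduce pattern it mentions. The sketch below is a single-process illustration under that assumption; it is not tied to Hadoop or Mahout.

```python
# A minimal sketch of one K-Means iteration expressed as map and reduce phases.
import numpy as np

def map_phase(points, centroids):
    # map: emit (nearest_centroid_index, point) pairs
    for p in points:
        key = int(np.argmin(np.linalg.norm(centroids - p, axis=1)))
        yield key, p

def reduce_phase(pairs, k, dim):
    # reduce: average the points assigned to each centroid
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for key, p in pairs:
        sums[key] += p
        counts[key] += 1
    return sums / np.maximum(counts, 1)[:, None]

points = np.random.rand(200, 2)
centroids = points[:3].copy()
for _ in range(10):                 # iterate the map/reduce rounds until (approximate) convergence
    centroids = reduce_phase(map_phase(points, centroids), k=3, dim=2)
print(centroids)
```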
Procedia PDF Downloads 479
25466 Cross-border Data Transfers to and from South Africa
Authors: Amy Gooden, Meshandren Naidoo
Abstract:
Genetic research and transfers of big data are not confined to a particular jurisdiction, but there is a lack of clarity regarding the legal requirements for importing and exporting such data. Using direct-to-consumer genetic testing (DTC-GT) as an example, this research assesses the status of data sharing into and out of South Africa (SA). While SA laws cover the sending of genetic data out of SA, prohibiting such transfer unless a legal ground exists, the position where genetic data comes into the country depends on the laws of the country from where it is sent – making the legal position less clear. Keywords: cross-border, data, genetic testing, law, regulation, research, sharing, South Africa
Procedia PDF Downloads 124
25465 The Study of Security Techniques on Information System for Decision Making
Authors: Tejinder Singh
Abstract:
An information system (IS) involves the flow of data across different levels and in different directions for decision making and data operations. Data can be compromised in different ways, such as manual or technical errors, data tampering, or loss of integrity. The security system of an IS, such as a firewall, is affected by such violations. The flow of data among the various levels of an information system is handled by the networking system, and data flows over the network in the form of packets or frames. To protect these packets from unauthorized access and virus attacks, and to maintain the integrity level, network security is an important factor. To protect the data from being pirated, various security techniques are used. This paper presents the various security techniques and describes different harmful attacks with the help of detailed data analysis. This paper will be beneficial for organizations in making their systems more secure, effective, and useful for future decision making. Keywords: information systems, data integrity, TCP/IP network, vulnerability, decision, data
Procedia PDF Downloads 306
25464 Data Integration with Geographic Information System Tools for Rural Environmental Monitoring
Authors: Tamas Jancso, Andrea Podor, Eva Nagyne Hajnal, Peter Udvardy, Gabor Nagy, Attila Varga, Meng Qingyan
Abstract:
The paper deals with the conditions and circumstances of integrating remotely sensed data for rural environmental monitoring purposes. The main task is to make decisions during the integration process when we have data sources with different resolutions, locations, spectral channels, and dimensions. In order to have exact knowledge about the integration and data fusion possibilities, it is necessary to know the properties (metadata) that characterize the data. The paper explains the joining of these data sources using their attribute data through a sample project. The resulting product will be used for rural environmental analysis. Keywords: remote sensing, GIS, metadata, integration, environmental analysis
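An attribute-based join of the kind described can be sketched in a few lines; the layer, field names, and values below are illustrative assumptions rather than data from the project.

```python
# A minimal sketch: join a vector layer with a tabular data source through a shared attribute key.
import geopandas as gpd
import pandas as pd
from shapely.geometry import Point

parcels = gpd.GeoDataFrame(
    {"parcel_id": [1, 2], "geometry": [Point(18.4, 47.2), Point(18.5, 47.3)]},  # hypothetical layer
    crs="EPSG:4326",
)
observations = pd.DataFrame({"parcel_id": [1, 2], "soil_moisture": [0.21, 0.34]})  # hypothetical table

# Attribute join: each parcel geometry inherits the monitoring indicator keyed by parcel_id.
joined = parcels.merge(observations, on="parcel_id", how="left")
print(joined)   # a GeoDataFrame ready for further GIS analysis or export
```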
Procedia PDF Downloads 118
25463 Emotion Oriented Students' Opinioned Topic Detection for Course Reviews in Massive Open Online Course
Authors: Zhi Liu, Xian Peng, Monika Domanska, Lingyun Kang, Sannyuya Liu
Abstract:
Massive open education has become increasingly popular among learners worldwide. An increasing number of course reviews are being generated on Massive Open Online Course (MOOC) platforms, which offer an interactive feedback channel for learners to express opinions and feelings about their learning. These reviews typically contain subjective emotion and topic information about the courses. However, it is time-consuming to detect these opinions manually. In this paper, we propose an emotion-oriented topic detection model to automatically detect the students’ opinioned aspects in course reviews. The known overall emotion orientation and the emotional words in each review are used to guide the joint probabilistic modeling of emotion and aspects in reviews. Through experiments on real-life review data, it is verified that the course-emotion-aspect distribution can be calculated to capture the most significant opinioned topics in each course unit. This proposed technique helps in conducting intelligent learning analytics for teachers to improve pedagogies and for developers to improve user experiences. Keywords: Massive Open Online Course (MOOC), course reviews, topic model, emotion recognition, topical aspects
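A much simpler stand-in can convey the idea of emotion-guided aspect detection: the sketch below groups reviews by a known overall emotion orientation and runs plain LDA within each group. This is an assumption-level simplification, not the paper's joint probabilistic model, and the reviews are invented.

```python
# A simplified stand-in: per-emotion LDA instead of a joint emotion-aspect model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [("The lectures were clear and well organized", "positive"),
           ("Quizzes were confusing and graded unfairly", "negative"),
           ("Loved the programming assignments and forum support", "positive"),
           ("The video quality was poor and audio kept cutting out", "negative")]

for emotion in ("positive", "negative"):
    docs = [text for text, label in reviews if label == emotion]
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    vocab = vec.get_feature_names_out()
    for k, comp in enumerate(lda.components_):
        top = [vocab[i] for i in comp.argsort()[-3:][::-1]]   # top aspect words per topic
        print(emotion, "topic", k, top)
```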
Procedia PDF Downloads 261
25462 Analysis of Genomics Big Data in Cloud Computing Using Fuzzy Logic
Authors: Mohammad Vahed, Ana Sadeghitohidi, Majid Vahed, Hiroki Takahashi
Abstract:
In the genomics field, huge amounts of data have been produced by next-generation sequencers (NGS). Data volumes are growing very rapidly; it has been postulated that more than one billion bases will be produced per year in 2020. The growth rate of the produced data is much faster than Moore's law in computer technology. This makes it more difficult to deal with genomics data, for tasks such as storing data, searching for information, and finding hidden information. It is therefore necessary to develop an analysis platform for genomics big data. Newly developed cloud computing enables us to deal with big data more efficiently. Hadoop is one of the distributed computing frameworks and relies upon the core of Big Data as a Service (BDaaS). Although many services have adopted this technology, e.g., Amazon, there are few applications in the biology field. Here, we propose a new algorithm to deal more efficiently with genomics big data, e.g., sequencing data. Our algorithm consists of two parts: first, BDaaS is applied to handle the data more efficiently; second, a hybrid method of MapReduce and fuzzy logic is applied for data processing. This step can be parallelized in implementation. Our algorithm has great potential for the computational analysis of genomics big data, e.g., de novo genome assembly and sequence similarity search. We will discuss our algorithm and its feasibility. Keywords: big data, fuzzy logic, MapReduce, Hadoop, cloud computing
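The hybrid MapReduce-plus-fuzzy-logic idea can be pictured with a small single-process sketch. Everything below is an assumption for illustration: the membership function, the thresholds, and the similarity scores are not from the paper, and the map/reduce functions are plain Python rather than Hadoop jobs.

```python
# A minimal sketch: a map step scores reads with fuzzy memberships, a reduce step aggregates them.
from collections import defaultdict

def membership_high(score):
    # Ramp-style fuzzy membership for "high similarity" over alignment scores in [0, 1].
    return max(0.0, min(1.0, (score - 0.5) / 0.4))

def map_phase(read_scores):
    # map: emit (reference_id, fuzzy membership of the alignment score)
    for ref_id, score in read_scores:
        yield ref_id, membership_high(score)

def reduce_phase(pairs):
    # reduce: aggregate memberships per reference to rank candidate matches
    totals = defaultdict(float)
    for ref_id, mu in pairs:
        totals[ref_id] += mu
    return dict(totals)

scores = [("refA", 0.92), ("refA", 0.55), ("refB", 0.40), ("refB", 0.88)]  # synthetic scores
print(reduce_phase(map_phase(scores)))   # e.g., {'refA': 1.125, 'refB': 0.95}
```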
Procedia PDF Downloads 299
25461 Forthcoming Big Data on Smart Buildings and Cities: An Experimental Study on Correlations among Urban Data
Authors: Yu-Mi Song, Sung-Ah Kim, Dongyoun Shin
Abstract:
Cities are complex systems of diverse and intertwined activities. These activities and their complex interrelationships create diverse urban phenomena, and such urban phenomena have considerable influence on the lives of citizens. This research aimed to develop a method to reveal the causes and effects among diverse urban elements in order to enable a better understanding of urban activities and, from that, to make better urban planning strategies. Specifically, this study was conducted to solve a data-recommendation problem found on a Korean public data homepage. First, a correlation analysis was conducted to find the correlations among random urban data. Then, based on the results of that correlation analysis, a weighted data network of each urban dataset was provided to people. It is expected that the weights of urban data thereby obtained will provide us with insights into cities and show us how diverse urban activities influence each other and induce feedback. Keywords: big data, machine learning, ontology model, urban data model
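The correlation-to-weighted-network step can be sketched briefly; the indicator names and values below are synthetic assumptions, not the Korean public datasets used in the study.

```python
# A minimal sketch: turn pairwise correlations among urban indicators into a weighted network.
import numpy as np
import pandas as pd
import networkx as nx

rng = np.random.default_rng(0)
df = pd.DataFrame({                      # stand-in for public urban datasets
    "bus_ridership": rng.normal(size=100),
    "air_quality": rng.normal(size=100),
    "energy_use": rng.normal(size=100),
})

corr = df.corr()                         # pairwise Pearson correlations
G = nx.Graph()
for a in corr.columns:
    for b in corr.columns:
        if a < b:                        # one edge per pair; a threshold could filter weak links
            G.add_edge(a, b, weight=float(corr.loc[a, b]))

print(G.edges(data=True))                # each edge weight is the correlation strength
```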
Procedia PDF Downloads 416
25460 Stable Diffusion, Context-to-Motion Model to Augmenting Dexterity of Prosthetic Limbs
Authors: André Augusto Ceballos Melo
Abstract:
The design facilitates the recognition of congruent prosthetic movements through context-to-motion translations guided by images, verbal prompts, users' nonverbal communication (such as facial expressions, gestures, and paralinguistics), scene context, and object recognition; it can also be applied to other tasks, such as walking. Prosthetic limbs act as assistive technology through gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach that is designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can help to improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can help to improve the naturalness and smoothness of prosthetic limb movements, which can make them more comfortable and easier to use for the user. Second, it can help to improve the accuracy and precision of prosthetic limb movements, which can be particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback. This means that it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary to first collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time: the model receives sensory input from the system and uses it to generate control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. Hand gestures and body language influence communication and social interaction, offering a possibility for users to maximize their quality of life, social interaction, and gesture communication. Keywords: stable diffusion, neural interface, smart prosthetic, augmenting
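The training loop described in this abstract (collect sensory observations paired with desired movements, then learn to predict control inputs) can be shown with a deliberately simplified sketch. A plain supervised regressor stands in for the context-to-motion/stable-diffusion model here, and all data are synthetic assumptions.

```python
# A minimal sketch: learn a mapping from sensory observations to actuator control signals.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
sensors = rng.normal(size=(500, 8))                     # e.g., EMG/IMU readings plus encoded scene context
controls = np.tanh(sensors @ rng.normal(size=(8, 3)))   # synthetic target commands for 3 actuators

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(sensors[:400], controls[:400])                # train on the collected examples

# At run time, each new sensory observation is mapped to an actuator command.
new_obs = sensors[400:401]
print(model.predict(new_obs))                           # 3 predicted control values
```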
Procedia PDF Downloads 99
25459 Villar Settlement Farm School for the Aetas: Assimilation through American Colonial Education in Zambales, Philippines
Authors: Julian E. Abuso, Alberto T. Paala Jr.
Abstract:
The creation of settlement farm schools at the outset of the American colonization of the Philippines was not a matter of accident; rather, their establishment was a major component of a grand plan for public education based on the benevolent assimilation policy of the United States. This argument is illustrated by the case of the Villar Settlement Farm School, a school established in 1907 for the Aetas as a non-Christian tribal community. The study aims to: (1) identify and describe the antecedents of the establishment of the Settlement Farm School, (2) explicate the cultural conflicts encountered by Aetas in school, and (3) appraise the consequences of education as acculturation among the Aeta population. The study made use of the following: historical data based on primary and secondary sources and life histories from primary informants. The Settlement Farm School for the Aetas was born out of the Americans' change in policy from military to civilian authority and the recognition of education as a tool for benevolent assimilation. The narratives of informants manifested resistance to certain aspects of the educational process. Keywords: settlement farm school Aetas, tribe, colonial education, Aeta, non-Christian tribal community
Procedia PDF Downloads 318
25458 Data-driven Decision-Making in Digital Entrepreneurship
Authors: Abeba Nigussie Turi, Xiangming Samuel Li
Abstract:
Data-driven business models are more typical for established businesses than for early-stage startups that strive to penetrate a market. This paper provides an extensive discussion of the principles of data analytics for early-stage digital entrepreneurial businesses. Here, we developed a data-driven decision-making (DDDM) framework that applies to startups prone to multifaceted barriers in the form of poor data access and technical and financial constraints, to name a few. The startup DDDM framework proposed in this paper is novel in its form, encompassing startup data analytics enablers and metrics aligned with startups' business models, ranging from customer-centric product development to servitization, which is the future of modern digital entrepreneurship. Keywords: startup data analytics, data-driven decision-making, data acquisition, data generation, digital entrepreneurship
Procedia PDF Downloads 326
25457 Factors Influencing the Development and Implementation of Radiology Technologist Specialist Role in Image Interpretation in Sudan
Authors: Awad Elkhadir, Rajab M. Ben Yousef
Abstract:
Introduction: The production of high-quality medical images by radiology technologists is useful in diagnosing and treating various injuries and diseases. However, the factors affecting the role of radiology technologists in image interpretation in Sudan have not been widely investigated. Methods: A cross-sectional study was employed, recruiting ten radiology college deans in Sudan. The questionnaire was distributed online, and the obtained data were analyzed using Microsoft Excel and IBM SPSS version 16.0 to generate descriptive statistics. Results: The study results show that half of the deans were doubtful about the readiness of Sudan to implement the role of radiology technologist specialist in image interpretation. The majority of them (60%) believed that this issue had been most strongly pushed by researchers over the past decade. The factors affecting the implementation of the radiology technologist specialist role in image interpretation included education/training (100%), recognition (30%), technical issues (30%), people-related issues (20%), management changes (30%), government role (30%), costs (10%), and timings (20%). Conclusion: The study concluded that there is a need for a change in image interpretation by radiology technologists in Sudan. Keywords: development, image interpretation, implementation, radiology technologist specialist, Sudan
Procedia PDF Downloads 87