Search results for: data envelopment analysis (DEA)
41824 Simulations to Predict Solar Energy Potential by ERA5 Application at North Africa
Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees
Abstract:
The design of any solar energy conversion system requires knowledge of solar radiation data obtained over a long period. Satellite data have been widely used to estimate solar energy where no ground observation of solar radiation is available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a “retrospective analysis” of atmospheric parameters, generated by assimilating observational data from various sources, including ground observations, satellites, ships, and aircraft, with the output of NWP (Numerical Weather Prediction) models to develop an exhaustive record of weather and climate parameters. The performance of the ERA-5 reanalysis dataset for North Africa was evaluated against high-quality surface-measured data using statistical analysis. The distribution of global solar radiation (GSR) was estimated over six selected locations in North Africa during the ten-year period from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE), and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in the performance of the dataset, which reveals significant variation of errors across seasons. The performance of the dataset also changes with the temporal resolution of the data used for comparison: monthly mean values show better performance, but the accuracy of the data is compromised. The solar radiation data of ERA-5 are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R²) varies from 0.93 to 0.99 for the different selected sites in North Africa in the present research. The goal of this research is to give a good representation of global solar radiation to support solar energy applications in all fields, using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model that gives good results.
Keywords: solar energy, solar radiation, ERA-5, potential energy
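The validation metrics named above are straightforward to compute; the sketch below, with hypothetical normalized daily GSR values, illustrates how RMSE, MBE, MAE, and R² would be derived for one site.

```python
import numpy as np

def validation_metrics(measured, estimated):
    """RMSE, MBE, MAE, and R² between ground measurements and reanalysis estimates."""
    diff = estimated - measured
    rmse = np.sqrt(np.mean(diff ** 2))   # overall error magnitude
    mbe = np.mean(diff)                  # systematic over-/under-estimation
    mae = np.mean(np.abs(diff))          # average absolute deviation
    r2 = np.corrcoef(measured, estimated)[0, 1] ** 2  # coefficient of determination
    return rmse, mbe, mae, r2

# Hypothetical normalized daily GSR series for one site
measured = np.array([0.61, 0.72, 0.55, 0.80, 0.66])
estimated = np.array([0.65, 0.70, 0.60, 0.78, 0.70])
print(validation_metrics(measured, estimated))
```

The sign of the MBE indicates whether the reanalysis systematically over- or under-estimates the ground measurements.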
Procedia PDF Downloads 213
41823 Language Errors Used in “The Space between Us” Movie and Their Effects on Translation Quality: Translation Study toward Discourse Analysis Approach
Authors: Mochamad Nuruz Zaman, Mangatur Rudolf Nababan, M. A. Djatmika
Abstract:
Both society and education teach good communication in order to build up interpersonal skills. Everyone has the capacity to understand something new, whether with good comprehension or poor understanding. Poor understanding produces language errors when people interact at a first meeting without knowing each other beforehand because of distance. The movie “The Space between Us” delivers a love-adventure story between a Mars boy and an Earth girl. There are many misunderstood conversations because of their different climates and environments. Moviegoers must also focus on the subtitles in order to fully enjoy the movie. Furthermore, the Indonesian subtitles and the English dialogue in the movie still show overlapping understanding in the translation. Translation here consists of a source language, SL (the English dialogue), and a target language, TL (the Indonesian subtitles). This research gap is formulated in the research questions of how language errors occur in the movie and how they affect translation quality, analyzed in depth through a translation study with a discourse analysis approach. The research goal is to elaborate the language errors and their translation quality in order to create a good atmosphere in movie media. The research follows an embedded, qualitative design. The research locus consists of setting, participants, and events as a focused, determined boundary. The data sources are the movie “The Space between Us” and informants (translation quality raters). The sampling is criterion-based (purposive) sampling. Data collection techniques use content analysis and questionnaires. Data validation applies data source and method triangulation. Data analysis delivers domain, taxonomy, componential, and cultural theme analysis. The findings show that the language errors in the movie are referential, register, society, textual, receptive, expressive, individual, group, analogical, transfer, local, and global errors. The discussion of their effects on translation quality concentrates on the translation techniques found in the data: amplification, borrowing, description, discursive creation, established equivalent, generalization, literal translation, modulation, particularization, reduction, substitution, and transposition.
Keywords: discourse analysis, language errors, The Space between Us movie, translation techniques, translation quality instruments
Procedia PDF Downloads 219
41822 Text Analysis to Support Structuring and Modelling a Public Policy Problem - Outline of an Algorithm to Extract Inferences from Textual Data
Authors: Claudia Ehrentraut, Osama Ibrahim, Hercules Dalianis
Abstract:
Policy-making situations are real-world problems that exhibit complexity in that they are composed of many interrelated problems and issues. To be effective, policies must holistically address the complexity of the situation rather than propose solutions to single problems. Formulating and understanding the situation and its complex dynamics, therefore, is key to finding holistic solutions. Analysis of text-based information on the policy problem, using Natural Language Processing (NLP) and text analysis techniques, can support modelling of public policy problem situations in a more objective way, based on domain experts' knowledge and scientific evidence. The objective of this study is to support modelling of public policy problem situations using text analysis of verbal descriptions of the problem. We propose a formal methodology for analysis of qualitative data from multiple information sources on a policy problem to construct a causal diagram of the problem. The analysis process aims at identifying key variables, linking them by cause-effect relationships, and mapping that structure into a graphical representation that is adequate for designing action alternatives, i.e., policy options. This study describes the outline of an algorithm used to automate the initial step of a larger methodological approach, which has so far been done manually. In this initial step, inferences about key variables and their interrelationships are extracted from textual data to support better problem structuring. A small prototype for this step is also presented.
Keywords: public policy, problem structuring, qualitative analysis, natural language processing, algorithm, inference extraction
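The core of such an inference-extraction step can be illustrated with a deliberately simplified, pattern-based sketch; the patterns and function below are illustrative assumptions, not the authors' algorithm.

```python
import re

# Simple lexical patterns that often signal causal relations in English text
CAUSAL_PATTERNS = [
    r"(?P<cause>[\w\s]+?)\s+(?:leads to|results in|causes|increases)\s+(?P<effect>[\w\s]+)",
    r"(?P<effect>[\w\s]+?)\s+(?:is caused by|results from)\s+(?P<cause>[\w\s]+)",
]

def extract_inferences(sentences):
    """Return (cause, effect) candidate pairs for a causal diagram."""
    pairs = []
    for s in sentences:
        for pat in CAUSAL_PATTERNS:
            m = re.search(pat, s, flags=re.IGNORECASE)
            if m:
                pairs.append((m.group("cause").strip(), m.group("effect").strip()))
    return pairs

text = ["Rising unemployment leads to lower tax revenue.",
        "Traffic congestion is caused by poor public transport."]
print(extract_inferences(text))
```

A real pipeline would add NLP preprocessing (parsing, coreference resolution, variable normalization) before the extracted pairs are assembled into a causal diagram.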
Procedia PDF Downloads 590
41821 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-
Authors: Nieto Bernal Wilson, Carmona Suarez Edgar
Abstract:
Organizations have structured and unstructured information in different formats, sources, and systems. Part of this information comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level these organizations present some deficiencies. Part of the problem lies in the lack of interest in extracting knowledge from their data sources, as well as the absence of operational capabilities to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools of great interest to business intelligence, since they are base repositories for creating models or patterns (behavior of customers, suppliers, products, social networks, and genomics) and facilitate corporate decision-making and research. This paper presents a simple, structured methodology inspired by agile development models such as Scrum, XP, and AUP, together with object-relational models, spatial data models, and a baseline of data modeling under UML and big data; in this way, it seeks to deliver an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization, and data mining, particularly for generating patterns and deriving models from the structured fact objects.
Keywords: data warehouse, model data, big data, object fact, object relational fact, process developed data warehouse
Procedia PDF Downloads 412
41820 Government Big Data Ecosystem: A Systematic Literature Review
Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis
Abstract:
Data that is high in volume, velocity, and veracity and comes from a variety of sources is generated in all sectors, including the government sector. Globally, public administrations are pursuing (big) data as a new technology and trying to adopt a data-centric architecture for hosting and sharing data. Properly executed, big data and data analytics in the government (big) data ecosystem can lead to data-driven government and have a direct impact on the way policymakers work and citizens interact with governments. In this research paper, we conduct a systematic literature review. The main aims of this paper are to highlight essential aspects of the government (big) data ecosystem and to explore the most critical socio-technical factors that contribute to its successful implementation. The essential aspects of the government (big) data ecosystem include its definition, data types, data lifecycle models, and actors and their roles. We also discuss the potential impact of (big) data in public administration and gaps in the government data ecosystems literature. As this is a new topic, we did not find specific articles on the government (big) data ecosystem and therefore focused our research on various relevant areas such as humanitarian data, open government data, scientific research data, and industry data.
Keywords: applications of big data, big data, big data types, big data ecosystem, critical success factors, data-driven government, e-government, gaps in data ecosystems, government (big) data, literature review, public administration, systematic review
Procedia PDF Downloads 231
41819 A Machine Learning Decision Support Framework for Industrial Engineering Purposes
Authors: Anli Du Preez, James Bekker
Abstract:
Data is currently one of the most critical and influential emerging technologies. However, the true potential of data is yet to be exploited: currently, only about 1% of generated data is ever actually analyzed for value creation. There is a data gap where data is not explored due to the lack of data analytics infrastructure and of the required data analytics skills. This study developed a decision support framework for data analytics by following Jabareen's framework development methodology. The study focused on machine learning algorithms, a subset of data analytics. The developed framework is designed to assist data analysts with little experience in choosing the appropriate machine learning algorithm given the purpose of their application.
Keywords: data analytics, industrial engineering, machine learning, value creation
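To make the idea concrete, a decision aid of this kind can be reduced to an explicit purpose-to-algorithm mapping. The sketch below is a common-sense illustration of that structure, not the framework developed in the study.

```python
def suggest_algorithm(purpose, labelled=True, n_features_high=False):
    """Very coarse mapping from analysis purpose to a candidate algorithm family."""
    if purpose == "predict_category":
        if not labelled:
            return "clustering (e.g., k-means) to discover groups first"
        return "random forest" if n_features_high else "logistic regression"
    if purpose == "predict_quantity":
        return "linear regression / gradient boosting"
    if purpose == "detect_anomalies":
        return "isolation forest"
    if purpose == "reduce_dimensions":
        return "PCA"
    return "clarify the purpose of the application first"

print(suggest_algorithm("predict_category", labelled=True, n_features_high=True))
```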
Procedia PDF Downloads 168
41818 Traditional Chinese Medicine Treatment for Coronary Heart Disease: A Meta-Analysis
Abstract:
Traditional Chinese medicine has been used in the treatment of coronary heart disease (CHD) for centuries, and in recent years, clinical trial data on its efficacy have gradually increased, exploring its real efficacy and underlying pharmacology. However, due to the complexity of traditional Chinese medicine prescriptions, the efficacy of each component is difficult to clarify, and pharmacological research is challenging. This study aims to systematically review and clarify the clinical efficacy of traditional Chinese medicine in the treatment of coronary heart disease through a meta-analysis. Based on searches of PubMed, the CNKI database, Wanfang Data, and other databases, eleven randomized controlled trials with 1091 CHD subjects were included. Two researchers conducted a systematic review of the papers and carried out a meta-analysis supporting a positive therapeutic effect of traditional Chinese medicine in the treatment of CHD.
Keywords: coronary heart disease, Chinese medicine, treatment, meta-analysis
Procedia PDF Downloads 125
41817 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — in the Case of Critical Dataset Size —
Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno
Abstract:
STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induct if-then rules from a decision table, which is considered a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance and by comparison with conventional methods. However, scope for development remains before STRIM can be applied to the analysis of real-world datasets. The first requirement is to determine the size of the dataset needed for inducting true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity for rule induction from datasets with attribute values contaminated by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived in the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values and hence is applicable to real-world data.
Keywords: rule induction, decision table, missing data, noise
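The statistical core of this kind of rule induction is testing whether a candidate if-then rule fires its decision class significantly more often than the class's base rate. The sketch below uses hypothetical counts and a one-sided binomial test to show that idea; it is not STRIM's exact procedure.

```python
from scipy.stats import binomtest

def rule_significance(n_match, n_match_and_class, n_total, n_class, alpha=0.01):
    """Test whether 'if condition then class' holds more often than chance.

    n_match: rows satisfying the rule's condition part
    n_match_and_class: of those, rows with the rule's decision class
    n_total, n_class: table size and overall count of the decision class
    """
    base_rate = n_class / n_total          # expected hit rate under independence
    result = binomtest(n_match_and_class, n_match, base_rate, alternative="greater")
    return result.pvalue, result.pvalue < alpha

# Hypothetical decision table: 1000 rows, 250 of class d=1;
# a candidate rule's condition matches 120 rows, 90 of which are d=1.
pvalue, significant = rule_significance(120, 90, 1000, 250)
print(f"p = {pvalue:.3g}, significant: {significant}")
```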
Procedia PDF Downloads 396
41816 Multivariate Assessment of Mathematics Test Scores of Students in Qatar
Authors: Ali Rashash Alzahrani, Elizabeth Stojanovski
Abstract:
Data on various aspects of education are collected regularly at the institutional and government levels. In Australia, for example, students at various levels of schooling undertake examinations in numeracy and literacy as part of NAPLAN testing, enabling longitudinal assessment of such data as well as comparisons between schools and states within Australia. Another source of educational data collected internationally is the PISA study, which collects data from several countries when students are approximately 15 years of age and enables comparisons of performance in science, mathematics, and reading between countries, as well as ranking of countries based on performance in these standardised tests. As well as student and school outcomes based on the tests taken as part of the PISA study, a wealth of other data is collected, including parental demographic data and data related to teaching strategies used by educators. Overall, an abundance of educational data is available that has the potential to help improve educational attainment and the teaching of content in order to improve learning outcomes. A multivariate assessment of such data enables multiple variables to be considered simultaneously and will be used in the present study to help develop profiles of students based on performance in mathematics, using data obtained from the PISA study.
Keywords: cluster analysis, education, mathematics, profiles
Procedia PDF Downloads 127
41815 A Comparison of Image Data Representations for Local Stereo Matching
Authors: André Smith, Amr Abdel-Dayem
Abstract:
The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While at its core the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine the effectiveness of these data representations at reducing the cost for the correct correspondence relative to other possible matches.
Keywords: colour data, local stereo matching, stereo correspondence, disparity map
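A local method of this kind reduces to computing a window-based cost at each candidate disparity and taking the winner. The sketch below assumes greyscale input; the window size and disparity range are arbitrary, and the two costs shown (SAD and SSD) are classic examples rather than the paper's specific choices.

```python
import numpy as np

def window_cost(left, right, x, y, d, half=3, metric="sad"):
    """Matching cost between a window in the left image and the
    horizontally shifted window in the right image (disparity d)."""
    wl = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    wr = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(float)
    diff = wl - wr
    if metric == "sad":          # sum of absolute differences
        return np.abs(diff).sum()
    return (diff ** 2).sum()     # SSD: sum of squared differences

def best_disparity(left, right, x, y, max_d=16):
    """Winner-takes-all disparity for one pixel in a local method."""
    costs = [window_cost(left, right, x, y, d) for d in range(max_d)]
    return int(np.argmin(costs))

np.random.seed(0)
left = np.random.randint(0, 255, (64, 64))
right = np.roll(left, -5, axis=1)        # right view shifted by 5 pixels
print(best_disparity(left, right, x=40, y=32))   # recovers d = 5
```

The representational question studied in the paper amounts to what `left` and `right` contain: raw greyscale intensities, individual colour channels, or some transformed representation fed to the same cost.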
Procedia PDF Downloads 371
41814 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design
Authors: Qing K. Zhu
Abstract:
Very Large Scale Integration (VLSI) design becomes very complex due to the continuous integration of millions of gates in one chip, following Moore's law. Designers encounter numerous report files during design iterations with timing and noise analysis tools. This paper presents our work using data mining techniques combined with HTML tables to extract and represent critical timing and noise data. When we apply this data mining tool in real applications, the running speed is important. The software employs table look-up techniques in its implementation to achieve reasonable running speed, based on performance testing results. We added several advanced features for the application in one industrial chip design.
Keywords: VLSI design, data mining, big data, HTML forms, web, VLSI, EDA, timing, noise
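As a rough illustration of the extract-and-represent step, the sketch below parses failing timing paths from report text and renders them as an HTML table. The report line format and field names are invented for the example; real static timing reports differ by tool.

```python
import re

# Hypothetical report line format: "path_12  startpoint -> endpoint  slack=-0.125"
LINE = re.compile(r"(\S+)\s+(\S+) -> (\S+)\s+slack=(-?\d+\.\d+)")

def critical_paths_to_html(report_text, slack_limit=0.0):
    """Extract failing timing paths and render them as an HTML table."""
    rows = []
    for m in LINE.finditer(report_text):
        name, start, end, slack = m.group(1), m.group(2), m.group(3), float(m.group(4))
        if slack < slack_limit:                      # keep only violating paths
            rows.append((name, start, end, slack))
    rows.sort(key=lambda r: r[3])                    # worst slack first
    cells = "".join(
        f"<tr><td>{n}</td><td>{s}</td><td>{e}</td><td>{k:.3f}</td></tr>"
        for n, s, e, k in rows)
    return ("<table><tr><th>Path</th><th>Start</th><th>End</th>"
            "<th>Slack (ns)</th></tr>" + cells + "</table>")

report = "path_12  U1/Q -> U9/D  slack=-0.125\npath_07  U3/Q -> U4/D  slack=0.040"
print(critical_paths_to_html(report))
```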
Procedia PDF Downloads 254
41813 Relationship between Driving under the Influence and Traffic Safety
Authors: Eun Hak Lee, Young-Hyun Seo, Hosuk Shin, Seung-Young Kho
Abstract:
Among traffic crash causes, driving under the influence (DUI) of alcohol is the most dangerous behavior in Seoul, South Korea. In 2016 alone, 40 deaths occurred in 2,857 cases of DUI. Since DUI is one of the major factors increasing the severity of crashes, intensive management of DUI is required to reduce traffic crash deaths and crash damage. This study aims to investigate the relationship between DUI and traffic safety in order to establish countermeasures for traffic safety improvement. The analysis was conducted on habitual drivers who drove under the influence, whose information was matched to crash data and fine data. Descriptive statistics are given for the data used in this study, which consist of driver license acquisition, traffic fine, and crash data provided by the Korean National Police Agency. The drivers under the influence are classified by statistically significant criteria, such as driver's age, license type, driving experience, and crash reasons. Based on the results of the analysis, we propose countermeasures to enhance traffic safety.
Keywords: driving under influence, traffic safety, traffic crash, traffic fine
Procedia PDF Downloads 222
41812 Measuring Development through Extreme Observations: An Archetypal Analysis Approach to Index Construction
Authors: Claudeline D. Cellan
Abstract:
Development is multifaceted, and efforts to hasten growth in all these facets have been gaining traction in recent years. Thus, producing a composite index that reflects these multidimensional impacts captures the interest of policymakers. The problem lies in going through a mixture of theoretical, methodological, and empirical decisions and complexities which, when handled carelessly, can lead to inconsistent and unreliable results. This study looks into index computation from a different and less complex perspective. Borrowing the idea of archetypes or ‘pure types’, archetypal analysis looks for points in the convex hull of the multivariate data set that capture as much information in the data as possible. The archetypes or ‘pure types’ are estimated such that they are convex combinations of all the observations, which in turn are convex combinations of the archetypes. This ensures that the archetypes are realistically observable and therefore achievable. In the sense of composite indices, we look for the best among these archetypes and use it as a benchmark for index computation. This straightforward and simple approach does away with the aggregation and substitutability problems commonly encountered in index computation. As an example of the application of archetypal analysis in index construction, the country data for the Human Development Index (HDI 2017) of the United Nations Development Programme (UNDP) is used. The goal of this exercise is not to replicate the result of the UNDP-computed HDI, but to illustrate the usability of archetypal analysis in index construction. Here, ‘best’ is defined in the context of the life expectancy, education, and gross national income sub-indices. Results show that the HDI from the archetypal analysis has a linear relationship with the UNDP-computed HDI.
Keywords: archetypes, composite index, convex combination, development
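Once the archetypes themselves have been estimated (typically with an archetypal analysis routine), the benchmark idea above can be operationalized by expressing each observation as a convex combination of the archetypes and reading off the weight on the 'best' one. The sketch below assumes two already-estimated archetypes and hypothetical sub-index values; the penalty-row trick is a standard way to impose a soft sum-to-one constraint on non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

def convex_weights(archetypes, x, penalty=200.0):
    """Weights w >= 0 with sum(w) ~ 1, minimizing ||x - archetypes.T @ w||.
    The sum-to-one constraint is enforced softly via an appended penalty row."""
    A = np.vstack([archetypes.T, penalty * np.ones(len(archetypes))])
    b = np.append(x, penalty)
    w, _ = nnls(A, b)
    return w

# Hypothetical archetypes for (life, education, income) sub-indices,
# with the first row taken as the 'best' (benchmark) archetype
Z = np.array([[0.95, 0.92, 0.93],   # high-development pure type
              [0.45, 0.30, 0.35]])  # low-development pure type
country = np.array([0.80, 0.71, 0.74])
w = convex_weights(Z, country)
print("index (weight on best archetype):", round(w[0], 3))
```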
Procedia PDF Downloads 129
41811 Providing Security to Private Cloud Using Advanced Encryption Standard Algorithm
Authors: Annapureddy Srikant Reddy, Atthanti Mahendra, Samala Chinni Krishna, N. Neelima
Abstract:
In the present world, we generate a lot of data and need somewhere to store it all. Generally, we store data on pen drives, hard drives, etc., and sometimes we may lose the data due to device corruption. To overcome these issues, we implemented a cloud space for storing data, which provides more security for the data. The data can be accessed from anywhere in the world using just the internet. We implemented all of this in Java using the NetBeans IDE. Once a user uploads data, they have no rights to change it. Uploaded files are stored in the cloud with the system time as the file name, in a directory created with random words. The cloud accepts data only if the file size is less than 2 MB.
Keywords: cloud space, AES, FTP, NetBeans IDE
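The study implements AES in Java; for brevity, the sketch below shows the same encrypt-before-storing idea in Python with PyCryptodome, using AES in GCM mode so that tampering with a stored file is detected on download.

```python
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

def encrypt_file(data: bytes, key: bytes):
    """AES-256 in GCM mode: returns nonce, ciphertext, and an integrity tag."""
    cipher = AES.new(key, AES.MODE_GCM)
    ciphertext, tag = cipher.encrypt_and_digest(data)
    return cipher.nonce, ciphertext, tag

def decrypt_file(nonce, ciphertext, tag, key):
    cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)  # raises ValueError if tampered

key = get_random_bytes(32)                   # 256-bit key
nonce, ct, tag = encrypt_file(b"report contents", key)
assert decrypt_file(nonce, ct, tag, key) == b"report contents"
```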
Procedia PDF Downloads 206
41810 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy
Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu
Abstract:
The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on reliability analysis of onboard sensors to evaluate their location accuracy over time. The analysis utilizes field failure data and employs the Weibull distribution to determine reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, i.e., the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is utilized to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, which can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of sensors and to accurately ascertain the duration of the different phases in the lifetime and the time required for stabilization. This approach also helps in judging whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve-fitting analysis. Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability of the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor performance with regard to location accuracy, contributing to enhanced accuracy in satellite-based applications.
Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis
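A compressed sketch of the two statistical ingredients named above, applied to hypothetical failure times; the interpretation thresholds are the textbook ones, and this is not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical failure times (days) at which one sensor's location-accuracy
# error exceeded its specification
times = np.array([120.0, 310.0, 485.0, 700.0, 940.0, 1150.0, 1400.0])

# Laplace trend test: u < 0 suggests reliability growth (infant mortality fading),
# u > 0 suggests deterioration (wear-out), u ~ 0 a stable operational phase.
T = times.max()                      # end of the observation period
n = len(times)
u = (times.sum() / n - T / 2) / (T * np.sqrt(1 / (12 * n)))
print(f"Laplace statistic u = {u:.2f}")

# Weibull fit: shape beta < 1 indicates an infant stage, beta ~ 1 random
# failures, beta > 1 wear-out.
beta, loc, eta = weibull_min.fit(times, floc=0)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f}")
```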
Procedia PDF Downloads 65
41809 Using Open Source Data and GIS Techniques to Overcome Data Deficiency and Accuracy Issues in the Construction and Validation of Transportation Network: Case of Kinshasa City
Authors: Christian Kapuku, Seung-Young Kho
Abstract:
An accurate representation of the transportation system serving the region is one of the important aspects of transportation modeling. Such a representation often requires developing an abstract model of the system elements, which in turn requires a significant amount of data, surveys, and time. However, in some cases, such as in developing countries, data deficiencies and time and budget constraints do not always allow such an accurate representation, leaving room for assumptions that may negatively affect the quality of the analysis. With the emergence of open source data on the Internet, especially in mapping technologies, as well as advances in Geographic Information Systems, opportunities to tackle these issues have arisen. Therefore, the objective of this paper is to demonstrate such an application through a practical case: the development of the transportation network for the city of Kinshasa. GIS geo-referencing was used to construct a digitized map of Transportation Analysis Zones using available scanned images. Centroids were then dynamically placed at the centers of activity using an activity density map. Next, the road network with its characteristics was built using OpenStreetMap data and other official road inventory data by intersecting their layers and cleaning up unnecessary links such as residential streets. The accuracy of the final network was then checked by comparing it with satellite images from Google and Bing. For validation, the final network was exported into Emme3 to check for potential network coding issues. Results show high agreement between the built network and the satellite images, which can mostly be attributed to the use of open source data.
Keywords: geographic information system (GIS), network construction, transportation database, open source data
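For the OpenStreetMap step, a minimal sketch with the osmnx package of pulling the drivable network and pruning residential links; this is one possible toolchain, not necessarily the one used in the paper, and the `highway`-tag filter is simplified (the tag can also hold lists).

```python
import osmnx as ox

# Download the drivable road network for the study area from OpenStreetMap
G = ox.graph_from_place("Kinshasa, Democratic Republic of the Congo",
                        network_type="drive")

# Drop minor residential links, mirroring the cleanup step described above
minor = [(u, v, k) for u, v, k, d in G.edges(keys=True, data=True)
         if d.get("highway") == "residential"]
G.remove_edges_from(minor)

# Export for GIS post-processing and visual checks against satellite imagery
ox.save_graph_geopackage(G, filepath="kinshasa_network.gpkg")
```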
Procedia PDF Downloads 168
41808 Data Analysis to Uncover Terrorist Attacks Using Data Mining Techniques
Authors: Saima Nazir, Mustansar Ali Ghazanfar, Sanay Muhammad Umar Saeed, Muhammad Awais Azam, Saad Ali Alahmari
Abstract:
Terrorism is an important and challenging concern. The entire world is threatened by only a few sophisticated terrorist groups, and especially in the Gulf Region and Pakistan, terrorism has become an extremely destructive phenomenon in recent years. Predicting the pattern of attack type, attack group, and target type is an intricate task. This study offers new insight into terrorist groups' attack types and chosen targets. This research paper proposes a framework for the prediction of terrorist attacks using historical data, establishing an association between a terrorist group, its attack type, and its target. Analysis shows that the number of attacks per year will keep increasing, and Al-Harmayan in Saudi Arabia, Al-Qai'da in the Gulf Region, and Tehreek-e-Taliban in Pakistan will remain responsible for many future terrorist attacks. Under constant circumstances, the top targets of each group will be private citizens and property, police, government, and the military sector.
Keywords: data mining, counter terrorism, machine learning, SVM
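The SVM keyword suggests a supervised classification setup; a minimal sketch, with invented toy records rather than the study's dataset, of predicting a likely target type from categorical attack features:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import SVC

# Hypothetical records: (region, attack_type) -> target_type
X = [["Gulf", "bombing"], ["Pakistan", "armed_assault"],
     ["Gulf", "bombing"], ["Saudi Arabia", "hostage"],
     ["Pakistan", "bombing"], ["Gulf", "armed_assault"]]
y = ["private_property", "police", "government",
     "private_property", "police", "military"]

# One-hot encode the categorical features, then fit an SVM classifier
model = make_pipeline(OneHotEncoder(handle_unknown="ignore"), SVC(kernel="rbf"))
model.fit(X, y)
print(model.predict([["Gulf", "bombing"]]))
```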
Procedia PDF Downloads 410
41807 Value Chain Analysis and Enhancement Added Value in Palm Oil Supply Chain
Authors: Juliza Hidayati, Sawarni Hasibuan
Abstract:
PT. XYZ is a manufacturing company that produces Crude Palm Oil (CPO). Competition in global markets is fierce, not only between companies but also between supply chains. This research aims to analyze the supply chain and value chain of Crude Palm Oil (CPO) in the company. The data analysis methods used are qualitative and quantitative analysis. The qualitative analysis describes the supply chain and the value chain, while the quantitative analysis is used to determine the value added and the structure of the value chain. Based on the analysis, the CPO value chain in the company consists of four main actors: raw material suppliers, processing, distributors, and customers. The value chain analysis covers two actors: the palm oil plantation and the palm oil processing plant. The palm oil plantation activities include nurseries, planting, plant maintenance, harvesting, and shipping. The palm oil processing plant activities include reception, sterilizing, threshing, pressing, and oil classification. The value added of the palm oil plantation was 72.42%, and that of the palm oil processing plant was 10.13%.
Keywords: palm oil, value chain, value added, supply chain
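Value added percentages of this kind are typically computed as the share of output value remaining after intermediate input costs; a tiny sketch with hypothetical per-tonne figures, chosen only to land near the plantation's reported share:

```python
def value_added_ratio(output_value, intermediate_inputs):
    """Share of output value created by a chain actor:
    value added = output value - cost of intermediate inputs."""
    value_added = output_value - intermediate_inputs
    return value_added / output_value * 100

# Hypothetical per-tonne figures for the plantation actor
print(f"{value_added_ratio(output_value=500.0, intermediate_inputs=137.9):.2f}%")
```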
Procedia PDF Downloads 372
41806 Advances in Mathematical Sciences: Unveiling the Power of Data Analytics
Authors: Zahid Ullah, Atlas Khan
Abstract:
The rapid advancements in data collection, storage, and processing capabilities have led to an explosion of data in various domains. In this era of big data, mathematical sciences play a crucial role in uncovering valuable insights and driving informed decision-making through data analytics. The purpose of this abstract is to present the latest advances in mathematical sciences and their application in harnessing the power of data analytics. This abstract highlights the interdisciplinary nature of data analytics, showcasing how mathematics intersects with statistics, computer science, and other related fields to develop cutting-edge methodologies. It explores key mathematical techniques such as optimization, mathematical modeling, network analysis, and computational algorithms that underpin effective data analysis and interpretation. The abstract emphasizes the role of mathematical sciences in addressing real-world challenges across different sectors, including finance, healthcare, engineering, social sciences, and beyond. It showcases how mathematical models and statistical methods extract meaningful insights from complex datasets, facilitating evidence-based decision-making and driving innovation. Furthermore, the abstract emphasizes the importance of collaboration and knowledge exchange among researchers, practitioners, and industry professionals. It recognizes the value of interdisciplinary collaborations and the need to bridge the gap between academia and industry to ensure the practical application of mathematical advancements in data analytics. The abstract highlights the significance of ongoing research in mathematical sciences and its impact on data analytics. It emphasizes the need for continued exploration and innovation in mathematical methodologies to tackle emerging challenges in the era of big data and digital transformation. In summary, this abstract sheds light on the advances in mathematical sciences and their pivotal role in unveiling the power of data analytics. It calls for interdisciplinary collaboration, knowledge exchange, and ongoing research to further unlock the potential of mathematical methodologies in addressing complex problems and driving data-driven decision-making in various domains.
Keywords: mathematical sciences, data analytics, advances, unveiling
Procedia PDF Downloads 95
41805 Eye Tracking: Biometric Evaluations of Instructional Materials for Improved Learning
Authors: Janet Holland
Abstract:
Eye tracking is a great way to triangulate multiple data sources for deeper, more complete knowledge of how instructional materials are really being used and of the emotional connections made. Using sensor-based biometrics provides a detailed local analysis in real time, expanding our ability to collect science-based data for a more comprehensive level of understanding of teaching and learning than was previously possible. The knowledge gained will be used to make future improvements to instructional materials, tools, and interactions. The literature has been examined, and a preliminary pilot test was implemented to develop a methodology for research in Instructional Design and Technology. Eye tracking adds objective metrics, obtained alongside other biometric data collection and analysis, for a fresh perspective.
Keywords: area of interest, eye tracking, biometrics, fixation, fixation count, fixation sequence, fixation time, gaze points, heat map, saccades, time to first fixation
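Several of the listed metrics (fixation count, fixation time, time to first fixation) fall out of a fixation-detection pass over the raw gaze points. A sketch of the classic dispersion-threshold (I-DT) approach follows; the sample-count and pixel thresholds are assumed values that depend on the tracker's sampling rate and screen geometry.

```python
import numpy as np

def detect_fixations(gaze, min_samples=6, dispersion_max=25.0):
    """Dispersion-based (I-DT style) fixation detection on (x, y) gaze points.
    Returns (start_index, end_index, centroid) for each detected fixation."""
    fixations, i, n = [], 0, len(gaze)
    while i + min_samples <= n:
        j = i + min_samples
        # dispersion = (max x - min x) + (max y - min y) over the window
        disp = np.ptp(gaze[i:j, 0]) + np.ptp(gaze[i:j, 1])
        if disp <= dispersion_max:
            # grow the window while dispersion stays under the threshold
            while j < n and (np.ptp(gaze[i:j + 1, 0]) +
                             np.ptp(gaze[i:j + 1, 1])) <= dispersion_max:
                j += 1
            fixations.append((i, j - 1, gaze[i:j].mean(axis=0)))
            i = j
        else:
            i += 1
    return fixations

gaze = np.array([[100, 100], [101, 99], [100, 101], [102, 100], [101, 101],
                 [100, 100], [300, 200], [305, 198]], dtype=float)
print(detect_fixations(gaze, min_samples=5, dispersion_max=10))
```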
Procedia PDF Downloads 132
41804 Legal Regulation of Personal Information Data Transmission Risk Assessment: A Case Study of the EU’s DPIA
Authors: Cai Qianyi
Abstract:
In the midst of the global digital revolution, the flow of data poses security threats that call China's existing legislative framework for protecting personal information into question. As a preliminary procedure for risk analysis and prevention, the risk assessment of personal data transmission lacks detailed guidelines for support. Existing provisions reveal unclear responsibilities for network operators and weakened rights for data subjects. Furthermore, the regulatory system's weak operability and the lack of industry self-regulation heighten data transmission hazards. This paper compares the regulatory pathways for data transmission risks in China and Europe from a legal framework and content perspective. It draws on the “Data Protection Impact Assessment Guidelines” to empower multiple stakeholders, including data processors, controllers, and subjects, while also defining their obligations. In conclusion, this paper intends to address China's digital security shortcomings by developing a more mature regulatory framework and industry self-regulation mechanisms, resulting in a win-win situation for personal data protection and the development of the digital economy.
Keywords: personal information data transmission, risk assessment, DPIA, internet service provider
Procedia PDF Downloads 62
41803 A Systematic Review and Meta-Analysis of Diabetes Ketoacidosis in Ethiopia
Authors: Addisu Tadesse Sahile, Mussie Wubshet Teka, Solomon Muluken Ayehu
Abstract:
Background: Diabetes is one of the common public health problems of the century, estimated to affect one in ten of the world's population by the year 2030; diabetes ketoacidosis is one of its common acute complications. Objectives: The aim of this review was to assess the magnitude of diabetes ketoacidosis among patients with type 1 diabetes in Ethiopia. Methods: A systematic data search was done across Google Scholar, PubMed, Web of Science, and African Journals Online. Two reviewers independently carried out the selection, reviewing, screening, and extraction of the data using a Microsoft Excel spreadsheet. The Joanna Briggs Institute's prevalence critical appraisal tool was used to assess the quality of evidence. All studies conducted in Ethiopia that reported diabetes ketoacidosis rates among patients with type 1 diabetes were included. The extracted data were imported into Comprehensive Meta-Analysis version 3.0 for further analysis. Heterogeneity was checked by Higgins's method, whereas publication bias was checked using Begg's and Egger's tests. A random-effects meta-analysis model with a 95% confidence interval was computed to estimate the pooled prevalence. Furthermore, subgroup analysis based on the study area (region) and the sample size was carried out. Result and Conclusion: Of a total of 51 articles reviewed, 12 fulfilled the inclusion criteria and were included in the meta-analysis. The pooled prevalence of diabetes ketoacidosis among type 1 diabetes in Ethiopia was 53.2% (95% CI: 43.1%-63.1%). The highest prevalence of DKA was reported in the Tigray region of Ethiopia, whereas the lowest was reported in the Southern region. Concerned bodies are advised to address the escalating burden of diabetes ketoacidosis in Ethiopia.
Keywords: DKA, type 1 diabetes, Ethiopia, systematic review, meta-analysis
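A condensed sketch of the random-effects pooling behind such an estimate, using DerSimonian-Laird weighting on raw proportions with invented study counts; published prevalence analyses often apply a logit or double-arcsine transform first.

```python
import numpy as np

def pooled_prevalence(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions (sketch)."""
    p = events / totals
    var = p * (1 - p) / totals            # within-study variance of each proportion
    w = 1 / var                            # fixed-effect (inverse-variance) weights
    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)    # Cochran's Q for heterogeneity
    df = len(p) - 1
    i2 = max(0.0, (q - df) / q) * 100     # Higgins's I-squared
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (var + tau2)               # random-effects weights
    p_re = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return p_re, (p_re - 1.96 * se, p_re + 1.96 * se), i2

# Hypothetical study-level DKA counts among type 1 diabetes patients
events = np.array([40, 55, 62, 30, 71])
totals = np.array([90, 100, 110, 70, 120])
est, ci, i2 = pooled_prevalence(events, totals)
print(f"pooled = {est:.1%}, 95% CI ({ci[0]:.1%}, {ci[1]:.1%}), I2 = {i2:.0f}%")
```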
Procedia PDF Downloads 61
41802 Business Intelligence for Profiling of Telecommunication Customer
Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro
Abstract:
Business intelligence is a methodology that systematically exploits data to produce information and knowledge; it can support the decision-making process. Among the methods in business intelligence are data warehousing and data mining. A data warehouse can store historical data derived from transactional data. For data modelling in the data warehouse, we apply Kimball's dimensional modelling. Data mining is used to extract patterns and gain insight from the data. Data mining has many techniques, one of which is segmentation. For profiling telecommunication customers, we use customer segmentation according to customers' usage of services, invoices, and payments. Customers can be grouped according to their characteristics, and profitable customers can be identified. We apply the K-means clustering algorithm for segmentation, with input variables from the RFM (Recency, Frequency, Monetary) model. All data mining processes are carried out with IBM SPSS Modeler.
Keywords: business intelligence, customer segmentation, data warehouse, data mining
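A minimal sketch of the segmentation step with hypothetical RFM values; the study itself uses IBM SPSS Modeler rather than Python.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical RFM table: days since last use, transactions/month, monthly spend
rfm = np.array([[  5, 42, 310.0],
                [ 60,  4,  25.0],
                [  9, 35, 280.0],
                [ 90,  2,  15.0],
                [ 14, 20, 150.0]])

# Standardize so the monetary column does not dominate the distance metric
X = StandardScaler().fit_transform(rfm)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)              # segment assignment per customer
print(kmeans.cluster_centers_)     # segment profiles in standardized units
```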
Procedia PDF Downloads 485
41801 A Critical Analysis of Environmental Investment in India
Authors: K. Y. Chen, H. Chua, C. W. Kan
Abstract:
Environmental investment is an important issue in many countries. In this study, we first review the environmental issues related to India and their effect on economic development. Secondly, economic data are collected from yearly government statistics, which also include India's environmental investment information. Finally, we correlate the data in order to find the relationship between environmental investment and sustainable development in India. Therefore, in this paper, we aim to analyse the effect of environmental investment on sustainable development in India. Based on the economic data collected, India is a developing country with fast population and GDP growth. India faces environmental problems due to its rapid development. However, environmental investment can have a positive impact on sustainable development in India; it has been growing at the same rate as GDP. Acknowledgment: The authors would like to thank the Hong Kong Polytechnic University for its financial support of this work.
Keywords: India, environmental investment, sustainable development, analysis
Procedia PDF Downloads 317
41800 Research and Application of Multi-Scale Three Dimensional Plant Modeling
Authors: Weiliang Wen, Xinyu Guo, Ying Zhang, Jianjun Du, Boxiang Xiao
Abstract:
Reconstructing and analyzing three-dimensional (3D) models from in situ measured data is important for a number of research areas and applications in plant science, including plant phenotyping, functional-structural plant modeling (FSPM), plant germplasm resource protection, and the popularization of agricultural technology. It spans many scales, from cell, tissue, and organ to plant and canopy, i.e., from microscopic to macroscopic. The techniques currently used for data capture, feature analysis, and 3D reconstruction are quite different at different scales. In this context, morphological data acquisition and the 3D analysis and modeling of plants at different scales are introduced systematically. The data capture equipment commonly used at these scales is introduced, and hot issues and difficulties at each scale are then described. Some examples are also given, such as micron-scale phenotyping quantification and 3D microstructure reconstruction of vascular bundles within maize stalks based on micro-CT scanning, 3D reconstruction of leaf surfaces and feature extraction from point clouds acquired with a 3D handheld scanner, and plant modeling by combining parameter-driven 3D organ templates. Several application examples using the 3D models and analysis results are also introduced. A 3D maize canopy was constructed, and light distribution was simulated within the canopy, which was used for the design of an ideal plant type. A grape tree model was constructed from 3D digital and point cloud data, which was used for the production of science content for the 11th International Conference on Grapevine Breeding and Genetics. Using the tissue models, Google Glass was used to look around visually inside the plant to understand its internal structure. With the development of information technology, 3D data acquisition and data processing techniques will play a greater role in plant science.
Keywords: plant, three dimensional modeling, multi-scale, plant phenotyping, three dimensional data acquisition
Procedia PDF Downloads 278
41799 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks
Authors: Wang Yichen, Haruka Yamashita
Abstract:
In recent years, in the field of sports, decision-making based on the analysis of accumulated sports data, such as which members play in a game and the strategy of the game, has been widely attempted. In fact, in the NBA basketball league, where the world's highest-level players gather, teams analyze the data using various statistical techniques in order to win games. However, it is difficult to analyze the game data for each play, such as ball tracking or the motion of the players, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is proposed. In this research, we propose an analytical model for "determining the optimal lineup composition" using real-time play data, which is considered difficult for all coaches. Because replacing the entire lineup is too complicated, the actual questions for player replacement are "whether or not the lineup should be changed" and "whether or not a Small Ball lineup should be adopted". Therefore, we propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, we can accumulate scoring data for each play, which indicates a player's contribution to the game, and this scoring data can be considered time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, with an NN (Neural Network) model, which can analyze the situation on the field, to build a score prediction model. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected all accumulated NBA data from the 2019-2020 season and applied the method to actual basketball play data to verify the reliability of the proposed model.
Keywords: recurrent neural network, players lineup, basketball data, decision making model
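One plausible realization of the combined RNN + NN architecture described above, written in Keras; all layer sizes and input shapes are assumptions for illustration, not the authors' configuration.

```python
from tensorflow.keras import layers, Model

# Branch 1: LSTM over a sequence of recent plays (e.g., 20 plays x 8 features)
plays = layers.Input(shape=(20, 8), name="play_sequence")
seq = layers.LSTM(32)(plays)

# Branch 2: dense layer over the current game situation
# (e.g., lineup encoding, score margin, time remaining)
situation = layers.Input(shape=(12,), name="situation")
ctx = layers.Dense(16, activation="relu")(situation)

# Merge both views and predict the expected score contribution
merged = layers.concatenate([seq, ctx])
out = layers.Dense(1, name="predicted_score")(merged)

model = Model(inputs=[plays, situation], outputs=out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```

Comparing the model's predicted score for candidate lineups, including Small Ball variants, is then what turns the prediction into a lineup decision.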
Procedia PDF Downloads 134
41798 Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome
Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco
Abstract:
Today, a considerable segment of the world's population lives in urban areas, and this proportion will vastly increase in the next decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate in the coming years. The analysis of various types of data sets and their derived applications has incredible potential across different crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional and analogue techniques of data collection. This paper investigates the prospects of the data science field, which appears to be a formidable resource to assist city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of different new layers of information would definitely enhance planners' capabilities to comprehend more in-depth urban phenomena such as gentrification, land use definition, mobility, or critical infrastructural issues. Specifically, the research results correlate economic, commercial, demographic, and housing data with the purpose of defining the youth economic discomfort index. The statistical composite index provides insights regarding the economic disadvantage of citizens aged between 18 and 29 years, and results clearly show that central urban zones are more disadvantaged than peripheral ones. The experimental setup selected the city of Rome as the testing ground for the whole investigation. The methodology aims at applying statistical and spatial analysis to construct a composite index supporting informed data-driven decisions for urban planning.
Keywords: data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index
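Composite indices of this kind are often built by standardizing each indicator and taking a weighted sum per spatial zone; a minimal sketch with invented zone-level indicators follows.

```python
import numpy as np

def composite_index(indicators, weights=None):
    """Equal- or custom-weighted composite of z-scored indicators.
    indicators: (n_zones, n_indicators); higher raw value = more discomfort."""
    z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
    if weights is None:
        weights = np.full(indicators.shape[1], 1 / indicators.shape[1])
    return z @ weights

# Hypothetical zone-level indicators: youth unemployment rate,
# housing cost burden, share of NEET residents
zones = np.array([[0.28, 0.41, 0.19],
                  [0.35, 0.48, 0.25],
                  [0.22, 0.30, 0.12]])
print(composite_index(zones))   # one discomfort score per zone
```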
Procedia PDF Downloads 135
41797 Gendered Labelling and Its Effects on Vhavenda Women
Authors: Matodzi Rapalalani
Abstract:
In line with Spencer's (2018) classic labelling theory, labels influence the perceptions of both the individual and other members of society. That is, once labelled, the individual acts in ways that confirm the stereotypes attached to the label. This study, therefore, investigates the understanding of gendered labelling and its effects on Vhavenda women. Gender socialization and patriarchy are viewed as the core causes of the problem. The literature review presents the development of gendered labelling, its forms, and other aspects. A qualitative method of data collection was used in this study, and semi-structured interviews were conducted. A total of six participants took part, as a small sample is easier to work with. Thematic analysis was used to interpret and analyze the data. Ethical issues such as confidentiality, informed consent, and voluntary participation were considered. Through the analysis and data interpretation, causes such as a lack of Christian values, insecurities, and lust were mentioned, as well as some of the effects, such as frustration, increased divorce, and low self-esteem.
Keywords: gender, naming, Venda, women, African culture
Procedia PDF Downloads 91
41796 Empirical and Indian Automotive Equity Portfolio Decision Support
Authors: P. Sankar, P. James Daniel Paul, Siddhant Sahu
Abstract:
A brief review of empirical studies on stock market decision support methodology indicates that the field is at the threshold of validating the accuracy of traditional models as well as fuzzy systems, artificial neural networks, and decision trees. Many researchers have been attempting to compare these models using various data sets worldwide. However, the research community has yet to reach conclusive confidence in the emerging models. This paper uses automotive sector stock prices from the National Stock Exchange (NSE), India, and analyzes them for intra-sectoral support of stock market decisions. The study identifies the significant variables and their lags which affect the price of the stocks, using OLS analysis and decision tree classifiers.
Keywords: Indian automotive sector, stock market decisions, equity portfolio analysis, decision tree classifiers, statistical data analysis
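A toy sketch of the lagged-variable decision tree part, on simulated prices; the variable names, lag depth, and tree depth are arbitrary choices for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical daily closing prices for two stocks in the same sector
np.random.seed(0)
prices = pd.DataFrame({"stock_a": np.random.randn(300).cumsum() + 100,
                       "stock_b": np.random.randn(300).cumsum() + 100})

returns = prices.pct_change()
# Lagged returns of both stocks as candidate explanatory variables
X = pd.concat({f"lag{k}": returns.shift(k) for k in (1, 2, 3)}, axis=1).dropna()
X = X.iloc[:-1]                       # drop last row: no next-day target for it
# Target: does stock_a move up the next day?
y = (returns["stock_a"].shift(-1) > 0).loc[X.index]

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(dict(zip(X.columns.map(str), clf.feature_importances_.round(3))))
```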
Procedia PDF Downloads 486
41795 A Model to Assist Military Mission Planners in Identifying and Assessing Variables Impacting Food Security
Authors: Lynndee Kemmet
Abstract:
The U.S. military plays an increasing role in supporting political stability efforts, and this includes efforts to prevent the food insecurity that can trigger political and social instability. This paper presents a model that assists military commanders in identifying variables that impact food production and distribution in their areas of operation (AO), in identifying connections between variables, and in assessing the impacts of those variables on food production and distribution. Through use of the model, military units can better target their data collection efforts and can categorize and analyze data within the data categorization framework most widely used by military forces, PMESII-PT (Political, Military, Economic, Social, Infrastructure, Information, Physical Environment, and Time). The model provides flexibility of analysis: commanders can target analysis to be highly focused on a specific PMESII-PT domain or variable, or conduct analysis across multiple PMESII-PT domains. The model is also designed to assist commanders in mapping food systems in their AOs and then identifying components of those systems that must be strengthened or protected.
Keywords: food security, food system model, political stability, US Military
Procedia PDF Downloads 196