Search results for: data visualization
25107 Hydrodynamics and Heat Transfer Characteristics of a Solar Thermochemical Fluidized Bed Reactor
Authors: Selvan Bellan, Koji Matsubara, Nobuyuki Gokon, Tatsuya Kodama, Hyun Seok-Cho
Abstract:
In the concentrated solar thermal industry, fluidized-bed technology has been used to produce hydrogen by thermochemical two-step water splitting cycles, and synthetic gas by gasification of coal coke. Recently, a couple of fluidized bed reactors have been developed and tested at Niigata University, Japan, for two-step thermochemical water splitting cycles and coal coke gasification using a Xe-light solar simulator. The hydrodynamic behavior of the gas-solid flow plays a vital role in these reactors. Thus, in order to study the dynamics of dense gas-solid flow, a CFD-DEM model has been developed, in which the contact forces between the particles are calculated by the spring-dashpot model, based on the soft-sphere method. Heat transfer and hydrodynamics of a solar thermochemical fluidized bed reactor filled with ceria particles have been studied numerically and experimentally for a beam-down solar concentrating system. An experimental visualization of the particle circulation pattern and mixing of a two-tower fluidized bed system is presented. Simulation results have been compared with experimental data to validate the CFD-DEM model. The results indicate that the model can predict the particle-fluid flow of the two-tower fluidized bed reactor, and the key operating parameters can be optimized using it. Keywords: solar reactor, CFD-DEM modeling, fluidized bed, beam-down solar concentrating system
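As a rough illustration of the spring-dashpot (soft-sphere) contact model the abstract describes, the sketch below computes the normal contact force between two overlapping particles. The stiffness `k` and damping `c` values are illustrative assumptions, not the reactor's calibrated parameters.

```python
# Sketch of a soft-sphere (spring-dashpot) normal contact force, as used in
# CFD-DEM models. Stiffness k and damping c are illustrative values only.

def normal_contact_force(x1, x2, r1, r2, v_rel_n, k=1000.0, c=5.0):
    """Return the normal contact force on particle 1 (positive = repulsive).

    x1, x2  : particle centre positions along the contact normal (m)
    r1, r2  : particle radii (m)
    v_rel_n : relative normal velocity (m/s), positive when approaching
    """
    overlap = (r1 + r2) - abs(x2 - x1)   # soft-sphere overlap (delta)
    if overlap <= 0:
        return 0.0                        # no contact, no force
    # spring (elastic) term + dashpot (dissipative) term
    return k * overlap + c * v_rel_n

# Two 1 mm-radius particles overlapping by 0.1 mm, approaching at 0.01 m/s
f = normal_contact_force(0.0, 0.0019, 0.001, 0.001, 0.01)
```

In a full DEM step this force would be resolved along the 3-D contact normal and summed over all particle contacts before integrating the equations of motion.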
Procedia PDF Downloads 197
25106 Gas-Liquid Flow Regimes in Vertical Venturi Downstream of Horizontal Blind-Tee
Authors: Muhammad Alif Bin Razali, Cheng-Gang Xie, Wai Lam Loh
Abstract:
A venturi device is commonly used as an integral part of a multiphase flowmeter (MPFM) in real-time oil-gas production monitoring. For an accurate determination of individual phase fractions and flowrates, a gas-liquid flow ideally needs to be well mixed in the venturi measurement section. Partial flow mixing is achieved by installing the venturi vertically downstream of the blind-tee pipework that ‘homogenizes’ the incoming horizontal gas-liquid flow. In order to study in depth the flow-mixing effect of the blind-tee, gas-liquid flows are captured at the blind-tee and venturi sections by using a high-speed video camera and a purpose-built transparent test rig, over a wide range of superficial liquid velocities (0.3 to 2.4 m/s) and gas volume fractions (10 to 95%). Electrical capacitance sensors are built to measure the instantaneous holdup (of oil-gas flows) at the venturi inlet and throat. Flow regimes and flow (a)symmetry are investigated by analyzing the statistical features of the capacitance sensors’ holdup time-series data and of the high-speed video time-stacked images. The perceived homogenization effect of the blind-tee on the incoming intermittent horizontal flow regimes is found to be relatively small across the tested flow conditions. A horizontal (blind-tee) to vertical (venturi) flow-pattern transition map is proposed based on gas and liquid mass fluxes (weighted by the Baker parameters). Keywords: blind-tee, flow visualization, gas-liquid two-phase flow, MPFM
Procedia PDF Downloads 127
25105 Data Mining Algorithms Analysis: Case Study of Price Predictions of Lands
Authors: Julio Albuja, David Zaldumbide
Abstract:
Data analysis is an important step before making decisions about money. The aim of this work is to analyze the factors that influence the final price of houses through data mining algorithms. To the best of our knowledge, previous work was limited to comparing results. Before using the data set, a Z-transformation was applied to standardize the data to a common range. The data was then classified into two groups to visualize it in a readable format. A decision tree was built, and graphical output displays the results and the factors' influence clearly. The definitions of these methods are described, as well as descriptions of the results. Finally, conclusions and recommendations are presented related to the reported results, making it easier to apply these algorithms using a customized data set. Keywords: algorithms, data, decision tree, transformation
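A minimal sketch of the Z-transformation (standardization) step described above, followed by a naive two-group split on the standardized price. The field values and the above/below-average threshold are illustrative, not the paper's data set.

```python
# Z-transformation: standardize values to zero mean and unit variance,
# then split into two groups for easier visualization.
import math

def z_transform(values):
    """Return z-scores: (x - mean) / std for each value."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var)
    return [(v - mean) / std for v in values]

prices = [120_000, 95_000, 310_000, 150_000, 280_000]  # illustrative prices
z = z_transform(prices)

# Classify into two groups: above or below the average price
groups = ["high" if s > 0 else "low" for s in z]
```

The standardized values have zero mean by construction, so the sign of each z-score directly encodes the two groups used for visualization.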
Procedia PDF Downloads 374
25104 Application of Blockchain Technology in Geological Field
Authors: Mengdi Zhang, Zhenji Gao, Ning Kang, Rongmei Liu
Abstract:
Management and application of geological big data is an important part of China's national big data strategy. With the implementation of the national big data strategy, geological big data management becomes more and more critical. At present, there are still many technological barriers, as well as conceptual confusion, in many aspects of geological big data management and application, such as data sharing, intellectual property protection, and application technology. Therefore, it is a key task to make better use of new technologies for deeper delving and wider application of geological big data. In this paper, we briefly introduce the basic principles of blockchain technology and then analyze the application dilemmas of geological data. Based on this analysis, we bring forward some feasible patterns and scenarios for blockchain application in geological big data and put forward several suggestions for future work in geological big data management. Keywords: blockchain, intellectual property protection, geological data, big data management
Procedia PDF Downloads 89
25103 Frequent Item Set Mining for Big Data Using MapReduce Framework
Authors: Tamanna Jethava, Rahul Joshi
Abstract:
Frequent item sets play an essential role in many data mining tasks that try to find interesting patterns in a database. Typically, a frequent item set refers to a set of items that frequently appear together in a transaction dataset. Several mining algorithms are used for frequent item set mining, yet most do not scale to the type of data we are presented with today, so-called "big data". Big data is a collection of large data sets. Our approach is to mine frequent item sets over large datasets in a scalable and speedy way: MapReduce, together with HDFS, is used to find frequent item sets from big data on a large cluster. This paper focuses on using pre-processing and a mining algorithm as a hybrid approach for big data over the Hadoop platform. Keywords: frequent item set mining, big data, Hadoop, MapReduce
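To make the MapReduce idea concrete, here is a toy single-machine sketch: the map phase emits candidate item sets per transaction, and the reduce phase sums their counts. This illustrates the counting pattern only; it is not Hadoop code, and the transactions and support threshold are made up.

```python
# Toy MapReduce-style pass for frequent item sets: map emits (itemset, 1)
# pairs per transaction, reduce sums the counts, then a support filter
# keeps the frequent sets.
from collections import Counter
from itertools import combinations

def map_phase(transaction, max_size=2):
    """Emit (itemset, 1) pairs for all item sets up to max_size."""
    items = sorted(set(transaction))
    for size in range(1, max_size + 1):
        for combo in combinations(items, size):
            yield combo, 1

def reduce_phase(emitted):
    """Sum counts per itemset key, as a reducer would."""
    counts = Counter()
    for key, one in emitted:
        counts[key] += one
    return counts

transactions = [["milk", "bread"], ["milk", "eggs"], ["milk", "bread", "eggs"]]
emitted = [pair for t in transactions for pair in map_phase(t)]
counts = reduce_phase(emitted)
frequent = {k for k, c in counts.items() if c >= 2}   # min support = 2
```

On a real cluster the map outputs would be shuffled by key to many reducers, but the per-key arithmetic is exactly this.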
Procedia PDF Downloads 435
25102 The Role of Data Gathering in NGOs
Authors: Hussaini Garba Mohammed
Abstract:
Background/Significance: The lack of data gathering affects NGOs worldwide in obtaining good information about educational and health-related issues among communities in any country. For example, HIV/AIDS, tuberculosis, and the COVID-19 virus are becoming serious public health problems, especially among older men and women. Yet in some countries there is no full survey data from communities, villages, and rural areas to show the percentage of victims and patients, especially for COVID-19. Such data are essential to inform programming targets, strategies, and priorities. Keywords: reliable information, data assessment, data mining, data communication
Procedia PDF Downloads 179
25101 The Application of Data Mining Technology in Building Energy Consumption Data Analysis
Authors: Liang Zhao, Jili Zhang, Chongquan Zhong
Abstract:
Energy consumption data, in particular those involving public buildings, are impacted by many factors: the building structure, climate/environmental parameters, construction, system operating conditions, and user behavior patterns. Traditional methods for data analysis are insufficient. This paper delves into data mining technology to determine its application in the analysis of building energy consumption data, including energy consumption prediction, fault diagnosis, and optimal operation. Recent literature is reviewed and summarized, the problems faced by data mining technology in the area of energy consumption data analysis are enumerated, and research directions for future studies are given. Keywords: data mining, data analysis, prediction, optimization, building operational performance
Procedia PDF Downloads 852
25100 To Handle Data-Driven Software Development Projects Effectively
Authors: Shahnewaz Khan
Abstract:
Machine learning (ML) techniques are often used in projects for creating data-driven applications. These tasks typically demand additional research and analysis. The proper technique and strategy must be chosen to ensure the success of data-driven projects; otherwise, even with a lot of effort, the necessary development might not be possible. This paper examines the workflow of data-driven software development projects and their implementation process in order to describe how to manage such a project successfully, which will help minimize the added workload. Keywords: data, data-driven projects, data science, NLP, software project
Procedia PDF Downloads 83
25099 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico
Authors: Ismene Ithai Bras-Ruiz
Abstract:
Learning Analytics (LA) is employed by universities not only as a tool but as a specialized ground to support students and professors. However, not all academic programs apply LA with the same goal or use the same tools. In fact, LA comprises five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not just to inform academic authorities about the situation of the program, but also to detect at-risk students, professors with needs, or general problems. At the highest level, Artificial Intelligence techniques are applied to support learning practices. LA has adopted different techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. Each academic program is expected to decide which field to utilize on the basis of its academic interests, but also its capacities related to professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED in Spanish) of the National Autonomous University of Mexico (UNAM) has been working for forty years as an alternative to traditional programs; one of its main supports has been the use of new information and communication technologies (ICT). Today, UNAM has one of the largest networked higher education programs, with twenty-six academic programs in different faculties. This means that every faculty works with heterogeneous populations and academic problems. In this sense, every program has developed its own learning analytics techniques to address academic issues. In this context, an investigation was carried out to assess the application of LA across the academic programs in the different faculties.
The premise of the study was that not all the faculties have utilized advanced LA techniques, and it is probable that they do not know which field of study is closest to their program goals. Consequently, not all the programs know about LA; but this does not mean they do not work with LA in a veiled or less explicit sense. It is very important to know the degree of knowledge about LA for two reasons: 1) it allows appreciation of the administration's work to improve the quality of teaching, and 2) it shows whether other LA techniques can be adopted. For this purpose, three instruments were designed to determine experience and knowledge of LA. These were applied to ten faculty coordinators and their personnel; thirty members were consulted (academic secretary, systems manager or data analyst, and coordinator of the program). The final report showed that almost all the programs work with basic statistical tools and techniques, which helps the administration only to know what is happening inside the academic program; they are not ready to move up to the next level, that is, applying Artificial Intelligence or recommender systems to reach a personalized learning system. This situation is not related to knowledge of LA, but to the clarity of the long-term goals. Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise
Procedia PDF Downloads 128
25098 The Relationship Between Artificial Intelligence, Data Science, and Privacy
Authors: M. Naidoo
Abstract:
Artificial intelligence often requires large amounts of good-quality data. In important fields such as healthcare, the training of AI systems predominantly relies on health and personal data; however, the usage of this data is complicated by various layers of law and ethics that seek to protect individuals' privacy rights. This research seeks to establish the challenges AI and data science pose to (i) informational rights, (ii) privacy rights, and (iii) data protection. To solve some of the issues presented, various methods are suggested, such as embedding values in technological development, proper balancing of rights and interests, and others. Keywords: artificial intelligence, data science, law, policy
Procedia PDF Downloads 106
25097 Optimization of a Method of Total RNA Extraction from Mentha piperita
Authors: Soheila Afkar
Abstract:
Mentha piperita is a medicinal plant that contains large amounts of secondary metabolites that have an adverse effect on RNA extraction. Since high-quality RNA is the prerequisite for real-time PCR, in this study the optimization of total RNA isolation from leaf tissues of Mentha piperita was evaluated. From this point of view, we compared two different total RNA extraction methods on leaves of Mentha piperita to find the one that yields the highest quality. The methods tested were RNX-plus and modified RNX-plus (numbers 1-5). RNA quality was analyzed on a 1.5% agarose gel. RNA integrity was also assessed by visualization of the ribosomal RNA bands on 1.5% agarose gels. In the modified RNX-plus method (number 2), the integrity of the 28S and 18S rRNA was highly satisfactory when analyzed on an agarose denaturing gel, so this method is suitable for RNA isolation from Mentha piperita. Keywords: Mentha piperita, polyphenol, polysaccharide, RNA extraction
Procedia PDF Downloads 190
25096 Defect Localization and Interaction on Surfaces with Projection Mapping and Gesture Recognition
Authors: Qiang Wang, Hongyang Yu, MingRong Lai, Miao Luo
Abstract:
This paper presents a method for accurately localizing and interacting with known surface defects by overlaying patterns onto real-world surfaces using a projection system. Given the world coordinates of the defects, we project corresponding patterns onto the surfaces, providing an intuitive visualization of the specific defect locations. To enable users to interact with and retrieve more information about individual defects, we implement a gesture recognition system based on a pruned and optimized version of YOLOv6. This lightweight model achieves an accuracy of 82.8% and is suitable for deployment on low-performance devices. Our approach demonstrates the potential for enhancing defect identification, inspection processes, and user interaction in various applications. Keywords: defect localization, projection mapping, gesture recognition, YOLOv6
Procedia PDF Downloads 88
25095 Higher Education Internationalisation: The Case of Indonesia
Authors: Agustinus Bandur, Dyah Budiastuti
Abstract:
With the rapid development of information and communication technology (ICT) in the globalisation era, higher education (HE) internationalisation has become a worldwide phenomenon. However, even though various studies have been widely published in the existing literature, these studies were conducted in developed countries. Accordingly, the major purpose of this article is to explore the current trends of higher education internationalisation programs, with particular reference to identifying the benefits and challenges confronted by participating staff and students. For these purposes, an ethnographic qualitative study was applied, with NVivo 11 software used for coding, analysis, and visualization of non-numeric data gathered from interviews, videos, web contents, social media, and relevant documents. A purposive sampling technique was applied, with a total of ten highly ranked accredited government and private universities in Indonesia. On the basis of thematic and cross-case analyses, this study indicates that while Australia has led other countries in dual-degree programs, partner universities from Japan and Korea have the most frequent collaboration on student exchange programs. Meanwhile, most visiting scholars who have collaborated with the universities in this study came from the US, the UK, Japan, Australia, the Netherlands, and China. Other European countries, such as Germany, France, and Norway, have also conducted joint research with the Indonesian universities involved in this study. This study suggests that further support from government policy and grants is required to overcome the challenges, as well as strategic leadership and management roles to achieve a high impact of such programs on higher education quality. Keywords: higher education, internationalisation, challenges, Indonesia
Procedia PDF Downloads 270
25094 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to the item class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and the answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. This algorithm is non-parametric: the transformation from a high- to low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at exactly the same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or concept drift. This paper presents a methodology to reuse an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with the newly obtained embedding.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics. Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
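One way to see the reuse idea sketched above: new points can start at the 2-D position of their nearest neighbor in the previously embedded support set, so cluster positions are preserved before any fine-tuning. This pure-Python illustration uses made-up coordinates and is not the paper's optimizer.

```python
# Sketch of initializing a new embedding from a previous one: each new
# high-dimensional point is placed at the 2-D coordinates of its nearest
# neighbour in the already-embedded support set.
import math

def nearest_index(point, data):
    """Index of the point in `data` closest to `point` (Euclidean)."""
    dists = [math.dist(point, d) for d in data]
    return dists.index(min(dists))

# Previously embedded support set: high-dim data and its 2-D embedding
support_hd = [(0.0, 0.0, 0.0), (10.0, 10.0, 10.0)]
support_2d = [(-5.0, 0.0), (5.0, 0.0)]

# New batch: each point starts at its nearest support point's 2-D position
new_hd = [(0.5, 0.1, 0.0), (9.8, 10.2, 10.1)]
init_2d = [support_2d[nearest_index(p, support_hd)] for p in new_hd]
```

A subsequent optimization pass would then adjust these initial positions while penalizing drift away from the support embedding, which is what keeps successive embeddings coherent.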
Procedia PDF Downloads 144
25093 Algorithms used in Spatial Data Mining GIS
Authors: Vahid Bairami Rad
Abstract:
Extracting knowledge from spatial data such as GIS data is important for reducing the data and extracting information. Therefore, the development of new techniques and tools that support humans in transforming data into useful knowledge has been the focus of the relatively new and interdisciplinary research area of knowledge discovery in databases. Thus, we introduce a set of database primitives, or basic operations, for spatial data mining which are sufficient to express most of the spatial data mining algorithms from the literature. This approach has several advantages. Similar to the relational standard language SQL, the use of standard primitives will speed up the development of new data mining algorithms and will also make them more portable. We introduce a database-oriented framework for spatial data mining based on the concepts of neighborhood graphs and paths. A small set of basic operations on these graphs and paths is defined as database primitives for spatial data mining. Furthermore, techniques to efficiently support these database primitives in a commercial DBMS are presented. Keywords: spatial database, knowledge discovery in databases, data mining, spatial relationship, predictive data mining
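A minimal sketch of the neighborhood-graph primitive described above: connect spatial objects that lie within a distance threshold, and expose the adjacency so mining algorithms can traverse neighbors and paths. The points, threshold, and dict representation are illustrative choices.

```python
# Build a neighborhood graph over GIS objects: an edge connects two
# objects whose Euclidean distance is within max_dist. The adjacency
# dict is the "neighbors" primitive that mining algorithms build on.
import math

def build_neighborhood_graph(points, max_dist):
    """Return adjacency dict: point index -> set of neighbouring indices."""
    graph = {i: set() for i in range(len(points))}
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= max_dist:
                graph[i].add(j)
                graph[j].add(i)
    return graph

# Three GIS objects (x, y); the third is far from the first two
pts = [(0.0, 0.0), (1.0, 0.0), (100.0, 100.0)]
g = build_neighborhood_graph(pts, max_dist=2.0)
```

Paths are then just walks over this adjacency structure, which is why a small set of graph/path operations suffices to express many spatial mining algorithms.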
Procedia PDF Downloads 460
25092 Data Stream Association Rule Mining with Cloud Computing
Authors: B. Suraj Aravind, M. H. M. Krishna Prasad
Abstract:
There are emerging applications of data streams that require association rule mining, such as network traffic monitoring, web click stream analysis, sensor data, and data from satellites. Data streams typically arrive continuously at high speed, with a huge amount of data and changing data distributions. This raises new issues that need to be considered when developing association rule mining techniques for stream data. This paper proposes an improved data stream association rule mining algorithm that eliminates resource limitations by using cloud computing. Inclusion of cloud computing may introduce additional, as yet unknown problems which need further research. Keywords: data stream, association rule mining, cloud computing, frequent itemsets
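The resource limitation the abstract mentions can be seen in a minimal sliding-window sketch: a bounded window keeps only recent transactions so memory stays fixed, at the cost of forgetting old data. The window size, transactions, and class design are illustrative, not the paper's algorithm.

```python
# Sliding-window co-occurrence counting over a transaction stream.
# deque(maxlen=...) bounds memory: old transactions fall out automatically.
from collections import Counter, deque
from itertools import combinations

class StreamWindowCounter:
    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)

    def add(self, transaction):
        """Push one transaction; the oldest is evicted when full."""
        self.window.append(tuple(sorted(set(transaction))))

    def pair_counts(self):
        """Count item pairs over the transactions still in the window."""
        counts = Counter()
        for t in self.window:
            counts.update(combinations(t, 2))
        return counts

swc = StreamWindowCounter(window_size=2)
for t in [["a", "b"], ["a", "c"], ["a", "b"]]:
    swc.add(t)
counts = swc.pair_counts()   # only the last 2 transactions are counted
```

Offloading larger windows, or many such counters, to elastic cloud resources is one way to relax the fixed-memory constraint.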
Procedia PDF Downloads 501
25091 A Comprehensive Survey and Improvement to Existing Privacy Preserving Data Mining Techniques
Authors: Tosin Ige
Abstract:
"Ethics must be a condition of the world, like logic" (Ludwig Wittgenstein, 1889-1951). As important as data mining is, it poses a significant threat to ethics, privacy, and legality, since data mining makes it difficult for an individual or consumer (in the case of a company) to control the accessibility and usage of their data. This research covers current issues and the latest research and development on privacy-preserving data mining methods as of 2022. It also discusses some advances in those techniques, while at the same time proposing a new technique as a solution to a shortcoming of an existing privacy-preserving data mining method. This paper also bridges the wide gap between data mining and the web application programming interface (web API), where research is urgently needed for an added layer of security in data mining, while at the same time introducing a seamless and more efficient way of data mining. Keywords: data, privacy, data mining, association rule, privacy preserving, mining technique
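As one concrete illustration of the family of techniques surveyed above, here is a sketch of randomized response, a classic privacy-preserving collection mechanism: each individual answer is noisy and therefore deniable, yet the true proportion can still be estimated in aggregate. This is a generic textbook example, not the new technique the paper proposes.

```python
# Randomized response: with probability 1/2 report the true answer,
# otherwise report a fair coin flip. Any single report is deniable,
# but E[report] = 0.5*p + 0.25, so p can be recovered in aggregate.
import random

def randomize(true_answer, rng):
    """Return the privatized (possibly flipped) boolean answer."""
    if rng.random() < 0.5:
        return true_answer
    return rng.random() < 0.5

def estimate_true_rate(reports):
    """Invert the noise model: p = 2*mean(reports) - 0.5."""
    mean = sum(reports) / len(reports)
    return 2 * mean - 0.5

rng = random.Random(42)                       # fixed seed for reproducibility
truths = [i < 300 for i in range(1000)]       # true rate = 0.30
reports = [randomize(t, rng) for t in truths]
p_hat = estimate_true_rate(reports)           # close to 0.30
```

The same trade-off, per-record noise versus aggregate accuracy, underlies many of the surveyed privacy-preserving mining methods.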
Procedia PDF Downloads 173
25090 Big Data: Concepts, Technologies and Applications in the Public Sector
Authors: A. Alexandru, C. A. Alexandru, D. Coardos, E. Tudora
Abstract:
Big Data (BD) is associated with a new generation of technologies and architectures which can harness the value of extremely large volumes of very varied data through real-time processing and analysis. It involves changes in (1) data types, (2) accumulation speed, and (3) data volume. This paper presents the main concepts related to the BD paradigm and introduces architectures and technologies for BD and BD data sets. The integration of BD with the Hadoop framework is also underlined. BD has attracted a lot of attention in the public sector due to newly emerging technologies that allow wide network access, and the volume of different types of data has increased exponentially. Some applications of BD in the public sector in Romania are briefly presented. Keywords: big data, big data analytics, Hadoop, cloud
Procedia PDF Downloads 310
25089 Development of a Web Exploration Support System Focusing on Accumulation of Search Contexts
Authors: T. Yamazaki, R. Onuma, H. Kaminaga, Y. Miyadera, S. Nakamura
Abstract:
Web exploration has increasingly diversified in accordance with the development of browsing environments on the Internet. Moreover, advanced exploration is often conducted in intellectual activities such as surveys in research. This kind of exploration is conducted over a long period with trial and error. In such cases, it is extremely important for a user to accumulate the search contexts and understand them. However, existing support systems have not been effective enough, since most systems cannot handle the various factors involved in the exploration. This research aims to develop a novel system to support web exploration focusing on the accumulation of search contexts. This paper mainly describes the outline of the system. An experiment using the system is also described. Finally, features of the system are discussed based on the results. Keywords: web exploration context, refinement of search intention, accumulation of context, exploration support, information visualization
Procedia PDF Downloads 309
25088 The Interventricular Septum as a Site for Implantation of Electrocardiac Devices - Clinical Implications of Topography and Variation in Position
Authors: Marcin Jakiel, Maria Kurek, Karolina Gutkowska, Sylwia Sanakiewicz, Dominika Stolarczyk, Jakub Batko, Rafał Jakiel, Mateusz K. Hołda
Abstract:
Proper imaging of the interventricular septum during endocavitary lead implantation is essential for a successful procedure. The interventricular septum lies obliquely to the three main body planes and forms angles of 44.56° ± 7.81°, 45.44° ± 7.81°, and 62.49° (IQR 58.84° - 68.39°) with the sagittal, frontal, and transverse planes, respectively. The optimal left anterior oblique (LAO) projection, with the septum aligned along the radiation beam, is obtained at an angle of 53.24° ± 9.08°, while the best visualization of the septal surface in the right anterior oblique (RAO) projection is obtained at an angle of 45.44° ± 7.81°. In addition, the RAO angle (p=0.003) and the septal slope to the transverse plane (p=0.002) are larger in the male group, but the LAO angle (p=0.003) and the dihedral angle that the septum forms with the sagittal plane (p=0.003) are smaller, compared to the female group. Analyzing the optimal RAO angle in cross-sections lying at the level of the anterior and posterior connections of the septum with the free wall of the right ventricle, we obtain slightly smaller angle values, i.e. 41.11° ± 8.51° and 43.94° ± 7.22°, respectively. As the septum is directed leftward in the apical region, the optimal RAO angle for this area decreases (16.49° ± 7.07°) and does not show significant differences between the male and female groups (p=0.23). Within the right ventricular apex, there is a cavity formed by the apical segment of the interventricular septum and the free wall of the right ventricle, with a depth of 12.35mm (IQR 11.07mm - 13.51mm). The length of the septum, measured in the longitudinal four-chamber section, is 73.03mm ± 8.06mm, while the left ventricular septal wall formed by the interventricular septum in the apical region lies outside the right ventricle for a length of 10.06mm (IQR 8.86mm - 11.07mm). Both of these lengths are significantly larger in the male group (p<0.001).
For proper imaging of the septum from the right ventricular side, an oblique position of the visualization devices is necessary. Correct determination of the RAO and LAO angles during the procedure improves the intervention, and appropriate modification of the visual field when moving toward the anterior, posterior, and apical parts of the septum will avoid complications. Overlooking the change in direction of the interventricular septum in the apical region, and the significant decrease in the RAO angle, can result in implantation of the lead into the free wall of the right ventricle, with less effective pacing and even complications such as wall perforation and cardiac tamponade. The demonstrated gender differences can also be helpful in setting the right projections. A necessary addition to the analysis will be a description of the area of the ventricular septum, which we are currently working on using autopsy material. Keywords: anatomical variability, angle, electrocardiological procedure, interventricular septum
Procedia PDF Downloads 99
25087 Semantic Data Schema Recognition
Authors: Aïcha Ben Salem, Faouzi Boufares, Sebastiao Correia
Abstract:
The subject covered in this paper aims at assisting the user in their data quality approach. The goal is to better extract, mix, interpret, and reuse data. It deals with the semantic schema recognition of a data source. This enables the extraction of data semantics from all the available information, including the data and the metadata. Firstly, it consists of categorizing the data by assigning it to a category and possibly a sub-category, and secondly, of establishing relations between columns and possibly discovering the semantics of the manipulated data source. The links detected between columns offer a better understanding of the source and the alternatives for correcting data. This approach allows automatic detection of a large number of syntactic and semantic anomalies. Keywords: schema recognition, semantic data profiling, meta-categorisation, inter-column semantic dependencies
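The categorization step described above can be sketched as pattern-based column profiling: assign a semantic category to a column when every value matches a category's pattern. The rule set here is a tiny illustration; a real profiler would use many more patterns plus metadata.

```python
# Minimal value-pattern profiler: assign a semantic category to a column
# when all of its non-empty values match the category's regex.
import re

RULES = [
    ("email",  re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")),
    ("date",   re.compile(r"^\d{4}-\d{2}-\d{2}$")),
    ("number", re.compile(r"^-?\d+(\.\d+)?$")),
]

def categorize_column(values):
    """Return the first category matching every non-empty value, else 'text'."""
    for name, pattern in RULES:
        if all(pattern.match(v) for v in values if v):
            return name
    return "text"

emails = ["a@b.com", "c@d.org"]
dates = ["2024-01-31", "2023-12-01"]
mixed = ["a@b.com", "hello"]     # no single pattern covers all values
```

Once columns carry categories like "email" or "date", inter-column relations (e.g. a date column that should precede another) become checkable, which is what surfaces the semantic anomalies.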
Procedia PDF Downloads 418
25086 Access Control System for Big Data Application
Authors: Winfred Okoe Addy, Jean Jacques Dominique Beraud
Abstract:
Access control systems (ACs) are some of the most important components in safety-critical areas. Inaccuracies in regulatory frameworks make bespoke policies and remedies more appropriate than standard models or protocols. This problem is exacerbated by the increasing complexity of software, such as integrated Big Data (BD) software for controlling large volumes of encrypted data and resources embedded in a dedicated BD production system. This paper proposes a general access control strategy for the diffusion of Big Data domains, since it is crucial to secure the data provided to data consumers (DCs). We present a general access control dissemination strategy for the Big Data domain, describing the benefits of using designated access control for BD units, and taking into consideration the needs of BD and AC systems. We then present a generic Big Data access control system to improve the dissemination of Big Data. Keywords: access control, security, Big Data, domain
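The idea of designated access control per BD unit can be sketched as a small policy table mapping data domains to the consumer roles allowed to read them. The roles, domains, deny-by-default choice, and dict structure are all illustrative assumptions, not a standard model.

```python
# Toy role-based read policy for Big Data domains: each domain lists the
# data-consumer roles allowed to read it; unknown domains deny by default.

POLICY = {
    # domain     -> roles allowed to read it
    "raw_logs":   {"admin"},
    "aggregates": {"admin", "analyst"},
    "public":     {"admin", "analyst", "consumer"},
}

def can_read(role, domain):
    """Return True if the data consumer's role may read the domain."""
    return role in POLICY.get(domain, set())

decisions = [
    can_read("analyst", "aggregates"),   # allowed
    can_read("consumer", "raw_logs"),    # denied
    can_read("admin", "unknown"),        # unknown domain -> deny by default
]
```

Denying unknown domains by default is the conservative choice for dissemination: a new BD unit is unreadable until a policy entry explicitly grants access.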
Procedia PDF Downloads 134
25085 A Data Envelopment Analysis Model in a Multi-Objective Optimization with Fuzzy Environment
Authors: Michael Gidey Gebru
Abstract:
Most Data Envelopment Analysis models operate in a static environment with input and output parameters chosen from deterministic data. However, due to the ambiguity brought on by shifting market conditions, input and output data are not always precisely gathered in real-world scenarios. Fuzzy numbers can be used to address this kind of ambiguity in input and output data. Therefore, this work aims to extend crisp Data Envelopment Analysis to a fuzzy environment. In this study, the input and output data are regarded as triangular fuzzy numbers. The Data Envelopment Analysis model with a fuzzy environment is then solved using a multi-objective method to gauge the efficiency of the Decision Making Units. Finally, the developed model is illustrated with an application on real data from 50 educational institutions.
Keywords: efficiency, Data Envelopment Analysis, fuzzy, higher education, input, output
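To make the fuzzy setup concrete, the sketch below defuzzifies triangular fuzzy numbers by their centroid and computes a simple single-input, single-output efficiency score. The institution data are invented, and a full DEA model would instead solve a linear program per Decision Making Unit over multiple inputs and outputs; this only illustrates how fuzzy data enter the efficiency calculation.

```python
def centroid(tfn):
    """Defuzzify a triangular fuzzy number (low, mid, high) by its centroid."""
    low, mid, high = tfn
    return (low + mid + high) / 3.0

def efficiencies(inputs, outputs):
    """Single-input/single-output efficiency: each DMU's output/input
    ratio, scaled so the best unit scores 1.0."""
    ratios = [centroid(o) / centroid(i) for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical institutions with (low, mid, high) fuzzy data.
fuzzy_inputs = [(90, 100, 110), (45, 50, 55), (180, 200, 220)]
fuzzy_outputs = [(70, 80, 90), (42, 45, 48), (130, 150, 170)]
print([round(e, 3) for e in efficiencies(fuzzy_inputs, fuzzy_outputs)])
```

The second institution (centroid ratio 45/50 = 0.9) comes out fully efficient here, and the others are scored relative to it; a genuine DEA model lets each unit choose its own most favorable input/output weights instead of fixing a single ratio.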
Procedia PDF Downloads 57
25084 Localization of Geospatial Events and Hoax Prediction in the UFO Database
Authors: Harish Krishnamurthy, Anna Lafontant, Ren Yi
Abstract:
Unidentified Flying Objects (UFOs) have been an interesting topic for most enthusiasts, and hence people all over the United States report such findings online at the National UFO Report Center (NUFORC). Some of these reports are hoaxes. Among those that seem legitimate, our task is not to establish that these events indeed involve flying objects from aliens in outer space; rather, we intend to identify whether a report was a hoax, as labeled by the UFO database team with their existing curation criteria. The database provides a wealth of information that can be exploited for various analyses and insights, such as social reporting, identifying real-time spatial events, and much more. We perform analysis to localize these time-series geospatial events and correlate them with known real-time events. This paper does not confirm any legitimacy of alien activity, but rather attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events occur in geospatial clusters and also follow temporal patterns. We use cluster density and data visualization to search the space of cluster realizations and select the most probable clusters, which provide information about the proximity of such activity. A random forest classifier is also presented to distinguish true events from hoax events, using the best available features such as region, week, time period, and duration. Lastly, we show the performance of the scheme on various days and correlate with real-time events, where one of the UFO reports strongly correlates with a missile test conducted in the United States.
Keywords: time-series clustering, feature extraction, hoax prediction, geospatial events
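As a rough stand-in for the density-based search over cluster realizations, the toy below buckets report coordinates into grid cells and keeps only the dense cells as candidate event clusters. The coordinates, cell size, and density threshold are invented for illustration; the paper's actual analysis evaluates many cluster realizations and adds the temporal dimension.

```python
from collections import defaultdict

def grid_clusters(reports, cell=1.0, min_reports=3):
    """Bucket (lat, lon) reports into square grid cells of side `cell`
    degrees and keep cells containing at least `min_reports` reports."""
    cells = defaultdict(list)
    for lat, lon in reports:
        key = (int(lat // cell), int(lon // cell))
        cells[key].append((lat, lon))
    return [pts for pts in cells.values() if len(pts) >= min_reports]

# Hypothetical sightings: a dense cluster near (40, -105) plus two outliers.
reports = [(40.1, -105.2), (40.2, -105.1), (40.15, -105.3),
           (33.0, -97.0), (47.5, -122.3)]
clusters = grid_clusters(reports)
print(len(clusters))  # 1
```

A cluster that is both spatially dense and concentrated in time is the kind of signal that can then be checked against known real-world events, such as the missile test mentioned above.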
Procedia PDF Downloads 376
25083 Proposal for a Monster Village in Namsan Mountain, Seoul: Significance from a Phenomenological Perspective
Authors: Hyuk-Jin Lee
Abstract:
Korea is a country with thousands of years of history, like its neighbors China and Japan. However, compared to China, famous for the ancient fantasy novel "Journey to the West", and Japan, famous for its monsters, Korea's monster culture is not actively used for tourism. The reason is that the culture closest to the present, from the 17th to the 20th century, was that of the Joseon Dynasty, when Neo-Confucianism, which suppressed monster culture, was at its strongest. This trend intensified after Neo-Confucianism became dogmatic in the mid-17th century. However, Korea, with its thousands of years of Taoist history, clearly has a large body of literature on monsters that can be used as a tourism resource. The problem is that these records are buried in texts and are unfamiliar even to Koreans. This study examines the possibility of developing them into attractive tourism resources based on literary records from the 16th to the early 17th century of the monsters said to be densely located on Namsan Mountain, in the center of Seoul. In particular, we introduce the surprising consistency with which a contemporary Korean newspaper described the area north of Namsan Mountain in terms of feng shui, an oriental philosophy of geography. Finally, building on a theoretical foundation drawn from a phenomenological classification table of cultural heritage, we examine phenomenologically how important this visualization of imaginary or text-based entities is to changes in a society's perception of specific cultural resources. In addition, we deeply analyze related cases, including Japan's ninja culture.
Keywords: monster culture, Namsan mountain, neo-confucianism, phenomenology, tourism
Procedia PDF Downloads 32
25082 A Study on The Relationship between Building Façade and Solar Energy Utilization Potential in Urban Residential Area in West China
Authors: T. Wen, Y. Liu, J. Wang, W. Zheng, T. Shao
Abstract:
Along with the increasing density of the urban population, the solar energy potential of building facades in high-density residential areas has become a question that needs to be addressed. This paper studies how the solar energy utilization potential of building facades in different locations of a residential area changes with different building layouts and orientations in Xining, a typical city in west China with large solar radiation resources. The solar energy potential of three typical building layouts of residential areas, namely parallel determinant, gable misalignment, and transverse misalignment, is discussed in detail. First, through data collection and statistics on new residential areas in Xining, the most representative building parameters are extracted, including building layout, building height, number of stories, and building shape. Second, according to the extracted building parameters, a general model is established and analyzed with Rhinoceros 6.0 and its plug-in Grasshopper. Finally, the results of the various simulations and data analyses are presented in a visualized way. The results show that there are great differences in the solar energy potential of building facades at different locations of a residential area under the three typical building layouts. Generally speaking, the solar energy potential of the west peripheral location is the largest, followed by the east peripheral location, with the middle location the smallest. At the same deflection angle, a west deflection yields greater solar energy potential than an east deflection. In addition, the optimal building azimuth range under these three typical building layouts is obtained: within this range, the solar energy potential of the residential area always maintains a high level, while beyond it the potential drops sharply. Finally, it is found that the solar energy potential is maximized not when the facade faces due south but at a deflection of 5° or 15° south by west. The results of this study can provide a decision analysis basis for residential design in Xining to improve solar energy utilization potential, and a reference for the solar energy utilization design of urban residential buildings in other similar areas.
Keywords: building facade, solar energy potential, solar radiation, urban residential area, visualization, Xining city
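The azimuth sweep behind such a result can be illustrated with a crude clear-sky proxy: irradiance on a vertical facade scales with the cosine of the sun incidence angle on the facade's outward normal. The sun position below is a single invented afternoon sample (slightly west of south), whereas the paper's simulations integrate radiation over the whole year in Rhinoceros/Grasshopper; the sketch only shows why a small westward deflection can come out on top.

```python
import math

def facade_irradiance_proxy(facade_azimuth_deg, sun_azimuth_deg, sun_altitude_deg):
    """Cosine of the incidence angle between the sun direction and a
    vertical facade's outward normal, clipped at zero when the sun is
    behind the facade. Azimuths: 0 = due south, positive = west."""
    rel = math.radians(sun_azimuth_deg - facade_azimuth_deg)
    alt = math.radians(sun_altitude_deg)
    return max(0.0, math.cos(alt) * math.cos(rel))

# Sweep facade azimuth from 30 deg east to 30 deg west of south against a
# hypothetical sun at azimuth 10 deg west of south, altitude 35 deg.
best = max(range(-30, 31, 5),
           key=lambda az: facade_irradiance_proxy(az, 10.0, 35.0))
print(best)  # 10
```

Under this single-sample proxy the best facade azimuth simply tracks the sun; in a full annual simulation the asymmetric distribution of clear hours and obstructions is what shifts the optimum a few degrees south by west.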
Procedia PDF Downloads 179
25081 The Economic Limitations of Defining Data Ownership Rights
Authors: Kacper Tomasz Kröber-Mulawa
Abstract:
This paper addresses the topic of data ownership from an economic perspective and provides examples of the economic limitations of data property rights, identified using the methods and approaches of the economic analysis of law. To build the background for this economic focus, a short overview of data and data ownership in the EU's legal system is provided first. It includes a short introduction to their political and social importance and highlights relevant viewpoints, stressing the importance of a Single Market for data but also the far-reaching regulation of data governance and privacy (including the distinctions between personal and non-personal data, and between data held by public bodies and by private businesses). The main discussion of the paper builds upon this briefly outlined legal basis as well as on the methods and approaches of the economic analysis of law.
Keywords: antitrust, data, data ownership, digital economy, property rights
Procedia PDF Downloads 81
25080 Protecting the Cloud Computing Data Through the Data Backups
Authors: Abdullah Alsaeed
Abstract:
Virtualized computing and cloud computing infrastructures are no longer buzzwords or marketing terms; they are a core reality in today's corporate Information Technology (IT) organizations. Hence, developing effective and efficient methodologies for data backup and data recovery is required more than ever. The purpose of data backup and recovery techniques is to help organizations strategize their business continuity and disaster recovery approaches. To accomplish this strategic objective, a variety of mechanisms have been proposed in recent years. This research paper explores and examines the latest techniques and solutions for providing data backup and restoration for cloud computing platforms.
Keywords: data backup, data recovery, cloud computing, business continuity, disaster recovery, cost-effective, data encryption
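A minimal sketch of one such mechanism is an incremental backup that copies only new or changed files, detected by comparing SHA-256 digests. It uses only the Python standard library and is illustrative rather than any specific product's method; real cloud backup services add encryption, versioning, and off-site replication on top of this idea.

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path):
    """SHA-256 of a file's contents, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def incremental_backup(src_dir, dst_dir):
    """Copy only files that are new or whose contents changed since the
    last backup; return the relative paths of the files copied."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(src.rglob("*")):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or file_digest(target) != file_digest(f):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied.append(str(f.relative_to(src)))
    return copied
```

Running the function twice in a row copies nothing the second time, which is exactly the property that makes incremental schemes cheaper than full backups for recurring business-continuity snapshots.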
Procedia PDF Downloads 87
25079 Missing Link Data Estimation with Recurrent Neural Network: An Application Using Speed Data of Daegu Metropolitan Area
Authors: JaeHwan Yang, Da-Woon Jeong, Seung-Young Kho, Dong-Kyu Kim
Abstract:
In terms of ITS, information on link characteristics is an essential factor for planning and operation. In practice, however, not every link has sensors installed on it; a link with no data on it is called a "missing link". The purpose of this study is to impute the data of these missing links. To obtain these data, this study applies machine learning: through a deep learning process, missing link data can be estimated from the data of the links that are instrumented. For the deep learning process, this study uses a Recurrent Neural Network to handle the time-series nature of road data. As input data, Dedicated Short-Range Communications (DSRC) data from Dalgubul-daero in the Daegu Metropolitan Area were fed into the learning process. The network takes the present data of 17 links as input and uses 2 hidden layers to estimate the data of 1 missing link. As a result, the forecasted data for the target link show about 94% accuracy compared with the actual data.
Keywords: data estimation, link data, machine learning, road network
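A stripped-down version of the idea, an Elman-style recurrent cell mapping the 17 neighbouring links' speeds to one missing-link estimate per time step, can be sketched with NumPy. The weights here are random stand-ins for a trained model (the paper's architecture and training procedure are not reproduced), so the outputs are shape-correct but not meaningful predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions matching the abstract: 17 instrumented links in, 1 missing
# link out; the hidden size is an arbitrary choice for the sketch.
n_in, n_hidden, n_out = 17, 8, 1

# Randomly initialised weights stand in for a trained model.
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))

def rnn_forward(sequence):
    """Run an Elman RNN over a (time, n_in) sequence of neighbour speeds,
    emitting one missing-link estimate per time step. The hidden state h
    carries information forward through time."""
    h = np.zeros(n_hidden)
    outputs = []
    for x_t in sequence:
        h = np.tanh(W_xh @ x_t + W_hh @ h)
        outputs.append(W_hy @ h)
    return np.array(outputs)

# 12 time steps of hypothetical DSRC-like speeds (km/h) on the 17 links.
speeds = rng.uniform(20, 60, size=(12, n_in))
estimates = rnn_forward(speeds)
print(estimates.shape)  # (12, 1)
```

Training would fit the three weight matrices by backpropagation through time against periods when the target link's data happen to be available, then apply the fitted network when they are not.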
Procedia PDF Downloads 510
25078 Navigating Construction Project Outcomes: Synergy Through the Evolution of Digital Innovation and Strategic Management
Authors: Derrick Mirindi, Frederic Mirindi, Oluwakemi Oshineye
Abstract:
The persistently high rate of construction project failures worldwide is often blamed on the difficulty of managing stakeholders, which highlights the crucial role of strategic management (SM) in achieving project success. This study investigates how integrating digital tools into the SM framework can effectively address stakeholder-related challenges. The work focuses specifically on the impact of evolving digital tools, such as Project Management Software (PMS) (e.g., Basecamp and Wrike), Building Information Modeling (BIM) (e.g., Tekla BIMsight and Autodesk Navisworks), Virtual and Augmented Reality (VR/AR) (e.g., Microsoft HoloLens), drones and remote monitoring, and social media and web-based platforms, in improving stakeholder engagement and project outcomes. Drawing on the existing literature and examples of failed projects, the study highlights how the evolution of digital tools can serve as a facilitator within the strategic management process. These tools offer benefits such as real-time data access, enhanced visualization, and more efficient workflows that mitigate stakeholder challenges in construction projects. The findings indicate that integrating digital tools with SM principles effectively addresses stakeholder challenges, resulting in improved project outcomes and stakeholder satisfaction. The research advocates a combined approach that embraces both strategic management and digital innovation to navigate the complex stakeholder landscape in construction projects.
Keywords: strategic management, digital tools, virtual and augmented reality, stakeholder management, building information modeling, project management software
Procedia PDF Downloads 83