Search results for: data mapping
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25729

24649 New Two-Way Map-Reduce Join Algorithm: Hash Semi Join

Authors: Marwa Hussein Mohamed, Mohamed Helmy Khafagy, Samah Ahmed Senbel

Abstract:

MapReduce is a programming model used to handle and support massive data sets. The rapid growth of data size and the rise of big data make the analysis of such data one of the most important issues today. MapReduce analyses data and extracts useful information using two simple functions, map and reduce, which are the only parts written by the programmer, while load balancing, fault tolerance and high scalability are provided by the framework. Join is among the most important operations in data analysis, but MapReduce does not support joins directly. This paper reviews two two-way MapReduce join algorithms, semi-join and per-split semi-join, and proposes a new algorithm, hash semi-join, which uses a hash table to increase performance by eliminating unused records as early as possible and by performing the join with a hash table rather than matching join keys in a map function in the second phase. Using hash tables does not noticeably affect memory size, because only the matched records from the second table are stored. Our experimental results show that the hash semi-join algorithm outperforms the other two algorithms as the data size grows from 10 million to 500 million records, with running time increasing according to the number of joined records between the two tables.
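
A minimal, illustrative Python sketch of the hash-table idea behind a hash semi-join (the function and table names are hypothetical, and the two phases of a real MapReduce job are simulated in a single process):

```python
# Sketch of a two-phase hash semi-join, simulated in plain Python.
# Phase 1: collect the join keys of table R and drop unused records early.
# Phase 2: build a hash table on the filtered records and probe it,
# instead of re-matching join keys inside a map function.

def hash_semi_join(table_r, table_s, key_r, key_s):
    # Phase 1 (map side): distinct join keys of R.
    r_keys = {row[key_r] for row in table_r}

    # Phase 2: keep only S records whose key appears in R. Only the
    # matched records are stored, so the hash table stays small.
    matched_s = {}
    for row in table_s:
        k = row[key_s]
        if k in r_keys:
            matched_s.setdefault(k, []).append(row)

    # Final join: probe the hash table with R instead of shuffling S again.
    for row in table_r:
        for match in matched_s.get(row[key_r], []):
            yield {**row, **match}

orders = [{"cust_id": 1, "item": "book"}, {"cust_id": 3, "item": "pen"}]
customers = [{"cust_id": 1, "name": "Ada"}, {"cust_id": 2, "name": "Bob"}]
print(list(hash_semi_join(orders, customers, "cust_id", "cust_id")))
```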

Keywords: MapReduce, Hadoop, semi-join, two-way join

Procedia PDF Downloads 513
24648 Map Matching Performance under Various Similarity Metrics for Heterogeneous Robot Teams

Authors: M. C. Akay, A. Aybakan, H. Temeltas

Abstract:

Aerial and ground robots offer different advantages in different missions. Aerial robots can move quickly and obtain a wide view of the area, but they cannot carry heavy payloads. Unmanned ground vehicles (UGVs), on the other hand, are slow-moving vehicles but can carry heavier payloads than unmanned aerial vehicles (UAVs). In this context, we investigate the performance of various similarity metrics for providing a common map for a Heterogeneous Robot Team (HRT) in complex environments. Local 3D maps of the environment are gathered using lidar odometry and octree mapping. In order to obtain a common map for the HRT, information-theoretic similarity metrics are exploited. All of these similarity metrics gave accurate results within acceptable simulation times and can be used in different types of applications. For a heterogeneous multi-robot team, these methods can be used to match different types of maps.
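
A toy sketch of one information-theoretic map similarity measure, mutual information between two occupancy grids (grid sizes, noise levels and the binary occupancy encoding are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Mutual information between two occupancy grids (free = 0, occupied = 1).
# Real maps would come from lidar odometry and octree mapping.
def mutual_information(map_a, map_b, bins=2):
    joint, _, _ = np.histogram2d(map_a.ravel(), map_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
ground = (rng.random((64, 64)) > 0.7).astype(int)   # UGV map
aerial = ground.copy()
flip = rng.random((64, 64)) > 0.9                   # UAV sees ~10% differently
aerial[flip] = 1 - aerial[flip]
print(f"MI(ground, aerial) = {mutual_information(ground, aerial):.3f} bits")
```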

Keywords: common maps, heterogeneous robot team, map matching, information-theoretic similarity metrics

Procedia PDF Downloads 168
24647 Using Implicit Data to Improve E-Learning Systems

Authors: Slah Alsaleh

Abstract:

In recent years, with the popularity of the internet and technology, e-learning has become a major part of most education systems. One of the advantages e-learning systems provide is the large amount of information available about students' behavior while they interact with the system. Such information is very rich, and it can be used to improve the capability and efficiency of e-learning systems. This paper discusses how e-learning can benefit from implicit data in different ways, including creating homogeneous groups of students, evaluating students' learning, creating behavior profiles for students and identifying students through their behaviors.
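
As a toy illustration of the first of these uses, a clustering sketch on implicit behaviour data (the three features and their values are invented; a real system would derive them from interaction logs):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical features per student: logins per week, average session
# minutes, forum posts. Clustering groups students with similar behaviour.
behaviour = np.array([
    [12, 45, 9], [11, 50, 7], [2, 10, 0],
    [3, 8, 1], [13, 40, 8], [1, 12, 0],
])
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(behaviour)
print(groups)   # e.g. two homogeneous groups: active vs. passive learners
```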

Keywords: e-learning, implicit data, user behavior, data mining

Procedia PDF Downloads 310
24646 Enabling Quantitative Urban Sustainability Assessment with Big Data

Authors: Changfeng Fu

Abstract:

Sustainable urban development has been widely accepted as common sense in modern urban planning and design. However, the measurement and assessment of urban sustainability, especially quantitative assessment, has always been an issue preoccupying planning and design professionals. This paper presents ongoing research on principles and technologies for quantitative urban sustainability assessment, which aims to integrate indicators, geospatial and geo-referenced data, and assessment techniques into a single mechanism. It is based on the principles and techniques of geospatial analysis with GIS and on statistical analysis methods. Decision-making technologies and methods such as AHP and SMART are also adopted to derive overall assessment conclusions. The possible interfaces and the presentation of data and quantitative assessment results are also described. This research is based on the knowledge, situations and data sources of the UK, but it is potentially adaptable to other countries or regions. The implementation potential of the mechanism is also discussed.
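
A hedged sketch of the AHP step mentioned above, deriving indicator weights from a pairwise comparison matrix via its principal eigenvector (the 3x3 matrix, comparing, say, land-use, energy and transport indicators, is purely illustrative):

```python
import numpy as np

# Saaty-style pairwise comparison matrix: A[i, j] = importance of i over j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                     # normalised priority weights
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index
print("weights:", np.round(w, 3), " CI:", round(ci, 3))
```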

Keywords: urban sustainability assessment, quantitative analysis, sustainability indicator, geospatial data, big data

Procedia PDF Downloads 359
24645 Development of Generalized Correlation for Liquid Thermal Conductivity of N-Alkane and Olefin

Authors: A. Ishag Mohamed, A. A. Rabah

Abstract:

The objective of this research is to develop a generalized correlation for predicting the thermal conductivity of liquid n-alkanes and alkenes. There is little research on, and a lack of correlations for, the thermal conductivity of these liquids in the open literature. The available experimental data covering the n-alkane and alkene groups were collected and assumed to correlate with temperature using the Filippov correlation. Nonparametric regression with the Grace algorithm was used to develop the generalized correlation model. A spreadsheet program based on Microsoft Excel was used to plot the data and calculate the values of the coefficients. The results obtained were compared with the data found in Perry's Chemical Engineers' Handbook. The experimental data were correlated over the temperature range 273.15 to 673.15 K, with R² = 0.99. The developed correlation reproduced experimental data that were not included in the regression with an absolute average percent deviation (AAPD) of less than 7%. The spreadsheet is thus quite accurate and produces reliable data.
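
A minimal sketch of the validation metric used above, the absolute average percent deviation (AAPD); the temperature/conductivity values and the simple linear k(T) fit are illustrative stand-ins, not the Filippov form or the paper's data:

```python
import numpy as np

def aapd(k_exp, k_pred):
    """Absolute average percent deviation between experiment and prediction."""
    k_exp, k_pred = np.asarray(k_exp), np.asarray(k_pred)
    return 100.0 * np.mean(np.abs((k_pred - k_exp) / k_exp))

T = np.array([293.15, 313.15, 333.15, 353.15])      # K
k_exp = np.array([0.137, 0.133, 0.128, 0.124])      # W/(m.K), n-decane-like
a, b = np.polyfit(T, k_exp, 1)                      # generic k = a*T + b fit
print(f"AAPD = {aapd(k_exp, a * T + b):.2f} %")
```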

Keywords: n-alkanes, n-alkenes, nonparametric regression

Procedia PDF Downloads 654
24644 Survey on Arabic Sentiment Analysis in Twitter

Authors: Sarah O. Alhumoud, Mawaheb I. Altuwaijri, Tarfa M. Albuhairi, Wejdan M. Alohaideb

Abstract:

Large-scale data stream analysis has lately become one of the important business and research priorities. Social networks like Twitter and other micro-blogging platforms hold an enormous amount of data that is large in volume, velocity and variety. Extracting valuable information and trends out of these data would aid better understanding and decision-making. Multiple analysis techniques have been deployed for English content, while Arabic, one of the languages that produce the largest amounts of data over social networks, remains among the least analyzed. This paper is a survey of the research efforts to analyze Arabic content on Twitter, focusing on the tools and methods used to extract sentiment from Arabic Twitter content.

Keywords: big data, social networks, sentiment analysis, twitter

Procedia PDF Downloads 576
24643 Estimating Current Suicide Rates Using Google Trends

Authors: Ladislav Kristoufek, Helen Susannah Moat, Tobias Preis

Abstract:

Data on the number of people who have committed suicide tend to be reported with a substantial time lag of around two years. We examine whether online activity measured by Google searches can help us improve estimates of the number of suicide occurrences in England before official figures are released. Specifically, we analyse how data on the number of Google searches for the terms “depression” and “suicide” relate to the number of suicides between 2004 and 2013. We find that estimates drawing on Google data are significantly better than estimates using previous suicide data alone. We show that a greater number of searches for the term “depression” is related to fewer suicides, whereas a greater number of searches for the term “suicide” is related to more suicides. Data on suicide-related search behaviour can thus be used to improve current estimates of the number of suicide occurrences.
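
A hedged sketch of the nowcasting idea: augment an autoregressive baseline (the previous period's suicide count) with search volumes for “depression” and “suicide”. All numbers below are synthetic, not the England data, and the simple least-squares comparison stands in for the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                                       # ten annual observations
searches_dep = rng.normal(50, 5, n)          # more "depression" searches ...
searches_sui = rng.normal(30, 5, n)          # ... fewer suicides, and vice versa
suicides = 500 - 2.0 * searches_dep + 3.0 * searches_sui + rng.normal(0, 5, n)
lagged = np.roll(suicides, 1); lagged[0] = suicides[0]

X_base = np.column_stack([np.ones(n), lagged])                  # baseline
X_full = np.column_stack([X_base, searches_dep, searches_sui])  # + search data
for name, X in [("baseline", X_base), ("with search data", X_full)]:
    beta = np.linalg.lstsq(X, suicides, rcond=None)[0]
    rmse = np.sqrt(np.mean((suicides - X @ beta) ** 2))
    print(f"{name}: in-sample RMSE = {rmse:.1f}")
```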

Keywords: nowcasting, search data, Google Trends, official statistics

Procedia PDF Downloads 357
24642 Implementing Urban Rainwater Harvesting Systems: Between Policy and Practice

Authors: Natàlia Garcia Soler, Timothy Moss

Abstract:

Despite the multiple benefits of sustainable urban drainage, as demonstrated in numerous case studies across the world, urban rainwater harvesting techniques are generally restricted to isolated model projects. The leap from niche to mainstream has, in most cities, proved an elusive goal. Why policies promoting rainwater harvesting are limited in their widespread implementation has seldom been subjected to systematic analysis. Much of the literature on the policy, planning and institutional contexts of these techniques focus either on their potential benefits or on project design, but very rarely on a critical-constructive analysis of past experiences of implementation. Moreover, the vast majority of these contributions are restricted to single-case studies. There is a dearth of knowledge with respect to, firstly, policy implementation processes and, secondly, multi-case analysis. Insights from both, the authors argue, are essential to inform more effective rainwater harvesting in cities in the future. This paper presents preliminary findings from a research project on rainwater harvesting in cities from a social science perspective that is funded by the Swedish Research Foundation (Formas). This project – UrbanRain – is examining the challenges and opportunities of mainstreaming rainwater harvesting in three European cities. The paper addresses two research questions: firstly, what lessons can be learned on suitable policy incentives and planning instruments for rainwater harvesting from a meta-analysis of the relevant international literature and, secondly, how far these lessons are reflected in a study of past and ongoing rainwater harvesting projects in a European forerunner city. This two-tier approach frames the structure of the paper. We present, first, the results of the literature analysis on policy and planning issues of urban rainwater harvesting. Here, we analyze quantitatively and qualitatively the literature of the past 15 years on this topic in terms of thematic focus, issues addressed and key findings and draw conclusions on research gaps, highlighting the need for more studies on implementation factors, actor interests, institutional adaptation and multi-level governance. In a second step we focus in on the experiences of rainwater harvesting in Berlin and present the results of a mapping exercise on a wide variety of projects implemented there over the last 30 years. Here, we develop a typology to characterize the rainwater harvesting projects in terms of policy issues (what problems and goals are targeted), project design (which kind of solutions are envisaged), project implementation (how and when they were implemented), location (whether they are in new or existing urban developments) and actors (which stakeholders are involved and how), paying particular attention to the shifting institutional framework in Berlin. Mapping and categorizing these projects is based on a combination of document analysis and expert interviews. The paper concludes by synthesizing the findings, identifying how far the goals, governance structures and instruments applied in the Berlin projects studied reflect the findings emerging from the meta-analysis of the international literature on policy and planning issues of rainwater harvesting and what implications these findings have for mainstreaming such techniques in future practice.

Keywords: institutional framework, planning, policy, project implementation, urban rainwater management

Procedia PDF Downloads 287
24641 On the Network Packet Loss Tolerance of SVM Based Activity Recognition

Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir

Abstract:

In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model, and its multi-activity classification performance when data are received over a lossy wireless sensor network, are examined. Initially, the classification algorithm we use is evaluated in terms of resilience to random data loss, using 3D acceleration sensor data for sitting, lying, walking and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality of service on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM based classification algorithm has not been studied before.
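
A sketch of the kind of robustness test described: train an SVM on clean acceleration-like features, then corrupt a fraction of the test samples to mimic packet loss. The four synthetic Gaussian classes and the zeros-for-lost-packets convention are assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
centers = np.array([[0, 0, 9.8], [0, 9.8, 0], [5, 5, 5], [9.8, 0, 0]])
X = np.vstack([c + rng.normal(0, 0.8, (100, 3)) for c in centers])
y = np.repeat([0, 1, 2, 3], 100)          # sit, lie, walk, stand (labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
for loss in [0.0, 0.3, 0.6]:              # fraction of lost readings
    X_lossy = X_te.copy()
    mask = rng.random(X_lossy.shape) < loss
    X_lossy[mask] = 0.0                   # lost packets read as zeros
    print(f"loss={loss:.0%}  accuracy={clf.score(X_lossy, y_te):.2f}")
```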

Keywords: activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss

Procedia PDF Downloads 475
24640 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company

Authors: Rahma Saleh Hussein Al Balushi

Abstract:

Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO55001 certification, which requires setting out documented Standard Operating Procedures (SOP). Hence, a documented SOP for the geographical information system data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed as such by the Asset Information & GIS department. This paper describes in detail the current GIS data submission process and the journey of developing it. The methodology used to develop the process is based on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, due to this process, GIS is now integrated, or ready to be integrated, with other systems and serves as the source of data for all OETC users. Some decisions related to issuing No Objection Certificates (NOC) for excavation permits and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) are now made based on GIS data availability. In addition, defining agreed and documented procedures for data collection, data systems update, data release/reporting and data alterations has also contributed to reducing missing attributes and enhancing the data quality index of GIS transmission data. A considerable difference in Geodatabase (GDB) completeness percentage was observed between 2017 and 2022. Overall, we conclude that, through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.

Keywords: asset management ISO55001, standard procedures process, governance, CMMS

Procedia PDF Downloads 125
24639 Effects of Data Correlation in a Sparse-View Compressive Sensing Based Image Reconstruction

Authors: Sajid Abas, Jon Pyo Hong, Jung-Ryun Le, Seungryong Cho

Abstract:

Computed tomography and laminography are heavily investigated within compressive sensing based image reconstruction frameworks to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Nowadays, researchers are actively working on optimizing compressive sensing based iterative image reconstruction algorithms to obtain better-quality images. However, the effects of the sampled data's properties on the reconstructed image's quality, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigate the effects of two data properties, sampling density and data incoherence, on the image reconstructed by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We found that in a compressive sensing based image reconstruction framework, image quality depends mainly on data incoherence when the data are uniformly sampled.
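
A toy illustration of the compressive-sensing setting itself: recovering a sparse signal from few incoherent measurements with ISTA (iterative soft-thresholding). This stands in for the CT/laminography solvers and is not the spherical sinusoidal scanning method; signal sizes and the Gaussian sensing matrix are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))  # dense Gaussian: highly incoherent
b = A @ x_true

# ISTA for min ||Ax - b||^2 + lam * ||x||_1: gradient step + soft threshold.
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / (largest singular value)^2
lam = 0.01
for _ in range(500):
    z = x - step * A.T @ (A @ x - b)
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```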

Keywords: computed tomography, computed laminography, compressive sensing, low-dose

Procedia PDF Downloads 464
24638 Generalized Dirac Oscillators Associated with Non-Hermitian Quantum Mechanical Systems

Authors: Debjit Dutta, P. Roy, O. Panella

Abstract:

In recent years, non-Hermitian interactions in non-relativistic as well as relativistic quantum mechanics have been examined from various aspects. Interestingly, for such systems a class of potentials, namely the PT-symmetric and η-pseudo-Hermitian ones, admit real eigenvalues despite being non-Hermitian, and analogues of those systems have been experimentally verified. It should be noted that relativistic non-Hermitian (PT-symmetric) interactions can be realized in optical structures, and there also exists a photonic realization of the (1+1)-dimensional Dirac oscillator. We have thoroughly studied generalized Dirac oscillators with non-Hermitian interactions in (1+1) dimensions. To be more specific, we have examined η-pseudo-Hermitian interactions within the framework of the generalized Dirac oscillator in (1+1) dimensions. In particular, we have obtained a class of interactions which are η-pseudo-Hermitian, and the metric operator η has also been found explicitly. Exact solutions of the generalized Dirac oscillator are possible for some choices of the interactions. Subsequently, we have employed the mapping between the generalized Dirac oscillator and the Jaynes-Cummings (JC) model via spin flip to obtain a class of exactly solvable non-Hermitian JC as well as anti-Jaynes-Cummings (AJC) type models.
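
For reference, a standard form of the (1+1)-dimensional Dirac oscillator that such generalized models build on, together with the pseudo-Hermiticity condition; the notation is ours and the paper's specific interaction terms are not reproduced:

```latex
% (1+1)-dimensional Dirac oscillator: the minimal substitution
% p -> p - i m \omega \beta x in the free Dirac Hamiltonian.
\[
  H \,=\, c\,\alpha\bigl(p - i m \omega \beta x\bigr) + \beta m c^{2},
  \qquad \alpha = \sigma_x,\ \ \beta = \sigma_z .
\]
% A non-Hermitian H is \eta-pseudo-Hermitian when a positive metric
% operator \eta exists with
\[
  H^{\dagger} \,=\, \eta\, H\, \eta^{-1},
\]
% which guarantees a real spectrum in the unbroken regime.
```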

Keywords: Dirac oscillator, non-Hermitian quantum system, Hermitian, relativistic

Procedia PDF Downloads 459
24637 Fuzzy Wavelet Model to Forecast the Exchange Rate of IDR/USD

Authors: Tri Wijayanti Septiarini, Agus Maman Abadi, Muhammad Rifki Taufik

Abstract:

The IDR/USD exchange rate can serve as an indicator for analysing the Indonesian economy. The exchange rate is an important factor because it has a large overall effect on the Indonesian economy, so analysis of exchange rate data is needed. Decomposing the IDR/USD exchange rate data into frequency and time components can help the government monitor the Indonesian economy. The method used here is very effective at identifying such patterns, gives highly accurate results and has a simple structure. In this paper, the data used are weekly exchange rates from December 17, 2010 to November 11, 2014.
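
A sketch of the first stage only, a discrete wavelet decomposition of a weekly series into approximation (trend) and detail (frequency) parts using PyWavelets; the fuzzy Mamdani forecasting stage is not shown, and the series below is synthetic, not the 2010-2014 IDR/USD data:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
weeks = np.arange(208)                                  # ~4 years of weekly data
rate = 9000 + 8 * weeks + 120 * np.sin(weeks / 8) + rng.normal(0, 40, 208)

coeffs = pywt.wavedec(rate, "db4", level=3)             # [cA3, cD3, cD2, cD1]
# Reconstruct the trend alone by zeroing all detail coefficients.
trend = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
resid = rate - trend[:len(rate)]
print("approximation keeps", len(coeffs[0]), "coefficients;",
      "trend RMSE vs. raw:", round(float(np.sqrt(np.mean(resid ** 2))), 1))
```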

Keywords: exchange rate, fuzzy Mamdani, discrete wavelet transforms, fuzzy wavelet

Procedia PDF Downloads 571
24636 Humanising Digital Healthcare to Build Capacity by Harnessing the Power of Patient Data

Authors: Durhane Wong-Rieger, Kawaldip Sehmi, Nicola Bedlington, Nicole Boice, Tamás Bereczky

Abstract:

Patient-generated health data should be seen as an expression of the experience of patients, including outcomes reflecting the impact a treatment or service had on their physical health and wellness. We discuss how the healthcare system can reach a place where digital is a determinant of health, where data are generated by patients and respected, and where patients' contribution to science is acknowledged, and we explore the biggest barriers facing this. The International Experience Exchange with Patient Organisations' Position Paper is based on a global patient survey conducted in Q3 2021 that received 304 responses. The results were discussed and validated by 15 patient experts and supplemented with literature research; the results presented here are a subset. Our research showed that patient communities want to influence how their data are generated, shared, and used, and our study concludes that a reasonable framework is needed to protect the integrity of patient data, minimise abuse, and build trust. The results also demonstrated a need for patient communities to have more influence and control over how health data are generated, shared, and used, and they clearly highlight that the community feels there is a lack of clear policies on data sharing.

Keywords: digital health, equitable access, humanise healthcare, patient data

Procedia PDF Downloads 82
24635 Use of Machine Learning in Data Quality Assessment

Authors: Bruno Pinto Vieira, Marco Antonio Calijorne Soares, Armando Sérgio de Aguiar Filho

Abstract:

Nowadays, a massive amount of information is produced by different data sources, including mobile devices and transactional systems. In this scenario, concerns arise about how to maintain or establish data quality, which is now treated as a product to be defined, measured, analyzed, and improved to meet the needs of consumers, that is, those who use these data in decision making and in company strategies. Information of low quality can lead to issues that consume time and money, such as missed business opportunities, inadequate decisions, and bad risk management actions. The step of identifying, evaluating, and selecting data sources of adequate quality for a given need has become a costly task for users, since the sources do not provide information about their quality. Traditional data quality control methods are based on user experience or business rules, which limits performance and slows down the process with less than desirable accuracy. Using advanced machine learning algorithms, it is possible to take advantage of computational resources to overcome these challenges and add value for companies and users. In this study, machine learning is applied to data quality analysis on different datasets, seeking to compare the performance of the techniques according to the dimensions of quality assessment. As a result, we were able to create a ranking of the approaches used, as well as a system that is able to carry out data quality assessment automatically.
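
A hypothetical sketch of ML-based quality assessment: score records on a few quality dimensions and learn to predict an acceptability label. The features (completeness, validity, consistency), the labels and the random-forest choice are illustrative stand-ins for a real profiling pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
completeness = rng.uniform(0.3, 1.0, n)   # share of non-null fields per record
validity = rng.uniform(0.3, 1.0, n)       # share of fields passing format rules
consistency = rng.uniform(0.3, 1.0, n)    # cross-field agreement score
X = np.column_stack([completeness, validity, consistency])
# Synthetic "acceptable / not acceptable" labels standing in for human review.
y = (X.mean(axis=1) + rng.normal(0, 0.05, n) > 0.7).astype(int)

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print("mean CV accuracy:", scores.mean().round(3))
```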

Keywords: machine learning, data quality, quality dimension, quality assessment

Procedia PDF Downloads 148
24634 Exploring Data Leakage in EEG Based Brain-Computer Interfaces: Overfitting Challenges

Authors: Khalida Douibi, Rodrigo Balp, Solène Le Bars

Abstract:

In the medical field, applications involving human experiments are frequently associated with small sample sizes, which makes the training of machine learning models quite sensitive and therefore neither very robust nor generalizable. This is notably the case in Brain-Computer Interface (BCI) studies, where the sample size rarely exceeds 20 subjects or a small number of trials. To address this problem, several resampling approaches are often used during the data preparation phase, a critical step in any data science analysis process. One naive approach usually applied by data scientists consists of transforming the entire database before the resampling phase. However, this can cause a model's performance to be incorrectly estimated when making predictions on unseen data. In this paper, we explore the effect of data leakage observed during our BCI experiments for device control through the real-time classification of SSVEPs (steady-state visually evoked potentials). We also study potential ways to ensure optimal validation of the classifiers during the calibration phase to avoid overfitting. The results show that the scaling step is crucial for some algorithms and should be applied after the resampling phase to avoid data leakage and improve results.
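
An illustration of the pitfall with a generic scikit-learn example (not the authors' SSVEP pipeline): scaling must be fit inside each training fold, never on the full dataset before resampling. The dataset shape mimics the small-n, wide-feature regime of BCI data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=300, n_informative=10,
                           random_state=0)   # few samples, many features
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Leaky: the scaler sees the test folds' statistics before cross-validation.
X_leaky = StandardScaler().fit_transform(X)
leaky = cross_val_score(SVC(), X_leaky, y, cv=cv).mean()

# Leak-free: the scaler is refit on each training fold inside the pipeline.
clean = cross_val_score(make_pipeline(StandardScaler(), SVC()), X, y, cv=cv).mean()
print(f"leaky CV accuracy: {leaky:.2f}  /  leak-free: {clean:.2f}")
```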

Keywords: data leakage, data science, machine learning, SSVEP, BCI, overfitting

Procedia PDF Downloads 153
24633 Merchants’ Attitudes towards Tourism Development in Mahane Yehuda Market: A Case Study

Authors: Rotem Mashkov, Noam Shoval

Abstract:

In an age when the tourist's gaze is increasingly focused on the daily lives of locals, it is evident that local food markets are being rediscovered. Traditional urban markets have succeeded in reinventing themselves as spaces for consumption, recreation, and culture, enabling authentic experiences and interpersonal interactions with the local culture. Alongside this, the pressure of tourism development may result in commercialization and retail gentrification to the point of losing the sense of local identity. The issue of finding a balance between tourism development and the preservation of unique local features is at the heart of this study and is examined using the case of the Mahane Yehuda market in Jerusalem. The research question, how merchants respond to tourism development in the Mahane Yehuda food market, focuses on local traders, a group of players usually absent from research, although they influence tourism development as well as being influenced by it. Three main research methods were combined in this study. The first two, an article survey and a comparative mapping of the business mix, were used to characterize the changes in the Mahane Yehuda market, both in image and physically. The third research method, in-depth interviews with merchants, was used to examine the traders' attitudes and responses to tourism development. The findings indicate that there has been a turnaround in the market's image over the past decade and a half. Additionally, there has been a significant physical change in the business mix, reflected in a decline of 15% in the number of stalls selling food products and delicacies. The interview data on the traders' attitudes towards tourism development were inconclusive; the traders disagreed about the economic contribution of tourism development in relation to their dependence on the tourism industry. However, there was a consensus on the need for authentic elements in the marketplace. The findings also indicate a strong link between the merchants' response to tourism development and their stall ownership status, as merchants can exercise their position in various ways depending on the type of ownership.

Keywords: business mix, Jerusalem, local food markets, Mahane Yehuda market, merchants’ attitude, ownership status, retail gentrification, tourism development, traditional urban markets

Procedia PDF Downloads 135
24632 Improved Throttled Load Balancing Approach for Cloud Environment

Authors: Sushant Singh, Anurag Jain, Seema Sabharwal

Abstract:

Cloud computing is advancing at a rapid pace and has already been adopted by a huge set of users. Its ease of use and anywhere-access potential have made it more attractive than other technologies, which has resulted in reduced deployment costs on the user side. It has also allowed big companies to sell their infrastructure and recover its installation cost. Cloud computing has its roots in grid computing, and along with the inherited characteristics of its predecessor technologies, it has also adopted the loopholes present in those technologies. Some of these loopholes have been identified and corrected recently, but others are yet to be rectified. Two major areas where scope for improvement remains are security and performance. The proposed work is devoted to performance enhancement for users of existing cloud systems by improving the basic throttled mapping approach between tasks and resources. The improved procedure has been tested using the CloudAnalyst simulator, and the results, compared with the original approach, show that the proposed work is a step ahead of existing techniques.
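
A minimal sketch of the classic throttled load-balancing idea the paper improves on, an index table of VM availability consulted per request; the class and method names are hypothetical, and the paper's own improvements are not reproduced here:

```python
class ThrottledBalancer:
    """Throttled allocation: scan an availability index for the first idle VM."""

    def __init__(self, n_vms):
        self.available = {vm: True for vm in range(n_vms)}  # index table

    def allocate(self):
        for vm, free in self.available.items():
            if free:
                self.available[vm] = False
                return vm
        return None                     # all VMs busy: request is queued/rejected

    def release(self, vm):
        self.available[vm] = True

lb = ThrottledBalancer(n_vms=2)
print(lb.allocate(), lb.allocate(), lb.allocate())  # 0 1 None
lb.release(0)
print(lb.allocate())                                # 0 becomes available again
```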

Keywords: cloud analyst, cloud computing, load balancing, throttled

Procedia PDF Downloads 249
24631 Nuclear Decay Data Evaluation for 217Po

Authors: S. S. Nafee, A. M. Al-Ramady, S. A. Shaheen

Abstract:

Evaluated nuclear decay data for the 217Po nuclide are presented in this work. These data include recommended values for the half-life T1/2 and for the α-, β−-, and γ-ray emission energies and probabilities. Decay data from the 221Rn α decay and the 217Bi β− decay are presented. Q(α) has been updated based on the recently published Atomic Mass Evaluation AME2012. In addition, the logft values were calculated using the LOGFT program from the ENSDF evaluation package, and the total internal conversion coefficients were calculated using the BrIcc program. Recommended values for the multipolarities have been assigned based on recent measurements, yielding a better intensity balance at the 254 keV and 264 keV gamma transitions.
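
For reference, the quantity computed by the LOGFT step above, in our notation: the partial half-life of one β branch and the comparative half-life logft,

```latex
% f is the phase-space (Fermi) integral and I_beta the branch intensity.
\[
  t \,=\, \frac{T_{1/2}}{I_{\beta}},
  \qquad
  \log ft \,=\, \log_{10}\!\bigl(f \, t\bigr).
\]
```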

Keywords: nuclear decay data evaluation, mass evaluation, total conversion coefficients, atomic mass evaluation

Procedia PDF Downloads 433
24630 Time-Dependent Behaviour of Reinforced Concrete Beams under Sustained and Repeated Loading

Authors: Sultan Daud, John P. Forth, Nikolaos Nikitas

Abstract:

The current study aims to highlight the impact of loading characteristics on the time evolution (focusing particularly on long-term effects) of the deformation of real reinforced concrete beams. Namely, the tension stiffening code provisions (i.e., within Eurocode 2) are reviewed with a clear intention to reassess their operational value and predictive capacity. In what follows, the experimental programme adopted, along with some preliminary findings and numerical modelling attempts, is presented. For a range of long, slender, simply supported reinforced concrete beams (4200 mm), constant sustained and repeated cyclic loadings were applied, mapping the time evolution of deformation. All experiments were carried out at the Heavy Structures Lab of the University of Leeds. During the tests, the mid-span deflection, creep coefficient and shrinkage strains were monitored for a duration of 90 days. The obtained results are set against the values predicted by Eurocode 2 and by the tools within a commercial FE package (Midas FEA), showing that existing knowledge and practice is at times over-conservative.
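
For context, the Eurocode 2 tension-stiffening provision under review (EN 1992-1-1, Sec. 7.4.3, quoted from the code, not from the paper) interpolates a deformation parameter between the uncracked and fully cracked states:

```latex
\[
  \alpha \,=\, \zeta\,\alpha_{II} + (1-\zeta)\,\alpha_{I},
  \qquad
  \zeta \,=\, 1 - \beta\left(\frac{\sigma_{sr}}{\sigma_{s}}\right)^{2},
\]
% alpha_I, alpha_II: the parameter computed for the uncracked (I) and fully
% cracked (II) sections; sigma_sr / sigma_s: steel stress at first cracking
% over steel stress under the considered load; beta = 1.0 for a single
% short-term load and 0.5 for sustained or many cycles of repeated loading,
% which is precisely the distinction probed by the tests above.
```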

Keywords: Eurocode 2, Midas FEA, repeated loading, sustained loading

Procedia PDF Downloads 347
24629 Inclusive Practices in Health Sciences: Equity Proofing Higher Education Programs

Authors: Mitzi S. Brammer

Abstract:

Given that the cultural make-up of programs of study in institutions of higher learning is becoming increasingly diverse, much has been written about cultural diversity from a university-level perspective. However, there is little data on specific programs and how they address inclusive practices when teaching and working with marginalized populations. This research study aimed to discover the baseline knowledge and attitudes of health sciences faculty, instructional staff, and students related to inclusive teaching/learning and interactions. Quantitative data were collected via an anonymous online survey (one designed for students and another for faculty/instructional staff) using a web-based program called Qualtrics. Quantitative data were analyzed for faculty/instructional staff and students, respectively, using descriptive and comparative statistics (t-tests). Additionally, some participants voluntarily engaged in a focus group discussion in which qualitative data were collected on these same variables. Collecting qualitative data to triangulate the quantitative data added trustworthiness to the overall data set. The research team analyzed the collected data, compared the identified categories and trends between faculty/staff and students, and reported the results as well as implications for future study and professional practice.

Keywords: inclusion, higher education, pedagogy, equity, diversity

Procedia PDF Downloads 67
24628 An Analysis of Sequential Pattern Mining on Databases Using Approximate Sequential Patterns

Authors: J. Suneetha, Vijayalaxmi

Abstract:

Sequential pattern mining involves applying data mining methods to large data repositories to extract usage patterns. Sequential pattern mining methodologies are used to analyze data and identify patterns. These patterns have been used to implement efficient systems that can recommend based on previously observed patterns, make predictions, improve the usability of systems, detect events, and in general help in making strategic product decisions. In this paper, we study the performance of approximate sequential pattern mining, defined as identifying patterns approximately shared by many sequences. Approximate sequential patterns can effectively summarize and represent a database by identifying the underlying trends in the data. We conduct an extensive and systematic performance study over synthetic and real data. The results demonstrate that ApproxMAP is effective and scalable in mining large sequence databases with long patterns.
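
A rough sketch of the first step of an ApproxMAP-style analysis: grouping sequences by approximate similarity so each cluster can later be aligned into a consensus pattern. The greedy clustering, the 0.6 threshold and difflib's ratio (standing in for a proper edit distance) are all simplifying assumptions:

```python
from difflib import SequenceMatcher

seqs = [list("abcd"), list("abd"), list("abxd"), list("pqr"), list("pqrs")]

def similar(s, t, threshold=0.6):
    # Similarity in [0, 1]; 1.0 means identical sequences.
    return SequenceMatcher(None, s, t).ratio() >= threshold

clusters = []
for s in seqs:                      # greedy single-pass clustering
    for c in clusters:
        if similar(s, c[0]):        # compare against the cluster's seed
            c.append(s)
            break
    else:
        clusters.append([s])
print(clusters)   # two clusters: the "ab.." sequences and the "pq.." sequences
```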

Keywords: multiple data, performance analysis, sequential pattern, sequence database scalability

Procedia PDF Downloads 342
24627 Medical Knowledge Management since the Integration of Heterogeneous Data until the Knowledge Exploitation in a Decision-Making System

Authors: Nadjat Zerf Boudjettou, Fahima Nader, Rachid Chalal

Abstract:

Knowledge management aims to acquire and represent knowledge relevant to a domain, a task or a specific organization in order to facilitate its access, reuse and evolution. This usually means building, maintaining and evolving an explicit representation of knowledge. The next step is to provide access to that knowledge, that is to say, to spread it in order to enable effective use. Knowledge management in the medical field aims to improve the performance of the medical organization by allowing individuals in the care facility (doctors, nurses, paramedics, etc.) to capture, share and apply collective knowledge in order to make optimal decisions in real time. In this paper, we propose a knowledge management approach based on integrating heterogeneous data in the medical field by creating a data warehouse, extracting knowledge from the medical data with a chosen data mining technique, and finally exploiting that knowledge in a case-based reasoning system.
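
A hypothetical sketch of the final exploitation step: retrieving the most similar past case (nearest neighbour on normalised patient features) so its solution can be reused. The case base, features and treatments are invented for illustration:

```python
import numpy as np

# Case base as it might be fed from the warehouse: feature vector -> solution.
case_base = {
    "case1": ([0.9, 0.2, 0.4], "treatment A"),
    "case2": ([0.1, 0.8, 0.7], "treatment B"),
    "case3": ([0.8, 0.3, 0.5], "treatment A"),
}
query = np.array([0.85, 0.25, 0.45])        # new patient, same feature space

# Retrieve: smallest Euclidean distance wins (1-NN retrieval).
best = min(case_base.items(),
           key=lambda kv: np.linalg.norm(query - np.array(kv[1][0])))
print("most similar case:", best[0], "-> suggested:", best[1][1])
```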

Keywords: data warehouse, data mining, knowledge discovery in databases (KDD), medical knowledge management, Bayesian networks

Procedia PDF Downloads 395
24626 Mean Shift-Based Preprocessing Methodology for Improved 3D Buildings Reconstruction

Authors: Nikolaos Vassilas, Theocharis Tsenoglou, Djamchid Ghazanfarpour

Abstract:

In this work, we explore the capability of the mean shift algorithm as a powerful preprocessing tool for improving the quality of spatial data acquired by airborne scanners over densely built urban areas. On the one hand, high-resolution image data corrupted by noise from lossy compression techniques are appropriately smoothed while preserving the optical edges; on the other, low-resolution LiDAR data in the form of a normalized Digital Surface Model (nDSM) are upsampled through the joint mean shift algorithm. Experiments on both the edge-preserving smoothing and the upsampling capabilities, using synthetic RGB-z data, show that the mean shift algorithm is superior to bilateral filtering as well as to other classical smoothing and upsampling algorithms. Applying the proposed methodology to the 3D reconstruction of buildings in a pilot region of Athens, Greece, results in a significant visual improvement of the 3D building block model.
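
A compact 1D mean shift filtering sketch (flat kernel) of the kind used for edge-preserving smoothing: each value drifts toward the mean of its neighbours within a bandwidth, so steps survive while noise is flattened. The joint RGB-z variant of the paper is not reproduced; signal and bandwidth are illustrative:

```python
import numpy as np

def mean_shift_filter(values, bandwidth=0.3, iters=10):
    x = values.astype(float).copy()
    for _ in range(iters):
        # Mean of all original samples within `bandwidth` of each point.
        dist = np.abs(values[None, :] - x[:, None])
        w = (dist <= bandwidth).astype(float)
        x = (w @ values) / w.sum(axis=1)
    return x

rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), np.ones(50)])   # one sharp edge
noisy = signal + rng.normal(0, 0.1, 100)
smoothed = mean_shift_filter(noisy)
print("edge preserved:", abs(smoothed[55] - smoothed[45]) > 0.8)
```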

Keywords: 3D buildings reconstruction, data fusion, data upsampling, mean shift

Procedia PDF Downloads 315
24625 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company

Authors: Rahma Al Balushi

Abstract:

Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO55001 certification, which requires setting out documented Standard Operating Procedures (SOP). Hence, a documented SOP for the geographical information system data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed as such by the Asset Information & GIS department. This paper describes in detail the GIS data submission process and the journey to develop the current process. The methodology used to develop the process is based on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate and up-to-date data. Furthermore, due to this process, GIS is now integrated, or ready to be integrated, with other systems and serves as the source of data for all OETC users. Some decisions related to issuing No Objection Certificates (NOC) and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) are now made based on GIS data availability. On the other hand, defining agreed and documented procedures for data collection, data systems update, data release/reporting, and data alterations has also helped to reduce the missing attributes of GIS transmission data. A considerable difference in Geodatabase (GDB) completeness percentage was observed between 2017 and 2021. Overall, we conclude that, through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.

Keywords: asset management ISO55001, standard procedures process, governance, geodatabase, NOC, CMMS

Procedia PDF Downloads 207
24624 Importance of Ethics in Cloud Security

Authors: Pallavi Malhotra

Abstract:

This paper examines the importance of ethics in cloud computing. In modern society, cloud computing offers individuals and businesses virtually unlimited space for storing and processing data and information. Most of the data and information stored in the cloud by various users, such as banks, doctors, architects, engineers, lawyers, consulting firms, and financial institutions, require a high level of confidentiality and safeguarding. Cloud computing offers centralized storage and processing of data, and this has contributed immensely to the growth of businesses and improved sharing of information over the internet. However, the accessibility and management of data and servers by a third party raise concerns regarding the privacy of clients' information and possible manipulation of the data by third parties. This document suggests approaches that various stakeholders should take to address the ethical issues involved in cloud-computing services. Ethical education and training are key for all stakeholders involved in the handling of data and information stored or processed in the cloud.

Keywords: IT ethics, cloud computing technology, cloud privacy and security, ethical education

Procedia PDF Downloads 325
24623 The Feminism of Data Privacy and Protection in Africa

Authors: Olayinka Adeniyi, Melissa Omino

Abstract:

The field of data privacy and data protection in Africa is still evolving, with many African countries yet to enact legislation on the subject. While African governments bring their legislation up to speed in this field, the way patriarchy pervades every sector of African thought and manifests in society needs to be considered. Moreover, the laws enacted ought to be inclusive, especially towards women. This, in a nutshell, is the essence of data feminism. Data feminism is a new way of thinking about data science and data ethics informed by the ideas of intersectional feminism. Feminising data privacy and protection involves thinking of women and considering women in the issues of data privacy and protection, particularly in legislation, as is done in this paper. This line of thought is not uncommon: international and regional human rights instruments specific to women came long after the general human rights instruments, even though such provisions should arguably have been included in the original general instruments in the first place. Since legislation on data privacy is being enacted in this century, having seen the rights and shortcomings of earlier instruments, the cue should be taken to ensure inclusive, holistic legislation for data privacy and protection from the outset. Data feminism is arguably an area that has been scantily researched, albeit a needful one. With the spate of violence against women spiralling in the cyber world, compounded by COVID-19, the necessary responses of governments, and the effect of these on women and their rights, research on the feminism of data privacy and protection in Africa becomes inevitable. This paper seeks to answer the following questions: what is data feminism in the African context, and why is it important in data privacy and protection legislation; what laws, if any, exist on data privacy and protection in Africa, are they women-inclusive, and if not, why; and what measures are in place for the privacy and protection of women in Africa, and how can these be made possible? The paper aims to investigate data privacy and protection in Africa, the legal framework, and the protection or provision it offers women, if any. It further researches the importance and necessity of feminising data privacy and protection, the effect of its absence, the challenges or bottlenecks in attaining this feat, and the possibilities of accessing data privacy and protection for African women. The paper also examines the emerging practices of data privacy and protection of women in other jurisdictions. It approaches the research through a review of papers and an analysis of laws and reports. It seeks to contribute to the existing literature in the field and is explorative in its suggestions. It suggests draft clauses to make any data privacy and protection legislation women-inclusive. It would be useful for policymaking, academia, and public enlightenment.

Keywords: feminism, women, law, data, Africa

Procedia PDF Downloads 205
24622 Evaluation of Practicality of On-Demand Bus Using Actual Taxi-Use Data through Exhaustive Simulations

Authors: Jun-ichi Ochiai, Itsuki Noda, Ryo Kanamori, Keiji Hirata, Hitoshi Matsubara, Hideyuki Nakashima

Abstract:

We conducted exhaustive simulations for data assimilation and for the evaluation of service quality under various settings in a new shared transportation system called SAVS. Computational social simulation is a key technology for designing recent social services, such as SAVS, as new transportation services. One open issue in SAVS was determining the service scale through social simulation. Using our exhaustive simulation framework, OACIS, we performed data assimilation and evaluated the effects of SAVS based on actual taxi-use data from Tajimi City, Japan. Finally, we obtained the conditions for realizing the new service at a reasonable service quality.

Keywords: on-demand bus system, social simulation, data assimilation, exhaustive simulation

Procedia PDF Downloads 321
24621 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data are used. Usually, however, the proportion of complete data is rather small, which leads to most of the information being neglected. The complete cases might also be strongly distorted, and the reason that data are missing might itself contain information, which is ignored with that approach. An interesting question, therefore, is whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (the baseline), and how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, or neural network techniques, are applied. By training the model iteratively on the imputed data, and thereby including the information of all data in the model, the distortion of the first training set, the complete data, vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms under several parameter combinations, and comparing the estimates to the actual data. After the optimal parameter set for each algorithm has been found, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimate. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
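
A sketch of the imputation-versus-baseline comparison described above, using a k-nearest-neighbours imputer as a stand-in (the study's actual algorithms are clustering, PCA and neural networks; the data, feature names and 40% missingness rate are synthetic assumptions):

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
rooms = rng.integers(1, 6, 300).astype(float)
price = 500 * rooms + rng.normal(0, 200, 300)     # willingness to pay
X = np.column_stack([rooms, price])
X_missing = X.copy()
X_missing[rng.random(300) < 0.4, 1] = np.nan      # 40% omit the price field

baseline = X_missing[~np.isnan(X_missing[:, 1])]  # complete cases only
imputed = KNNImputer(n_neighbors=5).fit_transform(X_missing)

print("rows kept: baseline", len(baseline), "vs imputed", len(imputed))
print("mean price: true {:.0f}, baseline {:.0f}, imputed {:.0f}".format(
    X[:, 1].mean(), baseline[:, 1].mean(), imputed[:, 1].mean()))
```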

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 285
24620 Laboratory Testing Regime for Quantifying Soil Collapsibility

Authors: Anne C. Okwedadi, Samson Ng’ambi, Ian Jefferson

Abstract:

Collapsible soils undergo a radical rearrangement of their particles when triggered by water, stress and/or vibration, causing a loss of volume. This loss of volume, as seen in foundation failures, has caused millions of dollars' worth of damage to public facilities and infrastructure and thus has an adverse effect on society and people. Despite these consequences and the several studies that are available, more research is still required into soil collapsibility. Discerning the pedogenesis (formation) of soils and investigating the combined effects of the different geological soil properties are key to elucidating and quantifying soil collapsibility. This study presents a novel laboratory testing regime, undertaken on soil samples, in which the effects of soil type, compactive variables (moisture content, density, void ratio, degree of saturation) and loading are analyzed. It is anticipated that the results will be useful in mapping the trend of the combined effects and thus provide a basis for evaluating the soil collapsibility or collapse potentials encountered in construction with volume loss problems attributed to collapse.
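
For context, a common index for quantifying collapse from the oedometer tests such a regime would produce (our addition; the paper's own criteria may differ) is the collapse potential on wetting at a given stress:

```latex
\[
  CP \;=\; \frac{\Delta e_{c}}{1 + e_{0}} \times 100\,\% ,
\]
% where \Delta e_c is the change in void ratio upon wetting and e_0 the
% initial (natural) void ratio of the specimen.
```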

Keywords: collapsible soil, geomorphological process, soil collapsibility properties, soil test

Procedia PDF Downloads 471