Search results for: data mining

24210 Sourcing and Compiling a Maltese Traffic Dataset MalTra

Authors: Gabriele Borg, Alexei De Bono, Charlie Abela

Abstract:

There is a constant rise in the availability of high volumes of data gathered from multiple sources, resulting in an abundance of unprocessed information that can be used to monitor patterns and trends in user behaviour. Similarly, year after year, Malta experiences ongoing population growth and an increase in mobility demand. This research takes advantage of data that is continuously being sourced and converts it into useful information related to the traffic problem on Maltese roads. The scope of this paper is to provide a methodology for creating a custom dataset (MalTra - Malta Traffic) compiled from multiple participants at various locations across the island, used to identify the most commonly taken routes and thereby expose the main areas of activity. Such use of big data underpins Intelligent Transportation Systems (ITSs), and we conclude that there is significant potential in utilising these data sources on a nationwide scale.
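
As a rough illustration of the route-aggregation step described above, the sketch below groups hypothetical GPS trip records by origin-destination pair to surface the most frequently taken routes. The column names and sample values are assumptions for illustration, not fields of the actual MalTra dataset.

```python
import pandas as pd

# Hypothetical trip records: one row per completed trip by a participant.
trips = pd.DataFrame({
    "participant": ["p1", "p2", "p1", "p3", "p2"],
    "origin_zone": ["Mosta", "Sliema", "Mosta", "Valletta", "Sliema"],
    "dest_zone":   ["Valletta", "Valletta", "Valletta", "Mosta", "Valletta"],
})

# Count how often each origin-destination pair occurs to expose busy corridors.
route_counts = (trips.groupby(["origin_zone", "dest_zone"])
                     .size()
                     .sort_values(ascending=False)
                     .rename("trip_count"))
print(route_counts.head())
```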

Keywords: Big Data, vehicular traffic, traffic management, mobile data patterns

Procedia PDF Downloads 104
24209 Comparative Study of Accuracy of Land Cover/Land Use Mapping Using Medium Resolution Satellite Imagery: A Case Study

Authors: M. C. Paliwal, A. K. Jain, S. K. Katiyar

Abstract:

Accuracy assessment is very important for the classification of satellite imagery. In order to determine the accuracy of a classified image, the assumed-true reference data are usually derived from ground truth data collected with the Global Positioning System. The classified data and the ground truth data are then compared, error matrices are prepared, and the accuracy of the data is determined. Overall and individual accuracies are calculated using different methods. The study illustrates advanced classification and accuracy assessment of land use/land cover mapping using satellite imagery. IRS-1C-LISS IV data were used for classification of the satellite imagery. The satellite image was classified using the software into fourteen classes, including water bodies, agricultural fields, forest land, urban settlement, barren land, and unclassified area. Classification of the satellite imagery and calculation of accuracy were done using ERDAS Imagine software to find out the best method. This study is based on data collected for the Bhopal city boundaries of Madhya Pradesh State, India.
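
For readers unfamiliar with error-matrix based accuracy assessment, the sketch below computes a confusion (error) matrix, overall accuracy, and per-class producer's and user's accuracies from per-pixel reference and classified labels. The class names and label vectors are hypothetical; this is not the ERDAS Imagine workflow used in the study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Hypothetical per-pixel labels: reference (ground truth) vs. classified map.
classes   = ["water", "agriculture", "forest", "urban", "barren"]
reference = np.array(["water", "forest", "urban", "agriculture", "forest", "barren"])
predicted = np.array(["water", "forest", "urban", "agriculture", "agriculture", "barren"])

cm = confusion_matrix(reference, predicted, labels=classes)   # error matrix
overall = accuracy_score(reference, predicted)                # overall accuracy

# Producer's accuracy (per reference class) and user's accuracy (per mapped class).
producers = np.diag(cm) / cm.sum(axis=1)
users     = np.diag(cm) / cm.sum(axis=0)
print(cm, overall, producers.round(2), users.round(2), sep="\n")
```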

Keywords: resolution, accuracy assessment, land use mapping, satellite imagery, ground truth data, error matrices

Procedia PDF Downloads 503
24208 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence

Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno

Abstract:

Missing data is a common challenge in statistical analyses of most clinical survey datasets. A variety of methods have been developed to deal with missing values when analysing survey data, of which imputation is the most commonly used. However, to minimize the bias introduced by imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we identify different types of missing values: missing data due to skip pattern (SPMD), undetermined missing data (UMD), and genuine missing data (GMD), and apply rough set imputation to the GMD portion of the missing data only. We used rough set imputation to evaluate the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we used p-values from the Wald test. To evaluate the precision of the prediction, we considered the width of the 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed than for the non-imputed datasets (28.7 vs. 23.4). The average confidence interval width decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method for imputing GMD improves the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.
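
The evaluation criteria described above (Wald statistics for model fit, and the width of the 95% confidence interval for the predicted probability as a precision measure) can be reproduced on any logistic regression fit. The sketch below does this with statsmodels on synthetic data; the variables and sample are assumptions, not the MESA dataset or the rough set imputation itself.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical survey-style data: two predictors and a binary outcome.
n = 500
X = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
y = (rng.random(n) < p_true).astype(int)

Xc = sm.add_constant(X)
model = sm.Logit(y, Xc).fit(disp=0)

# Wald statistic per coefficient: (beta / standard error)^2.
wald = (model.params / model.bse) ** 2

# 95% CI for the predicted probability of the outcome, via the linear predictor.
eta = Xc @ model.params
se_eta = np.sqrt(np.einsum("ij,jk,ik->i", Xc, model.cov_params(), Xc))
lower = 1 / (1 + np.exp(-(eta - 1.96 * se_eta)))
upper = 1 / (1 + np.exp(-(eta + 1.96 * se_eta)))
print("Wald statistics:", wald.round(1))
print("average CI width:", round((upper - lower).mean(), 3))
```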

Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index

Procedia PDF Downloads 166
24207 Review for Identifying Online Opinion Leaders

Authors: Yu Wang

Abstract:

Nowadays, the Internet enables its users to share information online and to interact with others. Faced with an overwhelming amount of information, Internet users become confused and begin to rely on opinion leaders' recommendations. Online opinion leaders are individuals who have professional knowledge, who use online channels to spread word-of-mouth information, and who can affect the attitudes or even the behavior of their followers to some degree. Because leveraging online opinion leaders is seen as an important way to influence potential consumers, how to identify them has become one of the hottest topics in the field. Hence, in this article, the relevant concepts and characteristics are introduced, and the research related to identifying opinion leaders is collected and divided into three categories. Finally, implications for future studies are provided.

Keywords: online opinion leaders, user attributes analysis, text mining analysis, network structure analysis

Procedia PDF Downloads 220
24206 Database Management System for Orphanages to Help Keep Track of Orphans

Authors: Srivatsav Sanjay Sridhar, Asvitha Raja, Prathit Kalra, Soni Gupta

Abstract:

A database management system keeps track of details about the people in an organisation. Few orphanages these days have shifted to a computer- and program-based system; unfortunately, most still keep only pen-and-paper records, which not only consume space but are also not eco-friendly. Viewing a person's record is a hassle, since one has to search through multiple paper records, which consumes time. This program organises all the data and can pull out any information about anyone whose data has been entered. It is also a safe form of storage, as physical records degrade over time or, worse, are destroyed by natural disasters. In this developing world, it makes sense to shift all data to an electronic storage system. The program comes with all the essential features, including creating, inserting, searching, and deleting records, as well as printing them.

Keywords: database, orphans, programming, C++

Procedia PDF Downloads 150
24205 New Two-Way Map-Reduce Join Algorithm: Hash Semi Join

Authors: Marwa Hussein Mohamed, Mohamed Helmy Khafagy, Samah Ahmed Senbel

Abstract:

MapReduce is a programming model used to handle and process massive data sets. The rapid increase in data size and the rise of big data make the analysis of such data one of the most important issues today. MapReduce is used to analyze data and extract useful information through two simple functions, map and reduce, written by the programmer, while the framework provides load balancing, fault tolerance, and high scalability. The most important operation in data analysis is the join, but MapReduce does not support joins directly. This paper reviews two-way MapReduce join algorithms, semi-join and per-split semi-join, and proposes a new algorithm, hash semi-join, that uses a hash table to increase performance by eliminating unused records as early as possible and by matching the join key against the other table with a hash table rather than with a map function in the second phase. Using hash tables does not noticeably affect memory size, because only the matched records from the second table are saved. Our experimental results show that the hash semi-join algorithm achieves higher performance than the two other algorithms as the data size grows from 10 million to 500 million records, with running time increasing according to the number of joined records between the two tables.
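
The core idea of a semi-join with a hash table can be illustrated on a single machine. The sketch below is an illustration of the technique only, under assumed toy tables; it is not the authors' Hadoop implementation.

```python
# Minimal sketch of the hash semi-join idea: build a hash set of join keys from
# the smaller table, then stream the larger table and keep only records whose
# key is in that set, so non-matching records are dropped as early as possible.

small_table = [(1, "alpha"), (3, "gamma"), (7, "eta")]          # (join_key, payload)
large_table = [(1, "x"), (2, "y"), (3, "z"), (4, "w"), (7, "v")]

# First phase: hash the join keys of the small table.
key_set = {key for key, _ in small_table}

# Semi-join: filter the large table with the hash set (unused rows never move on).
filtered = [(key, value) for key, value in large_table if key in key_set]

# Second phase: probe a hash table built from the small table instead of re-matching.
small_hash = dict(small_table)
joined = [(key, small_hash[key], value) for key, value in filtered]
print(joined)   # [(1, 'alpha', 'x'), (3, 'gamma', 'z'), (7, 'eta', 'v')]
```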

Keywords: MapReduce, Hadoop, semi-join, two-way join

Procedia PDF Downloads 510
24204 Enabling Quantitative Urban Sustainability Assessment with Big Data

Authors: Changfeng Fu

Abstract:

Sustainable urban development has become widely accepted common sense in modern urban planning and design. However, the measurement and assessment of urban sustainability, especially quantitative assessment, has always been an issue preoccupying planning and design professionals. This paper presents ongoing research on principles and techniques for quantitative urban sustainability assessment, which aims to integrate indicators, geospatial and geo-referenced data, and assessment techniques into a single mechanism. It is based on the principles and techniques of geospatial analysis with GIS and statistical analysis methods. Decision-making methods such as AHP and SMART are also adopted to derive overall assessment conclusions. The possible interfaces and presentation of data and quantitative assessment results are also described. This research is based on the knowledge, situations and data sources of the UK, but it is potentially adaptable to other countries or regions. The implementation potential of the mechanism is also discussed.

Keywords: urban sustainability assessment, quantitative analysis, sustainability indicator, geospatial data, big data

Procedia PDF Downloads 354
24203 Development of Generalized Correlation for Liquid Thermal Conductivity of N-Alkane and Olefin

Authors: A. Ishag Mohamed, A. A. Rabah

Abstract:

The objective of this research is to develop a generalized correlation for the prediction of the thermal conductivity of n-Alkanes and Alkenes. There is little research on, and a lack of correlations for, the thermal conductivity of liquids in the open literature. The available experimental data covering the n-Alkane and Alkene groups were collected. The data were assumed to correlate with temperature using the Filippov correlation. Nonparametric regression based on the Grace algorithm was used to develop the generalized correlation model. A spreadsheet program based on Microsoft Excel was used to plot the data and calculate the values of the coefficients. The results obtained were compared with the data found in Perry's Chemical Engineers' Handbook. The experimental data were correlated over the temperature range 273.15 to 673.15 K, with R² = 0.99. The developed correlation reproduced experimental data that were not included in the regression with an absolute average percent deviation (AAPD) of less than 7%. Thus the spreadsheet is quite accurate and produces reliable data.
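
The AAPD validation metric mentioned above is straightforward to compute. The sketch below shows one common definition of it; the thermal-conductivity values are hypothetical placeholders, not data from the paper.

```python
import numpy as np

def aapd(experimental, predicted):
    """Absolute average percent deviation between predicted and experimental values."""
    experimental = np.asarray(experimental, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - experimental) / np.abs(experimental))

# Hypothetical liquid thermal conductivities in W/(m·K); not values from the study.
k_exp  = [0.137, 0.128, 0.119, 0.110]
k_pred = [0.135, 0.130, 0.117, 0.112]
print(f"AAPD = {aapd(k_exp, k_pred):.2f} %")
```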

Keywords: N-Alkanes, N-Alkenes, nonparametric, regression

Procedia PDF Downloads 650
24202 Survey on Arabic Sentiment Analysis in Twitter

Authors: Sarah O. Alhumoud, Mawaheb I. Altuwaijri, Tarfa M. Albuhairi, Wejdan M. Alohaideb

Abstract:

Large-scale data stream analysis has become one of the important business and research priorities lately. Social networks like Twitter and other micro-blogging platforms hold an enormous amount of data that is large in volume, velocity, and variety. Extracting valuable information and trends out of these data would aid in better understanding and decision-making. Multiple analysis techniques have been deployed for English content. In contrast, Arabic is one of the languages that produces a large amount of data over social networks yet is among the least analyzed. This paper is a survey of the research efforts to analyze Arabic content on Twitter, focusing on the tools and methods used to extract sentiment from it.

Keywords: big data, social networks, sentiment analysis, twitter

Procedia PDF Downloads 573
24201 Estimating Current Suicide Rates Using Google Trends

Authors: Ladislav Kristoufek, Helen Susannah Moat, Tobias Preis

Abstract:

Data on the number of people who have committed suicide tends to be reported with a substantial time lag of around two years. We examine whether online activity measured by Google searches can help us improve estimates of the number of suicide occurrences in England before official figures are released. Specifically, we analyse how data on the number of Google searches for the terms “depression” and “suicide” relate to the number of suicides between 2004 and 2013. We find that estimates drawing on Google data are significantly better than estimates using previous suicide data alone. We show that a greater number of searches for the term “depression” is related to fewer suicides, whereas a greater number of searches for the term “suicide” is related to more suicides. Data on suicide-related search behaviour can be used to improve current estimates of the number of suicide occurrences.
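
The comparison described above, a baseline model using past suicide counts versus a model augmented with search-term data, can be sketched as a simple nowcasting regression. The series below are synthetic stand-ins, not the actual English statistics or Google Trends data, and the linear model is only one plausible choice.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)

# Hypothetical monthly series: suicide counts plus two search-volume indicators.
months = 120
searches_suicide = rng.normal(50, 10, months)
searches_depression = rng.normal(60, 10, months)
suicides = 400 + 2.0 * searches_suicide - 1.5 * searches_depression + rng.normal(0, 20, months)

lagged = suicides[:-1].reshape(-1, 1)                 # baseline predictor: previous value
trends = np.column_stack([searches_suicide[1:], searches_depression[1:]])
y = suicides[1:]

train, test = slice(0, 100), slice(100, None)

baseline = LinearRegression().fit(lagged[train], y[train])
augmented = LinearRegression().fit(np.hstack([lagged, trends])[train], y[train])

print("baseline MAE: ", mean_absolute_error(y[test], baseline.predict(lagged[test])))
print("augmented MAE:", mean_absolute_error(y[test], augmented.predict(np.hstack([lagged, trends])[test])))
```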

Keywords: nowcasting, search data, Google Trends, official statistics

Procedia PDF Downloads 352
24200 Bidirectional Encoder Representations from Transformers Sentiment Analysis Applied to Three Presidential Pre-Candidates in Costa Rica

Authors: Félix David Suárez Bonilla

Abstract:

A sentiment analysis service to detect polarity (positive, neutral, and negative), based on transfer learning, was built using a Spanish version of BERT and applied to tweets written in Spanish. The dataset used consisted of 11,975 reviews, which were extracted from Google Play using the google-play-scraper package. The fine-tuned BETO model used the AdamW optimizer, a batch size of 16, a learning rate of 2x10⁻⁵, and 10 epochs. The system was tested using tweets of three presidential pre-candidates from Costa Rica and was finally validated using human-labeled examples, achieving an accuracy of 83.3%.
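
A minimal fine-tuning loop with the hyperparameters stated in the abstract (AdamW, batch size 16, learning rate 2x10⁻⁵, 10 epochs) might look like the sketch below. The checkpoint name and the toy examples are assumptions; the paper's exact BETO checkpoint, dataset, and training code may differ.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Checkpoint name is an assumption (a commonly used Spanish BERT, "BETO").
# Labels: 0 = negative, 1 = neutral, 2 = positive.
checkpoint = "dccuchile/bert-base-spanish-wwm-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

texts = ["Muy buen servicio", "No me gustó nada", "Está bien"]   # toy examples only
labels = torch.tensor([2, 0, 1])
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

loader = DataLoader(list(zip(enc["input_ids"], enc["attention_mask"], labels)),
                    batch_size=16, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)   # as stated in the abstract

model.train()
for epoch in range(10):                                      # 10 epochs, as stated
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optimizer.step()
```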

Keywords: NLP, transfer learning, BERT, sentiment analysis, social media, opinion mining

Procedia PDF Downloads 168
24199 Phytomining for Rare Earth Elements: A Comparative Life Cycle Assessment

Authors: Mohsen Rabbani, Trista McLaughlin, Ehsan Vahidi

Abstract:

The remediation of sites polluted with heavy metals, such as rare earth elements (REEs), has been a primary concern of researchers seeking to decontaminate soil. Among the methods developed to address this concern, phytoremediation has been established as an efficient, cost-effective, easy-to-use, and environmentally friendly approach, providing a long-term solution to this global problem. Furthermore, this technology has another great potential application in the metals production sector, by recovering metals buried in soil through metal cropping. Considering the significant metal concentration in hyperaccumulators, the utilization of bioaccumulated metals to extract metals from plant matter has been proposed as a sub-economic area called phytomining. As a more recent, advanced technology to eliminate such pollutants from the soil and produce critical metals, bioharvesting (phytomining/agromining) has been considered another promising way to produce metals and meet the global demand for critical/target metals. The bio-ore obtained from phytomining can be safely disposed of or introduced into metal production pathways to obtain the most demanded metals, such as REEs. It is well known that some hyperaccumulators, e.g., the fern Dicranopteris linearis, can absorb REEs from polluted soils and accumulate them in plant organs such as leaves and stems. After soil remediation, the plants can be harvested and introduced to the downstream steps, namely crushing/grinding, leaching, and purification, to extract REEs from the plant matter. This novel interdisciplinary field can fill the gap between agriculture, mining, metallurgy, and the environment. Despite the advantages of agromining for the REE production industry, key issues related to the environmental sustainability of the entire life cycle of this new concept have not yet been assessed. Hence, a comparative life cycle assessment (LCA) study was conducted to quantify the environmental footprint of REE phytomining. The current LCA study aims to estimate the environmental effects associated with phytomining by considering critical impact categories such as climate change, land use, and ozone depletion. The results revealed that phytomining is an easy-to-use and environmentally sustainable approach to either eliminate REEs from polluted sites or produce REEs, offering a new source for the production of such metals. This LCA research provides guidelines for researchers working to develop a reliable relationship between agriculture, mining, metallurgy, and the environment in order to counter soil pollution and keep the earth green and clean.

Keywords: phytoremediation, phytomining, life cycle assessment, environmental impacts, rare earth elements, hyperaccumulator

Procedia PDF Downloads 67
24198 On the Network Packet Loss Tolerance of SVM Based Activity Recognition

Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir

Abstract:

In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model, and its multi-activity classification performance when data are received over a lossy wireless sensor network, are examined. Initially, the classification algorithm we use is evaluated in terms of resilience to random data loss with 3D acceleration sensor data for sitting, lying, walking, and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality of service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM based classification algorithm has not been studied before.
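
The first experiment, measuring how classification accuracy degrades as random data loss increases, can be sketched as below. The synthetic acceleration features, the zero-filling of lost values, and the loss rates are assumptions for illustration, not the study's dataset or protocol.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

# Hypothetical 3D-acceleration feature vectors for four activities (not the study's data).
n_per_class, n_features = 200, 12
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features)) for c in range(4)])
y = np.repeat(np.arange(4), n_per_class)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)

# Simulate packet loss: randomly drop feature values and replace them with zeros.
for loss_rate in (0.0, 0.2, 0.4, 0.6):
    mask = rng.random(X_test.shape) < loss_rate
    X_lossy = np.where(mask, 0.0, X_test)
    print(loss_rate, accuracy_score(y_test, clf.predict(X_lossy)))
```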

Keywords: activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss

Procedia PDF Downloads 473
24197 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company

Authors: Rahma Saleh Hussein Al Balushi

Abstract:

Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO55001 certification, which requires setting out documented Standard Operating Procedures (SOP). Hence, a documented SOP for the Geographical Information System (GIS) data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed as such by the Asset Information & GIS department. This paper describes in detail the current GIS data submission process and the journey of developing it. The methodology used to develop the process is based on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, thanks to this process, GIS has been, and is ready to be, integrated with other systems, as well as serving as the source of data for all OETC users. Some decisions related to issuing No Objection Certificates (NOC) for excavation permits and to scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) have consequently been made based on GIS data availability. In addition, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations has also contributed to reducing missing attributes and enhancing the data quality index of GIS transmission data. A considerable difference in Geodatabase (GDB) completeness percentage was observed between 2017 and 2022. Overall, we conclude that through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.

Keywords: asset management ISO55001, standard procedures process, governance, CMMS

Procedia PDF Downloads 120
24196 Effects of Data Correlation in a Sparse-View Compressive Sensing Based Image Reconstruction

Authors: Sajid Abas, Jon Pyo Hong, Jung-Ryun Le, Seungryong Cho

Abstract:

Computed tomography and laminography are heavily investigated in a compressive sensing based image reconstruction framework to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Nowadays, researchers are actively working on optimizing compressive sensing based iterative image reconstruction algorithms to obtain better quality images. However, the effects of the sampled data's properties on the reconstructed image's quality, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigated the effects of two data properties, i.e., sampling density and data incoherence, on the image reconstructed by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We found that in a compressive sensing based image reconstruction framework, the image quality mainly depends upon the data incoherence when the data are uniformly sampled.
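
One common way to quantify the incoherence of a sampling or sensing matrix is its mutual coherence (the largest normalized inner product between distinct columns; lower values are generally more favourable for compressive sensing recovery). The sketch below computes it for two generic matrices; this is an illustration only and does not model the laminography scanning geometries studied in the paper.

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct, normalized columns of A."""
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    gram = np.abs(A.T @ A)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

rng = np.random.default_rng(3)
A_random = rng.normal(size=(64, 256))          # randomized sampling: low coherence
A_repeat = np.tile(rng.normal(size=(64, 1)), (1, 256)) + 0.01 * rng.normal(size=(64, 256))
print(mutual_coherence(A_random), mutual_coherence(A_repeat))   # the second is near 1
```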

Keywords: computed tomography, computed laminography, compressive sensing, low-dose

Procedia PDF Downloads 460
24195 Fuzzy Wavelet Model to Forecast the Exchange Rate of IDR/USD

Authors: Tri Wijayanti Septiarini, Agus Maman Abadi, Muhammad Rifki Taufik

Abstract:

The IDR/USD exchange rate can be an indicator for analysing the Indonesian economy. The exchange rate is an important factor because it has a large effect on the Indonesian economy overall, so analysis of exchange rate data is needed. The IDR/USD exchange rate data are decomposed into frequency and time components, which can help the government monitor the Indonesian economy. This method is very effective at identifying the behaviour of the series, gives highly accurate results, and has a simple structure. In this paper, the exchange rate data used are weekly data from December 17, 2010 to November 11, 2014.
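
The frequency-time decomposition step can be sketched with a discrete wavelet transform, as below. The synthetic series, the 'db4' wavelet, and the decomposition level are assumptions; the paper's actual weekly data and wavelet settings may differ.

```python
import numpy as np
import pywt   # PyWavelets

rng = np.random.default_rng(4)

# Hypothetical weekly IDR/USD series (the paper uses real data from 2010-2014).
weeks = 204
rate = 9000 + np.cumsum(rng.normal(0, 30, weeks))

# Multi-level discrete wavelet decomposition into an approximation (low-frequency
# trend) and detail (high-frequency fluctuation) coefficients.
coeffs = pywt.wavedec(rate, "db4", level=3)
approx, details = coeffs[0], coeffs[1:]
print(len(approx), [len(d) for d in details])

# The decomposed components can then be fed to a fuzzy (e.g., Mamdani) forecasting model.
```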

Keywords: the exchange rate, fuzzy mamdani, discrete wavelet transforms, fuzzy wavelet

Procedia PDF Downloads 564
24194 Humanising Digital Healthcare to Build Capacity by Harnessing the Power of Patient Data

Authors: Durhane Wong-Rieger, Kawaldip Sehmi, Nicola Bedlington, Nicole Boice, Tamás Bereczky

Abstract:

Patient-generated health data should be seen as the expression of the experience of patients, including outcomes reflecting the impact a treatment or service had on their physical health and wellness. We discuss how the healthcare system can reach a place where digital is a determinant of health, where data generated by patients is respected and their contribution to science is acknowledged, and we explore the biggest barriers to this. The International Experience Exchange with Patient Organisations' Position Paper is based on a global patient survey conducted in Q3 2021 that received 304 responses. The results were discussed and validated by 15 patient experts and supplemented with literature research; the results presented here are a subset of that work. Our research showed that patient communities want more influence and control over how their data is generated, shared, and used. Our study concludes that a reasonable framework is needed to protect the integrity of patient data, minimise abuse, and build trust. The results also clearly highlight that the community feels there is a lack of clear policies on data sharing.

Keywords: digital health, equitable access, humanise healthcare, patient data

Procedia PDF Downloads 79
24193 Assessing Lithium Recovery from Secondary Sources

Authors: Carolina A. Santos, Alexandra B. Ribeiro

Abstract:

Climate change and environmental degradation are threats to humanity. Europe has been addressing these problems, namely through the Green Deal, with the use of batteries in the mobility and energy fields. However, batteries require critical raw materials, like lithium, whose demand is estimated to grow 60-fold in the next 30 years. Thus, it is fundamental to promote a circular economy with lithium recovery from secondary resources. These are key topics today and will be even more relevant in the future, so a new way to approach them is needed and must be encouraged. Therefore, one of our main goals is to analyse two methods of lithium retrieval from secondary sources, bioleaching and electrodialysis, and assess their sustainability. The latest results show good removal efficiency with both methods, even though there are some matrix interferences. Hence, further investment and research are needed in order to make this process sustainable and our society more circular.

Keywords: lithium, sustainable mining, social license to operate, bioleaching, electrodialysis

Procedia PDF Downloads 119
24192 Use of Machine Learning in Data Quality Assessment

Authors: Bruno Pinto Vieira, Marco Antonio Calijorne Soares, Armando Sérgio de Aguiar Filho

Abstract:

Nowadays, a massive amount of information is produced by different data sources, including mobile devices and transactional systems. In this scenario, concerns arise about how to maintain or establish data quality, which is now treated as a product to be defined, measured, analyzed, and improved to meet the needs of consumers, who use these data in decision-making and company strategies. Information with low levels of quality can lead to issues that consume time and money, such as missed business opportunities, inadequate decisions, and bad risk management actions. The step of identifying, evaluating, and selecting data sources of adequate quality for a given need has become a costly task for users, since the sources do not provide information about their quality. Traditional data quality control methods are based on user experience or business rules, which limits performance and slows down the process with less than desirable accuracy. Using advanced machine learning algorithms, it is possible to take advantage of computational resources to overcome these challenges and add value for companies and users. In this study, machine learning is applied to data quality analysis on different datasets, seeking to compare the performance of the techniques according to the dimensions of quality assessment. As a result, we were able to create a ranking of the approaches used, as well as a system that is able to carry out data quality assessment automatically.
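
As a small illustration of turning quality dimensions into measurable features that a machine learning model could then rank or classify, the sketch below scores a toy dataset on completeness, uniqueness, and validity. These three dimensions and the example data are common illustrative choices, not necessarily the exact dimensions or datasets used in the study.

```python
import pandas as pd

# Hypothetical dataset with typical quality defects.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, None],
    "age": [34, -5, 41, 29, 57],          # -5 is an out-of-range (invalid) value
    "email": ["a@x.com", "b@x.com", None, "d@x.com", "e@x.com"],
})

completeness = 1.0 - df.isna().mean().mean()            # share of non-missing cells
uniqueness = df["customer_id"].dropna().is_unique       # duplicate key check
validity = df["age"].between(0, 120).mean()             # share of in-range ages

features = {"completeness": completeness, "uniqueness": float(uniqueness), "validity": validity}
print(features)   # per-dataset features like these could feed a machine learning ranker
```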

Keywords: machine learning, data quality, quality dimension, quality assessment

Procedia PDF Downloads 144
24191 Exploring Data Leakage in EEG Based Brain-Computer Interfaces: Overfitting Challenges

Authors: Khalida Douibi, Rodrigo Balp, Solène Le Bars

Abstract:

In the medical field, applications related to human experiments frequently involve small sample sizes, which makes the training of machine learning models quite sensitive and therefore not very robust or generalizable. This is notably the case in Brain-Computer Interface (BCI) studies, where the sample size rarely exceeds 20 subjects or a small number of trials. To address this problem, several resampling approaches are often used during the data preparation phase, which is a particularly critical step in a data science analysis process. One naive approach usually applied by data scientists consists of transforming the entire database before the resampling phase. However, this can cause the model's performance to be incorrectly estimated when making predictions on unseen data. In this paper, we explored the effect of data leakage observed during our BCI experiments for device control through the real-time classification of SSVEPs (Steady State Visually Evoked Potentials). We also studied potential ways to ensure optimal validation of the classifiers during the calibration phase to avoid overfitting. The results show that the scaling step is crucial for some algorithms, and that it should be applied after the resampling phase to avoid data leakage and improve results.
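
The scaling-after-resampling point can be made concrete with scikit-learn: fitting the scaler on the whole dataset before cross-validation leaks information from the validation folds, whereas placing the scaler inside a pipeline confines it to each training fold. The synthetic features below are assumptions, not the authors' SSVEP recordings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(5)

# Hypothetical small-sample feature matrix standing in for SSVEP features.
X = rng.normal(size=(80, 16))
y = rng.integers(0, 2, size=80)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Leaky variant: the scaler sees the whole dataset before cross-validation.
X_leaky = StandardScaler().fit_transform(X)
leaky_scores = cross_val_score(SVC(), X_leaky, y, cv=cv)

# Leak-free variant: scaling happens inside each training fold only.
pipe = make_pipeline(StandardScaler(), SVC())
clean_scores = cross_val_score(pipe, X, y, cv=cv)

print(leaky_scores.mean(), clean_scores.mean())
```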

Keywords: data leakage, data science, machine learning, SSVEP, BCI, overfitting

Procedia PDF Downloads 149
24190 Nuclear Decay Data Evaluation for 217Po

Authors: S. S. Nafee, A. M. Al-Ramady, S. A. Shaheen

Abstract:

Evaluated nuclear decay data for the 217Po nuclide are presented in this work. These data include recommended values for the half-life T1/2 and the α-, β⁻-, and γ-ray emission energies and probabilities. Decay data from 221Rn α decay and 217Bi β⁻ decay are presented. Q(α) has been updated based on the recently published Atomic Mass Evaluation AME2012. In addition, the logft values were calculated using the Logft program from the ENSDF evaluation package. Moreover, the total internal conversion coefficients have been calculated using the BrIcc program. Meanwhile, recommended values for the multipolarities have been assigned based on recent measurements, yielding a better intensity balance at the 254 keV and 264 keV gamma transitions.

Keywords: nuclear decay data evaluation, mass evaluation, total conversion coefficients, atomic mass evaluation

Procedia PDF Downloads 427
24189 Geographic Information System Using Google Fusion Table Technology for the Delivery of Disease Data Information

Authors: I. Nyoman Mahayasa Adiputra

Abstract:

Data in the field of health can be useful for data analysis; one example of health data is disease data. Disease data are usually plotted geographically according to the area where they were collected, in this case the city of Denpasar, Bali. Disease data reports are still published in tabular form, and disease information has not yet been mapped in GIS form. In this research, disease information in Denpasar city is digitized in the form of a geographic information system, with the district as the smallest administrative unit. Denpasar City consists of four districts: North Denpasar, East Denpasar, West Denpasar, and South Denpasar. We use Google Fusion Table technology for the map digitization process, since this technology offers benefits both to the administrator and to the recipient of the information. On the administrator side, disease data can be entered easily and quickly. On the receiving side, the resulting GIS application can be published as a website-based application that can be accessed anywhere and anytime. The results obtained in this study are divided into two parts. (1) Geolocation of Denpasar and all of its districts: the process of digitizing the map of Denpasar city produces a polygon geolocation of each district of the city. These results can be utilized in subsequent GIS studies that use the same administrative areas. (2) Dengue fever mapping for 2014 and 2015: the disease data used in this study are dengue fever case data for 2014 and 2015, taken from the Denpasar Health Department profile reports of 2015 and 2016. This mapping can be useful for analysing the spread of dengue hemorrhagic fever in the city of Denpasar.

Keywords: geographic information system, Google fusion table technology, delivery of disease data information, Denpasar city

Procedia PDF Downloads 125
24188 Inclusive Practices in Health Sciences: Equity Proofing Higher Education Programs

Authors: Mitzi S. Brammer

Abstract:

Given that the cultural make-up of programs of study in institutions of higher learning is becoming increasingly diverse, much has been written about cultural diversity from a university-level perspective. However, there are few data on specific programs and how they address inclusive practices when teaching and working with marginalized populations. This research study aimed to discover the baseline knowledge and attitudes of health sciences faculty, instructional staff, and students related to inclusive teaching/learning and interactions. Quantitative data were collected via an anonymous online survey (one designed for students and another designed for faculty/instructional staff) using a web-based program called Qualtrics. Quantitative data were analyzed for the faculty/instructional staff and students, respectively, using descriptive and comparative statistics (t-tests). Additionally, some participants voluntarily engaged in a focus group discussion in which qualitative data were collected on these same variables. Collecting qualitative data to triangulate the quantitative data added trustworthiness to the overall dataset. The research team analyzed the collected data, identified categories and trends, compared them between faculty/staff and students, and reported results as well as implications for future study and professional practice.

Keywords: inclusion, higher education, pedagogy, equity, diversity

Procedia PDF Downloads 61
24187 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks

Authors: Nafisa Mahbub, Hajo Ribberink

Abstract:

Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a significant risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce material usage and the associated environmental and cost burdens. One of these technologies is the 'electrified road' (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for charging infrastructure and for the truck batteries, the study included the contributions of both in the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging. The investigated materials for the charging technology and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, different charging scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study considered trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery for the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. The material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and battery. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.

Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger

Procedia PDF Downloads 49
24186 Context-Aware Point-Of-Interests Recommender Systems Using Integrated Sentiment and Network Analysis

Authors: Ho Yeon Park, Kyoung-Jae Kim

Abstract:

Recently, users' interest in location-based social network services has been increasing with advances in the social web and location-based technologies. It becomes easier to recommend preferred items if we can use users' preferences, context, and social network information simultaneously. In this study, we propose a context-aware POI (point-of-interest) recommender system using location-based network analysis and sentiment analysis, which considers context, social network information, and implicit user preference scores. The proposed system consists of three sub-modules and an integrated recommendation system combining them. First, we develop a recommendation module based on network analysis, which combines social network analysis and cluster-indexing collaborative filtering. Second, we develop a recommendation module using social singular value decomposition (SVD) and implicit SVD, which estimates preference scores based on the frequency of users' POI visits and thus reflects implicit feedback in collaborative filtering. Third, we propose a recommendation module using opinion mining and sentiment analysis on data such as POI reviews extracted from location-based social networks. Finally, we develop an integration algorithm that combines the results of the three recommendation modules. Experimental results show the usefulness of the proposed model in terms of recommendation performance.
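
As a small illustration of the implicit-feedback idea in the second module, the sketch below factorizes a user-by-POI visit-frequency matrix with a truncated SVD and uses the reconstruction as predicted preference scores for unvisited POIs. The visit matrix, the number of latent factors, and the plain SVD are assumptions standing in for the paper's social/implicit SVD model.

```python
import numpy as np

# Hypothetical user x POI visit-frequency matrix (implicit feedback; zeros = no visit).
visits = np.array([
    [5, 0, 2, 0],
    [0, 3, 0, 1],
    [4, 0, 0, 0],
    [0, 2, 3, 0],
], dtype=float)

# Truncated SVD: factorize the visit matrix and reconstruct it to obtain predicted
# preference scores for POIs a user has not visited yet.
U, s, Vt = np.linalg.svd(visits, full_matrices=False)
k = 2                                           # number of latent factors (assumption)
scores = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

user = 0
unvisited = np.where(visits[user] == 0)[0]
ranking = unvisited[np.argsort(-scores[user, unvisited])]
print("recommended POIs for user 0:", ranking)
```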

Keywords: sentiment analysis, network analysis, recommender systems, point-of-interests, business analytics

Procedia PDF Downloads 246
24185 Mean Shift-Based Preprocessing Methodology for Improved 3D Buildings Reconstruction

Authors: Nikolaos Vassilas, Theocharis Tsenoglou, Djamchid Ghazanfarpour

Abstract:

In this work, we explore the capability of the mean shift algorithm as a powerful preprocessing tool for improving the quality of spatial data acquired from airborne scanners over densely built urban areas. On the one hand, high resolution image data corrupted by noise from lossy compression techniques are appropriately smoothed while preserving the optical edges; on the other hand, low resolution LiDAR data in the form of a normalized Digital Surface Model (nDSM) are upsampled through the joint mean shift algorithm. Experiments on both the edge-preserving smoothing and upsampling capabilities using synthetic RGB-z data show that the mean shift algorithm is superior to bilateral filtering as well as to other classical smoothing and upsampling algorithms. Application of the proposed methodology to the 3D reconstruction of buildings in a pilot region of Athens, Greece, results in a significant visual improvement of the 3D building block model.
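
The edge-preserving smoothing side of this idea can be sketched with OpenCV's mean shift filter, which replaces each pixel by the mode of its joint spatial-colour neighbourhood, smoothing flat regions while keeping strong edges. The synthetic image tile and the window parameters below are assumptions for illustration, not the paper's aerial data or joint RGB-z formulation.

```python
import numpy as np
import cv2

rng = np.random.default_rng(6)

# Hypothetical noisy RGB tile standing in for a compressed aerial image patch.
clean = np.zeros((128, 128, 3), dtype=np.uint8)
clean[:, 64:] = (180, 120, 90)                       # a sharp "roof edge"
noisy = np.clip(clean.astype(int) + rng.normal(0, 15, clean.shape), 0, 255).astype(np.uint8)

# Mean shift filtering: spatial window sp=10 and colour window sr=20 (assumed values).
smoothed = cv2.pyrMeanShiftFiltering(noisy, 10, 20)

print(noisy.std(), smoothed.std())                   # variance drops, edge is preserved
```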

Keywords: 3D buildings reconstruction, data fusion, data upsampling, mean shift

Procedia PDF Downloads 311
24184 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company

Authors: Rahma Al Balushi

Abstract:

Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO55001 certification, which requires setting out documented Standard Operating Procedures (SOP). Hence, a documented SOP for the Geographical Information System (GIS) data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed as such by the Asset Information & GIS department. This paper describes in detail the GIS data submission process and the journey of developing the current process. The methodology used to develop the process is based on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, thanks to this process, GIS has been, and is ready to be, integrated with other systems, as well as serving as the source of data for all OETC users. Some decisions related to issuing No Objection Certificates (NOC) and to scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) have consequently been made based on GIS data availability. In addition, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations has also helped to reduce the missing attributes of GIS transmission data. A considerable difference in Geodatabase (GDB) completeness percentage was observed between 2017 and 2021. Overall, we conclude that through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.

Keywords: asset management ISO55001, standard procedures process, governance, geodatabase, NOC, CMMS

Procedia PDF Downloads 203
24183 Importance of Ethics in Cloud Security

Authors: Pallavi Malhotra

Abstract:

This paper examines the importance of ethics in cloud computing. In modern society, cloud computing offers individuals and businesses virtually unlimited space for storing and processing data and information. Much of the data and information stored in the cloud by various users, such as banks, doctors, architects, engineers, lawyers, consulting firms, and financial institutions, requires a high level of confidentiality and safeguarding. Cloud computing offers centralized storage and processing of data, and this has immensely contributed to the growth of businesses and improved sharing of information over the internet. However, the accessibility and management of data and servers by a third party raise concerns regarding the privacy of clients' information and possible manipulation of the data by third parties. This document suggests the approaches various stakeholders should take to address the ethical issues involved in cloud-computing services. Ethical education and training are key for all stakeholders involved in the handling of data and information stored or processed in the cloud.

Keywords: IT ethics, cloud computing technology, cloud privacy and security, ethical education

Procedia PDF Downloads 320
24182 The Feminism of Data Privacy and Protection in Africa

Authors: Olayinka Adeniyi, Melissa Omino

Abstract:

The field of data privacy and data protection in Africa is still evolving, with many African countries yet to enact legislation on the subject. While African governments are bringing their legislation up to speed in this field, the way patriarchy pervades every sector of African thought and manifests in society needs to be considered. Moreover, the laws enacted ought to be inclusive, especially towards women. This, in a nutshell, is the essence of data feminism. Data feminism is a new way of thinking about data science and data ethics that is informed by the ideas of intersectional feminism. Feminising data privacy and protection involves thinking about women and considering women in the issues of data privacy and protection, particularly in legislation, as is the case in this paper. This line of thought on women's inclusion is not uncommon, given that even international and regional human rights instruments specific to women came long after the general human rights instruments, when arguably they should have been included in the original general instruments in the first instance. Since legislation on data privacy is arriving in this century, with the benefit of having seen the rights and shortcomings of earlier instruments, the cue should be taken to ensure inclusive, holistic legislation for data privacy and protection from the outset. Data feminism is arguably an area that has been scantily researched, albeit a needful one. With violence against women spiralling in the cyber world, compounded by COVID-19 and the necessary responses of governments, and the effects of these on women and their rights, research on the feminism of data privacy and protection in Africa becomes inevitable. This paper seeks to answer the following questions: what is data feminism in the African context, and why is it important for data privacy and protection legislation; what laws, if any, exist on data privacy and protection in Africa, are they women-inclusive, and if not, why; and what measures are in place for the privacy and protection of women in Africa, and how can this be made possible? The paper aims to investigate the issue of data privacy and protection in Africa, the legal framework, and the protection or provision it offers women, if any. It further aims to research the importance and necessity of feminising data privacy and protection, the effect of its absence, the challenges or bottlenecks in attaining this feat, and the possibilities of accessing data privacy and protection for African women. The paper also researches emerging practices of data privacy and protection of women in other jurisdictions. It approaches the research through a review of papers, laws, and reports. It seeks to contribute to the existing literature in the field and is exploratory in its suggestions. It proposes a draft of some clauses to make any data privacy and protection legislation women-inclusive. It would be useful for policymaking, academia, and public enlightenment.

Keywords: feminism, women, law, data, Africa

Procedia PDF Downloads 200
24181 Evaluation of Practicality of On-Demand Bus Using Actual Taxi-Use Data through Exhaustive Simulations

Authors: Jun-ichi Ochiai, Itsuki Noda, Ryo Kanamori, Keiji Hirata, Hitoshi Matsubara, Hideyuki Nakashima

Abstract:

We conducted exhaustive simulations for data assimilation and evaluation of service quality under various settings in a new shared transportation system called SAVS. Computational social simulation is a key technology for designing recent social services like SAVS as a new transportation service. One open issue in SAVS was to determine the service scale through social simulation. Using our exhaustive simulation framework, OACIS, we carried out data assimilation and evaluation of the effects of SAVS based on actual taxi-use data from Tajimi city, Japan. Finally, we obtained the conditions under which the new service can be realized with a reasonable service quality.

Keywords: on-demand bus system, social simulation, data assimilation, exhaustive simulation

Procedia PDF Downloads 316