Search results for: geographical data visualization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24877

24697 A Small Graphic Lie. The Photographic Quality of Pierre Bourdieu’s Correspondence Analysis

Authors: Lene Granzau Juel-Jacobsen

Abstract:

The problem of beautification is an obvious concern of photography, claiming reference to reality, but it also lies at the very heart of social theory. As we become accustomed to sophisticated visualizations of statistical data in pace with the development of software programs, we should not only be inclined to ask new types of research questions, but we also need to confront social theories based on such visualization techniques with new types of questions. Correspondence Analysis, GIS analysis, Social Network Analysis, and Perceptual Maps are current examples of visualization techniques popular within the social sciences and neighboring disciplines. This article discusses correspondence analysis, arguing that the graphic plot of correspondence analysis is to be interpreted much as a photograph is. It refers no more evidently or univocally to reality than a photograph does, representing social life no more truthfully than a photograph documents it. Pierre Bourdieu’s theoretical corpus, especially his theory of fields, relies heavily on correspondence analysis. While much attention has been directed towards critiquing the somewhat vague conceptualization of habitus, limited focus has been placed on the equally problematic concepts of social space and field. Based on a re-reading of Distinction, the article argues that these concepts rely on ‘a small graphic lie’ very similar to a photograph. Like any other piece of art, as Bourdieu himself recognized, the graphic display is a politically and morally loaded representation technique. However, correspondence analysis does not necessarily serve the purpose he intended. In fact, it tends towards the very pitfalls he strove to overcome.

Keywords: data visualization, correspondence analysis, Bourdieu, field, visual representation

Procedia PDF Downloads 37
24696 Determination of Phenolic Compounds in Apples Grown in Different Geographical Regions

Authors: Mindaugas Liaudanskas, Monika Tallat-Kelpsaite, Darius Kviklys, Jonas Viskelis, Pranas Viskelis, Norbertas Uselis, Juozas Lanauskas, Valdimaras Janulis

Abstract:

Apples are an important source of various biologically active compounds beneficial to human health. Phenolic compounds detected in apples are natural antioxidants and have antimicrobial, anti-inflammatory, anticarcinogenic, and cardiovascular protective activity. The quantitative composition of phenolic compounds in apples may be affected by various factors, and it is important to investigate it in order to provide the consumer with high-quality apples of well-characterized composition and products made from them. The objective of this study was to evaluate the quantitative composition of phenolic compounds in apple fruits grown in different geographical regions. In this study, biological replicates of apple cv. 'Ligol' grown in Lithuania, Latvia, Poland, and Estonia were investigated. Three biological replicates were analyzed, each of which contained 10 apples. Samples of lyophilized apple fruits were extracted with 70% ethanol (v/v) for 20 min at 40 °C in an ultrasonic bath. The ethanol extracts of apple fruits were analyzed by the high-performance liquid chromatography method. The study found that the geographical location of the apple trees had an impact on the composition of phenolic compounds in the apples. The amount of quercetin glycosides varied from 314.78±9.47 µg/g (Poland) to 648.17±5.61 µg/g (Estonia). The same trend was also observed for flavan-3-ols (from 829.56±47.17 µg/g to 2300.85±35.49 µg/g), phloridzin (from 55.29±1.7 µg/g to 208.78±0.35 µg/g), and chlorogenic acid (from 501.39±28.84 µg/g to 1704.35±22.65 µg/g). The amount of the investigated phenolic compounds tended to increase from apples grown in the southern location (Poland) (1701.02±75.38 µg/g) to apples grown in the northern location (Estonia) (4862.15±56.37 µg/g). Apples (cv. 'Ligol') grown in Estonia accumulated an approximately 2.86 times higher amount of phenolic compounds than apples grown in Poland. Acknowledgment: This work was supported by a grant from the Research Council of Lithuania, project No. S-MIP-17-8.

Keywords: apples, cultivar 'Ligol', geographical regions, HPLC, phenolic compounds

Procedia PDF Downloads 153
24695 Fatty Acid Composition and Therapeutic Effects of Beebread

Authors: Sibel Silici

Abstract:

The palynological spectrum and the proximate and fatty acid compositions of eight beebread samples obtained from different geographical origins were determined. Beebread moisture contents varied between 11.4 and 15.9%, ash between 1.9 and 2.54%, fat between 5.9 and 11.5%, and protein between 14.8 and 24.3%. To our knowledge, this is the first study investigating the fatty acid (FA) composition of the selected monofloral beebreads. A total of thirty-seven FAs were identified. Of these, (9Z,12Z,15Z)-octadeca-9,12,15-trienoic acid, (9Z,12Z)-octadeca-9,12-dienoic acid, hexadecanoic acid, (Z)-octadec-9-enoic acid, (Z)-icos-11-enoic acid, and octadecanoic acid were the most abundant in all the samples. Cotton beebread contained the highest level of ω-3 FAs, 41.3%. Unsaturated/saturated FA ratios ranged between 1.38 and 2.39, indicating that beebread is a good source of unsaturated FAs. The pollen, proximate, and FA compositions of beebread samples of different botanical and geographical origins varied significantly.

Keywords: bee bread, fatty acid composition, proximate composition, pollen analysis

Procedia PDF Downloads 230
24694 Still Pictures for Learning Foreign Language Sounds

Authors: Kaoru Tomita

Abstract:

This study explores how visual information helps us learn foreign language pronunciation. Visual assistance and its effect on foreign language learning have been discussed widely. For example, simplified illustrations in textbooks are used to tell learners which parts of the articulatory organs are used for pronouncing sounds. Vowels are put into a chart that depicts a vowel space. Consonants are put into a table with two axes of place and manner of articulation. When comparing a still picture and a moving picture for visualizing learners’ pronunciation, it becomes clear that the former works better than the latter. The visualization of vowels was applied to class activities in which native and non-native speakers’ English was compared and the learners’ feedback was collected: the positions of six vowels did not scatter as much as they were expected to. Specifically, two vowels were not discriminated and were arranged very close together in the vowel space. It was surprising for the author to find that learners liked analyzing their own pronunciation by plotting the first and second formants (F1 and F2) on a sheet of paper with a pencil. Even a simple method works well if it leads learners to think about their pronunciation analytically.
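
The F1/F2 vowel-space plot described above is straightforward to reproduce programmatically. The sketch below is illustrative rather than taken from the study: the formant values are rough textbook averages, and the inverted axes follow the convention that makes the plot mirror the articulatory vowel chart.

```python
import matplotlib.pyplot as plt

# Illustrative F1/F2 values (Hz) for a few English monophthongs;
# rough textbook averages, not the study's measurements.
vowels = {
    "i": (280, 2250),   # as in "beat"
    "ɪ": (400, 1920),   # as in "bit"
    "ɛ": (550, 1770),   # as in "bet"
    "æ": (690, 1660),   # as in "bat"
    "ɑ": (710, 1100),   # as in "father"
    "u": (310, 870),    # as in "boot"
}

fig, ax = plt.subplots()
for symbol, (f1, f2) in vowels.items():
    ax.scatter(f2, f1)
    ax.annotate(symbol, (f2, f1), textcoords="offset points", xytext=(5, 5))

# Conventional vowel-space orientation: F2 decreases to the right and
# F1 increases downward, so front/close vowels sit top-left.
ax.invert_xaxis()
ax.invert_yaxis()
ax.set_xlabel("F2 (Hz)")
ax.set_ylabel("F1 (Hz)")
ax.set_title("Vowel space (F1 vs. F2)")
plt.show()
```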

Keywords: feedback, pronunciation, visualization, vowel

Procedia PDF Downloads 215
24693 Legal Judgment Prediction through Indictments via Data Visualization in Chinese

Authors: Kuo-Chun Chien, Chia-Hui Chang, Ren-Der Sun

Abstract:

Legal Judgment Prediction (LJP) is a subtask of legal AI. Its main purpose is to use the facts of a case to predict the judgment result. In Taiwan's criminal procedure, when prosecutors complete the investigation of a case, they decide whether to prosecute the suspect and which article of criminal law should be applied, based on the facts and evidence of the case. In this study, we collected 305,240 indictments from the public inquiry system of the procuratorate of the Ministry of Justice, which included 169 charges and 317 articles from 21 laws. We take the crime facts in the indictments as the main input to jointly learn a prediction model for law source, article, and charge simultaneously, based on the pre-trained BERT model. For single-article cases where the frequencies of the charge and article are greater than 50, the prediction performance for law sources, articles, and charges reaches macro-F1 scores of 97.66, 92.22, and 60.52, respectively. To understand the big performance gap between articles and charges, we used a bipartite graph to visualize the relationship between articles and charges, and found that the poor prediction performance was actually due to the wording precision of charge names. Some charges use the simplest words, while others may include the perpetrator or the result to make the charge more specific. For example, Article 284 of the Criminal Law may be indicted as "negligent injury", "negligent death", "business injury", "driving business injury", or "non-driving business injury". As another example, Article 10 of the Drug Hazard Control Regulations can be charged as "Drug Control Regulations" or "Drug Hazard Control Regulations". In order to solve the above problems and more accurately predict the article and charge, we plan to include the article content or charge names in the input, and use the sentence-pair classification method for question-answer problems in the BERT model to improve the performance. We will also consider a sequence-to-sequence approach to charge prediction.
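
The planned sentence-pair formulation can be sketched with the Hugging Face transformers library: pair the crime facts with a candidate charge name and let BERT score the match. This is a minimal illustration under our own assumptions, not the authors' code: the "bert-base-chinese" checkpoint is a standard public model, the example texts are invented, and the classification head only produces meaningful scores after fine-tuning on labeled indictment pairs.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=2)  # binary: charge matches facts or not
model.eval()

facts = "被告於某日駕駛車輛，因過失致路人受傷。"   # hypothetical crime facts
candidate_charge = "過失傷害"                      # hypothetical charge name

# Sentence-pair input: [CLS] facts [SEP] charge [SEP]
inputs = tokenizer(facts, candidate_charge, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Only meaningful after fine-tuning; at inference time each candidate
# charge would be scored this way and the best-scoring pair selected.
match_prob = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"P(charge matches facts) = {match_prob:.3f}")
```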

Keywords: legal judgment prediction, deep learning, natural language processing, BERT, data visualization

Procedia PDF Downloads 95
24692 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh

Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila

Abstract:

Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data it uses is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of their data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, can serve as tools for increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which can provide clarity in roles and responsibilities and improve data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can improve overall performance by enabling better business insights through improved reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems promptly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization. This can lead to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by experience feedback from AEKIDEN, an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.

Keywords: data culture, data-driven organization, data mesh, data quality for business success

Procedia PDF Downloads 91
24691 The Cartometric-Geographical Analysis of Ivane Javakhishvili's 1922 Map of the Republic of Georgia

Authors: Manana Kvetenadze, Dali Nikolaishvili

Abstract:

The study revealed the territorial changes of Georgia in the pre-Soviet and Soviet periods. This includes the estimation of the country's borders, the changes in its administrative-territorial arrangement, and the establishment of territorial losses. Georgia’s old and new borders marked on the map are of great interest. The new boundary shows the situation in 1922, following the start of the Soviet period. Neither on this map nor in other works does Ivane Javakhishvili explain what he means by the old borders, though it is evident that this is the pre-Soviet boundary up to 1921, i.e., before the period when historical Tao, Zaqatala, Lore, and Karaia were parts of Georgia. In cartometric-geographical terms, the work presents a detailed analysis of Georgia’s borders, and the research results are compared: 1) with the boundary line on Soviet topographic maps at scales of 1:100,000, 1:50,000, and 1:25,000; 2) with Ivane Javakhishvili’s work ('The borders of Georgia in terms of historical and contemporary issues'). During this research, we used a multidisciplinary methodology and software. We used ArcGIS for georeferencing the maps, and we then compared all post-Soviet-Union maps in order to determine how the borders have changed. In this work, we also drew on extensive historical data. The features of the spatial distribution of the territorial administrative units of Georgia, as well as the distribution of the administrative-territorial units of the objects depicted on the map, were established. The results obtained are presented in the form of thematic maps and diagrams.
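
The georeferencing step can be illustrated in Python with the rasterio library (a different tool from the ArcGIS workflow the authors describe). In this sketch the pixel coordinates of the ground control points and the file names are hypothetical; the idea is simply to tie identifiable cities on the scanned 1922 sheet to their known geographic coordinates and fit a transform.

```python
import rasterio
from rasterio.control import GroundControlPoint
from rasterio.transform import from_gcps

# Hypothetical ground control points linking pixel positions (row, col)
# on the scanned map sheet to geographic coordinates (x = lon, y = lat).
gcps = [
    GroundControlPoint(row=980, col=1540, x=44.79, y=41.72),  # Tbilisi
    GroundControlPoint(row=620, col=830,  x=42.70, y=42.27),  # Kutaisi
    GroundControlPoint(row=790, col=460,  x=41.64, y=41.64),  # Batumi
    GroundControlPoint(row=300, col=260,  x=41.02, y=43.00),  # Sokhumi
]

# Fit an affine transform from the GCPs and write it into a copy of the scan.
transform = from_gcps(gcps)
with rasterio.open("georgia_1922_scan.tif") as src:   # placeholder file name
    profile = src.profile
    profile.update(transform=transform, crs="EPSG:4326")
    with rasterio.open("georgia_1922_georef.tif", "w", **profile) as dst:
        dst.write(src.read())
```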

Keywords: border, GIS, Georgia, historical cartography, old maps

Procedia PDF Downloads 212
24690 Machine Learning-Based Workflow for the Analysis of Project Portfolio

Authors: Jean Marie Tshimula, Atsushi Togashi

Abstract:

We develop a data-science approach that provides an interactive visualization and predictive models to find insights in the projects' historical data, in order for stakeholders to understand unseen opportunities in the African market that might otherwise escape them behind the online project portfolio of the African Development Bank. This machine learning-based web application identifies the market trends of the fastest-growing economies across the continent, as well as skyrocketing sectors that have a significant impact on the future of business in Africa. Owing to this, the approach is tailored to predict where investment is most needed. We create a corpus that includes the descriptions of more than 1,200 projects covering approximately 14 sectors across some 53 African countries. We then sift through this large amount of semi-structured data to extract the details that may contain directions to follow. In light of the foregoing, we apply a combination of Latent Dirichlet Allocation and Random Forests in the analysis module of our methodology to highlight the most relevant topics that investors may focus on when investing in Africa.
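
A minimal scikit-learn sketch of the LDA-plus-Random-Forest combination is shown below. The toy documents and sector labels are placeholders for the bank's project descriptions; the pipeline turns each description into a topic distribution and trains the forest on those topic features.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline

# Placeholder corpus; the real one holds 1,200+ project descriptions.
docs = [
    "rural electrification and grid extension for off-grid communities",
    "irrigation scheme and seed distribution for smallholder farmers",
    "port rehabilitation and logistics corridor modernization",
]
sectors = ["energy", "agriculture", "transport"]

pipeline = Pipeline([
    ("bow", CountVectorizer(stop_words="english")),
    ("lda", LatentDirichletAllocation(n_components=14, random_state=0)),  # ~14 sectors
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
pipeline.fit(docs, sectors)  # LDA topic weights become the forest's features
print(pipeline.predict(["feeder roads linking farms to regional markets"]))
```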

Keywords: machine learning, topic modeling, natural language processing, big data

Procedia PDF Downloads 151
24689 Study on the Relationship between the Urban Geography and Urban Agglomeration to the Effects of Carbon Emissions

Authors: Peng-Shao Chen, Yen-Jong Chen

Abstract:

In recent years, global warming, dramatic changes in energy prices, and the exhaustion of natural resources have illustrated that energy-related topics cannot be ignored. Although the relationship between cities and CO₂ emissions has been extensively studied in recent years, little attention has been paid to differences in the geographical locations of cities. Yet the geographical climate has a great impact on lifestyle from city to city, such as the types of buildings, the major industries of the city, etc. Therefore, this paper empirically investigates the effects of various urban factors on CO₂ emissions, taking into consideration the different geographic and climatic zones in which cities are located. Using a regression model and a dataset of urban agglomerations in East Asian cities with populations over one million, covering the three years 2005, 2010, and 2015, the findings suggest that the impact of urban factors on CO₂ emissions varies with the latitude of the cities. Surprisingly, all kinds of urban factors, including urban population, the share of GDP in the service industry, per capita income, and others, have different levels of impact on cities located in the tropical climate zone and the temperate climate zone. The results of the study analyze the impact of different urban factors on CO₂ emissions in urban areas in different geographical climate zones. These findings will be helpful for the formulation of relevant policies by urban planners and policy makers in different regions.
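
One common way to let the coefficients of such a regression differ by climate zone is an interaction specification. The sketch below is a stand-in with synthetic data rather than the paper's dataset; it uses statsmodels to regress CO₂ emissions on hypothetical urban factors with zone-specific interaction terms.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "log_pop": rng.normal(14, 1, n),                    # log urban population
    "service_gdp": rng.uniform(0.3, 0.8, n),            # service share of GDP
    "income": rng.normal(20, 5, n),                     # per capita income
    "zone": rng.choice(["tropical", "temperate"], n),   # climate zone
})
# Synthetic response with a zone-dependent population effect.
df["co2"] = (np.where(df.zone == "tropical", 1.2, 2.0) * df.log_pop
             + 5 * df.service_gdp + 0.1 * df.income + rng.normal(0, 1, n))

# C(zone) * x expands into main effects plus zone-specific interactions,
# so each factor's slope is allowed to differ between climate zones.
model = smf.ols("co2 ~ C(zone) * (log_pop + service_gdp + income)",
                data=df).fit()
print(model.summary())
```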

Keywords: carbon emissions, urban agglomeration, urban factor, urban geography

Procedia PDF Downloads 233
24688 “Octopub”: Geographical Sentiment Analysis Using Named Entity Recognition from Social Networks for Geo-Targeted Billboard Advertising

Authors: Oussama Hafferssas, Hiba Benyahia, Amina Madani, Nassima Zeriri

Abstract:

Although data nowadays takes multiple forms, from text to images and from audio to video, text is still the most widely used at the public level. At the academic and research level, and unlike the other forms, text can be considered the easiest form to process. Therefore, a branch of data mining research has always operated under its shadow, called "text mining". Its concept is just like data mining’s: finding valuable patterns in data, from large collections and tremendous volumes of data, in this case text. Named entity recognition (NER) is one of text mining’s disciplines; it aims to extract and classify references such as proper names, locations, expressions of time and dates, organizations, and more in a given text. Our approach, "Octopub", does not aim to find new ways to improve the named entity recognition process; rather, it is about finding a new and smart way to use NER so that we can extract the sentiments of millions of people, using social networks as a limitless information source, with marketing for product promotion as the main domain of application.
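
The core idea, pull location entities out of social posts with NER and attach a sentiment score to each location, can be sketched as follows. This is our own minimal illustration, not the Octopub implementation: the posts are invented, spaCy's small English model is assumed installed (python -m spacy download en_core_web_sm), and TextBlob stands in for whatever sentiment component the system actually uses.

```python
import spacy
from textblob import TextBlob

nlp = spacy.load("en_core_web_sm")

posts = [
    "The new coffee brand is everywhere in Algiers and people love it!",
    "Terrible delivery service in Oran, never ordering again.",
]

location_sentiment = {}
for post in posts:
    polarity = TextBlob(post).sentiment.polarity  # -1 (negative) .. +1 (positive)
    for ent in nlp(post).ents:
        if ent.label_ in ("GPE", "LOC"):  # geopolitical entities / locations
            location_sentiment.setdefault(ent.text, []).append(polarity)

# Average sentiment per location could then drive billboard placement.
for place, scores in location_sentiment.items():
    print(place, sum(scores) / len(scores))
```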

Keywords: text mining, named entity recognition (NER), sentiment analysis, social media networks (SN, SMN), business intelligence (BI), marketing

Procedia PDF Downloads 558
24687 Road Traffic Noise Mapping for Riyadh City Using GIS and LimA

Authors: Khalid A. Alsaif, Mosaad A. Foda

Abstract:

The primary objective of this study is to develop the first round of road traffic noise maps for Riyadh City using Geographical Information Systems (GIS) and the LimA 7810 predictor software. The road traffic data were measured or estimated as accurately as possible in order to obtain reliable noise maps, while the attributes of the roads and buildings were automatically exported from GIS. The simulation results at selected locations were validated against actual field measurements, obtained by a system that consists of a sound level meter, a GPS receiver, and a database to manage the measured data. The results show that the average error between the predicted and measured noise levels is below 3.0 dB.

Keywords: noise pollution, road traffic noise, LimA predictor, GIS

Procedia PDF Downloads 369
24686 Geographic Information System Using Google Fusion Table Technology for the Delivery of Disease Data Information

Authors: I. Nyoman Mahayasa Adiputra

Abstract:

Data in the field of health can be useful for data analysis; one example of health data is disease data. Disease data are usually plotted geographically according to the area where the data were collected, in this case the city of Denpasar, Bali. Disease data reports are still published in tabular form, and disease information has not been mapped in GIS form. In this research, disease information in Denpasar city is digitized in the form of a geographic information system, with the district as the smallest administrative area. Denpasar City consists of four districts: North Denpasar, East Denpasar, West Denpasar, and South Denpasar. We use Google Fusion Table technology for the map digitization process, a technology that offers convenience both to the administrator and to the recipient of the information. On the administrator's side, disease data input can be done easily and quickly. On the receiving end, the resulting GIS application can be published as a website-based application so that it can be accessed anywhere and anytime. In general, the results obtained in this study are divided into two parts: (1) Geolocation of Denpasar and all of its districts: the process of digitizing the map of Denpasar city produces a polygon geolocation of each district of Denpasar city. These results can be utilized in subsequent GIS studies that use the same administrative areas. (2) Dengue fever mapping for 2014 and 2015: the disease data used in this study are dengue fever case data for 2014 and 2015, taken from the 2015 and 2016 profile reports of the Denpasar Health Department. This mapping can be useful for analyzing the spread of dengue hemorrhagic fever in the city of Denpasar.
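
Google Fusion Tables was retired by Google in late 2019, so a present-day re-implementation of the same publish-a-district-choropleth idea would use another library. The sketch below uses folium, a substitute of our choosing rather than the authors' tool; the district GeoJSON file, its property names, and the case counts are hypothetical placeholders.

```python
import folium
import pandas as pd

# Hypothetical dengue case counts per district.
cases_2015 = pd.DataFrame({
    "district": ["North Denpasar", "East Denpasar",
                 "West Denpasar", "South Denpasar"],
    "cases": [210, 145, 180, 230],
})

m = folium.Map(location=[-8.65, 115.21], zoom_start=12)  # central Denpasar
folium.Choropleth(
    geo_data="denpasar_districts.geojson",   # hypothetical polygon file
    data=cases_2015,
    columns=["district", "cases"],
    key_on="feature.properties.district",    # assumed GeoJSON property name
    fill_color="YlOrRd",
    legend_name="Dengue cases (2015)",
).add_to(m)
m.save("denpasar_dengue_2015.html")          # publishable web map
```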

Keywords: geographic information system, Google fusion table technology, delivery of disease data information, Denpasar city

Procedia PDF Downloads 104
24685 An Overview of College English Writing Teaching Studies in China Between 2002 and 2022: Visualization Analysis Based on CiteSpace

Authors: Yang Yiting

Abstract:

This paper employs CiteSpace to conduct a visualization analysis of the literature on college English writing teaching research published in core journals from the CNKI database and in CSSCI journals between 2002 and 2022. It aims to explore the characteristics of research on college English writing teaching and its future directions. The present study yielded the following major findings: the field primarily focuses on innovative writing teaching models and methods, the integration of traditional classroom teaching and information technology, and instructional strategies to enhance students' writing skills. Future research is anticipated to involve a hybrid writing teaching approach combining online and offline teaching methods, leveraging the "Internet+" digital platform, with the aim of elevating students' writing proficiency. This paper also presents a prospective outlook for college English writing teaching research in China.

Keywords: CiteSpace, college English, writing teaching, visualization analysis

Procedia PDF Downloads 30
24684 Application of Public Access Two-Dimensional Hydrodynamic and Distributed Hydrological Models for Flood Forecasting in Ungauged Basins

Authors: Ahmad Shayeq Azizi, Yuji Toda

Abstract:

In Afghanistan, floods are the most frequent and recurrent events among natural disasters. On the other hand, the lack of monitoring data is a severe problem, which increases the difficulty of devising appropriate flood countermeasures and flood forecasts. This study simulates flood inundation in the Harirud River Basin by applying the distributed hydrological model Integrated Flood Analysis System (IFAS) and the 2D hydrodynamic model International River Interface Cooperative (iRIC), based on satellite rainfall combined with historical peak discharge and globally accessible data. The results of the simulation can predict the inundation area, depth, and velocity, and hardware countermeasures such as the impact of levee installation can be discussed using the present method. The methodology proposed in this study is suitable for areas where hydrological and geographical data, including river survey data, are poorly observed.

Keywords: distributed hydrological model, flood inundation, hydrodynamic model, ungauged basins

Procedia PDF Downloads 134
24683 Interactive Glare Visualization Model for an Architectural Space

Authors: Florina Dutt, Subhajit Das, Matthew Swartz

Abstract:

Lighting design and its impact on indoor comfort conditions are an integral part of good interior design. The impact of lighting in an interior space is manifold, and it involves many subcomponents such as glare, color, tone, luminance, control, energy efficiency, flexibility, etc. While other components have been researched and discussed multiple times, this paper discusses the research done to understand the glare component from an artificial lighting source in an indoor space. Consequently, the paper presents a parametric model to convey real-time glare levels in an interior space to the designer/architect. Our end users are architects, for whom it is of utmost importance to know what impression the proposed lighting arrangement and proposed furniture layout will have on indoor comfort quality. This especially involves those furniture elements (or surfaces) that strongly reflect light around the space. Essentially, the designer needs to know the ramifications of discomfort glare at an early stage of the design cycle, when he can still afford to make changes to the proposed design and consider different routes of solution for his client. Unfortunately, most of the lighting analysis tools that are available offer rigorous computation and analysis on the back end, eventually making it challenging for the designer to analyze and understand glare from interior light quickly. Moreover, many of them do not focus on the glare aspect of artificial light. That is why, in this paper, we explain a novel approach to approximate interior glare data. In addition, we visualize these data in a color-coded format, expressing the implications of the proposed interior design layout. We focus on making this analysis process computationally fluid and fast, enabling complete user interaction with the capability to vary different ranges of user inputs, adding more degrees of freedom for the user. We test our proposed parametric model on a case study, a computer lab space in our college facility.
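
The abstract does not name the glare metric the model approximates, but one standard per-viewpoint discomfort measure such a parametric tool might compute is the CIE Unified Glare Rating (UGR). The sketch below is our illustrative assumption, not the authors' algorithm, and the luminaire values are hypothetical.

```python
import math

def unified_glare_rating(l_background, luminaires):
    """CIE Unified Glare Rating.

    l_background : background luminance Lb in cd/m^2
    luminaires   : list of (L, omega, p) per source, where L is the source
                   luminance (cd/m^2), omega its solid angle at the eye (sr),
                   and p the Guth position index.
    """
    total = sum(L**2 * omega / p**2 for L, omega, p in luminaires)
    return 8 * math.log10(0.25 / l_background * total)

# Hypothetical example: two ceiling fixtures seen from one desk position.
sources = [(3000.0, 0.0008, 1.5), (2500.0, 0.0005, 2.1)]
ugr = unified_glare_rating(l_background=40.0, luminaires=sources)
print(f"UGR = {ugr:.1f}")  # values above ~19 are typically uncomfortable for office work
```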

Keywords: computational geometry, glare impact in interior space, info visualization, parametric lighting analysis

Procedia PDF Downloads 322
24682 Multi Cloud Storage Systems for Resource Constrained Mobile Devices: Comparison and Analysis

Authors: Rajeev Kumar Bedi, Jaswinder Singh, Sunil Kumar Gupta

Abstract:

Cloud storage is a model of online data storage where data is stored in a virtualized pool of servers hosted by third parties (cloud service providers, CSPs) and located in different geographical locations. Cloud storage has revolutionized the way users access their data online anywhere, anytime, and using any device, such as a tablet, mobile phone, or laptop. A lot of issues, such as vendor lock-in, frequent service outages, data loss, and performance-related issues, exist in single cloud storage systems. To evade these issues, the concept of multi cloud storage was introduced. A lot of multi cloud storage systems exist in the market for mobile devices. In this article, we provide a comparison of four multi cloud storage systems for mobile devices (Otixo, Unclouded, CloudFuze, and Clouds) and evaluate their performance on the basis of CPU usage, battery consumption, time consumption, and data usage parameters on three mobile devices (Nexus 5, Moto G, and a Nexus 7 tablet) using a Wi-Fi network. Finally, open research challenges and future scope are discussed.

Keywords: cloud storage, multi cloud storage, vendor lock-in, mobile devices, mobile cloud computing

Procedia PDF Downloads 376
24681 Comparison of Irradiance Decomposition and Energy Production Methods in a Solar Photovoltaic System

Authors: Tisciane Perpetuo e Oliveira, Dante Inga Narvaez, Marcelo Gradella Villalva

Abstract:

Installations of solar photovoltaic systems have increased considerably in the last decade. Therefore, monitoring of meteorological data (solar irradiance, air temperature, wind velocity, etc.) is important for predicting the potential of a given geographical area for solar energy production. In this sense, the present work compares two computational tools that are capable of estimating the energy generation of a photovoltaic system through correlation analyses of solar radiation data: the PVsyst software and an algorithm based on the PVlib package implemented in MATLAB. In order to achieve this objective, it was necessary to obtain solar radiation data (measured and from a solarimetric database), analyze the decomposition of global solar irradiance into direct normal and horizontal diffuse components, and model the devices of a photovoltaic system (solar modules and inverters) for energy production calculations. Simulated results were compared with experimental data in order to evaluate the performance of the studied methods. Errors in the estimation of energy production were less than 30% for the MATLAB algorithm and less than 20% for the PVsyst software.
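
The decomposition step can be illustrated with pvlib-python, the Python counterpart of the MATLAB PVlib package mentioned above. The sketch below splits a hypothetical GHI series into direct-normal and diffuse-horizontal components using the Erbs correlation, one of several decomposition models pvlib offers; the site and irradiance values are illustrative, not the study's data.

```python
import numpy as np
import pandas as pd
import pvlib

times = pd.date_range("2019-01-15 06:00", "2019-01-15 18:00",
                      freq="30min", tz="America/Sao_Paulo")
latitude, longitude = -22.82, -47.07   # Campinas, Brazil (illustrative site)

solpos = pvlib.solarposition.get_solarposition(times, latitude, longitude)

# Hypothetical measured global horizontal irradiance (W/m^2): a crude
# bell-shaped day stands in for a measured series.
hours = times.hour + times.minute / 60
ghi = pd.Series(np.clip(800 * (1 - np.abs(hours - 12) / 6), 0, None),
                index=times)

# Erbs correlation: split GHI into direct normal (DNI) and
# diffuse horizontal (DHI) components.
decomposed = pvlib.irradiance.erbs(ghi, solpos["zenith"], times)
print(decomposed[["dni", "dhi"]].head())
```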

Keywords: energy production, meteorological data, irradiance decomposition, solar photovoltaic system

Procedia PDF Downloads 105
24680 The Use of Classifiers in Image Analysis of Oil Wells Profiling Process and the Automatic Identification of Events

Authors: Jaqueline Maria Ribeiro Vieira

Abstract:

Different strategies and tools are available in the oil and gas industry for detecting and analyzing tension and possible fractures in borehole walls. Most of these techniques are based on manual observation of the captured borehole images. While this strategy may be possible and convenient with small images and few data, it becomes difficult and prone to errors when big databases of images must be treated. Moreover, the patterns may differ across the image area, depending on many characteristics (drilling strategy, rock components, rock strength, etc.). Previously, we developed and proposed a novel strategy capable of detecting patterns in borehole images that may point to regions with tension and breakout characteristics, based on segmented images. In this work, we propose the inclusion of data-mining classification strategies in order to create a knowledge database of the segmented curves. These classifiers allow that, after some time of using the system and manually pointing out parts of borehole images that correspond to tension regions and breakout areas, the system will automatically indicate and suggest new candidate regions, with higher accuracy. We suggest the use of different classifier methods in order to achieve different knowledge data set configurations.
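
The classifier-comparison step could look like the following scikit-learn sketch. The feature matrix is entirely hypothetical: in practice each row would hold descriptors extracted from one segmented curve (e.g., curvature statistics, orientation, depth), and the labels would come from regions analysts have manually confirmed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in data: 300 segmented curves, 8 descriptors each,
# labeled 1 for confirmed tension/breakout regions and 0 otherwise.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = rng.integers(0, 2, size=300)

# Compare several classifier families, as the abstract suggests.
for name, clf in [("SVM", SVC()),
                  ("Random Forest", RandomForestClassifier(random_state=0)),
                  ("k-NN", KNeighborsClassifier())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```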

Keywords: image segmentation, oil well visualization, classifiers, data mining, visual computing

Procedia PDF Downloads 272
24679 Using the Geographical Information Systems Story Maps in the Planning and Implementation of the Integrated Development Plan at the City of Umhlathuze, South Africa

Authors: Sibonakaliso Shadrack Nhlabathi

Abstract:

In South Africa, local governments, which are charged with the provision of services and amenities, frequently face challenges of public protests against what the public perceives to be poor services. Public protests are common even though the Integrated Development Plan (IDP), a central public participation document which informs local government planning and resource management, ought to be a reflection of the voices of the beneficiary communities. The IDP concept, which evolved from the international discourse on governance, planning, and urban management of the 1990s, and which bears similarities to the UK’s approaches to urban management and planning, is a significant concept in planning practice in South Africa. Against this backdrop of the spread of public protests and the supposed public participation in IDP formulation, this study investigated the extent to which residents of the city of uMhlathuze municipality, South Africa, could use Geographical Information Systems (GIS) Story Maps to enhance public participation in the provision of services and amenities. To this effect, the study collected and analysed data obtained through interactive web maps or hard-copy maps, accompanied by the research participants' attribute data. Research participants identified positive or negative service delivery areas: positive places were those which the residents regarded as having good infrastructure and amenities, and weak places were marked as having poor amenities. Participants then located each of their identified strong or weak places as points on the GIS Story Maps or on hard-copy maps of the city. The information which participants provided was subsequently analysed to produce maps of patterns of service provision. In this way, the study succeeded in identifying places that needed attention regarding the delivery of services and amenities. Thus, this study advanced service provision through GIS Story Maps.

Keywords: GIS, IDP, South Africa, story maps

Procedia PDF Downloads 98
24678 Data Management System for Environmental Remediation

Authors: Elizaveta Petelina, Anton Sizo

Abstract:

Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Therefore, appropriate data management is required as a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decision making in relation to required mitigation measures and the assessment of remediation success. The EDMS is a combination of enterprise- and desktop-level data management and Geographic Information System (GIS) tools assembled to assist environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. The EDMS consists of seven main components: a Geodatabase component that contains a spatial database to store and query spatially distributed data; a GIS and Web GIS component that combines desktop and server-based GIS solutions; a Field Data Collection component that contains tools for field work; a Quality Assurance (QA)/Quality Control (QC) component that combines operational procedures for QA and measures for QC; a Data Import and Export component that includes tools and templates to support project data flow; a Lab Data component that provides a connection between the EDMS and laboratory information management systems; and a Reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada. The EDMS has effectively facilitated integrated decision-making for CLEANS project managers and transparency amongst stakeholders.

Keywords: data management, environmental remediation, geographic information system, GIS, decision making

Procedia PDF Downloads 125
24677 Digital Geography and Geographic Information System in Schools: Towards a Hierarchical Geospatial Approach

Authors: Mary Fargher

Abstract:

This paper examines the opportunities of using a more hierarchical approach to geospatial enquiry in using GIS in school geography. A case is made that it is not just the lack of teacher technological knowledge that is stopping some teachers from using GIS in the classroom but that there is a gap in their understanding of how to link GIS use more specifically to the pedagogy of teaching geography with GIS. Using a hierarchical approach to geospatial enquiry as a theoretical framework, the analysis shows clearly how concepts of spatial distribution, interaction, relation, comparison, and temporal relationships can be used by teachers more explicitly to capitalise on the analytical power of GIS and to construct what can be interpreted as powerful geographical knowledge. An exemplar illustrating this approach on the topic of geo-hazards is then presented for critical analysis and discussion. Recommendations are then made for a model of progression for geography teacher education with GIS through hierarchical geospatial enquiry that takes into account beginner, intermediate, and more advanced users.

Keywords: digital geography, GIS, education, hierarchical geospatial enquiry, powerful geographical knowledge

Procedia PDF Downloads 118
24676 Automatic Detection of Traffic Stop Locations Using GPS Data

Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell

Abstract:

Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to help identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections or highway congestion. Delaunay triangulation is used to perform this assessment in the clustering phase. While most existing clustering algorithms need assumptions about the data distribution, the effectiveness of the Delaunay triangulation relies on triangulating geographical data points without such assumptions. Our proposed method starts by cleaning noise from the data and normalizing it. Next, the framework identifies stoppage points by calculating the traveled distance. The last step is to use clustering to form groups of waypoints for signalized traffic and highway congestion. A binary classifier, which uses the length of the cluster to detect congestion, is then applied to distinguish highway congestion from signalized stop points. The proposed framework shows high accuracy, identifying the stop positions and congestion points correctly in around 99.2% of trials. We show that it is possible, using limited GPS data, to distinguish these two stop types with high accuracy.
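
A minimal version of the Delaunay-based clustering phase can be sketched with SciPy: triangulate the stopped waypoints, discard edges longer than a separation threshold, and read the clusters off as connected components. The points and the threshold below are synthetic stand-ins; real waypoints would first be projected to a metric coordinate system.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import Delaunay

# Two synthetic groups of stopped waypoints (coordinates in meters).
points = np.vstack([
    np.random.default_rng(0).normal([0, 0], 5, (40, 2)),
    np.random.default_rng(1).normal([200, 50], 5, (60, 2)),
])

# Collect the unique edges of the Delaunay triangulation.
tri = Delaunay(points)
edges = set()
for simplex in tri.simplices:                 # each simplex is a triangle
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

# Keep only short edges; long edges bridge distinct stop locations.
max_edge = 15.0                                # separation threshold (m)
keep = [(a, b) for a, b in edges
        if np.linalg.norm(points[a] - points[b]) <= max_edge]

rows = [a for a, b in keep]
cols = [b for a, b in keep]
graph = coo_matrix((np.ones(len(keep)), (rows, cols)),
                   shape=(len(points),) * 2)
n_clusters, labels = connected_components(graph, directed=False)
print(f"{n_clusters} stop clusters found")
```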

Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data

Procedia PDF Downloads 247
24675 Meanings and Concepts of Standardization in Systems Medicine

Authors: Imme Petersen, Wiebke Sick, Regine Kollek

Abstract:

In systems medicine, high-throughput technologies produce large amounts of data on different biological and pathological processes, including (disturbed) gene expression, metabolic pathways, and signaling. The large volume of data of different types, stored in separate databases and often located at different geographical sites, has posed new challenges regarding data handling and processing. Tools based on bioinformatics have been developed to resolve the upcoming problems of systematizing, standardizing, and integrating the various data. However, the heterogeneity of data gathered at different levels of biological complexity is still a major challenge in data analysis. To build multilayer disease modules, large and heterogeneous data of disease-related information (e.g., genotype, phenotype, environmental factors) are correlated. Therefore, a great deal of attention in systems medicine has been paid to data standardization, primarily to retrieve and combine large, heterogeneous datasets into standardized and integrated forms and structures. However, this data-centred concept of standardization in systems medicine is contrary to the debate in science and technology studies (STS) on standardization, which rather emphasizes the dynamics, contexts, and negotiations of standard operating procedures. Based on empirical work on research consortia that explore the molecular profiles of diseases to establish systems medical approaches in the clinic in Germany, we trace how standardized data are processed and shaped by bioinformatics tools, how scientists using such data in research perceive such standard operating procedures, and which consequences for knowledge production (e.g., modeling) arise from them. Hence, different concepts and meanings of standardization are explored to gain deeper insight into standard operating procedures, not only in systems medicine but also beyond.

Keywords: data, science and technology studies (STS), standardization, systems medicine

Procedia PDF Downloads 310
24674 Meta-Review of Scholarly Publications on Biosensors: A Bibliometric Study

Authors: Nasrine Olson

Abstract:

With over 70,000 scholarly publications on the topic of biosensors, gaining an overview of the field has become a challenge. To facilitate such an overview, over 700 expert reviews of publications on biosensors and related topics are currently available. This study focuses on these review papers in order to provide a meta-review of the area, offering a statistical analysis and overview of biosensor-related review papers. Comprehensive searches were conducted in the Web of Science and PubMed databases, and the resulting empirical material was analyzed using bibliometric methods and tools. The study finds that the biosensor-related review papers can be categorized into five related subgroups, broadly denoted by (i) properties of materials and particles, (ii) analysis and indicators, (iii) diagnostics, (iv) pollutants and analytical devices, and (v) treatment/application. For easy and clear access to the findings, visualizations of clusters and networks of connections are presented. The study includes a temporal dimension and identifies trends over the years, with an emphasis on the most recent developments. This paper provides useful insights for those who wish to form a better understanding of the research trends in the area of biosensors.

Keywords: bibliometrics, biosensors, meta-review, statistical analysis, trends visualization

Procedia PDF Downloads 186
24673 Geospatial Network Analysis Using Particle Swarm Optimization

Authors: Varun Singh, Mainak Bandyopadhyay, Maharana Pratap Singh

Abstract:

The shortest path (SP) problem concerns finding the shortest path from a specific origin to a specified destination in a given network while minimizing the total cost associated with the path. This problem has widespread applications. Important applications of the SP problem include vehicle routing in transportation systems, particularly in the field of in-vehicle Route Guidance Systems (RGS), and the traffic assignment problem (in transportation planning). Well-known evolutionary methods like Genetic Algorithms (GA), Ant Colony Optimization, and Particle Swarm Optimization (PSO) have been applied to solve complex optimization problems and to overcome the shortcomings of existing shortest path analysis methods. It has been reported by various researchers that PSO performs better than other evolutionary optimization algorithms in terms of success rate and solution quality. Further, Geographic Information Systems (GIS) have emerged as key information systems for geospatial data analysis and visualization. This research paper focuses on the application of PSO for solving the shortest path problem between multiple points of interest (POI), based on spatial data of Allahabad City and traffic speed data collected using GPS. Geovisualization of the results of the analysis is carried out in GIS.
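
For reference, the core PSO mechanics the paper builds on, velocity and position updates driven by personal and global bests, are compact enough to sketch on a toy continuous objective. The paper's actual solver applies a discrete variant to the Allahabad road network; the sketch below only illustrates the update rules, with parameter values of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                        # toy cost: sphere function
    return np.sum(x**2, axis=1)

n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5                # inertia, cognitive, social weights

x = rng.uniform(-10, 10, (n_particles, dim))   # positions
v = np.zeros_like(x)                           # velocities
pbest = x.copy()                               # personal bests
pbest_cost = objective(x)
gbest = pbest[np.argmin(pbest_cost)]           # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Standard PSO update: inertia + pull toward personal and global bests.
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    cost = objective(x)
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("best cost:", pbest_cost.min())
```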

Keywords: particle swarm optimization, GIS, traffic data, outliers

Procedia PDF Downloads 448
24672 Exploring Influence Range of Tainan City Using Electronic Toll Collection Big Data

Authors: Chen Chou, Feng-Tyan Lin

Abstract:

Big Data has attracted a lot of attention in many fields for analyzing research issues based on large volumes of data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance, and travel time of vehicles on the national freeway. This study, taking advantage of ETC big data combined with urban planning theory, attempts to explore various phenomena of inter-city transportation activities. ETC data, one of the government's open datasets, are voluminous, complete, and quickly updated. One may recall that living areas have been delimited by location, population, area, and subjective consciousness. However, these factors cannot appropriately reflect people's movement paths in daily life. In this study, the concept of "living area" is replaced by "influence range" to show the dynamics and variation with time and purpose of activities. This study uses data mining with Python and Excel, and visualizes the number of trips with GIS, to explore the influence range of Tainan city and the purposes of trips, and to discuss the living area as currently delimited. It creates a dialogue between the concepts of "central place theory" and "living area", presents a new point of view, and integrates the application of big data, urban planning, and transportation. The findings will be valuable for resource allocation and land apportionment in spatial planning.
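
The Python-based trip aggregation described above might begin with something like the following pandas sketch: count ETC trips into and out of Tainan by origin-destination pair and hour, as a proxy for the city's influence range. The column names and sample rows are hypothetical; actual ETC records log gantry-to-gantry passages on the national freeway.

```python
import pandas as pd

# Hypothetical sample of ETC trip records.
trips = pd.DataFrame({
    "origin":      ["Kaohsiung", "Tainan", "Chiayi", "Tainan"],
    "destination": ["Tainan", "Taipei", "Tainan", "Kaohsiung"],
    "start_time":  pd.to_datetime(["2016-03-01 07:10", "2016-03-01 08:05",
                                   "2016-03-01 07:40", "2016-03-01 18:20"]),
})

# Keep trips touching Tainan, then count flows per hour and OD pair;
# the counts would feed the GIS visualization of the influence range.
tainan = trips[(trips.origin == "Tainan") | (trips.destination == "Tainan")]
flows = (tainan.groupby([tainan.start_time.dt.hour, "origin", "destination"])
               .size().rename("trip_count").reset_index())
print(flows)
```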

Keywords: Big Data, ITS, influence range, living area, central place theory, visualization

Procedia PDF Downloads 245
24671 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company

Authors: Rahma Al Balushi

Abstract:

Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO 55001 certification, which requires setting out documented Standard Operating Procedures (SOP). Hence, a documented SOP for the Geographical Information System (GIS) data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed as such by the Asset Information & GIS department. This paper will describe in detail the GIS data submission process and the journey to develop the current process. The methodology used to develop the process is based on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, due to this process, GIS has been and is ready to be integrated with other systems, as well as being the source of data for all OETC users. Some decisions related to issuing No Objection Certificates (NOC) and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) have consequently been made upon GIS data availability. On the other hand, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations also helped to reduce the missing attributes of GIS transmission data. A considerable difference in Geodatabase (GDB) completeness percentage was observed between 2017 and 2021. Overall, it is concluded that through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.

Keywords: asset management ISO55001, standard procedures process, governance, geodatabase, NOC, CMMS

Procedia PDF Downloads 172
24670 High Secure Data Hiding Using Cropping Image and Least Significant Bit Steganography

Authors: Khalid A. Al-Afandy, El-Sayyed El-Rabaie, Osama Salah, Ahmed El-Mhalaway

Abstract:

This paper presents a highly secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. Predefined secret coordinate crops are extracted from the cover image. The secret text message is divided into sections, the number of which equals the number of image crops. Each section of the secret text message is embedded into an image crop in a secret sequence using the LSB technique, with the embedding done in the cover image's color channels. The stego image is obtained by reassembling the image and the stego crops. The results of the technique are compared to other state-of-the-art techniques. Evaluation is based on visual inspection to detect any degradation of the stego image, the difficulty of extracting the embedded data by any unauthorized viewer, the Peak Signal-to-Noise Ratio (PSNR) of the stego image, and the CPU time of the embedding algorithm. Experimental results confirm that the proposed technique is more secure compared with the other traditional techniques.
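
The LSB embedding primitive at the core of the technique is easy to sketch. The code below is a simplified single-crop, single-channel version under our own assumptions (the full method described above additionally splits the message across several secret crops in a secret order); the file name is a placeholder.

```python
import numpy as np
from PIL import Image

def embed_lsb(crop: np.ndarray, message: str) -> np.ndarray:
    """Overwrite the least significant bits of a crop with the message bits."""
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = crop.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("message too long for this crop")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # clear LSB, set bit
    return flat.reshape(crop.shape)

def extract_lsb(crop: np.ndarray, n_chars: int) -> str:
    """Read n_chars characters back out of the crop's least significant bits."""
    bits = crop.reshape(-1)[:n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode()

cover = np.array(Image.open("cover.png").convert("RGB"))  # placeholder file
crop = cover[100:200, 100:200, 2]          # blue channel of one secret crop
stego_crop = embed_lsb(crop, "meet at dawn")
print(extract_lsb(stego_crop, len("meet at dawn")))   # -> "meet at dawn"
```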

Keywords: steganography, stego, LSB, crop

Procedia PDF Downloads 240
24669 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company

Authors: Rahma Saleh Hussein Al Balushi

Abstract:

Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO 55001 certification, which requires setting out documented Standard Operating Procedures (SOP). Hence, a documented SOP for the Geographical Information System (GIS) data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed as such by the Asset Information & GIS department. This paper will describe in detail the current GIS data submission process and the journey of developing it. The methodology used to develop the process is based on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, due to this process, GIS has been and is ready to be integrated with other systems, as well as being the source of data for all OETC users. Some decisions related to issuing No Objection Certificates (NOC) for excavation permits and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) have consequently been made upon GIS data availability. On the other hand, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations has also contributed to reducing the missing attributes and enhancing the data quality index of GIS transmission data. A considerable difference in Geodatabase (GDB) completeness percentage was observed between 2017 and 2022. Overall, it is concluded that through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.

Keywords: asset management ISO55001, standard procedures process, governance, CMMS

Procedia PDF Downloads 93
24668 Flow Visualization around a Rotationally Oscillating Cylinder

Authors: Cemre Polat, Mustafa Soyler, Bulent Yaniktepe, Coskun Ozalp

Abstract:

This study aimed to control the flow actively by imparting an oscillating rotational motion to a vertically placed cylinder, and the resulting flow characteristics were determined. First, the flow structure around the non-rotating cylinder was investigated with dye experiments; then the flow structure around cylinders with different oscillation angles (θ = 60°, θ = 120°, and θ = 180°) and different rotation speeds (15 rpm and 30 rpm) was examined. In this way, the effectiveness of the oscillation angle and rotation speed in flow control was investigated. The dye experiments used a dye/water mixture obtained by mixing powdered Rhodamine 6G with water, which fluoresces under laser light and allows detailed observation of the flow structure. During the experiments, the dye was injected into the flow with a thin needle upstream of the cylinder, at a distance that would not disturb the flow. Images were captured at 100 frames per second with a Canon EOS M50 (24 MP) digital mirrorless camera at a resolution of 1280 × 720 pixels. The images were then analyzed, and pictures representing the flow structure for each experiment were obtained. As a result of the study, it was observed that no separation points formed at a 180° oscillation angle at 15 rpm, or at 120° and 180° oscillation angles at 30 rpm, and that the flow was controlled relative to the fixed-cylinder case.

Keywords: active flow control, cylinder, flow visualization, rotationally oscillating cylinder

Procedia PDF Downloads 142