Search results for: geographic data streams
24551 Real-Time Big-Data Warehouse: A Next-Generation Enterprise Data Warehouse and Analysis Framework
Authors: Abbas Raza Ali
Abstract:
Big Data technology is gradually becoming a dire need of large enterprises. These enterprises generate massive amounts of off-line and streaming data, in both structured and unstructured formats, on a daily basis. Effectively extracting useful insights from such large-scale datasets is challenging; at times, technology constraints make it difficult to retain more than a few months of transactional data history. This paper presents a framework to efficiently manage massive and complex datasets. The framework has been tested on a communication service provider producing very large and complex streaming data in binary format. The communication industry is bound by regulators to keep a history of their subscribers' call records, where every action of a subscriber generates a record. Managing and analyzing transactional data also allows service providers to better understand their customers' behavior; for example, deep packet inspection requires transactional internet usage data to explain the internet usage behaviour of subscribers. However, current relational database systems limit service providers to maintaining history only at a semantic level, aggregated per subscriber. The framework addresses these challenges by leveraging Big Data technology, which optimally manages and allows deep analysis of complex datasets. The framework has been applied to offload the service provider's existing Intelligent Network Mediation and relational Data Warehouse onto Big Data. The service provider has a 50+ million subscriber base with yearly growth of 7-10%. The end-to-end process, which involves binary-to-ASCII decoding of call detail records, stitching of all the interrogations of a call (transformations), and aggregation of all the call records of a subscriber, takes no more than 10 minutes.
Keywords: big data, communication service providers, enterprise data warehouse, stream computing, Telco IN Mediation
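A minimal sketch (assumed for illustration, not the provider's actual pipeline) of the last two stages described above, stitching the interrogations of a call and aggregating per subscriber, using pandas; all field names are invented.

```python
# Illustrative sketch: stitch decoded CDR interrogations per call, then
# aggregate per subscriber. Column names (call_id, subscriber_id, ...) are
# assumptions, not the provider's actual schema.
import pandas as pd

# Decoded call detail records: one row per interrogation (leg) of a call.
cdrs = pd.DataFrame({
    "call_id":       ["c1", "c1", "c2", "c3", "c3", "c3"],
    "subscriber_id": ["s1", "s1", "s1", "s2", "s2", "s2"],
    "duration_sec":  [30, 45, 120, 10, 20, 30],
    "bytes_used":    [0, 0, 0, 512, 1024, 256],
})

# "Stitching": collapse all interrogations of a call into one record.
calls = (cdrs.groupby(["call_id", "subscriber_id"], as_index=False)
             .agg(duration_sec=("duration_sec", "sum"),
                  bytes_used=("bytes_used", "sum"),
                  legs=("call_id", "size")))

# Subscriber-level aggregation, the level at which history is usually kept.
per_subscriber = (calls.groupby("subscriber_id")
                       .agg(calls=("call_id", "nunique"),
                            total_duration=("duration_sec", "sum"),
                            total_bytes=("bytes_used", "sum")))
print(per_subscriber)
```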
Procedia PDF Downloads 180
24550 Food Strategies in the Mediterranean Basin, Possible for Food Safety and Security
Authors: Lorenza Sganzetta, Nunzia Borrelli
Abstract:
The research reflects on the current mapping of Food Strategies, on the reasons why particular sustainability priorities appear in particular geographic areas, and on how these priorities are evolving in Mediterranean planning dispositions. The whirling population growth affecting global cities poses an enormous challenge to conventional resource-intensive food production and supply and creates an urgent need to face food safety, food security and sustainability concerns. Urban or Territorial Food Strategies can provide an interesting path for the development of this new agenda within the imperative principle of sustainability. Specifically, it is relevant to explore what 'sustainability' means within these policies. Most of these plans include actions related to four main components and interpretations of sustainability: food security and safety, food equity, environmental sustainability itself, and cultural identity; at the design phase, they differ slightly from each other according to their degree of approximation to one of these dimensions. Moving from these assumptions, the article analyzes practices and policies representative of different Food Strategies around the world, focusing on the Mediterranean ones, on the problems and negative externalities from which they start, on the first interventions being implemented, and on their main objectives. We mainly use qualitative data from primary and secondary collections. So far, an essential observation can be made about the relationship between these sustainability dimensions and geography. In statistical terms, the US and Canadian policies tended to devote much of their attention to health issues and access to food; the northern European ones showed special attention to environmental issues and the shortening of the supply chain; and finally, the policies that, even in limited numbers, were being developed in the Mediterranean basin were characterized by a strong territorial and cultural imprint, and their major aim was to preserve local production and the contact between the productive land and the end consumer. Recently, though, Mediterranean food planning strategies have been focusing more on health and food accessibility issues, analyzing our diets not just as a matter of culture and territorial branding but as tools for reducing public health costs and improving access to fresh food for everyone. The article then reflects on how Food Safety, Food Security and Health are entering the new agenda of the Mediterranean Food Strategies. The research hypothesis suggests that the economic crisis that in recent years hit both producers and consumers had a significant impact on nutrition habits and on the redefinition of food poverty, even in the homeland of the healthy Mediterranean diet. This trend and other variables influenced the orientation and the objectives of the food strategies.
Keywords: food security, food strategy, health, sustainability
Procedia PDF Downloads 227
24549 Programming with Grammars
Authors: Peter M. Maurer
Abstract:
DGL is a context free grammar-based tool for generating random data. Many types of simulator input data require some computation to be placed in the proper format. For example, it might be necessary to generate ordered triples in which the third element is the sum of the first two elements, or it might be necessary to generate random numbers in some sorted order. Although DGL is universal in computational power, generating these types of data is extremely difficult. To overcome this problem, we have enhanced DGL to include features that permit direct computation within the structure of a context free grammar. The features have been implemented as special types of productions, preserving the context free flavor of DGL specifications.Keywords: DGL, Enhanced Context Free Grammars, Programming Constructs, Random Data Generation
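To make the idea of computing productions concrete, here is a conceptual Python analogue; this is not DGL syntax, and the production names and structure are invented purely for illustration.

```python
# Conceptual analogue of a grammar with "computing productions": ordinary
# context-free alternatives mixed with productions that perform computation,
# e.g. ordered triples whose third element is the sum of the first two, or
# random numbers emitted in sorted order.
import random

def triple():
    # Computed production: third element is the sum of the first two.
    a, b = random.randint(0, 9), random.randint(0, 9)
    return f"({a}, {b}, {a + b})"

def sorted_numbers():
    # Computed production: random numbers emitted in sorted order.
    return " ".join(str(x) for x in sorted(random.sample(range(100), 5)))

grammar = {
    "record": [["triple"], ["sorted"]],   # ordinary CFG alternatives
    "triple": triple,                     # special (computing) productions
    "sorted": sorted_numbers,
}

def expand(symbol):
    rule = grammar.get(symbol)
    if rule is None:                      # terminal symbol
        return symbol
    if callable(rule):                    # computing production
        return rule()
    choice = random.choice(rule)          # ordinary CFG production
    return " ".join(expand(s) for s in choice)

for _ in range(3):
    print(expand("record"))
```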
Procedia PDF Downloads 151
24548 A Model Architecture Transformation with Approach by Modeling: From UML to Multidimensional Schemas of Data Warehouses
Authors: Ouzayr Rabhi, Ibtissam Arrassen
Abstract:
To provide a complete analysis of the organization and to help decision-making, leaders need to have relevant data; Data Warehouses (DW) are designed to meet such needs. However, designing a DW is not trivial, and there is no formal method to derive a multidimensional schema from heterogeneous databases. In this article, we present a Model-Driven approach to the design of data warehouses. We describe a multidimensional meta-model and also specify a set of transformations starting from a Unified Modeling Language (UML) metamodel. In this approach, the UML metamodel and the multidimensional one are both considered platform-independent models (PIM). The first meta-model is mapped into the second one through transformation rules carried out by the Query/View/Transformation (QVT) language. This proposal is validated through the application of our approach to generating a multidimensional schema of a Balanced Scorecard (BSC) DW. We are interested in the BSC perspectives, which are highly linked to the vision and the strategies of an organization.
Keywords: data warehouse, meta-model, model-driven architecture, transformation, UML
Procedia PDF Downloads 164
24547 Cost-Effective Materials for Hydrocarbons Recovery from Produced Water
Authors: Fahd I. Alghunaimi, Hind S. Dossary, Norah W. Aljuryyed, Tawfik A. Saleh
Abstract:
Produced water (PW) is one of the largest by-volume waste streams and one of the most challenging effluents in the oil and gas industry, owing to the variation of contaminants that make up PW. Several materials have been developed, studied, and implemented to remove hydrocarbons from PW, and adsorption is one of the most effective ways of removing oil from PW. In this work, three new and cost-effective hydrophobic adsorbent materials based on 9-octadecenoic acid grafted graphene (POG) were synthesized for oil/water separation. Graphene derived from graphite was modified with 9-octadecenoic acid to yield 9-octadecenoic acid grafted graphene (OG). The newly synthesized materials, called POG25, POG50, and POG75, were characterized using N₂-physisorption (BET) and Fourier transform infrared (FTIR) spectroscopy. The BET surface area of POG75 was the highest at 288 m²/g, whereas POG50 measured 225 m²/g and POG25 was the lowest at 79 m²/g. These three materials were also evaluated for their oil-water separation efficiency using a model mixture, which demonstrated that POG75 has the highest oil removal efficiency and the fastest adsorption rate (Figure 1). POG75 was regenerated, and its performance was verified again with only a slightly reduced adsorption rate compared to the fresh material. The mixtures used in the performance tests were prepared by mixing nonpolar organic liquids such as heptane, dodecane, or hexadecane into colored water. In general, the new materials showed fast uptake of a certain quantity of oil due to their highly hydrophobic nature, which repels water, as confirmed by a contact angle of approximately 150˚. In addition, a novel superhydrophobic material was synthesized by introducing hydrophobic laurate branches onto the surface of a stainless steel mesh (SSM). This novel mesh could help hold the adsorbent materials in a column to remove oil from PW. Both POG75 and the novel mesh have the potential to remove oil contaminants from produced water, which will provide an opportunity to recover useful components, reduce the environmental impact, and reuse produced water in several applications such as fracturing.
Keywords: graphite to graphene, oleophilic, produced water, separation
Procedia PDF Downloads 124
24546 Secured Embedding of Patient’s Confidential Data in Electrocardiogram Using Chaotic Maps
Authors: Butta Singh
Abstract:
This paper presents a chaotic-map-based approach for secured embedding of a patient's confidential data in the electrocardiogram (ECG) signal. The chaotic map generates predefined locations through the use of selective control parameters. The sample value difference method effectively hides the confidential data in ECG sample pairs at these predefined locations. Evaluation of the proposed method on all 48 records of the MIT-BIH arrhythmia ECG database demonstrates that the embedding does not alter the diagnostic features of the cover ECG. The imperceptibility of the secret data in the stego-ECG is evident through various statistical and clinical performance measures. Statistical metrics comprise the Percentage Root Mean Square Difference (PRD) and Peak Signal to Noise Ratio (PSNR). Further, a comparative analysis between the proposed method and existing approaches was also performed. The results clearly demonstrated the superiority of the proposed method.
Keywords: chaotic maps, ECG steganography, data embedding, electrocardiogram
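A simplified sketch of the embedding idea, assuming a logistic map for location generation and bit parity of a sample-pair difference as the hiding rule; the paper's actual sample value difference scheme and parameters may differ.

```python
# Chaotic-map-driven embedding in an ECG-like signal: the logistic map picks
# sample-pair locations, and a bit is hidden in the parity of the rounded
# difference of each pair. All parameters are illustrative.
import numpy as np

def logistic_locations(n_bits, signal_len, x0=0.7, r=3.99, skip=100):
    """Generate n_bits non-overlapping pair locations from the logistic map."""
    x, used, locs = x0, set(), []
    for _ in range(skip):                       # discard transient
        x = r * x * (1 - x)
    while len(locs) < n_bits:
        x = r * x * (1 - x)
        idx = int(x * (signal_len - 1))         # map chaos value to an index
        if idx + 1 < signal_len and idx not in used and (idx + 1) not in used:
            used.update((idx, idx + 1))
            locs.append(idx)
    return locs

def embed(signal, bits, locs):
    stego = signal.copy()
    for bit, i in zip(bits, locs):
        diff = int(round(stego[i + 1] - stego[i]))
        if diff % 2 != bit:                     # adjust pair so parity encodes the bit
            stego[i + 1] += 1
    return stego

def extract(stego, locs):
    return [int(round(stego[i + 1] - stego[i])) % 2 for i in locs]

ecg = np.cumsum(np.random.randn(2000))          # stand-in for an ECG record
bits = [1, 0, 1, 1, 0, 0, 1, 0]
locs = logistic_locations(len(bits), len(ecg))
stego = embed(ecg, bits, locs)
assert extract(stego, locs) == bits
```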
Procedia PDF Downloads 201
24545 The Relationships among Self-Efficacy, Critical Thinking and Communication Skills Ability in Oncology Nurses for Cancer Immunotherapy in Taiwan
Authors: Yun-Hsiang Lee
Abstract:
Cancer is the main cause of death worldwide. With advances in medical technology, immunotherapy, a newly developed advanced treatment, is currently a crucial cancer treatment option. For better quality cancer care, communication ability and critical thinking play a central role in clinical oncology settings. However, few studies have explored the impact of communication skills on immunotherapy-related issues and their related factors. This study aimed to (i) explore the current status of communication skill ability for immunotherapy-related issues, self-efficacy for immunotherapy-related care, and critical thinking ability; and (ii) identify factors related to communication skill ability. This is a cross-sectional study. Oncology nurses were recruited from the Taiwan Oncology Nursing Society and came from hospitals distributed across the four major geographic regions (North, Center, South, East) of Taiwan. A total of 123 oncology nurses participated in this study. A set of questionnaires was used for collecting data. Communication skill ability for immunotherapy issues, self-efficacy for immunotherapy-related care, critical thinking ability, and background information were assessed in this survey. An independent t-test and one-way ANOVA were used to examine differences in communication skill ability based on nurses having completed oncology courses (yes vs. no) and years of experience (< 1 year, 1-3 years, and > 3 years), respectively. Spearman correlation was conducted to understand the relationships between communication skill ability and the other variables. Among the 123 oncology nurses in the current study, the majority were female (98.4%), and most were employed at a hospital in the North (46.8%) of Taiwan. Most possessed a university degree (78.9%) and had at least 3 years of prior work experience (71.7%). Forty-three of the oncology nurses indicated in the survey that they had not received oncology nursing-related training. The oncology nurses reported moderate to high levels of communication skill ability for immunotherapy issues (mean=4.24, SD=0.7, range 1-5). Nurses reported moderate levels of self-efficacy for immunotherapy-related care (mean=5.20, SD=1.98, range 0-10) and also had high levels of critical thinking ability (mean=4.76, SD=0.60, range 1-6). Oncology nurses who had received oncology training courses had significantly better communication skill ability than those who had not. Oncology nurses with more work experience (1-3 years or > 3 years) had significantly higher levels of communication skill ability for immunotherapy-related issues than those with less work experience (< 1 year). When nurses reported better communication skill ability, they also had significantly better self-efficacy (r=.42, p<.01) and better critical thinking ability (r=.47, p<.01). Taken together, courses designed to improve communication skill ability for immunotherapy-related issues can make a significant impact in clinical settings. Communication skill ability is the major factor associated with self-efficacy and critical thinking for oncology nurses, especially those with less work experience (< 1 year).
Keywords: communication skills, critical thinking, immunotherapy, oncology nurses, self-efficacy
Procedia PDF Downloads 112
24544 Detection Efficient Enterprises via Data Envelopment Analysis
Authors: S. Turkan
Abstract:
In this paper, Turkey's Top 500 Industrial Enterprises data for 2014 were analyzed by data envelopment analysis. Data envelopment analysis is used to detect efficient decision-making units such as universities, hospitals, and schools by using inputs and outputs. The decision-making units in this study are enterprises. To detect efficient enterprises, some financial ratios are determined as inputs and outputs. For this reason, financial indicators related to the productivity of enterprises are considered. The efficient foreign weighted owned capital enterprises are detected via the super-efficiency model. According to the results, Mercedes-Benz is the most efficient foreign weighted owned capital enterprise in Turkey.
Keywords: data envelopment analysis, super efficiency, logistic regression, financial ratios
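A sketch of an input-oriented CCR super-efficiency model solved as a linear program with SciPy; the inputs and outputs below are invented placeholders, not the Turkey Top 500 financial ratios, and the paper's exact DEA formulation may differ.

```python
# Super-efficiency DEA: each DMU is evaluated against a frontier built from
# all other DMUs (its own column is excluded), so efficient units can score
# above 1. Made-up data for illustration only.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (enterprises), columns = inputs / outputs (e.g. financial ratios)
X = np.array([[20.0, 300], [30.0, 200], [40.0, 100], [20.0, 200]])  # inputs
Y = np.array([[1000.0], [1200.0], [800.0], [900.0]])                # outputs

def super_efficiency(X, Y, k):
    """Input-oriented CCR super-efficiency score of DMU k."""
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    others = [j for j in range(n) if j != k]
    # decision variables: [theta, lambda_j for j != k], minimize theta
    c = np.zeros(1 + len(others))
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(m):   # sum_j lambda_j * x_ij <= theta * x_ik
        A_ub.append([-X[k, i]] + [X[j, i] for j in others])
        b_ub.append(0.0)
    for r in range(s):   # sum_j lambda_j * y_rj >= y_rk
        A_ub.append([0.0] + [-Y[j, r] for j in others])
        b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + len(others)), method="highs")
    return res.x[0] if res.success else float("nan")   # nan if infeasible

for k in range(X.shape[0]):
    print(f"DMU {k}: super-efficiency = {super_efficiency(X, Y, k):.3f}")
```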
Procedia PDF Downloads 332
24543 Intelligent Process Data Mining for Monitoring for Fault-Free Operation of Industrial Processes
Authors: Hyun-Woo Cho
Abstract:
Real-time fault monitoring and diagnosis of large-scale production processes is helpful and necessary in order to operate industrial processes safely and efficiently while producing good final product quality. Unusual and abnormal process events, such as malfunctions or breakdowns, may have a serious impact on the process. This work utilizes process measurement data obtained on an on-line basis for the safe and fault-free operation of industrial processes. To this end, the proposed intelligent process data monitoring framework was evaluated on a simulation process. The monitoring scheme extracts the fault pattern in a reduced space for reliable data representation. Moreover, this work shows the results of using linear and nonlinear techniques for the monitoring purpose. The nonlinear technique produced more reliable monitoring results and outperformed the linear methods. The adoption of the qualitative monitoring model helps to reduce the sensitivity of the fault pattern to noise.
Keywords: process data, data mining, process operation, real-time monitoring
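As an illustration of the linear case only, here is a minimal PCA-based monitoring sketch using Hotelling's T² and SPE statistics on simulated data; the paper's nonlinear technique and actual control limits are not reproduced here.

```python
# Linear process monitoring: project data into a reduced PCA space learned on
# fault-free data, then flag samples whose T^2 (inside the model) or SPE
# (outside the model) exceeds an empirical limit. Data are simulated.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 10))            # fault-free operating data
X_new = rng.normal(size=(50, 10))
X_new[25:] += 3.0                                # simulated process fault

mean, std = X_normal.mean(0), X_normal.std(0)
Z = (X_normal - mean) / std                      # scale using normal data only
pca = PCA(n_components=4).fit(Z)

def monitor(X):
    Zx = (X - mean) / std
    T = pca.transform(Zx)                        # scores in the reduced space
    t2 = np.sum(T**2 / pca.explained_variance_, axis=1)   # Hotelling's T^2
    residual = Zx - pca.inverse_transform(T)               # part outside the model
    spe = np.sum(residual**2, axis=1)                       # SPE / Q statistic
    return t2, spe

# Control limits taken empirically as the 99th percentile on normal data.
t2_lim, spe_lim = (np.percentile(s, 99) for s in monitor(Z))
t2, spe = monitor(X_new)
alarms = (t2 > t2_lim) | (spe > spe_lim)
print("samples flagged as faulty:", np.where(alarms)[0])
```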
Procedia PDF Downloads 645
24542 Statistically Accurate Synthetic Data Generation for Enhanced Traffic Predictive Modeling Using Generative Adversarial Networks and Long Short-Term Memory
Authors: Srinivas Peri, Siva Abhishek Sirivella, Tejaswini Kallakuri, Uzair Ahmad
Abstract:
Effective traffic management and infrastructure planning are crucial for the development of smart cities and intelligent transportation systems. This study addresses the challenge of data scarcity by generating realistic synthetic traffic data using the PeMS-Bay dataset, improving the accuracy and reliability of predictive modeling. Advanced synthetic data generation techniques, including TimeGAN, GaussianCopula, and PAR Synthesizer, are employed to produce synthetic data that replicates the statistical and structural characteristics of real-world traffic. Future integration of Spatial-Temporal Generative Adversarial Networks (ST-GAN) is planned to capture both spatial and temporal correlations, further improving data quality and realism. The performance of each synthetic data generation model is evaluated against real-world data to identify the best models for accurately replicating traffic patterns. Long Short-Term Memory (LSTM) networks are utilized to model and predict complex temporal dependencies within traffic patterns. This comprehensive approach aims to pinpoint areas with low vehicle counts, uncover underlying traffic issues, and inform targeted infrastructure interventions. By combining GAN-based synthetic data generation with LSTM-based traffic modeling, this study supports data-driven decision-making that enhances urban mobility, safety, and the overall efficiency of city planning initiatives.Keywords: GAN, long short-term memory, synthetic data generation, traffic management
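A minimal sketch of the LSTM forecasting stage on a simulated traffic-count series; the GAN-generated synthetic data and the PeMS-Bay specifics are not reproduced here, and the network size and window length are illustrative choices.

```python
# Sliding windows over a traffic-count series feed a small LSTM that predicts
# the next interval. The series below is simulated with a daily pattern
# (288 five-minute intervals per day).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
t = np.arange(2000)
series = 50 + 20 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 2, t.size)

def windows(x, lookback=12):
    X = np.stack([x[i:i + lookback] for i in range(len(x) - lookback)])
    y = x[lookback:]
    return X[..., None], y            # shape (samples, timesteps, features)

X, y = windows(series)
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(12, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=5, batch_size=64, verbose=0)

mae = np.mean(np.abs(model.predict(X[split:], verbose=0).ravel() - y[split:]))
print(f"test MAE (vehicles per interval): {mae:.2f}")
```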
Procedia PDF Downloads 32
24541 A Machine Learning Approach for the Leakage Classification in the Hydraulic Final Test
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
The widespread use of machine learning applications in production is significantly accelerated by improved computing power and increasing data availability. Predictive quality enables the assurance of product quality by using machine learning models as a basis for decisions on test results. The use of real Bosch production data based on geometric gauge blocks from machining, mating data from assembly and hydraulic measurement data from final testing of directional valves is a promising approach to classifying the quality characteristics of workpieces.Keywords: machine learning, classification, predictive quality, hydraulics, supervised learning
Procedia PDF Downloads 217
24540 Analysis of Cyber Activities of Potential Business Customers Using Neo4j Graph Databases
Authors: Suglo Tohari Luri
Abstract:
Data analysis is an important aspect of business performance. With the application of artificial intelligence within databases, selecting a suitable database engine for an application design is also very crucial for business data analysis. The application of business intelligence (BI) software with graph databases such as Neo4j has proved highly effective in terms of customer data analysis. Yet what remains of great concern is the fact that not all business organizations have Neo4j business intelligence software applications to implement for customer data analysis. Further, those with the BI software often lack personnel with the requisite expertise to use it effectively with the Neo4j database. The purpose of this research is to demonstrate how Neo4j program code alone can be applied for the analysis of e-commerce website customer visits. As the Neo4j database engine is optimized for handling and managing data relationships, with the capability of building high-performance and scalable systems to handle connected data nodes, it will ensure that business owners who advertise their products on websites using Neo4j as a database are able to determine the number of visitors and to know which products are visited at routine intervals for the necessary decision making. It will also help in knowing the best customer segments in relation to specific goods so as to place more emphasis on their advertisement on the said websites.
Keywords: data, engine, intelligence, customer, neo4j, database
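A sketch of the kind of Cypher queries this approach relies on, issued directly from Python with the official neo4j driver; the connection details and the (:Customer)-[:VISITED]->(:Product) schema are assumptions made for illustration.

```python
# Count visits per product and find the best customer segment for one product
# using plain Cypher, without a separate BI tool. Schema and credentials are
# hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

VISITS_PER_PRODUCT = """
MATCH (:Customer)-[v:VISITED]->(p:Product)
RETURN p.name AS product, count(v) AS visits
ORDER BY visits DESC
"""

TOP_SEGMENT_FOR_PRODUCT = """
MATCH (c:Customer)-[v:VISITED]->(p:Product {name: $product})
RETURN c.segment AS segment, count(v) AS visits
ORDER BY visits DESC LIMIT 1
"""

with driver.session() as session:
    for record in session.run(VISITS_PER_PRODUCT):
        print(record["product"], record["visits"])
    best = session.run(TOP_SEGMENT_FOR_PRODUCT, product="laptop").single()
    if best:
        print("best segment for 'laptop':", best["segment"])

driver.close()
```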
Procedia PDF Downloads 196
24539 Decision Making System for Clinical Datasets
Authors: P. Bharathiraja
Abstract:
Computer-aided decision making systems are used to enhance the diagnosis and prognosis of diseases and also to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning in decision making systems. A fuzzy rule-based inference technique can be used for classification in order to incorporate human reasoning in the decision making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease data set from the University of California at Irvine (UCI) Machine Learning Repository. The data were split into training and testing data using a hold-out approach. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule-based classification approaches. Furthermore, it is observed that the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and also resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making.
Keywords: decision making, data mining, normalization, fuzzy rule, classification
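A short sketch of the preprocessing steps described above (mean/mode imputation, min-max normalization, equal-width fuzzy partitioning); the column names are illustrative and not the UCI heart-disease schema.

```python
# Preprocessing sketch: impute, normalize to [0, 1], then derive three
# equal-width triangular fuzzy sets (low / medium / high) for one attribute.
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [63, 37, None, 56, 41],
                   "chest_pain": ["typical", None, "atypical", "typical", "typical"]})

# 1. Imputation: mean for numeric attributes, mode for categorical ones.
df["age"] = df["age"].fillna(df["age"].mean())
df["chest_pain"] = df["chest_pain"].fillna(df["chest_pain"].mode()[0])

# 2. Min-max normalization to [0, 1].
df["age_norm"] = (df["age"] - df["age"].min()) / (df["age"].max() - df["age"].min())

# 3. Equal-width partitioning into triangular fuzzy intervals.
def triangular(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0)

centers = np.linspace(0, 1, 3)                   # equal-width centers over [0, 1]
width = centers[1] - centers[0]
for label, c in zip(["low", "medium", "high"], centers):
    df[f"age_{label}"] = triangular(df["age_norm"], c - width, c, c + width)

print(df.round(2))
```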
Procedia PDF Downloads 522
24538 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models
Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales
Abstract:
The primary approach for estimating bridge deterioration uses Markov-chain models and regression analysis. Traditional Markov models have problems in estimating the required transition probabilities when a small sample size is used. Often, reliable bridge data have not been collected over long periods; thus, large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach both provide similar estimates; however, the former method provides results that are more conservative. That is, the Small Data Method provided slightly lower bridge condition ratings than the traditional approach. Considering that bridges are critical infrastructures, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration. Condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results obtained were very similar to those obtained when using Markov chains; however, it is desirable to use more data for better results.
Keywords: concrete bridges, deterioration, Markov chains, probability matrix
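For reference, a minimal sketch of the Markov-chain side of the comparison: estimating a transition matrix from inspection pairs and projecting the expected condition rating forward. The inspection records and rating scale below are invented, not the study's bridge data.

```python
# Estimate a condition-rating transition matrix from observed year-to-year
# pairs, then project the expected rating of a new bridge over time.
import numpy as np

states = [9, 8, 7, 6, 5]                      # condition ratings, best to worst
idx = {s: i for i, s in enumerate(states)}

# (rating at this inspection, rating at next inspection) for a bridge group
observed = [(9, 9), (9, 8), (8, 8), (8, 8), (8, 7), (7, 7), (7, 6), (6, 6), (6, 5)]

counts = np.zeros((len(states), len(states)))
for a, b in observed:
    counts[idx[a], idx[b]] += 1

# Row-normalize to transition probabilities; rows with no data stay put.
row_sums = counts.sum(1, keepdims=True)
P = np.where(row_sums > 0, counts / np.maximum(row_sums, 1), np.eye(len(states)))

dist = np.array([1.0, 0, 0, 0, 0])            # a new bridge starts at rating 9
for year in (5, 10, 20):
    expected = np.linalg.matrix_power(P.T, year) @ dist
    print(f"year {year:2d}: expected rating = {np.dot(expected, states):.2f}")
```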
Procedia PDF Downloads 338
24537 Validation of Visibility Data from Road Weather Information Systems by Comparing Three Data Resources: Case Study in Ohio
Authors: Fan Ye
Abstract:
Adverse weather conditions, particularly those with low visibility, are critical to the driving task. However, the direct relationship between visibility distance and traffic flow/roadway safety is uncertain due to the limited availability of visibility data. The recent growth in deployment of Road Weather Information Systems (RWIS) makes segment-specific visibility information available, which can be integrated with other Intelligent Transportation Systems, such as automated warning systems and variable speed limits, to improve mobility and safety. Before applying RWIS visibility measurements in traffic studies and operations, it is critical to validate the data. Therefore, an attempt was made in this paper to examine the validity and viability of RWIS visibility data by comparing visibility measurements among RWIS, airport weather stations, and weather information recorded by police in crash reports, based on Ohio data. The results indicated that RWIS visibility measurements were significantly different from airport visibility data in Ohio, but no conclusion regarding the reliability of RWIS visibility could be drawn, given the absence of verified ground truth in the comparisons. It is suggested that more objective methods are needed to validate RWIS visibility measurements, such as continuous in-field measurements associated with various weather events using calibrated visibility sensors.
Keywords: RWIS, visibility distance, low visibility, adverse weather
Procedia PDF Downloads 254
24536 Design and Simulation of All Optical Fiber to the Home Network
Authors: Rahul Malhotra
Abstract:
Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that have emerged in recent years is the Passive Optical Network (PON). This paper demonstrates the simultaneous delivery of triple-play services (data, voice and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate.
Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT
Procedia PDF Downloads 561
24535 Troubleshooting Petroleum Equipment Based on Wireless Sensors Based on Bayesian Algorithm
Authors: Vahid Bayrami Rad
Abstract:
In this research, common methods and techniques have been investigated with a focus on intelligent fault finding and monitoring systems in the oil industry. Remote and intelligent control methods are considered a necessity for implementing various operations in the oil industry, and benefiting from the knowledge extracted, with the help of data mining algorithms, from the countless data generated is an effective way to speed up monitoring and troubleshooting operations in today's big oil companies. Therefore, by comparing data mining algorithms and examining their efficiency, structure, and response under different conditions, the proposed Bayesian algorithm, which uses data clustering and analysis together with data evaluation based on a colored Petri net, provides an applicable and dynamic model from the point of view of reliability and response time. By using this method, it is possible to achieve a dynamic and consistent model of the remote control system, prevent the occurrence of leakage in oil pipelines and refineries, and reduce costs and human and financial errors. The statistical data obtained from the evaluation process show that the proposed method increases reliability, availability, and speed compared to previous methods.
Keywords: wireless sensors, petroleum equipment troubleshooting, Bayesian algorithm, colored Petri net, rapid miner, data mining-reliability
Procedia PDF Downloads 72
24534 Wage Differentiation Patterns of Households Revisited for Turkey in Same Industry Employment: A Pseudo-Panel Approach
Authors: Yasin Kutuk, Bengi Yanik Ilhan
Abstract:
Previous studies investigated wage differentiation among regions in Turkey between couples who work in the same industry and those who work in different industries, using models appropriate for cross-sectional data. However, since no panel data are available for this investigation in Turkey, pseudo panels built from the repeated cross-section data sets of the Household Labor Force Surveys 2004-2014 are employed, opening a new way to examine wage differentiation patterns. For this purpose, household heads are separated into groups with respect to their household composition. Membership in these groups is assumed to be fixed over time, being defined by characteristics such as age group, education, gender, and NUTS1 region (12 regions). The average behavior of each group can then be tracked over time, just as in panel data. Estimates using the pseudo-panel data would be consistent with estimates using genuine panel data on individuals if the samples are representative of a population with fixed composition and characteristics. Controlling for socioeconomic factors, the wage differentiation of household income is affected by the social, cultural, and economic changes that followed the global economic crisis that emerged in the US. It is also revealed whether wage differentiation is changing across birth cohorts.
Keywords: wage income, same industry, pseudo panel, panel data econometrics
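A sketch of pseudo-panel construction from repeated cross-sections with pandas; the cohort definition and variable names are illustrative, not the actual HLFS variables.

```python
# Build a pseudo panel: group household heads into time-invariant cohorts
# (5-year birth band x gender x NUTS1 region) and treat cohort-year cell
# means as panel observations. Data are simulated placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
surveys = pd.DataFrame({
    "year": rng.choice(np.arange(2004, 2015), n),              # survey wave
    "birth_year": rng.choice(np.arange(1950, 1990), n),
    "gender": rng.choice(["m", "f"], n),
    "region": rng.choice([f"TR{i}" for i in range(1, 13)], n),  # NUTS1
    "log_wage": rng.normal(7.5, 0.6, n),
    "same_industry_couple": rng.choice([0, 1], n),
})

# Cohort membership cannot change between waves.
surveys["cohort"] = (((surveys["birth_year"] // 5) * 5).astype(str)
                     + "_" + surveys["gender"] + "_" + surveys["region"])

pseudo_panel = (surveys
                .groupby(["cohort", "year"])
                .agg(mean_log_wage=("log_wage", "mean"),
                     share_same_industry=("same_industry_couple", "mean"),
                     cell_size=("log_wage", "size"))
                .reset_index())

# Small cells give noisy cohort means, so they are usually dropped.
pseudo_panel = pseudo_panel[pseudo_panel["cell_size"] >= 30]
print(pseudo_panel.head())
```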
Procedia PDF Downloads 400
24533 Secure Cryptographic Operations on SIM Card for Mobile Financial Services
Authors: Kerem Ok, Serafettin Senturk, Serdar Aktas, Cem Cevikbas
Abstract:
Mobile technology is very popular nowadays and it provides a digital world where users can experience many value-added services. Service Providers are also eager to offer diverse value-added services to users such as digital identity, mobile financial services and so on. In this context, the security of data storage in smartphones and the security of communication between the smartphone and service provider are critical for the success of these services. In order to provide the required security functions, the SIM card is one acceptable alternative. Since SIM cards include a Secure Element, they are able to store sensitive data, create cryptographically secure keys, encrypt and decrypt data. In this paper, we design and implement a SIM and a smartphone framework that uses a SIM card for secure key generation, key storage, data encryption, data decryption and digital signing for mobile financial services. Our frameworks show that the SIM card can be used as a controlled Secure Element to provide required security functions for popular e-services such as mobile financial services.Keywords: SIM card, mobile financial services, cryptography, secure data storage
Procedia PDF Downloads 316
24532 Synthetic Data-Driven Prediction Using GANs and LSTMs for Smart Traffic Management
Authors: Srinivas Peri, Siva Abhishek Sirivella, Tejaswini Kallakuri, Uzair Ahmad
Abstract:
Smart cities and intelligent transportation systems rely heavily on effective traffic management and infrastructure planning. This research tackles the data scarcity challenge by generating realistic synthetic traffic data from the PeMS-Bay dataset, enhancing predictive modeling accuracy and reliability. Advanced techniques like TimeGAN and GaussianCopula are utilized to create synthetic data that mimics the statistical and structural characteristics of real-world traffic. The future integration of Spatial-Temporal Generative Adversarial Networks (ST-GAN) is anticipated to capture both spatial and temporal correlations, further improving data quality and realism. Each synthetic data generation model's performance is evaluated against real-world data to identify the most effective models for accurately replicating traffic patterns. Long Short-Term Memory (LSTM) networks are employed to model and predict complex temporal dependencies within traffic patterns. This holistic approach aims to identify areas with low vehicle counts, reveal underlying traffic issues, and guide targeted infrastructure interventions. By combining GAN-based synthetic data generation with LSTM-based traffic modeling, this study facilitates data-driven decision-making that improves urban mobility, safety, and the overall efficiency of city planning initiatives.
Keywords: GAN, long short-term memory (LSTM), synthetic data generation, traffic management
Procedia PDF Downloads 19
24531 Study on the Relationship between the Urban Geography and Urban Agglomeration to the Effects of Carbon Emissions
Authors: Peng-Shao Chen, Yen-Jong Chen
Abstract:
In recent years, global warming, the dramatic change in energy prices, and the exhaustion of natural resources have shown that energy-related topics cannot be ignored. Although the relationship between cities and CO₂ emissions has been extensively studied in recent years, little attention has been paid to differences in the geographical location of cities. Yet the geographical climate has a great impact on lifestyle from city to city, influencing, for example, the type of buildings and the major industry of the city. Therefore, the paper investigates empirically the effects of various urban factors on CO₂ emissions, taking into consideration the different geographic and climatic zones in which cities are located. Using a regression model and a dataset of urban agglomerations in East Asian cities with populations over one million, covering the three years 2005, 2010, and 2015, the findings suggest that the impact of urban factors on CO₂ emissions varies with the latitude of the cities. Surprisingly, all kinds of urban factors, including urban population, the share of GDP in the service industry, per capita income, and others, have different levels of impact on cities located in the tropical climate zone and the temperate climate zone. The study analyzes the impact of different urban factors on CO₂ emissions in urban areas across different geographical climate zones. These findings will be helpful for the formulation of relevant policies by urban planners and policy makers in different regions.
Keywords: carbon emissions, urban agglomeration, urban factor, urban geography
Procedia PDF Downloads 268
24530 Machine Learning Facing Behavioral Noise Problem in an Imbalanced Data Using One Side Behavioral Noise Reduction: Application to a Fraud Detection
Authors: Salma El Hajjami, Jamal Malki, Alain Bouju, Mohammed Berrada
Abstract:
With the expansion of machine learning and data mining in the context of Big Data analytics, a common problem that affects data is class imbalance. It refers to an imbalanced distribution of instances belonging to each class. This problem is present in many real-world applications such as fraud detection, network intrusion detection, and medical diagnostics. In these cases, data instances labeled negatively are significantly more numerous than the instances labeled positively. When this difference is too large, the learning system may face difficulty when tackling this problem, since it is initially designed to work in relatively balanced class distribution scenarios. Another important problem, which usually accompanies these imbalanced data, is the overlap of instances between the two classes, commonly referred to as noise or overlapping data. In this article, we propose an approach called One Side Behavioral Noise Reduction (OSBNR). This approach presents a way to deal with the problem of class imbalance in the presence of a high noise level. OSBNR is based on two steps. Firstly, a cluster analysis is applied to group similar instances from the minority class into several behavior clusters. Secondly, we select and eliminate the instances of the majority class, considered as behavioral noise, which overlap with behavior clusters of the minority class. The results of experiments carried out on a representative public dataset confirm that the proposed approach is effective for the treatment of class imbalance in the presence of noise.
Keywords: machine learning, imbalanced data, data mining, big data
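A toy sketch of the two OSBNR steps; the clustering algorithm (k-means) and the overlap criterion (distance to the nearest minority cluster center below that cluster's radius) are simplifying assumptions, not necessarily the paper's exact choices.

```python
# Step 1: cluster the minority class into behaviour clusters.
# Step 2: drop majority-class instances that fall inside a minority cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           n_informative=4, flip_y=0.05, random_state=0)
X_min, X_maj = X[y == 1], X[y == 0]

# Step 1: behaviour clusters of the minority class.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_min)
centers = km.cluster_centers_
# Radius of each cluster = max distance of its members to the center.
radii = np.array([
    np.linalg.norm(X_min[km.labels_ == k] - centers[k], axis=1).max()
    for k in range(len(centers))
])

# Step 2: remove majority instances overlapping any minority cluster.
d = np.linalg.norm(X_maj[:, None, :] - centers[None, :, :], axis=2)  # (n_maj, k)
overlapping = (d < radii).any(axis=1)
X_maj_clean = X_maj[~overlapping]

print(f"majority instances removed as behavioural noise: {overlapping.sum()}")
print(f"cleaned training set: {len(X_min)} minority / {len(X_maj_clean)} majority")
```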
Procedia PDF Downloads 134
24529 Automatic Detection of Traffic Stop Locations Using GPS Data
Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell
Abstract:
Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to help identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections or highway congestion. The Delaunay triangulation is used to perform this assessment in the clustering phase. While most existing clustering algorithms need assumptions about the data distribution, the effectiveness of the Delaunay triangulation relies on triangulating geographical data points without such assumptions. Our proposed method starts by cleaning noise from the data and normalizing it. Next, the framework identifies stoppage points by calculating the traveled distance. The last step is to use clustering to form groups of waypoints for signalized traffic and highway congestion. A binary classifier is then applied to distinguish highway congestion from signalized stop points, using the length of the cluster to identify congestion. The proposed framework shows high accuracy, identifying the stop positions and congestion points correctly in around 99.2% of trials. We show that it is possible, using limited GPS data, to distinguish between the two categories with high accuracy.
Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data
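A sketch of the clustering phase described above: a Delaunay triangulation over the stopped waypoints, removal of long edges, connected components as stop clusters, and an extent-based congestion/signal label. All thresholds and coordinates are illustrative, not the study's calibrated values.

```python
# Delaunay-based clustering of stopped GPS waypoints, then a simple
# cluster-length rule to separate highway congestion from signalized stops.
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(3)
signal = rng.normal([0, 0], 10, size=(40, 2))               # tight stop cluster (m)
congestion = np.column_stack([np.linspace(500, 1500, 60),   # elongated queue
                              rng.normal(0, 5, 60)])
points = np.vstack([signal, congestion])

tri = Delaunay(points)
edges = set()
for simplex in tri.simplices:                                # unique triangulation edges
    for a, b in [(0, 1), (1, 2), (0, 2)]:
        edges.add(tuple(sorted((simplex[a], simplex[b]))))
edges = np.array(list(edges))
lengths = np.linalg.norm(points[edges[:, 0]] - points[edges[:, 1]], axis=1)

keep = lengths < 50.0                                        # drop long edges (m)
adj = coo_matrix((np.ones(keep.sum()), (edges[keep, 0], edges[keep, 1])),
                 shape=(len(points), len(points)))
n_clusters, labels = connected_components(adj, directed=False)

for c in range(n_clusters):
    members = points[labels == c]
    extent = (members.max(0) - members.min(0)).max()
    kind = "highway congestion" if extent > 200 else "signalized stop"
    print(f"cluster {c}: {len(members)} waypoints, extent {extent:.0f} m -> {kind}")
```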
Procedia PDF Downloads 280
24528 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data
Authors: Gayathri Nagarajan, L. D. Dhinesh Babu
Abstract:
Health care is one of the prominent industries that generate voluminous data, thereby creating the need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. In comparison with other applications, accuracy and fast processing are of higher importance for health care applications, as they are directly related to human life. Though there are many machine learning techniques and big data solutions used for efficient processing and prediction in health care data, different techniques and different frameworks have proved to be effective for different applications, largely depending on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with traditional frameworks. Unlike other works that focus on a single technique, our work presents a comparison of six different machine learning techniques along with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform, specifically for health care big data, and ii) discuss the results from the experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions. The experimental results show that, for the other machine learning techniques, accuracy is largely dependent on the characteristics of the datasets, whereas gradient boosted trees yield reasonably stable results in terms of accuracy without depending heavily on the dataset characteristics.
Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform
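A sketch of a gradient-boosted-trees pipeline on Spark of the kind described above; the CSV path, column names, and hyperparameters are placeholders rather than the benchmark datasets and settings used in the paper.

```python
# Assemble features, train a GBT classifier with Spark ML, and report the
# misclassification error rate on a held-out split.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("healthcare-gbt").getOrCreate()

df = spark.read.csv("health_records.csv", header=True, inferSchema=True)
feature_cols = [c for c in df.columns if c != "label"]

assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
data = assembler.transform(df).select("features", "label")
train, test = data.randomSplit([0.8, 0.2], seed=42)

gbt = GBTClassifier(labelCol="label", featuresCol="features",
                    maxIter=100, maxDepth=5)
model = gbt.fit(train)

predictions = model.transform(test)
accuracy = MulticlassClassificationEvaluator(
    labelCol="label", metricName="accuracy").evaluate(predictions)
print(f"misclassification error rate: {1 - accuracy:.3f}")

spark.stop()
```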
Procedia PDF Downloads 244
24527 Identification of Vulnerable Zone Due to Cyclone-Induced Storm Surge in the Exposed Coast of Bangladesh
Authors: Mohiuddin Sakib, Fatin Nihal, Rabeya Akter, Anisul Haque, Munsur Rahman, Wasif-E-Elahi
Abstract:
Surge-generating cyclones are among the deadliest natural disasters, threatening coastal environments and communities worldwide. Due to its geographic location, low-lying alluvial plain, geomorphologic characteristics, and 710 kilometers of exposed coastline, Bangladesh is considered one of the most vulnerable countries for storm surge flooding. The Bay of Bengal possesses the highest potential for creating storm surge inundation of the coastal areas. Bangladesh is the country most exposed to tropical cyclones, with an average of four cyclones striking every year. Frequent cyclone landfalls have made the country one of the worst sufferers in the world from cyclone-induced storm surge flooding and casualties. During the years from 1797 to 2009, Bangladesh was hit by 63 severe cyclones of different magnitudes. Though detailed studies have been done focusing on specific cyclones such as Sidr or Aila, no study has identified the vulnerable areas of the exposed coast based on the strength of cyclones. This study classifies the vulnerable areas of the exposed coast based on storm surge inundation depth and area due to cyclones of varying strengths. Classification of the exposed coast based on hazard-induced cyclonic vulnerability will help decision makers adopt appropriate policies for reducing damage and loss.
Keywords: cyclone, landfall, storm surge, exposed coastline, vulnerability
Procedia PDF Downloads 406
24526 Geospatial Land Suitability Modeling for Biofuel Crop Using AHP
Authors: Naruemon Phongaksorn
Abstract:
Biofuel consumption has increased significantly over the past decade, resulting in increasing demand for agricultural land for biofuel feedstocks. However, biofuel feedstocks already suffer from low productivity owing to agricultural practices that do not consider the suitability of the crop land. This research evaluates land suitability for a biofuel crop, cassava, in Chachoengsao province, Thailand, using GIS-integrated Analytic Hierarchy Processing (AHP), a method that has been widely accepted for land use planning. The objective of this study is to compare the AHP method with the most limiting group of land characteristics method (the classical approach). The reliability of the land evaluation results was tested against crop performance assessed by field investigation in 2015. In addition to the socio-economic land suitability, the expected availability of raw materials for biofuel production to meet the local biofuel demand is also estimated. The results showed that AHP could classify and map the physical land suitability with 10% higher overall accuracy than the classical approach. Chachoengsao province showed high and moderate socio-economic land suitability for cassava. Conditions in Chachoengsao province were also favorable for cassava plantation, as the expected raw material supply matched the ethanol plant capacity of the province. GIS-integrated AHP for biofuel crop land suitability evaluation appears to be a practical way of sustainably meeting biofuel production demand.
Keywords: Analytic Hierarchy Processing (AHP), Cassava, Geographic Information Systems, Land suitability
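A minimal sketch of the AHP weighting step: deriving a priority vector from a pairwise comparison matrix and checking the consistency ratio. The criteria and judgement values below are invented for illustration, not the study's actual factors.

```python
# AHP: principal eigenvector of the pairwise comparison matrix gives the
# criterion weights; the consistency ratio checks the judgements.
import numpy as np

criteria = ["soil", "rainfall", "slope", "distance_to_market"]
# A[i, j] = how much more important criterion i is than j (Saaty 1-9 scale)
A = np.array([[1,   3,   5,   7],
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)           # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index
cr = ci / ri                                   # consistency ratio (< 0.1 is acceptable)

for name, w in zip(criteria, weights):
    print(f"{name:20s} weight = {w:.3f}")
print(f"consistency ratio = {cr:.3f}")

# The weights then combine normalized criterion scores into a suitability score
# for each land unit (here a single illustrative unit).
scores = np.array([0.8, 0.6, 0.9, 0.4])
print(f"suitability score = {weights @ scores:.3f}")
```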
Procedia PDF Downloads 210
24525 Analysis of Sediment Distribution around Karang Sela Coral Reef Using Multibeam Backscatter
Authors: Razak Zakariya, Fazliana Mustajap, Lenny Sharinee Sakai
Abstract:
A sediment map is quite important in the marine environment; the sediment itself contains a wealth of information that can be used for other research. This study was conducted using a Reson T20 multibeam echo sounder on 15 August 2020 at Karang Sela (a coral reef area) at Pulau Bidong. The study aims to identify the sediment type around the coral reef by using bathymetry and backscatter data. Sediment in the study area was collected as ground-truthing data to verify the classification of the seabed, and a dry sieving method with a sieve shaker was used to analyze the sediment samples. PDS 2000 software was used for data acquisition, Qimera QPS version 2.4.5 was used for processing the bathymetry data, and FMGT QPS version 7.10 was used to process the backscatter data. The backscatter data were then analyzed using the maximum likelihood classification tool in ArcGIS version 10.8. The result identified three types of sediment around the coral reef: very coarse sand, coarse sand, and medium sand.
Keywords: sediment type, MBES echo sounder, backscatter, ArcGIS
Procedia PDF Downloads 90
24524 Laboratory Scale Production of Bio-Based Chemicals from Industrial Waste Feedstock in South Africa
Authors: P. Mandree, S. O. Ramchuran, F. O'Brien, L. Sethunya, S. Khumalo
Abstract:
South Africa is identified as one of the five emerging waste management markets, globally. The waste sector in South Africa influences the areas of energy, water and food at an economic and social level. Recently, South African industries have focused on waste valorization and diversification of the current product offerings in an attempt to reduce industrial waste, target a zero waste-to-landfill initiative and recover energy. South Africa has a number of waste streams including industrial and agricultural biomass, municipal waste and marine waste. Large volumes of agricultural and forestry residues, in particular, are generated which provides significant opportunity for production of bio-based fuels and chemicals. This could directly impact development of a rural economy. One of the largest agricultural industries is the sugar industry, which contributes significantly to the country’s economy and job creation. However, the sugar industry is facing challenges due to fluctuations in sugar prices, increasing competition with low-cost global sugar producers, increasing energy and agricultural input costs, lower consumption and aging facilities. This study is aimed at technology development for the production of various bio-based chemicals using feedstock from the sugar refining process. Various indigenous bacteria and yeast species were assessed for the potential to produce platform chemicals in flask studies and at 30 L fermentation scale. Quantitative analysis of targeted bio-based chemicals was performed using either gas chromatography or high pressure liquid chromatography to assess production yields and techno-economics in order to compare performance to current commercial benchmark processes. The study also creates a decision platform for the research direction that is required for strain development using Industrial Synthetic Biology.Keywords: bio-based chemicals, biorefinery, industrial synthetic biology, waste valorization
Procedia PDF Downloads 123
24523 The Belt and Road Initiative in a Spiderweb of Conflicting Great Power Interests: A Geopolitical Analysis
Authors: Csaba Barnabas Horvath
Abstract:
China's Belt and Road initiative is one that can change the face of Eurasia as we know it. Instead of four major, densely populated subcontinents defined by Mackinder (East Asia, Europe, the Indian Subcontinent, and the Middle East) remaining isolated from each other by vast, sparsely populated and underdeveloped regions, Eurasia can at last start to function as a geographic whole, with a sophisticated infrastructure linking its different parts to each other. This initiative, however, happens not in a geopolitical vacuum but in a space of conflicting great power interests. In Central Asia, the influence of China and Russia is in a setting of competition, where, despite a great degree of cooperation between the two powers, issues causing mutual mistrust emerge repeatedly. In Afghanistan, besides the Western military presence, India's efforts can also be added to the picture. In Southeast Asia, a key region for the maritime Silk Road, India's Act East policy meets China's Belt and Road, not always in consensus, not to mention US and Japanese interests in the region. The presentation aims to give an overview of how conflicting great power interests are likely to influence the outcome of the Belt and Road initiative. The findings show that the overall success of the Belt and Road Initiative may not be as smooth as hoped by China, but at the same time, in a limited number of strategically important countries (such as Pakistan, Laos, and Cambodia), this setting is actually a factor favoring China, providing at least a selected number of reliable corridors where the initiative is likely to be successful.
Keywords: belt and road initiative, geostrategic corridors, geopolitics, great power rivalry
Procedia PDF Downloads 225
24522 A Named Data Networking Stack for Contiki-NG-OS
Authors: Sedat Bilgili, Alper K. Demir
Abstract:
The Internet has become the dominant communication infrastructure, with continuing growth in home, medical, health, smart city, and industrial automation applications. The Internet of Things (IoT) is an emerging technology that enables such applications in our lives. Moreover, Named Data Networking (NDN) is emerging as a Future Internet architecture that fits the communication needs of IoT networks. The aim of this study is to provide an NDN protocol stack implementation running on the Contiki operating system (OS). Contiki is an OS developed for constrained IoT devices. In this study, an NDN protocol stack that can work on top of the IEEE 802.15.4 link and physical layers has been developed and presented.
Keywords: internet of things (IoT), named-data, named data networking (NDN), operating system
Procedia PDF Downloads 176