Search results for: temporal data
24812 Geotechnical Distress Evaluation of a Damaged Structure
Authors: Zulfiqar Ali, Umar Saleem, Muhammad Junaid, Rizwan Tahir
Abstract:
Gulzar Mahal is a heritage site located in the city of Bahawalpur, Pakistan. The site is undergoing degradation, with cracks appearing on the walls, roofs, and floors around the building due to differential settlement. To preserve the integrity of the structure, a geotechnical distress evaluation was carried out to identify the causal factors and recommend remediation measures. The research involved the characterization of the problematic soil and analysis of the observed distress with respect to its geotechnical properties. Conventional laboratory and field tests were used in conjunction with unconventional techniques such as Electrical Resistivity Tomography (ERT) and Finite Element Analysis (FEA). The temporal, geophysical, and geotechnical evaluations concluded that the foundation soil has historically been subjected to changes in land use, poor drainage patterns, overloading, and fluctuations in the groundwater table, all contributing to the differential settlements manifesting as visible shear cracks across the length and breadth of the building.
Keywords: differential settlement, distress evaluation, finite element analysis, Gulzar Mahal
Procedia PDF Downloads 127
24811 Stream Channel Changes in Balingara River, Sulawesi Tengah
Authors: Muhardiyan Erawan, Zaenal Mutaqin
Abstract:
Balingara River is one of the gravel-bed rivers in Indonesia. Gravel-bed rivers are easily deformed in a relatively short time due to several variables: climate (rainfall), river discharge, topography, rock type, and land cover. To determine stream channel changes in the Balingara River, Landsat 7 and 8 imagery was used and analyzed planimetrically (in two dimensions). The parameters used to determine changes in the stream channel are the sinuosity ratio, the Brice Index, and the extent of erosion and deposition. Changes in the stream channel were related to changes in land cover and examined through a descriptive spatial and temporal analysis. Channel reaches with a low gradient in the upstream and middle parts of the watershed, where the bed material is gravel, changed more readily than other locations. Changes in the areas of erosion and deposition influence land cover change.
Keywords: Brice Index, erosion, deposition, gravel-bed, land cover change, sinuosity ratio, stream channel change
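For readers unfamiliar with the first of these parameters, the sketch below shows how a sinuosity ratio is computed; the reach lengths are hypothetical values, not measurements from the Balingara River.

```python
# Minimal sketch of the sinuosity ratio used to classify channel change.
# The reach lengths are hypothetical, for illustration only.

def sinuosity_ratio(channel_length_m: float, valley_length_m: float) -> float:
    """Sinuosity = along-channel length / straight-line (valley) length."""
    return channel_length_m / valley_length_m

# Example: a reach digitized from two Landsat scenes.
before = sinuosity_ratio(channel_length_m=12_400, valley_length_m=9_100)
after = sinuosity_ratio(channel_length_m=13_050, valley_length_m=9_100)
print(f"sinuosity before: {before:.2f}, after: {after:.2f}")
# A ratio above ~1.5 is commonly taken to indicate a meandering channel.
```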
Procedia PDF Downloads 328
24810 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators
Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros
Abstract:
Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness spends almost as much time on data quality processes, while a data project without data quality awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because the expectations differ depending on the purpose of each data project. In particular, a big data project may involve many datasets and stakeholders, and it takes a long time to discuss and define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we identified indicators that can be measured directly on the data within datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required for data preparation and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) the developed set of useful indicators and measurements contained ten indicators; (2) for the data quality dimensions based on statistical characteristics, the ten indicators could be reduced to four dimensions; and (3) the developed composite indicator, the SDQI, can describe the overall quality of each dataset and can separate datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall description of data quality within datasets and a meaningful composition. The SDQI can be used to assess all data in a data project, for effort estimation, and for prioritization. The SDQI also works well with agile methods, by using it for assessment in the first sprint; after passing this initial evaluation, more specific data quality indicators can be added in the next sprint.
Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis
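As an illustration of the aggregation chain described above (indicator scores, factor-analysis dimensions, weighted composite, quality level), here is a minimal sketch; the indicator names, weights, and cut-offs are assumptions, not the SDQI specification from the paper.

```python
# Illustrative sketch: per-indicator scores -> factor analysis into
# dimensions -> weighted aggregation -> quality level. Indicator names,
# weights and thresholds are assumptions, not the SDQI specification.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
indicators = ["completeness", "uniqueness", "validity", "consistency",
              "timeliness", "accuracy", "conformity", "integrity",
              "precision", "coverage"]
scores = pd.DataFrame(rng.uniform(0.3, 1.0, size=(500, 10)), columns=indicators)

fa = FactorAnalysis(n_components=4, random_state=0)   # 10 indicators -> 4 dimensions
dimensions = fa.fit_transform(scores)

weights = np.array([0.4, 0.3, 0.2, 0.1])              # assumed effort/usability weights
composite = dimensions @ weights

def quality_level(x, lo=-0.5, hi=0.5):                # assumed cut-offs
    return "Poor" if x < lo else ("Good" if x > hi else "Acceptable")

print(pd.Series(composite).map(quality_level).value_counts())
```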
Procedia PDF Downloads 139
24809 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm
Authors: Ameur Abdelkader, Abed Bouarfa Hafida
Abstract:
Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and predicted variables. Past occurrences are exploited to predict and derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics in order to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis when dealing with large amounts of data. In fact, because of its volume, its nature (semi-structured or unstructured), and its variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of calculation. In this paper, we propose to extend the predictive analysis algorithm Classification and Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and a version of the extended algorithm is then defined in order to make it applicable to huge quantities of data.
Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm
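The paper's specific modifications are not reproduced here, but the sketch below illustrates one standard idea behind making CART splits parallelizable: per-partition class histograms of a feature can be computed independently on separate workers, merged by summation, and the split chosen from the merged histogram.

```python
# Sketch of one idea for scaling CART: compute per-partition histograms of
# a feature independently (parallelizable), merge them, then pick the split
# from the merged histogram. Illustrative only; not the authors' algorithm.
import numpy as np

def partition_histogram(x, y, bin_edges):
    """Class counts per bin for one data partition (can run on one worker)."""
    idx = np.digitize(x, bin_edges)
    counts = np.zeros((len(bin_edges) + 1, 2))
    for b, label in zip(idx, y):
        counts[b, label] += 1
    return counts

def best_split_from_histogram(counts, bin_edges):
    """Choose the threshold with the lowest weighted Gini impurity."""
    def gini(c):
        n = c.sum()
        return 0.0 if n == 0 else 1.0 - ((c / n) ** 2).sum()
    best = (None, np.inf)
    for i in range(1, len(bin_edges)):
        left, right = counts[:i].sum(axis=0), counts[i:].sum(axis=0)
        n = left.sum() + right.sum()
        score = (left.sum() * gini(left) + right.sum() * gini(right)) / n
        if score < best[1]:
            best = (bin_edges[i - 1], score)
    return best

rng = np.random.default_rng(1)
edges = np.linspace(0, 1, 33)
parts = [(rng.random(10_000), rng.integers(0, 2, 10_000)) for _ in range(4)]
merged = sum(partition_histogram(x, y, edges) for x, y in parts)  # merge = simple sum
print(best_split_from_histogram(merged, edges))
```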
Procedia PDF Downloads 142
24808 Canopy Temperature Acquired from Daytime and Nighttime Aerial Data as an Indicator of Trees’ Health Status
Authors: Agata Zakrzewska, Dominik Kopeć, Adrian Ochtyra
Abstract:
The growing number of new cameras, sensors, and research methods allows for a broader application of thermal data in remote sensing vegetation studies. The aim of this research was to check whether thermal infrared data with a spectral range of 3.6-4.9 μm, obtained during the day and at night, can be used to assess the health condition of selected species of deciduous trees in an urban environment. For this purpose, research was carried out in the city center of Warsaw (Poland) in 2020. During the airborne data acquisition, thermal data, laser scanning data, and orthophoto map images were collected. Synchronously with the airborne data, ground reference data were obtained for 617 studied trees (Acer platanoides, Acer pseudoplatanus, Aesculus hippocastanum, Tilia cordata, and Tilia × euchlora) in different health condition states. The results were as follows: (i) healthy trees are cooler than trees in poor condition and dying trees in both the daytime and nighttime data; (ii) the difference in mean canopy temperature between healthy and dying trees was 1.06 °C in the nighttime data and 3.28 °C in the daytime data; (iii) condition classes differed significantly on both daytime and nighttime thermal data, but only in the daytime data did all condition classes differ statistically significantly from each other. In conclusion, aerial thermal data can be considered an alternative to hyperspectral data as a method of assessing the health condition of trees in an urban environment, especially data obtained during the day, which differentiate condition classes better than data obtained at night. A method based on the fusion of thermal infrared and laser scanning data could be a quick and efficient solution for identifying trees in poor health that should be visually checked in the field.
Keywords: middle wave infrared, thermal imagery, tree discoloration, urban trees
Procedia PDF Downloads 115
24807 A 3D Numerical Environmental Modeling Approach for Assessing Transport of Spilled Oil in Porous Beach Conditions Under a Meso-Scale Tank Design
Authors: J. X. Dong, C. J. An, Z. Chen, E. H. Owens, M. C. Boufadel, E. Taylor, K. Lee
Abstract:
Shorelines are vulnerable to significant environmental impacts from oil spills. Stranded oil can cause short- to long-term detrimental effects along beaches, including injuries to the ecosystem and to socio-economic and cultural resources. In this study, a three-dimensional (3D) numerical modeling approach is developed to evaluate the fate and transport of spilled oil for hypothetical oiled shoreline cases under various combinations of beach geomorphology and environmental conditions. The developed model estimates the spatial and temporal distribution of spilled oil for the various test conditions, using the finite volume method and considering physical transport (dispersion and advection), sinks, and sorption processes. The model includes a user-friendly interface for inputting variables such as beach properties, environmental conditions, and the physical-chemical properties of the spilled oil. An experimental meso-scale tank design was used to test the developed model for dissolved petroleum hydrocarbons within shorelines. The simulated results for the effects of different sediment substrates, oil types, and shoreline features on the transport of spilled oil are comparable to those obtained with a commercially available model. Results show that the properties of the substrates and the oil removal by shoreline effects have significant impacts on oil transport in the beach area. Sensitivity analysis of the 3D model, through application of the one-at-a-time (OAT) method, identified hydraulic conductivity as the most sensitive parameter. The 3D numerical model allows users to examine the behavior of oil on and within beaches, assess potential environmental impacts, and provide technical support for decisions related to shoreline clean-up operations.
Keywords: dissolved petroleum hydrocarbons, environmental multimedia model, finite volume method, sensitivity analysis, total petroleum hydrocarbons
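The OAT sensitivity analysis mentioned above can be illustrated with a generic sketch: each parameter is perturbed individually around a baseline and the relative change of a model output is recorded. The model function and parameter values below are placeholders, not the 3D oil-transport model.

```python
# Generic one-at-a-time (OAT) sensitivity sketch: perturb each parameter
# individually around a baseline and record the relative change of a model
# output. The model here is a placeholder, not the 3D oil-transport model.
import numpy as np

def model_output(params):
    """Placeholder response, e.g. peak dissolved TPH concentration (mg/L)."""
    k, n, q = params["hydraulic_conductivity"], params["porosity"], params["sorption"]
    return 10.0 * k ** 0.8 / (n * (1.0 + q))

baseline = {"hydraulic_conductivity": 1e-3, "porosity": 0.35, "sorption": 0.2}
base_out = model_output(baseline)

for name in baseline:
    for factor in (0.5, 1.5):                     # +/-50% perturbation (assumed)
        p = dict(baseline)
        p[name] = baseline[name] * factor
        change = (model_output(p) - base_out) / base_out
        print(f"{name:>22s} x{factor:.1f}: {change:+.1%} change in output")
```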
Procedia PDF Downloads 217
24806 Hierarchical Clustering Algorithms in Data Mining
Authors: Z. Abdullah, A. R. Hamdan
Abstract:
Clustering is the process of grouping objects and data into clusters so that data objects within the same cluster are similar to each other. Clustering is one of the main areas of data mining, and its algorithms can be classified into partitioning, hierarchical, density-based, and grid-based methods. In this paper, we survey and review four major hierarchical clustering algorithms: CURE, ROCK, CHAMELEON, and BIRCH. The resulting state of the art of these algorithms will help in eliminating their current problems, as well as in deriving more robust and scalable clustering algorithms.
Keywords: clustering, unsupervised learning, algorithms, hierarchical
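None of the four surveyed algorithms is reproduced here, but as a baseline illustration of hierarchical clustering, the following sketch builds and cuts a dendrogram with SciPy; scikit-learn also ships a Birch estimator if a library implementation is needed.

```python
# Baseline illustration of hierarchical (agglomerative) clustering with
# SciPy. CURE, ROCK, CHAMELEON and BIRCH themselves are not shown here.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
data = np.vstack([rng.normal(0, 0.3, (50, 2)),     # three synthetic blobs
                  rng.normal(3, 0.3, (50, 2)),
                  rng.normal([0, 3], 0.3, (50, 2))])

Z = linkage(data, method="ward")                    # build the dendrogram
labels = fcluster(Z, t=3, criterion="maxclust")     # cut it into 3 clusters
print(np.bincount(labels)[1:])                      # cluster sizes
```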
Procedia PDF Downloads 885
24805 End-to-End Monitoring in Oracle Fusion Middleware for Data Verification
Authors: Syed Kashif Ali, Usman Javaid, Abdullah Chohan
Abstract:
In large enterprises, multiple departments use different kinds of information systems and databases according to their needs. These systems are independent and heterogeneous in nature, and sharing information and data between them is not an easy task. The use of middleware technologies has made data sharing between systems much easier. However, monitoring the exchange of data between target and source systems for verification purposes is often complex or impossible for the maintenance department due to security and access privileges on those systems. In this paper, we present our experience with an end-to-end data monitoring approach at the middleware level, implemented in Oracle BPEL for data verification without the help of any monitoring tool.
Keywords: service level agreement, SOA, BPEL, oracle fusion middleware, web service monitoring
Procedia PDF Downloads 480
24804 Dissimilarity Measure for General Histogram Data and Its Application to Hierarchical Clustering
Authors: K. Umbleja, M. Ichino
Abstract:
Symbolic data mining has been developed to analyze very large datasets. It is also useful in cases where entry-specific details should remain hidden. Symbolic data mining is quickly gaining popularity as datasets in need of analysis become ever larger. One type of symbolic data is the histogram, which makes it possible to store huge amounts of information in a single variable with a high level of granularity. Other types of symbolic data can also be described as histograms, which makes the histogram a very important and general symbolic data type: a method developed for histograms can also be applied to other types of symbolic data. Due to its complex structure, analyzing histograms is complicated. This paper proposes a method that allows two histogram-valued variables to be compared and therefore a dissimilarity between two histograms to be found. The proposed method uses the Ichino-Yaguchi dissimilarity measure for mixed feature-type data analysis as a base and develops a dissimilarity measure specifically for histogram data, which allows histograms with different numbers of bins and bin widths (so-called general histograms) to be compared. The proposed dissimilarity measure is then used as a measure for clustering. Furthermore, a linkage method based on weighted averages is proposed, together with the concept of cluster compactness to measure the quality of clustering. The method is validated through application to real datasets. As a result, the proposed dissimilarity measure is found to produce adequate and comparable results for general histograms without loss of detail or the need to transform the data.
Keywords: dissimilarity measure, hierarchical clustering, histograms, symbolic data analysis
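The Ichino-Yaguchi-based measure developed in the paper is not reproduced here; the sketch below only illustrates the "general histogram" problem by rebinning two histograms with different bins onto shared edges and taking an L1-type distance as a stand-in dissimilarity.

```python
# Sketch of comparing two "general" histograms (different bin counts and
# widths) by rebinning both onto shared edges and taking an L1 distance.
# This is an illustrative stand-in, not the paper's dissimilarity measure.
import numpy as np

def rebin(edges, probs, common_edges):
    """Redistribute bin mass onto common edges, assuming uniform density per bin."""
    out = np.zeros(len(common_edges) - 1)
    for lo, hi, p in zip(edges[:-1], edges[1:], probs):
        for j, (clo, chi) in enumerate(zip(common_edges[:-1], common_edges[1:])):
            overlap = max(0.0, min(hi, chi) - max(lo, clo))
            out[j] += p * overlap / (hi - lo)
    return out

def histogram_dissimilarity(h1, h2):
    (e1, p1), (e2, p2) = h1, h2
    common = np.unique(np.concatenate([e1, e2]))
    return 0.5 * np.abs(rebin(e1, p1, common) - rebin(e2, p2, common)).sum()

h_a = (np.array([0.0, 1.0, 2.0, 4.0]), np.array([0.2, 0.5, 0.3]))   # 3 bins
h_b = (np.array([0.0, 2.0, 4.0]), np.array([0.6, 0.4]))             # 2 bins
print(histogram_dissimilarity(h_a, h_b))   # 0 = identical, 1 = disjoint
```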
Procedia PDF Downloads 162
24803 WiFi Data Offloading: Bundling Method in a Canvas Business Model
Authors: Majid Mokhtarnia, Alireza Amini
Abstract:
Mobile operators regard the increase in data traffic as a critical issue. As a result, a vital responsibility of the operators is to deal with this trend in a way that creates added value. This paper addresses a bundling method within a Canvas business model in a WiFi Data Offloading (WDO) strategy, by which some elements of the model may be affected. In the proposed method, a number of data packages are offered to subscribers, some of which include a given volume of complimentary WiFi-offloaded data free of charge. The paper analyzes this method in terms of attractiveness and profitability. The results demonstrate that the quality of implementation of the WDO strongly affects the final result and helps the decision maker to make the best choice.
Keywords: bundling, canvas business model, telecommunication, WiFi data offloading
Procedia PDF Downloads 200
24802 Climate Change and Dengue Transmission in Lahore, Pakistan
Authors: Sadia Imran, Zenab Naseem
Abstract:
Dengue fever is one of the most alarming mosquito-borne viral diseases. The dengue virus has spread exponentially throughout the tropical and sub-tropical regions of the world over the years, particularly in the last ten years. Changing topography; climate change in the form of erratic seasonal trends, rainfall, an early or late monsoon, and longer or shorter summers and winters; globalization; frequent travel throughout the world; and viral evolution have led to more severe forms of dengue. Estimates of the global incidence of dengue infections per year have ranged between 50 million and 200 million; however, recent estimates using cartographic approaches suggest this number is closer to 400 million. In recent years, Pakistan experienced a deadly outbreak of the disease; one possible reason is that people there have maximum exposure outdoors. Public organizations have observed that the changing climate, especially lower average summer temperatures and increased vegetation, has created tropical-like conditions in the city, which are suitable for dengue virus growth. We conduct a time-series analysis to study the interrelationship between dengue incidence and diurnal ranges of temperature and humidity in Pakistan, with Lahore as the main focus of our study, using annual data from 2005 to 2015. We investigate the relationship between climatic variables and dengue incidence and use time-series analysis to describe temporal trends. The results show a rising trend in dengue over the past 10 years, along with a rise in temperature and rainfall in Lahore. This supports the widely held view that the world is suffering from climate change and global warming at different levels. Disease outbreaks are one of the most alarming indications of mankind heading towards destruction, and mitigating measures are needed to keep epidemics from spreading and enveloping cities, countries, and regions.
Keywords: Dengue, epidemic, globalization, climate change
Procedia PDF Downloads 233
24801 Distributed Perceptually Important Point Identification for Time Series Data Mining
Authors: Tak-Chung Fu, Ying-Kit Hung, Fu-Lai Chung
Abstract:
In the field of time series data mining, the concept of the Perceptually Important Point (PIP) identification process was first introduced in 2001. The process was originally devised for financial time series pattern matching and was then found suitable for time series dimensionality reduction and representation. Its strength lies in preserving the overall shape of the time series by identifying its salient points. With the rise of Big Data, time series data contributes a major proportion, especially data generated by sensors in Internet of Things (IoT) environments. Given the nature of PIP identification and its successful applications, it is worth further exploring the opportunity to apply PIP to time series ‘Big Data’. However, the performance of PIP identification has always been considered a limitation when dealing with ‘big’ time series data. In this paper, two distributed versions of PIP identification based on the Specialized Binary (SB) Tree are proposed. The proposed approaches resolve the bottleneck encountered when running the PIP identification process on a standalone computer, and the distributed versions achieve an improvement in terms of speed.
Keywords: distributed computing, performance analysis, Perceptually Important Point identification, time series data mining
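The SB-tree indexing and the two distributed versions are not reproduced here, but a single-machine sketch of the basic PIP process (repeatedly adding the point farthest from the chord joining its neighbouring PIPs) conveys the underlying idea.

```python
# Single-machine sketch of the basic PIP process: repeatedly add the point
# with the largest vertical distance from the line joining its two
# neighbouring PIPs. The SB-tree indexing and the distributed versions
# proposed in the paper are not reproduced here.
import numpy as np

def pip_identify(series, n_pips):
    n = len(series)
    pips = [0, n - 1]                                 # start with the endpoints
    while len(pips) < n_pips:
        best_idx, best_dist = None, -1.0
        for a, b in zip(pips[:-1], pips[1:]):         # each segment between adjacent PIPs
            if b - a < 2:
                continue
            xs = np.arange(a + 1, b)
            line = series[a] + (series[b] - series[a]) * (xs - a) / (b - a)
            dists = np.abs(series[xs] - line)         # vertical distance to the chord
            k = int(np.argmax(dists))
            if dists[k] > best_dist:
                best_idx, best_dist = xs[k], dists[k]
        if best_idx is None:
            break
        pips = sorted(pips + [int(best_idx)])
    return pips

prices = np.cumsum(np.random.default_rng(7).normal(size=500))
print(pip_identify(prices, n_pips=10))                # indices of the salient points
```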
Procedia PDF Downloads 433
24800 Dynamic Analysis of Composite Doubly Curved Panels with Variable Thickness
Authors: I. Algul, G. Akgun, H. Kurtaran
Abstract:
A dynamic analysis of composite doubly curved panels with variable thickness subjected to different pulse types, using the Generalized Differential Quadrature (GDQ) method, is presented in this study. Panels with variable thickness are used in the aerospace and marine industries. Giving panels variable thickness allows the designer to achieve optimum structural efficiency. For this reason, estimating the response of variable-thickness panels is very important for designing more reliable structures under dynamic loads. The dynamic equations for composite panels with variable thickness are obtained using the virtual work principle. Partial derivatives in the equation of motion are expressed with GDQ, and the Newmark average acceleration scheme is used for temporal discretization. Several examples are used to highlight the effectiveness of the proposed method. Results are compared with the finite element method. The effects of taper ratios, boundary conditions, and loading type on the response of the composite panel are investigated.
Keywords: differential quadrature method, doubly curved panels, laminated composite materials, small displacement
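The GDQ spatial discretization is not reproduced here, but the Newmark average acceleration scheme mentioned above can be sketched for a single degree of freedom; the same update applies to the semi-discrete matrix system, and the mass, damping, stiffness, and load values below are assumed.

```python
# Sketch of Newmark average-acceleration time stepping (beta=1/4, gamma=1/2)
# for a single-DOF system M*a + C*v + K*u = F(t). Values are assumed; the
# GDQ spatial discretization from the paper is not reproduced.
import numpy as np

M, C, K = 1.0, 0.05, 40.0                    # assumed mass, damping, stiffness
beta, gamma, dt, nsteps = 0.25, 0.5, 0.01, 500
F = lambda t: 1.0 if t < 0.1 else 0.0        # rectangular pulse load (assumed)

u, v = 0.0, 0.0
a = (F(0.0) - C * v - K * u) / M
K_eff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)

history = []
for i in range(1, nsteps + 1):
    t = i * dt
    rhs = (F(t)
           + M * (u / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
           + C * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                  + dt * (gamma / (2 * beta) - 1.0) * a))
    u_new = rhs / K_eff
    v_new = (gamma / (beta * dt) * (u_new - u)
             + (1.0 - gamma / beta) * v + dt * (1.0 - gamma / (2 * beta)) * a)
    a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    u, v, a = u_new, v_new, a_new
    history.append(u)

print(f"peak displacement: {max(np.abs(history)):.4f}")
```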
Procedia PDF Downloads 360
24799 Analysing Techniques for Fusing Multimodal Data in Predictive Scenarios Using Convolutional Neural Networks
Authors: Philipp Ruf, Massiwa Chabbi, Christoph Reich, Djaffar Ould-Abdeslam
Abstract:
In recent years, convolutional neural networks (CNNs) have demonstrated high performance in image analysis, but oftentimes only structured data are available for a specific problem. By interpreting structured data as images, CNNs can effectively learn and extract valuable insights from tabular data, leading to improved predictive accuracy and uncovering hidden patterns that may not be apparent in traditional structured data analysis. Applying a single neural network to multimodal data, i.e., both structured and unstructured information, can yield significant advantages in terms of time complexity and energy efficiency. Converting structured data into images and merging them with existing visual material offers a promising solution for applying CNNs to multimodal datasets, as they often occur in a medical context. Using suitable preprocessing techniques, the structured data are transformed into image representations in which the respective features are expressed as different formations of colors and shapes. In an additional step, these representations are fused with existing images to incorporate both types of information. The resulting image is then analyzed using a CNN.
Keywords: CNN, image processing, tabular data, mixed dataset, data transformation, multimodal fusion
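A minimal sketch of the general idea follows: scale a tabular record, tile it into a 2D plane, and stack it as an extra channel next to an existing image. The specific colour and shape encodings used in the paper are not reproduced.

```python
# Minimal sketch: scale tabular features to [0, 1], tile them into a 2D
# array of the same spatial size as the image, and stack the result as an
# additional channel for a CNN. Not the paper's specific encoding.
import numpy as np

def tabular_to_channel(row, height, width):
    """Map a 1D feature vector onto an HxW plane by tiling scaled values."""
    scaled = (row - row.min()) / (np.ptp(row) + 1e-9)
    tiled = np.resize(scaled, height * width)        # repeat features to fill the plane
    return tiled.reshape(height, width)

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))                      # existing visual material (H, W, RGB)
tabular_row = rng.random(12)                         # one structured-data record

fused = np.concatenate(
    [image, tabular_to_channel(tabular_row, 64, 64)[..., None]], axis=-1)
print(fused.shape)                                   # (64, 64, 4) -> input to a CNN
```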
Procedia PDF Downloads 123
24798 Knowledge Discovery and Data Mining Techniques in Textile Industry
Authors: Filiz Ersoz, Taner Ersoz, Erkin Guler
Abstract:
This paper addresses issues in the textile industry using data mining techniques. Data mining was applied to data on the stitching of garment products obtained from a textile company. The techniques applied to these data were the CHAID algorithm, the CART algorithm, regression analysis, and artificial neural networks. Classification-based analyses were used in the data mining, and a decision model for production per person and the variables affecting production was derived by this method. The results show that as daily working time increases, production per person decreases. In addition, the relationship between total daily working time and production per person is negative, with production per person showing its strongest, negative relationship with this variable.
Keywords: data mining, textile production, decision trees, classification
Procedia PDF Downloads 349
24797 A Deterministic Large Deviation Model Based on Complex N-Body Systems
Authors: David C. Ni
Abstract:
In previous efforts, we constructed N-body systems from an extended Blaschke product (EBP), which represents a non-temporal and nonlinear extension of the Lorentz transformation. In this construction, we rely on only two parameters, the nonlinear degree and the relative momentum, to characterize the systems. We further explored root computation via iteration with an algorithm extended from the Jenkins-Traub method. The solution sets take the form σ + i[-t, t], where σ and t are real numbers and the interval [-t, t] exhibits various canonical distributions. In this paper, we correlate the convergent sets in the original domain with the solution sets, which demonstrate large-deviation distributions in the codomain. We proceed to compare our approach with established formulations and principles, such as the Donsker-Varadhan and Wentzell-Freidlin theories. The deterministic model based on this construction allows us to explore applications in the areas of finance and statistical mechanics.
Keywords: nonlinear Lorentz transformation, Blaschke equation, iteration solutions, root computation, large deviation distribution, deterministic model
Procedia PDF Downloads 393
24796 Investigation of Delivery of Triple Play Data in GE-PON Fiber to the Home Network
Authors: Ashima Anurag Sharma
Abstract:
Optical fiber based networks can deliver performance that supports the increasing demand for high-speed connections. One of the technologies that has emerged in recent years is the Passive Optical Network. This paper demonstrates the simultaneous delivery of triple play services (data, voice, and video) and presents a comparison between various data rates. It is shown that as the data rate increases, the number of users that can be supported decreases due to the increase in bit error rate.
Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT
Procedia PDF Downloads 527
24795 Microarray Gene Expression Data Dimensionality Reduction Using PCA
Authors: Fuad M. Alkoot
Abstract:
Different experimental technologies, such as microarray sequencing, have been proposed to generate high-resolution genetic data in order to understand the complex dynamic interactions between complex diseases and the biological system components of genes and gene products. However, the generated samples have a very large dimension, reaching thousands of features, which hinders attempts to design a classifier system that can identify diseases on the basis of such data. Additionally, the high overlap between the class distributions makes the task more difficult. The data we experiment with were generated for the identification of autism. They comprise 142 samples, which is small compared to the large dimension of the data. Classifier systems trained on these data yield very low classification rates that are almost equivalent to guessing. We aim to reduce the data dimension and improve the data for classification. Here, we experiment with applying a multistage PCA to the genetic data to reduce its dimensionality. Results show a significant improvement in the classification rates, which increases the possibility of building an automated system for autism detection.
Keywords: PCA, gene expression, dimensionality reduction, classification, autism
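A sketch of the PCA-then-classify idea is shown below on synthetic data of the same flavour (142 samples, thousands of features); the multistage PCA details and the real autism dataset are not reproduced.

```python
# Sketch of PCA-based dimensionality reduction followed by classification
# on synthetic data with the same shape (142 samples, thousands of
# features). Not the paper's multistage PCA or its real dataset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(142, 5000))                     # expression matrix (samples x genes)
y = rng.integers(0, 2, size=142)                     # case/control labels

pipe = make_pipeline(StandardScaler(),
                     PCA(n_components=20),           # 5000 features -> 20 components
                     LogisticRegression(max_iter=1000))
scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```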
Procedia PDF Downloads 560
24794 Investigation of Projected Organic Waste Impact on a Tropical Wetland in Singapore
Authors: Swee Yang Low, Dong Eon Kim, Canh Tien Trinh Nguyen, Yixiong Cai, Shie-Yui Liong
Abstract:
Nee Soon swamp forest is one of the last vestiges of tropical wetland in Singapore. Understanding the hydrological regime of the swamp forest and its implications for water quality is critical to guide stakeholders in implementing effective measures to preserve the wetland against anthropogenic impacts. In particular, although current field measurement data do not indicate a concern with organic pollution, reviewing the ways in which the wetland responds to an elevated organic waste influx (and the corresponding impact on dissolved oxygen, DO) can help identify potential hotspots and the impact on the outflow from the catchment, which drains into downstream controlled watercourses. An integrated water quality model is therefore developed in this study to investigate the spatial and temporal concentrations of DO and organic pollution (quantified by biochemical oxygen demand, BOD) within the catchment’s river network under hypothetical, projected scenarios of spiked upstream inflow. The model was developed using MIKE HYDRO for modelling the study domain, together with the MIKE ECO Lab numerical laboratory for characterising water quality processes. Model parameters were calibrated against time series of observed discharges at three measurement stations along the river network. Over a simulation period of April 2014 to December 2015, the calibrated model predicted that a continuous spiked inflow of 400 mg/l BOD would elevate downstream concentrations at the catchment outlet to an average of 12 mg/l, from an assumed nominal baseline BOD of 1 mg/l, while DO levels would decrease from an initial 5 mg/l to 0.4 mg/l. Though a scenario of spiked organic influx at the swamp forest’s undeveloped upstream sub-catchments is currently unlikely to occur, the outcomes will nevertheless be beneficial for future planning studies in understanding how the water quality of the catchment would be affected should urban redevelopment works be considered around the swamp forest.
Keywords: hydrology, modeling, water quality, wetland
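The MIKE HYDRO / ECO Lab setup is commercial software and is not reproduced here; as a conceptual illustration of the coupled BOD decay and reaeration processes behind the reported response, a classical Streeter-Phelps sketch follows, with assumed rate constants and the 400 mg/l spike taken from the abstract.

```python
# Conceptual Streeter-Phelps sketch of coupled BOD decay and DO reaeration.
# Rate constants and saturation DO are assumed values; the 400 mg/L spike
# and 5 mg/L initial DO come from the abstract. Not the MIKE model.
import numpy as np

kd, ka = 0.3, 0.6               # BOD decay and reaeration rates (1/day, assumed)
DO_sat = 7.5                    # saturation DO (mg/L, assumed)
L0, D0 = 400.0, DO_sat - 5.0    # spiked BOD; initial oxygen deficit

t = np.linspace(0.0, 20.0, 201)                  # days of travel time
L = L0 * np.exp(-kd * t)                         # remaining BOD
D = (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)
DO = np.clip(DO_sat - D, 0.0, None)              # DO cannot go negative

print(f"remaining BOD after 20 days: {L[-1]:.1f} mg/L")
print(f"minimum DO: {DO.min():.2f} mg/L at day {t[DO.argmin()]:.1f}")
```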
Procedia PDF Downloads 140
24793 Data Science-Based Key Factor Analysis and Risk Prediction of Diabetes
Authors: Fei Gao, Rodolfo C. Raga Jr.
Abstract:
This research proposal aims to ascertain the major risk factors for diabetes and to design a predictive model for risk assessment. The project aims to improve early detection and management of diabetes by utilizing data science techniques, which may improve patient outcomes and healthcare efficiency. The phase relation values of each attribute were used to analyze and choose the attributes that might influence an examinee's survival probability, using the Diabetes Health Indicators Dataset from Kaggle as the research data. We compare and evaluate eight machine learning algorithms. Our investigation begins with comprehensive data preprocessing, including feature engineering and dimensionality reduction, aimed at enhancing data quality. The dataset, comprising health indicators and medical data, serves as a foundation for training and testing these algorithms. A rigorous cross-validation process is applied, and we assess performance using five key metrics: accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC). After analyzing the data characteristics, we investigate their impact on the likelihood of diabetes and develop corresponding risk indicators.
Keywords: diabetes, risk factors, predictive model, risk assessment, data science techniques, early detection, data analysis, Kaggle
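The evaluation protocol described above can be sketched as follows: several classifiers compared by cross-validation on accuracy, precision, recall, F1, and ROC-AUC. Synthetic stand-in data is used; the Kaggle dataset and the full set of eight algorithms are not reproduced.

```python
# Sketch of the evaluation protocol: compare classifiers by cross-validation
# on accuracy, precision, recall, F1 and ROC-AUC. Synthetic stand-in data;
# not the Kaggle dataset or the paper's full set of eight algorithms.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=21, weights=[0.86],
                           random_state=0)           # class imbalance (assumed)
metrics = ["accuracy", "precision", "recall", "f1", "roc_auc"]
models = {"logreg": LogisticRegression(max_iter=1000),
          "tree": DecisionTreeClassifier(max_depth=6),
          "forest": RandomForestClassifier(n_estimators=200)}

for name, model in models.items():
    cv = cross_validate(model, X, y, cv=5, scoring=metrics)
    summary = ", ".join(f"{m}={cv['test_' + m].mean():.2f}" for m in metrics)
    print(f"{name:>7s}: {summary}")
```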
Procedia PDF Downloads 75
24792 Impacts of Climate Elements on the Annual Periodic Behavior of the Shallow Groundwater Level: Case Study from Central-Eastern Europe
Authors: Tamas Garamhegyi, Jozsef Kovacs, Rita Pongracz, Peter Tanos, Balazs Trasy, Norbert Magyar, Istvan G. Hatvani
Abstract:
Like most environmental processes, shallow groundwater fluctuation under natural circumstances behaves periodically. With the statistical tools at hand, it can easily be determined whether a period exists in the data or not. Thus, the question may be raised: does the estimated average period characterize the whole time interval, or not? This is especially important in the case of such a complex phenomenon as shallow groundwater fluctuation, which is driven by numerous factors. Because of the continuous changes in the oscillating components of shallow groundwater time series, the most appropriate method for investigating their periodicity is wavelet spectrum analysis. The aims of the research were to investigate the periodic behavior of the shallow groundwater time series of an agriculturally important and drought-sensitive region in Central-Eastern Europe and its relationship to the European pressure action centers. During the research, ~216 shallow groundwater observation wells located in the eastern part of the Great Hungarian Plain, with a temporal coverage of 50 years, were scanned for periodicity. By taking the full time interval as 100%, the presence of any period could be expressed as a percentage. With the complex hydrogeological/meteorological model developed in this study, non-periodic time intervals were found in the shallow groundwater levels. On the local scale, this phenomenon was linked to drought conditions, and on the regional scale to the maxima of the regional air pressure in the Gulf of Genoa. The study documented an important link between shallow groundwater levels and climate variables/indices, facilitating the necessary adaptation strategies on national and/or regional scales, which have to take into account the predictions of drought-related climatic conditions.
Keywords: climate change, drought, groundwater periodicity, wavelet spectrum and coherence analyses
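A sketch of scanning a water-level series for periodicity with a continuous wavelet transform is given below (using PyWavelets); the synthetic series has an annual cycle that vanishes mid-record, mimicking the non-periodic intervals discussed above, and is not the study's workflow.

```python
# Sketch of scanning a water-level series for periodicity with a continuous
# wavelet transform (PyWavelets). The synthetic series has an annual cycle
# that vanishes mid-record; this is not the study's exact workflow.
import numpy as np
import pywt

months = np.arange(600)                          # 50 years of monthly levels
level = 0.5 * np.random.default_rng(3).normal(size=600)
annual = np.sin(2 * np.pi * months / 12.0)
annual[240:360] = 0.0                            # a 10-year non-periodic gap
level += annual

scales = np.arange(2, 64)
coeffs, freqs = pywt.cwt(level, scales, "morl", sampling_period=1.0)
power = np.abs(coeffs) ** 2

annual_band = (1.0 / freqs > 10) & (1.0 / freqs < 14)    # periods near 12 months
annual_power = power[annual_band].mean(axis=0)
periodic_share = (annual_power > annual_power.mean()).mean()
print(f"fraction of record with a strong annual signal: {periodic_share:.0%}")
```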
Procedia PDF Downloads 385
24791 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0
Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini
Abstract:
Nowadays, companies face many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which creates challenges around data management and governance. Furthermore, integrating data from multiple systems and technologies is also challenging. Despite these pains, companies still pursue digitalization because, by embracing advanced technologies, they can improve efficiency, quality, decision-making, and customer experience while also creating new business models and revenue streams. This paper focuses on the issue that data are stored in data silos with different schemas and structures. The conventional approaches to addressing this issue involve data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily focus on the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. In this work, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models as an efficient standard for communication in Industry 4.0. The paper highlights how our approach can facilitate the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates Asset Administration Shell technology to model and map the company’s data and utilizes a knowledge graph for data storage and exploration.
Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling
Procedia PDF Downloads 94
24790 Analyzing Land Use Change and Its Impacts on the Urban Environment in a Fast-Growing Metropolitan City of Pakistan
Authors: Muhammad Nasar-u-Minallah, Dagmar Haase, Salman Qureshi
Abstract:
In rapidly growing developing countries, cities are becoming more urbanized, leading to modifications of the urban climate. Rapid urbanization, especially unplanned urban land expansion, together with climate change, has a profound impact on urban settlements and the urban thermal environment. Cities, particularly in Pakistan, face significant environmental problems and uneven development, and thus it is important to strengthen the investigation of the urban environmental pressure brought about by land use change and urbanization. The present study investigated the long-term modification of the urban environment by urbanization, utilizing the spatio-temporal dynamics of land use change, urban population data, urban heat islands, monthly maximum and minimum temperatures over thirty years, multiple remote sensing images, and spectral indices such as the Normalized Difference Built-up Index and the Normalized Difference Vegetation Index. The results indicate rapid growth of the urban built-up area and a reduction in vegetation cover over the last three decades (1990-2020). A positive correlation between urban heat islands and the Normalized Difference Built-up Index, and a negative correlation between urban heat islands and the Normalized Difference Vegetation Index, clearly show how urbanization is affecting the local environment. The increase in air and land surface temperatures is dangerous to human comfort. Practical approaches, such as increasing urban green spaces and proper planning of the cities, are suggested to help prevent further modification of the urban thermal environment by urbanization. The findings of this work are thus important for multi-sectoral use in the cities of Pakistan. By taking these results into consideration, urban planners, decision-makers, and local governments can formulate policies to mitigate the impacts of urban land use on the urban thermal environment in Pakistan.
Keywords: land use, urban environment, local climate, Lahore
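The two spectral indices used above can be computed from Landsat surface-reflectance bands as sketched below; the band arrays are random placeholders (for Landsat 8, red = B4, NIR = B5, SWIR1 = B6).

```python
# Sketch of the two spectral indices used above, computed from Landsat
# surface-reflectance bands. The band arrays are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
red, nir, swir1 = (rng.uniform(0.02, 0.6, (100, 100)) for _ in range(3))

ndvi = (nir - red) / (nir + red + 1e-9)          # vegetation: higher = greener
ndbi = (swir1 - nir) / (swir1 + nir + 1e-9)      # built-up: higher = more impervious

print(f"mean NDVI: {ndvi.mean():+.2f}, mean NDBI: {ndbi.mean():+.2f}")
# In UHI studies, land surface temperature typically correlates positively
# with NDBI and negatively with NDVI, matching the relationships reported
# in the abstract.
```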
Procedia PDF Downloads 110
24789 The Temporal Pattern of Bumble Bees in Plant Visiting
Authors: Zahra Shakoori, Farid Salmanpour
Abstract:
Pollination is a vital ecosystem service for maintaining environmental stability. The decline of pollinators can disrupt the ecological balance by affecting components of biodiversity. Bumble bees are crucial pollinators, playing a vital role in maintaining plant diversity. This study investigated the temporal patterns of their visits to flowers in Kiasar National Park, Iran. Observations were conducted in June 2024, totaling 442 person-minutes. Five species of bumble bees were identified, and the study revealed that they consistently visited an average of 12-15 flowers per minute, regardless of species. The findings highlight the importance of protecting natural habitats, where bumble bee populations are thriving in the absence of human-induced stressors. The study area in Kiasar National Park, located in the southeast of Mazandaran, northern Iran, lies at an altitude of 1800-2200 meters and includes both forest and pasture. Bumble bee surveys were carried out on sunny days in June 2024, starting at dawn and ending at sunset. To avoid double-counting, we systematically searched for foraging habitats on low-sloping ridges with high mud density, frequently moving between patches. We recorded bumble bee visits to flowers and plant species per minute using direct observation, a stopwatch, and a pre-prepared form. Analysis of variance (ANOVA) with a confidence level of 95% was used to examine potential differences in foraging rates across the bumble bee species and across the flowers, plant individuals, and plant species visited. Bumble bee identification relied on morphological characters. Five species of bumble bees (Bombus fragrans, Bombus haematurus, Bombus lucorum, Bombus melanurus, Bombus terrestris) were recorded during the study. The results showed that the bumble bee species did not differ in their rates of visiting floral resources. In general, bumble bees visit an average of 12-15 flowers every 60 seconds; at the same time, they visit 3-5 plant individuals and, on average, 1 to 3 plant species per minute. While many taxa contribute to pollination, insects, especially bees, are crucial for maintaining plant diversity and ecosystem functions. As plant diversity increases, the stopping rate of pollinating insects rises, which reduces their foraging activity; bumble bees therefore stop more frequently in natural areas than in agricultural fields due to the higher plant diversity. Our findings emphasize the need to protect natural habitats like Kiasar National Park, where bumble bees thrive without human-induced stressors like pesticides, livestock grazing, and pollution. With bumble bee populations declining globally, further research is essential to understand their behavior in different environments and to develop effective conservation strategies to protect them.
Keywords: bumble bees, pollination, pollinator, plant diversity, Iran
Procedia PDF Downloads 28
24788 Ophthalmic Services Covered by Albasar International Foundation in Sudan
Authors: Mohammad Ibrahim
Abstract:
The study was conducted at the Albasar International Foundation ophthalmic hospitals in Sudan to examine the burden and patterns of ophthalmic disorders in the sector. A review of the hospitals' records revealed that the total number of patients examined in the hospitals and in outreach camps conducted by the hospitals is 10,513,874, the total number of surgeries is 694,015, and the total number of pupils covered by the school program is 230,382. The organization works with a strong management system, standards, and quality result-based planning. The study showed that ophthalmic problems in Sudan account for a large share of the disease burden and that temporary blindness disorders are common, since the major cases and surgeries were cataract (57.8%), retinal problems (2.9%), glaucoma (2.4%), and orbit and oculoplastic disorders (2.2%); other disorders were refractive errors, squint and strabismus, corneal, pediatric, and minor ophthalmic disorders.
Keywords: hospitals and outreach ophthalmic services, largest coverage of ophthalmic services, nonprofitable ophthalmic services, strong management system and standards
Procedia PDF Downloads 410
24787 Robust Numerical Solution for Flow Problems
Authors: Gregor Kosec
Abstract:
A simple and robust numerical approach for solving flow problems is presented, in which the involved physical fields are represented through local approximation functions, i.e., the considered field is approximated over a local support domain. The approximation functions are then used to evaluate the partial differential operators. The type of approximation, the size of the support domain, and the type and number of basis functions can be general. The solution procedure is formulated entirely through local computational operations. Besides the spatial discretization, the pressure-velocity coupling is also performed locally while retaining the correct temporal transient. The complete locality of the introduced numerical scheme has several beneficial effects. One of the most attractive is its simplicity, since it can be understood as a generalized Finite Difference Method, yet it is much more powerful. The presented methodology offers many possibilities for treating challenging cases, e.g., nodal adaptivity to address regions with sharp discontinuities or p-adaptivity to treat obscure anomalies in the physical field. The balance between stability, computational complexity, and accuracy can be regulated by changing the number of support nodes, etc. All these features can be controlled on the fly during the simulation. The presented methodology is relatively simple to understand and implement, which makes it a potentially powerful tool for engineering simulations. Besides its simplicity and straightforward implementation, there are many opportunities to fully exploit modern computer architectures through different parallel computing strategies. The performance of the method is demonstrated on the lid-driven cavity problem, the backward-facing step problem, and the de Vahl Davis natural convection test, extended also to a low-Prandtl-number fluid and Darcy porous flow. Results are presented in terms of velocity profiles, convergence plots, and stability analyses, and results for all cases are compared against published data.
Keywords: fluid flow, meshless, low Pr problem, natural convection
Procedia PDF Downloads 233
24786 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption
Authors: Waziri Victor Onomza, John K. Alhassan, Idris Ismaila, Noel Dogonyaro Moses
Abstract:
This paper addresses the problem of building secure computational services for encrypted information in cloud computing without decrypting the encrypted data; it thereby aims at a computational-encryption model that can enhance the security of big data in terms of privacy, confidentiality, and availability for users. The cryptographic model applied for the computational processing of the encrypted data is a fully homomorphic encryption scheme. We contribute theoretical presentations of high-level computational processes, based on number theory and algebra, that can easily be integrated and leveraged in cloud computing, with detailed theoretical mathematical concepts for the fully homomorphic encryption models. This contribution supports the full implementation of a big data analytics based cryptographic security algorithm.
Keywords: big data analytics, security, privacy, bootstrapping, homomorphic, homomorphic encryption scheme
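The paper's fully homomorphic scheme is not shown here; as a lighter-weight illustration of computing on encrypted data in an untrusted cloud, the partially homomorphic Paillier scheme (the python-paillier `phe` package) supports addition over ciphertexts.

```python
# Illustration of computing on encrypted data with the partially
# homomorphic Paillier scheme (python-paillier). This is a stand-in for
# the idea only; it is not the fully homomorphic scheme in the paper.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

salaries = [5200, 4800, 6100]                      # data owner's plaintext values
encrypted = [public_key.encrypt(s) for s in salaries]

# The untrusted cloud aggregates ciphertexts only; it never sees the data.
encrypted_total = sum(encrypted[1:], encrypted[0])

# Only the data owner, holding the private key, can decrypt the result.
print(private_key.decrypt(encrypted_total))        # 16100
```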
Procedia PDF Downloads 379
24785 Energy Self-Sufficiency Through Smart Micro-Grids and Decentralised Sector-Coupling
Authors: C. Trapp, A. Vijay, M. Khorasani
Abstract:
Decentralised micro-grids with sector coupling can combat the spatial and temporal intermittency of renewable energy by combining the power, transportation, and infrastructure sectors. Intelligent energy-conversion concepts such as electrolysers, hydrogen engines, and fuel cells, combined with energy storage using intelligent batteries and hydrogen storage, form the backbone of such a system. This paper describes a micro-grid at the university campus based on photovoltaic cells, battery storage, an innovative modular and scalable Anion Exchange Membrane (AEM) electrolyser with an efficiency of up to 73%, high-pressure hydrogen storage, and a cutting-edge combustion-engine-based Combined Heat and Power (CHP) plant with more than 85% efficiency, which addresses the challenges of decarbonization whilst eliminating the need for expensive high-voltage infrastructure.
Keywords: sector coupling, micro-grids, energy self-sufficiency, decarbonization, AEM electrolysis, hydrogen CHP
Procedia PDF Downloads 183
24784 Protecting Privacy and Data Security in Online Business
Authors: Bilquis Ferdousi
Abstract:
With the exponential growth of online business, the threat to consumers’ privacy and data security has become a serious challenge. This literature review-based study focuses on developing a better understanding of those threats and of the legislative measures that have been taken to address them. Research shows that people are increasingly involved in online business using different digital devices and platforms, although this practice varies across age groups. The threat to consumers’ privacy and data security is a serious hindrance to developing trust in online businesses. Some legislative measures have been taken at the federal and state levels to protect consumers’ privacy and data security. The study was based on an extensive review of the current literature on protecting consumers’ privacy and data security and on the legislative measures that have been taken.
Keywords: privacy, data security, legislation, online business
Procedia PDF Downloads 106
24783 Flowing Online Vehicle GPS Data Clustering Using a New Parallel K-Means Algorithm
Authors: Orhun Vural, Oguz Bayat, Rustu Akay, Osman N. Ucan
Abstract:
This study presents a new parallel approach to clustering GPS data. The evaluation was made by comparing the execution times of various clustering algorithms on GPS data. This paper proposes a neighborhood-based parallel K-means algorithm to make clustering faster. The proposed parallelization approach assumes that each GPS data point represents a vehicle, and vehicles close to each other communicate after the vehicles are clustered. This parallelization approach was examined on continuously changing GPS data of different sizes and compared with the serial K-means algorithm and other serial clustering algorithms. The results demonstrated that the proposed parallel K-means algorithm works much faster than the other clustering algorithms.
Keywords: parallel k-means algorithm, parallel clustering, clustering algorithms, clustering on flowing data
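The neighborhood-based algorithm proposed in the paper is not reproduced here; the sketch below shows a generic data-parallel K-means in which each worker assigns its chunk of GPS points to the nearest centroid and returns partial sums that the driver merges before updating the centroids.

```python
# Generic data-parallel K-means sketch: each worker assigns its chunk of
# GPS points and returns partial sums; the driver merges them and updates
# the centroids. Not the neighborhood-based algorithm from the paper.
import numpy as np
from multiprocessing import Pool

def partial_assign(args):
    chunk, centroids = args
    d = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    sums = np.zeros_like(centroids)
    counts = np.zeros(len(centroids))
    for k in range(len(centroids)):
        sums[k] = chunk[labels == k].sum(axis=0)
        counts[k] = (labels == k).sum()
    return sums, counts

def parallel_kmeans(points, k=4, iters=10, workers=4):
    rng = np.random.default_rng(0)
    centroids = points[rng.choice(len(points), k, replace=False)]
    chunks = np.array_split(points, workers)
    with Pool(workers) as pool:
        for _ in range(iters):
            results = pool.map(partial_assign, [(c, centroids) for c in chunks])
            sums = sum(r[0] for r in results)
            counts = sum(r[1] for r in results)
            centroids = sums / np.maximum(counts, 1)[:, None]
    return centroids

if __name__ == "__main__":
    gps = np.random.default_rng(1).uniform([40.9, 28.6], [41.2, 29.3], (20_000, 2))
    print(parallel_kmeans(gps))   # cluster centres (lat, lon)
```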
Procedia PDF Downloads 221