Search results for: spatial and temporal data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26825

24455 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis

Authors: Meng Su

Abstract:

High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
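
As a rough illustration of the dimensionality-reduction step described above, the following minimal NumPy sketch builds a Gaussian affinity matrix, normalizes it into a Markov transition matrix, and uses its leading nontrivial eigenvectors as diffusion coordinates. The kernel bandwidth epsilon, the diffusion time t, and the toy data are illustrative assumptions, not the authors' implementation, and the DPM component is omitted.

```python
import numpy as np

def diffusion_map(X, n_components=2, epsilon=1.0, t=1):
    """Map data to a low-dimensional diffusion space.

    X: (n_samples, n_features) array.
    Returns the first n_components nontrivial diffusion coordinates.
    """
    # Pairwise squared distances and Gaussian kernel (affinity matrix)
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / epsilon)

    # Row-normalize to obtain the Markov transition matrix P = D^-1 K
    P = K / K.sum(axis=1, keepdims=True)

    # Eigendecomposition; the top eigenvector is trivial (all ones)
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    eigvals = eigvals.real[order]
    eigvecs = eigvecs.real[:, order]

    # Diffusion coordinates: lambda_k^t * psi_k, skipping the trivial pair
    return eigvecs[:, 1:n_components + 1] * (eigvals[1:n_components + 1] ** t)

# Toy usage: embed 200 random 3-D points into 2 diffusion coordinates
X = np.random.rand(200, 3)
Y = diffusion_map(X, n_components=2)
print(Y.shape)
```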

Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis

Procedia PDF Downloads 108
24454 A Security Cloud Storage Scheme Based Accountable Key-Policy Attribute-Based Encryption without Key Escrow

Authors: Ming Lun Wang, Yan Wang, Ning Ruo Sun

Abstract:

With the development of cloud computing, more and more users are starting to use cloud storage services. However, several issues exist: 1) the cloud server steals the shared data, 2) sharers collude with the cloud server to steal the shared data, 3) the cloud server tampers with the shared data, 4) sharers and the key generation center (KGC) conspire to steal the shared data. In this paper, we use the advanced encryption standard (AES), hash algorithms, and accountable key-policy attribute-based encryption without key escrow (WOKE-AKP-ABE) to build a secure cloud storage scheme. The data are encrypted to protect user privacy, and hash algorithms are used to prevent the cloud server from tampering with the data uploaded to the cloud. Analysis results show that this scheme can resist collusion attacks.
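
The following minimal Python sketch illustrates only the generic AES-plus-hash building blocks mentioned above (the WOKE-AKP-ABE component is not reproduced): the file is encrypted before upload, and a SHA-256 digest of the ciphertext kept by the data owner reveals any tampering by the cloud server. It uses the `cryptography` package; the data and workflow are hypothetical.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Encrypt the shared file before upload (confidentiality)
key = AESGCM.generate_key(bit_length=256)
aes = AESGCM(key)
nonce = os.urandom(12)
plaintext = b"shared record"
ciphertext = aes.encrypt(nonce, plaintext, None)

# Keep a digest of the ciphertext locally (tamper evidence)
digest = hashlib.sha256(ciphertext).hexdigest()

# ... later, after downloading the object from the cloud server ...
downloaded = ciphertext  # stand-in for the fetched cloud copy
assert hashlib.sha256(downloaded).hexdigest() == digest, "cloud copy tampered"
recovered = aes.decrypt(nonce, downloaded, None)
```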

Keywords: cloud storage security, sharing storage, attributes, hash algorithm

Procedia PDF Downloads 390
24453 Strengthening the Security of the Thai-Myanmar Border Trade of the People in the Mae Sot Customs Checkpoint Area, Tak Province

Authors: Sakapas Saengchai

Abstract:

A study on strengthening the security of the Thai-Myanmar border trade area of the people in the Mae Sot customs checkpoint area, Tak province, was designed as a qualitative research study. Its objectives were to study the principles of strengthening border trade security and enhancing people's participation, and to develop a border trade model that enhances the spatial economy and improves people's quality of life. Data were collected through participant observation, in-depth interviews and group discussions with administrators of the Mae Sot customs checkpoint in Tak province, private entrepreneurs and community leaders, and through a community forum held to exchange opinions with people in the area. The results of the study found that: 1) security development means promoting crime reduction, reducing drug trafficking, smuggling and human trafficking, planning and preparing to protect people from terrorism, epidemics and communicable diseases, and cooperating with Myanmar on border rules for people and workers; 2) wealth development means promoting investment and transport links in the cross-border logistics value chain for goods and services on the Thai-Myanmar border, amending regulations and laws to promote fair trade, emphasizing convenient and fast service, and promoting the Thai border area as a tourist attraction that can create prosperity and income for the community in the area, using natural resources in a balanced way, with environmentally friendly production and consumption, and with the participation of the public sector, the private sector and people from all sectors in the sustainable development of the Thai border.

Keywords: security, border trade, customs, participation, people

Procedia PDF Downloads 181
24452 A Study on Valve Life Evaluation Based on Test Data

Authors: Binjuan Xu, Qian Zhao, Ping Jiang, Bo Guo, Zhijun Cheng, Xiaoyue Wu

Abstract:

Astronautical valves are key units in the engine systems of astronautical products; their reliability influences the outcome of rocket or missile launches and can even lead to damage to staff and devices on the ground. Moreover, failures in the engine system may affect the hitting accuracy and flight range of missiles, so high reliability is essential for astronautical products. Much of the literature estimates valve reliability from only a few failure test data; this paper therefore proposes a new method. Using data acquired from temperature, vibration, and action tests corresponding to different failure modes, the reliability in each failure mode is estimated, and the three kinds of tests are then treated as three stages of the product's process whose results are integrated to obtain the valve's overall reliability. A comparison of results obtained from test data and from simulated data illustrates how to obtain valve reliability from few failure data with distinct failure modes and shows that the results are effective and rational.
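
One simple way to realize the staged integration described above, assuming an exponential lifetime model for each failure mode (the paper's own estimator may differ), is to fit a failure rate per test stage from possibly censored data and multiply the stage reliabilities as a series system. The test data below are hypothetical.

```python
import numpy as np

def stage_reliability(times, censored, mission_time):
    """Exponential MLE from (possibly right-censored) test data.

    times: observed times (failure or censoring) for one failure mode.
    censored: booleans, True where the unit survived its test.
    """
    failures = np.sum(~np.asarray(censored))
    total_time = np.sum(times)            # total time on test
    lam = failures / total_time           # MLE of the failure rate
    return np.exp(-lam * mission_time)    # reliability at mission time

# Hypothetical data (hours) for the three test stages
temp = stage_reliability([120, 200, 340, 400], [False, False, True, True], 50)
vib  = stage_reliability([80, 150, 300],       [False, True, True],        50)
act  = stage_reliability([500, 520, 610],      [True, True, False],        50)

# Treat the stages as series phases of the product's process
valve_reliability = temp * vib * act
print(f"valve reliability over the mission: {valve_reliability:.3f}")
```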

Keywords: censored data, temperature tests, valves, vibration tests

Procedia PDF Downloads 345
24451 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences

Authors: C. Xavier Mendieta, J. J McArthur

Abstract:

Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using public reporting data. Data from Ontario's Ministry of Energy for post-secondary educational institutions are used to develop a series of building-archetype dynamic load profiles and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) careful data screening and outlier identification are essential to develop a valid dataset; (2) the key features used to model the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluate the validity of the reported data.
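
A minimal pandas sketch of the screening-and-benchmarking idea, with hypothetical disclosure records: an interquartile-range rule (one common choice, assumed here) flags outlier energy-use intensities before a median benchmark is taken.

```python
import pandas as pd

# Hypothetical mandatory-disclosure records: one row per residence building
df = pd.DataFrame({
    "floor_area_m2": [12000, 8500, 15200, 9800, 40000],
    "year_built":    [1968, 1985, 1972, 2001, 1990],
    "energy_kwh":    [2.1e6, 1.4e6, 2.9e6, 1.5e6, 2.0e7],  # last row suspect
})
df["eui_kwh_m2"] = df["energy_kwh"] / df["floor_area_m2"]

# Step 1: screen outliers with the interquartile-range rule
q1, q3 = df["eui_kwh_m2"].quantile([0.25, 0.75])
iqr = q3 - q1
clean = df[df["eui_kwh_m2"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

# Step 2: a simple benchmark, e.g. the median EUI of the screened set
benchmark = clean["eui_kwh_m2"].median()
print(f"benchmark EUI: {benchmark:.1f} kWh/m2/yr")
```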

Keywords: building archetypes, data analysis, energy benchmarks, GHG emissions

Procedia PDF Downloads 306
24450 Collision Detection Algorithm Based on Data Parallelism

Authors: Zhen Peng, Baifeng Wu

Abstract:

Modern computing technology is entering the era of parallel computing, with a trend toward sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to follow this trend: it gathers more and more computing capacity by increasing the number of processor cores without requiring the program to be modified. Meanwhile, in scientific computing and engineering design, many computation-intensive applications face the challenge of increasingly large amounts of data, and data-parallel computing is an important way to further improve their performance. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability are unmatched by traditional algorithms.
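
As a sketch of the decomposition idea, the snippet below represents each complex object by a set of bounding spheres (an illustrative stand-in for the paper's simple objects) and evaluates all sphere pairs in one vectorized NumPy operation, the data-parallel pattern that maps naturally onto SIMD hardware.

```python
import numpy as np

def any_collision(centers_a, radii_a, centers_b, radii_b):
    """Data-parallel sphere-sphere test between two decomposed objects.

    Each complex object is represented by a set of bounding spheres;
    all pairwise tests run as one vectorized (SIMD-friendly) operation.
    """
    # (n, 1, 3) - (1, m, 3) broadcasts to all n*m pairs at once
    diff = centers_a[:, None, :] - centers_b[None, :, :]
    dist2 = np.einsum("nmk,nmk->nm", diff, diff)
    limit = (radii_a[:, None] + radii_b[None, :]) ** 2
    return bool(np.any(dist2 <= limit))

# Two hypothetical building elements, each decomposed into spheres
a_c = np.random.rand(64, 3) * 10
b_c = np.random.rand(128, 3) * 10
print(any_collision(a_c, np.full(64, 0.3), b_c, np.full(128, 0.3)))
```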

Keywords: data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability

Procedia PDF Downloads 289
24449 Changing Arbitrary Data Transmission Period by Using Bluetooth Module on Gas Sensor Node of Arduino Board

Authors: Hiesik Kim, Yong-Beom Kim, Jaheon Gu

Abstract:

Internet of Things (IoT) applications are widely deployed and spreading worldwide, and local wireless data transmission techniques must be developed to keep pace with them. Bluetooth is a wireless communication technique defined by the Bluetooth Special Interest Group (SIG); it operates in the 2.4 GHz frequency range and exploits frequency hopping to avoid collisions with other devices. For the experiment, a device transmitting measured data was built from an Arduino open-source hardware board, a gas sensor, and a Bluetooth module, and an algorithm controlling the transmission rate is demonstrated. The experiment on controlling the transmission rate also proceeded by developing an Android application that receives the measured data, and the results show that this rate can be controlled. In the future, the communication algorithm will need improvement, because a few errors occur when data are transmitted or received.
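
On the receiving side, a Bluetooth serial module typically appears as a serial port, so the transmission period can be changed by sending a command line to the node. The sketch below uses pyserial; the port name and the "PERIOD" command protocol are hypothetical assumptions about the Arduino sketch, not the authors' actual interface.

```python
import serial  # pyserial; the Bluetooth module shows up as a serial port

# Hypothetical command protocol: the Arduino sketch is assumed to parse
# lines like "PERIOD 5000" and update its transmission interval (ms).
port = serial.Serial("/dev/rfcomm0", 9600, timeout=2)
port.write(b"PERIOD 5000\n")          # ask the node to report every 5 s

while True:
    line = port.readline().decode(errors="ignore").strip()
    if line:
        print("gas reading:", line)  # e.g. "CO2:412"
```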

Keywords: Arduino, Bluetooth, gas sensor, IoT, transmission

Procedia PDF Downloads 278
24448 Monitoring of Quantitative and Qualitative Changes in Combustible Material in the Białowieża Forest

Authors: Damian Czubak

Abstract:

The Białowieża Forest is a highly valuable natural area, inscribed on the UNESCO World Heritage List, where Norway spruce (Picea abies) stands have deteriorated due to infestation by the bark beetle (Ips typographus). This catastrophic scenario has increased the fire danger, owing to the large amounts of dead wood and the grass cover that developed as light penetrated to the bottom of the stands. When dry, these materials favour ignition and the rapid spread of fire. One objective of the study was to monitor the quantitative and qualitative changes of combustible material on permanent decay plots in spruce stands from 2012-2022. In addition, the area covered by highly flammable vegetation was monitored, and the stands of the Białowieża Forest were classified by flammability class. The key factor determining the potential fire hazard of a forest is the combustible material: primarily its type, quantity, moisture content, size and spatial structure. Based on inventory data for the forest districts of the Białowieża Forest, the average fire load and its changes over the years were calculated, taking into account changes in the health status of the stands and sanitary operations. The quantitative and qualitative assessment of fallen timber and the fire load of the ground cover used the results of the 2019 and 2021 inventories, drawing on approximately 9,000 circular plots. The amount of potential fuel, understood as ground-cover vegetation and dead wood debris, was assessed. Monitoring of areas with vegetation posing a high fire risk was also conducted using data from 2019 and 2021: all sub-areas were inventoried where such vegetation represented at least 10% of the area, together with the species characteristic of that cover. Besides the size of the area with fire-prone vegetation, a very important element is the fire load on the indicated plots; on representative plots, the biomass of the ground cover was measured on an area of 10 m2, and the biomass of each component was then determined. Finally, the ground covers in the stands were classified by flammability, and the classification developed made it possible to track changes in the flammability classes of stands over the measurement period.

Keywords: classification, combustible material, flammable vegetation, Norway spruce

Procedia PDF Downloads 93
24447 Real-Time Sensor Fusion for Mobile Robot Localization in an Oil and Gas Refinery

Authors: Adewole A. Ayoade, Marshall R. Sweatt, John P. H. Steele, Qi Han, Khaled Al-Wahedi, Hamad Karki, William A. Yearsley

Abstract:

Understanding the behavioral characteristics of sensors is a crucial step in fusing data from several sensors of different types. This paper introduces a practical, real-time approach to integrating heterogeneous sensor data to achieve higher localization accuracy for a mobile robot than would be possible with any individual sensor. We use this approach in both indoor and outdoor environments, and it is especially appropriate for environments like oil and gas refineries due to their sparse and featureless nature. We have studied the individual contribution of each sensor's data to the overall accuracy achieved by the fusion process. A sequential-update Extended Kalman Filter (EKF) with validation gates was used to integrate GPS data, compass data, WiFi data, Inertial Measurement Unit (IMU) data, vehicle velocity, and pose estimates from a fiducial marker system. Results show that the approach can enable a mobile robot to navigate autonomously in any environment using a priori information.
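
A minimal sketch of a sequential-update EKF correction with a chi-square validation gate is shown below: each scalar sensor channel is fused one at a time, and readings whose normalized innovation exceeds the gate are rejected. The state model and measurement values are toy assumptions, not the robot's actual configuration.

```python
import numpy as np

def sequential_update(x, P, channels, gate=9.0):
    """One EKF correction pass, fusing scalar sensor channels one at a time.

    x, P: state mean and covariance after the prediction step.
    channels: list of (z, H, R) tuples, one per scalar measurement
              (e.g. GPS x/y, compass heading, a WiFi range, ...).
    gate: chi-square validation gate; gated-out readings are skipped.
    """
    for z, H, R in channels:
        H = np.atleast_2d(H)
        innov = float(z - H @ x)              # innovation
        S = float(H @ P @ H.T) + R            # innovation variance
        if innov ** 2 / S > gate:             # validation gate test
            continue                          # reject inconsistent reading
        K = (P @ H.T / S).ravel()             # Kalman gain
        x = x + K * innov
        P = P - np.outer(K, (H @ P).ravel())  # sequential: the next channel
    return x, P                               # sees the updated x and P

# Toy 2-D position state corrected by two hypothetical scalar channels
x, P = np.zeros(2), np.eye(2)
channels = [(1.2, np.array([1.0, 0.0]), 0.5),   # GPS-like x reading
            (0.9, np.array([0.0, 1.0]), 0.5)]   # GPS-like y reading
x, P = sequential_update(x, P, channels)
print(x, np.diag(P))
```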

Keywords: inspection mobile robot, navigation, sensor fusion, sequential update extended Kalman filter

Procedia PDF Downloads 471
24446 A Unified Deep Framework for Joint 3d Pose Estimation and Action Recognition from a Single Color Camera

Authors: Huy Hieu Pham, Houssam Salmane, Louahdi Khoudour, Alain Crouzil, Pablo Zegers, Sergio Velastin

Abstract:

We present a deep learning-based multitask framework for joint 3D human pose estimation and action recognition from color video sequences. Our approach proceeds in two stages. In the first, we run a real-time 2D pose detector to determine the precise pixel locations of important key points of the body; a two-stream neural network is then designed and trained to map the detected 2D keypoints into 3D poses. In the second, we deploy the Efficient Neural Architecture Search (ENAS) algorithm to find an optimal network architecture that models the spatio-temporal evolution of the estimated 3D poses via an image-based intermediate representation and performs action recognition. Experiments on the Human3.6M, Microsoft Research Redmond (MSR) Action3D, and Stony Brook University (SBU) Kinect Interaction datasets verify the effectiveness of the proposed method on the targeted tasks. Moreover, we show that our method requires a low computational budget for training and inference.
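
The sketch below shows one plausible shape for the first-stage 2D-to-3D lifting network in Keras: two streams (raw keypoints and their temporal differences, an assumed design) are encoded separately and merged to regress per-frame 3D joints. The layer sizes are illustrative, and the ENAS-searched action-recognition network of the second stage is not reproduced.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

n_joints, seq_len = 17, 32  # assumed keypoint count and clip length

pts = layers.Input((seq_len, n_joints * 2), name="keypoints")    # stream 1
vel = layers.Input((seq_len, n_joints * 2), name="velocities")   # stream 2

def stream(x):
    x = layers.Conv1D(64, 3, padding="same", activation="relu")(x)
    return layers.LSTM(64, return_sequences=True)(x)

h = layers.concatenate([stream(pts), stream(vel)])
out = layers.TimeDistributed(layers.Dense(n_joints * 3),
                             name="pose3d")(h)   # per-frame 3-D joints

model = Model([pts, vel], out)
model.compile(optimizer="adam", loss="mse")      # trained on 2D-3D pairs
model.summary()
```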

Keywords: human action recognition, pose estimation, D-CNN, deep learning

Procedia PDF Downloads 146
24445 Energy Efficient Massive Data Dissemination Through Vehicle Mobility in Smart Cities

Authors: Salman Naseer

Abstract:

One of the main challenges of operating a smart city (SC) is collecting the massive data generated from multiple data sources (DS) and transmitting them to the control units (CU) for further processing and analysis. These ever-increasing data demands require not only more and more capacity on the transmission channels but also result in resource over-provisioning to meet resilience requirements, and thus unavoidable waste due to data fluctuations throughout the day. In addition, the high energy consumption (EC) and carbon discharges of these data transmissions pose serious issues for the environment we live in. Therefore, to overcome the intensive EC and carbon emissions (CE) of massive data dissemination in smart cities, we propose an energy-efficient, carbon-reducing approach that utilizes the daily mobility of existing vehicles as an alternative communication channel for data dissemination. To illustrate the effectiveness and efficiency of our approach, we take Auckland City in New Zealand as an example, assuming massive data generated by various sources geographically scattered throughout the Auckland region and destined for control centres located in the city centre. The numerical results show that, for large volumes of data, our proposed approach can provide up to 5 times lower delay than the conventional transmission network by utilizing existing daily vehicle mobility. Moreover, it offers about 30% less EC and CE than the conventional network transmission approach.

Keywords: smart city, delay tolerant network, infrastructure offloading, opportunistic network, vehicular mobility, energy consumption, carbon emission

Procedia PDF Downloads 142
24444 The Concentration of Natural Alpha Emitters Radionuclides in Fish and Their Contribution to the Internal Dose

Authors: Wagner Pereira, Alphonse Kelecom

Abstract:

Mining can impact the environment, and the major impact of some mining activities is radiological. In human populations, such impact is well studied and regulated; for biota, the assessment has traditionally focused on protecting the human food chain, while the protection of biota itself is a new, still developing approach. To contribute to this new approach, fish were collected in an area of naturally occurring radioactive materials (NORM) where a uranium mine is in the decommissioning phase. Activity concentrations were analyzed, in Bq/kg wet weight, for uranium (Unat), Th-232 and Ra-226 in the lambari fish Astyanax bimaculatus L. (omnivorous) and in the traíra fish Hoplias malabaricus Bloch, 1794 (carnivorous). Seven composite samples (that is, enough individuals to reach at least 2 kg of fresh weight) were collected every six months between 2013 and 2015. The mean activity concentrations (AC) for uranium ranged from 1.12 (lambari) to 0.60 (traíra); for Th-232, from 0.30 to 0.05 (lambari and traíra, respectively); and for Ra-226, between 0.08 and 0.03. No temporal accumulation trends could be identified. The AC values of the radionuclides were systematically higher in the omnivorous fish than in the carnivorous one.

Keywords: biota dose, NORM, fish, environmental protection

Procedia PDF Downloads 258
24443 The Findings of EEG-LORETA in Epilepsy

Authors: Leila Maleki, Ahmad Esmali Kooraneh, Hossein Taghi Derakhshi

Abstract:

Neural activity in the human brain starts from the early stages of prenatal development. This activity, or the signals generated by the brain, is electrical in nature and represents not only brain function but also the status of the whole body. At present, three methods can record functional and physiological changes within the brain with high temporal resolution of neuronal interactions at the network level: the electroencephalogram (EEG), the magnetoencephalogram (MEG), and functional magnetic resonance imaging (fMRI); each of these has advantages and shortcomings. EEG recording with a large number of electrodes is now feasible in clinical practice. Multichannel EEG recorded from the scalp surface provides very valuable but indirect information about the source distribution; deep electrode measurements yield more reliable information about source locations. Intracranial recordings and scalp EEG are used with source imaging techniques to determine the locations and strengths of epileptic activity. As a source localization method, Low Resolution Electro-Magnetic Tomography (LORETA) is solved for realistic geometry based on both forward methods, the Boundary Element Method (BEM) and the Finite Difference Method (FDM). In this paper, we review the findings of EEG-LORETA in epilepsy.

Keywords: epilepsy, EEG, EEG-LORETA

Procedia PDF Downloads 545
24442 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm

Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy

Abstract:

IoT networks today solve various consumer problems, from home automation to aiding autonomous vehicles, through the deployment of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available along roads to monitor weather and road conditions and interact with each other to help the vehicle reach its destination safely and on time. IoT systems are predominantly dependent on the cloud for their storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they may also store sensitive information required by certain applications. Data management in fog nodes is strenuous because fog networks are dynamic in their availability and hardware capability; it becomes more challenging when the nodes have short lifespans, detaching from and joining the network frequently. When an end user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary to access and manage the data, because a conventional static way of managing the data does not work in fog networks. The proposed solution is a protocol that defines sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in a fog node by applying reinforcement learning, so that access to the data is determined dynamically based on the requests.
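
As a toy illustration of reinforcement-learning-driven access control, the bandit-style Q-learning sketch below scores allow/deny actions per (sensitivity level, requester role) pair; the state encoding and reward signal are illustrative assumptions, not the paper's protocol.

```python
import random

# Tabular Q-values over (sensitivity, requester_role) -> allow/deny
actions = ["allow", "deny"]
q = {}

def decide(sensitivity, role, eps=0.1):
    key = (sensitivity, role)
    q.setdefault(key, {a: 0.0 for a in actions})
    if random.random() < eps:               # explore occasionally
        return random.choice(actions)
    return max(q[key], key=q[key].get)      # otherwise exploit

def learn(sensitivity, role, action, reward, alpha=0.5):
    key = (sensitivity, role)
    q[key][action] += alpha * (reward - q[key][action])

# One simulated request: a low-trust node asks for highly sensitive data
a = decide("high", "untrusted-node")
learn("high", "untrusted-node", a, reward=-1.0 if a == "allow" else 1.0)
print(q)
```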

Keywords: IoT, fog networks, data stewardship, dynamic access policy

Procedia PDF Downloads 59
24441 An Automated Approach to Consolidate Galileo System Availability

Authors: Marie Bieber, Fabrice Cosson, Olivier Schmitt

Abstract:

Europe's global navigation satellite system, Galileo, provides worldwide positioning and navigation services. The satellites in space are only one part of the Galileo system; an extensive ground infrastructure is essential to oversee the satellites and ensure accurate navigation signals. High reliability and availability of the entire Galileo system are crucial to continuously provide high-quality positioning information to users. Outages are tracked, and operational availability is regularly assessed. A highly flexible and adaptive tool has been developed to automate the Galileo system availability analysis. Not only does it enable quick availability consolidation, but it also provides first steps towards improving the data quality of the maintenance tickets used for the analysis. This includes data import and data preparation, with a focus on processing the strings used for classification and on identifying faulty data. Furthermore, the tool can handle small amounts of data, a major constraint when the aim is to provide accurate statistics.
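
A minimal pandas sketch of the consolidation idea, on hypothetical maintenance tickets: string fields are normalized, records that fail a simple validity check are flagged as faulty, and availability is computed from the remaining outage durations. The ticket schema and the 90-day window are assumptions.

```python
import pandas as pd

# Hypothetical maintenance tickets; free-text fields are normalized before
# classification, mirroring the data-preparation step described above
tickets = pd.DataFrame({
    "element": ["ULS-A ", "uls-a", "GSS-3", "??"],
    "start": pd.to_datetime(["2023-01-02 04:00", "2023-01-10 00:00",
                             "2023-02-01 12:00", "2023-03-01 00:00"]),
    "end":   pd.to_datetime(["2023-01-02 10:00", "2023-01-10 06:00",
                             "2023-02-02 12:00", "2023-03-01 01:00"]),
})

tickets["element"] = tickets["element"].str.strip().str.upper()
bad = ~tickets["element"].str.match(r"^[A-Z]+-\w+$")   # flag faulty records
clean = tickets[~bad]

outage_h = (clean["end"] - clean["start"]).dt.total_seconds().sum() / 3600
period_h = 90 * 24                                     # 90-day window
print(f"availability: {100 * (1 - outage_h / period_h):.2f} %")
```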

Keywords: availability, data quality, system performance, Galileo, aerospace

Procedia PDF Downloads 167
24440 Experimental Study of Local Scour Downstream of Cylindrical Bridge Piers

Authors: Mohammed Traeq Shukri

Abstract:

Scour is a natural phenomenon caused by the erosive action of flowing streams on alluvial beds, removing the sediment around or near structures located in flowing water. It lowers the riverbed level to the point where the foundations of a structure tend to become exposed, as flowing water excavates and carries away material from the bed and banks of streams and from around bridge piers. The failure of bridges due to excessive local scour during floods poses a challenging problem for hydraulic engineers; bridge piers fail for many reasons, such as localized scour combined with general riverbed degradation. In this paper, we estimate the temporal variation of scour depth at non-uniform cylindrical bridge piers through experimental work in the civil engineering hydraulics laboratory of Gaziantep University, in a channel 8.3 m long, 0.8 m wide and 0.9 m deep. The experiments will be carried out on a 20 cm deep sediment layer with d50 = 0.4 mm, and three scaled pier models of different shapes will be constructed in a 1.5 m test section of the channel.

Keywords: scour, local scour, bridge piers, scour depth, vortex, horseshoe vortex

Procedia PDF Downloads 164
24439 Use of In-line Data Analytics and Empirical Model for Early Fault Detection

Authors: Hyun-Woo Cho

Abstract:

Automatic process monitoring schemes are designed to give early warnings of unusual process events or abnormalities as soon as possible. To this end, various techniques have been developed and utilized in industrial processes, including multivariate statistical methods, representation techniques in reduced spaces, and kernel-based nonlinear techniques. This work presents a nonlinear empirical monitoring scheme for batch-type production processes with incomplete process measurement data. While normal operation data are easy to obtain, fault data occur infrequently and are thus difficult to collect. In this work, noise filtering steps are added to enhance monitoring performance by eliminating irrelevant information from the data. The performance of the monitoring scheme was demonstrated using batch process data, and the results showed that monitoring performance improved significantly in terms of the fault detection success rate.
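
The sketch below shows one common nonlinear empirical monitoring recipe consistent with the description above, though not necessarily the authors' exact method: a kernel PCA model is fitted on normal batches, and new batches are flagged when their reconstruction error exceeds an empirical control limit. The data are synthetic.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 10))             # normal-operation batches
faulty = normal + rng.normal(3, 1, (200, 10))   # shifted, "faulty" batches

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1,
                 fit_inverse_transform=True).fit(normal)

def spe(X):
    """Squared prediction error of the kernel model in the original space."""
    return ((X - kpca.inverse_transform(kpca.transform(X))) ** 2).sum(axis=1)

limit = np.percentile(spe(normal), 99)          # empirical control limit
print("fault detection rate:", float((spe(faulty) > limit).mean()))
```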

Keywords: batch process, monitoring, measurement, kernel method

Procedia PDF Downloads 323
24438 Periurban Landscape as an Opportunity Field to Solve Ecological Urban Conflicts

Authors: Cristina Galiana Carballo, Ibon Doval Martínez

Abstract:

Urban boundaries often form a controversial limit between countryside and city in Europe. This territory is normally characterized by very limited land uses and an abundance of open space. The dimension and dynamics of peri-urbanization in recent decades have increased this land stock, affecting economic costs (maintenance, transport), ecological disturbance of the territory and inhabitants' behaviour. In an increasingly urbanised world with a growing urban population, cities also face challenges such as climate change. In this context, near-future corrective trends, including circular economies for local food supply and decentralised waste management, become key strategies towards more sustainable urban models. These new solutions need to be planned and implemented considering potential conflicts with current land uses. The city of Vitoria-Gasteiz (Basque Country, Spain) has tripled its land consumption per inhabitant in 10 years, resulting in a vast extension of low-density urban fabric confronting rural land and threatening agricultural uses, landscape and urban sustainability. Urban planning allows land to be managed and allocated to its optimum use based on soil vocation and socio-ecosystem needs, while peri-urban space arises as an opportunity for developing uses which fit neither within the compact city nor on open agricultural land, such as medium-size agro-composting systems or biomass plants. Therefore, a qualitative multi-criteria methodology has been developed for the city of Vitoria-Gasteiz to assess the spatial definition of peri-urban land. Climate change and the circular economy were the frameworks used to determine future land needs, soil vocation and urban planning requirements, which eventually become estimates of the required local food and renewable energy supply, along with the implementation of alternative waste management systems. On this basis, an urban planning proposal was developed that overcomes the urban/non-urban dichotomy in Vitoria-Gasteiz. The proposal aims to enhance the rural system and improve urban sustainability performance through the normative recognition of an agricultural peri-urban belt.

Keywords: landscape ecology, land-use management, periurban, urban planning

Procedia PDF Downloads 163
24437 The Impact of the General Data Protection Regulation on Human Resources Management in Schools

Authors: Alexandra Aslanidou

Abstract:

The General Data Protection Regulation (GDPR), concerning the protection of natural persons within the European Union with regard to the processing of personal data and the free movement of such data, became applicable in the European Union (EU) on 25 May 2018 and transformed the way personal data were treated under the Data Protection Directive (DPD) regime, generating sweeping organizational changes in both the public sector and business. A social practice considerably influenced in its day-to-day operations is human resource (HR) management, for which the importance of the GDPR cannot be underestimated, because HR processes personal data coming in all shapes and sizes from many different systems and sources. The proper functioning of an HR department is decisive in human-centered, service-oriented environments such as education: HR operations in schools, conducted effectively, determine the quality of the services provided and consequently have a considerable impact on the success of the educational system. The purpose of this paper is to analyze the decisive role that the GDPR plays in HR departments operating in schools and, in order to evaluate the practical aftermath of the Regulation during the first months of its applicability, a comparative use-case analysis was attempted in five highly dynamic schools across three EU Member States.

Keywords: general data protection regulation, human resource management, educational system

Procedia PDF Downloads 100
24436 Solar-Powered Adsorption Cooling System: A Case Study on the Climatic Conditions of Al Minya

Authors: El-Sadek H. Nour El-deen, K. Harby

Abstract:

Energy saving and environmentally friendly applications are among the most important topics nowadays. In this work, a simulation analysis using the TRNSYS software was carried out to study the benefit of employing a solar adsorption cooling system under the climatic conditions of Al-Minya city, Egypt. A theoretical model of a two-bed adsorption cooling system employing granular activated carbon-HFC-404A as the working pair was developed. The temporal and averaged histories of the solar collector, adsorbent beds, evaporator and condenser are presented, and system performance in terms of daily average cooling capacity and average coefficient of performance (COP) over the year is investigated. The results showed that the maximum yearly average COP and cooling capacity are about 0.26 and 8 kW, respectively. The maxima of both the average cooling capacity and the cyclic COP are directly proportional to the maximum solar radiation, and system performance was found to increase with the average ambient temperature. The proposed solar-powered adsorption cooling system can therefore be used effectively under Al-Minya climatic conditions.
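
The reported figures can be related through the basic thermal COP definition, as in the toy calculation below; the daily heat quantities are hypothetical values chosen only to match the order of magnitude reported above.

```python
# Cyclic COP of an adsorption chiller from integrated daily heat flows
q_evap_kwh = 48.0    # cooling delivered by the evaporator over a day
q_drive_kwh = 185.0  # solar heat supplied to the adsorbent beds

cop = q_evap_kwh / q_drive_kwh            # thermal COP = Q_evap / Q_drive
cooling_capacity_kw = q_evap_kwh / 6.0    # averaged over 6 sunny hours
print(f"COP = {cop:.2f}, average cooling capacity = "
      f"{cooling_capacity_kw:.1f} kW")
```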

Keywords: adsorption, cooling, Egypt, environment, solar energy

Procedia PDF Downloads 160
24435 A Hybrid Data-Handler Module Based Approach for Prioritization in Quality Function Deployment

Authors: P. Venu, Joeju M. Issac

Abstract:

Quality Function Deployment (QFD) is a systematic technique that creates a platform where customer responses can be positively converted into design attributes. The accuracy of a QFD process depends heavily on the data it handles, which are captured from customers or QFD team members. Customized computer programs that perform QFD within a stipulated time have been used by companies across the globe. These programs rely heavily on the storage and retrieval of data in a common database, which must act as a reliable source with minimal missing or erroneous values in order to perform the actual prioritization. This paper introduces a missing/error data handler module that uses a genetic algorithm and fuzzy numbers. The prioritization of customer requirements for sesame oil is illustrated, and a comparison is made between deployment based on the proposed data handler module and manual deployment.
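
A toy version of the missing-data handler is sketched below: a small genetic algorithm searches for values of the missing relationship-matrix cells under a simple row/column-consistency objective. Both the objective and the GA settings are illustrative assumptions, and the paper's fuzzy-number component is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

# QFD relationship matrix (customer needs x design attributes), 1-9 scale;
# np.nan marks responses that are missing or were flagged as erroneous
R = np.array([[9, 3, np.nan, 1],
              [3, np.nan, 9, 3],
              [1, 3, 3, np.nan]], float)
holes = np.argwhere(np.isnan(R))

def fitness(vals):
    """Toy objective: imputed cells should agree with row/column averages."""
    M = R.copy()
    M[tuple(holes.T)] = vals
    err = 0.0
    for (i, j), v in zip(holes, vals):
        target = (np.nanmean(M[i, :]) + np.nanmean(M[:, j])) / 2
        err += (v - target) ** 2
    return -err

# Minimal genetic algorithm: truncation selection plus Gaussian mutation
pop = rng.integers(1, 10, size=(40, len(holes))).astype(float)
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the best 10
    children = (parents[rng.integers(0, 10, 40)]
                + rng.normal(0, 1, (40, len(holes))))
    pop = np.clip(np.round(children), 1, 9)

best = pop[np.argmax([fitness(p) for p in pop])]
print("imputed values:", best)
```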

Keywords: hybrid data handler, QFD, prioritization, module-based deployment

Procedia PDF Downloads 297
24434 Energy Refurbishment of University Building in Cold Italian Climate: Energy Audit and Performance Optimization

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

Directive 2010/31/EC, the 'Directive of the European Parliament and of the Council of 19 May 2010 on the energy performance of buildings', moved beyond the targets of its previous version toward more ambitious ones, for instance by establishing that, by 31 December 2020, all new buildings should demand nearly zero energy. Moreover, the demonstrative role of public buildings is strongly affirmed, so that for them the nearly zero-energy target is anticipated, in January 2019. On the other hand, given the very low turnover rate of buildings (in Europe, between 1-3% yearly), a policy that does not consider the renovation of the existing building stock cannot be effective in the short and medium term. Accordingly, this study provides a novel, holistic approach to designing the refurbishment of educational buildings in the colder cities of Mediterranean regions, enabling stakeholders to understand the uncertainty involved in numerical modelling and the real environmental and economic impacts of adopting energy efficiency technologies. The case study is a university building in the Molise region, in the centre of Italy. The proposed approach applies the cost-optimal methodology set out in Delegated Regulation 244/2012 and the Guidelines of the European Commission, evaluating the cost-optimal level of energy performance with a macroeconomic approach. This means that the refurbishment scenario should correspond to the configuration with the lowest global cost over the estimated economic life cycle, taking into account not only the investment cost but also the operational costs linked to energy consumption and polluting emissions. The definition of the reference building was supported by various in-situ surveys, investigations and evaluations of indoor comfort. Data collection can be divided into five categories: 1) geometrical features; 2) building envelope audit; 3) technical system and equipment characterization; 4) building use and thermal zone definition; 5) building energy data. For each category, the required measurements are indicated, with suggestions for identifying the spatial distribution and timing of the measurements. For the case study, the collected data, together with a comparison against energy bills, allowed proper calibration of a numerical model suitable for hourly energy simulation in EnergyPlus. About 30 energy efficiency measures/packages were taken into account, covering both the envelope and the plant systems. Starting from the results, two points are examined exhaustively: (i) the importance of using validated models to simulate the present performance of the building under investigation; and (ii) the environmental benefits and economic implications of a deep energy refurbishment of educational buildings in cold climates.
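
The macroeconomic global-cost comparison at the heart of the cost-optimal methodology can be sketched as below: each retrofit package is scored by its investment plus discounted running costs over the calculation period (residual value omitted for brevity). All package figures and the discount rate are hypothetical.

```python
def global_cost(investment, annual_energy_cost, annual_co2_cost,
                years=30, discount_rate=0.03):
    """EU 244/2012-style global cost: investment + discounted running costs."""
    disc = sum(1 / (1 + discount_rate) ** t for t in range(1, years + 1))
    return investment + (annual_energy_cost + annual_co2_cost) * disc

# Hypothetical retrofit packages: (investment EUR, energy EUR/yr, CO2 EUR/yr)
packages = {"baseline":      (0,       90_000, 12_000),
            "envelope":      (450_000, 55_000, 7_000),
            "envelope+HVAC": (800_000, 38_000, 4_500)}

# The cost-optimal scenario is the one with the lowest global cost
for name, p in packages.items():
    print(f"{name:>15}: {global_cost(*p):,.0f} EUR")
```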

Keywords: energy simulation, modelling calibration, cost-optimal retrofit, university building

Procedia PDF Downloads 178
24433 Possibilities to Evaluate the Climatic and Meteorological Potential for Viticulture in Poland: The Case Study of the Jagiellonian University Vineyard

Authors: Oskar Sekowski

Abstract:

Current global warming is shifting the traditional zones of viticulture worldwide. During the 20th century, the average global air temperature increased by 0.89˚C, and climate change models indicate that viticulture, currently concentrated in narrow geographic niches, may move towards the poles, to higher latitudes. There is therefore a need to assess the climatic conditions and climate change in areas not traditionally associated with viticulture, e.g., Poland. The primary objective of this paper is to develop a methodology for evaluating the climatic and meteorological potential for viticulture in Poland, based on a case study; an additional aim is to evaluate the climatic potential of the mesoregion where a university vineyard is located. Daily data on temperature, precipitation, insolation and wind speed (1988-2018) from the meteorological station in Łazy, southern Poland, were used to evaluate 15 climatological parameters and indices connected with viticulture. The next steps of the methodology are based on Geographic Information System methods: topographical factors such as slope gradient and slope exposure were derived from digital elevation models, the spatial distribution of climatological elements was interpolated by ordinary kriging, and the values of each factor and index were ranked and classified. The viticultural potential was determined by integrating the two suitability maps, i.e., the topographical and climatic ones, and calculating the average for each pixel. Data analysis shows significant changes in heat accumulation indices, driven by increases in maximum temperature, mostly an increasing number of days with Tmax > 30˚C. The climatic conditions of this mesoregion are sufficient for Vitis vinifera viticulture; the values of the indices and the insolation are similar to those in known wine regions located at similar latitudes in Europe. The smallest threat to viticulture in the study area is hail; the greatest is the occurrence of winter frost. This research provides the basis for evaluating the general suitability and climatic potential for viticulture in Poland. Characterizing that potential requires assessing all the climatological and topographical factors that can influence viticulture. The methodology used in this case study identifies places where vineyards could be established and may also help winemakers select grape varieties.
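
Two of the computational steps lend themselves to a short sketch: a Winkler-style growing-degree-day index (assumed here to be among the 15 indices evaluated) and the final pixel-wise averaging of the ranked topographic and climatic suitability maps. All data below are hypothetical.

```python
import numpy as np

def winkler_gdd(tmax, tmin, base=10.0):
    """Growing degree-days over the growing season (Winkler-style index).

    tmax, tmin: daily series in degrees C for the season.
    """
    tmean = (np.asarray(tmax) + np.asarray(tmin)) / 2.0
    return np.sum(np.clip(tmean - base, 0.0, None))

# Hypothetical 214-day (Apr-Oct) season at the Lazy station
rng = np.random.default_rng(1)
tmax = rng.normal(20, 6, 214)
tmin = tmax - rng.uniform(4, 10, 214)
print(f"GDD: {winkler_gdd(tmax, tmin):.0f} degree-days")

# Integration step from the text: average the ranked topographic and
# climatic suitability rasters pixel by pixel (toy 2x3 grids, scores 0-5)
topo = np.array([[3, 4, 1], [5, 2, 0]], float)
clim = np.array([[4, 4, 2], [3, 1, 1]], float)
potential = (topo + clim) / 2.0
print(potential)
```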

Keywords: climatologic potential, climatic classification, Poland, viticulture

Procedia PDF Downloads 106
24432 Predicting Groundwater Areas Using Data Mining Techniques: Groundwater in Jordan as Case Study

Authors: Faisal Aburub, Wael Hadi

Abstract:

Data mining is the process of extracting useful or hidden information from a large database. The extracted information can be used to discover relationships among features, where data objects are grouped according to logical relationships, or to assign unseen objects to one of the predefined groups. In this paper, we investigate four well-known data mining algorithms for predicting groundwater areas in Jordan: Support Vector Machines (SVMs), Naïve Bayes (NB), K-Nearest Neighbor (kNN) and Classification Based on Association Rule (CBA). The experimental results indicate that the SVM algorithm outperformed the other algorithms in terms of classification accuracy, precision and F1 measure on the datasets of groundwater areas collected from the Jordanian Ministry of Water and Irrigation.
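
A scikit-learn sketch of the comparison for three of the four algorithms is shown below (CBA has no standard scikit-learn implementation and is omitted); a synthetic dataset stands in for the ministry data used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the groundwater-area dataset
X, y = make_classification(n_samples=400, n_features=8, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("NB", GaussianNB()),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean accuracy = {acc:.3f}")
```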

Keywords: classification, data mining, evaluation measures, groundwater

Procedia PDF Downloads 280
24431 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing

Authors: Tolulope Aremu

Abstract:

This paper explores the use of deep learning to optimize production yield by tuning a few key process parameters in a manufacturing environment. The study focuses on maximizing production yield and minimizing operational costs using advanced neural network models, specifically Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs), implemented with the Python-based frameworks TensorFlow and Keras. The target is a precision molding process in which temperature ranges between 150°C and 220°C, pressure between 5 and 15 bar, and material flow rate between 10 and 50 kg/h; these are critical parameters with a strong effect on yield. A dataset of 1 million production cycles over five continuous years was considered, with detailed logs of the exact parameter settings and yield outputs. The LSTM model captures time-dependent trends in the production data, while the CNN analyzes spatial correlations between parameters. The models were trained in a supervised manner with a mean-squared-error (MSE) loss, optimized with the Adam optimizer. After 100 training epochs, the models recommended optimal parameter configurations with 95% accuracy. Compared with traditional RSM and DOE methods, production yield increased by 12%, the error margin was reduced by 8%, and the deep learning models yielded consistently high product quality. The annual monetary value was around $2.5 million, saved from material waste, energy consumption and equipment wear as a result of the optimized process parameters. The system was deployed in an industrial production environment on a hybrid cloud: Microsoft Azure for data storage, and Google Cloud AI for model training and deployment, with real-time process monitoring and automatic parameter tuning relying on this cloud infrastructure. In short, deep learning models, especially those employing LSTMs and CNNs, optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning with a view to further enhancing system autonomy and scalability across various manufacturing sectors.
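
A minimal Keras sketch of the LSTM branch is shown below, trained on a synthetic stand-in for the production logs; the real dataset, the CNN branch, and the cloud deployment are not reproduced. Parameter ranges follow the molding process described above, while the yield signal is a toy function.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

rng = np.random.default_rng(0)

# Synthetic stand-in for the production log: windows of 24 consecutive
# (temperature, pressure, flow-rate) settings and a toy yield signal
X = rng.uniform([150.0, 5.0, 10.0], [220.0, 15.0, 50.0], (1000, 24, 3))
y = X[:, -1, 0] / 220.0 - np.abs(X[:, -1, 1] - 10.0) / 20.0

model = tf.keras.Sequential([
    layers.Input((24, 3)),
    layers.LSTM(32),                      # captures time-dependent trends
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                      # predicted yield of the next cycle
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```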

Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving

Procedia PDF Downloads 31
24430 Jurisdictional Issues between Competition Law and Data Protection Law in Protection of Privacy of Online Consumers

Authors: Pankhudi Khandelwal

Abstract:

The revenue models of digital giants such as Facebook and Google rely on targeted advertising, which requires huge amounts of consumer data. While data protection law deals with the protection of personal data, this data is acquired by the companies on the basis of consent, performance of a contract, or legitimate interests. This paper analyses the role that competition law can play in closing these loopholes to protect the data and privacy of online consumers. Digital markets have distinctive features, such as network effects and feedback loops, which give incumbents a first-mover advantage. This creates a winner-takes-all situation, raising entry barriers and concentration in the market, and the resulting dominant position is then used by undertakings for leveraging into other markets. This can harm consumers in the form of reduced privacy, less choice, and stifled innovation, as seen in the Facebook Cambridge Analytica, Google Shopping, and Google Android cases. The article therefore aims to provide a legal framework in which data protection law and competition law come together to regulate digital markets in a balanced way. The issue has become more relevant in light of the Facebook decision of the German competition authority, which held that Facebook had abused its dominant position by not complying with data protection rules, constituting an exploitative practice. The paper examines the jurisdictional boundaries within which data protection and competition authorities can work and suggests ex ante regulation through data protection law and ex post regulation through competition law. It further suggests a change to the consumer welfare standard, in which harm to privacy should be considered an indicator of low quality.

Keywords: data protection, dominance, ex ante regulation, ex post regulation

Procedia PDF Downloads 183
24429 Application of Knowledge Discovery in Database Techniques in Cost Overruns of Construction Projects

Authors: Mai Ghazal, Ahmed Hammad

Abstract:

Cost overruns in construction projects are a worldwide challenge, since cost performance is one of the main measures of success alongside schedule performance. To address this problem, studies have investigated the factors behind cost overruns, and projects' historical data have been analyzed to extract new and useful knowledge. This research studies and analyzes the effect of selected factors causing cost overruns, using historical data from completed construction projects, and then uses these factors to estimate the probability of a cost overrun occurring and to predict its percentage for future projects. First, an intensive literature review was conducted on the factors that cause cost overruns in construction projects, followed by a review of previous research on data mining approaches to cost overruns. Second, a data warehouse was designed that organizations can use to store their future project data in a well-organized way for later analysis. Third, twelve quantitative factors whose data are frequently available in construction projects were selected as the analyzed factors and suggested predictors for the proposed model.
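
One plausible realization of the two prediction tasks (the paper does not fix a specific model) is sketched below: a logistic regression estimates the probability that an overrun occurs, and a linear regression predicts its percentage, both from the twelve quantitative factors. The data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(7)

# Synthetic stand-in: twelve quantitative project factors per row
X = rng.normal(size=(300, 12))
overrun_pct = 5 + X[:, 0] * 4 + X[:, 3] * 2 + rng.normal(0, 3, 300)
occurred = (overrun_pct > 5).astype(int)

# Probability that a cost overrun occurs ...
p_model = LogisticRegression(max_iter=1000).fit(X, occurred)
# ... and its expected percentage when it does
m_model = LinearRegression().fit(X[occurred == 1], overrun_pct[occurred == 1])

new_project = rng.normal(size=(1, 12))
print("P(overrun) =", round(p_model.predict_proba(new_project)[0, 1], 2))
print("expected overrun % =", round(m_model.predict(new_project)[0], 1))
```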

Keywords: construction management, construction projects, cost overrun, cost performance, data mining, data warehousing, knowledge discovery, knowledge management

Procedia PDF Downloads 370
24428 Sampling Error and Its Implication for Capture Fisheries Management in Ghana

Authors: Temiloluwa J. Akinyemi, Denis W. Aheto, Wisdom Akpalu

Abstract:

Capture fisheries in developing countries provide significant animal protein and directly support the livelihoods of many communities. However, misperception of biophysical dynamics owing to a lack of adequate scientific data has contributed to suboptimal management of marine capture fisheries, because estimates of yield and catch potential are sensitive to the quality of catch and effort data. Yet studies on fisheries data collection practices in developing countries are hard to find. This study investigates the data collection methods used by fisheries technical officers in the four fishing regions of Ghana. We found that the officers employed data collection and sampling procedures that were not consistent with the technical guidelines curated by the FAO: for example, 50 instead of 166 landing sites were sampled, and 290 instead of 372 canoes. We argue that such sampling errors could result in the over-capitalization of capture fish stocks and significant losses in resource rents.

Keywords: fisheries data quality, fisheries management, Ghana, sustainable fisheries

Procedia PDF Downloads 92
24427 Improvement of Data Transfer over Simple Object Access Protocol (SOAP)

Authors: Khaled Ahmed Kadouh, Kamal Ali Albashiri

Abstract:

This paper presents an algorithm that improves data transfer over the Simple Object Access Protocol (SOAP). The aim of this work is to establish whether using SOAP to exchange XML messages has any added advantages. The results showed that XML messages without SOAP take longer and consume more memory, especially with binary data.
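
The binary-data effect reported above can be seen from message sizes alone: plain XML must carry binary content as base64 text, inflating it by roughly a third, which is the kind of overhead that SOAP-based mechanisms with attachment support can avoid. A minimal sketch:

```python
import base64

binary = bytes(range(256)) * 400   # 102,400 bytes of binary payload

# Plain XML must embed binary as base64 text, inflating it by ~33%
as_xml = "<blob>" + base64.b64encode(binary).decode("ascii") + "</blob>"

print(len(binary), len(as_xml))    # 102400 vs. roughly 136,500 bytes
```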

Keywords: JAX-WS, SMTP, SOAP, web service, XML

Procedia PDF Downloads 495
24426 Enhancing Healthcare Data Protection and Security

Authors: Joseph Udofia, Isaac Olufadewa

Abstract:

Every day, the volume of electronic health record data keeps increasing as new patients visit health practitioners and returning patients fulfil their appointments. As these data grow, so does their susceptibility to cyberattacks from criminals waiting to exploit them. In the US, the damages from cyberattacks were estimated at $8 billion (2018), $11.5 billion (2019) and $20 billion (2021). These attacks usually involve the exposure of personally identifiable information (PII); health data is considered PII, and its exposure carries significant impact. To this end, an enhancement of health policy and standards relating to data security, especially between patients and their clinical providers, is critical to ensure ethical practice, confidentiality, and trust in the healthcare system. As clinical accelerators and applications that contain user data come into use, it is expedient to review and revamp policies such as the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), and the Fast Healthcare Interoperability Resources (FHIR) standard, all aimed at ensuring data protection and security in healthcare. FHIR caters to healthcare data interoperability, as data is shared across different systems, from customers to health insurers and care providers. The astronomical cost of implementation has deterred players in the space from ensuring compliance, leading to susceptibility to data exfiltration and data loss. Though HIPAA hones in on the security of protected health information (PHI) and PCI DSS on the security of payment card data, they intersect in the shared goal of protecting sensitive information in line with industry standards. With advancements in technology and the emergence of new technologies, it is necessary to revamp these policies to address their complexity and ambiguity, cost barriers, and the ever-increasing threats in cyberspace. Healthcare data in the wrong hands is a recipe for disaster, and its protection and security must be enhanced to protect the mental health of current and future generations.

Keywords: cloud security, healthcare, cybersecurity, policy and standard

Procedia PDF Downloads 90