Search results for: stream computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1589

1019 Steady and Oscillatory States of Swirling Flows under an Axial Magnetic Field

Authors: Brahim Mahfoud, Rachid Bessaïh

Abstract:

In this paper, steady and oscillatory swirling flows with heat transfer under an axial magnetic field are studied numerically. The governing Navier-Stokes, energy, and potential equations, along with appropriate boundary conditions, are solved using the finite-volume method. The flow and temperature fields are presented by the stream function and isotherms, respectively. The flow between counter-rotating end disks is very unstable and reveals a great richness of structures. Results are presented for various values of the Hartmann number, Ha=5, 10, 20, and 30, and of the Richardson number, Ri=0, 0.5, 1, 2, and 4, in order to see their effects on the value of the critical Reynolds number, Recr. Stability diagrams are established from the numerical results of this investigation. These diagrams put in evidence the dependence of Recr on Ha for various values of Ri.

Keywords: swirling, counter-rotating end disks, magnetic field, oscillatory, cylinder

Procedia PDF Downloads 309
1018 Propylene Self-Metathesis to Ethylene and Butene over WOx/SiO2, Effect of Nano-Sized Extra Supports (SiO2 and TiO2)

Authors: Adisak Guntida

Abstract:

Propylene self-metathesis to ethylene and butene was studied over WOx/SiO2 catalysts at 450 °C and atmospheric pressure. The WOx/SiO2 catalysts were prepared by incipient wetness impregnation with an aqueous ammonium metatungstate solution. It was found that adding nano-sized extra supports (SiO2 and TiO2) by physical mixing with the WOx/SiO2 enhanced propylene conversion. The UV-Vis and FT-Raman results revealed that WOx could migrate from the original silica support to the extra support, leading to a better dispersion of WOx. The ICP-OES results also indicated that WOx existed on the extra support. Coke formation on the catalysts was investigated by TPO after 10 h time-on-stream. However, adding nano-sized extra supports led to higher coke formation, which may be related to acidity, as characterized by NH3-TPD.

Keywords: extra support, nanomaterial, propylene self-metathesis, tungsten oxide

Procedia PDF Downloads 231
1017 Efficient Utilization of Unmanned Aerial Vehicle (UAV) for Fishing through Surveillance for Fishermen

Authors: T. Ahilan, V. Aswin Adityan, S. Kailash

Abstract:

UAVs are small remotely operated or automated aerial surveillance systems without a human pilot aboard. UAVs generally find use in military and special-operation applications, and a recent growing trend is their application in several civil and non-military tasks such as the inspection of power lines or pipelines. The objective of this paper is the augmentation of a UAV in order to replace the existing expensive sonar (sound navigation and ranging) based equipment among small-scale fishermen, for whom access to sonar equipment is restricted by limited economic resources. The surveillance equipment on the UAV relays data and the GPS location to a receiver on the fishing boat using RF signals, from which the location of schools of fish can be determined. In addition to this, an emergency beacon system is present for rescue operations and drone recovery.

Keywords: UAV, Surveillance, RF signals, fishing, sonar, GPS, video stream, school of fish

Procedia PDF Downloads 439
1016 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification

Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti

Abstract:

Rivers have traditionally been classified, assessed and managed in terms of hydrological, chemical and/or biological criteria. Geomorphological classifications played a secondary role in the past, although proposals like the River Styles Framework, Catchment Baseline Survey or the Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system. Understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms arranged to effectively preserve its ecological function (hydrologic, sedimentological and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Besides, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is peculiar to the river studied. The variables used in the classification are specific stream power and mean grain size. A discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness. Each site is a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was made by analyzing 122 control points (sites) distributed over eight river basins. In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool underlying the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.). An alteration of this geometry is indicative of a geomorphological disturbance (whether natural or anthropogenic). Hydrographic mapping is also dynamic because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually. This procedure makes it possible to monitor the geomorphological quality of the rivers and to see whether there are any alterations. The maps are useful to researchers and managers, especially for conservation work and river restoration.
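
The abstract does not reproduce its discriminant equations, but the underlying idea — assigning control points to geomorphological types from specific stream power and mean grain size via discriminant analysis — can be sketched as follows. This is an illustrative sketch only: the type labels and toy values are hypothetical, not the study's 122 Galician control points.

```python
# Hypothetical sketch: discriminant analysis on two fluvial variables.
# Values are illustrative only; they are NOT the study's control-point data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Each row: [specific stream power (W/m^2), mean grain size (mm)]
X = np.array([
    [15.0, 0.8], [22.0, 1.1], [18.0, 0.9],       # assumed "low-energy" type
    [85.0, 12.0], [110.0, 18.0], [95.0, 15.0],   # assumed "gravel-bed" type
    [300.0, 60.0], [260.0, 45.0], [340.0, 70.0]  # assumed "high-energy" type
])
y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])  # geomorphological type labels

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# One linear discriminant function per type (coefficients + intercept),
# analogous to the "individual discriminant equation" mentioned in the abstract.
for cls, coef, intercept in zip(lda.classes_, lda.coef_, lda.intercept_):
    print(f"type {cls}: {coef[0]:.3f}*omega + {coef[1]:.3f}*D50 + {intercept:.3f}")

# Classifying a new control point flags a possible geomorphological change
# if its predicted type differs from the reference type recorded for that site.
print(lda.predict([[120.0, 20.0]]))
```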

Keywords: fluvial auto-classification concept, mapping, geomorphology, river

Procedia PDF Downloads 351
1015 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights

Authors: Julian Wise

Abstract:

Newcrest Mining is one of the world’s top five gold and rare earth mining organizations by production, reserves and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 listed organization Insight Enterprises, to standardize machine learning solutions which process data from over a hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the utilization of cloud software architectures and edge computing, these technological developments enable standardized processes for machine learning applications to influence the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired and utilized for predictive modelling is processed through edge computing and collectively stored within a data lake. Being involved in the digital transformation has necessitated standardizing the software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe, for the purposes of improved worker safety and production efficiency through big data applications.

Keywords: mineral technology, big data, machine learning operations, data lake

Procedia PDF Downloads 92
1014 Effects of a Cooler on the Sampling Process in a Continuous Emission Monitoring System

Authors: J. W. Ahn, I. Y. Choi, T. V. Dinh, J. C. Kim

Abstract:

A cooler is widely employed in the extractive system of a continuous emission monitoring system (CEMS) to remove water vapor from the gas stream. The effect of the cooler on analytical target gases was investigated in this research. A commercial cooler for the CEMS operated at 4 °C was used. Several gases emitted from a coal power plant (i.e., CO2, SO2, NO, NO2 and CO) were mixed with humid air and then introduced into the cooler to observe its effect. Concentrations of SO2, NO, NO2 and CO were set to 200 ppm. The CO2 concentration was 8%. The inlet absolute humidity was produced as 12.5% at 100 °C using a bubbling method. It was found that the reduction rate of SO2 was the highest (~21%), followed by NO2 (~17%), CO2 (~11%) and CO (~10%). In contrast, NO was not affected by the cooler. The results indicated that the cooler had a significant effect on the water-soluble gases due to the condensate water in the cooler. To overcome this problem, a correction factor may be applied. However, water vapor content may differ and emissions of the target gases also vary; therefore, a correction factor alone is not a complete solution, and a better method should also be sought.
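
Using the reduction rates reported in the abstract, the kind of correction factor suggested would simply rescale each measured concentration by its loss fraction; the sketch below only illustrates that idea, and, as the abstract itself cautions, a fixed factor is not sufficient when humidity and emissions vary.

```python
# Illustrative correction-factor sketch based on the loss fractions quoted
# in the abstract (SO2 ~21%, NO2 ~17%, CO2 ~11%, CO ~10%, NO ~0%).
# A fixed factor like this ignores variations in humidity and emissions.
loss_fraction = {"SO2": 0.21, "NO2": 0.17, "CO2": 0.11, "CO": 0.10, "NO": 0.0}

def correct(gas: str, measured_ppm: float) -> float:
    """Estimate the pre-cooler concentration from the post-cooler reading."""
    return measured_ppm / (1.0 - loss_fraction[gas])

# Example: an SO2 reading of 158 ppm after the cooler corresponds to
# roughly 200 ppm before it (158 / 0.79 = 200).
print(round(correct("SO2", 158.0), 1))
```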

Keywords: cooler, CEMS, monitoring, reproductive, sampling

Procedia PDF Downloads 341
1013 Multi-Layer Silica Alumina Membrane Performance for Flue Gas Separation

Authors: Ngozi Nwogu, Mohammed Kajama, Emmanuel Anyanwu, Edward Gobina

Abstract:

With the objective of creating technologically advanced, scientifically applicable materials, multi-layer silica alumina membranes were molecularly fabricated by continuously coating silica layers containing a hybrid material onto a porous ceramic substrate for flue gas separation applications. The multi-layer silica alumina membrane was prepared by a dip-coating technique followed by drying in an oven at elevated temperature. The effects of substrate physical appearance, coating quantity, the cross-linking agent, the number of coatings and testing conditions on the gas separation performance of the membrane have been investigated. A scanning electron microscope was used to investigate the development of the coating thickness. The membrane shows impressive permselectivity, especially for a CO2 and N2 binary mixture representing a simulated flue gas stream.

Keywords: gas separation, silica membrane, separation factor, membrane layer thickness

Procedia PDF Downloads 393
1012 Study of Heat Transfer by Natural Convection in Overhead Storage Tank of LNG

Authors: Hariti Rafika, Fekih Malika, Saighi Mohamed

Abstract:

During the storage of liquefied natural gas, stability is necessarily affected by natural convection along the walls of the tank, since the thermal insulation is not perfectly efficient. In this paper, we present the numerical simulation of heat transfer by double-diffusive natural convection, in an unsteady laminar regime, in a storage tank. The storage tank contains liquefied natural gas (LNG) together with its gaseous phase. Fluent, a commercial CFD package based on the finite volume method, is used to simulate the flow. The gas lies just above the surface of the liquid phase. This numerical simulation allowed us to determine the temperature profiles, the stream function, the velocity vectors and the variation of the heat flux density in the vapor phase of the LNG storage tank volume. The results obtained by numerical simulation for a general configuration were compared to those found in the literature.

Keywords: numerical simulation, natural convection, heat gains, storage tank, liquefied natural gas

Procedia PDF Downloads 460
1011 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, where, at the time, the algorithm performed better than the best-known classical algorithm for Max-Cut. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover’s or Shor’s, highlights to the world the potential that quantum computing holds. It also presents the reality of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problem that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization. The evaluation of the cost function, like in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA with a COBYLA optimizer, which is a linear-approximation-based method, and in some instances, it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, due to either speedups or quality of the solution, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
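
A minimal sketch of the hybrid idea — replacing the gradient-style classical optimizer with a simple evolutionary loop around a p=1 QAOA statevector simulation for Max-Cut — might look like the following. This is an illustration written from scratch, not the authors' implementation; the toy graph, population size, and the basic mutation/truncation-selection scheme are assumptions rather than the EA variant the study uses.

```python
# Illustrative sketch (not the paper's code): p=1 QAOA for Max-Cut on a small
# graph, with the variational angles optimized by a simple evolutionary loop
# (Gaussian mutation + truncation selection) instead of a gradient/COBYLA step.
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # toy 4-node graph
n = 4

# Max-Cut value for every computational basis state z in {0,1}^n.
states = np.arange(2 ** n)
bits = (states[:, None] >> np.arange(n)) & 1
cut = np.zeros(2 ** n)
for i, j in edges:
    cut += bits[:, i] ^ bits[:, j]

def qaoa_expectation(gamma: float, beta: float) -> float:
    """<C> for the p=1 QAOA state, via a dense statevector simulation."""
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # |+...+> state
    psi *= np.exp(-1j * gamma * cut)                           # phase separator
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],         # mixer exp(-i*beta*X)
                   [-1j * np.sin(beta), np.cos(beta)]])
    psi = psi.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q)
    psi = psi.reshape(-1)
    return float(np.real(np.sum(np.abs(psi) ** 2 * cut)))

# Evolutionary optimization of (gamma, beta): keep the best half, mutate it.
pop = rng.uniform(0, np.pi, size=(16, 2))
for generation in range(40):
    fitness = np.array([qaoa_expectation(g, b) for g, b in pop])
    parents = pop[np.argsort(fitness)[-8:]]                    # truncation selection
    children = parents + rng.normal(0, 0.1, size=parents.shape)
    pop = np.vstack([parents, children])

best = max(pop, key=lambda p: qaoa_expectation(*p))
print("best angles:", best, "expected cut:", qaoa_expectation(*best))
```

Because each candidate's fitness evaluation is independent, the population loop is exactly the part that the abstract proposes to parallelize.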

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 40
1010 The Influence of Variable Geometrical Modifications of the Trailing Edge of Supercritical Airfoil on the Characteristics of Aerodynamics

Authors: P. Lauk, K. E. Seegel, T. Tähemaa

Abstract:

The fuel consumption of modern, high-wing-loading commercial aircraft in the first stage of flight is high because the usable flight level is lower and the weather conditions (jet stream) have a great impact on aircraft performance. To reduce fuel consumption, it is necessary to raise the L/D ratio during the first stage of flight within the Cl range 0.55-0.65. Different variable geometrical modifications of the wing trailing edge of the SC(2)-410 airfoil were compared at M 0.78 using CFD simulations in STAR-CCM+ based on the Reynolds-averaged Navier-Stokes (RANS) equations. The numerical results obtained show that by increasing the width of the airfoil by 4% and by modifying the trailing edge of the airfoil, it is possible to decrease airfoil drag at Cl 0.70 by up to 26.6% and at the same time to increase the commercial aircraft L/D ratio by up to 5.0%. Fuel consumption can be reduced in proportion to the increase in L/D ratio.

Keywords: L/D ratio, miniflaps, mini-TED, supercritical airfoil

Procedia PDF Downloads 182
1009 Application of Lean Manufacturing in Brake Shoe Manufacturing Plant: A Case Study

Authors: Anees K. Ahamed, Aakash Kumar R. G., Raj M. Mohan

Abstract:

The main objective is to apply lean tools to identify and eliminate waste in and among the work stations so as to improve process speed and quality. From the top seven wastes in the lean concept, we consider the movement of materials, defects, and inventory for improvement, since these have the greatest impact on the performance measures. The layout was improved to reduce the movement of materials, and the reduction in movement among the work stations was quantified. Value stream mapping was used for the identification of waste. A cause-and-effect diagram and 5W analysis are used to identify the reasons for defects and to provide countermeasures. Some cycle time reduction techniques are also proposed to improve productivity. A lean audit check sheet was also used to identify the current position of the industry and the gap to be closed to make the industry lean.

Keywords: cause and effect diagram, cycle time reduction, defects, lean, waste reduction

Procedia PDF Downloads 364
1008 Effect of Nanoparticle Diameter of Nano-Fluid on Average Nusselt Number in the Chamber

Authors: A. Ghafouri, N. Pourmahmoud, I. Mirzaee

Abstract:

In this study, the effects of using an Al2O3-water nanofluid on the rate of heat transfer have been investigated numerically. The physical model is a square enclosure with insulated top and bottom horizontal walls, while the vertical walls are kept at different constant temperatures. Two appropriate models are used to evaluate the viscosity and thermal conductivity of the nanofluid. The governing stream function-vorticity equations are solved using a second-order central finite difference scheme, coupled to the conservation of mass and energy. The study has been carried out for nanoparticle diameters of 30, 60, and 90 nm and solid volume fractions from 0 to 0.04. Results are presented as the average Nusselt number and the normalized Nusselt number over the ranges of φ and D for the mixed-convection-dominated regime. It is found that a different heat transfer rate is predicted when the effect of nanoparticle diameter is taken into account.

Keywords: nanofluid, nanoparticle diameter, heat transfer enhancement, square enclosure, Nusselt number

Procedia PDF Downloads 382
1007 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack

Authors: Vincent Andrew Cappellano

Abstract:

In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system’s intended operations. However, systems architected to that requirement may meet it only during normal operations and not during component failure or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage to communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this effort, a set of simulation and evaluation efforts are undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures where system functions and capabilities are distributed across nodes. Nodes and links will have specific characteristics and, based on the sampled requirements, contribute to the overall system functionality, such that as they are impacted or degraded, the resulting functional availability of the system can be determined. A machine learning, reinforcement-based agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric rating their resilience to these attack types/strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architectural selection processes. It is intended that this additional data will create an improvement in matching resilient critical infrastructure system requirements to the correct architectures and implementations that will support improved operation during times of system degradation due to failures and infrastructure attacks.
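
As a rough illustration of the kind of metric described — functional availability of a candidate architecture evaluated across a spectrum of degradations — the sketch below removes random nodes from a toy architecture graph and records what fraction of required functions still reach an intact host. The graph, function placement, and random-removal "attack" are hypothetical stand-ins; the paper proposes a reinforcement-learning-driven attacker, which is not modelled here.

```python
# Hypothetical sketch of a degradation-sweep availability metric.
# The architecture, function placement, and random-removal "attack" are
# illustrative only; the paper proposes an RL-driven attacker instead.
import random
import networkx as nx

random.seed(1)

# Toy candidate architecture: nodes host functions, edges are comms links.
arch = nx.Graph()
arch.add_edges_from([("edge1", "fog1"), ("edge2", "fog1"), ("fog1", "cloud"),
                     ("edge3", "fog2"), ("fog2", "cloud"), ("fog1", "fog2")])
functions = {"ingest": "fog1", "analytics": "cloud", "control": "fog2"}
clients = ["edge1", "edge2", "edge3"]

def functional_availability(g: nx.Graph) -> float:
    """Fraction of (client, function) pairs that can still reach a live host."""
    ok, total = 0, 0
    for c in clients:
        for host in functions.values():
            total += 1
            if c in g and host in g and nx.has_path(g, c, host):
                ok += 1
    return ok / total

def degrade(g: nx.Graph, intensity: float) -> nx.Graph:
    """Remove each non-client node with probability equal to the attack intensity."""
    h = g.copy()
    for node in list(h.nodes):
        if node not in clients and random.random() < intensity:
            h.remove_node(node)
    return h

# Sweep attack intensity and average over trials -> one resilience curve per architecture.
for intensity in (0.0, 0.2, 0.4, 0.6):
    trials = [functional_availability(degrade(arch, intensity)) for _ in range(200)]
    print(f"intensity {intensity:.1f}: mean availability {sum(trials) / len(trials):.2f}")
```

Comparing such curves (or their areas) across candidate architectures gives one plausible form of the resilience rating the abstract describes.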

Keywords: architecture, resiliency, availability, cyber-attack

Procedia PDF Downloads 77
1006 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm

Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy

Abstract:

IoT networks today solve various consumer problems, from home automation systems to aiding in driving autonomous vehicles, through the deployment of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to aid the vehicle in reaching its destination safely and on time. IoT systems are predominantly dependent on the cloud environment for data storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required for a certain application. Data management in fog nodes is strenuous because fog networks are dynamic in terms of their availability and hardware capability. It becomes more challenging when the nodes in the network also have short lifespans, detaching and joining frequently. When an end-user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary to access and manage the data stored in the fog devices, as a conventional static way of managing the data does not work in fog networks. The proposed solution discusses a protocol that acts by defining sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in the fog nodes using reinforcement learning, so that access to the data is determined dynamically based on the requests.
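
The protocol itself is not specified in the abstract beyond sensitivity levels, replication among fog nodes, and RL-driven access decisions; the sketch below is a hypothetical, greatly simplified rendering of the first two ingredients (static sensitivity-level checks plus replication to a subset of nodes), with the dynamic RL policy deliberately left out.

```python
# Hypothetical simplification of sensitivity-level stewardship in a fog network.
# Node names, clearance levels, and the replication rule are assumptions for
# illustration; the paper's dynamic, RL-based access policy is not modelled here.
from dataclasses import dataclass, field

@dataclass
class FogNode:
    name: str
    clearance: int                      # highest sensitivity this node may hold
    store: dict = field(default_factory=dict)

def write(data_id: str, payload: str, sensitivity: int, nodes: list[FogNode], copies: int = 2):
    """Replicate the item to up to `copies` nodes cleared for its sensitivity."""
    eligible = [n for n in nodes if n.clearance >= sensitivity][:copies]
    if not eligible:
        raise PermissionError("no fog node cleared for this sensitivity level")
    for n in eligible:
        n.store[data_id] = (sensitivity, payload)
    return [n.name for n in eligible]

def read(data_id: str, reader_clearance: int, nodes: list[FogNode]):
    """Serve the item from any replica, if the reader's clearance allows it."""
    for n in nodes:
        if data_id in n.store:
            sensitivity, payload = n.store[data_id]
            if reader_clearance >= sensitivity:
                return payload
            raise PermissionError("reader clearance below data sensitivity")
    raise KeyError("data not found on any reachable fog node")

fog = [FogNode("f1", clearance=3), FogNode("f2", clearance=1), FogNode("f3", clearance=2)]
print(write("roadcam-017", "icy road segment", sensitivity=2, nodes=fog))
print(read("roadcam-017", reader_clearance=2, nodes=fog))
```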

Keywords: IoT, fog networks, data stewardship, dynamic access policy

Procedia PDF Downloads 36
1005 Effect of Mesh Size on the Supersonic Viscous Flow Parameters around an Axisymmetric Blunt Body

Authors: Haoui Rabah

Abstract:

The aim of this work is to analyze viscous flow around an axisymmetric blunt body, taking into account the mesh size both in the free stream and in the boundary layer. The Navier-Stokes equations are solved using the finite volume method to determine the flow parameters and the detached shock position. The numerical technique uses the flux vector splitting method of Van Leer. An adequate time-stepping parameter, CFL coefficient and mesh size level are selected to ensure numerical convergence. The effect of the mesh size is significant on the shear stress and velocity profiles. The best solution is obtained using a very fine grid. This study enabled us to confirm that the boundary layer thickness can be determined only if the mesh size is smaller than a certain limiting value given by our calculations.
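
The abstract names the Van Leer flux vector splitting but does not reproduce it; as a reference sketch, the standard one-dimensional form of the splitting for a perfect gas is shown below. This is a simplified illustration of the published scheme, not the authors' axisymmetric viscous implementation.

```python
# Standard 1D Van Leer flux vector splitting for a perfect gas (sketch only;
# the study applies it to axisymmetric viscous flow, which is not shown here).
import numpy as np

GAMMA = 1.4

def van_leer_split(rho: float, u: float, p: float):
    """Return (F_plus, F_minus) for the 1D Euler fluxes."""
    a = np.sqrt(GAMMA * p / rho)          # speed of sound
    M = u / a                             # Mach number
    E = p / (GAMMA - 1) + 0.5 * rho * u * u
    F = np.array([rho * u, rho * u * u + p, (E + p) * u])   # physical flux

    if M >= 1.0:                          # fully supersonic to the right
        return F, np.zeros(3)
    if M <= -1.0:                         # fully supersonic to the left
        return np.zeros(3), F

    # Subsonic case: Van Leer's polynomial splitting.
    f_plus, f_minus = np.empty(3), np.empty(3)
    for sign, f in ((+1.0, f_plus), (-1.0, f_minus)):
        fmass = sign * 0.25 * rho * a * (M + sign) ** 2
        f[0] = fmass
        f[1] = fmass * ((GAMMA - 1.0) * u + sign * 2.0 * a) / GAMMA
        f[2] = fmass * ((GAMMA - 1.0) * u + sign * 2.0 * a) ** 2 / (2.0 * (GAMMA ** 2 - 1.0))
    return f_plus, f_minus

fp, fm = van_leer_split(rho=1.225, u=100.0, p=101325.0)
print(fp + fm)   # should recover the physical flux in the subsonic case
```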

Keywords: supersonic flow, viscous flow, finite volume, blunt body

Procedia PDF Downloads 590
1004 Heavy Metals in the Water of Lakes in the 'Bory Tucholskie' National Park of Biosphere Reserve

Authors: Krzysztof Gwozdzinski, Janusz Mazur

Abstract:

Bory Tucholskie (Tucholskie Forest) is one of the largest pine forest complexes in Poland. It occupies approx. 3,000 square kilometers of sandur (outwash plain) in the Brda and Wda basin, the Tuchola Plain and the Charzykowskie Plain. In 2010, it was transformed into the Bory Tucholskie Biosphere Reserve, according to the UNESCO decision. The Bory Tucholskie National Park (BTNP) was designated in 1996. There is little data on the presence of heavy metals in the Park's lakes. The concentration of heavy metals in the water of 19 lakes in the BTNP was examined. The lakes were divided into two groups: subglacial channel lakes of Struga Siedmiu Jezior (the Seven Lakes Stream) and other lakes. Heavy metals (transition metals) belong to the d-block of elements. Some of these metals play an important role in the function of living organisms as metalloproteins (enzymes, hemoproteins, vitamins, etc.); however, heavy metals are also typical anthropogenic pollutants. Water samples were collected at the deepest points of the lakes during spring and during summer stagnation. The analysis of metals was performed on an atomic absorption spectrophotometer Varian Spectra A300/400 with an electric atomizer (GTA 96) in a graphite cuvette. In the waters of the Seven Lakes Stream (Ostrowite, Zielone, Jelen, Belczak, Glowka, Plesno, Skrzynka, Mielnica), an increase in the concentration of manganese and iron from the outflow to the inflow of Charzykowskie lake was found, while the concentrations of copper (approx. 4 μg dm⁻³) and cadmium (< 0.5 μg dm⁻³) were similar in all lakes. The concentration of lead also varied within 2.1-3.6 μg dm⁻³. The concentration of nickel was approx. 3-fold higher in Ostrowite lake than in the other lakes of the Struga. In turn, the waters of the lakes Ostrowite, Jelen and Belczak were rich in zinc. The lowest level of heavy metals was observed in Zielone lake. In the second group of lakes, i.e., Krzywce Wielkie and Krzywce Male, the heavy metal concentrations were lower than in the waters of the Struga but higher than in the oligotrophic lakes, i.e., Nierybno, Gluche, Kociol, Gacno Wielkie, Gacno Male, Dlugie, Zabionek, and Sosnowek. The concentration of cadmium was below 0.5 μg dm⁻³ in all the studied lakes from this group. In the group of oligotrophic lakes, the highest concentrations of metals such as manganese, iron, zinc and nickel were observed in Gacno Male and Gacno Wielkie. A high level of manganese was found in Sosnowek and Gacno Wielkie lakes. The lead level was also high in Nierybno lake, and the nickel level in Gacno Wielkie lake. Lower levels of heavy metals were found in the oligotrophic lakes Kociol, Dlugie and Zabionek and in the α-mesotrophic lake Krzywce Wielkie. Generally, the level of heavy metals in the studied lakes situated in the Bory Tucholskie National Park was lower than in other lakes of the Bory Tucholskie Biosphere Reserve.

Keywords: Bory Tucholskie Biosphere Reserve, Bory Tucholskie National Park, heavy metals, lakes

Procedia PDF Downloads 100
1003 Application of Lean Six Sigma Tools to Minimize Time and Cost in Furniture Packaging

Authors: Suleiman Obeidat, Nabeel Mandahawi

Abstract:

In this work, the packaging process for a household move is improved. The customers of such a move need their household goods to be moved from their current house to the new one with minimum damage, in an organized manner, on time and at minimum cost. Our goal was to improve time efficiency by 10% to 20%, achieve a 90% reduction in damaged parts, and obtain an acceptable improvement in the cost of the total move process. The expected ROI was 833%. Many improvement techniques have been used concerning the way the boxes are prepared, their preparation cost, packing the goods, labeling them and moving them to a staging area for the move. The DMAIC technique is used in this work: a SIPOC diagram, a value stream map of the “As Is” process, root cause analysis, maps of the “Future State” and “Ideal State”, and an improvement plan. A value of ROI = 624% is obtained, which is lower than the expected value of 833%. The work explains the improvement techniques and the deficiencies in the old process.

Keywords: packaging, lean tools, six sigma, DMAIC methodology, SIPOC

Procedia PDF Downloads 411
1002 GIS Application in Surface Runoff Estimation for Upper Klang River Basin, Malaysia

Authors: Suzana Ramli, Wardah Tahir

Abstract:

Estimation of surface runoff depth is a vital part of any rainfall-runoff modeling. It leads to stream flow calculation and later predicts flood occurrences. GIS (Geographic Information System) is an advanced and appropriate tool for simulating hydrological models due to its realistic representation of topography. The paper discusses the calculation of surface runoff depth for two selected events by using GIS with the Curve Number method for the Upper Klang River basin. GIS enables map intersection between soil type and land use, which later produces a curve number map. The results show good correlation between simulated and observed values, with R² greater than 0.7. Acceptable performance of statistical measures, namely mean error, absolute mean error, RMSE, and bias, is also reported in the paper.
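
The Curve Number step the paper relies on follows the standard SCS-CN relation: S = 25400/CN − 254 (mm), Ia = 0.2S, and Q = (P − Ia)²/(P − Ia + S) for P > Ia, else Q = 0. A minimal per-cell sketch of that calculation is given below; it illustrates the published method, not the authors' GIS workflow, and the CN values and rainfall depth are illustrative rather than the Upper Klang basin inputs.

```python
# SCS Curve Number runoff depth (metric units) applied cell-by-cell, as a
# sketch of the calculation behind a curve-number map; the CN values and
# rainfall below are illustrative, not the Upper Klang basin inputs.
import numpy as np

def scs_runoff_depth(P_mm: np.ndarray, CN: np.ndarray) -> np.ndarray:
    """Runoff depth Q (mm) from rainfall depth P (mm) and curve number CN."""
    S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
    Ia = 0.2 * S                      # initial abstraction (mm)
    Q = np.where(P_mm > Ia, (P_mm - Ia) ** 2 / (P_mm - Ia + S), 0.0)
    return Q

# Toy "curve number map" as produced by intersecting soil type and land use layers.
CN_map = np.array([[70.0, 85.0],
                   [92.0, 60.0]])
rain_event_mm = np.full_like(CN_map, 80.0)   # uniform 80 mm storm

print(scs_runoff_depth(rain_event_mm, CN_map).round(1))
```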

Keywords: surface runoff, geographic information system, curve number method, environment

Procedia PDF Downloads 261
1001 Effect of Electromagnetic Field on Capacitive Deionization Performance

Authors: Alibi Kilybay, Emad Alhseinat, Ibrahim Mustafa, Abdulfahim Arangadi, Pei Shui, Faisal Almarzooqi

Abstract:

In this work, an electromagnetic field has been used to improve the performance of the capacitive deionization process. The effect of electromagnetic fields on the efficiency of the capacitive deionization (CDI) process was investigated experimentally. The results showed that treating the feed stream of the CDI process using an electromagnetic field can enhance the electrosorption capacity by 20% up to 70%. The effects of exposure time, concentration, and type of ions have been examined. The electromagnetic field enhanced the salt adsorption capacity (SAC) for Ca²⁺ ions by 70%, while the SAC was enhanced by 20% for Na⁺ ions. It is hypothesized that the electromagnetic field affects the hydration shell around the ions and thus reduces their effective size and enhances mass transfer. This reduction in ion effective size and increase in mass transfer enhanced the electrosorption capacity and kinetics of the CDI process.

Keywords: capacitive deionization, desalination, electromagnetic treatment, water treatment

Procedia PDF Downloads 239
1000 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of the computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks), whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents).
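
A minimal mpi4py sketch of the partitioning idea described — agents split into mutually exclusive subsets per MPI process, with market-level aggregates exchanged instead of per-agent messages — is shown below. The toy "economy" (a single consumption-goods price updated from aggregate supply and demand) is an assumption for illustration only, not the paper's 1:1 model of Austria.

```python
# Illustrative mpi4py sketch (run e.g. with: mpiexec -n 4 python abem_sketch.py).
# Each rank owns a disjoint subset of agents; only aggregates are exchanged,
# echoing the paper's strategy of minimising inter-process communication.
# The toy price-update rule below is an assumption, not the paper's model.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_TOTAL = 1_000_000                       # total agents across all ranks
n_local = N_TOTAL // size                 # mutually exclusive agent subsets
rng = np.random.default_rng(seed=rank)

income = rng.lognormal(mean=10.0, sigma=0.5, size=n_local)
propensity = rng.uniform(0.5, 0.9, size=n_local)
price = 1.0

for step in range(5):
    # Local (embarrassingly parallel) agent decisions.
    local_demand = float(np.sum(propensity * income / price))
    local_supply = float(np.sum(income))  # toy stand-in for firms' output value

    # One collective call per market replaces millions of per-agent messages.
    total_demand = comm.allreduce(local_demand, op=MPI.SUM)
    total_supply = comm.allreduce(local_supply, op=MPI.SUM)

    price *= (total_demand / total_supply) ** 0.1   # crude tatonnement update

    if rank == 0:
        print(f"step {step}: price {price:.4f}")
```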

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 110
999 Local Homology Modules

Authors: Fatemeh Mohammadi Aghjeh Mashhad

Abstract:

In this paper, we give several ways for computing generalized local homology modules by using Gorenstein flat resolutions. Also, we find some bounds for vanishing of generalized local homology modules.

Keywords: a-adic completion functor, generalized local homology modules, Gorenstein flat modules

Procedia PDF Downloads 394
998 Heat Transfer and Diffusion Modelling

Authors: R. Whalley

Abstract:

The heat transfer modelling for a diffusion process will be considered. Difficulties in computing the time-distance dynamics of the representation will be addressed. Incomplete and irrational Laplace functions will be identified as the computational issue. Alternative approaches to the response evaluation process will be provided. An illustrative application problem will be presented. Graphical results confirming the theoretical procedures employed will be provided.

Keywords: heat, transfer, diffusion, modelling, computation

Procedia PDF Downloads 535
997 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations

Authors: Deepak Singh, Rail Kuliev

Abstract:

The abstract highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, “reskilling and upskilling” the employees, and establishing robust data management training programs play an essential and integral role in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), and artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. In the present study, by embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in the ever-evolving industry.

Keywords: master data management, IoT, AI&ML, cloud Computing, data optimization

Procedia PDF Downloads 52
996 Chemical and Vibrational Nonequilibrium Hypersonic Viscous Flow around an Axisymmetric Blunt Body

Authors: Rabah Haoui

Abstract:

Hypersonic flows around space vehicles during their reentry phase in planetary atmospheres are characterized by intense aerothermodynamic phenomena. The aim of this work is to analyze high-temperature flows around an axisymmetric blunt body, taking into account chemical and vibrational non-equilibrium for the air mixture species and the no-slip condition at the wall. For this purpose, the Navier-Stokes equation system is solved by the finite volume methodology to determine the flow parameters around the axisymmetric blunt body, especially at the stagnation point and in the boundary layer along the wall of the blunt body. The code allows the capture of the shock wave ahead of a blunt body placed in a hypersonic free stream. The numerical technique uses the flux vector splitting method of Van Leer. The CFL coefficient and mesh size level are selected to ensure numerical convergence.

Keywords: hypersonic flow, viscous flow, chemical kinetic, dissociation, finite volumes, frozen and non-equilibrium flow

Procedia PDF Downloads 443
995 Heating System for Water Pool by Solar Energy

Authors: Elmo Thiago Lins Cöuras Ford, Valentina Alessandra Carvalho do Vale

Abstract:

A swimming pool heating system is presented, composed of two alternative collectors with PVC absorber tubes in series that operate in a forced-flow regime driven by a pump. A 500-liter reservoir was used to simulate the swimming pool, and data were collected that show the viability of the proposed system. The chosen flow rate was 100 l/h. Because of this low flow rate, a conventional pump was not suitable, so an alternative low-flow pumping system was adopted, using an air conditioner motor with three different rotation speeds for the desired purpose. The thermal data related to each collector and to the developed system will be presented. The UV and thermal degradation of the PVC exposed to solar radiation will also be addressed, demonstrating the viability of using tubes of this material as radiation-absorbing elements in solar water heating collectors.

Keywords: solar energy, solar swimming pool, water heating, PVC tubes, alternative system

Procedia PDF Downloads 446
994 Evaluation of Water Quality for the Kurtbogazi Dam Outlet and the Streams Feeding the Dam (Ankara, Turkey)

Authors: Gulsen Tozsin, Fatma Bakir, Cemil Acar, Ercument Koc

Abstract:

Kurtbogazi Dam has gained special importance for Ankara, Turkey over the last decade due to the rapid depletion of nearby drinking water resources. In this study, the results of the analyses of the Kurtbogazi Dam outlet water and the streams flowing into the Kurtbogazi Dam are discussed for the five-year period between 2008 and 2012. The quality of these surface water resources was evaluated in terms of pH, temperature, biochemical oxygen demand (BOD5), nitrate, phosphate and chlorine. They were classified according to the Council Directive (75/440/EEC). Moreover, the properties of these surface waters were assessed to determine the quality of the water for drinking and irrigation purposes using Piper, US Salinity Laboratory and Wilcox diagrams. The results revealed that the quality of all the investigated water sources is generally at a satisfactory level as surface water, except for Pazar Stream in terms of ortho-phosphate and BOD5 concentrations in 2008.

Keywords: Kurtbogazi dam, water quality assessment, Ankara water, water supply

Procedia PDF Downloads 356
993 Real-Time Image Encryption Using a 3D Discrete Dual Chaotic Cipher

Authors: M. F. Haroun, T. A. Gulliver

Abstract:

In this paper, an encryption algorithm is proposed for real-time image encryption. The scheme employs a dual chaotic generator based on a three dimensional (3D) discrete Lorenz attractor. Encryption is achieved using non-autonomous modulation where the data is injected into the dynamics of the master chaotic generator. The second generator is used to permute the dynamics of the master generator using the same approach. Since the data stream can be regarded as a random source, the resulting permutations of the generator dynamics greatly increase the security of the transmitted signal. In addition, a technique is proposed to mitigate the error propagation due to the finite precision arithmetic of digital hardware. In particular, truncation and rounding errors are eliminated by employing an integer representation of the data which can easily be implemented. The simple hardware architecture of the algorithm makes it suitable for secure real-time applications.
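
Purely as an illustration of the general idea — a keystream derived from a discretised 3D chaotic map, kept in integer arithmetic to avoid the truncation and rounding issues the authors mention, and combined with the image bytes — a toy sketch could look like the following. The map constants, fixed-point scaling, and single-generator structure are assumptions made for this sketch; the actual scheme uses two coupled Lorenz-type generators with non-autonomous modulation, which is not reproduced here.

```python
# Toy sketch only — NOT the paper's dual-generator cipher. It illustrates the
# general flavour of chaos-based stream encryption: an integer-arithmetic
# (fixed-point) 3D map generates a keystream that is XORed with image bytes,
# avoiding the floating-point rounding issues the abstract mentions.
import numpy as np

SCALE = 256          # fixed-point scale (state ~ value * SCALE)
MOD = 2 ** 32        # keep all state within a 32-bit unsigned integer range

def keystream(length: int, key=(1, 2, 3)) -> np.ndarray:
    # Lorenz-inspired Euler update in pure integer arithmetic. Wrap-around
    # makes this a pseudorandom toy map rather than a faithful Lorenz orbit.
    x, y, z = (k * SCALE + 1 for k in key)
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        dx = (10 * (y - x)) // SCALE
        dy = (x * (28 * SCALE - z)) // SCALE // SCALE - y // SCALE
        dz = (x * y) // SCALE // SCALE - (8 * z) // (3 * SCALE)
        x, y, z = (x + dx) % MOD, (y + dy) % MOD, (z + dz) % MOD
        out[i] = (x ^ (y >> 8) ^ (z >> 16)) & 0xFF
    return out

def xor_cipher(data: bytes, key) -> bytes:
    ks = keystream(len(data), key)
    return bytes(np.frombuffer(data, dtype=np.uint8) ^ ks)

image = bytes(range(256)) * 4                           # stand-in for raw image bytes
ciphertext = xor_cipher(image, key=(7, 8, 9))
assert xor_cipher(ciphertext, key=(7, 8, 9)) == image   # XOR stream cipher inverts itself
print(ciphertext[:16].hex())
```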

Keywords: chaotic systems, image encryption, non-autonomous modulation, FPGA

Procedia PDF Downloads 489
992 Developing Digital Competencies in Aboriginal Students through University-College Partnerships

Authors: W. S. Barber, S. L. King

Abstract:

This paper reports on a pilot project to develop a collaborative partnership between a community college in rural northern Ontario, Canada, and an urban university in the greater Toronto area, in Oshawa, Canada. The partner institutions will collaborate to address the learning needs of university applicants whose goal is to attain an undergraduate BA degree in Educational Studies and Digital Technology, but who may not live in a geographical location that would facilitate this pathways process. The UOIT BA degree is attained through a 2+2 program, where students with a two-year college diploma or equivalent can attain a four-year undergraduate degree. The goals reported for the project are: 1. to expand the BA program to include an additional stream covering serious educational games, simulations and virtual environments; 2. to develop fully online learning modules (using both synchronous and asynchronous technologies) for use by university applicants who are otherwise not geographically located close to a physical university site; 3. to assess the digital competencies of all students, including members of local, distance and Indigenous communities, using a validated tool developed and tested by UOIT across numerous populations. This tool, the General Technical Competency Use and Scale (GTCU), will provide the collaborating institutions with data that will allow for analyzing how well students are prepared to succeed in fully online learning communities. Philosophically, the UOIT BA program is based on a fully online learning communities (FOLC) model that can be accessed from anywhere in the world through digital learning environments via audio-video conferencing tools such as Adobe Connect. It also follows models of adult learning and mobile learning, and makes a university degree accessible to the increasing demographic of adult learners who may use mobile devices to learn anywhere, anytime. The program is based on key principles of problem-based learning, allowing students to build their own understandings through the co-design of the learning environment in collaboration with the instructors and their peers. In this way, this degree allows students to personalize and individualize the learning based on their own culture, background and professional/personal experiences. Using modified flipped-classroom strategies, students are able to interrogate video modules on their own time in preparation for one-hour discussions occurring in video conferencing sessions. As a consequence of the program's flexibility, students may continue to work full or part time. All of the partner institutions will co-develop four new modules, administer the GTCU and share data, while creating a new stream of the UOIT BA degree. This will increase accessibility for students to bridge from community colleges to university through a fully digital environment. We aim to work collaboratively with Indigenous elders, community members and distance education instructors to increase opportunities for more students to attain a university education.

Keywords: aboriginal, college, competencies, digital, universities

Procedia PDF Downloads 204
991 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, no quantum circuit implementation of these algorithms has been created, to the best of our knowledge. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to be able to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed, evaluating the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time allows the decision problem to be transformed into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equal superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including its special forms, which are Feynman and Pauli X. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits and adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
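
Since the abstract's oracle is built from reversible Toffoli-based blocks (edge-weight adder, node-index calculator, uniqueness checker, comparator), a compact way to convey what it computes is the equivalent classical predicate. The sketch below states that predicate for a toy bounded-degree graph; it is a classical restatement for illustration, not the Q# circuit, and the graph, the start node, and the exact encoding details are assumptions.

```python
# Classical restatement of what the quantum oracle checks, for illustration only
# (the paper implements it reversibly in Q# from Toffoli-family gates).
# A candidate tour is encoded, as in the abstract, by selecting one of the K
# outgoing edges at each of the N nodes (N * log2(K) bits in the register).
from itertools import product

# Toy bounded-degree graph (K = 2): adjacency[i] lists (neighbor, weight) pairs.
adjacency = {0: [(1, 3), (2, 5)],
             1: [(2, 1), (3, 4)],
             2: [(3, 2), (0, 5)],
             3: [(0, 6), (1, 4)]}
N, K = 4, 2

def walk(edge_choices):
    """Follow one chosen edge per node, returning (visited nodes, total weight, end node)."""
    node, visited, total = 0, [], 0
    for _ in range(N):
        visited.append(node)
        nxt, w = adjacency[node][edge_choices[node]]   # node-index calc + edge-weight adder
        total += w
        node = nxt
    return visited, total, node

def oracle_predicate(edge_choices, cost_threshold):
    """What the oracle marks: a valid Hamiltonian cycle cheaper than the threshold."""
    visited, total, end = walk(edge_choices)
    is_cycle = end == 0 and len(set(visited)) == N     # uniqueness checker
    return is_cycle and total < cost_threshold         # comparator

# The Grover workflow, stated classically: lower the threshold every time a
# marked (cheaper, valid) assignment is found, until no improvement remains.
threshold = float("inf")
for choices in product(range(K), repeat=N):
    if oracle_predicate(list(choices), threshold):
        threshold = walk(list(choices))[1]
print("minimum Hamiltonian cycle cost:", threshold)
```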

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 167
990 Peristaltic Transport of a Jeffrey Fluid with Double-Diffusive Convection in Nanofluids in the Presence of Inclined Magnetic Field

Authors: Safia Akram

Abstract:

In this article, the effects of peristaltic transport with double-diffusive convection in nanofluids through an asymmetric channel with different waveforms are presented. Mathematical modelling for two-dimensional and two-directional flows of a Jeffrey fluid model, along with double-diffusive convection in nanofluids, is given. Exact solutions are obtained for the nanoparticle fraction field, concentration field, temperature field, stream functions, pressure gradient and pressure rise in terms of axial and transverse coordinates under the restrictions of long wavelength and low Reynolds number. With the help of computational and graphical results, the effects of Brownian motion, thermophoresis, Dufour, Soret, and Grashof numbers (thermal, concentration, nanoparticle) on peristaltic flow patterns with double-diffusive convection are discussed.

Keywords: nanofluid particles, peristaltic flow, Jeffrey fluid, magnetic field, asymmetric channel, different waveforms

Procedia PDF Downloads 361