Search results for: co-gasification modelling

1393 Large Eddy Simulation with Energy-Conserving Schemes: Understanding Wind Farm Aerodynamics

Authors: Dhruv Mehta, Alexander van Zuijlen, Hester Bijl

Abstract:

Large Eddy Simulation (LES) numerically resolves the large energy-containing eddies of a turbulent flow, while modelling the small dissipative eddies. On a wind farm, these large scales carry the energy that wind turbines extract and are also responsible for transporting the turbines’ wakes, which may interact with downstream turbines and certainly with the atmospheric boundary layer (ABL). In this situation, it is important to conserve the energy that these wakes carry, which could otherwise be altered artificially through numerical dissipation introduced by the schemes used for spatial discretisation and temporal integration. Numerical dissipation has been reported to cause the premature recovery of turbine wakes, leading to an overprediction of the power produced by wind farms. An energy-conserving scheme is free from numerical dissipation and ensures that the energy of the wakes is increased or decreased only by the action of molecular viscosity or the action of wind turbines (body forces). The aim is to create an LES package with energy-conserving schemes to simulate wind turbine wakes correctly, to gain insight into power production, wake meandering, etc. Such knowledge will be useful in designing more efficient wind farms with minimal wake interaction, which, if unchecked, could lead to major losses in energy production per unit area of the wind farm. For their research, the authors intend to use the Energy-Conserving Navier-Stokes code developed by the Energy Research Centre of the Netherlands.
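
As a sketch of what energy conservation means here (standard notation, not specific to the ECN code): writing the convective term in skew-symmetric form makes its contribution to the resolved kinetic energy budget vanish identically for divergence-free velocity fields, so resolved energy can change only through viscosity and body forces.

```latex
% Skew-symmetric convection is energy-neutral: (C(u)v, v) = 0 for
% divergence-free u with periodic or no-slip boundaries, so the kinetic
% energy budget contains only viscous dissipation and body-force work.
C(\mathbf{u})\,\mathbf{v} = \tfrac{1}{2}(\mathbf{u}\cdot\nabla)\mathbf{v}
  + \tfrac{1}{2}\nabla\cdot(\mathbf{u}\,\mathbf{v}), \qquad
\frac{\mathrm{d}}{\mathrm{d}t}\int_\Omega \tfrac{1}{2}\lvert\mathbf{v}\rvert^2\,\mathrm{d}\Omega
  = -\nu\int_\Omega \lvert\nabla\mathbf{v}\rvert^2\,\mathrm{d}\Omega
  + \int_\Omega \mathbf{f}\cdot\mathbf{v}\,\mathrm{d}\Omega .
```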

Keywords: energy-conserving schemes, modelling turbulence, Large Eddy Simulation, atmospheric boundary layer

Procedia PDF Downloads 443
1392 Dynamic Wetting and Solidification

Authors: Yulii D. Shikhmurzaev

Abstract:

The modelling of non-isothermal free-surface flows coupled with the solidification process has become the topic of intensive research with the advent of additive manufacturing, where complex 3-dimensional structures are produced by successive deposition and solidification of microscopic droplets of different materials. The issue is that both the spreading of liquids over solids and the propagation of the solidification front into the fluid and along the solid substrate pose fundamental difficulties for their mathematical modelling. The first of these processes, known as ‘dynamic wetting’, leads to the well-known ‘moving contact-line problem’ where, as shown recently both experimentally and theoretically, the contact angle formed by the free surface with the solid substrate is not a function of the contact-line speed but is rather a functional of the flow field. The modelling of the propagating solidification front requires a generalization of the classical Stefan problem that can describe the onset of the process and the non-equilibrium regime of solidification. Furthermore, given that dynamic wetting and solidification occur concurrently and interactively, they should be described within the same conceptual framework. The present work addresses this formidable problem and presents a mathematical model capable of describing the key element of additive manufacturing in a self-consistent and singularity-free way. The model is illustrated by simple examples highlighting its main features. The main idea of the work is that both dynamic wetting and solidification, as well as some other fluid flows, are particular cases in a general class of flows where interfaces form and/or disappear. This conceptual framework allows one to derive a mathematical model from first principles using the methods of irreversible thermodynamics. Crucially, the interfaces are not considered as zero-mass entities introduced via the Gibbsian ‘dividing surface’ but as the 2-dimensional surface phases produced by the continuum limit in which the thickness of what is physically an interfacial layer vanishes, and its properties are characterized by ‘surface’ parameters (surface tension, surface density, etc.). This approach allows for the mass exchange between the surface and bulk phases, which is the essence of interface formation. As shown numerically, the onset of solidification is preceded by a pure interface-formation stage, whilst the Stefan regime is the final stage, where the temperature at the solidification front asymptotically approaches the solidification temperature. The developed model can also be applied to flows with substrate melting as well as complex flows where both types of phase transition take place.
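
For reference, the classical Stefan problem that the work generalises pins the front at the equilibrium melting temperature and balances latent-heat release against the jump in conductive flux (textbook form; sign conventions vary with the choice of normal):

```latex
% Classical Stefan condition at the solidification front:
% v_n = normal front speed, L = latent heat, k_s/k_l = solid/liquid
% conductivities; the front is held at the melting temperature T_m.
\rho L\,v_n = k_s\frac{\partial T_s}{\partial n} - k_l\frac{\partial T_l}{\partial n},
\qquad T\big|_{\text{front}} = T_m .
```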

Keywords: dynamic wetting, interface formation, phase transition, solidification

Procedia PDF Downloads 40
1391 Modelling Phase Transformations in Zircaloy-4 Fuel Cladding under Transient Heating Rates

Authors: Jefri Draup, Antoine Ambard, Chi-Toan Nguyen

Abstract:

Zirconium alloys exhibit solid-state phase transformations under thermal loading. These can lead to a significant evolution of the microstructure and associated mechanical properties of materials used in nuclear fuel cladding structures. Therefore, the ability to capture the effects of phase transformation on the material constitutive behavior is of interest during conditions of severe transient thermal loading. Whilst typical Avrami, or Johnson-Mehl-Avrami-Kolmogorov (JMAK), type models for phase transformations have been shown to correlate well with the behavior of Zircaloy-4 under constant heating rates, the effects of variable and fast heating rates are not fully explored. The present study utilises the results of in-situ high energy synchrotron X-ray diffraction (SXRD) measurements in order to validate the phase transformation models for Zircaloy-4 under fast variable heating rates. These models are used to assess the performance of fuel cladding structures under loss of coolant accident (LOCA) scenarios. The results indicate that simple Avrami type models can provide a reasonable indication of the phase distribution in experimental test specimens under variable fast thermal loading. However, the accuracy of these models deteriorates under the faster heating regimes, i.e., 100 °C s⁻¹. The study highlights areas for improvement of simple Avrami type models, such as the inclusion of the temperature-rate dependence of the JMAK n-exponent.
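
For context, the isothermal JMAK relation referred to above has the form below; under transient heating it is usually applied incrementally with an Arrhenius-type rate constant (a standard formulation, not the authors' exact calibration):

```latex
% JMAK transformed phase fraction; k(T) is an Arrhenius rate constant and
% n is the Avrami exponent whose heating-rate dependence the study probes.
X(t) = 1 - \exp\!\big(-k(T)\,t^{\,n}\big),
\qquad
k(T) = k_0 \exp\!\left(-\frac{Q}{RT}\right).
```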

Keywords: accident, fuel, modelling, zirconium

Procedia PDF Downloads 110
1390 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. A bi-temporal change detection method is unable to indicate the continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral images acquired over a long period of time, assuming there is no considerable change during that period, and then compares it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect changed pixels from the probabilistic model estimated for the corresponding pixel. A changed pixel is detected under the assumption that the images have been co-registered prior to estimation. To minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are several challenges in this method. The first and foremost is to obtain a sufficiently large number of datasets for multivariate distribution modelling, as a large number of images are always discarded due to cloud coverage. In addition, imperfect modelling leads to a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
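
A minimal per-pixel sketch of this idea, assuming a multivariate Gaussian model per pixel (hypothetical data layout; for that model the GLRT statistic reduces to a Mahalanobis-distance test, and the paper's exact likelihood and 8-neighbourhood correction may differ):

```python
import numpy as np
from scipy.stats import chi2

def change_map(stack, test_img, alpha=0.01):
    """Pixel-wise change detection against a no-change reference stack.

    stack    : (T, H, W, B) multispectral images assumed change-free
    test_img : (H, W, B) later acquisition, co-registered to the stack
    Returns a boolean (H, W) map of detected changes.
    """
    T, H, W, B = stack.shape
    flat = stack.reshape(T, -1, B)              # (T, H*W, B)
    mu = flat.mean(axis=0)                      # per-pixel mean spectrum
    diff = flat - mu
    # Per-pixel covariance estimated over the time series
    cov = np.einsum('tpb,tpc->pbc', diff, diff) / (T - 1)
    cov += 1e-6 * np.eye(B)                     # regularise for inversion
    d = test_img.reshape(-1, B) - mu            # deviation of test pixel
    stat = np.einsum('pb,pbc,pc->p', d, np.linalg.inv(cov), d)
    thresh = chi2.ppf(1.0 - alpha, df=B)        # false-alarm rate alpha
    return (stat > thresh).reshape(H, W)
```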

Keywords: co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection

Procedia PDF Downloads 107
1389 Determination of ILSS of Composite Materials Using Micromechanical FEA Analysis

Authors: K. Rana, H. A. Saeed, S. Zahir

Abstract:

Inter-laminar shear stress (ILSS) is a key parameter that quantifies the properties of composite materials. These properties can determine the suitability of a material for a specific purpose, such as aerospace or automotive applications. A modelling approach for the determination of ILSS is presented in this paper. Geometric modelling of the composite material is performed in TEXGEN software, where the reinforcement, cured matrix and their interfaces are modelled separately as per the actual geometry. Mechanical properties of the matrix and reinforcements are modelled separately, which incorporates the anisotropy of the real-world composite material. The ASTM D2344 test is modelled in ANSYS for ILSS. In a macroscopic analysis, the model approximates the anisotropy of the material and uses orthotropic properties obtained by applying homogenization techniques. A shear stress analysis in that case does not capture the actual real-world behaviour but rather approximates it. In this paper, the actual geometry and properties of the reinforcement and matrix are modelled to capture the actual stress state during the testing of samples as per the ASTM standard. Testing of samples is also performed in order to validate the results. The fibre volume fraction of the yarn is determined by image analysis of manufactured samples. The fibre volume fraction data are incorporated into the numerical model to correct the transversely isotropic properties of the yarn. A comparison between experimental and simulated results is presented.
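
For reference, ASTM D2344 reduces the measured maximum load to the short-beam (interlaminar shear) strength with the standard formula below, against which the micromechanical FEA results can be compared:

```latex
% ASTM D2344 short-beam strength: P_m = maximum load observed in the
% test, b = specimen width, h = specimen thickness.
F^{\,\mathrm{sbs}} = 0.75\,\frac{P_m}{b\,h}
```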

Keywords: ILSS, FEA, micromechanical, fibre volume fraction, image analysis

Procedia PDF Downloads 342
1388 The Assessment of Particulate Matter Pollution in Kaunas Districts

Authors: Audrius Dedele, Aukse Miskinyte

Abstract:

Air pollution is a major problem, especially in large cities, causing a variety of environmental issues and risks to human health. In order to observe air quality and to reduce and control air pollution in the city, municipalities are responsible for the creation of air quality management plans, air quality monitoring and emission inventories. Atmospheric dispersion modelling systems, along with monitoring, are powerful tools that can be used not only for air quality management but also for the assessment of human exposure to air pollution. These models are widely used in epidemiological studies that try to determine the associations between exposure to air pollution and adverse health effects. The purpose of this study was to determine the concentration of particulate matter smaller than 10 μm (PM10) in different districts of Kaunas city during the winter season. The ADMS-Urban dispersion model was used for the simulation of PM10 pollution. The inputs to the model were the characteristics of stationary, traffic and domestic sources, emission data, meteorology, and background concentrations. To assess the modelled concentrations of PM10 in the Kaunas districts, a geographic information system (GIS) was used, with more detailed analysis made using Spatial Analyst tools. The modelling results showed that the average concentration of PM10 during the winter season in Kaunas city was 24.8 µg/m³. The highest PM10 levels were determined in the Zaliakalnis and Aleksotas districts, which have the highest number of individual residential properties, at 32.0±5.2 and 28.7±8.2 µg/m³, respectively. The lowest PM10 pollution was modelled in the Petrasiunai district (18.4 µg/m³), which is characterized as a commercial and industrial neighbourhood.
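
ADMS-Urban uses advanced boundary-layer parametrisations, but the underlying principle of source-based dispersion modelling can be illustrated with the classical Gaussian plume solution for a continuous point source (textbook form, not the ADMS formulation):

```latex
% Gaussian plume for a point source of strength Q at effective height H;
% u = wind speed, sigma_y/sigma_z = stability-dependent dispersion widths.
C(x,y,z) = \frac{Q}{2\pi u \sigma_y \sigma_z}
  \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
  \left[\exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right)
      + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right]
```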

Keywords: air pollution, dispersion model, GIS, particulate matter

Procedia PDF Downloads 243
1387 Numerical Modelling of Skin Tumor Diagnostics through Dynamic Thermography

Authors: Luiz Carlos Wrobel, Matjaz Hribersek, Jure Marn, Jurij Iljaz

Abstract:

Dynamic thermography has been clinically proven to be a valuable diagnostic technique for skin tumor detection as well as for other medical applications such as breast cancer diagnostics, diagnostics of vascular diseases, fever screening, and other dermatological applications. Thermography for medical screening can be done in two different ways: observing the temperature response under steady-state conditions (passive or static thermography), and inducing thermal stresses by cooling or heating the observed tissue and measuring the thermal response during the recovery phase (active or dynamic thermography). The numerical modelling of heat transfer phenomena in biological tissue during dynamic thermography can aid the technique by improving process parameters or by estimating unknown tissue parameters based on measured data. This paper presents a nonlinear numerical model of multilayer skin tissue containing a skin tumor, together with the thermoregulation response of the tissue during the cooling-rewarming processes of dynamic thermography. The model is based on the Pennes bioheat equation and solved numerically by using a subdomain boundary element method which treats the problem as axisymmetric. The paper includes computational tests and numerical results for Clark II and Clark IV tumors, comparing models using constant and temperature-dependent thermophysical properties, which showed noticeable differences and highlighted the importance of using a local thermoregulation model.
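
The Pennes bioheat equation underlying the model reads, in its standard form (the temperature-dependent perfusion rate w_b is where the local thermoregulation response enters):

```latex
% Pennes bioheat equation: rho*c = tissue heat capacity, k = conductivity,
% w_b = blood perfusion rate, T_a = arterial temperature, q_m = metabolic
% heat generation.
\rho c \frac{\partial T}{\partial t}
  = \nabla\cdot(k\,\nabla T)
  + \rho_b c_b w_b\,(T_a - T) + q_m
```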

Keywords: boundary element method, dynamic thermography, static thermography, skin tumor diagnostic

Procedia PDF Downloads 72
1386 Identification Algorithm of Critical Interface, Modelling Perils on Critical Infrastructure Subjects

Authors: Jiří J. Urbánek, Hana Malachová, Josef Krahulec, Jitka Johanidisová

Abstract:

The paper deals with the investigation and modelling of crisis situations within critical infrastructure organizations. Every crisis situation originates in the occurrence of an emergency event, especially in organizations of the energy critical infrastructure. Emergency events can be expected, in which case crisis scenarios can be prepared in advance by the pertinent organizational crisis management authorities for coping with them, or unexpected (the Black Swan effect), without a pre-prepared scenario, requiring operational coping with the crisis situation. The forms, characteristics, behaviour and utilization of crisis scenarios have various qualities, depending on the prevention and training processes of the real critical infrastructure organization. The aim is always to obtain better organizational security and continuity. The objective of this paper is to find and investigate critical/crisis zones and functions in models of critical situations of critical infrastructure organizations. The DYVELOP (Dynamic Vector Logistics of Processes) method is able to identify problematic critical zones and functions, displaying the critical interfaces among the actors of crisis situations on DYVELOP maps named Blazons. Realizing this ability first requires deriving and creating an identification algorithm for critical interfaces. The locations of critical interfaces are the flags of a crisis situation in a real critical infrastructure organization. Finally, the model of the critical interface is demonstrated for a real Czech energy critical infrastructure organization in a blackout peril environment. The Blazons are best comprehended through a live PowerPoint presentation.

Keywords: algorithm, crisis, DYVELOP, infrastructure

Procedia PDF Downloads 383
1385 Hydrogen Induced Fatigue Crack Growth in Pipeline Steel API 5L X65: A Combined Experimental and Modelling Approach

Authors: H. M. Ferreira, H. Cockings, D. F. Gordon

Abstract:

Climate change is driving a transition in the energy sector, with low-carbon energy sources such as hydrogen (H2) emerging as an alternative to fossil fuels. However, the successful implementation of a hydrogen economy requires an expansion of hydrogen production, transportation and storage capacity. The costs associated with this transition are high but can be partly mitigated by adapting the current oil and natural gas networks, such as pipelines, an important component of the hydrogen infrastructure, to transport pure or blended hydrogen. Steel pipelines are designed to withstand fatigue, one of the most common causes of pipeline failure. However, it is well established that some materials, such as steel, can fail prematurely in service when exposed to hydrogen-rich environments. Therefore, it is imperative to evaluate how defects (e.g. inclusions, dents, and pre-existing cracks) will interact with hydrogen under cyclic loading and, ultimately, to what extent hydrogen induced failure will limit the service conditions of steel pipelines. This presentation will explore how the exposure of API 5L X65 to a hydrogen-rich environment and cyclic loads influences its susceptibility to hydrogen induced failure. That evaluation is performed by a combination of several techniques: hydrogen permeation testing (ISO 17081:2014) and fatigue crack growth (FCG) testing (ISO 12108:2018 and AFGROW modelling), combined with microstructural and fractographic analysis. The development of an FCG test setup coupled with an electrochemical cell will be discussed, along with the advantages and challenges of measuring crack growth rates in electrolytic hydrogen environments. A detailed assessment of several electrolytic charging conditions will also be presented, using hydrogen permeation testing as a method to correlate the different charging settings to equivalent hydrogen concentrations and effective diffusivity coefficients, not only in the base material but also in the heat affected zone and weld of the pipelines. The experimental work is being complemented with AFGROW, a useful FCG modelling software that has helped inform testing parameters and which will also be developed to ultimately help industry experts perform structural integrity analysis and remnant life characterisation of pipeline steels under representative conditions. The results from this research will make it possible to conclude whether the crack growth rate of API 5L X65 accelerates under the influence of a hydrogen-rich environment, an important aspect that needs to be reflected in standards and codes of practice on pipeline integrity evaluation and maintenance.
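
As background to the FCG testing and AFGROW modelling, most crack-growth analyses build on a Paris-type relation; hydrogen effects are then often represented through environment-dependent coefficients (a generic sketch, not the project's calibrated law):

```latex
% Paris-type fatigue crack growth law: a = crack length, N = load cycles,
% Delta K = stress intensity factor range; C and m are fitted per
% environment (air vs. electrolytically charged hydrogen).
\frac{\mathrm{d}a}{\mathrm{d}N} = C\,(\Delta K)^m
```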

Keywords: AFGROW, electrolytic hydrogen charging, fatigue crack growth, hydrogen, pipeline, steel

Procedia PDF Downloads 67
1384 A Modular and Reusable Bond Graph Model of Epithelial Transport in the Proximal Convoluted Tubule

Authors: Leyla Noroozbabaee, David Nickerson

Abstract:

We introduce a modular, consistent, reusable bond graph model of the renal nephron’s proximal convoluted tubule (PCT), which can reproduce biological behaviour. In this work, we focus on ion and volume transport in the proximal convoluted tubule of the renal nephron. Modelling complex systems requires complex modelling problems to be broken down into manageable pieces; bond graphs enable this because they are based on a graph structure, allowing models of subsystems to be developed and subsequently coupled hierarchically. In the current work, we define two modular subsystems: the resistive module representing the membrane and the capacitive module representing solution compartments. Each module is analyzed based on thermodynamic processes, and all the subsystems are reintegrated into circuit theory in network thermodynamics. The epithelial transport system we introduce in the current study consists of five transport membranes and four solution compartments. Coupled dissipations in the system occur in the membrane subsystems, and coupled free-energy-increasing or -decreasing processes appear in the solution compartment subsystems. These structural subsystems also consist of elementary thermodynamic processes: dissipations, free-energy changes, and power conversions. We provide free and open access to the Python implementation to ensure our model is accessible, enabling the reader to explore the model by setting up their own simulations and reproducibility tests.
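
A minimal Python sketch of the modular resistive/capacitive pattern described above (illustrative only; the class names, linear conductance law and parameter values are our assumptions, not the authors' released implementation):

```python
import numpy as np

R_GAS, T = 8.314, 310.0   # J/(mol*K), body temperature in K

class Compartment:
    """Capacitive module: stores solute amount q (mol) in volume V (L).
    The effort (chemical potential) rises with concentration."""
    def __init__(self, q, V, mu0=0.0):
        self.q, self.V, self.mu0 = q, V, mu0
    def potential(self):
        return self.mu0 + R_GAS * T * np.log(self.q / self.V)

class Membrane:
    """Resistive module: dissipative flux driven by the potential
    difference across the membrane (linear conductance g)."""
    def __init__(self, g, a, b):
        self.g, self.a, self.b = g, a, b
    def flux(self):
        return self.g * (self.a.potential() - self.b.potential())

# Two compartments coupled by one membrane, integrated with explicit Euler.
lumen, cell = Compartment(q=1.0, V=1.0), Compartment(q=0.2, V=1.0)
mem = Membrane(g=1e-4, a=lumen, b=cell)
dt = 0.1
for _ in range(10000):
    J = mem.flux()
    lumen.q -= J * dt      # solute leaves the lumen ...
    cell.q += J * dt       # ... and accumulates in the cell
print(f"steady-state amounts: lumen={lumen.q:.3f}, cell={cell.q:.3f}")
```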

Keywords: bond graph, epithelial transport, water transport, mathematical modelling

Procedia PDF Downloads 57
1383 Simulation and Experimental Study on Dual Dense Medium Fluidization Features of Air Dense Medium Fluidized Bed

Authors: Cheng Sheng, Yuemin Zhao, Chenlong Duan

Abstract:

The air dense medium fluidized bed is a typical application of fluidization techniques for coal particle separation in arid areas, where it is costly to implement wet coal preparation technologies. In the last three decades, the air dense medium fluidized bed, as an efficient dry coal separation technique, has been studied in many aspects, including energy and mass transfer, hydrodynamics, bubbling behaviors, etc. Although numerous studies have been published, the fluidization features, especially dual dense medium fluidization features, have rarely been reported. In dual dense medium fluidized beds, different combinations of dense mediums play a significant role in fluidization quality variation, thus influencing coal separation efficiency. Moreover, to what extent the different dense mediums mix, and to what extent the two-component particulate mixture affects the fluidization performance and quality, have remained open questions. The proposed work attempts to reveal the underlying mechanisms of the generation and evolution of the two-component particulate mixture in the fluidization process. Based on computational fluid dynamics methods and discrete particle modelling, the movement and evolution of dual dense mediums in an air dense medium fluidized bed have been simulated. Dual dense medium fluidization experiments have been conducted, with electrical capacitance tomography employed to investigate the distribution of the two-component mixture. The underlying mechanisms of two-component particulate fluidization are expected to be demonstrated through the analysis and comparison of the simulation and experimental results.

Keywords: air dense medium fluidized bed, particle separation, computational fluid dynamics, discrete particle modelling

Procedia PDF Downloads 352
1382 Building Information Modelling: A Solution to the Limitations of Prefabricated Construction

Authors: Lucas Peries, Rolla Monib

Abstract:

The construction industry plays a vital role in the global economy, contributing billions of dollars annually. However, the industry has been struggling with persistently low productivity levels for years, unlike other sectors that have shown significant improvements. Modular and prefabricated construction methods have been identified as potential solutions to boost productivity in the construction industry. These methods offer time advantages over traditional construction methods. Despite their potential benefits, modular and prefabricated construction face hindrances and limitations that are not present in traditional building systems. Building information modelling (BIM) has the potential to address some of these hindrances, but barriers are preventing its widespread adoption in the construction industry. This research aims to enhance understanding of the shortcomings of modular and prefabricated building systems and develop BIM-based solutions to alleviate or eliminate these hindrances. The research objectives include identifying and analysing key issues hindering the use of modular and prefabricated building systems, investigating the current state of BIM adoption in the construction industry and factors affecting its successful implementation, proposing BIM-based solutions to address the issues associated with modular and prefabricated building systems, and assessing the effectiveness of the developed solutions in removing barriers to their use. The research methodology involves conducting a critical literature review to identify the key issues and challenges in modular and prefabricated construction and BIM adoption. Additionally, an online questionnaire will be used to collect primary data from construction industry professionals, allowing for feedback and evaluation of the proposed BIM-based solutions. The data collected will be analysed to evaluate the effectiveness of the solutions and their potential impact on the adoption of modular and prefabricated building systems. The main findings of the research indicate that the identified issues from the literature review align with the opinions of industry professionals, and the proposed BIM-based solutions are considered effective in addressing the challenges associated with modular and prefabricated construction. However, the research has limitations, such as a small sample size and the need to assess the feasibility of implementing the proposed solutions. In conclusion, this research contributes to enhancing the understanding of modular and prefabricated building systems' limitations and proposes BIM-based solutions to overcome these limitations. The findings are valuable to construction industry professionals and BIM software developers, providing insights into the challenges and potential solutions for implementing modular and prefabricated construction systems in future projects. Further research should focus on addressing the limitations and assessing the feasibility of implementing the proposed solutions from technical and legal perspectives.

Keywords: building information modelling, modularisation, prefabrication, technology

Procedia PDF Downloads 61
1381 Towards Law Data Labelling Using Topic Modelling

Authors: Daniel Pinheiro Da Silva Junior, Aline Paes, Daniel De Oliveira, Christiano Lacerda Ghuerren, Marcio Duran

Abstract:

The Courts of Accounts are institutions responsible for overseeing Public Administration expenses and pointing out irregularities. They have a high demand for processes to be analyzed, whose decisions must be grounded in the relevant laws. Despite the large number of processes, several cases report similar subjects, so previous decisions on already analyzed processes can serve as precedents for current processes that refer to similar topics. Identifying similar topics is an open, yet essential, task for identifying similarities between processes. Since the actual number of topics is considerably large, it is tedious and error-prone to identify topics using a purely manual approach. This paper presents a tool based on Machine Learning and Natural Language Processing to assist in building a labeled dataset. The tool relies on Topic Modelling with Latent Dirichlet Allocation to find the topics underlying a document, followed by the Jensen-Shannon distance metric to generate a probability of similarity between document pairs. Furthermore, in a case study with a corpus of decisions of the Rio de Janeiro State Court of Accounts, it was noted that data pre-processing plays an essential role in modeling relevant topics. Also, the combination of topic modeling and a distance metric calculated over documents represented in the generated topic space has proved useful in helping to construct a labeled base of similar and non-similar document pairs.
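
A minimal sketch of the LDA-plus-Jensen-Shannon pipeline (the toy English corpus and library choices are ours; the paper's corpus is Portuguese-language decisions with domain-specific pre-processing):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.spatial.distance import jensenshannon

# Toy corpus standing in for Court of Accounts decisions.
docs = [
    "irregular public works contract without bidding process",
    "public works contract awarded without competitive bidding",
    "pension payments exceeding the statutory salary cap",
]

# Bag-of-words counts, then LDA to recover per-document topic mixtures.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)          # rows: P(topic | document)

# Jensen-Shannon distance (base 2 bounds it by 1) between topic mixtures,
# mapped to a similarity score for labelling candidate document pairs.
for i, j in [(0, 1), (0, 2)]:
    sim = 1.0 - jensenshannon(theta[i], theta[j], base=2)
    print(f"docs {i}-{j}: similarity ~ {sim:.2f}")
```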

Keywords: courts of accounts, data labelling, document similarity, topic modeling

Procedia PDF Downloads 139
1380 Wildlife Habitat Corridor Mapping in Urban Environments: A GIS-Based Approach Using Preliminary Category Weightings

Authors: Stefan Peters, Phillip Roetman

Abstract:

The global loss of biodiversity is threatening the benefits nature provides to human populations, has become an even more pressing issue than climate change, and requires immediate attention. While there have been successful global agreements for environmental protection, such as the Montreal Protocol, these are rare, and we cannot rely on them solely. Thus, it is crucial to take national and local actions to support biodiversity. Australia is one of the 17 countries in the world with a high level of biodiversity, and its cities are vital habitats for endangered species, with more of them found in urban areas than in non-urban ones. However, the protection of biodiversity in metropolitan Adelaide has been inadequate, with over 130 species disappearing since European colonization in 1836. In this research project, we conceptualized, developed and implemented a framework for wildlife Habitat Hotspot and Habitat Corridor modelling in an urban context using geographic data and GIS modelling and analysis. We used detailed topographic and other geographic data provided by a local council, including spatial and attributive properties of trees, parcels, water features, vegetated areas, roads, verges, traffic, and census data. Weighted factors considered in our raster-based Habitat Hotspot model include parcel size, parcel shape, population density, canopy cover, habitat quality and proximity to habitats and water features. Weighted factors considered in our raster-based Habitat Corridor model include habitat potential (resulting from the Habitat Hotspot model), verge size, road hierarchy, road widths, human density, and the presence of remnant indigenous vegetation species. We developed a GIS model, using Python scripting and ArcGIS Pro ModelBuilder, to establish an automated, reproducible and adjustable geoprocessing workflow, adaptable to any study area of interest. Our habitat hotspot and corridor modelling framework allows existing habitat hotspots and wildlife habitat corridors to be determined and mapped. The research was applied to the case study of Burnside, a local council in Adelaide, Australia, which encompasses an area of 30 km². We applied end-user, expertise-based category weightings to refine our models and optimize the use of our habitat map outputs towards informing local strategic decision-making.
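
A minimal sketch of the weighted raster overlay at the core of the Hotspot model (the weights and layer names below are hypothetical placeholders; the project derived its category weightings from end-user expertise):

```python
import numpy as np

# Hypothetical layer weights for the Habitat Hotspot model.
weights = {
    "canopy_cover": 0.25, "habitat_quality": 0.25, "parcel_size": 0.15,
    "parcel_shape": 0.10, "proximity_habitat": 0.10,
    "proximity_water": 0.10, "population_density": 0.05,
}

def hotspot_score(layers: dict[str, np.ndarray]) -> np.ndarray:
    """Weighted overlay of normalised criterion rasters (all values 0-1,
    same grid and extent; higher = more habitat-suitable)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    score = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, w in weights.items():
        score += w * layers[name]
    return score

# Example with random 100x100 rasters standing in for the council data.
rng = np.random.default_rng(0)
layers = {k: rng.random((100, 100)) for k in weights}
hotspots = hotspot_score(layers) > 0.7   # threshold to map hotspot cells
print(hotspots.sum(), "candidate hotspot cells")
```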

Keywords: biodiversity, GIS modeling, habitat hotspot, wildlife corridor

Procedia PDF Downloads 81
1379 An Analytical Approach of Computational Complexity for the Method of Multifluid Modelling

Authors: A. K. Borah, A. K. Singh

Abstract:

In this paper, we deal with the building blocks of the computer simulation of multiphase flows. The whole simulation procedure can be viewed as two super-procedures: the implementation of the volume-of-fluid (VOF) method and the solution of the Navier-Stokes equations. Moreover, a sequential code for a Navier-Stokes solver has been studied.

Keywords: bi-conjugate gradient stabilized (Bi-CGSTAB), ILUT preconditioner, Krylov subspace, multifluid flows, SIMPLE algorithm

Procedia PDF Downloads 494
1378 Modelling and Simulating CO2 Electro-Reduction to Formic Acid Using Microfluidic Electrolytic Cells: The Influence of Bi-Sn Catalyst and 1-Ethyl-3-Methyl Imidazolium Tetra-Fluoroborate Electrolyte on Cell Performance

Authors: Akan C. Offong, E. J. Anthony, Vasilije Manovic

Abstract:

A modified steady-state numerical model is developed for the electrochemical reduction of CO2 to formic acid. The numerical model achieves a current density (CD) of ~60 mA/cm², a faradaic efficiency (FE) of ~98% and a conversion of ~80% for CO2 electro-reduction to formic acid in a microfluidic cell. The model integrates charge and species transport, mass conservation, and momentum with electrochemistry. Specifically, the influences of a Bi-Sn based nanoparticle catalyst (on the cathode surface) at different mole fractions and of the 1-ethyl-3-methyl imidazolium tetra-fluoroborate ([EMIM][BF4]) electrolyte on CD, FE and CO2 conversion to formic acid are studied. The reaction is carried out at a constant concentration of electrolyte (85% v/v [EMIM][BF4]). Based on the mass transfer characteristics analysis (concentration contours), the 0.5:0.5 mole ratio Bi-Sn catalyst displays the highest CO2 mole consumption in the cathode gas channel. After validating against experimental data (polarisation curves) from the literature, extensive simulations reveal the performance measures: CD, FE and CO2 conversion. Increasing the negative cathode potential increases the current densities for both formic acid and H2 formation. However, H2 formation is minimal as a result of insufficient hydrogen ions in the ionic liquid electrolyte. Moreover, the limited hydrogen ions have a negative effect on the formic acid CD. As the CO2 flow rate increases, CD, FE and CO2 conversion increase.
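
For reference, the faradaic efficiency quoted above is conventionally defined as the fraction of the total charge carried into the target product (z = 2 electrons per formic acid molecule):

```latex
% Faradaic efficiency for formic acid: n_dot = molar production rate,
% z = 2, F = 96485 C/mol, I_total = total cell current.
\mathrm{FE} = \frac{z\,F\,\dot{n}_{\mathrm{HCOOH}}}{I_{\mathrm{total}}}
```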

Keywords: carbon dioxide, electro-chemical reduction, ionic liquids, microfluidics, modelling

Procedia PDF Downloads 120
1377 Interaction between Space Syntax and Agent-Based Approaches for Vehicle Volume Modelling

Authors: Chuan Yang, Jing Bie, Panagiotis Psimoulis, Zhong Wang

Abstract:

Modelling and understanding vehicle volume distribution over the urban network are essential for urban design and transport planning. The space syntax approach has been widely applied as the main conceptual and methodological framework for contemporary vehicle volume models, with the help of the statistical method of multiple regression analysis (MRA). However, the MRA model with space syntax variables shows a limitation in predicting vehicle volume when accounting for the crossed effect of urban configurational characters and socio-economic factors. The aim of this paper is to construct models that capture the combined impact of street network structure and socio-economic factors. We present a multilevel linear (ML) and an agent-based (AB) vehicle volume model at an urban scale, interacting with the space syntax theoretical framework. The ML model allows random effects of urban configurational characteristics in different urban contexts, and the AB model was developed by incorporating transformed space syntax components of the MRA models into the agents’ spatial behaviour. The three models were implemented in the same urban environment. The ML model exhibits superiority over the original MRA model in identifying the relative impacts of the configurational characters and macro-scale socio-economic factors that shape vehicle movement distribution over the city. Compared with the ML model, the suggested AB model demonstrated the ability to estimate vehicle volume in the urban network considering the combined effects of configurational characters and land-use patterns at the street segment level.
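
A minimal sketch of a random-intercept multilevel specification in statsmodels, with hypothetical column names standing in for the paper's space syntax components and socio-economic covariates:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical segment-level data: syntactic accessibility measures
# (integration, choice), a land-use covariate, and an urban-context
# grouping ("district") that receives the random effect.
df = pd.read_csv("segments.csv")  # columns: volume, integration, choice,
                                  #          land_use_mix, district

# Random intercept by district lets configurational effects sit in
# different urban contexts, in the spirit of the ML model above.
model = smf.mixedlm("volume ~ integration + choice + land_use_mix",
                    data=df, groups=df["district"])
result = model.fit()
print(result.summary())
```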

Keywords: space syntax, vehicle volume modeling, multilevel model, agent-based model

Procedia PDF Downloads 107
1376 The Essence of Culture and Religion in Creating Disaster Resilient Societies through Corporate Social Responsibility

Authors: Repaul Kanji, Rajat Agrawal

Abstract:

In this era where issues like climate change and disasters are topics of discussion at national and international forums, humanity very often questions the causative role of corporates in such events. It is beyond any doubt that rapid industrialisation and development have taken a toll in the form of climate change and, in some cases, even disasters. Thus, the demand that a corporate fulfil its responsibilities in the form of rescue and relief in times of disaster, rehabilitation, and even mitigation and preparedness to adapt to the oncoming changes is obvious. But how can the responsibilities of corporates be channelised to ensure all this, i.e., to develop a resilient society? More than that, which factors, when emphasised, can lead to the holistic development of society? To answer this query, an extensive literature review was done to identify several enablers, like the legislation of a nation, the role of brand and reputation, the ease of doing Corporate Social Responsibility, the mission and vision of an organisation, religion and culture, etc., as tools for building disaster resilience. A questionnaire survey and interviews with experts and academicians, followed by interpretive structural modelling (ISM), were used to construct a multi-hierarchy model depicting the contextual relationships among the identified enablers. The study revealed that culture and religion are the most powerful drivers, affecting the other enablers either directly or indirectly. Taking cognisance of the fact that an idea of separation between religion and the workplace (business) resides subconsciously within society, the study tries to interpret the outcome of the ISM through the lenses of past research (The Integration Box) and explores how it can be leveraged to build a resilient society.

Keywords: corporate social responsibility, interpretive structural modelling, disaster resilience and risk reduction, the integration box (TIB)

Procedia PDF Downloads 182
1375 Estimating Interdependence of Social Statuses in a Cooperative Breeding Birds through Mathematical Modelling

Authors: Sinchan Ghosh, Fahad Al Basir, Santanu Ray, Sabyasachi Bhattacharya

Abstract:

Cooperatively breeding birds have two major ranks among sexually mature individuals. The breeders mate and produce offspring, while the non-breeding helpers increase the chick production rate by helping with mate-finding and allo-parenting. The chicks also cooperate to raise their younger siblings through warming, defending and food sharing. However, the existing literature describes the evolution of allo-parenting in birds without differentiating the significance of allo-parenting by sexually immature and mature helpers separately. This study addresses the significance of both immature and mature helpers’ contributions to the total sustainable bird population in a breeding site, using the blue-tailed bee-eater as a test-bed species. To serve this purpose, a mathematical model has been built considering each social status, and the chicks, as separate but interacting compartments. Also, to observe the dynamics of each social status with changing prey abundance, a prey population has been introduced as an additional compartment. The model was analyzed for stability conditions and was validated using field data. A simulation experiment was then performed to observe the change in equilibria with varying helping rates from both types of helpers. The results from the simulation experiment suggest that the cooperatively breeding population changes its size significantly with a change in the helping rate of the sexually immature helpers. The mature helpers, on the other hand, do not contribute to the stability of the population equilibrium as much as the immature helpers.
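
A minimal sketch of such a compartment model (all equations, parameters and rates below are hypothetical illustrations of the structure, not the paper's fitted model):

```python
from scipy.integrate import solve_ivp

# Compartments: breeders B, immature helpers Hi, mature helpers Hm,
# chicks C, prey P. Helpers boost breeding via hi and hm.
r, K = 1.0, 100.0            # prey growth rate and carrying capacity
b0, hi, hm = 0.02, 0.4, 0.1  # base breeding rate; helper boost factors
m, g1, g2 = 0.1, 0.3, 0.2    # mortality; chick->Hi and Hi->Hm maturation

def rhs(t, y):
    B, Hi, Hm, C, P = y
    breeding = b0 * P * B * (1 + hi * Hi + hm * Hm)
    dP = r * P * (1 - P / K) - 0.01 * P * (B + Hi + Hm + C)
    dC = breeding - g1 * C - m * C
    dHi = g1 * C - g2 * Hi - m * Hi
    dHm = g2 * Hi - 0.5 * Hm - m * Hm    # some Hm recruit as breeders
    dB = 0.5 * Hm - m * B
    return [dB, dHi, dHm, dC, dP]

sol = solve_ivp(rhs, (0, 200), [5, 2, 2, 0, 50])
print("equilibrium estimate:", sol.y[:, -1].round(2))
```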

Keywords: blue-tailed bee-eater, altruism, mathematical ethology, behavioural modelling

Procedia PDF Downloads 129
1374 The Quality of Business Relationships in the Tourism System: An Imaginary Organisation Approach

Authors: Armando Luis Vieira, Carlos Costa, Arthur Araújo

Abstract:

The tourism system can be viewed as a network of relationships amongst business partners, where the success of each actor will ultimately be determined by the success of the whole network. Especially since the publication of Gummesson’s (1996) ‘theory of imaginary organisations’, which suggests that organisational effectiveness largely depends on managing relationships and sharing resources and activities, relationship quality (RQ) has been increasingly recognised as a main source of value creation and competitive advantage. However, there is still ambiguity around this topic, and managers and researchers have recurrently reported the need to better understand and capitalise on the quality of interactions with business partners. This research aims at testing an RQ model from a relational, imaginary-organisation approach. Two mail surveys provide the perceptions of 725 hotel representatives about their business relationships with tour operators, and of 1,224 corporate client representatives about their business relationships with hotels (21.9% and 38.8% response rates, respectively). The analysis contributes to enhancing our understanding of the linkages between RQ and its determinants and identifies the roles of their dimensions. Structural equation modelling results highlight trust as the dominant dimension, show the crucial role of commitment and satisfaction, and suggest customer orientation as a complementary building block. Findings also emphasise problem-solving behaviour and selling orientation as the most relevant dimensions of customer orientation. The comparison of the two ‘dyads’ deepens the discussion and enriches the suggested theoretical and managerial guidelines concerning the contribution of quality relationships to business performance.

Keywords: corporate clients, destination competitiveness, hotels, relationship quality, structural equations modelling, tour operators

Procedia PDF Downloads 362
1373 The Relationship between Land Use Factors and Feeling of Happiness at the Neighbourhood Level

Authors: M. Moeinaddini, Z. Asadi-Shekari, Z. Sultan, M. Zaly Shah

Abstract:

Happiness can be related to everything that provides a feeling of satisfaction or pleasure. This study considers the relationship between land use factors and the feeling of happiness at the neighbourhood level. Land use variables (beautiful and attractive neighbourhood design, availability and quality of shopping centres, sufficient recreational spaces and facilities, and sufficient daily service centres) are used as independent variables, and the happiness score is used as the dependent variable in this study. In addition to the land use variables, socio-economic factors (gender, race, marital status, employment status, education, and income) are also considered as independent variables. This study uses the Oxford happiness questionnaire to estimate the happiness scores of more than 300 people living in six neighbourhoods, selected randomly from the Skudai neighbourhoods in Johor, Malaysia. The land use data were obtained by adding related questions to the Oxford happiness questionnaire. The strength of the relationships in this study is found using generalised linear modelling (GLM). The findings of this research indicate that the feeling of happiness increases with increasing income, more beautiful and attractive neighbourhood design, and sufficient shopping centres, recreational spaces, and daily service centres. The results show that all land use factors in this study have a significant relationship with happiness, but only income, among the socio-economic factors, affects happiness significantly. Therefore, land use factors can affect happiness in Skudai more than socio-economic factors.

Keywords: neighbourhood land use, neighbourhood design, happiness, socio-economic factors, generalised linear modelling

Procedia PDF Downloads 124
1372 Analyzing the Support to Fisheries in the European Union: Modelling Budgetary Transfers in Wild Fisheries

Authors: Laura Angulo, Petra Salamon, Martin Banse, Frederic Storkamp

Abstract:

Fisheries subsidies focus on reducing management costs or delivering income benefits to fishers. In 2015, total fishery budgetary transfers in 31 OECD countries represented 35% of their total landing value. However, subsidies to fishing have adverse effects on trade, and it has been claimed that they may contribute directly to overfishing. Therefore, this paper analyses to what extent fisheries subsidies may 1) influence capture production facing quotas and 2) affect price dynamics. The study uses the fish module in AGMEMOD (Agriculture Member States Modelling; for details see Chantreuil et al. (2012)), which covers eight fish categories (cephalopods; crustaceans; demersal marine fish; pelagic marine fish; molluscs excl. cephalopods; other marine finfish species; freshwater and diadromous fish) for EU member states and other selected countries, developed under the SUCCESS project. This model incorporates transfer payments directly linked to fisheries operational costs. As aquaculture and wild fishery are not included within the WTO Agreement on Agriculture, data on fisheries subsidies are obtained from the OECD Fisheries Support Estimates (FSE) database, which provides statistics on budgetary transfers to the fisheries sector. Since support has been moving from budgetary transfers to the General Services Support Estimate in recent years, subsidies may not show substantial effects on capture production. Nevertheless, they would still show the impact across countries and fish categories within the European Union.

Keywords: AGMEMOD, budgetary transfers, EU Member States, fish model, fisheries support estimate

Procedia PDF Downloads 210
1371 Modelling the Antecedents of Supply Chain Enablers in Online Groceries Using Interpretive Structural Modelling and MICMAC Analysis

Authors: Rose Antony, Vivekanand B. Khanapuri, Karuna Jain

Abstract:

Online groceries have transformed the way supply chains are managed. These chains face numerous challenges in terms of product wastage, low margins, long breakeven periods and low market penetration, to mention a few, and e-grocery chains need to overcome these challenges in order to survive the competition. The purpose of this paper is to carry out a structural analysis of the enablers in e-grocery chains by applying Interpretive Structural Modelling (ISM) and MICMAC analysis in the Indian context. The research design is descriptive-explanatory in nature. The enablers have been identified from the literature and through semi-structured interviews conducted among managers having relevant experience in e-grocery supply chains. The experts were contacted through professional/social networks by adopting a purposive snowball sampling technique. The interviews were transcribed, and manual coding was carried out using open and axial coding methods. The key enablers were categorized into themes, and the contextual relationships between these and the performance measures were sought from industry veterans. Using ISM, the hierarchical model of the enablers is developed, and MICMAC analysis identifies the driving and dependence powers. Based on driving-dependence power, the enablers are categorized into four clusters, namely independent, autonomous, dependent and linkage, as illustrated in the sketch after this abstract. The analysis found that information technology (IT) and manpower training act as key enablers towards reducing lead time and enhancing online service quality. Many of the enablers fall under the linkage cluster, viz. frequent software updating, branding, the number of delivery boys, order processing, benchmarking, product freshness and customized applications for different stakeholders, depicting these as critical in online food/grocery supply chains. Considering the perishable nature of the product being handled, the impact of the enablers on product quality is also identified. Hence, the study serves as a tool to identify and prioritize the vital enablers in the e-grocery supply chain. The work is perhaps unique in identifying the complex relationships among the supply chain enablers in fresh food for e-groceries and linking them to the performance measures. It contributes to the knowledge of supply chain management in general and e-retailing in particular. The approach focuses on fresh food supply chains in the Indian context and hence will be applicable in the context of developing economies, where supply chains are evolving.
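
A minimal sketch of the ISM reachability and MICMAC classification mechanics (the 5-enabler adjacency matrix below is hypothetical, not the paper's elicited data):

```python
import numpy as np

# ssim[i][j] = 1 if enabler i leads to enabler j (self-loops included).
labels = ["IT", "training", "branding", "order_processing", "freshness"]
ssim = np.array([[1, 1, 1, 1, 0],
                 [0, 1, 0, 1, 1],
                 [0, 0, 1, 0, 0],
                 [0, 0, 1, 1, 1],
                 [0, 0, 1, 0, 1]])

# Final reachability matrix: boolean transitive closure (Warshall).
reach = ssim.astype(bool)
for k in range(len(labels)):
    reach = reach | (reach[:, [k]] & reach[[k], :])

# MICMAC: driving power = row sums, dependence power = column sums.
driving, dependence = reach.sum(axis=1), reach.sum(axis=0)
mid = (len(labels) + 1) / 2
for i, name in enumerate(labels):
    quadrant = ("linkage" if driving[i] > mid and dependence[i] > mid else
                "independent" if driving[i] > mid else
                "dependent" if dependence[i] > mid else "autonomous")
    print(f"{name:16s} drive={driving[i]} depend={dependence[i]} -> {quadrant}")
```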

Keywords: interpretive structural modelling (ISM), India, online grocery, retail operations, supply chain management

Procedia PDF Downloads 178
1370 A Comparative Time-Series Analysis and Deep Learning Projection of Innate Radon Gas Risk in Canadian and Swedish Residential Buildings

Authors: Selim M. Khan, Dustin D. Pearson, Tryggve Rönnqvist, Markus E. Nielsen, Joshua M. Taron, Aaron A. Goodarzi

Abstract:

Accumulation of radioactive radon gas in indoor air poses a serious risk to human health by increasing the lifetime risk of lung cancer, and it is classified by the IARC as a category one carcinogen. Radon exposure risks are a function of geologic, geographic, design, and human behavioural variables and can change over time. Using time series and deep machine learning modelling, we analyzed long-term radon test outcomes as a function of building metrics from 25,489 Canadian and 38,596 Swedish residential properties constructed between 1945 and 2020. While Canadian and Swedish properties built between 1970 and 1980 are comparable (96–103 Bq/m³), innate radon risks subsequently diverge, rising in Canada and falling in Sweden, such that 21st-century Canadian houses show 467% greater average radon (131 Bq/m³) relative to Swedish equivalents (28 Bq/m³). These trends are consistent across housing types and regions within each country. The introduction of energy efficiency measures within Canadian and Swedish building codes coincided with opposing radon level trajectories in each nation. Deep machine learning modelling predicts that, without intervention, average Canadian residential radon levels will increase to 176 Bq/m³ by 2050, emphasizing the importance and urgency of future building code intervention to achieve systemic radon reduction in Canada.

Keywords: radon health risk, time-series, deep machine learning, lung cancer, Canada, Sweden

Procedia PDF Downloads 52
1369 The Role of Parents on Fear Acquisition of Children in COVID-19 Pandemic

Authors: Begum Serim-Yildiz

Abstract:

The aim of this study is to examine the role of parents’ emotional and behavioral reactions in the fears of children during the COVID-19 pandemic, considering Rachman’s Three Pathways Theory. For this purpose, a phenomenological qualitative study was conducted. Thirteen parents living with their children were recruited through criterion and snowball sampling. In semi-structured interviews, parents were asked about their own and their children’s behavioral and emotional reactions in the COVID-19 pandemic and were expected to give detailed information about the fears of their children before and during the pandemic. Firstly, parents were asked about their own behavioral and emotional reactions. As behavioral reactions, parents mentioned precautions taken to protect the rest of the family from the negative physical and emotional impact of the pandemic, while emotional reactions were defined as the acquisition of negative emotions like fear, anxiety, and worry. Secondly, parents were asked about their children’s behavioral and emotional reactions. Some of the parents talked about positive behavioral changes, such as gaining self-control, while others described negative behavioral changes, like increased time spent with technological tools. Regarding emotional changes, all of the parents described at least one negative emotion. All of the parents stated that their children had COVID-19 related fears. According to the parents’ accounts, children’s fears in the pandemic fell into two dimensions. Fears directly related to COVID-19 were fear of the virus/microbes, of the illness or death of a family member, and of death. Fears indirectly related to COVID-19 were fear of going out, sleeping alone at night, separation, touching objects outside the home, and the cold. Considering the existing literature and based on the findings of this study, it can be concluded that children’s modelling experiences have an impact on the acquisition of negative emotions, especially fear; therefore, preventive interventions involving caregivers should be provided by mental health professionals working with children.

Keywords: children’s fears, COVID-19 pandemic, modelling experiences, parents’ reactions

Procedia PDF Downloads 138
1368 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading

Authors: Jerome Joshi

Abstract:

The paper presents the modelling of financial markets using the Stochastic Pi Calculus model. The Stochastic Pi Calculus model is mainly used for biological applications; however, its features promote its use in financial markets as well, most prominently in high frequency trading. The trading system can be broadly classified into the exchange, market makers or intermediary traders, and fundamental traders. The exchange is where the action of the trade is executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), poses a difficulty for modelling. It involves participants seeking the advantage of complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn profits from each trade, the trader must be at the top of the order book quite frequently, executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is not best for the trader either, since it was the reason for the outbreak of the ‘Hot-Potato Effect’, which in turn demands a better and more efficient model. The characteristics of the model should be such that it is flexible and has diverse applications. Therefore, a model that has proved itself in a similar field characterized by such difficulty should be chosen. In this case, the Stochastic Pi Calculus model seems to be an ideal fit for financial applications, owing to its success in the field of biology. It is an extension of the original Pi Calculus model and acts as a solution and an alternative to the previously flawed algorithm, provided the application of this model is further extended. This model focuses on solving the problem that led to the ‘Flash Crash’, namely the ‘Hot-Potato Effect’. The model consists of small subsystems, which can be integrated to form a large system, and it is designed in such a way that the behavior of ‘noise traders’ is considered as a random process or noise in the system. While modelling, to get a better understanding of the problem, a broader picture is taken into consideration, covering the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the ‘Flash Crash’, the ‘Hot-Potato Effect’, the evaluation of orders and time delay in further detail. For the future, there is a need to focus on the calibration of the modules so that they interact correctly with one another. This model, with its application extended, would provide a basis for further research in the fields of finance and computing.
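
Stochastic pi calculus models are typically executed with Gillespie's stochastic simulation algorithm; the toy sketch below simulates channel-style interactions between market makers and fundamental traders with made-up rates (illustrative only, not the paper's model):

```python
import random

# State: order inventory held by market makers (MM) and the absorbing
# demand of fundamental traders (FT); 'hot potato' = MM-to-MM pass-around.
state = {"MM_inventory": 50, "FT_orders": 20}
rates = {"mm_to_mm": 0.8, "mm_to_ft": 0.2}   # made-up channel rates

def propensities(s):
    return {
        "mm_to_mm": rates["mm_to_mm"] * s["MM_inventory"],
        "mm_to_ft": rates["mm_to_ft"] * s["MM_inventory"] * (s["FT_orders"] > 0),
    }

t, hot_potato_passes = 0.0, 0
random.seed(1)
while t < 10.0 and state["MM_inventory"] > 0:
    props = propensities(state)
    total = sum(props.values())
    if total == 0:
        break
    t += random.expovariate(total)       # exponential waiting time
    pick = random.uniform(0, total)      # choose next reaction by weight
    if pick < props["mm_to_mm"]:
        hot_potato_passes += 1           # inventory circulates amongst MMs
    else:
        state["MM_inventory"] -= 1       # absorbed by a fundamental trader
        state["FT_orders"] -= 1
print(f"t={t:.2f}: {hot_potato_passes} hot-potato passes, "
      f"{state['MM_inventory']} units left with market makers")
```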

Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus

Procedia PDF Downloads 49
1367 An Investigation on Opportunities and Obstacles on Implementation of Building Information Modelling for Pre-fabrication in Small and Medium Sized Construction Companies in Germany: A Practical Approach

Authors: Nijanthan Mohan, Rolf Gross, Fabian Theis

Abstract:

The conventional method used in the construction industry often results in significant rework, since most decisions are taken on site under the pressure of project deadlines and due to improper information flow, which results in ineffective coordination. However, today’s architecture, engineering, and construction (AEC) stakeholders demand faster and more accurate deliverables, efficient buildings, and smart processes, which turns out to be a tall order. Hence, the building information modelling (BIM) concept was developed as a solution to fulfill the above-mentioned necessities. Even though BIM has been successfully implemented in much of the world, it is still in the early stages in Germany, since stakeholders are sceptical of its reliability and efficiency. Due to the huge capital requirement, small and medium-sized construction companies are still reluctant to implement the BIM workflow in their projects. The purpose of this paper is to analyse the opportunities and obstacles to implementing BIM for prefabrication. Among all the advantages of BIM, prefabrication is chosen for this paper because it has a vital impact on both the time and the cost factors of a construction project. The positive impact of prefabrication can be explicitly observed by the project stakeholders and participants, which helps overcome the scepticism of small-scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach, which was executed with two case studies. The first case study represents on-site prefabrication, and the second was done for off-site prefabrication. It was planned in such a way that the first case study gives the workers at the site first-hand experience with the BIM model, so that they can make full use of it, the model being a better representation than the traditional 2D plan. The main aim of the first case study is to create confidence in the implementation of BIM models; this was followed by the execution of off-site prefabrication in the second case study. Based on the case studies, a cost and time analysis was made, and it is inferred that the implementation of BIM for prefabrication can reduce construction time, ensure minimal or no waste, provide better accuracy, and require less problem-solving at the construction site. It is also observed that this process requires more planning time and better communication and coordination between different disciplines such as mechanical, electrical, plumbing, and architecture, which was the major obstacle to successful implementation. This paper was written from the perspective of small and medium-sized mechanical contracting companies in the private building sector in Germany.

Keywords: building information modelling, construction wastes, pre-fabrication, small and medium sized company

Procedia PDF Downloads 84
1366 Numerical and Experimental Investigation of Fracture Mechanism in Paintings on Wood

Authors: Mohammad Jamalabadi, Noemi Zabari, Lukasz Bratasz

Abstract:

Panel paintings (complex multi-layer structures consisting of a wood support and a paint layer composed of a preparatory layer of gesso, paints, and varnishes) are among the categories of cultural objects most vulnerable to relative humidity fluctuations and are frequently found in museum collections. The current environmental specifications in museums have been derived using the criterion of crack initiation in an undamaged, usually new, gesso layer laid on wood. In reality, historical paintings exhibit complex crack patterns called craquelures. The present paper analyses the structural response of a paint layer with a virtual network of rectangular cracks under environmental loadings, using a three-dimensional model of a panel painting. Two modes of loading are considered: one induced by the one-dimensional moisture response of the wood support, termed the tangential loading, and the other, isotropic, induced by the drying shrinkage of the gesso layer. The superposition of the two modes is also analysed. The modelling showed that the minimum distance between cracks parallel to the wood grain depended on the gesso stiffness under the tangential loading. In spite of a non-zero Poisson's ratio, gesso cracks perpendicular to the wood grain could not be generated by the moisture response of the wood support. The isotropic drying shrinkage of the gesso produced cracks that were almost evenly spaced in both directions. The modelling results were cross-checked against crack patterns obtained on a mock-up of a panel painting exposed to a number of extreme environmental variations in an environmental chamber.
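
The notion of fracture saturation behind these results can be illustrated with a one-dimensional shear-lag estimate: between two existing cracks, stress is transferred back into the gesso through the interface, and once the cracks are closer than a critical spacing the mid-point stress can no longer reach the layer's strength, so no new crack forms. The Python sketch below uses this simplification with assumed material parameters; it illustrates the saturation mechanism only and is not the authors' three-dimensional model:

    import math

    # Illustrative (assumed) parameters for a gesso layer on wood:
    E_f = 5e9        # gesso Young's modulus, Pa
    h = 200e-6       # gesso layer thickness, m
    k = 1e12         # interface shear stiffness, Pa/m
    eps = 0.004      # mismatch strain from the wood moisture response
    strength = 5e6   # gesso tensile strength, Pa

    beta = math.sqrt(k / (E_f * h))   # shear-lag decay parameter, 1/m

    def midpoint_stress(spacing):
        # Peak film stress halfway between two cracks a distance `spacing` apart.
        L = spacing / 2.0
        return E_f * eps * (1.0 - 1.0 / math.cosh(beta * L))

    # Scan spacings: once the midpoint stress stays below the strength,
    # no new crack can form between existing ones (the pattern has saturated).
    for spacing_mm in (0.5, 1.0, 2.0, 5.0, 10.0):
        s = midpoint_stress(spacing_mm * 1e-3)
        verdict = "new crack possible" if s >= strength else "saturated"
        print(f"{spacing_mm:5.1f} mm -> {s / 1e6:6.2f} MPa ({verdict})")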

Keywords: fracture saturation, surface cracking, paintings on wood, wood panels

Procedia PDF Downloads 229
1365 Investigation of Detectability of Orbital Objects/Debris in Geostationary Earth Orbit by Microwave Kinetic Inductance Detectors

Authors: Saeed Vahedikamal, Ian Hepburn

Abstract:

Microwave Kinetic Inductance Detectors (MKIDs) are considered among the most promising photon detectors of the future for many astronomical applications, such as exoplanet detection. Their advantages stem from single-photon sensitivity (ranging from the UV through the optical to the near infrared), photon energy resolution, and high temporal resolution (of order microseconds). There has been substantial progress in the development of these detectors, and megapixel MKID arrays are now possible. The unique capability of recording an incident photon and its energy (or wavelength), while also registering its time of arrival to within a microsecond, enables an array of MKIDs to produce a four-dimensional data block (x, y, z, t), where x and y are the spatial axes, z is the per-pixel spectral axis, and t is the per-pixel temporal axis. This offers the possibility that the spectrum and brightness variation of any detected piece of space debris, as a function of time, might provide a unique identifier or fingerprint. Such a fingerprint, obtained from any object identified in multiple detections by different observers, has the potential to determine the orbital features of the object and could be used for tracking. Modelling performed so far shows that with a 20 cm telescope located at an astronomical observatory (e.g. La Palma, Canary Islands), we could detect sub-centimetre objects at GEO. Considering a Lambertian sphere with 10% reflectivity (the albedo of the Moon), we anticipate the following for a GEO object: a 10 cm object imaged in a 1 second capture, a 1.2 cm object for a 70 second integration, or a 0.65 cm object for a 4 minute integration. We present details of our modelling and the potential instrument for a dedicated GEO surveillance system.
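
The scale of these detections can be checked with a back-of-envelope photon budget: a sun-illuminated Lambertian sphere reflects a fraction of the solar constant, the reflected light spreads out over the observer distance, and a 20 cm aperture collects the residual trickle of photons. The Python sketch below encodes this estimate under strong simplifying assumptions (full illumination, no phase-angle, atmospheric, or instrument losses, a single effective wavelength, and an arbitrary detection threshold of 60 photons); the numbers are order-of-magnitude illustrations, not the authors' detailed model:

    import math

    SOLAR_CONST = 1361.0      # W/m^2 at Earth, approximate
    GEO_DIST = 3.6e7          # m, observer-to-object distance at GEO (approx.)
    ALBEDO = 0.1              # Lambertian reflectivity used in the abstract
    TEL_DIAM = 0.20           # m, telescope aperture from the abstract
    WAVELENGTH = 550e-9       # m, assumed effective optical wavelength
    H, C = 6.626e-34, 3.0e8   # Planck constant, speed of light

    def photon_rate(diameter_m):
        # Solar power the sphere intercepts and reflects.
        reflected = SOLAR_CONST * ALBEDO * math.pi * (diameter_m / 2) ** 2
        # Crude Lambertian spreading over a hemisphere of radius GEO_DIST.
        irradiance = reflected / (2 * math.pi * GEO_DIST ** 2)
        collected = irradiance * math.pi * (TEL_DIAM / 2) ** 2
        return collected / (H * C / WAVELENGTH)   # photons per second

    for d_cm in (10.0, 1.2, 0.65):
        rate = photon_rate(d_cm / 100.0)
        print(f"{d_cm:5.2f} cm object: ~{rate:8.2f} photons/s "
              f"-> ~{60 / max(rate, 1e-9):8.1f} s to collect 60 photons")

For the 10 cm case this yields roughly ten photons per second at the aperture, which is why single-photon, near-zero-dark-count detectors such as MKIDs are attractive for this application.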

Keywords: space debris, orbital debris, detection system, observation, microwave kinetic inductance detectors, MKID

Procedia PDF Downloads 64
1364 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN

Authors: Mohamed Gaafar, Evan Davies

Abstract:

Many municipalities in Canada and abroad use chloramination to disinfect drinking water, so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequent public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into storm sewers from different water uses and thus reach freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. The current study therefore investigates this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay; the work presented here is the first stage of that study. The 30th Avenue basin in southern Edmonton was chosen as a case study because this well-developed basin contains various land-use types, including commercial, industrial, residential, parks, and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for flood modelling. However, that model was built only to the trunk level, meaning that only the main drainage features were represented; it had also never been calibrated and was known to consistently compute pipe flows higher than the observed values, which is unsuitable for studying water quality. The first goal was therefore to complete and update all stormwater network components. Available GIS data were then used to calculate catchment properties such as slope, length, and imperviousness. To calibrate and validate the model, data from two temporary pipe-flow monitoring stations, collected during the previous summer, were used along with records from two permanent stations covering eight consecutive summer seasons. The effect of various hydrological parameters on the model results was investigated, and the results were found to be sensitive to the ratio of impervious area. The catchment length, although calculated, was also tested, because it is only an approximate representation of the catchment shape, and the surface roughness coefficients were calibrated as well. The computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, the lower value pertaining to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63 respectively, were all within acceptable ranges.
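
The quoted goodness-of-fit statistics can be computed from paired observed and simulated flow series in a few lines. The Python sketch below uses the conventional definitions (Pearson correlation, relative peak and volume errors, extreme residuals) on made-up hydrographs; the study's exact definitions and data may differ:

    import numpy as np

    def calibration_stats(observed, simulated):
        obs = np.asarray(observed, dtype=float)
        sim = np.asarray(simulated, dtype=float)
        # Pearson correlation between observed and simulated flows.
        r = np.corrcoef(obs, sim)[0, 1]
        # Relative error in the flow peak, percent.
        peak_err = 100.0 * (sim.max() - obs.max()) / obs.max()
        # Relative error in total flow volume, percent.
        vol_err = 100.0 * (sim.sum() - obs.sum()) / obs.sum()
        # Largest over- and under-prediction at any time step.
        diff = sim - obs
        return {"r": r, "peak_err_%": peak_err, "volume_err_%": vol_err,
                "max_pos_diff": diff.max(), "max_neg_diff": diff.min()}

    # Example with made-up hydrographs (m^3/s):
    obs = [0.2, 0.8, 2.4, 3.1, 2.0, 1.1, 0.5]
    sim = [0.3, 0.9, 2.2, 3.0, 2.3, 1.0, 0.4]
    print(calibration_stats(obs, sim))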

Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN

Procedia PDF Downloads 268