Search results for: grid code

1550 The Factors That Constitute the Interaction between Teachers and Students: An Empirical Study on the Notion of Framing

Authors: Tien-Hui Chiang

Abstract:

Code theory, proposed by Basil Bernstein, indicates that framing can be viewed as the core element constituting the phenomenon of cultural reproduction because it regulates the transmission of pedagogical information. Strong framing increases the social-relation boundary between a teacher and pupils, which obstructs information transmission, so that in order to improve underachieving students' academic performance, teachers need to reduce the strength of framing. Weak framing enables them to transform academic knowledge into commonsense knowledge expressed in everyday language. This study posits that most teachers deliver strong framing because their beliefs are mainly confined to instrumental rationality, which blunts their critical minds. This situation can lead them to view the normal-distribution bell curve of students' academic performance as a natural outcome. In order to examine the interplay between framing, instrumental rationality, and pedagogical action, questionnaires were completed by over 5,000 primary school teachers in Henan province, China, selected by stratified sampling. The statistical results show that most teachers employed psychological concepts to measure students' academic performance and, in turn, educational inequity was legitimized as a natural outcome of the efficiency-led approach. Such an efficiency-led mindset made them act as agents practicing the mechanism of social control and, in turn, sustaining the phenomenon of cultural reproduction.

Keywords: code, cultural reproduction, framing, instrumental rationality, social relation and interaction

Procedia PDF Downloads 147
1549 The Human Rights Code: Fundamental Rights as the Basis of Human-Robot Coexistence

Authors: Gergely G. Karacsony

Abstract:

Fundamental rights are the result of thousands of years of progress in legislation, adjudication, and legal practice. They serve as the framework for the peaceful cohabitation of people, protecting the individual from any abuse by the government or violation by other people. Artificial intelligence, however, is a development of the very recent past and one of the most important prospects for the future. Artificial intelligence is now capable of communicating and performing actions the same way as humans; such acts are sometimes impossible to tell apart from actions performed by flesh-and-blood people. In a world where human-robot interactions are more and more common, a new framework for peaceful cohabitation has to be found. Artificial intelligence, being able to take part in almost any kind of interaction where personal presence is not necessary without being recognized as a non-human actor, is now able to break the law, violate people's rights, and disturb social peace in many other ways. Therefore, a code of peaceful coexistence has to be found or created. We should consider whether human rights can serve as the code of ethical and rightful conduct in the new era of artificial intelligence and human coexistence. In this paper, we examine the applicability of fundamental rights to human-robot interactions as well as to actions of artificial intelligence performed without any human interaction whatsoever. Robot ethics had been a topic of discussion and debate in philosophy, ethics, computing, legal sciences, and science fiction writing long before the first functional artificial intelligence was introduced. Legal science and legislation have approached artificial intelligence from different angles, regulating different areas (e.g. data protection, telecommunications, copyright issues), but they are only chipping away at the mountain of legal issues concerning robotics. For a widely acceptable and permanent solution, a more general set of rules would be preferable to the detailed regulation of specific issues. We argue that human rights as recognized worldwide can be adapted to serve as a guideline and a common basis for the coexistence of robots and humans. This solution has many virtues: people do not need to adjust to a completely unknown set of standards, the system has proved itself able to withstand the trials of time, legislation is easier, and the actions of non-human entities are more easily adjudicated within their own framework. In this paper, we examine the system of fundamental rights (as defined in the most widely accepted source, the 1966 UN Convention on Human Rights) and try to adapt each individual right to the actions of artificial intelligence actors; in each case, we examine the possible effects of such an approach on the legal system and on society, and finally we also examine its effect on the IT industry.

Keywords: human rights, robot ethics, artificial intelligence and law, human-robot interaction

Procedia PDF Downloads 236
1548 Stability of Concrete Moment Resisting Frames in View of Current Code Requirements

Authors: Mahmoud A. Mahmoud, Ashraf Osman

Abstract:

In this study, the different approaches currently followed by design codes to assess the stability of buildings utilizing the concrete moment-resisting-frame structural system are evaluated. For this purpose, a parametric study was performed. It involved analyzing a group of concrete moment-resisting frames having different slenderness ratios (height/width ratios), designed for different lateral-to-vertical load ratios, and constructed using ordinary reinforced concrete and high-strength concrete, checked for stability and overall buckling using code approaches and computer buckling analysis. The objectives were to examine the influence of such parameters, which are directly linked to the frames' lateral stiffness, on building stability, and to evaluate the code approaches in view of the buckling analysis results. Based on this study, it was concluded that the buildings most susceptible to instability and to magnification of second-order effects are those having high aspect ratios (height/width ratio), having a low lateral-to-vertical load ratio, and utilizing construction materials of high strength. In addition, the study showed that the instability limits imposed by codes are mainly mathematical limits intended to ensure reliable analysis, not physical ones, and that they are in general conservative. Also, it has been shown that the upper limit set by one of the codes, that the second-order moment for structural elements should be limited to 1.4 times the first-order moment, is not justified; instead, the overall story check is more reliable.
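
Illustrative of the overall story check favored above, the sketch below computes a story stability coefficient and the resulting second-order amplifier. It follows the common form theta = P*Delta/(V*h) with amplifier 1/(1 - theta), in the style of code P-Delta checks; the numerical inputs are hypothetical, not taken from the paper's parametric study.

```python
# Hedged sketch of an overall story stability check; all values hypothetical.

def story_stability_coefficient(P_gravity, drift, V_story, h_story):
    """theta = total gravity load * inter-story drift / (story shear * story height)."""
    return P_gravity * drift / (V_story * h_story)

def second_order_amplifier(theta):
    """Approximate amplification of first-order effects: 1 / (1 - theta)."""
    if theta >= 1.0:
        raise ValueError("theta >= 1: story is unstable")
    return 1.0 / (1.0 - theta)

# Hypothetical story: 12 MN gravity load, 15 mm drift, 800 kN shear, 3.2 m height
theta = story_stability_coefficient(P_gravity=12e6, drift=0.015, V_story=800e3, h_story=3.2)
print(f"theta = {theta:.3f}, amplifier = {second_order_amplifier(theta):.3f}")
```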

Keywords: buckling, lateral stability, p-delta, second order

Procedia PDF Downloads 253
1547 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance; high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer – and, in turn, program performance – falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers’ hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
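
Peridot itself is not shown here, so the following Python sketch is only an analogy of its key idea, "optimizations as metaprograms": rewrite rules expressed as data, matched against program terms by a small unification routine. All names are illustrative.

```python
# Rewrite rules as data; pattern variables are strings starting with '?'.

def unify(pattern, term, subst):
    """Return an extended substitution if pattern matches term, else None."""
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            subst = unify(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(template, subst):
    if isinstance(template, str) and template.startswith('?'):
        return subst[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, subst) for t in template)
    return template

def rewrite(term, rules):
    """Apply the first matching rule at each node, bottom-up."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t, rules) for t in term)
    for lhs, rhs in rules:
        s = unify(lhs, term, {})
        if s is not None:
            return substitute(rhs, s)
    return term

# Two classic simplification rules, written as data rather than compiler code:
rules = [
    (('mul', '?x', 1), '?x'),   # x * 1  ->  x
    (('add', '?x', 0), '?x'),   # x + 0  ->  x
]
print(rewrite(('add', ('mul', 'y', 1), 0), rules))  # -> 'y'
```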

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 83
1546 A Framework for Secure Information Flow Analysis in Web Applications

Authors: Ralph Adaimy, Wassim El-Hajj, Ghassen Ben Brahim, Hazem Hajj, Haidar Safa

Abstract:

Huge amounts of data and personal information are sent to and retrieved from web applications on a daily basis. Every application has its own confidentiality and integrity policies. Violating these policies can have a broad negative impact on the involved company's financial status, while enforcing them is very hard even for developers with a good security background. In this paper, we propose a framework that enforces security-by-construction in web applications. Minimal developer effort is required, in the sense that the developer only needs to annotate database attributes with a security class. The web application code is then converted into an intermediary representation, called the Extended Program Dependence Graph (EPDG). Using the EPDG, the provided annotations are propagated to the application code and run against generic security enforcement rules that were carefully designed to detect insecure information flows as early as they occur. As a result, any violation of the data's confidentiality or integrity policies is reported. As a proof of concept, two PHP web applications, Hotel Reservation and Auction, were used for testing and validation. The proposed system was able to catch all the existing insecure information flows at their source. Moreover, to highlight the simplicity of the suggested approach compared to existing approaches, two professional web developers assessed the annotation tasks needed in the presented case studies and provided very positive feedback on the simplicity of the annotation task.
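
As a rough illustration of the checking idea (not the paper's actual EPDG implementation), the sketch below propagates confidentiality labels along dependence edges from annotated database attributes and reports any sink that receives a higher label than it may accept. The graph, labels, and two-level lattice are illustrative.

```python
from collections import deque

LATTICE = {'public': 0, 'confidential': 1}   # simple two-level security lattice

def check_flows(edges, source_labels, sink_caps):
    """edges: (src, dst) dependence edges; source_labels: annotated sources;
    sink_caps: maximum label each sink may receive."""
    label = dict(source_labels)
    graph = {}
    for s, d in edges:
        graph.setdefault(s, []).append(d)
    queue = deque(source_labels)
    while queue:                             # forward label propagation
        n = queue.popleft()
        for m in graph.get(n, []):
            if LATTICE[label.get(m, 'public')] < LATTICE[label[n]]:
                label[m] = label[n]
                queue.append(m)
    return [sink for sink, cap in sink_caps.items()
            if LATTICE[label.get(sink, 'public')] > LATTICE[cap]]

edges = [('db.password', 'tmp'), ('tmp', 'html_output')]
print(check_flows(edges, {'db.password': 'confidential'}, {'html_output': 'public'}))
# -> ['html_output']: confidential data reaches a public sink
```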

Keywords: web applications security, secure information flow, program dependence graph, database annotation

Procedia PDF Downloads 468
1545 Role of Energy Storage in Renewable Electricity Systems in the Grid of Ethiopia

Authors: Dawit Abay Tesfamariam

Abstract:

Ethiopia's Climate-Resilient Green Economy (ECRGE) strategy focuses mainly on the generation and proper utilization of renewable energy (RE). Nonetheless, the current electricity generation of the country is dominated by hydropower. Data collected in 2016 by Ethiopian Electric Power (EEP) indicate that the intermittent RE sources, solar and wind, accounted for only 8%. On the other hand, the EEP electricity generation plan for 2030 indicates that 36.1% of the energy generation share will be covered by solar and wind sources. Thus, a case study was initiated to model and compute the balance and consumption of electricity in three different scenarios (2016, 2025, and 2030) using the EnergyPLAN Model (EPM). Initially, the model was validated using the 2016 annual power generation data before conducting the EnergyPLAN (EP) analysis for the two predictive scenarios. The EP simulation analysis for 2016 showed that no significant excess power was generated. The EPM was then applied to analyze the role of energy storage in RE in the Ethiopian grid system. The results of the EP simulation analysis showed that there will be excess production in 2025 of 402 MW on average and 7,963 MW at maximum. The excess power occurred in the three rainy months of the year (June, July, and August). The outcome of the model also showed that in the dry seasons of the year, there would be excess power production in the country. Consequently, based on the validated EP outcomes, there is good reason to consider other alternatives for the utilization and storage of excess RE. Thus, from the scenarios and model results obtained, it is realistic to infer that if the excess power is utilized with a storage system, it can stabilize the grid system and be exported to support the economy. Therefore, researchers must continue to upgrade current and upcoming storage systems to synchronize with the potential that can be generated from renewable energy.
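
A minimal sketch of the balance computation behind the reported average/maximum excess figures, assuming made-up hourly series rather than EEP data:

```python
import numpy as np

def excess_stats(generation_mw, demand_mw):
    """Excess = generation - demand, clipped at zero; return (mean, max)."""
    excess = np.clip(np.asarray(generation_mw) - np.asarray(demand_mw), 0, None)
    return excess.mean(), excess.max()

gen = [5200, 6100, 7900, 4800]   # hypothetical hourly generation, MW
dem = [5000, 5200, 5300, 5100]   # hypothetical hourly demand, MW
avg, peak = excess_stats(gen, dem)
print(f"average excess {avg:.0f} MW, maximum excess {peak:.0f} MW")
```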

Keywords: renewable energy, power, storage, wind, energy plan

Procedia PDF Downloads 75
1544 Evaluation of Prestressed Reinforced Concrete Slab Punching Shear Using Finite Element Method

Authors: Zhi Zhang, Liling Cao, Seyedbabak Momenzadeh, Lisa Davey

Abstract:

Reinforced concrete (RC) flat slab-column systems are commonly used in residential or office buildings, as the flat slab provides efficient clearance, resulting in more stories at a given height than a regular reinforced concrete beam-slab system. Punching shear of slab-column joints is a critical component of two-way reinforced concrete flat slab design. The unbalanced moment at the joint is transferred via slab moment and shear forces. ACI 318 provides an equation to evaluate the punching shear under the design load. It is important to note that the design code considers gravity and environmental loads in the design load combinations, while it does not consider the effect of differential foundation settlement, which may be a governing load condition for the slab design. This paper describes how prestressed reinforced concrete slab punching shear is evaluated based on ACI 318 provisions and finite element analysis. A prestressed reinforced concrete slab under differential settlements is studied using the finite element modeling methodology. The punching shear check equation is explained. The methodology to extract data for the punching shear check from the finite element model is described and correlated with the corresponding code provisions. The study indicates that the finite element analysis results should be carefully reviewed and processed in order to perform an accurate punching shear evaluation. Conclusions are made based on the case studies to help engineers understand the punching shear behavior in prestressed and non-prestressed reinforced concrete slabs.
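
For readers unfamiliar with the code check being discussed, the sketch below implements a basic ACI 318-style two-way (punching) shear check without unbalanced-moment transfer. The three-limit concrete stress expression is quoted from memory of ACI 318-14 Table 22.6.5.2 and should be verified against the code; all numbers are hypothetical.

```python
import math

def aci_punching_capacity_psi(fc_psi, beta, alpha_s, b0_in, d_in, lam=1.0):
    """Concrete two-way shear stress capacity vc (psi), three-limit form."""
    root = lam * math.sqrt(fc_psi)
    return min(4.0 * root,
               (2.0 + 4.0 / beta) * root,              # beta: long/short column side
               (2.0 + alpha_s * d_in / b0_in) * root)  # alpha_s: 40 interior, 30 edge, 20 corner

def punching_check(Vu_lb, fc_psi, beta, alpha_s, b0_in, d_in, phi=0.75):
    vu = Vu_lb / (b0_in * d_in)   # demand stress on the critical perimeter
    vc = aci_punching_capacity_psi(fc_psi, beta, alpha_s, b0_in, d_in)
    return vu, phi * vc, vu <= phi * vc

# Hypothetical interior column: Vu = 120 kip, f'c = 5000 psi, square column
print(punching_check(Vu_lb=120e3, fc_psi=5000, beta=1.0, alpha_s=40, b0_in=100.0, d_in=8.0))
```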

Keywords: differential settlement, finite element model, prestressed reinforced concrete slab, punching shear

Procedia PDF Downloads 125
1543 The Construction of the Bridge between Mrs Dalloway and To the Lighthouse: The Combination of Codes and Metaphors in the Structuring of the Plot in the Work of Virginia Woolf

Authors: María Rosa Mucci

Abstract:

Tzvetan Todorov (1971) designs a model of narrative transformation in which the plot is constituted by difference and resemblance. This binary opposition is a synthesis of a central figure within narrative discourse: metaphor. Narrative operates as a metaphor, since it combines different actions through similarities within a common plot. It sounds paradoxical, however, that metonymy and not metaphor should be the key figure within narrative: it is metonymy that keeps the actions moving within the story through syntagmatic relations. By the same token, this articulation of verbs makes it possible for the reader to engage in a dynamic interaction with the text, responding to the plot and mediating meanings with the contradictory external world. As Roland Barthes (1957) points out, there are two codes that are irreversible within the process: the code of actions and the code of enigmas. Virginia Woolf constructs her plots through a process of symbolism; a scene is always enduring, not only because it stands for something else but also because it connotes it. The reader is forced to elaborate the meaning at a mythological level beyond the lines. In this research, we follow a qualitative content analysis, coding language through the proairetic (actions) and hermeneutic (enigmas) codes in Barthes' terms. Two novels in particular engage the reader in this process of construction: Mrs Dalloway (1925) and To the Lighthouse (1927). The bridge from the first to the second brings memories of childhood, allowing for the discovery of the enigmas hidden between the lines. What survives? Who survives? It is the reader's task to unravel these codes and rethink the dialogue between plot and reader, contributing to the predominance of texts and the textuality of narratives.

Keywords: metonymy, code, metaphor, myth, textuality

Procedia PDF Downloads 54
1542 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions

Authors: Gaurangi Saxena, Ravindra Saxena

Abstract:

Lately, cloud computing has been used to enhance the ability to attain corporate goals more effectively and efficiently at lower cost. This new computing paradigm has emerged as a powerful tool for the optimum utilization of resources and for gaining competitiveness through cost reduction, achieving business goals with greater flexibility. Realizing the importance of this new technique, most of the well-known companies in the computer industry, like Microsoft, IBM, Google, and Apple, are spending millions of dollars on researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that by using the right middleware, a cloud computing system can execute all the programs a normal computer could run. Potentially, everything from the simplest generic word-processing software to highly specialized and customized programs designed for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments but also support "interactive user-facing applications" such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm: it draws on existing technologies and approaches, such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they do not have it available on site. Cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end. Prominent service providers like Amazon, Google, SUN, IBM, Oracle, Salesforce, etc. are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases, and applications. Application services can include email, office applications, finance, video, audio, and data processing. By using a cloud computing system, a company can improve its customer relationship management. A CRM cloud computing system may be highly useful in delivering to a sales team a blend of unique functionalities to improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss the application of cloud computing in different disciplines of management, especially in the field of marketing, with special reference to the use of cloud computing in CRM. The study concludes that a CRM cloud computing platform helps a company track data such as orders, discounts, references, competitors, and many more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently at a lower cost, gain competitive advantage.

Keywords: cloud computing, competitive advantage, customer relationship management, grid computing

Procedia PDF Downloads 307
1541 3-D Modeling of Particle Size Reduction from Micro to Nano Scale Using Finite Difference Method

Authors: Himanshu Singh, Rishi Kant, Shantanu Bhattacharya

Abstract:

This paper adopts a top-down approach to mathematical modeling to predict size reduction from the micro to the nano scale through persistent etching. The process is simulated using a finite difference approach. Previously, various researchers have simulated the etching process for 1-D and 2-D substrates. The process consists of two sub-processes: 1) convection-diffusion in the etchant domain; 2) chemical reaction at the surface of the particle. Since the process requires analysis along a moving boundary, the partial differential equations involved cannot be solved using conventional methods. In 1-D, this problem is very similar to the Stefan problem of a moving ice-water boundary. A fixed-grid method based on the finite volume method is very popular for modelling etching on one- and two-dimensional substrates. Other popular approaches include the moving grid method and the level set method. In this work, the finite difference method was used to discretize the spherical diffusion equation. Due to the symmetrical distribution of the etchant, the angular terms in the equation can be neglected. The concentration is assumed to be constant at the outer boundary. At the particle boundary, the concentration of the etchant is assumed to be zero, since the rate of reaction is much faster than the rate of diffusion. The rate of reaction is proportional to the velocity of the moving boundary of the particle. Modelling of the above reaction was carried out using Matlab. The initial particle size was taken to be 50 microns. The density, molecular weight, and diffusion coefficient of the substrate were taken as 2.1 g/cm³, 60, and 10⁻⁵ cm²/s, respectively. The etch rate was found to decline initially and gradually became constant at 0.02 µm/s (1.2 µm/min). The concentration profile was plotted against space at different time intervals. Initially, a sudden drop is observed at the particle boundary due to the high etch rate. This change becomes more gradual with time due to the decline of the etch rate.
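
The paper's model was built in Matlab; the following Python sketch recasts the same idea in simplified form: explicit finite differences for spherical diffusion outside the particle, a zero-concentration surface, a fixed far-field value, and a front that recedes in proportion to the surface flux. The grid is held fixed for brevity (the paper tracks the moving boundary properly), and the etchant concentration is a placeholder.

```python
import numpy as np

D = 1e-5              # diffusion coefficient, cm^2/s (as in the abstract)
C_inf = 1e-5          # far-field etchant concentration, mol/cm^3 (hypothetical)
rho, M = 2.1, 60.0    # substrate density (g/cm^3) and molecular weight
R = 25e-4             # initial particle radius, cm (50 micron diameter)
R_out = 10 * R        # outer boundary of the etchant domain
N = 200
dr = (R_out - R) / N
dt = 0.2 * dr**2 / D  # explicit stability requires dt <= dr^2 / (2D)

r = np.linspace(R, R_out, N + 1)
C = np.full(N + 1, C_inf)
C[0] = 0.0            # zero etchant concentration at the particle surface

for step in range(5000):
    # spherical Laplacian: C_rr + (2/r) C_r, angular terms neglected
    lap = (C[2:] - 2*C[1:-1] + C[:-2]) / dr**2 \
        + (2.0 / r[1:-1]) * (C[2:] - C[:-2]) / (2*dr)
    C[1:-1] += dt * D * lap
    C[0], C[-1] = 0.0, C_inf            # boundary conditions
    flux = D * (C[1] - C[0]) / dr        # one-sided surface flux
    R -= dt * (M / rho) * flux           # front recedes with the surface flux
print(f"particle radius after {5000*dt:.2f} s: {R*1e4:.2f} microns")
```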

Keywords: particle size reduction, micromixer, FDM modelling, wet etching

Procedia PDF Downloads 425
1540 Determining Factors of Suspended Glass Systems with Pre-Stressed Cable Truss

Authors: Cemil Atakara, Hüseyin Eryaman

Abstract:

The use of glass as a building envelope has been increasing since the twentieth century. For more transparency and dematerialization, new glass facade types have emerged in the past two decades that depend on the point-fixed glazing system (PFGS). The aim of this study is to analyze the PFGS systems used on glass curtain walls according to their types and degrees and their architectural and structural effects. This system is desired because it enhances the transparency of the facade and minimizes the frame and profile components. PFGS led to new structural elements that use cables, rods, and trusses in the design of glass building facades; this structural element, called the suspended glass system with pre-stressed cable truss (SGSPCT), was used for the first time in 1980 in the Serres building. Twenty glass buildings designed with different systems were analyzed during this study. After these analyses, five selected SGSPCT buildings were analyzed in depth, and one skeletal-frame building selected from Lefkosa was redesigned according to the analysis results. The selected buildings cover various cable-truss system typologies and degrees. The methodology of this study is the building analysis method and a literature survey drawing on books, articles, magazines, drawings, internet sources, and the applied connection details of the glass buildings. The five selected glass buildings and the case building have been analyzed in detail through their architectural drawings, photographs, and details. A gridshell structure can be compared with a shell structure; it consists of discrete members connecting nodal points. As these nodal points lie on the surface of an imaginary shell, the two shapes function almost identically. The difference between shell and gridshell structures lies in the fact that, due to their free form and thus the presence of bending forces, gridshells are required to resist loading through their cross-section. The study consists of four chapters, including the introduction. General information on SGSPCT and glazing systems, together with cable-glass and gridshell systems, is presented in the first chapter. Structural features, typologies, the transparency principle, and analytical information on the systems of the selected buildings are explained in the second chapter, along with structural analyses and schematic drawings of the plans and sections. The detailed analyses of the case building are carried out in the third chapter. The final chapter discusses SGSPCT with respect to the case building and the selected buildings, compares SGSPCT and cable-truss systems with other systems in terms of their advantages and disadvantages, and draws conclusions on their use with glass.

Keywords: cable truss, glass, grid shell, transparency

Procedia PDF Downloads 408
1539 Analysis of Waterjet Propulsion System for an Amphibious Vehicle

Authors: Nafsi K. Ashraf, C. V. Vipin, V. Anantha Subramanian

Abstract:

This paper reports the design of a waterjet propulsion system for an amphibious vehicle based on the circulation distribution over the camber line for the sections of the impeller and stator. In contrast with conventional waterjet designs, the inlet duct is straight, with water entry parallel and in line with the nozzle exit. The extended nozzle after the stator bowl makes the flow more axial, further improving thrust delivery. A waterjet works on the principle of volume flow rate through the system and, unlike the propeller, is an internal flow system. The major difference between the propeller and the waterjet occurs at the flow passing the actuator. Though a ducted propeller could constitute the equivalent of waterjet propulsion, in a realistic situation the nozzle area of the waterjet would be proportionately larger relative to the inlet area and propeller disc area. Moreover, the flow rate through the impeller disk is controlled by the nozzle area. For these reasons, the waterjet design is based on pump systems rather than propellers, and it is therefore important to bring out the characteristics of the flow from this point of view. The analysis is carried out using computational fluid dynamics. The design of the waterjet propulsion is carried out by adapting axial-flow pump design, and the performance analysis was done with a three-dimensional computational fluid dynamics (CFD) code. Given the varying environmental conditions, the necessity of high discharge and low head, and the space confinement of the given amphibious vehicle, an axial pump design is suitable. The major problem of the inlet velocity distribution is the large variation of velocity in the circumferential direction, which gives rise to heavy blade loading that varies with time. The cavitation criteria have also been taken into account as per hydrodynamic pump design practice. Generally, a waterjet propulsion system can be divided into the inlet, the pump, the nozzle, and the steering device. The pump further comprises an impeller and a stator. Analytical and numerical approaches, such as a RANSE solver, have been undertaken to understand the performance of the designed waterjet propulsion system. Unlike in the case of propellers, the analysis was based on the head-flow curve together with the efficiency and power curves. The modeling of the impeller is performed using a rigid body motion approach. The realizable k-ϵ model has been used for turbulence modeling. Appropriate boundary conditions are applied to the domain, and domain size and grid dependence studies are carried out.
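
A small aside on the pump-selection reasoning (high discharge and low head pointing to an axial pump): the dimensionless specific speed makes this quantitative. The operating point below is hypothetical, not the paper's.

```python
import math

def specific_speed(omega_rad_s, Q_m3s, H_m, g=9.81):
    """Dimensionless specific speed N_s = omega * sqrt(Q) / (g*H)^0.75."""
    return omega_rad_s * math.sqrt(Q_m3s) / (g * H_m) ** 0.75

# Hypothetical operating point: 1450 rpm, 0.6 m^3/s discharge, 4 m head.
# High N_s (roughly above ~2.5-3 in this form) favors an axial-flow pump.
omega = 1450 * 2 * math.pi / 60
print(f"N_s = {specific_speed(omega, 0.6, 4.0):.2f}")
```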

Keywords: amphibious vehicle, CFD, impeller design, waterjet propulsion

Procedia PDF Downloads 221
1538 '3D City Model' through Quantum Geographic Information System: A Case Study of Gujarat International Finance Tec-City, Gujarat, India

Authors: Rahul Jain, Pradhir Parmar, Dhruvesh Patel

Abstract:

Planning and drawing are important aspects of civil engineering. Computer-based urban models are used for testing theories about spatial location and the interaction between land uses and related activities. The planner's primary interest is in the creation of 3D models of buildings and in obtaining the terrain surface for urban morphological mapping, virtual reality, disaster management, fly-through generation, visualization, etc. 3D city models have a variety of applications in urban studies. Gujarat International Finance Tec-City (GIFT) is an ongoing construction site between Ahmedabad and Gandhinagar, Gujarat, India. It will be built on 3,590,000 m², between north latitudes 23°9’5’’N and 23°10’55’’N and east longitudes 72°42’2’’E and 72°42’16’’E. To develop 3D city models of GIFT City, the base map of the city was collected from the GIFT office. A Differential Global Positioning System (DGPS) was used to collect the Ground Control Points (GCPs) from the field. The GCPs were used for the registration of the base map in QGIS. The registered map was projected onto the WGS 84/UTM zone 43N grid and digitized with the help of various shapefile tools in QGIS. The approximate heights of the buildings to be built were collected from the GIFT office and entered in the attribute table of each layer created using the shapefile tools. The Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global (30 m × 30 m) grid data were used to generate the terrain of GIFT City. The Google Satellite Map was placed in the background to obtain the exact location of GIFT City. Various plugins and tools in QGIS were used to convert the raster layer of the base map of GIFT City into a 3D model. The fly-through tool was used for capturing and viewing the entire city area in 3D. This paper discusses all these techniques and their usefulness in 3D city model creation from the GCPs, base map, SRTM, and QGIS.
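
The core step of turning footprints plus the height attribute into a 3D model can be sketched without any GIS library; a real workflow would use the QGIS plugins and tools named above. Coordinates are hypothetical.

```python
def extrude_footprint(footprint_xy, base_z, height):
    """Extrude a 2D building footprint into a 3D prism; returns the
    bottom and top vertex rings (x, y, z)."""
    bottom = [(x, y, base_z) for x, y in footprint_xy]
    top = [(x, y, base_z + height) for x, y in footprint_xy]
    return bottom, top

# Hypothetical footprint in projected (UTM-like) coordinates, 45 m tall building
footprint = [(0.0, 0.0), (30.0, 0.0), (30.0, 20.0), (0.0, 20.0)]
bottom, top = extrude_footprint(footprint, base_z=55.0, height=45.0)
print(top)
```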

Keywords: 3D model, DGPS, GIFT City, QGIS, SRTM

Procedia PDF Downloads 242
1537 Sequence Component-Based Adaptive Protection for Microgrid-Connected Power Systems

Authors: Isabelle Snyder

Abstract:

Microgrid protection presents challenges to conventional protection techniques due to the low induced fault current. Protection relays in microgrid applications require a combination of settings groups to adjust based on the architecture of the microgrid in islanded and grid-connected modes. In a radial system where the microgrid is at the other end of the feeder, directional elements can be used to identify the direction of the fault current and switch settings groups accordingly (grid-connected or microgrid-connected). However, with multiple microgrid connections, this concept becomes more challenging, and the direction of the current alone is not sufficient to identify the source of the fault current contribution. ORNL has previously developed adaptive relaying schemes through other DOE-funded research projects, which will be evaluated and used as a baseline for this research. The four protection techniques in this study are the following: (1) Adaptive Current-only Protection System (ACPS), (2) Intentional Unbalanced Control for Protection Control (IUCPC), (3) Adaptive Protection System with Communication Controller (APSCC), and (4) Adaptive Model-Driven Protective Relay (AMDPR). The first two methods focus on identifying the islanded mode without communication, by monitoring the current sequence component generated by the system (ACPS) or induced via inverter control during islanded mode (IUCPC), to identify the islanding condition without communication at the relay and adjust the settings. These two methods are used as a backup to the APSCC, which relies on a communication network to communicate the islanded configuration to the system components. The fourth method relies on a short-circuit model inside the relay, used in conjunction with communication to adjust the system configuration, compute the fault current, and adjust the settings accordingly.
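
The sequence quantities that such schemes monitor come from the Fortescue transform; a compact version, with illustrative phasor values, is sketched below.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)                  # 120-degree rotation operator
A_inv = (1 / 3) * np.array([[1, 1,    1],
                            [1, a,    a**2],
                            [1, a**2, a]])

def sequence_components(Ia, Ib, Ic):
    """Return (I0, I1, I2): zero-, positive-, negative-sequence phasors."""
    return A_inv @ np.array([Ia, Ib, Ic])

# Slightly unbalanced phase set: a healthy system would give I0 ~ I2 ~ 0
I0, I1, I2 = sequence_components(100, 95 * a**2, 105 * a)
print(f"|I0|={abs(I0):.1f}  |I1|={abs(I1):.1f}  |I2|={abs(I2):.1f}")
```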

Keywords: adaptive relaying, microgrid protection, sequence components, islanding detection, communication controlled protection, integrated short circuit model

Procedia PDF Downloads 91
1536 Multiscale Simulation of Absolute Permeability in Carbonate Samples Using 3D X-Ray Micro Computed Tomography Images Textures

Authors: M. S. Jouini, A. Al-Sumaiti, M. Tembely, K. Rahimov

Abstract:

Characterizing the rock properties of carbonate reservoirs is highly challenging because of rock heterogeneities revealed at several length scales. In the last two decades, the Digital Rock Physics (DRP) approach has been implemented successfully in sandstone reservoirs in order to understand rock property behaviour at the pore scale. This approach uses 3D X-ray microtomography images to characterize the pore network and to simulate rock properties from these images. Even though DRP is able to predict realistic rock properties in sandstone reservoirs, it still suffers from the lack of a clear workflow for carbonate rocks. The main challenge is the integration of properties simulated at different scales in order to obtain the effective rock property of core plugs. In this paper, we propose several approaches to characterize the absolute permeability of carbonate core plug samples using a multi-scale numerical simulation workflow. We propose a procedure to simulate the porosity and absolute permeability of a carbonate rock sample using textures of micro-computed tomography images. First, we discretize the X-ray micro-CT image into a regular grid. Then, we use a textural parametric model to classify each cell of the grid using supervised classification. The main parameters are first- and second-order statistics such as the mean, variance, range, and autocorrelations computed from the sub-bands obtained after wavelet decomposition. Furthermore, we fill the permeability property in each cell using two strategies based on numerical simulation values obtained locally on subsets. Finally, we numerically simulate the effective permeability using a Darcy's law simulator. The results obtained for the studied carbonate sample show good agreement with the experimental property.
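
A hedged sketch of the texture-feature step using PyWavelets; the wavelet, decomposition level, and feature set are assumptions rather than the paper's exact parameters. The resulting feature vectors would feed the supervised classifier described above.

```python
import numpy as np
import pywt

def subband_features(tile, wavelet='db2', level=2):
    """Mean/variance/range per wavelet sub-band of a 2D grayscale tile."""
    coeffs = pywt.wavedec2(tile, wavelet=wavelet, level=level)
    # coeffs[0] is the approximation band; the rest are (H, V, D) triples
    bands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
    features = []
    for band in bands:
        band = np.asarray(band)
        features += [band.mean(), band.var(), band.max() - band.min()]
    return np.array(features)

tile = np.random.default_rng(0).random((64, 64))  # stand-in for a micro-CT tile
print(subband_features(tile).shape)               # 7 sub-bands * 3 stats = (21,)
```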

Keywords: multiscale modeling, permeability, texture, micro-tomography images

Procedia PDF Downloads 181
1535 Local Energy and Flexibility Markets to Foster Demand Response Services within the Energy Community

Authors: Eduardo Rodrigues, Gisela Mendes, José M. Torres, José E. Sousa

Abstract:

Following the liberalisation of the electricity sector, a progressive engagement of consumers has been considered and targeted by sector regulatory policies. With the objective of promoting market competition while protecting consumers' interests, by transferring some of the upstream benefits to the end users while reaching a fair distribution of system costs, different market models to value consumers' demand flexibility at the energy community level are envisioned. Local Energy and Flexibility Markets (LEFM) involve stakeholders interested in providing or procuring local flexibility for community, service, and market value. Under the scope of DOMINOES, a European research project supported by Horizon 2020, the local market concept developed is expected to:
• Enable consumer/prosumer empowerment, by allowing them to value their demand flexibility and Distributed Energy Resources (DER);
• Value local liquid flexibility to support innovative distribution grid management, e.g., local balancing and congestion management, voltage control and grid restoration;
• Ease the wholesale market uptake of DER, namely the aggregation of small-scale flexible loads as Virtual Power Plants (VPPs), facilitating Demand Response (DR) service provision;
• Optimise the management and local sharing of Renewable Energy Sources (RES) in Medium Voltage (MV) and Low Voltage (LV) grids, through energy transactions within an energy community;
• Enhance the development of energy markets through innovative business models, compatible with ongoing policy developments, that promote easy access of retailers and other service providers to the local markets, allowing them to take advantage of communities' flexibility to optimise their portfolios and subsequently their participation in external markets.
The general concept proposed foresees a flow of market actions, technical validations, subsequent deliveries of energy and/or flexibility, and balance settlements. Since the market operation should be dynamic and capable of addressing different requests, either prioritising balancing and prosumer services or the system's operation, direct procurement of flexibility within the local market must also be considered. This paper aims to highlight the research on the definition of suitable DR models to be used by the Distribution System Operator (DSO), in case of technical needs, and by the retailer, mainly for portfolio optimisation and to resolve imbalances. The models, to be proposed and implemented within relevant smart distribution grid and microgrid validation environments, are focused on day-ahead and intraday operation scenarios, for predictive management and near-real-time control respectively, under the DSO's perspective. At the local level, the DSO will be able to procure flexibility in advance to tackle different grid constraints (e.g., demand peaks, forecasted voltage and current problems, and maintenance works), or during the operating day, to answer unpredictable constraints (e.g., outages, frequency deviations, and voltage problems). Due to the inherent risks of their active market participation, retailers may resort to DR models to manage their portfolios, optimising their market actions and resolving imbalances. The interaction among the market actors involved in DR activation and flexibility exchange is explained by a set of sequence diagrams for the following DR modes of use, from the DSO and energy provider perspectives:
• DR for DSO’s predictive management – before the operating day; • DR for DSO’s real-time control – during the operating day; • DR for retailer’s day-ahead operation; • DR for retailer’s intraday operation.

Keywords: demand response, energy communities, flexible demand, local energy and flexibility markets

Procedia PDF Downloads 97
1534 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings

Authors: Youlu Huang, Huanjun Jiang

Abstract:

A large number of old masonry buildings built in the last century still remain in cities. They raise problems of poor safety, obsolescence, and non-habitability. In recent years, many old buildings have been reconstructed by renovating facades, strengthening, and adding floors. However, most projects only provide a solution for a single problem. It is difficult to comprehensively solve the problems of poor safety and lack of building functions. Therefore, a comprehensive functional renovation program was put forward: adding a reinforced concrete frame story at the bottom by integrally lifting the building and then strengthening it. Based on field measurements and the YJK calculation software, the seismic performance of an actual three-story masonry structure in Shanghai was identified. The results show that the material strength of the masonry is low, and the bearing capacity of some masonry walls does not meet the code requirements. An elastoplastic time history analysis of the structure was carried out using the SAP2000 software. The results show that under the 7-degree rare earthquake, the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the lateral stiffness ratio of the transition masonry story to the frame story), the bottom frame story was designed. The integral lifting process of the masonry building is introduced on the basis of many engineering examples. Strengthening methods for the bottom frame structure using a steel-reinforced mesh mortar surface layer (SRMM) and base isolators, respectively, are proposed. Time history analyses of the two kinds of structures, under the frequent earthquake, the fortification earthquake, and the rare earthquake, were conducted with the SAP2000 software. For the bottom frame structure, the results show that the seismic response of the masonry floors is significantly reduced after strengthening by either method, compared to the original masonry structure. Previous earthquake disasters indicated that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results show that under the rare earthquake, the inter-story displacement angle of the bottom frame floor meets the 1/100 limit value of the seismic code. The inter-story drift of the masonry floors of the base-isolated structure under different levels of earthquakes is similar to that of the structure with SRMM, while the base-isolated scheme better protects the bottom frame. Both strengthening methods can significantly improve the seismic performance of the bottom frame structure.

Keywords: old buildings, adding story, seismic strengthening, seismic performance

Procedia PDF Downloads 118
1533 Homogenization of Cocoa Beans Fermentation to Upgrade Quality Using an Original Improved Fermenter

Authors: Aka S. Koffi, N’Goran Yao, Philippe Bastide, Denis Bruneau, Diby Kadjo

Abstract:

Cocoa beans (Theobroma cacao L.) are the main component in chocolate manufacturing. The beans must first be correctly fermented. The traditional process for performing the first fermentation (lactic fermentation) often consists of confining the cacao beans using banana leaves or a fermentation basket, both of which lead to poor thermal insulation of the product and an inability to mix it. A box fermenter reduces this loss by using wood of large thickness (e > 3 cm), but mixing to homogenize the product is still hard to perform. Automatic fermenters are not profitable for most producers. Heat (T > 45°C) and acidity produced during fermentation by the microbial activity of yeasts and bacteria enable the emergence of the potential flavor and taste of the future chocolate. In this study, a cylindro-rotative fermenter (FCR-V1) was built, with coconut fibers used in its structure to confine heat. An axis of rotation (360°) was integrated to facilitate the turning and homogenization of the beans in the fermenter. This axis permits the fermenter to be placed in a vertical position during the anaerobic alcoholic phase of fermentation, and horizontally during the acetic phase, to take advantage of mid-height filling. For the circulation of airflow during turning in the acetic phase, two woven rattan grids were made, one for the top and one for the bottom of the fermenter. In order to reduce airflow during the acetic phase, two airtight covers are placed over the grids. The efficiency of turning by this kind of rotation, coupled with the homogenization of temperature brought about by the horizontal position during the acetic phase, contributes to a good proportion of well-fermented beans (83.23%). In addition, the beans' pH values ranged between 4.5 and 5.5. These values are ideal for the enzymatic activity involved in producing the aromatic compounds inside the beans. The regularity of mass loss throughout fermentation makes it possible to predict the drying surface corresponding to the quantity being fermented.

Keywords: cocoa fermentation, fermenter, microbial activity, temperature, turning

Procedia PDF Downloads 258
1532 Agile Software Effort Estimation Using Regression Techniques

Authors: Mikiyas Adugna

Abstract:

Effort estimation is among the activities carried out in software development processes. An accurate estimation model leads to project success. Agile effort estimation is a complex task because of the dynamic nature of software development. Researchers are still conducting studies on agile effort estimation to enhance prediction accuracy. For these reasons, we investigated and propose a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, setting the training set to 80% and the testing set to 20%. Following the train-test split, the two regression algorithms (Elastic Net and LASSO) are trained in two different phases. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, the grid search technique (a grid is used to tune and select the optimum parameters) and 5-fold cross-validation are used to obtain the final trained model. Finally, the final trained model is evaluated using the testing set. The experimental work is applied to an agile story-point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared methods. Of the two proposed algorithms, LASSO regression achieved the better predictive performance, with PRED(8%) and PRED(25%) results of 100.0, an MMRE of 0.0491, an MMER of 0.0551, an MdMRE of 0.0593, an MdMER of 0.063, and an MSE of 0.0007. The results imply that the trained LASSO regression model is the most acceptable and offers higher estimation performance than existing models in the literature.
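
The described pipeline maps naturally onto scikit-learn; the sketch below reconstructs it under stated assumptions (simulated data in place of the non-public story-point dataset, placeholder parameter grids).

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import MinMaxScaler

# Simulated stand-in for the 21-project story-point dataset (not public here)
rng = np.random.default_rng(42)
X = rng.random((210, 6))
y = 5.0 + X @ rng.random(6) + 0.1 * rng.standard_normal(210)

X = MinMaxScaler().fit_transform(X)          # preprocessing: normalization
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.20, random_state=42)

for name, est, grid in [
    ("LASSO", Lasso(), {"alpha": [0.001, 0.01, 0.1, 1.0]}),
    ("ElasticNet", ElasticNet(), {"alpha": [0.001, 0.01, 0.1], "l1_ratio": [0.2, 0.5, 0.8]}),
]:
    est.fit(X_tr, y_tr)                      # phase 1: default parameters
    search = GridSearchCV(est, grid, cv=5).fit(X_tr, y_tr)  # phase 2: grid search + 5-fold CV
    mre = np.abs(search.predict(X_te) - y_te) / np.abs(y_te)
    print(name, search.best_params_,
          f"MMRE={mre.mean():.4f}", f"PRED(25)={100 * (mre <= 0.25).mean():.1f}%")
```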

Keywords: agile software development, effort estimation, elastic net regression, LASSO

Procedia PDF Downloads 61
1531 Finite Volume Method in Loop Network in Hydraulic Transient

Authors: Hossain Samani, Mohammad Ehteram

Abstract:

In this paper, we consider the finite volume method (FVM) for water hammer analysis. We simulate these techniques on a looped network with complex boundary conditions. After comparing methods, we find the FVM to be the best method, and we compare the FVM results with experimental data. A finite volume scheme on a staggered grid is applied for solving the water hammer equations.
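
A minimal staggered-grid sketch of the water hammer equations referred to above, with heads at cell centers and flows at faces, explicit updates under the CFL limit dt <= dx/a, and illustrative parameters and boundary conditions (a single pipe with downstream valve closure, not the paper's looped network):

```python
import numpy as np

a, g = 1000.0, 9.81           # wave speed (m/s), gravity
L, D, f = 500.0, 0.2, 0.02    # pipe length (m), diameter (m), friction factor
A = np.pi * D**2 / 4
N = 50
dx = L / N
dt = 0.9 * dx / a             # CFL condition

H = np.full(N, 50.0)          # heads at cell centers (m)
Q = np.full(N + 1, 0.02)      # flows at cell faces (m^3/s); steady initial flow

for step in range(2000):
    # continuity: dH/dt + (a^2 / (g A)) dQ/dx = 0
    H -= dt * (a**2 / (g * A)) * (Q[1:] - Q[:-1]) / dx
    # momentum: dQ/dt + g A dH/dx + f Q|Q| / (2 D A) = 0 (interior faces)
    Q[1:-1] -= dt * (g * A * (H[1:] - H[:-1]) / dx
                     + f * Q[1:-1] * np.abs(Q[1:-1]) / (2 * D * A))
    Q[0] = Q[1]               # crude upstream boundary stand-in
    Q[-1] = 0.0               # instantaneous valve closure downstream
print(f"max head after closure: {H.max():.1f} m")
```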

Keywords: hydraulic transient, water hammer, interpolation, non-linear interpolation

Procedia PDF Downloads 344
1530 Audit Examining Maternity Assessment Suite Triage Compliance with Birmingham Symptom Specific Obstetric Triage System in a London Teaching Hospital

Authors: Sarah Atalla, Shubham Gupta, Kim Alipio, Tanya Maric

Abstract:

Background: Chelsea and Westminster Hospital has introduced the Birmingham Symptom Specific Obstetric Triage System (BSOTS) for patients who present acutely to the Maternity Assessment Suite (MAS), to prioritise care by urgency. The primary objective was to evaluate whether BSOTS was used appropriately to assess patients (defined as a 90% threshold). The secondary objective was to assess whether patients were seen within their designated triage timeframe (defined as a 90% threshold). Methodology: MAS records were retrospectively reviewed for a randomly selected one-week period of data from 2020 (21/09/2020 - 27/09/2020). 189 patients presented to MAS during this time. Data were collected on the presenting complaint, time of attendance (divided into four time categories), and triage colour code for the urgency of review by a doctor (red: immediately; orange: within 15 minutes; yellow: within 1 hour; green: within 4 hours). The number of triage waiting times that were breached and the outcome of each attendance were noted. Results: 49% of patients presenting to MAS during this period were triaged, which did not meet the 90% target. 67% of the patients who were triaged were seen within the timeframe designated by their triage colour code, which did not meet the 90% target. The most frequent reason for attendance was reduced fetal movements (30.5% of attendances). The busiest time of day (when most patients presented) was between 06:01 and 12:00, and this was also when the highest number of patients were not triaged (26 patients, or 54% of patients presenting in this time category). The most used triage category (59%) was the green colour code (to be seen by a doctor within 4 hours), followed by orange (24%), yellow (14%), and red (3%). 45% of triaged patients were admitted, whilst 55% were discharged. 62% of patients allocated to the green triage category were discharged, compared to 56% of yellow-category patients, 27% of orange-category patients, and 50% of red-category patients. The time of presentation to the hospital was also associated with the level of urgency and the outcome: patients presenting from 12:01 to 18:00 were more likely to be discharged (72% discharged) compared to those presenting from 00:01 to 06:00, of whom only 12.5% were discharged. Conclusion: The triage system for assessing the urgency of acutely presenting obstetric patients is only being effectively utilised for 49% of patients. There is potential to enhance the use of the triage system to improve efficiency and promote patient safety. MAS was busiest at 06:01 - 12:00, when the number of non-triaged patients was also highest; this highlights areas for improvement, including higher levels of staffing, better use of BSOTS to triage patients, and patient education.

Keywords: birmingham, BSOTS, maternal, obstetric, pregnancy, specific, symptom, triage

Procedia PDF Downloads 100
1529 The Position of Islamic Jurisprudence in UAE Private Law: Analytical Study

Authors: Iyad Jadalhaq, Mohammed El Hadi El Maknouzi

Abstract:

The place of Islamic law in the legal system of the UAE is best understood by introducing a differentiation between its role as a formal source of law and its influence as a material source of law. What this differentiation helps clarify is that the corpus of Islamic law constitutes a much deeper influence on adjudication, law-making, and the legal profession in the UAE than might appear at first sight from its formal position in the division of labor between courts or in legislative lists of sources of law. This paper aims to examine the role of Shariah in the UAE private law system by determining the comprehensiveness of Shariah in the legal system as a whole, and not merely in the limited sense of a source of law under Article 1 of the Civil Transactions Law. Turning to the role of Shariah as a formal source of law, it is useful to start from Article 1 of the UAE Civil Code. This provision lays out the formal hierarchy of sources of UAE private law, these being legislation, Islamic law, and custom. Hence, when deciding a civil dispute, a judge should first refer to positive legislation in force in the UAE. Lacking a rule to cover the case before him/her, the judge ought then to refer directly to Islamic law. If the matter lacks regulation in Islamic law, only then may the judge appeal to custom. Accordingly, in connection with civil transactions, Shariah is presented here, formally, as the second source of law. Still, Shariah addresses many other issues beyond civil transactions, including matters of morals, worship, and belief. However, in Article 1 of the UAE Civil Code, the reference to Islamic law ought to be understood as limited to the rules it lays out for civil transactions. There are four main sets of courts in the judicial system of the UAE, whose competence is based on whether a dispute touches upon civil and commercial transactions, criminal offenses, personal status, or labor relations. This sectorial and multi-tiered organization of courts as a whole constitutes an institutional development compatible with the long-standing affirmation in the Shariah of the legitimacy of the judiciary. Indeed, Islamic law authorizes the governing authorities to organize the judiciary, including by allocating specific types of cases to particular kinds of judges depending on the value of the case, or by assigning judges to a specific place in which they are to exercise their jurisdictional function. In view of this, the contemporary organization of courts in the UAE can be regarded as an organic adaptation, aligned with Shariah rules on the assignment of jurisdictional authority, to the growing complexity of modern society. Therefore, we can conclude that Shariah plays a comprehensive role in the entire legal system of the United Arab Emirates, encompassing legislation, the judicial system, and institutional and administrative work.

Keywords: Islamic jurisprudence, Shariah, UAE civil code, UAE private law

Procedia PDF Downloads 117
1528 An Evaluation of Full-Scale Reinforced Concrete and Steel Girder Composite Members Using High Volume Fly-Ash

Authors: Sung-Won Yoo, Chul-Hyeon Kang, Kyoung-Tae Park, Hae-Sik Woo

Abstract:

Numerous studies have been dedicated to high-volume fly-ash (HVFA) concrete, which uses a high volume of fly ash. The material properties of HVFA concrete were the main topics of early studies, and interest has gradually shifted toward the structural behavior of HVFA concrete, such as the elasticity modulus and the stress-strain relationship. However, structural studies have considered small-scale members limited to the scope of reinforced concrete only. Therefore, in this paper, on the basis of recent studies on structural behavior, two full-scale test members were manufactured with a 7.5 m span length, a fly ash replacement ratio of 50%, and a concrete compressive strength of 50 MPa, in order to evaluate the practicability of HVFA in real structures. In addition, two steel composite test members with a span length of 3 m were also manufactured using the same HVFA concrete for the same purpose. The test results for the full-scale RC members showed that the practical use of HVFA in such structures is feasible, despite small differences between the test results and existing research results on the stress-strain relationship. The flexural test revealed very little difference between 50% fly-ash concrete and ordinary concrete, in view of the similarity exhibited by the displacement and strain patterns. Since the experimental concrete shear strength was very close to the design code value, the existing design code can be applied. The flexural test results of the steel girder composite members show that composite behavior equivalent to that with normal concrete can be secured, provided the reinforcing bars are sufficiently arranged.

Keywords: composite, fly ash, full-scale, high volume

Procedia PDF Downloads 215
1527 Audit Committee Characteristics and Earnings Quality of Listed Food and Beverages Firms in Nigeria

Authors: Hussaini Bala

Abstract:

There are different opinions in the literature on the relationship between audit committee characteristics and earnings management, and this mix of opinions makes the direction of their relationship ambiguous. This study investigated the relationship between audit committee characteristics and the earnings management of listed food and beverages firms in Nigeria. The study covered a period of six years, from 2007 to 2012. Data for the study were extracted from the firms' annual reports and accounts. After running the OLS regression, a robustness test was conducted to validate the statistical inferences. The dependent variable was generated using a two-step regression in order to determine the discretionary accruals of the sample firms. Multiple regression was employed on the study data using a random-effects model. The results from the analysis revealed a significant association between audit committee characteristics and the earnings management of the firms. While audit committee size and the committee's financial expertise showed an inverse relationship with earnings management, the committee's independence and frequency of meetings are positively and significantly related to earnings management. In line with the findings, the study recommended, among other things, that listed food and beverages firms in Nigeria should strictly comply with the provisions of the Companies and Allied Matters Act (CAMA) and the SEC Code of Corporate Governance on issues regarding audit committees. Regulators such as the SEC should increase the minimum number of audit committee members with financial expertise and should also take a statutory position on the maximum number of audit committee meetings, which should not be greater than four meetings a year, as the SEC Code of Corporate Governance is silent on this.
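
The two-step discretionary-accrual estimation can be sketched as follows; the modified Jones model is used here as a common choice, since the abstract does not name its exact specification, and the data are simulated rather than the paper's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 120                                   # firm-year observations (simulated)
assets = rng.uniform(1e6, 1e8, n)         # lagged total assets
d_rev = rng.normal(0, 1e6, n)             # change in revenues
d_rec = rng.normal(0, 2e5, n)             # change in receivables
ppe = rng.uniform(1e5, 5e7, n)            # gross property, plant, equipment
total_accruals = rng.normal(0, 5e5, n)

# Step 1: fit normal accruals by OLS, all regressors scaled by lagged assets
X = sm.add_constant(np.column_stack([1 / assets,
                                     (d_rev - d_rec) / assets,
                                     ppe / assets]))
step1 = sm.OLS(total_accruals / assets, X).fit()

# Step 2: residuals are the discretionary accruals (the dependent variable)
discretionary = step1.resid
print(step1.params, discretionary[:3])
```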

Keywords: audit committee, earnings management, listed food and beverages firms, size, leverage, Nigeria

Procedia PDF Downloads 263
1526 Environmental Fatigue Analysis for Control Rod Drive Mechanisms Seal House

Authors: Xuejiao Shao, Jianguo Chen, Xiaolong Fu

Abstract:

In this paper, the elastoplastic strain correction factor computed by the ANSYS software was modified, and the in-air fatigue usage factor was corrected to account for the in-water environment under reactor operating conditions. The fatigue of key parts of the control rod drive mechanisms was analyzed considering the influence of environmental fatigue caused by the coolant in the reactor pressure vessel. The elastoplastic strain correction factor was modified by analyzing thermal and mechanical loads separately, with reference to the rules of RCC-M 2002. A new elastoplastic strain correction factor, Ke(mix), was computed to replace the original Ke from ANSYS when evaluating the fatigue produced by thermal and mechanical loads acting together. Based on Ke(mix), the usage cycles, and the fatigue design curves, the new range of primary plus secondary stresses was evaluated to obtain the final fatigue usage factor. The results show that the precision of the fatigue usage factor can be improved by using the modified Ke when the amplitude of the primary plus secondary stress is relatively large. An approach has been proposed for incorporating the effects of the reactor coolant environment on fatigue life in terms of an environmental correction factor Fen, defined as the ratio of the fatigue life in air at room temperature to the fatigue life in the water environment. To incorporate environmental effects into the RCC-M Code fatigue evaluations, the fatigue usage factor based on the current Code design curves is multiplied by the correction factor. The contribution of environmental effects to the results is discussed: fatigue life decreases logarithmically with decreasing strain rate below 10%/s, and becomes insensitive to strain rate at temperatures below 100°C.
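
A minimal sketch of the bookkeeping implied above: each in-air partial usage factor is multiplied by its environmental correction factor Fen, and the corrected contributions are summed. All numerical values below are illustrative, not taken from the paper.

```python
# Hedged sketch of environmental-fatigue correction: the in-air partial
# usage factors u_i from the RCC-M evaluation are each multiplied by
# Fen_i = N_air / N_water and summed into a cumulative usage factor.

def environmental_usage_factor(partial_usage, fen):
    """Cumulative usage factor with environmental correction.

    partial_usage: in-air partial usage factors u_i, one per load pair.
    fen: environmental correction factors Fen_i for the same load pairs.
    """
    assert len(partial_usage) == len(fen)
    return sum(u * f for u, f in zip(partial_usage, fen))

# Illustrative values (not from the paper):
u = [0.05, 0.12, 0.03]   # in-air partial usage factors
fen = [2.5, 4.1, 1.8]    # per-pair environmental factors
print(environmental_usage_factor(u, fen))  # cumulative U_en
```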

Keywords: environmental fatigue, usage factor, elastoplastic strain correction factor, environmental correction

Procedia PDF Downloads 315
1525 Radiation Protection Assessment of the Emission of a d-t Neutron Generator: Simulations with MCNP Code and Experimental Measurements in Different Operating Conditions

Authors: G. M. Contessa, L. Lepore, G. Gandolfo, C. Poggi, N. Cherubini, R. Remetti, S. Sandri

Abstract:

Practical guidelines are provided in this work for the safe use of a portable d-t Thermo Scientific MP-320 neutron generator producing pulsed 14.1 MeV neutron beams. The neutron generator's emission was tested experimentally and reproduced with the MCNPX Monte Carlo code. Simulations were particularly accurate: even the generator's internal components were reproduced on the basis of ad hoc X-ray radiographic images. Measurement campaigns were conducted under different standard experimental conditions using an LB 6411 neutron detector properly calibrated at three different energies, and simulated and experimental data were compared. In order to estimate the dose to the operator as a function of the operating conditions and the energy spectrum, the most appropriate value of the conversion factor between neutron fluence and ambient dose equivalent has been identified, taking into account both direct and scattered components. The results of the simulations show that, in real situations, when there is no information about the neutron spectrum at the point where the dose has to be evaluated, it is possible, and in any case conservative, to convert the measured count rate by means of the conversion factor corresponding to 14 MeV. This outcome has a general value when using this type of generator, enabling a more accurate design of experimental activities in different setups. The increasingly widespread use of this type of device for industrial and medical applications makes the results of this work of interest in different situations, especially as a support for the definition of appropriate radiation protection procedures and, in general, for risk analysis.
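
The conversion described above can be sketched as follows, assuming an illustrative detector response and a fluence-to-H*(10) coefficient at 14 MeV; both numbers are placeholders to be replaced with the calibrated detector response and the tabulated ICRP 74 coefficient.

```python
# Hedged sketch of the count-rate-to-dose conversion: a measured neutron
# count rate is turned into an ambient dose equivalent rate H*(10) via
# the detector response and a fluence-to-dose coefficient taken at
# 14 MeV, which the abstract argues is conservative when the local
# spectrum is unknown. Numeric values are illustrative placeholders.

COUNTS_PER_UNIT_FLUENCE = 3.0    # detector response, counts per (n/cm^2); assumed
H10_PER_FLUENCE_14MEV = 5.2e-10  # Sv per (n/cm^2) near 14 MeV; see ICRP 74 tables

def ambient_dose_rate(count_rate_cps: float) -> float:
    """Ambient dose equivalent rate in Sv/s from a count rate in counts/s."""
    fluence_rate = count_rate_cps / COUNTS_PER_UNIT_FLUENCE  # n/(cm^2 s)
    return fluence_rate * H10_PER_FLUENCE_14MEV              # Sv/s

print(ambient_dose_rate(150.0) * 3600 * 1e6)  # microSv/h for 150 cps
```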

Keywords: instrumentation and monitoring, management of radiological safety, measurement of individual dose, radiation protection of workers

Procedia PDF Downloads 129
1524 An Analytical Approach of Computational Complexity for the Method of Multifluid Modelling

Authors: A. K. Borah, A. K. Singh

Abstract:

In this paper, we deal with the building blocks of the computer simulation of multiphase flows. The whole simulation procedure can be viewed as two super-procedures: the implementation of the VOF method and the solution of the Navier-Stokes equations. Moreover, a sequential code for a Navier-Stokes solver has been studied.
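
The keywords point to the linear-algebra core of such a solver: a Krylov-subspace method (Bi-CGSTAB) with an ILUT-type preconditioner applied to the pressure system arising in a SIMPLE-style iteration. A generic SciPy sketch follows, under the assumption of a model five-point Laplacian; SciPy's spilu is a threshold-based incomplete LU, close in spirit to ILUT, and this is not the authors' code.

```python
# Hedged sketch: Bi-CGSTAB with an ILUT-style preconditioner on a model
# pressure-Poisson system (2-D five-point Laplacian, illustrative only).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100  # grid cells per side
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()  # model pressure matrix
b = np.ones(A.shape[0])                      # stand-in source term

# ILUT-style preconditioner: drop small entries, limit fill-in.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(b - A @ x))
```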

Keywords: bi-conjugate gradient stabilized (Bi-CGSTAB), ILUT function, Krylov subspace, multifluid flows, preconditioner, SIMPLE algorithm

Procedia PDF Downloads 525
1523 Improvement of Environment and Climate Change Canada's GEM-Hydro Streamflow Forecasting System

Authors: Etienne Gaborit, Dorothy Durnford, Daniel Deacu, Marco Carrera, Nathalie Gauthier, Camille Garnaud, Vincent Fortin

Abstract:

A new experimental streamflow forecasting system was recently implemented at Environment and Climate Change Canada's (ECCC) Canadian Centre for Meteorological and Environmental Prediction (CCMEP). It relies on CaLDAS (Canadian Land Data Assimilation System) for the assimilation of surface variables, and on a surface prediction system that feeds a routing component. The surface energy and water budgets are simulated with the SVS (Soil, Vegetation, and Snow) Land-Surface Scheme (LSS) at 2.5-km grid spacing over Canada. The routing component is based on the Watroute routing scheme at 1-km grid spacing for the Great Lakes and Nelson River watersheds. The system is run in two distinct phases: an analysis phase and a forecast phase. During the analysis phase, CaLDAS outputs are used to force the routing system, which performs streamflow assimilation. In forecast mode, the surface component is forced with the Canadian GEM atmospheric forecasts and is initialized with a CaLDAS analysis. The streamflow performance of this new system is presented for 2019 and compared to that of ECCC's current operational streamflow forecasting system, which differs from the new experimental system in many respects. The new streamflow forecasts are also compared to persistence. Overall, the new streamflow forecasting system shows promising results, highlighting the need for an elaborate assimilation phase before the forecasts are performed. However, the system is still experimental and is continuously being improved. Some major recent improvements are presented here and include, for example, the assimilation of snow cover data from remote sensing, a backward propagation of assimilated flow observations, a new numerical scheme for the routing component, and a new reservoir model.
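
As a sketch of the persistence comparison mentioned above: persistence carries the last observed flow forward, and a simple skill score compares the forecast's mean squared error against that baseline. The data and the choice of metric below are illustrative assumptions, not the verification protocol of the paper.

```python
# Hedged sketch: skill of a streamflow forecast relative to persistence.
import numpy as np

def skill_vs_persistence(obs, fcst):
    """1 - MSE(forecast)/MSE(persistence); > 0 means it beats persistence."""
    obs, fcst = np.asarray(obs, float), np.asarray(fcst, float)
    persist = obs[:-1]                           # yesterday's observation
    mse_f = np.mean((fcst[1:] - obs[1:]) ** 2)
    mse_p = np.mean((persist - obs[1:]) ** 2)
    return 1.0 - mse_f / mse_p

obs = [102., 110., 125., 140., 138., 120.]   # observed flows (m^3/s), invented
fcst = [100., 112., 122., 137., 141., 123.]  # system forecasts, invented
print(skill_vs_persistence(obs, fcst))
```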

Keywords: assimilation system, distributed physical model, offline hydro-meteorological chain, short-term streamflow forecasts

Procedia PDF Downloads 128
1522 Assessment of Air Pollutant Dispersion and Soil Contamination: The Critical Role of MATLAB Modeling in Evaluating Emissions from the Covanta Municipal Solid Waste Incineration Facility

Authors: Jadon Matthias, Cindy Dong, Ali Al Jibouri, Hsin Kuo

Abstract:

The environmental impact of emissions from the Covanta Waste-to-Energy facility in Burnaby, BC, was comprehensively evaluated, focusing on the dispersion of air pollutants and the subsequent assessment of heavy metal contamination in surrounding soils. A Gaussian Plume Model, implemented in MATLAB, was utilized to simulate the dispersion of key pollutants to understand their atmospheric behaviour and potential deposition patterns. The MATLAB code developed for this study enhanced the accuracy of pollutant concentration predictions and provided capabilities for visualizing pollutant dispersion in 3D plots. Furthermore, the code could predict the maximum concentration of pollutants at ground level, eliminating the need to use the Ranchoux model for predictions. Complementing the modelling approach, empirical soil sampling and analysis were conducted to evaluate heavy metal concentrations in the vicinity of the facility. This integrated methodology underscored the importance of computational modelling in air pollution assessment and highlighted the necessity of soil analysis to obtain a holistic understanding of environmental impacts. The findings emphasized the effectiveness of current emissions controls while advocating for ongoing monitoring to safeguard public health and environmental integrity.
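
A minimal sketch of the Gaussian plume calculation described above follows, in Python rather than MATLAB for illustration: ground-level concentration from an elevated point source with ground reflection, maximized along the plume centerline. The Briggs-style dispersion coefficients and the source parameters are assumptions, not the study's values.

```python
# Hedged sketch of a Gaussian plume with ground reflection; dispersion
# coefficients use Briggs-style rural fits for a neutral stability class
# (assumed), and all source parameters are illustrative.
import numpy as np

def plume_conc(Q, u, H, x, y):
    """Ground-level concentration (g/m^3) at downwind x, crosswind y (m).

    Q: emission rate (g/s); u: wind speed (m/s); H: effective stack height (m).
    """
    x = np.asarray(x, float)
    sig_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # sigma_y fit (assumed)
    sig_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)   # sigma_z fit (assumed)
    return (Q / (2 * np.pi * u * sig_y * sig_z)
            * np.exp(-y**2 / (2 * sig_y**2))
            * 2 * np.exp(-H**2 / (2 * sig_z**2)))  # factor 2: ground reflection

# Maximum ground-level centerline concentration over 10 km (illustrative):
x = np.linspace(50, 10_000, 2000)
c = plume_conc(Q=10.0, u=4.0, H=60.0, x=x, y=0.0)
print(f"max {c.max():.2e} g/m^3 at x = {x[c.argmax()]:.0f} m")
```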

Keywords: air emissions, Gaussian Plume Model, MATLAB, soil contamination, air pollution monitoring, waste-to-energy, pollutant dispersion visualization, heavy metal analysis, environmental impact assessment, emission control effectiveness

Procedia PDF Downloads 3
1521 Microstructure Evolution and Pre-transformation Microstructure Reconstruction in Ti-6Al-4V Alloy

Authors: Shreyash Hadke, Manendra Singh Parihar, Rajesh Khatirkar

Abstract:

In the present investigation, the variation of the microstructure with the heat treatment conditions, i.e., temperature and time, was observed. Ti-6Al-4V alloy was subjected to solution annealing treatments in the β (1066°C) and α+β (930°C and 850°C) phase fields, followed by quenching, air cooling, and furnace cooling to room temperature, respectively. The effect of solution annealing and cooling on the microstructure was studied using optical microscopy (OM), scanning electron microscopy (SEM), electron backscattered diffraction (EBSD), and x-ray diffraction (XRD). The chemical composition of the β phase for the different conditions was determined with the help of an energy dispersive spectrometer (EDS) attached to the SEM. Furnace cooling resulted in the development of a coarser (α+β) structure, while air cooling resulted in a much finer structure with Widmanstätten α morphology at the grain boundaries. Quenching from the solution annealing temperature formed α′ martensite, its proportion depending on the temperature within the β phase field. It is well known that the transformation of β to α follows the Burgers orientation relationship (OR). In order to reconstruct the microstructure of the parent β phase, a MATLAB code was written implementing the neighbor-to-neighbor, triplet, and Tari's methods. The code was tested on the annealed samples (1066°C solution annealing temperature followed by furnace cooling to room temperature). The parent phase data thus generated were then plotted using the TSL-OIM software. The reconstruction results of the three methods were compared and analyzed: Tari's approach (a clustering approach) gave better results than the neighbor-to-neighbor and triplet methods, while the triplet method required the least computation time of the three.
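
To make the Burgers OR concrete, the sketch below generates the twelve α-variant orientations that a single parent β orientation can produce; reconstruction methods such as those named above effectively invert this mapping by voting over neighboring α grains. This is an illustrative forward calculation in Python, not the authors' MATLAB code.

```python
# Hedged sketch: alpha variants from one beta orientation under the
# Burgers OR ({110}_beta || (0001)_alpha, <111>_beta || <11-20>_alpha).
import numpy as np
from scipy.spatial.transform import Rotation as Rot

# Burgers OR as a rotation from the beta (bcc) cube frame to an alpha
# (hcp) Cartesian frame with c || z and a1 || x:
x = np.array([1, -1, 1]) / np.sqrt(3)   # <111>_beta -> alpha a-axis
z = np.array([1, 1, 0]) / np.sqrt(2)    # {110}_beta normal -> alpha c-axis
y = np.cross(z, x)
R_burgers = np.vstack([x, y, z])        # rows map beta components to alpha frame

cubic = Rot.create_group('O').as_matrix()     # 24 proper rotations of beta
hex_sym = Rot.create_group('D6').as_matrix()  # 12 proper rotations of alpha

variants = []
for S in cubic:
    V = R_burgers @ S
    # Deduplicate up to hexagonal symmetry: 24 candidates -> 12 variants.
    if not any(np.allclose(H @ W, V, atol=1e-8)
               for W in variants for H in hex_sym):
        variants.append(V)

print(len(variants), "distinct alpha variants")  # expected: 12
```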

Keywords: Ti-6Al-4V alloy, microstructure, electron backscattered diffraction, parent phase reconstruction

Procedia PDF Downloads 443