Search results for: grey code
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1569

399 Application of Finite Volume Method for Numerical Simulation of Contaminant Transfer in a Two-Dimensional Reservoir

Authors: Atousa Ataieyan, Salvador A. Gomez-Lopera, Gennaro Sepede

Abstract:

Today, due to the growing urban population and, consequently, the increasing water demand in cities, the amount of contaminants entering water resources is increasing. This can impose harmful effects on the quality of the downstream water. Therefore, predicting the concentration of discharged pollutants at different times and distances within the area of interest is of high importance in order to carry out preventive and control measures, as well as to avoid consuming contaminated water. In this paper, the concentration distribution of an injected conservative pollutant in a square reservoir containing four symmetric blocks and three sources is simulated using the Finite Volume Method (FVM). For this purpose, after estimating the flow velocity, the classical Advection-Diffusion Equation (ADE) is discretized over the study domain by the Backward Time-Backward Space (BTBS) scheme. Then, the discretized equations for each node are derived according to the initial condition, boundary conditions and point contaminant sources. Finally, taking into account appropriate time and space steps, a computational code was set up in MATLAB. Contaminant concentration was then obtained at different times and distances. Simulation results show that the BTBS differencing scheme and FVM constitute an appropriate numerical approach for solving the partial differential equation of transport in the case of two-dimensional contaminant transfer in an advective-diffusive flow.
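The BTBS idea can be illustrated in one dimension. The sketch below is ours (Python rather than the authors' MATLAB, with all parameter values invented for illustration, not taken from the paper): backward (implicit) time stepping combined with backward (upwind) space differencing for advection and central differencing for diffusion.

```python
import numpy as np

# Hypothetical 1D illustration of a Backward Time-Backward Space (BTBS)
# scheme for the advection-diffusion equation dc/dt + u dc/dx = D d2c/dx2.
# Implicit time stepping with upwind advection is unconditionally stable
# for u > 0; all values below are invented for demonstration only.

def btbs_step(c, u, D, dx, dt):
    n = c.size
    A = np.zeros((n, n))
    a = u * dt / dx          # Courant number
    d = D * dt / dx**2       # diffusion number
    for i in range(1, n - 1):
        A[i, i - 1] = -(a + d)
        A[i, i] = 1.0 + a + 2.0 * d
        A[i, i + 1] = -d
    A[0, 0] = 1.0            # Dirichlet boundaries: fixed concentrations
    A[-1, -1] = 1.0
    return np.linalg.solve(A, c)

# Point source held at the left boundary of an initially clean domain
c = np.zeros(50)
c[0] = 1.0
for _ in range(100):
    c = btbs_step(c, u=0.5, D=0.01, dx=0.1, dt=0.05)
```

Because the scheme is implicit, each step solves a tridiagonal system; a production code would use a banded or sparse solver instead of the dense `np.linalg.solve` shown here.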

Keywords: BTBS differentiating scheme, contaminant concentration, finite volume, mass transfer, water pollution

Procedia PDF Downloads 120
398 Co-Culture with Murine Stromal Cells Enhances the In-vitro Expansion of Hematopoietic Stem Cells in Response to Low Concentrations of Trans-Resveratrol

Authors: Mariyah Poonawala, Selvan Ravindran, Anuradha Vaidya

Abstract:

Despite much progress in understanding the regulatory factors and cytokines that support the maturation of the various cell lineages of the hematopoietic system, the factors that govern the self-renewal and proliferation of hematopoietic stem cells (HSCs) are still a grey area of research. Hematopoietic stem cell transplantation (HSCT) has evolved over the years and gained tremendous importance in the treatment of both malignant and non-malignant diseases. However, factors such as graft rejection and multiple organ failure have challenged HSCT from time to time, underscoring the urgent need for the development of milder processes for successful hematopoietic transplantation. An emerging concept in the field of stem cell biology states that the interactions between the bone-marrow micro-environment and the hematopoietic stem and progenitor cells are essential for the regulation, maintenance, commitment and proliferation of stem cells. Understanding the role of mesenchymal stromal cells in modulating the functionality of HSCs is, therefore, an important area of research. Trans-resveratrol has been extensively studied for its various properties to combat and prevent cancer, diabetes, cardiovascular diseases, etc. The aim of the present study was to understand the effect of trans-resveratrol on HSCs using single and co-culture systems. We used KG1a cells since they are a well-accepted hematopoietic stem cell model system. Our preliminary experiments showed that low concentrations of trans-resveratrol stimulated the HSCs to undergo proliferation, whereas high concentrations of trans-resveratrol did not stimulate the cells to proliferate. We used a murine fibroblast cell line, M210B4, as a stromal feeder layer. On culturing the KG1a cells with M210B4 cells, we observed that the stimulatory and inhibitory effects of trans-resveratrol at low and high concentrations, respectively, were enhanced. 
Our further experiments showed that low concentrations of trans-resveratrol reduced the generation of reactive oxygen species (ROS) and nitric oxide (NO), whereas high concentrations increased the oxidative stress in KG1a cells. We speculated that the oxidative stress was perhaps imposing inhibitory effects at high concentration, and the same was confirmed by performing an apoptotic assay. Furthermore, cell cycle analysis and growth kinetic experiments provided evidence that low concentrations of trans-resveratrol reduced the doubling time of the cells. Our hypothesis is that at low concentrations of trans-resveratrol the cells are perhaps pushed into the G0/G1 phase and re-enter the cell cycle, resulting in their proliferation, whereas at high concentrations the cells are perhaps arrested at the G2/M phase or at cytokinesis and therefore undergo apoptosis. Liquid Chromatography-Quadrupole-Time of Flight-Mass Spectrometry (LC-Q-TOF-MS) analyses indicated the presence of trans-resveratrol and its metabolite(s) in the supernatant of the co-cultured cells incubated with a high concentration of trans-resveratrol. We conjecture that the metabolites of trans-resveratrol are perhaps responsible for the apoptosis observed at the high concentration. Our findings may shed light on the unsolved problems in the in vitro expansion of stem cells and may have implications in the ex vivo manipulation of HSCs for therapeutic purposes.

Keywords: co-culture system, hematopoietic micro-environment, KG1a cell line, M210B4 cell line, trans-resveratrol

Procedia PDF Downloads 237
397 The DAQ Debugger for iFDAQ of the COMPASS Experiment

Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius

Abstract:

In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, with thousands of lines of code, the debugging process is essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating its source, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
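The signal-handling idea can be sketched roughly as follows. This is our own minimal Python illustration, not the iFDAQ's actual implementation (which is C++): a process replaces the default handlers for fatal signals so that, instead of dying silently, it writes a diagnostic report for post-mortem analysis.

```python
import datetime
import signal
import sys
import traceback

# Minimal sketch (ours, not the real DAQ Debugger) of turning system
# signals into diagnostic reports instead of silent process death.

def write_report(signum, frame):
    # Collect the information a later investigation would need
    lines = [
        f"signal: {signal.Signals(signum).name}",
        f"time:   {datetime.datetime.now().isoformat()}",
        "stack:",
        *traceback.format_stack(frame),
    ]
    report = "\n".join(lines)
    sys.stderr.write(report + "\n")
    return report

# Register the handler for signals that would otherwise kill the process
for sig in (signal.SIGTERM, signal.SIGABRT):
    signal.signal(sig, write_report)
```

A real tool of this kind would also record register state, open file descriptors and per-thread backtraces, which in Python would require `faulthandler` or native-code support.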

Keywords: DAQ Debugger, data acquisition system, FPGA, system signals, Qt framework

Procedia PDF Downloads 264
396 Large Eddy Simulation with Energy-Conserving Schemes: Understanding Wind Farm Aerodynamics

Authors: Dhruv Mehta, Alexander van Zuijlen, Hester Bijl

Abstract:

Large Eddy Simulation (LES) numerically resolves the large energy-containing eddies of a turbulent flow, while modelling the small dissipative eddies. On a wind farm, these large scales carry the energy that wind turbines extract and are also responsible for transporting the turbines’ wakes, which may interact with downstream turbines and certainly with the atmospheric boundary layer (ABL). In this situation, it is important to conserve the energy that these wakes carry, which could otherwise be altered artificially through the numerical dissipation introduced by the schemes used for spatial discretisation and temporal integration. Numerical dissipation has been reported to cause the premature recovery of turbine wakes, leading to an overprediction of the power produced by wind farms. An energy-conserving scheme is free from numerical dissipation and ensures that the energy of the wakes is increased or decreased only by the action of molecular viscosity or the action of wind turbines (body forces). The aim is to create an LES package with energy-conserving schemes to simulate wind turbine wakes correctly, in order to gain insight into power production, wake meandering, etc. Such knowledge will be useful in designing more efficient wind farms with minimal wake interaction, which, if unchecked, could lead to major losses in energy production per unit area of the wind farm. For their research, the authors intend to use the Energy-Conserving Navier-Stokes code developed by the Energy Research Centre of the Netherlands.
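The conservation property can be demonstrated in a toy setting. The sketch below is our illustration (not the Energy-Conserving Navier-Stokes code itself): on a periodic grid with central differencing, the skew-symmetric form of the Burgers convective term contributes exactly zero to the discrete kinetic energy budget, so no numerical dissipation can artificially damp, or speed the recovery of, a wake-like profile.

```python
import numpy as np

# Sketch of the energy-conservation property on a 1D periodic grid.
# The velocity profile and grid are toy choices for demonstration only.

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.3 * np.cos(3.0 * x)   # a toy "wake" velocity profile

def ddx(f):
    # Central difference on a periodic grid: a skew-symmetric operator
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

# Skew-symmetric convective term for u*u_x: (1/3)[d(u^2)/dx + u du/dx]
conv = (ddx(u * u) + u * ddx(u)) / 3.0

# Discrete kinetic energy production by convection: zero to round-off,
# because sum(u * ddx(u*u)) = -sum(u*u * ddx(u)) for skew-symmetric ddx
energy_production = np.sum(u * conv) * dx
```

A one-sided (upwind) discretisation of the same term would generally yield a nonzero, dissipative energy production, which is exactly the artificial wake-recovery mechanism the abstract describes.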

Keywords: energy-conserving schemes, modelling turbulence, Large Eddy Simulation, atmospheric boundary layer

Procedia PDF Downloads 450
395 Systematic Review of Quantitative Risk Assessment Tools and Their Effect on Racial Disproportionality in Child Welfare Systems

Authors: Bronwen Wade

Abstract:

Over the last half-century, child welfare systems have increasingly relied on quantitative risk assessment tools, such as actuarial or predictive risk tools. These tools are developed by performing statistical analysis of how attributes captured in administrative data are related to future child maltreatment. Some scholars argue that attributes in administrative data can serve as proxies for race and that quantitative risk assessment tools reify racial bias in decision-making. Others argue that these tools provide more “objective” and “scientific” guides for decision-making instead of subjective social worker judgment. This study performs a systematic review of the literature on the impact of quantitative risk assessment tools on racial disproportionality; it examines methodological biases in work on this topic, summarizes key findings, and provides suggestions for further work. A search of CINAHL, PsycINFO, the ProQuest Social Science Premium Collection, and the ProQuest Dissertations and Theses Collection was performed. Academic and grey literature were included. The review includes studies that use quasi-experimental methods and development, validation, or re-validation studies of quantitative risk assessment tools. PROBAST (Prediction model Risk of Bias Assessment Tool) and CHARMS (CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies) were used to assess the risk of bias and guide data extraction for risk development, validation, or re-validation studies. ROBINS-I (Risk of Bias in Non-Randomized Studies of Interventions) was used to assess for bias and guide data extraction for the quasi-experimental studies identified. Due to heterogeneity among papers, a meta-analysis was not feasible, and a narrative synthesis was conducted. 11 papers met the eligibility criteria, and each has an overall high risk of bias based on the PROBAST and ROBINS-I assessments. 
This is deeply concerning, as major policy decisions have been made based on a limited number of studies with a high risk of bias. The findings on racial disproportionality have been mixed and depend on the tool and approach used. Authors use various definitions for racial equity, fairness, or disproportionality. These concepts of statistical fairness are connected to theories about the reason for racial disproportionality in child welfare or social definitions of fairness that are usually not stated explicitly. Most findings from these studies are unreliable, given the high degree of bias. However, some of the less biased measures within studies suggest that quantitative risk assessment tools may worsen racial disproportionality, depending on how disproportionality is mathematically defined. Authors vary widely in their approach to defining and addressing racial disproportionality within studies, making it difficult to generalize findings or approaches across studies. This review demonstrates the power of authors to shape policy or discourse around racial justice based on their choice of statistical methods; it also demonstrates the need for improved rigor and transparency in studies of quantitative risk assessment tools. Finally, this review raises concerns about the impact that these tools have on child welfare systems and racial disproportionality.

Keywords: actuarial risk, child welfare, predictive risk, racial disproportionality

Procedia PDF Downloads 33
394 Environmental Effect on Corrosion Fatigue Behaviors of Steam Generator Forging in Simulated Pressurized Water Reactor Environment

Authors: Yakui Bai, Chen Sun, Ke Wang

Abstract:

An experimental investigation of the environmental effect on the fatigue behavior of SA508 Gr.3 Cl.2 steam generator forging for the CAP1400 nuclear power plant has been carried out. In order to simulate actual loading conditions, a range of strain amplitudes was applied in different low cycle fatigue (LCF) tests at a strain rate of 0.01% s⁻¹. The current American Society of Mechanical Engineers (ASME) design fatigue code does not take full account of the interactions of environmental, loading, and material factors. A design fatigue model was constructed by taking environmentally assisted fatigue effects into account, and the corresponding design curves are given for the convenience of engineering applications. The corrosion fatigue experiment was performed in strain control mode in a 320 °C borated and lithiated water environment to evaluate the effects of the mixed environment on fatigue life. Stress corrosion cracking (SCC) in the steam generator large forging in the primary water of a pressurized water reactor was also observed. In addition, it is found that the corrosion fatigue (CF) life of SA508 Gr.3 Cl.2 decreases with increasing temperature in the water environment. The relationship between the reciprocal of temperature and the logarithm of fatigue life was found to be linear. Through experiments and subsequent analysis, the mechanisms of reduced low cycle fatigue life have been investigated for the steam generator forging.
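The reported linear relation between reciprocal temperature and log fatigue life can be fitted with a simple regression. The sketch below is ours; the temperatures and life values are invented placeholders, not the paper's measurements, and serve only to show the Arrhenius-type fit.

```python
import numpy as np

# Hedged sketch: fit log10(fatigue life) as a linear function of 1/T.
# All data below are invented placeholders for illustration only.

T = np.array([423.0, 473.0, 523.0, 573.0, 593.0])   # K, assumed values
log_life = np.array([4.1, 3.8, 3.5, 3.2, 3.1])      # log10(N_f), assumed

slope, intercept = np.polyfit(1.0 / T, log_life, 1)

def predict_log_life(temp_kelvin):
    # Linear in reciprocal absolute temperature, as reported in the paper
    return slope / temp_kelvin + intercept
```

With a positive slope, life decreases as temperature rises, matching the trend the abstract reports for SA508 Gr.3 Cl.2 in the water environment.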

Keywords: failure behavior, low alloy steel, steam generator forging, stress corrosion cracking

Procedia PDF Downloads 103
393 A Three-Dimensional (3D) Numerical Study of Roofs Shape Impact on Air Quality in Urban Street Canyons with Tree Planting

Authors: Bouabdellah Abed, Mohamed Bouzit, Lakhdar Bouarbi

Abstract:

The objective of this study is to investigate numerically the effect of roof shape on wind flow and pollutant dispersion in a street canyon with one row of trees of pore volume Pvol = 96%. A three-dimensional computational fluid dynamics (CFD) model for evaluating air flow and pollutant dispersion within an urban street canyon was developed using the Reynolds-averaged Navier-Stokes (RANS) equations, with the k-Epsilon EARSM turbulence model closing the equation system. The numerical model is implemented in the ANSYS-CFX code. Vehicle emissions were simulated as double line sources along the street. The numerical model was validated against a wind tunnel experiment, and the simulation agrees reasonably with the wind tunnel data. Having established this, the wind flow and pollutant dispersion in urban street canyons with six roof shapes were simulated. The results obtained in this work indicate that the flow in the 3D domain is more complicated; this complexity is increased by the presence of trees and the variability of the roof shapes. The results also indicate that the largest pollutant concentration level on the two walls (leeward and windward) is observed with the upwind wedge-shaped roof, while the smallest pollutant concentration level is observed with the dome-shaped roof. Finally, the corner eddies provide additional ventilation and lead to lower traffic pollutant concentrations at the street canyon ends.

Keywords: street canyon, pollutant dispersion, trees, building configuration, numerical simulation, k-Epsilon EARSM

Procedia PDF Downloads 337
392 Parameter Identification Analysis in the Design of Rock Fill Dams

Authors: G. Shahzadi, A. Soulaimani

Abstract:

This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis has been utilized for numerical simulation. Polynomial and neural network-based response surfaces have been generated to analyze the relationship between soil parameters and displacements, and the performance of these surrogate models has been analyzed and compared by evaluating the root mean square error. A comparative study has been done based on objective functions and optimization techniques. Objective functions are categorized by considering measured data with and without instrument uncertainty and are defined by the least squares method, which estimates the norm between the predicted displacements and the measured values. Hydro-Québec provided data sets of measured values for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, is used to obtain an optimum value. Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE) were compared for the minimization problem; although all these techniques take time to converge to an optimum value, PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis can be used effectively for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers in assessing dam performance and dam safety.
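The inverse-analysis loop can be sketched with a toy forward model. The code below is our illustration, not the authors' Plaxis workflow: a minimal PSO minimizes the least-squares misfit between "measured" displacements and the predictions of a stand-in forward model (here a simple polynomial, where the real study would call a finite element simulation or a trained response surface).

```python
import numpy as np

# Hedged sketch of PSO-based parameter identification. forward_model and
# all parameter values are invented stand-ins, not the paper's setup.

rng = np.random.default_rng(0)

def forward_model(p, x):
    # Toy surrogate for the FE model: displacement as a polynomial of x
    return p[0] * x + p[1] * x**2

x = np.linspace(0.0, 1.0, 20)
measured = forward_model(np.array([2.0, -0.5]), x)  # synthetic "data"

def objective(p):
    # Least-squares misfit between predicted and measured displacements
    return np.sum((forward_model(p, x) - measured) ** 2)

n_particles, n_iter = 30, 200
pos = rng.uniform(-5.0, 5.0, (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

Swapping `forward_model` for a surrogate trained on FE runs is what makes the approach affordable when each evaluation is an expensive simulation.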

Keywords: rockfill dam, parameter identification, stochastic analysis, regression, Plaxis

Procedia PDF Downloads 124
391 Behavior of the RC Slab Subjected to Impact Loading According to the DIF

Authors: Yong Jae Yu, Jae-Yeol Cho

Abstract:

In the design of structural concrete for impact loading, design or model codes often employ a dynamic increase factor (DIF) to impose the dynamic effect on the static response. Dynamic increase factors that are obtained from laboratory material test results, and that are commonly given as a function of strain rate only, differ considerably from each other depending on the design concepts of codes like ACI 349M-06, fib Model Code 2010 and ACI 370R-14. Because the dynamic increase factors currently adopted in the codes are too simple and too limited to cover a variety of material strengths, their application in practical design is questionable. In this study, the dynamic increase factors used in the three codes were validated through finite element analysis of reinforced concrete slab elements which were tested and reported by other researchers. The test was intended to simulate a wall element of the containment building in nuclear power plants under an impact scenario like the one the Pentagon experienced on September 11, 2001. The finite element analysis was performed using ABAQUS 6.10, and plasticity models were employed for the concrete and reinforcement. The dynamic increase factors given in the three codes were applied to the stress-strain curves of the materials, with strain rate adopted as the parameter for estimating them. Comparison of the test and analysis was done with regard to perforation depth, maximum deflection, and surface crack area of the slab. Consequently, it was found that the DIF has so great an effect on the behavior of reinforced concrete structures that it must be selected with care. The result implies that DIFs should be provided in design codes in a more refined form that considers the various influencing factors.
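Mechanically, applying a DIF amounts to scaling the static stress-strain curve by a strain-rate-dependent factor. The sketch below is ours; the power-law form, exponent and material values are hypothetical placeholders, not the expressions from ACI 349M-06, fib Model Code 2010 or ACI 370R-14, which a real analysis must use.

```python
import numpy as np

# Hedged illustration of scaling a static stress-strain curve by a DIF.
# The DIF form and every constant below are assumptions for demonstration,
# not the code-specified formulas.

def dif(strain_rate, static_rate=30e-6, alpha=0.014):
    # Mild power-law growth of strength with strain rate relative to the
    # quasi-static reference rate (hypothetical form)
    return (strain_rate / static_rate) ** alpha

static_strain = np.linspace(0.0, 0.003, 50)
E = 30e3  # MPa, assumed elastic modulus
static_stress = np.minimum(E * static_strain, 40.0)  # capped at 40 MPa

rate = 1.0  # 1/s, an impact-level strain rate (assumed)
dynamic_stress = static_stress * dif(rate)
```

In an FE model the scaled curve would be assigned to the material definition, with the strain rate either fixed per load case or updated locally during the analysis.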

Keywords: impact, strain rate, DIF, slab elements

Procedia PDF Downloads 278
390 A Three Elements Vector Valued Structure’s Ultimate Strength-Strong Motion-Intensity Measure

Authors: A. Nicknam, N. Eftekhari, A. Mazarei, M. Ganjvar

Abstract:

This article presents an alternative collapse capacity intensity measure in three-element form, which is influenced by the spectral ordinates at periods longer than the first-mode period at near- and far-source sites. A parameter, denoted by β, is defined by which the effects of spectral ordinates up to the effective period (2T_1) on the intensity measure are taken into account. The methodology permits meeting the hazard-levelled target extreme event in both probabilistic and deterministic forms. A MATLAB code involving OpenSees is developed to calculate the collapse capacities of 8 archetype RC structures of 2 to 20 stories for the regression process. The incremental dynamic analysis (IDA) method is used to calculate the structures’ collapse values, accounting for element stiffness and strength deterioration. The general near-field record set presented by FEMA is used in a series of nonlinear analyses. 8 linear relationships are developed for the 8 structures, leading to correlation coefficients up to 0.93. A collapse capacity near-field prediction equation is developed taking into account the results of the regression processes obtained from the 8 structures. The proposed prediction equation is validated against a set of actual near-field records, leading to good agreement. Implementation of the proposed equation for four archetype RC structures demonstrated collapse capacities at near-field sites different from those of FEMA. The differences are believed to be due to accounting for the spectral shape effects.
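A common simplification of such spectral-shape-sensitive intensity measures is the geometric mean of spectral ordinates between T_1 and the effective period 2T_1. The sketch below is ours and does not reproduce the paper's β-weighted formulation; the response spectrum used is a toy curve.

```python
import numpy as np

# Hedged sketch: averaged-spectral-acceleration intensity measure over
# [T1, 2*T1]. The spectrum below is a toy placeholder, not real data.

def sa_avg(periods, sa, t1):
    # Geometric mean of spectral ordinates between T1 and 2*T1
    mask = (periods >= t1) & (periods <= 2.0 * t1)
    return float(np.exp(np.mean(np.log(sa[mask]))))

periods = np.linspace(0.05, 5.0, 100)
sa = 1.0 / (1.0 + periods)      # toy decaying spectrum, in g
im = sa_avg(periods, sa, t1=1.0)
```

Because the average extends beyond T_1, this class of measure responds to period elongation as the structure softens toward collapse, which is the motivation for the extended-period range in the abstract.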

Keywords: collapse capacity, fragility analysis, spectral shape effects, IDA method

Procedia PDF Downloads 214
389 Modeling Sediment Transports under Extreme Storm Situation along Persian Gulf North Coast

Authors: Majid Samiee Zenoozian

Abstract:

The Persian Gulf is a marginal sea with an average depth of 35 m and a maximum depth of 100 m near its narrow entrance. Its elongated bathymetric axis separates two main geological provinces — the stable Arabian Foreland and the unstable Iranian Fold Belt — which are reflected in the contrasting coastal and bathymetric morphologies of Arabia and Iran. The sediments were sampled from 72 offshore stations during an oceanographic cruise in the winter of 2018. Throughout the observation period, several storms and river discharge events occurred, including the largest flood on record since 1982. Suspended-sediment concentration at all three sites varied in response to both wave resuspension and advection of river-derived sediments. We used hydrological models to estimate and compare the wave height and inundation distance required to transport the rocks inland. Our results establish that no known or plausible storm occurring on the Makran coast is capable of detaching and transporting the boulders. The fluid mud is consequently transported seaward due to gravitational forcing; the measured sediment concentration and velocity profiles on the shelf provide strong evidence to support this assumption. The sediment model is coupled with a 3D hydrodynamic module in the Environmental Fluid Dynamics Code (EFDC) model that provides data on estuarine circulation and salinity transport under normal temperature conditions. 3D sediment transport from the model simulations indicates dynamic sediment resuspension and transport near zones of highly productive oyster beds.

Keywords: sediment transport, storm, coast, fluid dynamics

Procedia PDF Downloads 91
388 Towards Sustainable Evolution of Bioeconomy: The Role of Technology and Innovation Management

Authors: Ronald Orth, Johanna Haunschild, Sara Tsog

Abstract:

The bioeconomy is an inter- and cross-disciplinary field covering a large number and wide scope of existing and emerging technologies. It has great potential to contribute to the transformation of the industrial landscape and ultimately drive the economy towards sustainability. However, the bioeconomy per se is not necessarily sustainable, and technology should be seen as an enabler rather than a panacea for all our ecological, social and economic issues. Therefore, to draw and maximize benefits from the bioeconomy in terms of sustainability, we propose that innovative activities should encompass not only novel technologies and new bio-based materials but also multifocal innovations. For multifocal innovation endeavors, innovation management plays a substantial role, as any innovation emerges in a complex iterative process in which communication and knowledge exchange among relevant stakeholders is pivotal. Although knowledge generation and innovation are at the core of the transition towards a more sustainable bio-based economy, to date there is a significant lack of concepts and models that approach the bioeconomy from an innovation management perspective. The aim of this paper is therefore two-fold. First, it inspects the role of a transformative approach in the adoption of a bioeconomy that contributes to environmental, ecological, social and economic sustainability. Second, it elaborates the importance of technology and innovation management as a tool for the smooth, prompt and effective transition of firms to the bioeconomy. We conduct a qualitative literature study on the sustainability challenges that the bioeconomy entails thus far, using the Science Citation Index and grey literature, as major economies (e.g., the EU, USA, China and Brazil) have pledged to adopt the bioeconomy and have released extensive publications on the topic. 
We draw an example from the forest-based business sector, which is transforming towards the new green economy more rapidly than expected, even though this sector has a long-established conventional business culture with a consolidated and fully fledged industry. Based on our analysis, we found that a successful transition to a sustainable bioeconomy is conditioned on heterogeneous and contested factors in terms of stakeholders, activities and modes of innovation. In addition, multifocal innovations occur when actors from interdisciplinary fields engage in intensive and continuous interaction, where the focus of innovation is allocated to a field of mutually evolving socio-technical practices that correspond to the aims of the novel paradigm of transformative innovation policy. By adopting an integrated and systems approach, as well as tapping into various innovation networks and joining global innovation clusters, firms have a better chance of creating an entirely new chain of value-added products and services. This requires professionals with certain capabilities and skills, such as foresight for future markets, the ability to deal with complex issues, the ability to guide responsible R&D, the ability to make strategic decisions, and the ability to manage in-depth innovation system analyses, including value chain analysis. Policy makers, on the other hand, need to acknowledge the essential role of firms in the transformative innovation policy paradigm.

Keywords: bioeconomy, innovation and technology management, multifocal innovation, sustainability, transformative innovation policy

Procedia PDF Downloads 109
387 Combined Civilian and Military Disaster Response: A Critical Analysis of the 2010 Haiti Earthquake Relief Effort

Authors: Matthew Arnaouti, Michael Baird, Gabrielle Cahill, Tamara Worlton, Michelle Joseph

Abstract:

Introduction: Over ten years after the magnitude-7.0 earthquake struck the capital of Haiti, impacting over three million people and leading to the deaths of over two hundred thousand, the multinational humanitarian response remains the largest disaster relief effort to date. This study critically evaluates the multi-sector and multinational disaster response to the earthquake, looking at how the lessons learned from this analysis can be applied to future disaster response efforts. We put particular emphasis on assessing the interaction between civilian and military sectors during this humanitarian relief effort, with the hope of highlighting how concrete guidelines are essential to improving future responses. Methods: An extensive scoping review of the relevant literature was conducted, in which library scientists carried out reproducible, verified systematic searches of multiple databases. Grey literature and hand searches were utilised to identify additional unclassified military documents for inclusion in the study. More than 100 documents were included for data extraction and analysis. Key domains were identified, including: Humanitarian and Military Response, Communication, Coordination, Resources, Needs Assessment and Pre-Existing Policy. Corresponding information and lessons learned pertaining to these domains were then extracted, detailing the barriers and facilitators to an effective response. Results: Multiple themes were noted which stratified all identified domains, including the lack of adequate pre-existing policy and extensive ambiguity of actors’ roles. This ambiguity was continually influenced by the complex role the United States military played in the disaster response. At a deeper level, the effects of neo-colonialism and concern about infringements on Haitian sovereignty played a substantial role at all levels: setting the pre-existing conditions and determining the redevelopment efforts that followed. 
Furthermore, external factors significantly impacted the response, particularly the loss of life within the political and security sectors. This was compounded by the destruction of important infrastructure systems - particularly electricity supplies and telecommunication networks, as well as air and seaport capabilities. Conclusions: This study stands as one of the first and most comprehensive evaluations, systematically analysing the civilian and military response - including their collaborative efforts. This study offers vital information for improving future combined responses and provides a significant opportunity for advancing knowledge in disaster relief efforts - which remains a more pressing issue than ever. The categories and domains formulated serve to highlight interdependent factors that should be applied in future disaster responses, with significant potential to aid the effective performance of humanitarian actors. Further studies will be grounded in these findings, particularly the need for greater inclusion of the Haitian perspective in the literature, through additional qualitative research studies.

Keywords: civilian and military collaboration, combined response, disaster, disaster response, earthquake, Haiti, humanitarian response

Procedia PDF Downloads 105
386 Rainwater Management: A Case Study of Residential Reconstruction of Cultural Heritage Buildings in Russia

Authors: V. Vsevolozhskaia

Abstract:

Since 1990, energy-efficient development concepts have constituted both a turning point in civil engineering and a challenge for an environmentally friendly future. Energy and water currently play an essential role in the sustainable economic growth of the world in general and Russia in particular: the efficiency of the water supply system is the second most important parameter for energy consumption according to the British assessment method, while the water-energy nexus has been identified as a focus for accelerating sustainable growth and developing effective, innovative solutions. The activities considered in this study were aimed at organizing and executing the renovation of property in residential buildings located in St. Petersburg, specifically buildings with local or federal historical heritage status under the control of the St. Petersburg Committee for the State Inspection and Protection of Historic and Cultural Monuments (KGIOP) and UNESCO. Even after reconstruction, these buildings still fall into energy efficiency class D. Russian Government Resolution No. 87 on the structure and required content of project documentation contains a section entitled ‘Measures to ensure compliance with energy efficiency and equipment requirements for buildings, structures, and constructions with energy metering devices’. It mentions the need to install collectors and meters, which only measure energy consumption, neglecting the main purpose: to make buildings more energy-efficient, potentially even reaching energy efficiency class A. The least-explored aspects of energy-efficient technology in the Russian Federation remain the water balance and the possibility of implementing rain- and meltwater collection systems. These modern technologies are used exclusively for new buildings due to the lack of a government directive to create project documentation during the planning of major renovations and reconstruction that would include the collection and reuse of rainwater. 
Energy-efficient technology for rain and meltwater collection is currently applied only to new buildings, even though research has shown that using rainwater is safe and offers a major step forward in terms of eco-efficiency and water innovation. Where conservation is mandatory, making changes to protected sites is prohibited. In most cases, the protected site is the cultural heritage building itself, including the main walls and roof; however, installing a second water supply system and collecting rainwater would not affect the protected building itself. Water efficiency in St. Petersburg is currently addressed only through the installation of pipeline shutoff valves that regulate flow. Technical guidelines for the use of grey- and/or rainwater to meet the needs of residential buildings during reconstruction or renovation have not yet been completed. The ideas for water treatment, collection, and distribution systems presented in this study should be taken into consideration during the reconstruction or renovation of residential cultural heritage buildings under the protection of KGIOP and UNESCO. The methodology also has the potential to be extended to other cultural heritage sites in northern countries and regions with an average annual rainfall of over 600 mm, enough to cover average toilet-flushing needs.

Keywords: cultural heritage, energy efficiency, renovation, rainwater collection, reconstruction, water management, water supply

Procedia PDF Downloads 80
385 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure

Authors: T. Nozu, K. Hibi, T. Nishiie

Abstract:

This paper discusses the applicability of a numerical model for predicting damage from an accidental hydrogen explosion in a hydrogen facility. The numerical model was based on the unstructured finite volume method (FVM) code “NuFD/FrontFlowRed”. For simulating unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model express the propagation of the premixed flame surface and the diffusion combustion process, respectively. To validate this numerical model, we simulated two types of previous hydrogen explosion tests. The first is an open-space explosion test, in which the source was a prismatic 5.27 m3 volume containing a 30% hydrogen-air mixture. A reinforced concrete wall was set 4 m away from the front surface of the source, which was ignited at the bottom center by a spark. The second is a vented-enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m2 on one side. The test was performed with ignition at the center of the wall opposite the vent. Hydrogen-air mixtures with hydrogen concentrations close to 18% vol. were used in the tests. The results of the numerical simulations are compared with the previous experimental data to assess the accuracy of the numerical model, and we verified that the simulated overpressures and flame time-of-arrival data were in good agreement with the results of the two explosion tests.

Keywords: deflagration, large eddy simulation, turbulent combustion, vented enclosure

Procedia PDF Downloads 225
384 Symmetric Key Encryption Algorithm Using Indian Traditional Musical Scale for Information Security

Authors: Aishwarya Talapuru, Sri Silpa Padmanabhuni, B. Jyoshna

Abstract:

Cryptography helps prevent threats to information security by providing various algorithms. This study introduces a new symmetric key encryption algorithm for information security based on "raagas", the traditional scales and note patterns of Indian music. The algorithm takes the plain text as input and begins its encryption process by randomly selecting a raaga from a list assumed to be shared by both sender and receiver. The plain text is associated with the selected raaga, and an intermediate cipher text is formed as the algorithm converts the plain text characters into other characters according to its rules. This intermediate cipher text is then arranged in various patterns over three rounds of encryption. The total number of rounds in the algorithm is a multiple of 3: the output of each sequence of three rounds is passed recursively as input to the next sequence until all rounds have been performed. The raaga selected by the algorithm and the number of rounds performed are specified at an arbitrary location in the key, together with other important information about the rounds of encryption, known to the sender and interpreted only by the receiver, which makes the algorithm difficult to attack. The key can be constructed of any number of bits without any restriction on size. A software application was also developed to demonstrate this encryption process: it takes the plain text as input and readily generates the cipher text as output. This algorithm therefore stands as a strong tool for information security.
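As a toy illustration of the scheme described above, the sketch below keys a substitution-permutation cipher to a raaga's note offsets and runs rounds in multiples of 3. The raaga encodings, the shift rule, and the reversal permutation are all hypothetical stand-ins; the abstract does not specify the paper's actual conversion rules.

```python
# Toy raaga-keyed symmetric cipher (illustrative only).
RAAGAS = {
    "mohanam": [0, 2, 4, 7, 9],        # hypothetical swara-offset encoding
    "hamsadhwani": [0, 2, 4, 7, 11],
}

def _shift(text, offsets, sign):
    # Vigenere-style substitution over printable ASCII (codes 32..126)
    return "".join(
        chr((ord(c) - 32 + sign * offsets[i % len(offsets)]) % 95 + 32)
        for i, c in enumerate(text)
    )

def encrypt(plaintext, raaga, rounds=3):
    assert rounds % 3 == 0              # per the abstract: rounds are multiples of 3
    offsets = RAAGAS[raaga]
    text = plaintext
    for _ in range(rounds):
        text = _shift(text, offsets, +1)[::-1]   # substitute, then permute
    return text

def decrypt(ciphertext, raaga, rounds=3):
    offsets = RAAGAS[raaga]
    text = ciphertext
    for _ in range(rounds):
        text = _shift(text[::-1], offsets, -1)   # undo permute, then substitute
    return text
```

Each decryption round exactly inverts an encryption round, so applying it the same number of times recovers the plain text.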

Keywords: cipher text, cryptography, plaintext, raaga

Procedia PDF Downloads 267
383 A Mixed 3D Finite Element for Highly Deformable Thermoviscoplastic Materials Under Ductile Damage

Authors: João Paulo Pascon

Abstract:

In this work, a mixed 3D finite element formulation is proposed to analyze thermoviscoplastic materials under large strain levels and ductile damage. To this end, a tetrahedral element of linear order is employed, considering a thermoviscoplastic constitutive law together with the neo-Hookean hyperelastic relationship and a nonlocal Gurson porous plasticity theory. The material model is capable of reproducing finite deformations, elastoplastic behavior, void growth, nucleation and coalescence, thermal effects such as plastic work heating and conductivity, strain hardening, and strain-rate dependence. The nonlocal character is introduced by means of a nonlocal parameter applied to the Laplacian of the porosity field. The element degrees of freedom are the nodal values of the deformed position, the temperature, and the nonlocal porosity field. The internal variables are updated at the Gauss points according to the yield criterion and the evolution laws, including the yield stress of the matrix, the equivalent plastic strain, the local porosity, and the plastic components of the Cauchy-Green stretch tensor. Two problems involving 3D specimens and ductile damage are numerically analyzed with the developed computational code: a necking problem and a notched sample. The effects of the nonlocal parameter and mesh refinement are investigated in detail. Results indicate the need for a proper nonlocal parameter. In addition, the numerical formulation can predict ductile fracture based on the evolution of the fully damaged zone.
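The porous-plasticity ingredient can be illustrated with the Gurson-Tvergaard-Needleman (GTN) yield function, a standard form of Gurson's theory; the q-parameters and the check below are a textbook sketch, not the paper's exact nonlocal formulation.

```python
import math

def gurson_yield(sigma_eq, sigma_m, sigma_y, f, q1=1.5, q2=1.0, q3=2.25):
    """GTN yield function Phi; yielding occurs when Phi >= 0.

    sigma_eq: von Mises equivalent stress, sigma_m: mean (hydrostatic) stress,
    sigma_y: matrix yield stress, f: porosity (void volume fraction).
    q1, q2, q3 are the usual Tvergaard fitting constants (illustrative values).
    """
    return ((sigma_eq / sigma_y) ** 2
            + 2.0 * q1 * f * math.cosh(1.5 * q2 * sigma_m / sigma_y)
            - (1.0 + q3 * f ** 2))
```

With f = 0 the function collapses to the von Mises criterion; a nonzero porosity lowers the effective yield surface, which is how void growth softens the matrix.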

Keywords: mixed finite element, large strains, ductile damage, thermoviscoplasticity

Procedia PDF Downloads 70
382 Multiaxial Fatigue Analysis of a High Performance Nickel-Based Superalloy

Authors: P. Selva, B. Lorraina, J. Alexis, A. Seror, A. Longuet, C. Mary, F. Denard

Abstract:

Over the past four decades, the fatigue behavior of nickel-based alloys has been widely studied. In recent years, however, significant advances in the fabrication process leading to grain size reduction have been made in order to improve the fatigue properties of aircraft turbine discs. Indeed, a change in particle size affects the initiation mode of fatigue cracks as well as the fatigue life of the material. The present study aims to investigate the fatigue behavior of a newly developed nickel-based superalloy under biaxial-planar loading. Low Cycle Fatigue (LCF) tests are performed at different stress ratios so as to study the influence of the multiaxial stress state on the fatigue life of the material. Full-field displacement and strain measurements, as well as crack initiation detection, are obtained using Digital Image Correlation (DIC) techniques. The aim of this presentation is first to provide an in-depth description of the experimental set-up and protocol: the multiaxial testing machine, the specific design of the cruciform specimen, and the performance of the DIC code are introduced. Second, results for sixteen specimens tested at different load ratios are presented: crack detection, strain amplitude, and number of cycles to crack initiation vs. triaxial stress ratio are given for each loading case. Third, fractographic investigations by scanning electron microscopy show that the mechanism of fatigue crack initiation does not depend on the triaxial stress ratio and that most fatigue cracks initiate from subsurface carbides.

Keywords: cruciform specimen, multiaxial fatigue, nickel-based superalloy

Procedia PDF Downloads 273
381 Central Finite Volume Methods Applied in Relativistic Magnetohydrodynamics: Applications in Disks and Jets

Authors: Raphael de Oliveira Garcia, Samuel Rocha de Oliveira

Abstract:

We have developed a new computer program in Fortran 90 to obtain numerical solutions of a system of Relativistic Magnetohydrodynamics partial differential equations with predetermined gravitation (GRMHD), capable of simulating the formation of relativistic jets from the accretion disk of matter up to its ejection. We first studied one-dimensional Finite Volume methods, namely the Lax-Friedrichs, Lax-Wendroff, and Nessyahu-Tadmor methods and Godunov-type methods based on Riemann problems, applied to the Euler equations, in order to verify their main features and compare them. We then implemented the central finite volume method of Nessyahu-Tadmor, a scheme whose formulation is free of Riemann problem solvers and of dimensional splitting, even in two or more spatial dimensions, and applied it to the GRMHD equations. Finally, with the Nessyahu-Tadmor method it was possible to obtain stable numerical solutions - without spurious oscillations or excessive dissipation - of a magnetized accretion disk rotating around a central Schwarzschild black hole (BH) immersed in a magnetosphere, with ejection of matter in the form of a jet over a distance of fourteen times the radius of the BH, a record for astrophysical simulations of this kind. In our simulations we also obtained jet substructures. A great advantage is that, with our code, we can simulate the GRMHD equations on a simple personal computer.
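A minimal sketch of the Nessyahu-Tadmor central scheme, here for the scalar Burgers' equation on a periodic 1D grid rather than the full GRMHD system, shows the Riemann-solver-free structure the abstract refers to: limited slopes, a midpoint predictor, and staggered cell averages.

```python
import numpy as np

def minmod(a, b):
    # minmod slope limiter: 0 at sign changes, else the smaller magnitude
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def nt_step(u, lam):
    """One staggered step of the Nessyahu-Tadmor central scheme for
    Burgers' equation u_t + (u^2/2)_x = 0 on a periodic grid.
    lam = dt/dx; the CFL condition lam * max|u| < 0.5 must hold."""
    f = 0.5 * u ** 2
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slopes of u
    df = minmod(np.roll(f, -1) - f, f - np.roll(f, 1))   # limited slopes of f
    u_half = u - 0.5 * lam * df                          # midpoint predictor
    f_half = 0.5 * u_half ** 2
    # corrector: new cell averages on the staggered grid x_{j+1/2}
    return (0.5 * (u + np.roll(u, -1))
            + 0.125 * (du - np.roll(du, -1))
            - lam * (np.roll(f_half, -1) - f_half))
```

Because the flux and slope terms telescope over a periodic grid, the step conserves the cell-average sum exactly, which is one reason central schemes are attractive for conservation laws.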

Keywords: finite volume methods, central schemes, fortran 90, relativistic astrophysics, jet

Procedia PDF Downloads 426
380 A Quasi-Systematic Review on Effectiveness of Social and Cultural Sustainability Practices in Built Environment

Authors: Asif Ali, Daud Salim Faruquie

Abstract:

With the advancement of knowledge about the utility and impact of sustainability, its feasibility has been explored in different walks of life. Scientists have established their knowledge in four areas, viz. environmental, economic, social, and cultural, popularly termed the four pillars of sustainability. The environmental and economic aspects of sustainability have been rigorously researched and practiced, and a large volume of strong evidence of effectiveness has been established for these two sub-areas. For the social and cultural aspects of sustainability, dependable evidence of effectiveness is still to be instituted, as researchers and practitioners are developing and experimenting with methods across the globe. The present research therefore aimed to identify globally used practices of social and cultural sustainability and, through evidence synthesis, assess their outcomes to determine the effectiveness of those practices. A PICO format steered the methodology, which included all populations; popular sustainability practices including walkability/cycle tracks, social/recreational spaces, privacy, health & human services, and barrier-free built environments; comparators including ‘Before’ and ‘After’, ‘With’ and ‘Without’, ‘More’ and ‘Less’; and outcomes including social well-being, cultural co-existence, quality of life, ethics and morality, social capital, sense of place, education, health, recreation and leisure, and holistic development. The literature search included major electronic databases, search websites, organizational resources, the directory of open access journals, and subscribed journals. Grey literature, however, was not included. Inclusion criteria filtered studies on the basis of research design, such as total randomization, quasi-randomization, cluster randomization, observational or single studies, and certain types of analysis. Studies with combined outcomes were considered, but studies focusing only on environmental and/or economic outcomes were rejected. 
Data extraction, critical appraisal, and evidence synthesis were carried out using customized tabulation, a reference manager, and the CASP tool. A partial meta-analysis was carried out, with calculation of pooled effects and forest plotting. The 13 studies finally included in the synthesis explained the impact of the targeted practices on health, behavioural, and social dimensions. Objectivity in the measurement of health outcomes facilitated quantitative synthesis of studies, which highlighted the impact of sustainability methods on physical activity, Body Mass Index, perinatal outcomes, and child health. Studies synthesized qualitatively (and also quantitatively) showed outcomes such as routines, family relations, citizenship, trust in relationships, social inclusion, neighbourhood social capital, wellbeing, habitability, and families' social processes. The synthesized evidence indicates slight effectiveness and efficacy of social and cultural sustainability practices on the targeted outcomes. Further synthesis revealed that these results are due to weak research designs and disintegrated implementations. If architects and other practitioners deliver their interventions in collaboration with research bodies and policy makers, a stronger evidence base in this area could be generated.
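The pooled-effect calculation behind a fixed-effect partial meta-analysis can be sketched with standard inverse-variance weighting; the effect sizes and standard errors below are placeholders, not the review's data.

```python
import math

def pooled_effect(effects, ses):
    """Fixed-effect inverse-variance pooling of study effect sizes.

    effects: per-study effect estimates; ses: their standard errors.
    Returns the pooled estimate and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in ses]       # precision weights
    wsum = sum(weights)
    est = sum(w * e for w, e in zip(weights, effects)) / wsum
    se = math.sqrt(1.0 / wsum)                    # pooled standard error
    return est, (est - 1.96 * se, est + 1.96 * se)
```

These per-study estimates and the pooled diamond are exactly what a forest plot displays.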

Keywords: built environment, cultural sustainability, social sustainability, sustainable architecture

Procedia PDF Downloads 387
379 Analysis and Performance of European Geostationary Navigation Overlay Service System in North of Algeria for GPS Single Point Positioning

Authors: Tabti Lahouaria, Kahlouche Salem, Benadda Belkacem, Beldjilali Bilal

Abstract:

The European Geostationary Navigation Overlay Service (EGNOS) provides an augmentation signal for GPS (Global Positioning System) single point positioning. Presently, EGNOS provides data correction and integrity information using the GPS L1 (1575.42 MHz) frequency band. The main objective of the system is to provide better real-time positioning precision than GPS alone, and it is intended to be used with single-frequency code observations. EGNOS offers open service (OS) navigation performance in terms of precision and availability; this performance gradually degrades, and the service becomes less and less available, as the user moves away from the EGNOS service area. The improvement in the position solution is investigated using two collocated dual-frequency GPS receivers, where no EGNOS Ranging and Integrity Monitoring Station (RIMS) exists. One pseudo-range was kept as GPS stand-alone and the other was corrected by EGNOS to estimate the planimetric and altimetric precision on different dates. It is found that the position precision improved significantly in the second case due to the EGNOS correction. The performance of the EGNOS system in the north of Algeria is also investigated in terms of integrity. The results show that the horizontal protection level (HPL) is below 18.25 meters (95%) and the vertical protection level (VPL) is below 42.22 meters (95%). These results represent good integrity information transmitted by EGNOS for the APV-I service. The service is thus compliant with the aviation requirements for Approaches with Vertical Guidance (APV-I), which are characterised by a 40 m HAL (horizontal alarm limit) and a 50 m VAL (vertical alarm limit).
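The APV-I integrity check described above reduces to comparing the computed protection levels against the alarm limits; a minimal sketch using the HAL/VAL values quoted in the abstract:

```python
def apv1_available(hpl, vpl, hal=40.0, val=50.0):
    """APV-I availability check (illustrative): the service is usable only
    when both protection levels (m) are bounded by the alarm limits,
    HAL = 40 m and VAL = 50 m for Approaches with Vertical Guidance."""
    return hpl <= hal and vpl <= val
```

With the reported 95% values (HPL = 18.25 m, VPL = 42.22 m) the check passes, which is what the abstract means by compliance with APV-I requirements.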

Keywords: EGNOS, GPS, positioning, integrity, protection level

Procedia PDF Downloads 208
378 A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images

Authors: Sophia Shi

Abstract:

Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. The AKI patient mortality rate in the ICU is high, and the condition is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database and de-identified kidney images with corresponding patient records from the Beijing Hospital of the Ministry of Health were collected. Using data features including serum creatinine, among others, two numeric models were built from the MIMIC and Beijing Hospital data, and an image-only model was built from the hospital ultrasounds. Convolutional neural networks (CNNs) were used: VGG and ResNet for the numeric data and ResNet for the image data. They were combined into a hybrid model by concatenating the feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output after a Softmax layer. The hybrid model successfully predicted AKI; its highest AUROC was 0.953, with an accuracy of 90% and an F1-score of 0.91. This model can be implemented in urgent clinical settings such as the ICU to aid doctors by assessing the risk of AKI shortly after a patient's admission, so that doctors can take preventative measures and diminish mortality risks and severe kidney damage.
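The fusion step of the hybrid model, concatenating feature vectors from the numeric and image branches before the final classifier, can be sketched with plain NumPy; the feature sizes, weights, and final-layer shape below are hypothetical, not the paper's trained network.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a 1D logit vector
    e = np.exp(z - z.max())
    return e / e.sum()

def hybrid_head(numeric_features, image_features, w, b):
    """Sketch of the fusion step: features from the numeric CNN and the
    image CNN are concatenated into one input for the final classifier.
    w, b are the (hypothetical) weights of the last fully connected layer."""
    fused = np.concatenate([numeric_features, image_features])
    return softmax(w @ fused + b)   # binary AKI / no-AKI probabilities
```

In the actual model this fused vector feeds another CNN block and two fully connected layers; the sketch keeps only the concatenation-then-Softmax idea.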

Keywords: Acute kidney injury, Convolutional neural network, Hybrid deep learning, Patient record data, ResNet, Ultrasound kidney images, VGG

Procedia PDF Downloads 113
377 Computational Fluid Dynamics Modeling of Liquefaction of Wood and It's Model Components Using a Modified Multistage Shrinking-Core Model

Authors: K. G. R. M. Jayathilake, S. Rudra

Abstract:

Wood degradation in hot compressed water is modeled with a Computational Fluid Dynamics (CFD) code using cellulose, xylan, and lignin as model compounds. The model compounds are reacted under catalyst-free conditions in a temperature range from 250 to 370 °C. A simplified reaction scheme is used in which water-soluble products, methanol-soluble products, char-like compounds, and gas are generated through intermediates for each model compound. A modified multistage shrinking-core model is developed to simulate particle degradation, in which each model compound is hydrolyzed in a separate stage: cellulose is decomposed to glucose/oligomers before producing degradation products; xylan is decomposed through xylose to degradation products; and lignin is decomposed into soluble products, including guaiacol and total organic carbon (TOC), before producing char and gas. Hydrolysis of each model compound is the main reaction of the process. Diffusion of water monomers to the particle surface, which initiates hydrolysis, and dissolution of the products in water are given particular attention during modeling. In the developed model, the temperature dependence follows the Arrhenius relationship, with kinetic parameters taken from the literature; the limited data on fast initial reaction kinetics, however, restrict the development of more accurate CFD models. The liquefaction results of the CFD model are analyzed and validated against experimental data available in the literature, with which they show reasonable agreement.
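Two building blocks of such a model, an Arrhenius rate constant and a surface-reaction-controlled shrinking core, can be sketched as follows; the rate expression and all parameter values are illustrative, not the paper's fitted kinetics.

```python
import math

def arrhenius(k0, ea, temp_k, r_gas=8.314):
    """Rate constant k = k0 * exp(-Ea / (R * T)).
    k0: pre-exponential factor, ea: activation energy (J/mol), temp_k: T in K."""
    return k0 * math.exp(-ea / (r_gas * temp_k))

def core_radius(r0, k, c_w, t, rho):
    """Unreacted-core radius for a surface-reaction-controlled shrinking core:
    rc(t) = r0 - (k * C_w / rho) * t, clipped at zero once the particle
    is fully converted. C_w is the water concentration at the surface."""
    return max(0.0, r0 - k * c_w * t / rho)
```

In the paper's multistage variant, each model compound (cellulose, xylan, lignin) would carry its own rate constant and be consumed in its own stage.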

Keywords: computational fluid dynamics, liquefaction, shrinking-core, wood

Procedia PDF Downloads 102
376 A Rapid Assessment of the Impacts of COVID-19 on Overseas Labor Migration: Findings from Bangladesh

Authors: Vaiddehi Bansal, Ridhi Sahai, Kareem Kysia

Abstract:

Overseas labor migration is currently one of the most important contributors to the economy of Bangladesh and is a highly profitable form of labor for Gulf Cooperation Council (GCC) countries. In 2019, 700,159 migrant workers from Bangladesh traveled abroad for employment. GCC countries are a major destination for Bangladeshi migrant workers, with Saudi Arabia the most common destination since 2016. Despite the high rate of migration between these countries every year, the OLR industry remains complex and often leaves migrants susceptible to human trafficking, forced labor, and modern slavery. While the prevalence of forced labor among Bangladeshi migrants in GCC countries is still unknown, the IOM estimates that international migrant workers comprise one fourth of the victims of forced labor. Moreover, the onset of the global COVID-19 pandemic has exposed migrant workers to additional adverse situations, making them even more vulnerable to forced labor and health risks. This paper presents findings from a rapid assessment of the impacts of COVID-19 on OLR in Bangladesh, with an emphasis on the increased risk of forced labor among vulnerable migrant worker populations, particularly women. Rapid reviews are a useful approach for swiftly providing actionable evidence for informed decision-making during emergencies such as the COVID-19 pandemic. The research team conducted semi-structured key informant interviews (KIIs) with a range of stakeholders, including government officials, local NGOs, international organizations, migration researchers, and formal and informal recruiting agencies, to obtain insights into the multi-faceted impacts of COVID-19 on the OLR sector. The research team also conducted a comprehensive review of available resources, including media articles, blogs, policy briefs, reports, white papers, and other online content, to triangulate findings from the KIIs. 
After screening against the inclusion criteria, a total of 110 grey literature documents were included in the review. A total of 31 KIIs were conducted; the data were transcribed, translated from Bangla to English, and analyzed using a detailed codebook. Findings indicate that there was limited reintegration support for returnee migrants. Facing increasing debt, financial insecurity, and social discrimination, returnee migrants were extremely vulnerable to forced labor and exploitation. Growing financial debt and limited job opportunities in their home country will likely push migrants to resort to unsafe migration channels. Evidence suggests that women, who are primarily domestic workers in GCC countries, were exposed to increased risk of forced labor and workplace violence. Due to stay-at-home measures, women migrant workers were tasked with additional housekeeping work and subjected to longer work hours, wage withholding, and physical abuse. In Bangladesh, returnee women migrant workers also faced an increased risk of domestic violence.

Keywords: forced labor, migration, gender, human trafficking

Procedia PDF Downloads 96
375 An Analysis of Legal and Ethical Implications of Sports Doping in India

Authors: Prathyusha Samvedam, Hiranmaya Nanda

Abstract:

Doping refers to the practice of using drugs or methods that enhance an athlete's performance. It is a problem of worldwide scale that compromises the fairness of athletic competition. Rules have been created at both the national and international levels to prevent doping; however, these rules sometimes contradict one another and may be ineffective at deterring the use of performance-enhancing drugs (PEDs). This study contends that India's inability to comply with specific Code criteria, as well as its failure to satisfy the "best practice" standards established by other countries, demonstrates a lack of uniformity in the implementation of anti-doping regulations and processes among nations. Such challenges have the potential to undermine the validity of the anti-doping system, particularly in developing nations like India. This analysis of the legislative framework governing doping in sports in India is therefore important. First, doping in sports is a significant problem that affects the spirit of fair play and sportsmanship and has the potential to jeopardize the integrity of the sport itself. In addition, the research can inform policymakers, sports organizations, and other stakeholders about the current legal framework and how well it discourages doping in athletic competition. The article is divided into four distinct sections. The first offers an explanation of what doping is and provides some context about its development over time. The second examines the role of anti-doping authorities and the responsibilities they perform. Case studies and the research technique employed for the study are presented in the third section, and the results in the last. In conclusion, doping is a severe problem that endangers honest competition in sports.

Keywords: sports law, doping, NADA, WADA, performance enhancing drugs, anti-doping bill 2022

Procedia PDF Downloads 48
374 Evaluation of Heat Transfer and Entropy Generation by Al2O3-Water Nanofluid

Authors: Houda Jalali, Hassan Abbassi

Abstract:

In this numerical work, natural convection and entropy generation of an Al2O3-water nanofluid in a square cavity have been studied. A two-dimensional, steady, laminar natural convection flow in a differentially heated square cavity of length L, filled with a nanofluid, is investigated numerically. The horizontal walls are considered adiabatic, while the vertical walls at x=0 and x=L are maintained at the hot temperature Th and the cold temperature Tc, respectively. The resolution is performed with the CFD code "FLUENT" in combination with GAMBIT as mesh generator. The simulations cover Rayleigh numbers in the range 10³ ≤ Ra ≤ 10⁶ and solid volume fractions from 1% to 5%, with the particle size fixed at dp = 33 nm and temperatures ranging from 20 to 70 °C. We used models of the thermophysical properties of nanofluids based on experimental measurements to study the effect of adding solid particles to water on natural convection heat transfer and entropy generation, such as models of thermal conductivity and dynamic viscosity that depend on solid volume fraction, particle size, and temperature. The average Nusselt number is calculated at the hot wall of the cavity for different solid volume fractions. The most important result is that at low temperatures (less than 40 °C), the addition of Al2O3 nanosolids to water leads to a decrease in heat transfer and entropy generation instead of the expected increase, whereas at high temperature, heat transfer and entropy generation increase with the addition of nanosolids. This behavior is due to the contradictory effects of the viscosity and thermal conductivity of the nanofluid, which are discussed in this work.
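The property models used in the study are fitted to experimental measurements and depend on temperature and particle size; as a simpler stand-in, the classical Brinkman viscosity and Maxwell conductivity correlations illustrate how the solid volume fraction enters:

```python
def brinkman_viscosity(mu_f, phi):
    """Brinkman model: mu_nf = mu_f / (1 - phi)^2.5,
    where phi is the solid volume fraction and mu_f the base-fluid viscosity."""
    return mu_f / (1.0 - phi) ** 2.5

def maxwell_conductivity(k_f, k_p, phi):
    """Maxwell model for the effective thermal conductivity of a nanofluid:
    k_f is the base-fluid conductivity and k_p the particle conductivity."""
    return (k_f * (k_p + 2.0 * k_f - 2.0 * phi * (k_f - k_p))
            / (k_p + 2.0 * k_f + phi * (k_f - k_p)))
```

Both effects grow with phi, which is precisely the competition the abstract describes: higher conductivity promotes heat transfer while higher viscosity suppresses the convective flow.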

Keywords: entropy generation, heat transfer, nanofluid, natural convection

Procedia PDF Downloads 252
373 Innovative Technologies of Distant Spectral Temperature Control

Authors: Leonid Zhukov, Dmytro Petrenko

Abstract:

Optical thermometry has no alternative in many cases of the most effective continuous industrial temperature control. Classical optical thermometry technologies can be used on controlled objects that are accessible to pyrometers and have stable radiation characteristics and stable transmissivity of the intermediate medium. Without temperature corrections, this is possible for a "black" body in energy pyrometry and for "black" and "grey" bodies in spectral-ratio pyrometry; with corrections, it is possible for any colored body. Consequently, as the number of operating wavelengths increases, the possibilities for optical thermometry to reduce methodical errors expand significantly. That is why, over the last 25-30 years, research has been reoriented toward more advanced spectral (multicolor) thermometry technologies. Two physical substances are involved in optical thermometry: matter (the controlled object) and the electromagnetic field (thermal radiation). Heat is transferred by radiation; therefore, radiation has energy, entropy, and temperature. Optical thermometry originated simultaneously with the development of thermal radiation theory, when the concept and term "radiation temperature" were not yet used, and the concepts and terms "conditional temperature" or "pseudo temperature" of controlled objects were introduced instead. These do not correspond to the physical sense and definitions of temperature in thermodynamics, molecular-kinetic theory, and statistical physics. The discussion launched by the scientific thermometric community about the possibility of measuring the temperature of objects, including colored bodies, using the temperatures of their radiation is not finished. Is the information about controlled objects carried by their radiation sufficient for temperature measurement? The positive and negative answers to this fundamental question have divided experts into two opposite camps. 
Recent achievements in spectral thermometry have developed events in its favour and leave little hope for the skeptics. This article presents the results of investigations and developments in the field of spectral thermometry carried out by the authors in the Department of Thermometry and Physico-Chemical Investigations. The authors have many years of experience in modern optical thermometry technologies. Innovative technologies of optical continuous temperature control have been developed: symmetric-wave, two-color compensative, and, based on the obtained nonlinearity equation of the spectral emissivity distribution, linear, two-range, and parabolic technologies. The technologies are based on direct measurements of the physically substantiated radiation temperatures proposed by Prof. L. Zhukov, with subsequent calculation of the controlled object temperature using these radiation temperatures and corresponding mathematical models. The technologies significantly improve the metrological characteristics of continuous contactless and light-guide temperature control in the energy, metallurgical, ceramic, glass, and other industries. For example, under the same conditions, the methodical errors of the proposed technologies are 2 times smaller than those of known spectral technologies and 3-13 times smaller than those of classical technologies. The innovative technologies enable quality products to be obtained at the lowest possible resource costs, including energy. More than 600 publications have appeared on the completed developments, including more than 100 domestic patents, as well as 34 patents in Australia, Bulgaria, Germany, France, Canada, the USA, Sweden, and Japan. The developments have been implemented in enterprises in the USA as well as in Western Europe and Asia, including Germany and Japan.
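The principle of spectral-ratio (two-color) pyrometry mentioned above, in which emissivity cancels for a grey body, can be sketched in the Wien approximation; this is the textbook relation, not the authors' patented technologies.

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_radiance(lam, temp):
    """Spectral radiance up to a constant factor (Wien approximation):
    L ~ lam^-5 * exp(-C2 / (lam * T)), with lam in meters and T in kelvin."""
    return lam ** -5 * math.exp(-C2 / (lam * temp))

def ratio_temperature(r, lam1, lam2):
    """Spectral-ratio temperature from the measured radiance ratio
    r = L(lam1) / L(lam2); the grey-body emissivity cancels in r."""
    return C2 * (1.0 / lam2 - 1.0 / lam1) / math.log(r * (lam1 / lam2) ** 5)
```

For a grey body the inversion is exact; for colored bodies the emissivity ratio no longer cancels, which is the methodical error the multicolor technologies aim to reduce.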

Keywords: emissivity, radiation temperature, object temperature, spectral thermometry

Procedia PDF Downloads 81
372 Seismic Loss Assessment for Peruvian University Buildings with Simulated Fragility Functions

Authors: Jose Ruiz, Jose Velasquez, Holger Lovon

Abstract:

Peruvian university buildings are critical structures for which very little research on their seismic vulnerability is available. This paper develops a probabilistic methodology that predicts seismic loss for university buildings with simulated fragility functions. Two university buildings located in the city of Cusco were analyzed. Fragility functions were developed considering uncertainty in both seismic and structural parameters. The fragility functions were generated with the Latin Hypercube technique, an improved Monte Carlo-based method that optimizes the sampling of structural parameters and provides at least 100 reliable samples for every level of seismic demand. Concrete compressive strength, maximum concrete strain, and yield stress of the reinforcing steel were considered the key structural parameters. The seismic demand is defined by synthetic records compatible with the elastic Peruvian design spectrum. Acceleration records are scaled based on the peak ground acceleration on rigid soil (PGA), which ranges from 0.05g to 1.00g. A total of 2000 structural models were considered to account for both structural and seismic variability. These functions represent the overall building behavior because they give rational information about damage ratios for defined levels of seismic demand. The university buildings show expected Mean Damage Factors of 8.80% and 19.05%, respectively, for the 0.22g-PGA scenario, which was amplified by the soil type coefficient to 0.26g-PGA. These ratios were computed considering a seismic demand with a 10% probability of exceedance in 50 years, which is a requirement of the Peruvian seismic code. These results show an acceptable seismic performance for both buildings.
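The Latin Hypercube step can be sketched as follows: each parameter range is divided into equal-probability strata and exactly one sample is drawn per stratum, which covers the range far more evenly than plain Monte Carlo for the same sample count. The uniform marginals and the parameter bounds below are illustrative assumptions, not the study's fitted distributions.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin Hypercube sample of structural parameters.

    bounds: dict name -> (low, high); uniform marginals are assumed here
    for simplicity. Each range is split into n_samples equal-probability
    strata, and one draw is taken from each stratum per parameter."""
    rng = np.random.default_rng(rng)
    n_dim = len(bounds)
    # stratified uniforms in [0, 1): stratum indices permuted per dimension,
    # plus a random offset inside each stratum
    strata = np.tile(np.arange(n_samples), (n_dim, 1))
    u = (rng.permuted(strata, axis=1) + rng.random((n_dim, n_samples))) / n_samples
    return {name: lo + (hi - lo) * u[i]
            for i, (name, (lo, hi)) in enumerate(bounds.items())}
```

For example, sampling hypothetical concrete strength and steel yield stress ranges gives 100 points with exactly one value in each of the 100 strata of every parameter.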

Keywords: fragility functions, university buildings, loss assessment, Monte Carlo simulation, Latin Hypercube

Procedia PDF Downloads 119
371 Analysis of Forced Convection in the Plate-Type Fueled Bandung TRIGA Reactor Core Using COOLOD-N2

Authors: K. A. Sudjatmi, Endiah Puji Hastuti, Surip Widodo, Reinaldy Nazar

Abstract:

A possible halt in the production of cylindrical TRIGA fuel elements by their manufacturer should be anticipated by the operating agency of the TRIGA reactor by replacing the cylindrical fuel elements with plate-type fuel elements, which are available on the market. To this end, calculations were performed for U3Si2Al fuel with a uranium enrichment of 19.75% and a loading of 2.96 gU/cm3. The maximum power at which the plate-fueled BANDUNG TRIGA reactor can operate in free convection cooling mode is 600 kW. This study therefore calculates the thermal-hydraulic characteristics of the reactor core at a power of 2 MW. The plate-type fueled BANDUNG TRIGA reactor core is composed of 16 fuel elements, 4 control elements and one irradiation facility located in the middle of the core. The core is cooled by an already available pump with a flow rate of 900 gpm. Forced convection cooling was analyzed with top-down flow at 10%, 20%, 30% and so on up to 100% of the nominal coolant flow rate, using the COOLOD-N2 code. The calculations show that at 2 MW power, an inlet coolant temperature of 37 °C and 50% of the nominal flow rate, the maximum coolant, cladding and meat temperatures are 64.96 °C, 124.81 °C and 125.08 °C, respectively, with DNBR (departure from nucleate boiling ratio) = 1.23 and OFIR (onset of flow instability ratio) = 1.00. These results are expected to serve as a reference for determining the operating power and coolant flow rate of the plate-fueled BANDUNG TRIGA reactor core.
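The reported bulk temperatures can be sanity-checked with a steady-state energy balance, Q = ṁ·cp·ΔT, over the whole core. This is a first-order sketch only, assuming water properties near 50 °C; it cannot replace the COOLOD-N2 subchannel calculation, which resolves the hottest channel rather than the mixed-mean coolant:

```python
# First-order core energy balance (illustrative check; COOLOD-N2
# resolves local subchannel temperatures, which are higher).
GPM_TO_M3S = 6.30902e-5   # 1 US gallon per minute in m^3/s

def outlet_temp(power_w, flow_gpm, flow_fraction, t_in_c,
                rho=988.0, cp=4181.0):
    """Bulk coolant outlet temperature from Q = m_dot * cp * dT.
    rho [kg/m^3] and cp [J/(kg K)] are water properties near 50 degC."""
    m_dot = flow_gpm * flow_fraction * GPM_TO_M3S * rho  # kg/s
    return t_in_c + power_w / (m_dot * cp)

# 2 MW core, 900 gpm nominal flow at the 50% fraction, 37 degC inlet
t_out = outlet_temp(2.0e6, 900.0, 0.5, 37.0)
print(round(t_out, 1))
```

The resulting bulk outlet estimate (roughly 54 °C) sits below the 64.96 °C maximum coolant temperature reported above, as expected, since the maximum occurs in the hottest subchannel rather than in the mixed-mean flow.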

Keywords: TRIGA, COOLOD-N2, plate type fuel element, forced convection, thermal-hydraulic characteristics

Procedia PDF Downloads 276
370 A Content Analysis of Sustainability Reporting to Frame the Heterogeneity in Corporate Environment Sustainability Practices

Authors: Venkataraman Sankaranarayanan, Sougata Ray

Abstract:

While extant research has examined many aspects of differential corporate environmental outcomes and behavior, a holistic and integrated view of heterogeneity in corporate environment sustainability (CES) practices remains a puzzle to be fully unraveled – its extent and nature, its relationship to macro or micro level influences, or strategic orientations. Such a perspective would be meaningful for the field given notable strides in CES practices and the corporate social responsibility agenda over the last two decades, in the backdrop of altered global socio-political sensitivities and technological advances. To partly address this gap, this exploratory research adopted a content analysis approach to code patterns in the sustainability disclosures of the 160 largest global firms over 8 years. The sample of firms spanned seven industries, nine countries and three continents, presenting data rich and diverse enough in several dimensions to be representative of global heterogeneity in CES practices. A factor analysis of the coded data extracted four strategic CES orientations that together span most of the variation observed in current CES practices – one that seeks to reduce environmental damage on account of the firm’s operations, another that prioritizes minimalism, a third that focuses on the broader ecological status quo, and a final one that champions the ‘business of green’, extending the CES agenda beyond the firm’s boundaries. These environment sustainability strategy orientations are further examined to elicit prominent patterns and explore plausible antecedents.
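The extraction step above can be sketched minimally, assuming the coded disclosures form a firms-by-codes matrix. The data, dimensions and the eigenvector-based extraction here are illustrative stand-ins, not the study's actual coding scheme or factor method:

```python
import numpy as np

def extract_orientations(coded, n_factors=4):
    """Principal-axis style extraction: loadings from the leading
    eigenvectors of the correlation matrix of the coded variables."""
    corr = np.corrcoef(coded, rowvar=False)        # codes x codes
    eigvals, eigvecs = np.linalg.eigh(corr)        # ascending order
    order = np.argsort(eigvals)[::-1][:n_factors]  # top factors first
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
    return loadings, eigvals[order]

# Synthetic stand-in: 160 firms x 12 coded CES practice variables
rng = np.random.default_rng(0)
coded = rng.poisson(3.0, size=(160, 12)).astype(float)
loadings, explained = extract_orientations(coded, n_factors=4)
print(loadings.shape)  # (12, 4)
```

Each retained column of `loadings` groups the practice codes that co-vary across firms; in the study's terms, interpreting those groupings is what yields the four strategic CES orientations.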

Keywords: corporate social responsibility, corporate sustainability, environmental management, heterogeneity, strategic orientation

Procedia PDF Downloads 312