Search results for: wavefield extrapolation
32 Provenance and Paleoweathering Conditions of Doganhisar Clay Beds
Authors: Mehmet Yavuz Huseyinca
Abstract:
The clay beds are located south-southeast of Doğanhisar and northwest of Konya in Central Anatolia. In this preliminary study, three types of samples were investigated: the basement phyllite (Bp) overlain by the clay beds, weathered phyllite (Wp), and Doğanhisar clay (Dc). The Chemical Index of Alteration (CIA) values of Dc range from 81 to 88 with an average of 85. This value is higher than that of Post Archean Australian Shale (PAAS) and indicates very intense chemical weathering in the source area. On the other hand, the A-CN-K diagram indicates that Bp underwent a high degree of post-depositional K-metasomatism. The average reconstructed CIA value of Bp prior to the K-metasomatism is about 81, which overlaps the CIA values of Wp (83) and Dc (85). Similar CIA values indicate parallel weathering trends. Also, extrapolation of the samples back to the plagioclase-alkali feldspar line in the A-CN-K diagram suggests an identical provenance close to granite in composition. The weathering background of Dc therefore comprises two steps: first, intense weathering of a granitic source to Bp followed by post-depositional K-metasomatism; second, progressive weathering of Bp back to its pre-metasomatism state (formation of Wp), ending with the deposition of Dc.
Keywords: clay beds, Doganhisar, provenance, weathering
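The CIA referenced throughout this abstract is conventionally computed from molar oxide proportions (after Nesbitt and Young). A minimal sketch; the oxide values below are made up for illustration and are not the paper's data:

```python
# Chemical Index of Alteration: CIA = 100 * Al2O3 / (Al2O3 + CaO* + Na2O + K2O),
# in molar proportions, where CaO* is CaO in the silicate fraction only.
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def cia(wt_pct):
    """Compute CIA from oxide weight percentages (CaO assumed silicate-corrected)."""
    mol = {ox: wt_pct[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    return 100.0 * mol["Al2O3"] / (mol["Al2O3"] + mol["CaO"] + mol["Na2O"] + mol["K2O"])

# Hypothetical oxide analysis (wt%), illustration only:
sample = {"Al2O3": 22.0, "CaO": 0.3, "Na2O": 0.5, "K2O": 2.8}
print(round(cia(sample), 1))
```

Unweathered granite plots near CIA ≈ 50; values in the 80s, as reported here, indicate strong feldspar breakdown to clays.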
Procedia PDF Downloads 308
31 Interaction between Trapezoidal Hill and Subsurface Cavity under SH Wave Incidence
Authors: Yuanrui Xu, Zailin Yang, Yunqiu Song, Guanxixi Jiang
Abstract:
The influence of local topography on ground motion during earthquakes is an important subject in seismology. In mountainous areas with complex terrain, tunnel construction is often the most effective transportation scheme. In such projects, the local terrain can be simplified into hills of different shapes, and the underground tunnel structure can be regarded as a subsurface cavity. The presence of the subsurface cavity affects the strength of the rock mass and changes its deformation and failure characteristics. Moreover, the scattering of elastic waves by underground structures usually interacts with local terrain, which significantly influences the surface displacement of the terrain. Therefore, it is of great practical significance in earthquake engineering and seismology to study the surface displacement of local terrains with underground tunnels. In this work, the domain is divided into three regions by the method of region matching. Using fractional-order Bessel and Hankel functions, the complex function method, and the wave function expansion method, the wavefield expression of SH waves is introduced. With the help of the constitutive relation between the displacement and stress components, the hoop stress and radial stress are obtained subsequently. Then, utilizing the continuity conditions at the region boundaries, the undetermined coefficients in the wave fields are solved by Fourier series expansion and truncation to a finite number of terms. Finally, the validity of the method is verified, and the surface displacement amplitude is calculated. The surface displacement amplitude curve is discussed in the numerical results. The results show that parameters such as the radius and buried depth of the tunnel, the wave number, and the incident angle of the SH wave have a significant influence on the amplitude of surface displacement.
For the underground tunnel, increasing the buried depth makes the surface displacement amplitude first increase and then decrease, whereas increasing the radius produces the opposite trend. Increasing the SH wave number enlarges the amplitude of surface displacement, and changing the incident angle markedly affects the amplitude fluctuation.
Keywords: method of region matching, scattering of SH wave, subsurface cavity, trapezoidal hill
Procedia PDF Downloads 133
30 Developing a Web GIS Tool for the Evaluation of Soil Erosion of a Watershed
Authors: Y. Fekir, K. Mederbal, M. A. Hamadouche, D. Anteur
Abstract:
Soil erosion by water has become one of the biggest environmental problems in the world, threatening the majority of countries. Several models exist to evaluate erosion. These models are still simplified representations of reality; they permit the analysis of complex systems, complement measurements to allow extrapolation in time and space, and may combine different factors. The empirical soil loss model proposed by Wischmeier and Smith, the Universal Soil Loss Equation (USLE), is widely used in many countries. It treats erosion as a multiplicative function of five factors: rainfall erosivity (the R factor), soil erodibility (K), topography (LS), erosion control practices (P), and vegetation cover and agricultural practices (C). In this work, we developed a tool based on Web GIS functionality to evaluate soil losses caused by erosion, taking these five factors into account. The tool allows the user to integrate all the data needed for the evaluation (DEM, land use, rainfall ...) as digital layers to calculate the five factors of the USLE equation (R, K, C, P, LS). After processing the integrated data set, a map of soil losses is produced as a result. We tested the proposed tool on a watershed basin located in the west of Algeria, where a dataset was collected and prepared.
Keywords: USLE, erosion, web gis, Algeria
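The per-cell computation behind such a tool reduces to the USLE product. A minimal sketch; the factor values are hypothetical and chosen only to show the arithmetic:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: annual soil loss A = R * K * LS * C * P."""
    return R * K * LS * C * P

# Hypothetical factor values for a single raster cell (illustration only):
A = usle_soil_loss(R=120.0, K=0.32, LS=1.8, C=0.25, P=1.0)
print(round(A, 2))
```

In a Web GIS implementation the same product is evaluated cell by cell over the rasterized factor layers to yield the soil loss map.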
Procedia PDF Downloads 330
29 Effect of Al2O3 Nanoparticles on Corrosion Behavior of Aluminum Alloy Fabricated by Powder Metallurgy
Authors: Muna Khethier Abbass, Bassma Finner Sultan
Abstract:
In this research, the effect of Al2O3 nanoparticles on the corrosion behavior of an aluminum base alloy (Al-4.5wt%Cu-1.5wt%Mg) has been investigated. Nanocomposites reinforced with variable contents of 1, 3 and 5 wt% Al2O3 nanoparticles were fabricated using powder metallurgy. All samples were prepared from the base alloy powders under the best powder metallurgy processing conditions of 6 hr mixing time, 450 MPa compaction pressure and 560°C sintering temperature. Density and microhardness measurements, and electrochemical corrosion tests, were performed on all prepared samples in 3.5wt%NaCl solution at room temperature using a potentiostat. It has been found that the density and microhardness of the nanocomposite increase with increasing wt% of Al2O3 nanoparticles in the Al matrix. The Tafel extrapolation method showed that the corrosion rates of the nanocomposites reinforced with alumina nanoparticles were lower than that of the base alloy. The results of the potentiodynamic cyclic polarization test showed that the pitting corrosion resistance improves with the addition of Al2O3 nanoparticles: the pits disappear, and the hysteresis loop vanishes from the anodic polarization curve.
Keywords: powder metallurgy, nano composites, Al-Cu-Mg alloy, electrochemical corrosion
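Tafel extrapolation, as used here, amounts to intersecting the anodic and cathodic Tafel lines fitted on the polarization curve; the intersection gives the corrosion current density and potential. A sketch under made-up fitted line parameters (not the paper's measurements):

```python
def tafel_intersection(a_anodic, b_anodic, a_cathodic, b_cathodic):
    """
    Intersect anodic and cathodic Tafel lines, each fitted as E = a + b*log10(i)
    (E in volts, i in A/cm^2). Returns (i_corr, E_corr) at the intersection.
    """
    log_i = (a_cathodic - a_anodic) / (b_anodic - b_cathodic)
    E = a_anodic + b_anodic * log_i
    return 10.0 ** log_i, E

# Hypothetical fitted Tafel lines, illustration only:
i_corr, E_corr = tafel_intersection(-0.20, 0.06, -0.80, -0.06)
print(f"{i_corr:.1e} A/cm^2, {E_corr:.3f} V")
```

The corrosion rate then scales linearly with i_corr via Faraday's law, which is why a lower i_corr for the alumina-reinforced samples implies a lower corrosion rate.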
Procedia PDF Downloads 461
28 Prediction of Temperature Distribution during Drilling Process Using Artificial Neural Network
Authors: Ali Reza Tahavvor, Saeed Hosseini, Nazli Jowkar, Afshin Karimzadeh Fard
Abstract:
Experimental and numerical study of the temperature distribution during the milling process is important for milling quality and tool life. In the present study, the milling cross-section temperature is determined by using Artificial Neural Networks (ANN) according to the temperature at certain points of the workpiece, the points' specifications, and the rotational speed of the milling blade. First, a three-dimensional model of the workpiece is built, and then Computational Heat Transfer (CHT) simulations are used to obtain the temperature at different nodes of the workpiece under steady-state conditions. The results obtained from CHT are used for training and testing the ANN. Using reverse engineering and setting the desired x, y, z and the rotational speed of the milling blade as input data to the network, the milling surface temperature determined by the neural network is presented as output. The temperatures at the desired points for different blade rotational speeds are obtained experimentally, and the milling surface temperature is obtained by extrapolation. A comparison is performed among the ANN predictions, the CHT results and the experimental data, and it is observed that the ANN code can be used efficiently to determine the temperature in a milling process.
Keywords: artificial neural networks, milling process, rotational speed, temperature
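The workflow (train a network on simulated temperatures, then query it at a new point) can be sketched with a minimal one-hidden-layer regressor in plain NumPy. The data-generating function, network size, and learning rate below are all invented for illustration and stand in for the paper's CHT data:

```python
import numpy as np

# Synthetic stand-in for the CHT training data: temperature as a smooth,
# invented function of position (x, y, z) and blade rotational speed.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (500, 4))                  # columns: x, y, z, speed
y = 300 + 80 * X[:, 3] - 40 * X[:, 2] + 10 * X[:, 0] * X[:, 1]

y_mean, y_std = y.mean(), y.std()
t = (y - y_mean) / y_std                             # standardize the target

# One hidden tanh layer, linear output, trained by batch gradient descent.
W1 = rng.normal(0.0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                         # hidden activations
    pred = (H @ W2 + b2).ravel()                     # network output
    err = pred - t
    gW2 = H.T @ err[:, None] / len(X)                # backpropagated gradients
    gb2 = np.array([err.mean()])
    dH = (err[:, None] @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

def predict_temperature(point):
    """Query the trained network at (x, y, z, speed), de-standardizing output."""
    h = np.tanh(point @ W1 + b1)
    return float((h @ W2 + b2)[0]) * y_std + y_mean

print(round(predict_temperature(np.array([0.5, 0.5, 0.1, 0.8])), 1))
```

In practice a dedicated framework would replace this hand-rolled loop, but the structure (CHT outputs as labels, position plus speed as features) is the same.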
Procedia PDF Downloads 405
27 Simulation of Hydrogenated Boron Nitride Nanotube’s Mechanical Properties for Radiation Shielding Applications
Authors: Joseph E. Estevez, Mahdi Ghazizadeh, James G. Ryan, Ajit D. Kelkar
Abstract:
Radiation shielding is an obstacle in long duration space exploration. Boron Nitride Nanotubes (BNNTs) have attracted attention as an additive to radiation shielding material due to B10's large neutron capture cross section. B10 has an effective neutron capture cross section suitable for low energy neutrons ranging from 10^-5 to 10^4 eV, and hydrogen is effective at slowing down high energy neutrons, so hydrogenated BNNTs are potentially an ideal nanofiller for radiation shielding composites. We use Molecular Dynamics (MD) simulation via Accelrys Materials Studio 6.0 to model the Young's modulus of hydrogenated BNNTs. An extrapolation technique was employed to determine the Young's modulus from the deformation of the nanostructure at its theoretical density: a linear regression was used to extrapolate the data to the theoretical density of 2.62 g/cm3. Simulation data show that hydrogenation decreases the Young's modulus by 11% for (6,6) BNNTs and by 8.5% for (8,8) BNNTs compared to non-hydrogenated BNNTs. Hydrogenated BNNTs are a viable option as a nanofiller for radiation shielding nanocomposite materials for long range and long duration space exploration.
Keywords: boron nitride nanotube, radiation shielding, young modulus, atomistic modeling
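The linear-regression extrapolation step can be sketched directly: fit modulus against density, then evaluate the fit at the theoretical density. The (density, modulus) pairs below are hypothetical placeholders for the MD results:

```python
import numpy as np

# Hypothetical (density, Young's modulus) pairs from MD runs, illustration only.
density = np.array([1.8, 2.0, 2.2, 2.4])          # g/cm^3
modulus = np.array([410.0, 520.0, 630.0, 740.0])  # GPa

# Fit E = a*rho + b and extrapolate to the theoretical density of 2.62 g/cm^3.
a, b = np.polyfit(density, modulus, 1)
E_theoretical = a * 2.62 + b
print(round(E_theoretical, 1))
```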
Procedia PDF Downloads 297
26 Multi-Source Data Fusion for Urban Comprehensive Management
Authors: Bolin Hua
Abstract:
In city governance, various data are involved, including city component data, demographic data, housing data and all kinds of business data. These data reflect different aspects of people, events and activities. Data generated by various systems differ in form, and their sources differ because they may come from different sectors. In order to reflect one or several facets of an event or rule, data from multiple sources need to be fused together. Data from different sources, collected in different ways, raise several issues that must be resolved: data update and synchronization, data exchange and sharing, file parsing and entry, duplicate data and its comparison, and resource catalogue construction. Governments adopt statistical analysis, time series analysis, extrapolation, monitoring analysis, value mining and scenario prediction in order to achieve pattern discovery, law verification, root cause analysis and public opinion monitoring. The result of multi-source data fusion is a uniform central database, which includes people data, location data, object data, institution data, business data and space data. Metadata must be referenced and read whenever an application needs to access, manipulate and display the data; uniform metadata management ensures the effectiveness and consistency of data in the process of data exchange, data modeling, data cleansing, data loading, data storing, data analysis, data search and data delivery.
Keywords: multi-source data fusion, urban comprehensive management, information fusion, government data
Procedia PDF Downloads 393
25 Methylprednisolone Injection Did Not Inhibit Anti-Hbs Response Following Hepatitis B Vaccination in Mice
Authors: P. O. Ughachukwu, P. O. Okonkwo, P. C. Unekwe, J. O. Ogamba
Abstract:
Background: The prevalence of hepatitis B viral infection is high worldwide, with liver cirrhosis and hepatocellular carcinoma as important complications. Cases of poor antibody response to hepatitis B vaccination abound. Immunosuppression, especially from glucocorticoids, is often cited as a cause of poor antibody response, and there is documented evidence of irrational administration of glucocorticoids to children and adults. The study was, therefore, designed to find out whether administration of glucocorticoids affects the immune response to hepatitis B vaccination in mice. Methods: Mice of both sexes were randomly divided into 2 groups. Daily intramuscular methylprednisolone injections (15 mg kg-1) were given to the test group, while sterile deionized water (0.1 ml) was given to the control mice for 30 days. On day 6, all mice were given 2 μg (0.1 ml) hepatitis B vaccine and a booster dose on day 27. On day 34, blood samples were collected and analyzed for anti-HBs titres using enzyme-linked immunosorbent assay (ELISA). Statistical analysis was done using GraphPad Prism 5.0, with results taken as statistically significant at p < 0.05. Results: There were positive serum anti-HBs responses in all mice groups, but the differences in titres were not statistically significant. Conclusions: At the dosages and length of exposure used in this study, methylprednisolone injection did not significantly inhibit the anti-HBs response in mice following immunization against hepatitis B virus. By extrapolation, methylprednisolone, when used in the usual clinical doses and duration of therapy, is not likely to inhibit the immune response to hepatitis B vaccination in man.
Keywords: anti-HBs, hepatitis B vaccine, immune response, methylprednisolone, mice
Procedia PDF Downloads 323
24 Wind Speed Forecasting Based on Historical Data Using Modern Prediction Methods in Selected Sites of Geba Catchment, Ethiopia
Authors: Halefom Kidane
Abstract:
This study aims to assess the wind resource potential and characterize the urban wind patterns of Hawassa City, Ethiopia. The estimation and characterization of wind resources are crucial for sustainable urban planning, renewable energy development, and climate change mitigation strategies. A secondary data collection method was used to carry out the study. The data collected at 2 meters were analyzed statistically and extrapolated to standard heights of 10 meters and 30 meters using the power law equation. The standard deviation method was used to calculate the scale and shape factors. From the analysis, the maximum and minimum mean daily wind speeds at 2 meters were 1.33 m/s and 0.05 m/s in 2016, 1.67 m/s and 0.14 m/s in 2017, and 1.61 m/s and 0.07 m/s in 2018, respectively. The maximum monthly average wind speed of Hawassa City at 2 meters was recorded in December 2016 (around 0.78 m/s), in January 2017 (0.80 m/s), and in June 2018 (0.76 m/s). On the other hand, October was the month with the minimum mean wind speed in all years, with values of 0.47 m/s in 2016, 0.47 m/s in 2017 and 0.34 m/s in 2018. The annual mean wind speed at 2 meters was 0.61 m/s in 2016, 0.64 m/s in 2017 and 0.57 m/s in 2018. From extrapolation, the annual mean wind speeds for 2016, 2017 and 2018 were 1.17 m/s, 1.22 m/s and 1.11 m/s at a height of 10 meters, and 3.34 m/s, 3.78 m/s and 3.01 m/s at a height of 30 meters, respectively. Thus, the site consists mainly of class I wind speeds, even at the extrapolated heights.
Keywords: artificial neural networks, forecasting, min-max normalization, wind speed
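The two calculations named here, power-law height extrapolation and the standard deviation method for Weibull parameters, can be sketched as follows. The shear exponent alpha = 1/7 and the mean/standard-deviation inputs are illustrative assumptions, not the paper's fitted values:

```python
import math

def extrapolate_wind(v_ref, h_ref, h, alpha=1.0 / 7.0):
    """Power-law wind profile: v(h) = v_ref * (h / h_ref)**alpha."""
    return v_ref * (h / h_ref) ** alpha

def weibull_params(v_mean, v_std):
    """Standard-deviation method: Weibull shape k and scale c from mean and std."""
    k = (v_std / v_mean) ** -1.086          # empirical shape estimate
    c = v_mean / math.gamma(1.0 + 1.0 / k)  # scale from the Weibull mean relation
    return k, c

# Illustrative inputs only:
v10 = extrapolate_wind(0.61, 2.0, 10.0)     # 2 m annual mean taken up to 10 m
k, c = weibull_params(4.5, 2.1)
print(round(v10, 2), round(k, 2), round(c, 2))
```

Note that the paper's 10 m and 30 m values imply a site-specific shear exponent rather than the textbook 1/7, which is exactly why the exponent should be fitted to local data.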
Procedia PDF Downloads 75
23 An Eulerian Method for Fluid-Structure Interaction Simulation Applied to Wave Damping by Elastic Structures
Authors: Julien Deborde, Thomas Milcent, Stéphane Glockner, Pierre Lubin
Abstract:
A fully Eulerian method is developed to solve the problem of fluid-elastic structure interaction based on a 1-fluid method. The interface between the fluid and the elastic structure is captured by a level set function, advected by the fluid velocity and solved with a WENO 5 scheme. The elastic deformations are computed in an Eulerian framework thanks to the backward characteristics. We use the Neo-Hookean or Mooney-Rivlin hyperelastic models, and the elastic forces are incorporated as a source term in the incompressible Navier-Stokes equations. The velocity/pressure coupling is solved with a pressure-correction method, and the equations are discretized by finite volume schemes on a Cartesian grid. The main difficulty is that large deformations in the fluid cause numerical instabilities. To avoid these problems, we use a re-initialization process for the level set and linear extrapolation of the backward characteristics. First, we verify and validate our approach on several test cases, including the FSI benchmark proposed by Turek. Next, we apply this method to study the wave damping phenomenon, a means of reducing the impact of waves on the coastline. So far, to our knowledge, only simulations with rigid or one-dimensional elastic structures have been studied in the literature. We propose to place elastic structures on the seabed, and we present results where 50% of the wave energy is absorbed.
Keywords: damping wave, Eulerian formulation, finite volume, fluid structure interaction, hyperelastic material
Procedia PDF Downloads 323
22 Development of GIS-Based Geotechnical Guidance Maps for Prediction of Soil Bearing Capacity
Authors: Q. Toufeeq, R. Kauser, U. R. Jamil, N. Sohaib
Abstract:
Foundation design of a structure needs soil investigation to avoid failures due to settlement, but such investigation is expensive and time-consuming. The development of new residential societies involves extensive leveling of large sites accompanied by heavy land filling. Poor filling practices at great depths cause differential settlement and consolidation of the underlying soil, which sometimes result in the collapse of structures. The extent of filling remains unknown to the individual developer unless a soil investigation is carried out, and soil investigation cannot be performed on every available site due to the costs involved. However, a fair estimate of bearing capacity can be made if such tests have already been done in the surrounding areas; geotechnical guidance maps can provide a fair assessment of soil properties. Previously, GIS-based approaches have used extrapolation and interpolation techniques to map bearing capacities, underground recharge, soil classification, geological hazards, landslide hazards, socio-economic factors, and soil liquefaction. Standard penetration test (SPT) data of surrounding sites were already available, and Google Earth was used for digitization of the collected data. A few points were set aside for data calibration and validation. The resultant geographic information system (GIS)-based guidance maps are helpful for anticipating bearing capacity in the real estate industry.
Keywords: bearing capacity, soil classification, geographical information system, inverse distance weighted, radial basis function
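The inverse-distance-weighted interpolation named in the keywords is the simplest of the techniques involved. A minimal sketch; the SPT site coordinates and bearing capacities below are hypothetical:

```python
import math

def idw(points, x, y, power=2.0):
    """
    Inverse-distance-weighted estimate at (x, y) from (xi, yi, value) samples,
    e.g. SPT-derived allowable bearing capacities at investigated sites.
    """
    num = den = 0.0
    for xi, yi, v in points:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return v  # query point coincides with a sample
        w = d ** -power
        num += w * v
        den += w
    return num / den

# Hypothetical SPT sites with allowable bearing capacity in kPa (illustration):
sites = [(0.0, 0.0, 150.0), (100.0, 0.0, 180.0), (0.0, 100.0, 120.0)]
print(round(idw(sites, 50.0, 50.0), 1))
```

A guidance map is obtained by evaluating such an interpolator (or a radial basis function alternative) over a grid covering the study area.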
Procedia PDF Downloads 135
21 Agile Implementation of 'PULL' Principles in a Manufacturing Process Chain for Aerospace Composite Parts
Authors: Torsten Mielitz, Dietmar Schulz, York C. Roth
Abstract:
Market forecasts show a significant increase in the demand for aircraft within the next two decades, and production rates will be adapted accordingly. Improvements and optimizations of the industrial system are becoming more important to cope with future challenges in manufacturing and assembly. The highest quality standards have to be met for aerospace parts, while cost-effective production in industrial systems and methodologies is also a key driver. A look at other industries, e.g. automotive, shows well-established processes for streamlining existing manufacturing systems. In this paper, the implementation of 'PULL' principles in an existing manufacturing process chain for a large-scale composite part is presented. A nonlinear extrapolation based on Little's Law showed a risk of a significant increase in the number of parts needed in the process chain to meet future demand. A project was set up to mitigate this risk, and its methodology changed from a traditional milestone approach at the beginning towards an agile way of working at the end, in order to deliver immediate benefits on the shop floor. Finally, delivery rates could be increased while avoiding more semi-finished parts in the process chain (work in progress and inventory) through the successful implementation of the 'PULL' philosophy between the work stations on the shop floor. Lessons learned during the running project as well as the implementation and operations phases are discussed in order to share best practices.
Keywords: aerospace composite part manufacturing, PULL principles, shop-floor implementation, lessons learned
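The Little's Law relation behind the extrapolation is simply WIP = throughput x lead time. A sketch with invented numbers (not the paper's rates):

```python
def wip_littles_law(throughput_per_day, lead_time_days):
    """Little's Law: average work in progress = throughput * lead time."""
    return throughput_per_day * lead_time_days

# Hypothetical numbers for a composite-part line (illustration only):
# at 0.5 parts/day through a 30-day process chain, ~15 parts sit in the chain;
# doubling the rate without shortening the lead time doubles the WIP.
print(wip_littles_law(0.5, 30))
print(wip_littles_law(1.0, 30))
```

This is why a rate ramp-up without lead-time reduction inflates WIP, and why the 'PULL' implementation targets the lead time between work stations.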
Procedia PDF Downloads 172
20 Analysis, Evaluation and Optimization of Food Management: Minimization of Food Losses and Food Wastage along the Food Value Chain
Authors: G. Hafner
Abstract:
A method developed at the University of Stuttgart will be presented: 'Analysis, Evaluation and Optimization of Food Management'. A major focus is the quantification of food losses and food waste, as well as their classification and evaluation with a view to system optimization through waste prevention. Quantification and accounting of food, food losses and food waste along the food chain require a clear definition of core terms at the outset, including their methodological classification and demarcation within the sectors of the food value chain. The food chain is divided into agriculture, industry and crafts, trade, and consumption (at home and out of home). To align these core terms, the authors have cooperated with relevant stakeholders in Germany with the goal of holistic, agreed definitions for the whole food chain. This includes modeling of subsystems within the food value chain, definition of terms, differentiation between food losses and food wastage, and methodological approaches. 'Food losses' and 'food wastes' are assigned to individual sectors of the food chain, including a description of the respective methods. The method for analysis, evaluation and optimization of food management systems consists of the following parts: Part I: Terms and Definitions. Part II: System Modeling. Part III: Procedure for Data Collection and Accounting. Part IV: Methodological Approaches for Classification and Evaluation of Results. Part V: Evaluation Parameters and Benchmarks. Part VI: Measures for Optimization. Part VII: Monitoring of Success. The method will be demonstrated using the example of an investigation of food losses and food wastage in the Federal State of Bavaria, including an extrapolation of the respective results to quantify food wastage in Germany.
Keywords: food losses, food waste, resource management, waste management, system analysis, waste minimization, resource efficiency
Procedia PDF Downloads 405
19 Chemical Life Cycle Alternative Assessment as a Green Chemical Substitution Framework: A Feasibility Study
Authors: Sami Ayad, Mengshan Lee
Abstract:
The Sustainable Development Goals (SDGs) were designed as the best possible blueprint to achieve peace, prosperity and, overall, a better and more sustainable future for the Earth and all its people, and such a blueprint is needed more than ever. The SDGs face many hurdles that may prevent them from becoming reality; one such hurdle, arguably, is the chemical pollution and unintended chemical impacts generated through the production of the various goods and resources that we consume. Chemical Alternatives Assessment has proven to be a viable solution for chemical pollution management in terms of filtering out hazardous chemicals in favor of greener alternatives. However, current substitution practice lacks crucial quantitative datasets (exposures and life cycle impacts) to ensure that no unintended trade-offs occur in the substitution process. A Chemical Life Cycle Alternative Assessment (CLiCAA) framework is proposed as a reliable and replicable alternative to Life Cycle Based Alternative Assessment (LCAA), as it integrates chemical molecular structure analysis and the Chemical Life Cycle Collaborative (CLiCC) web-based tool to fill the data gaps that the former frameworks suffer from. The CLiCAA framework consists of four filtering layers, the first two being mandatory and the final two being optional assessment and data extrapolation steps. Each layer includes relevant impact categories for each chemical, ranging from human to environmental impacts, which are assessed and aggregated into unique scores for overall comparable results, even with little or no data.
A feasibility study will demonstrate the efficiency and accuracy of CLiCAA while bridging cancer potency and exposure limit data, with the hope of providing the necessary categorical impact information to every firm possible, especially those disadvantaged in terms of research and resource management.
Keywords: chemical alternative assessment, LCA, LCAA, CLiCC, CLiCAA, chemical substitution framework, cancer potency data, chemical molecular structure analysis
Procedia PDF Downloads 92
18 Non-Destructive Test of Bar for Determination of Critical Compression Force Directed towards the Pole
Authors: Boris Blostotsky, Elia Efraim
Abstract:
The phenomenon of buckling of structural elements under compression is revealed in many cases of loading and is encountered in many structures and mechanisms. In the present work, the method and results of a dynamic test for buckling of a bar loaded by a compression force directed towards the pole are considered. Experimental determination of the critical force for such a system has not been made previously. The tested object is a bar with a semi-rigid connection to the base at one of its ends, and with a hinge moving along a circle at the other. The test includes measuring the natural frequency of the bar at different values of the compression load. The lateral stiffness is calculated from the natural frequency and the reduced mass at the bar's movable end. The critical load is determined by extrapolating the values of the lateral stiffness to zero. For the experimental investigation, a special test-bed was created that allows stability testing at positive and negative curvature of the movable end's trajectory, as well as varying the rotational stiffness of the other end's connection. Decreasing the friction at the movable end extends the range of applied compression force. The testing method includes: - a methodology for planning the experiment, which determines the required number of tests at various load values in the defined range and the type of extrapolating function; - a methodology for experimental determination of the reduced mass at the bar's movable end, including its own mass; - a methodology for experimental determination of the lateral stiffness of the uncompressed bar's rotational semi-rigid connection at the base. For planning the experiment and for comparison of the experimental results with the theoretical values of the critical load, the analytical dependencies of the lateral stiffness of the bar with the defined end conditions on the compression load were derived. 
In the particular case of a perfectly rigid connection of the bar to the base, the critical load value corresponds to the solution by S.P. Timoshenko. Good agreement between the calculated and experimental values was obtained.
Keywords: non-destructive test, buckling, dynamic method, semi-rigid connections
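The extrapolation step can be sketched numerically: stiffness follows from each measured frequency as k = m(2*pi*f)^2, and the critical load is the root of a linear fit of k against P. The reduced mass and (load, frequency) pairs below are invented for illustration:

```python
import numpy as np

# Hypothetical (load, natural frequency) measurements (illustration only).
m_red = 2.0                                   # reduced mass at movable end, kg
P = np.array([0.0, 100.0, 200.0, 300.0])      # compression force, N
f = np.array([5.0, 4.33, 3.54, 2.5])          # measured natural frequency, Hz

k = m_red * (2.0 * np.pi * f) ** 2            # lateral stiffness from f and m

# Linear fit k = a*P + b; the critical load is where stiffness extrapolates
# to zero, i.e. P_cr = -b / a.
a, b = np.polyfit(P, k, 1)
P_cr = -b / a
print(round(P_cr, 1))
```

The non-destructive character of the method is visible here: all measurements stay well below P_cr, and the critical load is inferred rather than reached.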
Procedia PDF Downloads 355
17 Dynamic Test for Stability of Bar Loaded by a Compression Force Directed Towards the Pole
Authors: Elia Efraim, Boris Blostotsky
Abstract:
The phenomenon of buckling of structural elements under compression is revealed in many cases of loading and is encountered in many structures and mechanisms. In the present work, the method and results of a dynamic test for buckling of a bar loaded by a compression force directed towards the pole are considered. Experimental determination of the critical force for such a system has not been made previously. The tested object is a bar with a semi-rigid connection to the base at one of its ends, and with a hinge moving along a circle at the other. The test includes measuring the natural frequency of the bar at different values of the compression load. The lateral stiffness is calculated from the natural frequency and the reduced mass at the bar's movable end. The critical load is determined by extrapolating the values of the lateral stiffness to zero. For the experimental investigation, a special test-bed was created that allows stability testing at positive and negative curvature of the movable end's trajectory, as well as varying the rotational stiffness of the other end's connection. Decreasing the friction at the movable end extends the range of applied compression force. The testing method includes: - a methodology for planning the experiment, which determines the required number of tests at various load values in the defined range and the type of extrapolating function; - a methodology for experimental determination of the reduced mass at the bar's movable end, including its own mass; - a methodology for experimental determination of the lateral stiffness of the uncompressed bar's rotational semi-rigid connection at the base. For planning the experiment and for comparison of the experimental results with the theoretical values of the critical load, the analytical dependencies of the lateral stiffness of the bar with the defined end conditions on the compression load were derived. 
In the particular case of a perfectly rigid connection of the bar to the base, the critical load value corresponds to the solution by S.P. Timoshenko. Good agreement between the calculated and experimental values was obtained.
Keywords: buckling, dynamic method, end-fixity factor, force directed towards a pole
Procedia PDF Downloads 350
16 Insect Diversity Potential in Olive Trees in Two Orchards Differently Managed Under an Arid Climate in the Western Steppe Land, Algeria
Authors: Samir Ali-arous, Mohamed Beddane, Khaled Djelouah
Abstract:
This study investigated the insect diversity of olive (Olea europaea Linnaeus (Oleaceae)) groves grown in an arid climate in Algeria. Several sampling methods were used within two differently managed orchards. Fifty arthropod species belonging to diverse orders and families were recorded. Hymenopteran species were quantitatively the most abundant, followed by species of the Heteroptera, Aranea, Coleoptera and Homoptera orders. Regarding functional feeding groups, phytophagous species were dominant in both the weeded and the unweeded orchard; however, their abundance was higher in the weeded site. Predators ranked second, and pollinators were more frequent in the unweeded olive orchard. A two-factor ANOVA with repeated measures revealed a highly significant effect of the weed management system, of measurement repetition, and of their interaction on arthropod abundances (P < 0.05). Likewise, generalized linear models showed that the N/S ratio varied significantly between the two weed management approaches; in contrast, the remaining diversity indices, including the Shannon index H', showed no significant difference. Moreover, the diversity parameters of the arthropod communities in each agro-system showed multiple significant correlations (P < 0.05). Rarefaction and extrapolation (R/E) sampling curves showed that the survey and monitoring carried out in both sites achieved optimum coverage of the entomofauna present, including scarce and transient species. Overall, the calculated diversity and similarity indices were greater in the unweeded orchard than in the weeded orchard, demonstrating the key role of spontaneous flora in entomofaunal diversity. Principal Component Analysis (PCA) revealed correlations between arthropod abundances and naturally occurring plants in the olive orchards, including beneficials.
Keywords: Algeria, olive, insects, diversity, wild plants
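The Shannon index H' compared between the orchards is computed from species proportions. A minimal sketch; the abundance counts are invented, not the study's data:

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical abundance counts per species in one orchard (illustration only):
print(round(shannon_index([50, 30, 15, 5]), 3))
```

H' rises both with species richness and with evenness of the counts, which is why it can stay flat between sites even when the N/S ratio differs.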
Procedia PDF Downloads 75
15 An Investigation into the Crystallization Tendency/Kinetics of Amorphous Active Pharmaceutical Ingredients: A Case Study with Dipyridamole and Cinnarizine
Authors: Shrawan Baghel, Helen Cathcart, Niall J. O'Reilly
Abstract:
Amorphous drug formulations have great potential to enhance the solubility, and thus bioavailability, of BCS class II drugs. However, the higher free energy and molecular mobility of the amorphous form lower the activation energy barrier for crystallization and thermodynamically drive it towards the crystalline state, which makes such formulations unstable. Accurate determination of the crystallization tendency/kinetics is the key to the successful design and development of such systems. In this study, dipyridamole (DPM) and cinnarizine (CNZ) have been selected as model compounds. Thermodynamic fragility (m_T) is measured from the heat capacity change at the glass transition temperature (Tg), whereas dynamic fragility (m_D) is evaluated using methods based on extrapolation of the configurational entropy to zero (m_D,CE) and on the heating rate dependence of Tg (m_D,Tg). The mean relaxation time of the amorphous drugs was calculated from the Vogel-Tammann-Fulcher (VTF) equation. Furthermore, the correlation between fragility and glass forming ability (GFA) of the model drugs has been established, and the relevance of these parameters to the crystallization of amorphous drugs is assessed. Moreover, the crystallization kinetics of the model drugs under isothermal conditions has been studied using the Johnson-Mehl-Avrami (JMA) approach to determine the Avrami constant ‘n’, which provides insight into the mechanism of crystallization. To probe further into the crystallization mechanism, the non-isothermal crystallization kinetics of the model systems was also analysed by statistically fitting the crystallization data to 15 different kinetic models, and the relevance of a model-free kinetic approach has been established. In addition, the crystallization mechanism for DPM and CNZ at each extent of transformation has been predicted. The calculated fragility, glass forming ability (GFA) and crystallization kinetics are found to be in good correlation with the stability prediction of amorphous solid dispersions. 
Thus, this research work involves a multidisciplinary approach to establish fragility, GFA and crystallization kinetics as stability predictors for amorphous drug formulations.Keywords: amorphous, fragility, glass forming ability, molecular mobility, mean relaxation time, crystallization kinetics, stability
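The VTF relation used above to obtain mean relaxation times, τ(T) = τ₀·exp(D·T₀/(T − T₀)), can be sketched as follows; the parameter values (τ₀, D, T₀) are illustrative assumptions, not fitted values from this study.

```python
import math

def vtf_relaxation_time(T, tau0=1e-14, D=10.0, T0=250.0):
    """Vogel-Tammann-Fulcher equation: tau = tau0 * exp(D*T0 / (T - T0)).

    T in kelvin; tau0, D (strength parameter) and T0 (Vogel temperature)
    are assumed, illustrative values."""
    return tau0 * math.exp(D * T0 / (T - T0))

# Relaxation time grows steeply as T approaches T0 from above
for T in (350.0, 320.0, 300.0):
    print(T, vtf_relaxation_time(T))
```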
Procedia PDF Downloads 354
14 Dynamic Environmental Impact Study during the Construction of the French Nuclear Power Plants
Authors: A. Er-Raki, D. Hartmann, J. P. Belaud, S. Negny
Abstract:
This paper has a double purpose: firstly, a literature review of life cycle analysis (LCA), and secondly, a comparison between conventional (static) LCA and multi-level dynamic LCA on the following items: (i) the evolution of inventories with time and (ii) the temporal evolution of the databases. The first part of the paper summarizes the state of the art of the static LCA approach. The limits of static LCA have been identified, especially the non-consideration of spatial and temporal evolution in the inventory, in the characterization factors (CFs) and in the databases. Then the different levels of integration of the notion of temporality in life cycle analysis studies are described. In the second part, the dynamic inventory has been evaluated, firstly for a single nuclear plant and secondly for the entire French nuclear power fleet, taking into account the construction durations of all the plants. In addition, the databases have been adapted by integrating the temporal variability of the French energy mix. Several iterations were used to converge towards the real environmental impact of the energy mix. Another adaptation of the databases was made to take into account the temporal evolution of raw material market data. The energy mix of the period studied was identified based on an extrapolation of the reference production values of each means of production. An application to the construction of the French nuclear power plants from 1971 to 2000 has been performed, in which a dynamic inventory of raw materials has been evaluated. The impacts were then characterized with the ILCD 2011 characterization method. In order to compare with a purely static approach, a static impact assessment was made with the V 3.4 Ecoinvent datasets without adaptation, and with a static inventory assuming that all the power stations had been built at the same time. 
Finally, a comparison between the static and dynamic LCA approaches was set up to determine the gap between them for each of the two levels of integration. The results were analyzed to identify the contribution of the evolving nuclear power fleet construction to the total environmental impacts of the French energy mix during the same period. An equivalent strategy using a dynamic approach will further be applied to identify the environmental impacts of different energy transition scenarios, making it possible to choose the best energy mix from an environmental viewpoint.Keywords: LCA, static, dynamic, inventory, construction, nuclear energy, energy mix, energy transition
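The core difference between the static and dynamic approaches described above, namely weighting each year's inventory by that year's energy-mix characterization factor rather than applying one fixed factor, can be sketched as follows; all yearly values are hypothetical illustrations, not the study's data.

```python
# Hypothetical yearly material inventory (t) and CO2 intensity of the
# energy mix (t CO2 per t of material), evolving over time (illustrative).
inventory = {1971: 1200.0, 1972: 1500.0, 1973: 900.0}
mix_cf = {1971: 0.95, 1972: 0.90, 1973: 0.80}

# Static LCA: one fixed characterization factor applied to the summed inventory
static_impact = sum(inventory.values()) * 0.90

# Dynamic LCA: each year's inventory meets that year's characterization factor
dynamic_impact = sum(inventory[y] * mix_cf[y] for y in inventory)

print(static_impact, dynamic_impact)
```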
Procedia PDF Downloads 105
13 Localized Variabilities in Traffic-related Air Pollutant Concentrations Revealed Using Compact Sensor Networks
Authors: Eric A. Morris, Xia Liu, Yee Ka Wong, Greg J. Evans, Jeff R. Brook
Abstract:
Air quality monitoring stations tend to be widely distributed and are often located far from major roadways; thus, determining where, when, and which traffic-related air pollutants (TRAPs) have the greatest impact on public health becomes a matter of extrapolation. Compact, multipollutant sensor systems are an effective solution, as they enable several TRAPs to be monitored in a geospatially dense network, filling in the gaps between conventional monitoring stations. This work describes two applications of one such system, named AirSENCE, for gathering actionable air quality data relevant to smart city infrastructures. In the first application, four AirSENCE devices were co-located with traffic monitors around the perimeter of a city block in Oshawa, Ontario. This study, which coincided with the COVID-19 outbreak of 2020 and subsequent lockdown measures, demonstrated a direct relationship between decreased traffic volumes and TRAP concentrations. Conversely, road construction was observed to cause elevated TRAP levels while reducing traffic volumes, illustrating that conventional smart city sensors such as traffic counters provide inadequate data for inferring air quality conditions. The second application used two AirSENCE sensors on opposite sides of a major two-way commuter road in Toronto. Clear correlations of TRAP concentrations with wind direction were observed, showing that impacted areas are not necessarily static and may exhibit high day-to-day variability in air quality conditions despite consistent traffic volumes. Both applications provide compelling evidence favouring the inclusion of air quality sensors in current and future smart city infrastructure planning. 
Such sensors provide direct measurements that are useful for public health alerting as well as decision-making for projects involving traffic mitigation, heavy construction, and urban renewal efforts.Keywords: distributed sensor network, continuous ambient air quality monitoring, smart city sensors, Internet of Things, traffic-related air pollutants
Procedia PDF Downloads 72
12 MAOD Is Estimated by Sum of Contributions
Authors: David W. Hill, Linda W. Glass, Jakob L. Vingren
Abstract:
Maximal accumulated oxygen deficit (MAOD), the gold standard measure of anaerobic capacity, is the difference between the oxygen cost of exhaustive severe-intensity exercise and the accumulated oxygen consumption (O2; mL·kg–1). In theory, MAOD can be estimated as the sum of independent estimates of the phosphocreatine and glycolysis contributions, which we refer to as PCr+glycolysis. Purpose: The purpose was to test the hypothesis that PCr+glycolysis provides a valid measure of anaerobic capacity in cycling and running. Methods: The participants were 27 women (mean ± SD, age 22 ± 1 y, height 165 ± 7 cm, weight 63.4 ± 9.7 kg) and 25 men (age 22 ± 1 y, height 179 ± 6 cm, weight 80.8 ± 14.8 kg). They performed two exhaustive cycling and running tests, at speeds and work rates that were tolerable for ~5 min. The rate of oxygen consumption (VO2; mL·kg–1·min–1) was measured in warmups, in the tests, and during 7 min of recovery. Finger-prick blood samples obtained after exercise were analysed to determine peak blood lactate concentration (PeakLac). The VO2 response in exercise was fitted to a model with a fast ‘primary’ phase followed by a delayed ‘slow’ component, from which were calculated the accumulated O2 and the excess O2 attributable to the slow component. The VO2 response in recovery was fitted to a model with a fast phase and a slow component sharing a common time delay. Oxygen demand (in mL·kg–1·min–1) was determined by extrapolation from steady-state VO2 in warmups; the total oxygen cost (in mL·kg–1) was determined by multiplying this demand by time to exhaustion and adding the excess O2; then, MAOD was calculated as total oxygen cost minus accumulated O2. The phosphocreatine contribution (area under the fast phase of the post-exercise VO2) and the glycolytic contribution (converted from PeakLac) were summed to give PCr+glycolysis. 
There was not an interaction effect involving sex, so values for anaerobic capacity were examined using a two-way ANOVA, with repeated measures across method (PCr+glycolysis vs MAOD) and mode (cycling vs running). Results: There was a significant effect only for exercise mode. There was no difference between MAOD and PCr+glycolysis: values were 59 ± 6 mL·kg–1 and 61 ± 8 mL·kg–1 in cycling and 78 ± 7 mL·kg–1 and 75 ± 8 mL·kg–1 in running. Discussion: PCr+glycolysis is a valid measure of anaerobic capacity in cycling and running, and it is as valid for women as for men.Keywords: alactic, anaerobic, cycling, ergometer, glycolysis, lactic, lactate, oxygen deficit, phosphocreatine, running, treadmill
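The arithmetic of the two estimates compared above can be sketched as follows; all numbers are illustrative assumptions, and the lactate-to-oxygen-equivalent conversion (3 mL·kg⁻¹ per mmol·L⁻¹) is a commonly used approximation rather than necessarily the study's exact value.

```python
# Illustrative values (mL/kg unless noted), not the study's data
o2_demand = 55.0          # mL/kg/min, extrapolated from warmup steady states
time_to_exhaustion = 5.0  # min
excess_o2 = 4.0           # slow-component excess, mL/kg
accumulated_o2 = 218.0    # measured during exercise, mL/kg

# MAOD = total oxygen cost - accumulated O2 uptake
total_cost = o2_demand * time_to_exhaustion + excess_o2
maod = total_cost - accumulated_o2

# Sum-of-contributions estimate: PCr+glycolysis
pcr_contribution = 25.0          # area under fast phase of recovery VO2, mL/kg
peak_lactate = 12.0              # mmol/L
glycolytic = peak_lactate * 3.0  # common O2-equivalent conversion (assumption)
pcr_plus_glycolysis = pcr_contribution + glycolytic

print(maod, pcr_plus_glycolysis)
```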
Procedia PDF Downloads 136
11 Robust Numerical Method for Singularly Perturbed Semilinear Boundary Value Problem with Nonlocal Boundary Condition
Authors: Habtamu Garoma Debela, Gemechis File Duressa
Abstract:
In this work, our primary interest is to provide ε-uniformly convergent numerical techniques for solving singularly perturbed semilinear boundary value problems with a non-local boundary condition. These singular perturbation problems are described by differential equations in which the highest-order derivative is multiplied by an arbitrarily small parameter ε, known as the singular perturbation parameter. This leads to the existence of boundary layers, which are narrow regions in the neighborhood of the boundary of the domain where the gradient of the solution becomes steep as the perturbation parameter tends to zero. Due to the appearance of the layer phenomena, it is a challenging task to provide ε-uniform numerical methods. The term 'ε-uniform' identifies those numerical methods in which the approximate solution converges to the corresponding exact solution (measured in the supremum norm) independently of the perturbation parameter ε. Thus, the purpose of this work is to develop, analyze, and improve ε-uniform numerical methods for solving singularly perturbed problems. These methods are based on a nonstandard fitted finite difference method. The basic idea behind the fitted operator finite difference method is to replace the denominator functions of the classical derivatives with positive functions derived in such a way that they capture some notable properties of the governing differential equation. A uniformly convergent numerical method is constructed via a nonstandard fitted operator numerical method together with numerical integration methods to solve the problem. The non-local boundary condition is treated using numerical integration techniques. Additionally, the Richardson extrapolation technique, which improves the first-order accuracy of the standard scheme to second-order convergence, is applied for singularly perturbed convection-diffusion problems using the proposed numerical method. 
Maximum absolute errors and rates of convergence for different values of perturbation parameter and mesh sizes are tabulated for the numerical example considered. The method is shown to be ε-uniformly convergent. Finally, extensive numerical experiments are conducted which support all of our theoretical findings. A concise conclusion is provided at the end of this work.Keywords: nonlocal boundary condition, nonstandard fitted operator, semilinear problem, singular perturbation, uniformly convergent
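Richardson extrapolation, which combines solutions on two nested step sizes to cancel the leading error term and lift a first-order approximation to second order, can be sketched generically as follows; the sketch uses simple numerical differentiation as a stand-in, not the paper's fitted-operator scheme.

```python
def forward_diff(f, x, h):
    """First-order accurate forward difference: error O(h)."""
    return (f(x + h) - f(x)) / h

def richardson(f, x, h):
    """Combine step sizes h and h/2 to cancel the O(h) term: 2*D(h/2) - D(h)."""
    return 2.0 * forward_diff(f, x, h / 2) - forward_diff(f, x, h)

f = lambda x: x ** 3
exact = 3.0  # derivative of x^3 at x = 1

# Halving h cuts the plain error roughly in half (first order) but the
# extrapolated error by roughly a factor of four (second order).
for h in (0.1, 0.05):
    print(h, abs(forward_diff(f, 1.0, h) - exact), abs(richardson(f, 1.0, h) - exact))
```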
Procedia PDF Downloads 143
10 Evaluation of Antidiabetic Activity of a Combination Extract of Nigella Sativa & Cinnamomum Cassia in Streptozotocin Induced Type-I Diabetic Rats
Authors: Ginpreet Kaur, Mohammad Yasir Usmani, Mohammed Kamil Khan
Abstract:
Diabetes mellitus is a disease with a high global burden and results in significant morbidity and mortality. In India, the number of people suffering from diabetes is expected to rise from 19 to 57 million by 2025. At present, interest in herbal remedies is growing, with the aim of reducing the side effects associated with conventional dosage forms, such as oral hypoglycemic agents and insulin, for the treatment of diabetes mellitus. Our aim was to investigate the antidiabetic activity of a combinatorial extract of N. sativa & C. cassia in streptozotocin-induced type-I diabetic rats. Thus, the present study was undertaken to screen the postprandial glucose excursion potential through α-glucosidase inhibitory activity (in vitro) and the effect of the combinatorial extract of N. sativa & C. cassia in streptozotocin-induced type-I diabetic rats (in vivo). In addition, changes in body weight, plasma glucose, lipid profile and kidney profile were also determined. The IC50 values for both extracts and acarbose were calculated by the extrapolation method. The combinatorial extract of N. sativa & C. cassia at different dosages (100 and 200 mg/kg orally) and metformin (50 mg/kg orally) as the standard drug were administered for 28 days, and then biochemical estimation, body weights and an oral glucose tolerance test (OGTT) were determined. Histopathological studies were also performed on kidney and pancreatic tissue. In vitro, the combinatorial extract showed a much stronger inhibitory effect than the individual extracts. The results reveal that the combinatorial extract of N. sativa & C. cassia produced a significant decrease in plasma glucose (p < 0.0001), total cholesterol and LDL levels when compared with the STZ group. The decreasing levels of BUN and creatinine revealed the protection afforded by the N. sativa & C. cassia extracts against nephropathy associated with diabetes. Combination of N. sativa & C. 
cassia significantly improved glucose tolerance to exogenously administered glucose (2 g/kg) at the 60, 90 and 120 min intervals of the OGTT in high-dose streptozotocin-induced diabetic rats compared with the untreated control group. Histopathological studies showed that treatment with N. sativa & C. cassia extract, alone and in combination, restored pancreatic tissue integrity and was able to regenerate the STZ-damaged pancreatic β cells. Thus, the present study reveals that the combination of N. sativa & C. cassia extract has significant α-glucosidase inhibitory activity and thus great potential as a new source for diabetes treatment.Keywords: lipid levels, OGTT, diabetes, herbs, glucosidase
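IC50 estimation from a dose-inhibition series, as mentioned for the extracts and acarbose, is commonly done by interpolating to 50% inhibition on a log-dose scale; a minimal sketch follows, with hypothetical concentrations and responses (not the study's data, and not necessarily the paper's exact extrapolation procedure).

```python
import math

def ic50_log_interp(concs, inhibitions):
    """Linearly interpolate log10(concentration) at 50% inhibition.

    concs: ascending concentrations; inhibitions: % inhibition at each."""
    for i in range(len(concs) - 1):
        lo, hi = inhibitions[i], inhibitions[i + 1]
        if lo <= 50.0 <= hi:
            frac = (50.0 - lo) / (hi - lo)
            log_lo, log_hi = math.log10(concs[i]), math.log10(concs[i + 1])
            return 10 ** (log_lo + frac * (log_hi - log_lo))
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical dose-response data (concentration in µg/mL, % inhibition)
concs = [10, 50, 100, 500]
inhibitions = [18.0, 41.0, 63.0, 88.0]
print(ic50_log_interp(concs, inhibitions))
```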
Procedia PDF Downloads 430
9 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology
Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal
Abstract:
Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring that microbiologically safe water is supplied at the customer’s tap. In order to simulate how chloramine behaves as it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package (WaterGEMS). The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters to obtain the closest agreement with actual measured data in a real DWDS would result in cost reduction as well as reduced consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of the water quality parameters (i.e. temperature, pH, and initial mono-chloramine concentration) to maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of the three independent water quality parameters. High and low levels of the water quality parameters were imposed as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS. 
It was found that at pH of 7.75, temperature of 34.16 ºC, and initial mono-chloramine concentration of 3.89 (mg/L) during peak water supply patterns, root mean square error (RMSE) of WQNM for the whole network would be minimized to 0.189, and the optimum conditions for averaged water supply occurred at pH of 7.71, temperature of 18.12 ºC, and initial mono-chloramine concentration of 4.60 (mg/L). The proposed methodology to predict mono-chloramine residual can have a great potential for water treatment plant operators in accurately estimating the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.Keywords: chloramine decay, modelling, response surface methodology, water quality parameters
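The RSM step, fitting a second-order polynomial response surface over the design points and locating the settings that minimize RMSE, can be sketched in one variable as follows; the pH/RMSE data are synthetic illustrations, not the study's measurements.

```python
import numpy as np

# Synthetic design points: pH vs model RMSE (not the study's data)
ph = np.array([7.0, 7.3, 7.6, 7.9, 8.2, 8.5])
rmse = np.array([0.42, 0.31, 0.22, 0.20, 0.27, 0.40])

# Fit a second-order (quadratic) response surface: rmse ~ a*ph^2 + b*ph + c
a, b, c = np.polyfit(ph, rmse, 2)

# Stationary point of the quadratic: d(rmse)/d(ph) = 0 at ph = -b / (2a);
# a > 0 confirms it is a minimum of the fitted surface.
ph_opt = -b / (2 * a)
print(round(ph_opt, 2))
```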
Procedia PDF Downloads 224
8 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River
Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang
Abstract:
The analysis and study of open channel flow dynamics for river applications has been based on flow modelling using discrete numerical models built on the hydrodynamic equations. The overall spatial characteristics of rivers, i.e. the length to depth to width ratio, generally allow one to disregard processes occurring in the vertical or transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and extrapolation of various scenarios. The Magdalena River in Colombia drains a large basin from south to north over 1550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water level fluctuations and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons. As the meander is evolving at a steady pace, repeated flooding has endangered a number of neighborhoods. This study has been undertaken in order to correctly model the flow characteristics of the river in this region, to evaluate various scenarios and to provide decision makers with erosion control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a RiverSurveyor ADCP. Also, in order to characterize the erosion process occurring through the meander, extensive suspended sediment and river bed samples were retrieved, as well as soil perforations over the banks. Hence, based on the DEM ground digital mapping survey and the field data, a 2DH flow model was prepared using the Iber freeware, based on the finite volume method in a non-structured mesh environment. The calibration process was carried out by comparison with available historical data from a nearby hydrologic gauging station. 
Although the model was able to effectively predict the overall flow processes in the region, its spatial characteristics and limitations related to the pressure conditions did not allow for an accurate representation of the erosion processes occurring over specific bank areas and dwellings. In particular, a significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross section, a consequence of severe erosion, has hindered the model’s ability to provide decision makers with a valid, up-to-date planning tool.Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander
Procedia PDF Downloads 319
7 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems
Authors: Georgi Y. Georgiev, Matthew Brouillet
Abstract:
This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory: the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP). This principle suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that there will be an increase in system information, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants was observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. 
Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between internal entropy decrease rate and external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors like changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.Keywords: complexity, self-organization, agent based modelling, efficiency
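The entropy bookkeeping described above, measuring order as the drop in Shannon entropy of the agents' spatial distribution, can be sketched as follows; the grid cells and agent positions are hypothetical illustrations, not output of the NetLogo model.

```python
import math
from collections import Counter

def spatial_entropy(positions):
    """Shannon entropy (nats) of the agents' occupancy distribution over cells."""
    counts = Counter(positions)
    n = len(positions)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# 12 ants scattered over 12 distinct cells: maximal disorder
scattered = list(range(12))
# The same 12 ants concentrated on a 3-cell path: ordered state
on_path = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]

h_start = spatial_entropy(scattered)
h_end = spatial_entropy(on_path)
print(h_start, h_end, h_start - h_end)  # entropy decreases as the path forms
```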
Procedia PDF Downloads 68
6 Density Determination of Liquid Niobium by Means of Ohmic Pulse-Heating for Critical Point Estimation
Authors: Matthias Leitner, Gernot Pottlacher
Abstract:
Experimental determination of critical point data such as critical temperature, critical pressure, critical volume and critical compressibility of high-melting metals such as niobium is very rare due to the outstanding experimental difficulties in reaching the necessary extreme temperature and pressure regimes. Experimental techniques to achieve such extreme conditions include diamond anvil devices, two-stage gas guns or metal samples hit by explosively accelerated flyers. Electrical pulse-heating under increased pressure is another choice. This technique heats thin wire samples of 0.5 mm diameter and 40 mm length from room temperature to melting, and then further to the end of the stable phase, the spinodal line, within several microseconds. When crossing the spinodal line, the sample explodes and reaches the gaseous phase. In our laboratory, pulse-heating experiments can be performed under variation of the ambient pressure from 1 to 5000 bar and allow a direct determination of critical point data for low-melting, but not for high-melting, metals. However, the critical point can also be estimated by extrapolating the liquid-phase density according to theoretical models. A reasonable prerequisite for the extrapolation is the existence of data that cover as much as possible of the liquid phase and at the same time exhibit small uncertainties. Ohmic pulse-heating was therefore applied to determine the thermal volume expansion, and from that the density, of niobium over the entire liquid phase. As a first step, experiments under ambient pressure were performed. The second step will be to perform experiments under high-pressure conditions. During the heating process, shadow images of the expanding sample wire were captured at a frame rate of 4 × 10⁵ fps to monitor the radial expansion as a function of time. Simultaneously, the sample radiance was measured with a pyrometer operating at a mean effective wavelength of 652 nm. 
To increase the accuracy of temperature deduction, the spectral emittance in the liquid phase is also taken into account. Due to the high heating rates of about 2 × 10⁸ K/s, longitudinal expansion of the wire is inhibited, which implies an increased radial expansion. As a consequence, measuring the temperature-dependent radial expansion is sufficient to deduce density as a function of temperature. This is accomplished by evaluating the full widths at half maximum of the cup-shaped intensity profiles calculated from each shadow image of the expanding wire. Relating these diameters to the diameter obtained before the start of pulse-heating, the temperature-dependent volume expansion is calculated. With the help of the known room-temperature density, the volume expansion is then converted into density data. The liquid density behavior so obtained is compared to existing literature data and provides another independent source of experimental data. In this work, the newly determined off-critical liquid-phase density was in a second step utilized as input data for the estimation of niobium’s critical point. The approach used heuristically takes into account the crossover from mean-field to Ising behavior, as well as the non-linearity of the phase diagram’s diameter.Keywords: critical point data, density, liquid metals, niobium, ohmic pulse-heating, volume expansion
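Since longitudinal expansion is inhibited, density follows directly from the squared ratio of cold to hot wire diameter; a minimal sketch with illustrative diameters, taking the room-temperature density of niobium as 8.57 g/cm³:

```python
def density_from_radial_expansion(rho_room, d_room, d_hot):
    """With longitudinal expansion inhibited, volume scales with d^2,
    so rho(T) = rho_room * (d_room / d_hot)**2."""
    return rho_room * (d_room / d_hot) ** 2

RHO_NB_ROOM = 8.57   # g/cm^3, room-temperature density of niobium
d_room = 0.5         # mm, initial wire diameter
# FWHM diameters taken from successive shadow images (illustrative values)
for d_hot in (0.52, 0.55, 0.60):
    print(d_hot, round(density_from_radial_expansion(RHO_NB_ROOM, d_room, d_hot), 3))
```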
Procedia PDF Downloads 219
5 Climate Change Impact on Mortality from Cardiovascular Diseases: Case Study of Bucharest, Romania
Authors: Zenaida Chitu, Roxana Bojariu, Liliana Velea, Roxana Burcea
Abstract:
A number of studies show that extreme air temperatures affect mortality related to cardiovascular diseases, particularly among elderly people. In Romania, summer thermal discomfort, expressed by the Universal Thermal Climate Index (UTCI), is highest in the southern part of the country, where Bucharest, the largest Romanian urban agglomeration, is also located. Urban characteristics such as high building density and reduced green areas enhance the increase of air temperature during summer. In Bucharest, as in many other large cities, the urban heat island effect is present and causes an increase of air temperature compared to surrounding areas. This increase is particularly important during summer heat wave periods. In this context, the researchers performed a temperature-mortality analysis based on daily deaths related to cardiovascular diseases recorded between 2010 and 2019 in Bucharest. The temperature-mortality relationship was modeled by applying a distributed lag non-linear model (DLNM) that includes a bi-dimensional cross-basis function and flexible natural cubic spline functions with three internal knots at the 10th, 75th and 90th percentiles of the temperature distribution, for modelling both the exposure-response and lagged-response dimensions. This analysis was first applied to the present climate. Extrapolation of the exposure-response associations beyond the observed data allowed us to estimate future effects on mortality due to temperature changes under climate change scenarios and specific assumptions. We used future projections of air temperature from five numerical experiments with regional climate models included in the EURO-CORDEX initiative under the relatively moderate (RCP 4.5) and pessimistic (RCP 8.5) concentration scenarios. The results of this analysis show, for RCP 8.5, an ensemble-averaged increase of 6.1% in the heat-attributable mortality fraction in the future in comparison with the present climate (2090-2100 vs.
2010-2019), corresponding to an increase of 640 deaths/year, while the mortality fraction due to cold conditions will be reduced by 2.76%, corresponding to a decrease of 288 deaths/year. When the mortality data are stratified by age, the ensemble-averaged increase of the heat-attributable mortality fraction for elderly people (> 75 years) in the future is even higher (6.5%). These findings reveal the necessity of careful urban development planning in Bucharest to face the public health challenges raised by climate change. Paper Details: This work is financed by the project URCLIM, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by the Ministry of Environment, Romania, with co-funding by the European Union (Grant 690462). Part of this work performed by one of the authors has received funding from the European Union’s Horizon 2020 research and innovation programme through the project EXHAUSTION under grant agreement No 820655.Keywords: cardiovascular diseases, climate change, extreme air temperature, mortality
Procedia PDF Downloads 128
4 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’
Authors: Luminiţa Duţică, Gheorghe Duţică
Abstract:
One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti and others. The heterophonic syntax has a history of its growth, that is, a succession of different concepts and writing techniques. The trajectory of settling this phenomenon does not necessarily follow chronology: there are highly complex primary stages and advanced stages of returning to simple forms of writing. In folklore, the plurimelodic simultaneities are free or random and originate from the (unintentional) differences/‘deviations’ from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all in a flexible rhythmic and non-periodic/immeasurable framework, proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, the heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. Of course, the explanation is simple if we consider the causal relationship between the elements of the sound vocabulary – in this case, modalism – and the typologies of vertical organization appropriate to it. Therefore, completing the ‘classic’ pathway of the writing typologies (monody – polyphony – homophony), heterophony – applied equally to structures of modal, serial or synthesis vocabulary – necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. 
Concerned with the prospect of edifying a new musical ontology, the composer Ştefan Niculescu explored, along with the mathematical organization of heterophony according to his own original methods, the possibility of extrapolating this phenomenon to the macrostructural plane, thus arriving at the unique form of 'synchrony'. Founded on the principle of coincidentia oppositorum (involving the 'one-multiple' pair), the sound architecture imagined by Ştefan Niculescu consists of a (temporal) model/algorithm articulating two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism of macrotemporal amplitude, a strategy the composer developed practically throughout his creative output (see the works Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, and Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu, Symphony II, Opus dacicum, where the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case of the level of complexity achieved by this type of vertical syntax in twentieth-century music.Keywords: heterophony, modalism, serialism, synchrony, syntax
Procedia PDF Downloads 3443 An Investigation on the Suitability of Dual Ion Beam Sputtered GMZO Thin Films: For All Sputtered Buffer-Less Solar Cells
Authors: Vivek Garg, Brajendra S. Sengar, Gaurav Siddharth, Nisheka Anadkat, Amitesh Kumar, Shailendra Kumar, Shaibal Mukherjee
Abstract:
CuInGaSe (CIGSe) is the dominant thin-film solar cell technology. The band alignment at the buffer/CIGSe interface is one of the most crucial parameters for solar cell performance. In this article, the valence band offset (VBOff) and conduction band offset (CBOff) values of the Cu(In0.70Ga0.30)Se / 1 at.% Ga:Mg0.25Zn0.75O (GMZO) heterojunction, grown by a dual ion beam sputtering (DIBS) system, are calculated to understand the carrier transport mechanism at the heterojunction for the realization of all-sputtered buffer-less solar cells. To determine the valence band offset ∆E_V at the GMZO/CIGSe heterojunction interface, the standard method based on core-level photoemission is utilized; ∆E_V is evaluated from common core-level peaks. In our study, the valence band onset (VBOn) values, obtained by the linear extrapolation method, are 2.86 eV for the GMZO film and 0.76 eV for the CIGSe film. In the UPS spectra, the Se 3d peak is observed at 54.82 and 54.70 eV for the CIGSe film and the GMZO/CIGSe interface, respectively, while the Mg 2p peak is observed at 50.09 and 50.12 eV for the GMZO film and the GMZO/CIGSe interface, respectively. The optical band gaps of CIGSe and GMZO, obtained from absorption spectra acquired by spectroscopic ellipsometry, are 1.26 and 3.84 eV, respectively. The calculated average values of ∆E_V and ∆E_C are estimated to be 2.37 and 0.21 eV, respectively, at room temperature. The calculated positive conduction band offset, termed a 'spike' at the absorber junction, is the required criterion for high-efficiency solar cells with efficient charge extraction from the junction. We can therefore conclude that GMZO thin films grown by the dual ion beam sputtering system are a suitable candidate for CIGSe-based ultra-thin buffer-less solar cells.
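The band-offset arithmetic can be sketched from the values quoted above. The core-level (Kraut-style) formula and its sign convention are our assumption of the "standard method based on core-level photoemission"; since the abstract reports averages without detailing the averaging, the single-pair estimate differs slightly from the quoted mean ∆E_V of 2.37 eV:

```python
# UPS/XPS values quoted in the abstract (all in eV)
Se3d_CIGSe, VBOn_CIGSe = 54.82, 0.76   # Se 3d core level and VB onset, CIGSe film
Mg2p_GMZO, VBOn_GMZO = 50.09, 2.86     # Mg 2p core level and VB onset, GMZO film
Se3d_if, Mg2p_if = 54.70, 50.12        # both core levels at the GMZO/CIGSe interface

# Core-level method: valence band offset from core-level-to-VBM separations
# in the bulk films, corrected by the core-level separation at the interface.
dEv = (Se3d_CIGSe - VBOn_CIGSe) - (Mg2p_GMZO - VBOn_GMZO) - (Se3d_if - Mg2p_if)
print(f"single-pair dE_V = {dEv:.2f} eV")  # -> 2.25 eV, near the reported average 2.37 eV

# Conduction band offset from the optical gaps and the reported average dE_V
Eg_CIGSe, Eg_GMZO = 1.26, 3.84
dEc = Eg_GMZO - Eg_CIGSe - 2.37
print(f"dE_C = {dEc:.2f} eV")  # -> 0.21 eV; positive, i.e. a 'spike' at the absorber
```

The positive ∆E_C reproduces the reported 0.21 eV spike exactly from the two band gaps and the average ∆E_V.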
We investigated the band-offset properties at the GMZO/CIGSe heterojunction to verify the suitability of GMZO for the realization of buffer-less solar cells. Acknowledgment: We are thankful for the DIBS, EDX, and XRD facilities at the Sophisticated Instrument Centre (SIC) at IIT Indore. The authors B.S.S. and A.K. acknowledge CSIR, and V.G. acknowledges UGC, India, for their fellowships. B.S.S. is thankful to DST and IUSSTF for the BASE Internship Award. Prof. Shaibal Mukherjee is thankful to DST and IUSSTF for the BASE Fellowship and the MEITY YFRF award. This work is partially supported by DAE BRNS, DST CERI, and the DST-RFBR Project under the India-Russia Programme of Cooperation in Science and Technology. We are thankful to Mukul Gupta for the SIMS facility at UGC-DAE Indore.Keywords: CIGSe, DIBS, GMZO, solar cells, UPS
Procedia PDF Downloads 278