Search results for: space vector pulse width modulation
424 Ethanolamine Detection with Composite Films
Authors: S. A. Krutovertsev, A. E. Tarasova, L. S. Krutovertseva, O. M. Ivanova
Abstract:
The aim of this work was to obtain stable sensitive films with good sensitivity to ethanolamine (C2H7NO) in air. Ethanolamine is used as an adsorbent in various gas purification and separation processes and also has wide industrial application. Chemical sensors of the sorption type are widely used for gas analysis; their behavior is determined by the characteristics of the sensitive sorption layer. The forming conditions and characteristics of chemical gas sensors based on nanostructured, modified silica films activated by different admixtures were studied. As additives, molybdenum-containing polyoxometalates of the 18-series were incorporated into the silica films. The films were formed by hydrolytic polycondensation from tetraethyl orthosilicate solutions. The method's advantage is the possibility of introducing active additives directly into the initial solution, and it yields sensitive thin films with a high specific surface area at room temperature. Their particular properties make polyoxometalates attractive as active additives for forming gas-sensitive films: as catalysts of various redox processes, they can either accelerate the reaction of the matrix with the analyzed gas or interact with it directly, which changes the electrical properties of the matrix. The polyoxometalate-based films were deposited by drop casting from initial solutions based on tetraethyl orthosilicate and polyoxometalates onto test structures manufactured by planar microelectronic technology, with a pair of interdigitated metal electrodes formed on their surface. The sensor's active area was 4.0 x 4.0 mm, and the electrode gap was equal to 0.08 mm. The morphology of the layer surfaces was studied with a Solver P47 scanning probe microscope (NT-MDT, Russia), and the infrared spectra were recorded with a Bruker EQUINOX 55 spectrometer (Germany). The film-formation conditions were varied during the tests, and the electrical parameters of the sensors were measured electronically in real time. The films had a highly developed surface (about 450 m²/g) with nanoscale pores, and their thickness was 0.2-0.3 µm. The study shows that environmental conditions markedly affect the sensor characteristics, which can be improved by a proper choice of the forming and processing procedure. The addition of a polyoxometalate to the silica film stabilized the film mass and markedly changed its electrophysical characteristics. Incorporating Mn3P2Mo18O62 into the silica film resulted in good sensitivity and selectivity to ethanolamine, with a sensitivity maximum at a doping-additive weight content of 30-50% in the matrix. As the ethanolamine concentration changed from 0 to 100 ppm, the films' conductivity increased 10-12 times. The increase in sensitivity is attributed to a complexing reaction of the tested substance with the cationic part of the polyoxometalate, which triggers an intramolecular redox reaction that sharply changes the electrophysical properties of the polyoxometalate. This process is reversible and takes place at room temperature.
Keywords: ethanolamine, gas analysis, polyoxometalate, silica film
423 Characteristics of Plasma Synthetic Jet Actuator in Repetitive Working Mode
Authors: Haohua Zong, Marios Kotsonis
Abstract:
The plasma synthetic jet actuator (PSJA) is a new concept of zero-net-mass-flow actuator which utilizes pulsed arc/spark discharge to rapidly pressurize gas in a small cavity under constant-volume conditions. The unique combination of high exit jet velocity (>400 m/s) and high actuation frequency (>5 kHz) provides a promising solution for high-speed, high-Reynolds-number flow control. This paper focuses on the performance of the PSJA in repetitive working mode, which is more relevant to future flow control applications. A two-electrode PSJA (cavity volume: 424 mm³, orifice diameter: 2 mm) together with a capacitive discharge circuit (discharge energy: 50 mJ-110 mJ) is designed to enable repetitive operation. A Time-Resolved Particle Image Velocimetry (TR-PIV) system working at 10 kHz is exploited to investigate the influence of discharge frequency on the performance of the PSJA. In total, seven cases are tested, covering a wide range of discharge frequencies (20 Hz-560 Hz). The pertinent flow features (shock wave, vortex ring and jet) remain the same for single-shot mode and repetitive working mode. A shock wave is issued prior to jet eruption. Two distinct vortex rings are formed in one cycle: the first is produced by the starting jet, whereas the second is related to the shock wave reflection in the cavity. A sudden pressure rise is induced at the throat inlet by the reflection of the primary shock wave, promoting the shedding of the second vortex ring. In one cycle, the jet exit velocity first increases sharply, then decreases almost linearly. Afterwards, an alternating occurrence of multiple jet stages and refresh stages is observed. By monitoring the dynamic evolution of exit velocity in one cycle, several integral performance parameters of the PSJA can be deduced. As frequency increases, the jet intensity in the steady phase decreases monotonically. In the investigated frequency range, the jet duration time drops from 250 µs to 210 µs and the peak jet velocity decreases from 53 m/s to approximately 39 m/s. The jet impulse and the expelled gas mass (0.69 µN∙s and 0.027 mg at 20 Hz) decline by 48% and 40%, respectively. However, the electro-mechanical efficiency of the PSJA, defined as the ratio of jet mechanical energy to capacitor energy, does not differ significantly (on the order of 0.01%). Fourier transformation of the temporal exit velocity signal indicates two dominant frequencies: one corresponds to the discharge frequency, while the other accounts for the alternation frequency of jet stage and refresh stage in one cycle. The alternation period (approximately 300 µs) is independent of discharge frequency, and is possibly determined intrinsically by the actuator geometry. A simple analytical model is established to interpret the alternation of jet stage and refresh stage. Results show that the dynamic response of exit velocity to a small-scale disturbance (a jump in cavity pressure) can be treated as a second-order under-damped system. The oscillation frequency of the exit velocity, namely the alternation frequency, scales positively with exit area but inversely with cavity volume and throat length. The theoretical value of the alternation period (305 µs) agrees well with the experimental value.
Keywords: plasma, synthetic jet, actuator, frequency effect
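A note on the reported scaling: an alternation frequency that grows with exit area and falls with cavity volume and throat length is qualitatively consistent with a Helmholtz-resonator dependence, f = (c/2π)·sqrt(A_e/(V_c·L_t)). As a hedged back-of-envelope check using the stated cavity volume V_c = 424 mm³ and orifice area A_e = π(1 mm)², and assuming a throat length L_t of about 2 mm and a room-temperature sound speed c = 343 m/s (neither value is given in the abstract), f ≈ (343/2π)·sqrt(3.14×10⁻⁶ m² / (4.24×10⁻⁷ m³ × 2×10⁻³ m)) ≈ 3.3 kHz, i.e., a period of roughly 300 µs, close to the measured alternation period.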
422 Methods Used to Achieve Airtightness of 0.07 Ach@50Pa for an Industrial Building
Authors: G. Wimmers
Abstract:
The University of Northern British Columbia needed a new laboratory building for the Master of Engineering in Integrated Wood Design Program and its new Civil Engineering Program. Since the University is committed to reducing its environmental footprint, and because the Master of Engineering Program is actively involved in research on energy-efficient buildings, the decision was made to request the energy efficiency of the Passive House Standard in the Request for Proposals. The building is located in Prince George in Northern British Columbia, a city at the northern edge of climate zone 6 with average lows between -8 °C and -10.5 °C in the winter months. The footprint of the building is 30 m x 30 m with a height of about 10 m. The building consists of a large open space for the shop and laboratory, with a small portion of the floorplan being two floors, allowing for a mezzanine level with a few offices as well as mechanical and storage rooms. The total net floor area is 1042 m² and the building's gross volume is 9686 m³. One key requirement of the Passive House Standard is an airtight envelope with an airtightness of < 0.6 ach@50Pa. In the past, we have seen that this requirement can be challenging to reach for industrial buildings. When testing for airtightness, it is important to test in both directions, pressurization and depressurization, since in reality airflow through all leakages of the building happens simultaneously in both directions. A specific detail or situation, such as overlapping but unsealed membranes, might be airtight in one direction, due to a valve effect, but open up when tested in the opposite direction. In this specific project, the advantage was the overall very compact envelope and the good volume-to-envelope-area ratio. The building had to be very airtight, and the details for the window and door installation, all transitions from walls to roof and floor, the connections of the prefabricated wall panels, and all penetrations had to be carefully developed to allow for maximum airtightness. The biggest challenges were the specific components of this industrial building: the large bay door for semi-trucks and the dust extraction system for the wood processing machinery. The testing was carried out in accordance with EN 13829 (method A), as specified in the International Passive House Standard, and the volume calculation also followed the Passive House guideline, resulting in a net volume of 7383 m³, excluding all walls, floors and suspended-ceiling volumes. This paper will explore the details and strategies used to achieve an airtightness of 0.07 ach@50Pa, to the best of our knowledge the lowest value achieved in North America so far following the test protocol of the International Passive House Standard, and discuss the crucial steps throughout the project phases and the most challenging details.
Keywords: air changes, airtightness, envelope design, industrial building, passive house
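For scale, the conversion from air-change rate to leakage airflow follows directly from the definition of ach@50Pa: Q₅₀ = n₅₀ × V_net = 0.07 h⁻¹ × 7383 m³ ≈ 517 m³/h at 50 Pa, compared with roughly 0.6 h⁻¹ × 7383 m³ ≈ 4430 m³/h allowed by the Passive House limit for the same net volume.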
421 Diagenesis of the Permian Ecca Sandstones and Mudstones, in the Eastern Cape Province, South Africa: Implications for the Shale Gas Potential of the Karoo Basin
Authors: Temitope L. Baiyegunhi, Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava
Abstract:
Diagenesis is among the most important factors affecting reservoir properties. Although published data provide a vast amount of information on the geology, sedimentology and lithostratigraphy of the Ecca Group in the Karoo Basin of South Africa, little is known of the diagenesis of the potentially viable shales and sandstones of the Ecca Group. This study aims to provide a general account of the diagenesis of the sandstones and mudstones of the Ecca Group. Twenty-five diagenetic textures and structures are identified and grouped into three regimes or stages: eogenesis, mesogenesis and telogenesis. Clay minerals are the most common cementing materials in the Ecca sandstones and mudstones. Smectite, kaolinite and illite are the major clay minerals, acting as pore-lining rims and pore-filling cement. Most of the clay minerals and detrital grains were extensively attacked and replaced by calcite. Calcite precipitated locally in pore spaces and partly or completely replaced feldspar and quartz grains, mostly at their margins. Precipitation of cements, formation of pyrite and authigenic minerals, and minor lithification occurred during eogenesis. This regime was followed by mesogenesis, which brought about an increase in the tightness of grain packing, loss of pore space and thinning of beds due to the weight of overlying sediments, together with selective dissolution of framework grains. Compaction, mineral overgrowths, mineral replacement, clay-mineral authigenesis, deformation and pressure-solution structures formed during mesogenesis. During telogenesis, the rocks were uplifted, weathered and unroofed by erosion, resulting in additional grain fracturing, decementation and oxidation of iron-rich volcanic fragments and ferromagnesian minerals. The rocks of the Ecca Group were subjected to moderate to intense mechanical and chemical compaction during their progressive burial. The observed pore types are intergranular pores, matrix micropores, and secondary intragranular, dissolution and fracture pores. The presence of fracture and dissolution pores tends to enhance reservoir quality. However, the isolated nature of the pores makes them unfavourable producers of hydrocarbons, which at best would require stimulation. Understanding the spatial and temporal distribution of diagenetic processes in these rocks will allow the development of predictive models of their quality, which may contribute to the reduction of risks involved in their exploration.
Keywords: diagenesis, reservoir quality, Ecca Group, Karoo Supergroup
420 Implementing Urban Rainwater Harvesting Systems: Between Policy and Practice
Authors: Natàlia Garcia Soler, Timothy Moss
Abstract:
Despite the multiple benefits of sustainable urban drainage, as demonstrated in numerous case studies across the world, urban rainwater harvesting techniques are generally restricted to isolated model projects. The leap from niche to mainstream has, in most cities, proved an elusive goal. Why policies promoting rainwater harvesting are limited in their widespread implementation has seldom been subjected to systematic analysis. Much of the literature on the policy, planning and institutional contexts of these techniques focuses either on their potential benefits or on project design, but very rarely on a critical-constructive analysis of past experiences of implementation. Moreover, the vast majority of these contributions are restricted to single-case studies. There is a dearth of knowledge with respect to, firstly, policy implementation processes and, secondly, multi-case analysis. Insights from both, the authors argue, are essential to inform more effective rainwater harvesting in cities in the future. This paper presents preliminary findings from a research project on rainwater harvesting in cities from a social science perspective that is funded by the Swedish Research Foundation (Formas). This project – UrbanRain – is examining the challenges and opportunities of mainstreaming rainwater harvesting in three European cities. The paper addresses two research questions: firstly, what lessons can be learned on suitable policy incentives and planning instruments for rainwater harvesting from a meta-analysis of the relevant international literature and, secondly, how far these lessons are reflected in a study of past and ongoing rainwater harvesting projects in a European forerunner city. This two-tier approach frames the structure of the paper. We present, first, the results of the literature analysis on policy and planning issues of urban rainwater harvesting. Here, we analyze quantitatively and qualitatively the literature of the past 15 years on this topic in terms of thematic focus, issues addressed and key findings, and draw conclusions on research gaps, highlighting the need for more studies on implementation factors, actor interests, institutional adaptation and multi-level governance. In a second step, we focus on the experiences of rainwater harvesting in Berlin and present the results of a mapping exercise covering a wide variety of projects implemented there over the last 30 years. Here, we develop a typology to characterize the rainwater harvesting projects in terms of policy issues (what problems and goals are targeted), project design (what kind of solutions are envisaged), project implementation (how and when they were implemented), location (whether they are in new or existing urban developments) and actors (which stakeholders are involved and how), paying particular attention to the shifting institutional framework in Berlin. Mapping and categorizing these projects is based on a combination of document analysis and expert interviews. The paper concludes by synthesizing the findings, identifying how far the goals, governance structures and instruments applied in the Berlin projects studied reflect the findings emerging from the meta-analysis of the international literature on policy and planning issues of rainwater harvesting, and what implications these findings have for mainstreaming such techniques in future practice.
Keywords: institutional framework, planning, policy, project implementation, urban rainwater management
419 The 10,000 Fold Effect of Retrograde Neurotransmission, a New Concept for Stroke Revival: Use of Intracarotid Sodium Nitroprusside
Authors: Vinod Kumar
Abstract:
Background: Tissue Plasminogen Activator (tPA) showed a level 1 benefit in acute stroke (within 3-6 hrs). Intracarotid sodium nitroprusside (ICSNP) has been studied in this context, with a wide treatment window, fast recovery and affordability. This work proposes two mechanisms for acute cases and one mechanism for chronic cases, which are interrelated, for physiological recovery. a) Retrograde neurotransmission (acute cases): 1) Normal excitatory impulse: at the synaptic level, glutamate activates NMDA receptors, with nitric oxide synthase (NOS) on the postsynaptic membrane, for further propagation by the calcium-calmodulin complex. Nitric oxide (NO, produced by NOS) travels backward across the chemical synapse and binds the axon-terminal NO receptor/sGC of the presynaptic neuron, regulating anterograde neurotransmission (ANT) via retrograde neurotransmission (RNT). Heme is the ligand-binding site of the NO receptor/sGC. Heme exhibits >10,000-fold higher affinity for NO than for oxygen (the 10,000-fold effect), and binding is completed in 20 msec. 2) Pathological conditions: normal synaptic activity, including both ANT and RNT, is absent. A NO donor (SNP) releases NO from NOS in the postsynaptic region. NO travels backward across the chemical synapse to bind to the heme of a NO receptor in the axon terminal of the presynaptic neuron, generating an impulse, as under normal conditions. b) Vasospasm (acute cases): perforators show vasospastic activity. NO vasodilates the perforators via the NO-cGMP pathway. c) Long-term potentiation (LTP) (chronic cases): the NO-cGMP pathway plays a role in LTP at many synapses throughout the CNS and at the neuromuscular junction. LTP has been reviewed both generally and with respect to brain regions specific for memory/learning. Aims/Study Design: The principles of "generation of impulses from the presynaptic region to the postsynaptic region by very potent RNT (the 10,000-fold effect)" and "vasodilation of arteriolar perforators" are the basis of the authors' hypothesis for treating stroke cases. Case-control prospective study. Materials and Methods: The experimental population included 82 stroke patients (10 patients were given control treatments without superfusion or with 5% dextrose superfusion, and 72 patients comprised the ICSNP group). The mean time of superfusion was 9.5 days post-stroke. Pre- and post-ICSNP status was monitored by NIHSS, MRI and TCD. Results: After 90 seconds in the ICSNP group, the mean change in the NIHSS score was a decrease of 1.44 points, or 6.55%; after 2 h, there was a decrease of 1.16 points; after 24 h, there was an increase of 0.66 points, or 2.25%, compared to the control-group increase of 0.7 points, or 3.53%; at 7 days, there was an 8.61-point decrease, or 44.58%, compared to the control-group increase of 2.55 points, or 22.37%; at 2 months in the ICSNP group, there was a 6.94-point decrease, or 62.80%, compared to the control-group decrease of 2.77 points, or 8.78%. TCD findings were documented and improvements were noted. Conclusions: ICSNP is a swift-acting drug in the treatment of stroke, acting within 90 seconds on day 9.5 post-stroke, with a small decline at 24 hours from which recovery is quick.
Keywords: brain infarcts, intracarotid sodium nitroprusside, perforators, vasodilations, retrograde transmission, the 10,000-fold effect
418 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As an effect, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The sought initial field u0 minimizes a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase-space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As an effect, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
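A minimal sketch of the AGM idea on a toy problem (1D linear diffusion rather than the full NSE; the grid size, viscosity, regularizer weight, and step sizes below are illustrative assumptions). Because the diffusion operator here is self-adjoint, the adjoint run uses the same operator as the forward run; for the NSE the adjoint equations differ:

```python
import numpy as np

# Toy 1D diffusion on a periodic grid: recover u0 from a diffused target v1
# by gradient descent on J(u0) = ||F(u0) - v1||^2 + lam * ||Lap^2 u0||^2,
# where F integrates the heat equation forward in time.
n, nu, dt, steps = 128, 1e-2, 1e-3, 200
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

def lap(u):                      # periodic second difference
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2

def forward(u):                  # explicit forward integration of u_t = nu * u_xx
    for _ in range(steps):
        u = u + dt * nu * lap(u)
    return u

def adjoint(w):                  # adjoint run: same operator, since lap is symmetric
    return forward(w)

v1 = forward(np.sin(x) + 0.5 * np.sin(3 * x))   # synthetic target field at t = 1
u0, lam, lr = np.zeros(n), 1e-8, 0.5
for it in range(500):
    r = forward(u0) - v1                         # misfit at t = 1
    # Gradient = 2 F^T r plus the high-order Laplacian regularizer term.
    grad = 2 * adjoint(r) + lam * lap(lap(lap(lap(u0))))
    u0 -= lr * grad
print("final misfit:", np.linalg.norm(forward(u0) - v1))
```

The segmenting remedy described in the abstract would wrap this loop, handing each recovered u0 to the previous-in-time segment as its target.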
417 Computational Code for Solving the Navier-Stokes Equations on Unstructured Meshes Applied to the Leading Edge of the Brazilian Hypersonic Scramjet 14-X
Authors: Jayme R. T. Silva, Paulo G. P. Toro, Angelo Passaro, Giannino P. Camillo, Antonio C. Oliveira
Abstract:
An in-house C++ code has been developed at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies (Brazil) to estimate the aerothermodynamic properties around the Hypersonic Vehicle Integrated to the Scramjet. In the future, this code will be applied to the design of the Brazilian Scramjet Technological Demonstrator 14-X B. The first step towards accomplishing this objective is to apply the in-house C++ code to the leading edge of a flat plate, simulating the leading edge of the 14-X Hypersonic Vehicle, making it possible to analyze the wave phenomena of the oblique shock and the boundary layer. The development of modern hypersonic space vehicles requires knowledge of the characteristics of hypersonic flows in the vicinity of the leading edge of lifting surfaces. The strong interaction between a shock wave and a boundary layer in a high supersonic Mach number 4 viscous flow, close to the leading edge of the plate and considering the no-slip condition, is numerically investigated; the small slip region is neglected. The study consists of solving the fluid flow equations on unstructured meshes applying the SIMPLE algorithm within a Finite Volume Method. Unstructured meshes are generated by the in-house software ‘Modeler’, developed at the Virtual’s Engineering Laboratory of the Institute of Advanced Studies, initially for Finite Element problems and, in this work, adapted to the resolution of the Navier-Stokes equations based on the SIMPLE pressure-correction scheme for all-speed flows in a Finite Volume formulation. The in-house C++ code is based on the two-dimensional Navier-Stokes equations, considering unsteady flow with no body forces, no volumetric heating, and no mass diffusion. Air is considered a calorically perfect gas, with constant Prandtl number and Sutherland's law for the viscosity. Solutions of the flat plate problem for Mach number 4 include pressure, temperature, density and velocity profiles as well as 2-D contours. Also, the boundary layer thickness, boundary conditions, and mesh configurations are presented. The same problem has been solved with the academic license of the software Ansys Fluent and with another in-house C++ code, which solves the fluid flow equations on structured meshes applying the MacCormack Finite Difference Method, and the results will be compared.
Keywords: boundary layer, scramjet, SIMPLE algorithm, shock wave
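The viscosity closure named above, Sutherland's law, is compact enough to show directly. A minimal sketch (the reference constants below are the standard values for air and are assumptions, since the abstract does not list them):

```python
def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Dynamic viscosity of air [Pa*s] from Sutherland's law.

    mu_ref is the viscosity at reference temperature T_ref [K];
    S is Sutherland's constant [K].
    """
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

print(sutherland_viscosity(300.0))  # ~1.85e-5 Pa*s at 300 K
```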
416 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing
Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares
Abstract:
In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes – 3D printing. These robots have advantages, such as speed and lightness, that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of the linear delta robot for additive manufacturing. Firstly, a methodology based on structured processes for the development of products through the phases of informational design, conceptual design and detailed design is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to elicit a set of technical requirements and to define the form, functions and features of the robot. b) In the conceptual design phase, functional modeling of the system through an IDEF0 diagram is performed, and the solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the key factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies and the possible interferences between these bodies. The objective function is based on the verification of the kinematic model for a prescribed cylindrical workspace, considering geometric constraints that could lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud; the evolution of the population provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis enabled the design of the mechanism of the delta robot as a function of the prescribed workspace. Finally, the implementation of the robotic platform, developed based on a linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms
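A minimal sketch of this synthesis loop, assuming a simplified linear delta with three vertical rails; the point-cloud density, parameter bounds, reachability test and GA operators below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
rng = np.random.default_rng(0)

# Cylindrical workspace sampled as a point cloud (radius 0.10 m, height 0.20 m).
pts = np.array([[r * np.cos(a), r * np.sin(a), z]
                for r in np.linspace(0.0, 0.10, 5)
                for a in np.linspace(0, 2 * np.pi, 12, endpoint=False)
                for z in np.linspace(0.0, 0.20, 5)])

def reachable(params, p):
    """Toy kinematic check: each of three vertical rails (at radius R) must
    admit a carriage height within travel H for arm length L to reach p."""
    L, R, H = params
    for k in range(3):
        ax, ay = R * np.cos(2 * np.pi * k / 3), R * np.sin(2 * np.pi * k / 3)
        d2 = L**2 - (p[0] - ax)**2 - (p[1] - ay)**2   # squared vertical offset
        if d2 < 0:
            return False
        h = p[2] + np.sqrt(d2)                        # required carriage height
        if not (0.0 <= h <= H):
            return False
    return True

def fitness(params):
    L, R, H = params
    miss = sum(not reachable(params, p) for p in pts)
    return L + R + H + 100.0 * miss    # minimize size, penalize unreached points

pop = rng.uniform([0.1, 0.05, 0.1], [0.6, 0.4, 0.6], size=(40, 3))
for gen in range(60):                   # simple (mu + lambda)-style GA
    kids = pop + rng.normal(0, 0.01, pop.shape)       # Gaussian mutation
    both = np.vstack([pop, kids])
    pop = both[np.argsort([fitness(q) for q in both])][:40]
print("best L, R, H:", pop[0], "fitness:", fitness(pop[0]))
```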
415 Benefits of High Power Impulse Magnetron Sputtering (HiPIMS) Method for Preparation of Transparent Indium Gallium Zinc Oxide (IGZO) Thin Films
Authors: Pavel Baroch, Jiri Rezek, Michal Prochazka, Tomas Kozak, Jiri Houska
Abstract:
Transparent semiconducting amorphous IGZO films have attracted great attention due to their excellent electrical properties and possible utilization in thin-film transistors or in photovoltaic applications, as they show 20-50 times higher mobility than amorphous silicon. It is also known that the properties of IGZO films are highly sensitive to process parameters, especially to oxygen partial pressure. In this study, we have focused on comparing the properties of transparent semiconducting amorphous indium gallium zinc oxide (IGZO) thin films prepared by conventional sputtering methods with those prepared by the high power impulse magnetron sputtering (HiPIMS) method. Furthermore, we tried to optimize the electrical and optical properties of the IGZO thin films and to investigate the possibility of applying these coatings to thermally sensitive flexible substrates. We employed DC, pulsed DC, mid-frequency sine-wave and HiPIMS power supplies for magnetron deposition. The magnetrons were equipped with sintered ceramic InGaZnO targets. As oxygen vacancies are considered to be the main source of carriers in IGZO films, it is expected that as the oxygen partial pressure increases, the number of oxygen vacancies decreases, which results in an increase in film resistivity. Therefore, in all experiments we focused on the effect of oxygen partial pressure, discharge power and pulsed power mode on the electrical, optical and mechanical properties of the IGZO thin films, and also on the thermal load deposited to the substrate. As expected, we observed a very fast transition between low- and high-resistivity films as a function of oxygen partial pressure when conventional sputtering methods/power supplies were utilized. We therefore established and utilized a HiPIMS sputtering system to enlarge the operating window and gain better control of the IGZO thin-film properties. It is shown that with this system we are able to effectively eliminate the steep transition between low- and high-resistivity films exhibited in DC mode, and the electrical resistivity can be effectively controlled over the wide range of 10⁻² to 10⁵ Ω·cm. The highest mobility of charge carriers (up to 50 cm²/V·s) was obtained at very low oxygen partial pressures. Utilization of HiPIMS also led to a significant decrease in the thermal load deposited to the substrate, which is beneficial for deposition on thermally sensitive, flexible polymer substrates. The deposition rate as a function of discharge power and oxygen partial pressure was also systematically investigated, and the results of optical, electrical and structural analyses will be discussed in detail. The most important result demonstrates almost linear control of the resistivity of IGZO thin films with increasing oxygen partial pressure when the HiPIMS mode of sputtering is utilized, and highly transparent films with low resistivity were prepared already at low pO2. It was also found that utilization of the HiPIMS technique resulted in a significant improvement of surface smoothness in the reactive mode of sputtering (with increasing oxygen partial pressure).
Keywords: charge carrier mobility, HiPIMS, IGZO, resistivity
414 Applying Image Schemas and Cognitive Metaphors to Teaching/Learning Italian Preposition a in Foreign/Second Language Context
Authors: Andrea Fiorista
Abstract:
The learning of prepositions is a problematic aspect of foreign language instruction, and Italian is certainly no exception. In their prototypical function, prepositions express schematic relations of two entities in a highly abstract, typically image-schematic way. In other words, prepositions convey concepts such as directionality, the collocation of objects in space and time and, in Cognitive Linguistics' terms, the position of a trajector with respect to a landmark. Learners with different native languages may conceptualize these relations differently, implying that they must recategorize (or create new categories) to fit the target language. However, most current Italian Foreign/Second Language handbooks and didactic grammars do not help learners carry out this task, as they tend to provide partial and idiosyncratic descriptions, leaving learners to memorize them, most of the time without success. In their prototypical meaning, prepositions specify precise topographical positions in the physical environment, which become less and less accurate as they radiate out from what might be termed a concrete prototype. Accordingly, the present study aims to elaborate a cognitive and conceptually well-grounded analysis of some extensive uses of the Italian preposition a, in order to propose effective pedagogical solutions for the teaching/learning process. Image schemas, cognitive metaphors and embodiment represent efficient cognitive tools for a task like this. While learning the merely spatial use of the preposition a (e.g. Sono a Roma = I am in Rome; vado a Roma = I am going to Rome, ...) is quite straightforward, matters are more complex when a appears in constructions such as verbs of motion + a + infinitive (e.g. Vado a studiare = I am going to study), the inchoative periphrasis (e.g. Tra poco mi metto a leggere = In a moment I will start reading), and the causative construction (e.g. Lui mi ha mandato a lavorare = He sent me to work). The study reports data from a Focus on Form teaching intervention, in which a basic cognitive schema is used to help teachers explain, and students understand, the extensive uses of a. The educational material employed translates Cognitive Linguistics' theoretical assumptions, such as image schemas and cognitive metaphors, into simple images or proto-scenes easily comprehensible to learners. Illustrative material, indeed, is meant to make metalinguistic content more accessible. Moreover, the concept of embodiment is pedagogically applied through activities involving motion and learners' bodily involvement. It is expected that replacing rote learning with a methodology that gives grammatical elements a proper meaning makes the learning process more effective in both the short and the long term.
Keywords: cognitive approaches to language teaching, image schemas, embodiment, Italian as FL/SL
413 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization
Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller
Abstract:
The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures include reducing energy demand through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for the realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires restricting the solution space to discrete choices of modernization measures, such as the sizing of heating systems. In the first stage, the operation of different energy systems is calculated in simulation models in terms of the resulting final energy demands; these results then serve as input for the second-stage MILP optimization, in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures, owing to the efficiency of MILP solvers, but necessitates simplifying the operation of the building energy system. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio so as to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions for building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization
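A minimal sketch of the single-stage MILP idea in PuLP: choose measures per building and year to minimize investment, subject to yearly budgets and an end-of-horizon emissions target. All numbers, measure names, and the additive-savings emissions model are illustrative placeholders, not the authors' formulation:

```python
# pip install pulp
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

buildings = ["B1", "B2", "B3"]
measures = ["insulation", "heat_pump", "pv"]
years = [2025, 2026, 2027]
cost = {"insulation": 80e3, "heat_pump": 40e3, "pv": 25e3}      # EUR
saving = {"insulation": 8.0, "heat_pump": 12.0, "pv": 5.0}      # tCO2/a
budget = 120e3                                                  # EUR per year
base_emissions = {"B1": 30.0, "B2": 25.0, "B3": 20.0}           # tCO2/a
target = 40.0                                                   # tCO2/a at horizon end

prob = LpProblem("modernization_pathway", LpMinimize)
x = LpVariable.dicts("x", (buildings, measures, years), cat=LpBinary)

# Objective: minimize total investment cost over the planning horizon.
prob += lpSum(cost[m] * x[b][m][y] for b in buildings for m in measures for y in years)
# Each measure is applied at most once per building.
for b in buildings:
    for m in measures:
        prob += lpSum(x[b][m][y] for y in years) <= 1
# Yearly budget limit.
for y in years:
    prob += lpSum(cost[m] * x[b][m][y] for b in buildings for m in measures) <= budget
# Emissions target at the end of the horizon (savings assumed additive).
prob += (sum(base_emissions.values())
         - lpSum(saving[m] * x[b][m][y]
                 for b in buildings for m in measures for y in years)) <= target

prob.solve()
chosen = [(b, m, y) for b in buildings for m in measures for y in years
          if value(x[b][m][y]) > 0.5]
print(chosen)
```

The simplification the abstract warns about sits in the `saving` coefficients: constant, additive savings stand in for the simulated, operation-dependent energy demands of the two-stage approach.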
412 Listening Children Through Storytelling
Authors: Catarina Cruz, Ana Breda
Abstract:
In the early years, until they enter elementary school, children are stimulated by their educators, through rich and attractive contexts, to explore and develop skills in different domains, from the socio-emotional to the cognitive. Many of these contexts evoke real or imaginary situations, familiar or not, through resources or pedagogical practices that incite children's curiosity, questioning, expression of ideas or emotions, social interaction, among others. Later, when children enter elementary school, their activity at school becomes more focused on developing skills in the cognitive domain, namely acquiring learning from different subject areas, such as Mathematics, Natural Sciences, and History. That is, to ensure that children develop the standardized learning recommended in the guiding curriculum documents, they spend part of their time applying formulas, memorizing information, following instructions, and so on, and in this way little time is left to listen to children, to learn about their interests and likes, as well as their perspectives and questions about the surrounding world. In elementary school, especially in the 1st Cycle, children are naturally curious; however, this skill is sometimes subtly conditioned by adults. Curious children learn more, since they have an intrinsic desire to know more, especially about what is unknown. When children think about subjects or themes that interest them or arouse their curiosity, they attribute more meaning to this learning and retain it longer. Therefore, it is important to approach subjects in the classroom that capture children's attention, trigger their curiosity, and allow their ideas to be heard. There are several resources, strategies and pedagogical practices to awaken children's curiosity, to explore their knowledge, to understand their perspectives and their way of thinking, to get to know a little more about their personality, and to provide space for dialogue. Storytelling, with the exploration and interpretation of its narrative, is one such pedagogical practice. Children's literature, about real or imaginary subjects, stimulates children's insights, supported by their experiences, emotions, learning and personality, and creates opportunities for children to express their feelings and thoughts freely. This work focuses on a session developed with children in the 3rd year of schooling, from a Portuguese 1st Cycle Basic School, in which the story "From the Outside In and From the Inside Out" was presented. The story's presentation was mainly centred on the children's activity; they read excerpts and interpreted/explored them through a dialogue led by one of the authors. The study presented here intends to show an example of how the exploration of a children's story can trigger ideas, thoughts, emotions or attitudes in children in the 3rd year of elementary school. To answer the research question, this work aimed to: identify ideas, thoughts, emotions or attitudes that emerged from the exploration of the story; and analyse aspects of the story and of the orchestration/conduction of the dialogue with/between children that facilitated or inhibited the emergence of ideas, thoughts, emotions or attitudes in children.
Keywords: storytelling, children's perspectives, soft skills, non-formal learning contexts, orchestration
411 An Ecological Approach to Understanding Student Absenteeism in a Suburban, Kansas School
Authors: Andrew Kipp
Abstract:
Student absenteeism is harmful to both the school and the absentee student. One approach to improving student absenteeism is targeting contextual factors within the students' learning environment. However, contemporary literature has not taken an ecological agency approach to understanding student absenteeism. Ecological agency is a theoretical framework that magnifies the interplay between the environment and the actions of people within it. To elaborate, a person's history and aspirations, together with the environmental conditions, provide potential outlets for, or restrictions on, their intended action. The framework provides the unique perspective of understanding absentee students' decision-making through the affordances and constraints found in their learning environment. To that end, the study was guided by the question, "Why do absentee students decide to engage in absenteeism in a suburban Kansas school?" A case study methodology was used to answer the research question. Four absentee students at a suburban Kansas high school in the 2020-2021 school year were selected for the study. The fall 2020 semester was in a remote learning setting, and the spring 2021 semester was in an in-person learning setting. The study captured their decision-making with respect to school attendance through semi-structured interviews, prolonged observations, drawings, and concept maps. The data were analyzed through thematic analysis. The findings revealed that peer socialization opportunities, methods of instruction, shifts in cultural beliefs due to COVID-19, manifestations of anxiety and the lack of spaces to escape that anxiety, social media bullying, and the inability to receive academic tutoring motivated the participants' daily decisions to attend or miss school. The findings provide a basis for improving several institutional and classroom practices. These practices include more student-led and less teacher-led instruction in both in-person and remote learning environments, promoting socialization through classroom collaboration and clubs based on emerging student interests, reducing instances of bullying through prosocial education, safe spaces where students can escape the classroom to manage their anxiety, and more opportunities for one-on-one tutoring to improve grades. The study provides an example of using the ecological agency approach to better understand the personal and environmental factors that lead to absenteeism. The study also informs educational policies and classroom practices to better promote student attendance. Further research should investigate other school contexts using the ecological agency theoretical framework to better understand the influence of the school environment on student absenteeism.
Keywords: student absenteeism, ecological agency, classroom practices, educational policy, student decision-making
410 PARP1 Links Transcription of a Subset of RBL2-Dependent Genes with Cell Cycle Progression
Authors: Ewelina Wisnik, Zsolt Regdon, Kinga Chmielewska, Laszlo Virag, Agnieszka Robaszkiewicz
Abstract:
Apart from protecting the genome, PARP1 has been documented to regulate many intracellular processes, inter alia gene transcription, by physically interacting with chromatin-bound proteins and by their ADP-ribosylation. Our recent findings indicate that expression of PARP1 decreases during the differentiation of human CD34+ hematopoietic stem cells to monocytes as a consequence of differentiation-associated cell growth arrest and the formation of an E2F4-RBL2-HDAC1-SWI/SNF repressive complex at the promoter of this gene. Since RBL2 complexes repress genes in an E2F-dependent manner and are widespread in the genome of G0-arrested cells, we asked (a) whether RBL2 directly contributes to defining monocyte phenotype and function by targeting gene promoters, and (b) whether RBL2 controls gene transcription indirectly by repressing PARP1. For the identification of genes controlled by RBL2 and/or PARP1, we used primer libraries for surface receptors and TLR signaling mediators; genes were silenced by siRNA or shRNA; occupation of gene promoters by selected proteins was analyzed by ChIP-qPCR; statistical analysis was carried out in GraphPad Prism 5 and STATISTICA; and ChIP-Seq data were analysed in Galaxy 2.5.0.0. On the list of 28 genes regulated by RBL2, we identified only four solely repressed by the RBL2-E2F4-HDAC1-BRM complex. Surprisingly, 24 of the 28 genes controlled by RBL2 were co-regulated by PARP1 in six different manners. In one mode of RBL2/PARP1 co-operation, represented by MAP2K6 and MAPK3, PARP1 was found to associate with gene promoters upon RBL2 silencing, which was previously shown to restore PARP1 expression in monocytes. The effect of PARP1 on gene transcription was observed only in the presence of active EP300, which acetylated gene promoters and activated transcription. Further analysis revealed that PARP1 binding to the MAP2K6 and MAPK3 promoters enabled recruitment of EP300 in monocytes, while in proliferating cancer cell lines, which actively transcribe PARP1, this protein maintained EP300 at the promoters of MAP2K6 and MAPK3. Genome-wide analysis revealed a similar distribution of PARP1 and EP300 around transcription start sites and the co-occupancy of some gene promoters by PARP1 and EP300 in cancer cells. Here, we describe a new RBL2/PARP1/EP300 axis which controls gene transcription regardless of cell type. In this model, cell-cycle-dependent transcription of PARP1 regulates the expression of some genes repressed by RBL2 upon cell cycle limitation. Thus, RBL2 may indirectly regulate the transcription of some genes by controlling the expression of EP300-recruiting PARP1. Acknowledgement: This work was financed by Polish National Science Centre grants no. DEC-2013/11/D/NZ2/00033 and DEC-2015/19/N/NZ2/01735. L.V. is funded by the National Research, Development and Innovation Office grants GINOP-2.3.2-15-2016-00020 TUMORDNS, GINOP-2.3.2-15-2016-00048-STAYALIVE and OTKA K112336. A.R. is supported by Polish Ministry of Science and Higher Education grant 776/STYP/11/2016.
Keywords: retinoblastoma transcriptional co-repressor like 2 (RBL2), poly(ADP-ribose) polymerase 1 (PARP1), E1A binding protein p300 (EP300), monocytes
409 How Does Paradoxical Leadership Enhance Organizational Success?
Authors: Wageeh A. Nafei
Abstract:
This paper explores the role of Paradoxical Leadership (PL) in enhancing Organizational Success (OS) at private hospitals in Egypt, based on data collected from employees in private hospitals (doctors, nursing staff, and administrative staff). The researcher adopted a sampling method to collect data for the study. Appropriate statistical methods, such as the Alpha Correlation Coefficient (ACC), Confirmatory Factor Analysis (CFA), and Multiple Regression Analysis (MRA), are used to analyze the data and test the hypotheses. The research reached a number of results, the most important of which are: (1) There is a statistical relationship between the independent variable, represented by PL, and the dependent variable, represented by Organizational Success (OS). The paradoxical leader encourages employees to express their opinions and builds a work environment characterized by flexibility and independence. The paradoxical leader also supports specialized work teams, which leads to the creation of new ideas on the one hand and contributes to the achievement of outstanding performance on the other. (2) The mentality of the paradoxical leader is flexible and capable of absorbing suggestions from all employees. The paradoxical leader is also interested in enhancing cooperation among employees and provides opportunities to transfer experience and increase knowledge-sharing. The sharing of knowledge creates the diversity that helps the organization obtain rich external information and enables it to deal with a rapidly changing environment. (3) The PL approach helps in facing the paradoxical demands of employees. A paradoxical leader plays an important role in reducing the feeling of instability in the work environment and the lack of job security, reducing employees' negative feelings, restoring balance in the work environment, improving the well-being of employees, and increasing the degree of employees' job satisfaction in the organization. The study offers a number of recommendations, the most important of which are: (1) Organizational leaders must listen to the views and needs of employees and move away from formal methods of control. The leader should give employees sufficient freedom to participate in decision-making and maintain enough space among them, and interactions between leaders and employees should be based on friendliness. (2) Organizational leaders need to pay attention to knowledge-sharing among employees through training courses. The leader should make sure that every piece of information provided by an employee is valuable and useful and can be used to solve problems that colleagues may face at work. (3) Organizational leaders need to pay attention to knowledge-sharing among employees through brainstorming sessions. The leader should ensure that employees obtain knowledge from their colleagues and share ideas and information among themselves, in addition to motivating employees to complete their work in new, creative ways, so that employees do not feel bored repeating the same routine procedures in the organization.
Keywords: paradoxical leadership, organizational success, human resources, management
408 Distributional and Developmental Analysis of PM2.5 in Beijing, China
Authors: Alexander K. Guo
Abstract:
PM2.5 poses a major threat to people's health and the environment and is an issue of great concern in Beijing, brought to the attention of the government by the media. In addition, both the United States Embassy in Beijing and the government of China have increased monitoring of PM2.5 in recent years and have made real-time data available to the public. This report utilizes hourly historical data (2008-2016) from the U.S. Embassy in Beijing for the first time. The first objective was to fit probability distributions to the data to better predict the number of days exceeding the standard, and the second was to uncover any yearly, seasonal, monthly, daily, and hourly patterns and trends, to better inform air control policy. In these data, 66,650 hours and 2687 days provided valid observations. Lognormal, gamma, and Weibull distributions were fit to the data through parameter estimation, and the chi-squared test was employed to compare the actual data with the fitted distributions. The data were also used to uncover trends, patterns, and improvements in PM2.5 concentration over the period with valid data, in addition to specific periods of time that received large amounts of media attention, which were analyzed to gain a better understanding of the causes of air pollution. The data show a clear indication that Beijing's air quality is unhealthy, with an average of 94.07 µg/m³ across all 66,650 hours with valid data. It was found that no distribution fit the entire dataset of all 2687 days well, but each of the three distribution types above was optimal in at least one of the yearly data sets, with the lognormal distribution fitting recent years better. An improvement in air quality beginning in 2014 was discovered, with the first five months of 2016 reporting an average PM2.5 concentration 23.8% lower than the average of the same period across all years, perhaps the result of various new pollution-control policies. It was also found that the winter and fall months contained more days in both the good and the extremely polluted categories, leading to a higher average but a comparable median in these months. Additionally, the evening hours, especially in winter, reported much higher PM2.5 concentrations than the afternoon hours, possibly due to the prohibition of trucks in the city in the daytime and the increased use of coal for heating in the colder months, when residents are home in the evening. Lastly, through analysis of special intervals that attracted media attention for either unnaturally good or bad air quality, the government's temporary pollution-control measures, such as more intensive road-space rationing and factory closures, are shown to be effective. In summary, air quality in Beijing is improving steadily and does follow standard probability distributions to an extent, but still needs improvement. The analysis will be updated when new data become available.
Keywords: Beijing, distribution, patterns, PM2.5, trends
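A minimal sketch of the distribution-fitting and chi-squared comparison described above (the synthetic series, bin count, and sparse-bin handling are illustrative assumptions; substitute the real hourly µg/m³ values):

```python
import numpy as np
from scipy import stats

# Placeholder for the hourly PM2.5 series; replace with the real data.
rng = np.random.default_rng(1)
pm25 = rng.lognormal(mean=4.2, sigma=0.8, size=5000)

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}
edges = np.histogram_bin_edges(pm25, bins=30)
observed, _ = np.histogram(pm25, bins=edges)

for name, dist in candidates.items():
    params = dist.fit(pm25, floc=0)          # fix location at 0 for concentrations
    cdf = dist.cdf(edges, *params)
    expected = len(pm25) * np.diff(cdf)
    # Chi-squared statistic comparing observed and expected bin counts.
    mask = expected > 5                       # crudely drop sparse bins
    chi2 = np.sum((observed[mask] - expected[mask]) ** 2 / expected[mask])
    dof = mask.sum() - 1 - (len(params) - 1)  # bins - 1 - fitted parameters
    p = stats.chi2.sf(chi2, dof)
    print(f"{name:9s} chi2={chi2:8.1f} dof={dof} p={p:.3g}")
```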
407 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels
Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand
Abstract:
The acronym PEM refers to Proton Exchange Membrane or, alternatively, Polymer Electrolyte Membrane. Due to their high efficiency, low operating temperature (30–80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automotive and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One such challenge is elucidating the mechanisms underlying water transport in, and removal from, PEMFCs. On the one hand, sufficient water is needed in the polymer electrolyte membrane (PEM) to maintain sufficiently high proton conductivity. On the other hand, too much liquid water in the cathode can cause "flooding" (that is, pore space filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The experimental transparent fuel cell used in this work was designed to represent the actual full-scale fuel cell geometry. Depending on the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, blocking liquid-water bridges/plugs (concave and convex forms), slug/plug flow and film flow. Some of these flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g. slug, droplet, plug, film) of detected liquid water in the test microchannels and to yield information pertaining to the distribution of water among the different flow structures. A video-processing algorithm was developed to automatically detect dynamic and static liquid water present in the gas channels and generate relevant quantitative information. The benefit of this software is that it allows the user to obtain measurements from images of small objects in a more precise and systematic way. The void fractions are also determined based on image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used for the optimization of water management and to inform design guidelines for gas-delivery microchannels; this is essential in the design and control of diverse applications. The approach combines numerical modeling with experimental visualization and measurements.
Keywords: polymer electrolyte fuel cell, air-water two phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing
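A minimal sketch of the image-processing idea, written in Python rather than the authors' MATLAB; the threshold and the area/aspect-ratio rules for classifying structures are illustrative assumptions, not the published algorithm:

```python
import numpy as np
from scipy import ndimage

def classify_water(frame, channel_mask, thresh=0.5):
    """Detect liquid water in one grayscale frame of a transparent channel
    and classify each connected structure by size and shape."""
    liquid = (frame < thresh) & channel_mask        # water assumed darker here
    labels, n = ndimage.label(liquid)
    void_fraction = 1.0 - liquid.sum() / channel_mask.sum()  # gas fraction
    structures = []
    for i, obj in enumerate(ndimage.find_objects(labels), start=1):
        h = obj[0].stop - obj[0].start
        w = obj[1].stop - obj[1].start
        area = (labels[obj] == i).sum()
        if area < 20:
            kind = "droplet"
        elif max(h, w) / max(1, min(h, w)) > 4:
            kind = "film"                            # long, thin structure
        elif w >= channel_mask.shape[1] * 0.9:
            kind = "plug"                            # spans the channel width
        else:
            kind = "slug"
        structures.append((kind, area))
    return void_fraction, structures

# Synthetic demo frame: a dark blob inside an all-channel mask.
frame = np.ones((60, 200)); frame[20:40, 50:90] = 0.2
vf, s = classify_water(frame, np.ones_like(frame, dtype=bool))
print(vf, s)
```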
Procedia PDF Downloads 312
406 Road Systems as Environmental Barriers: An Overview of Roadways in Their Function as Fences for Wildlife Movement
Authors: Rachael Bentley, Callahan Gergen, Brodie Thiede
Abstract:
Roadways have a significant impact on the environment insofar as they function as barriers to wildlife movement, both through road mortality and through resultant road avoidance. Roads have an immense presence worldwide, and it is predicted to increase substantially in the next thirty years. As roadways become even more common, it is important to consider their environmental impact and to mitigate the negative effects which they have on wildlife and wildlife mobility. In a thorough analysis of several related studies, a common conclusion was that roads cause habitat fragmentation, which can lead split populations to evolve differently, for better or for worse. Though some populations adapted positively to roadways, becoming more resistant to road mortality and more tolerant of noise and chemical contamination, many others experienced maladaptation, either due to chemical contamination in and around their environment, or because of genetic mutations from inbreeding when their population was fragmented too substantially to support a large enough group for healthy genetic exchange. Large mammals were especially susceptible to maladaptation from inbreeding, as they require larger areas to roam and therefore need even more space to sustain a healthy population. Regardless of whether a species evolved positively or negatively as a result of its proximity to a road, animals tended to avoid roads, making the loss of genetic diversity from habitat fragmentation an exceedingly prevalent issue in the larger discussion of road ecology. Additionally, the consideration of solutions, such as overpasses and underpasses, is crucial to ensuring the long-term survival of many wildlife populations. In studies addressing the effectiveness of overpasses and underpasses, animals seemed to adjust well to these sorts of solutions, but strategic placement, as well as proper sizing, proper height, shelter from road noise, and other considerations, was important in construction. When an underpass or overpass was well built and well shielded from human activity, animals’ usage of the structure increased significantly throughout its first five years, thus reconnecting previously divided populations. Still, these structures are costly, and they are often unable to fully address certain issues such as light, noise, and contaminants from vehicles. Therefore, the need for further discussion of new, creative solutions remains paramount. Roads are one of the most consistent and prominent features of today’s landscape, but their environmental impacts are largely overlooked. While roads are useful for connecting people, they divide landscapes and animal habitats. Therefore, further research and investment in possible solutions is necessary to mitigate the negative effects which roads have on wildlife mobility and to prevent issues from resultant habitat fragmentation. Keywords: fences, habitat fragmentation, roadways, wildlife mobility
Procedia PDF Downloads 179
405 Tectonics of Out-of-Sequence Thrusting in NW Himachal Himalaya, India
Authors: Rajkumar Ghosh
Abstract:
Jhakri Thrust (JT), Sarahan Thrust (ST), and Chaura Thrust (CT) are the three OOSTs along the Jhakri-Chaura segment of the Sutlej river valley in Himachal Pradesh. The CT is deciphered only by apatite fission-track dating; such geochronological information is not currently available for the Jhakri and Sarahan thrusts, and the JT was additionally validated as an OOST without any dating. The described rock types include ductilely sheared gneisses and upper greenschist- to amphibolite-facies metamorphosed schists. Locally, the Munsiari (Jutogh) Thrust is referred to as the JT. The brittle JT bounds the southern margin of the research area, and the ductile CT its northern margin. The JT dips about 50° west, verges southwestward, and reaches 15–17 km depth. Previous researchers observed a progressive rise in strain towards the JT zone based on microstructural studies. The high-temperature ranges of the MCT root zone are cited in the current work as supporting evidence for the ductile nature of the OOST. In Himachal Pradesh, the lithological boundaries for the OOSTs are not fixed. In contrast, the Sarahan Thrust strikes NW-SE and is 50-80 m wide. The ST and CT are probably equivalent and are marked by a sheared biotite-chlorite matrix with a top-to-SE kinematic indicator. It is inferred from cross-section balancing that the CT is folded with an anticlinorium. These thrust systems consist of several branches, some of which are still active. The thrust system exhibits complex internal geometry consisting of box folds, boudins, scar folds, crenulation cleavages, kink folds, and tension gashes. Box folds are observed on the hanging wall of the Chaura Thrust, and the ductile signature of the CT indicates that the thrust steepens downward. After the STDSU stopped deforming, out-of-sequence thrusting was initiated in some sections of the Higher Himalaya. Part of the GHC and part of the LH are thrust southwestward along the Jutogh Thrust/Munsiari Thrust/JT, as the Jutogh Nappe. The CT is concealed beneath the Jutogh Thrust sheet; hence, the basal part of the GHC is not exposed at the surface in the Sutlej River section. Fieldwork and microstructural studies of the Greater Himalayan Crystalline (GHC) along the Sutlej section reveal (a) an initial top-to-SW sense of ductile shearing (CT); (b) brittle-ductile extension (ST); and (c) a uniform top-to-SW sense of brittle shearing (JT). A group of samples of schistose rock from the Jutogh Group of the Greater Himalayan Crystalline and quartzite from the Rampur Group of the Lesser Himalayan Crystalline were analyzed. No physiographic transition is present in the area from which to determine a break in the landscape due to the OOSTs. To date, OOSTs in the GHC have been interpreted mainly from geochronological studies, but proper field evidence is missing. Apart from minimal documentation of OOSTs in geological mapping, there is a lack of suitable rock exposure from which to generalize the field characteristics of OOSTs in the NW Higher Himalaya. Multiple sets of thrust planes may be activated within this zone, or along a zone in which OOSTs are engaged. Keywords: out-of-sequence thrust, main central thrust, grain boundary migration, South Tibetan detachment system, Jhakri Thrust, Sarahan Thrust, Chaura Thrust, higher Himalaya, greater Himalayan crystalline
Procedia PDF Downloads 71
404 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada
Authors: Stefan W. Kienzle
Abstract:
The increasing number of occurrences of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses on the public in coping with a changing climate. A climate index breaks down daily climate time series into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends of phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes, trends, the quantification of changes, and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain thresholds (0, ±10, ±20, +25, +30 °C), frost days and the timing of frost days, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, snow, and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends, and the slope of the trends was determined using the nonparametric Sen’s slope test. A Google mapping interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6,833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4-5 °C in the south and 6-7 °C in the north, summers show the weakest warming during the same period, ranging from about 0.5-1.5 °C. New agricultural opportunities exist in central regions, where the numbers of heat units and growing degree days are increasing and the number of frost days is decreasing. While the number of days below -20 °C has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and with cold spells have increased two- to four-fold during the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of a region. Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes
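The trend machinery named above is compact enough to sketch. The following Python fragment (not the authors' code) implements the Mann-Kendall test without tie correction and Sen's slope, applied to a synthetic annual frost-days series standing in for a real index.

```python
import numpy as np
from scipy import stats

def mann_kendall(y):
    """Nonparametric Mann-Kendall trend test (no tie correction, which is
    adequate as a sketch for continuous annual index series)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return z, 2 * stats.norm.sf(abs(z))   # z statistic, two-sided p-value

def sens_slope(y):
    """Sen's estimator: the median of all pairwise slopes."""
    y = np.asarray(y, dtype=float)
    slopes = [(y[j] - y[i]) / (j - i)
              for i in range(len(y) - 1) for j in range(i + 1, len(y))]
    return np.median(slopes)

# Synthetic annual frost-days index, 1951-2017, with a declining trend.
rng = np.random.default_rng(0)
years = np.arange(1951, 2018)
frost_days = 200 - 0.4 * (years - 1951) + rng.normal(0, 8, len(years))
print(mann_kendall(frost_days), sens_slope(frost_days))
```

scipy.stats.theilslopes provides an equivalent, library-implemented Sen's estimator with confidence bounds, and would be the natural choice outside a sketch.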
Procedia PDF Downloads 92
403 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications
Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino
Abstract:
The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to progressively more in-depth studies of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability to each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impetus to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given the wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires logical combination rules that define the building’s damage state given the damage state of each component, as well as a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure’s behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II. ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on built-in or user-defined wind hazard data. The software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely the roof covering, roof structure, envelope walls, and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is shown to lie in the fact that a database of building-component fragility curves can be put to use for the development of new wind vulnerability models, covering building typologies not yet adequately addressed by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications. Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses
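The core calculation the abstract describes, convolving a vulnerability model with a wind hazard curve, can be sketched in a few lines. The Python fragment below is illustrative only: the exponential hazard curve and the lognormal-CDF-shaped vulnerability function are invented stand-ins, not ERMESS data.

```python
import numpy as np
from scipy.stats import lognorm

def expected_annual_loss(im, exceed_rate, vulnerability):
    """Numerically integrate the loss ratio against the hazard curve:
    EAL = ∫ v(im) · |dλ/d(im)| d(im), where λ(im) is the annual rate of
    exceeding gust speed im. Midpoint rule on the hazard increments."""
    d_rate = -np.diff(exceed_rate)            # occurrence rate in each IM bin
    im_mid = 0.5 * (im[:-1] + im[1:])
    return float(np.sum(vulnerability(im_mid) * d_rate))

# Assumed hazard: annual exceedance rate of peak gust speed (m/s).
im = np.linspace(20.0, 80.0, 61)
exceed_rate = 0.2 * np.exp(-(im - 20.0) / 12.0)

# Assumed building vulnerability: loss ratio rising with gust speed.
vuln = lambda v: lognorm.cdf(v, s=0.3, scale=55.0)

eal = expected_annual_loss(im, exceed_rate, vuln)
print(f"Expected annual loss ratio: {eal:.4f}")
```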
Procedia PDF Downloads 181
402 Chemopreventive Efficacy of Andrographolide in Rat Colon Carcinogenesis Model Using Aberrant Crypt Foci (ACF) as Endpoint Marker
Authors: Maryam Hajrezaie, Mahmood Ameen Abdulla, Nazia Abdul Majid, Hapipa Mohd Ali, Pouya Hassandarvish, Maryam Zahedi Fard
Abstract:
Background: Colon cancer is one of the most prevalent cancers in the world and is the third leading cause of death among cancers in both males and females. The incidence of colon cancer is ranked fourth among all cancers but varies in different parts of the world. Cancer chemoprevention is defined as the use of natural or synthetic compounds capable of inducing the biological mechanisms necessary to preserve genomic fidelity. Andrographolide is the major labdane diterpenoid constituent of the plant Andrographis paniculata (family Acanthaceae), used extensively in traditional medicine. Extracts of the plant and their constituents are reported to exhibit a wide spectrum of biological activities of therapeutic importance. Laboratory animal model studies have provided evidence that andrographolide plays a role in inhibiting the risk of certain cancers. Objective: Our aim was to evaluate the chemopreventive efficacy of andrographolide in the AOM-induced rat model. Methods: To evaluate the inhibitory properties of andrographolide on colonic aberrant crypt foci (ACF), five groups of 7-week-old male rats were used. Group 1 (control group) was fed 10% Tween 20 once a day, Group 2 (cancer control) rats were intraperitoneally injected with 15 mg/kg azoxymethane, Group 3 (drug control) rats were injected with 15 mg/kg azoxymethane and 5-fluorouracil, and Groups 4 and 5 (experimental groups) were fed 10 and 20 mg/kg andrographolide once a day, respectively. After 1 week, the treatment-group rats received subcutaneous injections of azoxymethane, 15 mg/kg body weight, once weekly for 2 weeks. Control rats continued to receive Tween 20 once a day, and the experimental groups received 10 and 20 mg/kg andrographolide once a day for 8 weeks. All rats were sacrificed 8 weeks after the azoxymethane treatment. Colons were evaluated grossly and histopathologically for ACF. Results: Administration of 10 mg/kg and 20 mg/kg andrographolide was found to be effectively chemoprotective, as evidenced microscopically and biochemically. Andrographolide suppressed total colonic ACF formation by up to 40% and 60%, respectively, when compared with the control group. Pre-treatment with andrographolide significantly reduced the impact of AOM toxicity on plasma protein and urea levels as well as on plasma aspartate aminotransferase (AST), alanine aminotransferase (ALT), lactate dehydrogenase (LDH), and gamma-glutamyl transpeptidase (GGT) activities. Grossly, colorectal specimens revealed that andrographolide treatments decreased the mean number of crypts in AOM-treated rats. Importantly, rats fed andrographolide showed 75% inhibition of foci containing four or more aberrant crypts. The results also showed a significant increase in glutathione (GSH), superoxide dismutase (SOD), nitric oxide (NO), and prostaglandin E2 (PGE2) activities and a decrease in malondialdehyde (MDA) level. Histologically, all treatment groups showed a significant decrease in dysplasia compared to the control group. Immunohistochemical staining showed up-regulation of Hsp70 and down-regulation of Bax proteins. Conclusion: The current study demonstrated that andrographolide reduces the number of ACF. According to these data, andrographolide might have promising chemoprotective activity in a model of AOM-induced ACF. Keywords: chemopreventive, andrographolide, colon cancer, aberrant crypt foci (ACF)
Procedia PDF Downloads 429
401 Application of a Submerged Anaerobic Osmotic Membrane Bioreactor Hybrid System for High-Strength Wastewater Treatment and Phosphorus Recovery
Authors: Ming-Yeh Lu, Shiao-Shing Chen, Saikat Sinha Ray, Hung-Te Hsu
Abstract:
Recently, anaerobic membrane bioreactors (AnMBRs), which combine anaerobic biological treatment and membrane filtration, have been widely utilized and present an attractive option for wastewater treatment and water reuse. Conventional AnMBRs have several advantages, such as improved effluent quality, compact footprint, lower sludge yield, operation without aeration, and production of energy. However, the removal of nitrogen and phosphorus in the AnMBR permeate is negligible, which is their biggest disadvantage. In recent years, forward osmosis (FO) has emerged as a technology that utilizes osmotic pressure as the driving force to extract clean water without additional external pressure. The small pore size of the FO membrane can effectively improve the removal of nitrogen and phosphorus. An anaerobic bioreactor with an FO membrane (AnOMBR) can retain the concentrated organic matter and nutrients. Moreover, phosphorus is a non-renewable resource, and due to the high rejection of the FO membrane, a large amount of phosphorus can be recovered from the combination of AnMBR and FO. In this study, a novel submerged anaerobic osmotic membrane bioreactor integrated with periodic microfiltration (MF) extraction was developed and evaluated for simultaneous phosphorus and clean-water recovery from wastewater. A laboratory-scale AnOMBR utilizing cellulose triacetate (CTA) membranes with an effective membrane area of 130 cm² was fully submerged in a 5.5 L bioreactor at 30-35 °C. The active-layer-facing-feed-stream orientation was utilized to minimize fouling and scaling. Additionally, a peristaltic pump was used to circulate the draw solution (DS) at a cross-flow velocity of 0.7 cm/s. Magnesium sulphate (MgSO₄) solution was used as the DS. The microfiltration membrane periodically extracted about 1 L of solution whenever the TDS reached 5 g/L, to recover phosphorus and simultaneously control salt accumulation in the bioreactor. As the experiment progressed, an average water flux of around 1.6 LMH was achieved. The AnOMBR process showed greater than 95% removal of soluble chemical oxygen demand (sCOD) and nearly 100% removal of total phosphorus, but only partial removal of ammonia; an average methane production of 0.22 L/g sCOD was obtained. The AnOMBR system therefore periodically uses MF extraction for phosphorus recovery with simultaneous pH adjustment. The overall performance demonstrates that the novel submerged AnOMBR system has potential for simultaneous wastewater treatment and resource recovery, and hence the new concept of this system could replace conventional AnMBRs in the future. Keywords: anaerobic treatment, forward osmosis, phosphorus recovery, membrane bioreactor
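The extraction trigger described above (draw about 1 L through the MF membrane whenever reactor TDS reaches 5 g/L) is easy to picture with a toy mass balance. The Python sketch below uses the abstract's 5.5 L reactor volume and 130 cm² membrane area, but the reverse-salt-flux and feed-TDS values are assumed purely for illustration.

```python
import numpy as np

def simulate_salt_accumulation(days=60, dt_h=1.0):
    """Toy salinity balance for the bioreactor: reverse salt flux through
    the FO membrane accumulates TDS; whenever TDS reaches 5 g/L, 1 L of
    mixed liquor is withdrawn through the MF membrane and replaced with
    feed, as in the abstract. Rates marked 'assumed' are not measured."""
    V = 5.5                # reactor volume, L (from the abstract)
    area = 130e-4          # FO membrane area, m^2 (130 cm^2)
    rsf = 3.0              # reverse salt flux, g/(m^2 h) -- assumed
    feed_tds = 0.5         # feed TDS, g/L -- assumed
    tds, log, extractions = feed_tds, [], 0
    for _ in range(int(days * 24 / dt_h)):
        tds += rsf * area * dt_h / V           # salt gained from draw side
        if tds >= 5.0:                         # periodic MF extraction
            tds = (tds * (V - 1.0) + feed_tds * 1.0) / V
            extractions += 1
        log.append(tds)
    return np.array(log), extractions

tds_series, n_extractions = simulate_salt_accumulation()
print(f"{n_extractions} MF extractions; final TDS = {tds_series[-1]:.2f} g/L")
```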
Procedia PDF Downloads 270
400 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System
Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim
Abstract:
The general transport equation has a wide range of application in fluid mechanics and heat transfer problems. When the variable φ, which represents a flow property, is taken as a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, numerical solutions are more frequently preferred over analytic ones, and the finite difference method is a commonly used numerical solution method. In these equations, using velocity and pressure gradients instead of stress tensors decreases the number of unknowns, and by adding the continuity equation to the system, the number of equations matches the number of unknowns. In this situation, velocity and pressure components emerge as two important parameters. In the solution of the differential equation system, velocities and pressures must be solved together. However, in the considered grid system, when pressure and velocity values are jointly solved at the same nodal points, some problems confront us. To overcome this problem, using a staggered grid system is the preferred solution method. For computerized solutions of the staggered grid system, various algorithms have been developed; of these, the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were numerically solved for Newtonian flow, with mass and gravitational forces neglected, for an incompressible, laminar fluid in a hydrodynamically fully developed region, in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure, and Reynolds numbers were used. The differential equations were discretized using the central difference and hybrid schemes, and the discretized equation system was solved by the Gauss-Seidel iteration method. SIMPLE and SIMPLER were used as solution algorithms. The obtained results were compared for the central difference and hybrid discretization methods; the SIMPLE and SIMPLER solution algorithms were also compared to each other. As a result, it was observed that the hybrid discretization method gave better results over a larger area. Furthermore, as a computer solution algorithm, despite some disadvantages, the SIMPLER algorithm is more practical and gives results in a shorter time. For this study, a code was developed in the Delphi programming language. The values obtained by the computer program were converted into graphs and discussed. During plotting, the quality of the graphs was increased by adding intermediate values to the obtained results using the Lagrange interpolation formula. The numbers of grid cells and nodes required for the solution of the system were estimated. At the same time, to show that the obtained results are satisfactory, a grid-independence (GCI) analysis was performed for coarse, medium, and fine grid systems over the solution domain. It was observed that, when the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved. Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms
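The hybrid differencing and Gauss-Seidel steps named in the abstract can be shown compactly on a 1D analogue. The study itself is 2D with pressure-velocity coupling via SIMPLE/SIMPLER and its code was written in Delphi; the Python sketch below is illustrative only, solving steady 1D convection-diffusion with Patankar-style hybrid coefficients (central differencing for |Pe| < 2, upwind otherwise).

```python
import numpy as np

def solve_1d_convection_diffusion(n=21, u=1.5, rho=1.0, gamma=0.1, L=1.0,
                                  phi0=0.0, phiL=1.0, sweeps=2000):
    """Steady 1D convection-diffusion discretized with the hybrid scheme
    and solved by Gauss-Seidel iteration, as a 1D stand-in for the 2D
    staggered-grid momentum solver described in the abstract."""
    dx = L / (n - 1)
    F, D = rho * u, gamma / dx            # convective flux, diffusive conductance
    aW = max(F, D + F / 2.0, 0.0)         # hybrid-scheme neighbor coefficients
    aE = max(-F, D - F / 2.0, 0.0)
    aP = aW + aE                          # F_e - F_w = 0 for constant u
    phi = np.linspace(phi0, phiL, n)      # initial guess; end nodes hold the BCs
    for _ in range(sweeps):               # Gauss-Seidel: newest values reused
        for i in range(1, n - 1):
            phi[i] = (aW * phi[i - 1] + aE * phi[i + 1]) / aP
    return phi

profile = solve_1d_convection_diffusion()
print(np.round(profile, 3))   # boundary layer steepening toward x = L
```

With the chosen parameters the cell Peclet number is 0.75, so the hybrid scheme reduces to central differencing; raising u until |Pe| > 2 switches the coefficients to upwinding, which is exactly the robustness-versus-accuracy trade-off the abstract's comparison explores.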
Procedia PDF Downloads 391
399 The Effects of Labeling Cues on Sensory and Affective Responses of Consumers to Categories of Functional Food Carriers: A Mixed Factorial ANOVA Design
Authors: Hedia El Ourabi, Marc Alexandre Tomiuk, Ahmed Khalil Ben Ayed
Abstract:
The aim of this study is to investigate the effects of the labeling cues traceability (T), health claim (HC), and verification of health claim (VHC) on consumer affective response and sensory appeal toward a wide array of functional food carriers (FFC). Predominantly, research in the food area has tended to examine the effects of these information cues independently, and on cognitive responses to food product offerings. Investigations and findings of potential interaction effects among these factors on affective response and sensory appeal are therefore scant. Moreover, previous studies have typically emphasized single or limited sets of functional food products and categories. In turn, this study considers five food product categories enriched with omega-3 fatty acids, namely meat products, eggs, cereal products, dairy products, and processed fruits and vegetables. It is, therefore, exhaustive in scope rather than exclusive. An investigation of the potential simultaneous effects of these information cues on the affective responses and sensory appeal of consumers should give rise to important insights for both functional food manufacturers and policymakers. A mixed (2 x 3) x (2 x 5) between-within subjects factorial ANOVA design was implemented in this study. T (two levels: completely traceable or non-traceable) and HC (three levels: functional health claim, disease risk reduction health claim, or disease prevention health claim) were treated as between-subjects factors, whereas VHC (two levels: by a government agency and by a non-government agency) and FFC (five food categories) were modeled as within-subjects factors. Subjects were randomly assigned to one of the six between-subjects conditions. A total of 463 questionnaires were obtained from a convenience sample of undergraduate students at various universities in the Montreal and Ottawa areas (in Canada). Consumer affective response and sensory appeal were respectively measured via the following statements assessed on seven-point semantic differential scales: ‘Your evaluation of [food product category] enriched with omega-3 fatty acids is Unlikeable (1) / Likeable (7)’ and ‘Your evaluation of [food product category] enriched with omega-3 fatty acids is Unappetizing (1) / Appetizing (7).’ Results revealed a significant interaction effect between HC and VHC on consumer affective response as well as on sensory appeal toward foods enriched with omega-3 fatty acids. On the other hand, the three-way interaction effect among T, HC, and VHC on either of the two dependent variables was not significant. However, the triple interaction effect among T, VHC, and FFC on consumer affective response was significant, and the interaction effect among T, HC, and FFC on consumer sensory appeal was significant. Findings of this study should serve as impetus for functional food manufacturers to closely cooperate with policymakers in order to improve on and legitimize the use of health claims in their marketing efforts through credible verification practices and protocols put in place by trusted government agencies. Finally, both functional food manufacturers and retailers may benefit from the socially responsible image which is conveyed by product offerings whose ingredients remain traceable from farm to kitchen table. Keywords: functional foods, labeling cues, affective appeal, sensory appeal
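To make the mixed between-within structure concrete, here is a minimal Python sketch using the pingouin package on simulated data. pingouin's mixed_anova handles one between- and one within-subjects factor, so this keeps only traceability (between) and food category (within); the full (2 x 3) x (2 x 5) design would need a more general linear-model formulation. All data and effect sizes are invented.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Simulate 60 subjects rating affective response (1-7 scale) for five
# food categories, half assigned to a "traceable" labeling condition.
rng = np.random.default_rng(7)
subjects = 60
categories = ["meat", "eggs", "cereal", "dairy", "fruit_veg"]
rows = []
for s in range(subjects):
    trace = "traceable" if s < subjects // 2 else "non_traceable"
    for c in categories:
        score = 4 + 0.6 * (trace == "traceable") + rng.normal(0, 1)
        rows.append({"subject": s, "traceability": trace,
                     "category": c, "affect": float(np.clip(score, 1, 7))})
df = pd.DataFrame(rows)

# Mixed ANOVA: between = traceability, within = food category.
aov = pg.mixed_anova(data=df, dv="affect", within="category",
                     subject="subject", between="traceability")
print(aov.round(3))
```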
Procedia PDF Downloads 164
398 Using True Life Situations in a Systems Theory Perspective as Sources of Creativity: A Case Study of how to use Everyday Happenings to produce Creative Outcomes in Novel and Screenplay Writing
Authors: Rune Bjerke
Abstract:
Psychologists are inclined to see creativity as a mental and psychological process. However, creativity is also a result of cultural and social interactions. Therefore, creativity is not a product of individuals in isolation, but of social systems. Creative people get ideas from the influence of others and the immediate cultural environment – a space of knowledge, situations, and practices. Therefore, in this study we apply systems theory in practice to activate creative processes in the production of our novel and screenplay writing. We, as storytellers, actively seek to get into situations in our everyday lives, our systems, to generate ideas. Within our personal systems, we have the potential to induce situations to realise ideas for our texts, which may be accepted by our gatekeepers and can become socially validated. This is our method of writing – get into situations, get ideas for texts, and test them with family and friends in our social systems. An example of novel text produced by our method is as follows: “Is it a matter of obviousness or had I read it somewhere, that the one who increases his knowledge increases his pain? And also, the other way around, with increased pain, knowledge increases, I thought. Perhaps such a chain of effects explains why the rebel August Strindberg wrote seven plays in ten months after the divorce with Siri von Essen. Shortly after, he tried painting. Neither the seven theatre plays were shown, nor the paintings were exhibited. I was standing in front of Munch's painting Women in Three Stages with chaotic mental images of myself crumpled in a church and a laughing x-girlfriend watching my suffering. My stomach was turning at unpredictable intervals and the subsequent vomiting almost suffocated me. Love grief at the worst. Was it this pain Strindberg felt? Despite the failure of his first plays, the pain must have triggered a form of creative energy that turned pain into ideas. Suffering, thoughts, feelings, words, text, and then, the reader experience. Maybe this negative force can be transformed into something positive, I asked myself. The question eased my pain. At that moment, I forgot the damp, humid air in the Munch Museum. Is it the similar type of Strindberg-pain that could explain the recurring, depressive themes in Munch's paintings? Illness, death, love and jealousy. As a beginning art student at the master's level, I had decided to find the answer. Was it the same with Munch's pain, as with Strindberg - a woman behind? There had to be women in the case of Munch - therefore, the painting “Women in Three Stages”? Who are they, what personality types are they – the women in red, black and white dresses from left to the right?” We, the writers, use persons, situations, and elements in our systems, in a systems theory perspective, to prompt creative ideas. A conceptual model is provided to advance creativity theory. Keywords: creativity theory, systems theory, novel writing, screenplay writing, sources of creativity in social systems
Procedia PDF Downloads 120
397 Fields of Power, Visual Culture, and the Artistic Practice of Two 'Unseen' Women of Central Brazil
Authors: Carolina Brandão Piva
Abstract:
In our visual culture, images play a newly significant role at the basis of a complex dialogue between imagination, creativity, and social practice. Insofar as imagination has broken out of the 'special expressive space of art' to become a part of the quotidian mental work of ordinary people, it is pertinent to recognize that visual representation can no longer be assumed to sit in a domain detached from everyday life or exclusively 'centered' within the limited frame of 'art history.' Crucially, this assertion directs us to something new in contemporary cultural processes, namely that both imagination and image production constitute a social practice. This paper starts off with this approach and seeks to examine the artistic practice of two women from the State of Goiás, Brazil, who are ordinary citizens with their daily activities and narratives but are also dedicated to visuality production. With no formal training from art schools, branded or otherwise, Maria Aparecida de Souza Pires deploys the 'waste disposal' of daily life—from car tires to old work clothes—as a trampoline for art; also adept at sourcing raw materials collected from her surroundings, she manipulates raw-hewn wood, tree trunks, plant life, and various other pieces she collects from nature, giving them new meaning and possibility. Hilda Freire works with sculptures in clay, using different scales and styles; her art focuses on representations of women and pays homage to unprivileged groups such as the practitioners of African-Brazilian religions, blue-collar workers, poor live-in housekeepers, and so forth. Although they have never been acknowledged by any mainstream art institution in Brazil, whose 'criterion of value' still favors formally trained artists, Maria Aparecida de Souza Pires and Hilda Freire have produced visualities that instigate 'new ways of seeing,' meriting cultural significance in many ways. Their artworks neither descend from a 'traditional' medium nor depend on the 'canonical viewing settings' of visual representation; rather, they consist in producing relationships with the world which result not in 'seeing more,' but in seeing 'at least differently.' From this perspective, the paper finally demonstrates that grouping this kind of artistic production under the label of 'mere craft' has much more to do with who is privileged within the fields of power in the art system, who we see and who we do not see, and whose imagination of what is fed by which visual images in Brazilian contemporary society. Keywords: visual culture, artistic practice, women's art in the Brazilian State of Goiás, Maria Aparecida de Souza Pires, Hilda Freire
Procedia PDF Downloads 152
396 The Temporal Implications of Spatial Prospects
Authors: Zhuo Job Chen, Kevin Nute
Abstract:
The work reported here examines potential linkages between spatial and temporal prospects and, more specifically, between variations in the spatial depth and foreground obstruction of window views and observers’ sense of connection to the future. It was found that external views from indoor spaces were strongly associated with a sense of the future, that partially obstructing such a view with foreground objects significantly reduced its association with the future, and that replacing it with a pictorial representation of the same scene (with no actual depth) removed most of its temporal association. A lesser change in the spatial depth of the view, however, had no apparent effect on association with the future. While the role of spatial depth has still to be confirmed, the results suggest that spatial prospects directly affect temporal ones. The word “prospect” typifies the overlapping of the spatial and temporal in most human languages. It originated in classical times as a purely spatial term, but in the 16th century took on the additional temporal implication of an imagined view ahead, of the future. The psychological notion of prospection, then, has its distant origins in a spatial analogue. The mental representation of possible futures has been a central part of human survival as a species (Boyer, 2008; Suddendorf & Corballis, 2007). A sense of the future seems critical not only practically but also psychologically. It has been suggested, for example, that lack of a positive image of the future may be an important contributing cause of depression (Beck, 1974; Seligman, 2016). Most people in the developed world now spend more than 90% of their lives indoors, so any direct link between external views and temporal prospects could have important implications for both human well-being and building design. We found that the ability to see what lies in front of us spatially was strongly associated with a sense of what lies ahead temporally. Partial obstruction of a view was found to significantly reduce that sense of connection to the future. Replacing a view with a flat pictorial representation of the same scene removed almost all of its connection with the future, but changing the spatial depth of a real view appeared to have no significant effect. While foreground obstructions were found to reduce subjects’ sense of connection to the future, they increased their sense of refuge and security. Consistent with Prospect and Refuge theory, an ideal environment, then, would seem to be one in which we can “see without being seen” (Lorenz, 1952), specifically one that conceals us frontally from others without restricting our own view. It is suggested that these optimal conditions might be translated architecturally as screens whose apertures are large enough for a building occupant to see through unobstructed from close by, but small enough to conceal them from the view of someone looking from a distance outside. Keywords: foreground obstructions, prospection, spatial depth, window views
Procedia PDF Downloads 124
395 Modeling of Anisotropic Hardening Based on Crystal Plasticity Theory and Virtual Experiments
Authors: Bekim Berisha, Sebastian Hirsiger, Pavel Hora
Abstract:
Advanced material models involving several sets of model parameters require a large experimental effort. As models become more and more complex, e.g., the so-called Homogeneous Anisotropic Hardening (HAH) model for describing yielding behavior in the 2D/3D stress space, the number and complexity of the required experiments also increase continuously. In the context of sheet metal forming, these requirements are even more pronounced because of the anisotropic behavior of sheet materials. In addition, some of the experiments are very difficult to perform, e.g., the plane-stress biaxial compression test. Accordingly, tensile tests in at least three directions, biaxial tests, and tension-compression or shear-reverse-shear experiments are performed to determine the parameters of the macroscopic models. Therefore, determining the macroscopic model parameters from virtual experiments is a very promising strategy for overcoming these difficulties. For this purpose, in the framework of multiscale material modeling, a dislocation-density-based crystal plasticity model in combination with an FFT-based spectral solver is applied to perform virtual experiments. Modeling the plastic behavior of metals based on crystal plasticity theory is a well-established methodology. However, in general, the computation time is very high, and therefore the computations are restricted to simplified microstructures as well as simple polycrystal models. In this study, a dislocation-density-based crystal plasticity model – including an implementation of the backstress – is used in a spectral solver framework to generate virtual experiments for three deep-drawing materials: DC05 steel and the AA6111-T4 and AA4045 aluminum alloys. For this purpose, uniaxial as well as multiaxial loading cases, including various pre-strain histories, have been computed and validated against real experiments. These investigations showed that crystal plasticity modeling in the framework of Representative Volume Elements (RVEs) can be used to replace most of the expensive real experiments. Further, the model parameters of advanced macroscopic models like the HAH model can be determined from virtual experiments, even for multiaxial deformation histories. It was also found that crystal plasticity modeling can capture anisotropic hardening more accurately by considering the backstress, similar to well-established macroscopic kinematic hardening models. It can be concluded that an efficient coupling of crystal plasticity models and the spectral solver leads to a significant reduction in the number of real experiments needed to calibrate macroscopic models. This advantage also leads to a significant reduction in the computational effort needed for the optimization of metal forming processes. Further, due to the time-efficient spectral solver used in the computation of the RVE models, detailed modeling of the microstructure is possible. Keywords: anisotropic hardening, crystal plasticity, microstructure, spectral solver
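The role the backstress plays here, letting a hardening model capture the Bauschinger effect seen in tension-compression virtual experiments, can be illustrated with a deliberately simple 1D stand-in. The Python sketch below implements Armstrong-Frederick kinematic hardening, not the authors' dislocation-density crystal plasticity model, and all material constants are invented.

```python
import numpy as np

def af_cycle(strain_path, E=70e3, sigma_y=120.0, C=20e3, gamma=150.0):
    """1D rate-independent plasticity with an Armstrong-Frederick
    backstress X, evolving as dX = C*dep - gamma*X*|dep|. A semi-implicit
    return mapping is applied per strain increment. The backstress shifts
    the elastic range, producing early re-yielding on load reversal
    (the Bauschinger effect)."""
    stress, X = 0.0, 0.0
    out = []
    for i in range(1, len(strain_path)):
        de = strain_path[i] - strain_path[i - 1]
        trial = stress + E * de                 # elastic predictor
        f = abs(trial - X) - sigma_y            # yield check vs. shifted center
        if f <= 0.0:
            stress = trial                      # purely elastic step
        else:
            n = np.sign(trial - X)              # flow direction
            dp = f / (E + C - gamma * X * n)    # plastic multiplier
            stress = trial - E * dp * n         # plastic corrector
            X += (C * n - gamma * X) * dp       # backstress evolution
        out.append(stress)
    return np.array(out)

# Tension to 1% strain, then reversal to -1%: reverse yielding starts well
# before -sigma_y, which a pure isotropic-hardening model cannot reproduce.
path = np.concatenate([np.linspace(0, 0.01, 200), np.linspace(0.01, -0.01, 400)])
sigma = af_cycle(path)
print(f"peak stress {sigma.max():.1f} MPa, minimum stress {sigma.min():.1f} MPa")
```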
Procedia PDF Downloads 315