Search results for: precedent phenomena
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1159

49 The Temperature Degradation Process of Siloxane Polymeric Coatings

Authors: Andrzej Szewczak

Abstract:

The study of the effect of high temperatures on polymer coatings is an important field of research into their properties. Polymers, as materials with numerous favorable features (chemical resistance, ease of processing and recycling, corrosion resistance, low density and weight), are currently among the most widely used modern building materials, appearing in resin concrete, plastic parts, and hydrophobic coatings, among other applications. Unfortunately, polymers also have disadvantages, one of which limits their use: low resistance to high temperatures, together with brittleness. This applies in particular to thin, flexible polymeric coatings applied to other materials, such as steel and concrete, which degrade under varying thermal conditions. Research aimed at improving this situation includes modifying the polymer composition, the structure, the conditioning regime, and the polymerization reaction. At present, ways are sought to reproduce the actual environmental conditions in which the coating will operate after it has been applied to another material. These studies are difficult because of the need to adopt a proper model of the polymer's operation and to determine the phenomena occurring during temperature fluctuations. For this reason, alternative methods are being developed that rely on rapid modeling and simulation of the actual operating conditions of polymeric coating materials. Temperature influence in the environment is, by nature, a long-duration process, so studies typically involve measuring the variation of one or more physical and mechanical properties of the coating over time. Based on these results, it is possible to determine the effects of temperature loading and to develop methods for improving the coatings' properties. This paper describes stability studies of silicone coatings deposited on the surface of a ceramic brick. The brick's surface was hydrophobized with two types of inorganic polymers: a nano-polymer preparation based on dialkyl siloxanes (series 1-5) and an aqueous silicone solution (series 6-10). In order to enhance the stability of the film formed on the brick's surface and to make it resistant to variable temperature and humidity loading, nano-silica was added to the polymer. The right combination of the liquid polymer phase and the solid nano-silica phase was obtained by disintegrating the mixture by sonication. The changes in the polymers' viscosity and surface tension, the basic rheological parameters affecting the state and durability of a polymer coating, were determined. The coatings created on the brick surfaces were then subjected to a temperature loading of 100 °C and to moisture by total immersion in water, in order to determine any changes in water absorption caused by damage to and degradation of the polymer film. The effect of moisture and temperature was determined by measuring (after a specified number of cycles) changes in surface hardness (using the Vickers method) and the water absorption of individual samples. On the basis of the obtained results, the degradation process of the polymer coatings, expressed as the change in their durability over time, was determined.
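
The two degradation indicators tracked above are simple to compute from the raw measurements. Below is a minimal Python sketch, assuming mass readings in grams and standard Vickers indentation data; the function names and sample values are illustrative, not taken from the paper.

```python
def water_absorption_pct(mass_dry_g, mass_wet_g):
    """Water absorption after immersion, as a percentage of dry mass."""
    return 100.0 * (mass_wet_g - mass_dry_g) / mass_dry_g

def vickers_hardness(load_kgf, diagonal_mm):
    """Standard Vickers hardness HV = 1.8544 * F / d^2 from indenter load
    and mean indentation diagonal."""
    return 1.8544 * load_kgf / diagonal_mm ** 2

# Illustrative readings for one sample across thermal/moisture cycles
cycles = [0, 5, 10, 15]
dry = [18.20, 18.19, 18.18, 18.18]     # g, before immersion (hypothetical)
wet = [18.35, 18.41, 18.50, 18.62]     # g, after immersion (hypothetical)
for n, d, w in zip(cycles, dry, wet):
    print(f"cycle {n}: absorption = {water_absorption_pct(d, w):.2f}%")
```

Rising absorption across cycles would indicate progressive damage to the polymer film, which is exactly the durability-in-time signal the study reconstructs.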

Keywords: silicones, siloxanes, surface hardness, temperature, water absorption

Procedia PDF Downloads 242
48 The Real Ambassador: How Hip Hop Culture Connects and Educates across Borders

Authors: Frederick Gooding

Abstract:

This paper explores how many Hip Hop artists have intentionally and strategically invoked the sustainability principles of people, planet, and profits as a means to create community and to compensate for and cope with structural inequalities in society. These themes not only create community within one's country; the powerful display and demonstration of these narratives create community on a global plane. Listeners of Hip Hop are therefore able to learn about the political events occurring in another country free of censure, and to establish solidarity worldwide. Hip Hop can thus be an ingenious tool to create self-worth, recycle positive imagery, and serve as a defense mechanism against institutional and structural forces that conspire to make an upward economic and social trajectory difficult, if not impossible, for many people of color all across the world. Although the birthplace of Hip Hop, the United States of America, is still predominantly White, it has undoubtedly grown more diverse at a breathtaking pace in recent decades. Yet whether American mainstream media will fully reflect America's newfound diversity remains to be seen. As it stands, American mainstream media is seen and enjoyed by diverse audiences not just in America but all over the world. Thus, it is imperative that further inquiry be conducted into one of the fastest growing genres within one of the world's largest and most influential media industries, generating upwards of $10 billion annually. More importantly, hip hop, its music, and its associated culture collectively represent a shared social experience of significant value. They are important tools used both to inform and to influence economic, social, and political identity. Conversely, principles of American exceptionalism often prioritize American political issues over those of others, thereby rendering a myopic political view within the mainstream. This paper will therefore engage in an international contextualization of the global phenomenon known as Hip Hop by exploring its creative genius and marketing appeal within the global context of information technology, political expression, and social change, in addition to taking a critical look at historically racialized imagery within mainstream media. Many artists the world over have been able to freely express themselves and connect with broader communities outside of their own borders, all through the sound practice of the craft of Hip Hop. An empirical understanding of political, social, and economic forces within the United States will serve as a bridge for identifying and analyzing transnational themes of commonality for typically marginalized or disaffected communities facing similar struggles for survival and respect. The sharing of commonalities among marginalized cultures not only serves as a source of education outside of typically myopic mainstream sources, but it also creates transnational bonds, to the extent that practicing artists resonate with many of the original themes of (now mostly underground) Hip Hop, as did many of the African American artists responsible for creating and fostering Hip Hop's powerful outlet of expression. Hip Hop's power of connectivity and culture-sharing across borders provides a key source of education to be taken seriously by academics.

Keywords: culture, education, global, hip hop, mainstream music, transnational

Procedia PDF Downloads 100
47 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry

Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard

Abstract:

Wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, bad or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns and to device defectivity. This issue becomes more and more important with transistor size shrinkage and concerns mainly high aspect ratio structures. Deep Trench Isolation (DTI) structures enabling pixel isolation in imaging devices are subject to this phenomenon. While low-frequency acoustic reflectometry is a well-known method for Non-Destructive Testing applications, we have recently shown that it is also well suited to nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a confrontation of experimental and modeling results. The acoustic method proposed is based on evaluating the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers were fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The DTI structures studied are crossing trenches 200 nm wide and 4 µm deep (aspect ratio of 20) etched into the wafer frontside. In that case, the acoustic signal reflection occurs at the bottom and at the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed, and the model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo, obtained through reflection at the Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water/ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. In the untreated case, acoustic reflection coefficient values with water show that liquid imbibition is partial. In the treated case, the acoustic reflection with water is total (no liquid in the DTI). Impalement of the liquid occurs at a specific surface tension but is still partial even for pure ethanol. The DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. The sensitivity of this high-frequency acoustic method, coupled with an FDTD propagative model, thus enables the local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low surface tension liquids are then detectable with this method.
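
The quantities at the heart of the method follow from textbook acoustics: the wavelength in silicon sets the resolvable depth scale, and the normal-incidence reflection coefficient at the trench-bottom interface depends on whether liquid or air sits behind it. A minimal sketch, using commonly tabulated material constants that are assumptions here, not values from the paper:

```python
# Normal-incidence acoustic reflection at the Si / trench-bottom interface.
# Material constants are typical literature values, not the paper's.
def impedance(density_kg_m3, velocity_m_s):
    """Acoustic impedance Z = rho * v."""
    return density_kg_m3 * velocity_m_s

def reflection_coeff(z1, z2):
    """Pressure reflection coefficient from medium 1 into medium 2."""
    return (z2 - z1) / (z2 + z1)

f = 5e9                                    # transducer frequency, Hz
v_si = 8433.0                              # longitudinal velocity in Si, m/s
print(f"wavelength in Si: {v_si / f * 1e6:.2f} um")   # ~1.7 um, as in the text

z_si = impedance(2329.0, v_si)             # silicon
z_air = impedance(1.2, 343.0)              # dry (non-wetted) trench bottom
z_h2o = impedance(1000.0, 1480.0)          # water-filled trench bottom
print("R (Si/air):  ", round(reflection_coeff(z_si, z_air), 3))   # ~ -1: total reflection
print("R (Si/water):", round(reflection_coeff(z_si, z_h2o), 3))   # weaker: partial reflection
```

The contrast between the near-total reflection against air and the weaker reflection against water is what lets the echo amplitude report whether liquid has reached the trench bottom.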

Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor

Procedia PDF Downloads 326
46 Improved Elastoplastic Bounding Surface Model for the Mathematical Modeling of Geomaterials

Authors: Andres Nieto-Leal, Victor N. Kaliakin, Tania P. Molina

Abstract:

The nature of most engineering materials is quite complex. It is, therefore, difficult to devise a general mathematical model that will cover all possible ranges and types of excitation and behavior of a given material. As a result, the development of mathematical models is based upon simplifying assumptions regarding material behavior. Such simplifications result in some material idealization; for example, one of the simplest material idealizations is to assume that the material behaves elastically. However, soils are nonhomogeneous, anisotropic, path-dependent materials that exhibit nonlinear stress-strain relationships, changes in volume under shear, dilatancy, as well as time-, rate-, and temperature-dependent behavior. Over the years, many constitutive models, possessing different levels of sophistication, have been developed to simulate the behavior of geomaterials, particularly cohesive soils. Early in the development of constitutive models, it became evident that elastic or standard elastoplastic formulations, employing purely isotropic hardening and predicated on the existence of a yield surface surrounding a purely elastic domain, were incapable of realistically simulating the behavior of geomaterials. Accordingly, more sophisticated constitutive models have been developed, for example, bounding surface elastoplasticity. The essence of the bounding surface concept is the hypothesis that plastic deformations can occur for stress states either within or on the bounding surface; thus, unlike classical yield surface elastoplasticity, plastic states are not restricted to those lying on a surface. Elastoplastic bounding surface models have been improved over the years; however, there is still a need to improve their capabilities in simulating the response of anisotropically consolidated cohesive soils, especially the response in extension tests. Thus, in this work, an improved constitutive model that can more accurately predict the diverse stress-strain phenomena exhibited by cohesive soils was developed, in particular an improved rotational hardening rule that better simulates the response of cohesive soils in extension. The generalized definition of the bounding surface model provides a convenient and elegant framework for unifying various previous versions of the model for anisotropically consolidated cohesive soils. The Generalized Bounding Surface Model for cohesive soils is a fully three-dimensional, time-dependent model that accounts for both inherent and stress-induced anisotropy, employing a non-associative flow rule. The numerical implementation of the model in a computer code followed an adaptive multistep integration scheme in conjunction with local iteration and radial return. The one-step trapezoidal rule was used to obtain the stiffness matrix that defines the relationship between the stress increment and the strain increment. The model was tested through extensive comparisons of its simulations to experimental data and has been shown to give quite good results. The new model successfully simulates the response of different cohesive soils, for example, Cardiff Kaolin, Spestone Kaolin, and Lower Cromer Till. The simulated undrained stress paths, stress-strain response, and excess pore pressures are in very good agreement with the experimental values, especially in extension.
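
To make the integration step concrete, the sketch below applies a one-step trapezoidal rule to a generic rate-independent constitutive update, averaging the tangent stiffness at the start of the step and at an explicit trial state. The toy operator `tangent_stiffness` is a stand-in of my own, assuming simple isotropic elasticity in principal components; the model's actual elastoplastic bounding-surface tangent is not given in the abstract.

```python
import numpy as np

def tangent_stiffness(stress):
    """Placeholder tangent operator D(sigma). The real model's elastoplastic
    tangent (bounding surface, rotational hardening) would go here."""
    E, nu = 50e6, 0.3                          # illustrative elastic constants, Pa
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame parameters
    mu = E / (2 * (1 + nu))
    # Principal-component (3x3) form: sigma_i = lam*tr(eps) + 2*mu*eps_i
    return lam * np.ones((3, 3)) + 2 * mu * np.eye(3)

def trapezoidal_update(stress, dstrain):
    """One-step trapezoidal stress update: average D at the start of the
    step and at a forward-Euler trial state."""
    D0 = tangent_stiffness(stress)
    trial = stress + D0 @ dstrain              # explicit predictor
    D1 = tangent_stiffness(trial)
    return stress + 0.5 * (D0 + D1) @ dstrain

sigma = np.array([100e3, 100e3, 100e3])        # principal stresses, Pa
deps = np.array([1e-4, -0.5e-4, -0.5e-4])      # principal strain increment
print(trapezoidal_update(sigma, deps))
```

In the actual implementation this update sits inside the adaptive multistep scheme, with local iteration and radial return correcting the stress back toward the bounding surface.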

Keywords: bounding surface elastoplasticity, cohesive soils, constitutive model, modeling of geomaterials

Procedia PDF Downloads 314
45 A Long-Standing Methodology Quest Regarding Commentary of the Qur’an: Modern Debates on Function of Hermeneutics in the Quran Scholarship in Turkey

Authors: Merve Palanci

Abstract:

This paper aims to reveal and analyze methodology debates on Qur'an commentary in Turkish scholarship and to draw sound inductions about the current situation, with reference to the literature revolving around the credibility of hermeneutics for Qur'an commentary and its methodological connotations, together with other modern approaches to the Qur'an. It is fair to say that Tafseer, constituting one of the main branches of the basic Islamic sciences, has long drawn great attention from both Muslim and non-Muslim scholars. With the sharp division between the natural and social sciences established in the post-Enlightenment period, this interest seems to have paved the way for methodology discussions conducted in theology circles, which occupy a noticeable place in the Tafseer literature as well. A panoramic glance at the classical treatises on the methodology of Tafseer, namely Usul al-Tafseer, leads the reader to the conclusion that these classics are intrinsically aimed at introducing the Qur'an and the early history of its formation as a corpus and at providing a better understanding of its content. To illustrate, the earliest extant methodology work on Qur'an commentary, al-Aql wa'l Fahm al-Qur'an by Harith al-Muhasibi, covers content dealing with the Qur'an's rhetoric, its muhkam and mutashabih, abrogation, etc. Most of the themes in question evidently share a common ground: understanding the Scripture and producing an accurate commentary built on this preliminary phenomenon of understanding. The content of other renowned works in the vein of Tafseer methodology, such as Funun al-Afnan and al-Iqsir fi Ilm al-Tafseer, and of succeeding ones such as al-Itqan and al-Burhan, is also rich in hints related to the preliminary phenomena of understanding. However, these works cannot be classified as full-fledged methodology manuals assuring a true understanding of the Qur'an. Hermeneutics, by contrast, is believed to supply substantial data applicable to Qur'an commentary, as it deals with the nature of understanding itself. Referring to the latest tendencies in Tafseer methodology, this paper centralizes hermeneutical debates in modern scholarship of Qur'an commentary and the incentives that lead scholars to turn to hermeneutics in the Tafseer literature. Following these incentives, the study comprises three parts. In the introduction, the paper presents the key features of classical methodology works in general terms and traces the main methodological shifts of modern times in Qur'an commentary. To this end, the revisionist school, scientific Qur'an commentary ventures, and thematic Qur'an commentary are included and analysed briefly; historical-critical commentary on the Qur'an, as it bears a close relationship with hermeneutics, is treated in greater depth. The second part addresses the hermeneutical nature of understanding the Scripture, establishes a timeline for the beginning of hermeneutics debates in Tafseer, and presents Fazlur Rahman's (d. 1988) influence as a theoretical bridge. In the final part, reactions against the application of hermeneutics in Tafseer activity, as well as pro-hermeneutics works, are revealed through cross-references to the prominent figures of both camps, and the literature in question in Turkish theology scholarship is explored critically.

Keywords: hermeneutics, Tafseer, methodology, Ulum al-Qur'an, modernity

Procedia PDF Downloads 72
44 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

Evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, offers a satisfactory explanation of these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but the accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as natural life origin and development, is a reality; evolution is a coordinated and controlled process; one of evolution's main development vectors is the growing computational complexity of living organisms and of the biosphere's intelligence; the intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth; and information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolutionary theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. Greater memory volume requires a greater number of, and more intellectually advanced, organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with an accelerating evolutionary dynamic. New species emerge when two conditions are met: a) crucial environmental changes occur and/or global memory storage volume reaches its limit, and b) biosphere computational complexity reaches the critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. It logically resolves many puzzling problems of current evolutionary theory: speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, happening when the two conditions a) and b) above are met; the Cambrian explosion; and mass extinctions, happening when more intelligent species should replace outdated creatures.

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 164
43 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model

Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero

Abstract:

Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing lithium-ion battery parameters, such as the material particle size or the adjustment direction of the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs) describing Fick's law of diffusion, the MacInnes equation, and Ohm's law, among other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. Several numerical methods available in the literature can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and computationally fast; in this work, its accuracy and stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second-order accurate in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests. This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, on top of this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the simple explicit Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods will be compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include long-term simulations for aging tests as well as short- and mid-term battery charge/discharge cycles, typically relevant in battery applications such as grid primary frequency and inertia control and electric vehicle braking and acceleration.
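
As a concrete illustration of the trade-off discussed above, the sketch below applies Crank-Nicolson to a single 1D Fick diffusion equation of the kind appearing in the DFN electrolyte phase. This is a minimal stand-alone example, not the full DFN implementation; the parameter values and the zero-flux boundary condition are assumptions made for the sketch.

```python
import numpy as np

def crank_nicolson_step(c, D, dx, dt):
    """One Crank-Nicolson step for c_t = D * c_xx with zero-flux boundaries:
    solve (I - r*L) c_new = (I + r*L) c_old, r = D*dt/(2*dx^2)."""
    n = len(c)
    r = D * dt / (2.0 * dx ** 2)
    L = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            L[i, i - 1] += 1.0
        if i < n - 1:
            L[i, i + 1] += 1.0
        L[i, i] -= 2.0
    # Mirrored ghost nodes enforce zero flux at both ends
    L[0, 1] += 1.0
    L[-1, -2] += 1.0
    A = np.eye(n) - r * L            # implicit half-step
    B = np.eye(n) + r * L            # explicit half-step
    return np.linalg.solve(A, B @ c)

# Electrolyte-like concentration gradient relaxing over time (illustrative values)
c = np.linspace(1.0, 0.0, 50)        # initial profile, mol/m^3 (normalized)
for _ in range(1000):
    c = crank_nicolson_step(c, D=1e-10, dx=1e-6, dt=1e-3)
print(c.round(3))
```

Unlike explicit Euler, whose step size here would be capped by the stability limit dt <= dx²/(2D), this update remains stable for any dt, which is what makes the scheme attractive for the long-term aging simulations mentioned above.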

Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods

Procedia PDF Downloads 21
42 Worldwide GIS Based Earthquake Information System/Alarming System for Microzonation/Liquefaction and Its Application for Infrastructure Development

Authors: Rajinder Kumar Gupta, Rajni Kant Agrawal, Jaganniwas

Abstract:

One of the most frightening phenomena of nature is the occurrence of an earthquake, as it has terrible and disastrous effects. Many earthquakes occur every day worldwide, and there is a need for knowledge regarding the trends in earthquake occurrence worldwide. The recording and interpretation of data obtained from the establishment of the worldwide system of seismological stations made this possible. From the analysis of recorded earthquake data, the earthquake parameters and source parameters can be computed and earthquake catalogues can be prepared. These catalogues provide information on origin time, epicenter location (in terms of latitude and longitude), focal depth, magnitude, and other related details of the recorded earthquakes, and they are used for seismic hazard estimation. Manual interpretation and analysis of these data are tedious and time-consuming. A geographical information system (GIS) is a computer-based system designed to store, analyze, and display geographic information. The implementation of integrated GIS technology provides an approach which permits rapid evaluation of complex inventory databases under a variety of earthquake scenarios and allows the user to view results interactively, almost immediately. GIS technology provides a powerful tool for displaying outputs and permits users to see the graphical distribution of the impacts of different earthquake scenarios and assumptions. An endeavor has been made in the present study to compile earthquake data for the whole world in Visual Basic on the ArcGIS platform so that it can be used easily for further analysis by earthquake engineers. The basic data on time of occurrence, location, and size of earthquakes have been compiled for querying based on various parameters. A preliminary analysis tool is also provided in the user interface to interpret earthquake recurrence in a region. The user interface also includes the seismic hazard information already worked out under the GSHAP program; the seismic hazard, in terms of probability of exceedance in definite return periods, is provided for the world. The seismic zones of the Indian region are included in the user interface from IS 1893:2002, the code on earthquake-resistant design of buildings. City-wise satellite images have been inserted into the map, and based on actual data the following information can be extracted in real time:
• Analysis of soil parameters and their effect
• Microzonation information
• Seismic hazard and strong ground motion
• Soil liquefaction and its effect on the surrounding area
• Impacts of liquefaction on buildings and infrastructure
• Occurrence of future earthquakes and their effect on existing soil
• Propagation of ground vibration due to the occurrence of an earthquake
A GIS-based earthquake information system has been prepared for the whole world in Visual Basic on the ArcGIS platform and further extended to the micro level based on actual soil parameters. Individual tools have been developed for liquefaction, earthquake frequency, etc. All of this information can be used for infrastructure development, i.e., multi-story structures, irrigation dams and their components, hydropower plants, etc., in real time, for the present and the future.
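
At its core, a catalogue front end of this kind is parameterized filtering of event records. The sketch below shows the idea in Python with pandas rather than Visual Basic/ArcGIS; the file name and column names are assumptions for illustration, not the system's actual schema.

```python
import pandas as pd

# Assumed catalogue layout: time, lat, lon, depth_km, magnitude
catalog = pd.read_csv("earthquake_catalog.csv", parse_dates=["time"])

def query_catalog(df, min_mag=5.0, bbox=(6.0, 38.0, 68.0, 98.0), since="1950-01-01"):
    """Events of at least min_mag inside bbox = (lat_min, lat_max, lon_min, lon_max)."""
    lat_min, lat_max, lon_min, lon_max = bbox
    mask = (
        (df["magnitude"] >= min_mag)
        & df["lat"].between(lat_min, lat_max)
        & df["lon"].between(lon_min, lon_max)
        & (df["time"] >= pd.Timestamp(since))
    )
    return df[mask]

# Preliminary recurrence analysis: strong events per year in a chosen region
events = query_catalog(catalog)
print(events["time"].dt.year.value_counts().sort_index())
```

The yearly counts are the raw input to the recurrence interpretation mentioned above; the GIS layer then maps the same filtered events spatially.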

Keywords: GIS based earthquake information system, microzonation, analysis and real time information about liquefaction, infrastructure development

Procedia PDF Downloads 315
41 Extremism among College and High School Students in Moscow: Diagnostics Features

Authors: Puzanova Zhanna Vasilyevna, Larina Tatiana Igorevna, Tertyshnikova Anastasia Gennadyevna

Abstract:

In this day and age, extremism in its various forms of manifestation is a real threat to the world community, to the national security of a state and its territorial integrity, and to the constitutional rights and freedoms of citizens. Extremism is generally described as a commitment to extreme views and actions that radically deny the existing social norms and rules. Supporters of extremism in ideological and political struggles often adopt the methods and means of psychological warfare, appealing not to reason and logical arguments but to people's emotions and instincts, to prejudices, biases, and a variety of mythological constructions. They are dissatisfied with the established order and aim at increasing this dissatisfaction among the masses. Youth extremism holds a specific place among the existing forms and types of extremism. In this context, in 2015 we conducted a survey among Moscow college and high school students. The aim of this study was to determine how great the difference is in the understanding of and attitudes toward manifestations of extremism, the inclination and readiness to take part in extremist activities, and what causes this predisposition, if it exists. We performed multivariate analysis and established Russian college and high school students' opinions on the extremism and terrorism situation in our country, as well as their knowledge of these topics. Among other things, we showed that the level of aggressiveness of young people was not above the average for the whole population. The survey was conducted using the questionnaire method. The sample included college and high school students in Moscow (642 and 382, respectively), selected at random. The questionnaire was developed by specialists of the RUDN University Sociological Laboratory and included both original questions (projective questions, the incomplete-sentences technique) and the standard S. Dayhoff test to determine the level of internal aggressiveness. As an experiment, FACS and SPAFF techniques were also used to determine psychotypes and non-verbal manifestations of emotions. The study confirmed the hypothesis that, in the respondents' opinion, the level of aggression is higher today than a few years ago. Differences were found between the two age groups of young people in the understanding of, and attitudes toward, such social phenomena as extremism and terrorism, their danger, and their appeal. The theory of psychotypes, SPAFF (Specific Affect Coding System), and FACS (Facial Action Coding System) are considered as additional techniques for diagnosing a tendency toward extreme views. Thus, it is established that diagnostics of the acceptance of extreme views among young people is possible thanks to the simultaneous use of knowledge from different fields of the social sciences and humanities. The results of the research can be used in a comparative context with other countries and as a starting point for further research in the field, given its extreme relevance.

Keywords: extremism, youth extremism, diagnostics of extremist manifestations, forecast of behavior, sociological polls, theory of psychotypes, FACS, SPAFF

Procedia PDF Downloads 337
40 Molecular Migration in Polyvinyl Acetate Matrix: Impact of Compatibility, Number of Migrants and Stress on Surface and Internal Microstructure

Authors: O. Squillace, R. L. Thompson

Abstract:

Migration of small molecules to, and across, the surface of polymer matrices is a little-studied problem with important industrial applications. Tackifiers in adhesives, flavors in foods, and binding agents in paints all present situations where the function of a product depends on the ability of small molecules to migrate through a polymer matrix to achieve the desired properties, such as softness and dispersion of fillers, and to deliver an effect that is felt (or tasted) on a surface. It has been shown that the chemical and molecular structure, surface free energies, phase behavior, close environment, and compatibility of the system influence the migrants' motion. When differences in behavior are observed, such as the occurrence or absence of segregation to the surface, it is of crucial importance to identify and better understand the driving forces involved in the process of molecular migration. To this end, experiment is to be allied with theory in order to deliver a validated theoretical and computational toolkit to describe and predict these phenomena. The systems chosen for this study address the effect of polarity mismatch between the migrants and the polymer matrix, and the effect of a second migrant over the first one. As a non-polar resin polymer, polyvinyl acetate (PVAc) is used as the material to which more or less polar migrants (sorbitol, carvone, octanoic acid (OA), triacetin) are added. Through contact angle measurements, a surface excess is seen for sorbitol (polar) mixed with PVAc, as the surface energy is lowered compared to that of pure PVAc. This effect is increased upon the addition of carvone or triacetin (non-polar). Surface microstructures are also evidenced by atomic force microscopy (AFM). Ion beam analysis (Nuclear Reaction Analysis), supplemented by neutron reflectometry, can accurately characterize the self-organization of surfactants, oligomers, and aromatic molecules in polymer films, in order to relate the macroscopic behavior to length scales that are amenable to simulation. The nuclear reaction analysis (NRA) data for 20% deuterated OA show evidence of a surface excess, which is enhanced after annealing. The addition of 10% triacetin as a second migrant results in the formation of an underlying layer enriched in triacetin below the surface excess of OA. The results show that molecules in polarity mismatch with the matrix tend to segregate to the surface, and this is favored by the addition of a second migrant of the same polarity as the matrix. Since studies have so far been restricted to model supported films under static conditions, we also wish to address the more challenging conditions of materials under controlled stress or strain. To achieve this, a simple rig and PDMS cell have been designed to stretch the material to a defined strain and to probe the mechanical effects by ion beam analysis and atomic force microscopy. This will make a significant step towards exploring the influence of extensional strain on surface segregation and flavor release in cross-linked rubbers.

Keywords: polymers, surface segregation, thin films, molecular migration

Procedia PDF Downloads 132
39 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make staying globally competitive more difficult without a shift in focus on how science is taught in US classrooms. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze, and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is necessary to design resources which enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students' representational competence, the visualization skills necessary to process science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students' visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. These data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM Education scholars from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRsoft were exported as an Excel file, with 80 each of 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner's TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of the ANN's predictions were accurate. The ANN determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located over the right prefrontal cortex, a region important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements and an ANN for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, and therefore science literacy.
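
For readers who want to reproduce the modeling step outside RapidMiner, the sketch below trains a gradient-boosted-trees classifier with the hyperparameters quoted above (140 trees, maximum depth 7) using scikit-learn. The file name and column names are assumptions made for illustration; the real feature layout of the fNIR export is not given here.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical layout: one row per trial, fNIR optode channels plus task metadata
df = pd.read_csv("fnir_trials.csv")
X = df.drop(columns=["success"])     # e.g. optode_1..optode_16, response_time, problem_number
y = df["success"]                    # 1 = correct mental rotation, 0 = incorrect

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 140 trees, max depth 7, mirroring the ensemble described in the abstract
model = GradientBoostingClassifier(n_estimators=140, max_depth=7)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Rank predictors, analogous to identifying optode #16 and response time
importance = pd.Series(model.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False).head())
```

The feature-importance ranking at the end is the step that, in the study, singled out problem number, response time, and optode #16 as the strongest predictors.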

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 117
38 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada

Authors: Stefan W. Kienzle

Abstract:

The increasing number of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses on the public in coping with a changing climate. A climate index breaks daily climate time series down into meaningful derivatives, such as the annual number of frost days. Climate indices allow the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends in phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes, and trends, and for the quantification of changes and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain thresholds (0, +-10, +-20, +25, +30 ºC), frost days and the timing of frost days, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, and snow and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends, and the slope of the trends was determined using the non-parametric Sen's slope test. A Google Maps interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6,833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4-5 °C in the south and 6-7 °C in the north, summers show the weakest warming over the same period, ranging from about 0.5-1.5 °C. New agricultural opportunities exist in central regions, where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below -20 ºC has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and with cold spells have both doubled to quadrupled during the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
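
The two trend statistics named above are straightforward to compute. Below is a compact, self-contained Python sketch of the Mann-Kendall test (without the tie correction) and Sen's slope, applied to a synthetic frost-day series; an analysis on the Canadian Climate Grid would loop this over all 6,833 cells.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction) and Sen's slope for a 1-D series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    slopes = []
    for i in range(n - 1):
        diffs = x[i + 1:] - x[i]
        s += np.sign(diffs).sum()                    # S statistic
        slopes.extend(diffs / np.arange(1, n - i))   # pairwise slopes for Sen
    var_s = n * (n - 1) * (2 * n + 5) / 18.0         # variance of S, no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0   # continuity correction
    p = 2.0 * (1.0 - norm.cdf(abs(z)))               # two-sided p-value
    return {"S": s, "z": z, "p": p, "sen_slope": float(np.median(slopes))}

# Example: annual number of frost days, 1951-2017 (synthetic declining series)
years = np.arange(1951, 2018)
frost_days = 200 - 0.3 * (years - 1951) + np.random.default_rng(1).normal(0, 5, len(years))
print(mann_kendall(frost_days))
```

A significantly negative z with a negative Sen's slope would be reported as a robust decline in frost days, which is the kind of per-cell result the Alberta maps visualize.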

Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes

Procedia PDF Downloads 92
37 Theoretical Study of the Photophysical Properties and Potential Use of Pseudo-Hemi-Indigo Derivatives as Molecular Logic Gates

Authors: Christina Eleftheria Tzeliou, Demeter Tzeli

Abstract:

Introduction: Molecular Logic Gates (MLGs) are molecular machines that can perform complex work, such as solving logic operations. Molecular switches, which are molecules that can undergo reversible chemical changes, are examples of successful MLGs. Recently, Quintana-Romero and Ariza-Castolo studied experimentally six stable pseudo-hemi-indigo-derived MLGs capable of solving complex logic operations. The MLG design relies on a molecular switch that exhibits Z/E isomerism, so the molecular switch's axis has to be a double bond. The hemi-indigo structure was preferred for the assembly of molecular switches due to its interaction with visible light. Z and E pseudo-hemi-indigo isomers can also be utilized for selective isomerization, as they have distinct absorption spectra. Methodology: Here, the photophysical properties of pseudo-hemi-indigo derivatives are examined, i.e., derivatives of molecule 1 with anthracene, naphthalene, phenanthrene, pyrene, and pyrrole. On the basis of preliminary trials, the level of theory described below was selected. The structures under study were optimized in both cis and trans conformations at the PBE0/6-31G(d,p) level of theory, and their absorption spectra were calculated at PBE0/DEF2TZVP. In all cases, the absorption spectra of the studied systems were calculated including up to 50 singlet- and triplet-spin excited electronic states. Transition states (cis → cis, cis → trans, and trans → trans) were obtained where possible, with PBE0/6-31G(d,p) for the optimization of the transition states and PBE0/DEF2TZVP for the respective absorption spectra. Emission spectra were obtained for the first singlet state of each molecule, in both cis and trans conformations, at PBE0/DEF2TZVP as well. All calculations were performed in chloroform solvent, included via its dielectric constant within the polarizable continuum model. Findings: Shifts of up to 25 nm are observed in the absorption spectra due to cis-trans isomerization, while the transition state is shifted by up to about 150 nm. The electron density distribution is also examined, where charge transfer and electron transfer phenomena are observed for the three excitations of interest, i.e., H-1 → L, H → L, and H → L+1. Emission spectra calculations were also carried out at PBE0/DEF2TZVP for the complete investigation of these molecules. Using protonation as input, selected molecules act as MLGs. Conclusion: The theoretical data so far indicate that both cis-trans isomerization and cis-cis and trans-trans conformer isomerization affect the UV-visible absorption and emission spectra. Specifically, shifts of up to 30 nm are observed, while the transition state is shifted by up to about 150 nm in cis-cis isomerization. The computational data obtained are in agreement with available experimental data, which predicted that the pyrrole derivative is an MLG at 445 nm and 400 nm using protonation as input, while the anthracene derivative is an MLG that operates at 445 nm using protonation as input. Finally, it was found that selected molecules are candidate MLGs using protonation and light as inputs. These MLGs could be used as chemical sensors or as particular intracellular indicators, among several other applications. Acknowledgements: The author acknowledges the Hellenic Foundation for Research and Innovation for the financial support of this project (Fellowship Number: 21006).
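
To make the "logic gate" framing concrete, the sketch below models a generic two-input gate read out as absorbance at a monitoring wavelength: the inputs are protonation and light (the inputs named above), and the output is thresholded to 0/1. The truth table, wavelength, and threshold are purely illustrative assumptions, not the measured behavior of any of the derivatives.

```python
# Illustrative two-input molecular logic gate readout (not measured data).
# Inputs: protonation (acid added) and light (irradiation at the isomerization band).
# Output: 1 if absorbance at the monitoring wavelength exceeds a threshold.

def gate_output(absorbance, threshold=0.5):
    return int(absorbance > threshold)

# Hypothetical absorbance responses at 445 nm for the four input states
response = {
    (0, 0): 0.10,   # neutral, dark
    (0, 1): 0.15,   # irradiated only
    (1, 0): 0.20,   # protonated only
    (1, 1): 0.80,   # protonated and irradiated
}

print("H+  light  output")
for (proton, light), a in response.items():
    print(f"{proton:^3} {light:^6} {gate_output(a):^6}")   # this toy table realizes AND
```

Which Boolean function a given derivative realizes depends on how its spectrum responds to each input combination; the DFT spectra computed in the study are what establish that mapping.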

Keywords: absorption spectra, DFT calculations, isomerization, molecular logic gates

Procedia PDF Downloads 19
36 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit

Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili

Abstract:

Metamaterials cross classical physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Dennis Gabor's invention: holography. The major difficulty here, however, is the lack of a suitable recording medium, so some enhancements were essential; hence the 2D version of bulk metamaterials, the so-called metasurface, was introduced. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve effective wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell's equations. In this context, integral methods are emerging as an important tool to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. However, solving this kind of equation becomes more complicated and time-consuming as the structural complexity increases. Here, the equivalent-circuit method offers the most scalable route to developing an integral-method formulation. In fact, to ease the resolution of Maxwell's equations, the method of Generalised Equivalent Circuit was proposed to transfer the problem from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electrical image of the studied structure using the discontinuity-plane paradigm, taking its environment into account: the electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources that store no energy, while the environmental effects are included through an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface's building block consists of a thin gold film, a SiO₂ dielectric spacer, and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by studying the effects of varying graphene's chemical potential on the unit-cell input impedance. It was found that the variation of graphene's complex conductivity allows the phase and amplitude of the reflection coefficient to be controlled at each element of the array. From the results obtained here, we were able to determine that phase modulation is realized by adjusting graphene's complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length, because it offers full 2π reflection phase control.
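
The tunability exploited above rests on graphene's chemical-potential-dependent sheet conductivity. A minimal sketch, assuming the common Drude-like intraband Kubo (Hanson-form) approximation and an illustrative relaxation time of 1 ps; the paper's own conductivity model is not specified here.

```python
import numpy as np

e = 1.602176634e-19      # elementary charge, C
kB = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s

def sigma_intraband(omega, mu_c_eV, tau=1e-12, T=300.0):
    """Intraband (Drude-like) graphene sheet conductivity, Kubo/Hanson form."""
    mu_c = mu_c_eV * e
    pref = -1j * e**2 * kB * T / (np.pi * hbar**2 * (omega - 1j / tau))
    return pref * (mu_c / (kB * T) + 2.0 * np.log(np.exp(-mu_c / (kB * T)) + 1.0))

# Sweep the chemical potential at 1 THz: the tunability the metasurface exploits
omega = 2 * np.pi * 1e12
for mu in (0.1, 0.3, 0.5, 0.9):            # eV
    s = sigma_intraband(omega, mu)
    print(f"mu_c = {mu:.1f} eV -> sigma = {s.real*1e3:.3f} + {s.imag*1e3:.3f}j mS")
```

Because both the real and imaginary parts of the sheet conductivity change with the chemical potential, the input impedance of each graphene patch (and hence the local reflection phase) can be electrically tuned without altering the antenna geometry.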

Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain

Procedia PDF Downloads 175
35 Financial Policies in the Process of Global Crisis: Case Study Kosovo

Authors: Shpetim Rezniqi

Abstract:

The current crisis has swept the world, with particular force in the most developed countries, those which account for most of the world's gross product and enjoy a high standard of living. Even non-experts can describe the consequences of the crisis from the reality they see, but how far this crisis will go is impossible to predict. Even the biggest experts offer conjectures that diverge widely, but they agree on one thing: the devastating effects of this crisis will be more severe than ever before and cannot be predicted. For a long time, the world was dominated by the economic theory of free-market laws, with the belief that the market is the regulator of all economic problems: like river water, the market would flow until it found the best course and arrived at the necessary solution. Hence fewer state barriers to the market, less state intervention, and a market treated as economically self-regulating. The free-market economy became the model of global economic development and progress; it transcended national barriers and became the law of development for the entire world economy. Globalization and global market freedom were the principles of development and international cooperation. International organizations such as the World Bank, together with the economically powerful states, laid down the free-market economy and the elimination of state intervention as principles of development and cooperation. The less state intervention, the more freedom of action: this was the leading international market principle. We live in an era of financial tragedy. Financial markets, and banking in particular, are in a dire state: US stock markets fell by about 40%, which makes this one of the darkest moments since 1920. It is rivaled only by the Wall Street crash of 1929, the technological collapse of 2000, the crisis of 1973 after the Yom Kippur War, when the price of oil quadrupled, and the famous collapse of 1937/38, when Europe stood at the beginning of World War II. In 2000, even though it seemed as if the end of the world was around the corner, the world economy survived almost intact; of course, there were small recessions in the United States, Europe, and Japan. The situation was much more difficult in the crises of the 1930s and the 1970s, yet the world pulled through. The recent financial crisis, by contrast, shows every sign of being much sharper and having greater consequences. The decline in stock prices is more a byproduct of what is really happening: financial markets began a dance of death with the credit crisis, which came as a result of the large increase in real estate prices and household debt. These last two phenomena match very well the excesses of the 1920s, a period during which people spent as if there were no tomorrow. The word recession is no longer far from anyone's lips, and that fact is no longer sudden or surprising. But the more the financial markets melt down, the greater the risk of a troubled economy for years to come. Thus, for example, the banking crisis in Japan proved to be much more severe than initially expected, partly because the assets on which most loans were based, especially land, kept falling in value; land prices in Japan have now been falling for about 15 years (Adri Nurellari, published in the newspaper "Classifieds"). At this moment, it is still difficult to assess to what extent the crisis has affected the economy and what the consequences of the crisis will be. What we know is that many banks will have to restrict the granting of credit for a long time; yet granting credit is a bank's primary function, and this means huge losses.

Keywords: globalisation, finance, crisis, recommendation, bank, credits

Procedia PDF Downloads 389
34 Managing Inter-Organizational Innovation Project: Systematic Review of Literature

Authors: Lamin B Ceesay, Cecilia Rossignoli

Abstract:

Inter-organizational collaboration is a growing phenomenon in both research and practice. Partnerships between organizations enable firms to leverage external resources, experience, and technology that lie with other firms. This collaborative practice is a source of improved business model performance, technological advancement, and increased competitive advantage. However, the competitive intents, and even diverse institutional logics, of firms make inter-firm innovation-based partnerships more complex and their governance more challenging. The purpose of this paper is to present a systematic review of research linking the inter-organizational relationships of firms with their innovation practice, and to specify the project management issues and gaps addressed in previous research. To do this, we employed a systematic review of the literature on inter-organizational innovation using two complementary scholarly databases: ScienceDirect and Web of Science (WoS). Article scoping relies on a combination of keywords based on similar terms used in the literature: (1) inter-organizational relationship, (2) business network, (3) inter-firm project, and (4) innovation network. These searches were conducted in the titles, abstracts, and keywords of conceptual and empirical research papers written in English. Our search covers the period from 2010 to 2019. We applied several exclusion criteria: papers published outside the years under review, papers in a language other than English, papers listed in neither WoS nor ScienceDirect, and papers not closely related to inter-organizational innovation-based partnership were removed. After all relevant search criteria were applied, a final list of 84 papers constitutes the data for this review. Our review revealed an increasing volume of inter-organizational relationship research during the period under review. The descriptive analysis of papers by journal outlet finds that the International Journal of Project Management (IJPM), the Journal of Industrial Marketing, and the Journal of Business Research (JBR), among others, are the leading outlets for research on inter-organizational innovation projects. The review also finds that qualitative and quantitative methods, in that order, are the research methods most commonly adopted by scholars in the field, while literature reviews and conceptual papers are the least common. During the content analysis, we read each selected paper and found that it addresses one of three phenomena in inter-organizational innovation research: (1) project antecedents, (2) project management, and (3) project performance outcomes. We found that these categories are not mutually exclusive but rather interdependent; this categorization also helped us organize the fragmented literature in the field. While a significant share of the literature discusses project management issues, we found comparatively little on project antecedents and performance. We therefore organized the future research agendas proposed in several papers by linking them with these under-researched themes, providing strong potential to advance future research. Finally, our paper reveals that research on inter-organizational innovation projects is generally fragmented, which hinders a better understanding of the field. Thus, this paper contributes to the understanding of the field by organizing and discussing the extant literature to advance the theory and application of inter-organizational relationships.
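As a rough illustration of the screening step described above, the sketch below applies the stated time window, language, database, and topical-relevance filters to a list of bibliographic records. The record fields, helper names, and keyword-based relevance test are hypothetical stand-ins for the authors' actual screening procedure.

```python
# Hypothetical sketch of the screening step; field names and the
# keyword-based relevance test are illustrative, not the authors' tooling.
KEYWORDS = ["inter-organizational relationship", "business network",
            "inter-firm project", "innovation network"]

def is_relevant(record):
    """Apply the stated exclusion criteria to one bibliographic record."""
    text = " ".join([record["title"], record["abstract"],
                     " ".join(record["keywords"])]).lower()
    return (2010 <= record["year"] <= 2019             # review window
            and record["language"] == "English"        # English-only
            and record["source"] in {"WoS", "ScienceDirect"}
            and any(k in text for k in KEYWORDS))      # topical relevance

def screen(records):
    return [r for r in records if is_relevant(r)]

sample = [{"title": "Innovation network governance", "abstract": "...",
           "keywords": ["innovation network"], "year": 2015,
           "language": "English", "source": "WoS"}]
print(len(screen(sample)))  # -> 1
```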

Keywords: inter-organizational relationship, inter-firm collaboration, innovation projects, project management, systematic review

33 Electrochemical Activity of NiCo-GDC Cermet Anode for Solid Oxide Fuel Cells Operated in Methane

Authors: Kamolvara Sirisuksakulchai, Soamwadee Chaianansutcharit, Kazunori Sato

Abstract:

Solid Oxide Fuel Cells (SOFCs) have been considered one of the most efficient large-unit power generators for household and industrial applications. The efficiency of an electrochemical cell depends mainly on the electrochemical reactions in the anode. The development of anode materials has been intensely studied to achieve higher kinetic rates of redox reactions and lower internal resistance. Recent studies have introduced an efficient cermet (ceramic-metallic) material for its ability in fuel oxidation and oxide conduction. This can expand the reactive site, also known as the triple-phase boundary (TPB), thus increasing the overall performance. In this study, the bimetallic catalyst Ni₀.₇₅Co₀.₂₅Oₓ was combined with Gd₀.₁Ce₀.₉O₁.₉₅ (GDC) to be used as a cermet anode (NiCo-GDC) for an anode-supported SOFC. The synthesis of Ni₀.₇₅Co₀.₂₅Oₓ was carried out by ball milling NiO and Co₃O₄ powders in ethanol and calcining at 1000 °C. The Gd₀.₁Ce₀.₉O₁.₉₅ was prepared by a urea co-precipitation method. Precursors of Gd(NO₃)₃·6H₂O and Ce(NO₃)₃·6H₂O were dissolved in distilled water with the addition of urea and subsequently heated. The heated mixture product was filtered and rinsed thoroughly, then dried and calcined at 800 °C and 1500 °C, respectively. The two powders were combined, followed by pelletization and sintering at 1100 °C to form an anode support layer. The electrolyte layer and cathode layer were then fabricated. The electrochemical performance was measured from 800 °C down to 600 °C in H₂, and from 750 °C down to 600 °C in CH₄. The maximum power density at 750 °C in H₂ was 13% higher than in CH₄. The difference in performance was due to higher polarization resistances, as confirmed by the impedance spectra. According to the standard enthalpies, the dissociation energy of the C-H bonds in CH₄ is slightly higher than that of the H-H bond in H₂; the dissociation of CH₄ could therefore be the cause of the additional resistance within the anode material. The results at lower temperatures showed a descending trend of power density consistent with the increased polarization resistance, due to lower conductivity as the temperature decreases. The long-term stability was measured at 750 °C in CH₄, monitored at 12-hour intervals. The maximum power density tended to increase gradually with time while the resistances were maintained. This suggests enhanced stability from charge-transfer activity in doped ceria due to the Ce⁴⁺ ↔ Ce³⁺ transition at low oxygen partial pressure and high temperature. However, the power density started to drop after 60 h, and the cell potential dropped from 0.3249 V to 0.2850 V. These phenomena were confirmed by a shifted impedance spectrum indicating a higher ohmic resistance. Observation by FESEM and EDX mapping suggests degradation due to mass transport of ions in the electrolyte, while the anode microstructure was still maintained. In summary, the electrochemical tests and a 60 h stability test were achieved with the NiCo-GDC cermet anode. Coke deposition was not detected after operation in CH₄, which confirms the superior properties of the bimetallic cermet anode over typical Ni-GDC.
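As a side note on how the reported power densities are typically obtained, the following sketch computes the maximum power density from a polarization (j-V) curve. The curve below is synthetic and does not reproduce the measured data.

```python
import numpy as np

# Illustrative only: maximum power density from a measured polarization
# curve, as is standard in SOFC testing. The sample j-V data are made up.
j = np.linspace(0.0, 1.2, 25)          # current density, A/cm^2
v = 1.05 - 0.45 * j - 0.10 * j**2      # cell voltage, V (toy fit)

p = j * v                              # power density, W/cm^2
i_max = int(np.argmax(p))
print(f"max power density = {p[i_max]:.3f} W/cm^2 at j = {j[i_max]:.2f} A/cm^2")
```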

Keywords: bimetallic catalyst, ceria-based SOFCs, methane oxidation, solid oxide fuel cell

32 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System

Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski

Abstract:

Electrokinetic disintegration is one of the high-voltage electric methods. The design of such systems is exceptionally simple. Biomass flows through a system of pipes with electrodes mounted alongside that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between the electrode surface and the pipe jacket, which acts as the second pole and closes the circuit. The voltage ranges from 10 to 100 kV. The electrodes are supplied with ordinary single-phase mains current (230 V, 50 Hz), which is converted to 24 V direct current in modules serving the individual electrodes; this current feeds the electrodes directly. The installation is completely safe because the generated current does not exceed 250 mA and the conductors are grounded. There is therefore no risk of electric shock to personnel, even in the case of failure or incorrect connection. The low current also means the energy consumption is extremely low (only 35 W per electrode) compared with other methods of disintegration. The DN150 pipes carrying the electrodes are made of acid-proof steel and connected on both sides with 90° elbows terminating in flanges. The available S and U types of pipes make it easy to fit the system into existing installations and rooms, and facilitate space management in new applications. The system of pipes for electrokinetic disintegration may be installed horizontally, vertically, or at an angle, on special stands or directly on the wall of a room. The number of pipes and electrodes is determined by the operating conditions as well as the quantity of substrate, type of biomass, dry matter content, method of disintegration (single-pass or circulatory), mounting site, etc. The most effective approach involves pre-treatment of the substrate, which may be pumped through the disintegration system on the way to the fermentation tank or recirculated in a buffered intermediate tank (substrate mixing tank). Destruction of the biomass structure by electrokinetic disintegration shortens the substrate retention time in the tank and accelerates biogas production. A significant intensification of the fermentation process was observed in systems operating at technical scale, with the greatest increase in biogas production reaching 18%. A secondary effect, highly significant for the energy balance, is a tangible decrease in the energy input of tank agitators. It is due to the reduced viscosity of the biomass after disintegration and may yield energy savings reaching 20-30% of the previously noted consumption. Other observed phenomena include a reduction in the layer of surface scum, a reduced tendency of the substrate to foam, and a successive decrease in the quantity of bottom sludge banks. Considering the above, the electrokinetic disintegration system is a very interesting and valuable solution within the range of specialist equipment for the processing of plant biomass, including Virginia fanpetals, before methane fermentation.
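To put the quoted electrode power in context, the following back-of-the-envelope sketch computes the specific electric energy input of the disintegration stage. The electrode count and substrate throughput are assumed values for illustration, not figures from the installations described here.

```python
# Energy bookkeeping from the figure quoted above (35 W per electrode).
# Electrode count and daily throughput are hypothetical; substitute
# plant-specific values.
P_ELECTRODE_W = 35.0
n_electrodes = 10                 # assumed installation size
throughput_t_per_day = 50.0       # assumed substrate flow, t/day

power_kw = P_ELECTRODE_W * n_electrodes / 1000.0
energy_kwh_per_day = power_kw * 24.0
specific_kwh_per_t = energy_kwh_per_day / throughput_t_per_day
print(f"{power_kw:.2f} kW total, {specific_kwh_per_t:.2f} kWh per tonne of substrate")
```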

Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals

31 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Its Degradation Products

Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet

Abstract:

All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and a hydrocarbon solvent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution) are separated from fission products and minor actinides. During normal extraction operation, uranium is extracted into the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture, called red oil, when it comes into contact with nitric acid. The formation of this unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives, and heavy metal nitrate complexes on the other. The decomposition of red oil can lead to violent explosive thermal runaway. These hazards are at the origin of several accidents, such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions involving TBP and its degradation products, and calls for a better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates thermal and kinetic functions related to the degradation of uranyl nitrates in the organic and aqueous phases, but not yet of the n-butyl phosphates. To include them in the modeling scheme, there is an urgent need to obtain the thermodynamic and kinetic functions governing the degradation processes in the liquid phase. However, little is known about the thermodynamic properties, such as standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂ complexes. In this work, we propose to estimate these thermodynamic properties with quantum chemical methods (QM). Thus, in the first part of our project, we focused on the mono-, di-, and tri-butyl phosphates. Quantum chemical calculations have been performed to study several reactions leading to the formation of mono- (H₂MBP) and di-butyl phosphate (HDBP), and TBP in the gas and liquid phases. In the gas phase, the structures of all species were optimized using the B3LYP density functional with triple-ζ def2-TZVP basis sets on all atoms. The corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 MPa. Accurate single-point energies were calculated using the efficient localized LCCSD(T) method extrapolated to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H₂MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of the project, we investigated the fundamental properties of the three organic species that contribute most to the thermal runaway: UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂, using the same quantum chemical methods as for TBP and its derivatives, in both the gas and the liquid phase. We will discuss the structures and thermodynamic properties of all these species.
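For readers unfamiliar with how formation enthalpies are extracted from such calculations, the sketch below shows the generic Hess-cycle bookkeeping: computed absolute enthalpies (electronic energy plus thermal correction) give a reaction enthalpy, which is then anchored to experimental formation enthalpies of reference species. All numerical values are placeholders, not results from this work.

```python
# Sketch of the Hess-cycle bookkeeping used to extract a standard
# formation enthalpy from quantum-chemical energies. All numbers are
# placeholders, not the paper's results.
HARTREE_TO_KJ = 2625.4996  # kJ/mol per hartree

def h_abs(e_elec_hartree, h_corr_hartree):
    """Absolute enthalpy at 298.15 K = E(elec) + thermal correction."""
    return (e_elec_hartree + h_corr_hartree) * HARTREE_TO_KJ

# Working reaction (illustrative): A + B -> TBP, with known
# experimental formation enthalpies for the reference species A and B.
h_products = h_abs(-1234.5678, 0.4321)                        # placeholder TBP
h_reactants = h_abs(-1000.1111, 0.3000) + h_abs(-234.4000, 0.1200)
dh_rxn = h_products - h_reactants                             # kJ/mol

dHf_A_exp, dHf_B_exp = -250.0, -120.0                         # placeholder expt. values
dHf_TBP = dh_rxn + dHf_A_exp + dHf_B_exp
print(f"predicted ΔfH°(TBP) ≈ {dHf_TBP:.1f} kJ/mol (±15 kJ/mol per the text)")
```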

Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis

30 Experimental Study of the Behavior of Elongated Non-spherical Particles in Wall-Bounded Turbulent Flows

Authors: Manuel Alejandro Taborda Ceballos, Martin Sommerfeld

Abstract:

Transport phenomena and the dispersion of non-spherical particles in turbulent flows are found throughout industrial applications and processes. Powder handling, pollution control, pneumatic transport, and particle separation are just a few examples in which the particles encountered are not only spherical. These multiphase flows are wall-bounded and mostly highly turbulent, and the particles found in them are rarely spherical; they may have various shapes (e.g., fibers and rods). Although research on the behavior of regular non-spherical particles in turbulent flows has been carried out for many years, it is still necessary to refine models, especially near walls, where fiber-wall interaction completely changes the particle behavior. Imaging-based experimental studies of dispersed particle-laden flows have been applied for decades for detailed experimental analysis. These techniques have the advantage of providing field information in two or three dimensions, but a lower temporal resolution compared with point-wise techniques such as PDA (phase-Doppler anemometry) and its derivatives. The imaging techniques applied to dispersed two-phase flows are extensions of classical PIV (particle image velocimetry) and PTV (particle tracking velocimetry), with the main emphasis on the simultaneous measurement of the velocity fields of both phases. Such data should likewise provide adequate information for validating the proposed models. Available experimental studies on the behavior of non-spherical particles are uncommon and mostly based on planar light-sheet measurements. For elongated non-spherical particles in particular, however, three-dimensional measurements are needed to fully describe their motion and to provide sufficient information for the validation of numerical computations. To provide detailed experimental results allowing the validation of numerical calculations of non-spherical particle dispersion in turbulent flows, a test facility was built around a horizontal closed water channel. Into this horizontal main flow, a small cross-jet laden with fiber-like particles, driven solely by gravity, was injected. The dispersion of the fibers was measured by imaging techniques based on an LED array for backlighting and high-speed cameras. For obtaining the fluid velocity fields, almost neutrally buoyant tracer particles were used. The discrimination between tracers and fibers was based on image size, which was also the basis for determining fiber orientation with respect to the inertial coordinate system, as illustrated in the sketch below. The synchronous measurement of fluid velocity and fiber properties also allows the collection of statistics of fiber orientation, the velocity fields of tracers and fibers, the angular velocity of the fibers, and the orientation between the fiber and the instantaneous relative velocity. Consequently, an experimental study of the behavior of elongated non-spherical particles in wall-bounded turbulent flows was achieved, and a comprehensive analysis was developed, especially for the near-wall region, where hydrodynamic wall-interaction effects (e.g., collision or lubrication) and abrupt changes in particle rotational velocity occur. This allows the behavior of non-spherical particles to be predicted numerically afterwards within the frame of the Euler/Lagrange approach, where the particles are treated as "point particles".
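A minimal sketch of the size-based discrimination and orientation estimate follows: segmented particles above an assumed pixel-count threshold are treated as fibers, and the in-plane orientation is taken from the principal axis of the pixel cloud. The threshold and data are illustrative, not the study's calibrated values.

```python
import numpy as np

# Sketch of the image-based discrimination described above: particles
# larger than a size threshold are treated as fibers, and fiber
# orientation is taken from the principal axis of the pixel cloud.
SIZE_THRESHOLD_PX = 40  # assumed: tracer images are smaller than this

def classify_and_orient(pixel_coords):
    """pixel_coords: (N, 2) array of one segmented particle's pixels."""
    if len(pixel_coords) < SIZE_THRESHOLD_PX:
        return "tracer", None
    centered = pixel_coords - pixel_coords.mean(axis=0)
    cov = np.cov(centered.T)                       # 2x2 second-moment matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]         # major-axis direction
    angle = np.degrees(np.arctan2(major[1], major[0]))
    return "fiber", angle

blob = np.array([[x, 0.3 * x + np.random.randn()] for x in range(60)])
print(classify_and_orient(blob))                   # -> ("fiber", ~17 deg)
```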

Keywords: crossflow, non-spherical particles, particle tracking velocimetry, PIV

29 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method

Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez

Abstract:

Cryogenic fuels such as liquid hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible; hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modeling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between the control volumes. These parameters are linked to the system evolution via empirical relations derived from the different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step; a minimal sketch is given below. Thanks to this data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of the different transport phenomena between the laboratory model and the full-size prototype across the different operating regimes.
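The sketch below illustrates this model-based inverse approach: a toy two-volume lumped energy balance is integrated with scipy, and its heat-transfer closure coefficients are identified by least-squares fitting to (here synthetic) temperature histories. The equations and numbers are illustrative and much simpler than the three-volume model with mass transfer described above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy two-volume lumped model (ullage gas Tg, liquid Tl) with two
# heat-transfer closure coefficients, identified by an inverse method.
def rhs(t, y, h_gl, h_wg):
    Tg, Tl = y            # lumped temperatures, K (illustrative values)
    Tw = 90.0             # assumed constant wall temperature, K
    dTg = h_wg * (Tw - Tg) - h_gl * (Tg - Tl)
    dTl = h_gl * (Tg - Tl)
    return [dTg, dTl]

t_data = np.linspace(0.0, 100.0, 21)
# Synthetic "measurements" generated with h_gl = 0.05, h_wg = 0.02:
truth = solve_ivp(rhs, (0, 100), [30.0, 20.0], args=(0.05, 0.02),
                  t_eval=t_data).y

def residuals(p):
    sim = solve_ivp(rhs, (0, 100), [30.0, 20.0], args=tuple(p),
                    t_eval=t_data).y
    return (sim - truth).ravel()

fit = least_squares(residuals, x0=[0.01, 0.01], bounds=(0, 1))
print("identified closure coefficients:", fit.x)   # ~ [0.05, 0.02]
```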

Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics

28 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods

Authors: Dario Milani, Guido Morgenthal

Abstract:

Fluid dynamic computation of wind-caused forces on bluff bodies, e.g., light flexible civil structures or airplane wings at high incidence approaching the ground, is one of the major criteria governing their design. Such structures may exhibit a significant dynamic response, requiring the use of small-scale devices, such as guide-vanes in bridge design, to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One relatively successful solution method for CFD simulation in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, in which the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion; compact discretization, since the vorticity is strongly localized; implicit treatment of the free-space boundary conditions typical of this class of FSI problems; and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, any solution method must balance computational cost against the achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problems of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails, or fairings. For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization may become prohibitively expensive even for moderate numbers of particles. This cost can be reduced either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution without substantially increasing the global computational cost by computing a correction to the particle-particle interaction in regions of interest. In this paper, different strategies are presented to extend the conventional VPM so as to reduce the computational cost while resolving the required details of the flow. The methods include temporal substepping, to increase the accuracy of particle convection in certain regions, and dynamic re-discretization of the particle map, to control the global and local numbers of particles. Finally, these methods are applied to a test case, and the improvements in efficiency and accuracy of the proposed extensions are presented, together with their relevant applications.
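The O(Np²) particle-particle interaction at the heart of the method can be sketched as follows for the 2D case: each particle's velocity is the regularized Biot-Savart sum over all other particles' circulations, after which the particles are convected. The smoothing radius, particle set, and single global time step are illustrative; the substepping discussed above would subdivide the step locally where higher temporal resolution is needed.

```python
import numpy as np

# Sketch of the O(Np^2) interaction in a 2D vortex particle method:
# velocity induced at each particle by all particles' circulation,
# using a regularized Biot-Savart kernel. Setup values are illustrative.
def induced_velocity(pos, gamma, eps=0.05):
    """pos: (N, 2) positions; gamma: (N,) circulations."""
    dx = pos[:, None, :] - pos[None, :, :]      # pairwise separations
    r2 = (dx ** 2).sum(-1) + eps ** 2           # regularized distance^2
    # u = Gamma/(2*pi) * (-dy, dx) / r^2, summed over source particles
    k = gamma[None, :] / (2.0 * np.pi * r2)
    u = (-dx[:, :, 1] * k).sum(axis=1)
    v = (dx[:, :, 0] * k).sum(axis=1)
    return np.stack([u, v], axis=1)

rng = np.random.default_rng(0)
pos = rng.normal(size=(200, 2))
gamma = rng.normal(scale=0.01, size=200)
# One global convection step; local substepping would refine dt where needed.
dt = 0.01
pos = pos + dt * induced_velocity(pos, gamma)
```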

Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method

27 Urban Heat Islands Analysis of Matera, Italy Based on the Change of Land Cover Using Satellite Landsat Images from 2000 to 2017

Authors: Giuseppina Anna Giorgio, Angela Lorusso, Maria Ragosta, Vito Telesca

Abstract:

Climate change is a major public health threat due to the effects of extreme weather events on human health and on quality of life in general. In this context, mean temperatures are increasing, and extreme temperatures in particular, with heat waves becoming more frequent, more intense, and longer lasting. In many cities, extreme heat waves have drastically increased, giving rise to the so-called Urban Heat Island (UHI) phenomenon. In an urban centre, maximum temperatures may be up to 10 °C warmer, due to different local atmospheric conditions. UHI occurs in metropolitan areas as a function of the population size and density of a city, and consists of a significant temperature difference compared with the rural/suburban surroundings. Increasing industrialization and urbanization have intensified this phenomenon, and it has recently also been detected in small cities. Weather conditions and land use are among the key parameters in the formation of UHI. In particular, the surface urban heat island is directly related to temperature, land surface type, and surface modifications. The present study concerns a UHI analysis of the city of Matera (Italy) based on the analysis of temperature and of changes in land use and land cover, using Corine Land Cover maps and satellite Landsat images. Matera, located in Southern Italy, has a typical Mediterranean climate with mild winters and hot, humid summers; it has been awarded the international title of 2019 European Capital of Culture and represents a significant example of vernacular architecture. The structure of the city is articulated by a vertical succession of dug layers, sometimes excavated or partly excavated and partly built, according to the original shape and height of the calcarenitic slope. In this study, two meteorological stations were selected: MTA (MaTera Alsia, in the industrial zone) and MTCP (MaTera Civil Protection, a suburban area located in a green zone). In order to evaluate the increase in temperatures (in terms of UHI occurrences) over time, and to evaluate the effect of land use on weather conditions, the climate variability of temperatures at both stations was explored. Results show that the UHI phenomenon is growing in Matera, with an increase in maximum temperature values at the local scale. Subsequently, a spatial analysis was conducted using Landsat satellite images. Four summer acquisition dates were selected (27/08/2000, 27/07/2006, 11/07/2012, 02/08/2017): Landsat 7 ETM+ for 2000, 2006, and 2012, and Landsat 8 OLI/TIRS for 2017. To estimate the LST, the Mono-Window Algorithm was applied. The increasing trend of LST values at the spatial scale was thereby verified, in agreement with the results obtained at the local scale. Finally, the analysis of the land use maps over the years, together with the LST and the measured maximum temperatures, shows that the development of the industrialized area produces a corresponding increase in temperatures and consequently a growth in UHI.
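As an illustration of the LST retrieval step, the sketch below converts thermal-band radiance to brightness temperature using the published Landsat 7 ETM+ band 6 calibration constants, then applies a common simplified emissivity correction. This stands in for, and is simpler than, the full Mono-Window Algorithm used in the study; the radiances and the emissivity are assumed values.

```python
import numpy as np

# Simplified LST retrieval: radiance -> brightness temperature (inverse
# Planck with sensor calibration constants) -> emissivity-corrected LST.
# Not the full Mono-Window Algorithm; emissivity and radiances assumed.
K1, K2 = 666.09, 1282.71      # ETM+ band 6 constants: W/(m^2 sr um), K
WAVELENGTH_M = 11.5e-6        # effective thermal wavelength
RHO = 1.438e-2                # h*c/k_B, m K

def lst_from_radiance(L, emissivity=0.95):
    Tb = K2 / np.log(K1 / L + 1.0)     # brightness temperature, K
    return Tb / (1.0 + (WAVELENGTH_M * Tb / RHO) * np.log(emissivity))

radiance = np.array([9.5, 10.2, 11.0])       # toy TOA radiances
print(lst_from_radiance(radiance) - 273.15)  # LST in °C
```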

Keywords: climate variability, land surface temperature, LANDSAT images, urban heat island

26 Superhydrophobic Materials: A Promising Way to Enhance Resilience of Electric System

Authors: M. Balordi, G. Santucci de Magistris, F. Pini, P. Marcacci

Abstract:

The increase in extreme meteorological events represents one of the most important causes of damage and blackouts in the electric system. In particular, icing on ground-wires and overhead lines, due to snowstorms or harsh winter conditions, very often gives rise to the collapse of cables and towers, in both cold and warm climates. On the other hand, high concentrations of contaminants in the air, due to natural and/or anthropic causes, are reflected in high levels of pollutants layered on glass and ceramic insulators, causing frequent and unpredictable flashover events. Overhead line and insulator failures lead to blackouts, dangerous and expensive maintenance, and serious inefficiencies in the distribution service. Imparting superhydrophobic (SHP) properties to conductors, ground-wires, and insulators is one way to face all these problems. Indeed, in some cases an SHP surface can delay the ice nucleation time and decrease the ice nucleation temperature, preventing ice formation. Besides, thanks to the low surface energy, the adhesion force between ice and a superhydrophobic material is low, and the ice can easily be detached from the surface. Moreover, it is well known that superhydrophobic surfaces can have self-cleaning properties: these hinder the deposition of pollution and decrease the probability of flashover phenomena. This paper presents three studies imparting superhydrophobicity to aluminum, zinc, and glass specimens, which represent the main constituent materials of conductors, ground-wires, and insulators, respectively. The route to impart superhydrophobicity to the metallic surfaces can be summarized as a three-step process: 1) sandblasting, 2) chemical-hydrothermal treatment, and 3) coating deposition. The first step is required to create a micro-roughness. In the chemical-hydrothermal treatment, a nano-scale metallic oxide (Al or Zn) is grown, which together with the sandblasting produces a hierarchical micro-nano structure. Depositing an alkylated or fluorinated siloxane coating then decreases the surface energy, giving rise to superhydrophobic surfaces. To functionalize the glass, different superhydrophobic powders obtained by sol-gel synthesis were prepared; the specimens were covered with a commercial primer, and the powders were deposited on them. All the resulting metallic and glass surfaces showed pronounced superhydrophobic behavior, with very high water contact angles (>150°) and very low roll-off angles (<5°). The three optimized processes are fast, cheap, and safe, and can easily be replicated at industrial scale. The anti-icing and self-cleaning properties of the surfaces were assessed with several indoor lab tests, which evidenced remarkable anti-icing properties and self-cleaning behavior with respect to the bare materials. Finally, to evaluate the anti-snow properties of the samples, some SHP specimens were exposed to real snowfall events in the RSE outdoor test facility located in Vinadio, in the western Alps: the coated samples delayed the formation of snow sleeves and facilitated the detachment of the snow. The good results of both the indoor and outdoor tests make these materials promising for further development in large-scale applications.

Keywords: superhydrophobic coatings, anti-icing, self-cleaning, anti-snow, overhead lines

25 Predictive Analytics for Theory Building

Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim

Abstract:

Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) for a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can play a vital role in explanatory studies, i.e., in scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how predictive analytics can support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where the items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge numbers of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training observations were loaded into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, associated, for example, with sociodemographics, habits, and activities. Some, such as cancer examination, house type, and vaccination, are intentionally included to gain insights on variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling, and the rules are then validated with an external testing dataset of 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation; by contrast, a set of rules (many estimated equations, from a statistical perspective) may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the generated hypotheses are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a subset of the observations, such as bootstrap resampling with an appropriate sample size.
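A minimal sketch of the co-occurrence construction is given below: items within each row are fully connected, and pairs are ranked by a simple "surprise" score comparing observed co-occurrence with the count expected under independence. The toy rows and the exact scoring formula are illustrative, not the platform's implementation.

```python
from collections import Counter
from itertools import combinations

# Sketch of a co-occurrence graph: items in each row (basket) are fully
# connected; pair "surprise" compares observed co-occurrence with the
# frequency expected under independence. Data are toy examples.
rows = [{"obesity", "hypertension", "smoking"},
        {"obesity", "hypertension"},
        {"smoking", "no_exercise"},
        {"obesity", "hypertension", "no_exercise"}]

n = len(rows)
item_freq = Counter(i for row in rows for i in row)
pair_freq = Counter(frozenset(p) for row in rows
                    for p in combinations(sorted(row), 2))

def surprise(pair):
    a, b = tuple(pair)
    expected = item_freq[a] * item_freq[b] / n   # independence baseline
    return pair_freq[pair] / expected

for pair in sorted(pair_freq, key=surprise, reverse=True):
    print(set(pair), pair_freq[pair], round(surprise(pair), 2))
```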

Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building

24 Partial Discharge Characteristics of Free-Moving Particles in HVDC-GIS

Authors: Philipp Wenger, Michael Beltle, Stefan Tenbohlen, Uwe Riechert

Abstract:

The integration of renewable energy introduces new challenges to the transmission grid, as power generation is located far from load centers. The associated long-range power transmission increases the demand for high voltage direct current (HVDC) transmission lines and DC distribution grids. HVDC gas-insulated switchgears (GIS) are considered a key technology, combining DC technology with the long operating experience of AC-GIS. To ensure the long-term reliability of such systems, insulation defects must be detected at an early stage. Operational experience with AC systems has provided evidence that most failures attributable to breakdowns of the insulation system can be detected and identified beforehand via partial discharge (PD) measurements. In AC systems, the identification of defects relies on the phase-resolved partial discharge pattern (PRPD). Since there is no phase information in DC systems, this method cannot be transferred to DC PD diagnostics. Furthermore, the behaviour of, e.g., free-moving particles differs significantly at DC: under the influence of a constant electric field, charge carriers can accumulate on particle surfaces. As a result, a particle can lift off, oscillate between the inner conductor and the enclosure, or rapidly bounce at just one electrode, which is known as firefly motion. Depending on the motion and the relative position of the particle to the electrodes, broadband electromagnetic PD pulses are emitted, which can be recorded by ultra-high frequency (UHF) measuring methods. PDs are often accompanied by light emission at the particle's tip, which enables optical detection. This contribution investigates the PD characteristics of free-moving metallic particles in a commercially available 300 kV SF₆-insulated HVDC-GIS. The influences of various defect parameters on the particle motion and the PD characteristics are evaluated experimentally. Several particle geometries, such as cylinders, lamellae, spirals, and spheres with different lengths, diameters, and weights, are investigated. The applied DC voltage is increased stepwise from the inception voltage up to UDC = ±400 kV. Different physical detection methods are used simultaneously in a time-synchronized setup. Firstly, the electromagnetic waves emitted by the particle are recorded by a UHF measuring system. Secondly, a photomultiplier tube (PMT) detects light emission with wavelengths in the range λ = 185…870 nm. Thirdly, a high-speed camera (HSC) tracks the particle's motion trajectory with high accuracy. Furthermore, an electrically insulated electrode is attached to the grounded enclosure and connected to a current shunt in order to detect low-frequency ion currents: the shunt measuring system's sensitivity is in the range of 10 nA at a measuring bandwidth of bw = DC…1 MHz. Currents of charge carriers generated at the particle's tip migrate through the gas gap to the electrode and can be recorded by the current shunt. All recorded PD signals are analyzed in order to identify characteristic properties of the different particles. This includes, e.g., the repetition rates and amplitudes of successive pulses, characteristic frequency ranges, and the detected signal energy of single PD pulses. In conclusion, an advanced understanding of the physical phenomena underlying particle motion in a constant electric field can be derived.
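A simplified sketch of the pulse-feature extraction is shown below: pulses are detected as threshold crossings in a sampled record, and amplitude, per-pulse energy, and repetition rate are computed. The synthetic signal, sampling rate, and threshold rule are assumptions for illustration, not the measuring systems' actual processing.

```python
import numpy as np

# Sketch of PD pulse-feature extraction: detect pulses above a noise
# threshold, then compute amplitude, per-pulse energy, and repetition
# rate. Signal, sampling rate, and threshold are illustrative.
FS = 10e6                                  # sample rate, Hz (assumed)
t = np.arange(int(0.01 * FS)) / FS         # 10 ms record
sig = 0.01 * np.random.randn(t.size)       # background noise
for t0 in (0.002, 0.005, 0.008):           # inject three toy pulses
    i0 = int(t0 * FS)
    sig[i0:i0 + 50] += np.exp(-np.arange(50) / 10.0)

thresh = 5 * np.std(sig)
above = sig > thresh
starts = np.flatnonzero(above & ~np.roll(above, 1))   # rising edges

amplitudes = [sig[s:s + 50].max() for s in starts]
energies = [np.sum(sig[s:s + 50] ** 2) / FS for s in starts]
rep_rate = len(starts) / t[-1]             # pulses per second
print(len(starts), "pulses,", f"{rep_rate:.0f} 1/s")
```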

Keywords: current shunt, free moving particles, high-speed imaging, HVDC-GIS, UHF

23 Challenges, Responses and Governance in the Conservation of Forest and Wildlife: The Case of the Aravali Ranges, Delhi NCR

Authors: Shashi Mehta, Krishan Kumar Yadav

Abstract:

This paper presents an overview of issues pertaining to the conservation of the natural environment and the factors affecting the coexistence of forests, wildlife, and people. As forests and wildlife together create the basis for economic, cultural, and recreational spaces for overall well-being and life-support systems, the adverse impacts of increasing consumerism are only too evident. The IUCN predicts the extinction of 41% of all amphibians and 26% of mammals. The major causes behind this threatened extinction are deforestation, dysfunctional governance, climate change, pollution, and cataclysmic phenomena. Thus, the intrinsic relationship between natural resources and wildlife needs to be understood in its totality, not only for the ecosystem but for humanity at large. To demonstrate this, forest areas in the Aravalis, the oldest mountain ranges of Asia, falling in the States of Haryana and Rajasthan, have been taken up for study. The Aravalis are characterized by extreme climatic conditions and dry deciduous forest cover on intermittent scattered hills. Extending across the districts of Gurgaon, Faridabad, Mewat, Mahendergarh, Rewari, and Bhiwani, these ranges, with village common land on which the entire economy of the rural settlements depends, fall in the state of Haryana. The Aravali ranges near the town of Alwar in the state of Rajasthan, with their diverse fauna and flora, also form part of the NCR. Once rich in biodiversity, the Aravalis played an important role in the sustainable coexistence of forests and people. However, with the advent of industrialization and unregulated urbanization, these ranges are facing deforestation, degradation, and denudation. The causes are twofold: the need of the poor and the greed of the rich. People living in and around the Aravalis are mainly poor and eke out a living by rearing livestock. With shrinking commons, they depend entirely upon these hills for grazing, fuel, NTFP, medicinal plants, and even drinking water. At the same time, the pressure of indiscriminate urbanization and industrialization in these hills fulfils the demands of the rich and powerful, in collusion with Government agencies. The functionaries of the federal and State Governments play a largely negative role, supporting commercial interests. Additionally, the planting of non-indigenous species such as Prosopis juliflora across the ranges has resulted in the extinction of almost all the indigenous species. The wildlife in the area is also threatened by the lack of safe corridors and suitable habitat. In this scenario, the participatory role of different stakeholders such as NGOs, civil society, and the local community in the management of forests becomes crucial, not only for conservation but also for the economic wellbeing of the local people. Exclusion of villagers from protection and conservation efforts, be it designing, implementing, or monitoring and evaluating, could prove counterproductive. A strategy needs to be evolved wherein Government agencies are made responsible by putting relevant legislation in place, along with nurturing and promoting the traditional wisdom and ethics of local communities in the protection and conservation of forests and wildlife in the Aravali ranges of the States of Haryana and Rajasthan in the National Capital Region, Delhi.

Keywords: deforestation, ecosystem, governance, urbanization

22 Enhancing Strategic Counter-Terrorism: Understanding How Familial Leadership Influences the Resilience of Terrorist and Insurgent Organizations in Asia

Authors: Andrew D. Henshaw

Abstract:

The research examines the influence of familial and kinship-based leadership on the resilience of politically violent organizations. Organizations of this type frequently fight in the same conflicts, though they are called 'terrorist' or 'insurgent' depending on the political foci of the time, and thus different approaches are used to combat them. The research considers them correlated phenomena with significant overlap and identifies strengths and vulnerabilities in resilience processes. It employs paired case studies to examine resilience in organizations under significant external pressure, measuring three variables: (1) organizational robustness in terms of leadership and governance; (2) bounce-back response efficiency to external pressures and adaptation to endogenous and exogenous shock; and (3) perpetuity of operational and attack capability, and political legitimacy. The research makes three hypotheses. First, familial/kinship leadership groups have a significant effect on organizational resilience in terms of informal operations. Second, non-familial/kinship organizations suffer heightened security transaction costs and social economics surrounding recruitment, retention, and replacement. Third, resilience in non-familial organizations likely stems from critical external supports such as state sponsorship or powerful patrons, rather than from organic resilience dynamics. The case studies pair familial organizations with non-familial organizations. Set 1: the Haqqani Network (HQN), paired with Lashkar-e-Toiba (LeT). Set 2: Jemaah Islamiyah (JI), paired with the Abu Sayyaf Group (ASG). Case studies were selected based on three requirements: contrasting governance types, exposure to significant external pressures, and geographical similarity. The case study sets were examined over 24 months following periods of significantly heightened operational activity. This enabled empirical measurement of the variables as substantial external pressures came into force. The rationale for the research is clear: nearly all organizations have some nexus of familial interconnectedness. Examining familial leadership networks does not further our understanding of how terrorism and insurgency originate; the central focus of the research is, rather, how they persist. The sparse attention to this in the existing literature presents an unexplored yet important area of security studies. Furthermore, social capital in familial systems is largely automatic and organic, given at birth or through kinship. It reduces the cost of security vetting for recruits, fighters, and supporters, which lowers liabilities and entry costs while raising organizational efficiency and exit costs. A better understanding of these processes is needed to turn strengths into weaknesses. The outcomes and implications of the research have critical relevance to future operational policy development. Increased clarity about internal trust dynamics, social capital, and power flows is essential to fracturing and manipulating the kinship nexus. This is highly valuable to external pressure mechanisms such as counter-terrorism, counterinsurgency, and strategic intelligence methods that seek to penetrate, manipulate, degrade, or destroy the resilience of politically violent organizations.

Keywords: counterinsurgency (COIN), counter-terrorism, familial influence, insurgency, intelligence, kinship, resilience, terrorism

21 Widely Diversified Macroeconomies in the Super-Long Run Casts a Doubt on Path-Independent Equilibrium Growth Model

Authors: Ichiro Takahashi

Abstract:

One of the major assumptions of mainstream macroeconomics is the path independence of the capital stock. This paper challenges that assumption by employing an agent-based approach. The simulation results showed the existence of multiple "quasi-steady state" equilibria of the capital stock, which casts serious doubt on the validity of the assumption. The finding gives a better understanding of many phenomena that involve hysteresis, including the causes of poverty. The "market-clearing view" has been widely shared among the major schools of macroeconomics. They hold that the capital stock, the labor force, and technology determine the "full-employment" equilibrium growth path, and that demand/supply shocks can move the economy away from that path only temporarily: the dichotomy between short-run business cycles and the long-run equilibrium path. This view implicitly assumes the long-run capital stock to be independent of how the economy has evolved. In contrast, "Old Keynesians" have recognized fluctuations in output as arising largely from fluctuations in real aggregate demand. It is then an interesting question whether an agent-based macroeconomic model, which is known to exhibit path dependence, can generate multiple full-employment equilibrium trajectories of the capital stock in the super-long run. If the answer is yes, the equilibrium level of the capital stock, an important supply-side factor, would no longer be independent of the business cycle phenomenon. This paper attempts to answer the question using the agent-based macroeconomic model developed by Takahashi and Okada (2010). The model serves this purpose well because it has neither population growth nor technological progress. The objective of the paper is twofold: (1) to explore the causes of the long-term business cycle, and (2) to examine the super-long-run behavior of the capital stock of full-employment economies. (1) The simulated behaviors of the key macroeconomic variables, such as output, employment, and real wages, showed widely diversified macroeconomies. They were often remarkably stable but exhibited both short-term and long-term fluctuations. The long-term fluctuations occur through two adjustments: the quantity adjustment and the relative cost adjustment of the capital stock. The first is obvious and assumed by many business cycle theorists. For the second, reduced aggregate demand lowers prices, which raises real wages, thereby decreasing the relative cost of capital with respect to labor. (2) The long-term business cycles/fluctuations were synthesized with the hysteresis of real wages, interest rates, and investment. In particular, a sequence of simulation runs with a super-long simulation period generated a wide range of perfectly stable paths, many of which achieved full employment: all the macroeconomic trajectories, including capital stock, output, and employment, were perfectly horizontal over 100,000 periods. Moreover, the full-employment level of the capital stock was influenced by the history of unemployment, which was itself path-dependent. Thus, an experience of severe unemployment in the past kept the real wage low, which discouraged relatively costly investment in capital stock, while a history of good performance sometimes brought about a low capital stock due to a high interest rate that was consistent with strong investment.
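A deliberately minimal toy model, not the Takahashi-Okada model, can illustrate the mechanism: if real wages ratchet downward during a slump and capital investment tracks the wage-dependent profitability of capital, the long-run capital stock depends on the economy's history even under identical parameters.

```python
# Toy illustration of path dependence (not the Takahashi-Okada model):
# wages drift down during a slump and do not recover afterwards, so two
# economies with identical parameters but different histories settle on
# different quasi-steady capital stocks.
def simulate(slump, T=2000):
    K, wage = 100.0, 1.0
    for t in range(T):
        if t in slump:               # demand slump -> unemployment
            wage *= 0.999            # wages drift down under unemployment
        target_K = 100.0 * wage      # low wages discourage costly capital
        K += 0.05 * (target_K - K)   # sluggish capital adjustment
    return K

print(simulate(slump=range(0, 0)))       # no slump     -> K stays ~100
print(simulate(slump=range(200, 700)))   # past slump   -> permanently lower K
```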

Keywords: agent-based macroeconomic model, business cycle, hysteresis, stability

20 Single Crystal Growth by the Floating-Zone Method and Properties of Spin Ladders: Quantum Magnets

Authors: Rabindranath Bag, Surjeet Singh

Abstract:

Materials in which the electrons are strongly correlated provide some of the most challenging and exciting problems in condensed matter physics today. After the discovery of high-critical-temperature superconductivity in layered, or two-dimensional, copper oxides, many physicists turned their attention to cuprates, leading to an upsurge of interest in the synthesis and physical properties of copper-oxide based materials. The quest to understand the superconducting mechanism in high-temperature cuprates drew physicists' attention to somewhat simpler compounds consisting of spin chains, i.e., one-dimensional lattices of coupled spins. Low-dimensional quantum magnets are of huge contemporary interest in the basic sciences as well as in emerging technologies such as quantum computing, quantum information theory, and heat management in microelectronic devices. Spin ladders are an example of quasi-one-dimensional quantum magnets and provide a bridge between one- and two-dimensional materials. One example of a quasi-one-dimensional spin-ladder compound is Sr₁₄Cu₂₄O₄₁, which exhibits many interesting and exciting physical phenomena of low-dimensional systems. Very recently, the ladder compound Sr₁₄Cu₂₄O₄₁ was shown to exhibit long-distance quantum entanglement, crucial to quantum information theory. It is also well known that hole compensation in this material results in very high (metal-like) anisotropic thermal conductivity at room temperature. These observations suggest that Sr₁₄Cu₂₄O₄₁ is a potential multifunctional material that invites further detailed investigation. Investigating these properties requires large, high-quality single crystals, but these systems show incongruent melting behavior, which makes growing such crystals difficult. Hence, we use the TSFZ (Travelling Solvent Floating Zone) method to grow high-quality single crystals of these low-dimensional magnets. The compound also has a unique crystal structure (alternating stacks of planes containing edge-sharing CuO₂ chains and planes containing two-leg Cu₂O₃ ladders, with intermediate Sr layers along the b-axis), which is moreover incommensurate in nature. It exhibits abundant physical phenomena such as spin dimerization, crystallization of charge holes, and charge density waves. Most research so far has focused on introducing defects on the A-site (Sr). Apart from A-site (Sr) doping, there are only a few studies discussing B-site (Cu) doping of polycrystalline Sr₁₄Cu₂₄O₄₁, the reason being the existence of two possible doping sites for Cu (the CuO₂ chain and the Cu₂O₃ ladder). Therefore, in the present work, the crystals (pristine and Cu-site doped) were grown by the TSFZ method by tuning the growth parameters. Laue diffraction images, optical polarized microscopy, and Scanning Electron Microscopy (SEM) images confirm the quality of the grown crystals. Here, we report the single crystal growth, magnetic, and transport properties of Sr₁₄Cu₂₄O₄₁ and its lightly doped variants (magnetic and non-magnetic) containing less than 1% of Co, Ni, Al, and Zn impurities. Since any real system will have some amount of weak disorder, our studies on these ladder compounds with controlled dilute disorder are significant in the present context.

Keywords: low-dimensional quantum magnets, single crystal, spin-ladder, TSFZ technique
