Search results for: socio-pathological phenomena
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1121

71 Evaluation of Magnificent Event of India with Special Reference to Maha Kumbha Mela (Fair) 2013-A Congregation of Millions

Authors: Sharad Kumar Kulshreshtha

Abstract:

India is a great land of cultural and traditional diversity, whose spectrum creates a unique ambiance all over the country. Fairs and festivals, in particular, are ancient phenomena in Indian culture: thousands of religious, spiritual, and cultural fairs are organized on auspicious occasions, reflecting the effective and efficient role of social governance and the responsibility of Indian society. In this context, the mega event known as the ‘Kumbha Mela’ (literally ‘Kumbha Fair’) is organized every twelve years at Prayag (Allahabad), an ancient city of India now in the state of Uttar Pradesh. The Kumbh Mela is one of the largest human congregations on Earth, and Allahabad is considered the largest and holiest of the four cities in which the fair is organized. According to Hindu religious scripture, pilgrims take a dip at the holy confluence known as Triveni Sangam, the meeting point of three sacred rivers of India: the Ganges, the Yamuna, and the mythical Saraswati. During the Kumbha fair, the River Ganges is believed to turn to nectar, bringing great blessing to everyone who bathes in it. Other activities include religious discussions, devotional singing, and the mass feeding of pilgrims and the poor. The venue for the Kumbh Mela depends on the positions that the Sun, Moon, and Jupiter hold in different zodiac signs during that period. More than 120 million (12 crore) people visited the Kumbha Fair 2013 in Allahabad. A temporary tented city was set up for the pilgrims over an area of 2 hectares of land along the River Ganges. As many as 5 power substations, as well as temporary police stations, hospitals, bus terminals, and stalls, were set up to provide various facilities to the visitors, and thousands of volunteers participated in assisting with the event. The fair administration made every effort to provide facilities to visitors, such as security, sanitation, medical care, and reliable water and power supply.
The efficient and timely arrangements at the Kumbha Mela attracted the attention of many governments and institutions; Harvard University (USA) conducted research to find out how it was made possible. This paper focuses on the effective and efficient planning and preparation of the Kumbha Fair, including the facilitation process; the role of the various coordinating agencies; risk management and crisis management strategies, namely the Prevention, Preparedness, Response, and Recovery (PPRR) approach and the emergency response plan (ERP); safety and security issues; various environmental aspects along with health hazards and hygiene; and crowd management, evacuation, monitoring, control, and evaluation.

Keywords: event planning and facility arrangement, risk management, crowd management, India

Procedia PDF Downloads 304
70 Implementation of Language Policy in a Swedish Multicultural Early Childhood School: A Development Project

Authors: Carina Hermansson

Abstract:

This presentation focuses on a development project aimed at developing and documenting the steps taken at a multilingual, multicultural K-5 school to improve the achievement levels of the pupils by focusing on language and literacy development across the school schedule, in a digital classroom, and in all units of the school. This pre-formulated aim may thus be said to adhere to neoliberal educational and accountability policies in terms of its focus on digital learning, learning results, and national curriculum standards. In particular, the project aimed at improving the collaboration between the teachers, the leisure time unit, the librarians, the mother tongue teachers, and the bilingual study counselors. This is a school environment characterized by cultural, ethnic, linguistic, and professional pluralization. The overarching aims of the research project were to scrutinize and analyze the factors enabling and obstructing the implementation of the Language Policy in a digital classroom. Theoretical framework: We apply multi-level perspectives in the analyses, inspired by Uljens’ ideas about interactive and interpersonal first-order (teacher/students) and second-order (principal/teachers and other staff) educational leadership as described within the framework of discursive institutionalism, when we try to relate the Language Policy, educational policy, and curriculum with the administrative processes. Methodology/research design: The development project is based on recurring research circles where teachers, leisure time assistants, mother tongue teachers, and study counselors speaking the mother tongue of the pupils, together with two researchers, discuss their digital literacy practices in the classroom. The researchers have, in collaboration with the principal, developed guidelines for the work, expressed in a Language Policy document.
In our understanding, however, the document is only part of the concept; the actions of the personnel and their reflections on their practice constitute the major part of the development project. One and a half years out of three have now passed, and the project has met with a number of difficulties which shed light on factors of importance for its progress. Field notes and recordings from the research circles, a survey of the personnel, and recorded group interviews provide data on the progress of the project. Expected conclusions: The problems experienced concern leadership, the curriculum, the interplay between aims, technology, contents, and methods, parents as customers taking their children to other schools, conflicting values, and interactional difficulties; that is, phenomena at different levels, ranging from the school level to the societal level, for example teachers being replaced as a result of the marketization of schools. Underlying assumptions held by actors at different levels also create obstacles. We find this study and the problems we are facing utterly important to share and discuss in an era with a steady flow of refugees arriving in the Nordic countries.

Keywords: early childhood education, language policy, multicultural school, school development project

Procedia PDF Downloads 143
69 Radiation Stability of Structural Steel in the Presence of Hydrogen

Authors: E. A. Krasikov

Abstract:

As the service life of an operating nuclear power plant (NPP) increases, the potential degradation of aging components must receive more attention. Integrity assurance analysis contributes to the effective maintenance of adequate plant safety margins. In essence, the reactor pressure vessel (RPV) is the key structural component determining the NPP lifetime. Environmentally induced cracking in the stainless steel corrosion-preventing cladding of RPVs has been recognized as one of the technical problems in the maintenance and development of light-water reactors. Extensive cracking leading to failure of the cladding was found after 13000 net hours of operation in the JPDR (Japan Power Demonstration Reactor). Some of the cracks reached the base metal and further penetrated into the RPV in the form of localized corrosion. Failures of reactor internal components in both boiling water reactors and pressurized water reactors have increased after the accumulation of relatively high neutron fluences (5×10²⁰ cm⁻², E > 0.5 MeV). Therefore, in the case of cladding failure, the problem arises of hydrogen (as a corrosion product) embrittlement of irradiated RPV steel because of exposure to the coolant. At present, when notable progress in plasma physics has been achieved, practical energy utilization from fusion reactors (FR) is determined by the state of materials science problems. The latter include not only the routine problems of nuclear engineering but also a number of entirely new problems connected with extreme conditions of materials operation: irradiation environment, hydrogenation, thermocycling, etc. The limited data available suggest that the combined effect of these factors is more severe than any one of them alone. To clarify the possible influence of these in-service synergistic phenomena on the properties of FR structural materials, we have studied the hydrogen-irradiated steel interaction, including alternating hydrogenation and heat treatment (annealing).
Available information indicates that the life of the first wall could be extended by means of periodic in-place annealing. The effects of neutron fluence and irradiation temperature on steel/hydrogen interactions (adsorption, desorption, diffusion, mechanical properties at different loading velocities, post-irradiation annealing) were studied. Experiments clearly reveal that the higher the neutron fluence and the lower the irradiation temperature, the more hydrogen-radiation defects occur, with corresponding effects on the steel's mechanical properties. Hydrogen accumulation analyses and thermal desorption investigations were performed to provide evidence of hydrogen trapping at irradiation defects. Extremely high susceptibility to hydrogen embrittlement was observed in specimens that had been irradiated at relatively low temperature; however, the susceptibility decreases with increasing irradiation temperature. To evaluate methods for assessing and predicting the RPV's residual lifetime, more work should be done on the irradiated metal–hydrogen interaction in order to monitor more reliably the status of irradiated materials.

Keywords: hydrogen, radiation, stability, structural steel

Procedia PDF Downloads 268
68 Study of Nucleation and Growth Processes of Ettringite in Supersaturated Diluted Solutions

Authors: E. Poupelloz, S. Gauffinet

Abstract:

Ettringite, Ca₆Al₂(SO₄)₃(OH)₁₂·26H₂O, is one of the major hydrates formed during cement hydration. In Portland cement, ettringite forms from the reaction between tricalcium aluminate Ca₃Al₂O₆ and calcium sulfate. Ettringite is also present in calcium sulfoaluminate cement, in which it is the major hydrate, formed by the reaction between ye'elimite Ca₄(AlO₂)₆SO₄ and calcium sulfate. Numerous results on the formation of ettringite are available in the literature, even if some issues are still under discussion; however, almost all published work on ettringite concerns cementitious systems. Yet in cement, hydration reactions are very complex, result from dissolution-precipitation processes, and are subject to various interactions. Understanding the formation process of a single phase, here ettringite, is the first step toward understanding the much more complex reactions happening in cement. This study is crucial for the comprehension of early cement hydration and physical behavior; indeed, the formation of hydrates, in particular ettringite, influences the rheological properties of the cement paste and the need for admixtures. To make progress toward the understanding of the phenomena involved, a specific study of the nucleation and growth processes of ettringite was conducted. First, ettringite nucleation was studied in ionic aqueous solutions under controlled but varied experimental conditions, such as different supersaturation degrees (β), different pH values, or the presence of exogenous ions. Through induction time measurements, interfacial ettringite crystal–solution energies (γ) were determined. The growth of ettringite in supersaturated solutions was also studied through chain crystallization reactions. Specific BET surface area measurements and scanning electron microscopy observations suggest that the growth process is favored over the nucleation process when ettringite crystals are initially present in a solution with a low supersaturation degree.
The influence of stirring on ettringite formation was also investigated. It was observed that the intensity and nature of stirring have a strong influence on the size of the ettringite needles formed: needle lengths vary from less than 10 µm, depending on the stirring, to almost 100 µm without any stirring. During all the experiments mentioned above, the ions initially present are consumed to form ettringite, so that the supersaturation degree with regard to ettringite decreases over time. To avoid this phenomenon, a device was used that compensates for the drop in ion concentrations by adding more solution, thereby maintaining constant ionic concentrations. This constant β recreates the conditions at the beginning of cement paste hydration, when the dissolution of solid reagents compensates for the consumption of ions to form hydrates. This device allowed the determination of the ettringite precipitation rate as a function of the supersaturation degree β. Taking samples at different times during ettringite precipitation and performing BET measurements allowed the determination of the interfacial growth rate of ettringite in m²/s. This work will lead to a better understanding and control of ettringite formation alone, and thus during cement hydration. The study will also ultimately define the impact of the ettringite formation process on the rheology of cement pastes at an early age, which is a crucial parameter from a practical point of view.
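The induction-time route to γ can be illustrated with classical nucleation theory: for homogeneous nucleation of a spherical nucleus, ln t_ind is expected to vary linearly with 1/(ln β)², and the slope scales with γ³. The sketch below fits invented induction times (not data from the study; the molecular volume is likewise an illustrative assumption) to recover an interfacial energy:

```python
import numpy as np

# Classical nucleation theory: ln(t_ind) = A + B / (ln beta)^2, where the
# slope B = 16*pi*gamma^3*v^2 / (3*(k_B*T)^3) for a spherical nucleus.
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 298.15          # temperature, K
v = 1.2e-27         # molecular volume of ettringite, m^3 (illustrative value)

# Hypothetical induction times (s) at several supersaturation degrees beta
beta = np.array([8.0, 10.0, 15.0, 20.0, 30.0])
t_ind = np.array([5400.0, 2100.0, 620.0, 300.0, 130.0])

x = 1.0 / np.log(beta) ** 2
y = np.log(t_ind)
slope, intercept = np.polyfit(x, y, 1)  # linear fit: y = slope*x + intercept

# Invert the slope expression to estimate the interfacial energy gamma
gamma = (3.0 * slope * (k_B * T) ** 3 / (16.0 * np.pi * v ** 2)) ** (1.0 / 3.0)
print(f"slope = {slope:.2f}, gamma = {gamma * 1e3:.1f} mJ/m^2")
```

The same fit applied to the measured induction times would give the γ values reported by the authors for each set of conditions (pH, exogenous ions).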

Keywords: cement hydration, ettringite, morphology of crystals, nucleation-growth process

Procedia PDF Downloads 126
67 Hydrocarbons and Diamondiferous Structures Formation in Different Depths of the Earth Crust

Authors: A. V. Harutyunyan

Abstract:

Investigations of rocks at high pressures and temperatures have revealed the intervals over which seismic wave velocities and density change, as well as some processes taking place in the rocks. In serpentinized rocks, abrupt changes in seismic wave velocities and density were recorded as a consequence of dehydration: hydrogen-bearing components are released and combine with carbon-bearing components, and as a result, hydrocarbons are formed. The investigated samples are melted. Geofluids and hydrocarbons then migrate into the upper horizons of the Earth's crust along deep faults, followed by their differentiation and accumulation in the jointed rocks of the faults and in layers with collecting properties. Beneath the majority of hydrocarbon deposits, magmatic centers and deep faults are recorded at a certain depth. The results of the investigation of serpentinized rocks, together with numerous geological-geophysical factual data, indicate that hydrocarbons are mainly formed both in the offshore parts of the oceans and at different depths of the continental crust. Experiments have also shown that the dehydration of serpentinized rocks is accompanied by an explosion, with an instantaneous increase in pressure and temperature and melting of the studied rocks. According to numerous publications, hydrocarbons and diamonds are formed in the upper part of the mantle, at depths of 200-400 km, and rise to the upper horizons of the Earth's crust through narrow channels as a consequence of geodynamic processes. However, the genesis of metamorphogenic diamonds, and of the diamonds found in lava streams formed within the Earth's crust, remains unclear. Since extremely high pressures and temperatures arise during dehydration, it is assumed that diamond crystals form from the carbon-bearing components present in the dehydration zone. It can also be assumed that, besides the explosion at dehydration, secondary explosions of the released hydrogen take place.
The process is naturally accompanied by seismic phenomena, causing earthquakes of different magnitudes at the surface. As for the diamondiferous kimberlites, it is well known that the majority of them are located within ancient shields and platforms, not necessarily connected with deep faults. Kimberlites are formed where dehydrated masses lie at shallow depth in the Earth's crust. The kimberlites are younger than the ancient rocks they contain, which include serpentinized basites and ultrabasites, relicts of the paleo-oceanic crust. Diamonds containing water and hydrocarbons, indicating their simultaneous genesis, are sometimes found. So, according to the new concept put forward here, geofluids, hydrocarbons, and diamonds are formed simultaneously from serpentinized rocks as a consequence of their dehydration at different depths of the Earth's crust. Based on the proposed concept, we suggest discussing the following: the genesis of gigantic hydrocarbon deposits located in the offshore areas of the oceans (North American, Gulf of Mexico, Cuanza-Cameroonian, East Brazilian, etc.) as well as in the continental parts of different mainlands (Canadian-Arctic, Caspian, East Siberian, etc.), and the genesis of metamorphogenic diamonds and of diamonds in lava streams (Guinea-Liberian, Kokchetav, Canadian, Kamchatka-Tolbachik, etc.).

Keywords: dehydration, diamonds, hydrocarbons, serpentinites

Procedia PDF Downloads 339
66 Requirements for the Development of Competencies to Mentor Trainee Teachers: A Case Study of Vocational Education Cooperating Teachers in Quebec

Authors: Nathalie Gagnon, Andréanne Gagné, Julie Courcy

Abstract:

Quebec's vocational education teachers experience an atypical induction process into the workplace and thus face unique challenges. In contrast to elementary and high school teachers, who must undergo initial teacher training in order to access the profession, vocational education teachers, in most cases, are hired based on their professional expertise in the trade they are teaching, without prior pedagogical training. In addition to creating significant stress, which does not foster the acquisition of teaching roles and skills, this approach also forces recruits into a particular posture during their practical training: that of juggling their dual identities as teacher and trainee simultaneously. Recruits are supported by Cooperating Teachers (CPs) who, as experienced educators, take a critical and constructive look at their practices, observe them in the classroom, give them constructive feedback, and encourage them in their reflective practice. Thus, the vocational setting CP also assumes a distinctive posture and role due to the characteristics of the trainees they support. Although it is recognized that preparation, training, and supervision of CPs are essential factors in improving the support provided to trainees, there is little research about how CPs develop their support skills, and very little research focuses on the distinct posture they occupy. However, in order for them to be properly equipped for the important role they play in recruits’ practical training, it is vital to know more about their experience. An individual’s competencies cannot be studied without first examining what characterizes their experience, how they experience any given situation on cognitive, emotional, and motivational levels, in addition to how they act and react in situ. Depending on its nature, the experience will or will not promote the development of a specific competency. 
The research from which this communication originates focuses on describing the overall experience of vocational education CPs in an effort to better understand the mechanisms linked to the development of their mentoring competencies. Experience and competence were therefore the two main theoretical concepts guiding the research. As for methodological choices, case study methods were used, since they prove adequate for describing contemporary phenomena in a rich and detailed way within real-life contexts. The data were collected from semi-structured interviews conducted with 15 vocational education CPs in Quebec (Canada), followed by a data-driven, semi-inductive analysis approach that let the categories emerge organically. Focusing on the development needs of vocational education CPs to improve their mentoring skills, this paper presents the results of our research, namely the importance of adequate training, better support offered by university supervisors, greater recognition of their role, and specific time slots dedicated to trainee support. The knowledge resulting from this research could improve the quality of support for trainee teachers in vocational education settings and lead to a more successful induction into the workplace. This communication also presents recommendations regarding the development of training systems that meet the specific needs of vocational education CPs.

Keywords: development of competencies, cooperating teacher, mentoring trainee teacher, practical training, vocational education

Procedia PDF Downloads 113
65 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators

Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy

Abstract:

Once-Through-Steam-Generators (OTSGs) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner, the radiant section, and the convective section. Natural gas is burned in staged diffusion flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then injected into the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emission. With the increase in computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation processes. Moreover, to optimize the burner operating conditions with regard to NOₓ emission, field characterization and measurements are usually carried out. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation schedule constraints. Therefore, the application of CFD appears more adequate for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG, namely the commercial software ANSYS Fluent and the open-source software OpenFOAM.
The RANS (Reynolds-averaged Navier–Stokes) equations are solved, combined with the Eddy Dissipation Concept to model combustion and closed by the k-epsilon turbulence model. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means to assess the numerical modelling. Flame temperatures and chemical composition are used as reference fields to perform this validation; results show a fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristic Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are pinpointed and correlated with the physics of the flow. CFD is, therefore, a useful tool for providing insight into the NOₓ emission phenomena in OTSGs: sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for a field tune-up.
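The link between staged combustion and NOₓ can be made concrete with the thermal (Zeldovich) mechanism, whose rate-limiting step O + N₂ → NO + N has a very large activation temperature. The sketch below uses a textbook-typical Arrhenius fit (the constants are illustrative, not taken from this study) to show why lowering the peak flame temperature, as staging does, cuts thermal NO sharply:

```python
import numpy as np

# Forward rate constant of the rate-limiting Zeldovich step O + N2 -> NO + N.
# Arrhenius constants are textbook-typical, illustrative values:
#   k1 = A * exp(-Ta / T), with A = 1.8e8 m^3/(kmol*s) and Ta = 38370 K
def k1(T):
    return 1.8e8 * np.exp(-38370.0 / T)

for T in (1600.0, 1800.0, 2000.0, 2200.0):
    print(f"T = {T:6.0f} K   k1 = {k1(T):.3e} m^3/(kmol*s)")

# Near 2000 K, a 100 K rise more than doubles the rate, which is why staging
# (lower peak flame temperature) is effective against thermal NOx.
ratio = k1(2100.0) / k1(2000.0)
print(f"k1(2100 K) / k1(2000 K) = {ratio:.2f}")
```

A CFD NOₓ post-processor evaluates expressions of this kind on the computed temperature field, which is why the hot spots in the simulated flow correlate with the high-NOₓ sources identified in the Emission Characteristic Map.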

Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through-steam-generators

Procedia PDF Downloads 111
64 Creative Mapping Landuse and Human Activities: From the Inventories of Factories to the History of the City and Citizens

Authors: R. Tamborrino, F. Rinaudo

Abstract:

Digital technologies offer possibilities to effectively convert historical archives into instruments of knowledge able to guide the interpretation of historical phenomena. The digital conversion and management of such documents make it possible to add other sources to a unique, coherent model permitting the intersection of different data, which opens new interpretations and understandings. Urban history uses, among other sources, the inventories that register human activities in a specific space (e.g. cadastres, censuses, etc.). The geographic localisation of this information on cartographic supports allows the comprehension and visualisation of specific relationships between different historical realities, registering both the urban space and the people living there. These links, which merge data and documentation of different natures through a new organisation of the information, can suggest new interpretations of other related events. For all these kinds of analysis, GIS platforms today represent the most appropriate answer, and the design of the related databases is the key to realising an ad-hoc instrument that facilitates the analysis and intersection of data of different origins. Moreover, GIS has become the digital platform to which other kinds of data visualisation can be added. This research deals with the industrial development of Turin at the beginning of the 20th century. A census of factories carried out just prior to WWI provides the opportunity to test the potential of GIS platforms for analysing the modifications of the urban landscape during the first industrial development of the town. The inventory includes data about locations, activities, and people. The GIS is shaped in a creative way, linking different sources and digital systems, with the aim of creating a new type of platform conceived as an interface integrating different kinds of data visualisation.
The data processing allows this information to be linked to the urban space and the growth of the city at that time to be visualised. The sources related to the development of the urban landscape in that period are of different natures. The emerging necessity to build, enlarge, modify, and join different buildings to boost industrial activities, in line with their fast development, is recorded in the official permissions delivered by the municipality and now stored in the Historical Archive of the Municipality of Turin. Those documents, consisting of reports and drawings, contain numerous data on the buildings themselves, including the block where the plot is located, the district, and the people involved, such as the owner, the investor, and the engineer or architect designing the industrial building. All these collected data offer the possibility, firstly, of reconstructing the process of change of the urban landscape by using GIS and 3D modelling technologies, thanks to access to the drawings (2D plans, sections, and elevations) showing the previous and the planned situations. Furthermore, they provide information for different queries of the linked dataset that could be useful for different kinds of research and targets, such as economic, biographical, architectural, or demographic studies. By superimposing a layer of the present city, the past meets the present industrial heritage, and people meet urban history.
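The record linkage at the heart of such a platform can be sketched in miniature: census entries and building permits share a plot identifier, so a single query returns both a factory's activity and its construction history. All records below are invented placeholders, not archival data, and the field names are assumptions for illustration only:

```python
# Invented census entries: each factory is tied to a plot and a district.
factories = [
    {"plot": "B12-07", "activity": "metalworking", "district": "Borgo Dora"},
    {"plot": "C03-15", "activity": "textiles", "district": "Aurora"},
]

# Invented building permits, keyed by the same plot identifier.
permits = [
    {"plot": "B12-07", "year": 1907, "work": "enlargement"},
    {"plot": "B12-07", "year": 1911, "work": "new shed"},
]

# Index permits by plot so the join is a dictionary lookup.
permits_by_plot = {}
for p in permits:
    permits_by_plot.setdefault(p["plot"], []).append(p)

def query(district):
    """Return each factory in a district together with its permit history."""
    return [
        {**f, "permits": permits_by_plot.get(f["plot"], [])}
        for f in factories
        if f["district"] == district
    ]

print(query("Borgo Dora"))
```

A GIS database does the same join at scale, with the plot key also carrying the geographic coordinates that place each linked record on the historical map.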

Keywords: digital urban history, census, digitalisation, GIS, modelling, digital humanities

Procedia PDF Downloads 190
63 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration has been widely used in the petroleum exploration industry since the early 1980s to reveal subsurface images and to detect rock and fluid properties. The seismic technology involves the construction of a velocity model through interpretive model building, seismic tomography, or full waveform inversion, and the reverse-time propagation of the acquired seismic data together with the original wavelet used in the acquisition. The methodology has matured from 2D imaging in simple media to handling full 3D imaging challenges in extremely complex geological conditions today. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With this velocity structure, USCT data can be migrated with the 'bent-ray' method. Its seismic counterpart is Kirchhoff depth migration, in which the source of reflective energy is traced by ray tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and with irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure, and has the capability to fully reconstruct any image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to the real-time imaging normally required in day-to-day medical operations. However, with the improvement of computing technology, this computational bottleneck may cease to be a challenge in the near future. Present-day RTM algorithms in the seismic industry are typically implemented from a flat datum, but they can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided.
Such flexibility makes RTM convenient to implement for USCT imaging, provided that the spatial coordinates of the transmitters and receivers are known and enough data are collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging, producing an accurate 3D acoustic image based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, using PSPI wavefield extrapolation and a piecewise-constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images produces a full image of the organ, with a much-reduced noise level compared with the individual partial images.
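The building block of PSPI is the constant-velocity phase-shift extrapolator applied per frequency; PSPI runs it for several reference velocities and interpolates the results where the velocity varies laterally. A minimal one-step 2D sketch (all parameters are illustrative assumptions, not values from the paper) in the frequency-space domain:

```python
import numpy as np

def phase_shift_step(u, dz, v, omega, dx):
    """Extrapolate one monochromatic wavefield slice u(x) a distance dz
    through a constant-velocity layer v: multiply by exp(i*kz*dz) in the
    wavenumber domain, damping evanescent components."""
    nx = u.size
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)       # rad/m
    kz2 = (omega / v) ** 2 - kx ** 2                  # vertical wavenumber^2
    kz = np.where(kz2 >= 0.0, np.sqrt(np.abs(kz2)), 0.0)
    # Propagating components get a pure phase shift; evanescent ones decay.
    decay = np.where(kz2 >= 0.0, 1.0, np.exp(-np.sqrt(np.abs(kz2)) * dz))
    U = np.fft.fft(u)
    return np.fft.ifft(U * decay * np.exp(1j * kz * dz))

# Illustrative ultrasound-scale setup: 1 MHz, 1500 m/s, 1 mm grid spacing.
x = np.arange(256) * 1e-3
u0 = np.exp(-((x - x.mean()) / 5e-3) ** 2).astype(complex)
u1 = phase_shift_step(u0, dz=1e-3, v=1500.0, omega=2 * np.pi * 1e6, dx=1e-3)
```

Because the shift is a unit-modulus phase factor for propagating components, the step conserves their energy; repeating it layer by layer, once per frequency, yields the depth-extrapolated wavefield that the imaging condition then cross-correlates with the forward-propagated wavelet.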

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 73
62 The International Legal Protection of Foreign Investment Through Bilateral Investment Treaties and Double Taxation Treaties in the Context of International Investment Law and International Tax Law

Authors: Abdulmajeed Abdullah Alqarni

Abstract:

This paper is devoted to a study of the current frameworks applicable to foreign investments at the levels of domestic and international law, with a particular focus on the legitimate balance to be achieved between the rights of the host state and the legal protections owed to foreign investors. At the wider level of analysis, the paper attempts to map and critically examine the relationship between foreign investment and economic development. In doing so, it offers a study of how current discourses and practices in investment law can reconcile the competing interests of developing and developed countries. The study draws on the growing economic imperative for developing nations to create a favorable investment climate capable of attracting private foreign investment. It notes that, over the past decades, an abundance of legal standards establishing substantive and procedural protections for legal forms of foreign investment in host countries have evolved and crystallized. The study then offers a substantive analysis of legal reforms at the domestic level in countries such as Saudi Arabia, before providing an in-depth and substantive examination of the most important instruments developed at the level of international law: bilateral investment agreements and double taxation agreements. As for its methods, the study draws on case studies and on data assessing the link between double taxation and economic development. Drawing on the extant literature, doctrinal research, and international and comparative jurisprudence, the paper excavates and critically examines contemporary definitions and norms of international investment law, many of which have been given concrete form and specificity in an ever-expanding number of bilateral and multilateral investment treaties.
By reconsidering the wider challenges of conflicts of law and jurisdiction, and the competing aims of the modern investment law regime, the study reflects on how bilateral investment treaties might succeed in achieving the dual aims of rights protection and economic sovereignty. Through its examination of the double taxation phenomenon, the study goes on to identify key practical challenges raised by the implementation of bilateral treaties while also assessing the sufficiency of the domestic and international legal solutions proposed in response. In its final analysis, the study aims to contribute to existing scholarship by assessing contemporary legal and economic barriers to the free flow of investment with due regard for the legitimate concerns and diversity of developing nations. It does so by situating its analysis of the domestic enforcement of international investment instruments in its wider historical and normative context. By focusing on the economic and legal dimensions of foreign investment, the paper also aims to offer an interdisciplinary and holistic perspective on contemporary issues and developments in investment law, while offering practical reform proposals that can be used to achieve a more equitable balance between the rights and interests of states and private entities in an increasingly transnationalized sphere of investment regulation and treaty arbitration.

Keywords: foreign investment, bilateral investment treaties, international tax law, double taxation treaties

Procedia PDF Downloads 87
61 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality

Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo

Abstract:

Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only to executing mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision-making, this work reports a model eliciting activity (MEA). The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable for students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data, and the use of the computer should be considered. The activity was designed considering the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareability and reusability, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for civil engineering students at the University of Guadalajara. The analysis of the way in which the students sought to solve the activity was carried out using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making).
With the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, students' resistance to moving from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population through the sample data; third, viewing the sample as an isolated event rather than as part of a random process that must be interpreted in the context of a probability distribution; and fourth, the difficulty of decision-making with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these conceptions as obstacles to understanding probability distributions, and that they do not change after an intervention, allows both the interventions and the MEA to be modified so that students can themselves identify erroneous solutions while carrying out the MEA. The MEA also proved to be democratic, since several students who had participated little and earned low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful for several tasks, such as plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the civil engineering students improved their probabilistic knowledge and their understanding of fundamental concepts such as sample, population, and probability distribution.
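The kind of decision problem such an MEA poses, i.e., judging a claimed proportion from sample data through the binomial distribution rather than treating the sample as an isolated event, can be sketched as follows. The scenario and numbers are hypothetical illustrations, not the activity used in the course (which relied on RStudio).

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def tail_prob(k, n, p):
    """P(X >= k): chance of seeing k or more successes if the true rate is p."""
    return sum(binom_pmf(i, n, p) for i in range(k, n + 1))

# Hypothetical decision problem: a concrete batch is acceptable if the
# defect rate is at most p0 = 0.05.  Out of n = 30 sampled specimens,
# k = 4 fail the strength test.  How plausible is that under p0?
p_value = tail_prob(4, 30, 0.05)
print(f"P(X >= 4 | n=30, p=0.05) = {p_value:.4f}")
# A small tail probability is evidence against accepting the batch; the
# point is that the sample is read through a probability distribution.
```

The same computation is a one-liner in R (`pbinom(3, 30, 0.05, lower.tail = FALSE)`), which is presumably how students would reproduce it in RStudio.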

Keywords: linear model, models and modeling, probability, randomness, sample

Procedia PDF Downloads 117
60 New Findings on the Plasma Electrolytic Oxidation (PEO) of Aluminium

Authors: J. Martin, A. Nominé, T. Czerwiec, G. Henrion, T. Belmonte

Abstract:

Plasma electrolytic oxidation (PEO) is a particular electrochemical process used to produce protective oxide ceramic coatings on light-weight metals (Al, Mg, Ti). When applied to aluminum alloys, the resulting PEO coatings exhibit improved wear and corrosion resistance because thick, hard, compact, and adherent crystalline alumina layers can be achieved. Several investigations have been carried out to improve the efficiency of the PEO process, and one particular approach consists of tuning a suitable electrical regime. Despite the considerable interest in this process, there is still no clear understanding of the underlying discharge mechanisms that make metal oxidation possible through ceramic layers up to hundreds of µm thick. A key feature that governs the PEO process is the numerous short-lived micro-discharges (micro-plasmas in liquid) that occur continuously over the processed surface when the high applied voltage exceeds the critical dielectric breakdown value of the growing ceramic layer. By using a bipolar pulsed current to supply the electrodes, we previously observed that micro-discharges are delayed with respect to the rising edge of the anodic current. Nevertheless, the origin of this phenomenon is still not clear and needs more systematic investigation. The aim of the present communication is to identify the relationship between this delay and the mechanisms responsible for the oxide growth. For this purpose, the delay of micro-discharge ignition is investigated as a function of various electrical parameters, such as the current density (J), the current pulse frequency (F), and the anodic-to-cathodic charge quantity ratio (R = Qp/Qn) delivered to the electrodes. The PEO process was conducted on Al2214 aluminum alloy substrates in a solution containing potassium hydroxide (KOH) and sodium silicate diluted in deionized water.
The light emitted from micro-discharges was detected by a photomultiplier, and the micro-discharge parameters (number, size, lifetime) were measured during the process by means of ultra-fast video imaging (125,000 frames/s). SEM observations and roughness measurements were performed to characterize the morphology of the elaborated oxide coatings, while XRD was carried out to evaluate the amount of the corundum α-Al₂O₃ phase. Results show that, regardless of the applied current waveform, the delay of micro-discharge appearance increases as the process goes on. Moreover, the delay is shorter when the current density J (A/dm²), the current pulse frequency F (Hz), and the charge quantity ratio R are high. It also appears that shorter delays are associated with stronger micro-discharges (localized, long-lived, and large), which have a detrimental effect on the elaborated oxide layers (thin and porous). On the basis of these results, a model for the growth of the PEO oxide layers will be presented and discussed. Experimental results support a mechanism in which electrical charge accumulates at the oxide surface/electrolyte interface until dielectric breakdown occurs and micro-discharges appear.
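The charge-accumulation picture can be caricatured by a constant-current capacitor model: if the oxide/electrolyte interface stores charge at current I until a breakdown voltage V_bd is reached, the ignition delay is t = C·V_bd/I. All values below are hypothetical; the sketch only illustrates why a higher current density should shorten the delay, not the authors' quantitative model.

```python
def ignition_delay(c_ox, v_breakdown, current):
    """Time for an interface 'capacitor' C charged at constant current I
    to reach the dielectric breakdown voltage: t = C * V_bd / I."""
    return c_ox * v_breakdown / current

# Hypothetical values: 1 uF effective capacitance, 500 V breakdown voltage.
c_ox, v_bd = 1e-6, 500.0
for current in (5.0, 10.0, 20.0):  # stand-in for increasing current density J
    t_us = ignition_delay(c_ox, v_bd, current) * 1e6
    print(f"I = {current:4.1f} A -> ignition delay = {t_us:.0f} us")
# Doubling the current halves the delay, mirroring the reported trend
# that micro-discharges appear sooner at higher J.
```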

Keywords: aluminium, micro-discharges, oxidation mechanisms, plasma electrolytic oxidation

Procedia PDF Downloads 262
59 Border Security: Implementing the “Memory Effect” Theory in Irregular Migration

Authors: Iliuta Cumpanasu, Veronica Oana Cumpanasu

Abstract:

This paper focuses on the conjunction between the newly emerged theory of the “Memory Effect” in irregular migration and related criminality and the notion of securitization, and on its impact on border management. It brings about a scientific advancement in the field by identifying, for the first time, the patterns corresponding to the linkage of the two concepts and by developing a theoretical explanation of the effects of non-military threats on border security. Over recent years, irregular migration has increased significantly worldwide. The U.N.'s refugee agency reports that the number of displaced people is at its highest ever, surpassing even post-World War II numbers, when the world was struggling to come to terms with the most devastating event in history. This is also the fresh reality within the core studied coordinate, the Balkan Route of irregular migration, which starts from Asia and Africa, continues through Turkey, Greece, North Macedonia or Bulgaria, and Serbia, and ends in Romania, where thousands of migrants find themselves in an irregular situation concerning their entry to the European Union, with important consequences for the related criminality. Data from the past six years were collected through semi-structured interviews with experts in the field of migration and desk research within organisations involved in border security, with the aim of gathering genuine insights from this field. The data were constantly checked against the existing literature and subsequently subjected to mixed methods of analysis, including a Vector Auto-Regression (VAR) estimation model.
Thereafter, the analysis of the data followed the processes and outcomes of Grounded Theory, and a new substantive theory emerged, explaining how the phenomena of irregular migration and cross-border criminality are the decisive impetus for implementing the concept of securitization in border management by using the proposed pattern. The findings of the study capture an area that has not yet benefitted from a comprehensive approach in the scientific community, such as the seasonality, stationarity, dynamics, predictions, and pull and push factors of irregular migration, also highlighting how the recent pandemic interfered with border security. The research therefore uses an inductive, revelatory theoretical approach that offers a new theory to explain a phenomenon, providing a practical contribution for the scientific community, research institutes, and academia, as well as for organizational practitioners in the field, among them the UN, IOM, UNHCR, Frontex, Interpol, Europol, and national agencies specialized in border security. The scientific outcomes of this study were validated on June 30, 2021, when the author defended his dissertation for the European Joint Master's in Strategic Border Management, a prestigious two-year program supported by the European Commission, the Frontex Agency, and a consortium of six European universities; the topic is currently one of the research objectives of his ongoing PhD research at the West University of Timisoara.

Keywords: migration, border, security, memory effect

Procedia PDF Downloads 91
58 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters

Authors: Jyoti Sahu, Vinay A. Juvekar

Abstract:

Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g., polyethylene glycols/proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model is needed which can accurately predict the activity coefficient of the electrolyte as a function of temperature. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperatures of electrolyte/organic molecule solutions at different temperatures. Since the number of parameters is large, inaccuracies can creep in during their estimation, which can affect the reliability of predictions beyond the range in which the parameters were estimated. The cloud point of a solution is related to its free energy through its temperature and composition derivatives. Hence, cloud point measurements can be used for accurate estimation of the temperature and composition dependence of the parameters in the free energy model. If we use a two-pronged procedure, in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since fewer parameters are estimated in each of the two steps, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameters were estimated. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na₂CO₃)-water-organic molecule (isopropanol/ethanol) as the model system. The model of Robinson-Stokes-Glueckauf is modified by incorporating temperature-dependent Flory-Huggins interaction parameters.
The Helmholtz free energy expression contains, in addition to electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, viz. χwp, χws, and χps (w: water, p: polymer/organic molecule, s: salt). These parameters depend on both temperature and concentration. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species, and the temperature dependence through an analogous empirical form. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of electrolyte-water-organic molecule solutions is measured using a cloud point apparatus. The temperature- and composition-dependent interaction parameters for the electrolyte-water-organic molecule system are thus estimated through measurement of the cloud point of the solution. The model is then used to estimate the critical solution temperature (CST) of electrolyte-water-organic molecule solutions. We have experimentally determined the critical solution temperatures of different compositions of the electrolyte-water-organic molecule solution and compared the results with the estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with the experiments. Thus, the importance of CST data in the estimation of the parameters of the thermodynamic model is confirmed through this work.
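The two-step estimation strategy can be sketched generically on synthetic data: temperature coefficients of a hypothetical interaction parameter χ(T) = a + b/T are fitted from cloud-point data first, and a remaining composition coefficient is then fitted from osmotic-coefficient data with (a, b) held fixed. The functional forms, symbols, and numbers below are illustrative assumptions, not the Robinson-Stokes-Glueckauf formulation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: estimate temperature coefficients (a, b) of a hypothetical
# interaction parameter chi(T) = a + b/T from synthetic cloud-point data.
T = np.linspace(280.0, 340.0, 13)            # cloud-point temperatures (K)
a_true, b_true = 0.8, -150.0
chi_obs = a_true + b_true / T + rng.normal(0.0, 1e-3, T.size)
A1 = np.column_stack([np.ones_like(T), 1.0 / T])
(a_fit, b_fit), *_ = np.linalg.lstsq(A1, chi_obs, rcond=None)

# Step 2: with (a, b) fixed, estimate a composition coefficient c from
# synthetic osmotic-coefficient data phi = chi(T)*x + c*x**2 at T2 = 300 K.
x = np.linspace(0.05, 0.4, 8)                # solute volume fractions
T2, c_true = 300.0, 0.3
phi_obs = (a_true + b_true / T2) * x + c_true * x**2
residual = phi_obs - (a_fit + b_fit / T2) * x
c_fit = np.linalg.lstsq(x[:, None]**2, residual, rcond=None)[0][0]

print(f"a = {a_fit:.3f}, b = {b_fit:.1f}, c = {c_fit:.3f}")
```

Because each step estimates fewer parameters, both least-squares problems are better conditioned than a joint fit of all three, which is the core of the two-pronged argument.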

Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature

Procedia PDF Downloads 391
57 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads

Authors: Raja Umer Sajjad, Chang Hee Lee

Abstract:

Stormwater runoff is the leading contributor to the pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify, and eventually reduce, stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors, since the success of a monitoring program depends mainly on the accuracy of those estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimal stormwater monitoring design considering both the cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012-2014) from a mixed land-use site within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed through the year. The investigation of a large number of water quality parameters is time-consuming and resource-intensive, so Principal Component Analysis (PCA) was applied to identify a suite of easy-to-measure parameters to act as surrogates. Means, standard deviations, coefficients of variation (CV), and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implications of sampling time on monitoring results, the number of samples required during a storm event, and the impact of the seasonal first flush were also examined. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus, and heavy metals such as lead, chromium, and copper, whereas chemical oxygen demand (COD) was identified as a surrogate for organic matter.
The CVs of the monitored water quality parameters were found to be high (ranging from 3.8 to 15.5). This suggests that using a grab sampling design to estimate mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was only 2% between two different sample size approaches, i.e., 17 samples per storm event versus 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of a storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
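The surrogate-screening logic, i.e., flagging high-variability parameters with the CV and picking a cheap parameter that correlates strongly with expensive ones, can be sketched on synthetic data. The values below are invented for illustration and are not the Geumhak measurements.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50  # synthetic storm samples

# Invent data in which TSS drives turbidity and particulate-bound lead,
# while COD varies independently (tracking dissolved organic matter).
tss = rng.lognormal(4.0, 0.6, n)
data = {
    "TSS": tss,
    "turbidity": 0.8 * tss + rng.normal(0.0, 5.0, n),
    "Pb": 0.002 * tss + rng.normal(0.0, 0.02, n),
    "COD": rng.lognormal(3.0, 0.5, n),
}

# The coefficient of variation flags parameters that a sparse grab-sampling
# design would estimate poorly.
for name, values in data.items():
    cv = values.std(ddof=1) / values.mean()
    print(f"{name:9s} CV = {cv:.2f}")

# A good surrogate is a cheap parameter strongly correlated with the others.
names = list(data)
corr = np.corrcoef([data[k] for k in names])
i, j = names.index("TSS"), names.index("turbidity")
print("corr(TSS, turbidity) =", round(corr[i, j], 2))
```

In the study itself this screening was done in SPSS via the PCA biplot; the correlation matrix shown here is the simplest equivalent check.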

Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters

Procedia PDF Downloads 239
56 Flood Risk Assessment and Mapping: Finding the Flood Vulnerability Level of the Study Area and Prioritizing the Study Areas of Khinch District Using a Multi-Criteria Decision-Making Model

Authors: Muhammad Karim Ahmadzai

Abstract:

Floods are natural phenomena and an integral part of the water cycle. Most floods result from climatic conditions, but they are also affected by the geology and geomorphology of the area, its topography and hydrology, the water permeability of the soil and the vegetation cover, as well as by all kinds of human activities and structures. However, from the moment that human lives are at risk and significant economic impact is recorded, this natural phenomenon becomes a natural disaster. Flood management is now a key issue at regional and local levels around the world, affecting human lives and activities. Most floods cannot be fully predicted, but their risks can be reduced through appropriate management plans and constructions. The aim of this case study is to identify and map areas of flood risk in the Khinch District of Panjshir Province, Afghanistan, specifically in the Peshghore area, where floods have caused extensive damage. The main purpose of this study is to evaluate the contribution of remote sensing technology and Geographic Information Systems (GIS) in assessing the susceptibility of this region to flood events. Panjshir faces seasonal floods, and human interventions on streams, whose beds have been encroached upon to build houses and hotels or converted into roads, cause flooding after every heavy rainfall. The streams crossing settlements and areas with high touristic development have been intensively modified by humans, as the pressure for real estate development land is growing. In particular, several areas in Khinch face a high risk of extensive flood occurrence. This study concentrates on the construction of a flood susceptibility map of the study area by combining vulnerability elements using the Analytic Hierarchy Process (AHP). The Analytic Hierarchy Process is a powerful yet simple method for making decisions.
It is commonly used for project prioritization and selection: strategic goals are captured as a set of weighted criteria that are then used to score alternatives. Here, the method is used to provide weights for each criterion that contributes to the flood event. After processing a digital elevation model (DEM), important secondary data were extracted, such as the slope map, the flow direction, and the flow accumulation. Together with additional thematic information (land use and land cover, topographic wetness index, precipitation, normalized difference vegetation index, elevation, river density, distance from river, distance to road, and slope), these led to the final flood risk map. Finally, according to this map, priority protection areas and villages were identified, and structural and non-structural measures were proposed to minimize the impacts of floods on residential and agricultural areas.
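The AHP weighting step can be sketched with a small pairwise comparison matrix: criterion weights are the normalized principal eigenvector, and a consistency ratio checks that the judgments are coherent. The three criteria and judgment values below are hypothetical; the study weighted the larger set of conditioning factors listed above.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for three
# flood-conditioning criteria: slope, distance from river, land use.
# A[i, j] states how much more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Criterion weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency check: CR < 0.1 means the judgments are acceptably coherent.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58                      # 0.58 = Saaty's random index for n = 3
print("weights:", np.round(w, 3), " CR =", round(cr, 3))
```

Each raster layer's cells are then scored per criterion and combined as a weighted sum with these weights to produce the susceptibility map.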

Keywords: flood hazard, flood risk map, flood mitigation measures, AHP analysis

Procedia PDF Downloads 115
55 The Pore–Scale Darcy–Brinkman–Stokes Model for the Description of Advection–Diffusion–Precipitation Using Level Set Method

Authors: Jiahui You, Kyung Jae Lee

Abstract:

Hydraulic fracturing fluid (HFF) is widely used in shale reservoir production. HFF contains diverse chemical additives, which result in the dissolution and precipitation of minerals through multiple chemical reactions. In this study, a new pore-scale Darcy-Brinkman-Stokes (DBS) model coupled with the Level Set Method (LSM) is developed to address the microscopic phenomena occurring during the iron-HFF interaction by numerically describing mass transport, chemical reactions, and pore structure evolution. The new model is developed on OpenFOAM, an open-source platform for computational fluid dynamics. Here, the DBS momentum equation is used to solve for velocity by accounting for the fluid-solid mass transfer, and an advection-diffusion equation is used to compute the distribution of injected HFF and iron. The reaction-induced pore evolution is captured by applying the LSM, where the solid-liquid interface is updated by solving the level set distance function and reinitialized to a signed distance function. A smoothed Heaviside function then gives a smooth solid-liquid interface over a narrow band of fixed thickness. The stated equations are discretized by the finite volume method, while the reinitialization equation is discretized by the central difference method. A Gauss linear upwind scheme is used to solve the level set distance function, and the Pressure-Implicit with Splitting of Operators (PISO) method is used to solve the momentum equation. The numerical result is compared with a 1-D analytical solution of the fluid-solid interface for reaction-diffusion problems. Sensitivity analysis is conducted with various Damköhler numbers (DaII) and Péclet numbers (Pe). We categorize the Fe(III) precipitation into three patterns as a function of DaII and Pe: symmetrical smoothed growth, unsymmetrical growth, and dendritic growth.
Pe and DaII significantly affect the location of precipitation, which is critical in determining the injection parameters of hydraulic fracturing. When DaII < 1, precipitation occurs uniformly on the solid surface in both upstream and downstream directions. When DaII > 1, precipitation occurs mainly on the solid surface in the upstream direction. When Pe > 1, Fe(II) is transported deep into the pores and precipitates inside them. When Pe < 1, the precipitation of Fe(III) occurs mainly on the solid surface in the upstream direction, and it easily precipitates inside the small pore structures. The porosity-permeability relationship is subsequently presented. This pore-scale model allows high confidence in the description of Fe(II) dissolution and transport and of Fe(III) precipitation. The model shows fast convergence and requires a low computational load. The results can provide reliable guidance for injecting HFF in shale reservoirs to avoid clogging and wellbore pollution. Understanding Fe(III) precipitation and Fe(II) release and transport behaviors enables more efficient hydraulic fracturing projects.
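The narrow-band smoothed Heaviside used to blend solid and fluid regions in such level-set/DBS schemes can be sketched on a toy geometry. The circular grain, grid, and band half-width below are illustrative choices, not the paper's actual configuration.

```python
import numpy as np

def smoothed_heaviside(phi, eps):
    """Smoothed Heaviside of a signed distance phi over a band of
    half-width eps: 0 in the solid, 1 in the fluid, smooth in between."""
    return np.where(phi < -eps, 0.0,
           np.where(phi > eps, 1.0,
                    0.5 * (1.0 + phi / eps
                           + np.sin(np.pi * phi / eps) / np.pi)))

# Signed distance to a circular grain of radius 0.3 on a unit square grid.
x = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, x)
phi = np.hypot(X - 0.5, Y - 0.5) - 0.3   # < 0 inside grain, > 0 in the pore
H = smoothed_heaviside(phi, eps=0.05)

# H can now weight the Darcy drag term in the DBS momentum equation:
# large drag where H -> 0 (solid), vanishing drag where H -> 1 (fluid).
print("fluid fraction =", round(H.mean(), 3))
```

As the interface moves with dissolution or precipitation, phi is re-advected and reinitialized to a signed distance, and H is recomputed, which is how the pore-structure evolution feeds back into the flow field.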

Keywords: reactive transport, shale, kerogen, precipitation

Procedia PDF Downloads 162
54 Sonication as a Versatile Tool for Photocatalysts’ Synthesis and Intensification of Flow Photocatalytic Processes Within the Lignocellulose Valorization Concept

Authors: J. C. Colmenares, M. Paszkiewicz-Gawron, D. Lomot, S. R. Pradhan, A. Qayyum

Abstract:

This work reports selected recent experiments on photocatalysis intensification using flow microphotoreactors (fabricated by an ultrasound-based technique) for the photocatalytic selective oxidation of benzyl alcohol (BnOH) to benzaldehyde (PhCHO) (within the concept of lignin valorization), together with a proof of concept for intensifying a flow selective photocatalytic oxidation process by acoustic cavitation. The synthesized photocatalysts were characterized using different techniques, such as UV-Vis diffuse reflectance spectroscopy, X-ray diffraction, nitrogen sorption, thermogravimetric analysis, and transmission electron microscopy. More specifically, the work covers: a. Design and development of a metal-containing TiO₂-coated microflow reactor for the photocatalytic partial oxidation of benzyl alcohol: the current work introduces an efficient ultrasound-based deposition of metal (Fe, Cu, Co)-containing TiO₂ on the inner walls of a perfluoroalkoxy alkane (PFA) microtube under mild conditions. The experiments were carried out using commercial TiO₂ and sol-gel synthesized TiO₂. The rough surface formed during sonication is the site for the deposition of these nanoparticles on the inner walls of the microtube. The photocatalytic activities of these semiconductor-coated, fluoropolymer-based microreactors were evaluated for the selective oxidation of BnOH to PhCHO in the liquid flow phase. The analysis of the results showed that various features/parameters are crucial and that, by tuning them, it is feasible to improve benzyl alcohol conversion and benzaldehyde selectivity. Among all the metal-containing TiO₂ samples, the 0.5 at% Fe/TiO₂ photocatalyst (both iron and titanium being cheap, safe, and abundant metals) exhibited the highest BnOH conversion under visible light (515 nm) in a microflow system. This could be explained by its larger crystallite size, high porosity, and flake-like morphology. b.
Designing/fabricating photocatalysts by a sonochemical approach and testing them in an appropriate flow sonophotoreactor towards the sustainable selective oxidation of key organic model compounds of lignin: ultrasonication (US)-assisted precipitation and US-assisted hydro/solvothermal methods were used for the synthesis of metal-oxide-based and metal-free carbon-based photocatalysts, respectively. Additionally, we report selected experiments on the intensification of a flow photocatalytic selective oxidation through the use of ultrasonic waves. Our research effort is focused on the utilization of flow sonophotocatalysis for the selective transformation of lignin-based model molecules by nanostructured metal oxides (e.g., TiO₂) and metal-free carbocatalysts. A plethora of parameters that affect the acoustic cavitation phenomena, and as a result the potential of sonication, were investigated (e.g., ultrasound frequency and power). Various important photocatalytic parameters, such as the wavelength and intensity of the irradiated light, photocatalyst loading, type of solvent, mixture of solvents, and solution pH, were also optimized.

Keywords: heterogeneous photo-catalysis, metal-free carbonaceous materials, selective redox flow sonophotocatalysis, titanium dioxide

Procedia PDF Downloads 101
53 Colocalization Analysis to Understand Yttrium Uptake in Saxifraga paniculata Using Complementary Imaging Technics

Authors: Till Fehlauer, Blanche Collin, Bernard Angeletti, Andrea Somogyi, Claire Lallemand, Perrine Chaurand, Cédric Dentant, Clement Levard, Jerome Rose

Abstract:

Over the last decades, yttrium (Y) has gained importance in high-tech applications. It is an essential part of alloys and compounds used for lasers, displays, and cell phones, for example. Due to its chemical similarities with the lanthanides, Y is often considered a rare earth element (REE). Despite their increased usage, the environmental behavior of REEs remains poorly understood, and many uncertainties exist regarding their interactions with plants. On the one hand, Y is known to have a negative effect on root development and germination, but on the other hand, it appears to promote plant growth at low concentrations. In order to understand these phenomena, precise knowledge is necessary of how Y is absorbed by the plant and how it is handled once inside the organism. Contradictory studies exist: some state that, due to a similar ionic radius, Y and the other REEs might be absorbed through Ca²⁺ channels, while others suspect that Y shares a pathway with Al³⁺. In this study, laser ablation ICP-MS and synchrotron-based micro-X-ray fluorescence (µXRF, beamline Nanoscopium, SOLEIL, France) have been used to localize Y within the plant tissue and identify associated elements. The plant used in this study is Saxifraga paniculata, a rugged alpine plant that has shown an affinity for Y in previous studies (in prep.). Furthermore, Saxifraga paniculata performs guttation, meaning that it possesses sap-secreting openings on the leaf surface that serve to regulate root pressure. These so-called hydathodes could provide special insights into elemental transport in plants. The plants were grown on Y-doped soil (500 mg/kg DW) for four months. The results showed that Y was mainly concentrated in the roots of Saxifraga paniculata (260 ± 85 mg/kg), and only a small amount was translocated to the leaves (10 ± 7.8 mg/kg).
µXRF analysis indicated that, within the root transects, the majority of Y remained in the epidermis and hardly penetrated the stele. Laser ablation ICP-MS confirmed this finding and showed a positive correlation in the roots between Y, Fe, Al, and, to a lesser extent, Ca. In the stem transect, Y was mainly detected in a hotspot approximately 40 µm in diameter situated in the endodermis area. Within the stem, and especially in the hotspot, Y was highly colocalized with Al and Fe. Similar-sized Y hotspots were detected in/on the leaves. All of them were strongly colocalized with Al and Fe, except for those situated within the hydathodes, which showed no colocalization with any of the measured elements. Accordingly, a relation between Y and Ca during root uptake remains possible, whereas a correlation with Fe and Al appears to be dominant in the aerial parts, suggesting common storage compartments, the formation of complexes, or a shared pathway during translocation.
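Colocalization of this kind is often quantified with a pixel-wise Pearson coefficient between two element maps. The sketch below uses synthetic maps that mimic a shared hotspot plus an unrelated control signal; it illustrates the metric only, not the beamline analysis pipeline.

```python
import numpy as np

def pearson_colocalization(map_a, map_b):
    """Pixel-wise Pearson coefficient between two element maps:
    +1 means perfectly colocalized signals, 0 means unrelated."""
    return float(np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1])

rng = np.random.default_rng(7)

# Synthetic 64x64 maps: a shared Gaussian 'hotspot' plus independent
# noise, mimicking two elements accumulating in the same tissue region.
yy, xx = np.mgrid[0:64, 0:64]
hotspot = np.exp(-((xx - 32)**2 + (yy - 32)**2) / (2 * 5.0**2))
y_map = hotspot + 0.1 * rng.random((64, 64))
al_map = 0.8 * hotspot + 0.1 * rng.random((64, 64))
control_map = rng.random((64, 64))   # unrelated signal, like the hydathode
                                     # hotspots that colocalized with nothing
print("Y vs Al:     ", round(pearson_colocalization(y_map, al_map), 2))
print("Y vs control:", round(pearson_colocalization(y_map, control_map), 2))
```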

Keywords: laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), Phytoaccumulation, Rare earth elements, Saxifraga paniculata, Synchrotron-based micro-X-ray fluorescence, Yttrium

Procedia PDF Downloads 148
52 Nonequilibrium Effects in Photoinduced Ultrafast Charge Transfer Reactions

Authors: Valentina A. Mikhailova, Serguei V. Feskov, Anatoly I. Ivanov

Abstract:

In the last decade, nonequilibrium charge transfer has attracted considerable interest from the scientific community. Examples of such processes are charge recombination in excited donor-acceptor complexes and intramolecular electron transfer from the second excited electronic state. In these reactions the charge transfer proceeds predominantly in the nonequilibrium mode. In excited donor-acceptor complexes, the nuclear nonequilibrium is created by the pump pulse; in intramolecular electron transfer from the second excited electronic state, it is created by the forward electron transfer. The kinetics of these nonequilibrium reactions demonstrate a number of peculiar properties. The most important of them are: (i) the absence of the Marcus normal region in the free energy gap law for charge recombination in excited donor-acceptor complexes, (ii) the extremely low quantum yield of the thermalized charge-separated state in ultrafast charge transfer from the second excited state, (iii) the nonexponential charge recombination dynamics in excited donor-acceptor complexes, and (iv) the dependence of the charge transfer rate constant on the excitation pulse frequency. This report shows that most of these kinetic features can be well reproduced in the framework of a stochastic point-transition multichannel model. The model involves an explicit description of the formation of the nonequilibrium excited state by the pump pulse and accounts for the reorganization of intramolecular high-frequency vibrational modes, for their relaxation, and for the solvent relaxation. The model is able to quantitatively reproduce the complex nonequilibrium charge transfer kinetics observed in modern experiments. 
Interpreting the nonequilibrium effects from a unified point of view, in terms of the multichannel point-transition stochastic model, makes it possible to see similarities and differences of the electron transfer mechanism in various molecular donor-acceptor systems and to formulate general regularities inherent in these phenomena. The nonequilibrium effects in photoinduced ultrafast charge transfer that have been studied over the last 10 years are analyzed. The methods of suppressing ultrafast charge recombination, as well as similarities and dissimilarities of the electron transfer mechanism in different molecular donor-acceptor systems, are discussed. The extremely low quantum yield of the thermalized charge-separated state observed in ultrafast charge transfer from the second excited state in the complex of 1,2,4-trimethoxybenzene and tetracyanoethylene in acetonitrile solution directly demonstrates that its effectiveness can be close to unity. This experimental finding supports the idea that nonequilibrium charge recombination in excited donor-acceptor complexes can also be very effective, so that the fraction of thermalized complexes is negligible. The regularities inherent in equilibrium and nonequilibrium reactions are discussed, and their fundamental differences are analyzed, namely the opposite dependences of the charge transfer rates on the dynamical properties of the solvent: an increase in solvent viscosity decreases the thermal rate but increases the nonequilibrium rate. The dependences of the rates on the solvent reorganization energy and the free energy gap can also differ considerably. This work was supported by the Russian Science Foundation (Grant No. 16-13-10122).
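As a point of reference for the free energy gap law discussed in this abstract, the equilibrium (thermal) Marcus expression, rate ∝ exp(−(ΔG + λ)²/4λk_BT), produces both a normal and an inverted region; it is precisely the normal region that is reported absent in the nonequilibrium recombination kinetics. A minimal numerical sketch of that equilibrium baseline (the reorganization energy and gap values below are illustrative assumptions, not fitted to any system in the text):

```python
import math

def marcus_rate(dG, lam, kBT=0.0257):
    """Equilibrium Marcus rate with the prefactor set to 1;
    energies in eV, kBT defaulting to room temperature."""
    return math.exp(-(dG + lam) ** 2 / (4.0 * lam * kBT))

lam = 0.8  # assumed reorganization energy, eV
# Normal region: the rate grows as the gap deepens toward -lambda;
# inverted region: the rate falls again beyond that point.
shallow, optimal, deep = (marcus_rate(dG, lam) for dG in (-0.2, -0.8, -1.4))
```

In this equilibrium picture the rate peaks at ΔG = −λ and decreases on both sides; observation (i) above, the missing normal region, means the nonequilibrium recombination rate does not show the rise at small gaps that this baseline predicts.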

Keywords: Charge recombination, higher excited states, free energy gap law, nonequilibrium

Procedia PDF Downloads 321
51 Crisis Management and Corporate Political Activism: A Qualitative Analysis of Online Reactions toward Tesla

Authors: Roxana D. Maiorescu-Murphy

Abstract:

In the US, corporations have recently embraced political stances in an attempt to respond to the external pressure exerted by activist groups. To date, research in this area remains in its infancy, and few studies have been conducted on the way stakeholder groups respond to corporate political advocacy in general, and in the immediacy of such a corporate announcement in particular. The current study aims to fill this research void. In addition, the study contributes to an emerging trajectory in the field of crisis management by focusing on the delineation between crises (unexpected events related to products and services) and scandals (crises that spur moral outrage). The present study looked at online reactions in the aftermath of Elon Musk’s endorsement of the Republican party on Twitter. Two data sets were collected from Twitter following two political endorsements made by Elon Musk on May 18, 2022, and June 15, 2022, respectively. The total sample of analysis stemming from the two data sets consisted of N=1,374 user comments written in response to Musk’s initial tweets. Given the paucity of studies in the preceding research areas, the analysis employed a case study methodology, used in circumstances in which the phenomena to be studied have not been researched before. Following the case study methodology, which answers the questions of how and why a phenomenon occurs, this study addressed the research questions of how online users perceived Tesla and why they did so. The data were analyzed in NVivo using the grounded theory methodology, which implied multiple exposures to the text and an inductive-deductive approach. Through multiple exposures to the data, the researcher ascertained the common themes and subthemes in the online discussion. Each theme and subtheme was later defined and labeled. Additional exposures to the text ensured that these were exhaustive. 
The results revealed that the CEO’s political endorsements triggered moral outrage, leading to Tesla’s facing a scandal as opposed to a crisis. The moral outrage revolved around the stakeholders’ predominant rejection of a perceived intrusion by an influential figure into a domain reserved for voters. As expected, Musk’s political endorsements led to polarizing opinions, and those who opposed his views engaged in online activism aimed at boycotting the Tesla brand. These findings reveal that the moral outrage that characterizes a scandal requires communication practices that differ from those that practitioners currently borrow from the field of crisis management. Specifically, because scandals flourish in online settings, practitioners should regularly monitor stakeholder perceptions and address them in real time. While promptness is essential when managing crises, it becomes crucial to respond immediately as a scandal is flourishing online. Finally, attempts should be made to distance a brand, its products, and its CEO from the latter’s political views.

Keywords: crisis management, communication management, Tesla, corporate political activism, Elon Musk

Procedia PDF Downloads 91
50 Concentration of Droplets in a Transient Gas Flow

Authors: Timur S. Zaripov, Artur K. Gilfanov, Sergei S. Sazhin, Steven M. Begg, Morgan R. Heikal

Abstract:

The calculation of the concentration of inertial droplets in complex flows is encountered in the modelling of numerous engineering and environmental phenomena; for example, fuel droplets in internal combustion engines and airborne pollutant particles. The results of recent research, focused on the development of methods for calculating concentration and their implementation in the commercial CFD code ANSYS Fluent, are presented here. The study is motivated by the investigation of mixture preparation processes in internal combustion engines with direct injection of fuel sprays. Two methods are used in our analysis: the Fully Lagrangian method (also known as the Osiptsov method) and the Eulerian approach. The Osiptsov method predicts droplet concentrations along path lines by solving the equations for the components of the Jacobian of the Eulerian-Lagrangian transformation. This method significantly decreases the computational requirements, as it does not require tracking large numbers of droplets as in the conventional Lagrangian approach. In the Eulerian approach, the average droplet velocity is expressed as a function of the carrier phase velocity via an expansion in the droplet response time, and the transport equation can be solved in Eulerian form. The advantage of this method is that the droplet velocity can be found without solving additional partial differential equations for the droplet velocity field. The predictions of the two approaches were compared in the analysis of a dilute gas-droplet flow around an infinitely long circular cylinder. The concentrations of inertial droplets, with Stokes numbers of 0.05, 0.1, and 0.2, in steady-state and transient laminar flow conditions, were determined at various Reynolds numbers. In the steady-state case, flows with Reynolds numbers of 1, 10, and 100 were investigated. 
It has been shown that the results predicted using both methods are almost identical at small Reynolds and Stokes numbers. For larger values of these numbers (Stokes numbers of 0.1 and 0.2; Reynolds numbers of 10 and 100), the Eulerian approach predicted a wider spread in concentration in the perturbations caused by the cylinder, which can be attributed to the averaged droplet velocity field. The transient droplet flow case was investigated for a Reynolds number of 200. Both methods predicted high droplet concentrations in zones of high strain rate and low concentrations in zones of high vorticity. The maxima of droplet concentration predicted by the Osiptsov method were up to two orders of magnitude greater than those predicted by the Eulerian method; a significant variation for an approach widely used in engineering applications. Based on these comparisons, the Osiptsov method gives a more precise description of the local properties of the inertial droplet flow. The method has been applied to the analysis of experimental observations of a liquid gasoline spray at representative fuel injection pressure conditions. The preliminary results show good qualitative agreement between the predictions of the model and the experimental data.
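The core idea of the Fully Lagrangian (Osiptsov) approach described above can be illustrated in one dimension: alongside the droplet equation of motion with Stokes drag, one integrates the Jacobian J = ∂x/∂x₀ of the Lagrangian-to-Eulerian map, and the number density follows from n/n₀ = 1/|J| without tracking a droplet ensemble. The sketch below is a minimal illustration under assumed conditions (a converging carrier flow u(x) = −x and forward Euler integration), not the authors' implementation:

```python
def osiptsov_1d(u, du_dx, x0, v0, tau, t_end, dt=1e-3):
    """Fully Lagrangian (Osiptsov) method in 1D.

    Integrates droplet motion with Stokes drag (response time tau)
    together with the Jacobian J = dx/dx0 of the Lagrangian-to-Eulerian
    map; the droplet number density follows from n/n0 = 1/|J|.
    """
    x, v = x0, v0
    J, w = 1.0, 0.0  # J = dx/dx0, w = dv/dx0 (initial velocity uniform in x0)
    t = 0.0
    while t < t_end:
        ax = (u(x) - v) / tau           # Stokes drag on the droplet
        aw = (du_dx(x) * J - w) / tau   # same drag law, differentiated in x0
        x += dt * v
        v += dt * ax
        J += dt * w
        w += dt * aw
        t += dt
    return x, 1.0 / abs(J)  # final position and concentration ratio n/n0

# Converging flow u(x) = -x: trajectories crowd together, so J shrinks
# and the predicted concentration ratio exceeds one.
x_final, n_ratio = osiptsov_1d(lambda x: -x, lambda x: -1.0,
                               x0=1.0, v0=0.0, tau=0.1, t_end=1.0)
```

Zones where J approaches zero are where the method flags strong droplet accumulation, which is how it resolves the concentration maxima that the averaged Eulerian field smears out.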

Keywords: internal combustion engines, Eulerian approach, fully Lagrangian approach, gasoline fuel sprays, droplets and particle concentrations

Procedia PDF Downloads 257
49 The Temperature Degradation Process of Siloxane Polymeric Coatings

Authors: Andrzej Szewczak

Abstract:

The study of the effect of high temperatures on polymer coatings represents an important field of research into their properties. Polymers, as materials with numerous advantages (chemical resistance, ease of processing and recycling, corrosion resistance, low density and weight), are currently among the most widely used modern building materials, appearing, among other applications, in resin concrete, plastic parts, and hydrophobic coatings. Unfortunately, polymers also have disadvantages, one of which limits their usage: low resistance to high temperatures, accompanied by brittleness. This applies in particular to thin and flexible polymeric coatings applied to other materials, such as steel and concrete, which degrade under varying thermal conditions. Research aimed at improving this situation includes methods of modifying the polymer composition, structure, conditioning conditions, and the polymerization reaction. At present, ways are sought to reproduce the actual environmental conditions in which the coating will operate after it has been applied to another material. These studies are difficult because of the need to adopt a proper model of the polymer's operation and to determine the phenomena occurring during temperature fluctuations. For this reason, alternative methods are being developed that allow rapid modeling and simulation of the actual operating conditions of polymeric coating materials. Temperature influence in the environment is typically of long duration, so studies usually involve measuring the variation of one or more physical and mechanical properties of the coating over time. Based on these results it is possible to determine the effects of temperature loading and to develop methods for improving the coatings' properties. This paper contains a description of stability studies of silicone coatings deposited on the surface of a ceramic brick. 
The brick’s surface was hydrophobized with two types of inorganic polymers: a nano-polymer preparation based on dialkyl siloxanes (series 1-5) and an aqueous silicon solution (series 6-10). In order to enhance the stability of the film formed on the brick’s surface and to make it resistant to variable temperature and humidity loading, nano-silica was added to the polymer. The right combination of the liquid polymer phase and the solid nano-silica phase was obtained by disintegrating the mixture by sonication. The changes in viscosity and surface tension of the polymers were determined, these being the basic rheological parameters affecting the state and durability of the polymer coating. The coatings created on the brick surfaces were then subjected to a temperature loading of 100 °C and to moisture by total immersion in water, in order to determine any water absorption changes caused by damage and degradation of the polymer film. The effect of moisture and temperature was determined by measuring (at a specified number of cycles) the changes in surface hardness (using the Vickers method) and the absorption of individual samples. As a result, on the basis of the obtained results, the degradation process of the polymer coatings, related to changes in their durability over time, was determined.

Keywords: silicones, siloxanes, surface hardness, temperature, water absorption

Procedia PDF Downloads 242
48 The Real Ambassador: How Hip Hop Culture Connects and Educates across Borders

Authors: Frederick Gooding

Abstract:

This paper explores how many Hip Hop artists have intentionally and strategically invoked the sustainability principles of people, planet, and profits as a means to create community and to compensate for and cope with structural inequalities in society. These themes not only create community within one’s country; the powerful display and demonstration of these narratives create community on a global plane. Listeners of Hip Hop are therefore able to learn about the political events occurring in another country free of censure, and to establish solidarity worldwide. Hip Hop can therefore be an ingenious tool to create self-worth, recycle positive imagery, and serve as a defense mechanism against the institutional and structural forces that conspire to make an upward economic and social trajectory difficult, if not impossible, for many people of color all across the world. Although the birthplace of Hip Hop, the United States of America, is still predominantly White, it has undoubtedly grown more diverse at a breathtaking pace in recent decades. Yet whether American mainstream media will fully reflect America’s newfound diversity remains to be seen. As it stands, American mainstream media is seen and enjoyed by diverse audiences not just in America but all over the world. Thus, it is imperative that further inquiry be conducted into one of the fastest growing genres within one of the world’s largest and most influential media industries, generating upwards of $10 billion annually. More importantly, hip hop, its music, and its associated culture collectively represent a shared social experience of significant value. They are important tools used both to inform and to influence economic, social, and political identity. Conversely, principles of American exceptionalism often prioritize American political issues over those of others, thereby rendering a myopic political view within the mainstream. 
This paper will therefore engage in an international contextualization of the global phenomenon known as Hip Hop by exploring its creative genius and marketing appeal within the global context of information technology, political expression, and social change, in addition to taking a critical look at historically racialized imagery within mainstream media. Many artists the world over have been able to freely express themselves and connect with broader communities outside of their own borders through the sound practice of the craft of Hip Hop. An empirical understanding of political, social, and economic forces within the United States will serve as a bridge for identifying and analyzing transnational themes of commonality for typically marginalized or disaffected communities facing similar struggles for survival and respect. The sharing of commonalities among marginalized cultures not only serves as a source of education outside of typically myopic mainstream sources, but also creates transnational bonds globally, to the extent that practicing artists resonate with many of the original themes of (now mostly underground) Hip Hop, as did many of the African American artists responsible for creating and fostering Hip Hop’s powerful outlet of expression. Hip Hop’s power of connectivity and culture-sharing transnationally across borders provides a key source of education to be taken seriously by academics.

Keywords: culture, education, global, hip hop, mainstream music, transnational

Procedia PDF Downloads 100
47 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry

Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard

Abstract:

The wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, poor or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns, and hence to device defectivity. This issue becomes more and more important with transistor size shrinkage and mainly concerns high aspect ratio structures. Deep Trench Isolation (DTI) structures, which enable pixel isolation in imaging devices, are subject to this phenomenon. While low-frequency acoustic reflectometry is a well-known method for non-destructive testing applications, we have recently shown that it is also well suited to nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a confrontation of experimental and modeling results. The proposed acoustic method is based on the evaluation of the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers have been fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The DTI structures studied are crossing trenches, 200 nm wide and 4 µm deep (aspect ratio of 20), etched into the wafer frontside. In that case, the acoustic signal is reflected at both the bottom and the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (from the top and bottom of the DTI) arising from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed. 
The model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo, obtained through reflection at the Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water/ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. In the untreated case, the acoustic reflection coefficient values obtained with water show that liquid imbibition is partial. In the treated case, the acoustic reflection with water is total (no liquid in the DTI). Impalement of the liquid occurs at a specific surface tension, but wetting remains partial even for pure ethanol. The DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. The sensitivity of this high-frequency acoustic method, coupled with an FDTD propagation model, thus enables the local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low surface tension liquids are then detectable with this method.
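The FDTD approach invoked above can be illustrated with a minimal one-dimensional acoustic sketch (not the authors' model; the geometry, normalized material values, and grid sizes below are assumed for illustration): a pressure pulse launched in one medium reflects off an interface with a second medium, and the measured echo amplitude can be checked against the analytic reflection coefficient R = (Z₂ − Z₁)/(Z₂ + Z₁), the same quantity the experiment monitors to infer whether liquid fills the trenches.

```python
import numpy as np

# 1D acoustic FDTD on a staggered grid: pressure p at cell centers,
# velocity u at cell faces. Two media meet at an interface; the
# reflection of a pulse is compared with the analytic coefficient.
N, dx = 1000, 1.0
iface, probe = 500, 400
c = np.where(np.arange(N) < iface, 1.0, 0.5)  # sound speed per cell
rho = np.ones(N)                              # density per cell
K = rho * c**2                                # bulk modulus
dt = 0.4 * dx / c.max()                       # CFL-stable time step

# Gaussian pressure pulse; with u = 0 it splits into two half-amplitude
# pulses, and the right-going half strikes the interface.
p = np.exp(-((np.arange(N) - 200) * dx) ** 2 / (2 * (10 * dx) ** 2))
u = np.zeros(N + 1)                           # face velocities, rigid walls

incident, reflected = 0.0, 0.0
for step in range(1300):
    u[1:-1] -= dt / (0.5 * (rho[:-1] + rho[1:])) * (p[1:] - p[:-1]) / dx
    p -= dt * K * (u[1:] - u[:-1]) / dx
    incident = max(incident, p[probe])        # positive incident peak
    reflected = min(reflected, p[probe])      # negative (inverted) echo peak

R_measured = reflected / incident
R_analytic = (0.5 - 1.0) / (0.5 + 1.0)        # Z = rho * c, so R = -1/3
```

With the softer second medium used here (Z₂ < Z₁), the echo returns inverted and the measured ratio approaches −1/3; swapping in different second-medium properties mimics the dry versus wetted trench-bottom cases that change the monitored reflection.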

Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor

Procedia PDF Downloads 326
46 Improved Elastoplastic Bounding Surface Model for the Mathematical Modeling of Geomaterials

Authors: Andres Nieto-Leal, Victor N. Kaliakin, Tania P. Molina

Abstract:

The nature of most engineering materials is quite complex. It is, therefore, difficult to devise a general mathematical model that covers all possible ranges and types of excitation and behavior of a given material. As a result, the development of mathematical models is based upon simplifying assumptions regarding material behavior. Such simplifications result in some material idealization; for example, one of the simplest material idealizations is to assume that the material behaves elastically. However, soils are nonhomogeneous, anisotropic, path-dependent materials that exhibit nonlinear stress-strain relationships, changes in volume under shear, dilatancy, as well as time-, rate-, and temperature-dependent behavior. Over the years, many constitutive models, possessing different levels of sophistication, have been developed to simulate the behavior of geomaterials, particularly cohesive soils. Early in the development of constitutive models, it became evident that elastic or standard elastoplastic formulations, employing purely isotropic hardening and predicated on the existence of a yield surface surrounding a purely elastic domain, were incapable of realistically simulating the behavior of geomaterials. Accordingly, more sophisticated constitutive models have been developed; for example, bounding surface elastoplasticity. The essence of the bounding surface concept is the hypothesis that plastic deformations can occur for stress states either within or on the bounding surface. Thus, unlike classical yield surface elastoplasticity, plastic states are not restricted to those lying on a surface. Elastoplastic bounding surface models have been improved over time; however, there is still a need to improve their capabilities in simulating the response of anisotropically consolidated cohesive soils, especially the response in extension tests. 
Thus, in this work an improved constitutive model that can more accurately predict the diverse stress-strain phenomena exhibited by cohesive soils was developed; in particular, an improved rotational hardening rule that better simulates the response of cohesive soils in extension. The generalized definition of the bounding surface model provides a convenient and elegant framework for unifying various previous versions of the model for anisotropically consolidated cohesive soils. The Generalized Bounding Surface Model for cohesive soils is a fully three-dimensional, time-dependent model that accounts for both inherent and stress-induced anisotropy, employing a non-associative flow rule. The numerical implementation of the model in a computer code followed an adaptive multistep integration scheme in conjunction with local iteration and radial return. The one-step trapezoidal rule was used to obtain the stiffness matrix that defines the relationship between the stress increment and the strain increment. After testing the model through extensive comparisons of simulations to experimental data for cohesive soils, it has been shown to give quite good simulations. The new model successfully simulates the response of different cohesive soils; for example, Cardiff Kaolin, Spestone Kaolin, and Lower Cromer Till. The simulated undrained stress paths, stress-strain response, and excess pore pressures are in very good agreement with the experimental values, especially in extension.

Keywords: bounding surface elastoplasticity, cohesive soils, constitutive model, modeling of geomaterials

Procedia PDF Downloads 314
45 A Long-Standing Methodology Quest Regarding Commentary of the Qur’an: Modern Debates on Function of Hermeneutics in the Quran Scholarship in Turkey

Authors: Merve Palanci

Abstract:

This paper aims to reveal and analyze methodology debates on Qur’an commentary in Turkish scholarship and to draw sound conclusions about the current situation, with reference to the literature evolving around the credibility of hermeneutics in the case of Qur’an commentary and the methodological connotations related to it, together with other modern approaches to the Qur’an. It is fair to say that Tafseer, constituting one of the main parts of the basic Islamic sciences, has long drawn great attention from both Muslim and non-Muslim scholars. With the emplacement of an acute junction between the natural sciences and the social sciences in the post-Enlightenment period, this interest seems to have paved the way for methodology discussions conducted within theology circles, occupying a noticeable slot in the Tafseer literature as well. A panoramic glance at the classical treatises on the methodology of Tafseer, namely Usul al-Tafseer, leads the reader to the conclusion that these classics are intrinsically aimed at introducing the Qur’an and the early history of its formation as a corpus and at providing a better understanding of its content. To illustrate, the earliest extant methodology work on Qur’an commentary, al-Aql wa’l Fahm al-Qur’an by Harith al-Muhasibi, covers content that deals with the Qur’an’s rhetoric, its muhkam and mutashabih, abrogation, etc. Most of the themes in question evidently share a common ground: understanding the Scripture and producing an accurate commentary built on this preliminary phenomenon of understanding. The content of other renowned works with an overtone of Tafseer methodology, such as Funun al-Afnan and al-Iqsir fi Ilm al-Tafseer, and of succeeding ones such as al-Itqan and al-Burhan, is also rich in hints related to the preliminary phenomenon of understanding. However, these works are not eligible to be classified as full-fledged methodology manuals assuring a true understanding of the Qur’an. 
Hermeneutics is believed to supply substantial data applicable to Qur’an commentary, as it deals with the nature of understanding itself. Referring to the latest tendencies in Tafseer methodology, this paper centers on hermeneutical debates in the modern scholarship of Qur’an commentary and on the incentives that lead scholars to turn to hermeneutics in the Tafseer literature. Guided by these incentives, the study involves three parts. In the introductory part, the paper introduces the key features of the classical methodology works in general terms and traces the main methodological shifts of modern times in Qur’an commentary. To this end, the revisionist école, scientific Qur’an commentary ventures, and thematic Qur’an commentary are included and analyzed briefly; historical-critical commentary on the Qur’an, as it bears a close relationship with hermeneutics, is handled predominantly. The second part addresses the hermeneutical nature of understanding the Scripture, providing a timeline for the beginning of hermeneutics debates in Tafseer; Fazlur Rahman’s (d. 1988) influence is manifested to establish a theoretical bridge. In the following part, reactions against the application of hermeneutics in Tafseer activity, as well as pro-hermeneutics works, are revealed through cross-references to the prominent figures of both, and the literature in question in theology scholarship in Turkey is explored critically.

Keywords: hermeneutics, Tafseer, methodology, Ulum al- Qur’an, modernity

Procedia PDF Downloads 72
44 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

Evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory explanation for these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but an accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as natural life origin and development, is a reality. Evolution is a coordinated and controlled process. One of evolution’s main development vectors is the growing computational complexity of living organisms and of the biosphere’s intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. Information acts like software stored in and controlled by the biosphere. 
Random mutations trigger this software, as stipulated by Darwinian evolutionary theory, and it is further stimulated by the growing demand for the Biosphere’s global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere’s computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. It logically resolves many puzzling problems of current evolutionary theory: speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, happening when the two conditions a) and b) above are met; the Cambrian explosion; and mass extinctions, happening when more intelligent species should replace outdated creatures.

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 164
43 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model

Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero

Abstract:

Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing the lithium-ion battery parameters, such as the adjustment direction for the material particle size or the diffusion coefficients. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs) describing Fick's law of diffusion, the MacInnes and Ohm's equations, among other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. Several numerical methods are available in the literature to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It combines the implicit and explicit Euler methods, with the advantage of being second-order in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests.
This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system identification or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the non-eligibility of the simple Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods will be compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include long-term simulations for aging tests as well as short- and mid-term battery charge/discharge cycles, typically relevant in battery applications like grid primary frequency and inertia control and electric vehicle braking and acceleration.
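The Crank-Nicolson scheme discussed above can be illustrated on a generic 1D diffusion equation of the form appearing in the electrolyte phase, dc/dt = D d2c/dx2. The following is a minimal sketch, not the authors' implementation: the function name, the zero-flux boundary treatment, and the use of the Thomas algorithm for the tridiagonal solve are all assumptions made for illustration.

```python
def crank_nicolson_step(c, r):
    """One Crank-Nicolson step of dc/dt = D d2c/dx2 on a uniform 1D grid
    with zero-flux (Neumann) boundaries, where r = D*dt/dx**2.
    Solves (I - r/2*L) c_new = (I + r/2*L) c_old, L being the discrete
    Laplacian, via the Thomas algorithm for tridiagonal systems."""
    n = len(c)
    # Right-hand side: explicit Euler half-step (I + r/2*L) c
    rhs = [0.0] * n
    for i in range(n):
        left = c[i - 1] if i > 0 else c[i]       # ghost node mirrors c[0]
        right = c[i + 1] if i < n - 1 else c[i]  # ghost node mirrors c[-1]
        rhs[i] = c[i] + 0.5 * r * (left - 2.0 * c[i] + right)
    # Tridiagonal coefficients of the implicit matrix (I - r/2*L)
    a = [-0.5 * r] * n           # sub-diagonal (a[0] unused)
    b = [1.0 + r] * n            # main diagonal
    b[0] = b[-1] = 1.0 + 0.5 * r # zero-flux rows
    d = [-0.5 * r] * n           # super-diagonal (d[-1] unused)
    # Thomas algorithm: forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * d[i - 1]
        rhs[i] -= m * rhs[i - 1]
    # Back substitution
    x = [0.0] * n
    x[-1] = rhs[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - d[i] * x[i + 1]) / b[i]
    return x
```

Because the update is unconditionally stable, r (and hence the time step dt) can be chosen from accuracy considerations alone, which is precisely the property that makes the scheme attractive for the long-term aging simulations mentioned above. The explicit Euler variant, by contrast, would require r <= 1/2 for stability.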

Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods

Procedia PDF Downloads 21
42 Worldwide GIS Based Earthquake Information System/Alarming System for Microzonation/Liquefaction and Its Application for Infrastructure Development

Authors: Rajinder Kumar Gupta, Rajni Kant Agrawal, Jaganniwas

Abstract:

One of the most frightening phenomena of nature is the occurrence of an earthquake, as it has terrible and disastrous effects. Many earthquakes occur every day worldwide, so there is a need for knowledge of the trends in earthquake occurrence worldwide. The recording and interpretation of data obtained from the worldwide network of seismological stations make this possible. From the analysis of recorded earthquake data, the earthquake and source parameters can be computed and earthquake catalogues can be prepared. These catalogues provide information on origin time, epicentre locations (in terms of latitude and longitude), focal depths, magnitudes, and other related details of the recorded earthquakes, and they are used for seismic hazard estimation. Manual interpretation and analysis of these data are tedious and time-consuming. A geographical information system (GIS) is a computer-based system designed to store, analyze, and display geographic information. The implementation of integrated GIS technology permits rapid evaluation of complex inventory databases under a variety of earthquake scenarios and allows the user to view results interactively, almost immediately. GIS technology thus provides a powerful tool for displaying outputs and lets users see the graphical distribution of the impacts of different earthquake scenarios and assumptions. In the present study, an endeavor has been made to compile earthquake data for the whole world in Visual Basic on the ArcGIS platform so that it can easily be used for further analysis by earthquake engineers. The basic data on the time of occurrence, location, and size of earthquakes have been compiled for querying based on various parameters. A preliminary analysis tool is also provided in the user interface to interpret the earthquake recurrence in a region.
The user interface also includes the seismic hazard information already worked out under the GSHAP program: the seismic hazard, in terms of probability of exceedance within definite return periods, is provided for the whole world. The seismic zones of the Indian region are included in the user interface from IS 1893-2002, the code on earthquake-resistant design of buildings. City-wise satellite images have been inserted in the map, and based on actual data the following information can be extracted in real time:
• Analysis of soil parameters and their effects
• Microzonation information
• Seismic hazard and strong ground motion
• Soil liquefaction and its effects on the surrounding area
• Impacts of liquefaction on buildings and infrastructure
• Occurrence of future earthquakes and their effect on existing soil
• Propagation of ground vibration due to the occurrence of an earthquake
A GIS-based earthquake information system has been prepared for the whole world in Visual Basic on the ArcGIS platform and further extended to the micro level based on actual soil parameters. Individual tools have been developed for liquefaction, earthquake frequency, etc. All this information can be used for infrastructure development, i.e., multi-storey structures, irrigation dams and their components, hydro-power plants, etc., in real time, for the present and the future.
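The catalogue querying described above (filtering events by time of occurrence, epicentre location, and magnitude) can be sketched as follows. This is a minimal illustration in Python, not the authors' Visual Basic/ArcGIS implementation; the record layout, function name, and sample events are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class QuakeRecord:
    """One earthquake catalogue entry (illustrative layout)."""
    origin_time: str   # ISO 8601 origin time
    lat: float         # epicentre latitude, degrees
    lon: float         # epicentre longitude, degrees
    depth_km: float    # focal depth
    magnitude: float

def query_catalogue(records, min_mag=0.0,
                    lat_range=(-90.0, 90.0), lon_range=(-180.0, 180.0)):
    """Filter a catalogue by minimum magnitude and an epicentre
    bounding box, preserving the original record order."""
    return [r for r in records
            if r.magnitude >= min_mag
            and lat_range[0] <= r.lat <= lat_range[1]
            and lon_range[0] <= r.lon <= lon_range[1]]

# Hypothetical sample catalogue and a regional query
catalogue = [
    QuakeRecord("2020-01-01T00:00:00Z", 28.0, 84.0, 10.0, 6.5),
    QuakeRecord("2020-02-01T00:00:00Z", 35.0, 139.0, 40.0, 5.1),
    QuakeRecord("2020-03-01T00:00:00Z", 27.5, 85.5, 12.0, 7.2),
]
hits = query_catalogue(catalogue, min_mag=6.0, lat_range=(25.0, 30.0))
```

In a GIS front end, such a filtered result set would then be joined to the map layers (seismic zones, microzonation, soil parameters) for display, which is the interactive workflow the abstract describes.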

Keywords: GIS based earthquake information system, microzonation, analysis and real time information about liquefaction, infrastructure development

Procedia PDF Downloads 315