Search results for: gas transfer
226 Solar Cell Packed and Insulator Fused Panels for Efficient Cooling in Cubesat and Satellites
Authors: Anand K. Vinu, Vaishnav Vimal, Sasi Gopalan
Abstract:
All spacecraft components have a range of allowable temperatures that must be maintained to meet survival and operational requirements during all mission phases. Due to heat absorption, transfer, and emission on one side, the satellite surface presents an asymmetric temperature distribution that causes a change in momentum, which can manifest in spinning and non-spinning satellites in different ways. This problem can cause orbital decay in a satellite which, if not corrected, will interfere with its primary objective. The thermal analysis of any satellite requires data from the power budget for each of the components used, because each component has different power requirements and is used at specific times in an orbit. Three different cases are run: the worst operational hot case, the worst non-operational cold case, and the operational cold case. Sunlight is a major source of heating of the satellite, and the way in which it affects the spacecraft depends on the distance from the Sun. Any part of a spacecraft or satellite facing the Sun will absorb heat (a net gain), and any part facing away will radiate heat (a net loss). We can use a state-of-the-art foldable hybrid insulator/radiator panel. When a panel is opened, that particular side acts as a radiator that dissipates heat. Here the insulator, in our case aerogel, is sandwiched between solar cells and radiator fins (solar cells outside and radiator fins inside). Each insulated side panel can be opened and closed using actuators depending on the telemetry data of the CubeSat. The opening and closing of the panels are governed by code designed for this particular application, in which the onboard computer calculates where the Sun is relative to the satellite. Based on the data obtained from the sensors, the computer decides which panel to open and by how many degrees. For example, if a panel opens 180 degrees, its solar cells will directly face the Sun, in turn increasing the current generated by that particular panel. Another example is when one of the corners of the CubeSat faces the Sun, or when more than one side has a considerable amount of sunlight incident on it; the code then determines the optimum opening angle for each panel and adjusts accordingly. The other means of cooling is passive cooling. It is the most suitable system for a CubeSat because of its limited power budget, low mass requirements, and less complex design; it also has advantages in terms of reliability and cost. One passive approach is to make the whole chassis act as a heat sink. For this, we can build the chassis out of heat pipes and connect the heat source to it with a thermal strap that transfers the heat to the chassis.
Keywords: passive cooling, CubeSat, efficiency, satellite, stationary satellite
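A minimal sketch of the panel-angle decision logic described above, assuming a unit Sun vector in the CubeSat body frame (e.g. derived from sun-sensor telemetry). The panel normals, the proportional opening rule, and the 180-degree maximum are illustrative assumptions, not the flight code referred to in the abstract.

```python
import numpy as np

# Hypothetical body-frame outward normals of four hinged side panels.
PANEL_NORMALS = {
    "+X": np.array([1.0, 0.0, 0.0]),
    "-X": np.array([-1.0, 0.0, 0.0]),
    "+Y": np.array([0.0, 1.0, 0.0]),
    "-Y": np.array([0.0, -1.0, 0.0]),
}

def panel_commands(sun_vector_body, max_open_deg=180.0):
    """Return an opening angle (degrees) for each hinged panel.

    Panels whose normals face away from the Sun stay closed (0 deg) and act
    as insulated sides; sunlit panels open in proportion to the cosine of the
    incidence angle, up to max_open_deg so the solar cells face the Sun.
    """
    s = np.asarray(sun_vector_body, dtype=float)
    s = s / np.linalg.norm(s)
    commands = {}
    for name, normal in PANEL_NORMALS.items():
        cos_incidence = float(np.dot(s, normal))
        commands[name] = 0.0 if cos_incidence <= 0.0 else max_open_deg * cos_incidence
    return commands

# Example: Sun roughly towards the +X/+Y corner, so both panels open partially.
print(panel_commands([0.7, 0.7, 0.1]))
```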
Procedia PDF Downloads 100
225 The Effectiveness of Multiphase Flow in Well-Control Operations
Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia
Abstract:
Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to losses of human life and drilling facilities. Current practices for well control incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid and formation gas that is miscible in the drilling fluid. The current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as in managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of fluids as well as their non-Newtonian properties to enable realistic kick treatment. The research work considers and implements models that account for the effect of two phases in kick treatment for well control in conventional drilling, and a corresponding numerical solution method is built with an enriched data bank. The software STAR-CCM+ was used for the computational studies of the important parameters that describe wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was put on the near-drill-bit section. This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom hole pressure could be underestimated by 4.2%, while the bottom hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom hole pressure could be overestimated by 11.4% under steady flow conditions. Besides, a larger reservoir pressure leads to a larger gas fraction in the wellbore. However, reservoir pressure has a minor effect on the steady wellbore temperature. Also, as choke pressure increases, less gas will exist in the annulus in the form of free gas.
Keywords: multiphase flow, well control, STAR-CCM+, petroleum engineering and gas technology, computational fluid dynamics
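The quoted sensitivities of bottom hole pressure to free gas can be illustrated with a minimal homogeneous (no-slip) hydrostatic estimate. This is only a sketch of the underlying mixture-density idea, not the STAR-CCM+ model used in the study; the densities, depth, and gas-fraction profile below are illustrative assumptions.

```python
# Minimal homogeneous two-phase hydrostatic estimate (illustrative only).
G = 9.81  # gravitational acceleration, m/s^2

def mixture_density(rho_mud, rho_gas, gas_volume_fraction):
    """Volume-weighted density of a mud-gas mixture under a no-slip assumption."""
    a = gas_volume_fraction
    return (1.0 - a) * rho_mud + a * rho_gas

def bottom_hole_pressure(depth_m, rho_mud=1500.0, rho_gas=150.0,
                         gas_fraction_profile=lambda z: 0.0, steps=1000):
    """Integrate hydrostatic pressure down the annulus.

    gas_fraction_profile(z) gives the local free-gas volume fraction at depth z.
    When more gas is dissolved in the mud (higher solubility), the free-gas
    fraction drops and the computed bottom hole pressure rises, which is the
    effect quantified in the abstract.
    """
    dz = depth_m / steps
    pressure = 101_325.0  # surface pressure, Pa
    for i in range(steps):
        z = (i + 0.5) * dz
        pressure += mixture_density(rho_mud, rho_gas, gas_fraction_profile(z)) * G * dz
    return pressure

# Example: free gas concentrated in the top 500 m of a 3000 m annulus.
p = bottom_hole_pressure(3000.0, gas_fraction_profile=lambda z: 0.2 if z < 500.0 else 0.0)
print(f"bottom hole pressure ~ {p / 1e6:.1f} MPa")
```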
Procedia PDF Downloads 119
224 Teacher Professional Development in Saudi Arabia through the Implementation of Universal Design for Learning
Authors: Majed A. Alsalem
Abstract:
Universal Design for Learning (UDL) is a common theme in education across the US and an influential model and framework that enables students in general, and particularly students who are deaf and hard of hearing (DHH), to access the general education curriculum. UDL helps teachers determine how information will be presented to students and how to keep students engaged. Moreover, UDL helps students express their understanding and knowledge to others. UDL relies on technology to promote students' interaction with content and their communication of knowledge. This study included 120 DHH students who received daily instruction based on UDL principles; it presents the results and discusses their implications for the integration of UDL in day-to-day practice as well as in the country's education policy. UDL is a Western concept that began and grew in the US, and it has only recently begun to transfer to other countries such as Saudi Arabia. It will be very important for researchers, practitioners, and educators to see how UDL is being implemented in a new place with a different culture. UDL is a framework built to provide multiple means of engagement, representation, and action and expression that should be part of curricula and lessons for all students. The purpose of this study is to investigate the variables associated with the implementation of UDL in Saudi Arabian schools and to identify the barriers that could prevent its implementation. Therefore, this study used a mixed-methods design that combines quantitative and qualitative methods. More insight is gained by including both rather than using a single method; by combining methods that draw on different concepts and approaches, the resulting database is enriched. This study collects data in two stages in order to ensure that the data come from multiple sources, mitigating validity threats and establishing trustworthiness in the findings. The rationale and significance of this study is that it will be the first known research that targets UDL in Saudi Arabia. Furthermore, it will deal with UDL in depth to set the path for further studies in the Middle East. In terms of content, this study considers teachers' implementation knowledge, skills, and concerns about implementation. It deals with effective instructional designs that have not been presented in any conferences, workshops, teacher preparation, or professional development programs in Saudi Arabia. Specifically, Saudi Arabian schools are challenged to design inclusive schools and practices as well as to support all students' academic skills development. The total number of participants in stage one was 336 teachers of DHH students. The results of the intervention indicated significant differences in teachers' understanding and level of concern before and after taking the training sessions. Teachers indicated interest in knowing more about UDL and adopting it into their practices; they reported that UDL has benefits that will enhance their performance in supporting student learning.
Keywords: deaf and hard of hearing, professional development, Saudi Arabia, universal design for learning
Procedia PDF Downloads 432
223 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Polya were the first significant compilation of this kind; that work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy–Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have then been extended and improved by various integral operators. These inequalities would be interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale Hardy and Copson inequalities in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that will be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs can be carried out by introducing restrictions on the operator in several cases. Concepts from time-scale calculus are used, which allow many problems from the theories of differential and difference equations to be unified and extended. In addition, the chain rule, some properties of multiple integrals on time scales, Fubini-type theorems, and Hölder's inequality are employed.
Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator
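For reference, the one-dimensional classical Hardy inequality on which the study builds reads, for p > 1 and a non-negative measurable function f:

```latex
\int_0^{\infty}\left(\frac{1}{x}\int_0^{x} f(t)\,dt\right)^{p}\,dx
\;\le\;
\left(\frac{p}{p-1}\right)^{p}\int_0^{\infty} f(x)^{p}\,dx .
```

The time-scale and Steklov-operator versions studied here replace the inner average with the corresponding integral operators on an arbitrary time scale; their precise statements are those given in the work itself.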
Procedia PDF Downloads 95
222 A Study on the Acquisition of Chinese Classifiers by Vietnamese Learners
Authors: Quoc Hung Le Pham
Abstract:
In the field of language study, the classifier is an interesting research feature. Among the world's languages, some have a classifier system and some do not. Mandarin Chinese and Vietnamese both have rich classifier systems; however, because of differences in the language systems, in cognition, and in culture, the syntactic structures of their classifiers are also dissimilar. In use, Mandarin Chinese classifiers must collocate with nouns or verbs; as a lexical category they are unlike nouns or verbs, which belong to the open class. Some scholars, however, believe that Mandarin Chinese measure words are similar to those of English and other Indo-European languages: the word depends on structure and word formation (as a suffix) and forms a closed class. By comparison, Chinese, Vietnamese, Thai, and other Asian languages belong to the second type of classifier language, in which a classifier must in most cases be present, appearing together with deictic, anaphoric, or quantity elements and not separated from the noun it modifies; such languages are also known as numeral classifier languages. The main syntactic structures of Chinese classifiers are as follows: 'quantity + measure + noun', 'pronoun + measure + noun', 'pronoun + quantity + measure + noun', 'prefix + quantity + measure + noun', 'quantity + adjective + measure + noun', 'quantity (whole number above 10) + duo (多) + measure + noun', 'quantity (around 10) + measure + duo (多) + noun'. The main syntactic structures of Vietnamese classifiers are: 'quantity + measure + noun', 'measure + noun + pronoun', 'quantity + measure + noun + pronoun', 'measure + noun + prefix + quantity', 'quantity + measure + noun + adjective', 'duo (多) + quantity + measure + noun', 'quantity + measure + adjective + pronoun (the quantity word cannot be 1)', 'measure + adjective + pronoun', 'measure + pronoun'. In daily life, classifiers are commonly used, and if Chinese learners fail to use this category in the standard way, a negative impact may occur on their verbal communication. The richness of the Chinese classifier system contributes to the complexity of its study by foreign learners, especially in the interlanguage of Vietnamese learners. As mentioned above, Vietnamese also has a rich system of classifiers; the basic structural orders of the two languages are similar, but they still differ. These similarities and dissimilarities between the Chinese and Vietnamese classifier systems contribute significantly to the common errors made by Vietnamese students while they acquire Chinese, which are distinct from the errors made by students from other language backgrounds. From a comparative linguistic perspective, this article examines the classifiers commonly used in Chinese and Vietnamese in two respects: their semantics and their structural forms. This comparative study aims to identify the negative transfer from the mother tongue that Vietnamese students may face while learning Chinese classifiers and, through the analysis of a classifier questionnaire, to find out the causes and patterns of the errors they make. As the preliminary analysis shows, Vietnamese students learning Chinese classifiers made errors such as: overuse of the classifier 'ge' (个); misuse of other classifiers, e.g. '*yi zhang ri ji' (yi pian ri ji), '*yi zuo fang zi' (yi jian fang zi), '*si zhang jin pai' (si mei jin pai); and confusion among closely related classifiers such as 'dui, shuang, fu, tao' (对、双、副、套) and 'ke, li' (颗、粒).
Keywords: acquisition, classifiers, negative transfer, Vietnamese learners
Procedia PDF Downloads 453
221 Applying the View of Cognitive Linguistics on Teaching and Learning English at UFLS - UDN
Authors: Tran Thi Thuy Oanh, Nguyen Ngoc Bao Tran
Abstract:
In the view of Cognitive Linguistics (CL), knowledge and experience of things and events are used by human beings in expressing concepts, especially in their daily life. The human conceptual system is considered to be fundamentally metaphorical in nature. It is also said that the way we think, what we experience, and what we do every day is very much a matter of language. In fact, language is an integral factor of cognition, in that CL is a family of broadly compatible theoretical approaches sharing this fundamental assumption. The relationship between language and thought has, of course, been addressed by many scholars; CL, however, strongly emphasizes specific features of this relation. Through experience, we gain knowledge of life. Partial, concrete things serve as ideal source domains, and we make use of all aspects of such a domain in metaphorically understanding abstract targets. The paper reports on applying this theory in pragmatics lessons for English-major students at the University of Foreign Language Studies - The University of Da Nang (UFLS - UDN), Viet Nam. We conducted the study with two groups of third-year students taking English pragmatics lessons. To clarify this study, the data from these two classes were collected for analyzing linguistic perspectives in the view of CL and in terms of traditional concepts. Descriptive, analytic, synthetic, comparative, and contrastive methods were employed to analyze data from 50 students undergoing English pragmatics lessons. One group was taught how to transfer the meanings of expressions in daily life with the view of CL, while the other group used the traditional view. The research indicated that both approaches had a significant influence on students' English translating and interpreting abilities. However, the traditional approach had little effect on students' understanding, whereas the CL view had a considerable impact. The study compared CL and traditional teaching approaches to identify the benefits and challenges associated with incorporating CL into the curriculum. It seeks to extend CL concepts by analyzing metaphorical expressions in daily conversations, offering insights into how CL can enhance language learning. The findings shed light on the effectiveness of applying CL in teaching and learning English pragmatics. They highlight the advantages of using metaphorical expressions from daily life to facilitate understanding and explore how CL can enhance cognitive processes in language learning in general, and in teaching English pragmatics to third-year students at UFLS - UDN, Vietnam in particular. The study contributes to the theoretical understanding of the relationship between language, cognition, and learning. By emphasizing the metaphorical nature of human conceptual systems, it offers insights into how CL can enrich language teaching practices and enhance students' comprehension of abstract concepts.
Keywords: cognitive linguistics, Lakoff and Johnson, pragmatics, UFLS
Procedia PDF Downloads 36
220 The Value of Computerized Corpora in EFL Textbook Design: The Case of Modal Verbs
Authors: Lexi Li
Abstract:
This study aims to contribute to the field of how computer technology can be exploited to enhance EFL textbook design. Specifically, the study demonstrates how computerized native and learner corpora can be used to enhance modal verb treatment in EFL textbooks. The linguistic focus is will, would, can, could, may, might, shall, should, must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014). The spoken part is chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL). All the essays under the "secondary school" section were selected. A series of five secondary coursebooks comprise the textbook corpus. All the data in both the learner and the textbook corpora are retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was compared with the textbook corpus in terms of use (distributional features, semantic functions, and co-occurring constructions) in order to examine the degree of influence of the textbook on learners' use of modal verbs. Moreover, the learner corpus was analyzed for misuse (syntactic errors, e.g., *she can sings) of the nine modal verbs to uncover potential difficulties that confront learners. The results indicate discrepancies between the textbook presentation of modal verbs and authentic modal use in natural discourse in terms of distributions of frequencies, semantic functions, and co-occurring structures. Furthermore, there are consistent patterns of use between the learner corpus and the textbook corpus with respect to the three above-mentioned aspects, except for could, will, and must, partially confirming the correlation between frequency effects and L2 grammar acquisition. Further analysis reveals that the exceptions are caused by both positive and negative L1 transfer, indicating that frequency effects can be intercepted by L1 interference. Besides, error analysis revealed that could, would, should, and must are the most difficult for Chinese learners due to both inter-linguistic and intra-linguistic interference. The discrepancies between the textbook corpus and the native corpus point to a need to adjust the presentation of modal verbs in the textbooks in terms of frequencies, different meanings, and verb-phrase structures. Along with the adjustment of modal verb treatment based on authentic use, it is important for textbook writers to take into consideration L1 interference as well as learners' difficulties in their use of modal verbs. The present study is a methodological showcase of combining native and learner corpora to enhance EFL textbook language authenticity and appropriateness for learners.
Keywords: EFL textbooks, learner corpus, modal verbs, native corpus
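As a rough illustration of the distributional comparison described above (the study itself works with CQPweb and the concordance functions of WordSmith Tools), the following sketch counts the nine modal verbs in plain text and normalises to frequency per million tokens. The toy strings stand in for the textbook and learner corpora and are not the actual data.

```python
import re
from collections import Counter

MODALS = ["will", "would", "can", "could", "may", "might", "shall", "should", "must"]

def modal_frequencies(text):
    """Count the nine modal verbs and normalise to frequency per million tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in MODALS)
    return {m: counts[m] * 1_000_000 / max(len(tokens), 1) for m in MODALS}

# Toy comparison; in practice each string would be a full corpus.
textbook = modal_frequencies("You must not run. Students should listen and may ask questions.")
learner = modal_frequencies("I can sings. He could goes there. We will be happy.")
for m in MODALS:
    print(f"{m:>6}: textbook {textbook[m]:9.0f}  learner {learner[m]:9.0f}  (per million tokens)")
```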
Procedia PDF Downloads 124
219 Retrofitting Insulation to Historic Masonry Buildings: Improving Thermal Performance and Maintaining Moisture Movement to Minimize Condensation Risk
Authors: Moses Jenkins
Abstract:
Much of the focus when improving energy efficiency in buildings falls on the raising of standards within new-build dwellings. However, as a significant proportion of the building stock across Europe is of historic or traditional construction, there is also a pressing need to improve the thermal performance of structures of this sort. On average, around twenty percent of buildings across Europe are built of historic masonry construction. In order to meet carbon reduction targets, these buildings will need to be retrofitted with insulation to improve their thermal performance. At the same time, there is also a need to balance this with maintaining the ability of historic masonry construction to allow moisture movement through the building fabric to take place. This moisture transfer, often referred to as 'breathable construction', is critical to the success, or otherwise, of retrofit projects. The significance of this paper is to demonstrate that substantial thermal improvements can be made to historic buildings whilst avoiding damage to building fabric through surface or interstitial condensation. The paper will analyze the results of a wide range of retrofit measures installed in twenty buildings as part of Historic Environment Scotland's technical research program. This program has been active for fourteen years and has seen interventions across a wide range of building types, using over thirty different methods and materials to improve the thermal performance of historic buildings. The first part of the paper will present the range of interventions which have been made. This includes insulating mass masonry walls both internally and externally, warm and cold roof insulation, and improvements to floors. The second part of the paper will present the results of monitoring work which has taken place in these buildings after being retrofitted. This will be in terms of both thermal improvement, expressed as a U-value as defined in BS EN ISO 7345:1987, and also, crucially, the results of moisture monitoring both on the surface of masonry walls following retrofit and also within the masonry itself. The aim of this moisture monitoring is to establish if there are any problems with interstitial condensation. This monitoring utilizes Interstitial Hygrothermal Gradient Monitoring (IHGM) and similar methods to establish relative humidity on the surface of and within the masonry. The results of the testing are clear and significant for retrofit projects across Europe. Where a building is of historic construction, the use of materials for wall, roof, and floor insulation which are permeable to moisture vapor provides significant thermal improvements (achieving a U-value as low as 0.2 W/m²K) whilst avoiding problems of both surface and interstitial condensation. As the evidence which will be presented in the paper comes from monitoring work in buildings rather than theoretical modeling, there are many important lessons which can be learned and which can inform retrofit projects to historic buildings throughout Europe.
Keywords: insulation, condensation, masonry, historic
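The U-values quoted above follow the standard layer-resistance calculation, sketched minimally below. The surface resistances are the usual values for walls, and the layer build-up is illustrative rather than one of the monitored walls.

```python
# U-value from layer resistances: U = 1 / (Rsi + sum(d_i / lambda_i) + Rse).
R_SI, R_SE = 0.13, 0.04  # internal/external surface resistances, m2K/W

def u_value(layers):
    """layers: list of (thickness in m, thermal conductivity in W/mK), inside out."""
    total_resistance = R_SI + sum(d / lam for d, lam in layers) + R_SE
    return 1.0 / total_resistance

wall = [
    (0.012, 0.18),   # vapour-permeable internal board (illustrative)
    (0.100, 0.038),  # moisture-open insulation, e.g. wood fibre (illustrative)
    (0.600, 1.30),   # mass masonry wall (illustrative)
]
print(f"U = {u_value(wall):.2f} W/m2K")  # about 0.3 W/m2K for this build-up
```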
Procedia PDF Downloads 173
218 Modification of Magneto-Transport Properties of Ferrimagnetic Mn₄N Thin Films by Ni Substitution and Their Magnetic Compensation
Authors: Taro Komori, Toshiki Gushi, Akihito Anzai, Taku Hirose, Kaoru Toko, Shinji Isogami, Takashi Suemasu
Abstract:
Ferrimagnetic antiperovskite Mn₄₋ₓNiₓN thin films exhibit both a small saturation magnetization and a rather large perpendicular magnetic anisotropy (PMA) when x is small. Both are suitable features for application to current-induced domain wall (DW) motion devices using spin transfer torque (STT). In this work, we successfully grew antiperovskite 30-nm-thick Mn₄₋ₓNiₓN epitaxial thin films on MgO(001) and STO(001) substrates by MBE in order to investigate their crystalline quality and their magnetic and magneto-transport properties. Crystalline quality was investigated by X-ray diffraction (XRD). The magnetic properties were measured by vibrating sample magnetometer (VSM), and the anomalous Hall effect was measured with a physical properties measurement system; both measurements were performed at room temperature. The temperature dependence of the magnetization was measured by VSM-SQUID (superconducting quantum interference device). XRD patterns indicate epitaxial growth of Mn₄₋ₓNiₓN thin films on both substrates; those on STO(001) in particular show higher c-axis orientation thanks to the better lattice matching. According to the VSM measurements, PMA was observed in Mn₄₋ₓNiₓN on MgO(001) when x ≤ 0.25 and on STO(001) when x ≤ 0.5, and MS decreased drastically with x. For example, the MS of Mn₃.₉Ni₀.₁N on STO(001) was 47.4 emu/cm³. From the anomalous Hall resistivity (ρAH) of Mn₄₋ₓNiₓN thin films on STO(001) with the magnetic field perpendicular to the plane, we found that Mr/MS was about 1 when x ≤ 0.25, which suggests large magnetic domains in the samples and features suitable for DW motion device applications. In contrast, such square curves were not observed for Mn₄₋ₓNiₓN on MgO(001), which we attribute to the difference in lattice matching. Furthermore, it is notable that although the sign of ρAH was negative when x = 0 and 0.1, it reversed to positive when x = 0.25 and 0.5. A similar reversal occurred in the temperature dependence of the magnetization: the magnetization of Mn₄₋ₓNiₓN on STO(001) increases with decreasing temperature when x = 0 and 0.1, while it decreases when x = 0.25. We consider that these reversals are caused by a magnetic compensation occurring in Mn₄₋ₓNiₓN between x = 0.1 and 0.25. We expect the Mn atoms of the Mn₄₋ₓNiₓN crystal to have larger magnetic moments than the Ni atoms. The temperature dependence stated above can be explained if we assume that Ni atoms preferentially occupy the corner sites and that their magnetic moments have a different temperature dependence from that of the Mn atoms at the face-centered sites. At the compensation point, Mn₄₋ₓNiₓN is expected to show very efficient STT and ultrafast DW motion at small current density. Moreover, if angular momentum compensation is found, the efficiency will be optimized further. In order to prove the magnetic compensation, X-ray magnetic circular dichroism measurements will be performed. Energy dispersive X-ray spectrometry is a candidate method for accurately analyzing the composition ratio of the samples.
Keywords: compensation, ferrimagnetism, Mn₄N, PMA
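The compensation argument can be pictured with a minimal two-sublattice sketch: if the two antiparallel sublattice magnetizations decay differently with temperature, the net magnetization changes sign at a compensation temperature. All numbers below are illustrative assumptions, not values fitted to the Mn₄₋ₓNiₓN films.

```python
import numpy as np

def sublattice_m(m0, temperature, t_curie, beta):
    """Simple (1 - T/Tc)^beta decay of one sublattice magnetization below Tc."""
    return m0 * (1.0 - temperature / t_curie) ** beta

T = np.linspace(1.0, 700.0, 500)
m_face = sublattice_m(60.0, T, 740.0, 0.5)    # face-centred sublattice (emu/cm3, illustrative)
m_corner = sublattice_m(40.0, T, 740.0, 0.2)  # corner-site sublattice, antiparallel (illustrative)
m_net = m_face - m_corner                     # ferrimagnetic net magnetization

# Compensation temperature: where the net magnetization changes sign.
crossing = np.where(np.diff(np.sign(m_net)) != 0)[0]
if crossing.size:
    print(f"compensation near T = {T[crossing[0]]:.0f} K")
```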
Procedia PDF Downloads 135
217 The Politics of Foreign Direct Investment for Socio-Economic Development in Nigeria: An Assessment of the Fourth Republic Strategies (1999 - 2014)
Authors: Muritala Babatunde Hassan
Abstract:
In the contemporary global political economy, foreign direct investment (FDI) is gaining currency on a daily basis. Notably, the end of the Cold War has brought about the dominance of neoliberal ideology with its mantra of a private-sector-led economy. As such, nation-states now see FDI attraction as an important element in their approach to national development. Governments and policy makers are preoccupying themselves with unraveling the best strategies not only to attract more FDI but also to attain the desired socio-economic development status. In Nigeria, the perceived development potentials of FDI have brought about an aggressive hunt for foreign investors, most especially since the transition to civilian rule in May 1999. A series of liberal and market-oriented strategies has been adopted not only to attract foreign investors but largely to stimulate private sector participation in the economy. It is on this premise that this study interrogates the politics of FDI attraction for domestic development in Nigeria between 1999 and 2014, with the ultimate aim of examining the nexus between regime type and the ability of a state to attract and benefit from FDI. Building its analysis within the framework of institutional utilitarianism, the study posits that the essential FDI strategies for achieving the greatest happiness for the greatest number of Nigerians are political, not economic. Both content analysis and descriptive survey methodology were employed in carrying out the study. Content analysis involved a desk review of the literature that culminated in the development of the study's conceptual and theoretical framework of analysis. The study finds no significant relationship between the transition to democracy and FDI inflows in Nigeria, as most of the investments attracted during the period of the study were market- and resource-seeking, as was the case during the military regime, and thereby contributed minimally to the socio-economic development of the country. It is also found that the country placed much emphasis on liberalization and incentives for FDI attraction to the neglect of improving the domestic investment environment. Consequently, the poor state of infrastructure, weak institutional capability, and insecurity were identified as the major factors seriously hindering the success of Nigeria in exploiting FDI for domestic development. Given the reality of FDI as a vector of economic globalization and the fact that Nigeria is pursuing a private-sector-led approach to development, it is recommended that emphasis be placed on measures aimed at improving infrastructural facilities, building a solid institutional framework, enhancing skills and technology transfer, and coordinating FDI promotion activities by different agencies and at different levels of government.
Keywords: foreign capital, politics, socio-economic development, FDI attraction strategies
Procedia PDF Downloads 165
216 An Appraisal of Mitigation and Adaptation Measures under Paris Agreement 2015: Developing Nations' Pie
Authors: Olubisi Friday Oluduro
Abstract:
The Paris Agreement 2015, the result of negotiations under the United Nations Framework Convention on Climate Change (UNFCCC) after the expiration of the Kyoto Protocol, sets a long-term goal of limiting the increase in the global average temperature to well below 2 degrees Celsius above pre-industrial levels, and of pursuing efforts to limit this temperature increase to 1.5 degrees Celsius. An advancement on the erstwhile Kyoto Protocol, which set commitments for only a limited number of Parties to reduce their greenhouse gas (GHG) emissions, it includes the goal of increasing the ability to adapt to the adverse impacts of climate change and of making finance flows consistent with a pathway towards low GHG emissions. For it to achieve these goals, the Agreement requires all Parties to undertake efforts towards reaching a global peaking of GHG emissions as soon as possible and towards achieving a balance between anthropogenic emissions by sources and removals by sinks in the second half of the twenty-first century. In addition to climate change mitigation, the Agreement aims at enhancing adaptive capacity, strengthening resilience, and reducing vulnerability to climate change in different parts of the world. It acknowledges the importance of addressing loss and damage associated with the adverse impacts of climate change. The Agreement also contains comprehensive provisions on the support to be provided to developing countries, which includes finance, technology transfer, and capacity building. To ensure that such support and actions are transparent, the Agreement contains a number of reporting provisions, requiring Parties to choose the efforts and measures that most suit them (Nationally Determined Contributions), providing for a mechanism for assessing progress, and increasing global ambition over time through a regular global stocktake. Despite the somewhat global outlook of the Agreement, it has been fraught with manifold limitations threatening its very ability to produce any meaningful result. Some of these obvious limitations, such as the non-participation of the United States and the non-payment of funds into the various coffers for appropriate strategic purposes, were the very causes of the failure of its predecessor, the Kyoto Protocol. They have left the developing countries threatened even more, being more vulnerable than the developed countries, which are really responsible for the climate change scourge. The paper seeks to examine the mitigation and adaptation measures under the Paris Agreement 2015, appraise the present situation since the Agreement was concluded, ascertain whether the developing countries have been better or worse off since the Agreement was concluded, and examine why and how, while projecting a way forward in the present circumstances. It concludes with recommendations towards ameliorating the situation.
Keywords: mitigation, adaptation, climate change, Paris agreement 2015, framework
Procedia PDF Downloads 157
215 The Effect of Lead(II) Lone Electron Pair and Non-Covalent Interactions on the Supramolecular Assembly and Fluorescence Properties of Pb(II)-Pyrrole-2-Carboxylato Polymer
Authors: M. Kowalik, J. Masternak, K. Kazimierczuk, O. V. Khavryuchenko, B. Kupcewicz, B. Barszcz
Abstract:
Recently, the growing interest of chemists in metal-organic coordination polymers (MOCPs) has derived primarily from their intriguing structures and potential applications in catalysis, gas storage, molecular sensing, ion exchange, nonlinear optics, luminescence, etc. Currently, we are devoting considerable effort to finding the proper method of synthesizing new coordination polymers containing S- or N-heteroaromatic carboxylates as linkers and characterizing the obtained Pb(II) compounds according to their structural diversity, luminescence, and thermal properties. The choice of Pb(II) as the central ion of the MOCPs was motivated by several reasons mentioned in the literature: i) a large ionic radius allowing for a wide range of coordination numbers, ii) the stereoactivity of the 6s² lone electron pair leading to a hemidirected or holodirected geometry, iii) a flexible coordination environment, and iv) the possibility of forming secondary bonds and unusual non-covalent interactions, such as classic hydrogen bonds and π···π stacking interactions, as well as nonconventional hydrogen bonds and rarely reported tetrel bonds, Pb(lone pair)···π interactions, C–H···Pb agostic-type interactions or hydrogen bonds, and chelate ring stacking interactions. Moreover, the construction of coordination polymers requires the selection of proper ligands acting as linkers, because we are looking for materials exhibiting different network topologies and fluorescence properties, which point to potential applications. The reaction of Pb(NO₃)₂ with 1H-pyrrole-2-carboxylic acid (2prCOOH) leads to the formation of a new tetranuclear Pb(II) polymer, [Pb₄(2prCOO)₈(H₂O)]ₙ, which has been characterized by CHN, FT-IR, TG, PL, and single-crystal X-ray diffraction methods. In view of the primary Pb–O bonds, Pb1 and Pb2 show hemidirected pentagonal pyramidal geometries, while Pb2 and Pb4 display hemidirected octahedral geometries. The topology of the strongest Pb–O bonds was determined as the (4·8²) fes topology. Taking the secondary Pb–O bonds into account, the coordination numbers of the Pb centres increased: Pb1 exhibited a hemidirected monocapped pentagonal pyramidal geometry, Pb2 and Pb4 exhibited a holodirected tricapped trigonal prismatic geometry, and Pb3 exhibited a holodirected bicapped trigonal prismatic geometry. Moreover, the stereoactivity of the Pb(II) lone pair was confirmed by DFT calculations. The 2D structure was expanded into 3D by the existence of non-covalent O/C–H···π and Pb···π interactions, which was confirmed by Hirshfeld surface analysis. The above-mentioned interactions improve the rigidity of the structure and facilitate charge and energy transfer between the metal centres, making the polymer a promising luminescent compound.
Keywords: coordination polymers, fluorescence properties, lead(II), lone electron pair stereoactivity, non-covalent interactions
Procedia PDF Downloads 145
214 Ultrasonic Irradiation Synthesis of High-Performance Pd@Copper Nanowires/MultiWalled Carbon Nanotubes-Chitosan Electrocatalyst by Galvanic Replacement toward Ethanol Oxidation in Alkaline Media
Authors: Majid Farsadrouh Rashti, Amir Shafiee Kisomi, Parisa Jahani
Abstract:
Direct ethanol fuel cells (DEFCs) are contemplated as a promising energy source because, in addition to being used in portable electronic devices, they can also be used in electric vehicles. The synthesis of bimetallic nanostructures is attracting extensive attention due to their novel optical, catalytic, and electronic characteristics, which stand in clear contrast to those of their monometallic counterparts. Galvanic replacement (sometimes referred to as cementation or immersion plating) is an uncomplicated and effective technique for making nanostructures (such as core-shell structures) of different metals and semiconductors and for their application in DEFCs. Galvanic replacement does not need any external power supply, in contrast to electrodeposition; it also differs from electroless deposition because no reducing agent is needed. In this paper, a fast method is proposed for the synthesis of palladium (Pd) wire nanostructures with a large surface area through a galvanic replacement reaction utilizing copper nanowires (CuNWs) as a template, assisted by ultrasound at room temperature. To evaluate the morphology and composition of the Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan, emission scanning electron microscopy and energy dispersive X-ray spectroscopy were applied. The phase structure of the electrocatalysts was determined via room-temperature X-ray powder diffraction (XRD) using an X-ray diffractometer. Various electrochemical techniques, including chronoamperometry and cyclic voltammetry, were utilized to evaluate the electrocatalytic activity for ethanol electrooxidation and the durability in basic solution. The Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan catalyst demonstrated substantially enhanced performance and long-term stability for ethanol electrooxidation in basic solution in comparison to commercial Pd/C, which demonstrates the potential of utilizing Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan as an efficient catalyst towards ethanol oxidation. Noticeably, the Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan presented excellent catalytic activity with a peak current density of 320.73 mA cm⁻², which was 9.5 times that of Pd/C (34.2133 mA cm⁻²). Additionally, thermodynamic and kinetic evaluations of the activation energy revealed that the Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan catalyst has a lower activation energy than Pd/C, which leads to a lower energy barrier and an excellent charge transfer rate towards ethanol oxidation.
Keywords: core-shell structure, electrocatalyst, ethanol oxidation, galvanic replacement reaction
Procedia PDF Downloads 147
213 Investigation of a Single Feedstock Particle during Pyrolysis in Fluidized Bed Reactors via X-Ray Imaging Technique
Authors: Stefano Iannello, Massimiliano Materazzi
Abstract:
Fluidized bed reactor technologies are one of the most valuable pathways for the thermochemical conversion of biogenic fuels due to their good operating flexibility. Nevertheless, there are still issues related to the mixing and separation of heterogeneous phases during operation with highly volatile feedstocks, including biomass and waste. At high temperatures, the volatile content of the feedstock is released in the form of so-called endogenous bubbles, which generally exert a 'lift' effect on the particle itself by dragging it up to the bed surface. The aim of this work is to gain a better understanding of the behaviour of a single reacting particle in a hot fluidized bed reactor during the devolatilization stage. The analysis has been undertaken at different fluidization regimes and temperatures to closely mirror the operating conditions of waste-to-energy processes. Beech wood and polypropylene particles were used to resemble the biomass and plastic fractions present in waste materials, respectively. The non-invasive X-ray technique was coupled to particle tracking algorithms to characterize the motion of a single feedstock particle during devolatilization with high resolution. A high-energy X-ray beam passes through the vessel, where absorption occurs depending on the distribution and amount of solids and fluids along the beam path. A high-speed video camera is synchronised to the beam and provides frame-by-frame imaging of the flow patterns of fluids and solids within the fluidized bed at up to 72 frames per second (fps). A comprehensive mathematical model has been developed in order to validate the experimental results. Beech wood and polypropylene particles have shown very different dynamic behaviour during the pyrolysis stage. When the feedstock is fed from the bottom, the plastic material tends to spend more time within the bed than the biomass. This behaviour can be attributed to the presence of the endogenous bubbles, whose drag effect is more pronounced during the devolatilization of biomass, resulting in a lower residence time of the particle within the bed. At the typical operating temperatures of thermochemical conversions, the synthetic polymer softens and melts, and the bed particles attach to its outer surface, generating a wet plastic-sand agglomerate. Consequently, this additional layer of sand may hinder the rapid evolution of volatiles in the form of endogenous bubbles, and therefore results in a weaker drag effect acting on the feedstock itself. Information about the mixing and segregation of solid feedstock is of prime importance for the design and development of more efficient industrial-scale operations.
Keywords: fluidized bed, pyrolysis, waste feedstock, X-ray
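A minimal sketch of the frame-by-frame tracking idea: threshold the strongly attenuating tracked particle in each radiograph and follow its centroid. The real analysis relies on calibrated X-ray image sequences and more robust algorithms; the threshold and the synthetic frames below are illustrative assumptions.

```python
import numpy as np

def track_particle(frames, threshold):
    """Return the (row, col) centroid of the dark particle in each X-ray frame.

    frames: iterable of 2-D grey-level images (e.g. a 72 fps sequence).
    threshold: grey level below which a pixel is treated as the particle.
    Real data would also need background correction and a way to separate the
    particle from bubbles and bed material.
    """
    path = []
    for frame in frames:
        mask = frame < threshold
        if not mask.any():
            path.append(None)  # particle not detected in this frame
            continue
        rows, cols = np.nonzero(mask)
        path.append((rows.mean(), cols.mean()))
    return path

# Synthetic example: a dark blob rising through the image from frame to frame.
frames = []
for k in range(5):
    img = np.full((100, 100), 200.0)
    img[80 - 10 * k : 85 - 10 * k, 45:55] = 50.0  # the "particle"
    frames.append(img)
print(track_particle(frames, threshold=100.0))
```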
Procedia PDF Downloads 172
212 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center
Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael
Abstract:
Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.
Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency
Procedia PDF Downloads 35
211 Neural Synchronization - The Brain's Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain's subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality's algorithm. Where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence, we use terms like 'time is relative,' but we really do understand the meaning. In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurement around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain's physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain's linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state because predictable motion already exists in the shared framework. The brain's synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, day after day, the eyes dump their sensory data into the thalamus every 33 milliseconds. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available because other observation times are slower than the thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What's interesting is that time dilation is not the problem; it's the solution. Einstein said there was no universal time.
Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 127
210 Funding Innovative Activities in Firms: The Ownership Structure and Governance Linkage - Evidence from Mongolia
Authors: Ernest Nweke, Enkhtuya Bavuudorj
Abstract:
The harsh realities of the scandalous failures of several notable corporations in the past two decades have inevitably resulted in a surge in corporate governance studies. Nevertheless, little or no attention has been paid to corporate governance studies in Mongolian firms, and much less to understanding the correlation among ownership structure, corporate governance mechanisms, and the trend of innovative activities. Innovation is the bedrock of enterprise success. However, the funding and support for innovative activities in many firms are to a great extent determined by the incentives provided by the firm's internal and external governance mechanisms. Mongolia is an East Asian country currently undergoing a fast-paced transition from a socialist to a democratic system, and it is a widely held view that private ownership, as against public ownership, fosters innovation. Hence, following the privatization policy of the Mongolian Government, which has led to the transfer of ownership of hitherto state-controlled and state-directed firms to private individuals and organizations, expectations are high that sufficient motivation would be provided for firm managers to engage in innovative activities. This research focuses on the relationship between ownership structure and corporate governance on the one hand and the level of innovation on the other. The paper is empirical in nature and derives data from both reliable secondary and primary sources. Secondary data for the study concerned the ownership structure of Mongolian listed firms and innovation trends in Mongolia generally; these were analyzed using tables, charts, bars, and percentages. Personal interviews and surveys were held to collect primary data, which concerned corporate governance practices in Mongolian firms and was collected using a structured questionnaire. Out of a population of three hundred and twenty (320) companies listed on the Mongolian Stock Exchange (MSE), a sample of thirty (30) randomly selected companies was utilized for the study. Five (5) management-level employees were surveyed in each selected firm, giving a total of one hundred and fifty (150) respondents. The data collected were analyzed and the research hypotheses tested using the chi-square test statistic. The results showed that corporate governance mechanisms were better, and have significantly improved over time, in privately held as opposed to publicly owned firms. Consequently, the levels of innovation in privately held firms were considerably higher. It was concluded that a significant and positive relationship exists between private ownership and good corporate governance on the one hand and the level of funding provided for innovative activities in Mongolian firms on the other.
Keywords: corporate governance, innovation, ownership structure, stock exchange
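A minimal sketch of the kind of chi-square test of independence used to test the research hypotheses; the contingency table is invented for illustration and is not the survey data.

```python
from scipy.stats import chi2_contingency

# Illustrative counts: ownership type versus reported level of funding/support
# for innovative activities (invented numbers, not the questionnaire results).
observed = [
    [34, 41, 15],  # privately held firms: low / medium / high
    [28, 19, 13],  # publicly owned firms: low / medium / high
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
if p_value < 0.05:  # 5% significance level, as used in the study
    print("reject the null hypothesis of independence")
```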
Procedia PDF Downloads 196
209 The Implementation of Science Park Policy and Their Impacts on Regional Economic Development in Emerging Economy Country: Case of Thailand
Authors: Muttamas Wongwanich, John R. Bryson, Catherine E. Harris
Abstract:
Science Parks are an essential component of localized innovation ecosystems and have played a critical role in enhancing local innovation ecosystems in developed market economies. Attempts have been made to replicate best practice in other national contexts. To the best of our knowledge, however, no study has examined the economic impact of Science Park development in developing countries. Further research is therefore required to understand the adoption of Science Park policies in developing and emerging economies. This study explores the implementation of Science Park policy and its impacts on economic growth and development in Thailand, focusing on the relationship between universities and businesses. The Thai context is important: Thailand's economy is dominated by agriculture and tourism, and the Science Park policy is trying to develop an agriculturally oriented innovation ecosystem. Thailand established four Science Parks based on a policy that highlighted the importance of cooperation between government, higher education institutions (HEIs), and businesses. These Science Parks are intended to increase small and medium enterprises' (SMEs) innovativeness, employment, and regional economic growth by promoting collaboration and knowledge transfer between HEIs and the private sector. This study explores one regional Science Park in Thailand, with an emphasis on understanding the implementation and operation of a triple helix innovation policy. The analysis examines the establishment of the Science Park and its impacts on firms and the regional economy through interviews with Science Park directors, firms, academics, universities, and government officials. The analysis will inform Science Park policy development in Thailand to support the national objective of developing an innovation ecosystem based on the integration of technology with innovation policy, supporting technology-based SMEs in the creation of local jobs. The findings show that the implementation of the Science Park policy in Thailand requires support and promotion from the government. The regional development plan must be related to the regional industry development strategy, considering the strengths and weaknesses of local entrepreneurs. The long time taken to grant a patent is the major obstacle to achieving the government's aim of encouraging local economic activity. The regional Science Parks in Thailand are at an early stage of their operation plans; thus, the impact on the regional economy cannot yet be measured and needs further investigation over a longer period. However, local businesses realize the vital role of research and development (R&D), and there have been more requests for funding support for R&D. Furthermore, linkages between businesses, HEIs, and government authorities are being created, as expected.
Keywords: developing country, emerging economy, regional development, science park, Thailand, triple helix
Procedia PDF Downloads 153
208 The Effect of Disseminating Basic Knowledge on Radiation in Emergency Distance Learning of COVID-19
Authors: Satoko Yamasaki, Hiromi Kawasaki, Kotomi Yamashita, Susumu Fukita, Kei Sounai
Abstract:
People are susceptible to rumors when the cause of their health problems is unknown or invisible. In order for individuals to be unaffected by rumors, they need basic knowledge and correct information. Community health nursing classes use cases in which basic knowledge of radiation can be utilized on a regular basis, thereby teaching that basic knowledge is important in preventing anxiety caused by rumors. Nursing students need to learn that preventive activities are essential for public health nursing care. This is the same methodology used to reduce COVID-19 anxiety among individuals. This study verifies the learning effect concerning the basic knowledge of radiation necessary for case consultation through emergency distance learning. Sixty third-year nursing college students agreed to participate in this research. Knowledge tests conducted before and after the classes were compared using the chi-square test. There were five knowledge questions regarding the distance lessons. The significance level was set at 5%. The students’ reports, which describe the results of responding to health consultations, were analyzed qualitatively and descriptively. In this case study, a person living in an area not affected by radiation was anxious about drinking water and, thus, consulted with a student. The lecture content was limited to the minimum knowledge needed to answer the consultation; specifically, hot spots, internal exposure risk, food safety, characteristics of cesium-137, and precautions for counselors. Before taking the class, the question most often answered correctly by students concerned daily behavior at risk of internal exposure (52.2%). The question with the fewest correct answers concerned the selection of places likely to be hot spots (3.4%). Correct responses to all questions increased significantly after taking the class (p < 0.001). The answers to the counselors, as written by the students, included 'Cesium is strongly bound to the soil, so it is difficult to transfer to water' and 'Water quality test results of tap water are posted on the city's website.' These were concrete answers obtained by using specialized knowledge. Even in emergency distance learning, the students gained basic knowledge regarding radiation and created a document to utilize that knowledge while assuming the situation concretely. It was thought that the flipped classroom method, even if conducted remotely, could maintain students' learning, and that setting specific knowledge and scenes to be used would enhance the learning effect. By changing the case to one concerning anxiety caused by infectious diseases, students may be able to effectively gain the basic knowledge needed to decrease residents' anxiety about infectious diseases. Keywords: effect of class, emergency distance learning, nursing student, radiation
Procedia PDF Downloads 114
207 An Exploratory Study in Nursing Education: Factors Influencing Nursing Students’ Acceptance of Mobile Learning
Authors: R. Abdulrahman, A. Eardley, A. Soliman
Abstract:
The proliferation in the development of mobile learning (m-learning) has played a vital role in the rapidly growing electronic learning market. This relatively new technology can help to encourage learning and to aid knowledge transfer in a number of areas by familiarizing students with innovative information and communications technologies (ICT). M-learning plays a substantial role in the deployment of learning methods for nursing students by using the Internet and portable devices to access learning resources ‘anytime and anywhere’. However, acceptance of m-learning by students is critical to the successful use of m-learning systems. Thus, there is a need to study the factors that influence students’ intention to use m-learning. This paper addresses this issue. It outlines the outcomes of a study that evaluates the unified theory of acceptance and use of technology (UTAUT) model as applied to the subject of user acceptance in relation to m-learning activity in nurse education. The model integrates the significant components across eight prominent user acceptance models; therefore, a standard measure is introduced with core determinants of user behavioural intention. The research model extends the UTAUT in the context of m-learning acceptance by modifying and adding individual innovativeness (II) and quality of service (QoS) to the original structure of UTAUT. The paper goes on to add the factors of previous experience (of using mobile devices in similar applications) and the nursing students’ readiness (to use the technology) as influences on their behavioural intentions to use m-learning. This study uses a technique called ‘convenience sampling’, which involves student volunteers as participants, in order to collect numerical data. A quantitative method of data collection was selected, involving an online survey using a questionnaire form. This form contains 33 questions to measure the six constructs, using a 5-point Likert scale. A total of 42 respondents participated, all from the Nursing Institute at the Armed Forces Hospital in Saudi Arabia. The gathered data were then tested using a research model that employs structural equation modelling (SEM), including confirmatory factor analysis (CFA). The results of the CFA show that the UTAUT model has the ability to predict student behavioural intention and to adapt m-learning activity to specific learning activities. It also demonstrates satisfactory, dependable and valid scales of the model constructs. This suggests further analysis to confirm the model as a valuable instrument for evaluating user acceptance of m-learning activity. Keywords: mobile learning, nursing institute students’ acceptance of m-learning activity in Saudi Arabia, unified theory of acceptance and use of technology model (UTAUT), structural equation modelling (SEM)
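As a hedged, minimal sketch of one routine scale-reliability check that commonly precedes the kind of CFA/SEM analysis described above, the Python below computes Cronbach's alpha for a block of 5-point Likert items belonging to a single construct; the helper name and the generated responses are assumptions for illustration, not the study's instrument or data.

```python
# Minimal sketch (assumed preliminary step, not the authors' actual analysis):
# Cronbach's alpha for the Likert items of one UTAUT construct.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores (hypothetical helper)."""
    k = items.shape[1]                          # number of items in the construct
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Fabricated responses: 42 respondents x 5 items scored 1-5 (placeholder only).
# Purely random data will give a low alpha; real, correlated items are needed
# for a meaningful value (conventionally ~0.7 or above is read as acceptable).
rng = np.random.default_rng(0)
construct_items = rng.integers(1, 6, size=(42, 5))
print(f"Cronbach's alpha = {cronbach_alpha(construct_items):.2f}")
```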
Procedia PDF Downloads 188
206 How Does Paradoxical Leadership Enhance Organizational Success?
Authors: Wageeh A. Nafei
Abstract:
This paper explores the role of Paradoxical Leadership (PL) in enhancing Organizational Success (OS) at private hospitals in Egypt. The analysis is based on data collected from employees in private hospitals (doctors, nursing staff, and administrative staff); the researcher adopted a sampling method to collect data for the study. Appropriate statistical methods, such as the Alpha Correlation Coefficient (ACC), Confirmatory Factor Analysis (CFA), and Multiple Regression Analysis (MRA), are used to analyze the data and test the hypotheses. The research reached a number of results, the most important of which are: (1) There is a statistical relationship between the independent variable, represented by PL, and the dependent variable, represented by Organizational Success (OS). The paradoxical leader encourages employees to express their opinions and builds a work environment characterized by flexibility and independence. The paradoxical leader also works to support specialized work teams, which leads to the creation of new ideas on the one hand and contributes to the achievement of outstanding performance on the other. (2) The mentality of the paradoxical leader is flexible and capable of absorbing suggestions from all employees. The paradoxical leader is also interested in enhancing cooperation among employees and provides an opportunity to transfer experience and increase knowledge-sharing. The sharing of knowledge, in turn, creates the diversity that helps the organization obtain rich external information and enables it to deal with a rapidly changing environment. (3) The PL approach helps in facing the paradoxical demands of employees. A paradoxical leader plays an important role in reducing the feeling of instability in the work environment and the lack of job security, reducing employees’ negative feelings, restoring balance in the work environment, improving employees’ well-being, and increasing the degree of employees’ job satisfaction in the organization. The study offers a number of recommendations, the most important of which are: (1) Organizational leaders must listen to the views and needs of employees and move away from the official method of control. The leader should give employees sufficient freedom to participate in decision-making and maintain enough space among them. Relations between leaders and employees must be based on friendliness. (2) Organizational leaders need to pay attention to knowledge-sharing among employees through training courses. The leader should make sure that every piece of information provided by an employee is valuable and useful, so that it can be used to solve a problem that his/her colleagues may face at work. (3) Organizational leaders need to pay attention to knowledge-sharing among employees through brainstorming sessions. The leader should ensure that employees obtain knowledge from their colleagues and share ideas and information among themselves. This is in addition to motivating employees to complete their work in new, creative ways, so that employees do not feel bored by repeating the same routine procedures in the organization. Keywords: paradoxical leadership, organizational success, human resources, management
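As a hedged illustration of the Multiple Regression Analysis (MRA) named above, the sketch below fits an organizational-success score on paradoxical-leadership dimension scores with an ordinary least squares model; the variable names and generated data are placeholders, not the study's measurements.

```python
# Illustrative sketch of a multiple regression (MRA) of the kind named in the
# abstract. All variable names and data are fabricated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200  # hypothetical number of hospital employees

# Hypothetical predictor scores for paradoxical-leadership dimensions
df = pd.DataFrame({
    "pl_autonomy": rng.normal(3.5, 0.6, n),
    "pl_teams":    rng.normal(3.4, 0.7, n),
    "pl_sharing":  rng.normal(3.6, 0.5, n),
})
# Hypothetical outcome: organizational success score
df["org_success"] = (0.4 * df["pl_autonomy"] + 0.3 * df["pl_teams"]
                     + 0.2 * df["pl_sharing"] + rng.normal(0, 0.3, n))

X = sm.add_constant(df[["pl_autonomy", "pl_teams", "pl_sharing"]])
model = sm.OLS(df["org_success"], X).fit()
print(model.summary())  # coefficients, t-tests and R-squared for the hypotheses
```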
Procedia PDF Downloads 58
205 Influence of a Cationic Membrane in a Double Compartment Filter-Press Reactor on the Atenolol Electro-Oxidation
Authors: Alan N. A. Heberle, Salatiel W. Da Silva, Valentin Perez-Herranz, Andrea M. Bernardes
Abstract:
Contaminants of emerging concern are widely used substances, such as pharmaceutical products. These compounds represent a risk for both wildlife and human life since they are not completely removed from wastewater by conventional wastewater treatment plants. In the environment, they can cause harm even at low concentrations (µg or ng/L), causing bacterial resistance, endocrine disruption, and cancer, among other harmful effects. One of the most commonly taken medicines to treat cardiocirculatory diseases is Atenolol (ATL), a β-blocker, which is toxic to aquatic life. It is therefore necessary to implement a methodology capable of promoting the degradation of ATL in order to avoid environmental detriment. A very promising technology is advanced electrochemical oxidation (AEO), whose mechanisms are based on the electrogeneration of reactive radicals (mediated oxidation) and/or on the direct discharge of the substance by electron transfer from the contaminant to the electrode surface (direct oxidation). Hydroxyl (HO•) and sulfate (SO₄•⁻) radicals can be generated, depending on the reaction medium. Besides that, under some conditions, the peroxydisulfate (S₂O₈²⁻) ion is also generated from the reaction of SO₄•⁻ radicals in pairs. The radicals, the ion, and direct contaminant discharge can all break down the molecule, resulting in degradation and/or mineralization. However, the ATL molecule and its byproducts can still remain in the treated solution. Accordingly, efforts can be made to improve the AEO process, one of them being the use of a cationic membrane to separate the cathodic (reduction) and anodic (oxidation) compartments of the reactor. The aim of this study is to investigate the influence of implementing a cationic membrane (Nafion®-117) to separate the cathodic and anodic compartments of the AEO reactor. The studied reactor was a filter-press operated in batch recirculation mode, flow 60 L/h. The anode was Nb/BDD2500 and the cathode stainless steel, both bidimensional, with a geometric surface area of 100 cm². The solution feeding the anodic compartment was prepared with ATL 100 mg/L using Na₂SO₄ 4 g/L as supporting electrolyte. In the cathodic compartment, a solution containing Na₂SO₄ 71 g/L was used, and the membrane was placed between both solutions. Applied current densities (iₐₚₚ) of 5, 20 and 40 mA/cm² were studied over a 240-minute treatment time. The ATL decay was analyzed by ultraviolet spectroscopy (UV/Vis). Mineralization was determined by performing total organic carbon (TOC) analysis on a Shimadzu TOC-L CPH. In the cases without the membrane, iₐₚₚ of 5, 20 and 40 mA/cm² resulted in 55, 87 and 98 % ATL degradation at the end of the treatment time, respectively. With the membrane, however, the degradation for the same iₐₚₚ was 90, 100 and 100 %, taking 240, 120 and 40 min to reach maximum degradation, respectively. Mineralization without the membrane, for the same iₐₚₚ, was 40, 55 and 72 %, respectively, at 240 min, but with the membrane, all tested iₐₚₚ reached 80 % mineralization, differing only in the time required for maximum mineralization, 240, 150 and 120 min, respectively. The membrane increased ATL oxidation, probably because it prevented the reduction of oxidant ions (S₂O₈²⁻) on the cathode surface. Keywords: contaminants of emerging concern, advanced electrochemical oxidation, atenolol, cationic membrane, double compartment reactor
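As a hedged aid to reading the percentages above, the sketch below shows the standard way degradation and mineralization efficiencies are computed from UV/Vis and TOC measurements; the initial and final values used are illustrative placeholders, not the measured data.

```python
# Illustrative only: standard removal-efficiency formulas behind the percentages
# quoted in the abstract. The numeric inputs are placeholders, not measured values.

def removal_percent(initial: float, final: float) -> float:
    """Relative removal, e.g. (C0 - Ct) / C0 * 100."""
    return (initial - final) / initial * 100.0

# Degradation followed by UV/Vis absorbance (proportional to ATL concentration)
atl_initial_mg_L = 100.0   # feed concentration stated in the abstract
atl_final_mg_L = 2.0       # placeholder residual concentration after 240 min
print(f"ATL degradation = {removal_percent(atl_initial_mg_L, atl_final_mg_L):.1f} %")

# Mineralization followed by total organic carbon (TOC)
toc_initial_mg_L = 65.0    # placeholder initial TOC
toc_final_mg_L = 13.0      # placeholder final TOC
print(f"Mineralization  = {removal_percent(toc_initial_mg_L, toc_final_mg_L):.1f} %")
```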
Procedia PDF Downloads 137
204 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, there are challenges with these imaging experiments that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limitation on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline that predicts the morphology of cellular components for virtual-cell generation based on fluorescence cell-membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell-membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training, using one cell-membrane z-stack (20 images) and the corresponding nuclei label, the results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality, including a clear outline and shape of the membrane that clearly showed the boundaries of each cell, proportionally improved nuclei predictions, reducing errors relative to the ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models. Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
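As a hedged sketch of the per-image normalization step described above (a mean pixel intensity of 0.5 for each image in the z-stack), the Python below rescales each slice of a confocal stack; the abstract does not specify the exact normalization used in the pipeline, so the simple mean-scaling shown here is an assumption.

```python
# Sketch of one plausible per-image normalization to a mean intensity of 0.5.
# The abstract does not specify the exact method, so this scaling is an assumption.
import numpy as np

def normalize_zstack(zstack: np.ndarray, target_mean: float = 0.5) -> np.ndarray:
    """Rescale each image in a (n_slices, H, W) stack to the target mean intensity."""
    zstack = zstack.astype(np.float64)
    out = np.empty_like(zstack)
    for i, img in enumerate(zstack):
        img = img / img.max() if img.max() > 0 else img      # map to [0, 1]
        mean = img.mean()
        out[i] = np.clip(img * (target_mean / mean), 0.0, 1.0) if mean > 0 else img
    return out

# Placeholder stack standing in for a registered 20-image membrane z-stack
stack = np.random.default_rng(2).random((20, 256, 256))
normalized = normalize_zstack(stack)
print(normalized.mean(axis=(1, 2))[:3])  # per-image means close to 0.5
```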
Procedia PDF Downloads 205
203 Synthesis of Methanol through Photocatalytic Conversion of CO₂: A Green Chemistry Approach
Authors: Sankha Chakrabortty, Biswajit Ruj, Parimal Pal
Abstract:
Methanol is one of the most important chemical products and intermediates. It can be used as a solvent, intermediate or raw material for a number of higher-value products, fuels or additives. Over the last decade, the total global demand for methanol has increased drastically, which compels scientists to produce large amounts of methanol from renewable sources in order to meet the global demand in a sustainable way. Different types of non-renewable raw materials have been used for the large-scale synthesis of methanol, which makes the process unsustainable. In these circumstances, photocatalytic conversion of CO₂ into methanol under solar/UV excitation becomes a viable, sustainable production approach, which not only addresses the environmental crisis by recycling CO₂ to fuels but also reduces the amount of CO₂ in the atmosphere. The development of such a sustainable production approach for CO₂ conversion into methanol still remains a major challenge in current research, compared with conventional, energy-expensive processes. Against this backdrop, the development of environmentally friendly materials, such as photocatalysts, has gained great importance for methanol synthesis. Scientists in this field are always concerned with finding an improved photocatalyst to enhance photocatalytic performance. Graphene-based hybrid and composite materials with improved properties could be better nanomaterials for the selective conversion of CO₂ to methanol under visible light (solar energy) or UV light. The present work relates to the synthesis of an improved heterogeneous graphene-based photocatalyst with improved catalytic activity and surface area. Graphene with enhanced surface area is used as a coupling material for copper-loaded titanium oxide to improve the electron capture and transport properties, which substantially increases the photoinduced charge transfer and extends the lifetime of photogenerated charge carriers. A fast reduction method through H₂ purging was adopted to synthesize improved graphene, whereas an ultrasonication-based sol-gel method was applied for the preparation of graphene-coupled copper-loaded titanium oxide with enhanced properties. The prepared photocatalysts were exhaustively characterized using different characterization techniques. The effects of catalyst dose, CO₂ flow rate, reaction temperature and stirring time on the efficacy of the system, in terms of methanol yield and productivity, were studied. The study showed that the newly synthesized photocatalyst, with its enhanced surface, resulted in a sustained methanol productivity and yield of 0.14 g/Lh and 0.04 g/gcat, respectively, after 3 h of illumination under UV (250 W) at an optimum catalyst dosage of 10 g/L with a 1:2:3 (graphene:TiO₂:Cu) weight ratio. Keywords: renewable energy, CO₂ capture, photocatalytic conversion, methanol
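As a hedged arithmetic check on the reported figures, the sketch below relates the volumetric productivity (0.14 g/Lh), the catalyst dosage (10 g/L) and the 3 h illumination time to the catalyst-based yield (0.04 g/gcat); the assumption that the yield is simply productivity divided by dosage and multiplied by time is ours, not stated in the abstract.

```python
# Consistency check of the reported methanol figures (assumed relationship:
# yield per gram of catalyst = productivity / catalyst dosage * illumination time).
productivity_g_per_L_h = 0.14   # reported volumetric productivity
catalyst_dosage_g_per_L = 10.0  # reported optimum catalyst dosage
illumination_time_h = 3.0       # reported illumination time

rate_per_gram = productivity_g_per_L_h / catalyst_dosage_g_per_L   # g / (gcat * h)
yield_per_gram = rate_per_gram * illumination_time_h               # g / gcat

print(f"Estimated yield: {yield_per_gram:.3f} g/gcat (reported: ~0.04 g/gcat)")
# 0.14 / 10 * 3 = 0.042 g/gcat, consistent with the reported 0.04 g/gcat.
```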
Procedia PDF Downloads 108
202 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire
Authors: Vinay A. Sharma, Shiva Prasad H. C.
Abstract:
The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is rather not complex, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and subsequent events that might later be plotted on it. Proceeding towards the solution for a problem is the primary objective in the initial stages. The optimization in the solutions can come later, and hence, the resources deployed towards attaining the solution are higher than what they would have been in the optimized versions. A ‘logic’ that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds along, the individuals working on the project face fresh challenges as a team and are better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced, and know better of the consequences and causes of possible failure, and thus integrate the adequate tolerances wherever required. Furthermore, as the team graduates in terms of strength, acquires prodigious knowledge, and begins its efficient transfer, the individuals in charge of the project along with the managers focus more on the optimized solutions rather than the traditional ones to minimize the required resources. Hence, as time progresses, the authorities prioritize attainment of the required logic, at a lower amount of dedicated resources. For empirical analysis of the stated theory, leaders and key figures in organizations are surveyed for their ideas on appropriate logic required for tackling a problem. Key-pointers spotted in successfully implemented solutions are noted from the analysis of the responses and a metric for measuring logic is developed. A graph is plotted with the quantifiable logic on the Y-axis, and the dedicated resources for the solutions to various problems on the X-axis. The dedicated resources are plotted over time, and hence the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear, as the required logic will be attained, but the consumed resources are also high. With time, the authorities begin focusing on optimized solutions, since the logic attained through them is higher, but the resources deployed are comparatively lower. Hence, the difference between consecutive plotted ‘resources’ reduces and as a result, the slope of the graph gradually increases. On an overview, the graph takes a parabolic shape (beginning on the origin), as with each resource investment, ideally, the difference keeps on decreasing, and the logic attained through the solution keeps increasing. Even if the resource investment is higher, the managers and authorities, ideally make sure that the investment is being made on a proportionally high logic for a larger problem, that is, ideally the slope of the graph increases with the plotting of each point.Keywords: decision-making, leadership, logic, strategic management
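As a purely illustrative sketch of the ideal graph described above (quantified logic on the Y-axis, cumulative dedicated resources, and therefore time, on the X-axis, starting at the origin with a gradually increasing slope), the Python below plots one convex curve consistent with that description; the quadratic form is an assumption chosen only to visualize the argument, not a result of the survey.

```python
# Illustrative plot only: one convex "ideal" logic-vs-resources curve that starts
# at the origin and whose slope increases with each resource investment.
# The quadratic shape is an assumption used for visualization, not survey data.
import numpy as np
import matplotlib.pyplot as plt

resources = np.linspace(0, 10, 100)     # cumulative dedicated resources (also time)
logic = 0.5 * resources ** 2            # attained logic grows faster than linearly

plt.plot(resources, logic)
plt.xlabel("Cumulative dedicated resources (proxy for time)")
plt.ylabel("Quantified logic attained")
plt.title("Idealized logic vs. resource outflow (illustrative)")
plt.savefig("ideal_logic_curve.png")    # or plt.show() in an interactive session
```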
Procedia PDF Downloads 108
201 2017 Survey on Correlation between Connection and Emotions for Children and Adolescents
Authors: Ya-Hsing Yeh, I-Chun Tai, Ming-Chieh Lin, Li-Ting Lee, Ping-Ting Hsieh, Yi-Chen Ling, Jhia-Ying Du, Li-Ping Chang, Guan-Long Yu
Abstract:
Objective: To understand the connection between children/adolescents and those whom they miss, as well as the correlation between connection and their emotions. Method: Based on the objective, a closed-ended questionnaire was developed and finalized after experts evaluated its validity. In February 2017, the paper-based questionnaire was administered. Twenty-one elementary schools and junior high schools in Taiwan were sampled by a purposive sampling approach, and the fifth to ninth graders were our participants. A total of 2,502 valid questionnaires were retrieved. Results: Forty-four point three percent of children/adolescents missed a person, or thought of a person as a significant other, with whom they had no connection. The highest proportion of those they wanted to contact was ‘friends and classmates’, followed by ‘immediate family’, such as parents and grandparents, and ‘academic or vocational instructors’, such as home-room teachers, coaches, cram school teachers and so on, respectively. Only 14% of children/adolescents would actively contact those they missed. The proportion of children/adolescents who felt happy or cheerful was higher among those who ‘often’ actively kept in touch with the people they missed than among those who ‘seldom’ did so, whenever they recalled the person they missed or that person actively contacted them. Sixty-one point seven percent of participants had not connected with those they missed for more than one year. The main reason was ‘environmental factors’, such as school/class transfer or moving, followed by ‘academic or personal factors’, ‘communication tools’, and ‘personalities’, respectively. In addition to ‘greetings during festivals and holidays’, ‘hearing from those they missed’, and ‘knowing the latest information about those they missed on their Internet communities’, children/adolescents would like to actively contact those they missed when they felt ‘happy’ or ‘depressed or frustrated’. The three views most often regarded by children/adolescents as true connection were ‘listening to the people they missed attentively’, ‘sharing their secrets’, and ‘regularly contacting the people they missed with real actions’. In terms of gender, girls’ proportions on ‘showing with actions, including contacting the people they missed regularly or expressing their feelings openly’ and ‘sharing secrets’ were higher than boys’, while boys’ proportion on ‘the attitudes when contacting people they missed, including listening attentively or without being distracted’ was higher than girls’. Conclusions: I. The more ‘active’ connection they have, the more happiness they feel. II. Teachers can teach children how to manage their emotions and express their feelings appropriately. III. It is very important to turn connection into ‘action.’ Teachers can set a good example and share their moods with others whatever mood they are in. This is a kind of connection. Keywords: children, connection, emotion, mental health
Procedia PDF Downloads 154
200 Modeling Sorption and Permeation in the Separation of Benzene/ Cyclohexane Mixtures through Styrene-Butadiene Rubber Crosslinked Membranes
Authors: Hassiba Benguergoura, Kamal Chanane, Sâad Moulay
Abstract:
Pervaporation (PV), a membrane-based separation technology, has gained much attention because of its energy-saving capability and low cost, especially for the separation of azeotropic or close-boiling liquid mixtures. There are two crucial issues for the industrial application of the pervaporation process. The first is developing membrane materials and tailoring membrane structures to obtain high pervaporation performance. The second is modeling pervaporation transport to gain a better understanding of the above-mentioned structure–pervaporation relationship. Many models have been proposed to predict the mass transfer process; among them, the solution-diffusion model is the most widely used for describing pervaporation transport, including the preferential sorption, diffusion and evaporation steps. For modeling pervaporation transport, the permeation flux, which depends on the solubility and diffusivity of the components in the membrane, should be obtained first. Traditionally, the solubility is calculated according to the Flory–Huggins theory. Separation of the benzene (Bz)/cyclohexane (Cx) mixture is industrially significant. Numerous papers have focused on the Bz/Cx system to assess the PV properties of membrane materials. Membranes with both high permeability and high selectivity are desirable for practical application, and several new polymers have been prepared to obtain both. Dense styrene-butadiene rubber (SBR) membranes, cross-linked by chloromethylation, were used in the separation of benzene/cyclohexane mixtures. The impact of the chloromethylation reaction, as a new method of cross-linking SBR, on the pervaporation performance has been reported. In contrast to vulcanization with sulfur, the cross-linking takes place on the styrene units of the polymeric chains via a methylene bridge. The partial pervaporative (PV) fluxes of benzene/cyclohexane mixtures in styrene-butadiene rubber (SBR) were predicted using Fick's first law. By integrating Fick's law over the benzene concentration, the predicted partial fluxes and the PV separation factor agreed well with the experimental data. The effects of feed concentration and operating temperature on the permeation flux predicted by this model were investigated. The predicted permeation fluxes are in good agreement with experimental data at lower benzene concentrations in the feed, but at higher benzene concentrations the model overestimated the permeation flux. Both the predicted and experimental permeation fluxes increase with increasing operating temperature. Solvent sorption levels for benzene/cyclohexane mixtures in an SBR membrane were determined experimentally. The results showed that the solvent sorption levels were strongly affected by the feed composition. The Flory–Huggins equation generates a higher R-squared coefficient for the sorption selectivity. Keywords: benzene, cyclohexane, pervaporation, permeation, sorption modeling, SBR
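As a hedged illustration of the quantities discussed above, the sketch below computes the total flux and the benzene/cyclohexane separation factor from partial fluxes and the feed composition; the numeric values are placeholders, and the separation-factor formula used is the conventional PV definition, assumed rather than quoted from the paper.

```python
# Illustrative only: conventional pervaporation performance metrics for a binary
# benzene (Bz) / cyclohexane (Cx) feed. Numbers are placeholders, not the paper's data.
def separation_factor(x_bz: float, y_bz: float) -> float:
    """alpha = (y_Bz / y_Cx) / (x_Bz / x_Cx), with x = feed and y = permeate weight fractions."""
    return (y_bz / (1.0 - y_bz)) / (x_bz / (1.0 - x_bz))

j_bz = 0.35   # placeholder partial flux of benzene, kg/(m^2 h)
j_cx = 0.05   # placeholder partial flux of cyclohexane, kg/(m^2 h)
x_bz = 0.50   # benzene weight fraction in the feed

total_flux = j_bz + j_cx
y_bz = j_bz / total_flux        # benzene fraction in the permeate
alpha = separation_factor(x_bz, y_bz)

print(f"Total flux = {total_flux:.2f} kg/(m^2 h), permeate Bz fraction = {y_bz:.2f}, alpha = {alpha:.1f}")
```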
Procedia PDF Downloads 327
199 Investigation of Permeate Flux through DCMD Module by Inserting S-Ribs Carbon-Fiber Promoters with Ascending and Descending Hydraulic Diameters
Authors: Chii-Dong Ho, Jian-Har Chen
Abstract:
The decline in permeate flux across membrane modules is attributed to the increase in temperature polarization resistance in flat-plate Direct Contact Membrane Distillation (DCMD) modules used for pure water production. Researchers have found that this effect can be diminished by embedding turbulence promoters, which augment turbulence intensity at the cost of increased power consumption, thereby improving the vapor permeate flux. The device performance of DCMD modules in terms of permeate flux was further enhanced by shrinking the hydraulic diameters of the inserted S-ribs carbon-fiber promoters while accounting for the increment in energy consumption. A mass-balance formulation, based on the resistance-in-series model and energy conservation in one-dimensional governing equations, was developed theoretically and tested experimentally on a flat-plate polytetrafluoroethylene/polypropylene (PTFE/PP) membrane module to predict permeate flux and temperature distributions. The ratio of permeate flux enhancement to energy consumption increment, serving as an assessment of economic and technical feasibility, was calculated to determine suitable design parameters for DCMD operations with inserted S-ribs carbon-fiber turbulence promoters. An economic analysis was also performed, weighing both the permeate flux improvement and the energy consumption increment for modules with promoter-filled channels in different array configurations and with various hydraulic diameters of the turbulence promoters. Results showed that the ratio of permeate flux improvement to energy consumption increment in descending hydraulic-diameter modules is higher than in uniform hydraulic-diameter modules. The fabrication details of the S-ribs carbon-fiber filaments and the schematic configuration of the flat-plate DCMD experimental setup, with acrylic plates serving as external walls, are presented in this study. The S-ribs carbon fibers act as turbulence promoters incorporated into the artificial hot saline feed stream, which was prepared by adding inorganic salts (NaCl) to distilled water. Theoretical predictions and experimental results showed that the new DCMD module design with inserted S-ribs carbon-fiber promoters achieves a considerable permeate flux enhancement. Additionally, the Nusselt number for the water-vapor-transferring membrane module with inserted S-ribs carbon-fiber promoters was generalized into a simplified expression to predict the heat transfer coefficient and the permeate flux as well. Keywords: permeate flux, Nusselt number, DCMD module, temperature polarization, hydraulic diameters
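As a hedged illustration of the assessment index described above, the sketch below computes the ratio of relative permeate-flux enhancement to relative energy-consumption increment for a promoter-filled module against an empty-channel reference; both the exact definition of the index and the numbers are assumptions made for illustration, not the paper's correlation or data.

```python
# Illustrative only: one plausible form of the "flux enhancement per energy
# consumption increment" index described in the abstract. The definition and
# the numbers are assumptions, not taken from the paper.
def enhancement_per_energy(flux_promoter, flux_empty, power_promoter, power_empty):
    flux_gain = (flux_promoter - flux_empty) / flux_empty          # relative flux increase
    energy_penalty = (power_promoter - power_empty) / power_empty  # relative power increase
    return flux_gain / energy_penalty

# Placeholder values for an empty channel vs. a promoter-filled channel
ratio = enhancement_per_energy(
    flux_promoter=8.5,   # kg/(m^2 h), hypothetical
    flux_empty=6.0,      # kg/(m^2 h), hypothetical
    power_promoter=1.8,  # W, hypothetical hydraulic power consumption
    power_empty=1.2,     # W, hypothetical hydraulic power consumption
)
print(f"Flux enhancement per unit of extra energy consumption: {ratio:.2f}")
# Larger values favor the design; the paper reports this ratio is higher for
# descending hydraulic-diameter promoter arrays than for uniform ones.
```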
Procedia PDF Downloads 9
198 Carbonyl Iron Particles Modified with Pyrrole-Based Polymer and Electric and Magnetic Performance of Their Composites
Authors: Miroslav Mrlik, Marketa Ilcikova, Martin Cvek, Josef Osicka, Michal Sedlacik, Vladimir Pavlinek, Jaroslav Mosnacek
Abstract:
Magnetorheological elastomers (MREs) are a unique type of material consisting of two components, a magnetic filler and an elastomeric matrix. Their properties can be tailored upon application of an external magnetic field. In this case, the change in the viscoelastic properties (viscoelastic moduli, complex viscosity) is influenced by two crucial factors. The first is the magnetic performance of the particles, and the second is the off-state stiffness of the elastomeric matrix. The former factor strongly depends on the intended application; however, the general rule is that higher magnetic performance of the particles provides higher MR performance of the MRE. Since magnetic particles have low stability against temperature and acidic environments, several methods to overcome these drawbacks have been developed. In most cases, the preparation of core-shell structures was employed as a suitable method for protecting the magnetic particles against thermal and chemical oxidation. However, if the shell material is not a single-layer substance but a polymer, the magnetic performance is significantly suppressed, because with the in situ polymerization technique it is very difficult to control the polymerization rate, and the polymer shell becomes too thick. The second factor is the off-state stiffness of the elastomeric matrix. Since the MR effectivity is calculated as the relative value of the elastic modulus upon magnetic field application divided by the elastic modulus in the absence of the external field, tuneability of the cross-linking reaction is also highly desired. Therefore, this study focuses on the controllable modification of magnetic particles using a novel monomeric system based on 2-(1H-pyrrol-1-yl)ethyl methacrylate. In this case, short polymer chains of different chain lengths and low polydispersity index will be prepared, and thus tailorable stability properties can be achieved. Since relatively thin polymer chains will be grafted on the surface of the magnetic particles, their magnetic performance will be affected only slightly. Furthermore, the cross-linking density will also be affected, due to the presence of the short polymer chains. From the application point of view, such MREs can be utilized for magneto-resistors, piezoresistors or pressure sensors, especially when a conducting shell is created on the magnetic particles. Therefore, the selection of the pyrrole-based monomer is crucial, and a controllably thin layer of conducting polymer can be prepared. Finally, such composite particles, consisting of a magnetic core and a conducting shell and dispersed in an elastomeric matrix, can also find use in the shielding of electromagnetic waves. Keywords: atom transfer radical polymerization, core-shell, particle modification, electromagnetic waves shielding
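As a hedged numerical illustration of the MR effectivity defined above (the field-on elastic modulus relative to the field-off modulus), the sketch below computes that ratio and the corresponding percentage increase from placeholder storage-modulus values.

```python
# Illustrative only: MR effectivity as the field-on storage modulus relative to
# the off-state (zero-field) modulus, following the definition in the abstract.
# Modulus values are placeholders, not measured data.
g_off_kpa = 120.0   # hypothetical storage modulus without magnetic field, kPa
g_on_kpa = 300.0    # hypothetical storage modulus at a given field strength, kPa

mr_ratio = g_on_kpa / g_off_kpa                                 # field-on / field-off
mr_increase_percent = (g_on_kpa - g_off_kpa) / g_off_kpa * 100.0

print(f"MR effectivity (ratio)   : {mr_ratio:.2f}")
print(f"Relative modulus increase: {mr_increase_percent:.0f} %")
# A softer off-state matrix (lower g_off_kpa) raises both numbers for the same filler.
```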
Procedia PDF Downloads 209
197 Translating Creativity to an Educational Context: A Method to Augment the Professional Training of Newly Qualified Secondary School Teachers
Authors: Julianne Mullen-Williams
Abstract:
This paper will provide an overview of a three-year mixed-methods research project that explores whether methods from the supervision of dramatherapy can augment the occupational psychology of newly qualified secondary school teachers. It will consider how creativity and the use of metaphor, as applied in the supervision of dramatherapists, can be translated to an educational context in order to explore the explicit/implicit dynamics between the teacher trainee or newly qualified teacher and the organisation, so as to support the super-objective in training for teaching: how to ‘be a teacher.’ There is growing evidence that attrition rates among teachers are rising after only five years of service owing to too many national initiatives, an unmanageable curriculum and deteriorating student discipline. The fieldwork conducted entailed facilitating a reflective space for Newly Qualified Teachers from all subject areas, using methods from the supervision of dramatherapy, to explore the social and emotional aspects of teaching and learning with the ultimate aim of improving the occupational psychology of teachers. Clinical supervision is a formal process of professional support and learning which permits individual practitioners in frontline service jobs (counsellors, psychologists, dramatherapists, social workers and nurses) to expand their knowledge and proficiency, take responsibility for their own practice, and improve client protection and safety of care in complex clinical situations. It is deemed integral to continued professional practice to safeguard vulnerable people and to reduce practitioner burnout. Dramatherapy supervision incorporates all of the above but utilises creative methods as a tool to gain insight and a deeper understanding of the situation. Creativity and the use of metaphor enable the supervisee to gain an aerial view of the situation they are exploring. The word metaphor in Greek means to ‘carry across’, indicating a transfer of meaning from one frame of reference to another. The supervision support was incorporated into each group’s induction training programme. The first-year group attended fortnightly one-hour sessions, while the second group received two one-hour sessions every term. The existing literature on the supervision and mentoring of secondary school teacher trainees calls for changes in pre-service teacher education and in the induction period. There is a particular emphasis on the need to include reflective and experiential learning, within training programmes and within the induction period, in order to help teachers manage the interpersonal dynamics and emotional impact within a highly pressurised environment. Keywords: dramatherapy supervision, newly qualified secondary school teachers, professional development, teacher education
Procedia PDF Downloads 388