Search results for: dynamic training shoes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7468

328 Environmental Impact of Pallets in the Supply Chain: Including Logistics and Material Durability in a Life Cycle Assessment Approach

Authors: Joana Almeida, Kendall Reid, Jonas Bengtsson

Abstract:

Pallets are devices used for moving and storing freight and are nearly omnipresent in supply chains. The market is dominated by timber pallets, with plastic being a common alternative. Either option underpins the use of important resources (oil, land, timber), the emission of greenhouse gases and additional waste generation in most supply chains. This study uses a dynamic approach to the life cycle assessment (LCA) of pallets. It demonstrates that what ultimately defines the environmental burden of pallets in the supply chain is the length of their lifespan, which depends on the durability of the material and on how pallets are utilized. The study proposes an LCA of pallets in supply chains supported by an algorithm that estimates pallet durability as a function of material resilience and of logistics. The LCA runs from cradle to grave, including raw material provision, manufacture, transport and end of life. The scope is representative of timber and plastic pallets in the Australian and South-East Asian markets. The materials included in this analysis are: tropical mixed hardwood, unsustainably harvested in SE Asia; certified softwood, sustainably harvested; conventional plastic, a mix of virgin and scrap plastic; and recycled plastic pallets, made of 100% mixed plastic scrap, which are being pioneered by Re > Pal. The logistics model posits that more complex supply chains and rougher handling subject pallets to higher stress loads. More stress shortens the lifespan of pallets, to a degree that depends on their composition. Timber pallets can be repaired, extending their lifespan, while plastic pallets cannot. At the factory gate, softwood pallets have the lowest carbon footprint. Re > Pal follows closely due to its burden-free feedstock. Tropical mixed hardwood and conventional plastic pallets have the highest footprints. Harvesting tropical mixed hardwood in SE Asia often leads to deforestation, causing emissions from land use change. 
The higher footprint of conventional plastic pallets is due to the production of virgin plastic. Our findings show that manufacture alone does not determine the sustainability of pallets. Even though certified softwood pallets have a lower carbon footprint and their lifespan can be extended by repair, the need to re-supply materials and dispose of waste timber offsets this advantage; these pallets also generate the most waste of all the options considered. In a supply chain context, Re > Pal pallets have the lowest footprint due to lower replacement and disposal needs. In addition, Re > Pal pallets are nearly ‘waste neutral’, because the waste generated throughout their life cycle is almost entirely offset by the scrap taken up for production. The absolute results of this study could be confirmed by refining the logistics model, improving data quality, and expanding the range of materials and utilization practices. Still, this LCA demonstrates that considering logistics, raw materials and material durability is central to sustainable decision-making on pallet purchasing, management and disposal.
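The abstract does not publish the durability algorithm, but the idea of coupling material resilience, handling stress and repairability to a supply-chain footprint can be sketched roughly as follows. All function names, parameter names and numbers here are invented placeholders for illustration, not values from the study:

```python
# Illustrative sketch (not the authors' algorithm): pallet lifespan as a
# function of material resilience and logistics stress, and the resulting
# supply-chain carbon footprint over a fixed service requirement.
import math

def lifespan_trips(base_trips, stress_factor, repairable, repair_bonus=0.5):
    """Trips a pallet survives: material resilience reduced by handling
    stress; repairable (timber) pallets get an extended life."""
    trips = base_trips / stress_factor
    if repairable:
        trips *= 1 + repair_bonus
    return trips

def supply_chain_footprint(gate_kgco2e, eol_kgco2e, trips_required, trips_per_pallet):
    """Total footprint = (manufacture + end-of-life) x pallets consumed,
    where rougher logistics means more replacements."""
    pallets_needed = math.ceil(trips_required / trips_per_pallet)
    return pallets_needed * (gate_kgco2e + eol_kgco2e)

# Hypothetical comparison over 1000 trips in a rough-handling supply chain
stress = 2.0
for name, base, repairable, gate, eol in [
    ("certified softwood", 40, True, 10.0, 2.0),
    ("recycled plastic", 60, False, 12.0, 0.5),
]:
    trips = lifespan_trips(base, stress, repairable)
    print(name, supply_chain_footprint(gate, eol, 1000, trips))
```

The point the sketch makes is the paper's central one: the gate footprint alone does not decide the ranking, because replacement frequency multiplies it.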

Keywords: carbon footprint, life cycle assessment, recycled plastic, waste

Procedia PDF Downloads 196
327 Interplay of Material and Cycle Design in a Vacuum-Temperature Swing Adsorption Process for Biogas Upgrading

Authors: Federico Capra, Emanuele Martelli, Matteo Gazzani, Marco Mazzotti, Maurizio Notaro

Abstract:

Natural gas is a major energy source in the current global economy, accounting for roughly 21% of total primary energy consumption. Producing natural gas from renewable energy sources is key to limiting the related CO2 emissions, especially for those sectors that rely heavily on natural gas. In this context, biomethane produced via biogas upgrading is a good candidate for partial substitution of fossil natural gas. The upgrading of biogas to biomethane consists of (i) the removal of pollutants and impurities (e.g. H2S, siloxanes, ammonia, water), and (ii) the separation of carbon dioxide from methane. Focusing on the CO2 removal process, several technologies can be considered: chemical or physical absorption with solvents (e.g. water, amines), membranes, and adsorption-based systems (PSA). However, none has emerged as the leading technology, because of (i) the heterogeneity in plant size, (ii) the heterogeneity in biogas composition, which is strongly related to the feedstock type (animal manure, sewage treatment, landfill products), (iii) the case-sensitive optimal tradeoff between purity and recovery of biomethane, and (iv) the destination of the produced biomethane (grid injection, CHP applications, transportation sector). With this contribution, we explore the use of a technology for biogas upgrading and compare the resulting performance with benchmark technologies. The proposed technology makes use of a chemical sorbent, engineered by RSE, which consists of di-ethanolamine deposited on a solid γ-alumina support and chemically adsorbs the CO2 contained in the gas. The material is packed into fixed beds that cyclically undergo adsorption and regeneration steps. CO2 is adsorbed at low temperature and ambient pressure (or slightly above), while regeneration is carried out by pulling vacuum and increasing the temperature of the bed (vacuum-temperature swing adsorption, VTSA). 
Dynamic adsorption tests were performed by RSE and were used to tune the mathematical model of the process, including material and transport parameters (i.e. Langmuir isotherm data and heat and mass transport). Based on this data set, an optimal VTSA cycle was designed. The results enabled a better understanding of the interplay between material and cycle tuning. As an exemplary application, the upgrading of biogas for grid injection, produced by an anaerobic digester (60-70% CO2, 30-40% CH4), for an equivalent size of 1 MWel was selected. A plant configuration is proposed to maximize heat recovery and minimize the energy consumption of the process. The resulting performance is very promising compared with benchmark solutions, making the VTSA configuration a valuable alternative for producing biomethane from biogas.
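A minimal sketch of the Langmuir-isotherm loading calculation that underlies tuning a VTSA process model to dynamic adsorption data. The parameter values below are illustrative placeholders, not the fitted constants of the RSE sorbent:

```python
# Equilibrium CO2 loading on a chemisorbent via a Langmuir isotherm with a
# van't Hoff temperature dependence of the affinity constant b.
import math

def langmuir_loading(p_co2, T, q_max, b0, dH, R=8.314):
    """Equilibrium CO2 loading [mol/kg] at partial pressure p_co2 [Pa] and
    temperature T [K]."""
    b = b0 * math.exp(-dH / (R * T))  # dH < 0: adsorption is exothermic
    return q_max * b * p_co2 / (1 + b * p_co2)

# Adsorption at ambient temperature yields high loading; heating under vacuum
# lowers it, which is what the temperature-swing regeneration step exploits.
q_ads = langmuir_loading(40e3, 298.15, q_max=2.5, b0=1e-9, dH=-45e3)  # adsorption step
q_reg = langmuir_loading(5e3, 393.15, q_max=2.5, b0=1e-9, dH=-45e3)   # regeneration step
working_capacity = q_ads - q_reg
```

The working capacity (difference between adsorption and regeneration loadings) is the quantity an optimal cycle design maximizes per unit of energy spent on vacuum and heating.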

Keywords: biogas upgrading, biogas upgrading energetic cost, CO2 adsorption, VTSA process modelling

Procedia PDF Downloads 247
326 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and make them more efficient without compromising classification accuracy. The proposal is based on representing EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier then represents the LDA outputs of each sub-band as scores and organizes them into a single vector, which is used as the training vector of a global SVM classifier. 
Initially, the public EEG data set IIa of the BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact, with a dimension 68% smaller than the original signal, the resulting FFT matrix preserves the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall system classification rate compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement of more than 10 percentage points and the reduction in computational cost denote the potential of the FFT in EEG signal filtering applied to the context of MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
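The frequency-decomposition step described above can be sketched on synthetic data. The epoch length, channel count and the uniform 4 Hz sub-band width below are assumptions chosen for readability, not the paper's exact setup (which uses 33 sub-bands over 0-40 Hz):

```python
# Sketch: map an EEG epoch to FFT coefficients, keep the 0-40 Hz band of
# interest, and split it into sub-bands that would each feed a CSP+LDA stage.
import numpy as np

fs, n_channels, n_samples = 250, 22, 500   # 2-second epoch at 250 Hz (illustrative)
rng = np.random.default_rng(0)
epoch = rng.standard_normal((n_channels, n_samples))  # synthetic stand-in for EEG

# Frequency decomposition: one real FFT per channel, then keep bins <= 40 Hz
coeffs = np.fft.rfft(epoch, axis=1)
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
band_mask = freqs <= 40.0
coeffs_40 = coeffs[:, band_mask]           # compact coefficient matrix

# Split the retained bins into contiguous 4 Hz sub-bands
sub_bands = [(lo, lo + 4) for lo in range(0, 40, 4)]
sub_band_views = [
    coeffs_40[:, (freqs[band_mask] >= lo) & (freqs[band_mask] < hi)]
    for lo, hi in sub_bands
]
# In SBCSP, each view would be passed to one CSP filter and one LDA classifier,
# with the LDA scores fused downstream by the Bayesian meta-classifier.
```

The dimensionality reduction claimed in the paper comes from exactly this truncation: only the bins inside the band of interest survive, so the coefficient matrix is much smaller than the raw time-domain signal.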

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 97
325 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality

Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo

Abstract:

Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only the execution of mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision-making, this work reports a model-eliciting activity (MEA). The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable for students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data and consider the use of the computer. The activity was designed following the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out over three school cycles in the Probability and Statistics class for civil engineering students at the University of Guadalajara. The analysis of the way in which the students sought to solve the activity was performed using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making). 
With the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, students' resistance to moving from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population from the sample data; third, viewing the sample as an isolated event rather than as part of a random process that must be seen in the context of a probability distribution; and fourth, the difficulty of making decisions with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these conceptions as obstacles to understanding probability distributions, and that they do not change after a single intervention, allows both the interventions and the MEA to be modified, in such a way that students may themselves identify erroneous solutions while carrying out the MEA. The MEA also proved to be democratic, since several students who had participated little and earned low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, such as plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the civil engineering students improved their probabilistic knowledge and their understanding of fundamental concepts such as sample, population, and probability distribution.
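To give a flavor of the probabilistic reasoning the MEA targets, here is a hedged sketch of a batch-acceptance decision from sample data using the binomial distribution. The scenario and all numbers are invented for illustration, not taken from the activity itself:

```python
# Decision from sample data: given k failures in a sample of n concrete
# cylinders, how likely is a result at least this extreme if the true defect
# rate were an acceptable p0? A small tail probability argues for rejection.
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def tail_prob(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(binom_pmf(i, n, p) for i in range(k, n + 1))

# Hypothetical rule: reject the batch if 4+ defects out of 20 would be
# unlikely (tail probability below 5%) under an acceptable rate p0 = 0.05.
p_value = tail_prob(4, 20, 0.05)
reject_batch = p_value < 0.05
```

This is precisely the step where the reported obstacles bite: the sample must be read as one realization of a random process governed by a distribution, not as an isolated linear fact.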

Keywords: linear model, models and modeling, probability, randomness, sample

Procedia PDF Downloads 90
324 Changes in Rainfall and Temperature and Its Impact on Crop Production in Moyamba District, Southern Sierra Leone

Authors: Keiwoma Mark Yila, Mathew Lamrana Siaffa Gboku, Mohamed Sahr Lebbie, Lamin Ibrahim Kamara

Abstract:

Rainfall and temperature are important variables that are often used to trace climate variability and change. A perception study and an analysis of climatic data were conducted to assess the changes in rainfall and temperature and their impact on crop production in Moyamba district, Sierra Leone. For the perception study, 400 farmers were randomly selected from farmer-based organizations (FBOs) in 4 chiefdoms, and 30 agricultural extension workers (AEWs) in the Moyamba district were purposively selected as respondents. Descriptive statistics and Kendall's test of concordance were used to analyze the data collected from the farmers and AEWs. Data for the analysis of variability and trends in rainfall and temperature from 1991 to 2020 were obtained from the Sierra Leone Meteorological Agency and Njala University and grouped into monthly, seasonal and annual time series. Regression analysis was used to determine the statistical values and trend lines for the seasonal and annual time series. The Mann-Kendall test and Sen's slope estimator were used to analyze the significance and magnitude of the trends, respectively. The results of both studies show evidence of climate change in the Moyamba district. A substantial number of farmers and AEWs perceived a decrease in the annual rainfall amount and the length of the rainy season, a late start and end of the rainy season, an increase in temperature during the day and night, and a shortened harmattan period over the last 30 years. Analysis of the meteorological data shows evidence of variability in the seasonal and annual distribution of rainfall and temperature, a decreasing but non-significant trend in rainy season and annual rainfall, and an increasing and significant trend in seasonal and annual temperature from 1991 to 2020. The changes in rainfall and temperature observed by the farmers and AEWs thus only partially agree with the results of the analyzed meteorological data. 
The majority of the farmers perceived that adverse weather conditions have negatively affected crop production in the district. Droughts, high temperatures and irregular rainfall are the three major adverse weather events that farmers perceived to have contributed to a substantial loss in the yields of the major crops cultivated in the district. In response to the negative effects of adverse weather events, a substantial number of farmers take no action, owing to their lack of knowledge and of the technical or financial capacity to implement climate-smart agriculture (CSA) practices. Although a few farmers are applying some CSA practices on their farms, there is an urgent need to build the capacity of farmers and AEWs to adapt to and mitigate the negative impacts of climate change. The support most needed by farmers is the provision of climate-resilient crop varieties, whilst the AEWs need training on CSA practices.
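The trend analysis described above can be sketched compactly. This is an illustrative implementation of the Mann-Kendall S statistic and Sen's slope estimator, run on a synthetic warming-like series rather than the study's station records:

```python
# Mann-Kendall trend test statistic and Sen's slope for a univariate series.
def mann_kendall_S(x):
    """Mann-Kendall S: count of concordant minus discordant pairs.
    Large positive S indicates an increasing trend, large negative a decrease."""
    n = len(x)
    return sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )

def sens_slope(x):
    """Sen's estimator of trend magnitude: median of all pairwise slopes,
    robust to outliers compared with least-squares regression."""
    slopes = sorted(
        (x[j] - x[i]) / (j - i)
        for i in range(len(x) - 1) for j in range(i + 1, len(x))
    )
    m = len(slopes)
    return slopes[m // 2] if m % 2 else 0.5 * (slopes[m // 2 - 1] + slopes[m // 2])

# Synthetic 30-year series (1991-2020 length) increasing by 0.03 units/year
temps = [26.0 + 0.03 * yr for yr in range(30)]
S = mann_kendall_S(temps)     # maximal S for a strictly increasing series
slope = sens_slope(temps)     # recovers the 0.03 units/year magnitude
```

In practice, the significance of S is assessed against its variance under the no-trend null (with a tie correction); the sketch shows only the statistics themselves.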

Keywords: climate change, crop productivity, farmer’s perception, rainfall, temperature, Sierra Leone

Procedia PDF Downloads 53
323 LaeA/1-Velvet Interplay in Aspergillus and Trichoderma: Regulation of Secondary Metabolites and Cellulases

Authors: Razieh Karimi Aghcheh, Christian Kubicek, Joseph Strauss, Gerhard Braus

Abstract:

Filamentous fungi are of considerable economic and social significance for human health and nutrition and in white biotechnology. These organisms are dominant producers of a range of primary metabolites such as citric acid, microbial lipids (biodiesel) and highly unsaturated fatty acids (HUFAs). In particular, they also produce important but structurally complex secondary metabolites with enormous therapeutic applications in the pharmaceutical industry, for example cephalosporin, penicillin, taxol, zeranol and ergot alkaloids. Fungal secondary metabolites significantly relevant to human health include not only antibiotics but also, e.g., lovastatin, a well-known antihypercholesterolemic agent produced by Aspergillus terreus, and aflatoxin, a carcinogen produced by A. flavus. In addition to their roles in human health and agriculture, some fungi are industrially and commercially important: species of the ascomycete genus Hypocrea (teleomorph of Trichoderma) have been demonstrated to be efficient producers of highly active cellulolytic enzymes. This trait makes them effective in disrupting and depolymerizing lignocellulosic materials and thus applicable tools in biotechnological areas as diverse as clothes-washing detergents, animal feed, and pulp and fuel production. Fungal LaeA/LAE1 (Loss of aflR Expression A) homologs and their gene products act at the interface between secondary metabolism, cellulase production and development. Lack of the corresponding genes results in significant physiological changes, including loss of secondary metabolite production and of lignocellulose-degrading enzymes. At the molecular level, the encoded proteins are presumably methyltransferases or demethylases that act directly or indirectly at heterochromatin and interact with velvet domain proteins. Velvet proteins bind to DNA and affect the expression of secondary metabolite (SM) genes and cellulases. 
The dynamic interplay between LaeA/LAE1, velvet proteins and additional interaction partners is key to understanding the coordination of the metabolic and morphological functions of fungi and is required for biotechnological control of the formation of desired bioactive products. Aspergilli and Trichoderma represent distinct biotechnologically significant genera with significant differences in the LaeA/LAE1-velvet protein machinery and its target proteins. We therefore performed a comparative study of the interaction partners of this machinery and the dynamics of the various protein-protein interactions using robust proteomic and mass spectrometry techniques. This enhances our knowledge of the fungal coordination of secondary metabolism, cellulase production and development, and will thereby improve recombinant fungal strain construction for the production of industrial secondary metabolites or lignocellulose-hydrolyzing enzymes.

Keywords: cellulases, LaeA/1, proteomics, secondary metabolites

Procedia PDF Downloads 248
322 Effectiveness of Prehabilitation on Improving Emotional and Clinical Recovery of Patients Undergoing Open Heart Surgeries

Authors: Fatma Ahmed, Heba Mostafa, Bassem Ramdan, Azza El-Soussi

Abstract:

Background: The World Health Organization has stated that by 2020 cardiac disease will be the number one cause of death worldwide and estimates that 25 million people per year will suffer from heart disease. Cardiac surgery is considered an effective treatment for severe forms of cardiovascular disease that cannot be treated by medical therapy or cardiac interventions. In spite of its benefits, cardiac surgery is a major stressful experience for patients who are candidates for it. Prehabilitation can decrease the incidence of postoperative complications, as it prepares patients for surgical stress by enhancing their defenses to meet the demands of surgery. When patients anticipate the postoperative sequence of events, they prepare themselves to adopt certain behaviors, identify their roles and actively participate in their own recovery; therefore, anxiety levels decrease and functional capacity is enhanced. Prehabilitation programs can comprise interventions that include physical exercise, psychological prehabilitation, nutritional optimization and risk factor modification. Physical exercise is associated with improvements in the functioning of various physiological systems, reflected in increased functional capacity and improved cardiac and respiratory function, and makes patients fit for surgical intervention. Prehabilitation programs should also prepare patients psychologically to cope with the stress, anxiety and depression associated with postoperative pain, fatigue and limited ability to perform the usual activities of daily living. Notwithstanding the benefits of psychological preparation, few studies have investigated the effect of psychological prehabilitation on the psychological, quality-of-life and physiological outcomes of patients who have undergone cardiac surgery. 
Aim of the study: The study aims to determine the effect of prehabilitation interventions on the outcomes of patients undergoing cardiac surgery. Methods: A quasi-experimental study design was used. Sixty eligible and consenting patients were recruited and divided into a control and an intervention group (30 participants each). A single instrument, an assessment tool covering the emotional, physiological, clinical, cognitive and functional capacity outcomes of the prehabilitation intervention, was used to collect the data. Results: Data analysis showed significant improvement in patients' emotional state and in physiological and clinical outcomes (p < 0.001) with the use of prehabilitation interventions. Conclusions: Cardiac prehabilitation, in the form of information about surgery, circulation exercises, deep breathing exercises, incentive spirometer training and nutritional education, implemented daily for one week before surgery by patients scheduled for elective open heart surgery, was shown to improve patients' emotional state and their physiological and clinical outcomes.

Keywords: emotional recovery, clinical recovery, coronary artery bypass grafting patients, prehabilitation

Procedia PDF Downloads 182
321 L1 Poetry and Moral Tales as a Factor Affecting L2 Acquisition in EFL Settings

Authors: Arif Ahmed Mohammed Al-Ahdal

Abstract:

Poetry, tales, and fables have always been part of the L1 repertoire, one that takes learners to an amazing and fascinating world of imagination. The storytelling class and the genre of poems are activities greatly enjoyed by all age groups. The key idea behind their inclusion in the language curriculum is to sensitize young minds to a wide range of human emotions, which is believed to contribute greatly to building their social resilience, emotional stability, empathy towards fellow creatures, and literacy. Quite certainly, the learning objective at this stage is not language acquisition (though it happens as an automatic process) but acquainting young learners with an entire spectrum of what may be called the ‘noble’ abilities of the human race. These genres enrich learners' very existence, inspiring them to unearth ‘selves’ that help them as adults and enable them to co-exist fruitfully and symbiotically with their fellow human beings. By extension, ‘higher’ training in these literary genres shows the universality of human emotions, sufferings, aspirations, and hopes. The current study is anchored in Reader-Response Theory in literature learning, which suggests that the reader reconstructs the work and re-enacts the author's creative role. That is, literary works provide clues or verbal symbols in a linguistic system widely accepted by everyone who shares the language, but every reader reads their own life experiences and situations into them. The significance of the words thus depends on the reader, even though the words themselves stand in a conventional relationship to meaning. In every reading, there is an interaction between the reader and the text. The process of reading is an experience in which the reader tries to comprehend a literary work that exceeds its text, since it provokes emotional and intellectual reactions that are neither fully determined by the document nor attributable to the reader alone. 
The idea is that the text forms the basis of a unifying experience. A reinterpretation of the literary text may transform it into a guiding principle for responding to actual experiences and personal memories. The impulses delivered to the reader vary from poem to poem and text to text; moreover, readers differ considerably even with the same material. Previous studies confirm that poetry is a useful tool for learning a language. The present paper works from these hypotheses and proposes to study the impetus given to L2 learning as a function of exposure to poetry and meaningful stories in L1. The driving force behind the choice of this topic is the first-hand experience the researcher had while teaching a literary text to a group of BA students who, in reaction to the text, initially burst into tears and ultimately turned the class into an interactive session. The study also intends to compare the performance of male and female students post-intervention using pre- and post-tests, apart from undertaking a detailed inquiry via interviews with college learners of English to understand the role L1 literature plays in the acquisition of L2.

Keywords: SLA, literary text, poetry, tales, affective factors

Procedia PDF Downloads 56
320 Positive Incentives to Reduce Private Car Use: A Theory-Based Critical Analysis

Authors: Rafael Alexandre Dos Reis

Abstract:

Research has shown a substantial increase in the share of Conventionally Fuelled Vehicles (CFVs) in the urban transport modal split. The reasons for this unsustainable reality are multiple, from economic interventions to individual behaviour. The development and delivery of positive incentives for the adoption of more environmentally friendly modes of transport is an emerging strategy to help tackle the problem of excessive use of conventionally fuelled vehicles. The efficiency of this approach, like that of other information-based schemes, can benefit from knowledge of its potential impacts on the theoretical constructs of multiple behaviour change theories. The goal of this research is to critically analyse theories of behaviour that are relevant to transport research and the impacts of positive incentives on the theoretical determinants of behaviour, strengthening the current body of evidence about the benefits of this approach. The main method is a literature review on two main topics: the current theories of behaviour that have empirical support in transport research, and past or ongoing positive incentive programs that had an impact on car use reduction. The reviewed positive incentive programs were: TravelSmart®; Spitsmijden®; Incentives for Singapore Commuters® (INSINC); COMMUTEGREENER®; MOVESMARTER®; STREETLIFE®; SUPERHUB®; SUNSET®; and the EMPOWER® project. The theories analysed were the Theory of Planned Behaviour (TPB), the Norm Activation Model (NAM), Social Learning Theory (SLT), the Theory of Interpersonal Behaviour (TIB), Goal-Setting Theory (GST) and the Value-Belief-Norm Theory (VBN). 
After reviewing the theoretical constructs of each theory and their influence on car use, it can be concluded that positive incentive schemes act on behaviour change in the following ways: changing individuals' attitudes through informational incentives; increasing feelings of moral obligation to reduce the use of CFVs; increasing the perceived social pressure to engage in more sustainable mobility behaviours, for example through comparison mechanisms in social media; increasing perceived behavioural control through informational and training incentives; reinforcing personal norms with supporting information; providing tools for self-monitoring and self-evaluation; providing real experiences of alternative modes to the car; making the observation of others' car use reduction possible; informing about the consequences of behaviour and emphasizing the individual's responsibility to society and the environment; increasing the perception of the consequences of car use for an individual's valued objects; increasing the perceived ability to reduce threats to the environment; helping establish goals to reduce car use; giving personalized feedback on those goals; increasing feelings of commitment to the goal; and reducing the perceived complexity of using alternatives to the car. It is notable that the emerging technique of delivering positive incentives is systematically connected to causal determinants of travel behaviour. The preliminary results of the reviewed programs show how positive incentives might strengthen these determinants and help in the process of behaviour change.

Keywords: positive incentives, private car use reduction, sustainable behaviour, voluntary travel behaviour change

Procedia PDF Downloads 317
319 Teaching English for Children in Public Schools Can Work in Egypt

Authors: Shereen Kamel

Abstract:

This study explores the recent introduction of bilingual education in Egyptian public schools. It aims to provide an overall picture of bilingual education programs globally and examine their suitability for the Egyptian social and cultural context. The study also assesses the current practice of teaching English as a Second Language in public schools from the early childhood education stage onwards, instead of starting in middle school, as a strategy that promotes English language proficiency and equity among students. The theoretical framework is based on Jim Cummins' bilingual education theories and on recent trends adopting different developmental theories and perspectives, such as Stephen Krashen's theory of Second Language Acquisition, which calls for communicative and meaningful interaction rather than memorization of grammatical rules. The question posed here is whether bilingual education, with its peculiar nature, could be a good opportunity to reach all Egyptian students and prepare them to become global citizens. A more specific question concerns the extent to which social and cultural variables can affect young learners' second language acquisition. This exploratory analytical study uses a mixed-methods research design to examine the application of bilingual education in Egyptian public schools. The study uses a cluster sample of schools in Egypt from different social and cultural backgrounds to assess the determining variables. The qualitative emphasis is on interviewing teachers and reviewing students' achievement documents. The quantitative aspect is based on observations of in-class activities through tally sheets and checklists. Access to schools and documents has been authorized by governmental and institutional research bodies. Data sources will comprise achievement records, students' portfolios, parents' feedback and teachers' viewpoints. Triangulation and SPSS will be used for analysis. 
Based on the gathered data, new curricula have been assigned for elementary grades, and teachers have been required to teach the newly developed materials abruptly, without any prior training. Due to a shortage in the teaching force, many assigned teachers have not been proficient in the English language. Hence, teachers’ lack of competence and preparedness to teach this grade-specific curriculum constitutes a great challenge in the implementation phase. Nevertheless, the young learners themselves, as well as their parents, seem enthusiastic about the idea. According to the findings of this research study, teaching English as a Second Language to children in public schools is applicable and culturally relevant to the Egyptian context. However, there may be social and cultural differences and constraints in application, in addition to various challenges regarding teacher preparation. Therefore, a new mechanism should be incorporated to overcome these challenges for better results. Moreover, a paradigm shift in teacher development programs is urgently needed. Furthermore, ongoing support and follow-up are crucial to help both teachers and students realize the desired outcomes.

Keywords: bilingual education, communicative approach, early childhood education, language and culture, second language acquisition

Procedia PDF Downloads 99
318 The Lacuna in Understanding of Forensic Science amongst Law Practitioners in India

Authors: Poulomi Bhadra, Manjushree Palit, Sanjeev P. Sahni

Abstract:

Forensic science draws on all branches of science for criminal investigation and trial, and has increasingly emerged as an important tool in the administration of justice. However, the growth and development of this field in India has not been as rapid or widespread as in more developed Western countries. For the successful administration of justice, it is important that all agencies involved in law enforcement adopt an inter-professional approach towards forensic science, which is presently lacking. In light of the alarmingly high average acquittal rate in India, this study aims to examine the lack of understanding and appreciation of the importance and scope of forensic evidence and expert opinions amongst law professionals such as lawyers and judges. Based on a study of trial court cases from Delhi and surrounding areas, the study underlines the areas of forensics in which the criminal justice system has noticeably erred. Using this information, the authors examine the extent of forensic understanding amongst legal professionals and attempt to conclusively identify the areas in which they need further appraisal. A cross-sectional study using a structured questionnaire was conducted amongst law professionals across age, gender, type and years of experience in court, to determine their understanding of DNA, fingerprints and other interdisciplinary scientific materials used as forensic evidence. The study assesses the levels of understanding amongst lawyers with regard to DNA and fingerprint evidence and how these affect trial outcomes, and also seeks to identify the factors that prevent credible and advanced awareness amongst legal personnel. The survey identified the areas of modern and advanced forensics, such as forensic entomology, anthropology and cybercrime, in which Indian legal professionals are yet to attain a functional understanding. 
It also brings to light what is commonly termed the ‘CSI effect’ in Western courtrooms, and provides scope to study the existence of this phenomenon and its effects on Indian courts and their judgements. This study highlighted the prevalence of unchallenged expert testimony presented by the prosecution in criminal trials and impressed upon the judicial system the need for independent analysis and evaluation of the scientist’s data and/or testimony by the defense. Overall, this study aims to establish a clearer and more rigorous understanding of why legal professionals should have a basic grasp of the interdisciplinary nature of the forensic sciences. Based on the aforementioned findings, the authors suggest various measures by which judges and lawyers might obtain extensive knowledge of the advances and promising potentialities of forensic science. These include promoting a forensic curriculum in legal studies at Bachelor’s and Master’s level, as well as in mid-career professional courses. The formation of forensic-legal consultancies, in consultation with the Department of Justice, will not only assist in training police, military and law personnel but will also encourage legal research in this field. These suggestions also aim to bridge the communication gap that presently exists between law practitioners, forensic scientists and the general community’s awareness of the criminal justice system.

Keywords: forensic science, Indian legal professionals, interdisciplinary awareness, legal education

Procedia PDF Downloads 319
317 An Agent-Based Approach to Examine Interactions of Firms for Investment Revival

Authors: Ichiro Takahashi

Abstract:

One conundrum that macroeconomic theory faces is to explain how an economy can revive from a depression in which aggregate demand has fallen substantially below productive capacity. This paper examines an autonomous stabilizing mechanism using an agent-based Wicksell-Keynes macroeconomic model, focusing on the effects of the number of firms and the length of the gestation period for investment, both of which are often assumed to be one in mainstream macroeconomic models. The simulations found the virtual economy to be highly unstable, or more precisely collapsing, when these parameters are fixed at one. This finding may even lead us to question the legitimacy of these common assumptions. A perpetual decline in capital stock will eventually encourage investment if the capital stock is short-lived, because inactive investment results in insufficient productive capacity. However, in an economy characterized by a roundabout production method, a gradually declining productive capacity may never fall below an aggregate demand that is also shrinking. Naturally, one would then ask: if an economy cannot rely on external stimuli such as population growth and technological progress to revive investment, what factors would provide the buoyancy needed to stimulate it? The current paper attempts to answer this question by employing the artificial macroeconomic model mentioned above. The baseline model has three features: (1) multi-period gestation for investment, (2) a large number of heterogeneous firms, and (3) demand-constrained firms. The instability is a consequence of the following dynamic interactions. (a) Multi-period gestation means that once a firm starts a new investment, it continues to invest over subsequent periods. 
During these gestation periods, the excess demand created by the investing firm spills over and ignites new investment by other firms that supply investment goods: the presence of multi-period gestation provides a field for investment interactions. Conversely, if the gestation period is short, the excess demand for investment goods tends to fade away before it develops into a full-fledged boom. (b) Strong demand in the goods market tends to raise the price level, thereby lowering real wages. This reduction of real wages has two opposing effects on aggregate demand, through two channels: (1) a reduction in real labor income, and (2) an increase in labor demand due to the equality between marginal labor productivity and the real wage (referred to as the Walrasian labor demand). If there is only a single firm, a lower real wage will increase its Walrasian labor demand, but actual labor demand tends to be determined by the derived, demand-constrained labor demand, so the second, positive effect does not work effectively. In contrast, in an economy with a large number of firms, Walrasian firms will increase employment. This interaction among heterogeneous firms is key to stability: a single firm cannot expect to benefit from an aggregate demand increased by other firms.
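The destabilizing role of a short gestation period can be illustrated with a toy simulation. The sketch below is not the authors' Wicksell-Keynes model; it is a minimal caricature in which a firm that starts a project spends for G consecutive periods, and high aggregate demand induces idle firms to start projects of their own (all parameter values are arbitrary assumptions):

```python
import random

# Minimal sketch (not the paper's model): N firms, gestation period G.
# Once a firm starts an investment project it spends for G consecutive
# periods; that spending raises aggregate demand, which can trigger
# further firms to start projects (the spillover channel).
def simulate(n_firms, gestation, periods=50, threshold=0.3, seed=1):
    rng = random.Random(seed)
    remaining = [0] * n_firms          # periods left on each firm's project
    demand_path = []
    for _ in range(periods):
        spending = sum(1 for r in remaining if r > 0)
        demand = spending / n_firms    # normalized aggregate demand
        demand_path.append(demand)
        for i in range(n_firms):
            if remaining[i] > 0:
                remaining[i] -= 1      # project continues through gestation
            elif demand > threshold or rng.random() < 0.05:
                remaining[i] = gestation  # spillover (or autonomous) start
    return demand_path

short = simulate(n_firms=100, gestation=1)
long_ = simulate(n_firms=100, gestation=5)
print(sum(short) / len(short), sum(long_) / len(long_))
```

With a one-period gestation, the demand spillover dies out before other firms respond; a five-period gestation sustains the investment interactions, echoing the mechanism described in the abstract.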

Keywords: agent-based macroeconomic model, business cycle, demand constraint, gestation period, representative agent model, stability

Procedia PDF Downloads 137
316 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling

Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather

Abstract:

New methods have recently been introduced to improve the thermal properties of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt. %) by doping them with minute concentrations of nanoparticles, in the range of 0.5 to 1.5 wt. %, to form so-called nano-heat-transfer fluids suited to thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, in which the motion of the embedded nanoparticles in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables measurement of the various multi-scale forces whose characteristic length and time scales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios: water (5 to 95 °C) and molten nitrate salt (220 to 500 °C), at volume fractions ranging between 1% and 5%. Dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube with particles homogeneously distributed across the domain, and periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time step, Δts, set at 10⁻¹¹ s. The implemented technique captures the key dynamics of aggregating nanoparticles: Brownian motion, soft-sphere particle-particle collisions, and Derjaguin, Landau, Verwey, and Overbeek (DLVO) forces. These mechanisms underpin the predictive model of aggregation of nano-suspensions. An energy-transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension. 
The simulation results confirm the effectiveness of the technique: the values are in excellent agreement with theoretical and experimental data from similar studies. The predictions indicate the role of Brownian motion and the DLVO force (represented by both the repulsive electric double layer and an attractive van der Waals component) and their influence on the level of nanoparticle agglomeration. The nano-aggregates formed were found to play a key role in governing the thermal behavior of the nanofluids at various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about the nanofluids’ heat transfer performance and thermal characteristics and their potential application in solar thermal energy plants.
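As a rough illustration of the integration scheme mentioned in the abstract, the sketch below advances a single particle under Stokes drag with a fourth-order Runge-Kutta step. The drag-only force model and all parameter values are simplifying assumptions for demonstration (the actual study also resolves Brownian, collision and DLVO forces):

```python
# Illustrative sketch only: one nanoparticle relaxing toward the local
# fluid velocity under Stokes drag, integrated with fourth-order
# Runge-Kutta. Parameter values are assumptions, not the study's inputs.
def stokes_accel(v, u_fluid, tau):
    """Acceleration from Stokes drag with particle response time tau."""
    return (u_fluid - v) / tau

def rk4_step(v, dt, u_fluid, tau):
    k1 = stokes_accel(v, u_fluid, tau)
    k2 = stokes_accel(v + 0.5 * dt * k1, u_fluid, tau)
    k3 = stokes_accel(v + 0.5 * dt * k2, u_fluid, tau)
    k4 = stokes_accel(v + dt * k3, u_fluid, tau)
    return v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

tau = 1.0e-9      # assumed particle response time, s
dt = 1.0e-11      # time step of the order quoted in the abstract
u_fluid = 1.0e-3  # assumed local fluid velocity, m/s
v = 0.0
for _ in range(1000):  # integrate over ~10 relaxation times
    v = rk4_step(v, dt, u_fluid, tau)
print(v)  # particle velocity has essentially relaxed to u_fluid
```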

Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling

Procedia PDF Downloads 169
315 The Politics of Identity: A Longitudinal Study of the Sociopolitical Development of Education Leaders

Authors: Shelley Zion

Abstract:

This study examines the longitudinal impact (10 years) of a course for education leaders designed to encourage the development of critical consciousness surrounding issues of equity, oppression, power, and privilege. The ability to resist and challenge oppression across social and cultural contexts can be acquired through transformative pedagogies that create spaces in which exploration is used to make connections between pervasive structural and institutional practices and race and ethnicity. This study seeks to extend this understanding by exploring the longitudinal influence of participating in a course that uses transformative pedagogies, course materials, exercises, and activities to encourage exploration of students’ experiences with racial and ethnic discrimination, with the end goal of providing them with the knowledge and skills that foster their ability to resist and challenge oppression and discrimination -critical action- in their lives. To this end, we draw on the explanatory power of theories of critical consciousness development, sociopolitical development, and social identity construction, which view exploration as a crucial practice in understanding the role ethnic and racial differences play in creating opportunities or barriers in the lives of individuals. When educators use transformative pedagogies, they create a space where students collectively explore their experiences with racial and ethnic discrimination through course readings, in-class activities, and discussions. 
The end goal of this exploration is twofold: first, to encourage students’ ability to understand how differences are identified, given meaning, and used to position them in specific places and spaces in their world; second, to scaffold students’ ability to make connections between their individual and collective differences and the particular institutional and structural practices that create opportunities or barriers in their lives. Studies have found that formal exploration of students’ individual and collective differences in relation to their experiences with racial and ethnic discrimination results in an understanding of the roles race and ethnicity play in their lives. To trace the role played by exploration in identity construction, we utilize an integrative approach informed by multiple theoretical frameworks grounded in cultural studies, social psychology, and sociology, which understand socio-cultural, racial, and ethnic identities as dynamic and ever-changing in context-specific environments. Stuart Hall refers to this practice as taking “symbolic detours through the past” while reflecting on the different ways individuals have been positioned based on their roots (group membership) and on how they, in turn, choose to position themselves through collective sense-making of the various meanings their differences carried along the routes they have taken. The practice of exploration in the construction of ethnic-racial identities has been found to be beneficial to sociopolitical development.

Keywords: political polarization, civic participation, democracy, education

Procedia PDF Downloads 28
314 Structural and Microstructural Analysis of White Etching Layer Formation by Electrical Arcing Induced on the Surface of Rail Track

Authors: Ali Ahmed Ali Al-Juboori, H. Zhu, D. Wexler, H. Li, C. Lu, J. McLeod, S. Pannila, J. Barnes

Abstract:

A number of studies have focused on the formation mechanics of the white etching layer (WEL) and its origin in railway operation. Until recently, the following hypotheses have been advanced for the precise mechanics of WEL formation: (i) WELs are the result of a thermal process caused by wheel slip; (ii) WELs are mechanically induced by severe plastic deformation; (iii) WELs are caused by a combined thermo-mechanical process. The mechanisms discussed above lead to the occurrence of white etching layers in the area of wheel-rail contact. This is because the contact patch, the active point of the wheel on the rail, is exposed to the highest shear stresses, which result in localised severe plastic deformation, and to the highest rate of heating, caused by wheel slip during excessive traction or braking effort. However, if WELs are found away from the running band, another cause of WEL formation is implied. In railway systems, particularly electrified railways, arcing has been occurring more often and regularly on the rails. In an electrified railway, current is delivered to the train traction motor via contact wires and returned to the station via the contact between the wheel and the rail. If that contact is temporarily lost, due to dynamic vibration, entrapped dirt or water, lubricant effects or oxidation, a high current can jump across the gap and result in arcing. Other sources of arcing include the wheel passing over insulated joints and lightning striking a train during bad weather. During arcing, extensive heat is generated and spread over a large area of the top surface of the rail. Thus, arcing is considered another heat source in the rail head (other than wheel slip) that can result in microstructural changes and white etching layer formation. A head-hardened (HH) rail steel, cut from a curved rail track, was used for the investigation. 
Samples were sectioned from a depth of 10 mm below the rail surface, where the material is still within the hardened layer but away from any microstructural changes in the top surface layer caused by train passage. These samples were subjected to electrical discharges using a Gas Tungsten Arc Welding (GTAW) machine. The arc current was controlled and moved along the sample surface in the direction of travel, as indicated by an arrow. Five different conditions were applied to the surfaces of the samples. Samples containing pre-existing WELs, taken from an ex-service rail surface, were also included in this study for comparison. Both simulated and ex-service WELs were characterised by advanced methods including SEM, TEM, TKD, EDS and XRD. Samples for TEM and TKD were prepared by Focused Ion Beam (FIB) milling. The results showed that WELs simulated by electrical arcing and ex-service WELs exhibit similar microstructures. A brown etching layer was found alongside the WELs, likely induced by a concurrent tempering process. This study provides a clearer understanding of a new formation mechanism of WELs, which contributes to track maintenance procedures.

Keywords: white etching layer, arcing, brown etching layer, material characterisation

Procedia PDF Downloads 99
313 Reconstruction of Age-Related Generations of Siberian Larch to Quantify the Climatogenic Dynamics of Woody Vegetation Close to the Upper Limit of Its Growth

Authors: A. P. Mikhailovich, V. V. Fomin, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Woody vegetation near the upper limit of its habitat is a sensitive indicator of the biota’s reaction to regional climate change. Quantitative assessment of temporal and spatial changes in the distribution of trees and plant biocenoses calls for the development of new modeling approaches based on ground-level measurements and ultra-high-resolution aerial photography. Statistical models were developed for the study area, located in the Polar Urals. These models yield probabilistic estimates for placing Siberian larch trees into one of three age intervals, namely 1-10, 11-40 and over 40 years, based on the Weibull distribution of the maximum horizontal crown projection. The authors developed a distribution map for larch trees with crown diameters exceeding twenty centimeters by deciphering aerial photographs taken by a UAV from an altitude of fifty meters. The total number of larches was 88,608, distributed across the abovementioned intervals as 16,980, 51,740 and 19,889 trees. The results demonstrate two processes over recent decades: first, intensive forestation of previously barren or lightly wooded fragments of the study area within the patches of wood, woodland and sparse stands; and second, expansion into the mountain tundra. The current expansion of Siberian larch in the region has replaced the depopulation that occurred during the Little Ice Age, from the late 13ᵗʰ to the end of the 20ᵗʰ century. Using field measurements of Siberian larch biometric parameters (height, diameter at the root collar and at 1.3 meters, and maximum crown projection in two orthogonal directions), together with tree ages obtained at nine circular test sites, the authors developed an artificial neural network model with two layers of three and two neurons, respectively. 
The model allows quantitative assessment of a specimen’s age based on its height and maximum crown projection, both of which can be derived from aerial photographs and lidar scans. The resulting model can therefore be used to assess the age of all Siberian larch trees in the area. The proposed approach, after validation, can be applied to assessing the age of other tree species growing near the upper tree line in other mountainous regions. This research was collaboratively funded by the Russian Ministry for Science and Education (project No. FEUG-2023-0002) and the Russian Science Foundation (project No. 24-24-00235, data modeling on the basis of artificial neural networks).
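The described topology (two hidden layers of three and two neurons, mapping height and maximum crown projection to an age estimate) could be sketched as follows; the weights here are random placeholders rather than the authors' fitted parameters:

```python
import numpy as np

# Sketch of the network topology described in the abstract: two hidden
# layers of three and two neurons, mapping (height, max crown
# projection) to an age estimate. Weights are random placeholders; the
# authors fit theirs to field data from nine circular test sites.
rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    return rng.normal(size=(n_in, n_out)), np.zeros(n_out)

W1, b1 = init_layer(2, 3)   # input: height (m), crown projection (m)
W2, b2 = init_layer(3, 2)
W3, b3 = init_layer(2, 1)   # output: estimated age (years)

def predict(height, crown):
    x = np.array([height, crown])
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    return float((h2 @ W3 + b3)[0])

age = predict(height=4.5, crown=2.1)  # hypothetical specimen
print(age)
```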

Keywords: treeline, dynamic, climate, modeling

Procedia PDF Downloads 36
312 Analysis of Electric Mobility in the European Union: Forecasting 2035

Authors: Domenico Carmelo Mongelli

Abstract:

The context is one of great uncertainty in the 27 countries of the European Union, which has adopted an epochal measure: the elimination of internal combustion engines for road vehicle traction from 2035, with complete replacement by electric vehicles. While there is great concern at various levels about unpreparedness for this change, the scientific community has yet to produce comprehensive studies of the problem: the literature tends to address single aspects of the issue, and usually at the level of individual countries, losing sight of the EU-wide implications. The aim of this research is to fill these gaps: the technological, plant engineering, environmental, economic and employment aspects of this energy transition are addressed and connected to each other, comparing the current situation with the different scenarios that could hold in 2035 and in the following years, until total disposal of the internal-combustion vehicle fleet, for the entire EU. The methodology consists of an analysis of the entire life cycle of electric vehicles and batteries, using specific databases, and a dynamic simulation, using specific calculation codes, of the application of the results of this analysis to the entire EU electric vehicle fleet from 2035 onwards. 
Several balance sheets will be drawn up: energy balances (to evaluate the net energy saved); plant balances (to determine the additional power and electrical energy demand and the sizing of new renewable plants to cover electricity needs); economic balances (to determine the investment costs of the transition, the savings during the operation phase and the payback times of the initial investments); environmental balances (under the different energy mix scenarios anticipated for 2035, determining the reductions in CO2-eq emissions and the environmental effects of increased lithium production for batteries); and employment balances (estimating how many jobs will be lost and recovered in the reconversion of the automotive industry, related industries, and the refining, distribution and sale of petroleum products, and how many will be created by technological innovation, the increased demand for electricity, and the construction and management of public charging points). New algorithms for forecast optimization are developed, tested and validated. Compared to other published material, this research adds an overall picture of the energy transition, capturing the advantages and disadvantages of its different aspects and evaluating their magnitudes and possible improvements within one organic treatment of the topic. The results make it possible to identify the strengths and weaknesses of the energy transition, to determine possible solutions for mitigating those weaknesses, and to simulate and evaluate their effects, establishing the most suitable solutions to make the transition feasible.
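In its simplest form, the economic balance described above reduces to a payback computation. The sketch below is a back-of-envelope illustration only; every figure is an assumption, not a result from the study:

```python
# Back-of-envelope sketch of the kind of economic balance the study
# draws up. All figures are illustrative assumptions, not results.
def payback_years(extra_purchase_cost, annual_km,
                  ice_cost_per_km, ev_cost_per_km):
    """Years to recover the EV purchase premium from running-cost savings."""
    annual_saving = annual_km * (ice_cost_per_km - ev_cost_per_km)
    return extra_purchase_cost / annual_saving

years = payback_years(extra_purchase_cost=8000.0,  # EUR, assumed premium
                      annual_km=12000.0,           # assumed annual mileage
                      ice_cost_per_km=0.11,        # fuel cost, assumed
                      ev_cost_per_km=0.05)         # electricity cost, assumed
print(round(years, 1))
```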

Keywords: engines, Europe, mobility, transition

Procedia PDF Downloads 38
311 Soft Pneumatic Actuators Fabricated Using Soluble Polymer Inserts and a Single-Pour System for Improved Durability

Authors: Alexander Harrison Greer, Edward King, Elijah Lee, Safa Obuz, Ruhao Sun, Aditya Sardesai, Toby Ma, Daniel Chow, Bryce Broadus, Calvin Costner, Troy Barnes, Biagio DeSimone, Yeshwin Sankuratri, Yiheng Chen, Holly Golecki

Abstract:

Although a relatively new field, soft robotics is experiencing a rise in applicability in the secondary school setting through The Soft Robotics Toolkit, shared fabrication resources and a design competition. Exposing students outside of university research groups to this rapidly growing field allows for development of the soft robotics industry in new and imaginative ways. Soft robotic actuators have remained difficult to implement in classrooms because of their relative cost or difficulty of fabrication. Traditionally, a two-part molding system is used; however, this configuration often results in delamination. In an effort to make soft robotics more accessible to young students, we aim to develop a simple, single-mold method of fabricating soft robotic actuators from common household materials. These actuators are made by embedding a soluble polymer insert into silicone. These inserts can be made from hand-cut polystyrene, 3D-printed polyvinyl alcohol (PVA) or acrylonitrile butadiene styrene (ABS), or molded sugar. The insert is then dissolved using an appropriate solvent such as water or acetone, leaving behind a negative form which can be pneumatically actuated. The resulting actuators are seamless, eliminating the instability of adhering multiple layers together. The benefit of this approach is twofold: it simplifies the process of creating a soft robotic actuator, and in turn, increases its effectiveness and durability. To quantify the increased durability of the single-mold actuator, it was tested against the traditional two-part mold. The single-mold actuator could withstand actuation at 20psi for 20 times the duration when compared to the traditional method. The ease of fabrication of these actuators makes them more accessible to hobbyists and students in classrooms. 
After developing these actuators, we applied them, in collaboration with a ceramics teacher at our school, to a glove used to transfer the nuanced hand motions of pottery throwing from an expert artist to a novice. We quantified the improvement in users’ pottery-making skill when wearing the glove using image analysis software. The seamless actuators proved robust in this dynamic environment. Seamless soft robotic actuators created by high school students show the applicability of the Soft Robotics Toolkit for secondary STEM education and outreach. Making students aware of what is possible through projects like this will inspire the next generation of innovators in materials science and robotics.

Keywords: pneumatic actuator fabrication, soft robotic glove, soluble polymers, STEM outreach

Procedia PDF Downloads 103
310 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer

Authors: Binder Hans

Abstract:

Cancer is no longer seen solely as a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning that can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels, such as genome, transcriptome and epigenome, is indispensable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite for discovering the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and for tracing cancer genesis, progression and heterogeneity. Basic challenges and tasks arise ‘beyond sequencing’ because of the size of the data, their complexity, the need to search for hidden structures in the data and to mine them for biological function, and the need for systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOMs) represent one interesting option for tackling these bioinformatics tasks. The SOM method enables the recognition of complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options. 
Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the study of complex diseases such as gliomas, melanomas and colon cancer at the molecular level. As an important new challenge, we address the combined portrayal of different omics data, such as genome-wide genomic, transcriptomic and methylomic data. The integrative omics portrayal approach is based on joint training on the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
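A minimal sketch of the kind of SOM training referred to above, on a toy "omics" matrix, might look as follows (grid size, learning schedule and data are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

# Minimal self-organizing map sketch (not the authors' pipeline):
# samples with many "gene" features are mapped onto a small 2-D grid,
# so each sample can be portrayed as an image of node activations.
rng = np.random.default_rng(42)

def train_som(data, grid=6, epochs=200, lr0=0.5, sigma0=3.0):
    n_features = data.shape[1]
    weights = rng.normal(size=(grid, grid, n_features))
    # grid coordinates, used by the neighborhood function
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood
        x = data[rng.integers(len(data))]        # random training sample
        # best-matching unit: node whose weight vector is closest to x
        dist = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)
        # Gaussian neighborhood pulls nearby nodes toward the sample
        d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

# Toy "omics" matrix: 50 samples x 20 features in two clusters.
data = np.vstack([rng.normal(-1, 0.3, (25, 20)),
                  rng.normal(+1, 0.3, (25, 20))])
som = train_som(data)
print(som.shape)
```

After training, each sample's distance map over the grid serves as its individualized "portrait" for visual inspection.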

Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas

Procedia PDF Downloads 122
309 Introduction of Acute Paediatric Services in Primary Care: Evaluating the Impact on GP Education

Authors: Salman Imran, Chris Healey

Abstract:

Traditionally, medical care of children in England and Wales starts in primary care, with referral to secondary care paediatricians who may not investigate further. Many primary care doctors do not undergo a paediatric rotation or gain paediatric exposure in training; as a result, many have not acquired the skills needed to manage children, which increases hospital referrals. With the current demand on hospitals in the National Health Service, more problems need to be managed in the community. One way of achieving this is to set up clinics, meetings and huddles in GP surgeries where the professionals involved (general practitioner, paediatrician, health visitor, community nurse, dietician, school nurse) come together and share information, improving communication and care. The increased awareness and education that paediatricians can impart in this way will help boost the confidence of primary care professionals to become more self-sufficient. This has been tried successfully in other regions, e.g. St. Mary’s Hospital in London, but is crucial for a more rural setting like ours. The primary aim of this project was to educate GPs specifically and all other health professionals generally. Additional benefits include providing care nearer home, increasing patients’ confidence in their local surgery, improving communication and reducing unnecessary patient flow to already stretched hospital resources. Methods: This was done as a Plan-Do-Study-Act (PDSA) cycle. Three clinics were delivered in different practices over six months, with feedback collected from staff and patients. Designated time for teaching and discussion was used, drawing on cases from the actual clinics. Both new and follow-up patients were included. Two clinics were conducted by a paediatrician and a nurse, while the third involved a paediatrician and a local doctor. The distance from the hospital to the clinics varied from approximately two to 22 miles. 
All equipment used was provided by primary care. Results: A total of 30 patients were seen. All patients found the location convenient, as it was nearer than the hospital. 70-90% clearly understood the reason for the change in venue. 95% agreed on the importance of their local doctor being involved in their care. 20% needed to be seen in the hospital for further investigations. Patients felt this to be a more personalised, in-depth, friendly and polite experience. Local physicians felt this to be a more relaxed, familiar and local experience for their patients, and they received immediate feedback on their own clinical management. 90% felt they gained important learning from the discussion time, and the paediatrician also learned about their understanding and about gaps in knowledge and focus areas. 80% felt this time was valuable for targeted learning. Equipment, information technology and office space could be improved for the smooth running of future clinics. Conclusion: An acute paediatric outpatient clinic can be successfully established in primary care facilities. Careful patient selection and adequate facilities are important. We have demonstrated a further step in reducing patient flow to hospitals and upskilling primary care health professionals. This service is expected to become more efficient with experience.

Keywords: clinics, education, paediatricians, primary care

Procedia PDF Downloads 146
308 Impact of Climate Change on Irrigation and Hydropower Potential: A Case of Upper Blue Nile Basin in Western Ethiopia

Authors: Elias Jemal Abdella

Abstract:

The Blue Nile River is an important shared resource of Ethiopia, Sudan and, because it is the major contributor of water to the main Nile River, Egypt. Despite the potential benefits of regional cooperation and integrated joint basin management, all three countries continue to pursue unilateral development plans. Moreover, there is great uncertainty about the likely impacts of climate change on water availability for existing as well as proposed irrigation and hydropower projects in the Blue Nile Basin. The main objective of this study is to quantitatively assess the impact of climate change on the hydrological regime of the upper Blue Nile basin, western Ethiopia. Three models were combined. A dynamic Coordinated Regional Climate Downscaling Experiment (CORDEX) regional climate model (RCM) was used to determine climate projections for the Upper Blue Nile basin under the Representative Concentration Pathways (RCP) 4.5 and 8.5 greenhouse gas emission scenarios for the period 2021-2050. The outputs of a multimodel ensemble of four CORDEX RCMs (rainfall and temperature) were used as input to a Soil and Water Assessment Tool (SWAT) hydrological model, which was set up, calibrated and validated with observed climate and hydrological data. The outputs of the SWAT model (projected river flows) were in turn used as input to a Water Evaluation and Planning (WEAP) water resources model, which was used to determine the water resources implications of the changes in climate. The WEAP model was set up to simulate three development scenarios: the Current Development scenario, representing the existing water resource development situation; the Medium-term Development scenario, comprising planned water resource developments expected to be commissioned before 2025; and the Long-term Full Development scenario, comprising all planned water resource developments likely to be commissioned before 2050.
The projected mean annual temperature for the period 2021-2050 in most of the basin is 1 to 1.4 °C warmer than the baseline (1982-2005) average, implying an increase in evapotranspiration losses. Subbasins already distressed by drought may face even greater challenges in the future. Projected mean annual precipitation varies from subbasin to subbasin: in the eastern, north-eastern and south-western highlands of the basin, mean annual precipitation is likely to increase by up to 7%, whereas in the western lowland part of the basin it is projected to decrease by 3%. The water use simulation indicates that the current irrigation demand in the basin is 1.29 Bm3 per year for 122,765 ha of irrigated area. By 2025, with new schemes being developed, irrigation demand is estimated to increase to 2.5 Bm3 per year for 277,779 ha. By 2050, irrigation demand in the basin is estimated to increase to 3.4 Bm3 per year for 372,779 ha. The hydropower generation simulation indicates that 98% of the hydroelectric potential could be produced if all planned dams are constructed.
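The per-hectare demand implied by the scenario figures above can be checked with a few lines of arithmetic. The sketch below is ours and purely illustrative; the variable names are not from the study:

```python
# Illustrative check of the per-hectare irrigation demand implied by the
# WEAP scenario figures quoted above (1 Bm3 = 1e9 cubic metres).
scenarios = {
    "current": (1.29e9, 122_765),          # (demand in m3/yr, irrigated area in ha)
    "medium-term 2025": (2.5e9, 277_779),
    "long-term 2050": (3.4e9, 372_779),
}

demand_per_ha = {name: volume / area for name, (volume, area) in scenarios.items()}

for name, d in demand_per_ha.items():
    print(f"{name}: {d:,.0f} m3/ha/yr")
```

The implied demand stays roughly constant at about 9,000-10,500 m3 per hectare per year across the three scenarios, consistent with demand growth being driven mainly by the expansion of irrigated area.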

Keywords: Blue Nile River, climate change, hydropower, SWAT, WEAP

Procedia PDF Downloads 326
307 Cross-Country Mitigation Policies and Cross Border Emission Taxes

Authors: Massimo Ferrari, Maria Sole Pagliari

Abstract:

Pollution is a classic example of an economic externality: the agents who produce it do not face direct costs from emissions. Therefore, there are no direct economic incentives for reducing pollution. One way to address this market failure would be to tax emissions directly. However, because emissions are global, governments might find it optimal to wait and let foreign countries tax emissions, so that they can enjoy the benefits of lower pollution without facing its direct costs. In this paper, we first document the empirical relation between pollution and economic output with static and dynamic regression methods. We show that there is a negative relation between aggregate output and the stock of pollution (measured as the stock of CO₂ emissions). This relationship is also highly non-linear, increasing at an exponential rate. In the second part of the paper, we develop and estimate a two-country, two-sector model for the US and the euro area. With this model, we aim to analyse how the public sector should respond to higher emissions and what direct costs these policies might entail. In the model, there are two types of firms: brown firms, which produce with a polluting technology, and green firms. Brown firms also produce an externality, CO₂ emissions, which has detrimental effects on aggregate output. As brown firms do not face direct costs from polluting, they have no incentive to reduce emissions. Notably, emissions in our model are global: the stock of CO₂ in the economy affects all countries, independently of where it is produced. This simplified economy captures the main trade-off between emissions and production, generating a classic market failure. According to our results, the current level of emissions reduces output by between 0.4 and 0.75%. Notably, these estimates lie at the upper end of the distribution of those delivered by studies from the early 2000s.
To address the market failure, governments should step in and introduce taxes on emissions. With the tax, brown firms pay a cost for polluting and hence face an incentive to move to green technologies. Governments, however, might also adopt a beggar-thy-neighbour strategy. Reducing emissions is costly, as it moves production away from the 'optimal' mix of brown and green technology. Because emissions are global, a government could simply wait for the other country to tackle climate change, reaping the benefits without facing any costs. We study how this strategic game unfolds and show three important results: first, cooperation is first-best optimal from a global perspective; second, countries face incentives to deviate from the cooperative equilibrium; third, tariffs on imported brown goods (the only retaliation policy in case of deviation from the cooperative equilibrium) are ineffective, because the exchange rate moves to compensate. We finally study monetary policy when the costs of climate change rise and show that the monetary authority should react more strongly to deviations of inflation from its target.
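The free-riding incentive described above can be illustrated with a stylized two-country game. The payoff numbers below are hypothetical and chosen by us for illustration; they are not the paper's estimates:

```python
# Stylized two-country emission-tax game. Taxing costs the taxing country
# c (output lost by moving away from brown technology), while each country
# that taxes yields a benefit b to BOTH countries (lower global pollution).
# Values are hypothetical, chosen so that 2*b > c > b.
b, c = 3.0, 4.0

def payoff(own_tax: bool, other_tax: bool) -> float:
    """Payoff to one country, given its own and the other country's action."""
    n_taxing = int(own_tax) + int(other_tax)
    return b * n_taxing - (c if own_tax else 0.0)

# World welfare is maximised when both countries tax ...
world = {(a, o): payoff(a, o) + payoff(o, a)
         for a in (True, False) for o in (True, False)}
best_joint = max(world, key=world.get)

# ... yet each country gains by unilaterally dropping its tax:
gain_from_deviating = payoff(False, True) - payoff(True, True)
```

With these numbers, cooperation (both taxing) is jointly optimal, but each country gains by deviating, reproducing the prisoner's-dilemma structure behind the paper's second result.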

Keywords: climate change, general equilibrium, optimal taxation, monetary policy

Procedia PDF Downloads 134
306 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the individual car body parts. There are two main techniques for determining the required tolerances. The first is tolerance analysis, which describes the influence of individually tolerated input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances needed to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are measured and the results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not yet used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer the potential to extend tolerancing methods through data analysis and machine learning models.
The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database suitable for developing machine learning models is created. The objective is to determine, in an intelligent way, the position and number of measurement points as well as the local tolerance range. For this purpose, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts that behave more sensitively than the part as a whole, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
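Classical tolerance analysis, as described at the start of the abstract, can be sketched for a simple linear stack-up. The tolerance values below are invented for illustration and are not real car-body specifications:

```python
import math

# Tolerance analysis for a linear dimension chain: how individually
# tolerated inputs propagate to the assembly-level target value.
part_tolerances_mm = [0.2, 0.15, 0.1, 0.25]  # symmetric +/- part tolerances

# Worst case: all deviations add up arithmetically.
worst_case = sum(part_tolerances_mm)

# Statistical stack-up (root sum square): assumes independent, normally
# distributed deviations, the classical assumption the abstract mentions.
rss = math.sqrt(sum(t ** 2 for t in part_tolerances_mm))
```

Tolerance synthesis runs the same relation in reverse: given a target tolerance, it allocates the individual part tolerances.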

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 91
305 Crafting Robust Business Model Innovation Path with Generative Artificial Intelligence in Start-up SMEs

Authors: Ignitia Motjolopane

Abstract:

Small and medium enterprises (SMEs) play an important role in economies by contributing to economic growth and employment. In the fourth industrial revolution, the convergence of technologies and the changing nature of work have created pressures on economies globally. Generative artificial intelligence (AI) may support SMEs in exploring, exploiting, and transforming business models to align with their growth aspirations. SMEs' growth aspirations fall into four categories: subsistence, income, growth, and speculative. Subsistence-oriented firms focus on meeting basic financial obligations and show less motivation for business model innovation. SMEs focused on income, growth, and speculation are more likely to pursue business model innovation to support growth strategies. SMEs' strategic goals link to distinct business model innovation paths depending on whether the SME is starting a new business, pursuing growth, or seeking profitability. Integrating generative AI into start-up SME business model innovation enhances value creation, user-oriented innovation, and SMEs' ability to adapt to dynamic changes in the business environment. The existing literature may lack comprehensive frameworks and guidelines for effectively integrating generative AI into the reiterative business model innovation paths of start-ups. This paper examines the start-up business model innovation path with generative artificial intelligence. A theoretical approach is used to examine the start-up-focused SME reiterative business model innovation path with generative AI, articulating how generative AI may support SMEs in systematically and cyclically building a business model that covers most or all business model components, and in analysing and testing the business model's viability throughout the process. As such, the paper explores the use of generative AI in market exploration.
Moreover, market exploration poses unique challenges for start-ups compared to established companies due to a lack of extensive customer data, sales history, and market knowledge. Furthermore, the paper examines the use of generative AI in developing and testing viable value propositions and business models. In addition, the paper looks into identifying and selecting partners with generative AI support. Selecting the right partners is crucial for start-ups and may significantly impact success. The paper also examines the use of generative AI in choosing the right information technology, in the funding process, in revenue model determination, and in stress testing business models. Stress testing validates the strong and weak points of a business model by applying scenarios and evaluating the robustness of individual business model components and the interrelations between them. Stress testing may thus address these uncertainties, as misalignment between an organisation and its environment has been recognised as the leading cause of company failure, and generative AI may be used to generate stress-testing scenarios. The paper is expected to make a theoretical and practical contribution to crafting a robust business model innovation path with generative artificial intelligence in start-up SMEs.

Keywords: business models, innovation, generative AI, small medium enterprises

Procedia PDF Downloads 46
304 Knowledge and Practices on Waste Disposal Management Among Medical Technology Students at National University – Manila

Authors: John Peter Dacanay, Edison Ramos, Cristopher James Dicang

Abstract:

Waste management is a global concern due to increasing waste production from changing consumption patterns and population growth. Proper waste disposal management is a critical aspect of public health and environmental protection. In the healthcare industry, medical waste is generated in large quantities, and if not disposed of properly, it poses a significant threat to human health and the environment. Efficient waste management conserves natural resources and prevents harm to human health, and implementing an effective waste management system can save human lives. The study aimed to assess the level of awareness of and practices in waste disposal management among Medical Technology students, highlighting their understanding of proper disposal, potential hazards, and environmental implications. This would help provide recommendations for improving waste management practices in healthcare settings as well as in educational institutions. In the collected data, the typical respondent was a 21-year-old female. The medical technology students' knowledge of laboratory waste management was high: all respondents demonstrated a solid understanding of the proper disposal methods, regulations, risks, and handling procedures related to laboratory waste. These findings emphasize the significance of education and awareness programs in equipping individuals involved in laboratory practices with the knowledge necessary to handle and dispose of hazardous and infectious waste properly. Most respondents demonstrate positive practices in laboratory waste management, including proper segregation and disposal in designated containers. However, there are concerns about the occasional mixing of waste types, underscoring the need to reiterate proper waste segregation.
Students show a strong commitment to using personal protective equipment and promptly cleaning up spills. Some students admit to improper disposal when rushing, highlighting the importance of time management and of prioritizing safety. Overall, students follow protocols for hazardous waste disposal, indicating a responsible approach. The school's waste management system is perceived as adequate, but continuous assessment and improvement are necessary. Encouraging the reporting of issues and concerns is crucial for ongoing improvement and risk mitigation. The analysis reveals a moderate positive relationship between the respondents' knowledge and their practices regarding laboratory waste management. The statistically significant correlation of 0.26 (p < 0.05) suggests that individuals with higher levels of knowledge tend to exhibit better practices. These findings align with previous research emphasizing the pivotal role of knowledge in influencing individuals' behaviors and practices concerning laboratory waste management. When individuals possess a comprehensive understanding of the proper procedures, regulations, and potential risks associated with laboratory waste, they are more inclined to adopt appropriate practices. Therefore, fostering knowledge through education and training is essential in promoting responsible and effective waste management in laboratory settings.

Keywords: waste disposal management, knowledge, attitude, practices

Procedia PDF Downloads 58
303 Bridging the Gap between Teaching and Learning: A 3-S (Strength, Stamina, Speed) Model for Medical Education

Authors: Mangala Sadasivan, Mary Hughes, Bryan Kelly

Abstract:

Medical education must focus on bridging the gap between teaching and learning when training pre-clinical students in the skills needed to keep up with medical knowledge and to meet the demands of health care in the future. The authors were interested in showing that a 3-S Model (building strength, developing stamina, and increasing speed) using a bridged curriculum design helps connect teaching and learning and improves students’ retention of basic science and clinical knowledge. The authors designed three learning modules using the 3-S Model within a systems course in a pre-clerkship medical curriculum. Each module focused on a bridge (concept map) designed by the instructor for specific content delivered to students in the course. This within-subjects design study included 304 registered MSU osteopathic medical students (3 campuses) ranked by quintile based on previous coursework. The instructors used the bridge to create self-directed learning exercises (building strength) to help students master basic science content. Students were coached by video on how to complete assignments and were given pre-tests and post-tests designed to give them control in assessing and identifying gaps in learning and strengthening connections. The instructor who designed the modules also used video lectures to help students master clinical concepts and link them (building stamina) to previously learned material connected to the bridge. Board-style practice questions relevant to the modules were used to help students improve access (increasing speed) to stored content. Unit examinations covering the content within the modules and material covered by other instructors within the units served as outcome measures in this study. These data were then compared to each student’s performance on a final comprehensive exam and on the COMLEX medical board examination taken some time after the course.
The authors used mean comparisons to evaluate students’ performance on module items (using the 3-S Model) versus non-module items on the unit exams, the final course exam and the COMLEX medical board examination. The data show that, on average, students performed significantly better on module items than on non-module items on exams 1 and 2. The module 3 exam was canceled due to a university shutdown. The difference in mean scores between module and non-module items disappeared on the final comprehensive exam, which was rescheduled once the university resumed session. By quintile designation, mean scores were higher for module items than for non-module items; the difference for Quintiles 1 and 2 was significant on exam 1, the gap widened for all quintile groups on exam 2, and it disappeared on exam 3. Based on COMLEX performance, all students on average, whether they passed or failed, performed better on module items than on non-module items in all three exams. The gap in module-item scores between students who passed the COMLEX and those who failed was greater on exam 1 (14.3) than on exam 2 (7.5) and exam 3 (10.2). The data show that the 3-S Model using a bridge effectively connects teaching and learning.

Keywords: bridging gap, medical education, teaching and learning, model of learning

Procedia PDF Downloads 29
302 Neonatology Clinical Routine in Cats and Dogs: Cases, Main Conditions and Mortality

Authors: Maria L. G. Lourenço, Keylla H. N. P. Pereira, Viviane Y. Hibaru, Fabiana F. Souza, João C. P. Ferreira, Simone B. Chiacchio, Luiz H. A. Machado

Abstract:

The neonatal care of cats and dogs represents a challenge to veterinarians due to the small size of the newborns and their physiological particularities. In addition, many veterinary colleges around the world do not include neonatology in the curriculum, which makes it less likely that veterinarians have basic knowledge of neonatal care and worsens the clinical care these patients receive. Lack of assistance and negligence have therefore become frequent in the field, contributing to the high mortality rates. This study aims to describe the cases and main conditions in the neonatology clinical routine of cats and dogs, highlighting the importance of specialized care in this field of veterinary medicine. The study included 808 neonates admitted to the São Paulo State University (UNESP) Veterinary Hospital, Botucatu, São Paulo, Brazil, between January 2018 and November 2019. Of these, 87.3% (705/808) were dogs and 12.7% (103/808) were cats. Among the neonates admitted, 57.3% (463/808) came from emergency c-sections due to dystocia, 8.7% (71/808) came from vaginal deliveries with obstetric maneuvers due to dystocia, and 34% (274/808) were admitted for clinical care due to neonatal conditions. Among the neonates born by emergency c-section or vaginal delivery, 47.3% (253/534) were born in respiratory distress due to severe hypoxia or persistent apnea and required resuscitation procedures, such as stimulation of the Jen Chung acupuncture point (VG26), oxygen therapy by mask, pulmonary expansion with a resuscitator, cardiac massage and administration of emergency medication such as epinephrine.
In neonatal clinical care, on the other hand, the main conditions and alterations observed in the newborns were omphalophlebitis, toxic milk syndrome, neonatal conjunctivitis, swimmer puppy syndrome, neonatal hemorrhagic syndrome, pneumonia, trauma, low birth weight, prematurity, congenital malformations (cleft palate, cleft lip, hydrocephaly, anasarca, vascular anomalies of the heart, anal atresia, gastroschisis, omphalocele, among others), neonatal sepsis and other local and systemic bacterial infections, viral infections (feline respiratory complex, parvovirus, canine distemper, canine infectious tracheobronchitis), parasitic infections (Toxocara spp., Ancylostoma spp., Strongyloides spp., Cystoisospora spp., Babesia spp. and Giardia spp.) and fungal infections (dermatophytosis by Microsporum canis). The most common clinical presentation observed was the neonatal triad (hypothermia, hypoglycemia and dehydration), affecting 74.6% (603/808) of the patients. The mortality rate among the neonates was 10.5% (85/808). Knowledge of neonatology is essential for veterinarians to provide adequate care for these patients in the clinical routine. Adding neonatology to college curricula, improving the dissemination of information on the subject, and providing annual training in neonatology for veterinarians and staff are important steps to improve immediate care and reduce mortality rates.

Keywords: neonatal care, puppies, neonatal, conditions

Procedia PDF Downloads 201
301 Opportunities and Challenges: Tracing the Evolution of India's First State-led Curriculum-based Media Literacy Intervention

Authors: Ayush Aditya

Abstract:

In today's digitised world, the extent of an individual’s social involvement is largely determined by their interaction over the internet. The internet has emerged as a primary source of information consumption and a reliable medium for receiving updates on everyday activities. Owing to this change in information consumption patterns, the internet has also emerged as a hotbed of misinformation. Experts are of the view that media literacy is one of the most effective strategies for addressing misinformation. This paper aims to study the evolution of the Kerala government's media literacy policy, its implementation strategy, and its challenges and opportunities. The objective is to create a conceptual framework containing details of the implementation strategy based on the Kerala model. Extensive secondary research of the literature, newspaper articles, and other online sources was carried out to establish the timeline of this policy. This was followed by semi-structured interviews with government officials from Kerala to trace the origin and evolution of the policy. Preliminary findings based on the collected data suggest that this policy is a case of policy by chance, as the officer who headed it during the state-level implementation had already piloted a media literacy program in the district of Kannur as the district collector. Through this paper, an attempt is made to trace the history of the media literacy policy starting from the Kannur intervention in 2018, which was launched to address vaccine hesitancy around the measles-rubella (MR) vaccination. If not for the vaccine hesitancy, this program would not have been rolled out in Kannur. Interviews with government officials suggest that when the authorities decided to take up the initiative state-wide in 2020, the trigger was the huge amount of misinformation emerging during the COVID-19 pandemic.
There was misinformation regarding government orders, healthcare facilities, vaccination, and lockdown regulations, which affected everyone, unlike the Kannur case, which concerned only children of a certain age group. As a solution to this problem, the state government decided to create a media literacy curriculum to be taught in all government schools of the state from standard 8 through graduation. This was a tricky task, as a new course had to be introduced immediately into the school curriculum amid all the disruptions to the education system caused by the pandemic. It was revealed during the interviews that in the state-wide implementation every step involved multiple checks and balances, unlike the earlier program, where stakeholders were roped in as and when the need emerged. On pedagogy, while the training during the pilot could be managed through PowerPoint presentations, designing a state-wide curriculum involved multiple iterations and expert approvals, in part because COVID-19-related misinformation has since lost its salience. In the next phase of the research, an attempt will be made to compare other aspects of the pilot implementation with the state-wide implementation.

Keywords: media literacy, digital media literacy, curriculum based media literacy intervention, misinformation

Procedia PDF Downloads 65
300 Nurturing Scientific Minds: Enhancing Scientific Thinking in Children (Ages 5-9) through Experiential Learning in Kids Science Labs (STEM)

Authors: Aliya K. Salahova

Abstract:

Scientific thinking, characterized by purposeful knowledge-seeking and the harmonization of theory and facts, plays a crucial role in preparing young minds for an increasingly complex and technologically advanced world. This abstract presents a research study aimed at fostering scientific thinking in early childhood, focusing on children aged 5 to 9 years, through experiential learning in Kids Science Labs (STEM). The study used a longitudinal design spanning 240 weeks, from September 2018 to April 2023, to evaluate the effectiveness of the Kids Science Labs program in developing scientific thinking skills. The participants were 72 children drawn from local schools and community organizations. In a formative psychological-pedagogical experiment, the experimental group engaged in weekly STEM activities carefully designed to stimulate scientific thinking, while the control group participated in daily art classes for comparison. To assess the participants' scientific thinking abilities, a registration table with evaluation criteria was developed. This table included indicators such as depth of questioning, resource utilization in research, logical reasoning in hypotheses, procedural accuracy in experiments, and reflection on research processes. The data analysis revealed dynamic fluctuations in the number of children at different levels of scientific thinking proficiency. While development was not uniform across all participants, a main leading factor emerged, indicating that the Kids Science Labs program and the formative experiment exerted a positive impact on the scientific thinking skills of children in this age range. The study's findings support the hypothesis that the systematic implementation of STEM activities effectively promotes and nurtures scientific thinking in children aged 5-9 years.
Enriching education with a specially planned STEM program, tailoring scientific activities to children's psychological development, and implementing well-planned diagnostic and corrective measures emerged as essential pedagogical conditions for enhancing scientific thinking abilities in this age group. The results highlight the significant and positive impact of the systematic-activity approach in developing scientific thinking, leading to notable progress and growth in children's scientific thinking abilities over time. These findings have promising implications for educators and researchers, emphasizing the importance of incorporating STEM activities into educational curricula to foster scientific thinking from an early age. This study contributes valuable insights to the field of science education and underscores the potential of STEM-based interventions in shaping the future scientific minds of young children.

Keywords: scientific thinking, education, STEM, intervention, psychology, pedagogy, collaborative learning, longitudinal study

Procedia PDF Downloads 42
299 Metal-Semiconductor Transition in Ultra-Thin Titanium Oxynitride Films Deposited by ALD

Authors: Farzan Gity, Lida Ansari, Ian M. Povey, Roger E. Nagle, James C. Greer

Abstract:

Titanium nitride (TiN) films are widely used in a variety of fields due to their unique electrical, chemical, physical, and mechanical properties, including low electrical resistivity, chemical stability, and high thermal conductivity. In microelectronic devices, thin continuous TiN films commonly serve as diffusion barriers and metal gate material. However, as film thickness decreases below a few nanometers, the electrical properties of the film change considerably. In this study, the physical and electrical characteristics of 1.5 nm to 22 nm thin films deposited by Plasma-Enhanced Atomic Layer Deposition (PE-ALD), using tetrakis(dimethylamino)titanium(IV) (TDMAT) chemistry and an Ar/N2 plasma, on 80 nm SiO2 capped in-situ by 2 nm Al2O3 are investigated. The ALD technique allows film thickness to be controlled at the monolayer level with high uniformity. The chemistry incorporates a low level of oxygen into the TiN films, forming titanium oxynitride (TiON). Film thickness is characterized by Transmission Electron Microscopy (TEM), which confirms the uniformity of the films. Surface morphology is investigated by Atomic Force Microscopy (AFM), indicating sub-nanometer surface roughness. Hall measurements are performed to determine carrier mobility, carrier type and concentration, and resistivity. The >5 nm-thick films exhibit metallic behavior; however, we observe that thin-film resistivity is modulated significantly by film thickness, with more than a five-order-of-magnitude increase in room-temperature sheet resistance between the 5 nm and 1.5 nm films.
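The quantities extracted here are linked by two textbook relations: resistivity follows from the Hall-measured carrier concentration and mobility via ρ = 1/(q·n·μ), and sheet resistance is resistivity divided by film thickness, R_s = ρ/t. A minimal sketch (illustrative only, not the authors' analysis code; the numerical inputs are the values reported for the 5 nm film):

```python
# Relate Hall parameters to resistivity and sheet resistance:
#   rho = 1 / (q * n * mu),   R_s = rho / t
Q = 1.602e-19  # elementary charge, C

def resistivity_ohm_cm(n_cm3, mu_cm2_vs):
    """Resistivity (ohm*cm) from carrier density (1/cm^3) and mobility (cm^2/V.s)."""
    return 1.0 / (Q * n_cm3 * mu_cm2_vs)

def sheet_resistance_ohm_sq(rho_ohm_cm, thickness_nm):
    """Sheet resistance (ohm/square); thickness converted from nm to cm."""
    return rho_ohm_cm / (thickness_nm * 1e-7)

# Reported values for the 5 nm film: n ~ 1.5e22 1/cm3, mu < 0.1 cm2/V.s
rho_5nm = resistivity_ohm_cm(1.5e22, 0.1)
print(f"rho = {rho_5nm:.2e} ohm.cm, "
      f"R_s = {sheet_resistance_ohm_sq(rho_5nm, 5.0):.2e} ohm/sq")
```

Because sheet resistance scales inversely with thickness as well as with n·μ, the simultaneous drop in carrier concentration and thickness compounds into the very large sheet-resistance increase reported for the thinnest film.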
Scattering at interfaces and grain boundaries could contribute to the thickness-dependent resistivity, in addition to quantum confinement effects that can arise in ultra-thin films: based on our measurements, the carrier concentration decreases from 1.5E22 1/cm3 to 5.5E17 1/cm3, while the mobility increases from < 0.1 cm2/V.s to ~4 cm2/V.s between the 5 nm and 1.5 nm films, respectively. Measurements at different temperatures indicate that the resistivity of the 5 nm film is relatively constant, while for the 1.5 nm film it drops by more than two orders of magnitude over the range 220 K to 400 K. The activation energies of the 2.5 nm and 1.5 nm films are 30 meV and 125 meV, respectively, indicating that the ultra-thin TiON films exhibit semiconducting behaviour; we attribute this effect to a metal-semiconductor transition. Consistently, the contact is no longer Ohmic for the thinnest (1.5 nm) film; hence, a modified lift-off process was developed to selectively deposit thicker films, allowing electrical measurements with low contact resistance on the raised contact regions. Our atomic-scale simulations, based on molecular-dynamics-generated amorphous TiON structures with low oxygen content, confirm the experimental observations, indicating highly n-type thin films.
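An activation energy like those quoted above is conventionally extracted from an Arrhenius fit, ρ(T) = ρ₀·exp(Ea/kB·T): taking the logarithm linearizes the model, so a least-squares line fit of ln ρ against 1/T gives Ea from the slope. A minimal sketch with synthetic data (the 0.125 eV value and the 220-400 K span come from the abstract; the prefactor is invented for illustration):

```python
import numpy as np

KB_EV = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(temps_k, resistivities):
    """Estimate Ea (eV) from resistivity-vs-temperature data via an Arrhenius fit."""
    inv_t = 1.0 / np.asarray(temps_k, dtype=float)
    ln_rho = np.log(np.asarray(resistivities, dtype=float))
    slope, _ = np.polyfit(inv_t, ln_rho, 1)  # ln(rho) = (Ea/kB)*(1/T) + const
    return slope * KB_EV

# Synthetic data for a film with Ea = 0.125 eV over the measured 220-400 K range
T = np.array([220.0, 260.0, 300.0, 350.0, 400.0])
rho = 1e-2 * np.exp(0.125 / (KB_EV * T))  # hypothetical prefactor 1e-2 ohm.cm
print(round(activation_energy(T, rho), 3))  # → 0.125
```

With real (noisy) data the same fit applies; the quality of the linear fit on the Arrhenius plot is itself a check that the film conducts by thermal activation rather than metallically.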

Keywords: activation energy, ALD, metal-semiconductor transition, resistivity, titanium oxynitride, ultra-thin film

Procedia PDF Downloads 267