Search results for: optical polarization feedback
156 Evaluation of Sustained Improvement in Trauma Education Approaches for the College of Emergency Nursing Australasia Trauma Nursing Program
Authors: Pauline Calleja, Brooke Alexander
Abstract:
In 2010 the College of Emergency Nursing Australasia (CENA) undertook sole administration of the Trauma Nursing Program (TNP) across Australia. The original TNP was developed from recommendations by the Review of Trauma and Emergency Services-Victoria. While participant and faculty feedback about the program was positive, issues were identified that were common for industry training programs in Australia. These issues included didactic approaches, with many lectures and little interaction/activity for participants. Participants were not necessarily encouraged to undertake deep learning due to the teaching and learning principles underpinning the course, and thus participants described having to learn by rote, gaining only a surface understanding of principles that were not always applied to their working context. In Australia, a trauma or emergency nurse may work in variable contexts that impact on practice, especially where resources influence the scope and capacity of hospitals to provide trauma care. In 2011, a program review was undertaken, resulting in major changes to the curriculum, teaching, learning and assessment approaches. The aim was to improve learning, with a greater emphasis on pre-program preparation for participants, the learning environment, and clinically applicable, contextualized outcomes. Previously, if participants wished to undertake assessment, they were given a take-home examination. That assessment had poor uptake and return, and provided no rigor since it was not invigilated. A new assessment structure was enacted, with an invigilated examination during course hours. These changes were implemented in early 2012 with great improvement in both faculty and participant satisfaction. This presentation reports on a comparison of participant evaluations collected from courses post-implementation, in 2012 and in 2015, to evaluate whether the positive changes were sustained.
Methods: Descriptive statistics were applied in analyzing evaluations. Since all questions had more than 20% of cells with a count of <5, Fisher's Exact Test was used to identify significance (p < 0.05) between groups. Results: A total of fourteen group evaluations were included in this analysis: seven CENA TNP groups from 2012 and seven from 2015 (randomly chosen). A total of 173 participant evaluations were collated (n = 81 from 2012 and n = 92 from 2015). All course evaluations were anonymous, and nine of the original 14 questions were applicable for this evaluation. All questions were rated by participants on a five-point Likert scale. While all items showed improvement from 2012 to 2015, significant improvement was noted in two items: the content being delivered in a way that met participant learning needs, and satisfaction with the length and pace of the program. Evaluation of written comments supports these results. Discussion: The aim of redeveloping the CENA TNP was to improve learning and satisfaction for participants. These results demonstrate that the initial improvements in 2012 were maintained and, in two essential areas, significantly improved. Changes that increased participant engagement, support and contextualization of course materials were essential for CENA TNP evolution.
Keywords: emergency nursing education, industry training programs, teaching and learning, trauma education
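As a rough illustration of the significance testing described above, the sketch below runs Fisher's Exact Test on a hypothetical 2×2 table. The counts are invented, and the five-point Likert responses are collapsed to "agree vs. other" only because SciPy's `fisher_exact` handles 2×2 tables; the study itself compared full response distributions.

```python
from scipy.stats import fisher_exact

# Hypothetical counts (not from the paper): participants rating an item
# "agree/strongly agree" vs. any other response, by cohort.
table = [[60, 21],   # 2012 cohort: agree, other (n = 81)
         [81, 11]]   # 2015 cohort: agree, other (n = 92)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("significant difference between the 2012 and 2015 groups")
```

With these made-up counts the 2015 group rates the item favorably more often, and the test flags the difference at the p < 0.05 level used in the study.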
Procedia PDF Downloads 266
155 Investigation of Cavitation in a Centrifugal Pump Using Synchronized Pump Head Measurements, Vibration Measurements and High-Speed Image Recording
Authors: Simon Caba, Raja Abou Ackl, Svend Rasmussen, Nicholas E. Pedersen
Abstract:
It is a challenge to directly monitor cavitation in a pump application during operation because of a lack of visual access to validate the presence of cavitation and its form of appearance. In this work, experimental investigations are carried out in an inline single-stage centrifugal pump with optical access. Hence, it gives the opportunity to enhance the value of CFD tools and standard cavitation measurements. Experiments are conducted using two impellers running in the same volute at 3000 rpm and the same flow rate. One of the impellers used is optimized for lower NPSH₃% by its blade design, whereas the other one is manufactured using a standard casting method. The cavitation is detected by pump performance measurements, vibration measurements and high-speed image recordings. The head drop and the pump casing vibration caused by cavitation are correlated with the visual appearance of the cavitation. The vibration data is recorded in an axial direction of the impeller using accelerometers recording at a sample rate of 131 kHz. The vibration frequency domain data (up to 20 kHz) and the time domain data are analyzed as well as the root mean square values. The high-speed recordings, focusing on the impeller suction side, are taken at 10,240 fps to provide insight into the flow patterns and the cavitation behavior in the rotating impeller. The videos are synchronized with the vibration time signals by a trigger signal. A clear correlation between cloud collapses and abrupt peaks in the vibration signal can be observed. The vibration peaks clearly indicate cavitation, especially at higher NPSHA values where the hydraulic performance is not affected. It is also observed that below a certain NPSHA value, the cavitation started in the inlet bend of the pump. Above this value, cavitation occurs exclusively on the impeller blades. 
The impeller optimized for NPSH₃% does show a lower NPSH₃% than the standard impeller, but its head drop starts at a higher NPSHA value and is more gradual. Instabilities in the head drop curve of the optimized impeller were observed, in addition to a higher vibration level. Furthermore, the cavitation clouds on the suction side appear more unsteady when using the optimized impeller. The shape and location of the cavitation are compared to 3D fluid flow simulations. The simulation results are in good agreement with the experimental investigations. In conclusion, these investigations attempt to give a more holistic view of the appearance of cavitation by comparing the head drop, vibration spectral data, vibration time signals, image recordings and simulation results. The data indicate that a criterion for cavitation detection could be derived from the vibration time-domain measurements, which requires further investigation. Usually, spectral data are used to analyze cavitation, but these investigations indicate that the time domain could be more appropriate for some applications.
Keywords: cavitation, centrifugal pump, head drop, high-speed image recordings, pump vibration
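The time-domain criterion suggested above amounts to flagging abrupt peaks against the RMS background level of the accelerometer signal. A minimal sketch of that idea, on a synthetic signal (all amplitudes, spike positions and the threshold factor are invented; only the 131 kHz sample rate comes from the abstract):

```python
import numpy as np

# Synthetic accelerometer record: broadband background plus two injected
# transients standing in for cavitation-cloud collapses.
fs = 131_000                               # sample rate in Hz, as in the study
t = np.arange(0, 0.1, 1 / fs)              # 100 ms record
rng = np.random.default_rng(0)

signal = 0.2 * rng.standard_normal(t.size) # background vibration
signal[4000] += 5.0                        # injected "collapse" spike
signal[9000] += 4.0                        # second, weaker spike

rms = np.sqrt(np.mean(signal ** 2))        # root-mean-square level
threshold = 6.0 * rms                      # simple, assumed peak criterion
peaks = np.flatnonzero(np.abs(signal) > threshold)

print(f"RMS = {rms:.3f}, {peaks.size} samples above threshold")
```

Both injected transients exceed the threshold while the Gaussian background does not, which is the behavior a collapse-detection criterion would need.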
154 The Relationships between Sustainable Supply Chain Management Practices, Digital Transformation, and Enterprise Performance in Vietnam
Authors: Thi Phuong Pham
Abstract:
This paper explores the intricate relationships between Sustainable Supply Chain Management (SSCM) practices, digital transformation (DT), and enterprise performance within the context of Vietnam. Over the past two decades, there has been a paradigm shift in supply chain management, with sustainability gaining prominence due to increasing concerns about climate change, labor practices, and the environmental impact of business operations. In the ever-evolving realm of global business, sustainability and digital transformation (DT) intersecting dynamics have become pivotal catalysts for organizational success. This research investigates how integrating SSCM with DT can enhance enterprise performance, a subject of significant relevance as Vietnam undergoes rapid economic growth and digital transformation. The primary objectives of this research are twofold: (1) to examine the effects of SSCM practices on enterprise performance in three critical aspects: economic, environmental, and social performance in Vietnam and (2) to explore the mediating role of DT in this relationship. By analyzing these dynamics, the study aims to provide valuable insights for policymakers and the academic community regarding the potential benefits of aligning SSCM principles with digital technologies. To achieve these objectives, the research employs a robust mixed-method approach. The research begins with a comprehensive literature review to establish a theoretical framework that underpins the empirical analysis. Data collection was conducted through a structured survey targeting Vietnamese enterprises, with the survey instrument designed to measure SSCM practices, DT, and enterprise performance using a five-point Likert scale. The reliability and validity of the survey were ensured by pre-testing with industry practitioners and refining the questionnaire based on their feedback. 
For data analysis, structural equation modeling (SEM) was employed to quantify the direct effects of SSCM on enterprise performance, while mediation analysis using the PROCESS Macro 4.0 in SPSS was conducted to assess the mediating role of DT. The findings reveal that SSCM practices positively influence enterprise performance by enhancing operational efficiency, reducing costs, and improving sustainability metrics. Furthermore, DT acts as a significant mediator, amplifying the positive impacts of SSCM practices through improved data management, enhanced communication, and more agile supply chain processes. These results underscore the critical role of DT in maximizing the benefits of SSCM practices, particularly in a developing economy like Vietnam. This research contributes to the existing body of knowledge by highlighting the synergistic effects of SSCM and DT on enterprise performance. It offers practical implications for businesses seeking to enhance their sustainability and digital capabilities, providing a roadmap for integrating these two pivotal aspects to achieve competitive advantage. The study's insights can also inform governmental policies designed to foster sustainable economic growth and digital innovation in Vietnam.
Keywords: sustainable supply chain management, digital transformation, enterprise performance, Vietnam
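The mediation logic above (SSCM acting on performance partly through DT) can be sketched with plain least squares on simulated data. This is a toy decomposition into direct and indirect effects, not the SEM/PROCESS analysis the study used, and all effect sizes below are invented:

```python
import numpy as np

# Simulated scores: SSCM drives DT, and performance depends on both.
rng = np.random.default_rng(1)
n = 500
sscm = rng.standard_normal(n)                          # SSCM practice score
dt = 0.6 * sscm + rng.standard_normal(n)               # digital transformation
perf = 0.3 * sscm + 0.5 * dt + rng.standard_normal(n)  # enterprise performance

def fit(X, y):
    """OLS slopes (intercept dropped) via least squares."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = fit(sscm, dt)[0]                                   # SSCM -> DT path
b, c_prime = fit(np.column_stack([dt, sscm]), perf)    # DT -> perf, direct path
total = fit(sscm, perf)[0]                             # total SSCM -> perf
indirect = a * b                                       # mediated through DT
print(f"total = {total:.2f}, direct = {c_prime:.2f}, indirect = {indirect:.2f}")
```

For OLS with a single mediator, the total effect decomposes exactly into direct plus indirect, which is the quantity a mediation analysis reports.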
153 High-Resolution Facial Electromyography in Freely Behaving Humans
Authors: Lilah Inzelberg, David Rand, Stanislav Steinberg, Moshe David Pur, Yael Hanein
Abstract:
Human facial expressions carry important psychological and neurological information. Facial expressions involve the co-activation of diverse muscles. They depend strongly on personal affective interpretation and on social context, and vary between spontaneous and voluntary activations. Smiling, as a special case, is among the most complex facial emotional expressions, involving no fewer than 7 different unilateral muscles. Despite their ubiquitous nature, smiles remain an elusive and debated topic: they are associated with happiness and greeting on the one hand, and with anger- or disgust-masking on the other. Accordingly, while high-resolution recording of muscle activation patterns in a non-interfering setting offers exciting opportunities, it remains an unmet challenge, as contemporary surface facial electromyography (EMG) methodologies are cumbersome, restricted to laboratory settings, and limited in time and resolution. Here we present a wearable and non-invasive method for objective mapping of facial muscle activation and demonstrate its application in a natural setting. The technology is based on a recently developed dry and soft electrode array, specially designed for surface facial EMG. Eighteen healthy volunteers (31.58 ± 3.41 years, 13 females) participated in the study. Surface EMG arrays were adhered to the participants' left and right cheeks. Participants were instructed to imitate three facial expressions (closing the eyes, wrinkling the nose and smiling voluntarily) and to watch a funny video while their EMG signals were recorded. We focused on muscles associated with 'enjoyment', 'social' and 'masked' smiles; three categories with distinct social meanings. We developed a customized independent component analysis algorithm to construct the desired facial musculature mapping. First, identification of the Orbicularis oculi and the Levator labii superioris muscles was demonstrated from voluntary expressions.
Second, recordings of voluntary and spontaneous smiles were used to locate the Zygomaticus major muscle activated in Duchenne and non-Duchenne smiles. Finally, recording with a wireless device in an unmodified natural work setting revealed expressions of neutral, positive and negative emotions in face-to-face interaction. The algorithm outlined here identifies the activation sources in a subject-specific manner, insensitive to electrode placement and anatomical diversity. Our high-resolution and cross-talk-free mapping performance, along with excellent user convenience, opens new opportunities for affective processing and objective evaluation of facial expressivity, objective psychological and neurological assessment, as well as gaming, virtual reality, bio-feedback and brain-machine interface applications.
Keywords: affective expressions, affective processing, facial EMG, high-resolution electromyography, independent component analysis, wireless electrodes
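The source-separation step above rests on standard independent component analysis. The sketch below uses a generic FastICA (the paper developed a customized algorithm) to unmix two simulated "muscle" sources from two electrode channels; the sources and mixing matrix are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two toy sources: a bursting (square-wave-like) muscle and an oscillatory one.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 2000)
s1 = np.sign(np.sin(2 * np.pi * 5 * t))        # bursting source
s2 = np.sin(2 * np.pi * 11 * t)                # oscillatory source
S = np.column_stack([s1, s2])

A = np.array([[1.0, 0.6],                      # mixing: each electrode picks up
              [0.4, 1.0]])                     # a weighted sum of both muscles
X = S @ A.T                                    # the "recorded" channels

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                   # recovered sources (up to
print("recovered shape:", S_est.shape)         # permutation, sign and scale)
```

Each recovered component should track one underlying source, which is how electrode-placement-insensitive, subject-specific muscle maps can be built from multi-channel EMG.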
152 Towards the Rapid Synthesis of High-Quality Monolayer Continuous Film of Graphene on High Surface Free Energy Existing Plasma Modified Cu Foil
Authors: Maddumage Don Sandeepa Lakshad Wimalananda, Jae-Kwan Kim, Ji-Myon Lee
Abstract:
Graphene is an extraordinary 2D material that shows superior electrical, optical, and mechanical properties for applications such as transparent contacts. Further, the chemical vapor deposition (CVD) technique facilitates the synthesis of large-area, transferable graphene. This abstract describes the use, for CVD graphene growth, of a Cu foil with high surface free energy (SFE) and a high density of nano-scale surface kinks (a rough surface). This is the opposite of the modern use of smooth catalytic surfaces for high-quality graphene growth, but the controllable rough morphology opens a new route to fast synthesis of graphene as a continuous film (growth in less than 50 s, with a short annealing process) compared to the conventional, longer process (30 min growth). The experiments showed that a high-SFE Cu catalytic surface with kinks on the Cu(100) crystal plane facilitated the synthesis of graphene with a highly monolayer and continuous nature, because such a surface promotes the adsorption of C species at high concentration, enabling faster nucleation and growth of graphene. The fast nucleation and growth lower the diffusion of C atoms to the Cu-graphene interface, resulting in no, or negligible, formation of bilayer patches. A high-energy (500 W) Ar plasma treatment (inductively coupled plasma) was used to form a rough, high-SFE (54.92 mJ m⁻²) Cu foil. This surface was used to grow graphene by CVD at 1000 °C for 50 s. The kink-like high-SFE sites introduced on the Cu(100) crystal plane facilitated faster nucleation of graphene with a high monolayer ratio (I2D/IG of 2.42) compared to other, smoother and lower-SFE Cu surfaces, such as a smooth surface prepared by the redeposition of evaporating Cu atoms during annealing (RRMS of 13.3 nm).
Although the high-SFE condition was favorable for synthesizing monolayer, continuous graphene, it failed to maintain a clean surface (the surface contained amorphous C clusters) and a defect-free condition (ID/IG of 0.46), because of the high SFE of the Cu foil at the graphene growth stage. A post-annealing process was used to heal the graphene and overcome these problems. Different CVD atmospheres, such as CH4 and H2, were used; a negligible change in graphene nature (number of layers and continuity) was observed, but there was a significant difference in graphene quality, because the ID/IG ratio of the graphene was reduced to 0.21 after post-annealing in H2 gas. In addition to the change in graphene defectiveness, FE-SEM images show a reduction of C-cluster contamination on the surface. In summary, high-SFE conditions are favorable for forming graphene as a monolayer, continuous film, but they fail to provide defect-free graphene. Further, a plasma-modified, high-SFE surface can be used to synthesize graphene within 50 s, and a post-annealing process can be used to reduce the defectiveness.
Keywords: chemical vapor deposition, graphene, morphology, plasma, surface free energy
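The quality metrics quoted above (I2D/IG and ID/IG) are simply peak-height ratios read from a Raman spectrum: I2D/IG above roughly 2 is commonly taken to indicate monolayer graphene, while ID/IG tracks defect density. A sketch of that readout on a synthetic spectrum, with peak heights loosely inspired by the post-anneal values reported (0.21 and 2.42):

```python
import numpy as np

# Synthetic Raman spectrum of graphene built from three Lorentzian bands.
shift = np.linspace(1200, 2900, 3400)          # Raman shift axis, cm^-1

def lorentz(x, x0, w, h):
    """Lorentzian peak centered at x0, half-width w, height h."""
    return h / (1 + ((x - x0) / w) ** 2)

spectrum = (lorentz(shift, 1350, 15, 0.21)     # D band (defect-activated)
            + lorentz(shift, 1580, 12, 1.0)    # G band
            + lorentz(shift, 2690, 18, 2.4))   # 2D band

def peak_height(lo, hi):
    """Maximum intensity within a Raman-shift window."""
    sel = (shift >= lo) & (shift <= hi)
    return spectrum[sel].max()

i_d = peak_height(1300, 1400)
i_g = peak_height(1540, 1620)
i_2d = peak_height(2600, 2780)
print(f"ID/IG = {i_d / i_g:.2f}, I2D/IG = {i_2d / i_g:.2f}")
```

On real data the bands would be fitted rather than read as raw maxima, but the ratios are computed the same way.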
151 A Randomised Controlled Trial and Process Evaluation of the Lifestart Parenting Programme
Authors: Sharon Millen, Sarah Miller, Laura Dunne, Clare McGeady, Laura Neeson
Abstract:
This paper presents the findings from a randomised controlled trial (RCT) and process evaluation of the Lifestart parenting programme. Lifestart is a structured child-centred programme of information and practical activity for parents of children aged from birth to five years of age. It is delivered to parents in their own homes by trained, paid family visitors and it is offered to parents regardless of their social, economic or other circumstances. The RCT evaluated the effectiveness of the programme and the process evaluation documented programme delivery and included a qualitative exploration of parent and child outcomes. 424 parents and children participated in the RCT: 216 in the intervention group and 208 in the control group across the island of Ireland. Parent outcomes included: parental knowledge of child development, parental efficacy, stress, social support, parenting skills and embeddedness in the community. Child outcomes included cognitive, language and motor development and social-emotional and behavioural development. Both groups were tested at baseline (when children were less than 1 year old), mid-point (aged 3) and at post-test (aged 5). Data were collected during a home visit, which took two hours. The process evaluation consisted of interviews with parents (n=16 at baseline and end-point), and focus groups with Lifestart Coordinators (n=9) and Family Visitors (n=24). Quantitative findings from the RCT indicated that, compared to the control group, parents who received the Lifestart programme reported reduced parenting-related stress, increased knowledge of their child’s development, and improved confidence in their parenting role. These changes were statistically significant and consistent with the hypothesised pathway of change depicted in the logic model. There was no evidence of any change in parents’ embeddedness in the community. 
Although four of the five child outcomes showed small positive changes for children who took part in the programme, these were not statistically significant, and there is no evidence that the programme improves child cognitive and non-cognitive skills by immediate post-test. The qualitative process evaluation highlighted important challenges related to conducting trials of this magnitude and design in the general population. Parents reported that a key incentive to take part in the study was receiving feedback from the developmental assessment, which formed part of the data collection; this highlights the potential importance of appropriate incentives for recruitment and retention of participants. The interviews with intervention parents indicated that one of the first changes they experienced as a result of the Lifestart programme was increased knowledge of, and confidence in, their parenting ability. The outcomes and pathways perceived by parents and described in the interviews are also consistent with the findings of the RCT and the theory of change underpinning the programme, which hypothesises that improvements in parental outcomes, arising as a consequence of the programme, mediate the change in child outcomes. Parents receiving the Lifestart programme reported great satisfaction with, and commitment to, the programme, with the role of the Family Visitor identified as one of its key components.
Keywords: parent-child relationship, parental self-efficacy, parental stress, school readiness
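The core quantitative comparison in such a trial is a post-test contrast between arms, adjusting for baseline. A minimal sketch on simulated data (group sizes approximate the trial's 216/208; the small positive effect, score scale and noise level are invented, echoing the small, non-significant child-outcome changes reported):

```python
import numpy as np

# Simulated child scores: post-test depends on baseline plus a small
# group effect for the intervention arm.
rng = np.random.default_rng(3)
n = 212                                        # per arm (trial had 216/208)
baseline = rng.normal(100, 15, 2 * n)          # baseline developmental score
group = np.repeat([0, 1], n)                   # 0 = control, 1 = intervention
post = 0.7 * baseline + 2.0 * group + rng.normal(0, 10, 2 * n)

# ANCOVA-style adjustment: regress post-test on baseline and group.
X = np.column_stack([np.ones(2 * n), baseline, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
print(f"baseline slope = {beta[1]:.2f}, adjusted group effect = {beta[2]:.2f}")
```

The adjusted group coefficient is the quantity tested for significance; with a small true effect and this sample size it can easily fail to reach it, as the trial found for the child outcomes.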
150 Designing Short-Term Study Abroad Programs for Graduate Students: The Case of Morocco
Authors: Elaine Crable, Amit Sen
Abstract:
Short-term study abroad programs have become a mainstay of MBA programs. The benefits of international business experiences, along with exposure to global cultures, are well documented. However, developing a rewarding study abroad program at the graduate level can be challenging for faculty, especially when devising such a program for a group of part-time MBA students who come with a wide range of experiences and demographic characteristics, each with individual expectations for the study abroad experience. This study provides suggestions and considerations for faculty planning to design a short-term study abroad program, especially for part-time MBA students. The insights are based on a recent experience leading a group of twenty-one students on a ten-day program to Morocco, designed and facilitated by two faculty members and a local Moroccan facilitator. This experience led to a number of insights and recommendations. First, the choice of location is critical. The choice of Morocco was very deliberate, owing to its multi-faceted cultural landscape and international business interest: it is an Islamic state with close ties to Europe, both culturally and geographically, and a multi-lingual country in which most people speak some combination of three languages (English, Arabic, and French). Second, collaboration with a local 'academic' partner allowed the level of instruction to be both rigorous and significantly more engaging. Third, allowing students to participate in the planning of the trip enabled the participants to collaborate, negotiate, and share their own experiences and strengths. The pre-trip engagement was structured by creating four sub-groups, each responsible for an assigned city.
Each student sub-group had to provide a historical background of its assigned city and plan the itinerary, including sites to visit, cuisine to experience, industries to explore and markets to visit, plus provide a budget for that city's expenses. The pre-planning segment of the course was critical to the success of the program, as students were able to contribute to its design through collaboration and negotiation with their peers. Fourth, each student sub-group was assigned an industry to study within Morocco, and prepared a presentation and a group paper with its analysis of the chosen industry. The pre-planning activities created strong bonds among the trip participants, which was evident when the group faced on-ground challenges, especially when it was necessary to evacuate quickly due to a surprise USA COVID evacuation notice; the entire group supported each other while quickly making their way back to the United States. Unfortunately, the trip was cut short by two days due to this emergency exit, but the feedback regarding the program was very positive all around. While the program design put pressure on the faculty leads regarding upfront planning and coordination, the outcomes in terms of student engagement, student learning, collaboration and negotiation were all favorable and worth the effort. Finally, as an added value, the cost of the program for the students was significantly lower compared to running a program with a professional provider.
Keywords: business education, experiential learning, international education, study abroad
149 Polarimetric Study of System Gelatin / Carboxymethylcellulose in the Food Field
Authors: Sihem Bazid, Meriem El Kolli, Aicha Medjahed
Abstract:
Proteins and polysaccharides are the two types of biopolymers most frequently used in the food industry to control the mechanical properties, structural stability and organoleptic properties of products. The textural and structural properties of blends of these two types of polymers depend on their interactions and their ability to form organized structures. From an industrial point of view, a better understanding of protein/polysaccharide mixtures is an important issue, since they are already heavily involved in processed food. It is in this context that we have chosen to work on a model system composed of a fibrous protein (gelatin) mixed with an anionic polysaccharide (sodium carboxymethylcellulose). Gelatin, one of the most popular biopolymers, is widely used in food, pharmaceutical, cosmetic and photographic applications because of its unique functional and technological properties. Sodium carboxymethylcellulose (NaCMC) is an anionic linear polysaccharide derived from cellulose. It is an important industrial polymer with a wide range of applications, and its functional properties can be modified by the presence of proteins with which it might interact. Another factor that may govern the interactions in protein-polysaccharide mixtures is the triple helix of gelatin. Collagen's complex synthesis results in an extracellular assembly organized at several levels: collagen can be in a soluble state or associate into fibrils, which can in turn associate into fibers, and each level corresponds to an organization recognized by the cellular and metabolic system. Gelatin allows this multi-level organization to be studied: a gelatin gel forms through the triple-helical refolding of denatured collagen chains, this gel has been the subject of numerous studies, and it is now known that its properties depend only on the rate of triple helices forming the network. Chemical modification of this system is quite well controlled.
Observing the dynamics of the triple helix may be relevant to understanding the interactions involved in protein-polysaccharide mixtures. Since gelatin is central to many industrial processes, understanding and analyzing the molecular dynamics induced by the triple helix during gelatin's transitions can have great economic importance in all fields, especially food. The goal is to understand the possible mechanisms involved, depending on the nature of the mixtures obtained. From a fundamental point of view, it is clear that the protective effect of NaCMC on gelatin and the conformational changes of the α helix are strongly influenced by the nature of the medium. Our goal is to minimize changes to the α helix structure as much as possible, in order to keep gelatin more stable and protect it against the denaturation that occurs during conversion processes in the food industry. In order to study the nature of the interactions and assess the properties of the mixtures, polarimetry was used to monitor the optical parameters and to assess the rate of helicity of gelatin.
Keywords: gelatin, sodium carboxymethylcellulose, interaction gelatin-NaCMC, rate of helicity, polarimetry
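The helicity rate obtained from polarimetry is commonly estimated by linear interpolation of the measured specific rotation between the all-coil and all-helix limits. A minimal sketch of that bookkeeping; the limit values below are illustrative placeholders, not measured constants from this study:

```python
# Assumed specific-rotation limits (degrees), for illustration only:
ALPHA_COIL = -360.0    # fully denatured (random-coil) gelatin
ALPHA_HELIX = -800.0   # fully triple-helical (native collagen-like)

def helix_fraction(alpha_measured: float) -> float:
    """Fraction of residues in triple-helical conformation (0 to 1),
    by linear interpolation between the coil and helix limits."""
    return (alpha_measured - ALPHA_COIL) / (ALPHA_HELIX - ALPHA_COIL)

# Example: a gel whose measured rotation lies midway between the limits.
alpha = -580.0
print(f"helix fraction = {helix_fraction(alpha):.2f}")
```

A rotation at the coil limit gives 0, at the helix limit gives 1, so tracking the measured rotation over time directly tracks the helicity rate of the gelatin network.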
148 Optimal-Based Structural Vibration Attenuation Using Nonlinear Tuned Vibration Absorbers
Authors: Pawel Martynowicz
Abstract:
Vibrations are a crucial problem for slender structures such as towers, masts, chimneys, wind turbines, bridges, high buildings, etc., which is why most of them are equipped with vibration attenuation or fatigue reduction solutions. In this work, a slender structure (i.e., a wind turbine tower-nacelle model) equipped with nonlinear, semiactive tuned vibration absorber(s) is analyzed. For the purposes of this study, magnetorheological (MR) dampers are used as semiactive actuators. Several optimal-based approaches to structural vibration attenuation are investigated against the standard 'ground-hook' law and passive tuned vibration absorber implementations. The common approach to optimal control of nonlinear systems is offline computation of the optimal solution; however, the open-loop control so determined suffers from a lack of robustness to uncertainties (e.g., unmodelled dynamics, perturbations of external forces or initial conditions), and thus perturbation control techniques are often used. However, proper linearization may be an issue for highly nonlinear systems with implicit relations between state, co-state, and control. The main contribution of the author is the development, as well as numerical and experimental verification, of Pontryagin-maximum-principle-based vibration control concepts that produce the actuator control input directly (not the demanded force); thus, the force-tracking algorithm, which introduces control inaccuracy, is entirely omitted. These concepts, including one-step optimal control, quasi-optimal control, and an optimal-based modified 'ground-hook' law, can be directly implemented in online, real-time feedback control for periodic (or semi-periodic) disturbances with invariant or time-varying parameters, as well as for non-periodic, transient or random disturbances, which is a limitation of some other known solutions.
No offline calculation, excitation/disturbance assumption or vibration frequency determination is necessary; moreover, all of the nonlinear actuator (MR damper) force constraints, i.e., no active forces, lower and upper saturation limits, hysteresis-type dynamics, etc., are embedded in the control technique, thus the solution is optimal or suboptimal for the assumed actuator, respecting its limitations. Depending on the selected method variant, a moderate or decisive reduction in the computational load is possible compared to other methods of nonlinear optimal control, while assuring the quality and robustness of the vibration reduction system, as well as addressing multi-pronged operational aspects, such as minimization of the amplitude of the deflection and acceleration of the vibrating structure, its potential and/or kinetic energy, the required actuator force, the control input (e.g., electric current in the MR damper coil) and/or the stroke amplitude. The developed solutions are characterized by high vibration reduction efficiency: the obtained maximum values of the dynamic amplification factor are close to 2.0, while for the best of the passive systems, these values exceed 3.5.
Keywords: magnetorheological damper, nonlinear tuned vibration absorber, optimal control, real-time structural vibration attenuation, wind turbines
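The standard 'ground-hook' baseline mentioned above, together with the semiactive constraint that an MR damper can only dissipate energy, can be sketched as a clipped control law. The damper model and gains below are invented; the paper's optimal-based variants go beyond this simple clipping:

```python
# Assumed MR-damper coefficient range and ground-hook gain (illustrative):
C_MIN, C_MAX = 100.0, 4000.0   # damping coefficient limits, N*s/m
B = 2.0e4                      # ground-hook gain, N*s/m

def groundhook_force(v_struct: float, v_rel: float) -> float:
    """Clipped ground-hook law: track the demanded force F_des = -B*v_struct,
    subject to the semiactive constraint that the damper force is -c*v_rel
    with c in [C_MIN, C_MAX] (dissipation only, saturated)."""
    f_des = -B * v_struct
    if v_rel == 0.0 or f_des * v_rel >= 0.0:
        # Demanded force would require injecting energy: fall back
        # to minimum damping (the 'clipping' branch).
        return -C_MIN * v_rel
    c = min(max(-f_des / v_rel, C_MIN), C_MAX)  # saturate at the limits
    return -c * v_rel

print(groundhook_force(0.1, 0.05))   # achievable branch, saturated at C_MAX
print(groundhook_force(0.1, -0.05))  # clipped branch, minimum damping
```

By construction the returned force always satisfies f * v_rel <= 0, i.e., the damper never does positive work on the structure, which is the constraint that distinguishes semiactive from active control.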
147 Virtual Reality Applications for Building Indoor Engineering: Circulation Way-Finding
Authors: Atefeh Omidkhah Kharashtomi, Rasoul Hedayat Nejad, Saeed Bakhtiyari
Abstract:
Circulation paths and the indoor connection network of a building play an important role both in its daily operation and during evacuation in emergency situations. The legibility of these paths for navigation inside the building is deeply connected to human perceptive and cognitive systems and to the way the surrounding environment is perceived. Human perception of space is based on the sensory systems operating in a three-dimensional environment, and it is non-linear, so it is necessary to avoid reducing its representation in architectural design to a two-dimensional, linear issue. Today, advances in virtual reality (VR) technology have led to various applications, and architecture and building science can benefit greatly from these capabilities, especially in cases where the design solution requires a detailed and complete understanding of human perception of the environment and the behavioral response. Way-finding in the indoor circulation network is a good example of such an application: success in way-finding can be achieved if human perception of the route and the behavioral reaction are considered in advance and reflected in the architectural design. This paper discusses VR technology applications for way-finding improvements in the indoor engineering of buildings. In a systematic review of a database consisting of numerous studies, four categories of VR applications for circulation way-finding were first identified: 1) data collection of key parameters, 2) comparison of the effect of each parameter in a virtual environment versus the real world (in order to improve the design), 3) comparison of experimental results across different VR devices/methods, or against building simulation results, and 4) training and planning.
Since the costs of the technical equipment and knowledge required to use VR tools limit their use to a subset of design projects, priority buildings for the use of VR during design are introduced based on case-study analysis. The results indicate that VR technology provides opportunities for designers to solve complex building design challenges in an effective and efficient manner. The environmental parameters and the architecture of the circulation routes (indicators such as route configuration, topology, signs, structural and non-structural components, etc.) and the characteristics of each (metrics such as dimensions, proportions, color, transparency, texture, etc.) are then classified for the VR way-finding experiments. Next, in view of human behavior and reaction in movement-related issues, the necessity of scenario-based experiment design for using VR technology to improve the design and receive feedback from the test participants is described. The parameters related to scenario design are presented in a flowchart covering test design, data determination and interpretation, recording of results, analysis, errors, validation, and reporting. The design of the experiment environment is also discussed, covering equipment selection according to the scenario and the parameters under study, as well as creating the sense of illusion in terms of place illusion, plausibility, and the illusion of body ownership.
Keywords: virtual reality (VR), way-finding, indoor, circulation, design
Procedia PDF Downloads 73
146 Solar Liquid Desiccant Regenerator for Two Stage KCOOH Based Fresh Air Dehumidifier
Authors: M. V. Rane, Tareke Tekia
Abstract:
Liquid desiccant based fresh air dehumidifiers can be gainfully deployed for air-conditioning, agro-produce drying, and many industrial processes. Regeneration of the liquid desiccant can be done using direct firing, high temperature waste heat, or solar energy. Solar energy is clean and available in abundance; however, it is costly to collect. A two stage liquid desiccant fresh air dehumidification system can offer a Coefficient of Performance (COP) in the range of 1.6 to 2 for comfort air conditioning applications. A high COP helps reduce the size and cost of the collectors required. Performance tests on the high temperature regenerator of a two stage liquid desiccant fresh air dehumidifier coupled with a seasonally tracked, flat-plate-like solar collector will be presented in this paper. The two stage fresh air dehumidifier has four major components: a High Temperature Regenerator (HTR), a Low Temperature Regenerator (LTR), High and Low Temperature Solution Heat Exchangers, and a Fresh Air Dehumidifier (FAD). This open system can operate at near atmospheric pressure in all the components. These systems can be simple, maintenance-free, and scalable. Environmentally benign, non-corrosive, moderately priced potassium formate, KCOOH, is used as the liquid desiccant. Typical KCOOH concentration in the system is expected to vary between 65 and 75%. Dilute liquid desiccant at 65% concentration exiting the fresh air dehumidifier will be pumped and preheated in the solution heat exchangers before entering the high temperature solar regenerator. In the solar collector, the solution will be regenerated to an intermediate concentration of 70%. Steam and saturated solution exiting the solar collector array will be separated. Steam at near atmospheric pressure will then be used to regenerate the intermediate concentration solution up to a concentration of 75% in a low temperature regenerator, where the vaporized moisture will be released into the atmosphere.
Condensed steam can be used as potable water after adding a pinch of salt and some nutrients. Warm concentrated liquid desiccant will be routed to the solution heat exchanger to recycle its heat to preheat the weak liquid desiccant solution. An evacuated glass tube based, seasonally tracked solar collector is used for regeneration of the liquid desiccant at high temperature. The regeneration temperature for KCOOH is 133°C at 70% concentration. The medium temperature collector was designed for the temperature range of 100 to 150°C. A double wall polycarbonate top cover helps reduce top losses. Absorber integrated heat storage helps stabilize the temperature of the liquid desiccant exiting the collectors during intermittent cloudy conditions and extends the operation of the system by a couple of hours beyond the sunshine hours. This solar collector is light in weight: 12 kg/m2 without absorber integrated heat storage material and 27 kg/m2 with heat storage material. The cost of the collector is estimated to be 10,000 INR/m2. Theoretical modeling of the collector has shown that the optical efficiency is 62%. Performance tests of the regeneration of KCOOH will be reported.
Keywords: solar, liquid desiccant, dehumidification, air conditioning, regeneration
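The two regeneration stages imply a simple mass balance: because the KCOOH mass is conserved while only water is driven off, the steam raised in each stage follows directly from the inlet and outlet concentrations. A minimal sketch of that arithmetic (illustrative only, not from the paper; the 1 kg basis is arbitrary), using the 65%, 70%, and 75% concentrations quoted above:

```python
def regenerated_mass(m_in, x_in, x_out):
    """Solution mass left after concentrating from x_in to x_out.
    The desiccant (KCOOH) mass is conserved: m_in * x_in = m_out * x_out."""
    return m_in * x_in / x_out

m_dilute = 1.0  # kg of 65% solution leaving the dehumidifier (arbitrary basis)
m_after_htr = regenerated_mass(m_dilute, 0.65, 0.70)    # solar HTR: 65% -> 70%
m_after_ltr = regenerated_mass(m_after_htr, 0.70, 0.75)  # LTR: 70% -> 75%

water_htr = m_dilute - m_after_htr     # steam raised in the solar collector
water_ltr = m_after_htr - m_after_ltr  # vapor released in the LTR
print(f"steam raised in HTR:   {water_htr:.4f} kg")
print(f"vapor released in LTR: {water_ltr:.4f} kg")
```

The steam from the first stage is what drives the second-stage regeneration, so the two quantities above set the internal heat recovery available to the system.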
Procedia PDF Downloads 347
145 Modified Graphene Oxide in Ceramic Composite
Authors: Natia Jalagonia, Jimsher Maisuradze, Karlo Barbakadze, Tinatin Kuchukhidze
Abstract:
At present, intensive scientific research on ceramics, cermets, and metal alloys is being conducted to improve the physical-mechanical characteristics of these materials. To increase the impact strength of alumina-based ceramics, a simple method of graphene homogenization was developed. Homogeneous distribution of graphene (homogenization) in the pressing composite became possible through the connection of the functional groups of graphene oxide (-OH, -COOH, -O-O- and others) and the superficial OH groups of alumina with aluminum organic compounds. These two components connect with each other through -O-Al-O- bonds, and on thermal treatment (300-500°C) they are transformed into the graphene and alumina phases. The choice of aluminum organic compounds for modification is motivated by the following consideration: the fragments of aluminum organic compounds fixed on graphene and alumina are finally transformed into an integral part of the matrix. If other elements are used as modifiers on the matrix surface (Al2O3), other phases form, which sharply change the physical-mechanical properties of the ceramic composites; for this reason, the effect caused by the inclusion of graphene would be unknown. Fixing graphene fragments on the alumina surface by alumoorganic compounds results in a new type of graphene-alumina complex, in which the two components are connected by C-O-Al bonds. Some of the carbon atoms in graphene oxide are in the sp3 hybrid state, so functional groups (-OH, -COOH) are located on both sides of the graphene oxide layer. The aluminum organic compound reacts with graphene oxide at room temperature, and modified graphene oxide is obtained: R2Al-O-[graphene]-COOAlR2. The remaining Al-C bonds also react rapidly with the surface OH groups of alumina. As a result of these processes, the pressing powdery composite [Al2O3]-O-Al-O-[graphene]-COO-Al-O-[Al2O3] is obtained. For this purpose, the alumoorganic compound Al(iC4H9)3 in toluene was added to a suspension of graphene oxide in dry toluene in an equimolecular ratio.
The obtained suspension was placed in a flask, and the solvent was removed in a rotary evaporator under a nitrogen atmosphere. The obtained powder was characterized and used for the consolidation of ceramic materials based on alumina. The ceramic composites were obtained in a high temperature vacuum furnace under various temperature and pressure conditions. The resulting ceramics have no open pores, and their density reaches 99.5% of the theoretical density (TD). During the work, the following devices were used: a high temperature vacuum furnace (OXY-GON Industries Inc, USA), a spark-plasma synthesis device, an induction furnace, Nikon Eclipse LV 150 electronic scanning microscopes, an NMM-800TRF optical microscope, a Pulverisette 7 premium line planetary mill, a Shimadzu DUH-211S dynamic ultra micro hardness tester, an Analysette 12 Dynasizer, and others.
Keywords: graphene oxide, alumo-organic, ceramic
Procedia PDF Downloads 307
144 Development and Validation of a Quantitative Measure of Engagement in the Analysing Aspect of Dialogical Inquiry
Authors: Marcus Goh Tian Xi, Alicia Chua Si Wen, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
The Map of Dialogical Inquiry provides a conceptual look at the underlying nature of future-oriented skills. According to the Map, learning is learner-oriented, with conversational time shifted from teachers to learners, who play a strong role in deciding what and how they learn. For example, in courses operating on the principles of Dialogical Inquiry, learners were able to leave the classroom with a deeper understanding of the topic, broader exposure to differing perspectives, and stronger critical thinking capabilities, compared to traditional approaches to teaching. Despite its contributions to learning, the Map is grounded in a qualitative approach, both in its development and in its application for providing feedback to learners and educators. Studies hinge on open-ended responses by Map users, which can be time-consuming and resource-intensive. The present research is motivated by this gap in practicality and aims to develop and validate a quantitative measure of the Map. In addition, a quantifiable measure may also strengthen applicability by making learning experiences trackable and comparable. The Map outlines eight learning aspects that learners should engage with holistically. This research focuses on the Analysing aspect of learning. According to the Map, Analysing has four key components: liking or engaging in logic, using interpretative lenses, seeking patterns, and critiquing and deconstructing. Existing scales of constructs (e.g., critical thinking, rationality) related to these components were identified so that items for the current scale could be adapted from them. Specifically, items were phrased beginning with an "I", followed by an action phrase, to assess learners' engagement with Analysing either in general or in classroom contexts.
Paralleling standard scale development procedure, the 26-item Analysing scale was administered to 330 participants alongside existing scales with varying levels of association with Analysing, to establish construct validity. Subsequently, the scale was refined and its dimensionality, reliability, and validity were determined. Confirmatory factor analysis (CFA) revealed whether scale items loaded onto the four factors corresponding to the components of Analysing. To refine the scale, items were systematically removed via an iterative procedure according to their factor loadings and the results of likelihood ratio tests at each step. Eight items were removed this way. The Analysing scale is better conceptualised as unidimensional, rather than comprising the four components identified by the Map, for three reasons: 1) the covariance matrix of the model specified for the CFA was not positive definite, 2) correlations among the four factors were high, and 3) exploratory factor analyses did not yield an easily interpretable factor structure of Analysing. Regarding validity, since the Analysing scale had higher correlations with conceptually similar scales than with conceptually distinct scales, with minor exceptions, construct validity was largely established. Overall, the satisfactory reliability and validity of the scale suggest that the current procedure can result in a valid and easy-to-use measure for each aspect of the Map.
Keywords: analytical thinking, dialogical inquiry, education, lifelong learning, pedagogy, scale development
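Scale refinement of this kind typically reports internal-consistency reliability; Cronbach's alpha is the standard index. A minimal sketch of the computation on synthetic item scores (the data and the 4-item structure here are illustrative inventions, not the authors' 26-item dataset):

```python
import random
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a list of respondent rows (equal-length item scores):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(rows[0])
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Synthetic respondents: one shared latent trait drives all four items,
# so the items should hang together and alpha should be high.
random.seed(1)
rows = []
for _ in range(200):
    latent = random.gauss(0, 1)
    rows.append([latent + random.gauss(0, 0.8) for _ in range(4)])
print(f"alpha = {cronbach_alpha(rows):.2f}")
```

A unidimensional scale of the kind the authors settle on is exactly the case where a single alpha over all retained items is the natural summary.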
Procedia PDF Downloads 90
143 Blended Learning in a Mathematics Classroom: A Focus in Khan Academy
Authors: Sibawu Witness Siyepu
Abstract:
This study explores the effects of instructional design using blended learning in the learning of radian measures among engineering students. Blended learning is an education programme that combines online digital media with traditional classroom methods. It requires the physical presence of both lecturer and student in a mathematics computer laboratory. Blended learning provides an element of student control over time, place, path, or pace. The focus was on the use of Khan Academy to supplement traditional classroom interactions. Khan Academy is a non-profit educational organisation created by educator Salman Khan with the goal of creating an accessible place for students to learn through watching videos in a computer-assisted environment. The researcher, who is also a lecturer in a mathematics support programme, collected data by instructing students to watch Khan Academy videos on radian measures and by supplying students with traditional classroom activities. The classroom activities entailed radian measure activities extracted from the Internet. Students were given an opportunity to engage in class discussions, social interactions, and collaborations. These activities required students to write formative assessment tests. The purpose of the formative assessment tests was to find out about the students' understanding of radian measures, including the errors and misconceptions they displayed in their calculations. Identification of errors and misconceptions serves as a pointer to students' weaknesses and strengths in their learning of radian measures. At the end of data collection, semi-structured interviews were administered to a purposefully sampled group to explore their perceptions of and feedback on the use of the blended learning approach in the teaching and learning of radian measures. The study employed the Algebraic Insight Framework to analyse the data collected.
The Algebraic Insight Framework is a subset of symbol sense which allows a student to correctly and efficiently enter expressions into computer-assisted systems. This study offered students opportunities to enter topics and subtopics on radian measures into a computer through the lens of Khan Academy. Khan Academy demonstrates the procedures followed to reach solutions to mathematical problems. The researcher performed the task of explaining mathematical concepts and facilitated the process of reinvention of rules and formulae in the learning of radian measures. Lastly, activities that reinforce students' understanding of radian measures were distributed. Results showed that this study enthused the students in their learning of radian measures. Learning through videos prompted the students to ask questions, which brought clarity and sense making to the classroom discussions. Data revealed that sense making through the reinvention of rules and formulae assisted the students in enhancing their learning of radian measures. This study recommends that the use of Khan Academy in blended learning be introduced as a socialisation programme for all first year students. This will prepare students who are computer illiterate to become conversant with the use of Khan Academy as a powerful tool in the learning of mathematics. Khan Academy is a key technological tool that is pivotal for the development of students' autonomy in the learning of mathematics and that promotes collaboration with lecturers and peers.
Keywords: algebraic insight framework, blended learning, Khan Academy, radian measures
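The mathematical content at stake is compact; a minimal sketch (in Python, not part of the study) of the two rules such a course reinvents, the degree-radian conversion and the defining arc-length relation:

```python
import math

def deg_to_rad(degrees):
    """Convert degrees to radians: 360 degrees correspond to 2*pi radians."""
    return degrees * math.pi / 180.0

def arc_length(radius, angle_rad):
    """Definition of the radian: an angle of theta radians subtends
    an arc of length s = r * theta on a circle of radius r."""
    return radius * angle_rad

print(deg_to_rad(180.0))                 # approximately pi
print(arc_length(2.0, deg_to_rad(90)))   # approximately pi
```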
Procedia PDF Downloads 307
142 p-Type Multilayer MoS₂ Enabled by Plasma Doping for Ultraviolet Photodetectors Application
Authors: Xiao-Mei Zhang, Sian-Hong Tseng, Ming-Yen Lu
Abstract:
Two-dimensional (2D) transition metal dichalcogenides (TMDCs), such as MoS₂, have attracted considerable attention owing to the unique optical and electronic properties related to their 2D ultrathin atomic layer structure. MoS₂ is becoming prevalent in post-silicon digital electronics and in highly efficient optoelectronics due to its extremely low thickness and its tunable band gap (Eg = 1-2 eV). For low-power, high-performance complementary logic applications, both p- and n-type MoS₂ FETs (PFETs and NFETs) must be developed. NFETs with an electron accumulation channel can be obtained using unintentionally doped n-type MoS₂. However, the fabrication of MoS₂ FETs with complementary p-type characteristics is challenging due to the significant difficulty of injecting holes into its inversion channel. Plasma treatments with different species (including CF₄, SF₆, O₂, and CHF₃) have also been found to achieve the desired property modifications of MoS₂. In this work, we demonstrated a p-type multilayer MoS₂ enabled by selective-area doping using CHF₃ plasma treatment. Compared with single layer MoS₂, multilayer MoS₂ can carry a higher drive current due to its lower bandgap and multiple conduction channels. Moreover, it has three times the density of states at its conduction band minimum. Large-area growth of MoS₂ films on a 300 nm thick SiO₂/Si substrate is carried out by thermal decomposition of ammonium tetrathiomolybdate, (NH₄)₂MoS₄, in a tube furnace. A two-step annealing process is conducted to synthesize the MoS₂ films. In the first step, the temperature is set to 280 °C for 30 min in an N₂-rich environment at 1.8 Torr to transform (NH₄)₂MoS₄ into MoS₃. To further reduce MoS₃ into MoS₂, a second annealing step is performed, with the temperature set to 750 °C for 30 min in a reducing atmosphere consisting of 90% Ar and 10% H₂ at 1.8 Torr.
The grown MoS₂ films are subjected to out-of-plane doping by CHF₃ plasma treatment using a dry-etching system (ULVAC NLD-570). The radio-frequency power of the system is set to 100 W and the pressure to 7.5 mTorr. The final thickness of the treated samples is obtained by etching for 30 s. Back-gated MoS₂ PFETs are presented with an on/off current ratio on the order of 10³ and a field-effect mobility of 65.2 cm²V⁻¹s⁻¹. The MoS₂ PFET photodetector exhibited ultraviolet (UV) photodetection capability with a rapid response time of 37 ms and showed modulation of the generated photocurrent by the back-gate voltage. This work suggests the potential application of mildly plasma-doped p-type multilayer MoS₂ in UV photodetectors for environmental monitoring, human health monitoring, and biological analysis.
Keywords: photodetection, p-type doping, multilayers, MoS₂
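The field-effect mobility of such a back-gated device is conventionally extracted as mu_FE = g_m * L / (W * C_ox * V_DS), with C_ox fixed by the 300 nm SiO₂ gate dielectric mentioned above. A sketch of that arithmetic (the transconductance, channel dimensions, and drain bias below are hypothetical values chosen only to illustrate the formula, not the authors' device data):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def oxide_capacitance(t_ox, eps_r=3.9):
    """Per-area capacitance of a SiO2 gate dielectric of thickness t_ox, F/m^2."""
    return EPS0 * eps_r / t_ox

def field_effect_mobility(g_m, length, width, c_ox, v_ds):
    """mu_FE = g_m * L / (W * C_ox * V_DS), returned in m^2/(V*s)."""
    return g_m * length / (width * c_ox * v_ds)

c_ox = oxide_capacitance(300e-9)  # 300 nm SiO2 back gate, as in the abstract
mu = field_effect_mobility(g_m=7.5e-8,          # hypothetical transconductance, S
                           length=10e-6, width=10e-6,  # hypothetical channel, m
                           c_ox=c_ox, v_ds=0.1)        # hypothetical drain bias, V
print(f"C_ox = {c_ox * 1e9 / 1e4:.2f} nF/cm^2")
print(f"mu_FE = {mu * 1e4:.1f} cm^2/Vs")
```

For a 300 nm oxide this gives C_ox of roughly 11.5 nF/cm², which is the scale factor between measured transconductance and the quoted mobility.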
Procedia PDF Downloads 103
141 Photoemission Momentum Microscopy of Graphene on Ir (111)
Authors: Anna V. Zaporozhchenko, Dmytro Kutnyakhov, Katherina Medjanik, Christian Tusche, Hans-Joachim Elmers, Olena Fedchenko, Sergey Chernov, Martin Ellguth, Sergej A. Nepijko, Gerd Schoenhense
Abstract:
Graphene reveals a unique electronic structure that predetermines many intriguing properties, such as massless charge carriers, optical transparency, and a high velocity of fermions at the Fermi level, opening a wide horizon of future applications. Hence, a detailed investigation of the electronic structure of graphene is crucial. The method of choice is angle-resolved photoelectron spectroscopy (ARPES). Here we present experiments using time-of-flight (ToF) momentum microscopy, an alternative ARPES approach using full-field imaging of the whole Brillouin zone (BZ) and simultaneous acquisition of up to several hundred energy slices. Unlike conventional ARPES, k-microscopy is not limited in simultaneous k-space access. We have recorded the whole first BZ of graphene on Ir(111), including all six Dirac cones. As the excitation source we used synchrotron radiation from BESSY II (Berlin) at the U125-2 NIM, providing linearly polarized (both p- and s-polarized) VUV radiation. The instrument uses a delay-line detector for single-particle detection up to the 5 Mcps range and parallel energy detection via ToF recording. In this way, we gather a 3D data stack I(E,kx,ky) of the full valence electronic structure in approx. 20 mins. Band dispersion stacks were measured in the energy range of 14 eV up to 23 eV in steps of 1 eV. The linearly dispersing graphene bands for all six K and K' points were simultaneously recorded. We find clear features of hybridization with the substrate, in particular in the linear dichroism in the angular distribution (LDAD). Recording of the whole Brillouin zone of graphene/Ir(111) revealed new features. First, the intensity differences (i.e., the LDAD) are very sensitive to the interaction of graphene bands with substrate bands. Second, the dark corridors were investigated in detail for both p- and s-polarized radiation.
They appear as local distortions of the photoelectron current distribution and are induced by quantum mechanical interference of the graphene sublattices. The dark corridors are located in different areas of the six Dirac cones and show chiral behaviour with a mirror plane along the vertical axis. Moreover, two out of six show an oval shape while the rest are more circular. This clearly indicates an orientation dependence with respect to the E vector of the incident light. Third, a pattern of faint but very sharp lines is visible at energies around 22 eV that is strongly reminiscent of Kikuchi lines in diffraction. In conclusion, the simultaneous study of all six Dirac cones is crucial for a complete understanding of the dichroism phenomena and the dark corridors.
Keywords: band structure, graphene, momentum microscopy, LDAD
Procedia PDF Downloads 339
140 Applying an Automatic Speech Intelligent System to the Health Care of Patients Undergoing Long-Term Hemodialysis
Authors: Kuo-Kai Lin, Po-Lun Chang
Abstract:
Research Background and Purpose: Following the development of the Internet and multimedia, the Internet and information technology have become crucial avenues of modern communication and knowledge acquisition. The advantages of using mobile devices for learning include making learning borderless and accessible. Mobile learning has become a trend in disease management and health promotion in recent years. End-stage renal disease (ESRD) is an irreversible chronic disease, and patients who do not receive kidney transplants can only rely on hemodialysis or peritoneal dialysis to survive. Caregiving for patients with ESRD is complex, owing to their advanced age and other comorbidities; the patients' incapacity for self-care increases their reliance on their families or primary caregivers, and whether the primary caregivers adequately understand and implement patient care is a topic of concern. Therefore, this study explored whether primary caregivers' health care provision can be improved through the intervention of an automatic speech intelligent system, thereby improving the objective health outcomes of patients undergoing long-term dialysis. Method: This study developed an automatic speech intelligent system with healthcare functions such as health information voice prompts, two-way feedback, real-time push notifications, and health information delivery. Convenience sampling was adopted to recruit eligible patients from a hemodialysis center at a regional teaching hospital as research participants. A one-group pretest-posttest design was adopted. Descriptive and inferential statistics were calculated from the demographic information collected from questionnaires answered by patients and primary caregivers, and from a medical record review, a health care scale (recorded six months before and after the implementation of the intervention measures), a subjective health assessment, and a report of objective physiological indicators.
The changes in health care behaviors, subjective health status, and physiological indicators before and after the intervention of the proposed automatic speech intelligent system were then compared. Conclusion and Discussion: The preliminary automatic speech intelligent system developed in this study was tested with 20 pretest patients at the recruitment location, and their health care capacity scores improved from 59.1 to 72.8; comparison through a nonparametric test indicated a significant difference (p < .01). The average score for their subjective health assessment rose from 2.8 to 3.3. A survey of their objective physiological indicators found that the compliance rate for the blood potassium level was the most significant indicator; its average compliance rate increased from 81% to 94%. The results demonstrated that this automatic speech intelligent system yielded a higher efficacy for chronic disease care than did conventional health education delivered by nurses. Therefore, future efforts will continue to increase the number of recruited patients and to refine the intelligent system, which can be expected to enhance its effectiveness even further.
Keywords: automatic speech intelligent system for health care, primary caregiver, long-term hemodialysis, health care capabilities, health outcomes
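The abstract reports a pre/post comparison via an unspecified nonparametric test; an exact two-sided sign test is one simple member of that family. A sketch on synthetic paired scores (the data below are invented for illustration, and the authors' actual test and data may differ):

```python
from math import comb

def sign_test_p(pre, post):
    """Exact two-sided sign test for paired samples (ties are dropped).
    Under H0, each non-tied pair improves with probability 1/2, so the
    count of improvements is Binomial(n, 0.5)."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    pos = sum(d > 0 for d in diffs)
    k = min(pos, n - pos)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 20 hypothetical patients: most improve after the intervention, two decline.
pre = list(range(50, 70))
post = [s + 8 for s in pre]
post[0] = pre[0] - 3
post[1] = pre[1] - 2
print(f"two-sided sign-test p = {sign_test_p(pre, post):.4g}")
```

With 18 of 20 pairs improving, the exact p-value falls well below .01, the significance level quoted in the abstract.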
Procedia PDF Downloads 109
139 Scientific and Regulatory Challenges of Advanced Therapy Medicinal Products
Authors: Alaa Abdellatif, Gabrièle Breda
Abstract:
Background. Advanced therapy medicinal products (ATMPs) are innovative therapies that mainly target orphan diseases and high unmet medical needs. ATMPs include gene therapy medicinal products (GTMP), somatic cell therapy medicinal products (CTMP), and tissue-engineered products (TEP). Since legislation opened the way in 2007, 25 ATMPs have been approved in the EU, roughly the same number as have been approved by the U.S. Food and Drug Administration. However, not all of the ATMPs that have been approved have successfully reached the market and retained their approval. Objectives. We aim to understand, through a systemic approach, all the factors limiting market access for these very promising therapies, so that these problems can be overcome in the future with scientific, regulatory, and commercial innovations. Going beyond recent reviews that focus on specific countries, products, or dimensions, we address all the challenges faced by ATMP development today. Methodology. We used mixed methods and a multi-level approach for data collection. First, we performed an updated academic literature review on ATMP development and its scientific and market access challenges (papers published between 2018 and April 2023). Second, we analyzed industry feedback from cell and gene therapy webinars and white papers published by providers and pharmaceutical industries. Finally, we established a comparative analysis of the regulatory guidelines published by the EMA and the FDA for ATMP approval. Results: A main challenge in bringing these therapies to market is the high development cost: developing ATMPs is expensive due to the need for specialized manufacturing processes. Furthermore, the regulatory pathways for ATMPs are often complex and can vary between countries, making it challenging to obtain approval and ensure compliance with different regulations.
As a result of the high costs associated with ATMPs, challenges in obtaining reimbursement from healthcare payers lead to limited patient access to these treatments. ATMPs are often developed for orphan diseases, which means the patient population available for clinical trials is limited, making it challenging to demonstrate their safety and efficacy. In addition, the complex manufacturing processes required for ATMPs make it difficult to scale up production to meet demand, which can limit their availability and increase costs. Finally, ATMPs face safety and efficacy challenges: serious adverse events, such as toxicity related to the use of viral vectors or cell therapy, and issues related to starting materials and donors. Conclusion. As a result of our mixed-method analysis, we found that ATMPs face a number of challenges in their development, regulatory approval, and commercialization, and that addressing these challenges requires collaboration between industry, regulators, healthcare providers, and patient groups. This first analysis will help us to address, for each challenge, proper and innovative solutions in order to increase the number of ATMPs approved and reach the patients.
Keywords: advanced therapy medicinal products (ATMPs), product development, market access, innovation
Procedia PDF Downloads 75
138 Increasing Adherence to Preventative Care Bundles for Healthcare-Associated Infections: The Impact of Nurse Education
Authors: Lauren G. Coggins
Abstract:
Catheter-associated urinary tract infections (CAUTI) and central line-associated bloodstream infections (CLABSI) are among the most common healthcare-associated infections (HAI), contributing to prolonged lengths of stay, greater costs of patient care, and increased patient mortality. Evidence-based preventative care bundles exist to establish consistent, safe patient-care practices throughout an entire organization, helping to ensure the collective application of care strategies that aim to improve patient outcomes and minimize complications. The cardiac intensive care unit at a nationally ranked teaching and research hospital in the United States exceeded its annual CAUTI and CLABSI targets in the fiscal year 2019, prompting examination into the unit’s infection prevention efforts that included preventative care bundles for both HAIs. Adherence to the CAUTI and CLABSI preventative care bundles was evaluated through frequent audits conducted over three months, using standards and resources from The Joint Commission, a globally recognized leader in quality improvement in healthcare and patient care safety. The bundle elements with the lowest scores were identified as the most commonly missed elements. Three elements from both bundles, six elements in total, served as key content areas for the educational interventions targeted to bedside nurses. The CAUTI elements included appropriate urinary catheter order, appropriate continuation criteria, and urinary catheter care. The CLABSI elements included primary tubing compliance, needleless connector compliance, and dressing change compliance. An integrated, multi-platform education campaign featured content on each CAUTI and CLABSI preventative care bundle in its entirety, with additional reinforcement focused on the lowest scoring elements. 
One-on-one educational materials included an informational pamphlet, a badge buddy, a presentation to reinforce nursing care standards, and real-time application through case studies and electronic health record demonstrations. A digital hub was developed on the hospital's Intranet for quick access to unit resources, and a bulletin board helped track the number of days since the last CAUTI and CLABSI incident. Audits continued to be conducted throughout the education campaign, and staff were given real-time feedback to address any gaps in adherence. Nearly every nurse in the cardiac intensive care unit received all educational materials, and adherence to all six key bundle elements increased after the implementation of the educational interventions. Recommendations from this implementation include providing consistent, comprehensive education across multiple teaching tools and conducting regular audits to track adherence. The multi-platform education campaign brought focus to the evidence-based CAUTI and CLABSI bundles, which in turn will help to reduce CAUTI and CLABSI rates in clinical practice.
Keywords: education, healthcare-associated infections, infection, nursing, prevention
Procedia PDF Downloads 116
137 Leuco Dye-Based Thermochromic Systems for Application in Temperature Sensing
Authors: Magdalena Wilk-Kozubek, Magdalena Rowińska, Krzysztof Rola, Joanna Cybińska
Abstract:
Leuco dye-based thermochromic systems are classified as intelligent materials because they exhibit thermally induced color changes. Thanks to this feature, they are mainly used as temperature sensors in many industrial sectors. For example, placing a thermochromic material on a chemical reactor may warn about exceeding the maximum permitted temperature for a chemical process. Usually, two components, a color former and a developer, are needed to produce a system with an irreversible color change. The color former is an electron-donating (proton-accepting) compound, such as a fluoran leuco dye. The developer is an electron-accepting (proton-donating) compound, such as an organic carboxylic acid. When the developer melts, the color former-developer complex is created and the thermochromic system becomes colored. Typically, the melting point of the applied developer determines the temperature at which the color change occurs. When the lactone ring of the color former is closed, the dye is in its colorless state. The ring opening, induced by the addition of a proton, causes the dye to turn into its colored state. Since the color former and the developer are often solids, they can be incorporated into polymer films to facilitate their practical use in industry. The objective of this research was to fabricate a leuco dye-based thermochromic system that irreversibly changes color after reaching a temperature of 100°C. For this purpose, a benzofluoran leuco dye (as the color former) and phenoxyacetic acid (as a developer with a melting point of 100°C) were introduced into polymer films during a drop casting process. The film preparation process was optimized in order to obtain thin films with appropriate properties such as transparency, flexibility, and homogeneity.
Among the optimized factors were the concentration of benzofluoran leuco dye and phenoxyacetic acid; the type, average molecular weight and concentration of the polymer; and the type and concentration of the surfactant. The selected films, containing benzofluoran leuco dye and phenoxyacetic acid, were combined by mild heat treatment. Structural characterization of single and combined films was carried out by FTIR spectroscopy, morphological analysis was performed by optical microscopy and SEM, phase transitions were examined by DSC, color changes were investigated by digital photography and UV-Vis spectroscopy, while emission changes were studied by photoluminescence spectroscopy. The resulting thermochromic system is colorless at room temperature, but after reaching 100°C the developer melts and the system turns irreversibly pink. Therefore, it could be used as an additional sensor to warn against the boiling of water in power plants using water cooling. Currently used electronic temperature indicators are prone to faults and unwanted third-party actions. The sensor constructed in this work is transparent, so it can go unnoticed by an outsider while constituting a reliable reference for the person responsible for the apparatus.
Keywords: color developer, leuco dye, thin film, thermochromism
Procedia PDF Downloads 98
136 What We Know About Effective Learning for Pupils with SEN: Results of 2 Systematic Reviews and of a Global Classroom
Authors: Claudia Mertens, Amanda Shufflebarger
Abstract:
Step one: What we know about effective learning for pupils with SEN: results of 2 systematic reviews: Before establishing principles and practices for the teaching and learning of pupils with SEN, we need a good overview of the results of empirical studies conducted in the respective field. Therefore, two systematic reviews on the use of digital tools in inclusive and non-inclusive school settings were conducted, taking into consideration studies published in German: one systematic review included studies that had undergone a peer review process, and the second included studies without peer review. The results (a collaboration of two German universities) will be presented during the conference. Step two: Students’ results of a research lab on “inclusive media education”: On this basis, German students worked on “inclusive media education” in small research projects (duration: 1 year). They were “education majors” enrolled in a course on inclusive media education. They conducted research projects on topics ranging from smartboards in inclusive settings, digital media in gifted math education, and TikTok in German as a Foreign Language education to many more. As part of their course, the German students created an academic conference poster. In the conference, the results of these research projects/papers are put into the context of the results of the systematic reviews. Step three: Global Classroom: The German students’ posters were critically discussed in a global classroom in cooperation with Indiana University East (USA) and Hamburg University (Germany) in the winter/spring term of 2022/2023. 15 students in Germany collaborated with 15 students at Indiana University East. The IU East student participants were enrolled in “Writing in the Arts and Sciences,” which is specifically designed for pre-service teachers. The joint work began at the beginning of the Spring 2023 semester in January 2023 and continued until the end of the Uni Hamburg semester in February 2023.
Before January, Uni Hamburg students had been working on a research project individually or in pairs. Didactic approach: Both groups of students posted a brief video or audio introduction to a shared Canvas discussion page. In the joint long synchronous session, the students discussed key content terms such as inclusion, inclusive, and diversity with the help of prompt cards, and they compared how they understood or applied these terms differently. Uni Hamburg students presented drafts of academic posters. IU East students gave them specific feedback. After that, IU East students wrote brief reflections summarizing what they had learned from the posters. After the class, small groups were expected to create a voice recording reflecting on their experiences. In their recordings, they examined critical incidents, highlighting what they learned from these incidents. Major results of the student research and of the global classroom collaboration can be highlighted during the conference. Results: The aggregated results of the two systematic reviews AND of the research lab/global classroom can now be a sound basis for 1) improving accessibility for students with SEN and 2) adjusting teaching materials and concepts to the needs of students with SEN, in order to create successful learning.
Keywords: digitalization, inclusion, inclusive media education, global classroom, systematic review
Procedia PDF Downloads 81
135 Cuban's Supply Chains Development Model: Qualitative and Quantitative Impact on Final Consumers
Authors: Teresita Lopez Joy, Jose A. Acevedo Suarez, Martha I. Gomez Acosta, Ana Julia Acevedo Urquiaga
Abstract:
Current trends in business competitiveness indicate the need to manage businesses as supply chains and not in isolation. The use of strategies aimed at the maximum satisfaction of customers in a network, based on inter-company cooperation, contributes to obtaining successful joint results. In the Cuban economic context, the development of productive linkages to achieve integrated management of supply chains is considered a key aspect. In order to achieve this jump, it is necessary to develop acting capabilities in the entities that make up the chains through a systematic procedure that arrives at a management model in consonance with the environment. The objective of the research focuses on designing a model and procedure for the development of integrated management of supply chains in economic entities. The results obtained are the Model and the Procedure for the Development of the Supply Chains Integrated Management (MP-SCIM). The Model is based on the development of logistics in the network actors, joint work between companies, collaborative planning and the monitoring of a main indicator according to the end customers. The application Procedure starts from the well-founded need for development in a supply chain and focuses on training entrepreneurs as doers. Characterization and diagnosis are done in order to later define the design of the network and the relationships between the companies. It takes feedback into account as a method of updating the conditions and a way to focus the objectives according to the final customers. The MP-SCIM is the result of systematic work with a supply chain approach in companies that have consolidated as coordinators of their networks. The cases of the edible oil chain and the explosives chain for the construction sector reflect the most remarkable advances, since they have applied this approach for more than 5 years and maintain it as a general strategy of successful development.
The edible oil trading company experienced a jump in sales. In 2006, the company started the analysis in order to define the supply chain, apply diagnosis techniques, define problems and implement solutions. The involvement of management and the progressive formation of performance capacities in the personnel allowed the application of tools according to the context. The company that coordinates the explosives chain for the construction sector shows adequate training, with independence and opportunity in the face of different situations and variations in its business environment. The appropriation of tools and techniques for the analysis and implementation of proposals is a characteristic feature of this case. The coordinating entity applies integrated supply chain management to its decisions based on the timely training of the necessary action capabilities for each situation. Other cases of study and application that validate these tools are also detailed in this paper, and they highlight the results of generalization in the quantitative and qualitative improvement according to the final clients. These cases are: teaching literature in universities, agricultural products of local scope, and medicine supply chains.
Keywords: integrated management, logistic system, supply chain management, tactical-operative planning
Procedia PDF Downloads 152
134 Fire Safe Medical Oxygen Delivery for Aerospace Environments
Authors: M. A. Rahman, A. T. Ohta, H. V. Trinh, J. Hyvl
Abstract:
Atmospheric pressure and oxygen (O2) concentration are critical life support parameters for human-occupied aerospace vehicles and habitats. Various medical conditions may require medical O2; for example, the American Medical Association has determined that commercial air travel exposes passengers to altitude-related hypoxia and gas expansion. This may cause some passengers to experience significant symptoms and medical complications during the flight, requiring supplemental medical-grade O2 to maintain adequate tissue oxygenation and prevent hypoxemic complications. Although supplemental medical-grade O2 is a successful lifesaver for respiratory and cardiac failure, O2-enriched exhaled air can contain more than 95% O2, increasing the likelihood of a fire. In an aerospace environment, a localized high-concentration O2 bubble forms around a patient being treated for hypoxia, increasing the cabin O2 beyond the safe limit. To address this problem, this work describes a medical O2 delivery system that can reduce the O2 concentration of patient-exhaled O2-rich air to safe levels while maintaining the prescribed O2 administration to the patient. The O2 delivery system is designed to be a part of the medical O2 kit. The system uses cationic multimetallic cobalt complexes to reversibly, selectively, and stoichiometrically chemisorb O2 from the exhaled air. An air-release sub-system monitors the exhaled air, and as soon as the O2 percentage falls below 21%, the air is released to the room air. The O2-enriched exhaled air is channeled through a layer of porous, thin-film heaters coated with the cobalt complex. The complex absorbs O2, and when saturated, the complex is heated to 100°C using the thin-film heater. Upon heating, the complex desorbs O2 and is once again ready to absorb or remove the excess O2 from exhaled air. The O2 absorption is a sub-second process, and desorption is a multi-second process. While heating at 0.685 °C/sec, the complex desorbs ~90% of its O2 in 110 sec.
These fast reaction times mean that a simultaneous absorb/desorb process in the O2 delivery system will create continuous absorption of O2. Moreover, the complex can concentrate O2 by a factor of 160 times that in air and desorb over 90% of the O2 at 100°C. Over 12 cycles of thermogravimetry measurement, less than a 0.1% decrease in the reversibility of O2 uptake was observed. One kilogram of the complex can desorb over 20 L of O2, so simultaneous O2 desorption by 0.5 kg of complex and absorption by 0.5 kg of complex can potentially continuously remove 9 L/min of O2 (~90% desorbed at 100°C) from exhaled air. The complex was synthesized and characterized for reversible O2 absorption and efficacy. The complex changes its color from dark brown to light gray after O2 desorption. In addition to thermogravimetric analysis, the O2 absorption/desorption cycle was characterized using optical imaging, showing stable color changes over ten cycles. The complex was also tested at room temperature in a low-O2 environment in its O2-desorbed state and was observed to hold the deoxygenated state under these conditions. The results show the feasibility of using the complex for reversible O2 absorption in the proposed fire safe medical O2 delivery system.
Keywords: fire risk, medical oxygen, oxygen removal, reversible absorption
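A quick sanity check of the figures quoted above can be sketched in a few lines; this is our own back-of-the-envelope arithmetic (the ~25 °C starting temperature and the linear scaling of the 20 L/kg capacity are assumptions, not stated by the authors):

```python
# Back-of-the-envelope check of the desorption figures quoted above.
# Assumptions: desorption starts near room temperature (~25 degrees C),
# and the 20 L/kg O2 capacity scales linearly with the mass of the complex.

HEATING_RATE_C_PER_S = 0.685   # reported heating rate
DESORB_TIME_S = 110            # time to desorb ~90% of the O2
CAPACITY_L_PER_KG = 20.0       # O2 released per kg of complex
DESORB_FRACTION = 0.90         # fraction desorbed at 100 degrees C
BATCH_MASS_KG = 0.5            # one of the two alternating batches

# Temperature reached after 110 s of heating from ~25 degrees C
# (consistent with the 100 degrees C desorption temperature):
final_temp = 25.0 + HEATING_RATE_C_PER_S * DESORB_TIME_S
print(f"Temperature after {DESORB_TIME_S} s: ~{final_temp:.0f} C")

# O2 released by one 0.5 kg batch per desorption cycle:
batch_yield = BATCH_MASS_KG * CAPACITY_L_PER_KG * DESORB_FRACTION
print(f"O2 released per {BATCH_MASS_KG} kg batch per cycle: {batch_yield:.1f} L")
```

The temperature check recovers the reported 100 °C desorption point, and one 0.5 kg batch yields the 9 L figure underlying the continuous-removal estimate.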
Procedia PDF Downloads 102
133 The Influence of Human Movement on the Formation of Adaptive Architecture
Authors: Rania Raouf Sedky
Abstract:
Adaptive architecture relates to buildings specifically designed to adapt to their residents and their environments. When designing a biologically adaptive system, the way living creatures in nature constantly adapt to different external and internal stimuli is a great inspiration. The issue is not just how to create a system that is capable of change but also how to find the quality of change and determine the incentive to adapt. The research examines the possibilities of transforming spaces using the human body as an active tool. The research also aims to design and build an effective dynamic structural system that can be applied at an architectural scale and integrated into the creation of a new adaptive system that allows us to conceive a new way to design, build and experience architecture in a dynamic manner. The main objective was to address the possibility of a reciprocal transformation between the user and the architectural element, so that the architecture can adapt to the user as the user adapts to the architecture. The motivation is the desire to address the psychological benefits of an environment that can respond and thus empathize with human emotions through its ability to adapt to the user. Adaptive applications of kinematic structures have been discussed in architectural research for more than a decade, and this work has proven its effectiveness in developing responsive and adaptive kinematic structures and their contribution to 'smart architecture'. A wide range of strategies has been used in building complex kinetic and robotic mechanisms to achieve convertibility and adaptability in engineering and architecture. One of the main contributions of this research is to explore how the physical environment can change its shape to accommodate different spatial displays based on the movement of the user’s body. The main focus is on the relationship between materials, shape, and interactive control systems.
The intention is to develop a scenario where the user can move and the structure interacts without any physical contact. This soft, shifting formal language and interaction control technology will provide new possibilities for enriching human-environment interactions. How can we imagine a space that can perceive and understand its users through physical gestures and visual expressions, and respond accordingly? How can we imagine a space whose interaction depends not only on preprogrammed operations but on real-time feedback from its users? The research also raises some important questions for the future. What would be the appropriate structure to show physical interaction with the dynamic world? This study concludes with a strong belief in the future of responsive motor structures. We envision that they will improve on current structures and radically change the way spaces are experienced. These structures have obvious advantages in terms of energy performance and the ability to adapt to the needs of users. The research highlights the interface between remote sensing and a responsive environment to explore the possibility of an interactive architecture that adapts and responds to user movements.
Keywords: adaptive architecture, interactive architecture, responsive architecture, tensegrity
Procedia PDF Downloads 155
132 Development of Building Information Modeling in Property Industry: Beginning with Building Information Modeling Construction
Authors: B. Godefroy, D. Beladjine, K. Beddiar
Abstract:
In France, construction BIM actors commonly evoke the gains of BIM for exploitation through integration of the life cycle of a building. Standardization at level 7 of development would achieve this stage of the digital model. The householders include local public authorities, social landlords, public institutions (health and education), enterprises, and facilities management companies. They have a dual role: owner and manager of their housing complex. In a context of financial constraint, exploitation BIM aims to control costs, make long-term investment choices, renew the portfolio and enable environmental standards to be met. It assumes knowledge of the existing buildings, marked by their size and complexity. The information sought must be synthetic and structured; it concerns, in general, a real estate complex. We conducted a study with professionals about their concerns and ways of using BIM, to see how householders could benefit from this development. To obtain results, with the recurring questions of project management about the needs of operators in mind, we tested the following stages: 1) Inculcate a minimal culture of BIM in the operator’s multidisciplinary teams, then by business line, 2) Learn, through BIM tools, to adapt each trade to operations, 3) Understand the place and creation of a graphic and technical database management system, and determine the components of its library according to their needs, 4) Identify the cross-functional interventions of its managers by business line (operations, technical, information system, purchasing and legal aspects), 5) Set an internal protocol and define the impact of BIM in their digital strategy.
In addition, continuity of management through the integration of construction models in the operation phase raises the question of interoperability: the control of the production of IFC files in the operator’s proprietary format and the export and import processes, a solution rivaled by the traditional method of vectorization of paper plans. Companies that digitize housing complexes, and those in FM, produce an IFC file directly, according to their needs, without recourse to the construction model; they produce business models for exploitation. They standardize components and equipment that are useful for coding. We observed the consequences resulting from the use of BIM in the property industry and made the following observations: a) The value of data prevails over the graphics; 3D is little used. b) The owner must, through his organization, promote the feedback of technical management information during the design phase. c) The operator’s reflection on outsourcing concerns the acquisition of its information system and associated services, observing the risks and costs related to their internal or external development. This study allows us to highlight: i) the need for an internal organization of operators prior to a response to construction management; ii) the evolution towards automated methods for creating models dedicated to exploitation, for which a specialization would be required; iii) a review of the communication of project management, since management continuity does not articulate around the building model alone; it must take into account the environment of the operator and reflect on its scope of action.
Keywords: information system, interoperability, models for exploitation, property industry
Procedia PDF Downloads 144
131 Creation of a Trust-Wide, Cross-Speciality, Virtual Teaching Programme for Doctors, Nurses and Allied Healthcare Professionals
Authors: Nelomi Anandagoda, Leanne J. Eveson
Abstract:
During the COVID-19 pandemic, the surge in in-patient admissions across the medical directorate of a district general hospital necessitated the implementation of an incident rota. Conscious of the impact on training and professional development, the idea of developing a virtual teaching programme was conceived. The programme initially aimed to provide junior doctors, specialist nurses, pharmacists, and allied healthcare professionals from medical specialties and those re-deployed from other specialties (e.g., ophthalmology, GP, surgery, psychiatry) the knowledge and skills to manage the deteriorating patient with COVID-19. The programme was later developed to incorporate the general internal medicine curriculum. To facilitate continuing medical education whilst maintaining social distancing during this period, a virtual platform was used to deliver teaching to junior doctors across two large district general hospitals and two community hospitals. Teaching sessions were recorded and uploaded to a common platform, providing a resource for participants to catch up on and re-watch teaching sessions, making strides towards reducing discrimination against the professional development of less-than-full-time trainees. This created a learning environment which is inclusive and accessible to adult learners in a self-directed manner. The negative impact of the pandemic on the well-being of healthcare professionals is well documented. To support the multi-disciplinary team, the virtual teaching programme evolved to include sessions on well-being, resilience, and work-life balance. Providing teaching for learners across the multi-disciplinary team (MDT) has been an eye-opening experience. By challenging the concept that learners should only be taught within their own peer groups, the authors have fostered a greater appreciation of the strengths of the MDT and showcased the immense wealth of expertise available within the trust.
The inclusive nature of the teaching and the ease of joining a virtual teaching session have facilitated the dissemination of knowledge across the MDT, thus improving patient care on the frontline. The weekly teaching programme has been running for over eight months, with ongoing engagement, interest, and participation. As described above, the teaching programme has evolved to accommodate the needs of its learners. It has received excellent feedback, with an appreciation of its inclusive, multi-disciplinary, and holistic nature. The COVID-19 pandemic provided a catalyst to rapidly develop novel methods of working and training and widened access/exposure to the virtual technologies available to large organisations. By merging pedagogical expertise and technology, the authors have created an effective online learning environment. Although the authors do not propose to replace face-to-face teaching altogether, this model of virtual multidisciplinary, cross-site teaching has proven to be a great leveller. It has made high-quality teaching accessible to learners of different confidence levels, grades, specialties, and working patterns.
Keywords: cross-site, cross-speciality, inter-disciplinary, multidisciplinary, virtual teaching
Procedia PDF Downloads 168
130 Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces
Authors: Ibtissem Hosni, Lilia Bennaceur Farah, Saber Mohamed Naceur
Abstract:
In order to assess water availability in the soil, it is crucial to have information about the distributed soil moisture content; this parameter helps in understanding the effect of humidity on the exchange between soil, plant cover and atmosphere, in addition to fully understanding surface processes and the hydrological cycle. On the other hand, aerodynamic roughness length is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the ability of the surface to absorb the momentum of the airflow. In numerous applications of surface hydrology and meteorology, aerodynamic roughness length is an important parameter for estimating momentum, heat and mass exchange between the soil surface and the atmosphere. It is important, in this respect, to consider the impact of atmospheric factors in general, and natural erosion in particular, in the process of soil evolution, its characterization and the prediction of its physical parameters. The study of wind-induced movements over vegetated soil surfaces, whether spaced plants or full plant cover, is motivated by significant research efforts in agronomy and biology. The major known problem in this area concerns crop damage by wind, which is a booming field of research. Obviously, most soil surface models require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description in which the surface is considered a superposition of a finite number of one-dimensional Gaussian processes, each one having its own spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have introduced the multi-layer aspect of the humidity of the soil surface to take into account a volume component in the problem of the backscattered radar signal.
As humidity increases, the dielectric constant of the soil-water mixture increases, and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery cannot be applied directly to areas covered with vegetation due to the vegetation backscattering. Thus, the radar response corresponds to the combined signature of the vegetation layer and the layer of the soil surface. Therefore, the key issue in the numerical estimation of soil moisture is to separate the two contributions and calculate the scattering behaviors of the two layers by defining the scattering of the vegetation and of the soil below. This paper presents a synergistic methodology for estimating roughness and soil moisture from C-band radar measurements. The methodology adequately represents a microwave/optical model which has been used to calculate the scattering behavior of the aerodynamic vegetation-covered area by defining the scattering of the vegetation and the soil below.
Keywords: aerodynamic, bi-dimensional, vegetation, synergistic
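The 2D multiscale description above can be illustrated with a short sketch: a rough surface synthesized as the superposition of Gaussian random fields, each carrying a single spatial scale. This is our own minimal illustration of the idea; the grid size, dyadic scale values and Fourier-domain Gaussian filtering are assumptions, not the authors' implementation (which uses the wavelet transform and the Mallat algorithm):

```python
import numpy as np

def gaussian_field(n, correlation_length, rng):
    """Zero-mean Gaussian random field with a single spatial scale,
    obtained by low-pass filtering white noise in the Fourier domain."""
    noise = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n)
    k2 = kx[:, None] ** 2 + kx[None, :] ** 2
    # Gaussian filter whose width selects the scale of this component.
    filt = np.exp(-0.5 * k2 * correlation_length ** 2)
    field = np.fft.ifft2(np.fft.fft2(noise) * filt).real
    return field / field.std()  # normalize to unit rms height

rng = np.random.default_rng(0)
n = 128                  # grid size (illustrative)
scales = [2, 4, 8, 16]   # dyadic scales, one per component
# The multiscale surface: a superposition of single-scale Gaussian processes.
surface = sum(gaussian_field(n, s, rng) for s in scales)
print(surface.shape, float(surface.std()))
```

In the paper's approach the transform runs in the opposite direction, decomposing a measured surface into such single-scale components via the Mallat algorithm.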
Procedia PDF Downloads 267
129 Transducers for Measuring Displacements of Rotating Blades in Turbomachines
Authors: Pavel Prochazka
Abstract:
The study deals with transducers for measuring the vibration displacements of rotating blade tips in turbomachines. In order to prevent major accidents with extensive economic consequences, there is an urgent need for every low-pressure steam turbine stage to be equipped with a modern non-contact measuring system providing information on blade loading, damage and residual lifetime under operation. The requirement to measure the vibration and static characteristics of steam turbine blades therefore calls for the development and operational verification of both new types of sensors and new measuring principles and methods. The task is really demanding: to measure displacements of blade tips with a resolution of the order of 10 μm at speeds up to 750 m/s, humidity of 100% and temperatures up to 200 °C. While capacitive and optical transducers are primarily used in gas turbines, these transducers cannot be used in steam turbines. The reason is moisture vapor, droplets of condensing water and dirt, which disable the function of such sensors. Therefore, the most feasible approach was to focus on research into electromagnetic sensors featuring promising characteristics for the given blade materials in a steam environment. The following types of sensors have been developed and studied both experimentally and theoretically at the Institute of Thermodynamics, Academy of Sciences of the Czech Republic: eddy-current, Hall effect, inductive and magnetoresistive. Eddy-current transducers demand a small distance of 1 to 2 mm and change their properties in the harsh environment of steam turbines. Hall effect sensors have relatively low sensitivity and high values of offset, drift and, especially, noise. Induction sensors do not require any supply current and have a simple construction. The magnitude of their output voltage depends on the velocity of the measured body and concurrently on the varying magnetic induction, so they cannot be used statically.
Magnetoresistive sensors are formed by magnetoresistors arranged in a Wheatstone bridge. Supplying the sensor from a current source provides better linearity. The MR sensors can be used permanently at temperatures up to 200 °C with lower supply currents of about 1 mA. The frequency range of 0 to 300 kHz is an order of magnitude higher than that of the Hall effect and induction sensors. The frequency band starts at zero frequency, which is very important because the sensors can be calibrated statically. The MR sensors feature high sensitivity, a high common mode rejection ratio and low noise; the symmetry of the bridge arrangement leads to the high common mode rejection ratio and the suppression of disturbances, which is important, especially in industrial applications. Magnetoresistive transducers thus provide a range of excellent properties indicating their priority for displacement measurements of rotating blades in turbomachines.
Keywords: turbines, blade vibration, blade tip timing, non-contact sensors, magnetoresistive sensors
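The bridge readout described above can be made concrete with a small sketch. Only the ~1 mA supply current comes from the text; the nominal arm resistance and the resistance change are illustrative assumptions:

```python
def bridge_output(i_supply, r_nominal, delta_r):
    """Differential output of a full Wheatstone bridge of four
    magnetoresistors (two arms at R + dR, two at R - dR) driven by a
    current source."""
    # Each branch is (R + dR) in series with (R - dR) = 2R, independent of
    # dR, so the voltage across the current-driven bridge is constant:
    r_branch = 2.0 * r_nominal
    v_supply = i_supply * (r_branch / 2.0)  # two equal branches in parallel
    # Divider mid-points, with the +dR resistor on top in one branch and
    # on the bottom in the other:
    v_a = v_supply * (r_nominal + delta_r) / r_branch
    v_b = v_supply * (r_nominal - delta_r) / r_branch
    return v_a - v_b  # simplifies to i_supply * delta_r

# Example: 1 mA supply, 1 kOhm arms, 1% resistance change.
v_out = bridge_output(1e-3, 1000.0, 10.0)
print(f"{v_out * 1e3:.2f} mV")  # -> 10.00 mV
```

The output reduces to i_supply * delta_r, exactly linear in the resistance change, which is one way to see why the current-source supply mentioned above improves linearity.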
Procedia PDF Downloads 128
128 Multi-Modality Brain Stimulation: A Treatment Protocol for Tinnitus
Authors: Prajakta Patil, Yash Huzurbazar, Abhijeet Shinde
Abstract:
Aim: To develop a treatment protocol for the management of tinnitus through multi-modality brain stimulation. Methodology: The present study included 33 adults with unilateral (31 subjects) and bilateral (2 subjects) chronic tinnitus, with or without hearing loss, independent of etiology. The treatment protocol included 5 consecutive sessions with a follow-up of 6 months. Each session was divided into 3 parts: • Pre-treatment: a) Informed consent b) Pitch and loudness matching. • Treatment: Bimanual paper-pen task with tinnitus masking for 30 minutes. • Post-treatment: a) Pitch and loudness matching b) Directive counseling and obtaining feedback. The paper-pen task is performed bimanually and includes carrying out two different writing activities in different contexts. The level of difficulty of the activities was increased in successive sessions. Narrowband noise of the same frequency as the tinnitus was presented at 10 dB SL for 30 minutes simultaneously in the ear with tinnitus. Result: The perception of tinnitus was no longer present in 4 subjects, while in the remaining subjects it was reduced to an intensity whose perception no longer troubled them, without causing residual facilitation. In all subjects, the intensity of tinnitus decreased by 45 dB on average; in a few subjects, it decreased by more than 45 dB. The approach resulted in statistically significant reductions in Tinnitus Functional Index and Tinnitus Handicap Inventory scores. The results correlate with pre- and post-treatment Tinnitus Handicap Inventory scores, which dropped from 90% to 0%. Discussion: Brain mapping (qEEG) studies report that there are multiple parallel overlapping neural subnetworks in the non-auditory areas of the brain which exhibit abnormal, constant and spontaneous neural activity involved in the perception of tinnitus, with each subnetwork and area reflecting a specific aspect of the tinnitus percept.
The paper-pen task and directive counseling are designed and delivered, respectively, in a way that is assumed to induce normal, rhythmically constant and premeditated neural activity and to mask the abnormal, constant and spontaneous neural activity in the above-mentioned subnetworks and specific non-auditory areas. Counseling focused on breaking the vicious cycle causing and maintaining the presence of tinnitus. Diverting auditory attention alone is insufficient to reduce the perception of tinnitus. Conscious awareness of tinnitus can be suppressed when individuals engage in cognitively demanding tasks of a non-auditory nature, such as the paper-pen task used in the present study. To carry out this task, selective, divided, sustained, simultaneous and split attention act cumulatively. The bimanual paper-pen task represents a top-down activity which underlies the brain’s ability to selectively attend to the bimanual written activity as the relevant stimulus and to ignore tinnitus, the irrelevant stimulus in the present study. Conclusion: The study suggests that this novel treatment approach is cost-effective, time-saving and efficient in eliminating tinnitus or reducing its intensity to a negligible level, thereby eliminating the negative reactions towards tinnitus.
Keywords: multi-modality brain stimulation, neural subnetworks, non-auditory areas, paper-pen task, top-down activity
Procedia PDF Downloads 146
127 The End Justifies the Means: Using Programmed Mastery Drill to Teach Spoken English to Spanish Youngsters, without Relying on Homework
Authors: Robert Pocklington
Abstract:
Most current language courses expect students to be ‘vocational’, sacrificing their free time in order to learn. However, pupils with a full-time job, or bringing up children, hardly have a spare moment. Others just need the language as a tool or a qualification, as if it were book-keeping or a driving license. Then there are children in unstructured families whose stressful lives make private study almost impossible, and countless parents whose evenings and weekends have become a nightmare of trying to get their children to do their homework. There are many arguments against homework being a necessity (rather than an optional extra for more ambitious or dedicated students), making a clear case for teaching methods which facilitate full learning of the key content within the classroom. A methodology which could be described as Programmed Mastery Learning has been used at Fluency Language Academy (Spain) since 1992 to teach English to over 4000 pupils yearly, with a staff of around 100 teachers, barely requiring homework. The course is structured according to the tenets of Programmed Learning: small manageable teaching steps, immediate feedback, and constant successful activity. For the Mastery component (not stopping until everyone has learned), memorisation and practice are entrusted to flashcard-based drilling in the classroom, leading all students to progress together and develop a permanently growing knowledge base. Vocabulary and expressions are memorised using flashcards as stimuli, obliging the brain to recover words from long-term memory repeatedly and converting them into reflex knowledge before they are deployed in sentence building. The use of grammar rules is practised with ‘cue’ flashcards: the brain refers consciously to the grammar rule each time it produces a phrase, until it comes easily. This automation of lexicon and correct grammar use greatly facilitates all other language and conversational activities. 
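The mastery loop described above (drill every item, give immediate feedback, re-queue misses until the whole set is learned) can be sketched in a few lines. This is a minimal illustrative simulation, not the Academy's actual procedure; the names `mastery_drill` and `answer_fn` are hypothetical.

```python
from collections import deque
import random

def mastery_drill(cards, answer_fn):
    """Drill flashcards until every card has been answered correctly.

    cards: dict mapping a prompt to its expected answer.
    answer_fn(prompt) -> str supplies the learner's answer; in a real
    classroom this would be an oral response, here it is a callable so
    the loop can be simulated. Returns the total number of attempts.
    """
    order = list(cards)
    random.shuffle(order)              # vary stimulus order between runs
    queue = deque(order)
    attempts = 0
    while queue:                       # mastery: the drill never stops early
        prompt = queue.popleft()
        attempts += 1
        if answer_fn(prompt) == cards[prompt]:
            continue                   # immediate feedback: correct, card retired
        queue.append(prompt)           # incorrect: the card comes round again
    return attempts
```

A spaced-repetition scheduler could replace the simple re-queue, but the plain cycle already captures the Mastery tenet: no card leaves the deck until it is answered correctly.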
The full B2 course consists of 48 units, each of which takes a class an average of 17.5 hours to complete, allowing the vast majority of students to reach B2 level in 840 class hours; this is corroborated by an 85% pass rate in the Cambridge University B2 exam (First Certificate). In the past, studying for qualifications was just one of many options open to young people. Nowadays, youngsters need to stay at school and obtain qualifications in order to get any kind of job. Many students in our classes have little intrinsic interest in what they are studying; they just need the certificate. In these circumstances, and with increasing government pressure to minimise failure, teachers can no longer think ‘If they don’t study, and fail, it’s their problem’. It is now becoming the teacher’s problem. Teachers are ever more in need of methods which make their pupils successful learners, and this means assuring learning in the classroom. Furthermore, homework is arguably the main divider between successful middle-class schoolchildren and failing working-class children who drop out: if everything important is learned at school, the latter will have a much better chance, favouring inclusiveness in the language classroom.
Keywords: flashcard drilling, fluency method, mastery learning, programmed learning, teaching English as a foreign language
Procedia PDF Downloads 109