Search results for: static pressure wind tunnel testing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8826

126 COVID-19’s Impact on the Use of Media, Educational Performance, and Learning in Children and Adolescents with ADHD Who Engaged in Virtual Learning

Authors: Christina Largent, Tazley Hobbs

Abstract:

Objective: A literature review was performed to examine the existing research on the COVID-19 lockdown as it relates to children and adolescents with ADHD, media use, and the impact on educational performance and learning. It was surmised that, with the COVID-19 shutdown and transition to remote learning, a less structured learning environment, increased screen time, and potential difficulty accessing school resources would impair ADHD individuals’ performance and learning. A resulting increase in the number of youths diagnosed and treated for ADHD would be expected. As of yet, there has been little to no published data on the incidence of ADHD as it relates to COVID-19 outside of reports from several nonprofit agencies such as CHADD (Children and Adults with Attention-Deficit/Hyperactivity Disorder), which reported an increased number of calls to its helpline, the New York-based Child Mind Institute, which reported an increased number of appointments to discuss medications, and research released by Athenahealth showing an increase in the number of patients receiving new diagnoses of ADHD and new prescriptions for ADHD medications. Methods: A literature search for articles published between 2020 and 2021 in PubMed, Google Scholar, and PsycINFO was performed. Search phrases and keywords included “covid, adhd, child, impact, remote learning, media, screen”. Results: Studies primarily utilized parental reports, with very few from the perspective of the ADHD individuals themselves. Most findings thus far show that with the COVID-19 quarantine and transition to online learning, ADHD individuals experienced a decreased ability to stay focused or adhere to a daily routine, as well as increased inattention-related problems, such as careless mistakes or incomplete homework, which in turn translated into overall more difficulty with remote learning. Compounding this, one study showed (on evaluation of just two different sites within the US) that school-based services for these individuals decreased with the shift to online learning. Increased screen time, television, social media, and gaming were noted amongst ADHD individuals. One study further differentiated the degree of digital media use, identifying individuals with “problematic” or “non-problematic” use. ADHD children with problematic digital media use suffered from more severe core symptoms of ADHD, negative emotions, executive function deficits, damage to the family environment, pressure from life events, and a lower motivation to learn. Conclusions and Future Considerations: Studies found that online learning was not only difficult for ADHD individuals but, together with greater use of digital media, was associated with worsening ADHD symptoms that impaired schoolwork, along with secondary findings of worsening mood and behavior. Currently, data on the number of new ADHD cases, as well as data on the prescription and usage of stimulants during COVID-19, have not been well documented or studied; such study would be well warranted out of concern for overdiagnosing or overprescribing our youth. It would also be well worth studying how reversible or long-lasting these negative impacts may be.

Keywords: COVID-19, remote learning, media use, ADHD, child, adolescent

Procedia PDF Downloads 112
125 Production, Characterisation, and in vitro Degradation and Biocompatibility of a Solvent-Free Polylactic-Acid/Hydroxyapatite Composite for 3D-Printed Maxillofacial Bone-Regeneration Implants

Authors: Carlos Amnael Orozco-Diaz, Robert David Moorehead, Gwendolen Reilly, Fiona Gilchrist, Cheryl Ann Miller

Abstract:

The current gold-standard for maxillofacial reconstruction surgery (MRS) utilizes auto-grafted cancellous bone as a filler. This study was aimed towards developing a polylactic-acid/hydroxyapatite (PLA-HA) composite suitable for fused-deposition 3D printing. Functionalization of the polymer through the addition of HA was directed to promoting bone-regeneration properties so that the material can rival the performance of cancellous bone grafts in terms of bone-lesion repair. This kind of composite enables the production of MRS implants based on 3D reconstructions from image studies – namely computed tomography – for anatomically-correct fitting. The present study encompassed in-vitro degradation and in-vitro biocompatibility profiling for 3D-printed PLA and PLA-HA composites. PLA filament (Verbatim Co.) and Captal S micro-scale hydroxyapatite (HA) powder (Plasma Biotal Ltd) were used to produce PLA-HA composites at 5, 10, and 20%-by-weight HA concentration. These were extruded into 3D-printing filament, processed in a BFB-3000 3D printer (3D Systems Co.) into tensile specimens, and mechanically challenged as per ASTM D638-03. Furthermore, tensile specimens were subjected to accelerated degradation in phosphate-buffered saline solution at 70°C for 23 days, as per ISO 10993-13:2010. This included monitoring of mass loss (through dry-weighing), crystallinity (through thermogravimetric analysis/differential thermal analysis), molecular weight (through gel-permeation chromatography), and tensile strength. In-vitro biocompatibility analysis included cell viability and extracellular matrix deposition, which were performed both on flat surfaces and on 3D-constructs – both produced through 3D-printing. Discs of 1 cm in diameter and cubic 3D-meshes of 1 cm³ were 3D printed in PLA and PLA-HA composites (n = 6). The samples were seeded with 5000 MG-63 osteosarcoma-like cells, with cell viability extrapolated throughout 21 days via resazurin reduction assays. As evidence of osteogenicity, collagen and calcium deposition were indirectly estimated through Sirius Red staining and Alizarin Red staining, respectively. Results have shown that 3D printed PLA loses structural integrity as early as the first day of accelerated degradation, which was significantly faster than the literature suggests. This was reflected in the loss of tensile strength down to untestable brittleness. During degradation, mass loss, molecular weight, and crystallinity behaved similarly to results found in similar studies for PLA. All composite versions and pure PLA were found to perform equivalent to tissue-culture plastic (TCP) in supporting the seeded-cell population. Significant differences (p = 0.05) were found on collagen deposition for higher HA concentrations, with composite samples performing better than pure PLA and TCP. Additionally, per-cell calcium deposition on the 3D-meshes was significantly lower when comparing 3D-meshes to discs of the same material (p = 0.05). These results support the idea that 3D-printable PLA-HA composites are a viable resorbable material for artificial grafts for bone-regeneration. Degradation data suggests that 3D-printing of these materials – as opposed to other manufacturing methods – might result in faster resorption than currently-used PLA implants.

Keywords: bone regeneration implants, 3D-printing, in vitro testing, biocompatibility, polymer degradation, polymer-ceramic composites

Procedia PDF Downloads 132
124 Reverse Logistics Network Optimization for E-Commerce

Authors: Albert W. K. Tan

Abstract:

This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate key decision factors crucial to reverse logistics network design for e-commerce. Through this exploration, we aim to present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research endeavor is to develop a comprehensive framework tailored to advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research include: (i) Identifying pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains. (ii) Formulating a structured framework designed to offer informed recommendations for sound network design decisions applicable to relevant industries and scenarios. (iii) Proposing a mathematical model to optimize the reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed through a combination of insights from the literature review and information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operations modes: (1) Collection, (2) Sorting and testing, (3) Processing, and (4) Storage. Key factors to consider in reverse logistics network design: I) Centralized vs. decentralized processing: Centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies. In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenues from returned items. Moreover, it aids in determining the most appropriate reverse channel for handling returns. By contrast, a decentralized system is more suitable when products are returned directly from consumers to retailers. In this scenario, individual sales outlets serve as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. II) In-house vs. third-party logistics providers: The decision between insourcing and outsourcing in reverse logistics network design is pivotal. In insourcing, a company handles the entire reverse logistics process, including material reuse. In contrast, outsourcing involves third-party providers taking on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions.
The model considers key factors identified in the framework, such as transportation costs, facility capacities, and lead times. The authors have employed mixed-integer linear programming (MILP) to find optimal solutions that minimize costs while meeting organizational objectives.
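
To make the modelling step concrete, the sketch below shows the general shape such a mixed-integer linear program can take, written in Python with the open-source PuLP solver. The collection points, candidate facilities, costs and capacities are invented for illustration; this is not the authors' model, only a minimal example of the cost-minimizing facility-location-and-flow structure described above.

```python
# Illustrative sketch only: a minimal MILP for a reverse logistics network,
# assuming hypothetical collection points, candidate processing facilities,
# unit transport costs and capacities.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

collection = ["c1", "c2", "c3"]                      # return origins
facility = ["f1", "f2"]                              # candidate processing sites
returns = {"c1": 120, "c2": 80, "c3": 60}            # returned units per period
capacity = {"f1": 200, "f2": 150}                    # facility capacity
fixed_cost = {"f1": 5000, "f2": 3500}                # cost of opening a facility
ship_cost = {("c1", "f1"): 2.0, ("c1", "f2"): 3.5,
             ("c2", "f1"): 2.5, ("c2", "f2"): 1.5,
             ("c3", "f1"): 4.0, ("c3", "f2"): 2.0}   # cost per unit shipped

model = LpProblem("reverse_logistics", LpMinimize)
x = {(i, j): LpVariable(f"x_{i}_{j}", lowBound=0) for i in collection for j in facility}
y = {j: LpVariable(f"open_{j}", cat=LpBinary) for j in facility}

# objective: fixed opening costs plus transport costs
model += lpSum(fixed_cost[j] * y[j] for j in facility) + \
         lpSum(ship_cost[i, j] * x[i, j] for i in collection for j in facility)

for i in collection:                                 # every returned unit must be routed
    model += lpSum(x[i, j] for j in facility) == returns[i]
for j in facility:                                   # capacity counts only if the facility is opened
    model += lpSum(x[i, j] for i in collection) <= capacity[j] * y[j]

model.solve()
print({j: y[j].value() for j in facility})
```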

Keywords: reverse logistics, supply chain management, optimization, e-commerce

Procedia PDF Downloads 16
123 CLOUD Japan: Prospective Multi-Hospital Study to Determine the Population-Based Incidence of Hospitalized Clostridium difficile Infections

Authors: Kazuhiro Tateda, Elisa Gonzalez, Shuhei Ito, Kirstin Heinrich, Kevin Sweetland, Pingping Zhang, Catia Ferreira, Michael Pride, Jennifer Moisi, Sharon Gray, Bennett Lee, Fred Angulo

Abstract:

Clostridium difficile (C. difficile) is the most common cause of antibiotic-associated diarrhea and infectious diarrhea in healthcare settings. Japan has an aging population; the elderly are at increased risk of hospitalization, antibiotic use, and C. difficile infection (CDI). Little is known about the population-based incidence and disease burden of CDI in Japan, although limited hospital-based studies have reported a lower incidence than the United States. To understand CDI disease burden in Japan, CLOUD (Clostridium difficile Infection Burden of Disease in Adults in Japan) was developed. CLOUD will derive population-based incidence estimates of the number of CDI cases per 100,000 population per year in Ota-ku (population 723,341), one of the districts in Tokyo, Japan. CLOUD will include approximately 14 of the 28 Ota-ku hospitals, including Toho University Hospital, which is a 1,000-bed tertiary care teaching hospital. During the 12-month patient enrollment period, which is scheduled to begin in November 2018, Ota-ku residents >50 years of age who are hospitalized at a participating hospital with diarrhea (>3 unformed stools (Bristol Stool Chart 5-7) in 24 hours) will be actively ascertained, consented, and enrolled by study surveillance staff. A stool specimen will be collected from enrolled patients and tested at a local reference laboratory (LSI Medience, Tokyo) using QUIK CHEK COMPLETE® (Abbott Laboratories), which simultaneously tests specimens for the presence of glutamate dehydrogenase (GDH) and C. difficile toxins A and B. A frozen stool specimen will also be sent to the Pfizer Laboratory (Pearl River, United States) for analysis using a two-step diagnostic testing algorithm that is based on detection of C. difficile strains/spores harboring the toxin B gene by PCR, followed by detection of free toxins (A and B) using a proprietary cell cytotoxicity neutralization assay (CCNA) developed by Pfizer. Positive specimens will be anaerobically cultured, and C. difficile isolates will be characterized by ribotyping and whole-genome sequencing. CDI patients enrolled in CLOUD will be contacted weekly for 90 days following diarrhea onset to describe clinical outcomes including recurrence, reinfection, and mortality, and patient-reported economic, clinical and humanistic outcomes (e.g., health-related quality of life, worsening of comorbidities, and patient and caregiver work absenteeism). Studies will also be undertaken to fully characterize the catchment area to enable population-based estimates. The 12-month active ascertainment of CDI cases among hospitalized Ota-ku residents with diarrhea in CLOUD, and the characterization of the Ota-ku catchment area, including estimation of the proportion of all hospitalizations of Ota-ku residents that occur in the CLOUD-participating hospitals, will yield CDI population-based incidence estimates, which can be stratified by age group, risk group, and source (hospital-acquired or community-acquired). These incidence estimates will be extrapolated, following age standardization using national census data, to yield CDI disease burden estimates for Japan. CLOUD also serves as a model for studies in other countries that can use the CLOUD protocol to estimate CDI disease burden.
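
The arithmetic behind the planned incidence estimate is simple and worth spelling out. The sketch below uses entirely hypothetical case counts, capture proportion and at-risk population share; only the Ota-ku population figure comes from the abstract, and the age-standardization step is omitted.

```python
# Illustrative calculation only: how a population-based incidence per 100,000
# per year could be derived from cases ascertained in participating hospitals,
# adjusted for the share of residents' hospitalizations those hospitals capture.
observed_cases = 180                    # hypothetical CDI cases found in CLOUD hospitals
capture_proportion = 0.60               # hypothetical share of resident hospitalizations captured
population_at_risk = 723_341 * 0.35     # Ota-ku population times a hypothetical share aged >50

estimated_cases = observed_cases / capture_proportion
incidence_per_100k = estimated_cases / population_at_risk * 100_000
print(f"Estimated incidence: {incidence_per_100k:.1f} per 100,000 per year")
```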

Keywords: Clostridium difficile, disease burden, epidemiology, study protocol

Procedia PDF Downloads 234
122 Study of Objectivity, Reliability and Validity of Pedagogical Diagnostic Parameters Introduced in the Framework of a Specific Research

Authors: Emiliya Tsankova, Genoveva Zlateva, Violeta Kostadinova

Abstract:

The challenges modern education faces undoubtedly require reforms and innovations aimed at the reconceptualization of existing educational strategies, the introduction of new concepts and novel techniques and technologies related to the recasting of the aims of education and the remodeling of the content and methodology of education, which would guarantee the streamlining of our education with basic European values. Aim: The aim of the current research is the development of a didactic technology for the assessment of the applicability and efficacy of game techniques in pedagogic practice calibrated to specific content and the age specificity of learners, as well as for evaluating the efficacy of such approaches for the facilitation of the acquisition of biological knowledge at a higher theoretical level. Results: In this research, we examine the objectivity, reliability and validity of two newly introduced diagnostic parameters for assessing the durability of the acquired knowledge. A pedagogic experiment has been carried out targeting the verification of the hypothesis that the introduction of game techniques in biological education leads to an increase in the quantity, quality and durability of the knowledge acquired by students. For the purposes of monitoring the effect of the application of the pedagogical technique employing game methodology on the durability of the acquired knowledge, a test-based examination has been applied to students from a control group (CG) and students from an experimental group (EG) on the same content after a six-month period. The analysis is based on: 1. A study of the statistical significance of the differences of the tests for the CG and the EG, applied after a six-month period, which however is not indicative of the presence or absence of a marked effect from the applied pedagogic technique in cases when the entry levels of the two groups are different. 2. For a more reliable comparison, independent of the entry level of each group, another parameter, an “indicator of efficacy of game techniques for the durability of knowledge”, has been used for the assessment of the achievement results and the durability of this methodology of education. The monitoring of the studied parameters in their dynamic unfolding in different age groups of learners unquestionably reveals a positive effect of the introduction of game techniques in education in respect of durability and permanence of acquired knowledge. Methods: In the current research, the following battery of diagnostic research methods and techniques has been employed: theoretical analysis and synthesis; an actual pedagogical experiment; a questionnaire; didactic testing; and mathematical and statistical methods. The data obtained have been used for the qualitative and quantitative analysis of the results, which reflect the efficacy of the applied methodology. Conclusion: The didactic model of the parameters researched in the framework of a specific study of pedagogic diagnostics is based on a general, interdisciplinary approach. Enhanced durability of the acquired knowledge proves the transition of that knowledge from short-term memory storage into the long-term memory of pupils and students, which justifies the conclusion that didactic games have beneficial effects on learners’ cognitive skills. The innovations in teaching enhance motivation, creativity and independent cognitive activity in the process of acquiring the material taught.
The innovative methods allow for non-traditional means of assessing the level of knowledge acquisition. This makes possible the timely discovery of knowledge gaps and the introduction of compensatory techniques, which in turn leads to deeper and more durable acquisition of knowledge.

Keywords: objectivity, reliability and validity of pedagogical diagnostic parameters introduced in the framework of a specific research

Procedia PDF Downloads 368
121 Next-Generation Lunar and Martian Laser Retro-Reflectors

Authors: Simone Dell'Agnello

Abstract:

There are laser retroreflectors on the Moon and no laser retroreflectors on Mars. Here we describe the design, construction, qualification and imminent deployment of next-generation, optimized laser retroreflectors on the Moon and on Mars (where they will be the first ones). These instruments are positioned by time-of-flight measurements of short laser pulses, the so-called 'laser ranging' technique. Data analysis is carried out with PEP, the Planetary Ephemeris Program of CfA (Center for Astrophysics). Since 1969, Lunar Laser Ranging (LLR) to the Apollo/Lunokhod corner-cube retroreflector (CCR) arrays has supplied accurate tests of General Relativity (GR) and new gravitational physics: possible changes of the gravitational constant Gdot/G, the weak and strong equivalence principle, gravitational self-energy (Parametrized Post-Newtonian parameter beta), geodetic precession, and the inverse-square force law; it can also constrain gravitomagnetism. Some of these measurements also allowed for testing extensions of GR, including spacetime torsion and non-minimally coupled gravity. LLR has also provided significant information on the composition of the deep interior of the Moon. In fact, LLR first provided evidence of the existence of a fluid component of the deep lunar interior. In 1969, CCR arrays contributed a negligible fraction of the LLR error budget. Since laser station range accuracy has improved by more than a factor of 100, the current arrays now dominate the error budget because of lunar librations acting on their multi-CCR geometry. We developed MoonLIGHT (Moon Laser Instrumentation for General relativity High-accuracy Test), a next-generation, single, large CCR that is unaffected by librations and supports an improvement of the space segment of the LLR accuracy by up to a factor of 100. INFN also developed INRRI (INstrument for landing-Roving laser Retro-reflector Investigations), a microreflector to be laser-ranged by orbiters. Their performance is characterized at the SCF_Lab (Satellite/lunar laser ranging Characterization Facilities Lab, INFN-LNF, Frascati, Italy) for deployment on the lunar surface or in cislunar space. They will be used to accurately position landers, rovers, hoppers and orbiters of Google Lunar X Prize and space agency missions, thanks to LLR observations from stations of the International Laser Ranging Service in the USA, France and Italy. INRRI was launched in 2016 with the ESA mission ExoMars (Exobiology on Mars) EDM (Entry, descent and landing Demonstration Module), deployed on the Schiaparelli lander, and is proposed for the ExoMars 2020 Rover. Based on an agreement between NASA and ASI (Agenzia Spaziale Italiana), another microreflector, LaRRI (Laser Retro-Reflector for InSight), was delivered to JPL (Jet Propulsion Laboratory) and integrated on NASA’s InSight Mars Lander in August 2017 (launch scheduled in May 2018). Another microreflector, LaRA (Laser Retro-reflector Array), will be delivered to JPL for deployment on the NASA Mars 2020 Rover. The first lunar landing opportunities will be from early 2018 (with TeamIndus) to late 2018 with commercial missions, followed by opportunities with space agency missions, including the proposed deployment of MoonLIGHT and INRRI on NASA’s Resource Prospector and its evolutions. In conclusion, we will significantly extend the CCR Lunar Geophysical Network and populate the Mars Geophysical Network. These networks will enable very significantly improved tests of GR.
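
The ranging principle referred to above reduces to a one-line calculation: the one-way distance is half the round-trip light time. The numbers in the sketch below are illustrative order-of-magnitude values, not mission data.

```python
# Minimal sketch of the laser time-of-flight ranging principle.
c = 299_792_458.0                     # speed of light, m/s
round_trip_s = 2.56                   # roughly the Earth-Moon round-trip light time (illustrative)
distance_m = c * round_trip_s / 2.0
print(f"Range: {distance_m / 1000:.0f} km")

# a 100 ps timing uncertainty corresponds to about 15 mm of one-way range
timing_uncertainty_s = 100e-12
print(f"Range uncertainty: {c * timing_uncertainty_s / 2 * 1000:.1f} mm")
```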

Keywords: general relativity, laser retroreflectors, lunar laser ranging, Mars geodesy

Procedia PDF Downloads 248
120 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing

Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl

Abstract:

This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz resonators for full-scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for an immediate application in the AEC industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additive prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts or formwork bodies form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is being explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit. Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm, which can modify the geometry with the goal of increasing absorption at resonance and increasing the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D concrete printed physical twin models, which were tested in a 0.25 m x 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
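
To illustrate why the bottleneck and cavity dimensions are the natural optimization variables, the sketch below evaluates the classical lumped-parameter Helmholtz resonance formula. The dimensions and the end-correction factor are illustrative textbook values, not the printed geometry or the COMSOL model.

```python
# Back-of-the-envelope sketch of the resonance frequency of a Helmholtz
# resonator as a function of neck (bottleneck) and cavity dimensions.
import math

c = 343.0                      # speed of sound in air, m/s
neck_radius = 0.01             # m (illustrative)
neck_length = 0.03             # m (illustrative)
cavity_volume = 0.5e-3         # m^3 (illustrative)

neck_area = math.pi * neck_radius ** 2
effective_length = neck_length + 1.7 * neck_radius   # common end-correction approximation
f0 = (c / (2 * math.pi)) * math.sqrt(neck_area / (cavity_volume * effective_length))
print(f"Resonance frequency: {f0:.0f} Hz")
```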

Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization

Procedia PDF Downloads 137
119 Implementation of Building Information Modelling to Monitor, Assess, and Control the Indoor Environmental Quality of Higher Education Buildings

Authors: Mukhtar Maigari

Abstract:

The landscape of Higher Education (HE) institutions, especially following the COVID-19 pandemic, necessitates advanced approaches to manage Indoor Environmental Quality (IEQ), which is crucial for the comfort, health, and productivity of students and staff. This study investigates the application of Building Information Modelling (BIM) as a multifaceted tool for monitoring, assessing, and controlling IEQ in HE buildings, aiming to bridge the gap between traditional management practices and the innovative capabilities of BIM. Central to the study is a comprehensive literature review, which lays the foundation by examining current knowledge and technological advancements in both IEQ and BIM. This review sets the stage for a deeper investigation into the practical application of BIM in IEQ management. The methodology consists of Post-Occupancy Evaluation (POE), which encompasses physical monitoring, questionnaire surveys, and interviews under the umbrella of case studies. The physical data collection focuses on vital IEQ parameters such as temperature, humidity, CO2 levels, etc., collected using different equipment, including data loggers, to ensure accurate data. Complementing this, questionnaire surveys gather perceptions and satisfaction levels from students, providing valuable insights into the subjective aspects of IEQ. The interview component, targeting facilities management teams, offers an in-depth perspective on IEQ management challenges and strategies. The research delves deeper into the development of a conceptual BIM-based framework, informed by the findings from the case studies and empirical data. This framework is designed to demonstrate the critical functions necessary for effective IEQ monitoring, assessment, control and automation with real-time data handling capabilities. This BIM-based framework leads to the development and testing of a BIM-based prototype tool. This prototype leverages software such as Autodesk Revit and its visual programming tool, Dynamo, together with an Arduino-based sensor network, thereby allowing for a real-time flow of IEQ data for monitoring, control and even automation. By harnessing the capabilities of BIM technology, the study presents a forward-thinking approach that aligns with current sustainability and wellness goals, particularly vital in the post-COVID-19 era. The integration of BIM in IEQ management promises not only to enhance the health, comfort, and energy efficiency of educational environments but also to transform them into more conducive spaces for teaching and learning. Furthermore, this research could influence the future of HE buildings by prompting universities and government bodies to re-evaluate and improve teaching and learning environments. It demonstrates how the synergy between IEQ and BIM can empower stakeholders to monitor IEQ conditions more effectively and make informed decisions in real time. Moreover, the developed framework has broader applications as well; it can serve as a tool for other sustainability assessments, like energy analysis in HE buildings, leveraging measured data synchronized with the BIM model. In conclusion, this study bridges the gap between theoretical research and real-world application by demonstrating in practice how advanced technologies like BIM can be effectively integrated to enhance environmental quality in educational institutions.
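
A minimal sketch of the real-time data path described above is given below: an Arduino streams comma-separated IEQ readings over serial, and a script checks them against comfort thresholds before they would be handed to the BIM model (e.g., via Dynamo). The port name, message format and threshold values are assumptions for illustration, and the Revit/Dynamo write-back step is deliberately left as a comment.

```python
# Illustrative sketch, not the paper's prototype: read Arduino IEQ frames
# over serial with pyserial and flag out-of-range values.
import serial  # pyserial

THRESHOLDS = {"temperature_c": (20.0, 26.0),
              "humidity_pct": (30.0, 60.0),
              "co2_ppm": (0.0, 1000.0)}

with serial.Serial("COM3", 9600, timeout=2) as port:
    for _ in range(10):                       # read a short burst of samples
        line = port.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            temp, rh, co2 = (float(v) for v in line.split(","))
        except ValueError:
            continue                          # skip malformed frames
        reading = {"temperature_c": temp, "humidity_pct": rh, "co2_ppm": co2}
        for key, (low, high) in THRESHOLDS.items():
            if not low <= reading[key] <= high:
                print(f"Out of range: {key} = {reading[key]}")
        # here the reading would be written to BIM element parameters
        # (e.g., through a Dynamo node); that step is omitted in this sketch
```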

Keywords: BIM, POE, IEQ, HE-buildings

Procedia PDF Downloads 29
118 The End Justifies the Means: Using Programmed Mastery Drill to Teach Spoken English to Spanish Youngsters, without Relying on Homework

Authors: Robert Pocklington

Abstract:

Most current language courses expect students to be ‘vocational’, sacrificing their free time in order to learn. However, pupils with a full-time job, or bringing up children, hardly have a spare moment. Others just need the language as a tool or a qualification, as if it were book-keeping or a driving license. Then there are children in unstructured families whose stressful life makes private study almost impossible. And the countless parents whose evenings and weekends have become a nightmare, trying to get the children to do their homework. There are many arguments against homework being a necessity (rather than an optional extra for more ambitious or dedicated students), making a clear case for teaching methods which facilitate full learning of the key content within the classroom. A methodology which could be described as Programmed Mastery Learning has been used at Fluency Language Academy (Spain) since 1992, to teach English to over 4000 pupils yearly, with a staff of around 100 teachers, barely requiring homework. The course is structured according to the tenets of Programmed Learning: small manageable teaching steps, immediate feedback, and constant successful activity. For the Mastery component (not stopping until everyone has learned), the memorisation and practice are entrusted to flashcard-based drilling in the classroom, leading all students to progress together and develop a permanently growing knowledge base. Vocabulary and expressions are memorised using flashcards as stimuli, obliging the brain to constantly recover words from long-term memory, converting them into reflex knowledge before they are deployed in sentence building. The use of grammar rules is practised with ‘cue’ flashcards: the brain refers consciously to the grammar rule each time it produces a phrase until it comes easily. This automation of lexicon and correct grammar use greatly facilitates all other language and conversational activities. The full B2 course consists of 48 units, each of which takes a class an average of 17.5 hours to complete, allowing the vast majority of students to reach B2 level in 840 class hours, which is corroborated by an 85% pass rate in the Cambridge University B2 exam (First Certificate). In the past, studying for qualifications was just one of many different options open to young people. Nowadays, youngsters need to stay at school and obtain qualifications in order to get any kind of job. There are many students in our classes who have little intrinsic interest in what they are studying; they just need the certificate. In these circumstances and with increasing government pressure to minimise failure, teachers can no longer think ‘If they don’t study, and fail, it’s their problem’. It is now becoming the teacher’s problem. Teachers are ever more in need of methods which make their pupils successful learners; this means assuring learning in the classroom. Furthermore, homework is arguably the main divider between successful middle-class schoolchildren and failing working-class children who drop out: if everything important is learned at school, the latter will have a much better chance, favouring inclusiveness in the language classroom.

Keywords: flashcard drilling, fluency method, mastery learning, programmed learning, teaching English as a foreign language

Procedia PDF Downloads 86
117 Anajaa-Visual Substitution System: A Navigation Assistive Device for the Visually Impaired

Authors: Juan Pablo Botero Torres, Alba Avila, Luis Felipe Giraldo

Abstract:

Independent navigation and mobility through unknown spaces pose a challenge for the autonomy of visually impaired people (VIP), who have relied on the use of traditional assistive tools like the white cane and trained dogs. However, emerging visually assistive technologies (VAT) have proposed several human-machine interfaces (HMIs) that could improve VIPs’ ability for self-guidance. Here, we introduce the design and implementation of a visually assistive device, Anajaa – Visual Substitution System (AVSS). This system integrates ultrasonic sensors with custom electronics and computer vision models (convolutional neural networks) in order to achieve a robust system that acquires information about the surrounding space and transmits it to the user in an intuitive and efficient manner. AVSS consists of two modules: the sensing and the actuation module, which are fitted to a chest mount and belt that communicate via Bluetooth. The sensing module was designed for the acquisition and processing of proximity signals provided by an array of ultrasonic sensors. The distribution of these within the chest mount allows an accurate representation of the surrounding space, discretized in three different levels of proximity, ranging from 0 to 6 meters. Additionally, this module is fitted with an RGB-D camera used to detect potentially threatening obstacles, like staircases, using a convolutional neural network specifically trained for this purpose. Subsequently, the depth data is used to estimate the distance between the stairs and the user. The information gathered from this module is then sent to the actuation module that creates an HMI, by means of a 3x2 array of vibration motors that make up the tactile display and allow the system to deliver haptic feedback. The actuation module uses vibrational messages (tactones), changing both in amplitude and frequency, to deliver different awareness levels according to the proximity of the obstacle. This enables the system to deliver an intuitive interface. Both modules were tested under lab conditions, and the HMI was additionally tested with a focus group of VIP. The lab testing was conducted in order to establish the processing speed of the computer vision algorithms. This experimentation determined that the model can process 0.59 frames per second (FPS); this is considered an adequate processing speed taking into account that the walking speed of VIP is 1.439 m/s. In order to test the HMI, we convened a focus group composed of two females and two males between the ages of 35 and 65 years. The subject selection was aided by the Colombian Cooperative of Work and Services for the Sightless (COOTRASIN). We analyzed the learning process of the haptic messages throughout five experimentation sessions using two metrics: message discrimination and localization success. These correspond to the ability of the subjects to recognize different tactones and locate them within the tactile display. Both were calculated as the mean across all subjects. Results show that the focus group achieved message discrimination of 70% and a localization success of 80%, demonstrating how the proposed HMI leads to the appropriation and understanding of the feedback messages, enabling the user’s awareness of their surrounding space.
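
The sensing-to-haptics mapping described above can be sketched in a few lines: distances from the ultrasonic array are discretized into three proximity levels over the 0-6 m range and translated into a vibration amplitude and pulse frequency for each motor in the 3x2 display. The band edges and motor values below are assumptions for illustration, not the device's calibrated parameters.

```python
# Illustrative sketch of mapping ultrasonic distances to tactile commands.
def proximity_level(distance_m: float) -> int:
    """0 = far (no alert), 1 = near, 2 = very near (hypothetical band edges)."""
    if distance_m > 4.0:
        return 0
    if distance_m > 2.0:
        return 1
    return 2

def tactone(level: int) -> tuple[int, int]:
    """Return (PWM duty 0-255, pulse frequency in Hz) for a vibration motor."""
    return [(0, 0), (120, 2), (255, 6)][level]

# six sensors -> six motors in the 3x2 display (row-major), hypothetical frame
distances = [5.8, 3.1, 1.2, 4.5, 0.9, 2.6]
commands = [tactone(proximity_level(d)) for d in distances]
print(commands)
```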

Keywords: computer vision on embedded systems, electronic travel aids, human-machine interface, haptic feedback, visual assistive technologies, vision substitution systems

Procedia PDF Downloads 55
116 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City

Authors: Jothilakshmy Nagammal

Abstract:

This research paper deals with different case studies where Form-Based Codes are adopted in general, and the different implementation methods in particular are discussed, in order to develop a method for formulating a new planning model. The organizing principle of Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the Context Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case-study comparison, in terms of the planning tools used, the code process adopted and the various control regulations implemented in thirty-two different cities, is carried out. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code/smart code, and required form-based standards or design guidelines. The case studies describe the positive and negative results from form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale for formulating the Form-Based Code varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The various implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street-based codes. The implementation methods vary from mandatory to integrated and floating. To attain sustainability, the research takes the approach of developing a regulating plan, using the transect as the organizing principle for the entire area of the city in general, and formulating the Form-Based Codes for the selected special districts in the study area in specific, on a street basis. Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized and stakeholder-owned, while systematic can be thought of more as linear, generalisable, and typically top-down or expert-driven. The systemic approach is a process that is based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, interdependencies and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as an organizing principle and Form-Based Codes to achieve sustainability of the city, has to be a hybrid code, which is to be integrated within the existing system - a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in “sensitive” areas of the community. With this approach and method, the creation of the new Context Specific Planning Model towards achieving sustainability is explained in detail in this research paper.

Keywords: context based planning model, form based code, transect, systemic approach

Procedia PDF Downloads 315
115 Low- and High-Temperature Methods of CNTs Synthesis for Medicine

Authors: Grzegorz Raniszewski, Zbigniew Kolacinski, Lukasz Szymanski, Slawomir Wiak, Lukasz Pietrzak, Dariusz Koza

Abstract:

One of the most promising areas for carbon nanotube (CNT) application is medicine. One of the most devastating diseases is cancer. Carbon nanotubes may be used as carriers of a slowly released drug. It is possible to use electromagnetic waves to destroy cancer cells via carbon nanotubes (CNTs). In our research, we focused on thermal ablation by ferromagnetic carbon nanotubes (Fe-CNTs). In cancer cell hyperthermia, functionalized carbon nanotubes are exposed to a radio-frequency electromagnetic field. Properly functionalized Fe-CNTs join the cancer cells. Heat generated in nanoparticles connected to the nanotubes warms up the nanotubes and then the target tissue. When the temperature in tumor tissue exceeds 316 K, the necrosis of cancer cells may be observed. Several techniques can be used for Fe-CNT synthesis. In our work, we use high-temperature methods where an arc discharge is applied. Low-temperature systems are microwave plasma-assisted chemical vapor deposition (MPCVD) and hybrid physical-chemical vapor deposition (HPCVD). In the arc discharge system, the plasma reactor works with a He pressure of up to 0.5 atm. The electric arc burns between two graphite rods. Vapors of carbon move from the anode, through a short arc column, and form CNTs, which can be collected either from the reactor walls or from the cathode deposit. This method is suitable for the production of multi-wall and single-wall CNTs. A disadvantage of high-temperature methods is low purity, short length, random size and multi-directional distribution. In the MPCVD system, plasma is generated in a waveguide connected to the microwave generator. The plasma flux containing carbon and ferromagnetic elements then goes to the quartz tube. Additional resistance heating can be applied to increase the reaction effectiveness and efficiency. CNT nucleation occurs on the quartz tube walls. It is also possible to use substrates to improve carbon nanotube growth. The HPCVD system involves both chemical decomposition of carbon-containing gases and vaporization of a solid or liquid source of catalyst. In this system, a tube furnace is applied. A mixture of working and carbon-containing gases goes through the quartz tube placed inside the furnace. As a catalyst, ferrocene vapors can be used. Fe-CNTs may then be collected either from the quartz tube walls or on the substrates. Low-temperature methods are characterized by a higher-purity product. Moreover, carbon nanotubes from the tested CVD systems were partially filled with iron. Regardless of the method of Fe-CNT synthesis, the final product always needs to be purified for applications in medicine. The simplest method of purification is oxidation of the amorphous carbon. Carbon nanotubes dedicated for cancer cell thermal ablation need to be additionally treated with acids for defect amplification on the CNT surface, which facilitates biofunctionalization. Application of ferromagnetic nanotubes for cancer treatment is a promising method of fighting cancer for the next decade. Acknowledgment: The research work has been financed from the budget of science as a research project No. PBS2/A5/31/2013

Keywords: arc discharge, cancer, carbon nanotubes, CVD, thermal ablation

Procedia PDF Downloads 424
114 Simulation Research of Innovative Ignition System of ASz62IR Radial Aircraft Engine

Authors: Miroslaw Wendeker, Piotr Kacejko, Mariusz Duk, Pawel Karpinski

Abstract:

The research in the field of aircraft internal combustion engines is currently driven by the needs of decreasing fuel consumption and CO2 emissions, while maintaining the required level of safety. Currently, reciprocating aircraft engines are found in sports, emergency, agricultural and recreation aviation. Technically, they mostly remain at a pre-war level of knowledge in the theory of operation, design and manufacturing technology, especially when compared to the high level of development of automotive engines. Typically, these engines are driven by carburetors of a quite primitive construction. At present, due to environmental requirements and dealing with climate change, it is beneficial to develop aircraft piston engines and adopt the achievements of automotive engineering, such as computer-controlled low-pressure injection, electronic ignition control and biofuels. The paper describes simulation research of the innovative power and control systems for a high-power aircraft radial engine. Installing an electronic ignition system in the radial aircraft engine is the fundamental innovative idea of this solution. Consequently, the required level of safety and better functionality as compared to today’s plug system can be guaranteed. In this framework, this research work focuses on describing a methodology for optimizing the electronically controlled ignition system. This attempt can reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion and engine capability of efficient combustion of ecological fuels. New, redundant elements of the control system can improve the safety of the aircraft. The simulation research aimed to determine the sensitivity of the measured values (planned as the quantities recorded by the measurement systems) for determining the optimal ignition angle (the angle of maximum torque at a given operating point). The described results covered: a) research in steady states; b) velocity ranging from 1500 to 2200 rpm (every 100 rpm); c) loading ranging from propeller power to maximum power; d) altitude ranging according to the International Standard Atmosphere from 0 to 8000 m (every 1000 m); e) fuel: automotive gasoline ES95. Three models of different types of ignition coil (with different discharge energies) were studied. The analysis aimed at the optimization of the design of the innovative ignition system for an aircraft engine. The optimization involved: a) the optimization of the measurement systems; b) the optimization of actuator systems. The studies enabled research on the sensitivity of the signals for the control of the ignition timing. Accordingly, the number and type of sensors were determined for the ignition system to achieve its optimal performance. The results confirmed limited benefits in terms of fuel consumption. Thus, including spark management in the optimization is mandatory to significantly decrease the fuel consumption. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
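
The optimization target named above, the ignition angle of maximum torque at a given operating point, can be illustrated with a simple sweep over candidate spark-advance angles. The torque model below is a toy single-peak placeholder, not the paper's engine model; only the 1500-2200 rpm speed range is taken from the abstract.

```python
# Schematic sketch: find the spark-advance angle of maximum torque by sweeping.
def torque_nm(spark_advance_deg: float, rpm: float, load: float) -> float:
    # hypothetical single-peak response whose optimum shifts slightly with speed
    optimum = 18.0 + 0.004 * (rpm - 1500.0)
    return load * (900.0 - 0.6 * (spark_advance_deg - optimum) ** 2)

def mbt_angle(rpm: float, load: float) -> float:
    candidates = [a / 2 for a in range(0, 81)]        # 0 to 40 deg BTDC in 0.5 deg steps
    return max(candidates, key=lambda a: torque_nm(a, rpm, load))

for rpm in range(1500, 2300, 100):                    # the speed range studied
    print(rpm, mbt_angle(rpm, load=1.0))
```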

Keywords: piston engine, radial engine, ignition system, CFD model, engine optimization

Procedia PDF Downloads 363
113 Eco-Friendly Cultivation

Authors: Shah Rucksana Akhter Urme

Abstract:

Agriculture is the main source of food for human consumption, and with the world's huge population to feed, the pressure on food supply is increasing day by day. Undoubtedly, quality strains, improved plantation and farming technology, synthetic fertilizers, readily available irrigation, insecticides and harvesting technology are the main factors that meet the huge demand for food consumption all over the world. However, dependence on these limited resources and excessive consumption of land, water and fertilizers deplete the resources and leave severe climate effects for our future generations. Agriculture is among the activities most responsible for global warming, emitting more greenhouse gases than all vehicles combined, largely from nitrous oxide released from fertilized fields and carbon dioxide from the cutting of rainforests to grow crops. Farming is the thirstiest user of our precious water supplies and a major polluter, as runoff from fertilizers disrupts fragile lakes, rivers, and coastal ecosystems across the globe, which accelerates the loss of biodiversity and crucial habitat and is a major driver of wildlife extinction. It is needless to say that we have to be more concerned with how we can save the nutrients of the soil, store water and avoid excessive dependence on synthetic fertilizers and insecticides. In this case, eco-friendly cultivation could be a potential alternative solution to minimize the effects of agriculture on our environment. The objective of this review paper is organic cultivation, following in particular biotechnological processes focused on bio-fertilizers and bio-pesticides. Intensive use of chemical pesticides and insecticides has severe effects on both human life and biodiversity. This cultivation process introduces farmers to an alternative way that is non-hazardous, cost-effective and eco-friendly. Organic fertilizers such as tea residue and ashes might be the best alternatives to synthetic fertilizers, as they play an important role in increasing soil nutrients and fertility. Ashes contain different essential and non-essential mineral contents that are required for plant growth. Organic pesticides such as neem spray are beneficial for crops as they are toxic to pests and insects. Recycled and composted crop wastes and animal manures, crop rotation, green manures, legumes, etc. build soil fertility without the use of hazardous chemicals. Finally, water hyacinth and algae are potential sources of nutrients, and even alternatives to soil for cultivation, along with storing water for continuous supply. With inorganic agricultural practice, consuming fruits and vegetables becomes a threat to both human life and the ecosystem, and synthetic fertilizers and pesticides are responsible for it. Farmers that practice eco-friendly farming have to implement steps to protect the environment, particularly by severely limiting the use of pesticides and avoiding the use of synthetic chemical fertilizers, steps which are necessary for organic systems to achieve reduced environmental harm and health risk.

Keywords: organic farming, biopesticides, organic nutrients, water storage, global warming

Procedia PDF Downloads 37
112 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where the artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently more prone to artefacts due to the image formation process, where a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desired to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on the deep neural network framework where the denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of a classical auto-encoder that takes an input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction the size of which is the same as the size of the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme using residual-driven dropout is applied, determined based on the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors including artefacts, based on the classical Total Variation problem that can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders along with their original forms in the training phase. In the testing phase, a given image is first decomposed into the intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves the readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to the image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
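
A compact sketch of the kind of denoising auto-encoder stack described above is given below in PyTorch. Layer sizes, noise level and the dropout scheme are illustrative: plain dropout stands in for the paper's residual-driven dropout, the squared-error loss and stochastic gradient descent follow the text, and the total-variation decomposition is assumed to have been applied to the inputs beforehand.

```python
# Illustrative denoising auto-encoder sketch, not the authors' architecture.
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    def __init__(self, in_dim=4096, hidden=(1024, 256), p_drop=0.2):
        super().__init__()
        dims = (in_dim, *hidden)
        enc = []
        for a, b in zip(dims[:-1], dims[1:]):          # encoder: map input to latent code
            enc += [nn.Linear(a, b), nn.ReLU(), nn.Dropout(p_drop)]
        dec = []
        for a, b in zip(dims[::-1][:-1], dims[::-1][1:]):  # decoder: map code back to input size
            dec += [nn.Linear(a, b), nn.ReLU()]
        dec[-1] = nn.Identity()                        # linear output for the reconstruction
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAutoEncoder()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)   # stochastic gradient descent
criterion = nn.MSELoss()                                    # squared reconstruction error

clean = torch.rand(8, 4096)                 # stand-in for intrinsic image patches
noisy = clean + 0.1 * torch.randn_like(clean)               # corrupted input
for _ in range(5):                                          # a few illustrative steps
    optimizer.zero_grad()
    loss = criterion(model(noisy), clean)
    loss.backward()
    optimizer.step()
```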

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 169
111 Geographic Mapping of Tourism in Rural Areas: A Case Study of Cumbria, United Kingdom

Authors: Emma Pope, Demos Parapanos

Abstract:

Rural tourism has become more obvious and prevalent, with tourists increasingly seeking authentic experiences. This movement accelerated post-Covid, putting destinations in danger of reaching levels of saturation called ‘overtourism’. Whereas the phenomenon of overtourism has been frequently discussed in the urban context by academics and practitioners over recent years, it has hardly been referred to in the context of rural tourism, where perhaps it is even more difficult to manage. Rural tourism was historically considered small-scale, marked by its traditional character and by having little impact on nature and rural society. The increasing number of rural areas experiencing overtourism, however, demonstrates the need for new approaches, especially as the impacts and enablers of overtourism are context specific. Cumbria, with approximately 47 million visitors each year and 23,000 operational enterprises, is one of these rural areas experiencing overtourism in the UK. Using the county of Cumbria as an example, this paper aims to explore better planning and management in rural destinations by clustering the area into rural and ‘urban-rural’ tourism zones. To achieve the aim, this study uses secondary data from a variety of sources to identify variables relating to visitor economy development and demand. These data include census data relating to population and employment, tourism industry-specific data including tourism revenue, visitor activities, and accommodation stock, and big data sources such as Trip Advisor and All Trails. The combination of these data sources provides a breadth of tourism-related variables. The subsequent analysis of these data draws upon various validated models, for example, tourism and hospitality employment density, territorial tourism pressure, and accommodation density. In addition to these statistical calculations, other data are utilized to further understand the context of these zones, for example, tourist services, attractions, and activities. The data were imported into ArcGIS, where the density of the different variables is visualized on maps. This study aims to provide an understanding of the geographical context of visitor economy development and tourist behavior in rural areas. The findings contribute to an understanding of the spatial dynamics of tourism within the region of Cumbria through the creation of thematized maps. Different zones of tourism industry clusters are identified, which include elements relating to attractions, enterprises, infrastructure, tourism employment and economic impact. These maps visualize hot and cold spots relating to a variety of tourism contexts. It is believed that the strategy used to provide a visual overview of tourism development and demand in Cumbria could provide a strategic tool for rural areas to better plan marketing opportunities and avoid overtourism. These findings can inform future sustainability policy and destination management strategies within the areas through an understanding of the processes behind the emergence of both hot and cold spots. It may mean that ‘attract and disperse’ needs to be reviewed as a strategic option. In other words, sector or zonal policies could be used for the individual hot or cold areas, with transitional zones dependent upon local economic, social and environmental factors.
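
Before the maps are produced, the density indicators mentioned above are straightforward per-zone ratios. The sketch below shows one plausible way to compute them with pandas prior to visualization in ArcGIS; the zone names and figures are hypothetical, and the formulas follow commonly used definitions (visitors per resident, tourism jobs per km², bed spaces per km²) rather than the authors' exact specifications.

```python
# Illustrative per-zone tourism pressure and density indicators.
import pandas as pd

zones = pd.DataFrame({
    "zone": ["Zone A", "Zone B", "Zone C"],      # hypothetical tourism zones
    "residents": [5_200, 8_300, 21_000],
    "annual_visitors": [1_900_000, 3_400_000, 400_000],
    "tourism_jobs": [2_100, 3_600, 450],
    "bed_spaces": [6_500, 11_000, 1_200],
    "area_km2": [8.5, 12.0, 410.0],
})

zones["territorial_pressure"] = zones["annual_visitors"] / zones["residents"]
zones["employment_density"] = zones["tourism_jobs"] / zones["area_km2"]
zones["accommodation_density"] = zones["bed_spaces"] / zones["area_km2"]
print(zones.round(1))
```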

Keywords: overtourism, rural tourism, sustainable tourism, tourism planning, tourism zones

Procedia PDF Downloads 52
110 Catalytic Decomposition of Formic Acid into H₂/CO₂ Gas: A Distinct Approach

Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy

Abstract:

Finding a sustainable alternative to fossil fuels is an urgent need as environmental challenges around the world intensify. Therefore, formic acid (FA) decomposition has been an attractive field that lies at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a distinct energy vector. Liquid FA features a considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/kg, which make it a prime candidate as an energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H₂ production, which plays a key role in the hydrogenation reactions of biomass into higher-value components. It is reported elsewhere in the literature that catalytic decomposition of FA is usually performed in poorly designed setups using simple glassware under magnetic stirring, thus demanding further energy investment to retain the used catalyst. Our work suggests an approach that integrates the design of a distinct catalyst featuring magnetic properties with a robust setup that minimizes experimental and measurement discrepancies. One of the most prominent active species for the dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of engrafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H₂ gas from FA. The ability to collect the spent catalyst with an ordinary magnet makes core-shell magnetic nanoparticles the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. Through a distinct approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions. The produced gas (H₂+CO₂) was measured by connecting the gas outlet to a measuring system based on the amount of displaced water. The uniqueness of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at mild steady-state temperatures, continuously measuring the produced gas, and collecting the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and a CHN elemental analyzer provided details of the catalyst preparation and opened new avenues to alter the nanostructure of the catalyst framework. Consequently, the introduction of amine groups has led to appreciable improvements in terms of dispersion of the doped metals, eventually attaining nearly complete conversion (100%) of FA after 7 hours. The relative importance of the process parameters such as temperature (35-85°C), stirring speed (150-450 rpm), catalyst loading (50-200 mg), and Pd doping ratio (0.75-1.80 wt.%) on gas yield was assessed by a Taguchi design-of-experiments model. Experimental results showed that operating at a lower temperature range (35-50°C) yielded more gas, while the catalyst loading and Pd doping wt.% were found to be the most significant factors, with P-values of 0.026 and 0.031, respectively.
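
To make the Taguchi step concrete, here is a minimal sketch of a main-effects analysis over the four factors listed above, laid out on a standard L9 orthogonal array; the specific level assignments per run and the gas-yield responses are invented placeholders, not the reported experimental data.

```python
# Sketch of a Taguchi-style main-effects analysis for the four process factors.
# The L9 orthogonal array layout is standard; the gas-yield responses are invented
# placeholders, not the experimental results reported above.
import pandas as pd

l9 = pd.DataFrame({
    "temperature_C": [35, 35, 35, 60, 60, 60, 85, 85, 85],
    "stirring_rpm":  [150, 300, 450, 150, 300, 450, 150, 300, 450],
    "catalyst_mg":   [50, 125, 200, 125, 200, 50, 200, 50, 125],
    "pd_doping_wt":  [0.75, 1.25, 1.80, 1.80, 0.75, 1.25, 1.25, 1.80, 0.75],
    "gas_yield_mL":  [410, 455, 430, 390, 520, 470, 350, 405, 480],  # placeholder data
})

factors = ["temperature_C", "stirring_rpm", "catalyst_mg", "pd_doping_wt"]

# Main effect of each factor: mean response at each of its three levels.
# The range (delta) of the level means ranks the factors by influence on yield.
for factor in factors:
    means = l9.groupby(factor)["gas_yield_mL"].mean()
    print(f"{factor}: level means\n{means}\n  delta = {means.max() - means.min():.1f}\n")
```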

Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles

Procedia PDF Downloads 25
109 Effects of Applying Low-Dye Taping in Performing Double-Leg Squat on Electromyographic Activity of Lower Extremity Muscles for Collegiate Basketball Players with Excessive Foot Pronation

Authors: I. M. K. Ho, S. K. Y. Chan, K. H. P. Lam, G. M. W. Tong, N. C. Y. Yeung, J. T. C. Luk

Abstract:

Low-dye taping (LDT) is commonly used for treating foot problems, such as plantar fasciitis, and for supporting the foot arch in runners and non-athlete patients with pes planus. The potential negative impact of pronated feet leading to tibial and femoral internal rotation via the entire kinetic chain reaction has been postulated and identified. The changed lower limb biomechanics, potentially leading to poor activation of hip and knee stabilizers such as gluteus maximus and medius, may be associated with a higher risk of knee injuries, including patellofemoral pain syndrome and ligamentous sprain, in many team sports players. It is therefore speculated that foot arch correction with LDT might enhance the use of the gluteal muscles. The purpose of this study was to investigate the effect of applying LDT on the surface electromyographic (sEMG) activity of superior gluteus maximus (SGMax), inferior gluteus maximus (IGMax), gluteus medius (GMed) and tibialis anterior (TA) during the double-leg squat. 12 male collegiate basketball players (age: 21.7 ± 2.5 years; body fat: 12.4 ± 3.6%; navicular drop: 13.7 ± 2.7 mm) with at least three years of regular basketball training experience participated in this study. Participants were excluded if they had a recent history of lower limb injuries, over 16.6% body fat, or less than a 10 mm drop in the navicular drop (ND) test. Recruited subjects visited the laboratory once for the within-subject crossover study. Maximum voluntary isometric contraction (MVIC) tests on all selected muscles were performed in randomized order, followed by sEMG tests on the double-leg squat during LDT and non-LDT conditions in counterbalanced order. SGMax, IGMax, GMed and TA activities during the entire 2-second concentric and 2-second eccentric phases were normalized and interpreted as %MVIC. The magnitude of the difference between taped and non-taped conditions for each muscle was further assessed via standardized effect ± 90% confidence intervals (CI) with non-clinical magnitude-based inference. A paired-samples t-test showed a significant decrease (4.7 ± 1.4 mm) in ND (95% CI: 3.8, 5.6; p < 0.05), while no significant difference was observed between taped and non-taped conditions in the sEMG tests for all muscles and contractions (p > 0.05). In addition to traditional significance testing, magnitude-based inference showed a possible increase in IGMax activity (small standardized effect: 0.27 ± 0.44), a likely increase in GMed activity (small standardized effect: 0.34 ± 0.34) and a possible increase in TA activity (small standardized effect: 0.22 ± 0.29) during the eccentric phase. It is speculated that the decrease in navicular drop supported by LDT application could potentially enhance the use of inferior gluteus maximus and gluteus medius, especially during the eccentric phase. As the eccentric phase of the double-leg squat is an important component of landing activities in basketball, further studies on the onset and amount of gluteal activation during jumping and landing activities with LDT are recommended. Since hip and knee kinematics were not measured in this study, the underlying cause of the observed increase in gluteal activation during the squat after LDT is inconclusive. In this regard, the investigation of relationships between LDT application, ND, hip and knee kinematics, and gluteal muscle activity during sport-specific jumping and landing tasks should be the focus of future work.
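
The normalization and effect-size logic described above can be sketched as follows; the EMG amplitudes, the choice of denominator for standardizing the effect, and the 90% interval construction are illustrative assumptions rather than the study’s exact magnitude-based-inference procedure.

```python
# Sketch of the sEMG normalization and effect-size step described above:
# amplitudes are expressed as %MVIC and compared between taped and non-taped
# squats with a standardized effect and a 90% confidence interval.
# All numbers below are illustrative placeholders, not the study's recordings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 12  # participants

mvic = rng.uniform(0.8, 1.2, n)           # peak EMG during MVIC test (mV), placeholder
emg_taped = rng.uniform(0.25, 0.45, n)    # mean EMG, eccentric phase, taped (mV)
emg_untaped = rng.uniform(0.22, 0.42, n)  # mean EMG, eccentric phase, non-taped (mV)

pct_mvic_taped = 100 * emg_taped / mvic
pct_mvic_untaped = 100 * emg_untaped / mvic

diff = pct_mvic_taped - pct_mvic_untaped

# Standardized effect: mean paired difference divided by the SD of the
# non-taped condition (one common choice of denominator).
sd_ref = pct_mvic_untaped.std(ddof=1)
effect = diff.mean() / sd_ref

# 90% CI for the mean difference, converted to standardized units.
sem = diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)
ci_std = ((diff.mean() - t_crit * sem) / sd_ref, (diff.mean() + t_crit * sem) / sd_ref)

print(f"standardized effect = {effect:.2f}, "
      f"90% CI (standardized) = ({ci_std[0]:.2f}, {ci_std[1]:.2f})")
```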

Keywords: flat foot, gluteus maximus, gluteus medius, injury prevention

Procedia PDF Downloads 135
108 Classification Using Worldview-2 Imagery of Giant Panda Habitat in Wolong, Sichuan Province, China

Authors: Yunwei Tang, Linhai Jing, Hui Li, Qingjie Liu, Xiuxia Li, Qi Yan, Haifeng Ding

Abstract:

The giant panda (Ailuropoda melanoleuca) is an endangered species living mainly in central China, where bamboos act as the main food source of wild giant pandas. Knowledge of the spatial distribution of bamboos therefore becomes important for identifying the habitat of giant pandas. There have been ongoing studies for mapping bamboos and other tree species using remote sensing. WorldView-2 (WV-2) is the first high-resolution commercial satellite with eight Multi-Spectral (MS) bands. Recent studies demonstrated that WV-2 imagery has a high potential in the classification of tree species. Advanced classification techniques are important for utilising high spatial resolution imagery. It is generally agreed that object-based image analysis is a more desirable method than pixel-based analysis in processing high spatial resolution remotely sensed data. Classifiers that use spatial information combined with spectral information are known as contextual classifiers. It is suggested that contextual classifiers can achieve greater accuracy than non-contextual classifiers. Thus, spatial correlation can be incorporated into classifiers to improve classification results. The study area is located in the Wuyipeng area of Wolong, Sichuan Province. The complex environment makes information extraction difficult, since bamboos are sparsely distributed, mixed with brush, and covered by other trees. Extensive fieldwork in Wuyipeng was carried out twice. The first campaign, on 11th June, 2014, aimed at sampling feature locations for geometric correction and collecting training samples for classification. The second fieldwork was on 11th September, 2014, for the purposes of testing the classification results. In this study, spectral separability analysis was first performed to select appropriate MS bands for classification. Also, the reflectance analysis provided information for expanding the sample points when only a few were known. Then, a spatially weighted object-based k-nearest neighbour (k-NN) classifier was applied to the selected MS bands to identify seven land cover types (bamboo, conifer, broadleaf, mixed forest, brush, bare land, and shadow), accounting for spatial correlation within classes using geostatistical modelling. The spatially weighted k-NN method was compared with three alternatives: the traditional k-NN classifier, the Support Vector Machine (SVM) method and the Classification and Regression Tree (CART). Through field validation, it was proved that the classification result obtained using the spatially weighted k-NN method has the highest overall classification accuracy (77.61%) and Kappa coefficient (0.729); the producer’s accuracy and user’s accuracy achieve 81.25% and 95.12% for the bamboo class, respectively, also higher than the other methods. Photos of tree crowns were taken at sample locations using a fisheye camera, so the canopy density could be estimated. It was found that it is difficult to identify bamboo in areas with a large canopy density (over 0.70); it is possible to extract bamboos in areas with a medium canopy density (from 0.2 to 0.7) and in a sparse forest (canopy density less than 0.2). In summary, this study explores the ability of WV-2 imagery for bamboo extraction in a mountainous region in Sichuan. The study successfully identified the bamboo distribution, providing supporting knowledge for assessing the habitats of giant pandas.
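
A minimal sketch of the spatially weighted k-NN idea is given below: each training sample votes with a weight that combines spectral distance and spatial proximity. The study derives spatial weights from geostatistical (variogram) modelling; the exponential distance decay, band values, and coordinates below are simplified stand-ins, not the study's data or fitted model.

```python
# Minimal sketch of a spatially weighted k-NN classifier: each training sample's
# vote is weighted by spectral distance and by a spatial weight that decays with
# geographic distance. The study derives spatial weights from geostatistical
# (variogram) modelling; the exponential decay below is a simplified stand-in,
# and the band values/coordinates are invented placeholders.
import numpy as np
from collections import Counter

def spatially_weighted_knn(train_X, train_xy, train_y, query_X, query_xy,
                           k=5, spatial_range=500.0):
    preds = []
    for x, xy in zip(query_X, query_xy):
        spectral_d = np.linalg.norm(train_X - x, axis=1)   # band-space distance
        spatial_d = np.linalg.norm(train_xy - xy, axis=1)  # geographic distance (m)
        # Exponential spatial weight: nearby samples count more.
        w_spatial = np.exp(-spatial_d / spatial_range)
        # Combined score: small spectral distance and large spatial weight are best.
        score = w_spatial / (spectral_d + 1e-9)
        nn = np.argsort(score)[-k:]                        # k highest-scoring neighbours
        votes = Counter()
        for i in nn:
            votes[train_y[i]] += score[i]                  # weighted vote
        preds.append(votes.most_common(1)[0][0])
    return preds

# Placeholder data: 4-band reflectance, map coordinates, and class labels.
train_X = np.array([[0.12, 0.20, 0.35, 0.40], [0.10, 0.18, 0.30, 0.38],
                    [0.25, 0.30, 0.22, 0.20], [0.24, 0.28, 0.21, 0.19]])
train_xy = np.array([[1000, 2000], [1010, 2015], [1500, 2600], [1510, 2590]])
train_y = ["bamboo", "bamboo", "conifer", "conifer"]

print(spatially_weighted_knn(train_X, train_xy, train_y,
                             query_X=np.array([[0.11, 0.19, 0.33, 0.39]]),
                             query_xy=np.array([[1005, 2010]]), k=3))
```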

Keywords: bamboo mapping, classification, geostatistics, k-NN, worldview-2

Procedia PDF Downloads 292
107 Thinking Lean in ICU: A Time Motion Study Quantifying ICU Nurses’ Multitasking Time Allocation

Authors: Fatma Refaat Ahmed, PhD, RN, Assistant Professor, Department of Nursing, College of Health Sciences, University of Sharjah, UAE; Sally Mohamed Farghaly, Nursing Administration Department, Faculty of Nursing, Alexandria University, Alexandria, Egypt

Abstract:

Context: Intensive care unit (ICU) nurses often face pressure and constraints in their work, leading to the rationing of care when demands exceed available time and resources. Observations suggest that ICU nurses are frequently distracted from their core nursing roles by non-core tasks. This study aims to provide evidence on ICU nurses' multitasking activities and explore the association between nurses' personal and clinical characteristics and their time allocation. Research Aim: The aim of this study is to quantify the time spent by ICU nurses on multitasking activities and investigate the relationship between their personal and clinical characteristics and time allocation. Methodology: A self-observation form utilizing the "Diary" recording method was used to record the number of tasks performed by ICU nurses and the time allocated to each task category. Nurses also reported on the distractions encountered during their nursing activities. A convenience sample of 60 ICU nurses participated in the study, with each nurse observed for one nursing shift (6 hours), amounting to a total of 360 hours. The study was conducted in two ICUs within a university teaching hospital in Alexandria, Egypt. Findings: The results showed that ICU nurses completed 2,730 direct patient-related tasks and 1,037 indirect tasks during the 360-hour observation period. Nurses spent an average of 33.65 minutes on ventilator care-related tasks, 14.88 minutes on tube care-related tasks, and 10.77 minutes on inpatient care-related tasks. Additionally, nurses spent an average of 17.70 minutes on indirect care tasks per hour. The study identified correlations between nursing time and nurses' personal and clinical characteristics. Theoretical Importance: This study contributes to the existing research on ICU nurses' multitasking activities and their relationship with personal and clinical characteristics. The findings shed light on the significant time spent by ICU nurses on direct care for mechanically ventilated patients and the distractions that require attention from ICU managers. Data Collection: Data were collected using self-observation forms completed by participating ICU nurses. The forms recorded the number of tasks performed, the time allocated to each task category, and any distractions encountered during nursing activities. Analysis Procedures: The collected data were analyzed to quantify the time spent on different tasks by ICU nurses. Correlations were also examined between nursing time and nurses' personal and clinical characteristics. Question Addressed: This study addressed the question of how ICU nurses allocate their time across multitasking activities and whether there is an association between nurses' personal and clinical characteristics and time allocation. Conclusion: The findings of this study emphasize the need for a lean evaluation of ICU nurses' activities to identify and address potential gaps in patient care and distractions. Implementing lean techniques can improve efficiency, safety, clinical outcomes, and satisfaction for both patients and nurses, ultimately enhancing the quality of care and organizational performance in the ICU setting.

Keywords: motion study, ICU nurse, lean, nursing time, multitasking activities

Procedia PDF Downloads 43
106 Advancing UAV Operations with Hybrid Mobile Network and LoRa Communications

Authors: Annika J. Meyer, Tom Piechotta

Abstract:

Unmanned Aerial Vehicles (UAVs) have increasingly become vital tools in various applications, including surveillance, search and rescue, and environmental monitoring. One common approach to ensure redundant communication systems when flying beyond visual line of sight is for UAVs to employ multiple mobile data modems from different providers. Although widely adopted, this approach suffers from several drawbacks, such as high costs, added weight and potential increases in signal interference. In light of these challenges, this paper proposes a communication framework intermeshing mobile networks and LoRa (Long Range) technology, a low-power, long-range communication protocol. LoRaWAN (Long Range Wide Area Network) is commonly used in Internet of Things applications, relying on stationary gateways and Internet connectivity. This paper, however, utilizes the underlying LoRa protocol, taking advantage of the protocol’s low power and long-range capabilities while ensuring efficiency and reliability. Conducted in collaboration with the Potsdam Fire Department, the implementation of mobile network technology in combination with the LoRa protocol in small UAVs (take-off weight < 0.4 kg), specifically designed for search and rescue and area monitoring missions, is explored. This research aims to test the viability of LoRa as an additional redundant communication system during UAV flights as well as its intermeshing with the primary, mobile network-based controller. The methodology focuses on direct UAV-to-UAV and UAV-to-ground communications, employing different spreading factors optimized for specific operational scenarios: short-range for UAV-to-UAV interactions and long-range for UAV-to-ground commands. This use case also dramatically reduces one of the major drawbacks of LoRa communication systems, namely that a line of sight between the modules is necessary for reliable data transfer, something that UAVs are uniquely suited to provide, especially when deployed as a swarm. Additionally, swarm deployment may enable UAVs that have lost contact with their primary network to reestablish their connection through another, better-situated UAV. The experimental setup involves multiple phases of testing, starting with controlled environments to assess basic communication capabilities and gradually advancing to complex scenarios involving multiple UAVs. Such a staged approach allows for meticulous adjustment of parameters and optimization of the communication protocols to ensure reliability and effectiveness. Furthermore, the close partnership with the Fire Department assures the real-world applicability of the communication system. The expected outcomes of this paper include a detailed analysis of LoRa's performance as a communication tool for UAVs, focusing on aspects such as signal integrity, range, and reliability under different environmental conditions. Additionally, the paper seeks to demonstrate the cost-effectiveness and operational efficiency of using a single type of communication technology that reduces UAV payload and power consumption. By shifting from traditional cellular network communications to a more robust and versatile cellular and LoRa-based system, this research has the potential to significantly enhance UAV capabilities, especially in critical applications where reliability is paramount. The success of this approach could pave the way for broader adoption of LoRa in UAV communications, setting a new standard for UAV operational communication frameworks.
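
As a rough guide to the spreading-factor trade-off mentioned above, the sketch below computes the LoRa symbol time and approximate bit rate for a few spreading factors using the standard LoRa modulation relations; the 125 kHz bandwidth and 4/5 coding rate are illustrative defaults, not the project’s actual radio settings.

```python
# Back-of-the-envelope sketch of how the LoRa spreading factor (SF) trades data
# rate for range: higher SF means longer symbols (more robust, longer range) but
# a lower bit rate. Formulas follow the standard LoRa modulation relations; the
# chosen bandwidth and coding rate are illustrative defaults.
def lora_bit_rate(sf, bandwidth_hz=125_000, coding_rate_denominator=5):
    symbol_time_s = (2 ** sf) / bandwidth_hz               # T_sym = 2^SF / BW
    bits_per_symbol = sf * (4 / coding_rate_denominator)   # CR = 4/5 by default
    return bits_per_symbol / symbol_time_s                 # bits per second

for sf in (7, 9, 12):
    symbol_ms = (2 ** sf) / 125_000 * 1e3
    print(f"SF{sf}: symbol time = {symbol_ms:.2f} ms, ~{lora_bit_rate(sf):.0f} bit/s")
# A low SF suits the short-range UAV-to-UAV links; a high SF suits the
# long-range UAV-to-ground commands at the cost of throughput.
```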

Keywords: LoRa communication protocol, mobile network communication, UAV communication systems, search and rescue operations

Procedia PDF Downloads 21
105 Integration of Rapid Generation Technology in Pulse Crop Breeding

Authors: Saeid H. Mobini, Monika Lulsdorf, Thomas D. Warkentin

Abstract:

The length of the breeding cycle from seed to seed is a limiting factor in the development of improved homozygous lines for breeding or recombinant inbred lines (RILs) for genetic analysis. The objective of this research was to accelerate the production of field pea RILs through the application of rapid generation technology (RGT). RGT is based on the principle of growing miniature plants in an artificial medium under controlled conditions, and allowing them to produce a few flowers which develop seeds that are harvested prior to normal seed maturity. We aimed to maintain population size and genetic diversity in regeneration cycles. The effects of flurprimidol (a gibberellin synthesis inhibitor), plant density, hydroponic system, scheduled fertilizer applications, artificial light spectrum, photoperiod, and light/dark temperature were evaluated in the development of RILs from a cross between cultivars CDC Dakota and CDC Amarillo. The main goal was to accelerate flowering while reducing maintenance and space costs. In addition, embryo rescue of immature seeds was tested for shortening the seed fill period. Data collected over seven generations included plant height, the percentage of plant survival, flowering rate, seed setting rate, the number of seeds per plant, and time from seed to seed. Applying 0.6 µM flurprimidol reduced the internode length. Plant height was reduced to approximately 32 cm, allowing a higher plant density without a delay in flowering or seed setting. The three light systems evaluated (T5 fluorescent bulbs, LEDs, and High Pressure Sodium + Metal-halide lamps) did not differ significantly in terms of flowering time in field pea. Collectively, the combination of 0.6 µM flurprimidol, 217 plants·m⁻², a 20 h photoperiod, 21/16 °C light/dark temperature in a hydroponic system with vermiculite substrate, scheduled fertilizer application based on growth stage, and 500 µmol·m⁻²·s⁻¹ light intensity using T5 bulbs resulted in 100% of plants flowering within 34 ± 3 days and 96.5% of plants completing seed setting in 68.2 ± 3.6 days, i.e., 30-45 days/generation faster than conventional single seed descent (SSD) methods. These regeneration cycles were consistently reproducible. Hence, RGT could double the number of generations per year (5.3) while occupying only 3% of the space, compared to SSD (2-3 generations/year); a quick check of this arithmetic is sketched below. Embryo rescue of immature seeds at the 7-8 mm stage, using commercial fertilizer solutions (Holland’s Secret™), showed a seed setting rate of 95%, while younger embryos had a lower germination rate. Mature embryos had a seed setting rate of 96.5% without either hormones or sugar added. So, considering the higher cost of embryo rescue, a procedure which requires skill, additional materials, and expense, it could be removed from RGT for a further cost saving, and the process could be paused between generations if required.
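
The generation-turnover arithmetic can be checked quickly; the sketch below simply converts the reported ~68-day seed-to-seed cycle into generations per year and contrasts it with the conventional 2-3 generations/year for SSD (the 2.5 mid-point used for comparison is an assumption for illustration).

```python
# Quick arithmetic behind the generation-turnover claim above: with seed-to-seed
# cycles of roughly 68 days, the number of generations per year follows directly.
days_per_generation_rgt = 68.2
generations_per_year_rgt = 365 / days_per_generation_rgt
print(f"RGT: {generations_per_year_rgt:.1f} generations/year")   # ~5.4

# SSD comparison: 2-3 generations/year; using the 2.5 mid-point as an assumption.
implied_ssd_cycle_days = 365 / 2.5
print(f"SSD cycle length implied: ~{implied_ssd_cycle_days:.0f} days/generation")
```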

Keywords: field pea, flowering, rapid regeneration, recombinant inbred lines, single seed descent

Procedia PDF Downloads 342
104 Increasing Student Engagement through Culturally-Responsive Classroom Management

Authors: Catherine P. Bradshaw, Elise T. Pas, Katrina J. Debnam, Jessika H. Bottiani, Michael Rosenberg

Abstract:

Worldwide, ethnically and culturally diverse students are at increased risk for school failure, discipline problems, and dropout. Despite decades of concern about this issue of disparities in education and other fields (e.g., the 'school to prison pipeline'), there has been limited empirical examination of models that can actually reduce these gaps in schools. Moreover, few studies have examined the effectiveness of in-service teacher interventions and supports specifically designed to reduce discipline disparities and improve student engagement. This session provides an overview of the evidence-based Double Check model, which serves as a framework for teachers to use culturally-responsive strategies to engage ethnically and culturally diverse students in the classroom and reduce discipline problems. Specifically, Double Check is a school-based prevention program which includes three core components: (a) enhancements to the school-wide Positive Behavioral Interventions and Supports (PBIS) tier-1 level of support; (b) five one-hour professional development training sessions, each of which addresses one of five domains of cultural competence (i.e., connection to the curriculum, authentic relationships, reflective thinking, effective communication, and sensitivity to students’ culture); and (c) coaching of classroom teachers using an adapted version of the Classroom Check-Up, which intends to increase teachers’ use of effective classroom management and culturally-responsive strategies using research-based motivational interviewing and data-informed problem-solving approaches. This paper presents findings from a randomized controlled trial (RCT) testing the impact of Double Check on office discipline referrals (disaggregated by race) and independently observed and self-reported culturally-responsive practices and classroom behavior management. The RCT included 12 elementary and middle schools; 159 classroom teachers were randomized either to receive coaching or to serve as comparisons. Specifically, multilevel analyses indicated that teacher self-reported culturally responsive behavior management improved over the course of the school year for teachers who received the coaching and professional development. Moreover, the average annual office discipline referrals issued to Black students were reduced among teachers who were randomly assigned to receive coaching relative to comparison teachers. Similarly, observations conducted by trained external raters indicated significantly more teacher proactive behavior management and anticipation of student problems, higher student compliance, less student non-compliance, and fewer socially disruptive behaviors in classrooms led by coached teachers than in classrooms led by teachers randomly assigned to the non-coached condition. These findings indicated promising effects of the Double Check model on a range of teacher and student outcomes, including disproportionality in office discipline referrals among Black students. These results also suggest that the Double Check model is one of only a few systematic approaches to promoting culturally-responsive behavior management which has been rigorously tested and shown to be associated with improvements in student and staff outcomes, including significant reductions in discipline problems and improvements in behavior management. Implications of these findings are considered within the broader context of globalization and demographic shifts, and their impacts on schools.
These issues are particularly timely, given growing concerns about immigration policies in the U.S. and abroad.

Keywords: ethnically and culturally diverse students, student engagement, school-based prevention, academic achievement

Procedia PDF Downloads 261
103 A Proposal of a Strategic Framework for the Development of Smart Cities: The Argentinian Case

Authors: Luis Castiella, Mariano Rueda, Catalina Palacio

Abstract:

The world’s rapid urbanisation represents an excellent opportunity to implement initiatives that are oriented towards a country’s general development. However, this phenomenon has created considerable pressure on current urban models, pushing them nearer to a crisis. As a result, several factors usually associated with underdevelopment have been steadily rising. Moreover, actions taken by public authorities have not been able to keep up with the speed of urbanisation, which has impeded them from meeting the demands of society, responding with reactionary policies instead of with coordinated, organised efforts. In contrast, the concept of a Smart City, which emerged around two decades ago, in principle represents a city that utilises innovative technologies to remedy the everyday issues of the citizen, empowering them with the newest available technology and information. This concept has come to adopt a wider meaning, including human and social capital, as well as productivity, economic growth, quality of life, environment and participative governance. These developments have also disrupted the management of institutions such as academia, which have become key in generating scientific advancements that can solve pressing problems, and in forming a specialised class that is able to follow up on these breakthroughs. In this light, the Ministry of Modernisation of the Argentinian Nation has created a model that is rooted in the concept of a ‘Smart City’. This effort considered all the dimensions that are at play in an urban environment, with careful monitoring of each sub-dimension in order to establish the government’s priorities and improve the effectiveness of its operations. In an attempt to ameliorate the overall efficiency of the country’s economic and social development, these focused initiatives have also encouraged citizen participation and the cooperation of the private sector: replacing short-sighted policies with ones that are coherent and organised. This process was developed gradually. The first stage consisted of building the model’s structure; the second, of applying the method created to specific case studies and verifying that the mechanisms used respected the desired technical and social aspects. Finally, the third stage consists of the repetition and subsequent comparison of this experiment in order to measure the effects on the ‘treatment group’ over time. The first trial was conducted on 717 municipalities and evaluated the dimension of Governance. Results showed that levels of governmental maturity varied sharply in relation to size: cities with fewer than 150,000 people had a strikingly lower level of governmental maturity than cities with more than 150,000 people. With the help of this analysis, some important trends and target populations were made apparent, which enabled the public administration to focus its efforts and increase its probability of being successful. It also made it possible to cut costs and time, and to create a dynamic framework in tune with the population’s demands, improving quality of life with sustained efforts to develop social and economic conditions within the territorial structure.

Keywords: composite index, comprehensive model, smart cities, strategic framework

Procedia PDF Downloads 156
102 Combustion Variability and Uniqueness in Cylinders of a Radial Aircraft Piston Engine

Authors: Michal Geca, Grzegorz Baranski, Ksenia Siadkowska

Abstract:

The work is a part of a project which aims at developing innovative power and control systems for the high-power aircraft piston engine ASz62IR. The developed electronically controlled ignition system will reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion and the engine’s capability of efficient combustion of ecological fuels. The tested unit is an air-cooled four-stroke gasoline engine with 9 cylinders in a radial setup, mechanically charged by a radial compressor powered by the engine crankshaft. The total engine cubic capacity is 29.87 dm³, and the compression ratio is 6.4:1. The maximum take-off power is 1000 HP at 2200 rpm. The maximum fuel consumption is 280 kg/h. The engine powers the following aircraft: An-2, M-18 „Dromader”, DHC-3 „OTTER”, DC-3 „Dakota”, GAF-125 „HAWK” and Y5. The main problems of the engine include the imbalanced work of the cylinders. The non-uniformity in each cylinder results in non-uniformity of their work. In a radial engine, the cylinder arrangement causes the mixture movement to take place either in accordance with (lower cylinders) or against (upper cylinders) the direction of gravity. Preliminary tests confirmed the presence of an uneven workflow of individual cylinders. The phenomenon is most intense at low speed. The non-uniformity is visible on the waveform of cylinder pressure. Therefore, two studies were conducted to determine the impact of this phenomenon on engine performance: simulation and real tests. A simplified simulation was conducted on an element of the intake system coated with a fuel film. The study shows that there is an effect of gravity on the movement of the fuel film inside the radial engine intake channels. In both the lower and the upper inlet channels the film flows downwards. This follows from the fact that gravity assists the movement of the film in the lower cylinder channels and prevents it in the upper cylinder channels. Real tests on the ASz62IR aircraft engine were conducted in transient conditions (rapid changes of the excess air ratio in each cylinder were performed). Calculations were conducted for the mass of fuel reaching the cylinders theoretically and actually, and on this basis the fuel evaporation factors “x” were determined. Therefore, a simplified model of the fuel supply to the cylinder was adopted. The model includes the time constant of the fuel film τ, the number of engine transport cycles of non-evaporated fuel along the intake pipe γ, and the time between consecutive cycles Δt. The calculation results of the identification of the model parameters are presented in the form of radar graphs. The figures show the average declines and increases of the injection time and the average values for both types of stroke. These studies showed that a change in the position of the cylinder will cause changes in the formation of the fuel-air mixture and thus changes in the combustion process. Based on the results of the simulations and experiments, it was possible to develop individual algorithms for ignition control. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
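
The simplified fuel-supply model described above (film time constant τ, transport delay γ, cycle interval Δt) can be illustrated with a small discrete-time simulation in the spirit of a classic wall-wetting (x-τ) model; the deposit fraction, time constant, and injection profile below are illustrative assumptions, not parameters identified for the ASz62IR.

```python
# Simplified discrete-time fuel-film (wall-wetting) simulation in the spirit of the
# model above: a fraction x of each injection deposits on the intake wall as a film
# that evaporates with time constant tau, and the evaporated fuel reaches the
# cylinder gamma transport cycles later. Parameter values are illustrative only.
import numpy as np

def simulate_fuel_delivery(m_inj, x=0.3, tau=0.15, dt=0.05, gamma=2):
    """m_inj: injected fuel mass per cycle (array). Returns fuel mass reaching the cylinder."""
    film = 0.0
    delivered = np.zeros_like(m_inj, dtype=float)
    for k, m in enumerate(m_inj):
        evaporated = film * (dt / tau)        # film evaporation during this cycle
        film += x * m - evaporated            # deposit minus evaporation
        direct = (1.0 - x) * m                # fuel passing straight to the cylinder
        delivered[k] += direct
        if k + gamma < len(delivered):        # delayed arrival of evaporated fuel
            delivered[k + gamma] += evaporated
    return delivered

# Step change in injection (e.g., a rapid load transient) to visualize the film lag.
injection = np.concatenate([np.full(10, 1.0), np.full(10, 1.5)])  # arbitrary units
print(np.round(simulate_fuel_delivery(injection), 3))
```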

Keywords: radial engine, ignition system, non-uniformity, combustion process

Procedia PDF Downloads 337
101 Improving Recovery Reuse and Irrigation Scheme Efficiency – North Gaza Emergency Sewage Treatment Project as Case Study

Authors: Yaser S. Kishawi, Sadi R. Ali

Abstract:

Part of Palestine, the Gaza Strip (365 km² and 1.8 million inhabitants) is considered a semi-arid zone that relies solely on the Coastal Aquifer. The Coastal Aquifer is the only source of water, with only 5-10% of it suitable for human use. This barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority’s strategy is to find non-conventional water resources from treated wastewater to cover agricultural requirements and serve the population. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and infiltration basins – IBs), phase B (a new WWTP) and phase C (Recovery and Reuse Scheme – RRS – to capture the spreading plume). Currently, only phase A is functioning. Nearly 23 Mm³ of partially treated wastewater were infiltrated into the aquifer. Phase B and phase C witnessed many delays, and this forced a reassessment of the original RRS design. An Environmental Management Plan was conducted from Jul 2013 to Jun 2014 on 13 existing monitoring wells surrounding the project location, to measure the efficiency of the SAT system and the spread of the contamination plume in relation to the efficiency of the proposed RRS, along with the proposed locations of the 27 recovery wells that form part of the RRS. The results of the monitored wells were assessed against PWA baseline data and fed into a groundwater model to simulate the plume and propose the most suitable solution to the delays. The redesign mainly manipulated the pumping rates of the wells, the proposed locations and the functioning schedules (including well groupings). The proposed scenarios were examined using Visual MODFLOW V4.2 to simulate the results. The results of the monitored wells were assessed based on their locations relative to the proposed recovery well locations (200 m, 500 m and 750 m away from the IBs). Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L) alongside a decrease in chloride (from 1500 to below 900 mg/L) was found during the monitoring period, which indicated an expansion of the plume to this distance. At this rate, given the time required to construct the recovery scheme, the RRS would fail to capture the plume if the original design were kept. Based on that, many simulations were conducted, leading to three main scenarios. The scenarios manipulated the starting dates, the pumping rates and the locations of the recovery wells. Simulations of plume expansion and path-lines were extracted from the model to determine how to prevent the expansion towards the nearby municipal wells. It was concluded that location is the most important factor in determining the RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates. This scenario proposed adding two additional recovery wells at a location beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring program for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection.
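
For intuition about how pumping rate and well placement control plume capture (the relationships the Visual MODFLOW scenarios explore in detail), the sketch below applies the classic analytic capture-zone formula for a single well in uniform regional flow; the pumping rate and aquifer parameters are assumed placeholder values, not calibrated properties of the site model.

```python
# Analytic back-of-the-envelope check of the kind the groundwater model refines:
# the maximum capture-zone width of a single recovery well pumping at rate Q in a
# uniform regional flow (classic 2-D capture-zone formula). Aquifer parameters
# below are illustrative placeholders, not calibrated values for the Coastal Aquifer.
import math

def capture_zone(Q, K, b, i):
    """Q: pumping rate (m3/d), K: hydraulic conductivity (m/d),
    b: saturated thickness (m), i: regional hydraulic gradient (-).
    Returns (maximum capture width, distance to downstream stagnation point) in metres."""
    q = K * b * i                        # regional discharge per unit width (m2/d)
    width_max = Q / q                    # asymptotic capture width
    x_stagnation = Q / (2 * math.pi * q)
    return width_max, x_stagnation

Q = 500.0   # m3/d per recovery well (placeholder)
K = 20.0    # m/d, assumed sandy coastal aquifer conductivity
b = 40.0    # m, assumed saturated thickness
i = 0.002   # assumed regional gradient

w, xs = capture_zone(Q, K, b, i)
print(f"max capture width ~{w:.0f} m, stagnation point ~{xs:.0f} m downstream")
# A line of wells spaced no wider than the capture width (with some overlap) is one
# first-pass criterion for positioning the recovery scheme relative to the plume.
```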

Keywords: soil aquifer treatment, recovery and reuse scheme, infiltration basins, north gaza

Procedia PDF Downloads 292
100 Translation, Cross-Cultural Adaption, and Validation of the Vividness of Movement Imagery Questionnaire 2 (VMIQ-2) to Classical Arabic Language

Authors: Majid Alenezi, Abdelbare Algamode, Amy Hayes, Gavin Lawrence, Nichola Callow

Abstract:

The purpose of this study was to translate and culturally adapt the Vividness of Movement Imagery Questionnaire-2 (VMIQ-2) from English to produce a new Arabic version (VMIQ-2A), and to evaluate the reliability and validity of the translated questionnaire. The questionnaire assesses how vividly and clearly individuals are able to imagine themselves performing everyday actions. Its purpose is to measure individuals’ ability to conduct movement imagery, which can be defined as “the cognitive rehearsal of a task in the absence of overt physical movement.” Movement imagery has been introduced in physiotherapy as a promising intervention technique, especially when physical exercise is not possible (e.g., pain, immobilisation). Considerable evidence indicates that movement imagery interventions improve physical function, but to maximize efficacy it is important to know the imagery abilities of the individuals being treated. Given the increase in the global sharing of knowledge, it is desirable to use standard measures of imagery ability across languages and cultures, thus motivating this project. The translation procedure followed guidelines from the Translation and Cultural Adaptation group of the International Society for Pharmacoeconomics and Outcomes Research and involved the following phases: Preparation; the original VMIQ-2 was adapted slightly to provide additional information and simplified grammar. Forward translation; three native speakers resident in Saudi Arabia translated the original VMIQ-2 from English to Arabic, following instructions to preserve meaning (not literal translation) and cultural relevance. Reconciliation; the project manager (first author), the primary translator and a physiotherapist reviewed the three independent translations to produce a reconciled first Arabic draft of the VMIQ-2A. Backward translation; a fourth translator (a native Arabic speaker fluent in English) literally translated the reconciled first Arabic draft back to English. The project manager and two study authors compared the English back translation to the original VMIQ-2 and produced the second Arabic draft. Cognitive debriefing; to assess participants’ understanding of the second Arabic draft, 7 native Arabic speakers resident in the UK completed the questionnaire, rated the clarity of the questions, specified difficult words or passages, and wrote in their own words their understanding of key terms. Following review of this feedback, a final Arabic version was created. 142 native Arabic speakers completed the questionnaire in community meeting places or at home; a subset of 44 participants completed the questionnaire a second time 1 week later. Results showed the translated questionnaire to be valid and reliable. Correlation coefficients indicated good test-retest reliability. Cronbach’s α indicated high internal consistency. Construct validity was tested in two ways. Imagery ability scores have been found to be invariant across gender; this result was replicated within the current study, assessed by an independent-samples t-test. Additionally, experienced sports participants have higher imagery ability than those less experienced; this result was also replicated within the current study, assessed by analysis of variance, supporting construct validity. Results provide preliminary evidence that the VMIQ-2A is reliable and valid for use with a general population who are native Arabic speakers.
Future research will include validation of the VMIQ-2A in a larger sample, and testing validity in specific patient populations.
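
The two reliability statistics reported above can be computed as in the sketch below, which evaluates Cronbach's alpha across items and a test-retest correlation on a retest subsample; the item count, rating scale, and randomly generated scores are placeholders, not the VMIQ-2A data.

```python
# Sketch of the two reliability checks reported above: Cronbach's alpha for internal
# consistency across items, and a test-retest Pearson correlation for the subset who
# completed the questionnaire twice. Scores below are random placeholders.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(142, 1))                                # latent tendency per respondent
scores_t1 = np.clip(base + rng.integers(-1, 2, size=(142, 12)), 1, 5)   # 12 items, 1-5 scale

print(f"Cronbach's alpha = {cronbach_alpha(scores_t1):.2f}")

# Test-retest on a subsample of 44 respondents, one week apart.
totals_t1 = scores_t1[:44].sum(axis=1)
totals_t2 = np.clip(scores_t1[:44] + rng.integers(-1, 2, size=(44, 12)), 1, 5).sum(axis=1)
r = np.corrcoef(totals_t1, totals_t2)[0, 1]
print(f"test-retest r = {r:.2f}")
```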

Keywords: motor imagery, physiotherapy, translation and validation, imagery ability

Procedia PDF Downloads 304
99 Examining Influence of The Ultrasonic Power and Frequency on Microbubbles Dynamics Using Real-Time Visualization of Synchrotron X-Ray Imaging: Application to Membrane Fouling Control

Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul

Abstract:

Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy currents and turbulent flow within the medium by either oscillating or discharging energy into the system through microbubble explosion. The turbulent flow regime and the shear forces created close to the membrane surface disturb the cake layer and dislodge the foulants, which in turn improves the cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain an in-depth understanding of the influence of US power intensity and frequency on the dynamics and characteristics of the microbubbles generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line Phase Contrast Imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At the CLS biomedical imaging and therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast of the gas/liquid interface for the accuracy of the qualitative and quantitative analysis of bubble cavitation within the system. With the high flux of photons and the high-speed camera, a high projection speed was achieved, and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for the detailed quantitative analyses of microbubbles. The imaging was performed at US power intensity levels of 50 W, 60 W, and 100 W, and at US frequency levels of 20 kHz, 28 kHz, and 40 kHz. For the 2-second imaging duration, the effects of US power and frequency on the average number, size, and fraction of the area occupied by bubbles were analyzed. Microbubble dynamics in terms of their velocity in water were also investigated. For a US power increase from 50 W to 100 W, the average bubble number and the average bubble diameter increased from 746 to 880 and from 36.7 µm to 48.4 µm, respectively. In terms of the influence of US frequency, fewer bubbles were created at 20 kHz (an average of 176 bubbles rather than 808 bubbles at 40 kHz), while the average bubble size was significantly larger than at 40 kHz (almost seven times). The majority of bubbles were captured close to the membrane surface in the filtration unit. According to the study observations, membrane cleaning efficiency is expected to be improved at higher US power and lower US frequency due to the higher energy release into the system by increasing the number of bubbles or growing their size during oscillation (the optimum condition is expected to be at 20 kHz and 100 W).
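
The kind of image post-processing described above (counting bubbles and measuring their sizes from the radiographs) can be sketched as a thresholding-and-labelling pipeline; the synthetic frame, the Otsu threshold choice, and the assumed 6.5 µm pixel size below are illustrative stand-ins for the actual ImageJ workflow and detector calibration.

```python
# Sketch of bubble quantification on a single frame: threshold the image, label
# connected bright regions as bubbles, then report count, equivalent diameters,
# and the area fraction occupied. A synthetic frame with three "bubbles" stands
# in for the synchrotron projections; the pixel size is an assumed calibration.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

# Build a synthetic 512x512 frame with three circular bubbles on a noisy background.
yy, xx = np.mgrid[0:512, 0:512]
frame = 0.1 * np.random.default_rng(2).random((512, 512))
for cx, cy, r in [(100, 120, 12), (300, 260, 20), (400, 90, 8)]:
    frame[(xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2] = 1.0

# Segment and measure bubbles.
mask = frame > threshold_otsu(frame)
regions = regionprops(label(mask))
pixel_size_um = 6.5   # assumed effective pixel size of the detector (µm)

diameters_um = [2 * np.sqrt(p.area / np.pi) * pixel_size_um for p in regions]
print(f"bubble count = {len(regions)}")
print("equivalent diameters (µm):", [round(d, 1) for d in diameters_um])
print(f"area fraction occupied = {mask.mean():.4f}")
```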

Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning

Procedia PDF Downloads 124
98 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients' Cohorts: A Case Study in Scotland

Authors: Raptis Sotirios

Abstract:

Health and social care (HSc) services planning and scheduling are facing unprecedented challenges due to pandemic pressure and also suffer from unplanned spending that has been negatively impacted by the global financial crisis. Data-driven approaches can help to improve policies and to plan and design service provision schedules, using algorithms that assist healthcare managers in facing unexpected demands with fewer resources. The paper discusses services packing using statistical significance tests and machine learning (ML) to evaluate demand similarity and coupling. This is achieved by predicting the range of the demand (class) using ML methods such as CART, random forests (RF), and logistic regression (LGR). The Chi-squared test and Student's t-test are used on data over a 39-year span for which HSc services data exist for services delivered in Scotland. The demands are probabilistically associated through statistical hypotheses that assume that the target service's demands are statistically dependent on other demands, as a null hypothesis. This linkage can be confirmed or not by the data. Complementarily, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target's demand to independent demands, thus forming groups of services. The statistical tests confirm the ML couplings, making the predictions statistically meaningful and showing that a target service can be matched reliably to other services, while ML shows that these relationships can also be linear. Zero padding was used for missing year records and better illustrated such relationships both for limited years and over the entire span, offering long-term data visualizations, while limited-year groups showed how well patient numbers can be related over short periods or can change over time, as opposed to behaviors across more years. The prediction performance of the associations is measured using Receiver Operating Characteristic (ROC) AUC and ACC metrics as well as the statistical tests, Chi-squared and Student's t. Co-plots and comparison tables for RF, CART, and LGR, as well as p-values and Information Exchange (IE), are provided, showing the specific behavior of the ML methods and of the statistical tests, and the behavior at different learning ratios. The impact of initial k-NN, cross-correlation, and C-Means groupings is also studied over limited years and the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases, LGR reached an AUC of 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by padding, data irregularities, or outliers. On average, 3 linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only if the significance level (p-value) of their association coefficient was more than 0.05. Relationships with social factors were observed between home care services and the treatment of older people, birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be well packed as plans of limited years, across various service sectors and learning configurations, as confirmed using statistical hypotheses.
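
A minimal sketch of the two-step pairing described above, a chi-squared association test on binned demand levels followed by a classifier predicting the target service's demand class, is given below; the synthetic demand series, the tercile binning, and the single-predictor random forest are illustrative assumptions, not the Scottish HSc data or the paper's exact configuration.

```python
# Sketch of the two-step pairing: a chi-squared test on binned annual demand levels
# checks statistical association between a target service and a candidate service,
# and a classifier then predicts the target's demand class from the candidate.
# The demand series are synthetic placeholders, not the Scottish HSc records.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
years = 39
home_care = rng.normal(1000, 150, years).cumsum() / 50                  # synthetic demand series
emergency_admissions = home_care * 0.8 + rng.normal(0, 20, years)       # related candidate service

# Bin both series into low/mid/high demand classes (terciles).
target = pd.qcut(home_care, 3, labels=["low", "mid", "high"]).astype(str)
candidate = pd.qcut(emergency_admissions, 3, labels=["low", "mid", "high"]).astype(str)

# Step 1: chi-squared test of association between the binned demands.
table = pd.crosstab(target, candidate)
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi-squared p-value = {p_value:.4f}")

# Step 2: if the association criterion is met, predict the target class with ML.
X = emergency_admissions.reshape(-1, 1)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, target)
print(f"training accuracy = {clf.score(X, target):.2f}")
```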

Keywords: class, cohorts, data frames, grouping, prediction, probability, services

Procedia PDF Downloads 207
97 Scenario-Based Learning Using Virtual Optometrist Applications

Authors: J. S. M. Yang, G. E. T. Chua

Abstract:

The Diploma in Optometry (OPT) course is a three-year program offered by Ngee Ann Polytechnic (NP) to train students to provide primary eye care. Students are equipped with foundational conceptual knowledge and practical skills in the first three semesters, before the clinical modules in the fourth to sixth semesters. In the clinical modules, students typically have difficulties integrating the knowledge and skills acquired in the previous semesters to perform general eye examinations on public patients at the NP Optometry Centre (NPOC). To help the students overcome this challenge, a web-based game, Virtual Optometrist (VO), was developed to help students apply their skills and knowledge through scenario-based learning. It consisted of two interfaces, the Optical Practice Counter (OPC) and the Optometric Consultation Room (OCR), to provide two simulated settings for authentic learning experiences. In OPC, students would recommend and provide an appropriate frame and lens selection based on the virtual patient’s case history. In OCR, students would diagnose and manage virtual patients with common ocular conditions. The simulated scenarios provided real-world clinical situations that required contextual application of integrated knowledge from relevant modules. The stages in OPC and OCR are of increasing complexity to align with students’ expected clinical competency as they progress to more senior semesters. This prevented gameplay fatigue as VO was used over the semesters to achieve different learning outcomes. Numerous feedback opportunities were provided to students based on their decisions to allow individualized learning to take place. The game-based learning element in VO was achieved through the scoreboard and leaderboard to enhance students' motivation to perform. Scores were based on the speed and accuracy of students’ responses to the questions posed in the simulated scenarios, preparing the students to perform accurately and effectively under time pressure in a realistic optometric environment. Learning analytics were generated in VO’s backend office based on students’ responses, offering real-time data on distinctive and observable learner behavior to monitor students’ engagement and learning progress. The backend office allowed versatility to add, edit, and delete scenarios for different intended learning outcomes. A Likert scale was used to measure the learning experience with VO for OPT Year 2 and 3 students. The survey results highlighted the learning benefits of implementing VO in the different modules, such as enhancing recall and reinforcement of clinical knowledge for contextual application to develop higher-order thinking skills, increasing efficiency in clinical decision-making, facilitating learning through immediate feedback and second attempts, providing exposure to common and significant ocular conditions, and training effective communication skills. The results showed that VO has been useful in reinforcing optometry students’ learning and supporting the development of higher-order thinking, increasing efficiency in clinical decision-making, and allowing students to learn from their mistakes with immediate feedback and second attempts. VO also exposed the students to diverse ocular conditions through simulated real-world clinical scenarios, which may otherwise not be encountered in NPOC, and promoted effective communication skills.

Keywords: authentic learning, game-based learning, scenario-based learning, simulated clinical scenarios

Procedia PDF Downloads 93