Search results for: laboratory and field conditions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18424

1234 Survey of the Literacy by Radio Project as an Innovation in Literacy Promotion in Nigeria

Authors: Stella Chioma Nwizu

Abstract:

The National Commission for Mass Literacy, Adult and Non-Formal Education (NMEC) in Nigeria is charged with reducing the illiteracy rate through the development, monitoring, and supervision of literacy programmes. In spite of NMEC's various efforts, the literature shows that the illiteracy rate remains high. According to NMEC/UNICEF, about 60 million Nigerians are non-literate, and nearly two thirds of them are women. This situation forced the government to search for innovative and better approaches to literacy promotion and delivery. The literacy by radio project was adopted as an innovative intervention in literacy delivery in Nigeria because radio is the cheapest and most readily affordable medium for non-literates. The project aimed at widening access to literacy programmes for non-literate, marginalized, and disadvantaged groups in Nigeria by taking literacy programmes to their doorsteps. Literacy by radio has worked well in reducing illiteracy in Cuba. This intervention is anchored in Rogers' diffusion of innovation theory. The literacy by radio project has been running for fifteen years, and the efficacy and contributions of this innovation need to be investigated. Thus, the purpose of this research is to review the contributions of literacy by radio in Nigeria. The researcher adopted a survey research design for the study. The population for the study consisted of 2,706 participants and 47 facilitators of the literacy by radio programme in the 10 pilot states in Nigeria. A sample drawn from four states, comprising 302 participants and eight facilitators, was used for the study. Information was collected through focus group discussions (FGDs), interviews, and content analysis of official documents. The data were analysed qualitatively to review the contributions of the literacy by radio project and determine the efficacy of this innovative approach in facilitating literacy in Nigeria.
Results from the field experience showed, among other things, that more non-literates have gained access to literacy programmes through this innovative approach. The pilot project was 88% successful; not less than 2,110 adults were made literate through the literacy by radio project in 2017. However, lack of enthusiasm and commitment on the part of the technical committee and facilitators due to non-payment of honoraria, poor signals from radio stations, interruption of lectures by adverts, and low community involvement in decision making are challenges to the success of the project. The researcher acknowledges the need to customize all materials and broadcasts in all the dialects of the participants and to include more civil rights, environmental protection, and agricultural skills in the project. The study recommends, among other things, improved and timely funding of the project by the Federal Government to enable NMEC to fulfill its obligations towards the greater success of the programme, the setting up of independent radio stations for airing the programmes, and proper monitoring and evaluation of the project by NMEC and State Agencies for greater effectiveness. In an era of the knowledge-driven economy, no one should be left saddled with the weight of illiteracy.

Keywords: innovative approach, literacy, project, radio, survey

Procedia PDF Downloads 62
1233 Developing a Framework for Designing Digital Assessments for Middle-School-Aged Deaf or Hard of Hearing Students in the United States

Authors: Alexis Polanco Jr, Tsai Lu Liu

Abstract:

Research on digital assessment for deaf and hard of hearing (DHH) students is negligible. Part of this stems from DHH assessment design existing at the intersection of the emergent disciplines of usability, accessibility, and child-computer interaction (CCI). While these disciplines have some prevailing guidelines (e.g., in user experience design (UXD) there are Jakob Nielsen's 10 Usability Heuristics (Nielsen-10); for accessibility, there are the Web Content Accessibility Guidelines (WCAG) and the Principles of Universal Design (PUD)), this research was unable to uncover a unified set of guidelines. Given that digital assessments have lasting implications for the funding and shaping of U.S. school districts, it is vital that cross-disciplinary guidelines emerge. As a result, this research seeks to provide a framework by which these disciplines can share knowledge. The framework entails a process of asking subject-matter experts (SMEs) and design and development professionals to self-describe their fields of expertise and how their work might serve DHH students, and to expose any incongruence between their ideal process and what is permissible at their workplace. This research used two rounds of mixed methods. The first round consisted of structured interviews with SMEs in usability, accessibility, CCI, and DHH education. These practitioners were not designers by trade but were revealed to use designerly work processes. In addition to being asked about their field of expertise, work process, etc., these SMEs were asked to comment on whether they believed Nielsen-10 and/or PUD were sufficient for designing products for middle-school DHH students. This first round of interviews revealed that Nielsen-10 and PUD were, at best, a starting point for creating middle-school DHH design guidelines or, at worst, insufficient. The second round of interviews followed a semi-structured interview methodology.
The SMEs who were interviewed in the first round were asked open-ended follow-up questions about their semantic understanding of guidelines, going from the most general sense down to the level of design guidelines for DHH middle school students. Designers and developers who had not been interviewed previously were asked the same questions that the SMEs had been asked across both rounds of interviews. In terms of the research goals, it was confirmed that the design of digital assessments for DHH students is inherently cross-disciplinary. Unexpectedly, 1) guidelines did not emerge from the interviews conducted in this study, and 2) the principles of Nielsen-10 and PUD were deemed less relevant than expected. Given the prevalence of Nielsen-10 in UXD curricula across academia and certificate programs, this poses a risk to the efficacy of DHH assessments designed by UX designers. Furthermore, the following findings emerged: A) deep collaboration between the disciplines of usability, accessibility, and CCI is low to non-existent; B) there are no universally agreed-upon guidelines for designing digital assessments for DHH middle school students; C) these disciplines are structured academically and professionally in such a way that practitioners may not know to reach out to other disciplines. For example, accessibility teams at large organizations do not have designers and accessibility specialists on the same team.

Keywords: deaf, hard of hearing, design, guidelines, education, assessment

Procedia PDF Downloads 65
1232 The Effect of Chlorine Dioxide and High Concentration of CO2 Gas Injection on the Quality and Shelf-Life of Exported Strawberry 'Maehyang' in Modified Atmosphere Conditions

Authors: Hyuk Sung Yoon, In-Lee Choi, Mohammad Zahirul Islam, Jun Pill Baek, Ho-Min Kang

Abstract:

The strawberry ‘Maehyang’ cultivated in South Korea is increasingly exported to Southeast Asia, but quality degradation often occurs during the short export period. Botrytis cinerea is known to cause major damage to export strawberries, with the disease developing during shipping and distribution. This study was conducted to determine the sterilizing effect of chlorine dioxide (ClO2) gas and high-concentration CO2 gas injection on ‘Maehyang’ strawberries packaged with oxygen transmission rate (OTR) films. The strawberries were harvested at the 80% color-change stage and packaged with OTR film or perforated film (control). The treatments used modified atmosphere packaging (MAP) with 20,000 cc·m-2·day·atm OTR film and gas injection into the packages: 6 mg/L ClO2, 15% CO2, and their combination. The treated strawberries were stored at 3℃ for 30 days. The fresh weight loss rate was less than 1% in all OTR film packages but more than 15% in the perforated film treatment, which showed severe deterioration of visual quality during storage. The carbon dioxide concentration within the packages reached a maximum of approximately 15% in all treatments except the control until the 21st day, which is within the tolerated range of maximum CO2 concentration for strawberry under recommended CA or MA conditions; however, it increased to almost 50% by the 30th day. The oxygen concentration decreased to approximately 0% in all treatments except the control over 25 days. The ethylene concentration was steady until the 17th day, then quickly increased before dropping on the final storage day (30th day); the gas treatments showed no significant differences among themselves. Firmness increased in the CO2 (15%) and ClO2 (6 mg/L) + CO2 (15%) treatments during storage, possibly an effect of high-concentration CO2, which is known to reduce decay and cell wall degradation.
Soluble solids decreased in all treatments during storage, likely because sugars were consumed by increased respiration. Titratable acidity was similar across all treatments. The incidence of fungi was 0% in the CO2 (15%) and ClO2 (6 mg/L) + CO2 (15%) treatments but more than 20% in the perforated film treatment. Consequently, the results indicate that chlorine dioxide (ClO2) and high-concentration CO2 inhibited fungal growth. Because both the fresh weight loss rate and the incidence of fungi were lower, the ClO2 (6 mg/L) + CO2 (15%) treatment proved the most effective for sterilization. These results suggest that chlorine dioxide (ClO2) and high-concentration CO2 gas injection are effective decontamination techniques for improving the safety of strawberries.

Keywords: chlorine dioxide, high concentration of CO2, modified atmosphere condition, oxygen transmission rate films

Procedia PDF Downloads 338
1231 Flow Visualization and Mixing Enhancement in a Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by a Trapezoidal Triangular Structure Using High-Viscosity Liquids

Authors: Ayalew Yimam Ali

Abstract:

The Y-shaped microchannel is used to mix miscible or immiscible fluids with different viscosities. However, mixing at the entrance of the Y-junction microchannel can be difficult due to micro-scale laminar flow, especially with two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusion mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The 3D trapezoidal triangular structure spine developed in this study was machined with CNC cutting tools to create a microchannel mold with the spine along the Y-junction longitudinal mixing region. The molds for the 3D trapezoidal structure, with sharp-edge tip angles of 30° and a trapezoidal triangular sharp-edge tip depth of 0.3 mm, were fabricated from PMMA (polymethyl methacrylate) glass with an advanced CNC machine, and the channel was manufactured from PDMS (polydimethylsiloxane) grown longitudinally on the top surface of the Y-junction microchannel using soft lithography nanofabrication strategies. Micro-particle image velocimetry (μPIV) was used to visualize the 3D rolling steady acoustic streaming and to study mixing enhancement of the high-viscosity miscible fluids for different trapezoidal triangular structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes.
The streaming velocity fields and vorticity fields show vorticity up to 16 times higher than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies using the grayscale pixel intensity with MATLAB software. Mixing experiments were performed with a fluorescent green dye solution in de-ionized water on one inlet side of the channel and a de-ionized water-glycerol mixture on the other inlet side of the Y-channel; the degree of mixing was found to improve greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing of the two miscible high-viscosity fluids, otherwise governed by laminar transport, was enhanced by the formation of a new, three-dimensional, intense steady streaming rolling motion at high volume flow rates around the entrance junction mixing zone.
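An intensity-based degree of mixing like the one reported above is typically computed from the spread of normalized grayscale values across the channel. The following is a rough sketch, not the authors' MATLAB code; the standard-deviation-based index, the 1-D profiles, and the normalization are illustrative assumptions:

```python
import numpy as np

def mixing_index(gray, gray_unmixed):
    """Intensity-based degree of mixing, as commonly computed from
    micro-PIV / dye images: 1 - (normalized std of the mixed image /
    normalized std of the unmixed reference).
    0 -> fully segregated, 1 -> perfectly mixed."""
    sigma = np.std(gray / gray.mean())
    sigma0 = np.std(gray_unmixed / gray_unmixed.mean())
    return 1.0 - sigma / sigma0

# Hypothetical 1-D grayscale profiles across the channel width:
unmixed = np.array([0.0] * 50 + [1.0] * 50)  # dye on one inlet side only
mixed = np.full(100, 0.5) + 0.02 * np.sin(np.linspace(0, 6, 100))
degree = mixing_index(mixed, unmixed)        # close to 1 for the flat profile
```

A fully segregated image scores 0 by construction, and a nearly uniform one approaches 1, matching the 67.42% vs. 96.83% style of reporting used in the abstract.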

Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement

Procedia PDF Downloads 14
1230 Fast Detection of Local Fiber Shifts by X-Ray Scattering

Authors: Peter Modregger, Özgül Öztürk

Abstract:

Glass fabric reinforced thermoplastics (GFRT) are composite materials that combine low weight with resilient mechanical properties, rendering them especially suitable for automobile construction. However, defects in the glass fabric as well as in the polymer matrix can occur during manufacturing, which may compromise component lifetime or even safety. One type of defect is the local fiber shift, which can be difficult to detect. Recently, we experimentally demonstrated the reliable detection of local fiber shifts by X-ray scattering based on the edge-illumination (EI) principle. EI is a novel X-ray imaging technique that uses two slit masks, one in front of the sample and one in front of the detector, to simultaneously provide absorption, phase, and scattering contrast. Contrast formation works as follows: the sample mask splits the incident X-ray beam into smaller beamlets; these are distorted by interaction with the sample, and the distortions are scaled up by the detector mask, rendering them visible to a pixelated detector. In the experiment, the sample mask is laterally scanned, resulting in a Gaussian-like intensity distribution in each pixel. The area under the curve represents absorption, the peak offset represents refraction, and the width of the curve represents the scattering occurring in the sample. Here, scattering is caused by the numerous glass fiber/polymer matrix interfaces. In our recent publication, we showed that the standard deviations of the absorption and scattering values over a selected field of view can be used to distinguish between intact samples and samples with local fiber shift defects. Defect detection performance was quantified by p-values (p=0.002 for absorption and p=0.009 for scattering) and contrast-to-noise ratios (CNR=3.0 for absorption and CNR=2.1 for scattering) between the two groups of samples.
For the scattering contrast, this was further improved to p=0.0004 and CNR=4.2 by a harmonic decomposition analysis of the images. Thus, we concluded that local fiber shifts can be reliably detected by the X-ray scattering contrast provided by EI. However, potential applications in, for example, production monitoring require fast data acquisition. For the results above, the sample mask was scanned over 50 individual steps, which resulted in long total scan times. In this paper, we will demonstrate that reliable detection of local fiber shift defects is also possible from single images, which implies a speed-up of the total scan time by a factor of 50. Additional performance improvements will also be discussed, which open the possibility of real-time acquisition. This constitutes a vital step in translating EI to industrial applications for a wide variety of materials consisting of numerous interfaces on the micrometer scale.
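The per-pixel contrast extraction and the group comparison described above can be sketched as follows. This is a hedged illustration rather than the authors' pipeline: the moment-based estimates, the pooled-standard-deviation CNR definition, and the synthetic illumination curve are all assumptions.

```python
import numpy as np

def ei_contrasts(positions, intensity):
    """Extract the three edge-illumination contrasts from one pixel's
    Gaussian-like illumination curve obtained by scanning the sample mask:
    area -> absorption, centroid shift -> refraction, width -> scattering.
    Moment-based estimates; a real analysis might fit a Gaussian instead."""
    w = intensity / intensity.sum()
    centroid = (positions * w).sum()
    var = ((positions - centroid) ** 2 * w).sum()
    area = intensity.sum() * (positions[1] - positions[0])
    return area, centroid, np.sqrt(var)

def cnr(group_a, group_b):
    """One plausible contrast-to-noise ratio between two groups of
    per-sample values: |mean difference| / pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    return abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var(ddof=1) + b.var(ddof=1)))

# Synthetic illumination curve for one pixel (centre 0.4, width 0.8):
x = np.linspace(-3, 3, 200)
I = 2.0 * np.exp(-(x - 0.4) ** 2 / (2 * 0.8 ** 2))
area, mu, sigma = ei_contrasts(x, I)
```

In the defect study, `cnr` would be applied to the per-sample standard deviations of absorption and scattering over the field of view, for the intact vs. fiber-shift groups.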

Keywords: defects in composites, X-ray scattering, local fiber shifts, X-ray edge illumination

Procedia PDF Downloads 60
1229 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage, and heat transfer between the various phases in the blast furnace hearth for stable and efficient operation. Abnormal drainage behavior may lead to high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit, at which hearth coke and the deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and stable BF performance. Direct measurement of these quantities is not possible due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal/slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, and slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake in the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amounts of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace, and the erosion behavior of the tap hole itself.
Heat transfer equations describe the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out that provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts critical event timings during tapping, and gives the expected tapping temperature of metal and slag at preset time intervals. The model is in use at BF-II of JSPL, India, and its output is regularly cross-checked against actual tapping data, with which it is in good agreement.
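The core of such a drainage model is a liquid-level mass balance coupled to an orifice-type taphole flow law. A minimal sketch is shown below; the Torricelli-type drainage law and every parameter value are illustrative assumptions, not the authors' model, which additionally couples the slag phase, gas pressure, and heat transfer.

```python
import math

def simulate_tapping(minutes, prod_rate, area, c_d, dt=1.0):
    """Toy mass balance for the liquid level in a packed hearth during tapping:
    d(level)/dt = (production - drainage) / (hearth area * void fraction).
    Drainage through the taphole is taken as orifice-like, proportional to
    sqrt(liquid head) (Torricelli). Units: m, m^3/min, minutes."""
    eps = 0.35       # void fraction of the coke bed (assumed)
    level = 2.0      # initial liquid level above the taphole, m (assumed)
    history = []
    t = 0.0
    while t < minutes:
        drain = c_d * math.sqrt(max(level, 0.0))        # m^3/min out of taphole
        level += dt * (prod_rate - drain) / (area * eps)
        level = max(level, 0.0)                         # level cannot go negative
        history.append(level)
        t += dt
    return history

# Drainage initially exceeds production, so the level falls toward the
# equilibrium head where c_d * sqrt(L) = prod_rate:
levels = simulate_tapping(minutes=120, prod_rate=0.5, area=80.0, c_d=1.2)
```

Extending this with a second phase (slag), gas pressure acting on the melt surface, and taphole erosion (a time-dependent `c_d`) reproduces the structure described in the abstract.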

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 183
1228 Photovoice, Through Photographs to Feelings: Investigation of Experience Reporting in a Randomized Controlled Study

Authors: Selina Studer, Maria Kleinstäuber, Cornelia Weise

Abstract:

Background: Finding words to report what one has been through can be challenging, especially when dealing with stressful or highly emotional experiences. Photovoice (PV) represents a possible way of facilitating experience reporting. In this approach, people take photos about a particular topic (in our study: worries about the future) and talk about the topic based on the photos. So far, the benefits of Photovoice have been insufficiently tested quantitatively; there is a lack of randomized controlled trials comparing PV to other methods. This study aimed to fill this research gap. Methods: 65 participants took part in the study and were randomly assigned to the PV group, the writing group (WG), or the control group (CG). The PV group received the task of taking photos of worries about the future for one week and sending at most five of them to the interviewer before the interview. The WG had to write down worries about the future and likewise send at most five of them to the interviewer before the interview. The control group did not receive a specific assignment. The semi-structured interview consisted of six open-ended questions and was applied to all reported future worries. The questions covered the content of the future worries, their meaning, and how each worry expressed itself emotionally and physically. The interview was recorded and later transcribed. After the interview, online questionnaires were filled out, covering a range of variables such as access to emotional content, ability to describe feelings, extent of self-disclosure, and relationship quality. Results: Contrary to our hypotheses, one-way ANOVA revealed no differences between the three conditions on any of these variables (access to emotional content, ability to describe feelings, extent of self-disclosure, and so on), all p's > 0.14, BF₀₁ = 1.78-7.66. In a subsequent step, the words in the transcribed interviews were analyzed.
The LIWC program counted how many emotional words occurred in the text and assigned them to predefined categories. Planned contrasts revealed that the PV group used more negative emotional words than the other two groups, t(62) = 2.62, p = .011, and also more than the WG alone, t(62) = 2.36, p = .022, BF₀₁ = 0.62. Conclusions and implications: The self-report instruments did not reveal any differences between the groups; however, the PV group used more negative emotional words than the other two groups. This discrepancy between self-report and observational measures of emotionality is noticeable. One suggestion is that the highly educated and predominantly female sample may not have needed PV to access emotional content. It is possible that the approach would yield clearer results in a clinical sample. This and other approaches are currently being investigated in a follow-up study.
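The analysis pipeline described here (an omnibus one-way ANOVA on the questionnaire variables, then a planned contrast of PV against the comparison groups on the LIWC word counts) can be sketched as follows; the word counts are invented for illustration, and the contrast is approximated with an independent-samples t-test rather than the study's exact contrast weights:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical negative-emotion word counts per interview (values invented):
pv = rng.normal(12, 3, 22)   # Photovoice group
wg = rng.normal(9, 3, 22)    # writing group
cg = rng.normal(9, 3, 21)    # control group

# Omnibus one-way ANOVA across the three conditions:
f, p = stats.f_oneway(pv, wg, cg)

# Planned contrast of PV vs. the two comparison groups pooled,
# approximated here with an independent-samples t-test:
t, p_contrast = stats.ttest_ind(pv, np.concatenate([wg, cg]))
```

With real data, the same calls would be applied to each self-report variable (omnibus test) and to the LIWC category counts (contrasts), mirroring the p-values reported above.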

Keywords: photovoice, randomized controlled study, online intervention, emotional awareness, self-disclosure, data triangulation, interviews

Procedia PDF Downloads 69
1227 Intercultural Strategies of Chinese Composers in the Organizational Structure of Their Works

Authors: Bingqing Chen

Abstract:

The Opium War unlocked the gate of China; since then, modern Western culture has been imported forcefully and has spread throughout the country. The former monologue of traditional Chinese culture has been replaced by the hustle and bustle of multiculturalism. In the field of music, starting from school music, China, a country without a prior concept of professional composition, was deeply influenced by Western culture and professional music composition and entered the era of professional musical composition. Recognizing the importance of national culture, a group of insightful artists began to try to add ‘China’ to musical composition. However, due to the special historical origin of Chinese professional musical composition and three waves of cultural nihilism in China, professional composition at that time failed to engage the deep language structure of Chinese traditional culture, instead treating Chinese traditional music merely as a ‘melody material library.’ Such cross-cultural composition still took Western music as its ‘norm,’ while China's own music culture existed only as a contrasting sound against Western music. However, after reading scores extensively, watching video performances, and interviewing several active composers, we found that at least in the past 30 years, China has created some works that can be called intercultural music. In this music, composers put Chinese and Western, traditional and modern, in an almost equal position for dialogue, based on a deep understanding of and respect for both cultures. Such music connects two musical worlds, links the two cultural and ideological worlds behind them, and lets them communicate and grow together. This paper chooses the works of three composers with different educational backgrounds and examines how the composers create dialogue at the level of the organizational structure of their works.
Based on the strategies the composers adopt in structuring their works, this paper expounds how each composer's musical procedure manifests interculturality in terms of overall sound effects and cultural symbols. By actively participating in this intercultural practice, composers resort to various musical and extra-musical procedures to arrive at so-called ‘innovation within tradition.’ Through dialogue, they activate a space of creative thinking and explore the potential contained in culture. This interdisciplinary research promotes a rethinking of the possibilities for innovation in contemporary Chinese intercultural music composition, spanning the fields of sound studies, dialogue theory, cultural research, and music theory. China has recently been calling for actively promoting ‘the construction of Chinese music canonization,’ expecting a particular musical style to emerge that shows national-cultural identity. In the era of globalization, a brand-new Chinese musical style could form through intercultural composition, but this is a question of talent, and the key lies in how composers do it. There is no recipe for the formation of a Chinese musical style, only composers constantly trying to solve problems in their works.

Keywords: dialogism, intercultural music, national-cultural identity, organization/structure, sound

Procedia PDF Downloads 110
1226 Determination of Friction and Damping Coefficients of Folded Cover Mechanism Deployed by Torsion Springs

Authors: I. Yilmaz, O. Taga, F. Kosar, O. Keles

Abstract:

In this study, the friction and damping coefficients of a folded cover mechanism were obtained from experimental studies and data. Friction and damping coefficients are among the most important inputs for a mechanism analysis: friction and damping are the two factors that change the deployment time of a mechanism and its dynamic behavior. Though recommended friction coefficient values exist in the literature, damping differs from one mechanical system to another, so the damping coefficient should be obtained from mechanism test outputs. The folded cover mechanism in this study uses torsion springs to deploy covers from their initially folded position; the torsion springs provide the folded covers with the desired deployment time under variable environmental conditions. Verifying every design revision with system tests would be very costly, so some decisions are made using numerical methods. In this study, two folded covers are required to deploy simultaneously, and scotch-yoke and crank-rod mechanisms were combined for this purpose. The mechanism was unlocked by a pyrotechnic bolt on the scotch-yoke disc; when the pyrotechnic bolt fired, the torsion springs provided the rotational movement for the mechanism. A high-speed camera recorded the dynamic behavior of the system during deployment. The mechanism was modeled as a rigid body in Adams MBD (multibody dynamics), with the torque values provided by the torsion springs used as input. A well-considered range of friction and damping coefficients was defined in Adams DOE (design of experiments), and a large number of analyses were performed until the deployment time of the folded covers agreed with the test data observed in the high-speed camera recordings; thus the deployment time of the mechanism and its dynamic behavior were obtained. The same mechanism was tested with different torsion springs and torque values, and the outputs were compared with the numerical models.
According to this comparison, the friction and damping coefficients obtained in this study can be used safely when studying folded objects required to deploy simultaneously. In addition to the rigid-body model generated in Adams, a finite element model of the folded mechanism was generated in Abaqus, and the outputs of the rigid-body model and the finite element model were compared. Finally, reasonable explanations were suggested for the differing outputs of these two solution methods.
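The role of friction and damping in deployment time can be illustrated with a one-degree-of-freedom rigid-body sketch. This stands in for the full Adams model only conceptually; the equation of motion form and every parameter value below are invented for illustration.

```python
import math

def deploy_time(inertia, k, preload, c, t_fric, target=math.pi / 2, dt=1e-4):
    """Time (s) for a torsion-spring-driven cover to rotate from 0 to
    `target` rad, integrating I*theta'' = k*(preload - theta) - c*theta'
    - t_fric*sign(theta') with semi-explicit Euler.
    k: spring stiffness (N*m/rad), preload: wind-up angle (rad),
    c: viscous damping (N*m*s/rad), t_fric: constant friction torque (N*m)."""
    theta, omega, t = 0.0, 0.0, 0.0
    while theta < target and t < 10.0:
        torque = k * (preload - theta) - c * omega \
                 - t_fric * (1.0 if omega >= 0 else -1.0)
        omega += dt * torque / inertia
        theta += dt * omega
        t += dt
    return t

# Illustrative values: larger damping lengthens the deployment time,
# which is exactly the sensitivity swept in the DOE study.
t_open = deploy_time(inertia=0.02, k=1.5, preload=math.pi, c=0.05, t_fric=0.1)
```

In the study's workflow, `c` and `t_fric` would be varied over a design-of-experiments grid until the simulated `t_open` matched the camera-recorded deployment time.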

Keywords: damping, friction, pyrotechnic, scotch-yoke

Procedia PDF Downloads 317
1225 Using Teachers' Perceptions of Science Outreach Activities to Design an 'Optimum' Model of Science Outreach

Authors: Victoria Brennan, Andrea Mallaburn, Linda Seton

Abstract:

Science outreach programmes connect school pupils with external agencies to provide activities and experiences that enhance their exposure to science. It can be argued that these programmes not only support teachers with curriculum engagement and promote scientific literacy but also provide pivotal opportunities to spark scientific interest in students; a further objective is to increase awareness of career opportunities in the field. Although outreach work is often described as a fun and satisfying venture, many researchers express caution about how successful these programmes are at increasing post-16 engagement in science. When researching the impact of outreach programmes, it is often student feedback on the activities, or enrolment numbers in particular post-16 science courses, that is generated and analysed. Although this is informative, the longevity of a programme's impact could be better assessed through teachers' perceptions, for which the evidence in the literature is far more limited. In addition, there are strong suggestions that teachers can have an indirect impact on a student's own self-concept. These themes shape the focus and importance of this ongoing research project, whose rationale is that teachers are under-used resources when it comes to the design of science outreach programmes. The end result of the research will therefore be the presentation of an ‘optimum’ model of outreach, which should be of interest to wider stakeholders such as universities and private or government organisations that design science outreach programmes in the hope of recruiting future scientists. During phase one, questionnaires (n=52) and interviews (n=8) generated both quantitative and qualitative data.
These have been analysed using the Wilcoxon non-parametric test to compare teachers' perceptions of science outreach interventions, and thematic analysis for the open-ended questions. Both research activities provided an opportunity to obtain a cross-section of teacher opinions of science outreach across all educational levels. An early draft of the ‘optimum’ model of science outreach delivery was then generated from both the wealth of literature and the primary data. The final (ongoing) phase aims to refine this model using teacher focus groups to provide constructive feedback on the proposed model; the analysis uses principles of modified grounded theory to ensure that the focus group data further strengthen the model. The research therefore takes a pragmatist approach, drawing on the strengths of the different paradigms encountered so that the data collected provide the most suitable information for creating an improved model of sustainable outreach. The results discussed will focus on this ‘optimum’ model and on teachers' perceptions of the benefits and drawbacks of engaging with science outreach work. Although the model is still a work in progress, it provides insight both into how teachers feel outreach delivery can be a sustainable intervention tool within the classroom and into what providers of such programmes should consider when designing science outreach activities.
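As a sketch of the kind of comparison the Wilcoxon test supports here, using SciPy; the paired Likert ratings below are invented, and the actual items and pairing in the study may differ:

```python
from scipy import stats

# Hypothetical paired Likert ratings (1-5) from twelve teachers rating
# perceived pupil engagement before and after an outreach intervention;
# all values are invented for illustration:
before = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2, 3]
after = [4, 4, 3, 5, 3, 4, 4, 3, 4, 4, 3, 4]

# The Wilcoxon signed-rank test is a non-parametric paired comparison,
# suitable for ordinal questionnaire data with no normality assumption:
res = stats.wilcoxon(before, after)
```

The non-parametric choice matters because Likert responses are ordinal and typically skewed, so a paired t-test's assumptions would be hard to defend.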

Keywords: educational partnerships, science education, science outreach, teachers

Procedia PDF Downloads 118
1224 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

There is dispersed energy in radio frequencies (RF) that can be reused to power electronic circuits such as sensors, actuators, identification devices, and other systems, without wire connections or a battery supply. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by the combination of one or more Schottky diodes connected in series or shunt. In the case of a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies; therefore, low values of series resistance, junction capacitance and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations such as voltage doublers or modified bridge converters are used. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in the rectifier design. Electronic circuit designs are commonly analyzed through simulation in a SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and for the analysis of quasi-static electromagnetic field interactions, i.e., at low frequency, these simulators are limited: they cannot properly model microwave hybrid circuits containing both lumped and distributed elements. This work therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-high frequencies, with application to rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas.
For this purpose, the numerical FDTD (Finite-Difference Time-Domain) method is applied, and SPICE computational tools are used for comparison. In the present work, the Ampere-Maxwell equation is first applied to relate current density and electric field within the FDTD method, and its circuital relation with the voltage drop across the modeled component is established for the lumped-parameter case, using the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) formulations proposed in the literature for the passive components and for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
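As a toy illustration of the FDTD idea described above (not the authors' LE-FDTD formulation), a bare one-dimensional Yee leapfrog update in normalized units, driven by a hypothetical Gaussian soft source, can be sketched as:

```python
import math

def fdtd_1d(n_cells=200, n_steps=300, src=100):
    """Leapfrog update of Ez and Hy on a 1D Yee grid (normalized units, Courant number 0.5)."""
    ez = [0.0] * n_cells
    hy = [0.0] * n_cells
    for n in range(n_steps):
        # Update the electric field from the spatial difference (curl) of H.
        for k in range(1, n_cells):
            ez[k] += 0.5 * (hy[k - 1] - hy[k])
        # Soft Gaussian source injected at the source cell.
        ez[src] += math.exp(-0.5 * ((n - 30) / 8.0) ** 2)
        # Update the magnetic field from the spatial difference (curl) of E.
        for k in range(n_cells - 1):
            hy[k] += 0.5 * (ez[k] - ez[k + 1])
    return ez

fields = fdtd_1d()
print(max(abs(v) for v in fields))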

Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems

Procedia PDF Downloads 126
1223 Conceptualization and Assessment of Key Competencies for Children in Preschools: A Case Study in Southwest China

Authors: Yumei Han, Naiqing Song, Xiaoping Yang, Yuping Han

Abstract:

This study explores the conceptualization of key competencies that children are expected to develop in three-year preschools (ages 3-6) and the assessment practices of such key competencies in China. Assessment of child development has been placed at the center of the early childhood education quality evaluation system in China. In the context of China's education reform centered on students' key competencies, defining and selecting key competencies for preschool children is of great significance: they would lay a solid foundation for children's lifelong learning and would drive reform of curriculum and instruction, teacher development, and quality evaluation in early childhood education. Based on sense-making theory, this study adopted multiple stakeholders' (early childhood educators, parents, evaluation administrators, scholars in the early childhood education field) perspectives and grass-roots voices to conceptualize and operationalize key competencies for preschool children in Southwest China. Drawing on child development theories, Chinese and international literature on child development and key competencies, and the key-competency frameworks of UNESCO, the OECD and other bodies, the authors designed a two-phase sequential mixed-methods study to address three main questions: (a) How is early childhood key competency defined or labeled in the literature and in different stakeholders' views? (b) Based on the definitions explicated in the literature and the stakeholder surveys, what domains and components are regarded as constituting the key competency framework for children in three-year preschools in China? (c) How have early childhood key competencies been assessed and measured, and how do such assessment and measurement contribute to enhancing early childhood development quality?
In the first phase, a series of focus group surveys was conducted among the different types of stakeholders around the research questions. In the second phase, based on the coding of the participants' answers together with the literature synthesis findings, a questionnaire survey was designed and conducted to select the most commonly expected components of preschool children's key competencies. Semi-structured open questions were also included in the questionnaire for participants to add competencies beyond the checklist. Preliminary findings show broad agreement on the significance and necessity of conceptualizing and assessing key competencies for preschool children, and a key competencies framework composed of 7 domains and 25 indicators was constructed. Meanwhile, the findings also reveal issues in current assessment practices, such as a lack of effective assessment tools and a lack of teacher capacity in applying the tools to evaluate children and advance their development accordingly. Finally, the authors put forth suggestions and implications for China and international communities in terms of restructuring the early childhood key competencies framework and promoting child-development-centered reform in early childhood education quality evaluation.

Keywords: assessment, conceptualization, early childhood education quality in China, key competencies

Procedia PDF Downloads 243
1222 Effects of Centrifugation, Encapsulation Method and Different Coating Materials on the Total Antioxidant Activity of the Microcapsules of Powdered Cherry Laurels

Authors: B. Cilek Tatar, G. Sumnu, M. Oztop, E. Ayaz

Abstract:

Encapsulation protects sensitive food ingredients against heat, oxygen, moisture and pH until they are released into the system. It can mask the unwanted taste of nutrients that are added to foods for fortification purposes. Cherry laurels (Prunus laurocerasus) contain phenolic compounds which reduce the risk of several chronic diseases, such as certain cancers and cardiovascular disease. The objective of this research was to study the effects of centrifugation, different coating materials and homogenization methods on the microencapsulation of powders obtained from cherry laurels. In this study, maltodextrin and a 1:3 (w/w) mixture of maltodextrin and whey protein were chosen as coating materials. The total solid content of the coating materials was kept constant at 10% (w/w). Capsules were obtained from freeze-dried cherry laurel powders through an encapsulation process using a silent crusher homogenizer or microfluidization. Freeze-dried cherry laurels were the core material, and the core-to-coating ratio was 1:10 by weight. To homogenize the mixture, a high-speed homogenizer was used at 4000 rpm for 5 min. Then, the silent crusher or microfluidizer was used to complete the encapsulation process: the mixtures were treated either with the silent crusher for 1 min at 75,000 rpm or with the microfluidizer at 50 MPa for 3 passes. Freeze-drying for 48 hours was applied to the emulsions to obtain capsules in powder form. After these steps, the dry capsules were ground manually into a fine powder. The microcapsules were analyzed for total antioxidant activity by the DPPH (1,1-diphenyl-2-picrylhydrazyl) radical scavenging method. Prior to high-speed homogenization, the samples were centrifuged (4000 rpm, 1 min). Centrifugation was found to have a positive effect on the total antioxidant activity of the capsules. Microcapsules treated by microfluidizer were found to have higher total antioxidant activities than those treated by silent crusher.
It was found that increasing whey protein concentration in coating material (using maltodextrin:whey protein 1:3 mixture) had positive effect on total antioxidant activity for both silent crusher and microfluidization methods. Therefore, capsules prepared by microfluidization of centrifuged mixtures can be selected as the best conditions for encapsulation of cherry laurel powder by considering their total antioxidant activity. In this study, it was shown that capsules prepared by these methods can be recommended to be incorporated into foods in order to enhance their functionality by increasing antioxidant activity.
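DPPH antioxidant activity of the kind reported above is conventionally expressed as percent inhibition, i.e., the drop in radical absorbance relative to a control. A minimal sketch with hypothetical absorbance readings (e.g., at 517 nm):

```python
def dpph_inhibition(a_control, a_sample):
    """Percent DPPH radical scavenging: absorbance drop relative to the control."""
    return 100.0 * (a_control - a_sample) / a_control

# Hypothetical absorbance readings for the DPPH blank and a capsule extract.
print(dpph_inhibition(0.80, 0.20))  # → 75.0
```

A stronger antioxidant bleaches more of the purple DPPH radical, so a lower sample absorbance maps to a higher inhibition percentage.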

Keywords: antioxidant activity, cherry laurel, microencapsulation, microfluidization

Procedia PDF Downloads 289
1221 Reaching Students Who “Don’t Like Writing” through Scenario Based Learning

Authors: Shahira Mahmoud Yacout

Abstract:

Writing is an essential skill in many vocational and academic environments, and notably in workplaces, yet many students perceive writing as tiring and boring, or even a "waste of time". Studies in the field of foreign languages suggest this may be due to the lack of connection between what is learned at university and what students encounter in real-life situations. Arabic learners felt they needed more language exposure related to the context of their future professions. With this idea in mind, scenario-based learning (SBL) is reported to be an educational approach that motivates, engages, and stimulates students' interest and achieves the desired writing learning outcomes. In addition, researchers have suggested SBL as an instructional approach that develops and enhances students' skills through higher-order thinking and active learning. It is a subset of problem-based learning and case-based learning. The approach focuses on authentic rhetorical framing reflecting writing tasks in real-life situations. It works successfully when used to simulate real-world practices, providing context that reflects the types of situations professionals respond to in writing. It has been claimed that using realistic scenarios customized to the course's learning objectives bridges the gap for students between theory and application. Within this context, scenario-based learning is thought to be an important approach to enhance learners' writing skills and to promote meaningful learning within authentic contexts. As an Arabic foreign language instructor, the author noticed that students find it difficult to adapt writing styles to authentic writing contexts and to address different audiences and purposes. This observation is supported by studies which claimed that AFL students face difficulties transferring writing skills to situations outside the classroom context.
In addition, it was observed that some Arabic textbooks for teaching Arabic as a foreign language lacked topics that initiated higher-order thinking skills and stimulated learners to understand the setting and create messages appropriate to different audiences, contexts, and purposes. The goals of this study are to (1) provide a rationale for using the scenario-based learning approach to improve AFL learners' writing skills, (2) demonstrate how to design and implement a scenario-based learning technique aligned with the writing course objectives, (3) present samples of the scenario-based approach implemented in an AFL writing class, and (4) emphasize the role of peer review, along with the instructor's feedback, in the process of developing the writing skill. Finally, this presentation highlights the importance of using the scenario-based learning approach in writing as a means to mirror students' real-life situations and engage them in planning, monitoring, and problem solving. This approach helped make writing an enjoyable experience and clearly useful to students' future professional careers.

Keywords: meaningful learning, real life contexts, scenario based learning, writing skill

Procedia PDF Downloads 97
1220 Integration of a Protective Film to Enhance the Longevity and Performance of Miniaturized Ion Sensors

Authors: Antonio Ruiz Gonzalez, Kwang-Leong Choy

Abstract:

The measurement of electrolytes is of high value in the clinical routine. Ions are present in all body fluids at variable concentrations and are involved in multiple pathologies such as heart failure and chronic kidney disease. In the case of dissolved potassium, although a high blood concentration (hyperkalemia) is relatively uncommon in the general population, it is one of the most frequent acute electrolyte abnormalities. In recent years, the integration of thin-film technologies in this field has allowed the development of highly sensitive biosensors with ultra-low limits of detection for the assessment of metals in liquid samples. However, despite the current efforts in the miniaturization of sensitive devices and their integration into portable systems, only a limited number of commercially successful examples can be found. This can be attributed to the high cost involved in their production and to the sustained degradation of the electrodes over time, which causes signal drift in the measurements. Thus, there is an unmet need for low-cost and robust sensors for the real-time monitoring of analyte concentrations in patients, to allow the early detection and diagnosis of diseases. This paper reports a thin-film ion-selective sensor for the evaluation of potassium ions in aqueous samples. As an alternative fabrication method, aerosol-assisted chemical vapor deposition (AACVD) was applied due to its cost-effectiveness and fine control over film deposition. The technique does not require vacuum and is suitable for coating large surface areas and structures with complex geometries. This approach allowed the fabrication of highly homogeneous surfaces with well-defined microstructures on 50 nm thin gold layers.
The degradation processes of the ubiquitously employed poly(vinyl chloride) membranes in contact with an electrolyte solution were studied, including polymer leaching, mechanical desorption of nanoparticles, and chemical degradation over time. Rational design of a protective coating based on an organosilicon material in combination with cellulose was then carried out to improve the long-term stability of the sensors, showing an improvement in performance after 5 weeks. The antifouling properties of the coating were assessed using a cutting-edge quartz crystal microbalance sensor, allowing quantification of the adsorbed proteins in the nanogram range. A correlation between the microstructural properties of the films, the surface energy, and biomolecule adhesion was then found and used to optimize the protective film.
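Potentiometric ion-selective sensors of the kind described above are usually characterized against the theoretical Nernstian slope, the potential change per decade of ion activity. A small sketch of that calculation (standard physical constants; z = 1 for K+) is:

```python
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def nernst_slope_mv(temp_c, z=1):
    """Theoretical electrode slope in mV per decade of ion activity."""
    t_kelvin = temp_c + 273.15
    return 1000.0 * R * t_kelvin * math.log(10) / (z * F)

print(round(nernst_slope_mv(25.0), 2))  # ≈ 59.16 mV/decade for K+ at 25 °C
```

A measured slope well below this value, or one that falls over weeks of immersion, is one common symptom of the membrane degradation and drift the paper targets.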

Keywords: hyperkalemia, drift, AACVD, organosilicon

Procedia PDF Downloads 120
1219 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions

Authors: Gaurangi Saxena, Ravindra Saxena

Abstract:

Lately, cloud computing has been used to attain corporate goals more effectively and efficiently at lower cost. This computing paradigm has emerged as a powerful tool for optimum utilization of resources and for gaining competitiveness through cost reduction and achieving business goals with greater flexibility. Realizing the importance of this technique, most of the well-known companies in the computer industry, such as Microsoft, IBM, Google and Apple, are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that, by using the right middleware, a cloud computing system can execute all the programs a normal computer could run: potentially everything from the simplest generic word processing software to highly specialized and customized programs designed for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments but also support "interactive user-facing applications" such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm: it draws on existing technologies and approaches such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they do not have it available on site; cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end. Prominent service providers like Amazon, Google, Sun, IBM, Oracle and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases and applications. Application services may include email, office applications, finance, video, audio and data processing.
By using a cloud computing system, a company can improve its customer relationship management. A CRM cloud computing system may be highly useful in delivering to a sales team a blend of unique functionalities that improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss the application of cloud computing in different disciplines of management, especially in the field of marketing, with special reference to its use in CRM. The study concludes that a CRM cloud computing platform helps a company track data such as orders, discounts, references, competitors and much more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently at a lower cost, gain competitive advantage.

Keywords: cloud computing, competitive advantage, customer relationship management, grid computing

Procedia PDF Downloads 307
1218 In Vitro Evaluation of a Chitosan-Based Adhesive to Treat Bone Fractures

Authors: Francisco J. Cedano, Laura M. Pinzón, Camila I. Castro, Felipe Salcedo, Juan P. Casas, Juan C. Briceño

Abstract:

Complex fractures located in articular surfaces are challenging to treat, and their reduction with conventional treatments can compromise the functionality of the affected limb. An adhesive material to treat such fractures is desirable for orthopedic surgeons. This adhesive must be biocompatible and adhere strongly to bone surfaces in an aqueous environment. The proposed adhesive is based on chitosan, given its adhesive properties and biocompatibility. Chitosan is mixed with calcium carbonate and hydroxyapatite, which contribute structural support and gel-like behavior, and glutaraldehyde is used as a cross-linking agent to maintain the adhesive's mechanical performance in an aqueous environment. This work aims to evaluate the rheological, adhesion-strength and biocompatibility properties of the proposed adhesive using in vitro tests. The gelation process of the adhesive was monitored by oscillatory rheometry in an ARG-2 TA Instruments rheometer, using a 22 mm parallel-plate geometry and a 1 mm gap. Time-sweep experiments were conducted at 1 Hz frequency, 1% strain and 37 °C from 0 to 2400 s. Adhesion strength was measured using a butt joint test with bovine cancellous bone fragments as substrates. The test was conducted at 5 min, 20 min and 24 hours after curing the adhesive under water at 37 °C. Biocompatibility was evaluated by a cytotoxicity test in a fibroblast cell culture using the MTT assay and SEM. Rheological results showed that the average gelation time of the adhesive is 820 ± 107 s and that it reaches storage modulus magnitudes up to 10⁶ Pa; the adhesive shows solid-like behavior. The butt joint test showed a tensile bond strength of 28.6 ± 9.2 kPa for the adhesive cured for 24 hours, and there was no significant difference in adhesion strength between 20 minutes and 24 hours. MTT showed 70 ± 23% active cells on the sixth day of culture; this percentage is estimated relative to a positive control (cells with culture medium and bovine serum only).
High-vacuum SEM observation made it possible to localize and study the morphology of the fibroblasts present on the adhesive. All fibroblasts captured in SEM presented the typical flattened structure, with filopodia attached to the adhesive surface. This project reports a chitosan-based adhesive that is biocompatible, as indicated by the high percentage of active cells in the MTT test and corroborated by SEM. It also shows adhesive properties under conditions that model the clinical application, and the adhesion strength does not decrease between 5 minutes and 24 hours.

Keywords: bioadhesive, bone adhesive, calcium carbonate, chitosan, hydroxyapatite, glutaraldehyde

Procedia PDF Downloads 319
1217 Examination of Porcine Gastric Biomechanics in the Antrum Region

Authors: Sif J. Friis, Mette Poulsen, Torben Strom Hansen, Peter Herskind, Jens V. Nygaard

Abstract:

Gastric biomechanics governs a large range of scientific and engineering fields, from gastric health issues to the interaction mechanisms between external devices and the tissue. Determination of the mechanical properties of the stomach is thus crucial, both for understanding gastric pathologies and for the development of medical concepts and device designs. Although the field of gastric biomechanics is emerging, advances in medical devices interacting with gastric tissue could greatly benefit from an increased understanding of tissue anisotropy and heterogeneity. In this study, uniaxial tensile tests of gastric tissue were therefore executed in order to study biomechanical properties within the same individual as well as across individuals. With biomechanical tests in the strain domain, tissue from the antrum region of six porcine stomachs was tested using eight samples from each stomach (n = 48). The samples were cut so that they followed the dominant fiber orientations: from each stomach, four samples were longitudinally oriented and four were circumferentially oriented. A step-wise stress-relaxation test with five incremental steps up to 25% strain, with 200 s rest periods after each step, was performed, followed by a 25% strain ramp test at three different strain rates. Theoretical analysis of the data provided stress-strain/time curves as well as 20 material parameters (e.g., stiffness coefficients, dissipative energy densities, and relaxation time coefficients) used for statistical comparisons between samples from the same stomach as well as between stomachs. Results showed that, for the 20 material parameters, heterogeneity across individuals (when extracting samples from the same area) was of the same order as the variation between samples within the same stomach.
For samples from the same stomach, the mean deviation percentage across all 20 parameters was 21% and 18% for longitudinal and circumferential orientations, compared to 25% and 19%, respectively, for samples across individuals. This observation was also supported by a nonparametric one-way ANOVA, in which the 20 material parameters from each of the six stomachs showed no significant differences in distribution (P > 0.05). Direction dependency was also examined: the maximum stress for longitudinal samples was significantly higher than for circumferential samples. However, there were no significant differences in the 20 material parameters, except for the equilibrium stiffness coefficient (P = 0.0039) and two other stiffness coefficients found from the relaxation tests (P = 0.0065, 0.0374). Nor did the stomach tissue show significant differences between the three strain rates used in the ramp test. Heterogeneity within the same region has not been examined before, yet the importance of the sampling area has been demonstrated in this study. All material parameters found are essential for understanding the passive mechanics of the stomach and may be used for mathematical and computational modeling. Additionally, an extension of the protocol may be relevant for a comparative study between the human and the porcine stomach.
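The nonparametric one-way ANOVA used above (the Kruskal-Wallis test) compares rank sums across independent groups. A minimal pure-Python sketch of the H statistic, with hypothetical stiffness values and no tie correction, is:

```python
def kruskal_wallis_h(groups):
    """H statistic for k independent samples (assumes no tied observations)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # 1-based ranks over the pooled data
    n_total = len(pooled)
    # Sum of (rank-sum squared / group size) over the groups.
    h = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n_total * (n_total + 1)) * h - 3 * (n_total + 1)

# Hypothetical stiffness coefficients from two stomachs.
print(kruskal_wallis_h([[1.1, 2.3], [3.7, 4.2]]))
```

H is compared to a chi-squared distribution with k-1 degrees of freedom; library routines (e.g. SciPy's `kruskal`) handle ties and return the p-value directly.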

Keywords: antrum region, gastric biomechanics, loading-unloading, stress relaxation, uniaxial tensile testing

Procedia PDF Downloads 426
1216 Winter – Not Spring - Climate Drives Annual Adult Survival in Common Passerines: A Country-Wide, Multi-Species Modeling Exercise

Authors: Manon Ghislain, Timothée Bonnet, Olivier Gimenez, Olivier Dehorter, Pierre-Yves Henry

Abstract:

Climatic fluctuations affect the demography of animal populations, generating changes in population size, phenology, distribution and community assemblages. However, very few studies have identified the underlying demographic processes. For short-lived species like common passerine birds, are these changes generated by changes in adult survival or in fecundity and recruitment? This study tests for an effect of annual climatic conditions (spring and winter) on annual, local adult survival at very large spatial (one country, 252 sites), temporal (25 years) and biological (25 species) scales. The Constant Effort Site ringing scheme has allowed the collection of capture-mark-recapture data for 100,000 adult individuals since 1989 across metropolitan France, thus documenting the annual, local survival rates of the most common passerine birds. We specifically developed a set of multi-year, multi-species, multi-site Bayesian models describing variations in local survival and recapture probabilities. This method allows for a statistically powerful hierarchical assessment (global versus species-specific) of the effects of climate variables on survival. A major part of the between-year variation in survival rate was common to all species (74% of between-year variance), whereas only 26% of the temporal variation was species-specific. Although changing spring climate is commonly invoked as a cause of population size fluctuations, spring climatic anomalies (mean precipitation or temperature for March-August) did not impact adult survival: only 1% of the between-year variation in species survival is explained by spring climatic anomalies. However, for sedentary birds, winter climatic anomalies (North Atlantic Oscillation) had a significant, quadratic effect on adult survival, with birds surviving less during intermediate years than during more extreme years. For migratory birds, we did not detect an effect of winter climatic anomalies (Sahel rainfall).
We will analyze the life history traits (migration, habitat, thermal range) that could explain a different sensitivity of species to winter climate anomalies. Overall, we conclude that changes in population sizes for passerine birds are unlikely to be the consequences of climate-driven mortality (or emigration) in spring but could be induced by other demographic parameters, like fecundity.
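The 74%/26% split reported above is a partition of between-year variance into a component shared by all species and a species-specific remainder. Under simplifying assumptions (survival anomalies arranged in a species × year table, the shared component taken as the across-species mean in each year), one way to sketch that partition is:

```python
def shared_variance_fraction(table):
    """Fraction of between-year variance common to all species.

    table[s][y] holds the survival anomaly of species s in year y.
    The shared component is the across-species mean per year; the
    species-specific part is each species' deviation from that mean.
    """
    n_species, n_years = len(table), len(table[0])
    yearly_mean = [sum(row[y] for row in table) / n_species for y in range(n_years)]

    def _var(xs):  # raw second moment: anomalies are assumed centered on zero
        return sum(x * x for x in xs) / len(xs)

    common = _var(yearly_mean)
    residual = _var([row[y] - yearly_mean[y] for row in table for y in range(n_years)])
    return common / (common + residual)

# Hypothetical anomalies: two species tracking the same good/bad years.
table = [[0.10, -0.10, 0.05, -0.05],
         [0.08, -0.12, 0.07, -0.03]]
print(shared_variance_fraction(table))
```

This is only a moment-based caricature; the study itself estimates the common and species-specific components as random effects inside the hierarchical Bayesian model.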

Keywords: Bayesian approach, capture-recapture, climate anomaly, constant effort sites scheme, passerine, seasons, survival

Procedia PDF Downloads 298
1215 Enhancing Athlete Training using Real Time Pose Estimation with Neural Networks

Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba

Abstract:

Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project aims to address this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete’s movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames, combined with a robust encoder-decoder architecture to further refine pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data can also be used to develop personalized training plans that target specific weaknesses identified in an athlete’s movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations or depth-sensing techniques, which can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset was collected from the labs of Martin Luther King at San Jose State University. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications.
Challenges such as enhancing the system’s ability to operate in varied environmental conditions and further expanding the dataset for training were identified and discussed. Future work will refine the model’s adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies.
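Dilated convolutions, as used in the architecture above, widen the receptive field by sampling the input with gaps rather than adding parameters. A minimal 1D sketch (pure Python, 'valid' padding, no deep-learning framework) is:

```python
def dilated_conv1d(x, w, dilation=1):
    """'Valid' 1D convolution that reads every `dilation`-th input sample.

    The receptive field is 1 + (len(w) - 1) * dilation, so stacking layers
    with growing dilation covers long ranges at constant kernel size.
    """
    span = (len(w) - 1) * dilation  # receptive field minus one
    return [sum(w[j] * x[i + j * dilation] for j in range(len(w)))
            for i in range(len(x) - span)]

signal = [0, 0, 1, 0, 0, 0, 0]
kernel = [1, 1, 1]
print(dilated_conv1d(signal, kernel, dilation=2))  # → [1, 0, 1]
```

In a framework this is just the `dilation` argument of a convolution layer; the sketch only makes the gap-sampling explicit.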

Keywords: computer vision, deep learning, human pose estimation, U-NET, CNN

Procedia PDF Downloads 37
1214 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties

Authors: E. Salem

Abstract:

Combustion has long been used as a means of energy extraction. However, in recent years there has been a further increase in air pollution through pollutants such as nitrogen oxides, acids, etc. In order to address this problem, there is a need to reduce carbon and nitrogen oxides through lean burning, modified combustors, and fuel dilution. A numerical investigation has been carried out into the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbons in air, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate the 1D premixed flame based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen into the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing them to experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube 40 cm in length and 2.5 cm in diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and changing the equivalence ratios from lean to rich in the fuel blends; the effects on flame temperature, shape, velocity and on the concentrations of radicals and emissions were observed.
It was determined that the reduced mechanisms provided results within an acceptable range. The variation of the inlet velocity and the geometry of the tube led to an increase in temperature and CO2 emissions; the highest temperatures were obtained under lean conditions (equivalence ratio 0.5-0.9). Addition of hydrogen into the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of the same laminar flow, with the equivalence ratio varied alongside the hydrogen additions. The production of NO is reduced because combustion occurs in a leaner state, which helps in addressing environmental problems.
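As an editorial illustration of the equivalence-ratio variation described above, the following minimal sketch computes the molar equivalence ratio for a methane/air mixture. It assumes the standard stoichiometry CH4 + 2 O2 -> CO2 + 2 H2O and 21% O2 in air by mole; the function name and the mole-flow values are illustrative, not taken from the study.

```python
# Minimal sketch: equivalence ratio phi for a CH4/air mixture on a molar basis.
# Assumptions: complete-combustion stoichiometry CH4 + 2 O2 -> CO2 + 2 H2O,
# air modeled as 21% O2 / 79% N2 by mole. Illustrative only.

O2_PER_CH4_STOICH = 2.0   # moles of O2 per mole of CH4 at stoichiometry
O2_FRACTION_AIR = 0.21    # mole fraction of O2 in dry air

def equivalence_ratio(n_fuel: float, n_air: float) -> float:
    """phi = (fuel/air)_actual / (fuel/air)_stoich, molar basis."""
    fuel_air_actual = n_fuel / n_air
    # 1 mol CH4 needs 2 / 0.21 = 9.52 mol of air at stoichiometry
    fuel_air_stoich = O2_FRACTION_AIR / O2_PER_CH4_STOICH
    return fuel_air_actual / fuel_air_stoich

print(round(equivalence_ratio(1.0, 2.0 / 0.21), 3))  # stoichiometric -> 1.0
print(round(equivalence_ratio(1.0, 12.0), 3))        # excess air: lean, phi < 1
```

Sweeping `n_air` in such a helper reproduces the lean-to-rich range (phi from 0.5 to above 1) referred to in the abstract.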

Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames

Procedia PDF Downloads 113
1213 Differences in Assessing Hand-Written and Typed Student Exams: A Corpus-Linguistic Study

Authors: Jutta Ransmayr

Abstract:

The digital age has long since arrived at Austrian schools, and both society and educationalists demand that digital means be integrated accordingly into day-to-day school routines. The Austrian school-leaving exam (A-levels) can therefore now be written either by hand or on a computer. However, the choice of writing medium (pen and paper or computer) for written examination papers, which are considered 'high-stakes' exams, raises a number of questions that have not yet been adequately investigated and answered, such as: What effects do the different conditions of text production in the written German A-levels have on normative linguistic accuracy? How do the spelling skills in German A-level papers written with a pen differ from those in papers the students wrote on the computer? How is the teachers' assessment related to this? And which practical desiderata for German didactics can be derived from this? These questions were investigated in a trilateral pilot project of the Austrian Centre for Digital Humanities (ACDH) of the Austrian Academy of Sciences and the University of Vienna, in cooperation with the Austrian Ministry of Education and the Council for German Orthography. A representative Austrian learner corpus, consisting of around 530 German A-level papers from all over Austria (pen- and computer-written), was set up and subjected to a quantitative (corpus-linguistic and statistical) and qualitative investigation of the spelling and punctuation performance of the high school graduates and of the differences between pen- and computer-written papers and their assessments. Relevant studies are currently available mainly from the Anglophone world. These have shown that writing on the computer increases the motivation to write and has positive effects on the length of the text and, in some cases, also on its quality.
Depending on the writing situation and other technical aids, better results in terms of spelling and punctuation were also found in the computer-written texts compared to the handwritten ones. Studies also point towards a tendency among teachers to rate handwritten texts better than computer-written texts. In this paper, the first comparable results from the German-speaking area are presented. The research results show, on the one hand, that there are significant differences between handwritten and computer-written work with regard to performance in orthography and punctuation. On the other hand, the corpus-linguistic investigation and the subsequent statistical analysis made it clear that not only do the teachers' assessments of the students' spelling performance vary enormously, but so do the overall assessments of the exam papers; the production medium (pen and paper or computer) also seems to play a decisive role.
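The kind of group comparison the study describes can be sketched with a two-sample statistic on per-paper error rates. The sketch below uses Welch's t statistic (which does not assume equal variances) on invented, illustrative error rates per 1,000 words; neither the numbers nor the choice of test are taken from the paper.

```python
# Minimal sketch: comparing spelling error rates (errors per 1,000 words)
# between handwritten and computer-written papers with Welch's t statistic.
# The data below are hypothetical, purely for illustration.
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

pen = [12.1, 9.8, 14.3, 11.0, 13.5]       # hypothetical pen-written rates
computer = [8.2, 7.9, 10.1, 6.5, 9.0]     # hypothetical computer-written rates

print(round(welch_t(pen, computer), 2))
```

A positive t here would indicate higher error rates in the pen-written group; in practice one would also compute degrees of freedom and a p-value (e.g., via the Welch-Satterthwaite formula).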

Keywords: exam paper assessment, pen and paper or computer, learner corpora, linguistics

Procedia PDF Downloads 164
1212 Characteristics of the Rocks Glacier Deposits in the Southern Carpathians, Romania

Authors: Petru Urdea

Abstract:

As a distinct part of the mountain system, the rock glacier system is a particular periglacial debris system. Being an open system, it is interconnected with other subsystems, such as the glacial, cliff, rocky slope, and talus slope subsystems, which are sources of sediments. One characteristic is that for long periods of time it acts as a storage unit for debris and ice, and temporarily for snow and water. In the Southern Carpathians, 306 rock glaciers were identified. The vast majority of these, 74%, are talus rock glaciers, and 26% are debris rock glaciers. Granites and granodiorites host 49% of all the rock glaciers, representing 61% of the area occupied by Southern Carpathian rock glaciers. This lithological dependence also leaves its mark on the specifics of the deposits, everything bearing the imprint of the particular way the rocks respond to physical weathering processes, all in a periglacial regime. While in the domain of granites and granodiorites the blocks are large (of metric order, even 10 m3), in the domain of the metamorphic rocks only gneisses can yield similar sizes. Amphibolites, amphibolitic schists, mica schists, sericite-chlorite schists, and phyllites crop out in much smaller blocks, of decimetric order, mostly in the form of slabs. In the case of rock glaciers made up of large blocks, with an open-work structure, the density and volume of voids between the blocks are greater, while smaller debris generates more compact structures with fewer voids. All of this influences the thermal regime, associated with a certain type of air circulation during the seasons and with the emergence of permafrost formation conditions. The rock glaciers are fed by rock falls, rock avalanches, debris flows, and avalanches, so the structure is heterogeneous, which is also reflected in the detailed topography of the rock glaciers.
This heterogeneity is also influenced by the spatial arrangement of the rock bodies in the supply area and, an element that cannot be omitted, by the behavior of the rocks during periglacial weathering. The production of small gelifracts leads to the filling of voids and the appearance of more compact structures, with effects on the creep process. In general, surface deposits are coarser and those at depth are finer, their characteristics being detectable by geophysical methods. The electrical resistivity tomography (ERT) and ground-penetrating radar (GPR) investigations carried out in the Făgăraş, Retezat, and Parâng Mountains, each with a different lithological specificity, allowed the identification of some differentiations, including the presence of permafrost bodies.

Keywords: rock glaciers deposits, structure, lithology, permafrost, Southern Carpathians, Romania

Procedia PDF Downloads 20
1211 The Risk of Bleeding in Knee or Shoulder Injections in Patients on Warfarin Treatment

Authors: Muhammad Yasir Tarar

Abstract:

Background: Intraarticular steroid injections are an effective option for alleviating the symptoms of conditions like osteoarthritis, rheumatoid arthritis, crystal arthropathy, and rotator cuff tendinopathy. Most of these injections are given to the elderly, who are often on polypharmacy, at times including anticoagulants. Up to 6% of patients aged 80-84 years have been reported to be taking warfarin. The literature on the safety of intraarticular injections in patients on warfarin is scarce, and it has remained debatable over the years which approach is safe for these patients: continuing warfarin carries a theoretical bleeding risk, while stopping it can lead to severe, even life-threatening, thromboembolic events in high-risk patients. Objectives: To evaluate the risk of bleeding complications in patients on warfarin undergoing intraarticular injections or arthrocentesis. Study Design & Methods: A literature search of the MEDLINE (1946 to present), EMBASE (1974 to present), and Cochrane CENTRAL (1988 to present) databases was conducted in November 2020 using any combination of the keywords injection, knee, shoulder, joint, intraarticular, arthrocentesis, warfarin, and anticoagulation, for articles published in any language with no publication year limit. The inclusion criteria required studies to report the rate of bleeding complications following injection of the knee or shoulder in patients on warfarin treatment. Randomized controlled trials and prospective and retrospective study designs were included. A standardized electronic proforma was used for data extraction. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was followed. The Cochrane Risk of Bias Tool was used to assess the risk of bias in the included RCTs, and the MINORS (methodological index for nonrandomized studies) tool was used to assess bias in the observational studies.
Results: The database search yielded a total of 852 articles. Relevant articles meeting the inclusion criteria were shortlisted, and 7 articles were deemed suitable for inclusion. The pooled sample comprised 1,033 joints, of which 820 were specified as knee or shoulder joints. Only 6 joints had bleeding complications: 5 with early bleeding at the time of injection or aspiration, and one with a late bleeding complication at an INR of 5. Additionally, 2 patients complained of bruising, 3 of pain, and 1 was managed for infection. Conclusions: The results of the meta-analysis show that it is relatively safe to perform intraarticular injections in patients on warfarin regardless of the INR range.
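The headline result (6 bleeding events in 1,033 joints) can be accompanied by a confidence interval for the pooled event rate. The sketch below uses the Wilson score interval, a standard choice for rare binomial events; the abstract itself does not state which interval method, if any, was used, so this is illustrative.

```python
# Minimal sketch: pooled event rate with a Wilson 95% confidence interval,
# applied to the review's headline figures (6 bleeding events in 1,033 joints).
from math import sqrt

def wilson_ci(events: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (two-sided 95% for z=1.96)."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(6, 1033)
print(f"rate = {6/1033:.4f}, 95% CI ({lo:.4f}, {hi:.4f})")
```

For these figures the rate is about 0.58%, with a Wilson 95% interval of roughly 0.3% to 1.3%, which quantifies how rare the observed bleeding complications were.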

Keywords: arthrocentesis, warfarin, bleeding, injection

Procedia PDF Downloads 73
1210 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because the patients were either drug-naive (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for the spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the Signal Processing Toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes.
The delta, theta, slow alpha, fast alpha, and beta powers showed low classification accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which carries information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
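The ROC analysis reported above can be illustrated with the rank-based definition of the area under the ROC curve: AUC equals the probability that a randomly chosen patient scores higher on the feature than a randomly chosen control. The TGC values below are invented for illustration; only the computation, not the data, reflects the study.

```python
# Minimal sketch: ROC AUC for a single scalar feature (e.g., a TGC value),
# computed via the Mann-Whitney relation AUC = P(score_patient > score_control),
# with ties counted as 0.5. Data are hypothetical.

def auc(patients, controls):
    """Probability that a random patient scores higher than a random control."""
    wins = 0.0
    for p in patients:
        for c in controls:
            if p > c:
                wins += 1.0
            elif p == c:
                wins += 0.5
    return wins / (len(patients) * len(controls))

tgc_patients = [0.82, 0.74, 0.91, 0.66, 0.88]   # hypothetical TGC values
tgc_controls = [0.41, 0.52, 0.37, 0.70, 0.45]
print(auc(tgc_patients, tgc_controls))          # -> 0.96
```

An AUC near 1.0 corresponds to the strong group separation the study reports for TGC; an uninformative feature would give an AUC near 0.5.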

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 134
1209 Prismatic Bifurcation Study of a Functionally Graded Dielectric Elastomeric Tube Using Linearized Incremental Theory of Deformations

Authors: Sanjeet Patra, Soham Roychowdhury

Abstract:

In recent times, functionally graded dielectric elastomers (FGDE) have gained significant attention within the realm of soft actuation due to their dual capacity to exert highly localized stresses while maintaining compliant characteristics under electro-mechanical loading. Nevertheless, the full potential of dielectric elastomers (DE) has not been fully explored due to their susceptibility to instabilities when subjected to electro-mechanical loads. As a result, the study and analysis of such instabilities become crucial for the design and realization of dielectric actuators. Prismatic bifurcation is a type of instability that has been recognized in DE tubes. Though several studies have reported analyses of prismatic bifurcation in isotropic DE tubes, studies on the prismatic bifurcation of FGDE tubes remain scarce. Therefore, this paper aims to determine the onset of prismatic bifurcation in an incompressible FGDE tube subjected to electrical loading across the thickness of the tube and to internal pressurization. The analysis has been conducted by imposing two axial boundary conditions on the tube: axially free ends and axially clamped ends. Additionally, the rigidity modulus of the tube has been linearly graded in the thickness direction, with the inner surface of the tube having a lower stiffness than the outer surface. The static equilibrium equations for the deformation of the axisymmetric tube are derived and solved numerically. The condition for prismatic bifurcation of the axisymmetric static equilibrium solutions has been obtained using the linearized incremental constitutive equations. Two modes of bifurcation, corresponding to two different non-circular cross-sectional geometries, have been explored in this study. The outcomes reveal that the FGDE tube experiences prismatic bifurcation before the Hessian criterion of failure is satisfied.
It is observed that the lower mode of bifurcation can be triggered at a lower critical voltage than the higher mode. Furthermore, tubes with a larger stiffness gradient require higher critical voltages to trigger the bifurcation. Moreover, with an increase in the stiffness gradient, a linear variation of the critical voltage with the thickness of the tube is observed. It has been found that applying internal pressure to a tube of low thickness makes the tube less susceptible to bifurcations. A thicker tube with axially free ends is found to be more stable than one with axially clamped ends at the higher mode of bifurcation.
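The linear grading of the rigidity modulus described in the abstract can be written explicitly. The following is a hedged sketch: the symbols (inner and outer moduli and radii) are illustrative notation, not taken from the paper.

```latex
% Linear grading of the rigidity (shear) modulus across the tube wall,
% with the inner surface softer than the outer one (\mu_o > \mu_i):
\mu(R) = \mu_i + \left(\mu_o - \mu_i\right)\,\frac{R - R_i}{R_o - R_i},
\qquad R_i \le R \le R_o ,
```

where $R_i$ and $R_o$ denote the inner and outer radii in the reference configuration; the stiffness gradient discussed in the results corresponds to the slope $(\mu_o - \mu_i)/(R_o - R_i)$.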

Keywords: critical voltage, functionally graded dielectric elastomer, linearized incremental approach, modulus of rigidity, prismatic bifurcation

Procedia PDF Downloads 75
1208 A Development of a Simulation Tool for Production Planning with Capacity-Booking at Specialty Store Retailer of Private Label Apparel Firms

Authors: Erika Yamaguchi, Sirawadee Arunyanrt, Shunichi Ohmori, Kazuho Yoshimoto

Abstract:

In this paper, we suggest a simulation tool to support monthly production planning decisions that maximize profit for specialty store retailers of private label apparel (SPA) firms. Most SPA firms are fabless and outsource production to their subcontractors' factories. Every month, SPA firms book production lines and manpower in the factories. The booking is conducted a few months in advance, based on the demand prediction and the monthly production plan at that time. However, the demand prediction is updated month by month, and the monthly production plan changes to meet the latest prediction. SPA firms then have to adjust the capacities initially booked, within a certain range, to suit the revised monthly production plan. This booking system is called 'capacity-booking'. Although making precise monthly production plans is an important issue for SPA firms, many firms still plan production by empirical rules. In addition, it is also a challenge for SPA firms to match their products with factories while considering demand predictabilities and regulation abilities. In this paper, we suggest a model that addresses these two issues. The objective is to maximize the total profit over certain periods, defined as sales minus the costs of production, inventory, and capacity-booking penalties. To improve monthly production planning at SPA firms, the following points should be considered: demand predictability under random trends, the production plans of the months before and after the target month, and the regulation ability of the capacity-booking. When matching products and factories for outsourcing, it is important to consider the seasonality, volume, and predictability of each product, and the production possibility, size, and regulation ability of each factory. SPA firms have to consider these constraints and place orders with several factories per product.
We modeled these issues as a linear programming problem. To validate the model, an example comprising several computational experiments with an SPA firm is presented. We assume four typical product groups: basic, seasonal (spring/summer), seasonal (fall/winter), and spot products. As a result of the experiments, a monthly production plan was obtained. In the plan, demand uncertainty from random trends is reduced by producing a mix of different product types. Moreover, priority is given to producing high-margin products. In conclusion, we developed a simulation tool to support monthly production planning decisions, which is useful when the production plan is set every month. We considered the features of capacity-booking and the matching of products and factories with different features and conditions.
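The profit objective and capacity-booking structure described above can be sketched in a generic linear-programming form. All symbols below are illustrative notation introduced here, not the paper's own formulation: $x_{f,p,t}$ is the quantity of product $p$ made at factory $f$ in month $t$, $I_{p,t}$ the inventory, $d_{p,t}$ the demand, $B_{f,t}$ the booked capacity, $\alpha$ the allowed regulation range, and $\delta_{f,t}$ the penalized deviation beyond it.

```latex
% Illustrative LP sketch of "sales minus production, inventory, and
% capacity-booking penalty costs" under a booking regulation range alpha:
\max \; \sum_{t}\Big( \mathrm{sales}_t
      - \sum_{f,p} c^{\mathrm{prod}}_{f,p}\, x_{f,p,t}
      - \sum_{p} c^{\mathrm{inv}}_{p}\, I_{p,t}
      - \sum_{f} c^{\mathrm{pen}}_{f}\, \delta_{f,t} \Big)
\\[4pt]
\text{s.t.}\quad
I_{p,t} = I_{p,t-1} + \sum_{f} x_{f,p,t} - d_{p,t},
\qquad
(1-\alpha)\,B_{f,t} - \delta_{f,t} \;\le\; \sum_{p} x_{f,p,t}
  \;\le\; (1+\alpha)\,B_{f,t} + \delta_{f,t},
\qquad x,\, I,\, \delta \ge 0 .
```

The inventory-balance constraint links consecutive months, which is how the plans of the previous and next months influence the target month's plan.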

Keywords: capacity-booking, SPA, monthly production planning, linear programming

Procedia PDF Downloads 517
1207 Journal Bearing with Controllable Radial Clearance, Design and Analysis

Authors: Majid Rashidi, Shahrbanoo Farkhondeh Biabnavi

Abstract:

The hydrodynamic instability phenomenon in a journal bearing may be caused by a reduction in the load carried by the bearing, by an increase in the journal speed, by a change in the lubricant viscosity, or by a combination of these factors. Previous research and development work done to overcome the instability issue of journal bearings operating in the hydrodynamic lubrication regime can be categorized as follows: (a) actively controlling the bearing sleeve using a piezo actuator, (b) including strategically located and shaped internal grooves within the inner surface of the bearing sleeve, (c) actively controlling the bearing sleeve using an electromagnetic actuator, (d) actively and externally pressurizing the lubricant within a journal bearing set, and (e) incorporating tilting pads within the inner surface of the bearing sleeve that assume different equilibrium angular positions in response to changes in bearing design parameters such as speed and load. This work presents an innovative design concept for a 'smart journal bearing' set to operate in a stable hydrodynamic lubrication regime despite variations in bearing speed, load, and lubricant viscosity. The proposed bearing design allows its radial clearance to be adjusted in an attempt to maintain stable bearing operation under conditions that would cause instability for a bearing with a fixed radial clearance. The design concept allows adjusting the radial clearance in small increments on the order of 0.00254 mm. This is achieved by axially moving two symmetric conical rigid cavities that are in close contact with the conically shaped outer shell of a sleeve bearing. The proposed work includes a 3D model of the bearing that depicts the structural interactions of the bearing components. The 3D model is employed to conduct finite element analyses that simulate the mechanical behavior of the bearing from a structural point of view.
The concept of controlling the radial clearance, as presented in this work, is original and has not been proposed or discussed in previous research. A typical journal bearing was analyzed under a set of design parameters, namely r = 1.27 cm (journal radius), c = 0.0254 mm (radial clearance), L = 1.27 cm (bearing length), w = 445 N (bearing load), and μ = 0.028 Pa·s (lubricant viscosity). A shaft speed of 3600 r.p.m. was considered, and the mass supported by the bearing, m, was set to 4.38 kg. The Sommerfeld number associated with the above bearing design parameters turns out to be S = 0.3. This combination resulted in stable bearing operation. Subsequently, the speed was postulated to increase from 3600 r.p.m. to 7200 r.p.m.; the bearing was found to be unstable at the new, increased speed. In order to regain stability, the radial clearance was increased from c = 0.0254 mm to 0.0358 mm. The change in radial clearance was shown to bring the bearing back to a stable operating condition.
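The quoted Sommerfeld number can be reproduced from the listed design parameters. The sketch below uses the standard definition S = (r/c)^2 μN/P with N in rev/s and P the projected bearing pressure W/(L·D); the variable names are ours, but the parameter values come from the abstract.

```python
# Minimal sketch: Sommerfeld number S = (r/c)^2 * mu * N / P for the design
# point quoted in the abstract. P = W / (L * D) is the projected bearing
# pressure; N is the shaft speed in rev/s.
r = 1.27e-2        # journal radius, m
c = 0.0254e-3      # radial clearance, m
L = 1.27e-2        # bearing length, m
W = 445.0          # bearing load, N
mu = 0.028         # lubricant viscosity, Pa*s
N = 3600.0 / 60.0  # shaft speed, rev/s

P = W / (L * 2 * r)              # projected pressure, Pa
S = (r / c) ** 2 * mu * N / P
print(round(S, 2))               # ~0.30, matching the quoted S = 0.3
```

Doubling N to 7200 r.p.m. doubles S for a fixed clearance, while increasing c to 0.0358 mm reduces the (r/c)^2 factor by about half, which is the mechanism behind the clearance adjustment described above.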

Keywords: adjustable clearance, bearing, hydrodynamic, instability, journal

Procedia PDF Downloads 279
1206 A Non-Invasive Method for Assessing the Adrenocortical Function in the Roan Antelope (Hippotragus equinus)

Authors: V. W. Kamgang, A. Van Der Goot, N. C. Bennett, A. Ganswindt

Abstract:

The roan antelope (Hippotragus equinus) is the second largest antelope species in Africa. In recent decades, populations of roan antelope have declined drastically throughout Africa. This situation has led to the development of intensive breeding programmes for the species in Southern Africa, where roan are popular game-ranching herbivores with increasing numbers in captivity. Nowadays, the avoidance of stress is important when managing wildlife to ensure animal welfare. In this regard, a non-invasive approach to monitoring adrenocortical function as a measure of stress is preferable, since animals are not disturbed during sample collection. However, to date, such a non-invasive method had not been established for the roan antelope. In this study, we validated a non-invasive technique to monitor adrenocortical function in this species. We performed an adrenocorticotropic hormone (ACTH) stimulation test at the Lapalala Wilderness reserve, South Africa, using adult captive roan antelope to determine the stress-related physiological responses. Two individually housed roan antelope (a male and a female) received an intramuscular injection of Synacthen Depot (Novartis) loaded into a 3 ml syringe (Pneu-Dart) at an estimated dose of 1 IU/kg. A total of 86 faecal samples (male: 46, female: 40) were collected from 5 days before to 3 days post-injection. All samples were then lyophilised, pulverized, and extracted with 80% ethanol (0.1 g/3 ml), and the resulting faecal extracts were analysed for immunoreactive faecal glucocorticoid metabolite (fGCM) concentrations using five enzyme immunoassays (EIAs): (i) 11-oxoaetiocholanolone I (detecting 11,17-dioxoandrostanes), (ii) 11-oxoaetiocholanolone II (detecting fGCMs with a 5α-pregnane-3α-ol-11-one structure), (iii) 5α-pregnane-3β,11β,21-triol-20-one (measuring 3β,11β-diol CM), (iv) cortisol, and (v) corticosterone.
In both animals, all EIAs detected an increase in fGCM concentrations post-ACTH administration. However, the 11-oxoaetiocholanolone I EIA performed best, with a 20-fold increase in the male (baseline: 0.384 µg/g DW; peak: 8.585 µg/g DW) and a 17-fold increase in the female (baseline: 0.323 µg/g DW; peak: 7.276 µg/g DW), measured 17 hours and 12 hours post-administration, respectively. These results are important, as the ability to assess adrenocortical function non-invasively in roan can now be used as an essential prerequisite for evaluating the effects of stressful circumstances, such as variations in environmental conditions or reproduction, in order to improve management strategies for the conservation of this iconic antelope species.

Keywords: adrenocorticotropic hormone challenge, adrenocortical function, captive breeding, non-invasive method, roan antelope

Procedia PDF Downloads 139
1205 The Possible Interaction between Bisphenol A, Caffeine and Epigallocatechin-3-Gallate on Neurotoxicity Induced by Manganese in Rats

Authors: Azza A. Ali, Hebatalla I. Ahmed, Asmaa Abdelaty

Abstract:

Background: Manganese (Mn) is a naturally occurring element. Exposure to high levels of Mn causes neurotoxic effects and represents an environmental risk factor. Mn neurotoxicity is poorly understood, but changes in AChE activity and monoamines, as well as oxidative stress, have been established. Bisphenol A (BPA) is a synthetic compound widely used in the production of polycarbonate plastics. There is considerable debate about whether BPA exposure represents an environmental risk. Caffeine is one of the major contributors among dietary antioxidants, which prevent oxidative damage and may reduce the risk of chronic neurodegenerative diseases. Epigallocatechin-3-gallate (EGCG) is another major component of green tea and has known interactions with caffeine; it also has health-promoting effects in the CNS. Objective: To evaluate the potential protective effects of caffeine and/or EGCG against Mn-induced neurotoxicity, either alone or in the presence of BPA, in rats. Methods: Seven groups of rats were used; all received MnCl2·4H2O (10 mg/kg, IP) daily for 5 weeks except the control group, which received saline, corn oil, and distilled H2O. Mn was injected either alone or in combination with each of the following: BPA (50 mg/kg, PO), caffeine (10 mg/kg, PO), EGCG (5 mg/kg, IP), caffeine + EGCG, and BPA + caffeine + EGCG. All rats were examined in five behavioral tests (grid, bar, swimming, open field, and Y-maze tests). Biochemical changes in monoamines, caspase-3, PGE2, GSK-3B, glutamate, acetylcholinesterase, and oxidative parameters, as well as histopathological changes in the brain, were also evaluated for all groups. Results: Mn significantly increased MDA and nitrite content as well as caspase-3, GSK-3B, PGE2, and glutamate levels, while it significantly decreased TAC and SOD as well as cholinesterase in the striatum. It also decreased DA, NE, and 5-HT levels in the striatum and frontal cortex.
BPA together with Mn enhanced the oxidative stress generation induced by Mn, while it increased the monoamine content that had been decreased by Mn in the rat striatum. BPA abolished the neuronal degeneration induced by Mn in the hippocampus but not in the substantia nigra, striatum, or cerebral cortex. Behavioral examinations showed that caffeine and EGCG co-administration had a more pronounced protective effect against Mn-induced neurotoxicity than either one alone. EGCG alone or in combination with caffeine prevented the neuronal degeneration induced by Mn in the substantia nigra, striatum, hippocampus, and cerebral cortex, while caffeine alone prevented neuronal degeneration in the substantia nigra and striatum but still showed some nuclear pyknosis in the cerebral cortex and hippocampus. The marked protection afforded by caffeine and EGCG co-administration was also confirmed by the significant increase in TAC, SOD, AChE, DA, NE, and 5-HT, as well as the decrease in MDA, nitrite, caspase-3, PGE2, GSK-3B, and glutamic acid, in the striatum. Conclusion: The neuronal degeneration induced by Mn showed some inhibition with BPA exposure, despite the enhancement of oxidative stress generation. Co-administration of EGCG and caffeine can protect against the neuronal degeneration induced by Mn and improve the behavioral deficits associated with its neurotoxicity. The protective effect of EGCG was more pronounced than that of caffeine, even with BPA co-exposure.

Keywords: manganese, bisphenol a, caffeine, epigallocatechin-3-gallate, neurotoxicity, behavioral tests, rats

Procedia PDF Downloads 225