Search results for: teaching approaches
2081 Language and Culture Exchange: Tandem Language Learning for University Students
Authors: Hebe Wong, Luz Fernandez Calventos
Abstract:
Tandem language learning, a language exchange process based on the principles of autonomy and reciprocity, provides opportunities for interlocutors to learn each other’s language by communicating online or face-to-face. While much attention has been paid to the process and outcomes of tandem learning via email, little has been discussed about the effectiveness of face-to-face tandem learning on language and culture exchange for university students. The LACTS (Language and Culture Tandem Scheme), an 8-week project, was set up to study students’ perceptions of conducting tandem learning to assist their language and culture exchange. Students of both post-graduate and undergraduate programmes (N=103) from a Hong Kong SAR university were put in groups of 4 to 6 according to their availability and language preferences and met for an hour a week. While sample task sheets on a range of topics were provided to assist the language exchange, all groups were encouraged to take charge of their meeting format and choose their own topics. At the end of the project, a 19-item questionnaire, which included both open-and closed-ended questions investigating students’ perceptions of reciprocal teaching and cultural exchange, was administered. Thirty-minute individual interviews were conducted to elicit students’ views and experiences in the LACTS activities. Quantitative and qualitative data analysis showed that most students agreed that the project had enhanced their cultural awareness and helped create an inclusive and participatory learning environment. Significant differences were found in students’ confidence in speaking their targeted language after joining the scheme. The interviews also provided rich data on the variety of formats and leadership patterns in student-led meetings, which could shed light on student autonomy and future tandem language learning projects.Keywords: autonomy, reciprocity, tandem language learning, university students
Procedia PDF Downloads 58

2080 The Determination of Stress Experienced by Nursing Undergraduate Students during Their Education
Authors: Gülden Küçükakça, Şefika Dilek Güven, Rahşan Kolutek, Seçil Taylan
Abstract:
Objective: Nursing students face stress factors that affect academic performance and quality of life from the first moments of their educational life. Stress causes physical, psycho-social, and behavioral health problems in students and can damage the formation of professional identity by decreasing the efficiency of education. In addition to determining the stress experienced by nursing students during their education, this study aimed to help review theoretical and clinical education settings so that students' stress can be kept at a constructive level, and to raise educators' awareness of their own professional behaviors. Methods: The study was conducted with 315 students who were enrolled in the nursing department of Semra and Vefa Küçük Health High School, Nevşehir Hacı Bektaş Veli University, in the 2015-2016 academic year and agreed to participate. A “Personal Information Form” prepared by the researchers on the basis of the literature review and the “Nursing Education Stress Scale (NESS)” were used for data collection. Data were assessed with analysis of variance and correlation analysis. Results: The mean NESS score of the nursing students was 66.46±16.08 points. Conclusions: The stress level experienced by nursing undergraduate students during their education was found to be high. Accordingly, it is recommended that the sources of this stress be identified and that approaches be developed to eliminate them.
Keywords: stress, nursing education, nursing student, nursing education stress
Procedia PDF Downloads 469

2079 A Comprehensive Study of Spread Models of Wildland Fires
Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation that is used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors like weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. By using a comparison approach, a critical analysis is provided by identifying patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights that are provided by synthesizing established information. Fire spread models provide insights into potential fire behavior, facilitating authorities to make informed decisions about evacuation activities, allocating resources for fire-fighting efforts, and planning for preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and promotes our understanding of the way forest fires spread. Some of the known models in this field are Rothermel’s wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, cellular automata model, and others. The key characteristics that these models consider include weather (includes factors such as wind speed and direction), topography (includes factors like landscape elevation), and fuel availability (includes factors like types of vegetation) among other factors. The models discussed are physics-based, data-driven, or hybrid models, also utilizing ML techniques like attention-based neural networks to enhance the performance of the model. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action. 
Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling
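The cellular automata family of models named in this abstract can be illustrated with a very small sketch. The grid size, ignition point, and spread probability below are invented for illustration only and are not taken from any of the cited models; a real simulator would derive the spread probability from wind, slope, and fuel data.

```python
import numpy as np

# Minimal cellular-automata fire-spread sketch (illustrative parameters only).
# Cell states: 0 = empty, 1 = fuel, 2 = burning, 3 = burnt.
EMPTY, FUEL, BURNING, BURNT = 0, 1, 2, 3

def step(grid, p_spread=0.45, rng=np.random.default_rng(0)):
    """Advance the fire by one time step on a 2D grid."""
    new = grid.copy()
    for r, c in np.argwhere(grid == BURNING):
        # Each burning cell may ignite its 4-neighbours that still hold fuel.
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]:
                if grid[rr, cc] == FUEL and rng.random() < p_spread:
                    new[rr, cc] = BURNING
        new[r, c] = BURNT  # a burning cell burns out after one step
    return new

grid = np.full((50, 50), FUEL)
grid[25, 25] = BURNING          # single ignition point (assumed)
for _ in range(40):
    grid = step(grid)
print("burnt fraction:", np.mean(grid == BURNT))
```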
Procedia PDF Downloads 81

2078 The Global Relationship between the Prevalence of Diabetes Mellitus and Incidence of Tuberculosis: 2000-2012
Authors: Alaa Badawi, Suzan Sayegh, Mohamed Sallam, Eman Sadoun, Mohamed Al-Thani, Muhammad W. Alam, Paul Arora
Abstract:
Background: The dual burden of tuberculosis (TB) and diabetes mellitus (DM) has increased over the past decade with DM prevalence increasing in countries already afflicted with a high burden of TB. The coexistence of the two conditions presents a serious threat to global public health. Objective: The present study examines the global relationship between the prevalence of DM and the incidence of TB to evaluate their coexistence worldwide and their contribution to one another. Methods: This is an ecological longitudinal study covering the period between years 2000 to 2012. We utilized data from the WHO and World Bank sources and International Diabetes Federation to estimate prevalence of DM (%) and the incidence of TB (per 100,000). Measures of central tendency and dispersion as well as the harmonic mean and linear regression were used for different WHO regions. The association between DM prevalence and TB incidence was examined by quartile of DM prevalence. Results: The worldwide average (±S.D.) prevalence of DM within the study period was 6.6±3.8% whereas TB incidence was 135.0±190.5 per 100,000. DM prevalence was highest in the Eastern Mediterranean (8.3±4.1) and West Pacific (8.2±5.6) regions and lowest in the Africa (3.5±2.6). TB incidence was highest in Africa (313.1±275.9 per 100,000) and South-East Asia (216.7±124.9) and lowest in the European (46.5±68.6) and American (47.2±52.9) regions. Only countries with high DM prevalence (>7.6%) showed a significant positive association with TB incidence (r=0.17, p=0.013). Conclusion: A positive association between DM and TB may exist in some – but not all – world regions, a dual burden that necessitates identifying the nature of this coexistence to assist in developing public health approaches that curb their rising burden.Keywords: diabetes mellitus, tuberculosis, disease burden, global association
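The quartile-stratified association described in this abstract can be reproduced with a short pandas/SciPy sketch. The file name and column names are placeholders; the study's actual data came from WHO, World Bank, and International Diabetes Federation sources.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical table with one row per country-year:
# 'dm_prevalence' (%) and 'tb_incidence' (per 100,000).
df = pd.read_csv("dm_tb_2000_2012.csv")  # placeholder file name

# Split observations into quartiles of DM prevalence, then test the
# DM-TB association separately within each quartile.
df["dm_quartile"] = pd.qcut(df["dm_prevalence"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
for q, sub in df.groupby("dm_quartile"):
    r, p = pearsonr(sub["dm_prevalence"], sub["tb_incidence"])
    print(f"{q}: r = {r:.2f}, p = {p:.3f}")
# The study reports a significant positive association (r = 0.17, p = 0.013)
# only in the highest-prevalence group (> 7.6%).
```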
Procedia PDF Downloads 466

2077 Generation Mechanism of Opto-Acoustic Wave from in vivo Imaging Agent
Authors: Hiroyuki Aoki
Abstract:
The optoacoustic effect is the energy conversion phenomenon from light to sound. In recent years, this optoacoustic effect has been utilized for an imaging agent to visualize a tumor site in a living body. The optoacoustic imaging agent absorbs the light and emits the sound signal. The sound wave can propagate in a living organism with a small energy loss; therefore, the optoacoustic imaging method enables the molecular imaging of the deep inside of the body. In order to improve the imaging quality of the optoacoustic method, the more signal intensity is desired; however, it has been difficult to enhance the signal intensity of the optoacoustic imaging agent because the fundamental mechanism of the signal generation is unclear. This study deals with the mechanism to generate the sound wave signal from the optoacoustic imaging agent following the light absorption by experimental and theoretical approaches. The optoacoustic signal efficiency for the nano-particles consisting of metal and polymer were compared, and it was found that the polymer particle was better. The heat generation and transfer process for optoacoustic agents of metal and polymer were theoretically examined. It was found that heat generated in the metal particle rapidly transferred to the water medium, whereas the heat in the polymer particle was confined in itself. The confined heat in the small particle induces the massive volume expansion, resulting in the large optoacoustic signal for the polymeric particle agent. Thus, we showed that heat confinement is a crucial factor in designing the highly efficient optoacoustic imaging agent.Keywords: nano-particle, opto-acoustic effect, in vivo imaging, molecular imaging
Procedia PDF Downloads 131

2076 Probabilistic Approach to the Spatial Identification of the Environmental Sources behind Mortality Rates in Europe
Authors: Alina Svechkina, Boris A. Portnov
Abstract:
In line with a rapid increase in pollution sources and the enforcement of stricter air pollution regulations, which lower pollution levels, it becomes more difficult to identify the actual risk sources behind observed morbidity patterns, and new approaches are required to identify potential risks and take preventive action. In the present study, we discuss a probabilistic approach to the spatial identification of a priori unidentified environmental health hazards. The underlying assumption behind the tested approach is that observed adverse health patterns (morbidity, mortality) can become a source of information on the geographic location of the environmental risk factors that stand behind them. Using this approach, we analyzed sources of environmental exposure using data on mortality rates available for the year 2015 for NUTS 3 (Nomenclature of Territorial Units for Statistics) subdivisions of the European Union. We identified several areas in the southwestern part of Europe as primary risk sources for the observed mortality patterns. Multivariate regressions, controlling for geographical location, climate conditions, GDP (gross domestic product) per capita, dependency ratios, population density, and the level of road freight, revealed that mortality rates decline as a function of distance from the identified hazard location. We recommend the proposed approach as an exploratory analysis tool for the initial investigation of regional morbidity patterns and the factors behind them.
Keywords: mortality, environmental hazards, air pollution, distance decay gradient, multi regression analysis, Europe, NUTS3
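A distance-decay regression of the kind reported above can be sketched with statsmodels. The variable names, file name, and log-distance functional form are assumptions for illustration; the study itself controlled for geographic location, climate, GDP per capita, dependency ratios, population density, and road freight.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical NUTS-3 level table: mortality rate plus distance (km) from the
# candidate hazard location and a few of the controls used in the study.
df = pd.read_csv("nuts3_mortality_2015.csv")  # placeholder file name

model = smf.ols(
    "mortality_rate ~ np.log(distance_km) + gdp_per_capita + "
    "population_density + dependency_ratio + road_freight",
    data=df,
).fit()
print(model.summary())
# A negative, significant coefficient on log(distance_km) is consistent with
# mortality rates declining with distance from the identified hazard source.
```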
Procedia PDF Downloads 167

2075 Business Program Curriculum with Industry-Recognized Certifications: An Empirical Study of Exam Results and Program Curriculum
Authors: Thomas J. Bell III
Abstract:
Pursuing a business degree is fraught with perplexing questions regarding the rising tuition cost and the immediate value of earning a degree. Any decision to pursue an undergraduate business degree is perceived to have value if it facilitates post-graduate job placement. Business programs have decreased value in the absence of innovation in business programs that close the skills gap between recent graduates and employment opportunities. Industry-based certifications are seemingly becoming a requirement differentiator among job applicants. Texas Wesleyan University offers a Computer Information System (CIS) program with an innovative curriculum that integrates industry-recognized certification training into its traditional curriculum with core subjects and electives. This paper explores a culture of innovation in the CIS business program curriculum that creates sustainable stakeholder value for students, employers, the community, and the university. A quantitative research methodology surveying over one-hundred students in the CIS program will be used to examine factors influencing the success or failure of students taking certification exams. Researchers will analyze control variables to identify specific correlations between practice exams, teaching pedagogy, study time, age, work experience, etc. This study compared various exam preparation techniques to corresponding exam results across several industry certification exams. The findings will aid in understanding control variables with correlations that positively and negatively impact exam results. Such discovery may provide useful insight into pedagogical impact indicators that positively contribute to certification exam success and curriculum enhancement.Keywords: taking certification exams, exam training, testing skills, exam study aids, certification exam curriculum
Procedia PDF Downloads 88

2074 Identification and Prioritisation of Students Requiring Literacy Intervention and Subsequent Communication with Key Stakeholders
Authors: Emilie Zimet
Abstract:
During networking and NCCD moderation meetings, best practices for identifying students who require Literacy Intervention are often discussed. Once these students are identified, consideration is given to the most effective process for prioritising those who have the greatest need for Literacy Support and the allocation of resources, tracking of intervention effectiveness and communicating with teachers/external providers/parents. Through a workshop, the group will investigate best practices to identify students who require literacy support and strategies to communicate and track their progress. In groups, participants will examine what they do in their settings and then compare with other models, including the researcher’s model, to decide the most effective path to identification and communication. Participants will complete a worksheet at the beginning of the session to deeply consider their current approaches. The participants will be asked to critically analyse their own identification processes for Literacy Intervention, ensuring students are not overlooked if they fall into the borderline category. A cut-off for students to access intervention will be considered so as not to place strain on already stretched resources along with the most effective allocation of resources. Furthermore, communicating learning needs and differentiation strategies to staff is paramount to the success of an intervention, and participants will look at the frequency of communication to share such strategies and updates. At the end of the session, the group will look at creating or evolving models that allow for best practices for the identification and communication of Literacy Interventions. The proposed outcome for this research is to develop a model of identification of students requiring Literacy Intervention that incorporates the allocation of resources and communication to key stakeholders. This will be done by pooling information and discussing a variety of models used in the participant's school settings.Keywords: identification, student selection, communication, special education, school policy, planning for intervention
Procedia PDF Downloads 47

2073 Investigating the Impact of Job-Related and Organisational Factors on Employee Engagement: An Emotionally Relevant Approach Based on Psychological Climate and Organisational Emotional Intelligence (OEI)
Authors: Nuno Da Camara, Victor Dulewicz, Malcolm Higgs
Abstract:
This study investigates the impact of job-related and organisational factors on employee engagement. Although theorists have described emotional cognition of the workplace environment as a critical antecedent of employee engagement, empirical research on its impact is limited; previous researchers have typically provided evidence of the link between emotional cognition of the workplace environment and workplace attitudes such as job satisfaction and organisational commitment. This study therefore aims to investigate the impact of emotional cognition of the job, role, leader and organisation domains of the work environment, as represented by measures of psychological climate and organisational emotional intelligence (OEI), on employee engagement. The research is based on a quantitative cross-sectional survey of employees in a UK charity organisation (n=174). The research instruments applied include the psychological climate scale, the organisational emotional intelligence questionnaire (OEIQ) and the Utrecht Work Engagement Scale (UWES). The data were analysed using hierarchical regression and partial least squares (PLS) analytical techniques. The results show that both psychological climate and OEI, which represent emotional cognition of the job, role, leader and organisation domains in the workplace, are significant drivers of employee engagement. In particular, the study found that a sense of contribution and challenge at work are the strongest drivers of vigour, dedication and absorption, and it highlights the importance of emotionally relevant approaches in furthering our understanding of workplace engagement.
Keywords: employee engagement, organisational emotional intelligence, psychological climate, workplace attitudes
Procedia PDF Downloads 505

2072 Development of an Interactive Display-Control Layout Design System for Trains Based on Train Drivers’ Mental Models
Authors: Hyeonkyeong Yang, Minseok Son, Taekbeom Yoo, Woojin Park
Abstract:
Human error is the most salient contributing factor to railway accidents. To reduce the frequency of human errors, many researchers and train designers have adopted ergonomic design principles for designing display-control layout in rail cab. There exist a number of approaches for designing the display control layout based on optimization methods. However, the ergonomically optimized layout design may not be the best design for train drivers, since the drivers have their own mental models based on their experiences. Consequently, the drivers may prefer the existing display-control layout design over the optimal design, and even show better driving performance using the existing design compared to that using the optimal design. Thus, in addition to ergonomic design principles, train drivers’ mental models also need to be considered for designing display-control layout in rail cab. This paper developed an ergonomic assessment system of display-control layout design, and an interactive layout design system that can generate design alternatives and calculate ergonomic assessment score in real-time. The design alternatives generated from the interactive layout design system may not include the optimal design from the ergonomics point of view. However, the system’s strength is that it considers train drivers’ mental models, which can help generate alternatives that are more friendly and easier to use for train drivers. Also, with the developed system, non-experts in ergonomics, such as train drivers, can refine the design alternatives and improve ergonomic assessment score in real-time.Keywords: display-control layout design, interactive layout design system, mental model, train drivers
Procedia PDF Downloads 306

2071 Energy Efficiency and Sustainability Analytics for Reducing Carbon Emissions in Oil Refineries
Authors: Gaurav Kumar Sinha
Abstract:
The oil refining industry, significant in its energy consumption and carbon emissions, faces increasing pressure to reduce its environmental footprint. This article explores the application of energy efficiency and sustainability analytics as crucial tools for reducing carbon emissions in oil refineries. Through a comprehensive review of current practices and technologies, this study highlights innovative analytical approaches that can significantly enhance energy efficiency. We focus on the integration of advanced data analytics, including machine learning and predictive modeling, to optimize process controls and energy use. These technologies are examined for their potential to not only lower energy consumption but also reduce greenhouse gas emissions. Additionally, the article discusses the implementation of sustainability analytics to monitor and improve environmental performance across various operational facets of oil refineries. We explore case studies where predictive analytics have successfully identified opportunities for reducing energy use and emissions, providing a template for industry-wide application. The challenges associated with deploying these analytics, such as data integration and the need for skilled personnel, are also addressed. The paper concludes with strategic recommendations for oil refineries aiming to enhance their sustainability practices through the adoption of targeted analytics. By implementing these measures, refineries can achieve significant reductions in carbon emissions, aligning with global environmental goals and regulatory requirements.Keywords: energy efficiency, sustainability analytics, carbon emissions, oil refineries, data analytics, machine learning, predictive modeling, process optimization, greenhouse gas reduction, environmental performance
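As a concrete illustration of the predictive-modeling step discussed above, the sketch below fits a gradient-boosting regressor to historical process data to forecast energy use. All column names and the file name are hypothetical, and the article does not prescribe this particular algorithm; it is one reasonable choice among many.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical historian export: process variables plus measured energy use.
df = pd.read_csv("refinery_process_data.csv")  # placeholder file name
features = ["throughput", "furnace_temp", "ambient_temp", "steam_rate", "crude_api"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["energy_consumption"], test_size=0.2, random_state=42
)

model = GradientBoostingRegressor(random_state=42).fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE:", mean_absolute_error(y_test, pred))
# Such a model can flag periods where actual energy use exceeds the predicted
# baseline, pointing to efficiency (and emissions-reduction) opportunities.
```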
Procedia PDF Downloads 31

2070 Comparing the Effects of Ondansetron and Acupressure in PC6 Point on Postoperative Nausea and Vomiting in Patients Undergone Elective Cesarean Section: A Randomized Clinical Trial
Authors: Nasrin Galehdar, Sedigheh Nadri, Elham Nazari, Isan Darvishi, Abouzar Mohammadi
Abstract:
Background and aim: Nausea and vomiting are complications of cesarean section. Both pharmacological and non-pharmacological approaches are applied to decrease postoperative nausea and vomiting. The aim of the present study was to compare the effects of ondansetron and acupressure on postoperative nausea and vomiting in patients who underwent elective cesarean section. Materials and method: The study was designed as a randomized clinical trial. A total of 120 patients were allocated to two equal groups. Four mg of ondansetron was administered to the ondansetron group after clamping of the umbilical cord. Acupressure bracelets were fastened at the PC6 point for 15 minutes in the acupressure group. The patients were monitored in terms of the incidence, severity, and episodes of nausea and vomiting. The data obtained were analyzed with SPSS software version 18 at a significance level of 0.05. Results: There was no statistically significant difference in nausea severity among the groups intra-operatively or in the recovery and surgery wards. The incidence and episodes of vomiting were significantly higher in the acupressure group intra-operatively and in the recovery and surgery wards (P < 0.05). Conclusion: No significant effect of acupressure was found in reducing postoperative nausea and vomiting. Thus, studies with larger sample sizes comparing the effects of acupressure with other antiemetic medications are suggested.
Keywords: ondansetron, acupressure, nausea, vomiting
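The between-group comparison of vomiting incidence reported above (originally run in SPSS 18) can equally be expressed as a chi-square test; the counts below are invented purely to show the mechanics and are not taken from the trial.

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 table (invented counts): rows = groups, columns = outcome.
#                 vomiting   no vomiting
table = [[ 8, 52],   # ondansetron group (n = 60)
         [19, 41]]   # acupressure group (n = 60)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# p < 0.05 would indicate a significant difference in vomiting incidence
# between the two groups, as the study reports for several time points.
```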
Procedia PDF Downloads 109

2069 A Mixed-Integer Nonlinear Program to Optimally Pace and Fuel Ultramarathons
Authors: Kristopher A. Pruitt, Justin M. Hill
Abstract:
The purpose of this research is to determine the pacing and nutrition strategies which minimize completion time and carbohydrate intake for athletes competing in ultramarathon races. The model formulation consists of a two-phase optimization. The first-phase mixed-integer nonlinear program (MINLP) determines the minimum completion time subject to the altitude, terrain, and distance of the race, as well as the mass and cardiovascular fitness of the athlete. The second-phase MINLP determines the minimum total carbohydrate intake required for the athlete to achieve the completion time prescribed by the first phase, subject to the flow of carbohydrates through the stomach, liver, and muscles. Consequently, the second phase model provides the optimal pacing and nutrition strategies for a particular athlete for each kilometer of a particular race. Validation of the model results over a wide range of athlete parameters against completion times for real competitive events suggests strong agreement. Additionally, the kilometer-by-kilometer pacing and nutrition strategies, the model prescribes for a particular athlete suggest unconventional approaches could result in lower completion times. Thus, the MINLP provides prescriptive guidance that athletes can leverage when developing pacing and nutrition strategies prior to competing in ultramarathon races. Given the highly-variable topographical characteristics common to many ultramarathon courses and the potential inexperience of many athletes with such courses, the model provides valuable insight to competitors who might otherwise fail to complete the event due to exhaustion or carbohydrate depletion.Keywords: nutrition, optimization, pacing, ultramarathons
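The paper's two-phase MINLP is not reproduced here, but the flavour of the first phase (choosing segment speeds that minimize completion time under a physiological budget) can be sketched with a simplified continuous optimization. The energy-cost expression, grades, speed bounds, and budget below are assumptions for illustration only and are far simpler than the authors' formulation.

```python
import numpy as np
from scipy.optimize import minimize

# One race segment per kilometre: distance (km) and average grade (fraction).
dist = np.ones(10)                      # a hypothetical 10 km course
grade = np.array([0, .02, .05, .08, .03, 0, -.02, -.05, .04, .01])

def total_time(speed_kmh):
    return np.sum(dist / speed_kmh)     # hours

def energy_used(speed_kmh):
    # Crude placeholder cost: rises with speed and with uphill grade.
    return np.sum(dist * speed_kmh * (1.0 + 8.0 * np.clip(grade, 0, None)))

budget = 95.0                           # hypothetical energy budget
res = minimize(
    total_time,
    x0=np.full(10, 8.0),                # start from an even 8 km/h pace
    bounds=[(3.0, 15.0)] * 10,
    constraints=[{"type": "ineq", "fun": lambda v: budget - energy_used(v)}],
)
print("optimal km-by-km speeds (km/h):", np.round(res.x, 2))
print("completion time (h):", round(total_time(res.x), 2))
```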
Procedia PDF Downloads 189

2068 Analyzing the Implementation of Education for Sustainability: Focusing on Leadership Skills in Secondary School in Côte d'Ivoire
Authors: Elysee Guy Yohou
Abstract:
Côte d'Ivoire established a National Commission for Sustainable Development with a view to implementing the ESD. This study aims to understand the knowledge, attitude and practice about education for sustainability of teachers, students, principals, and staff in secondary schools in Côte d’Ivoire while exploring the barriers, levers and examines the leadership skills needed to help carrying out ESD. The data collection took place in October and December 2015. Questionnaires were administered to 400 participants, which involved teachers, students, principals and staff in 25 public and private secondary schools in four regional offices of education. 297 questionnaires were collected producing a collection-rate of 74.25%. Descriptive statistics, independent t-test, dependent sample t-test, One way ANOVA, Pearson correlation were used to analyze the data. Thereupon, knowledge, attitudes about education for sustainability of teachers, principals and staff in secondary school are better than students. However, there is little practice of ESD. 68.3% of participants are not familiar with the Decade of Education for Sustainable Development. In addition, 92.8% of schools do not have a school Agenda 21. The major barriers that prevent the teaching of education for sustainability are lack of access to technical tools, insufficient funding and lack of information. The main levers are teacher and staff training, financing, awareness of students, and public engagement. Principals do possess good human and technical skills but limited conceptual skills. The study showed that conceptual and human skills are convenient assets which rhyme more with education for sustainability. Thereupon, if schools’ principal need to improve education for sustainability through practice, they need more conceptual skills.Keywords: Côte d'Ivoire, education for sustainability, leadership skills, secondary school
Procedia PDF Downloads 160

2067 Geosynthetic Reinforced Unpaved Road: Literature Study and Design Example
Authors: D. Jayalakshmi, S. S. Bhosale
Abstract:
This paper, in its first part, presents the state-of-the-art literature of design approaches for geosynthetic reinforced unpaved roads. The literature starting since 1970 and the critical appraisal of flexible pavement design by Giroud and Han (2004) and Jonathan Fannin (2006) is presented. The design example is illustrated for Indian conditions. The example emphasizes the results computed by Giroud and Han's (2004) design method with the Indian road congress guidelines by IRC SP 72 -2015. The input data considered are related to the subgrade soil condition of Maharashtra State in India. The unified soil classification of the subgrade soil is inorganic clay with high plasticity (CH), which is expansive with a California bearing ratio (CBR) of 2% to 3%. The example exhibits the unreinforced case and geotextile as reinforcement by varying the rut depth from 25 mm to 100 mm. The present result reveals the base thickness for the unreinforced case from the IRC design catalogs is in good agreement with Giroud and Han (2004) approach for a range of 75 mm to 100 mm rut depth. Since Giroud and Han (2004) method is applicable for both reinforced and unreinforced cases, for the same data with appropriate Nc factor, for the same rut depth, the base thickness for the reinforced case has arrived for the Indian condition. From this trial, for the CBR of 2%, the base thickness reduction due to geotextile inclusion is 35%. For the CBR range of 2% to 5% with different stiffness in geosynthetics, the reduction in base course thickness will be evaluated, and the validation will be executed by the full-scale accelerated pavement testing set up at the College of Engineering Pune (COE), India.Keywords: base thickness, design approach, equation, full scale accelerated pavement set up, Indian condition
Procedia PDF Downloads 193

2066 An Integrative Model of Job Characteristics Key Attitudes and Intention to Leave Among Faculty in Higher Education
Authors: Bhavna Malik
Abstract:
The study is built on a theoretical framework that links job characteristics, key attitudes, and intention to leave in order to explain why faculty may be disengaging from institutional service. The literature indicates that job characteristics, key attitudes, and intention to leave are very important for effective organizational functioning. In general, the literature shows that some job characteristics may be antecedents of job satisfaction, that the aggregate variable job scope is positively associated with organizational commitment, and that these key attitudes predict intention to leave negatively. The present study attempts to propose a new integrative model of the relationships among job characteristics, key attitudes, and intention to leave. The main purpose of the present study is to examine the effects of job characteristics on intention to leave. While examining the role of job characteristics, the mediating roles of key attitudes were taken into account in order to better understand how job characteristics affect the exhibition of intention to leave. The secondary purpose is to investigate the effects of job characteristics on key attitudes and the effects of key attitudes on intention to leave. The job characteristics of remuneration, resources for professional activities, and career opportunities were positively associated with the work attitude of job satisfaction. The aggregate job scope was positively associated with the work attitude of organizational commitment, although no single job characteristic was significantly associated with organizational commitment. Commitment, however, did not significantly affect time spent on institutional service. Two job characteristics, time spent on research and time spent on teaching, were negatively associated with this behavior. In turn, job satisfaction and organizational commitment were negatively associated with the intention to leave. However, no significant direct association was found between job characteristics and intention to leave.
Keywords: Job Characteristics Model, job satisfaction, organizational commitment, intention to leave
Procedia PDF Downloads 491

2065 Yawning Computing Using Bayesian Networks
Authors: Serge Tshibangu, Turgay Celik, Zenzo Ncube
Abstract:
Road crashes kill over a million people every year and leave millions more injured or permanently disabled. Various annual reports reveal that the percentage of fatal crashes due to fatigue or the driver falling asleep comes directly after the percentage of fatal crashes due to intoxicated drivers, and is higher than the combined percentage of fatal crashes due to illegal or unsafe U-turns and illegal or unsafe reversing. Although a relatively small percentage of police reports on road accidents highlights drowsiness and fatigue, the importance of these factors is greater than we might think, hidden by the undercounting of such events. Some scenarios show that these factors are significant in accidents involving fatalities and injuries. Hence the need for an automatic driver fatigue detection system to considerably reduce the number of accidents owing to fatigue. This research approaches the driver fatigue detection problem in an innovative way by combining cues collected from both temporal analysis of drivers’ faces and the environment. Monotony in the driving environment is inter-related with visual symptoms of fatigue on drivers’ faces to achieve fatigue detection. Optical and infrared (IR) sensors are used to analyse the monotony in the driving environment and to detect the visual symptoms of fatigue on the human face. Internal cues from drivers’ faces and external cues from the environment are combined using machine learning algorithms to automatically detect fatigue.
Keywords: intelligent transportation systems, Bayesian networks, yawning computing, machine learning algorithms
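The cue-fusion idea can be made concrete with a tiny naive-Bayes combination of one facial cue (yawning detected) and one environmental cue (monotonous road). The prior and conditional probabilities are invented for illustration; the paper itself uses Bayesian networks built from sensor data.

```python
# Invented probability tables for a two-cue naive-Bayes fusion (illustration only).
p_fatigue = 0.15                                  # prior P(fatigued)
p_yawn_given = {True: 0.70, False: 0.10}          # P(yawning detected | fatigued?)
p_monotony_given = {True: 0.60, False: 0.30}      # P(monotonous road | fatigued?)

def posterior_fatigue(yawning: bool, monotony: bool) -> float:
    """P(fatigued | observed cues) under the naive-Bayes assumption."""
    like_f = (p_yawn_given[True] if yawning else 1 - p_yawn_given[True]) * \
             (p_monotony_given[True] if monotony else 1 - p_monotony_given[True])
    like_a = (p_yawn_given[False] if yawning else 1 - p_yawn_given[False]) * \
             (p_monotony_given[False] if monotony else 1 - p_monotony_given[False])
    num = like_f * p_fatigue
    return num / (num + like_a * (1 - p_fatigue))

print("P(fatigue | yawn, monotony) =", round(posterior_fatigue(True, True), 3))
print("P(fatigue | no yawn, monotony) =", round(posterior_fatigue(False, True), 3))
```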
Procedia PDF Downloads 455

2064 Facilitating Active Reading Strategies through Caps Chart to Foster Elementary EFL Learners’ Reading Skills and Reading Competency
Authors: Michelle Bulawan, Mei-Hua Chen
Abstract:
Reading comprehension is crucial for acquiring information, analyzing critically, and achieving academic proficiency. However, there is a lack of growth in reading comprehension skills beyond fourth grade. The developmental shift from "learning to read" to "reading to learn" occurs around this stage. Factual knowledge and diverse views in articles enhance reading comprehension abilities. Nevertheless, some face difficulties due to evolving textual requirements, such as expanding vocabulary and using longer, more complex terminology. Most research on reading strategies has been conducted at the tertiary and secondary levels, while few have focused on the elementary levels. Furthermore, the use of character, ask, problem, solution (CAPS) charts in teaching reading has also been hardly explored. Thus, the researcher decided to explore the facilitation of active reading strategies through the CAPS chart and address the following research questions: a) What differences existed in elementary EFL learners' reading competency among those who engaged in active reading strategies and those who did not? b) What are the learners’ metacognitive skills of those who engage in active reading strategies and those who do not, and what are their effects on their reading competency? c) For those participants who engage in active reading activities, what are their perceptions about incorporating active reading activities into their English classroom learning? Two groups of elementary EFL learners, each with 18 students of the same level of English proficiency, participated in this study. Group A served as the control group, while Group B served as the experimental group. Two teachers also participated in this research; one of them was the researcher who handled the experimental group. The treatment lasts for one whole semester or seventeen weeks. In addition to the CAPS chart, the researcher also used the metacognitive awareness of reading strategy inventory (MARSI) and a ten-item, five-point Likert scale survey.Keywords: active reading, EFL learners, metacognitive skills, reading competency, student’s perception
Procedia PDF Downloads 91

2063 Complicating Representations of Domestic Violence Perpetration through a Qualitative Content Analysis and Socio-Ecological Approach
Authors: Charlotte Lucke
Abstract:
This study contributes to the body of literature that analyzes and complicates oversimplified and sensationalized representations of trauma and violence through a close examination and complication of representations of perpetrators of domestic violence in the mass media. This study determines the ways the media frames perpetrators of domestic violence through a qualitative content analysis and socio-ecological approach to the perpetration of violence. While the qualitative analysis has not been carried out, through preliminary research, this study hypothesizes that the media represents perpetrators through tropes such as the 'predator' or 'offender,' or as a demonized 'other.' It is necessary to expose and work through such stereotypes because cultivation theory demonstrates that the mass media determines societal beliefs about and perceptions of the world. Thus, representations of domestic violence in the mass media can lead people to believe that perpetrators of violence are mere animals or criminals and overlook the trauma that many perpetrators experience. When the media represents perpetrators as pure evil, monsters, or absolute 'others,' it leaves out the complexities of what moves people to commit domestic violence. By analyzing and placing media representations of perpetrators into conversation with the socio-ecological approach to violence perpetration, this study complicates domestic violence stereotypes. The socio-ecological model allows researchers to consider the way the interplay between individuals and their families, friends, communities, and cultures can move people to act violently. Using this model, along with psychological and psychoanalytic approaches to the etiology of domestic violence, this paper argues that media stereotypes conceal the way people’s experiences of trauma, along with community and cultural norms, perpetuates the cycle of systemic trauma and violence in the home.Keywords: domestic violence, media images, representing trauma, theorising trauma
Procedia PDF Downloads 239

2062 Enhance Indoor Environment in Buildings and Its Effect on Improving Occupant's Health
Authors: Imad M. Assali
Abstract:
The world currently faces global warming and climate change, which affect both outdoor and indoor environments, especially air quality (AQ), alongside vast migration of people from rural to urban areas. Cities have become more crowded and denser through irregular population growth, and increasing urbanization has caused many problems for the environment, such as rising land prices, changes in lifestyle, and new buildings that are not adapted to the climate and therefore produce uncomfortable and unhealthy indoor conditions. Interior environments are the places that create the most intimate relationship with the user, yet the indoor environment quality (IEQ) of many buildings has become uncomfortable and unhealthy for occupants. The symptoms commonly associated with a poor indoor environment include itchiness, headache, fatigue, and respiratory complaints such as cough and congestion; they tend to improve or even disappear when people are away from the building. Designing a healthy indoor environment that fulfills human needs is therefore a main concern for architects and interior designers. This research explores how occupant expectations and environmental attitudes may influence occupant health and satisfaction within the context of the indoor environment. In doing so, it reviews and contributes to the methods and tools used to evaluate the indoor environment quality (IEQ) components of building performance. Its main aim is to review the literature on indoor human comfort, followed by a review of previously published papers related to human comfort. Finally, the paper outlines possible design-level approaches for healthy buildings.
Keywords: sustainable building, indoor environment quality (IEQ), occupant's health, active system, sick building syndrome (SBS)
Procedia PDF Downloads 363

2061 Evaluating a Holistic Fitness Program Used by High Performance Athletes and Mass Participants
Authors: Peter Smolianov, Jed Smith, Lisa Chen, Steven Dion, Christopher Schoen, Jaclyn Norberg
Abstract:
This study evaluated the effectiveness of an experimental training program used to improve performance and health of competitive athletes and recreational sport participants. This holistic program integrated and advanced Eastern and Western methods of prolonging elite sports participation and enjoying lifelong fitness, particularly from China, India, Russia, and the United States. The program included outdoor, gym, and water training approaches focused on strengthening while stretching/decompressing and on full body activation-all in order to improve performance as well as treat and prevent common disorders and pains. The study observed and surveyed over 100 users of the program including recreational fitness and sports enthusiasts as well as elite athletes who competed for national teams of different countries and for Division I teams of National Collegiate Athletic Association in the United States. Different types of sport were studied, including territorial games (e.g., American football, basketball, volleyball), endurance/cyclical (athletics/track and field, swimming), and artistic (e.g., gymnastics and synchronized swimming). Results of the study showed positive effects on the participants’ performance and health, particularly for those who used the program for more than two years and especially in reducing spinal disorders and in enabling to perform new training tasks which previously caused back pain.Keywords: lifelong fitness, injury prevention, prolonging sport participation, improving performance and health
Procedia PDF Downloads 155

2060 The Impact of Regulatory Changes on the Development of Mobile Medical Apps
Abstract:
Mobile applications are being used to perform a wide variety of tasks in day-to-day life, ranging from checking email to controlling your home heating. Application developers have recognized the potential to transform a smart device into a medical device, by using a mobile medical application i.e. a mobile phone or a tablet. When initially conceived these mobile medical applications performed basic functions e.g. BMI calculator, accessing reference material etc.; however, increasing complexity offers clinicians and patients a range of functionality. As this complexity and functionality increases, so too does the potential risk associated with using such an application. Examples include any applications that provide the ability to inflate and deflate blood pressure cuffs, as well as applications that use patient-specific parameters and calculate dosage or create a dosage plan for radiation therapy. If an unapproved mobile medical application is marketed by a medical device organization, then they face significant penalties such as receiving an FDA warning letter to cease the prohibited activity, fines and possibility of facing a criminal conviction. Regulatory bodies have finalized guidance intended for mobile application developers to establish if their applications are subject to regulatory scrutiny. However, regulatory controls appear contradictory with the approaches taken by mobile application developers who generally work with short development cycles and very little documentation and as such, there is the potential to stifle further improvements due to these regulations. The research presented as part of this paper details how by adopting development techniques, such as agile software development, mobile medical application developers can meet regulatory requirements whilst still fostering innovation.Keywords: agile, applications, FDA, medical, mobile, regulations, software engineering, standards
Procedia PDF Downloads 360

2059 CAGE Questionnaire as a Screening Tool for Hazardous Drinking in an Acute Admissions Ward: Frequency of Application and Comparison with AUDIT-C Questionnaire
Authors: Ammar Ayad Issa Al-Rifaie, Zuhreya Muazu, Maysam Ali Abdulwahid, Dermot Gleeson
Abstract:
The aim of this audit was to examine the efficiency of alcohol history documentation and screening for hazardous drinkers at the Medical Admission Unit (MAU) of Northern General Hospital (NGH), Sheffield, to identify any potential for enhancing clinical practice. Data were collected from medical clerking sheets, ICE system and directly from 82 patients by three junior medical doctors using both CAGE questionnaire and AUDIT-C tool for newly admitted patients to MAU in NGH, in the period between January and March 2015. Alcohol consumption was documented in around two-third of the patient sample and this was documented fairly accurately by health care professionals. Some used subjective words such as 'social drinking' in the alcohol units’ section of the history. CAGE questionnaire was applied to only four patients and none of the patients had documented advice, education or referral to an alcohol liaison team. AUDIT-C tool had identified 30.4%, while CAGE 10.9%, of patients admitted to the NGH MAU as hazardous drinkers. The amount of alcohol the patient consumes positively correlated with the score of AUDIT-C (Pearson correlation 0.83). Re-audit is planned to be carried out after integrating AUDIT-C tool as labels in the notes and presenting a brief teaching session to junior doctors. Alcohol misuse screening is not adequately undertaken and no appropriate action is being offered to hazardous drinkers. CAGE questionnaire is poorly applied to patients and when satisfactory and adequately used has low sensitivity to detect hazardous drinkers in comparison with AUDIT-C tool. Re-audit of alcohol screening practice after introducing AUDIT-C tool in clerking sheets (as labels) is required to compare the findings and conclude the audit cycle.Keywords: alcohol screening, AUDIT-C, CAGE, hazardous drinking
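For readers unfamiliar with the AUDIT-C tool compared above, a minimal scoring sketch follows. Each of the three items is scored 0-4 (total 0-12); the cut-offs used here (≥4 for men, ≥3 for women) are commonly cited thresholds and are an assumption, since the audit does not state which cut-off was applied.

```python
def audit_c_score(q1: int, q2: int, q3: int) -> int:
    """Sum of the three AUDIT-C items, each scored 0-4 (total 0-12)."""
    for q in (q1, q2, q3):
        if not 0 <= q <= 4:
            raise ValueError("each AUDIT-C item is scored 0-4")
    return q1 + q2 + q3

def is_positive_screen(score: int, sex: str) -> bool:
    # Commonly cited cut-offs (assumed here): >= 4 for men, >= 3 for women.
    return score >= (4 if sex.lower().startswith("m") else 3)

score = audit_c_score(q1=3, q2=1, q3=1)   # example responses
print(score, is_positive_screen(score, "male"))
```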
Procedia PDF Downloads 409

2058 Dispersion Effects in Waves Reflected by Lossy Conductors: The Optics vs. Electromagnetics Approach
Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda
Abstract:
The study of dispersion phenomena in electromagnetic waves reflected by conductors at infrared and lower frequencies is a topic which finds a number of applications. We aim to explain in this work what are the most relevant ones and how this phenomenon is modeled from both optics and electromagnetics points of view. We also explain here how the amplitude of an electromagnetic wave reflected by a lossy conductor could depend on both the frequency of the incident wave, as well as on the electrical properties of the conductor, and we illustrate this phenomenon with a practical example. The mathematical analysis made by a specialist in electromagnetics or a microwave engineer is apparently very different from the one made by a specialist in optics. We show here how both approaches lead to the same physical result and what are the key concepts which enable one to understand that despite the differences in the equations the solution to the problem happens to be the same. Our study starts with an analysis made by using the complex refractive index and the reflectance parameter. We show how this reflectance has a dependence with the square root of the frequency when the reflecting material is a good conductor, and the frequency of the wave is low enough. Then we analyze the same problem with a less known approach, which is based on the reflection coefficient of the electric field, a parameter that is most commonly used in electromagnetics and microwave engineering. In summary, this paper presents a mathematical study illustrated with a worked example which unifies the modeling of dispersion effects made by specialists in optics and the one made by specialists in electromagnetics. The main finding of this work is that it is possible to reproduce the dependence of the Fresnel reflectance with frequency from the intrinsic impedance of the reflecting media.Keywords: dispersion, electromagnetic waves, microwaves, optics
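The agreement between the two viewpoints described above can be checked numerically: the optics route uses the Hagen-Rubens reflectance for a good conductor, R ≈ 1 - 2·sqrt(2·eps0·omega/sigma), while the electromagnetics route forms the field reflection coefficient from the conductor's surface impedance. The conductivity value below is the textbook figure for copper and serves only as a worked example; it is not taken from the paper.

```python
import numpy as np

eps0 = 8.854e-12          # F/m
mu0 = 4e-7 * np.pi        # H/m
Z0 = np.sqrt(mu0 / eps0)  # free-space impedance, ~376.7 ohm
sigma = 5.8e7             # S/m, copper (textbook value, used as an example)

f = np.logspace(9, 13, 5)          # 1 GHz ... 10 THz
w = 2 * np.pi * f

# Optics route: Hagen-Rubens approximation for a good conductor.
R_optics = 1 - 2 * np.sqrt(2 * eps0 * w / sigma)

# Electromagnetics route: surface impedance and reflection coefficient
# at normal incidence.
Zs = (1 + 1j) * np.sqrt(w * mu0 / (2 * sigma))
gamma = (Zs - Z0) / (Zs + Z0)
R_em = np.abs(gamma) ** 2

for fi, ro, re in zip(f, R_optics, R_em):
    print(f"{fi:9.2e} Hz  R_optics={ro:.6f}  R_em={re:.6f}")
# Both columns agree, and 1 - R grows with the square root of frequency,
# the dispersion effect discussed in the paper.
```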
Procedia PDF Downloads 129

2057 Digital Platform for Psychological Assessment Supported by Sensors and Efficiency Algorithms
Authors: Francisco M. Silva
Abstract:
Technology is evolving, creating an impact on our everyday lives and the telehealth industry. Telehealth encapsulates the provision of healthcare services and information via a technological approach. There are several benefits of using web-based methods to provide healthcare help. Nonetheless, few health and psychological help approaches combine this method with wearable sensors. This paper aims to create an online platform for users to receive self-care help and information using wearable sensors. In addition, researchers developing a similar project obtain a solid foundation as a reference. This study provides descriptions and analyses of the software and hardware architecture. Exhibits and explains a heart rate dynamic and efficient algorithm that continuously calculates the desired sensors' values. Presents diagrams that illustrate the website deployment process and the webserver means of handling the sensors' data. The goal is to create a working project using Arduino compatible hardware. Heart rate sensors send their data values to an online platform. A microcontroller board uses an algorithm to calculate the sensor heart rate values and outputs it to a web server. The platform visualizes the sensor's data, summarizes it in a report, and creates alerts for the user. Results showed a solid project structure and communication from the hardware and software. The web server displays the conveyed heart rate sensor's data on the online platform, presenting observations and evaluations.Keywords: Arduino, heart rate BPM, microcontroller board, telehealth, wearable sensors, web-based healthcare
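A minimal server-side sketch of the data path described above is given below using Flask: the Arduino-compatible board posts BPM readings, and the server keeps a rolling average and raises an alert. The endpoint name, JSON fields, and alert threshold are assumptions, since the paper does not publish its API.

```python
from collections import deque
from flask import Flask, jsonify, request

app = Flask(__name__)
recent_bpm = deque(maxlen=60)      # rolling window of the latest readings
ALERT_BPM = 120                    # illustrative alert threshold

@app.route("/api/heart-rate", methods=["POST"])   # hypothetical endpoint
def receive_heart_rate():
    """Accept a BPM value posted by the Arduino-compatible board."""
    bpm = float(request.get_json()["bpm"])
    recent_bpm.append(bpm)
    avg = sum(recent_bpm) / len(recent_bpm)
    return jsonify({"average_bpm": round(avg, 1), "alert": bpm > ALERT_BPM})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```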
Procedia PDF Downloads 126

2056 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images
Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj
Abstract:
Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part, because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates to the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescent microscopy FISH images. In this work, the initial steps were developed toward these goals by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods, allowing subtraction of spurious signals and non-biological fluorescent substrata. This method will be a robust and user-friendly approach which will enable users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescent images for quantitative analysis of biofilm heterogeneity.Keywords: image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization
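One way to realise the semi-automatic boundary detection described above is global Otsu thresholding on the counterstain channel followed by small-object removal; the sketch below uses scikit-image, with the file name and minimum object size as placeholders rather than the authors' actual settings.

```python
from skimage import io, filters, morphology, measure

img = io.imread("fish_biofilm_counterstain.tif", as_gray=True)  # placeholder file

# Global Otsu threshold separates biofilm regions from background,
# then small spurious objects (non-biological signal) are removed.
mask = img > filters.threshold_otsu(img)
mask = morphology.remove_small_objects(mask, min_size=200)      # assumed size cut-off

# Label biofilm regions and pull per-region FISH intensity statistics,
# which relate to cellular ribosome content.
labels = measure.label(mask)
for region in measure.regionprops(labels, intensity_image=img):
    print(region.label, region.area, round(region.mean_intensity, 3))
```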
Procedia PDF Downloads 133

2055 Dual Drug Piperine-Paclitaxel Nanoparticles Inhibit Migration and Invasion in Human Breast Cancer Cells
Authors: Monika Verma, Renuka Sharma, B. R. Gulati, Namita Singh
Abstract:
In combination therapy, two chemotherapeutic agents work together in a collaborative action. It has appeared as one of the promising approaches to improve anti-cancer treatment efficacy. In the present investigation, piperine (P-NPS), paclitaxel (PTX NPS), and a combination of both, piperine-paclitaxel nanoparticle (Pip-PTX NPS), were made by the nanoprecipitation method and later characterized by PSA, DSC, SEM, TEM, and FTIR. All nanoparticles exhibited a monodispersed size distribution with a size of below 200 nm, zeta potential ranges from (-30-40mV) and a narrow polydispersity index (>0.3) of the drugs. The average encapsulation efficiency was found to be between 80 and 90%. In vitro release of drugs for nanoparticles was done spectrophotometrically. FTIR and DSC results confirmed the presence of the drug. The Pip-PTX NPS significantly inhibit cell proliferation as compared to the native drugs nanoparticles in the breast cancer cell line MCF-7. In addition, Pip-PTX NPS suppresses cells in colony formation and soft gel agar assay. Scratch migration and Transwell chamber invasion assays revealed that combined nanoparticles reduce the migration and invasion of breast cancer cells. Morphological studies showed that Pip-PTX NPS penetrates the cells and induces apoptosis, which was further confirmed by DNA fragmentation, SEM, and western blot analysis. Taken together, Pip-PTX NPS inhibits cell proliferation, anchorage dependent and anchorage independent cell growth, reduces migration and invasion, and induces apoptosis in cells. These findings support that combination therapy using Pip-PTX NPS represents a potential approach and could be helpful in the future for breast cancer therapy.Keywords: piperine, paclitaxel, breast cancer, apoptosis
Procedia PDF Downloads 101

2054 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis
Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya
Abstract:
In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). Especially, we introduced a texture analysis approach, called Law’s texture filter, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCCs). MATLAB technical computing language was employed in the extraction of 51 features by using first order statistics (FOS), gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), and Laws’ texture filters. The feature selection method employed was the sequential forward selection (SFS). Selected textural features were used in the automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with k-NN classifier (k=3) and 69% with SVM (with one versus one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with SVM one vs. one. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters as an objective tool to assess tumor histopathological characteristics and in automatic classification of tumor stage and subtype.Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis
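A compressed sketch of the GLCM-feature-plus-SVM part of the pipeline described above follows. The quantisation to 64 grey levels, the feature subset, and the data-loading helper are placeholders; recent scikit-image releases spell the functions graycomatrix/graycoprops, while older releases use the "grey" spelling.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def glcm_features(roi_64levels):
    """Contrast/homogeneity/energy/correlation from a quantised tumour ROI."""
    glcm = graycomatrix(roi_64levels, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=64, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")]

# rois: list of 2D uint8 arrays quantised to 64 levels; labels: 0 = ADC, 1 = SqCC.
rois, labels = load_pet_rois()          # hypothetical loader, not a real API
X = np.array([glcm_features(r) for r in rois])
y = np.array(labels)

clf = SVC(kernel="rbf")                 # one-vs-one is SVC's default multiclass scheme
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```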
Procedia PDF Downloads 326

2053 Virtual Reality and Avatars in Education
Authors: Michael Brazley
Abstract:
Virtual Reality (VR) and 3D videos are the most current generation of learning technology today. Virtual Reality and 3D videos are being used in professional offices and Schools now for marketing and education. Technology in the field of design has progress from two dimensional drawings to 3D models, using computers and sophisticated software. Virtual Reality is being used as collaborative means to allow designers and others to meet and communicate inside models or VR platforms using avatars. This research proposes to teach students from different backgrounds how to take a digital model into a 3D video, then into VR, and finally VR with multiple avatars communicating with each other in real time. The next step would be to develop the model where people from three or more different locations can meet as avatars in real time, in the same model and talk to each other. This research is longitudinal, studying the use of 3D videos in graduate design and Virtual Reality in XR (Extended Reality) courses. The research methodology is a combination of quantitative and qualitative methods. The qualitative methods begin with the literature review and case studies. The quantitative methods come by way of student’s 3D videos, survey, and Extended Reality (XR) course work. The end product is to develop a VR platform with multiple avatars being able to communicate in real time. This research is important because it will allow multiple users to remotely enter your model or VR platform from any location in the world and effectively communicate in real time. This research will lead to improved learning and training using Virtual Reality and Avatars; and is generalizable because most Colleges, Universities, and many citizens own VR equipment and computer labs. This research did produce a VR platform with multiple avatars having the ability to move and speak to each other in real time. Major implications of the research include but not limited to improved: learning, teaching, communication, marketing, designing, planning, etc. Both hardware and software played a major role in project success.Keywords: virtual reality, avatars, education, XR
Procedia PDF Downloads 98

2052 Evaluating Traffic Congestion Using the Bayesian Dirichlet Process Mixture of Generalized Linear Models
Authors: Ren Moses, Emmanuel Kidando, Eren Ozguven, Yassir Abdelrazig
Abstract:
This study applied traffic speed and occupancy to develop clustering models that identify different traffic conditions. Particularly, these models are based on the Dirichlet Process Mixture of Generalized Linear regression (DML) and change-point regression (CR). The model frameworks were implemented using 2015 historical traffic data aggregated at a 15-minute interval from an Interstate 295 freeway in Jacksonville, Florida. Using the deviance information criterion (DIC) to identify the appropriate number of mixture components, three traffic states were identified as free-flow, transitional, and congested condition. Results of the DML revealed that traffic occupancy is statistically significant in influencing the reduction of traffic speed in each of the identified states. Influence on the free-flow and the congested state was estimated to be higher than the transitional flow condition in both evening and morning peak periods. Estimation of the critical speed threshold using CR revealed that 47 mph and 48 mph are speed thresholds for congested and transitional traffic condition during the morning peak hours and evening peak hours, respectively. Free-flow speed thresholds for morning and evening peak hours were estimated at 64 mph and 66 mph, respectively. The proposed approaches will facilitate accurate detection and prediction of traffic congestion for developing effective countermeasures.Keywords: traffic congestion, multistate speed distribution, traffic occupancy, Dirichlet process mixtures of generalized linear model, Bayesian change-point detection
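The paper fits a Dirichlet Process Mixture of generalized linear regressions; as a simplified stand-in, the sketch below fits scikit-learn's truncated Dirichlet-process Gaussian mixture to 15-minute speed/occupancy pairs to recover a small number of traffic states. The column names, file name, and component cap are assumptions.

```python
import pandas as pd
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical detector export: 15-minute aggregated speed (mph) and occupancy (%).
df = pd.read_csv("i295_speed_occupancy_2015.csv")  # placeholder file name
X = df[["speed", "occupancy"]].to_numpy()

# Truncated Dirichlet-process mixture: surplus components get near-zero weight,
# so the effective number of traffic states is inferred from the data.
dpgmm = BayesianGaussianMixture(
    n_components=8,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

df["state"] = dpgmm.predict(X)
print(df.groupby("state")[["speed", "occupancy"]].mean().round(1))
print("component weights:", dpgmm.weights_.round(3))
# High-speed / low-occupancy states correspond to free flow, while the
# low-speed / high-occupancy state corresponds to congestion.
```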
Procedia PDF Downloads 294