Search results for: learning flow
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11413

403 Urban Green Transitioning in The Face of Current Global Change: The Management Role of the Local Government and Residents

Authors: Titilope F. Onaolapo, Christiana A. Breed, Maya Pasgaard, Kristine E. Jensen, Peta Brom

Abstract:

In the face of fast-growing urbanization in most of the world's developing countries, there is a need to understand and address the risks and consequences involved in the indiscriminate use of urban green space. The City of Tshwane in South Africa has the potential to become one of the world's top biodiversity cities: South Africa is ranked among the megadiverse countries for biodiversity conservation, and the Tshwane metropolitan municipality is the South African city richest in biodiversity, with extensive grassland biomes. In this study, we focus on the potentials and challenges of urban green transitioning from a Global South perspective, with Tshwane as the case study. We also address the issue of management conflicts that have resulted in informal and illegal activities in and around green spaces, with consequences such as land degradation, loss of livelihoods and biodiversity, and socio-ecological imbalances. A desk review of eight policy frameworks related to green urban planning and development was conducted based on four green infrastructure (GI) principles: multifunctionality, connectivity, interdisciplinarity, and social inclusion. We interviewed 15 key informants in related departments in the city and administered 200 survey questionnaires among residents. We also held several workshops with other researchers and experts on biodiversity and ecosystems. We found that there is no specific document dedicated to green space management, and where green infrastructure was mentioned, it was framed as an approach to climate mitigation and adaptation. Also, residents perceive green and open spaces as extra land that could be developed at will. We demonstrated the use of collaborative learning approaches in ecological and development research and the value of tying research to existing frameworks, programs, and strategies. Based on this understanding, we outlined the need to incorporate principles of green infrastructure in policy frameworks on spatial planning and environmental development. Furthermore, we developed a model for co-management of green infrastructure by stakeholders, such as residents, developers, policymakers, and decision-makers, to maximize benefits. Our collaborative, interdisciplinary project pursues the multifunctionality of SDGs 11 and 15 by simultaneously addressing issues around Sustainable Cities and Communities, Climate Action, Life on Land, and Strong Institutions, and by helping to halt and reverse land degradation and biodiversity loss.

Keywords: governance, green infrastructure, South Africa, sustainable development, urban planning, Tshwane

Procedia PDF Downloads 96
402 Exploring the Contribution of Dynamic Capabilities to a Firm's Value Creation: The Role of Competitive Strategy

Authors: Mona Rashidirad, Hamid Salimian

Abstract:

Dynamic capabilities, among the most significant capabilities of firms in the current fast-moving economy, may not be sufficient for performance improvement, but their contribution to performance is undeniable. While much of the extant literature investigates the impact of dynamic capabilities on organisational performance, little attention has been devoted to understanding whether and how dynamic capabilities create value. Dynamic capabilities, as the mirror of competitive strategies, should enable firms to search for and seize new ideas and to integrate and coordinate the firm's resources and capabilities in order to create value. A careful review of the existing knowledge base leaves us puzzled about the relationship among competitive strategies, dynamic capabilities, and value creation. This study thus attempts to fill this gap by empirically investigating the impact of dynamic capabilities on value creation and the mediating impact of competitive strategy on this relationship. We aim to contribute to the dynamic capability view (DCV), in both theoretical and empirical senses, by exploring the impact of dynamic capabilities on firms' value creation and whether competitive strategy can play any role in strengthening or weakening this relationship. Using a sample of 491 firms in the UK telecommunications market, the results demonstrate that dynamic sensing, learning, integrating, and coordinating capabilities play a significant role in a firm's value creation, and that competitive strategy mediates the impact of dynamic capabilities on value creation. Adopting the DCV, this study investigates whether the value generated from dynamic capabilities depends on a firm's competitive strategy. We argue that a firm's competitive strategy can mediate its ability to derive value from its dynamic capabilities, and we explain the extent to which a firm's competitive strategy may influence its value generation. The results of the dynamic capabilities-value relationships support our expectations and justify the non-financial value added of the four dynamic capability processes in a highly turbulent market, such as UK telecommunications. Our analytical findings on the relationship among dynamic capabilities, competitive strategy, and value creation provide further evidence of the undeniable role of competitive strategy in deriving value from dynamic capabilities. The results reinforce the argument for the need to consider the mediating impact of organisational contextual factors, such as a firm's competitive strategy, to examine how they interact with dynamic capabilities to deliver value. The findings of this study provide significant contributions to theory. Unlike some previous studies which conceptualise dynamic capabilities as a unidimensional construct, this study demonstrates the benefits of understanding the details of the links among the four types of dynamic capabilities, competitive strategy, and value creation. In terms of contributions to managerial practice, this research draws attention to the importance of competitive strategy in conjunction with the development and deployment of dynamic capabilities to create value. Managers are now equipped with solid empirical evidence which explains why the DCV has become essential to firms in today's business world.
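As a reading aid, the mediation logic tested here can be sketched with a simple regression-based (Baron and Kenny style) decomposition. The sketch below uses synthetic data and illustrative variable names; it is not the authors' estimation procedure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 491                                   # sample size reported above
dc = rng.normal(size=n)                   # dynamic capabilities (composite score)
strategy = 0.5 * dc + rng.normal(size=n)  # competitive strategy (mediator)
value = 0.3 * dc + 0.4 * strategy + rng.normal(size=n)  # value creation

# Path a: dynamic capabilities -> competitive strategy
path_a = sm.OLS(strategy, sm.add_constant(dc)).fit()
# Paths b and c': value creation regressed on capabilities and strategy jointly
path_bc = sm.OLS(value, sm.add_constant(np.column_stack([dc, strategy]))).fit()

indirect_effect = path_a.params[1] * path_bc.params[2]  # a * b
print(f"direct effect (c'): {path_bc.params[1]:.3f}")
print(f"indirect (mediated) effect: {indirect_effect:.3f}")
```

A nonzero indirect effect alongside a reduced direct effect is the classic signature of partial mediation, which is the pattern the study reports.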

Keywords: dynamic capabilities, resource-based theory, value creation, competitive strategy

Procedia PDF Downloads 222
401 Followership Styles in the U.S. Hospitality Workforce: A Multi-Generational Comparison Study

Authors: Yinghua Huang, Tsu-Hong Yen

Abstract:

Recent advances in leadership research have revealed that leadership is co-created through the combined actions of leading and following; the role of followers is as important as that of leaders in the leadership process. However, previous leadership studies often conceptualized leadership as a leader-centric process, and the role of followers has been largely neglected in the literature. Only recently have followership studies received more attention, given that the character and behavior of followers are as vital as the leader's during the leadership process. Yet there is a dearth of followership research in the context of the tourism and hospitality industries. This study therefore seeks to fill this gap in knowledge and investigate followership styles in the U.S. hospitality workforce. In particular, the objectives of this study are to identify popular followership practices among hospitality employees and to evaluate hospitality employees' followership styles using Kelley's followership typology framework. This study also compared generational differences in followership styles among hospitality employees. According to the U.S. Bureau of Labor Statistics, the workforce in the lodging and foodservice sectors in 2019 consisted of around 12% baby boomers, 29% Gen Xs, 23% Gen Ys, and 36% Gen Zs. The diversity of workforce demographics in the U.S. hospitality industry calls for more attention to understanding generational differences in followership styles and organizational performance. This study conducted in-depth interviews and a questionnaire survey to collect both qualitative and quantitative data. A snowball sampling method was used to recruit participants working in the hospitality industry in the San Francisco Bay Area, California, USA. A total of 120 hospitality employees participated in this study, including 22 baby boomers, 32 Gen Xs, 30 Gen Ys, and 36 Gen Zs; 45% of the participants were male and 55% female. The findings of this study identified good followership practices across the multi-generational participants. For example, a Gen Y participant said that 'followership involves learning and molding oneself after another person usually an expert in an area of interest. I think of followership as personal and professional development. I learn and get better by hands-on training and experience'. A Gen X participant said that 'I can excel by not being fearful of taking on unfamiliar tasks and accepting challenges.' Furthermore, this study identified all five styles of Kelley's followership model among the participants: 45% exemplary followers, 13% pragmatist followers, 2% alienated followers, 18% passive followers, and 23% conformist followers. Generational differences in followership styles were also identified. The findings of this study contribute to the hospitality human resource literature by identifying multi-generational perspectives on followership styles among hospitality employees. The findings provide valuable insights for hospitality leaders to better understand their followers. Hospitality leaders are advised to adjust their leadership style and communication strategies based on employees' different followership styles.

Keywords: followership, hospitality workforce, generational diversity, Kelley’s followership typology

Procedia PDF Downloads 108
400 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today's websites host rich and interesting applications, but few methodologies exist for analyzing how users navigate them and for determining whether a website is being put to its intended use. Web logs are typically consulted only when a major attack or malfunction occurs, yet they contain a wealth of information about users' interactions with the system. Analyzing web logs has become a challenge due to the huge log volume; finding interesting patterns is difficult because of the size and distribution of the logs and the importance of minor details in each entry. Web logs thus hold very important data about users and the site that is not being put to good use. Retrieving this information gives an idea of what users need, allows users to be grouped according to their various needs, and guides improvements toward an effective and efficient site. The model we built is able to detect attacks and malfunctions of the system and to perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. This solution is fully automated and uses unsupervised techniques; expert knowledge is used only in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, and the Web Sessions file lists the indices of each web session. Then DBSCAN and EM algorithms are used iteratively and recursively to get the best clustering results for the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as evaluation measures, the algorithms self-evaluate in order to feed better parameter values into subsequent runs. If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint: each cluster is fed to an Association Rule Learning Module, and if it outputs a confidence and support of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters; if it is found to be unique to the cluster under consideration, the cluster is annotated with that signature. These signatures are used for anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
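To make the clustering-and-evaluation loop concrete, here is a minimal sketch in Python with scikit-learn. The session vectors and the eps grid are illustrative assumptions; the full pipeline (session building, micro-clustering, signature mining) is not reproduced.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances, silhouette_score
from sklearn.mixture import GaussianMixture

# Stand-in for the Web Sessions file: each session as a fixed-length
# vector over indexed URLs (synthetic blobs here).
X, _ = make_blobs(n_samples=200, centers=4, n_features=16, random_state=0)

# Derive candidate eps values from the data, score each DBSCAN run with
# the silhouette coefficient, and keep the best labeling -- mirroring the
# "self-evaluating" parameter search described above.
d = pairwise_distances(X)
best_labels, best_score = None, -1.0
for eps in np.quantile(d[d > 0], [0.01, 0.02, 0.05]):
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X)
    if len(set(labels)) > 1:  # silhouette needs at least two clusters
        score = silhouette_score(X, labels)
        if score > best_score:
            best_labels, best_score = labels, score
print(f"best DBSCAN silhouette: {best_score:.3f}")

# EM pass (Gaussian mixture) on the same sessions, as in the DBSCAN/EM
# iteration described above.
em_labels = GaussianMixture(n_components=4, random_state=0).fit_predict(X)
```

In the paper's scheme, the two algorithms alternate and re-feed parameters until the evaluation measures stop improving; the sketch shows a single pass of that idea.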

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 262
399 Understanding the Challenges of Lawbook Translation via the Framework of Functional Theory of Language

Authors: Tengku Sepora Tengku Mahadi

Abstract:

Where the speed of book writing lags behind the high demand for such material in tertiary studies, translation offers a way to restore the equilibrium in this demand-supply equation. Nevertheless, translation is confronted by obstacles that threaten its effectiveness. The primary challenge to the production of efficient translations may well be related to the text-type and its complexity. A text that is intricately written, with unique rhetorical devices, a specialist subject-matter foundation, and cultural references, will undoubtedly challenge the translator; longer time and greater effort are the consequence. To understand these text-related challenges, the present paper set out to analyze a lawbook entitled Learning the Law by David Melinkoff. The book was chosen because it has often been used as a textbook or reference in many law courses in the United Kingdom and has seen over thirteen editions; it can therefore be considered a worthy subject for studies in law. Another reason is the existence of a ready translation in Malay. Reference to this translation enables confirmation, to some extent, of the potential problems that might occur in its translation. Understanding the organization and the language of the book will help translators prepare themselves better for the task: they can anticipate the research and time that may be needed to produce an effective translation. Another premise here is that this text-type implies certain ways of writing and organization. Accordingly, it seems practicable to adopt the functional theory of language suggested by Michael Halliday as the theoretical framework. Concepts of the context of culture and the context of situation, and the measures of field, tenor, and mode, form the instruments for analysis. Additional examples from similar materials can also be used to validate the findings. Some interesting findings include the presence of several other text-types or sub-text-types in the book and the dependence on literary discourse and devices to capture meanings better or add color to the dry field of law. In addition, many elements of culture can be seen, for example, the use of familiar alternatives, allusions, and even terminology and references that date back to various periods of time and languages. Also found are parts which discuss the origins of words and terms that may be relevant to readers within the United Kingdom but make little sense to readers of the book in other languages. In conclusion, the textual analysis of the book's functions and of the linguistic and textual devices used to achieve them can be applied as a guide to determine the effectiveness of the translation that is produced.

Keywords: functional theory of language, lawbook text-type, rhetorical devices, culture

Procedia PDF Downloads 125
398 From Modelled Design to Reality through Material and Machinery Lab and Field Tests: Porous Concrete Carparks at the Wanda Metropolitano Stadium in Madrid

Authors: Manuel de Pazos-Liano, Manuel Cifuentes-Antonio, Juan Fisac-Gozalo, Sara Perales-Momparler, Carlos Martinez-Montero

Abstract:

The first-ever game in the Wanda Metropolitano Stadium, the new home of Club Atletico de Madrid, was played on September 16, 2017, thanks to the work of a multidisciplinary team that made it possible to combine urban development with sustainability goals. The new football ground sits on a 1.2 km² site owned by the city of Madrid. Its construction has dramatically increased the sealed area of the site (raising the runoff coefficient from 0.35 to 0.9), and the surrounding sewer network has no capacity for that extra flow. As an alternative to enlarging the existing 2.5 m diameter pipes, it was decided to detain runoff on site by means of an integrated and durable infrastructure that would neither inflate the construction cost nor burden the municipality with maintenance. Instead of the more conventional option of building a large concrete detention tank, the decision was taken to use pervious pavement for sub-surface water storage on the 3013 car parking spaces, a solution aligned with the city water ordinance and the Madrid + Natural project. Making the idea a reality in only five months, and during the summer season (which forced the porous concrete to be poured only overnight), was a challenge never faced before in Spain, and it required innovation on both the material and the machinery side. The process consisted of: a) defining the characteristics required of the porous concrete (compressive strength of 15 N/mm2 and 20% voids); b) testing different porous concrete dosages at the construction company's laboratory; c) establishing the cross-section to provide structural strength and sufficient water detention capacity (20 cm of porous concrete over 5 cm of 5/10 gravel, sitting on a 50 cm coarse 40/50 aggregate sub-base separated by a virgin-fiber polypropylene geotextile fabric); d) hydraulic computer modelling (using the Full Hydrograph Method based on the Wallingford Procedure) to estimate the decrease in design peak flows (an average of 69% across the three car parking lots); e) using a variety of machinery for the application of the porous concrete to achieve both structural strength and a permeable surface (including an inverse rotating roller imported from the USA, and the so-called CMI, a sliding concrete paver used in the construction of motorways with rigid pavements); f) full-scale pilots and final construction testing by an accredited laboratory (pavement compressive strength average value of 15 N/mm2 and 0.0032 m/s permeability). The continuous testing and innovating construction process, explained in detail in this article, allowed performance to grow with time, finally proving the CMI valid also for large porous car park applications. All this resulted in a success story that makes the Wanda Metropolitano Stadium a great demonstration site, which will help the application of the Spanish Royal Decree 638/2016 (the site also features rainwater harvesting for grass irrigation).
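As a rough reading aid, the void space of the published cross-section already implies a large detention volume per square metre of car park. The sketch below is a back-of-the-envelope estimate, not the authors' Wallingford-based hydraulic model; only the 20% voids figure for the porous concrete is stated above, and the aggregate-layer void ratios are our assumptions.

```python
# Approximate detention storage per m^2 of permeable car park.
porous_concrete = 0.20 * 0.20   # 20 cm layer at 20% voids -> 0.040 m
bedding_gravel  = 0.05 * 0.35   # 5 cm of 5/10 gravel, ~35% voids (assumed)
subbase         = 0.50 * 0.40   # 50 cm of 40/50 aggregate, ~40% voids (assumed)

storage_depth = porous_concrete + bedding_gravel + subbase
print(f"approx. storage: {storage_depth * 1000:.0f} mm of rainfall per m^2")

# With the site's runoff coefficient rising from 0.35 to 0.9, roughly
# (0.9 - 0.35) times the design rainfall must be detained on site.
```

Under these assumptions the section stores roughly 260 mm of water per square metre, which is consistent with the large peak-flow reductions the modelling reports.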

Keywords: construction machinery, permeable carpark, porous concrete, SUDS, sustainable development

Procedia PDF Downloads 122
397 A Deep Dive into the Multi-Pronged Nature of Student Engagement

Authors: Rosaline Govender, Shubnam Rambharos

Abstract:

Universities are, to a certain extent, the source of under-preparedness ideologically, structurally, and pedagogically, particularly since organizational cultures often alienate students by failing to enable epistemological access. This is evident in the unsustainably low graduation rates that characterize South African higher education, which indicate that under 30% of students graduate in minimum time, under two-thirds graduate within 6 years, and one-third have not graduated after 10 years. Although the statistics for the Faculty of Accounting and Informatics at the Durban University of Technology (DUT) in South Africa improved significantly from 2019 to 2021, the graduation (32%), throughput (50%), and dropout rates (16%) are still a matter for concern, as the graduation rates in particular are quite similar to the national statistics. For our students to succeed, higher education should take a multi-pronged approach to ensuring student success, and student engagement is one of the ways to support our students. Student engagement depends not only on students' teaching and learning experiences but, more importantly, on their social and academic integration, their sense of belonging, and their emotional connections in the institution. Such experiences need to challenge students academically and engage their intellect, grow their communication skills, build self-discipline, and promote confidence. The aim of this mixed-methods study is to explore the multi-pronged nature of student success within the Faculty of Accounting and Informatics at DUT, focusing on the enabling and constraining factors of student success. The sources of data were the Mid-year Student Experience Survey (N=60), the Hambisa Student Survey (N=85), and semi-structured focus group interviews with first-, second-, and third-year students of the Faculty of Accounting and Informatics Hambisa program. The Hambisa ("moving forward") focus area is part of the Siyaphumelela 2.0 project at DUT and seeks to understand the multiple challenges impacting student success that create a large "middle" cohort of students stuck in transition within academic programs. Using the lens of the sociocultural influences on student engagement framework, we conducted a thematic analysis of the two surveys and the focus group interviews. Preliminary findings indicate that living conditions, choice of program, access to resources, motivation, institutional support, infrastructure, and pedagogical practices impact student engagement and, thus, student success. It is envisaged that the findings from this project will assist the university in being better prepared to enable student success.

Keywords: social and academic integration, socio-cultural influences, student engagement, student success

Procedia PDF Downloads 47
396 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection

Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément

Abstract:

The need for single-cell detection and analysis techniques has increased in the past decades because of the heterogeneity of individual living cells, which increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and for monitoring their variation mainly fall into two types. One is based on the identification of molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example, the dielectrophoresis technique, microfluidic micropost-based chips, and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for single-cell detection have not been demonstrated. This work introduces a new technique based on a 20-nm-thick nanopillar array that supports cells and keeps them at the ideal recognition distance for redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false positive signal arising from the pressure exerted by all (including non-target) cells pushing down on the aptamers, but also to stabilize the aptamers in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, an LOD of 13 cells (with 5.4 μL of cell suspension) was estimated. The nanosupported cell technology using redox-labeled aptasensors was then pushed further and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area, and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling over millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with an LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to the challenge of implementation at a large scale. Here, the introduced nanopillar array technology, combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM), is perfectly suited for such implementation. By combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis for early cancer diagnosis and cancer therapy in clinical settings.

Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars

Procedia PDF Downloads 78
395 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task

Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes

Abstract:

For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests are available. In some of these tests, linguistic processing, both oral and written, is an important factor. Language disturbances might serve as a strong indicator of an underlying neurodegenerative disorder like AD. However, current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, our aim is to describe and test differences between cognitively healthy and cognitively impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables are mainly related to pause times, because the number, length, and location of pauses have proven to be an important indicator of the cognitive complexity of a process. Method: Participants enrolled in our research were chosen on the basis of a number of basic criteria necessary for collecting reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span, and the Edinburgh Handedness Inventory (EHI). A questionnaire was also used to collect socio-demographic information (age, gender, education) on the subjects as well as details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words, and sentences from a screen, whereas the picture description tasks each consisted of an image they had to describe in a few sentences. Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that allows us to log and time-stamp keystroke activity in order to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause characteristics (length, number, distribution, location, etc.) and revision characteristics (number, type, operation, embeddedness, location, etc.). As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analysis already showed some promising results concerning pause times before, within, and after words. For all variables, mixed-effects models were used that included participants as a random effect, and MMSE scores, GDS scores, and word categories (such as determiners and nouns) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly; these variables did not show an interaction between the group the participants belonged to (cognitively impaired or healthy elderly) and word category. However, pause times within words did show an interaction effect, indicating that pause times within certain word categories differ significantly between patients and healthy elderly.
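For readers unfamiliar with this kind of model, the specification described above can be sketched as follows. This is a synthetic-data illustration with assumed column names, not the authors' actual analysis; the MMSE and GDS covariates are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the Inputlog output: one row per word, with the
# pause time before the word, the writer's group, and the word category.
rng = np.random.default_rng(7)
n = 600
df = pd.DataFrame({
    "participant": rng.integers(0, 26, n),
    "group": rng.choice(["impaired", "healthy"], n),
    "word_category": rng.choice(["determiner", "noun", "verb"], n),
})
df["pause_before"] = rng.lognormal(mean=6.0, sigma=0.5, size=n) \
    + np.where(df["group"] == "impaired", 150.0, 0.0)  # ms; impaired pause longer

# Random intercept per participant; group, word category, and their
# interaction as fixed effects.
model = smf.mixedlm("pause_before ~ group * word_category",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```

The interaction term is what carries the reported within-word finding: a significant group-by-category coefficient means the group difference in pause time depends on the word category.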

Keywords: Alzheimer's disease, keystroke logging, matching, writing process

Procedia PDF Downloads 341
394 Visual Aid and Imagery Ramification on Decision Making: An Exploratory Study Applicable in Emergency Situations

Authors: Priyanka Bharti

Abstract:

Decades ago, designs were based on common sense and tradition, but with advances in visualization technology and research, we are now able to comprehend the cognitive abilities involved in decoding visual information. However, many aspects of visuals still need intense research to deliver an adequate explanation of the underlying events. Visuals are a mode of information representation through images, symbols, and graphics. They play an impactful role in decision making by facilitating quick recognition, comprehension, and analysis of a situation. They enhance problem-solving capabilities by enabling the processing of more data without overloading the decision maker: research suggests that visuals offer an improved learning environment by a factor of 400 compared to textual information. Visual information engages learners at a cognitive level and triggers the imagination, which enables users to process the information faster (visuals are processed 60,000 times faster in the brain than text). Appropriate information, its visualization, and its presentation are known to aid and intensify the decision-making process. However, most of the literature discusses the role of visual aids in comprehension and decision making during normal conditions alone. Unlike emergencies, in a normal situation (e.g., day-to-day life) users are neither exposed to stringent time constraints nor face the anxiety of survival, and they have sufficient time to evaluate various alternatives before making any decision. An emergency is an unexpected, possibly fatal real-life situation which may inflict serious ramifications on both human life and material possessions unless corrective measures are taken instantly. The situation demands that the exposed user negotiate a dynamic and unstable scenario in the absence or lack of preparation, yet still take swift and appropriate decisions to save lives or possessions. But the resulting stress and anxiety restrict cue sampling, decrease vigilance, reduce the capacity of working memory, cause premature closure in evaluating alternative options, and result in task shedding. Limited time, uncertainty, high stakes, and vague goals negatively affect the cognitive abilities needed to take appropriate decisions. Moreover, naturalistic decision making by experts has been understood in far more depth than that of ordinary users. Therefore, in this study, the author aims to understand the role of visual aids in supporting rapid comprehension for appropriate decisions during an emergency situation.

Keywords: cognition, visual, decision making, graphics, recognition

Procedia PDF Downloads 248
393 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic

Authors: G. Hubert, S. Aubry

Abstract:

The Earth is constantly bombarded by cosmic rays of either galactic or solar origin, and humans flying at aircraft altitudes are consequently exposed to elevated levels of galactic radiation. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimates of the contribution induced by certain solar particle events differ by an order of magnitude. During a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on the Earth's surface. Analyses of the characteristics of GLE events observed since 1942 show that, for the worst of them, the dose level is of the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate was of the order of 10 mSv/hr; the extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e., comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and it was measured both on the surface of the Earth and, by the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover, on the surface of Mars. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) describing extensive air shower characteristics and allowing assessment of the ambient dose equivalent. In this approach, the GCR is based on the force-field approximation model, and the physical description of the solar cosmic rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD determines the spectral fluence rates of secondary particles induced by extensive showers, considering altitudes from ground level to 45 km. The ambient dose equivalent can then be determined using fluence-to-ambient-dose-equivalent conversion coefficients (see the relation sketched below). The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent for a statistically large sample of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE of 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous work. This highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
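For reference, the fluence-to-dose step described above amounts to the standard conversion-coefficient relation (our notation, which may differ from ATMORAD's): the ambient dose equivalent rate sums, over secondary particle species i, the energy-differential fluence rate weighted by the ICRU/ICRP fluence-to-ambient-dose-equivalent conversion coefficient.

```latex
\[
  \dot{H}^{*}(10) \;=\; \sum_{i} \int_{0}^{\infty}
  h^{*}_{\Phi,i}(E)\,\frac{\mathrm{d}\dot{\Phi}_{i}}{\mathrm{d}E}(E)\,\mathrm{d}E
\]
% i ranges over secondary species (neutrons, protons, electrons, photons, muons, ...)
% d\dot{\Phi}_i/dE : energy-differential fluence rate of species i along the flight path
% h*_{\Phi,i}(E)   : fluence-to-ambient-dose-equivalent conversion coefficient
```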

Keywords: cosmic ray, human dose, solar flare, aviation

Procedia PDF Downloads 189
392 Evaluation of a Driver Training Intervention for People on the Autism Spectrum: A Multi-Site Randomized Control Trial

Authors: P. Vindin, R. Cordier, N. J. Wilson, H. Lee

Abstract:

Engagement in community-based activities such as education, employment, and social relationships can improve the quality of life for individuals with Autism Spectrum Disorder (ASD). Community mobility is vital to attaining independence for individuals with ASD. Learning to drive and gaining a driver's license is a critical link to community mobility; however, for individuals with ASD, acquiring safe driving skills can be a challenging process. Issues related to anxiety, executive function, and social communication may affect driving behaviours. Driving training and education aimed at addressing the barriers faced by learner drivers with ASD can help them improve their driving performance. A multi-site randomized controlled trial (RCT) was conducted to evaluate the effectiveness of an autism-specific driving training intervention for improving the on-road driving performance of learner drivers with ASD. The intervention was delivered via a training manual and an interactive website consisting of five modules covering varying driving environments, starting with a focus on off-road preparations and progressing from basic to complex driving skill mastery. Seventy-two learner drivers with ASD aged 16 to 35 were randomized, using a blinded group allocation procedure, into either the intervention or the control group. The intervention group received 10 driving lessons with instructors trained in the use of an autism-specific driving training protocol, whereas the control group received 10 driving lessons as usual. Learner drivers completed a pre- and post-observation drive on a standardized driving route, with driving performance measured using the Driving Performance Checklist (DPC). They also completed anxiety, executive function, and social responsiveness measures. The findings showed significant improvements in driving performance for both the intervention group (d = 1.02) and the control group (d = 1.15); however, the differences between groups (p = 0.614) and study sites (p = 0.842) were not significant. None of the potential moderator variables (anxiety, cognition, social responsiveness, and driving instructor experience) influenced driving performance. This study is an important step toward improving community mobility for individuals with ASD, showing that an autism-specific driving training intervention can improve the driving performance of learner drivers with ASD. It also highlighted the complexity of conducting a multi-site design even when sites were matched according to geography and traffic conditions. Driving instructors also need more, and clearer, information on how to communicate with learner drivers with restricted verbal expression.
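For clarity, the within-group effect sizes quoted above are standardized mean changes. A minimal sketch of the computation, with illustrative scores rather than the trial data:

```python
import numpy as np

def cohens_d(pre: np.ndarray, post: np.ndarray) -> float:
    """Standardized pre/post mean change using the pooled SD of the two measurements."""
    pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
    return (post.mean() - pre.mean()) / pooled_sd

# Illustrative Driving Performance Checklist scores (not trial data).
pre = np.array([55.0, 60.0, 48.0, 62.0, 58.0])
post = np.array([68.0, 71.0, 60.0, 70.0, 66.0])
print(f"d = {cohens_d(pre, post):.2f}")
```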

Keywords: autism spectrum disorder, community mobility, driving training, transportation

Procedia PDF Downloads 106
391 21st Century Computer Technology for the Training of Early Childhood Teachers: A Study of Second-Year Education Students Challenged with Building a Kindergarten Website

Authors: Yonit Nissim, Eyal Weissblueth

Abstract:

This research continues a process that began in 2010 with the goal of redesigning the training program for future early childhood teachers at Ohalo College to integrate technology and provide 21st-century skills. The article focuses on a study of the processes involved in developing a special educational unit which challenged students with the task of designing, planning, and building an internet site for kindergartens. This project was part of their second-year studies in the early childhood track of an interdisciplinary course entitled 'Educating for the Future.' The goal was to enable students to gain experience in developing an internet site specifically for kindergartens, gain familiarity with Google platforms, acquire and use innovative skills, and integrate technology in pedagogy. The research questions examined how students handled the task of building an internet site. The study explored whether the guided process of building a site helped them develop proficiency in creativity, teamwork, evaluation, and learning appropriate to the 21st century. The research tool was a questionnaire constructed by the researchers and distributed online to the students; answers were collected from 50 course participants. Analysis of the participants' responses showed that, along with the significant experience and benefits that students gained from building a website for a kindergarten, there was ambivalence toward the use of new, unfamiliar, and complex technology. This attitude was characterized by unease and initial emotional distress triggered by the departure from routine training to an island of uncertainty. A gradual change took place toward the adoption of innovation with the help of empathy, training, and guidance from the instructors, leading to the students' success in carrying out the task. Initial success led to further successes, resulting in a quality product and a feeling of personal competency among the students. A clear and extreme emotional shift was observed, on the spectrum from a sense of difficulty and dissatisfaction to feelings of satisfaction, joy, competency, and a cognitive understanding of the importance of facing a challenge and succeeding. The findings of this study can contribute to an increased understanding of the complex training process of future kindergarten teachers coping with a changing world and with pedagogy that is supported by technology.

Keywords: early childhood teachers, educating for the future, emotions, kindergarten website

Procedia PDF Downloads 129
390 Social Mentoring: Towards Formal and Informal Deployment in the Structures of the Social and Solidarity Economy

Authors: Vanessa Casadella, Mourad Chouki, Agnès Ceccarelli, Sofiane Tahi

Abstract:

Mentoring is positioned in an interpersonal and intergenerational perspective, serving the transmission of interpersonal skills and organizational culture. It echoes orientation, projects, self-actualization, guidance, transmission, and filiation, and it can take a formal or an informal approach. The formal dimension refers to a privileged relationship between a senior and a junior. Informal mentoring is unplanned and emerges naturally between two people who choose each other; however, it remains more difficult to study. To examine the link between formal and informal mentoring and to define the notion of "social" mentoring, we conducted an exploratory qualitative study of around ten social and solidarity economy (SSE) organizations located in the southeast region of Tunisia. The richness of this territory has pushed residents to found SSE organizations with a view to creating jobs but also to preserving traditions and nature. These organizations developed spontaneously to solve various local problems, such as the revitalization of deserted rural areas, environmental degradation, and the reskilling and professional reintegration of people marginalized in the labor market. This research is based on semi-structured interviews designed to obtain exhaustive and sensitive data; the interview guide contains few questions, so that the respondents, leaders of the different structures, can express themselves freely. The guide includes questions on activities, methods of sharing knowledge, and difficulties in understanding between stakeholders. The interviews, lasting 30 to 60 minutes, were recorded using a dictaphone and then transcribed in full. The results are as follows. 1. We see two iterative mentoring loops. The first loop can be considered a type of formal mentoring: it highlights the support organized (in the form of training) by social enterprises with the aim of developing the autonomy, know-how, and interpersonal skills of members. The second loop concerns informal mentoring: non-formalized support provided by members or by others in their entourage, based mainly on the observation of good practices and learning by doing. 2. We notice an intersection between the two loops: if the first loop does not take place, the second will not either, as the knowledge acquired in the first loop feeds the second. 3. We note a form of reluctance on the part of some members to share their knowledge, for reasons of competition. Ultimately, we frame the notion of "social" mentoring as a hybridization of formal and informal mentoring, with the "social" dimension emphasizing reciprocity, solidarity, confidence, and trust between the mentor and the mentee.

Keywords: social innovation, social mentoring, social and solidarity economy, informal mentoring

Procedia PDF Downloads 36
389 A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process

Authors: Reyna Singh, David Lokhat, Milan Carsky

Abstract:

World crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high-value hydrocarbon liquid product from the direct coal liquefaction (DCL) process at comparatively mild operating conditions. A temperature-staged hydrogenation approach was investigated. In a two-reactor lab-scale pilot plant facility, the objectives included maximising thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage, and subsequently promoting hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high-grade, pulverized bituminous coal on a moisture-free basis with a size fraction of < 100 μm, and Tetralin mixed in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe3O4) at 0.25 wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100 barg. In the first stage, temperatures of 250℃ and 300℃ and reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first-stage liquid product was pumped into the second-stage vertical reactor, which was designed to counter-currently contact the hydrogen-rich gas stream and the incoming liquid flow in the fixed catalyst bed. Two commercial hydrotreating catalysts, cobalt-molybdenum (CoMo) and nickel-molybdenum (NiMo), were compared in terms of their conversion, selectivity, and HDS performance at temperatures 50℃ higher than those of the respective first-stage tests. The catalysts were activated at 300°C with a hydrogen flowrate of approximately 10 ml/min prior to testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser, while the liquid was collected and sampled for analysis using gas chromatography-mass spectrometry (GC-MS). Internal-standard quantification of the sulphur content, the BTX (benzene, toluene, and xylene) and alkene quality, and the alkane and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products was guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, increased coal-to-liquid conversion was favoured by the lower operating temperature of 250℃, a reaction time of 60 minutes, and a system catalysed by magnetite. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long-chain alkanes undecane and dodecane, the unsaturated alkenes octene and nonene, and PAH compounds such as indene. The second-stage product distribution showed an increase in the BTX quality of the liquid product and in branched-chain alkanes, and a reduction in the sulphur concentration. As an HDS performer, and in selectivity to the production of long- and branched-chain alkanes, NiMo performed better than CoMo; CoMo was selective to a higher concentration of cyclohexane. Over 16 days on stream each, NiMo had a higher activity than CoMo. The potential of the process to help cover the demand for low-sulphur crude diesel and solvents through the production of high-value hydrocarbon liquids is thus demonstrated.

Keywords: catalyst, coal, liquefaction, temperature-staged

Procedia PDF Downloads 625
388 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries

Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman

Abstract:

There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets presents a new challenge for researchers: the task of identifying and working with a new dataset has become more difficult as the amount and variety of available data have grown. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists but also for researchers looking to compare multiple datasets and for specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although working with Earth data poses some unique challenges. One is the sheer size of the databases: it would be infeasible to replicate or analyze all the data stored by an organization like the National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from the metadata available for datasets in NASA's Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases but leaves the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results of this method are compared to a baseline of elastic search applied to the metadata. This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. Perhaps more importantly, it establishes the foundation for a platform that can let common English access knowledge that previously required considerable effort and experience. By making this public data accessible to the general public, this work has the potential to transform environmental understanding, engagement, and action.
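To illustrate the gap the knowledge graph addresses, the lexical baseline the paper compares against can be sketched as straightforward TF-IDF retrieval over metadata text. This is our construction with invented metadata strings, not the authors' system:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented metadata snippets standing in for Earthdata records.
metadata = [
    "sea surface temperature monthly gridded analysis",
    "aerosol optical depth daily retrievals",
    "global precipitation measurement daily accumulation",
]
query = ["monthly sea surface temperature data"]

vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(metadata)       # index the metadata
scores = cosine_similarity(vec.transform(query), doc_matrix)[0]
for s, text in sorted(zip(scores, metadata), reverse=True):
    print(f"{s:.3f}  {text}")
```

A purely lexical match like this fails as soon as everyday phrasing ("how warm is the ocean surface") shares no terms with the metadata ("sea surface temperature"), which is precisely the vocabulary gap the proposed knowledge graph is meant to bridge.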

Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems

Procedia PDF Downloads 126
385 Governance in the Age of Artificial Intelligence and E-Government

Authors: Mernoosh Abouzari, Shahrokh Sahraei

Abstract:

Electronic government is a way for governments to use new technology to provide people with convenient access to government information and services, to improve the quality of services, and to offer broad opportunities to participate in democratic processes and institutions. It makes it possible to deliver government services to citizens around the clock, which increases people's satisfaction and participation in political and economic activities. The expansion of e-government services and their movement towards intelligent operation can re-establish the relationship between the government and citizens and among the elements and components of government. Electronic government is the result of the use of information and communication technology (ICT); implementing ICT at the government level creates tremendous changes in the efficiency and effectiveness of government systems and in the way services are provided, and broad public satisfaction follows. E-government services are now taking concrete shape through artificial intelligence systems: recent advances in artificial intelligence represent a revolution in the use of machines to support predictive decision-making and the classification of data. With deep learning tools, artificial intelligence can significantly improve the delivery of services to citizens and elevate the work of public service professionals, while also inspiring a new generation of technocrats to enter government. This smart revolution may set aside some functions of government and change its components; concepts such as governance, policymaking, and democracy will change in the face of artificial intelligence technology, and the top-down position in governance may face serious changes. If governments delay in adopting artificial intelligence, the balance of power will shift: private companies, as pioneers in this field, will monopolize the domain, the world order will come to depend on rich multinational companies, and algorithmic systems will, in effect, become the ruling systems of the world. Currently, the revolution in information technology and biotechnology is being driven by engineers, large companies, and scientists who are rarely aware of the political complexities of their decisions and who certainly do not represent anyone. Therefore, if liberalism, nationalism, or any other religion wants to organize the world of 2050, it must not only make sense of artificial intelligence and complex data algorithms but also weave them into a new and meaningful narrative. The changes brought by artificial intelligence to the political and economic order will thus lead to a major change in the way all countries deal with the phenomenon of digital globalization. In this paper, while discussing the role and performance of e-government, we examine the efficiency and application of artificial intelligence in e-government and consider the resulting developments in the new world and in concepts of governance.

Keywords: electronic government, artificial intelligence, information and communication technology, system

Procedia PDF Downloads 66
386 Virtual Engineers on Wheels: Transitioning from Mobile to Online Outreach

Authors: Kauser Jahan, Jason Halvorsen, Kara Banks, Kara Natoli, Elizabeth McWeeney, Brittany LeMasney, Nicole Caramanna, Justin Hillman, Christopher Hauske, Meghan Sparks

Abstract:

Virtual Engineers on Wheels (ViEW) is a revised version of our established mobile K-12 outreach program, Engineers on Wheels, adapted to address the pandemic. ViEW's goal has stayed the same as in prior years: to provide K-12 students and educators with the resources necessary to pique interest in the expanding fields of engineering. In these trying times, the outreach has adapted its medium of instruction to fit seamlessly with the online approach to teaching. In the midst of COVID-19, providing a safe transfer of information has become a constraint for outreach. The focus has become how to uphold quality instruction without diminishing the safety of those involved, promoting proper health practices, and giving hope to students as well as their families. ViEW has created resources on effective strategies that minimize COVID-19 risk factors and inform families that there is still a promising future ahead. To achieve these goals while staying true to the hands-on learning that is so crucial for young minds, the approach is online video lectures followed by experiments within different engineering disciplines. ViEW has created a comprehensive website that students can use to explore the different fields of study. One of the experiments entails teaching about drone usage and how it might play a factor in the future of unmanned deliveries. Some of the other experiments focus on the differences in mask materials and their effectiveness, as well as their environmental impact. Having students work from home gives them a safe environment to learn at their own pace while still providing the quality instruction that would normally be achieved in the classroom. Contact information is readily available on the website to give interested parties a means to ask questions. As it currently stands, women and certain minority groups are underrepresented in engineering/STEM-related fields, so alongside the desire to grow interest, helping balance the scales is one of ViEW's main priorities. In previous years, ViEW surveyed students before and after instruction to see whether their perception of engineering had changed. In general, the understanding is that being exposed to engineering/STEM at a young age increases the chances that it will be pursued later in life.

Keywords: STEM, engineering outreach, teaching pedagogy, pandemic

Procedia PDF Downloads 102
385 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction of silent AF for the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplant recipients was 7.2% (117/1628), whereas the incidence in black recipients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one pre-transplant AI-ECG screen for AF above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. When the prediction for POAF was made using the first postoperative ECG in the subsample without an elevated pre-transplant screen (n=1084, on account of n=20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
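
As a rough illustration of how such screening metrics follow from dichotomizing the AI-ECG output at a threshold, the sketch below computes sensitivity and negative predictive value in Python. The threshold 0.08 mirrors the published cut-off cited above, but the toy probabilities and labels are made up for illustration; this is not the study's data or code.

```python
import numpy as np

def screen_metrics(probs, labels, threshold=0.08):
    """Dichotomize AI-ECG probabilities at a threshold and compute
    sensitivity and negative predictive value (NPV)."""
    preds = np.asarray(probs) >= threshold
    labels = np.asarray(labels).astype(bool)
    tp = np.sum(preds & labels)      # screened positive, developed POAF
    fn = np.sum(~preds & labels)     # screened negative, developed POAF
    tn = np.sum(~preds & ~labels)    # screened negative, no POAF
    sensitivity = tp / (tp + fn)
    npv = tn / (tn + fn)
    return sensitivity, npv

# toy example with synthetic values (not the study cohort)
rng = np.random.default_rng(0)
probs = rng.uniform(0, 0.3, 1000)
labels = rng.uniform(0, 1, 1000) < 0.06   # roughly 6% POAF incidence
sens, npv = screen_metrics(probs, labels)
print(f"sensitivity={sens:.2f}, NPV={npv:.2f}")
```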

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 102
384 The Usefulness and Usability of a LinkedIn Group for the Maintenance of a Community of Practice among Hand Surgeons Worldwide

Authors: Vaikunthan Rajaratnam

Abstract:

Maintaining continuous professional development among clinicians has been a challenge. Hand surgery is a unique speciality that brings together orthopaedic, plastic and trauma surgeons. The requirement for a team-based approach to care, with the inclusion of other experts such as occupational therapists, physiotherapists, orthotists and prosthetists, provides the impetus for the creation of communities of practice. This study analysed a community of practice in hand surgery that was created through a social networking website for professionals. The first objective was to discover the usefulness of this community of practice, created using the group function of LinkedIn; the second was to determine the usability of this platform for continuing professional development among members of the community. The methodology was mixed: a quantitative analysis of the usefulness of the social network website as a community of practice, using the analytics provided by the LinkedIn platform, combined with a qualitative analysis of the postings generated by the community within the site. This was augmented by a respondent-driven online survey assessing the usefulness of the platform for continuous professional development. A total of 31 respondents were involved in this study. The study shows that it is possible to create an engaging and interactive community of practice among hand surgeons using the group function of LinkedIn. Over three years the group has grown significantly, with members from multiple regions, and has produced engaging and interactive conversations online. From the survey results, it can be concluded that respondents were satisfied with the functionality and regarded it as an excellent platform for discussion and collaboration within the community of practice, with a satisfaction rate of 69%. Case-based discussions were the most useful functions of the community of practice. The platform's usability was graded as excellent using a validated usability tool. This study has shown that the group function of the social networking site LinkedIn can readily and effectively serve as a community of practice, offers convenience to professionals, and has made an impact on their practice and on better care for patients. It has also shown that the platform is easy to use, with a high level of usability for the average healthcare professional. The platform improved connectivity among professionals involved in hand surgery care, which allowed the community to grow and, with proper support and the contribution of relevant material by members, provided a safe environment for the exchange of knowledge and sharing of experience that is the foundation of a community of practice.

Keywords: community of practice, online community, hand surgery, lifelong learning, LinkedIn, social media, continuing professional development

Procedia PDF Downloads 293
383 Metadiscourse in EFL, ESP and Subject-Teaching Online Courses in Higher Education

Authors: Maria Antonietta Marongiu

Abstract:

Propositional information in discourse is made coherent, intelligible, and persuasive through metadiscourse. The linguistic and rhetorical choices that writers/speakers make to organize and negotiate content matter are intended to help relate a text to its context. Besides, they help the audience to connect to and interpret a text according to the values of a specific discourse community. Based on these assumptions, this work aims to analyse the use of metadiscourse in the spoken performance of teachers in online EFL, ESP, and subject-teacher courses taught in English to non-native learners in higher education. The global spread of Covid-19 has, in fact, forced universities to transition their in-class courses to online delivery. This has inevitably placed a heavier interactional responsibility on the instructor compared to in-class courses. Accordingly, online delivery requires greater structuring with regard to establishing the listener's resources for understanding and negotiating the text. Indeed, in online as well as in-class courses, lessons are social acts which take place in contexts where interlocutors, as members of a community, affect the ways ideas are presented and understood. Following Hyland's Interactional Model of Metadiscourse (2005), this study investigates Teacher Talk in online academic courses during the Covid-19 lockdown in Italy. The selected corpus includes the transcripts of online EFL and ESP courses and of subject-teacher courses taught in English. The objective of the investigation is, firstly, to ascertain the presence of metadiscourse in the form of interactive devices (to guide the listener through the text) and interactional features (to involve the listener in the subject). Previous research on metadiscourse in academic discourse, in college students' presentations in EAP (English for Academic Purposes) lessons, and in online teaching methodology courses and MOOCs (Massive Open Online Courses) has shown that instructors use a vast array of metadiscoursal features intended to express the speaker's intentions and standing with respect to discourse, and that they tend to use directions to orient their listeners and logical connectors referring to the structure of the text. Accordingly, the purpose of the investigation is also to find out whether metadiscourse is used by instructors as a rhetorical strategy to control, evaluate, and negotiate the impact of the ongoing talk, and eventually to signal their attitudes towards the content and the audience. Thus, the use of metadiscourse can contribute to the informative and persuasive impact of discourse and to the effectiveness of online communication, especially in learning contexts.

Keywords: discourse analysis, metadiscourse, online EFL and ESP teaching, rhetoric

Procedia PDF Downloads 107
382 India's Geothermal Energy Landscape and Role of Geophysical Methods in Unravelling Untapped Reserves

Authors: Satya Narayan

Abstract:

India, a rapidly growing economy with a burgeoning population, grapples with the dual challenge of meeting rising energy demands and reducing its carbon footprint. Geothermal energy, an often overlooked and underutilized renewable source, holds immense potential for addressing this challenge. Geothermal resources offer a valuable, consistent, and sustainable energy source and may contribute significantly to India's energy needs. This paper discusses the importance of geothermal exploration in India, emphasizing its role in achieving sustainable energy production while mitigating environmental impacts. It also delves into the methodology employed to assess geothermal resource feasibility, including geophysical surveys and borehole drilling. The results and discussion sections highlight promising geothermal sites across India, illuminating the nation's vast geothermal potential. Geophysical surveying detects potential geothermal reservoirs, characterizes subsurface structures, maps temperature gradients, monitors fluid flow, and estimates key reservoir parameters. Globally, geothermal energy falls into high- and low-enthalpy categories, with India mainly having low-enthalpy resources, especially in hot springs. The northwestern Himalayan region boasts high-temperature geothermal resources due to geological factors. Promising sites, like Puga Valley, Chhumthang, and others, feature hot springs suitable for various applications. The Son-Narmada-Tapti lineament intersects regions rich in geological history, contributing to geothermal resources. Southern India, including the Godavari Valley, has thermal springs suitable for power generation. The Andaman-Nicobar region, linked to subduction and volcanic activity, holds high-temperature geothermal potential. Geophysical surveys, utilizing gravity, magnetic, seismic, magnetotelluric, and electrical resistivity techniques, offer vital information on subsurface conditions essential for detecting, evaluating, and exploiting geothermal resources. Gravity and magnetic methods map the depth of the high-temperature mantle boundary and accurately determine the Curie depth. Electrical methods indicate the presence of subsurface fluids. Seismic surveys create detailed subsurface images, revealing faults and fractures and establishing possible connections to aquifers. Borehole drilling is crucial for assessing geothermal parameters at different depths. Detailed geochemical analysis and geophysical surveys in Dholera, Gujarat, reveal untapped geothermal potential in India, aligning with renewable energy goals. In conclusion, geophysical surveys and borehole drilling play a pivotal role in economically viable geothermal site selection and feasibility assessments. With ongoing exploration and innovative technology, these surveys effectively minimize drilling risks, optimize borehole placement, aid in environmental impact evaluations, and facilitate remote resource exploration. Their cost-effectiveness informs decisions regarding geothermal resource location and extent, ultimately promoting sustainable energy and reducing India's reliance on conventional fossil fuels.
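
For readers unfamiliar with the physics behind temperature-gradient mapping and Curie-depth estimation, the standard conductive relations are, as a simplification assuming a purely conductive, linear gradient (these are textbook relations, not formulas given by the authors):

\[
q = -k\,\frac{dT}{dz}, \qquad z_c \approx \frac{T_c - T_0}{dT/dz}
\]

where \(q\) is the conductive heat flow (Fourier's law), \(k\) the thermal conductivity, \(dT/dz\) the geothermal gradient, \(z_c\) the Curie depth, \(T_0\) the surface temperature, and \(T_c\) the Curie temperature (about 580 °C for magnetite), below which crustal rocks lose their magnetization.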

Keywords: geothermal resources, geophysical methods, exploration, exploitation

Procedia PDF Downloads 50
381 Surgical School Project: Implementation Educational Plan for Adolescents Awaiting Bariatric Surgery

Authors: Brooke Sweeney, David White, Felix Amparano, Nick A. Clark, Amy R. Beck, Mathew Lindquist, Lora Edwards, Julie Vandal, Jennifer Lisondra, Katie Cox, Renee Arensberg, Allen Cummins, Jazmine Cedeno, Jason D. Fraser, Kelsey Dean, Helena H. Laroche, Cristina Fernandez

Abstract:

Background: National organizations call for standardized pre-surgical requirements and education to optimize postoperative outcomes. Since 2017, our surgery program has used defined protocols and educational curricula pre- and post-surgery. In response to patient outcomes, our educational content was refined to include quizzes that assess patient knowledge and surgical preparedness. We aim to optimize adolescent pre-bariatric-surgery preparedness by improving overall aggregate pre-surgical assessment performance from 68% to 80% within 12 months. Methods: A multidisciplinary improvement team was developed within the weight management clinic (WMC) of our tertiary care, free-standing children's hospital. A manual has been utilized since 2017, with limitations in consistent delivery and patient uptake of information. The curriculum has been improved to include quizzes administered during WMC visits prior to bariatric surgery. The primary outcome measure is the pre-surgical quiz score of adolescents preparing for bariatric surgery; the process measure was the number of quiz questions answered correctly. Baseline performance was determined by a patient assessment survey of pre-surgical preparedness at patient visits. Plan-Do-Study-Act (PDSA) cycles included: 1) creation and implementation of a refined curriculum, 2) development of five new quizzes based upon learning objectives, and 3) improving provider-led teaching and quiz administration within the clinic workflow. Run charts assessed impact over time. Results: A total of 346 quiz questions were administered to 34 adolescents. The outcome measure improved from a baseline mean of 68% to 86% following the second PDSA cycle, and the improvement was sustained. Conclusion/Implication: Patient/family comprehension of surgical preparedness improved with standardized education via team-member-led teaching and assessment using quizzes during pre-surgical clinic visits. The next steps include launching redesigned teaching materials with modules correlated to the quizzes and assessing comprehension and outcomes post-surgically.

Keywords: bariatric surgery, adolescent, clinic, pre-bariatric training

Procedia PDF Downloads 42
380 Creative Mathematics – Action Research of a Professional Development Program in an Icelandic Compulsory School

Authors: Osk Dagsdottir

Abstract:

Background—Gait classification allows clinicians to differentiate gait patterns into clinically important categories that help in clinical decision making. Reliable comparison of gait data between normal children and patients requires knowledge of the gait parameters of normal children in the specific age group. However, gait databases for normal children of different ages are still lacking. Objectives—This study aims to investigate the kinematics of the lower limb joints during gait for normal children in different age groups. Methods—Fifty-three normal children (34 boys, 19 girls) were recruited in this study. All the children were aged between 5 and 16 years old. Three age groups were defined: young child (5-7 years), child (8-11 years), and adolescent (12-16 years). When a participant agreed to take part in the project, their parents signed a consent form. A Vicon® motion capture system was used to collect gait data. Participants were asked to walk at a comfortable speed along a 10-meter walkway, and each participant walked up to 20 trials. Three good trials were analyzed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g., walking speed, cadence, and stride length, and joint parameters, e.g., joint angles, forces, moments, etc. Moreover, each gait cycle was divided into 8 phases. The range of motion (ROM) of the pelvis, hip, knee, and ankle joints in three planes of both limbs was calculated using an in-house program. Results—The temporal-spatial variables of the three age groups were compared, and a significant difference (p < 0.05) was found between the groups. Step length and walking speed increased gradually from the young child to the adolescent group, while cadence decreased gradually from the young child to the adolescent group. The mean and standard deviation (SD) of the step length of the young child, child, and adolescent groups were 0.502 ± 0.067 m, 0.566 ± 0.061 m, and 0.672 ± 0.053 m, respectively. The mean and SD of the cadence of the young child, child, and adolescent groups were 140.11±15.79 steps/min, 129±11.84 steps/min, and 115.96±6.47 steps/min, respectively. Moreover, significant differences were observed in the kinematic parameters, both over the whole gait cycle and within each phase. For example, the ROM of the knee angle in the sagittal plane over the whole gait cycle was larger in the young child group (65.03±0.52 deg) than in the child group (63.47±0.47 deg). Conclusion—Our results showed significant differences between the age groups across the gait phases, and thus children's walking performance changes with age. Therefore, it is important for the clinician to consider the age group when analyzing patients with lower limb disorders before any clinical treatment.
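
The abstract mentions an in-house program for computing ROM per joint and plane. A minimal sketch of how such a per-cycle ROM computation might look follows; the synthetic knee-angle trace and heel-strike indices are hypothetical illustrations, not the study's data or code.

```python
import numpy as np

def range_of_motion(joint_angle, cycle_breaks):
    """Compute the range of motion (max minus min joint angle, in degrees)
    for each gait cycle, delimited by successive heel-strike indices."""
    roms = []
    for start, end in zip(cycle_breaks[:-1], cycle_breaks[1:]):
        cycle = joint_angle[start:end]
        roms.append(cycle.max() - cycle.min())
    return np.array(roms)

# toy example: a synthetic sagittal knee-angle trace over three cycles
t = np.linspace(0, 3, 300)                     # three 1-second cycles
knee = 35 + 30 * np.sin(2 * np.pi * t) ** 2    # flexion-like pattern, degrees
breaks = [0, 100, 200, 300]                    # heel-strike sample indices
print(range_of_motion(knee, breaks).round(1))  # ROM per cycle, degrees
```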

Keywords: action research, creative learning, mathematics education, professional development

Procedia PDF Downloads 89
379 Challenges for Competency-Based Learning Design in Primary School Mathematics in Mozambique

Authors: Satoshi Kusaka

Abstract:

The term ‘competency’ is attracting considerable scholarly attention worldwide with the advance of globalization in the 21st century and the arrival of a knowledge-based society. In the current world environment, familiarity with varied disciplines is regarded as vital for personal success. The idea of a competency-based educational system was mooted by the ‘Definition and Selection of Competencies (DeSeCo)’ project conducted by the Organization for Economic Cooperation and Development (OECD). Further, attention to this topic is not limited to developed countries; it can also be observed in developing countries. For instance, the importance of a competency-based curriculum was mentioned in the ‘2013 Harmonized Curriculum Framework for the East African Community’, which recommends key competencies that should be developed in primary schools. Such curricula are actively being introduced and existing programs reviewed, primarily in the East African Community but also in neighboring nations. Taking Mozambique as a case in point, the present paper examines the conception of ‘competency’ as a target of frontline education in developing countries. It also aims to discover how the syllabus, textbooks and lessons, among other things, in primary-level math education are developed, and to determine the challenges faced in the process. This study employs the perspective of competency-based education design to analyze how the term ‘competency’ is defined in the primary-level math syllabus, how it is reflected in the textbooks, and how the lessons are actually developed. ‘Practical competency’ is mentioned in the syllabus, and the description of the term lays emphasis on learners' ability to interactively apply socio-cultural and technical tools, one of the key competencies advocated in the OECD's ‘Definition and Selection of Competencies’ project. However, most of the content of the textbooks pertains to ‘basic academic ability’, and in actual classroom practice, teachers often impart lessons straight from the textbooks. It is clear that the aptitude of teachers and their classroom routines depend greatly on the cultivation of their own ‘practical competency’ as it is defined in the syllabus. In other words, there is great divergence between the ‘syllabus’, which is the intended curriculum, and the content of the ‘textbooks’; in fact, the material in the textbooks should serve as the bridge between the syllabus, which forms the guideline, and the lessons, which represent the ‘implemented curriculum’. Moreover, the results obtained from this investigation reveal that the problem can only be resolved through the cultivation of ‘practical competency’ in teachers, which is currently insufficient.

Keywords: competency, curriculum, mathematics education, Mozambique

Procedia PDF Downloads 159
378 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation

Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy

Abstract:

The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation, which allows the presentation of large and intricate datasets in a simple map interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well informed. This paper is a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment faces numerous obstacles, whether topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes absolutely paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most favourable pipeline route crossing of a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative, subjective and liable to bias towards the discipline and expertise involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites an automated, multi-criteria, quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. Geocost is defined as a numerical penalty score representing the hazard posed by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows) to the pipeline. All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least-geocost route between two defined terminals, as shown in the sketch below. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending an escarpment, but the vulnerability of these spurs to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and the gas export pipeline operator guided the analyses and the assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
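
A minimal sketch of the least-cost-routing step described above, assuming a composite geocost raster built as a weighted sum of constraint layers; the weights, layers, and grid here are illustrative placeholders, not the study's actual geocosts or software.

```python
import heapq
import numpy as np

def least_geocost_route(geocost, start, end):
    """Dijkstra least-cost path over a geocost raster (4-connected grid).
    The cost of stepping into a cell is that cell's geocost value."""
    rows, cols = geocost.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    dist[start] = geocost[start]
    pq = [(geocost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + geocost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # walk back from the end terminal to recover the route
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# composite geocost: weighted sum of hypothetical constraint layers
slope, rugosity, debris = (np.random.rand(50, 50) for _ in range(3))
composite = 3 * slope + 1 * rugosity + 5 * debris   # illustrative weights
route = least_geocost_route(composite, (0, 0), (49, 49))
```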

Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis

Procedia PDF Downloads 373
377 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Radioanalytical Chemistry Process through Titration-on-a-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is an operation with an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current 'beaker' method were to reduce the amount of radioactive substance handled by laboratory personnel, to ease instrumentation adjustability within a glove-box environment, and to allow high-throughput analysis for more cost-effective operations. The measurement technique exploits Taylor-Aris dispersion to create a linear concentration gradient inside a 200 μm x 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be fully generated in under one second, making the process more time-efficient than other source-sink passive diffusion devices. The resulting linear gradient generator was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generated by other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
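
For reference, the classical Taylor-Aris result underpinning the gradient generation gives, for laminar flow in a circular tube, the effective axial dispersion coefficient

\[
D_{\mathrm{eff}} = D_m + \frac{a^2 \bar{u}^2}{48\, D_m}
\]

where \(D_m\) is the molecular diffusivity of the solute, \(a\) the tube radius (100 μm for the 200 μm channel used here), and \(\bar{u}\) the mean flow velocity. This is the textbook relation, not a formula given by the authors; it shows how shear flow amplifies axial spreading and thus smooths the injected step into a near-linear gradient on sub-second timescales.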

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 366
376 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination

Authors: Gilberto Goracci, Fabio Curti

Abstract:

This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real-time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice for performing real-time orbit determination without adding sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers and tens of meters per second in the position and velocity of a spacecraft, respectively. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models can, in fact, grasp nonlinear relations between the inputs (in this case, the magnetometer data and the EKF state estimations) and the targets (the true position and velocity of the spacecraft). The model has been pre-trained on Sun-Synchronous orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft, greatly reducing the onboard computational burden. Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune its parameters on the actual orbit in real-time and work autonomously during GPS outages. In this way, the module shows versatility, as it can be applied to any mission operating in SSO, while the training is completed, and eventually fine-tuned, on the specific orbit, increasing performance and reliability. The results of this study show a one-order-of-magnitude increase in the precision of the state estimate with respect to the EKF alone. Tests on simulated and real data will be shown.
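
To make the hybrid architecture concrete, here is a minimal Python sketch of a standard EKF measurement update followed by a learned correction step. The rnn_refine stub, the linear measurement model, and all numerical values are hypothetical placeholders standing in for the pre-trained RNN and the IGRF-based magnetometer model described above; this is a sketch of the concept, not the authors' implementation.

```python
import numpy as np

def ekf_update(x_pred, P_pred, z, h, H, R):
    """Standard EKF measurement update step (minimal sketch)."""
    y = z - h(x_pred)                      # innovation: measured minus predicted field
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P

def rnn_refine(x_ekf, history):
    """Placeholder for the pre-trained recurrent corrector: a real model
    would map a recent window of (magnetometer data, EKF estimates) to a
    residual correction on the state."""
    return x_ekf  # identity stub; a trained RNN would return x_ekf + delta

# toy usage with a hypothetical linear magnetometer-like measurement model
x_pred = np.zeros(6)                           # position (km) and velocity (km/s)
P_pred = np.eye(6)
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # hypothetical Jacobian of h
h = lambda x: H @ x
z = np.array([0.1, -0.2, 0.05])                # simulated measurement residual
R = 0.01 * np.eye(3)
x_upd, P_upd = ekf_update(x_pred, P_pred, z, h, H, R)
x_final = rnn_refine(x_upd, history=None)      # refined state estimate
```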

Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field

Procedia PDF Downloads 78
375 Analytical Validity Of A Tech Transfer Solution To Internalize Genetic Testing

Authors: Lesley Northrop, Justin DeGrazia, Jessica Greenwood

Abstract:

ASPIRA Labs now offers an en-suite, ready-to-implement technology transfer solution to enable labs and hospitals that lack the resources to build it themselves to offer in-house genetic testing. This unique platform employs a patented Molecular Inversion Probe (MIP) technology that combines the specificity of a hybrid capture protocol with the ease of an amplicon-based protocol, and utilizes an advanced bioinformatics analysis pipeline based on machine learning. To demonstrate its efficacy, two independent genetic tests were validated on this technology transfer platform: expanded carrier screening (ECS) and hereditary cancer testing (HC). The analytical performance of ECS and HC was validated separately, in a blinded manner, for calling three different types of variants: SNVs, short indels (typically <50 bp), and large indels/CNVs, defined as multi-exonic del/dup events. The reference set was constructed using samples from the Coriell Institute, an external clinical genetic testing laboratory, Maine Molecular Quality Controls Inc. (MMQCI), SeraCare, and the GIAB Consortium. Overall, the analytical performance showed a sensitivity and specificity of >99.4% for both ECS and HC in detecting SNVs. For indels, both tests reported a specificity of 100%; ECS demonstrated a sensitivity of 100%, whereas HC exhibited a sensitivity of 96.5%. The bioinformatics pipeline also correctly called all reference CNV events, resulting in a sensitivity of 100% for both tests. No additional calls were made in the HC panel, leading to perfect performance (specificity and F-measure of 100%). In the carrier panel, however, three additional positive calls were made outside the reference set. Two of these calls were confirmed using an orthogonal method and were re-classified as true positives, leaving only one false positive. The pipeline also correctly identified all challenging carrier statuses, such as positive cases for spinal muscular atrophy and alpha-thalassemia, resulting in 100% sensitivity. After confirmation of the additional positive calls via long-range PCR and MLPA, specificity for such cases was estimated at 99%. These performance metrics demonstrate that this tech-transfer solution can be confidently internalized by clinical labs and hospitals to offer mainstream ECS and HC as part of their test catalog, substantially increasing access to quality germline genetic testing for labs of all sizes and resource levels.
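
For reference, the reported metrics follow the standard confusion-matrix definitions (textbook relations, not formulas given by the authors):

\[
\text{sensitivity} = \frac{TP}{TP+FN}, \qquad
\text{specificity} = \frac{TN}{TN+FP}, \qquad
F = \frac{2\,\text{precision}\cdot\text{sensitivity}}{\text{precision}+\text{sensitivity}},
\quad \text{where } \text{precision} = \frac{TP}{TP+FP}.
\]

So, for example, re-classifying the two orthogonally confirmed carrier calls as true positives raises both precision and the F-measure, which is why the confirmation step matters for the reported performance.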

Keywords: clinical genetics, genetic testing, molecular genetics, technology transfer

Procedia PDF Downloads 156
374 Secondary Prisonization and Mental Health: A Comparative Study with Elderly Parents of Prisoners Incarcerated in Remote Jails

Authors: Luixa Reizabal, Inaki Garcia, Eneko Sansinenea, Ainize Sarrionandia, Karmele Lopez De Ipina, Elsa Fernandez

Abstract:

Although the effects of incarceration in prisons close to prisoners' and their families' residences have been studied, little is known about the effects of remote incarceration. The present study shows the impact of secondary prisonization on the mental health of elderly parents of Basque prisoners who are incarcerated in prisons located far from the prisoners' and their families' residences. Secondary prisonization refers to the effects that the imprisonment of a family member has on relatives. In the study, the psychological effects are analyzed by means of a comparative methodology. Specifically, levels of psychopathology (depression, anxiety, and stress) and positive mental health (psychological, social, and emotional wellbeing) are studied in a sample of parents, over 65 years old, of prisoners incarcerated in prisons located far from the Basque Country (some at a distance of less than 400 km, others farther than 400 km). The dataset consists of data collected through a questionnaire and from a spontaneous speech recording. The statistical and automatic analyses show that the levels of psychopathology and positive mental health of elderly parents of prisoners incarcerated in remote jails are affected by the incarceration of their sons or daughters. Concretely, these parents show higher levels of depression, anxiety, and stress and lower levels of emotional (but not psychological or social) wellbeing than parents with no imprisoned daughters or sons. These findings suggest that parents with imprisoned sons or daughters suffer the impact of secondary prisonization on their mental health. When comparing parents with sons or daughters incarcerated within 400 kilometers of home and parents whose sons or daughters are incarcerated farther than 400 kilometers from home, the latter present higher levels of psychopathology, but also higher levels of positive mental health (although the difference between the two groups is not statistically significant). These findings might be explained by resilience: in traumatic situations, people can develop the strength to cope and may even show posttraumatic growth. Bearing all these findings in mind, it can be concluded that secondary prisonization entails suffering for elderly parents with sons or daughters incarcerated in remote jails and, consequently, that changes in the penitentiary policy applied to Basque prisoners are required to end this suffering.

Keywords: automatic spontaneous speech analysis, elderly parents, machine learning, positive mental health, psychopathology, remote incarceration, secondary prisonization

Procedia PDF Downloads 256