Search results for: evaluation framework

480 Exploring Valproic Acid (VPA) Analogues' Interactions with HDAC8 Involved in VPA-Mediated Teratogenicity: A Toxicoinformatics Analysis

Authors: Sakshi Piplani, Ajit Kumar

Abstract:

Valproic acid (VPA) is the first synthetic therapeutic agent used to treat epileptic disorders, which affect nearly 1% of the world population. The teratogenicity caused by VPA has prompted the search for next-generation drugs with better efficacy and fewer side effects. Recent studies have identified HDAC8 as a direct target of VPA that causes the teratogenic effect in the foetus. We have employed molecular dynamics (MD) and docking simulations to understand the binding mode of VPA and its analogues onto HDAC8. A total of twenty 3D structures of human HDAC8 isoforms were selected using a BLAST-P search against the PDB. Multiple sequence alignment was carried out using ClustalW, and PDB 3F07, having the fewest missing and mutated regions, was selected for the study. The missing residues of the loop region were constructed using MODELLER and the energy was minimized. A set of 216 structural analogues (>90% identity) of VPA was obtained from the PubChem and ZINC databases, and their energies were optimized with ChemSketch software using a 3D CHARMM-type force field. Four major enzyme targets (GABAt, SSADH, α-KGDH, GAD) involved in anticonvulsant activity were docked with VPA and its analogues. Out of the 216 analogues, 75 were selected on the basis of lower binding energy and inhibition constant as compared to VPA, and were thus predicted to have anticonvulsant activity. The selected hHDAC8 structure was then subjected to MD simulation using a licensed version of YASARA with the AMBER99SB force field. The structure was solvated in a rectangular box of TIP3P water. The simulation was carried out with periodic boundary conditions, and electrostatic interactions were treated with the particle mesh Ewald algorithm. The pH of the system was set to 7.4, the temperature to 323 K and the pressure to 1 atm. Simulation snapshots were stored every 25 ps. The MD simulation was carried out for 20 ns, and a PDB file of the HDAC8 structure was saved every 2 ns. The structures were analysed using CASTp and UCSF Chimera, and the most stabilized structure (20 ns) was used for the docking study. Molecular docking of the 75 selected VPA analogues with PDB 3F07 was performed using AutoDock 4.2.6. The Lamarckian genetic algorithm was used to generate conformations of the docked ligand and structure. The docking study revealed that VPA and its analogues have more affinity towards the ‘hydrophobic active site channel’, whose hydrophobic properties allow VPA and its analogues to take part in van der Waals interactions with TYR24, HIS42, VAL41, TYR20, SER138 and TRP137, while TRP137 and SER138 showed hydrogen-bonding interactions with the VPA analogues. Fourteen analogues showed better binding affinity than VPA. The admetSAR server was used to predict the ADMET properties of the selected VPA analogues in order to assess their druggability. On the basis of the ADMET screening, nine molecules were selected and are being used for in vivo evaluation using a Danio rerio model.
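
The shortlisting step described above (retaining analogues with both a lower binding energy and a lower inhibition constant than VPA) can be illustrated with a short script. The sketch below is not the authors' code; the field names, reference values and example records are hypothetical.

```python
# Illustrative sketch: shortlist docked VPA analogues that outperform the VPA
# reference on both docking criteria. All values and names are hypothetical.

VPA_REFERENCE = {"binding_energy": -4.1, "ki": 950.0}  # kcal/mol, uM (assumed)

def shortlist_analogues(docking_results, reference=VPA_REFERENCE):
    """Keep analogues with lower binding energy AND lower inhibition constant than VPA."""
    selected = []
    for ligand in docking_results:
        better_energy = ligand["binding_energy"] < reference["binding_energy"]
        better_ki = ligand["ki"] < reference["ki"]
        if better_energy and better_ki:
            selected.append(ligand)
    # Rank the shortlist by binding energy (most negative first)
    return sorted(selected, key=lambda lig: lig["binding_energy"])

if __name__ == "__main__":
    results = [
        {"name": "analogue_001", "binding_energy": -5.3, "ki": 120.0},
        {"name": "analogue_002", "binding_energy": -3.8, "ki": 2100.0},
    ]
    for hit in shortlist_analogues(results):
        print(hit["name"], hit["binding_energy"], hit["ki"])
```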

Keywords: HDAC8, docking, molecular dynamics simulation, valproic acid

Procedia PDF Downloads 255
479 Research on Reminiscence Therapy Game Design

Authors: Web Huei Chou, Li Yi Chun, Wenwe Yu, Han Teng Weng, H. Yuan, T. Yang

Abstract:

The prevalence of dementia is estimated to rise to 78 million people by 2030 and 139 million by 2050. Among those affected, Alzheimer's disease is the most common form of dementia, contributing to 60–70% of cases. Addressing this growing challenge is crucial, especially considering the impact on older individuals and their caregivers. To reduce the behavioral and psychological symptoms of dementia, people with dementia use a variety of pharmacological and non-pharmacological treatments, and some studies have found that the use of non-pharmacological interventions for the treatment of depression, cognitive function, and social activity has potential benefits. Butler developed reminiscence therapy as a method of treating dementia. Through ‘life review,’ individuals can recall their past events, activities, and experiences, which can reduce depression in the elderly and improve their quality of life, helping to give meaning to their lives and to support independent living. The life review process uses a variety of memory triggers, such as household items, past objects, photos, and music, and can be conducted collectively or individually, in a structured or unstructured way. However, despite the advantages of reminiscence therapy, past research has repeatedly pointed out that current studies lack rigorous experimental evaluation and cannot report clear, generalizable results. Therefore, this study uses physiological sensing experiments to find a feasible experimental and verification method, to provide clearer design specifications for reminiscence therapy, and to support its wider application in healthy aging. This study is an ongoing research project, a collaboration between the School of Design at Yunlin University of Science and Technology in Taiwan and the Department of Medical Engineering at Chiba University in Japan. We use traditional rice dishes from Taiwan and Japan as nostalgic content to construct a narrative structure for the elderly in the two countries for life review activities, providing an easy-to-carry reminiscence therapy game with an intuitive interactive design. The experiment is expected to be completed in 36 months. The design team constructed and designed the game after conducting surveys of literary and historical material and interviews with elders to confirm the nostalgic historical content in Taiwan and Japan. The Japanese team planned the Electrodermal Activity (EDA) and Blood Volume Pulse (BVP) experimental environments and the data calculation model, and then, after conducting experiments with elderly people in the two locations, the research results were analyzed and discussed together. The research has completed the first 24 months of pre-study and design work and has entered the project acceptance stage.

Keywords: reminiscence therapy, aging health, design research, life review

Procedia PDF Downloads 34
478 Evaluation of Requests and Outcomes of Magnetic Resonance Imaging Assessing for Cauda Equina Syndrome at a UK Trauma Centre

Authors: Chris Cadman, Marcel Strauss

Abstract:

Background: In 2020, the University Hospital Wishaw in the United Kingdom became the centre for trauma and orthopaedics within its health board. This resulted in the majority of patients with suspected cauda equina syndrome (CES) being assessed and imaged at this site, placing increased demand on MR imaging and displacing other existing activity. Following this transition, imaging requests for CES did not always follow national guidelines and would often be missing important clinical and safety information. There also appeared to be a very low positive scan rate compared with previously reported studies. In an attempt to improve patient selection and reduce the burden of CES imaging at this site, a clinical audit was performed. Methods: A total of 250 consecutive patients imaged to assess for CES were evaluated. Patients had to have presented acutely to either the emergency or orthopaedic department with a presenting complaint of suspected CES. Patients were excluded if they were not admitted acutely or were assessed by other clinical specialities. In total, 233 patients were included. Requests were assessed for appropriate clinical history, accurate and complete clinical assessment and MRI safety information. Clinical assessment was allocated a score out of 6 based on information relating to history of pain, level of pain, dermatomes/myotomes affected, peri-anal paraesthesia/anaesthesia, anal tone and post-void bladder volume, with each element scoring one point. Images were assessed for positive findings of CES, acquired spinal stenosis or nerve root compression. Results: Overall, 73% of requests had a clear clinical history of CES. The urgency of the request for imaging was given in 23% of cases. The mean clinical assessment score was 3.7 out of a total of 6. Overall, 2% of scans were positive for CES, 29% had acquired spinal stenosis and 30% had nerve root compression. For patients with CES, 75% had acute neurological signs compared with 68% of the study population. CES patients had a mean clinical history score of 5.3 compared with 3.7 for the study population. Overall, 95% of requests had appropriate MRI safety information. Discussion: This study included 233 patients who underwent specialist assessment and referral for MR imaging for suspected CES. Despite the serious nature of this condition, a large proportion of imaging requests did not have a clear clinical query of CES and the level of urgency was not given, which could potentially lead to a delay in imaging and treatment. Clinical examination was often also incomplete, which can make the triaging of patients presenting with similar symptoms challenging. The positive rate for CES was only 2%, much below other studies, which had positive rates of 6–40%, with a large meta-analysis finding a mean positive rate of 19%. These findings demonstrate an opportunity to improve the quality of imaging requests for suspected CES. This may help to improve patient selection for imaging and result in a positive rate for CES imaging that is more in line with other centres.
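
The six-element scoring rubric described above maps directly onto a small function. The sketch below is illustrative only; the field names are hypothetical rather than taken from the audit dataset.

```python
# Illustrative sketch of the six-element clinical assessment score: one point
# per documented element (hypothetical field names, not the audit's data model).

ASSESSMENT_ELEMENTS = [
    "history_of_pain",
    "level_of_pain",
    "dermatomes_myotomes",
    "perianal_paraesthesia_anaesthesia",
    "anal_tone",
    "post_void_bladder_volume",
]

def clinical_assessment_score(request: dict) -> int:
    """Score an imaging request out of 6: one point per documented element."""
    return sum(1 for element in ASSESSMENT_ELEMENTS if request.get(element))

example_request = {
    "history_of_pain": True,
    "level_of_pain": True,
    "dermatomes_myotomes": False,
    "perianal_paraesthesia_anaesthesia": True,
    "anal_tone": False,
    "post_void_bladder_volume": True,
}
print(clinical_assessment_score(example_request))  # -> 4
```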

Keywords: cauda equina syndrome, acute back pain, MRI, spine

Procedia PDF Downloads 15
477 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion: How Neurofinance Can Help to Understand Behavioral Anomalies

Authors: Roberta Martino, Viviana Ventre

Abstract:

Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, which is calculated by multiplying the cardinal utility of the outcome, as if its receipt were instantaneous, by a discount function that decreases the utility value according to how far the actual receipt of the outcome lies from the moment the choice is made. Initially, the discount function was assumed to have an exponential form, whose rate of decrease over time is constant, in line with the profile of the rational investor described by classical economics. Instead, empirical evidence called for the formulation of alternative, hyperbolic models that better represent the actual behavior of investors. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. This means that when a choice is made in a very difficult and information-rich environment, the brain strikes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of alternatives, and the collection and processing of information, are conditioned by systematic distortions of the decision-making process: the behavioral biases involving the individual's emotional and cognitive systems. In this paper, we present an original decomposition of the discount function to investigate the psychological principles behind hyperbolic discounting. The curve can be decomposed into two components: the first component is responsible for the decrease in the value of the outcome as time increases and is related to the individual's impatience; the second component relates to the change in the direction of the tangent vector to the curve and indicates how strongly the individual perceives the indeterminacy of the future, that is, his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it would allow the decision-making process to be described as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and the amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality.
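
The contrast between the constant decay of exponential discounting and the varying decay and tangent direction of a hyperbolic curve can be made concrete numerically. The sketch below is purely illustrative and is not the authors' decomposition; the functional forms, parameter values and the tangent-angle proxy are assumptions.

```python
# Illustrative numerical sketch: compare exponential and hyperbolic discount curves
# and track the direction of the tangent vector to the hyperbolic curve over time.
import math

k, r = 0.3, 0.3          # hyperbolic and exponential discount parameters (assumed)

def hyperbolic(t):  return 1.0 / (1.0 + k * t)
def exponential(t): return math.exp(-r * t)

def tangent_angle(f, t, h=1e-4):
    """Direction (radians) of the tangent vector to the curve (t, f(t))."""
    slope = (f(t + h) - f(t - h)) / (2 * h)
    return math.atan2(slope, 1.0)

for t in (0.0, 2.0, 8.0):
    print(f"t={t:4.1f}  D_hyp={hyperbolic(t):.3f}  D_exp={exponential(t):.3f}  "
          f"tangent angle (hyp)={tangent_angle(hyperbolic, t):.3f} rad")

# The exponential curve decays at a constant proportional rate, while the hyperbolic
# curve decays fast at first and then flattens, its tangent direction changing with
# time - the geometric feature the abstract links to perceived future uncertainty.
```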

Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty

Procedia PDF Downloads 130
476 Unlocking New Room of Production in Brown Field: Integration of Geological Data Conditioned 3D Reservoir Modelling of Lower Senonian Matulla Formation, Ras Budran Field, East Central Gulf of Suez, Egypt

Authors: Nader Mohamed

Abstract:

The Late Cretaceous deposits are well developed throughout Egypt. This is due to a transgression phase associated with the subsidence caused by the neo-Tethyan rift event that took place across the northern margin of Africa, resulting in a period of dominantly marine deposition in the Gulf of Suez. The Late Cretaceous Nezzazat Group comprises the Cenomanian, Turonian and Lower Senonian clastic sediments. The Nezzazat Group has been divided into four formations, namely, from base to top, the Raha Formation, the Abu Qada Formation, the Wata Formation and the Matulla Formation. The Cenomanian Raha and the Lower Senonian Matulla formations are the most important clastic sequence in the Nezzazat Group because they provide the highest net reservoir thickness and the highest net/gross ratio. This study focuses on the Matulla Formation in the eastern part of the Gulf of Suez. Three stratigraphic surface sections (Wadi Sudr, Wadi Matulla and Gabal Nezzazat), which represent the exposed Coniacian-Santonian sediments in Sinai, are used for correlating the Matulla sediments of the Ras Budran field. Cuttings descriptions, petrographic examination, log behavior and biostratigraphy, together with the outcrops, are used to identify the reservoir characteristics, lithology and facies environment and to subdivide the Matulla Formation into three units. The lower unit is believed to be the main reservoir, as it consists mainly of sands with shale and sandy carbonates, while the other units are mainly carbonate with some streaks of shale and sand. Reservoir modeling is an effective technique that assists in reservoir management decisions concerning the development and depletion of hydrocarbon reserves, so it was essential to model the Matulla reservoir as accurately as possible in order to better evaluate and calculate the reserves and to determine the most effective way of recovering as much of the petroleum as economically possible. All available data on the Matulla Formation were used to build the reservoir structure model and the lithofacies, porosity, permeability and water saturation models, which are the main parameters that describe the reservoir and provide information for effectively evaluating the need to develop its oil potential. This study has shown the effectiveness of: (1) the integration of geological data to evaluate and subdivide the Matulla Formation into three units; (2) lithology and facies environment interpretation, which helped in defining the depositional setting of the Matulla Formation; (3) 3D reservoir modeling technology as a tool for adequately understanding the spatial distribution of properties and, in addition, for evaluating the newly unlocked reservoir areas of the Matulla Formation, which have to be drilled to investigate and exploit the un-drained oil; and (4) adding new production opportunities and additional reserves to the Ras Budran field.
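
The property models listed above (net/gross, porosity, water saturation) ultimately feed a volumetric reserve estimate. As a hedged illustration only, the sketch below applies the standard STOIIP relation with invented input values; it is not this study's workflow or data.

```python
# Illustrative volumetric oil-in-place estimate using the standard STOIIP relation.
# All input values are hypothetical.

def stoiip_stb(grv_acre_ft, ntg, porosity, sw, bo):
    """Stock-tank oil initially in place, in stock-tank barrels.
    The factor 7758 converts acre-feet of rock volume to reservoir barrels."""
    return 7758.0 * grv_acre_ft * ntg * porosity * (1.0 - sw) / bo

example = stoiip_stb(
    grv_acre_ft=50_000.0,  # gross rock volume (assumed)
    ntg=0.6,               # net-to-gross ratio
    porosity=0.18,
    sw=0.35,               # water saturation
    bo=1.25,               # formation volume factor, rb/stb
)
print(f"STOIIP ~ {example / 1e6:.1f} MMstb")
```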

Keywords: geology, oil and gas, geoscience, sequence stratigraphy

Procedia PDF Downloads 106
475 Interoperability of 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force in Search and Rescue Operations: An Assessment

Authors: Ryan C. Igama

Abstract:

The complexity of disaster risk reduction and management has paved the way for various innovations and approaches to mitigate the loss of lives and casualties during disaster-related situations. The efficiency of response operations during disasters relies on the timely and organized deployment of search, rescue and retrieval teams. Indeed, the assistance provided by search, rescue and retrieval teams during disaster operations is a critical service needed to further minimize the loss of lives and casualties. The Armed Forces of the Philippines is mandated to provide humanitarian assistance and disaster relief (HADR) operations during calamities and disasters. Thus, this study, “Interoperability of the 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force in Search and Rescue Operations: An Assessment”, was intended to provide substantial information to further strengthen and promote search and rescue capabilities in the Philippines. Further, this study also aims to assess the interoperability of the 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force. The study covered the component units of the Philippine Air Force of the Armed Forces of the Philippines, specifically the 505th SRG and the 205th THW, which were the involved units and also acted as the respondents of the study. A qualitative approach was used, in the form of focus group discussions, key informant interviews and documentary analysis, as the primary means of obtaining the data needed for the study. Essentially, the study was geared towards evaluating the effectiveness of the interoperability of the two involved PAF units during search and rescue operations. Further, it also delved into identifying the impacts, gaps and challenges confronting interoperability in terms of training, equipment and coordination mechanisms, as well as the measures needed for improvement. The results regarding the interoperability of the two PAF units during search and rescue operations showed that there was duplication of functions or tasks in HADR activities, specifically during the conduct of air rescue operations in situations such as calamities. In addition, it was revealed that there was a lack of equipment and training for the personnel involved in search and rescue operations, which is a vital element during calamity response activities. Based on the findings of the study, it was recommended that a strategic planning workshop be conducted on the duties and responsibilities of the personnel involved in search and rescue operations, to address the command and control and interoperability issues of these units. Additionally, intensive HADR-related training must be conducted for the personnel involved in the search and rescue operations of the two PAF units, so that they can become more proficient in their skills and sustainably increase their knowledge of search and rescue scenarios, including the capabilities of the respective units. Lastly, existing doctrines and policies must be updated to adapt to the evolving situations in search and rescue operations.

Keywords: interoperability, search and rescue capability, humanitarian assistance, disaster response

Procedia PDF Downloads 94
474 Monitoring the Effect of Doxorubicin Liposomal in VX2 Tumor Using Magnetic Resonance Imaging

Authors: Ren-Jy Ben, Jo-Chi Jao, Chiu-Ya Liao, Ya-Ru Tsai, Lain-Chyr Hwang, Po-Chou Chen

Abstract:

Cancer is still one of the serious diseases threatening the lives of human beings. How to achieve early diagnosis and effective treatment of tumors is a very important issue. Animal carcinoma models can provide a simulation tool for the study of pathogenesis, biological characteristics and therapeutic effects. Recently, drug delivery systems have been rapidly developed to effectively improve therapeutic effects. Liposomes play an increasingly important role in clinical diagnosis and therapy by delivering a pharmaceutical or contrast agent to the targeted sites. Liposomes can be absorbed and excreted by the human body and are well known to cause it no harm. This study aimed to compare the therapeutic effects of an encapsulated (doxorubicin liposomal, LipoDox) and an un-encapsulated (doxorubicin, Dox) anti-tumor drug using Magnetic Resonance Imaging (MRI). Twenty-four New Zealand rabbits implanted with VX2 carcinoma in the left thigh were classified into three groups: a control group (untreated), a Dox-treated group and a LipoDox-treated group, with 8 rabbits in each group. MRI scans were performed three days after tumor implantation. A 1.5T GE Signa HDxt whole-body MRI scanner with a high-resolution knee coil was used in this study. After a 3-plane localizer scan was performed, Three-Dimensional (3D) Fast Spin Echo (FSE) T2-Weighted Imaging (T2WI) was used for tumor volumetric quantification, and Two-Dimensional (2D) spoiled gradient recalled echo (SPGR) Dynamic Contrast-Enhanced (DCE) MRI was used for tumor perfusion evaluation. DCE-MRI was designed to acquire four baseline images, followed by injection of the contrast agent Gd-DOTA through the ear vein of the rabbits. Afterwards, a series of 32 images was acquired to observe the signal changes over time in the tumor and muscle. The MRI scanning was scheduled on a weekly basis for a period of four weeks to observe the tumor progression longitudinally. The Dox and LipoDox treatments were administered 3 times in the first week immediately after VX2 tumor implantation. ImageJ was used to quantitate tumor volume and the time-course signal enhancement on DCE images. The changes in tumor size showed that the growth of VX2 tumors was effectively inhibited in both the LipoDox-treated and Dox-treated groups. Furthermore, the tumor volume of the LipoDox-treated group was significantly lower than that of the Dox-treated group, which implies that LipoDox has a better therapeutic effect than Dox. The signal intensity of the LipoDox-treated group was significantly lower than that of the other two groups, which implies that the targeted therapeutic drug remained in the tumor tissue. This study provides a radiation-free and non-invasive MRI method for the therapeutic monitoring of targeted liposomes in an animal tumor model.
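
A minimal sketch of the DCE time-course analysis described above (the four baseline acquisitions averaged as S0, then relative enhancement computed for the later time points) is given below; the ROI signal values are invented, and this is not the study's ImageJ workflow.

```python
# Minimal sketch of a DCE-MRI relative-enhancement curve: the mean of the first
# four (baseline) acquisitions is S0, and (S(t) - S0) / S0 is computed afterwards.
# The ROI signal values below are hypothetical.

def relative_enhancement(signal_series, n_baseline=4):
    """Return the relative enhancement curve for an ROI signal time course."""
    s0 = sum(signal_series[:n_baseline]) / n_baseline
    return [(s - s0) / s0 for s in signal_series[n_baseline:]]

tumour_roi = [100, 101, 99, 100, 135, 160, 172, 168, 161]   # hypothetical values
muscle_roi = [ 80,  79, 81,  80,  92,  97,  99,  98,  96]

print([round(v, 2) for v in relative_enhancement(tumour_roi)])
print([round(v, 2) for v in relative_enhancement(muscle_roi)])
```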

Keywords: doxorubicin, dynamic contrast-enhanced MRI, lipodox, magnetic resonance imaging, VX2 tumor model

Procedia PDF Downloads 458
473 Company-Independent Standardization of Timber Construction to Promote Urban Redensification of Housing Stock

Authors: Andreas Schweiger, Matthias Gnigler, Elisabeth Wieder, Michael Grobbauer

Abstract:

Especially in the alpine region, available areas for new residential development are limited. One possible solution is to exploit the potential of existing settlements. Urban redensification, especially the addition of floors to existing buildings, requires efficient, lightweight constructions with short construction times. This topic is being addressed in the five-year Alpine Building Centre. The focus of this cooperation between Salzburg University of Applied Sciences and RSA GH Studio iSPACE is on transdisciplinary research in the fields of building and energy technology, building envelopes and geoinformation, as well as on the transfer of research results to industry. One development objective is a wood panel construction system with a high degree of prefabrication, to optimize construction quality, construction time and applicability for small and medium-sized enterprises. The system serves as a reliable working basis for mastering the complex building task of redensification. The technical solution is the development of an open system in timber frame and solid wood construction which is suitable for a maximum two-storey addition to residential buildings. The applicability of the system is mainly influenced by the existing building stock. Therefore, timber frame and solid timber construction are combined where necessary to bridge large spans of the existing structure while keeping the dead weight as low as possible. Escape routes are usually constructed in reinforced concrete and are located outside the system boundary. Thus, within the framework of the legal and normative requirements of timber construction, a hybrid construction method for redensification is created. The component structure, load-bearing structure and detail constructions are developed in accordance with the relevant requirements. The results are directly applicable in individual cases, with the exception of the required verifications. In order to verify the practical suitability of the developed system, stakeholder workshops are held on the one hand, and on the other, the system is applied in the planning of a two-storey extension. A company-independent construction standard offers the possibility of cooperation and bundling of capacities in order to be able to handle larger construction volumes in collaboration with several companies. Numerous further developments can take place on the basis of the system, which is under an open license. The construction system will support planners and contractors from design to execution. In this context, open means publicly published and freely usable and modifiable for one's own use, as long as the authorship and deviations are mentioned. The companies are provided with a system manual, which contains the system description and an application manual. This manual will facilitate the selection of the correct component cross-sections for specific construction projects by means of complete component and detail specifications. This presentation highlights the initial situation, the motivation and the approach, but especially the technical solution as well as the possibilities for its application. After an explanation of the objectives and working methods, the component and detail specifications are presented as work results, together with their application.

Keywords: redensification, SME, urban development, wood building system

Procedia PDF Downloads 111
472 Newly Designed Ecological Task to Assess Cognitive Map Reading Ability: Behavioral Neuro-Anatomic Correlates of Mental Navigation

Authors: Igor Faulmann, Arnaud Saj, Roland Maurer

Abstract:

Spatial cognition consists of a plethora of high-level cognitive abilities; among them, the ability to learn and to navigate in large-scale environments is probably one of the most complex skills. Navigation is thought to rely on the ability to read a cognitive map, defined as an allocentric representation of one's environment. Those representations are of course intimately related to the two geometrical primitives of the environment: distance and direction. Also, many recent studies point to a predominant hippocampal and parahippocampal role in spatial cognition, as well as in the more specific cluster of navigational skills. In a previous study in humans, we used a newly validated test assessing cognitive map processing by evaluating the ability to judge relative distances and directions: the CMRT (Cognitive Map Recall Test). This study identified, in topographically disorientated patients, (1) behavioral differences between the evaluation of distances and of directions, and (2) distinct causality patterns assessed via VLSM (i.e., distinct cerebral lesions cause distinct response patterns depending on the modality: distance vs. direction questions). Thus, we hypothesized that (1) if the CMRT really taps into the same resources as real navigation, there would be hippocampal, parahippocampal, and parietal activation, and (2) there exist underlying neuroanatomical and functional differences between the processing of these two modalities. Aiming toward a better understanding of the neuroanatomical correlates of the CMRT in humans, and more generally toward a better understanding of how the brain processes the cognitive map, we adapted the CMRT as an fMRI procedure. Twenty-three healthy subjects (11 women, 12 men), all living in Geneva for at least 2 years, underwent the CMRT in fMRI. The results show, for distance and direction taken together, that the most active brain regions are the parietal, frontal and cerebellar areas. Additionally, and as expected, patterns of brain activation differ between the two modalities. Furthermore, distance processing seems to rely more on parietal regions (compared to other brain regions in the same modality and also to direction). It is interesting to note that no significant activity was observed in the hippocampal or parahippocampal areas for this modality. Direction processing seems to tap more into frontal and cerebellar brain regions (compared to other brain regions in the same modality and also to distance). Significant hippocampal and parahippocampal activity was shown only in this modality. These results demonstrate a complex interaction of structures that is compatible with response patterns observed in other navigational tasks, thus showing that the CMRT taps at least partially into the same brain resources as real navigation. Additionally, the differences between the processing of distances and directions lead to the conclusion that the human brain processes each modality distinctly. Further research should focus on the dynamics of this processing, allowing a clearer understanding of the two sub-processes.

Keywords: cognitive map, navigation, fMRI, spatial cognition

Procedia PDF Downloads 295
471 Active Learning Methods in Mathematics

Authors: Daniela Velichová

Abstract:

Plenty of ideas on how to adopt active learning methods in education are available nowadays. Mathematics is a subject where the active involvement of students is particularly required in order to achieve desirable results in terms of sustainable knowledge and deep understanding. The present article is based on the outcomes of an Erasmus+ project, DrIVE-MATH, which was aimed at developing a novel and integrated framework for teaching maths classes in engineering courses at the university level. It is fundamental for students from the early years of their academic life to have agile minds. They must be prepared to adapt to their future working environments, where enterprises' views are always evolving, where everyone collaborates in teams, and where relations between peers are shaped for the well-being of the whole: workers and company profit alike. This reality imposes new requirements on higher education in terms of the adoption of different pedagogical methods, such as project-based and active-learning methods used within course curricula. Active learning methodologies are regarded as an effective way to prepare students to meet the challenges posed by enterprises and to help them build critical thinking, analytic reasoning, and insight into complex problems from different perspectives. Fostering learning-by-doing activities in the pedagogical process can help students achieve learning independence, as they can acquire deeper conceptual understanding by experimenting with abstract concepts in a more interesting, useful, and meaningful way. Clear information about learning outcomes and goals can help students take more responsibility for their learning results. Active learning methods implemented by the project team members in their teaching practice, eduScrum and Jigsaw in particular, proved to provide better scientific and soft-skills support to students than classical teaching methods. The eduScrum method enables teachers to create a working environment that stimulates students' working habits and self-initiative, as they become aware of their responsibilities within the team, of their own acquired knowledge, and of their ability to solve problems independently, though in collaboration with other team members. This method enhances collaborative learning, as students work in teams towards a common goal, knowledge acquisition, while interacting with each other and being evaluated individually. Teams consisting of 4-5 students work together on a list of problems, a sprint; each member is responsible for solving one of them, while the group leader, a master, is responsible for the whole team. A similar principle is behind the Jigsaw technique, where the classroom activity makes students dependent on each other to succeed. Students are divided into groups, and assignments are split into pieces, which need to be assembled by the whole group to complete the (Jigsaw) puzzle. In this paper, an analysis of students' perceptions concerning the achievement of deeper conceptual understanding in mathematics and the development of soft skills, such as self-motivation, critical thinking, flexibility, leadership, responsibility, teamwork, negotiation, and conflict management, is presented. Some new challenges brought by introducing active learning methods into basic mathematics courses are discussed. In addition, a few examples of sprints developed and used in teaching basic maths courses at technical universities are presented.

Keywords: active learning methods, collaborative learning, conceptual understanding, eduScrum, Jigsaw, soft skills

Procedia PDF Downloads 55
470 Anthropogenic Impact on Surface and Groundwaters Quality in the Western Part of the River Nile, Elsaff Village, Giza

Authors: Mohamed Elkashouty, Mohamed Yehia, Ahmed Tawfuk

Abstract:

The study area is located in the southern part of Giza Governorate on both sides of the Nile Valley. A combination of major and trace elements has been used to classify surface and ground waters in El Kurimat village, Egypt. The main purpose of the project is to investigate surface- and ground-water quality and to carry out a hydrochemical evaluation. The situation is further complicated by contamination from lithogenic and anthropogenic (agricultural and sewage wastewater) sources and by weak groundwater management strategies. The Quaternary aquifer consists of sands and gravels of Pleistocene age intercalated with clay lenses and overlain by a silty clay aquitard (Holocene). The semi-pervious silty clay aquitard of the Holocene Nile sediments covers the Quaternary aquifer in most areas. The groundwater flows generally from southwest to northeast. To achieve this target, thirty-five and seventy-three samples were collected from surface and ground waters during the summer and winter seasons (2009-2010). Total dissolved solids (TDS), cations, anions, NO2, NO3, PO4, Al, Ba, Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, Zn, As, F, Sb, Se, Sn, Sr and V were determined in the water samples. Grain size analysis was performed on eight soil samples, and the organic matter percentage was measured in the different fractions. The TDS concentration is high in the Arab El Ein canal due to lithogenic and anthropogenic sources. The average concentrations of TDS in the River Nile are 245 (summer) and 254 ppm (winter). NO3 content ranges from 1.7 to 12 mg/l in summer, while in winter it ranges from 0.4 to 2.4 mg/l. Most of the toxic metal concentrations are below the drinking and irrigation guidelines except Mn, V, Cr, Al and Fe, which are higher than the guidelines in some canals and drains. The TDS concentration in groundwater increases toward the northeastern and northwestern parts of the study area (i.e., toward the limestone plateau). This is due to the hydrogeological interconnection between the Quaternary and the Eocene (saline water) aquifers, the wastewater dump, and recharge from Wadi El Atfihi wastewater. There is a good match between the hydrogeology and the hydrogeochemistry. Total dissolved solids in groundwater also increase toward the southwestern part, which may be due to the hydrogeological interconnection between the Quaternary and Eocene aquifers and leakage of agricultural wastewater from the El Mohut drain. Fe, Mn, Cr, Al, PO4 and NO3 concentrations are high due to anthropogenic sources, and the waters are therefore unsuitable for drinking. The average concentrations of Cr, Cu, Fe, Mn and Zn are higher in winter than in summer due to winter drought. The organic matter content in the soil increases in the northeastern and southwestern parts, with different fractions, due to agricultural wastewater. Reuse of contaminated surface- and ground-water samples by mixing with fresh water (evaluated with AquaChem) was estimated to increase the income per capita.

Keywords: surface water, groundwater, major ions, toxic metals

Procedia PDF Downloads 293
469 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a ‘complex’ and ‘extreme condition’ of cognitive tasks, while consecutive interpreting (CI) does not require interpreters to share processing capacity between concurrent tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demand and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity and sequential organization mechanisms with a self-built inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech and original speech texts, with a total running-word count of 321,960. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a non-grammatically-bound sequential unit, is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with a high cognitive load, our findings show that CI may impose a heavy, taxing demand on cognitive resources in a different way and hence yields more lexically and syntactically simplified output. In addition, the sequential features show that SI and CI organize the sequences from the source text into the output in different ways, each so as to minimize the cognitive load. We interpret the results within the framework that cognitive demand is exerted on both the maintenance and the coordination components of working memory. On the one hand, the information maintained in CI is inherently larger in volume compared to SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI makes the interpreters keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of the cognitive demand during both interpreting types.
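
Two of the lexical measures named above, type-token ratio and lexical density, can be computed in a few lines. The sketch below is a simplified illustration: a stop-word list stands in for the part-of-speech tagging a real corpus pipeline would use, and the example sentence is invented.

```python
# Minimal sketch of type-token ratio and lexical density for a toy tokenised sentence.
# Content words are approximated as "not in a stop-word list" (a simplification).

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "was", "that"}

def type_token_ratio(tokens):
    return len(set(tokens)) / len(tokens)

def lexical_density(tokens):
    content = [t for t in tokens if t not in STOP_WORDS]
    return len(content) / len(tokens)

tokens = "the interpreter kept the structure of the source speech in the output".split()
print(f"TTR = {type_token_ratio(tokens):.2f}, lexical density = {lexical_density(tokens):.2f}")
```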

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 181
468 Gamification of eHealth Business Cases to Enhance Rich Learning Experience

Authors: Kari Björn

Abstract:

The introduction of games has expanded the application area of computer-aided learning tools to a wide variety of age groups of learners. Serious games engage learners in a real-world type of simulation and potentially enrich the learning experience. The institutional background of a Bachelor's-level engineering program in Information and Communication Technology is introduced, with a detailed focus on one of its majors, Health Technology. As part of a Customer Oriented Software Application thematic semester, one particular course, “eHealth Business and Solutions”, is described and reflected upon in a gamified framework. Building a consistent view of the vast literature on business management, strategy, marketing and finance in a very limited time enforces a selection of topics relevant to the industry. Health technology is a novel and growing industry, with an expanding sector in consumer wearable devices and homecare applications. The business sector is attracting new entrepreneurs and impatient investor funds. From an engineering-education point of view, the sector is driven by miniaturizing electronics, sensors and wireless applications. However, the market is highly consumer-driven, and usability, safety and data-integrity requirements are extremely high. When the same technology is used in the analysis or treatment of patients, very strict regulatory measures are enforced. The paper introduces a course structure that uses gamification as a tool to learn the essentials of a new market: customer value proposition design, followed by a market-entry game. Students analyze the existing market size and pricing structure of the eHealth web-service market and enter the market as the steering group of their company, competing against the legacy players and against each other. The market is growing but has its own rules of demand and supply balance. New products can be developed with an R&D investment and targeted to the market with unique quality and price combinations. The product cost structure can be improved by investing in enhanced production capacity. Investments can optionally be funded by foreign capital. Students make management decisions and face the dynamics of market competition in the form of an income statement and balance sheet after each decision cycle. The focus of the learning outcome is to understand customer value creation as the source of cash flow. The benefit of gamification is to enrich the learning experience of the structure and meaning of financial statements. The paper describes the gamification approach and discusses the outcomes after two course implementations. Along with the case description of the learning challenges, some unexpected misconceptions are noted. Improvements to the game and to the semi-gamified teaching pedagogy are discussed. The case description serves as additional support for a new game coordinator, as well as helping to improve the method. Overall, the gamified approach has helped to engage engineering students in business studies in an energizing way.
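
As a hedged illustration of how one decision cycle could be translated into a simple income statement, the sketch below is offered; it is not the course's actual game engine, and all parameters (price, unit cost, fixed costs, R&D spend, units sold) are invented.

```python
# Illustrative sketch: turn one round of market-game decisions into a minimal
# income statement. All figures are hypothetical.

def decision_cycle(price, units_sold, unit_cost, fixed_costs, rnd_investment):
    """Return a minimal income statement for one round of the market game."""
    revenue = price * units_sold
    cogs = unit_cost * units_sold
    operating_profit = revenue - cogs - fixed_costs - rnd_investment
    return {
        "revenue": revenue,
        "cost_of_goods_sold": cogs,
        "fixed_costs": fixed_costs,
        "r_and_d": rnd_investment,
        "operating_profit": operating_profit,
    }

statement = decision_cycle(price=29.0, units_sold=1200, unit_cost=11.0,
                           fixed_costs=9000.0, rnd_investment=4000.0)
for line, value in statement.items():
    print(f"{line:>20}: {value:10.2f}")
```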

Keywords: engineering education, integrated curriculum, learning experience, learning outcomes

Procedia PDF Downloads 240
467 Using Machine Learning to Extract Patient Data from Non-standardized Sports Medicine Physician Notes

Authors: Thomas Q. Pan, Anika Basu, Chamith S. Rajapakse

Abstract:

Machine learning requires data that is categorized into features that models train on. This topic is important to the field of sports medicine due to the many tools it provides to physicians, such as diagnosis support and risk assessment. The physician notes that healthcare professionals take are usually unclean and not suitable for model training. The objective of this study was to develop and evaluate an advanced approach for extracting key features from sports medicine data without the need for extensive model training or data labeling. An LLM (Large Language Model) was given a narrative (physician's notes) and prompted to extract four features (details about the patient). The narratives came from a datasheet that contained six columns: Case Number, Validation Age, Validation Gender, Validation Diagnosis, Validation Body Part, and Narrative. The validation columns represent the accurate responses that the LLM attempts to output. Given a narrative, the LLM would output its response and extract the age, gender, diagnosis, and injured body part, with each category taking up one line. The output would then be cleaned, matched, and added to new columns containing the extracted responses. Five ways of checking the accuracy were used: unclear count, substring comparison, LLM comparison, LLM re-check, and hand evaluation. The unclear count essentially represents the extractions the LLM missed; it can also be understood as the recall score ([total - false negatives] over total). The rest correspond to the precision score ([total - false positives] over total). Substring comparison evaluated the similarity of the validation (X) and extracted (Y) columns by checking whether X's results were a substring of Y's findings and vice versa. LLM comparison directly asked an LLM whether the X and Y results were similar. LLM re-check prompted the LLM to see whether the extracted results could be found in the narrative. Lastly, a selection of 1,000 random narratives was hand-evaluated to give an estimate of how well the LLM-based feature extraction model performed. On a selection of 10,000 narratives, the LLM-based approach had a recall score of roughly 98%. However, the precision scores of the substring comparison and LLM comparison models were around 72% and 76%, respectively. The reason for these low figures is the minute differences between answers. For example, the ‘chest’ is a part of the ‘upper trunk’; however, these models cannot detect that. On the other hand, the LLM re-check and the subset of hand-tested narratives showed precision scores of 96% and 95%. If this subset is used to extrapolate the possible outcome for the whole 10,000 narratives, the LLM-based approach would be strong in both precision and recall. These results indicate that an LLM-based feature extraction model could be a useful way for medical data in sports to be collected and analyzed by machine learning models. Wide use of this method could potentially increase the availability of data, thus improving machine learning algorithms and supporting doctors with more enhanced tools.
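
The extraction-and-checking flow described above can be sketched as follows. This is an illustration rather than the study's implementation: call_llm is a placeholder for whatever LLM interface is used, and the prompt wording, column names and matching rule are assumptions.

```python
# Minimal sketch of LLM feature extraction plus the "substring comparison" check.
# call_llm is a placeholder; nothing here is the study's exact code.

PROMPT = (
    "From the physician note below, extract four items, one per line, in this order:\n"
    "age, gender, diagnosis, injured body part.\n\nNote:\n{narrative}"
)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for the actual LLM call")

def extract_features(narrative: str) -> dict:
    lines = [l.strip().lower() for l in call_llm(PROMPT.format(narrative=narrative)).splitlines()]
    keys = ["age", "gender", "diagnosis", "body_part"]
    return dict(zip(keys, lines + ["unclear"] * (len(keys) - len(lines))))

def substring_match(validation: str, extracted: str) -> bool:
    """Substring comparison: either value contained in the other counts as a match."""
    v, e = validation.strip().lower(), extracted.strip().lower()
    return v in e or e in v

# The near-miss case discussed in the abstract is exactly what this check cannot catch:
print(substring_match("Upper Trunk", "chest"))   # False
print(substring_match("knee", "left knee"))      # True
```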

Keywords: AI, LLM, ML, sports

Procedia PDF Downloads 12
466 Role of ASHA in Utilizing Maternal Health Care Services India, Evidences from National Rural Health Mission (NRHM)

Authors: Dolly Kumari, H. Lhungdim

Abstract:

Maternal health is one of the crucial health indicators for any country. The 5th Millennium Development Goal also emphasizes the improvement of maternal health. Soon after Independence, the government of India realized the importance of maternal and child health care services and took steps to strengthen them in the 1st and 2nd five-year plans. In the past decade, another health indicator, life expectancy at birth, has shown remarkable improvement. But maternal mortality is still high in India, and in some states it is much higher than the national average. The government of India committed substantial funds and initiated the National Rural Health Mission (NRHM) in 2005 to improve maternal health in the country by providing affordable and accessible health care services. The Accredited Social Health Activist (ASHA) is one of the key components of the NRHM. ASHAs are mainly women aged 25-45 years selected from the village itself, and they are accountable for the monitoring of maternal health care for the same village. ASHAs are trained to work as an interface between the community and the public health system. This study tries to assess the role of ASHAs in the utilization of maternal health care services and to examine the level of awareness about the benefits given under the JSY scheme and the utilization of those benefits by eligible women. For the study, concurrent evaluation data from the National Rural Health Mission (NRHM), initiated by the government of India in 2005, have been used. The study is based on 78,205 currently married women from 70 different districts of India. Descriptive statistics, chi-square tests and binary logistic regression have been used for the analysis. The probability of institutional delivery increases by 2.03 times (p<0.001), while if ASHA arranged or helped in arranging a transport facility, the probability of institutional delivery increases by 1.67 times (p<0.01) compared with when she did not arrange transport. Further, if ASHA facilitated the pregnant woman in getting a JSY card, the probability of going for full ANC increases by 1.36 times (p<0.05) relative to the reference. If ASHA discussed institutional delivery and the steps to get registered, the probability of getting a TT injection is 1.88 and 1.64 times higher (p<0.01), respectively, than if she did not. The probability of benefiting from the JSY scheme is 1.25 times (p<0.001) higher among women who married after 18 years of age than among those who married before 18; it is also 1.28 times (p<0.001) and 1.32 times (p<0.001) higher among women with 1 to 8 years of schooling and with 9 or more years of schooling, respectively, than among women who never attended school. Working women have a 1.13 times (p<0.001) higher probability of benefiting from the JSY scheme than non-working women. Surprisingly, women belonging to the wealthiest quintile are 0.53 times (p<0.001) less aware of the JSY scheme. The results conclude that the work done by ASHAs has a great influence on maternal health care utilization in India. But the results also show that a substantial proportion of the population in need is still far from utilizing these services. Place of delivery is significantly influenced by the referral and transport facilities arranged by ASHA.
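
The odds ratios quoted above come from binary logistic regression; a minimal sketch of that kind of model is shown below. The variable names and toy data are hypothetical and are not the NRHM concurrent-evaluation dataset.

```python
# Illustrative binary logistic regression: institutional delivery regressed on
# ASHA transport support and mother's schooling, with exponentiated coefficients
# reported as odds ratios. Toy data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "institutional_delivery":  [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "asha_arranged_transport": [1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
    "mother_schooling_years":  [8, 2, 10, 0, 6, 12, 3, 9, 0, 7, 11, 5],
})

model = smf.logit(
    "institutional_delivery ~ asha_arranged_transport + mother_schooling_years",
    data=df,
).fit(disp=False)

odds_ratios = np.exp(model.params)   # exponentiated coefficients = odds ratios
print(odds_ratios)
```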

Keywords: institutional delivery, JSY beneficiaries, referral facility, public health

Procedia PDF Downloads 331
465 Threats to the Business Value: The Case of Mechanical Engineering Companies in the Czech Republic

Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak

Abstract:

Successful achievement of strategic goals requires an effective performance management system, i.e. determining the appropriate indicators that measure the rate of goal achievement. Assuming that the goal of the owners is to grow the assets they have invested, it is vital to identify the key performance indicators which contribute to value creation. These indicators are known as value drivers. Based on the literature search undertaken, a value driver is defined as any factor that affects the value of an enterprise. The important factors are then monitored by both financial and non-financial indicators. Financial performance indicators are most useful in strategic management, since they indicate whether a company's strategy implementation and execution are contributing to bottom-line improvement. Non-financial indicators are mainly used for short-term decisions. The identification of value drivers, however, is problematic for companies which are not publicly traded. Therefore, financial ratios continue to be used to measure the performance of companies, despite considerable criticism. The main drawback of such indicators is the fact that they are calculated on the basis of accounting data, while accounting rules may differ considerably across different environments. For successful enterprise performance management, it is vital to avoid factors that may reduce (or even destroy) enterprise value. Among the known factors reducing enterprise value are a lack of capital, the lack of a strategic management system and poor production quality. In order to gain further insight into the topic, the paper presents the results of research identifying factors that adversely affect the performance of mechanical engineering enterprises in the Czech Republic. The research methodology addresses both the qualitative and the quantitative aspects of the topic. The qualitative data were obtained from a questionnaire survey of the enterprises' senior management, while the quantitative financial data were obtained from the Analysis Major Database for European Sources (AMADEUS). The questionnaire prompted managers to list factors which negatively affect the business performance of their enterprises. The range of potential factors was based on secondary research: an analysis of previously undertaken questionnaire surveys and of studies published in the scientific literature. The results of the survey were evaluated both in general, by average scores, and by detailed sub-analyses of additional criteria. These include company-specific characteristics, such as size and ownership structure. The evaluation also included a comparison of the managers' opinions with the performance of their enterprises, measured by the return on equity and return on assets ratios. The comparisons were tested by a series of non-parametric tests of statistical significance. The results of the analyses show that the factors most detrimental to enterprise performance include the incompetence of responsible employees and the disregard of customers' requirements.
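
A minimal sketch of the kind of non-parametric comparison described above (performance of enterprises whose managers did vs. did not report a given negative factor) is shown below; the ROE figures are invented, and the Mann-Whitney U test stands in for whichever specific tests the authors applied.

```python
# Illustrative non-parametric comparison of return on equity between two groups
# of enterprises, using a Mann-Whitney U test. All figures are hypothetical.
from scipy.stats import mannwhitneyu

roe_factor_reported     = [0.04, 0.02, -0.01, 0.05, 0.03, 0.01, 0.00]
roe_factor_not_reported = [0.09, 0.07, 0.12, 0.05, 0.08, 0.10]

statistic, p_value = mannwhitneyu(roe_factor_reported, roe_factor_not_reported,
                                  alternative="two-sided")
print(f"U = {statistic:.1f}, p = {p_value:.3f}")
```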

Keywords: business value, financial ratios, performance measurement, value drivers

Procedia PDF Downloads 224
464 Academia as Creator of Emerging, Innovative Communities of Practice and Learning

Authors: Francisco Julio Batle Lorente

Abstract:

The present paper aims at presenting a new category of role for academia: proactive creator and promoter of communities of practice in emerging areas of innovation. It is based on research among practitioners in three different areas: social entrepreneurship, alumni engaged in entrepreneurship and innovation, and digital nomads. The concept of a CoP refers to an intentionally created space to share experiences and collectively reflect on the cases arising from practice. Such an endeavour is not explicitly contemplated in the literature on academic roles. The goal of the paper is to provide a framework for this function and to throw some light on the perceptions and priorities of members of emerging communities (78 alumni, 154 social entrepreneurs, and 231 digital nomads) regarding community, learning, engagement, and networking, areas in which the university can help and, by doing so, contribute to signalling the emerging area and creating new opportunities for academia. The research methodology was based on survey research, a specific type of field study that involves the collection of data from a sample of elements drawn from a well-defined population through the use of a questionnaire. It was considered that survey research might be valuable to the present project and help outline the utility of various study designs and future projects with the emerging communities that are the object of the investigation. Open questions were used for different topics, as well as the critical incident technique. A standard technique was used for survey sampling and questionnaire design. Finally, a procedure was defined for pretesting the questionnaire and for data collection. The questionnaire was distributed by means of Google Forms. The results indicate that the members of emerging, innovative communities of practice and learning such as the ones selected for this investigation lack cohesion, inspiration, networking, opportunities for the creation of social capital, and opportunities for collaboration beyond their existing, close network. The opportunity that arises for academia from proactively helping to articulate CoPs (and communities of learning) is related to the key elements of any CoP/CoL: community construction approaches, technological infrastructure, benefits, participation issues and urgent challenges, trust, networking, technical ability/training/development and collaboration. Beyond training, three other areas (networking, collaboration and urgent challenges) were the ones in which the contribution of universities to the communities was considered most interesting and workable by practitioners. The analysis of the responses to the open questions related to the perception of universities reveals terra incognita for universities to explore (signalling new areas, establishing broader collaborations with research, government, media and corporations, and attracting investment). Based on the findings from this research, there is some evidence that CoPs can offer a formal and informal method of professional and interprofessional development for members of any emerging and innovative community and can decrease social and professional isolation. The opportunity they offer to academia can strengthen the entrepreneurial and engaged university identity. It also moves academia into a realm of more proactive civic confrontation of present and future challenges.

Keywords: social innovation, new roles of academia, community of learning, community of practice

Procedia PDF Downloads 84
463 Evaluation of Coupled CFD-FEA Simulation for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham

Abstract:

Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on its surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies against a replicated physical experimental standards test, LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated changes in thermal data, due to the fire's behavior, into the FEA solver throughout the simulation; likewise, the mechanical changes are passed back to the CFD solver so that geometric changes are included in the solution. For the CFD calculations, the solver Fire Dynamics Simulator (FDS) has been chosen because its numerical scheme is adapted to focus solely on fire problems, and its applicability has been validated in past benchmark cases. In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire due to its crushable foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
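
As an illustration of the one-way coupling described above, the sketch below reads a surface-temperature time history exported by the CFD stage and rewrites it as an ABAQUS-style *AMPLITUDE table for the structural model. The file name, column label, and resampling interval are assumptions for illustration; the actual FDS-2-ABAQUS modules handle this hand-off (and the failure checks) in more detail.

```python
# Minimal sketch of a one-way CFD-to-FEA thermal hand-off. Assumes the CFD
# solver has written a time/temperature history to a CSV file (name and column
# label are hypothetical) and that the structural model consumes the result as
# an ABAQUS-style *AMPLITUDE table.
import csv
import numpy as np

def read_thermal_history(path, column):
    """Read a time/temperature column pair from a device-output CSV."""
    times, temps = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            times.append(float(row["Time"]))
            temps.append(float(row[column]))
    return np.array(times), np.array(temps)

def write_abaqus_amplitude(name, times, temps, out_path):
    """Write the history as an *AMPLITUDE block for the structural model."""
    with open(out_path, "w") as f:
        f.write(f"*AMPLITUDE, NAME={name}\n")
        for t, temp in zip(times, temps):
            f.write(f"{t:.3f}, {temp:.2f}\n")

if __name__ == "__main__":
    t, temp = read_thermal_history("panel_test_devc.csv", "SURFACE_TC_1")
    # Resample onto the FEA step times (1 s increments, purely illustrative).
    t_fea = np.arange(0.0, t[-1], 1.0)
    temp_fea = np.interp(t_fea, t, temp)
    write_abaqus_amplitude("FIRE_SURFACE_TEMP", t_fea, temp_fea, "fire_amplitude.inp")
```

A two-way scheme would wrap this hand-off in a loop, passing deformed geometry back to the CFD side at each exchange interval rather than running the fire simulation to completion first.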

Keywords: fire engineering, numerical coupling, sandwich panels, thermofluids

Procedia PDF Downloads 90
462 Intelligent Crop Circle: A Blockchain-Driven, IoT-Based, AI-Powered Sustainable Agriculture System

Authors: Mishak Rahul, Naveen Kumar, Bharath Kumar

Abstract:

Conceived as a high-end engine to revolutionise sustainable agri-food production, the intelligent crop circle (ICC) aims to incorporate the Internet of Things (IoT), blockchain technology and artificial intelligence (AI) to bolster resource efficiency, prevent waste, increase the volume of production and bring about sustainable solutions, with long-term ecosystem conservation as the guiding principle. The operating principle of the ICC relies on bringing together multidisciplinary, bottom-up collaborations between producers, researchers and consumers. Key elements of the framework include IoT-based smart sensors for sensing soil moisture, temperature, humidity, nutrients and air quality, which provide timely data at short intervals; blockchain technology for data storage on a private chain, which maintains data integrity, traceability and transparency; and AI-based predictive analysis, which predicts resource utilisation, plant growth and environmental conditions. These data and AI insights feed the ICC platform, whose decision support system (DSS) assists decision making and is delivered through an easy-to-use mobile app or web-based interface. Farmers are expected to use this decision-making aid, whose logic is informed by the shared data pool. Building on existing data available in farm management systems, the ICC platform is easily interoperable with other IoT devices. ICC facilitates connections and real-time information sharing between users, including farmers, researchers and industrial partners, enabling them to cooperate in farming innovation and knowledge exchange. Moreover, ICC supports sustainable practice in agriculture by integrating gamification techniques to motivate adopters, deploying VR technologies to model and visualise 3D farm environments and conditions, framing field scenarios using VR headsets and real-time 3D engines, and leveraging edge technologies to facilitate secure and fast communication and collaboration between the users involved. Through blockchain-based marketplaces, ICC offers traceability from farm to fork, that is, from producer to consumer. It empowers informed decision-making through tailor-made recommendations generated by AI-driven analysis and through technology democratisation, enabling small-scale and resource-limited farmers to have their voice heard. It connects with traditional knowledge, brings together multi-stakeholder interactions and establishes a participatory ecosystem to incentivise continuous growth and development towards more sustainable agro-ecological food systems. This integrated approach leverages the power of emerging technologies to provide sustainable solutions for a resilient food system, supporting sustainable agriculture worldwide.
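
To make the sensor-to-ledger path concrete, the sketch below appends individual sensor readings to a simple hash-chained log so that tampering with stored readings becomes detectable, which is the core property the private blockchain layer provides. The record fields and the chain structure are assumptions for illustration only, not the ICC implementation.

```python
# Illustrative hash-chained ledger of IoT sensor readings (toy structure).
import hashlib
import json
import time

def make_block(reading, prev_hash):
    """Wrap one sensor reading in a block that commits to the previous block."""
    body = {"timestamp": time.time(), "reading": reading, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"body": body, "hash": digest}

def verify_chain(chain):
    """Check each block's hash against its body and its link to the predecessor."""
    prev_hash = "0" * 64
    for block in chain:
        recomputed = hashlib.sha256(
            json.dumps(block["body"], sort_keys=True).encode()
        ).hexdigest()
        if recomputed != block["hash"] or block["body"]["prev_hash"] != prev_hash:
            return False
        prev_hash = block["hash"]
    return True

if __name__ == "__main__":
    chain, prev = [], "0" * 64
    for reading in ({"soil_moisture": 0.31, "temp_c": 24.8},
                    {"soil_moisture": 0.29, "temp_c": 25.1}):
        block = make_block(reading, prev)
        chain.append(block)
        prev = block["hash"]
    print("chain valid:", verify_chain(chain))
```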

Keywords: blockchain, internet of things, artificial intelligence, decision support system, virtual reality, gamification, traceability, sustainable agriculture

Procedia PDF Downloads 45
461 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception

Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu

Abstract:

Opinion mining (OM) is one of the natural language processing (NLP) problems concerned with determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM are usually collected from various social media platforms. In an era where social media has considerable influence over companies’ futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become largely obsolete. The transfer learning paradigm, commonly used in computer vision (CV) problems, has lately started to shape NLP approaches and language models (LM). This gave a sudden rise to the usage of the pretrained language model (PTM), which contains language representations obtained by training on large datasets with self-supervised learning objectives. PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, named-entity recognition (NER), question answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model that supports 16 languages, including Turkish, and it is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer neural networks; it uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformers. During the training phase, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since the experiments showed that their contribution to model performance was insignificant even though Turkish is a highly agglutinative and inflective language. The results show that deep learning methods with pre-trained models and fine-tuning achieve about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and SVM around 83%. The MUSE multilingual model shows better results than SVM, but it still performs worse than the monolingual BERT model.
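
A minimal sketch of the SVM baseline described above, assuming a bag of n-grams (here word uni- and bi-gram counts) fed to a linear SVM; the two toy Turkish comments and labels are placeholders, since the 76,000-comment corpus is proprietary.

```python
# Bag-of-n-grams + linear SVM baseline for opinion mining (toy data only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

comments = [
    "ürün çok iyi, tavsiye ederim",      # "the product is very good, I recommend it"
    "kargo geç geldi, memnun değilim",   # "the shipment arrived late, I am not satisfied"
]
labels = ["positive", "negative"]

# Word uni- and bi-grams; character n-grams are another common choice for an
# agglutinative language such as Turkish.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(comments, labels)
print(model.predict(["teslimat hızlıydı, çok memnunum"]))  # predicted label for a new comment
```

Fine-tuning a PTM such as BERT replaces the count vectorizer and SVM with a tokenizer and a transformer whose final classification layer is trained on the same labeled comments.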

Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish

Procedia PDF Downloads 148
460 An Exploratory Study of Changing Organisational Practices of Third-Sector Organisations in Mandated Corporate Social Responsibility in India

Authors: Avadh Bihari

Abstract:

Corporate social responsibility (CSR) has become a global parameter to define corporates' ethical and responsible behaviour. In India it was a voluntary practice until 2013, driven by various guidelines, and has been a mandate since 2014 under the Companies Act, 2013. This has compelled corporates to redesign their CSR strategies by bringing structures, planning, accountability, and transparency into their processes, with a mandate to 'comply or explain'. Based on the author's M.Phil. dissertation, this paper presents the changes in organisational practices and institutional mechanisms of third-sector organisations (TSOs) through the theoretical frameworks of institutionalism and co-optation. It became an interesting case as India is the only country to have a law on CSR that mandates not only the reporting but also the spending. The space of CSR in India is changing rapidly and affecting multiple institutions, in the context of the changing roles of the state, market, and TSOs. Several factors, such as stringent regulation of foreign funding, mandatory CSR pushing corporates to look for NGOs, and the dependency of Indian NGOs on CSR funds, have come to the fore almost simultaneously, which made this an important area of study. Further, the paper aims to address the gap in the literature on the effects of mandated CSR on the functioning of TSOs through the empirical and theoretical findings of this study. The author adopted an interpretivist position in this study to explore changes in organisational practices from the participants' experiences. Data were collected through in-depth interviews with five corporate officials, eleven officials from six TSOs, and two academicians, located in Mumbai and Delhi, India. The findings of this study show that the legislation has institutionalised CSR and that TSOs get co-opted in the process of implementing mandated CSR. Seventy percent of corporates in India implement their CSR projects through TSOs; this has affected the organisational practices of TSOs to a large extent. They are compelled to recruit an expert workforce, create new departments for monitoring and evaluation and for communications, and adopt management practices of project implementation from corporates. These are attempts to institutionalise the TSOs so that they can produce calculated results as demanded by corporates. In this process, TSOs get co-opted in a struggle to secure funds and lose their autonomy. The normative, coercive, and mimetic isomorphisms of institutionalism come into play as corporates are mandated to take up CSR, thereby influencing the organisational practices of TSOs. These results suggest that corporates and TSOs require an understanding of each other's work culture to develop mutual respect and work towards the goal of sustainable development of the communities. Further, TSOs need to retain their autonomy and their understanding of ground realities, without which they become an extension of the corporate funder. For a successful CSR project, engagement beyond funding is required from corporates, through involvement rather than interference. CSR-led community development can be structured by management practices to an extent, but these cannot overshadow the knowledge and experience of TSOs.

Keywords: corporate social responsibility, institutionalism, organisational practices, third-sector organisations

Procedia PDF Downloads 116
459 The Role of Building Information Modeling as a Design Teaching Method in Architecture, Engineering and Construction Schools in Brazil

Authors: Aline V. Arroteia, Gustavo G. Do Amaral, Simone Z. Kikuti, Norberto C. S. Moura, Silvio B. Melhado

Abstract:

Despite the significant advances made by the construction industry in recent years, the persistent lack of integration between the design and construction phases is still an evident and costly problem in building construction. Globally, the construction industry has sought to adopt collaborative practices through new technologies to mitigate the impacts of this fragmented process and to optimize its production. In this new technological business environment, professionals are required to develop new methodologies based on the notion of collaboration and the integration of information throughout the building lifecycle. This scenario also represents the industry's reality in developing nations, and the increasing need for overall efficiency has demanded new educational alternatives at the undergraduate and post-graduate levels. In countries like Brazil, it is commonly understood that Architecture, Engineering and Building Construction educational programs are being required to review their traditional design pedagogical processes to promote a comprehensive notion of integration and simultaneity between the phases of the project. In this context, the coherent inclusion of computational design in all segments of the educational programs of construction-related professionals represents a significant research topic that can, in fact, affect industry practice. Thus, the main objective of the present study was to comparatively measure the effectiveness of the Building Information Modeling courses offered by the University of Sao Paulo, the most important academic institution in Brazil, at the Schools of Architecture and Civil Engineering, and the courses offered in well-recognized BIM research institutions, such as the School of Design in the College of Architecture of the Georgia Institute of Technology, USA, in order to evaluate the dissemination of BIM knowledge amongst students at the postgraduate level. The qualitative research methodology was developed based on the analysis of the programs and activities proposed by two BIM courses offered in each of the above-mentioned institutions, which were used as case studies. The data collection instruments were a student questionnaire, semi-structured interviews, participatory evaluation and pedagogical practices. The results detected a broad heterogeneity among the students regarding their professional experience, hours dedicated to training, and especially their general knowledge of BIM technology and its applications. The research observed that BIM is mostly understood as an operational tool and not as a methodological project development approach relevant to the whole building life cycle. In its conclusion, the present research offers an assessment of the importance of incorporating BIM, efficiently and in its totality, as a teaching method in undergraduate and graduate courses in Brazilian architecture, engineering and building construction schools.

Keywords: building information modeling (BIM), BIM education, BIM process, design teaching

Procedia PDF Downloads 155
458 Evaluating Impact of Teacher Professional Development Program on Students’ Learning

Authors: S. C. Lin, W. W. Cheng, M. S. Wu

Abstract:

This study investigated the connection between a teacher professional development program and students' learning, taking the Readers' Theater Teaching Program (RTTP) as an example, to inquire how participants applied the new knowledge and skills learned from the RTTP to their teaching practice and how this influenced students' learning. The goals of the RTTP included: 1) to enhance teachers' RT content knowledge; 2) to implement RT instruction in teachers' classrooms in response to their professional development; and 3) to improve students' reading fluency in the professional development teachers' classrooms. This study was a two-year project. The researchers applied mixed methods, including qualitative inquiry and a one-group pretest-posttest experimental design. In the first year, the study focused on designing and implementing the RTTP and evaluating participants' satisfaction with the RTTP, what they learned, and how they applied it to design their English reading curriculum. In the second year, the study adopted a quasi-experimental design and evaluated how participants' RT instruction influenced their students' learning, including English knowledge, skills, and attitudes. The participants in this study comprised two junior high school English teachers and their students. Data were collected from a number of different sources, including teaching observation, semi-structured interviews, teaching diaries, teachers' professional development portfolios, pre/post RT content knowledge tests, a teacher survey, and students' reading fluency tests. Both qualitative and quantitative data analyses were used. Qualitative data analysis included three stages: organizing data, coding data, and analyzing and interpreting data. Quantitative data analysis included descriptive analysis. The results indicated that the average percentage correct on the pre-test of RT content knowledge was 40.75%, with the two teachers' prior knowledge ranging from 35% to 46% in specific RT content. Post-test RT content scores ranged from 70% to 82% correct, with an average score of 76.50%. That gives teachers an average gain of 35.75% in overall content knowledge as measured by these pre/post exams. Teachers' pre-test scores were lowest in script writing and highest in performing; script writing was also the content area that showed the highest gains in content knowledge. Moreover, participants held a positive attitude toward the RTTP. They reported that the professional learning community approach applied in the RTTP benefited their professional development. Participants also applied the new skills and knowledge they learned from the RTTP to their practice. The evidence from this study indicated that RT English instruction significantly influenced students' reading fluency and classroom climate: all of the experimental group students made substantial progress in reading fluency after RT instruction. The study also identified several obstacles, and suggestions were made accordingly.

Keywords: teacher’s professional development, program evaluation, readers’ theater, English reading instruction, English reading fluency

Procedia PDF Downloads 399
457 Application of Typha domingensis Pers. in Artificial Floating for Sewage Treatment

Authors: Tatiane Benvenuti, Fernando Hamerski, Alexandre Giacobbo, Andrea M. Bernardes, Marco A. S. Rodrigues

Abstract:

Population growth in urban areas has caused damage to the environment as a consequence of the uncontrolled dumping of domestic and industrial wastewater. The capacity of some plants to purify domestic and agricultural wastewater has been demonstrated by several studies. Since natural wetlands have the ability to transform, retain and remove nutrients, constructed wetlands have been used for wastewater treatment, and they are widely recognized as an economical, efficient and environmentally acceptable means of treating many different types of wastewater. T. domingensis Pers. has shown good performance and a low deployment cost for extracting, detoxifying and sequestering pollutants. Constructed floating wetlands (CFWs) consist of emergent vegetation established upon a buoyant structure, floating on surface waters. The upper parts of the vegetation grow and remain primarily above the water level, while the roots extend down into the water column, developing an extensive underwater root system. Thus, the vegetation grows hydroponically, performing direct nutrient uptake from the water column. Biofilm attaches to the roots and rhizomes, and as physical and biochemical processes take place, the system functions as a natural filter. The aim of this study is to assess the application of macrophytes in artificial floating systems for the treatment of domestic sewage in southern Brazil. The T. domingensis Pers. plants were placed in a full-scale flotation system (a polymer structure) in a sewage treatment plant. The sewage feed rate was 67.4 ± 8.0 m³.d⁻¹, and the hydraulic retention time was 11.5 ± 1.3 d. This CFW treats the sewage generated by 600 inhabitants, which corresponds to 12% of the population served by this municipal treatment plant. During 12 months, samples were collected every two weeks in order to evaluate parameters such as chemical oxygen demand (COD), biochemical oxygen demand in 5 days (BOD5), total Kjeldahl nitrogen (TKN), total phosphorus, total solids, and metals. The average removal of organic matter was around 55% for both COD and BOD5. For nutrients, TKN was reduced by 45.9%, which was similar to the total phosphorus removal, while total solids were reduced by 33%. Among the metals, aluminum, copper, and cadmium, although present at low concentrations, showed the highest percentage reductions, 82.7%, 74.4% and 68.8%, respectively; chromium, iron, and manganese removal reached values around 40-55%. The use of T. domingensis Pers. in artificial floating systems for sewage treatment is an effective and innovative alternative for Brazilian sewage treatment systems. The evaluation of additional parameters in the treatment system may give useful information to improve removal efficiency and increase the quality of the receiving water bodies.
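
As a brief illustration of how the removal percentages above are obtained, the sketch below applies the usual removal-efficiency relation, removal (%) = (Cin - Cout) / Cin x 100, to a pair of made-up influent/effluent concentrations; the values are placeholders, not measurements from the study.

```python
# Removal efficiency from influent/effluent concentrations (toy values only).
def removal_efficiency(c_in, c_out):
    """Percentage of a constituent removed between inlet and outlet."""
    return (c_in - c_out) / c_in * 100.0

# Hypothetical influent/effluent concentrations in mg/L for two parameters.
samples = {
    "COD": (420.0, 190.0),
    "TKN": (48.0, 26.0),
}
for parameter, (c_in, c_out) in samples.items():
    print(f"{parameter}: {removal_efficiency(c_in, c_out):.1f}% removal")
```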

Keywords: constructed wetland, floating system, sewage treatment, Typha domingensis Pers.

Procedia PDF Downloads 212
456 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker

Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes; specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
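
A minimal sketch of three of the encodings compared above: one-hot, k-mer counts, and a Fourier-transform representation built from the one-hot (Voss-style) indicator sequences. The particular base-to-number mapping and the fixed number of FFT coefficients kept here are illustrative assumptions; the study's exact pipeline may differ.

```python
# Three simple DNA-sequence encodings for downstream ML classifiers.
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Return a (len(seq), 4) binary matrix, one column per base."""
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), 4), dtype=np.int8)
    for pos, base in enumerate(seq.upper()):
        if base in idx:                 # ambiguous bases (N, etc.) stay all-zero
            mat[pos, idx[base]] = 1
    return mat

def kmer_counts(seq, k=3):
    """Count occurrences of each k-mer in the sequence."""
    counts = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k].upper()
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts

def fourier_features(seq, n_coeffs=8):
    """Magnitudes of the first FFT coefficients of each one-hot channel."""
    mat = one_hot(seq).astype(float)
    spectrum = np.abs(np.fft.rfft(mat, axis=0))   # per-base indicator spectra
    return spectrum[:n_coeffs].flatten()          # fixed-length feature vector

if __name__ == "__main__":
    demo = "ACGTACGTGGCCATTA"
    print(one_hot(demo).shape)
    print(kmer_counts(demo, k=3))
    print(fourier_features(demo, n_coeffs=4))
```

Sequences of unequal length need either the normalization step mentioned in the abstract or a fixed-length feature vector such as the k-mer counts or truncated spectrum above before they can be fed to SVMs or random forests.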

Keywords: DNA encoding, machine learning, Fourier transform

Procedia PDF Downloads 28
455 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen

Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin

Abstract:

Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to manufacturer package inserts and pre-determined testing algorithms. False initial reactive rate was determined on fresh hospitalized patient samples. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donors and 99.71% (95%CI, 99.15 – 99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay. None were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19 – 100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03 – 0.87%). Results obtained on 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had equivalent sensitivity performances compared to the Abbott ARCHITECT HBsAg Qualitative II assay with an average bleed difference since first reactive bleed of 0.13. All bleeds found reactive in ACCESS HBsAg assay were confirmed in ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter have demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.
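
For readers who want to reproduce the specificity figures quoted above, the sketch below computes a proportion with its exact (Clopper-Pearson) 95% confidence interval. The assumed split of the six false positives (three in the blood-donor cohort) is inferred from the reported 99.95% on 6047 samples and should be treated as illustrative, not as data from the study report.

```python
# Exact (Clopper-Pearson) confidence interval for a clinical specificity estimate.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial CI for k successes out of n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

n_donors, false_pos = 6047, 3          # assumed split of the six false positives
true_neg = n_donors - false_pos
spec = true_neg / n_donors
lo, hi = clopper_pearson(true_neg, n_donors)
print(f"specificity {spec:.2%} (95% CI {lo:.2%} - {hi:.2%})")
```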

Keywords: DxI 9000 ACCESS Immunoassay Analyzer, HBsAg, HBV, hepatitis B surface antigen, hepatitis B virus, immunoassay

Procedia PDF Downloads 90
454 Voyage Analysis of a Marine Gas Turbine Engine Installed to Power and Propel an Ocean-Going Cruise Ship

Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris

Abstract:

A gas turbine-powered cruise liner is scheduled to transport pilgrim passengers from Lagos, Nigeria, to the Islamic port city of Jeddah in Saudi Arabia. Since the gas turbine is an air-breathing machine, changes in the density and/or mass flow at the compressor inlet due to variations in weather conditions induce negative effects on the performance of the power plant during the voyage. In practice, all deviations from the reference atmospheric conditions of 15 °C and 1.013 bar tend to affect the power output and other thermodynamic parameters of the gas turbine cycle. Therefore, this paper seeks to evaluate how a simple-cycle marine gas turbine power plant would react under a variety of scenarios that may be encountered during a voyage as the ship sails across the Atlantic Ocean and the Mediterranean Sea before arriving at its designated port of discharge. The assessment also focuses on the effect of varying aerodynamic and hydrodynamic conditions, which degrade the efficient operation of the propulsion system through the increase in resistance that results from projected levels of ship hull fouling. The investigated passenger ship is designed to run at a service speed of 22 knots and cover a distance of 5787 nautical miles. The performance evaluation consists of three separate voyages covering a variety of weather conditions in the winter, spring and summer seasons. Real-time daily temperatures and sea states for the selected transit route were obtained and used to simulate the voyage under the aforementioned operating conditions. Changes in engine firing temperature and power output, as well as the total fuel consumed per voyage and other performance variables, were separately predicted under both calm and adverse weather conditions. The collated data were obtained online from the UK Meteorological Office and UK Hydrographic Office websites, while the Beaufort scale was adopted for determining the magnitude of sea waves resulting from rough weather. The simulation of the gas turbine performance and the voyage analysis were carried out using the integrated, Cranfield-University-developed computer codes 'Turbomatch' and 'Poseidon'. The project is aimed at developing a method for predicting the off-design behavior of a marine gas turbine when installed and operated as the main prime mover for both propulsion and the powering of all other auxiliary services onboard a passenger cruise liner. Furthermore, it is a techno-economic and environmental assessment that seeks to enable the forecast of the marine gas turbine part- and full-load performance as it relates to the fuel requirement for a complete voyage.
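
As a small illustration of why ambient conditions matter here, the sketch below applies the standard theta/delta corrections used to refer gas turbine performance to ISO ambient conditions (15 °C, 1.013 bar); the sample power, temperatures and pressures are illustrative values, not data from the Turbomatch/Poseidon runs.

```python
# Referring measured gas turbine performance to ISO ambient conditions.
T_REF_K = 288.15      # 15 degC
P_REF_BAR = 1.013

def corrected_shaft_power(power_mw, t_amb_c, p_amb_bar):
    """Shaft power referred to ISO conditions: P / (delta * sqrt(theta))."""
    theta = (t_amb_c + 273.15) / T_REF_K
    delta = p_amb_bar / P_REF_BAR
    return power_mw / (delta * theta ** 0.5)

def corrected_mass_flow(mass_flow_kg_s, t_amb_c, p_amb_bar):
    """Compressor inlet mass flow referred to ISO conditions: W * sqrt(theta) / delta."""
    theta = (t_amb_c + 273.15) / T_REF_K
    delta = p_amb_bar / P_REF_BAR
    return mass_flow_kg_s * theta ** 0.5 / delta

if __name__ == "__main__":
    # A hot Red Sea afternoon versus a cool Atlantic morning, purely illustrative.
    for t_c, p_bar in ((38.0, 1.005), (12.0, 1.020)):
        p_corr = corrected_shaft_power(25.0, t_c, p_bar)
        print(f"ambient {t_c:.0f} degC, {p_bar:.3f} bar -> {p_corr:.2f} MW corrected")
```

The same measured 25 MW maps to different corrected values in the two ambients, which is why hot-day legs of the route consume more fuel for the same propulsive demand.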

Keywords: cruise ship, gas turbine, hull fouling, performance, propulsion, weather

Procedia PDF Downloads 165
453 Survey of Prevalence of Noise Induced Hearing Loss in Hawkers and Shopkeepers in Noisy Areas of Mumbai City

Authors: Hitesh Kshayap, Shantanu Arya, Ajay Basod, Sachin Sakhuja

Abstract:

This study was undertaken to measure the overall noise levels in different locations/zones and to estimate the prevalence of noise-induced hearing loss in hawkers and shopkeepers in Mumbai, India. The Hearing Test developed by the American Academy of Otolaryngology, translated from English to Hindi and validated, was employed as a screening tool for hearing sensitivity. The tool has 14 items, each scored on a scale of 0, 1, 2 and 3. A score of 6 or above indicated some or definite difficulty in hearing in daily activities, and a lower score indicated lesser difficulty or normal hearing. Subjects who scored 6 or above or who had tinnitus underwent hearing evaluation with a pure tone audiometer. Further, environmental noise levels were measured from morning to evening at the roadside in different locations/hawking zones in Mumbai city using a digital sound level meter (Agronic 8928B & K type), in dB(A). The maximum noise level of 100.0 dB(A) was recorded during evening hours from Chattrapati Shivaji Terminal to Colaba, with an overall noise level of 79.0 dB(A); the minimum noise level in this area was 72.6 dB(A) at any given point of time. Further, 54.6 dB(A) was recorded as the minimum noise level during 8-9 am at Sion Circle. The commissioning of flyovers with two-tier traffic, skywalks, increasing vehicular traffic on the roads, high-rise buildings and other commercial and urbanization activities in Mumbai city have most probably increased the overall environmental noise levels, while trees that acted as noise absorbers have been cut owing to rapid construction. The study involved 100 participants in the age range of 18 to 40 years, with a mean age of 29 years (S.D. = 6.49). The 46 participants who had tinnitus or obtained a score of 6 underwent pure tone audiometry, and it was found that the prevalence rate of hearing loss in hawkers and shopkeepers is 19% (10% hawkers and 9% shopkeepers). The results indicate that 29 (42.6%) of the 64 hawkers and 17 (47.2%) of the 36 shopkeepers underwent PTA, with no significant difference between the groups in the percentage of noise-induced hearing loss. The study results also reveal that 19 of the 46 participants (41.30%) who exhibited tinnitus had mild to moderate sensorineural hearing loss between 3000 Hz and 6000 Hz. The pure tone audiogram pattern revealed hearing loss at 4000 Hz and 6000 Hz, while hearing at adjacent frequencies was nearly normal; 7 hawkers and 8 shopkeepers had a mild notch, while 3 hawkers and 1 shopkeeper had a moderate notch. It is thus inferred that tinnitus is a strong indicator of the presence of hearing loss and that a 4/6 kHz notch is a strong marker for road/traffic/environmental noise as an occupational hazard for hawkers and shopkeepers. Mass awareness of these occupational hazards, regular hearing check-ups and early intervention, along with sustainable development juxtaposed with social and urban forestry, can help in this regard.

Keywords: NIHL, noise, sound level meter, tinnitus

Procedia PDF Downloads 204
452 Understanding Beginning Writers' Narrative Writing with a Multidimensional Assessment Approach

Authors: Huijing Wen, Daibao Guo

Abstract:

Writing is thought to be the most complex facet of the language arts. Assessing writing is difficult and subjective, and few scientifically validated assessments exist. Research has proposed evaluating writing using a multidimensional approach, including both qualitative and quantitative measures of handwriting, spelling and prose. Given that narrative writing has historically been a staple of literacy instruction in the primary grades and is one of the three major genres the Common Core State Standards require students to acquire starting in kindergarten, it is essential for teachers to understand how to measure beginning writers' writing development and the sources of their writing difficulties through narrative writing. Guided by theoretical models of early written expression and using empirical data, this study examines ways teachers can enact a comprehensive approach to understanding beginning writers' narrative writing through three writing rubrics developed for a curriculum-based measurement (CBM). The goal is to help classroom teachers structure a framework for assessing early writing in primary classrooms. Participants in this study included 380 first-grade students from 50 classrooms in 13 schools in three school districts in a Mid-Atlantic state. Three writing tests were used to assess first graders' writing skills in relation to both transcription (i.e., handwriting fluency and spelling tests) and translational skills (i.e., a narrative prompt). First graders were asked to respond to a narrative prompt in 20 minutes. Grounded in theoretical models of early written expression and empirical evidence of key contributors to early writing, all written samples produced in response to the narrative prompt were coded in three ways for different dimensions of writing: length, quality, and genre elements. To measure the quality of the narrative writing, a traditional holistic rating rubric was developed by the researchers based on the CCSS and the general traits of good writing. Students' genre knowledge was measured using a separate analytic rubric for narrative writing. Findings showed that first graders had emerging and limited transcriptional and translational skills, with nascent knowledge of genre conventions. The findings provided support for the Not-So-Simple View of Writing, in that fluent written expression, measured by length, and other important linguistic resources, measured by the overall quality and genre knowledge rubrics, are fundamental in early writing development. Our study echoed previous research findings on children's narrative development. The study has practical classroom applications, as it informs writing instruction and assessment. It offers practical guidelines for classroom instruction by providing teachers with a better understanding of first graders' narrative writing skills and knowledge of genre conventions. Understanding students' narrative writing gives teachers more insight into the specific strategies students might use during writing and their understanding of good narrative writing. Additionally, it is important for teachers to differentiate writing instruction given the individual differences shown by our multiple writing measures. Overall, the study sheds light on beginning writers' narrative writing, indicating the complexity of early writing development.

Keywords: writing assessment, early writing, beginning writers, transcriptional skills, translational skills, primary grades, simple view of writing, writing rubrics, curriculum-based measurement

Procedia PDF Downloads 77
451 Development of a One Health and Comparative Medicine Curriculum for Medical Students

Authors: Aliya Moreira, Blake Duffy, Sam Kosinski, Kate Heckman, Erika Steensma

Abstract:

Introduction: The One Health initiative promotes recognition of the interrelatedness between people, animals, plants, and their shared environment. The field of comparative medicine studies the similarities and differences between humans and animals for the purpose of advancing the medical sciences. Currently, medical school education is narrowly focused on human anatomy and physiology, but as the COVID-19 pandemic has demonstrated, a holistic understanding of health requires comprehension of the interconnection between health and the lived environment. To prepare future physicians for unique challenges, from emerging zoonoses to climate change, medical students can benefit from exposure to and experience with One Health and Comparative Medicine content. Methods: In January 2020, an elective course on One Health and Comparative Medicine was created to provide medical students with the background knowledge necessary to understand the applicability of animal and environmental health to medical research and practice. The 2-week course was continued in January 2021, with didactic and experiential activities taking place virtually due to the COVID-19 pandemic. In response to student feedback, lectures were added to expand instructional content on zoonotic and wildlife diseases for the second iteration of the course. Other didactic sessions included interprofessional lectures from 20 physicians, veterinarians, public health professionals, and basic science researchers. The first two cohorts of students were surveyed on One Health and Comparative Medicine concepts at the beginning and conclusion of the course. Results: 16 medical students have completed the comparative medicine course thus far, with 87.5% (n=14) completing pre- and post-course evaluations. 100% of student respondents indicated little to no exposure to comparative medicine or One Health concepts during medical school. Following the course, 100% of students felt familiar or very familiar with comparative medicine and One Health concepts. To assess course efficacy, questions were evaluated on a five-point Likert scale. 100% agreed or strongly agreed that learning Comparative Medicine and One Health topics augmented their medical education, and 100% agreed or strongly agreed that a course covering this content should be regularly offered to medical students. Conclusions: Data from the student evaluation surveys demonstrate that the Comparative Medicine course was successful in increasing medical student knowledge of Comparative Medicine and One Health. The results also suggest that interprofessional training in One Health and Comparative Medicine is applicable and useful for medical trainees. Future iterations of this course could capitalize on the inherently interdisciplinary nature of these topics by enrolling students from veterinary and public health schools in a longitudinal course. Such recruitment may increase the course's value by offering multidisciplinary student teams the opportunity to conduct research projects, thereby strengthening the individual learning experience and sparking future interprofessional research ventures. Overall, these efforts to educate medical students in One Health topics should be reproducible at other institutions, preparing more future physicians for the diverse challenges they will encounter in practice.

Keywords: medical education, interprofessional instruction, one health, comparative medicine

Procedia PDF Downloads 111