Search results for: quality service risk
80 Geotechnical Challenges for the Use of Sand-sludge Mixtures in Covers for the Rehabilitation of Acid-Generating Mine Sites
Authors: Mamert Mbonimpa, Ousseynou Kanteye, Élysée Tshibangu Ngabu, Rachid Amrou, Abdelkabir Maqsoud, Tikou Belem
Abstract:
The management of mine wastes (waste rocks and tailings) containing sulphide minerals such as pyrite and pyrrhotite represents the main environmental challenge for the mining industry. Indeed, acid mine drainage (AMD) can be generated when these wastes are exposed to water and air. AMD is characterized by low pH and high concentrations of heavy metals, which are toxic to plants, animals, and humans. It affects the quality of the ecosystem through water and soil pollution. Different techniques involving soil materials can be used to control AMD generation, including impermeable covers (compacted clays) and oxygen barriers. The latter group includes covers with capillary barrier effects (CCBE), multilayered covers in which a moisture retention layer plays the role of an oxygen barrier. Once AMD is produced at a mine site, it must be treated so that the final effluent complies with regulations and can be discharged into the environment. Active neutralization with lime is one of the treatment methods used. This treatment produces sludge that is usually stored in sedimentation ponds. Other sludge management alternatives have been examined in recent years, including sludge co-disposal with tailings or waste rocks, disposal in underground mine excavations, and storage in technical landfill sites. Considering the ability of AMD neutralization sludge to maintain an alkaline to neutral pH for decades or even centuries, due to the excess alkalinity induced by residual lime within the sludge, valorization of sludge in specific applications could be an interesting management option. If done efficiently, the reuse of sludge could free up storage ponds and thus reduce the environmental impact. It should be noted that mixtures of sludge and soils could potentially constitute usable materials in CCBE for the rehabilitation of acid-generating mine sites, while sludge alone is not suitable for this purpose. The high sludge water content (up to 300%), even after sedimentation, can, however, pose a geotechnical challenge. Adding lime to the mixtures can reduce the water content and improve the geotechnical properties. The objective of this paper is to investigate the impact of the sludge content (30, 40 and 50%) in sand-sludge mixtures (SSM) on their hydrogeotechnical properties (compaction, shrinkage behaviour, saturated hydraulic conductivity, and water retention curve). The impact of lime addition (dosages from 2% to 6%) on the moisture content, dry density after compaction, and saturated hydraulic conductivity of SSM was also investigated. Results showed that adding sludge to sand significantly improves the saturated hydraulic conductivity and water retention capacity, but shrinkage increases with sludge content. The dry density after compaction of lime-treated SSM increases with the lime dosage but remains lower than the optimal dry density of the untreated mixtures. The saturated hydraulic conductivity of lime-treated SSM after 24 hours of curing decreases by three orders of magnitude. Considering the hydrogeotechnical properties obtained with these mixtures, it would be possible to design CCBE whose moisture retention layer is made of SSM. Physical laboratory models confirmed the performance of such CCBE.
Keywords: mine waste, AMD neutralization sludge, sand-sludge mixture, hydrogeotechnical properties, mine site reclamation, CCBE
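As a point of reference for the water retention curves measured on the mixtures, such curves are commonly parameterized with the van Genuchten model. The sketch below fits that model to hypothetical suction/water-content data; the model choice, parameter bounds and all values are illustrative assumptions, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of suction psi (kPa)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

# Hypothetical suction / water-content pairs for one sand-sludge mixture
psi = np.array([1, 5, 10, 30, 60, 100, 300, 1000], dtype=float)      # kPa
theta = np.array([0.46, 0.45, 0.43, 0.38, 0.33, 0.28, 0.20, 0.12])   # m3/m3

popt, _ = curve_fit(van_genuchten, psi, theta,
                    p0=[0.05, 0.46, 0.05, 1.5],
                    bounds=([0, 0.3, 1e-4, 1.05], [0.2, 0.6, 1.0, 5.0]))
theta_r, theta_s, alpha, n = popt
print(f"theta_r={theta_r:.3f}, theta_s={theta_s:.3f}, alpha={alpha:.4f} 1/kPa, n={n:.2f}")

# The air-entry value, relevant for the moisture retention layer of a CCBE,
# is often approximated as ~1/alpha for van Genuchten fits.
print(f"approximate air-entry value: {1.0 / alpha:.1f} kPa")
```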
Procedia PDF Downloads 51
79 Improving the Utility of Social Media in Pharmacovigilance: A Mixed Methods Study
Authors: Amber Dhoot, Tarush Gupta, Andrea Gurr, William Jenkins, Sandro Pietrunti, Alexis Tang
Abstract:
Background: The COVID-19 pandemic has driven pharmacovigilance towards a new paradigm. Nowadays, more people than ever before are recognising and reporting adverse reactions from medications, treatments, and vaccines. In the modern era, with over 3.8 billion users, social media has become the most accessible medium for people to voice their opinions and so provides an opportunity to engage with more patient-centric and accessible pharmacovigilance. However, the pharmaceutical industry has been slow to incorporate social media into its modern pharmacovigilance strategy. This project aims to make social media a more effective tool in pharmacovigilance, and so reduce drug costs, improve drug safety and improve patient outcomes. This will be achieved by firstly uncovering and categorising the barriers facing the widespread adoption of social media in pharmacovigilance. Following this, the potential opportunities of social media will be explored. We will then propose realistic, practical recommendations to make social media a more effective tool for pharmacovigilance. Methodology: A comprehensive systematic literature review was conducted to produce a categorised summary of these barriers. This was followed by conducting 11 semi-structured interviews with pharmacovigilance experts to confirm the literature review findings whilst also exploring the unpublished and real-life challenges faced by those in the pharmaceutical industry. Finally, a survey of the general public (n = 112) ascertained public knowledge, perception, and opinion regarding the use of their social media data for pharmacovigilance purposes. This project stands out by offering perspectives from the public and pharmaceutical industry that fill the research gaps identified in the literature review. Results: Our results gave rise to several key analysis points. Firstly, inadequacies of current Natural Language Processing algorithms hinder effective pharmacovigilance data extraction from social media, and where data extraction is possible, there are significant questions over its quality. Social media also contains a variety of biases towards common drugs, mild adverse drug reactions, and the younger generation. Additionally, outdated regulations for social media pharmacovigilance do not align with new, modern General Data Protection Regulations (GDPR), creating ethical ambiguity about data privacy and level of access. This leads to an underlying mindset of avoidance within the pharmaceutical industry, as firms are disincentivised by the legal, financial, and reputational risks associated with breaking ambiguous regulations. Conclusion: Our project uncovered several barriers that prevent effective pharmacovigilance on social media. As such, social media should be used to complement traditional sources of pharmacovigilance rather than as a sole source of pharmacovigilance data. However, this project adds further value by proposing five practical recommendations that improve the effectiveness of social media pharmacovigilance. These include: prioritising health-orientated social media; improving technical capabilities through investment and strategic partnerships; setting clear regulatory guidelines using multi-stakeholder processes; creating an adverse drug reaction reporting interface inbuilt into social media platforms; and, finally, developing educational campaigns to raise awareness of the use of social media in pharmacovigilance. 
Implementation of these recommendations would speed up the efficient, ethical, and systematic adoption of social media in pharmacovigilance.
Keywords: adverse drug reaction, drug safety, pharmacovigilance, social media
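The inadequacies of current NLP pipelines noted above start from a basic step: extracting candidate drug/adverse-reaction pairs from free-text posts. A minimal, dictionary-based sketch of that step is shown below; the lexicons and the example post are hypothetical, and production systems map mentions to MedDRA terms and curated drug databases rather than simple word lists.

```python
# Hypothetical lexicons; real pipelines use MedDRA terms and drug databases
DRUGS = {"ibuprofen", "metformin", "atorvastatin"}
ADR_TERMS = {"nausea", "dizziness", "rash", "headache", "muscle pain"}

def extract_candidate_reports(post: str):
    """Return (drug, reaction) pairs co-mentioned in a single social media post."""
    text = post.lower()
    drugs = [d for d in DRUGS if d in text]
    reactions = [a for a in ADR_TERMS if a in text]
    return [(d, r) for d in drugs for r in reactions]

post = "Started atorvastatin last week and now I have awful muscle pain and nausea"
print(extract_candidate_reports(post))
# e.g. [('atorvastatin', 'muscle pain'), ('atorvastatin', 'nausea')]
```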
Procedia PDF Downloads 81
78 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements
Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker
Abstract:
Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts. Thus, complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require individualized production of complex shaped workpieces. Due to variations between the nominal model and the actual geometry, this can lead to changes in operations in computer-aided process planning (CAPP) needed to make CAPP manageable for an adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner by finding objective criteria to make decisions about the adaptive manufacturability of workpieces. Nowadays, this kind of decision depends on the experience-based knowledge of humans (e.g., process planners) and results in subjective decisions – leading to variability in workpiece quality and potential failure in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates the actual geometries of single workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining, and to provide a basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known method for designing standardized processes. Especially in applications such as the aerospace industry, standardization and certification of processes are important aspects. Function blocks, providing a standardized, event-driven abstraction of algorithms and data exchange, will be used for modeling and execution of inspection workflows. Each analysis step of the inspection, such as positioning of measurement data or checking of geometrical criteria, will be carried out by function blocks. One advantage of this approach is its flexibility to design workflows and to adapt algorithms specific to the application domain. In general, it is checked whether a geometrical adaptation is possible within the specified tolerance range. The development of particular function blocks is predicated on workpiece-specific information, e.g., design data. Furthermore, for different product lifecycle phases, appropriate logics and decision criteria have to be considered. For example, tolerances for geometric deviations differ in type and size for new-part production compared to repair processes. In addition to function blocks, appropriate referencing systems are important. They need to support exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive and part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, comprising new-part production and a maintenance process. In both cases, a geometrical adaptation is required to calculate individual production data.
In contrast to existing approaches, the proposed initial inspection method provides information to decide between different potential adaptive machining processes.
Keywords: adaptive, CAx, function blocks, turbomachinery
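To make the event-driven function-block idea concrete, the following minimal sketch chains two illustrative analysis steps (registration of measurement data, then a tolerance check). The class design, function names and tolerance values are hypothetical and only indicate how such an inspection workflow could be composed; they are not the implementation described in the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class FunctionBlock:
    """Minimal event-driven function block: runs when its input event fires."""
    name: str
    run: Callable[[dict], dict]      # transforms the shared workflow context
    emits: str = "done"              # event emitted on completion

@dataclass
class InspectionWorkflow:
    blocks: Dict[str, List[FunctionBlock]] = field(default_factory=dict)

    def on(self, event: str, block: FunctionBlock):
        self.blocks.setdefault(event, []).append(block)

    def fire(self, event: str, context: dict) -> dict:
        for block in self.blocks.get(event, []):
            context = block.run(context)
            context = self.fire(block.emits, context)
        return context

# Hypothetical analysis steps for a turbine-blade preform
def align_measurement(ctx):   # register scan data to the CAD reference frame
    ctx["aligned"] = True
    return ctx

def check_tolerances(ctx):    # compare deviations against the allowed adaptation range
    ctx["adaptable"] = ctx.get("max_deviation_mm", 0.0) <= ctx["tolerance_mm"]
    return ctx

wf = InspectionWorkflow()
wf.on("start", FunctionBlock("align", align_measurement, emits="aligned"))
wf.on("aligned", FunctionBlock("tolerance_check", check_tolerances, emits="checked"))

result = wf.fire("start", {"max_deviation_mm": 0.4, "tolerance_mm": 0.5})
print(result["adaptable"])   # True -> proceed with adaptive machining
```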
Procedia PDF Downloads 296
77 Date Palm Wastes Turning into Biochars for Phosphorus Recovery from Aqueous Solutions: Static and Dynamic Investigations
Authors: Salah Jellali, Nusiba Suliman, Yassine Charabi, Jamal Al-Sabahi, Ahmed Al Raeesi, Malik Al-Wardy, Mejdi Jeguirim
Abstract:
Huge amounts of agricultural biomasses are produced worldwide. At the same time, large quantities of phosphorus are discharged annually into water bodies, with possibly serious effects on environmental quality. The main objective of this work is to turn a local Omani biomass (date palm frond wastes: DPFW) into an effective material for phosphorus recovery from aqueous solutions, and to reuse this P-loaded material in agriculture as an ecofriendly amendment. For this aim, the raw DPFW were first impregnated for 24 h with separate 1 M salt solutions of CaCl₂, MgCl₂, FeCl₃, AlCl₃, and a mixture of MgCl₂/AlCl₃, and then pyrolyzed under N₂ flow at 500 °C for 2 hours using an adapted tubular furnace (Carbolite, UK). The synthesized biochars were deeply characterized through specific analyses of their morphology, structure, texture, and surface chemistry. These analyses included the use of a scanning electron microscope (SEM) coupled with an energy-dispersive X-ray spectrometer (EDS), X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, gas sorption measurements, and X-ray fluorescence (XRF) apparatus. Then, their efficiency in recovering phosphorus was investigated in batch mode for various contact times (1 min to 3 h), aqueous pH values (from 3 to 11), initial phosphorus concentrations (10-100 mg/L), and the presence of anions (nitrates, sulfates, and chlorides). In a second step, dynamic assays using laboratory columns (height of 30 cm and diameter of 3 cm) were performed in order to investigate the recovery of phosphorus by the biochar modified with the Mg/Al mixture. The effect of the initial P concentration (25-100 mg/L), the bed depth (biochar masses of 3 to 8 g), and the flow rate (10-30 mL/min) was assessed. Experimental results showed that the physico-chemical properties of the biochars were very dependent on the type of modifying salt used. The main affected parameters were the specific surface area, micropore area, and surface chemistry (pH of zero-point charge and available functional groups). These characteristics significantly affected the phosphorus recovery efficiency from aqueous solutions. Indeed, the P removal efficiency in batch mode varies from about 5 mg/g for the Fe-modified biochar to more than 13 mg/g for the biochar functionalized with Mg/Al layered double hydroxides. Moreover, the P recovery appears to be a time-dependent process that is significantly affected by the pH of the aqueous medium and the presence of foreign anions, due to competition phenomena. The laboratory column study of phosphorus recovery by the biochar functionalized with Mg/Al layered double hydroxides showed that this process is affected by the phosphorus concentration used, the flow rate, and especially the column bed depth. Indeed, the recovered phosphorus amount increased from about 4.9 to more than 9.3 mg/g for biochar bed masses of 3 and 8 g, respectively. This work proved that salt-modified palm-frond-derived biochars could be considered attractive and promising materials for phosphorus recovery from aqueous solutions, even under dynamic conditions. The valorization of these P-loaded modified biochars as an eco-friendly amendment for agricultural soils would promote sustainability and circular economy concepts in the management of both liquid and solid wastes.
Keywords: date palm wastes, Mg/Al double-layered hydroxides functionalized biochars, phosphorus, recovery, sustainability, circular economy
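The batch recovery figures quoted in mg/g follow from the standard sorption mass balance; the note below states it with illustrative notation and a worked number, neither of which is taken from the study.

```latex
% Batch sorption mass balance (illustrative notation, not from the paper):
% q_e = phosphorus recovered per unit biochar mass at equilibrium (mg/g)
\[
  q_e \;=\; \frac{(C_0 - C_e)\,V}{m}
\]
% C_0, C_e : initial and equilibrium P concentrations (mg/L)
% V        : solution volume (L);   m : biochar mass (g)
%
% Example (hypothetical): C_0 = 50 mg/L, C_e = 24 mg/L, V = 0.1 L, m = 0.2 g
% gives q_e = (50 - 24)(0.1)/0.2 = 13 mg/g, the order of magnitude reported
% for the Mg/Al-functionalized biochar in batch mode.
```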
Procedia PDF Downloads 80
76 Establishment of Farmed Fish Welfare Biomarkers Using an Omics Approach
Authors: Pedro M. Rodrigues, Claudia Raposo, Denise Schrama, Marco Cerqueira
Abstract:
Farmed fish welfare is a very recent concept, widely discussed among the scientific community. Consumers' interest in farmed animal welfare standards has increased significantly in recent years, posing a huge challenge to producers, who must maintain an equilibrium between good welfare principles and productivity while simultaneously achieving public acceptance. A major bottleneck of standard aquaculture is that it can considerably impair fish welfare throughout the production cycle and, with this, the quality of fish protein. Welfare assessment in farmed fish is undertaken through the evaluation of fish stress responses. Primary and secondary stress responses include the release of cortisol, and of glucose and lactate, into the bloodstream, respectively; these are currently the most commonly used indicators of stress exposure. However, the reliability of these indicators is highly questionable, due to the high variability of fish responses to acute stress and the adaptation of the animal to repetitive chronic stress. Our objective is to use comparative proteomics to identify and validate a fingerprint of proteins that can present a more reliable alternative to the already established welfare indicators. In this way, culture conditions will improve, and there will be a better understanding of the mechanisms and metabolic pathways involved in the welfare of the farmed organism. Due to its high economic importance in Portuguese aquaculture, gilthead seabream is the species selected for this study. Protein extracts from the muscle, liver, and plasma of gilthead seabream reared for a three-month period under optimized culture conditions (control) and induced stress conditions (handling, high densities, and hypoxia) are collected and used to identify a putative fingerprint of fish welfare protein markers using a proteomics approach. Three tanks per condition and three biological replicates per tank are used for each analysis. Briefly, proteins from the target tissue/fluid are extracted using standard established protocols. Protein extracts are then separated using 2D-DIGE (difference gel electrophoresis). Proteins differentially expressed between control and induced stress conditions will be identified by mass spectrometry (LC-MS/MS) using the NCBInr database (taxonomic level: Actinopterygii) and the Mascot search engine. The statistical analysis is performed in the R software environment, using a one-tailed Mann-Whitney U-test (p < 0.05) to assess which proteins are differentially expressed in a statistically significant way. Validation of these proteins will be done by comparing the RT-qPCR (quantitative reverse transcription polymerase chain reaction) gene expression pattern with the proteomic profile. Cortisol, glucose, and lactate are also measured in order to confirm or refute the reliability of these indicators. The liver proteins identified under handling- and high-density-induced stress conditions are involved in several metabolic pathways and functions, such as primary metabolism (i.e., glycolysis and gluconeogenesis), ammonia metabolism, cytoskeletal structure, cell signaling, and lipid transport. Validation of these proteins, as well as identical analyses in muscle and plasma, is underway. Proteomics is a promising high-throughput technique that can be successfully applied to identify putative welfare protein biomarkers in farmed fish.
Keywords: aquaculture, fish welfare, proteomics, welfare biomarkers
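The per-protein significance test described above (the study uses a one-tailed Mann-Whitney U-test in R) can be illustrated with its SciPy equivalent; the normalized spot volumes below are hypothetical values, not study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical normalized spot volumes for one protein across biological replicates
control = np.array([1.02, 0.95, 1.10, 0.98, 1.05, 0.99, 1.01, 0.97, 1.04])
stressed = np.array([1.35, 1.22, 1.41, 1.18, 1.30, 1.27, 1.33, 1.25, 1.39])

# One-sided test: is expression higher under induced stress than in controls?
stat, p_value = mannwhitneyu(stressed, control, alternative="greater")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Protein flagged as differentially expressed (candidate welfare marker)")
```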
Procedia PDF Downloads 155
75 Construction of an Assessment Tool for Early Childhood Development in the World of Discovery™ Curriculum
Authors: Divya Palaniappan
Abstract:
Early Childhood assessment tools must measure the quality and the appropriateness of a curriculum with respect to culture and age of the children. Preschool assessment tools lack psychometric properties and were developed to measure only few areas of development such as specific skills in music, art and adaptive behavior. Existing preschool assessment tools in India are predominantly informal and are fraught with judgmental bias of observers. The World of Discovery TM curriculum focuses on accelerating the physical, cognitive, language, social and emotional development of pre-schoolers in India through various activities. The curriculum caters to every child irrespective of their dominant intelligence as per Gardner’s Theory of Multiple Intelligence which concluded "even students as young as four years old present quite distinctive sets and configurations of intelligences". The curriculum introduces a new theme every week where, concepts are explained through various activities so that children with different dominant intelligences could understand it. For example: The ‘Insects’ theme is explained through rhymes, craft and counting corner, and hence children with one of these dominant intelligences: Musical, bodily-kinesthetic and logical-mathematical could grasp the concept. The child’s progress is evaluated using an assessment tool that measures a cluster of inter-dependent developmental areas: physical, cognitive, language, social and emotional development, which for the first time renders a multi-domain approach. The assessment tool is a 5-point rating scale that measures these Developmental aspects: Cognitive, Language, Physical, Social and Emotional. Each activity strengthens one or more of the developmental aspects. During cognitive corner, the child’s perceptual reasoning, pre-math abilities, hand-eye co-ordination and fine motor skills could be observed and evaluated. The tool differs from traditional assessment methodologies by providing a framework that allows teachers to assess a child’s continuous development with respect to specific activities in real time objectively. A pilot study of the tool was done with a sample data of 100 children in the age group 2.5 to 3.5 years. The data was collected over a period of 3 months across 10 centers in Chennai, India, scored by the class teacher once a week. The teachers were trained by psychologists on age-appropriate developmental milestones to minimize observer’s bias. The norms were calculated from the mean and standard deviation of the observed data. The results indicated high internal consistency among parameters and that cognitive development improved with physical development. A significant positive relationship between physical and cognitive development has been observed among children in a study conducted by Sibley and Etnier. In Children, the ‘Comprehension’ ability was found to be greater than ‘Reasoning’ and pre-math abilities as indicated by the preoperational stage of Piaget’s theory of cognitive development. The average scores of various parameters obtained through the tool corroborates the psychological theories on child development, offering strong face validity. The study provides a comprehensive mechanism to assess a child’s development and differentiate high performers from the rest. 
Based on the average scores, the difficulty level of activities could be increased or decreased to nurture the development of pre-schoolers, and appropriate teaching methodologies could be devised.
Keywords: child development, early childhood assessment, early childhood curriculum, quantitative assessment of preschool curriculum
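The abstract notes that norms were derived from the mean and standard deviation of the observed ratings. A minimal sketch of how a raw domain score could be normed against such statistics is shown below; the scores, cut-offs and descriptive bands are hypothetical and are not the study's norms.

```python
import numpy as np

# Hypothetical weekly ratings (1-5 scale) for one developmental domain in the pilot sample
cognitive_scores = np.array([3.2, 3.8, 4.1, 2.9, 3.5, 4.0, 3.7, 3.3, 3.9, 3.6])

mean, sd = cognitive_scores.mean(), cognitive_scores.std(ddof=1)

def classify(raw_score, mean, sd):
    """Convert a raw domain score to a z-score and a simple descriptive band."""
    z = (raw_score - mean) / sd
    if z >= 1.0:
        return z, "above expected range"
    if z <= -1.0:
        return z, "below expected range (may need extra support)"
    return z, "within expected range"

z, band = classify(4.2, mean, sd)
print(f"norm: mean={mean:.2f}, sd={sd:.2f}; child z={z:.2f} -> {band}")
```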
Procedia PDF Downloads 362
74 Developing a Sustainable Transit Planning Index Using Analytical Hierarchy Process Method for ZEB Implementation in Canada
Authors: Mona Ghafouri-Azar, Sara Diamond, Jeremy Bowes, Grace Yuan, Aimee Burnett, Michelle Wyndham-West, Sara Wagner, Anand Pariyarath
Abstract:
Transportation is the fastest growing source of greenhouse gas emissions worldwide. In Canada, it is responsible for 23% of total CO₂ emissions from fuel combustion, and emissions from the transportation sector are the second largest source of emissions after the oil and gas sector. Currently, most Canadian public transportation systems rely on buses that operate on fossil fuels. Canada is currently investing billions of dollars to replace diesel buses with electric buses, as this is perceived to have a significant impact on climate mitigation. This paper focuses on the possible impacts of zero emission buses (ZEB) on sustainable development, considering three dimensions of sustainability: environmental quality, economic growth, and social development. A sustainable transportation system is one that is safe, affordable, accessible, efficient, and resilient, and that contributes minimal emissions of carbon and other pollutants. To enable implementation of these goals, relevant indicators were selected and defined that measure progress towards a sustainable transportation system. These were drawn from Canadian and international examples. Studies compare different European cities in terms of development, sustainability, and infrastructure by using transport performance indicators. A normalized transport sustainability index measures and compares policies in different urban areas and allows fine-tuning of policies. Analysts use a number of methods for sustainability analysis, such as cost-benefit analysis (CBA) to assess economic benefit, life-cycle assessment (LCA) to assess social, economic, and environmental factors and goals, and multi-criteria decision making (MCDM) analysis, which can compare differing stakeholder preferences. A multi-criteria decision making approach is an appropriate methodology to plan and evaluate sustainable transit development and to provide insights and meaningful information for decision makers and transit agencies. It is essential to develop a system that aggregates specific discrete indices to assess the sustainability of transportation systems. These prioritize indicators appropriate for the different Canadian transit system agencies and their preferences and requirements. This study will develop an integrating index that allies existing discrete indexes to support a reliable comparison between the current transportation system (diesel buses) and the new ZEB system emerging in Canada. As a first step, the indexes for each category are selected, and the index matrix is constructed. Second, the selected indicators are normalized to remove any inconsistency between them. Next, the normalized matrix is weighted based on the relative importance of each index to the main domains of sustainability using the analytical hierarchy process (AHP) method. This is accomplished through expert judgement about the relative importance of different attributes with respect to the goals, expressed through a pairwise comparison matrix. The consideration of multiple environmental, economic, and social factors (including equity and health) is integrated into a sustainable transit planning index (STPI), which supports realistic ZEB implementation in Canada and beyond and is useful to different stakeholders, agencies, and ministries.
Keywords: zero emission buses, sustainability, sustainable transit, transportation, analytical hierarchy process, environment, economy, social
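A minimal sketch of the AHP weighting step described above: a pairwise comparison matrix is turned into priority weights via its principal eigenvector, checked for consistency, and applied to normalized indicator scores. The matrix entries, indicator values and two-system comparison are hypothetical, not those of the study.

```python
import numpy as np

# Hypothetical pairwise comparison of three sustainability domains
# (environment, economy, social) on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 2.0],    # environment vs. (environment, economy, social)
    [1/3, 1.0, 1/2],    # economy
    [1/2, 2.0, 1.0],    # social
])

# Principal eigenvector gives the priority weights
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (CR < 0.10 is conventionally acceptable); random index RI = 0.58 for n = 3
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print("weights (env, econ, social):", np.round(weights, 3), " CR:", round(cr, 3))

# Apply the weights to normalized indicator scores for two systems (hypothetical values)
scores = {"diesel bus": np.array([0.40, 0.70, 0.55]),
          "ZEB":        np.array([0.85, 0.55, 0.65])}
for system, s in scores.items():
    print(f"STPI({system}) = {float(weights @ s):.3f}")
```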
Procedia PDF Downloads 127
73 Unveiling the Dynamics of Preservice Teachers’ Engagement with Mathematical Modeling through Model Eliciting Activities: A Comprehensive Exploration of Acceptance and Resistance Towards Modeling and Its Pedagogy
Authors: Ozgul Kartal, Wade Tillett, Lyn D. English
Abstract:
Despite its global significance in curricula, mathematical modeling encounters persistent disparities in recognition and emphasis within regular mathematics classrooms and teacher education across countries with diverse educational and cultural traditions, including variations in the perceived role of mathematical modeling. Over the past two decades, increased attention has been given to the integration of mathematical modeling into national curriculum standards in the U.S. and other countries. Therefore, the mathematics education research community has dedicated significant efforts to investigate various aspects associated with the teaching and learning of mathematical modeling, primarily focusing on exploring the applicability of modeling in schools and assessing students', teachers', and preservice teachers' (PTs) competencies and engagement in modeling cycles and processes. However, limited attention has been directed toward examining potential resistance hindering teachers and PTs from effectively implementing mathematical modeling. This study focuses on how PTs, without prior modeling experience, resist and/or embrace mathematical modeling and its pedagogy as they learn about models and modeling perspectives, navigate the modeling process, design and implement their modeling activities and lesson plans, and experience the pedagogy enabling modeling. Model eliciting activities (MEAs) were employed due to their high potential to support the development of mathematical modeling pedagogy. The mathematical modeling module was integrated into a mathematics methods course to explore how PTs embraced or resisted mathematical modeling and its pedagogy. The module design included reading, reflecting, engaging in modeling, assessing models, creating a modeling task (MEA), and designing a modeling lesson employing an MEA. Twelve senior undergraduate students participated, and data collection involved video recordings, written prompts, lesson plans, and reflections. An open coding analysis revealed acceptance and resistance toward teaching mathematical modeling. The study identified four overarching themes, including both acceptance and resistance: pedagogy, affordance of modeling (tasks), modeling actions, and adjusting modeling. In the category of pedagogy, PTs displayed acceptance based on potential pedagogical benefits and resistance due to various concerns. The affordance of modeling (tasks) category emerged from instances when PTs showed acceptance or resistance while discussing the nature and quality of modeling tasks, often debating whether modeling is considered mathematics. PTs demonstrated both acceptance and resistance in their modeling actions, engaging in modeling cycles as students and designing/implementing MEAs as teachers. The adjusting modeling category captured instances where PTs accepted or resisted maintaining the qualities and nature of the modeling experience or converted modeling into a typical structured mathematics experience for students. While PTs displayed a mix of acceptance and resistance in their modeling actions, limitations were observed in embracing complexity and adhering to model principles. The study provides valuable insights into the challenges and opportunities of integrating mathematical modeling into teacher education, emphasizing the importance of addressing pedagogical concerns and providing support for effective implementation. 
In conclusion, this research offers a comprehensive understanding of PTs' engagement with modeling, advocating for a more focused discussion on the distinct nature and significance of mathematical modeling in the broader curriculum to establish a foundation for effective teacher education programs.
Keywords: mathematical modeling, model eliciting activities, modeling pedagogy, secondary teacher education
Procedia PDF Downloads 63
72 Multilocus Phylogenetic Approach Reveals Informative DNA Barcodes for Studying Evolution and Taxonomy of Fusarium Fungi
Authors: Alexander A. Stakheev, Larisa V. Samokhvalova, Sergey K. Zavriev
Abstract:
Fusarium fungi are among the most devastating plant pathogens distributed all over the world. Significant reduction of grain yield and quality caused by Fusarium leads to multi-billion dollar annual losses to world agricultural production. These organisms can also cause infections in immunocompromised persons and produce a wide range of mycotoxins, such as trichothecenes, fumonisins, and zearalenone, which are hazardous to human and animal health. Identification of Fusarium fungi based on the morphology of spores and spore-forming structures, colony color, and appearance on specific culture media is often very complicated due to the high similarity of these features among closely related species. Modern Fusarium taxonomy increasingly uses data from crossing experiments (biological species concept) and genetic polymorphism analysis (phylogenetic species concept). A number of novel Fusarium sibling species have been established using DNA barcoding techniques. Species recognition is best made with the combined phylogeny of intron-rich protein-coding genes and ribosomal DNA sequences. However, the internal transcribed spacer (ITS) region, which is considered to be the universal DNA barcode for fungi, is not suitable for the genus Fusarium because of its insufficient variability between closely related species and the presence of non-orthologous copies in the genome. Nowadays, the translation elongation factor 1 alpha (TEF1α) gene is the “gold standard” of Fusarium taxonomy, but the search for novel informative markers is still needed. In this study, we used two novel DNA markers, frataxin (FXN) and heat shock protein 90 (HSP90), to discover phylogenetic relationships between Fusarium species. Multilocus phylogenetic analysis based on partial sequences of the TEF1α, FXN, and HSP90 genes, as well as the intergenic spacer of ribosomal DNA (IGS), beta-tubulin (β-TUB), and phosphate permease (PHO) genes, has been conducted for 120 isolates of 19 Fusarium species from different climatic zones of Russia and neighboring countries using maximum likelihood (ML) and maximum parsimony (MP) algorithms. Our analyses revealed that the FXN and HSP90 genes can be considered informative phylogenetic markers, suitable for evolutionary and taxonomic studies of the genus Fusarium. It has been shown that the PHO gene possesses more variable (22%) and parsimony-informative (19%) characters than other markers, including TEF1α (12% and 9%, respectively), when used for elucidating phylogenetic relationships between F. avenaceum and its closest relatives – F. tricinctum, F. acuminatum, and F. torulosum. Application of the novel DNA barcodes confirmed that F. arthrosporioides does not represent a separate species but only a subspecies of F. avenaceum. Phylogeny based on partial PHO and FXN sequences revealed the presence of a separate cluster of four F. avenaceum strains that were closer to F. torulosum than to the major F. avenaceum clade. The strain F-846 from Moldova, morphologically identified as F. poae, formed a separate lineage in all the constructed dendrograms and could potentially be considered a separate species, but more information is needed to confirm this conclusion. Variable sites in PHO sequences were used for the first-time development of specific qPCR-based diagnostic assays for F. acuminatum and F. torulosum. This work was supported by the Russian Foundation for Basic Research (grant № 15-29-02527).
Keywords: DNA barcode, fusarium, identification, phylogenetics, taxonomy
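The percentages of variable and parsimony-informative characters quoted for each marker are alignment-level counts; the short sketch below shows how such counts are typically obtained. The toy alignment is illustrative only and is unrelated to the PHO or TEF1α data.

```python
# Count variable and parsimony-informative sites in a set of aligned sequences.
# Toy alignment for illustration; real analyses use full gene alignments.
alignment = [
    "ATGCTAGCTA",
    "ATGCTTGCTA",
    "ATGCTTGCTA",
    "ACGCTAGCTG",
]

def site_classes(aln):
    variable = informative = 0
    for column in zip(*aln):
        counts = {}
        for base in column:
            if base not in "-N":                 # ignore gaps/ambiguities
                counts[base] = counts.get(base, 0) + 1
        if len(counts) > 1:
            variable += 1
            # parsimony-informative: at least two states each present in >= 2 taxa
            if sum(1 for c in counts.values() if c >= 2) >= 2:
                informative += 1
    return variable, informative

var, inf = site_classes(alignment)
length = len(alignment[0])
print(f"variable: {var}/{length} ({100*var/length:.0f}%), "
      f"parsimony-informative: {inf}/{length} ({100*inf/length:.0f}%)")
```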
Procedia PDF Downloads 322
71 Facilitating Primary Care Practitioners to Improve Outcomes for People With Oropharyngeal Dysphagia Living in the Community: An Ongoing Realist Review
Authors: Caroline Smith, Professor Debi Bhattacharya, Sion Scott
Abstract:
Introduction: Oropharyngeal dysphagia (OD) affects around 15% of older people; however, it is often unrecognised and underdiagnosed until they are hospitalised. There is a need for primary care healthcare practitioners (HCPs) to assume a proactive role in identifying and managing OD to prevent adverse outcomes such as aspiration pneumonia. Understanding the determinants of primary care HCPs undertaking this new behaviour provides the intervention targets to address. This realist review, underpinned by the Theoretical Domains Framework (TDF), aims to synthesise the relevant literature and develop programme theories to understand what interventions work, how they work, and under what circumstances, in order to facilitate HCPs to prevent harm from OD. Combining realist methodology with behavioural science will permit conceptualisation of intervention components as theoretical behavioural constructs, thus informing the design of a future behaviour change intervention. Furthermore, through the TDF's linkage to a taxonomy of behaviour change techniques, we will identify corresponding behaviour change techniques to include in this intervention. Methods & analysis: We are following the five steps for undertaking a realist review: 1) clarifying the scope, 2) searching the literature, 3) appraising and extracting data, 4) synthesising the evidence, and 5) evaluation. We have searched the Medline, Google Scholar, PubMed, EMBASE, CINAHL, AMED, Scopus, and PsycINFO databases. We are obtaining additional evidence through grey literature, snowball sampling, lateral searching, and consulting the stakeholder group. Literature is being screened, evaluated, and synthesised in Excel and NVivo. We will appraise evidence in relation to its relevance and rigour. Data will be extracted and synthesised according to their relation to initial programme theories (IPTs). The IPTs were constructed after the preliminary literature search, informed by the TDF and with input from a stakeholder group of patient and public involvement advisors, general practitioners, speech and language therapists, geriatricians, and pharmacists. We will follow the Realist and Meta-narrative Evidence Syntheses: Evolving Standards (RAMESES) quality and publication standards to report the study results. Results: In this ongoing review, our search has identified 1,417 manuscripts, with approximately 20% progressing to full-text screening. We inductively generated 10 IPTs that hypothesise that practitioners require: the knowledge to spot the signs and symptoms of OD; the skills to provide initial advice and support; and access to resources in their working environment to support them in conducting these new behaviours. We mapped the 10 IPTs to 8 TDF domains and then generated a further 12 IPTs deductively, using domain definitions, to fulfil the remaining 6 TDF domains. The deductively generated IPTs broadened our thinking to consider domains such as 'Emotion', 'Optimism', and 'Social Influence', e.g., if practitioners perceive that patients, carers, and relatives expect initial advice and support, then they will be more likely to provide it, because they will feel obligated to do so. After prioritisation with stakeholders using a modified nominal group technique approach, a maximum of 10 IPTs will progress to testing against the literature.
Keywords: behaviour change, deglutition disorders, primary healthcare, realist review
Procedia PDF Downloads 85
70 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators
Authors: K. O'Malley
Abstract:
Background: The emergence and rapid evolution of large language models (LLMs) (i.e., models of generative artificial intelligence, or AI) has been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast internet text and data swathes and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT4.0) in standardized state medical licensing examinations. Methods: A considered search strategy was used to interrogate the PubMed electronic database. The keywords ‘ChatGPT’ AND ‘medical education’ OR ‘medical school’ OR ‘medical licensing exam’ were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years. The search was limited to publications in the English language only. Eligibility was ascertained based on the study title and abstract and confirmed by consulting the full-text document. Data was extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent. The mean score across all studies was 82.49 percent (SD= 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibits a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.Keywords: artificial intelligence, ChatGPT, generative ai, large language models, licensing exam, medical education, medicine, university
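The pooled statistics reported above (mean 82.49%, SD 5.95 across 11 studies) come from simple descriptive arithmetic over the per-study scores. The sketch below shows that calculation on hypothetical scores spanning the reported 67.1-88.6% range; the values and pass thresholds are illustrative, not the extracted study data.

```python
import statistics

# Hypothetical per-study ChatGPT 4.0 scores (percent); the review reports a range
# of 67.1-88.6% and a pooled mean of 82.49% (SD 5.95) across 11 included studies.
scores = [67.1, 79.5, 81.0, 82.3, 83.0, 84.1, 84.8, 85.5, 86.2, 87.3, 88.6]
pass_marks = [60.0] * len(scores)   # illustrative pass thresholds, one per exam

print(f"mean = {statistics.mean(scores):.2f}%, sd = {statistics.stdev(scores):.2f}%")
print("all exams passed:", all(s >= t for s, t in zip(scores, pass_marks)))
```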
Procedia PDF Downloads 28
69 Identifying Effective Strategies to Promote Vietnamese Fashion Brands in an Internationally Dominated Market
Authors: Lam Hong Lan, Gabor Sarlos
Abstract:
It is hard to search for best practices in promotion for local fashion brands in Vietnam as the industry is still very young. Local fashion start-ups have grown quickly in the last five years, thanks in part to the internet and social media. However, local designer/owners can face a huge challenge when competing with international brands in the Vietnamese market – and few local case studies are available for guidance. In response, this paper studied how local small- to medium-sized enterprises (SMEs) promote to their target customers in order to compete with international brands. Knowledge of both successful and unsuccessful approaches generated by this study is intended to both contribute to the academic literature on local fashion in Vietnam as well as to help local designers to learn from and improve their brand-building strategy. The primary study featured qualitative data collection via semi-structured depth interviews. Transcription and data analysis were conducted manually in order to identify success factors that local brands should consider as part of their promotion strategy. Purposive sampling of SMEs identified five designers in Ho Chi Minh City (the biggest city in Vietnam) and three designers in Hanoi (the second biggest) as interviewees. Participant attributes included: born in the 1980s or 1990s; familiar with internet and social media; designer/owner of a successful local fashion brand in the key middle market and/or mass market segments (which are crucial to the growth of local brands). A secondary study was conducted using social listening software to gather further qualitative data on what were considered to be successful or unsuccessful approaches to local fashion brand promotion on social media. Both the primary and secondary studies indicated that local designers had maximized their promotion budget by using owned media and earned media instead of paid media. Findings from the qualitative interviews indicate that internet and social media have been used as effective promotion platforms by local fashion start-ups. Facebook and Instagram were the most popular social networks used by the SMEs interviewed, and these social platforms were believed to offer a more affordable promotional strategy than traditional media such as TV and/or print advertising. Online stores were considered an important factor in helping the SMEs to reach customers beyond the physical store. Furthermore, a successful online store allowed some SMEs to reduce their business rental costs by maintaining their physical store in a cheaper, less central city area as opposed to a more traditional city center store location. In addition, the small comparative size of the SMEs allowed them to be more attentive to their customers, leading to higher customer satisfaction and rate of return. In conclusion, this study found that these kinds of cost savings helped the SMEs interviewed to focus their scarce resources on producing unique, high-quality collections in order to differentiate themselves from international brands. Facebook and Instagram were the main platforms used for promotion and brand-building. The main challenge to this promotion strategy identified by the SMEs interviewed was to continue to find innovative ways to maximize the impact of a limited marketing budget.Keywords: Vietnam, SMEs, fashion brands, promotion, marketing, social listening
Procedia PDF Downloads 124
68 Fuzzy Data, Random Drift, and a Theoretical Model for the Sequential Emergence of Religious Capacity in Genus Homo
Authors: Margaret Boone Rappaport, Christopher J. Corbally
Abstract:
The ancient ape ancestral population from which living great ape and human species evolved had demographic features affecting their evolution. The population was large, had great genetic variability, and natural selection was effective at honing adaptations. The emerging populations of chimpanzees and humans were affected more by founder effects and genetic drift because they were smaller. Natural selection did not disappear, but it was not as strong. Consequences of the 'population crash' and the human effective population size are introduced briefly. The history of the ancient apes is written in the genomes of living humans and great apes. The expansion of the brain began before the human line emerged. Coalescence times for some genes are very old – up to several million years, long before Homo sapiens. The mismatch between gene trees and species trees highlights the anthropoid speciation processes, and gives the human genome history a fuzzy, probabilistic quality. However, it suggests traits that might form a foundation for capacities emerging later. A theoretical model is presented in which the genomes of early ape populations provide the substructure for the emergence of religious capacity later on the human line. The model does not search for religion, but its foundations. It suggests a course by which an evolutionary line that began with prosimians eventually produced a human species with biologically based religious capacity. The model of the sequential emergence of religious capacity relies on cognitive science, neuroscience, paleoneurology, primate field studies, cognitive archaeology, genomics, and population genetics. And, it emphasizes five trait types: (1) Documented, positive selection of sensory capabilities on the human line may have favored survival, but also eventually enriched human religious experience. (2) The bonobo model suggests a possible down-regulation of aggression and increase in tolerance while feeding, as well as paedomorphism – but, in a human species that remains cognitively sharp (unlike the bonobo). The two species emerged from the same ancient ape population, so it is logical to search for shared traits. (3) An up-regulation of emotional sensitivity and compassion seems to have occurred on the human line. This finds support in modern genetic studies. (4) The authors’ published model of morality's emergence in Homo erectus encompasses a cognitively based, decision-making capacity that was hypothetically overtaken, in part, by religious capacity. Together, they produced a strong, variable, biocultural capability to support human sociability. (5) The full flowering of human religious capacity came with the parietal expansion and smaller face (klinorhynchy) found only in Homo sapiens. Details from paleoneurology suggest the stage was set for human theologies. Larger parietal lobes allowed humans to imagine inner spaces, processes, and beings, and, with the frontal lobe, led to the first theologies composed of structured and integrated theories of the relationships between humans and the supernatural. The model leads to the evolution of a small population of African hominins that was ready to emerge with religious capacity when the species Homo sapiens evolved two hundred thousand years ago. By 50-60,000 years ago, when human ancestors left Africa, they were fully enabled.Keywords: genetic drift, genomics, parietal expansion, religious capacity
Procedia PDF Downloads 341
67 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan
Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad
Abstract:
Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique, which in combination with cryo-electron microscopy (cryo-ED), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50 – 100 nm. This is a critical advantage as growing larger crystals, as required by X-ray crystallography methods, is often very difficult, time-consuming, and expensive. In most cases, specimens studied via 3DED method are electron beam sensitive, which means there is a limitation on the maximum amount of electron dose one can use to collect the required data for a high-resolution structure determination. Therefore, collecting data using a conventional scintillator-based fiber coupled camera brings additional challenges. This is because of the inherent noise introduced during the electron-to-photon conversion in the scintillator and transfer of light via the fibers to the sensor, which results in a poor signal-to-noise ratio and requires a relatively higher and commonly specimen-damaging electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (that utilizes DECTRIS hybrid-pixel technology), to address this problem. The K3 is an electron counting detector optimized for low-dose applications (like structural biology cryo-EM), and Stela is also a counting electron detector but optimized for diffraction applications with high speed and high dynamic range. Lastly, data collection workflows, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, can be one of the other challenges of the 3DED technique. Traditionally this has been all done manually or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher quality 3DED data enables structure determination with higher confidence, while automated workflows allow these to be completed considerably faster than before. Using multiple examples, this work will demonstrate how to direct detection electron counting cameras enhance 3DED results (3 to better than 1 Angstrom) for protein and small molecule structure determination. We will also show how Latitude D software facilitates collecting such data in an integrated and fully automated user interface.Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, Digitalmicrograph, proteins, small molecules
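The automated workflow described above (crystal screening, optics set-up, per-crystal stage-height adjustment, tilt-series acquisition) can be pictured as a simple acquisition loop. The sketch below is purely illustrative: the function names are hypothetical print-only stand-ins and do not correspond to the actual Latitude D or DigitalMicrograph scripting interfaces.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Crystal:
    x_um: float
    y_um: float
    z_um: float

# The functions below are hypothetical stand-ins (print-only stubs); they are NOT
# the Latitude D / DigitalMicrograph scripting API.
def screen_grid_for_crystals() -> List[Crystal]:
    return [Crystal(12.0, -4.5, 0.6), Crystal(88.3, 41.0, 1.1)]   # stub hit list

def adjust_stage_height(c: Crystal) -> None:
    print(f"  stage -> eucentric height at ({c.x_um}, {c.y_um}), z = {c.z_um} um")

def set_optics(mode: str, dose_rate: float) -> None:
    print(f"  optics -> {mode}, dose rate {dose_rate} e/A^2/s")

def acquire_tilt_series(out_path: str, tilt_range=(-60, 60), tilt_step=0.5) -> None:
    print(f"  acquiring {tilt_range[0]}..{tilt_range[1]} deg in {tilt_step} deg steps -> {out_path}")

def run_3ded_session(dose_rate: float = 0.05) -> None:
    """Illustrative automated 3DED acquisition loop over screened crystals."""
    for i, crystal in enumerate(screen_grid_for_crystals()):   # low-dose screening pass
        print(f"crystal {i}:")
        adjust_stage_height(crystal)
        set_optics("diffraction", dose_rate)
        acquire_tilt_series(out_path=f"crystal_{i:03d}_tilt_series.mrc")

run_3ded_session()
```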
Procedia PDF Downloads 105
66 Experimental Characterisation of Composite Panels for Railway Flooring
Authors: F. Pedro, S. Dias, A. Tadeu, J. António, Ó. López, A. Coelho
Abstract:
Railway transportation is considered the most economical and sustainable way to travel. However, future mobility brings important challenges to railway operators. The main target is to develop solutions that stimulate sustainable mobility. The research and innovation goals for this domain are efficient solutions, ensuring an increased level of safety and reliability, improved resource efficiency, high availability of the means (the train), and passengers satisfied with the level of travel comfort. These requirements are in line with the European Strategic Agenda for the 2020 rail sector, promoted by the European Rail Research Advisory Council (ERRAC). All these aspects involve redesigning current equipment and, in particular, the interior of the carriages. Recent studies have shown that two of the most important requirements for passengers are reasonable ticket prices and comfortable interiors. Passengers tend to use their travel time to rest or to work, so train interiors and their systems need to incorporate features that meet these requirements. Among the various systems that make up train interiors, the flooring system is one of those with the greatest impact on passenger safety and comfort. It is also one of the most time-consuming systems to install on the train, and it contributes considerably to the weight (mass) of all interior systems. Additionally, it has a strong impact on manufacturing costs. The design of railway flooring, in the development phase, usually relies on design software that allows several solutions to be drawn and calculated in a short period of time. After obtaining the best solution with respect to the previously defined goals, experimental data are always necessary and required. This experimental phase is so significant that its outcome can prompt revision of the designed solution. This paper presents the methodology and some of the results of an experimental characterisation of composite panels for railway applications. The mechanical tests were performed on unaged specimens and on specimens that underwent some type of ageing, i.e., heat, cold and humidity cycles or freezing/thawing cycles. This conditioning aims to simulate not only the effect of time but also the impact of severe environmental conditions. Both full solutions and separate components/materials were tested. For the full solution (panel), the tests were: four-point bending tests, tensile shear strength, tensile strength perpendicular to the plane, determination of the spreading of water, and impact tests. For individual characterisation of the components, more specifically the covering, the following tests were performed: determination of the tensile stress-strain properties, determination of flexibility, determination of tear strength, peel test, tensile shear strength test, adhesion resistance test, and dimensional stability. The main conclusion was that experimental characterisation contributes substantially to understanding the behaviour of the materials, both individually and assembled. This knowledge contributes to increasing the quality and improvement of premium solutions. This research work was framed within the POCI-01-0247-FEDER-003474 (coMMUTe) Project funded by Portugal 2020 through COMPETE 2020.
Keywords: durability, experimental characterization, mechanical tests, railway flooring system
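For the four-point bending tests listed above, the flexural stress at the panel's outer face is conventionally obtained from the standard beam formula. The note below assumes loading noses at the quarter points of the support span, with illustrative numbers; neither the fixture geometry nor the values are taken from the study.

```latex
% Four-point bending with loading noses at the quarter points of the support span L
% (illustrative notation; the fixture geometry used in the study may differ):
\[
  \sigma_f \;=\; \frac{3\,F\,L}{4\,b\,h^{2}}
\]
% sigma_f : flexural stress at the outer face (MPa)
% F : total applied load (N), L : support span (mm), b : width (mm), h : thickness (mm)
%
% Example (hypothetical): F = 2000 N, L = 300 mm, b = 50 mm, h = 12 mm
%   sigma_f = 3(2000)(300) / (4 * 50 * 12^2) = 62.5 MPa
```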
Procedia PDF Downloads 154
65 Clinico-pathological Study of Xeroderma Pigmentosa: A Case Series of Eight Cases
Authors: Kakali Roy, Sahana P. Raju, Subhra Dhar, Sandipan Dhar
Abstract:
Introduction: Xeroderma pigmentosa (XP) is a rare inherited (autosomal recessive) disease resulting from impairment in DNA repair, specifically in the recognition and repair of ultraviolet radiation (UVR)-induced DNA damage in the nucleotide excision repair pathway. This results in increased photosensitivity, UVR-induced damage to the skin and eyes, increased susceptibility to skin and ocular cancer, and progressive neurodegeneration in some patients. XP is present worldwide, with a higher incidence in areas where consanguinity is frequent. Because the condition is extremely rare, there is limited literature on XP and its associated complications. Here, the clinico-pathological experience (spectrum of clinical presentation, histopathological findings of malignant skin lesions, and progression) of managing eight cases of XP is presented. Methodology: A retrospective study was conducted in a pediatric tertiary care hospital in eastern India during a ten-year period from 2013 to 2022. A clinical diagnosis was made based on severe sunburn or premature photo-ageing and/or the onset of cutaneous malignancies at an early age (first decade) against a background of consanguinity and an autosomal recessive inheritance pattern in the family. Results: The mean age of presentation was 1.2 years (range 7 months to 3 years), and three children presented during infancy. The male-to-female ratio was 5:3, and all were born of consanguineous marriages. They presented with dermatological manifestations (100%), followed by ophthalmic (75%) and/or neurological symptoms (25%). Patients had normal skin at birth but soon developed extreme sensitivity to UVR in the form of exaggerated sun tanning, burning, and blistering on minimal sun exposure, followed by abnormal skin pigmentation such as freckles and lentiginosis. Subsequently, over time there was progressive xerosis, atrophy, wrinkling, and poikiloderma. Six patients had varying degrees of ocular involvement, while three of them had severe manifestations, including madarosis, tylosis, ectropion, lagophthalmos, phthisis bulbi, clouding and scarring of the cornea with complete or partial loss of vision, and ophthalmic malignancies. Half of the cases (n=4) had skin and ocular pre-malignant (actinic keratosis) and malignant lesions, including melanoma and non-melanoma skin cancer (NMSC) such as squamous cell carcinoma (SCC) and basal cell carcinoma (BCC), in early childhood. One patient had multiple malignancies simultaneously (SCC, BCC, and melanoma). Subnormal intelligence was noted as a neurological feature, and none had sensorineural hearing loss, microcephaly, neuroregression, or neurodeficit. All the patients have been managed by a multidisciplinary team of pediatricians, dermatologists, ophthalmologists, neurologists, and psychiatrists. Conclusion: To date there is no complete cure for XP, and the disease is ultimately fatal. However, increased awareness, early diagnosis followed by persistent, rigorous protection from UVR, and regular screening for early detection of malignancies, along with psychological support, can drastically improve patients' quality of life and life expectancy. Further research is required to formulate the optimal management of XP, specifically the role and possibilities of gene therapy.
Keywords: childhood malignancies, dermato-pathological findings, eastern India, Xeroderma pigmentosa
64 Older Consumer’s Willingness to Trust Social Media Advertising: A Case of Australian Social Media Users
Authors: Simon J. Wilde, David M. Herold, Michael J. Bryant
Abstract:
Social media networks have become the hotbed for advertising activities, due mainly to their increasing consumer/user base and to the ability of marketers to accurately measure ad exposure and consumer-based insights on such networks. More than half of the world’s population now uses social media (4.8 billion users, or 60%), with 150 million new users having come online within the 12 months to June 2022. As the use of social media networks grows, key business strategies for interacting with these potential customers have matured, especially social media advertising. Unlike traditional media outlets, social media advertising is highly interactive and specific to the digital channel. Social media advertisements are clearly targetable, providing marketers with an extremely powerful marketing tool. Yet despite the measurable benefits afforded to businesses engaged in social media advertising, recent controversies (such as the relationship between Facebook and Cambridge Analytica in 2018) have only heightened the role trust and privacy play within these social media networks. Using a web-based quantitative survey instrument, survey participants were recruited via a reputable online panel survey site. Respondents represented social media users from all states and territories within Australia. Completed responses were received from a total of 258 social media users. Survey respondents represented all core age demographic groupings, including Gen Z/Millennials (18-45 years = 60.5% of respondents) and Gen X/Boomers (46-66+ years = 39.5% of respondents). An adapted ADTRUST scale, using 20 items on a 7-point Likert scale, measured trust in social media advertising. The ADTRUST scale has been shown to be a valid measure of trust in advertising within traditional media, such as broadcast and print media, and, more recently, the Internet (as a broader platform). The adapted scale was validated through exploratory factor analysis (EFA), resulting in a three-factor solution. These three factors were named reliability; usefulness and affect; and willingness to rely on. Factor scores (weighted measures) were then calculated for these factors. Factor scores are estimates of the scores survey participants would have received on each of the factors had they been measured directly, with the following results recorded (Reliability = 4.68/7; Usefulness and Affect = 4.53/7; and Willingness to Rely On = 3.94/7). Further statistical analysis (independent samples t-test) determined the difference in factor scores between the two age groups when age (Gen Z/Millennials vs. Gen X/Boomers) was utilized as the independent, categorical variable. The results showed the difference in mean scores across all three factors to be statistically significant (p<0.05) for these two core age groupings: (1) Gen Z/Millennials Reliability = 4.90/7 vs. Gen X/Boomers Reliability = 4.34/7; (2) Gen Z/Millennials Usefulness and Affect = 4.85/7 vs Gen X/Boomers Usefulness and Affect = 4.05/7; and (3) Gen Z/Millennials Willingness to Rely On = 4.53/7 vs Gen X/Boomers Willingness to Rely On = 3.03/7. The results clearly indicate that older social media users lack trust in the quality of information conveyed in social media ads when compared to younger, more social media-savvy consumers. This is especially evident with respect to Factor 3 (Willingness to Rely On), whose underlying variables reflect one’s behavioral intent to act based on the information conveyed in advertising.
These findings can be useful to marketers, advertisers, and brand managers in that the results highlight a critical need to design ‘authentic’ advertisements on social media sites to better connect with these older users in an attempt to foster positive behavioral responses from within this large demographic group – whose engagement with social media sites continues to increase year on year.Keywords: social media advertising, trust, older consumers, internet studies
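As a minimal illustration of the reported group comparison, the sketch below runs an independent-samples t-test on factor scores for the two age groupings. The score arrays are invented placeholders, not the study's data; only the analysis pattern follows the abstract.

    # Minimal sketch of the reported comparison: an independent-samples t-test on
    # factor scores by age group. The arrays below are made-up placeholder scores.
    import numpy as np
    from scipy import stats

    gen_z_millennials = np.array([4.9, 4.4, 5.1, 4.7, 4.6])  # hypothetical "Willingness to Rely On" scores
    gen_x_boomers     = np.array([3.1, 2.8, 3.4, 2.9, 3.0])

    t_stat, p_value = stats.ttest_ind(gen_z_millennials, gen_x_boomers, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would indicate a significant group difference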
63 Effective Emergency Response and Disaster Prevention: A Decision Support System for Urban Critical Infrastructure Management
Authors: M. Shahab Uddin, Pennung Warnitchai
Abstract:
Currently, more than half of the world’s population lives in cities, and the number and sizes of cities are growing faster than ever. Cities rely on the effective functioning of complex and interdependent critical infrastructure networks to provide public services, enhance the quality of life, and protect the community from hazards and disasters. At the same time, the complex connectivity and interdependency among urban critical infrastructures bring management challenges and make the urban system prone to the domino effect. Unplanned rapid growth, increased connectivity and interdependency among the infrastructures, resource scarcity, and many other socio-political factors are affecting the typical state of an urban system and making it susceptible to numerous sorts of disruption. In addition to internal vulnerabilities, urban systems constantly face external threats from natural and manmade hazards. Cities are not just complex, interdependent systems but also make up hubs of the economy, politics, culture, education, etc. For survival and sustainability, complex urban systems in the current world need to manage their vulnerabilities and hazardous incidents more wisely and more interactively. Coordinated management in such systems offers huge potential for absorbing negative effects should some components function improperly. On the other hand, ineffective management during overall disorder caused by hazard devastation may make the system more fragile and push it towards ultimate collapse. Following this line of reasoning, the current research hypothesizes that a hazardous event starts its journey as an emergency, and the system’s internal vulnerability and response capacity determine its destination. Connectivity and interdependency among the urban critical infrastructures during this stage may transform the system’s vulnerabilities into a dynamic damaging force. An emergency may turn into a disaster in the absence of effective management; similarly, mismanagement or lack of management may lead the situation towards a catastrophe. Situation awareness and factual decision-making are the keys to winning this battle. The current research proposes a contextual decision support system for an urban critical infrastructure system integrating three different models: 1) a damage cascade model, which demonstrates damage propagation among the infrastructures through their connectivity and interdependency; 2) a restoration model, describing the dynamic restoration process of an individual infrastructure based on the facility damage state and the overall disruption in the surrounding support environment; and 3) an optimization model that ensures optimized utilization and distribution of available resources in and among the facilities. All three models are tightly connected and mutually interdependent, and together they can assess the situation and forecast the dynamic outputs of every input. Moreover, this integrated model will enable disaster managers and decision makers to check all the alternative decisions before any implementation and will support them in producing the maximum possible outputs from the available limited inputs. The proposed model will not only help reduce the extent of the damage cascade but will also ensure priority restoration and optimize resource utilization through adaptive and collaborative management. Complex systems predictably fail, but in unpredictable ways. 
System understanding, situation awareness, and factual decisions may significantly help an urban system to survive and remain sustainable.Keywords: disaster prevention, decision support system, emergency response, urban critical infrastructure system
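As a toy illustration of the damage cascade idea only (not the authors' model), the sketch below propagates damage along an invented dependency network whenever the incoming damage to an infrastructure exceeds a threshold; all node names, coupling weights and the threshold are assumptions.

    # Toy damage-cascade sketch: damage spreads along dependency links when the
    # weighted damage of a supplier exceeds a threshold. Network and values are invented.
    dependencies = {            # dependent -> list of (supplier, coupling strength)
        "hospital":  [("power", 0.8), ("water", 0.5)],
        "water":     [("power", 0.6)],
        "telecom":   [("power", 0.7)],
        "transport": [("telecom", 0.3)],
    }
    damage = {"power": 0.9, "water": 0.0, "hospital": 0.0, "telecom": 0.0, "transport": 0.0}
    THRESHOLD = 0.2

    changed = True
    while changed:                               # iterate until the cascade stabilises
        changed = False
        for node, suppliers in dependencies.items():
            incoming = max((damage[s] * w for s, w in suppliers), default=0.0)
            if incoming > THRESHOLD and incoming > damage[node]:
                damage[node] = incoming
                changed = True

    print(damage)   # power failure cascades to water, telecom and hospital, but not transport, in this toy network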
62 Advocating for Indigenous Music in Latin American Music Education
Authors: Francisco Luis Reyes
Abstract:
European colonization had a profound impact on Latin America. The influence of the old continent can be perceived in the culture, religion, and language of the region as well as in the beliefs and attitudes of the population. Music education is not an exception to this phenomenon. With Europeans controlling cultural life and erecting educational institutions across the continent for several centuries, Western European Art Music (WEAM) has polarized music learning in formal spaces. In contrast, the musics from the indigenous population, the African slaves, and the ones that emerged as a result of the cultural mélanges have largely been excluded from primary and secondary schooling. The purpose of this paper is to suggest the inclusion of indigenous music in primary and secondary music education. The paper employs a philosophical inquiry in order to achieve this aim. Philosophical inquiry seeks to uncover and examine individuals' unconscious beliefs, principles, values, and assumptions to envision potential possibilities. This involves identifying and describing issues within current music teaching and learning practices. High-quality philosophical research tackles problems that are sufficiently narrow (addressing a specific aspect of a single complex topic), realistic (reflecting the experiences of music education), and significant (addressing a widespread and timely issue). Consequently, this methodological approach fits this topic, as the research addresses the omnipresence of WEAM in Latin American music education and the exclusion of indigenous music, and argues for the transformational impact said artistic expressions can have on practices in the region. The paper initially addresses how WEAM became ubiquitous in the region by recounting historical events and addressing the issues other types of music face when entering higher education. According to Shifres and Rosabal-Coto (2017), Latin America still upholds the musical heritage of its colonial period, and its formal music education institutions promote the European ontology instilled during European expansion. In accordance with this, the work of Reyes and Lorenzo-Quiles (2024) and of Soler, Lorenzo-Quiles, and Hargreaves (2014) demonstrates how music institutions in the region uphold foreign narratives. Their studies show that music programs in Puerto Rico and Colombia instruct students in WEAM and require skills in said art form to enter the profession, as other authors have also argued (Cain & Walden, 2019; Walden, 2016). Subsequently, the research explains the issues faced by prospective music educators who do not practice WEAM. Roberts (1991a, 1991b, 1993) and Green (2012) have found that music education students who do not adhere to the musical culture of their institution are less likely to finish their degrees. Hence, practitioners of traditional musics might feel out of place in this environment. The ubiquity of WEAM and the exclusion of the traditional musics of the region provide the primary challenges to the inclusion of indigenous musics in formal spaces in primary and secondary education. The presentation then lays out the framework for the inclusion of indigenous music and concludes by offering examples of how musical expressions from the continent can improve the music education practices of the region. 
In closing, the article highlights the benefits these musics offer that are lacking in current practices.Keywords: indigenous music education, postmodern music education, decolonization in music education, music education practice, Latin American music education
61 Influence of Thermal Annealing on Phase Composition and Structure of Quartz-Sericite Mineral
Authors: Atabaev I. G., Fayziev Sh. A., Irmatova Sh. K.
Abstract:
Raw materials with a high content of potassium oxide are widely used in ceramic technology to prevent or decrease deformation of ceramic goods during the drying process and under thermal annealing. Because of its low melting temperature, such material is also used to decrease the temperature of thermal annealing during the fabrication of ceramic goods [1,2]. So-called "porcelain or China stones", quartz-sericite (muscovite) minerals (SiO2 + KAl2[AlSi3O10](OH)2), can also be used to prevent deformation, as the content of potassium oxide in muscovite is rather high [3]. To estimate the possibility of using this mineral for ceramic manufacture, the influence of thermal processing on the phase and chemical content of this raw material is investigated in the presented article. As for other ceramic raw materials (kaolin, white-burning clays), the basic requirements of the industry for the quality of a "porcelain stone" are the following: small particle size, relatively high uniformity of distribution of components and phases, white colour after burning, and a small content of colorant oxides or chromophores (Fe2O3, FeO, TiO2, etc.) [4,5]. In the presented work, a natural mineral from the Boynaksay deposit (Uzbekistan) is investigated. The samples were mechanically polished for investigation by scanning electron microscopy. Powder with a particle size up to 63 μm was used for X-ray diffractometry and chemical analysis. The annealing of samples was performed at 900, 1120 and 1350 °C for 1 hour. The chemical composition of the Boynaksay raw material according to chemical analysis is presented in Table 1; for comparison, the compositions of raw materials from Russia and the USA are also presented. In the Boynaksay quartz-sericite, the average proportions of quartz and sericite are 55-60% and 30-35%, respectively. The distribution of the quartz and sericite phases in the raw material was investigated using a «JEOL» JXA-8800R electron probe microanalyser. Figure 1 presents scanning electron microscope (SEM) micrographs of the surface and the distributions of Al, Si and K atoms in the sample. As can be seen, the fine-grained, white and dense mineral includes quartz, sericite and a small content of impurity minerals. The quartz crystals mostly have sizes from 80 up to 500 μm. Between the quartz crystals, sericite inclusions having a tabular form with a radiating structure are located. The size of the sericite crystals is ~40-250 μm. Using data on interplanar distances [6,7] and ASTM powder X-ray diffraction data, it is shown that the natural "porcelain stone" quartz-sericite consists of quartz (SiO2), sericite of muscovite type (KAl2[AlSi3O10](OH)2) and kaolinite (Al2O3·2SiO2·2H2O) (see Figure 2 and Table 2). As can be seen in Figure 3 and Table 3a, after annealing at 900 °C the quartz-sericite contains quartz (SiO2) and muscovite (KAl2[AlSi3O10](OH)2); the peaks related to kaolinite are absent. After annealing at 1120 °C, the full disintegration of muscovite and the formation of a mullite phase (3Al2O3·2SiO2) is observed (weak mullite peaks appear in Figure 3b and Table 3b). After annealing at 1350 °C, the samples contain crystalline phases of quartz and mullite (Figure 3c and Table 3c). Mullite is well known to give ceramics high density and abrasive and chemical stability. Thus, the obtained experimental data on the formation of various phases during thermal annealing can be used for the development of fabrication technology for advanced materials. 
Conclusion: The influence of thermal annealing in the interval 900-1350 °C on the phase composition and structure of the quartz-sericite mineral is investigated. It is shown that during annealing the phase content of the raw material changes. After annealing at 1350 °C, the samples contain crystalline phases of quartz and mullite (which gives ceramics high density and abrasive and chemical stability).Keywords: quartz-sericite, kaolinite, mullite, thermal processing
60 Remote BioMonitoring of Mothers and Newborns for Temperature Surveillance Using a Smart Wearable Sensor: Techno-Feasibility Study and Clinical Trial in Southern India
Authors: Prem K. Mony, Bharadwaj Amrutur, Prashanth Thankachan, Swarnarekha Bhat, Suman Rao, Maryann Washington, Annamma Thomas, N. Sheela, Hiteshwar Rao, Sumi Antony
Abstract:
The disease burden among mothers and newborns is caused mostly by a handful of avoidable conditions occurring around the time of childbirth and within the first month following delivery. Real-time monitoring of vital parameters of mothers and neonates offers a potential opportunity to impact access as well as the quality of care in vulnerable populations. We describe the design, development and testing of an innovative wearable device for remote biomonitoring (RBM) of body temperatures in mothers and neonates in a hospital in southern India. The architecture consists of: [1] a low-cost, wearable sensor tag; [2] a gateway device for a ‘real-time’ communication link; [3] piggy-backing on a commercial GSM communication network; and [4] an algorithm-based data analytics system. Requirements for the device were: long battery life of up to 28 days (with a sampling frequency of 5/hr); robustness; IP 68 hermetic sealing; and human-centric design. We undertook pre-clinical laboratory testing followed by clinical trial phases I & IIa for evaluation of safety and efficacy in the following sequence: seven healthy adult volunteers; 18 healthy mothers; and three sets of babies – 3 healthy babies; 10 stable babies in the Neonatal Intensive Care Unit (NICU) and 1 baby with hypoxic ischaemic encephalopathy (HIE). The pebble-design sensor, about the thickness of three coins and weighing about 8 g, was secured onto the abdomen for the babies and over the upper arm for the adults. In the laboratory setting, the response time of the sensor device to attain thermal equilibrium with the surroundings was 4 minutes vis-a-vis 3 minutes observed with a precision-grade digital thermometer used as a reference standard. The accuracy was ±0.1°C of the reference standard within the temperature range of 25-40°C. The adult volunteers, aged 20 to 45 years, contributed a total of 345 hours of readings over a 7-day period, and the postnatal mothers provided a total of 403 paired readings. The mean skin temperatures measured in the adults by the sensor were about 2°C lower than the axillary temperature readings (sensor = 34.1 vs digital = 36.1); this difference was statistically significant (t-test = 13.8; p<0.001). The healthy neonates provided a total of 39 paired readings; the mean difference in temperature was 0.13°C (sensor = 36.9 vs digital = 36.7; p=0.2). The neonates in the NICU provided a total of 130 paired readings. Their mean skin temperature measured by the sensor was 0.6°C lower than that measured by the radiant warmer probe (sensor = 35.9 vs warmer probe = 36.5; p < 0.001). The neonate with HIE provided a total of 25 paired readings, with the mean sensor reading being no different from the radiant warmer probe reading (sensor = 33.5 vs warmer probe = 33.5; p=0.8). No major adverse events were noted in either the adults or the neonates; four adult volunteers reported mild sweating under the device/arm band, and one volunteer developed a mild skin allergy. This proof-of-concept study shows that real-time monitoring of temperatures is technically feasible and that this innovation appears to be promising in terms of both safety and accuracy (with appropriate calibration) for improved maternal and neonatal health.Keywords: public health, remote biomonitoring, temperature surveillance, wearable sensors, mothers and newborns
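A minimal sketch of the paired sensor-versus-reference comparison reported above is given below; the temperature arrays are fabricated placeholders, and only the analysis pattern (mean difference plus a paired t-test) mirrors the described evaluation.

    # Sketch of a paired sensor-vs-reference temperature comparison; readings are invented.
    import numpy as np
    from scipy import stats

    sensor_temp    = np.array([36.8, 36.9, 37.0, 36.7, 36.9])  # hypothetical skin-sensor readings (°C)
    reference_temp = np.array([36.7, 36.8, 36.8, 36.6, 36.7])  # hypothetical digital/axillary readings (°C)

    diff = sensor_temp - reference_temp
    t_stat, p_value = stats.ttest_rel(sensor_temp, reference_temp)
    print(f"mean difference = {diff.mean():.2f} °C, t = {t_stat:.2f}, p = {p_value:.3f}")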
59 Fe Modified Tin Oxide Thin Film Based Matrix for Reagentless Uric Acid Biosensing
Authors: Kashima Arora, Monika Tomar, Vinay Gupta
Abstract:
Biosensors have found potential applications ranging from environmental testing and biowarfare agent detection to clinical testing, health care, and cell analysis. This is driven in part by the desire to decrease the cost of health care and to obtain precise information more quickly about the health status of patients through the development of various biosensors, which have become increasingly prevalent in clinical testing and point-of-care testing for a wide range of biological elements. Uric acid is an important byproduct in the human body, and a number of pathological disorders are related to its high concentration. In the past few years, rapid growth in the development of new materials and improvements in sensing techniques have led to the evolution of advanced biosensors. In this context, metal oxide thin-film-based matrices, due to their biocompatible nature, strong adsorption ability, high isoelectric point (IEP) and abundance in nature, have become the materials of choice for recent technological advances in biotechnology. Wide band-gap metal oxide semiconductors, including ZnO, SnO₂ and CeO₂, have gained much attention as matrices for the immobilization of various biomolecules. Despite having multifunctional properties for a broad range of applications, including transparent electronics, gas sensors, acoustic devices, UV photodetectors, etc., tin oxide (SnO₂), a wide band gap semiconductor (Eg = 3.87 eV), has not been explored much for biosensing purposes. To realize a high-performance miniaturized biomolecular electronic device, the rf sputtering technique is considered the most promising for the reproducible growth of good-quality thin films, controlled surface morphology and the desired film crystallization with improved electron transfer properties. Recently, iron oxide and its composites have been widely used as matrices for biosensing applications, exploiting the electron communication feature of Fe for the detection of various analytes such as urea, hemoglobin, glucose, phenol, L-lactate, H₂O₂, etc. However, to the authors’ knowledge, no work has been reported on modifying the electronic properties of SnO₂ by implanting a suitable metal (Fe) to induce a redox couple in it and utilizing it for the reagentless detection of uric acid. In the present study, an Fe-implanted SnO₂-based matrix has been utilized for a reagentless uric acid biosensor. Implantation of Fe into the SnO₂ matrix is confirmed by energy-dispersive X-ray spectroscopy (EDX) analysis. Electrochemical techniques have been used to study the response characteristics of the Fe-modified SnO₂ matrix before and after uricase immobilization. The developed uric acid biosensor exhibits a high sensitivity of about 0.21 mA/mM and a linear variation in current response over the concentration range from 0.05 to 1.0 mM of uric acid, besides a long shelf life (~20 weeks). The Michaelis-Menten kinetic parameter (Km) is found to be relatively very low (0.23 mM), which indicates the high affinity of the fabricated bioelectrode towards uric acid (the analyte). Also, other interferents present in human serum have a negligible effect on the performance of the biosensor. Hence, the obtained results highlight the importance of the implanted Fe:SnO₂ thin film as an attractive matrix for the realization of reagentless uric acid biosensors.Keywords: Fe implanted tin oxide, reagentless uric acid biosensor, rf sputtering, thin film
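As a hedged illustration of how a sensitivity and a Michaelis-Menten constant might be extracted from an amperometric calibration curve, the sketch below fits a standard Michaelis-Menten response to invented data; the concentrations, currents and initial guesses are assumptions, not the published measurements.

    # Fit an invented calibration curve to the Michaelis-Menten form I = Imax*C/(Km + C)
    # and estimate the sensitivity from the initial slope; data are placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    conc_mM    = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])            # uric acid concentration (mM)
    current_mA = np.array([0.04, 0.07, 0.11, 0.15, 0.17, 0.185, 0.195])    # hypothetical response (mA)

    def michaelis_menten(c, i_max, km):
        return i_max * c / (km + c)

    (i_max, km), _ = curve_fit(michaelis_menten, conc_mM, current_mA, p0=[0.2, 0.2])
    sensitivity = np.polyfit(conc_mM[:4], current_mA[:4], 1)[0]            # initial slope, mA/mM
    print(f"Imax = {i_max:.3f} mA, Km = {km:.3f} mM, sensitivity ~ {sensitivity:.2f} mA/mM")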
58 Evaluating Viability of Using South African Forestry Process Biomass Waste Mixtures as an Alternative Pyrolysis Feedstock in the Production of Bio Oil
Authors: Thembelihle Portia Lubisi, Malusi Ntandoyenkosi Mkhize, Jonas Kalebe Johakimu
Abstract:
Fertilizers play an important role in maintaining the productivity and quality of plants. Inorganic fertilizers (containing nitrogen, phosphorus, and potassium) are largely used in South Africa as they are considered inexpensive and highly productive. When applied, a portion of the excess fertilizer is retained in the soil, while another portion enters water streams due to surface runoff or the irrigation system adopted. Excess nutrients from the fertilizers entering water streams eventually result in harmful algal blooms (HABs) in freshwater systems, which not only disrupt wildlife but can also produce toxins harmful to humans. The use of agro-chemicals such as pesticides and herbicides has been associated with increased antimicrobial resistance (AMR) in humans, as the plants are consumed by humans. This bacterial resistance poses a threat, as it prevents the health sector from being able to treat infectious diseases. Archaeological studies have found that pyrolysis liquids were already used in the time of the Neanderthal as a biocide and plant protection product. Pyrolysis is the thermal degradation of plant biomass or organic material under anaerobic conditions, leading to the production of char, bio-oil and syngas. Bio-oil constituents can be categorized as a water-soluble fraction (wood vinegar) and water-insoluble fractions (tar and light oils). Wood vinegar (pyroligneous acid) is said to contain highly oxygenated compounds, including acids, alcohols, aldehydes, ketones, phenols, esters, furans, and other multifunctional compounds with various molecular weights and compositions depending on the biomass material it is derived from and the pyrolysis operating conditions. Various researchers have found wood vinegar to be efficient in the eradication of termites, effective in plant protection and plant growth, antibacterial, and effective in inhibiting micro-organisms such as Candida yeasts, E. coli, etc. This study investigates the characterisation of South African forestry product processing waste with the intention of evaluating the potential of using the respective biomass wastes as feedstock for bio-oil production via the pyrolysis process. Using biomass waste materials in the production of wood vinegar has the advantage that it not only allows for a reduction of environmental pollution and landfill requirements, but it also does not negatively affect food security. The biomass wastes investigated were from the popular tree types in KZN, which are pine sawdust (PSD), pine bark (PB), eucalyptus sawdust (ESD) and eucalyptus bark (EB). Furthermore, the research investigates the possibility of mixing the different wastes, with the aim of lessening the cost of raw material separation prior to feeding into the pyrolysis process; mixing also increases the amount of biomass material available for beneficiation. Two mixtures were considered: a 50/50 mixture of PSD and ESD (EPSD) and a mixture containing pine sawdust, eucalyptus sawdust, pine bark and eucalyptus bark (EPSDB). Characterisation of the biomass wastes will include proximate analysis (volatiles, ash, fixed carbon), ultimate analysis (carbon, hydrogen, nitrogen, oxygen, sulphur), higher heating value, structural analysis (cellulose, hemicellulose and lignin) and thermogravimetric analysis.Keywords: characterisation, biomass waste, saw dust, wood waste
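To illustrate how the planned ultimate analysis can be linked to heating value, the sketch below applies a Dulong-type empirical correlation; this is only one of several correlations in the literature, and the wood-like composition used here is invented rather than taken from this study.

    # Dulong-type estimate of higher heating value (MJ/kg) from ultimate analysis;
    # correlation choice and composition are assumptions, not the study's method or data.
    def hhv_dulong(c, h, o, s):
        """HHV (MJ/kg) from mass fractions in percent, dry basis."""
        return 0.3383 * c + 1.443 * (h - o / 8.0) + 0.0942 * s

    # Hypothetical pine-sawdust-like composition: C 50%, H 6%, O 43%, S 0.05%
    print(round(hhv_dulong(50.0, 6.0, 43.0, 0.05), 1), "MJ/kg")  # ~18 MJ/kg, typical of woody biomass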
57 Shakespeare's Hamlet in Ballet: Transformation of an Archival Recording of a Neoclassical Ballet Performance into a Contemporary Transmodern Dance Video Applying Postmodern Concepts and Techniques
Authors: Svebor Secak
Abstract:
This four-year artistic research project hosted by the University of New England, Australia has set the goal to experiment with non-conventional ways of presenting a language-based narrative in dance using insights of recent theoretical writing on performance, addressing the research question: How to transform an archival recording of a neoclassical ballet performance into a new artistic dance video by implementing postmodern philosophical concepts? The Creative Practice component takes the form of a dance video Hamlet Revisited which is a reworking of the archival recording of the neoclassical ballet Hamlet, augmented by new material, produced using resources, technicians and dancers of the Croatian National Theatre in Zagreb. The methodology for the creation of Hamlet Revisited consisted of extensive field and desk research after which three dancers were shown the recording of original Hamlet and then created their artistic response to it based on their reception and appreciation of it. The dancers responded differently, based upon their diverse dancing backgrounds and life experiences. They began in the role of the audience observing video of the original ballet and transformed into the role of the choreographer-performer. Their newly recorded material was edited and juxtaposed with the archival recording of Hamlet and other relevant footage, allowing for postmodern features such as aleatoric content, synchronicity, eclecticism and serendipity, that way establishing communication on a receptive reader-response basis, thus blending the roles of the choreographer, performer and spectator, creating an original work of art whose significance lies in the relationship and communication between styles, old and new choreographic approaches, artists and audiences and the transformation of their traditional roles and relationships. In editing and collating, the following techniques were used with the intention to avoid the singular narrative: fragmentation, repetition, reverse-motion, multiplication of images, split screen, overlaying X-rays, image scratching, slow-motion, freeze-frame and simultaneity. Key postmodern concepts considered were: deconstruction, diffuse authorship, supplementation, simulacrum, self-reflexivity, questioning the role of the author, intertextuality and incredulity toward grand narratives - departing from the original story, thus personalising its ontological themes. From a broad brush of diverse concepts and techniques applied in an almost prescriptive manner, the project focuses on intertextuality that proves to be valid on at least two levels. The first is the possibility of a more objective analysis in combination with a semiotic structuralist approach moving from strict relationships between signs to a multiplication of signifiers, considering the dance text as an open construction, containing the elusive and enigmatic quality of art that leaves the interpretive position open. The second one is the creation of the new work where the author functions as the editor, aware and conscious of the interplay of disparate texts and their sources which co-act in the mind during the creative process. It is argued here that the eclectic combination of the old and new material through constant oscillations of different discourses upon the same topic resulted in a transmodern integrationist recent work of art that might be applied as a model for reconsidering existing choreographic creations.Keywords: Ballet Hamlet, intertextuality, transformation, transmodern dance video
56 Production of Insulin Analogue SCI-57 by Transient Expression in Nicotiana benthamiana
Authors: Adriana Muñoz-Talavera, Ana Rosa Rincón-Sánchez, Abraham Escobedo-Moratilla, María Cristina Islas-Carbajal, Miguel Ángel Gómez-Lim
Abstract:
Rising rates of diabetes incidence and prevalence worldwide will increase the number of diabetic patients requiring insulin or insulin analogues, and current production systems would then not be sufficient to meet future market demands. Therefore, efficient expression systems for insulin and insulin analogues need to be developed. In addition, insulin analogues with better pharmacokinetic and pharmacodynamic properties and without mitogenic potential will be required. SCI-57 (single-chain insulin-57) is an insulin analogue having 10 times greater affinity for the insulin receptor and higher resistance to thermal degradation than insulin, with native mitogenicity and biological effect. Plants have been used as expression platforms to produce recombinant proteins because of their advantages, such as cost-effectiveness, posttranslational modifications, absence of human pathogens and high quality. Immunoglobulin production with a yield of 50% has been achieved by transient expression in Nicotiana benthamiana (Nb). The aim of this study is to produce SCI-57 by transient expression in Nb. Methodology: The DNA sequence encoding SCI-57 was cloned in pICH31070. This construct was introduced into Agrobacterium tumefaciens by electroporation, and the resulting strain was used to infiltrate leaves of Nb. In order to isolate SCI-57, leaves from transformed plants were incubated for 3 hours with the extraction buffer and then filtered to remove solid material. The resultant protein solution was subjected to anion exchange chromatography on an FPLC system and to ultrafiltration to purify SCI-57. Detection of SCI-57 was made by its electrophoresis pattern (SDS-PAGE). The protein band was digested with trypsin, and the peptides were analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). A purified protein sample (20 µM) was analyzed by ESI-Q-TOF-MS to obtain the ionization pattern and determine the exact molecular weight. The chromatography pattern and impurity detection were obtained by RP-HPLC, using recombinant insulin as the standard. The identity of SCI-57 was confirmed by anti-insulin ELISA. The total soluble protein concentration was quantified by the Bradford assay. Results: The expression cassette was verified by restriction mapping (5393 bp fragment). SDS-PAGE of the crude leaf extract (CLE) of transformed plants revealed a protein of about 6.4 kDa that was not present in the CLE of untransformed plants. The LC-MS/MS results displayed one peptide with a high score that matches the SCI-57 amino acid sequence in the sample, confirming the identity of SCI-57. For the purified SCI-57 sample (PSCI-57), the most intense charge state in the ionization pattern was at 1069 m/z (+6), corresponding to the molecular weight of SCI-57 (6412.6554 Da). The RP-HPLC of the PSCI-57 shows the presence of a peak with a retention time (rt) and UV spectroscopic profile similar to those of the insulin standard (SCI-57 rt = 12.96 min and insulin rt = 12.70 min). The collected SCI-57 peak gave an ELISA signal. The total protein amount in the CLE from transformed plants was higher compared to untransformed plants. Conclusions: Our results suggest the feasibility of producing the insulin analogue SCI-57 by transient expression in Nicotiana benthamiana. Further work is being undertaken to evaluate the biological activity through glucose uptake by insulin-sensitive and insulin-resistant murine and human cultured adipocytes.Keywords: insulin analogue, mass spectrometry, Nicotiana benthamiana, transient expression
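The reported ESI-Q-TOF observation can be sanity-checked with the standard relation between molecular mass, charge state and m/z; the short sketch below uses the values quoted in the abstract (6412.6554 Da, +6 charge state) together with the standard proton mass.

    # m/z of an [M + zH]z+ ion; mass and charge state are taken from the abstract.
    PROTON_MASS = 1.00728  # Da

    def esi_mz(molecular_mass_da, charge):
        return (molecular_mass_da + charge * PROTON_MASS) / charge

    print(round(esi_mz(6412.6554, 6), 1))  # ~1069.8, consistent with the reported ~1069 m/z (+6) peak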
55 Evaluation of Antimicrobial Properties of Lactic Acid Bacteria of Enterococcus Genus
Authors: Kristina Karapetyan, Flora Tkhruni, Tsovinar Balabekyan, Arevik Israyelyan, Tatyana Khachatryan
Abstract:
The ability of lactic acid bacteria (LAB) to prevent and cure a variety of diseases and their protective role against infections and colonization by pathogenic microorganisms in the digestive tract have led to the coining of the term probiotics, or pro-life. LAB inhibit the growth of pathogenic and food-spoilage microorganisms, maintaining the nutritive quality and improving the shelf life of foods. They have also been used as flavour and texture producers. Enterococcus strains have been used for the treatment of conditions such as diarrhea, including diarrhea caused by antibiotic treatments, inflammatory pathologies that affect the colon, such as irritable bowel syndrome, and for immune regulation. Studies on obtaining proteinaceous antibiotics on the basis of probiotic LAB, and on their biological properties, have shown that bacteriocins, metabiotics, and peptides of LAB are bactericides with a broad range of activity and are excellent candidates for the development of new prophylactic and therapeutic substances to complement or replace conventional antibiotics. The LAB strains were genotyped by 16S rRNA sequencing. Cell-free culture (CFC) broth was purified by gel filtration on Sephadex G-25 Superfine resin. Antimicrobial activity was determined by the spot-on-lawn method and expressed in arbitrary units (AU/ml). The diversity of multidrug resistance (MDR) of pathogenic strains to the antibiotics most widely used for the treatment of human diseases in the Republics of Armenia and Nagorno-Karabakh was examined. It was shown that the resistance of pathogens to antibiotics differs depending on their isolation sources. The influence of partially purified antimicrobial preparations (AMP), obtained from different strains of the genus Enterococcus, on the growth of MDR pathogenic bacteria was investigated. It was shown that bacteriocin-containing, partially purified preparations obtained from different strains of the Enterococcus faecium and Enterococcus durans species possess bactericidal or bacteriostatic activity against antibiotic-resistant intestinal, spoilage and food-borne pathogens such as Listeria monocytogenes, Staphylococcus aureus, E. coli, and Salmonella. Endemic strains of LAB isolated from matsoni made from donkey, buffalo and goat milk showed a broad spectrum of activity against food-spoiling microorganisms, moulds and fungi, such as Salmonella sp., Escherichia coli, and Aspergillus and Penicillium species. The highest activity against MDR pathogens was shown by bacteria isolated from goat milk products. The investigated strains of the genus Enterococcus, isolated from samples of matsun from different regions of Nagorno-Karabakh (NKR), showed high resistance to the antibiotics. The obtained data thus demonstrate the high stability of the different investigated strains of the genus Enterococcus. The high genetic diversity in the Enterococcus group suggests adaptations through specific mutations in different environments. Thus, endemic strains of LAB are able to produce bacteriocins with high and differing inhibitory activity against a broad spectrum of microorganisms isolated from different sources and belonging to different taxonomic groups. The prospect of using certain antimicrobial preparations against pathogenic strains is obvious. 
These AMPs can be applied for long-term use against antibiotic-resistant pathogens of different etiologies for the prevention or treatment of infectious diseases, as an alternative to antibiotics.Keywords: antimicrobial biopreparation, endemic lactic acid bacteria, intra-species diversity, multidrug resistance of pathogens
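The arbitrary units (AU/ml) mentioned above are commonly derived from the spot-on-lawn assay as the reciprocal of the highest dilution still giving a clear inhibition zone, scaled to 1 ml of cell-free supernatant; the sketch below assumes that common definition and invented numbers, since the abstract does not give the exact calculation.

    # Common (assumed) AU/ml definition for spot-on-lawn assays; numbers are hypothetical.
    def activity_au_per_ml(highest_inhibitory_dilution_factor, spotted_volume_ul):
        return highest_inhibitory_dilution_factor * (1000.0 / spotted_volume_ul)

    # Hypothetical example: 10 µl spots, inhibition still visible at a 1:16 dilution
    print(activity_au_per_ml(16, 10.0), "AU/ml")  # -> 1600.0 AU/ml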
54 Clinical Efficacy of Localized Salvage Prostate Cancer Reirradiation with Proton Scanning Beam Therapy
Authors: Charles Shang, Salina Ramirez, Stephen Shang, Maria Estrada, Timothy R. Williams
Abstract:
Purpose: Over the past decade, proton therapy utilizing pencil beam scanning has emerged as a preferred treatment modality in radiation oncology, particularly for prostate cancer. This retrospective study aims to assess the clinical and radiobiological efficacy of proton scanning beam therapy in the treatment of localized salvage prostate cancer, following initial radiation therapy with a different modality. Despite the previously delivered high radiation doses, this investigation explores the potential of proton reirradiation in controlling recurrent prostate cancer and detrimental quality of life side effects. Methods and Materials: A retrospective analysis was conducted on 45 cases of locally recurrent prostate cancer that underwent salvage proton reirradiation. Patients were followed for 24.6 ± 13.1 months post-treatment. These patients had experienced an average remission of 8.5 ± 7.9 years after definitive radiotherapy for localized prostate cancer (n=41) or post-prostatectomy (n=4), followed by rising PSA levels. Recurrent disease was confirmed by FDG-PET (n=31), PSMA-PET (n=10), or positive local biopsy (n=4). Gross tumor volume (GTV) was delineated based on PET and MR imaging, with the planning target volume (PTV) expanding to an average of 10.9 cm³. Patients received proton reirradiation using two oblique coplanar beams, delivering total doses ranging from 30.06 to 60.00 GyE in 17–30 fractions. All treatments were administered using the ProBeam Compact system with CT image guidance. The International Prostate Symptom Scores (IPSS) and prostate-specific antigen (PSA) levels were evaluated to assess treatment-related toxicity and tumor control. Results and Discussions: In this cohort (mean age: 76.7 ± 7.3 years), 60% (27/45) of patients showed sustained reductions in PSA levels post-treatment, while 36% (16/45) experienced a PSA decline of more than 0.8 ng/mL. Additionally, 73% (33/45) of patients exhibited an initial PSA reduction, though some showed later PSA increases, indicating the potential presence of undetected metastatic lesions. The median post-retreatment IPSS score was 4, significantly lower than scores reported in other treatment studies. Overall, 69% of patients reported mild urinary symptoms, with 96% (43/45) experiencing mild to moderate symptoms. Three patients experienced grade I or II proctitis, while one patient reported grade III proctitis. These findings suggest that regional organs, including the urethra, bladder, and rectum, demonstrate significant radiobiological recovery from prior radiation exposure, enabling tolerance to additional proton scanning beam therapy. Conclusions: This retrospective analysis of 45 patients with recurrent localized prostate cancer treated with salvage proton reirradiation demonstrates favorable outcomes, with a median follow-up of two years. The post-retreatment IPSS scores were comparable to those reported in follow-up studies of initial radiation therapy treatments, indicating stable or improved urinary symptoms compared to the end of initial treatment. These results highlight the efficacy of proton scanning beam therapy in providing effective salvage treatment while minimizing adverse effects on critical organs. 
The findings also enhance the understanding of radiobiological responses to reirradiation and support proton therapy as a viable option for patients with recurrent localized prostate cancer following previous definitive radiation therapy.Keywords: prostate salvage radiotherapy, proton therapy, biological radiation tolerance, radiobiology of organs
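As a hedged radiobiology illustration (not part of the study's analysis), the sketch below converts a hypothetical salvage schedule lying within the reported dose and fractionation ranges to an equivalent dose in 2-Gy fractions (EQD2) using the linear-quadratic model; the schedule and the alpha/beta values are commonly quoted literature assumptions, not values given in the abstract.

    # EQD2 via the linear-quadratic model; schedule and alpha/beta values are assumptions.
    def eqd2(total_dose_gy, n_fractions, alpha_beta_gy):
        d = total_dose_gy / n_fractions                      # dose per fraction
        return total_dose_gy * (d + alpha_beta_gy) / (2.0 + alpha_beta_gy)

    # Hypothetical schedule within the reported range: 45 GyE in 18 fractions (2.5 GyE/fraction)
    print(round(eqd2(45.0, 18, 1.5), 1), "Gy EQD2 assuming alpha/beta ~ 1.5 Gy (prostate tumour)")
    print(round(eqd2(45.0, 18, 3.0), 1), "Gy EQD2 assuming alpha/beta ~ 3 Gy (late-responding normal tissue)")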
53 Disabled Graduate Students’ Experiences and Vision of Change for Higher Education: A Participatory Action Research Study
Authors: Emily Simone Doffing, Danielle Kohfeldt
Abstract:
Disabled students are underrepresented in graduate-level degree enrollment and completion. There is limited research on disabled students' progression during the pandemic. Disabled graduate students (DGS) face unique interpersonal and institutional barriers, yet, limited research explores these barriers, buffering facilitators, and aids to academic persistence. This study adopts an asset-based, embodied disability approach using the critical pedagogy theoretical framework instead of the deficit research approach. The Participatory Action Research (PAR) paradigm, the critical pedagogy theoretical framework, and emancipatory disability research share the same purpose -creating a socially just world through reciprocal learning. This study is one of few, if not the first, to center solely on DGS’ lived understanding using a Participatory Action Research (PAR) epistemology. With a PAR paradigm, participants and investigators work as a research team democratically at every stage of the research process. PAR has individual and systemic outcomes. PAR lessens the researcher-participant power gap and elevates a marginalized community’s knowledge as expertise for local change. PAR and critical pedagogy work toward enriching everyone involved with empowerment, civic engagement, knowledge proliferation, socio-cultural reflection, skills development, and active meaning-making. The PAR process unveils the tensions between disability and graduate school in policy and practice during the pandemic. Likewise, institutional and ideological tensions influence the PAR process. This project is recruiting 10 DGS until September through purposive and snowball sampling. DGS will collectively practice praxis during four monthly focus groups in the fall 2023 semester. Participant researchers can attend a focus group or an interview, both with field notes. September will be our orientation and first monthly meeting. It will include access needs check-ins, ice breakers, consent form review, a group agreement, PAR introduction, research ethics discussion, research goals, and potential research topics. October and November will be available for meetings for dialogues about lived experiences during our collaborative data collection. Our sessions can be semi-structured with “framing questions,” which would be revised together. Field notes include observations that cannot be captured through audio. December will focus on local social action planning and dissemination. Finally, in January, there will be a post-study focus group for students' reflections on their experiences of PAR. Iterative analysis methods include transcribed audio, reflexivity, memos, thematic coding, analytic triangulation, and member checking. This research follows qualitative rigor and quality criteria: credibility, transferability, confirmability, and psychopolitical validity. Results include potential tension points, social action, individual outcomes, and recommendations for conducting PAR. Tension points have three components: dubious practices, contestable knowledge, and conflict. The dissemination of PAR recommendations will aid and encourage researchers to conduct future PAR projects with the disabled community. Identified stakeholders will be informed of DGS’ insider knowledge to drive social sustainability.Keywords: participatory action research, graduate school, disability, higher education
52 Comparative Analysis of Learner-centred Education in Early Childhood Curriculum Policies in England and Hong Kong
Authors: Dongdong Bai
Abstract:
The curriculum is essential in determining the quality of early childhood education (ECE). Education policy is intricately linked to the effective execution of the preschool education curriculum. The learner-centred education (LCE) approach is a globally common educational concept. However, it is an approach that is applied variably in ECE policy-making and implementation across diverse cultural contexts. Notwithstanding its significance, limited study has investigated the ECE curriculum policies on the articulation and implementation of the LCE concept in England and Hong Kong’s non-profit-making kindergartens — two regions with intricate historical and cultural connections. Moreover, both regions have experienced significant transformations in ECE policy since 1997. This research employs a qualitative comparative approach, with discourse analysis of key policy documents and relevant literature as the primary methodology. The study develops a comparison framework grounded in Adamson and Morris' curriculum comparison theory, which evaluates curricula from the perspectives of purpose, focus, and manifestation. The paper is structured around three key elements: (1) educational objectives; (2) implementation guidance, including pedagogical strategies, learning content and assessment mechanism; and (3) influential cultural ideologies. Through this framework, the study explores the similarities and differences in the design and implementation of LCE within ECE policies in England and Hong Kong’s non-profit-making kindergartens, while examining the cultural factors that shape these policy variations. The findings indicate that both England and Hong Kong possess child-centered educational objectives focused on enhancing cognitive, skill-based, and physical development; however, Hong Kong's policies notably emphasize alleviating academic pressure in achieving these curriculum aims. England's recommendations advocate for play-based, and exploratory learning to augment children's cognitive development. Conversely, Hong Kong utilizes narrative techniques and indoor instruction to facilitate progressive education. Additionally, both areas encompass cognitive disciplines such as literacy and numeracy; however, England distinctly prioritizes citizenship education with an emphasis on cultural traits. In contrast, Hong Kong amalgamates Western educational ideas with an emphasis on traditional Chinese culture and values, encompassing the study of Chinese characters, etiquette, and moral education rooted in Confucian cultural ideologies. Ultimately, regarding assessment mechanisms, England has transitioned from government-led professional evaluation programs to a hybrid of market and governmental oversight. Conversely, Hong Kong's curriculum evaluation mechanism primarily consists of self-evaluation and public supervision, yet it is evident that the policy could benefit from greater receptiveness to public and expert input. The underlying cultural ideologies significantly influence these policy discrepancies. In England, ECE policies are guided by core concepts that viewing children as individuals, agents, and future citizens. In Hong Kong, the policies reflect Confucian traditions and cultural values, which shape their unique approach to ECE in Hong Kong societies. 
In conclusion, whereas both locations strive to advocate LCE for the comprehensive development of children, significant differences arise in curriculum focus and implementation policies, shaped by their respective cultural philosophies.Keywords: curriculum policy, cultural contexts, early childhood education, learner-centred education
51 Study of Objectivity, Reliability and Validity of Pedagogical Diagnostic Parameters Introduced in the Framework of a Specific Research
Authors: Emiliya Tsankova, Genoveva Zlateva, Violeta Kostadinova
Abstract:
The challenges modern education faces undoubtedly require reforms and innovations aimed at the reconceptualization of existing educational strategies, the introduction of new concepts and novel techniques and technologies related to the recasting of the aims of education, and the remodeling of the content and methodology of education, which would guarantee bringing our education into line with basic European values. Aim: The aim of the current research is the development of a didactic technology for assessing the applicability and efficacy of game techniques in pedagogic practice, calibrated to specific content and the age specificity of learners, as well as for evaluating the efficacy of such approaches for facilitating the acquisition of biological knowledge at a higher theoretical level. Results: In this research, we examine the objectivity, reliability and validity of two newly introduced diagnostic parameters for assessing the durability of acquired knowledge. A pedagogic experiment has been carried out to verify the hypothesis that the introduction of game techniques in biological education leads to an increase in the quantity, quality and durability of the knowledge acquired by students. For the purpose of monitoring the effect of the game-based pedagogical technique on the durability of the acquired knowledge, a test-based examination was administered to students from a control group (CG) and students from an experimental group (EG) on the same content after a six-month period. The analysis is based on: 1. A study of the statistical significance of the differences between the CG and EG tests applied after the six-month period, which, however, is not indicative of the presence or absence of a marked effect of the applied pedagogic technique when the entry levels of the two groups are different. 2. For a more reliable comparison, independent of the entry level of each group, another parameter, an "indicator of the efficacy of game techniques for the durability of knowledge", was used to assess the achievement results and the durability of this methodology of education. The monitoring of the studied parameters in their dynamic unfolding across different age groups of learners unquestionably reveals a positive effect of the introduction of game techniques in education with respect to the durability and permanence of acquired knowledge. Methods: In the current research, the following battery of research and diagnostic methods and techniques has been employed: theoretical analysis and synthesis; a pedagogical experiment; a questionnaire; didactic testing; and mathematical and statistical methods. The data obtained have been used for the qualitative and quantitative analysis of the results, which reflect the efficacy of the applied methodology. Conclusion: The didactic model of the parameters researched in the framework of this specific study of pedagogic diagnostics is based on a general, interdisciplinary approach. Enhanced durability of the acquired knowledge proves the transition of that knowledge from short-term memory storage into the long-term memory of pupils and students, which justifies the conclusion that didactic games have beneficial effects on the improvement of learners' cognitive skills. The innovations in teaching enhance motivation, creativity and independent cognitive activity in the process of acquiring the material taught. 
The innovative methods allow for untraditional means for assessing the level of knowledge acquisition. This makes possible the timely discovery of knowledge gaps and the introduction of compensatory techniques, which in turn leads to deeper and more durable acquisition of knowledge.Keywords: objectivity, reliability and validity of pedagogical diagnostic parameters introduced in the framework of a specific research
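The abstract does not define the formula behind its "indicator of efficacy of game techniques for the durability of knowledge", so the sketch below shows only one common way of comparing groups independently of entry level, the normalized gain g = (post - pre)/(max - pre); all scores are invented.

    # Normalized gain as one entry-level-independent comparison; not the authors' indicator.
    def normalized_gain(pre, post, max_score=100.0):
        return (post - pre) / (max_score - pre)

    control      = normalized_gain(pre=55.0, post=62.0)   # hypothetical control-group means
    experimental = normalized_gain(pre=48.0, post=71.0)   # hypothetical experimental-group means
    print(f"control g = {control:.2f}, experimental g = {experimental:.2f}")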