570 Biological Control of Karnal Bunt by Pseudomonas fluorescens
Authors: Geetika Vajpayee, Sugandha Asthana, Pratibha Kumari, Shanthy Sundaram
Abstract:
Pseudomonas species possess a variety of promising antifungal and growth-promoting activities in the wheat plant. In the present study, Pseudomonas fluorescens MTCC-9768 was tested against the plant pathogenic fungus Tilletia indica, the cause of Karnal bunt, a quarantine disease of wheat (Triticum aestivum) affecting wheat kernels. It is listed among the 1/A1 harmful diseases of wheat worldwide under EU legislation. The disease develops during the growth phase through the spread of microscopically small spores of the fungus (teliospores) dispersed by the wind. Current chemical fungicidal treatments have been reported to reduce teliospore germination, but their effectiveness is questionable since T. indica can survive up to four years in the soil. Fungal growth inhibition tests were performed using the dual culture technique, and the results showed 82.5% inhibition. The antagonistic bacterium-fungus interaction causes changes in the morphology of hyphae, which were observed using lactophenol cotton blue staining and scanning electron microscopy (SEM). Rounded and swollen hyphal ends, called 'theca', were observed in the interacted fungus as compared to the control fungus (without bacterial interaction). The bacterium was tested for antagonistic activities such as protease, cellulase, HCN, and chitinase production. Growth-promotion assays showed increased production of IAA by the bacterium. The bacterial secondary metabolites were extracted in different solvents to test their growth-inhibiting properties. Characterization and purification of the antifungal compound were done by thin layer chromatography; the Rf value was calculated (Rf = 0.54) and matched that of the standard antifungal compound 2,4-DAPG (Rf = 0.54). Further, in vivo experiments showed a significant decrease in disease severity in the wheat plant with both the direct injection method and seed treatment.
Our results indicate that the compound extracted and purified from the antagonistic bacterium, P. fluorescens MTCC-9768, may be used as a potential biocontrol agent against T. indica. The PGPR properties of the bacterium may also be utilized by incorporating it into bio-fertilizers.
Keywords: antagonism, Karnal bunt, PGPR, Pseudomonas fluorescens
Procedia PDF Downloads 406
569 Electrifying Textile Wastewater Sludge through Up-flow Anaerobic Sludge Blanket Reactor for Sustainable Waste Management
Authors: Tewodros Birhan, Tamrat Tesfaye
Abstract:
Energy supply and waste management are two of humanity's greatest challenges. The world's energy supply primarily relies on fossil fuels, which produce excessive carbon dioxide emissions when burned. When released into the atmosphere in high concentrations, these emissions contribute to global warming. Textile wastewater sludge generated by the Bahir Dar Textile Industry poses significant environmental challenges. This sludge, a byproduct of extensive dyeing and finishing processes, contains a variety of harmful chemicals and heavy metals that can contaminate soil and water resources. This research explores sustainable waste management strategies, focusing on biogas production from textile wastewater sludge using up-flow anaerobic sludge blanket reactor technology. The objective was to harness biogas, primarily methane, as a renewable energy source while mitigating the environmental impact of textile wastewater disposal. Employing a central composite design approach, experiments were designed to optimize process parameters. Two key factors, the carbon-to-nitrogen ratio and pH, were varied at different levels (20:1 and 25:1 for the C:N ratio; 6.8 and 7.6 for pH) to evaluate their influence on methane yield. A 0.4 m³ up-flow anaerobic sludge blanket reactor was constructed to facilitate the anaerobic digestion process. Over 26 days, the reactor underwent rigorous testing and monitoring to ascertain its efficiency in biogas production. Through experimentation and data analysis, the optimal conditions for maximizing methane yield were identified. Notably, a methane yield of 56.4% was attained, which signifies the effectiveness of the up-flow anaerobic sludge blanket reactor in converting textile wastewater sludge into a valuable energy resource. The findings of this study hold significant implications for both environmental conservation and energy sustainability.
Furthermore, the utilization of up-flow anaerobic sludge blanket reactor technology underscores its potential as a viable solution for biogas production from textile wastewater sludge, further promoting the transition towards a circular economy paradigm.
Keywords: anaerobic digestion, biogas energy, circular economy, textile sludge, waste-to-energy
Procedia PDF Downloads 15
568 Greek Teachers' Understandings of Typical Language Development and of Language Difficulties in Primary School Children and Their Approaches to Language Teaching
Authors: Konstantina Georgali
Abstract:
The present study explores Greek teachers' understandings of typical language development and of language difficulties. Its core aim was to highlight that teachers need a thorough understanding of educational linguistics, that is, of how language figures in education. They should also be aware of how language should be taught so as to promote language development for all students while at the same time supporting the needs of children with language difficulties in an inclusive ethos. The study thus argued that language can be a dynamic learning mechanism in the minds of all children and a powerful teaching tool in the hands of teachers. It provided current research evidence to show that structural and morphological particularities of native languages, in this case of the Greek language, can be used by teachers to enhance children's understanding of language and simultaneously improve oral language skills both for children with typical language development and for those with language difficulties. The research was based on a sequential exploratory mixed methods design deployed in three consecutive and integrative phases. The first phase involved 18 exploratory interviews with teachers. Its findings informed the second phase, a questionnaire survey with 119 respondents. Contradictory questionnaire results were further investigated in a third phase employing a formal testing procedure with 60 children attending Y1, Y2 and Y3 of primary school (a research group of 30 language-impaired children and a comparison group of 30 children with typical language development, both identified by their class teachers). Results showed both strengths and weaknesses in teachers' awareness of educational linguistics and of language difficulties.
They also provided a different perspective on children's language needs and on language teaching approaches, one that reflected current advances and conceptualizations of language problems and opened a new window on how best they can be met in an inclusive ethos. However, teachers barely used teaching approaches that could capitalize on the particularities of the Greek language to improve language skills for all students in class. Although they seemed to realize the importance of oral language skills and their knowledge base on language-related issues was adequate, their practices indicated that they did not see language as a dynamic teaching and learning mechanism that can promote children's language development and, in tandem, improve academic attainment. Important educational implications arose, along with clear indications that the findings generalize beyond the Greek educational context.
Keywords: educational linguistics, inclusive ethos, language difficulties, typical language development
Procedia PDF Downloads 383
567 Limbic Involvement in Visual Processing
Authors: Deborah Zelinsky
Abstract:
The retina filters millions of incoming signals into a smaller number of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals are for eyesight (called "image-forming" signals). However, there are other, faster signals that travel elsewhere and are not directly involved with eyesight (called "non-image-forming" signals). This article centers on the neurons of the optic nerve connecting to parts of the limbic system. Eye care providers currently look at parvocellular and magnocellular processing pathways without realizing that those are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the "Where am I" (orientation), "Where is it" (localization), and "What is it" (identification) pathways. Now, among others, there is a "How am I" (animation) and a "Who am I" (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGCs) through the retinohypothalamic tract (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retinohypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities.
These signals arise from the ipRGCs, which were discovered only about 20 years ago, and do not account for the campana retinal interneurons, discovered only two years ago. As eye care providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.
Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing
Procedia PDF Downloads 90
566 Application of Reliability Methods for the Analysis of the Stability Limit States of Large Concrete Dams
Authors: Mustapha Kamel Mihoubi, Essadik Kerkar, Abdelhamid Hebbouche
Abstract:
Given the randomness of most of the factors affecting the stability of a gravity dam, probability theory is generally used to assess the risk of failure; since the transition from a stable state to a failed state is not sharply defined, the stability failure process is treated as a probabilistic event. Controlling the risk of failure is of capital importance, drawing on a cross analysis of the gravity of the consequences and the probability of occurrence of identified major accidents, which can pose a significant risk to concrete dam structures. Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of such works, including when calculating the stability of large structures exposed to major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods, including those used in engineering; in our case, level II methods were applied via limit state analysis. The probability of failure is thus estimated by analytical methods of the FORM (First Order Reliability Method) and SORM (Second Order Reliability Method) types. By way of comparison, a level III method was also used, which generates a full analysis of the problem and involves integrating the joint probability density function of the random variables over the safety domain using Monte Carlo simulation.
Taking into account the changes in stress under the normal, exceptional, and extreme load combinations acting on the dam, the calculation results obtained provide acceptable failure probability values that largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength, especially in the presence of unique and extreme load combinations. Shear forces then induce sliding that threatens the reliability of the structure through intolerable failure probability values, especially when uplift increases under a hypothetical failure of the drainage system.
Keywords: dam, failure, limit state, Monte Carlo, reliability, probability, sliding, Taylor
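As an illustration of the level III approach described above, the sliding failure probability can be estimated by sampling the random variables and counting realizations where the limit state g = R − S falls below zero. All distributions and numerical values below (friction coefficient, cohesion, contact area, weight, uplift, shear load) are hypothetical placeholders, not the study's data; a minimal sketch:

```python
import random

def sliding_margin(friction, cohesion, area, weight, uplift, shear):
    """Limit state for sliding: g = resistance - load; g < 0 means failure."""
    resistance = friction * (weight - uplift) + cohesion * area
    return resistance - shear

def monte_carlo_pf(n_samples=100_000, seed=42):
    """Estimate P(failure) as the fraction of samples with g < 0."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        g = sliding_margin(
            friction=rng.gauss(0.70, 0.07),        # tan(phi), assumed
            cohesion=rng.gauss(300.0, 60.0),       # kPa, assumed
            area=500.0,                            # m^2, assumed contact area
            weight=120_000.0,                      # kN, assumed dam weight
            uplift=rng.gauss(40_000.0, 8_000.0),   # kN, assumed uplift force
            shear=rng.gauss(150_000.0, 30_000.0),  # kN, assumed shear load
        )
        if g < 0:
            failures += 1
    return failures / n_samples
```

Raising the mean or variance of the uplift or shear inputs reproduces the trend reported in the abstract: the estimated failure probability grows with load intensity.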
Procedia PDF Downloads 319
565 The Appropriate Number of Test Items That a Classroom-Based Reading Assessment Should Include: A Generalizability Analysis
Authors: Jui-Teng Liao
Abstract:
The selected-response (SR) format has been commonly adopted to assess academic reading in both formal and informal testing (i.e., standardized assessment and classroom assessment) because of its strengths in content validity, construct validity, and scoring objectivity and efficiency. When developing a second language (L2) reading test, researchers indicate that the longer the test (e.g., the more test items), the higher the reliability and validity it is likely to produce. However, previous studies have not provided specific guidelines regarding the optimal length of a test or the most suitable number of test items or reading passages. Additionally, reading tests often include different question types (e.g., factual, vocabulary, inferential) that require varying degrees of reading comprehension and cognitive processing. Therefore, it is important to investigate the impact of question types on the number of items in relation to the score reliability of L2 reading tests. Given the popularity of the SR question format and the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which such a question format can reliably measure learners' L2 reading comprehension. The present study therefore adopted generalizability (G) theory to investigate the score reliability of the SR format in L2 reading tests, focusing on how many test items a reading test should include. Specifically, this study aimed to investigate the interaction between question types and the number of items, providing insights into the appropriate item count for different types of questions. G theory is a comprehensive statistical framework used for estimating the score reliability of tests and validating their results. Data were collected from 108 English as a second language students who completed an English reading test comprising factual, vocabulary, and inferential questions in the SR format.
The computer program mGENOVA was utilized to analyze the data using multivariate designs (i.e., scenarios). Based on the results of the G theory analyses, the findings indicated that the number of test items had a critical impact on the score reliability of an L2 reading test. Furthermore, different types of reading questions required varying numbers of test items for reliable assessment of learners' L2 reading proficiency. Further implications for teaching practice and classroom-based assessments are discussed.
Keywords: second language reading assessment, validity and reliability, generalizability theory, academic reading, question format
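The core question above, how many items a reading test needs, can be illustrated with a decision-study projection in a simple crossed persons-by-items design, where the generalizability coefficient is Eρ² = σ²_p / (σ²_p + σ²_pi,e / n_i). The variance components below are invented for illustration and are not estimates from the study's mGENOVA analysis:

```python
def g_coefficient(var_person, var_residual, n_items):
    """Decision-study projection of the generalizability coefficient
    for a crossed persons x items design with n_items items."""
    return var_person / (var_person + var_residual / n_items)

# Hypothetical variance components: person and person-by-item residual.
var_p, var_pi = 0.25, 1.00
projections = {n: round(g_coefficient(var_p, var_pi, n), 3)
               for n in (5, 10, 20, 40)}
print(projections)  # reliability rises with item count, with diminishing returns
```

Running such projections separately for each question type (factual, vocabulary, inferential) with its own variance components is one way to see why different question types may need different item counts.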
Procedia PDF Downloads 91
564 Assessment of Seeding and Weeding Field Robot Performance
Authors: Victor Bloch, Eerikki Kaila, Reetta Palva
Abstract:
Field robots are an important tool for enhancing efficiency and decreasing the climatic impact of food production. A number of commercial field robots exist; however, since this technology is still new, the robots' advantages and limitations, as well as methods for their optimal use, are still unclear. In this study, the performance of a commercial field robot for seeding and weeding was assessed. A 2-ha research sugar beet field with 0.5 m row width was used for testing, which included robotic sowing of sugar beet and weeding five times during the first two months of the growing season. About three and five percent of the field were used as untreated and chemically weeded control areas, respectively. Plant detection was based on the exact plant location without image processing. The robot was equipped with six seeding and weeding tools, including passive between-row harrow hoes and active hoes cutting within rows between the plants, and it moved at a maximum speed of 0.9 km/h. The robot's performance was assessed by image processing. The field images were collected by an action camera with a resolution of 27 Mpixels installed on the robot at a height of 2 m and by a drone with a 16 Mpixel camera flying at 4 m height. To detect plants and weeds, a YOLO model was trained with transfer learning from two available datasets. A preliminary analysis of the entire field showed that in the areas treated by the robot, the average weed density varied across the field from 6.8 to 9.1 weeds/m² (compared with 0.8 in the chemically treated area and 24.3 in the untreated area), the average weed density within rows was 2.0-2.9 weeds/m (compared with 0 in the chemically treated area), and the emergence rate was 90-95%. Information about the robot's performance is highly important for the application of robotics to field tasks.
With the help of the developed method, the performance can be assessed several times during growth, according to the robotic weeding frequency. Farmers using it can know the field condition and the efficiency of the robotic treatment all over the field. Farmers and researchers could develop optimal strategies for using the robot, such as seeding and weeding timing, robot settings, and plant and field parameters and geometry. Robot producers can obtain quantitative information from an actual working environment and improve the robots accordingly.
Keywords: agricultural robot, field robot, plant detection, robot performance
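Density figures like those reported above can be derived from per-image detection counts once the ground footprint of each image is known. The counts and footprint below are made-up values for illustration, not the study's measurements; a minimal sketch:

```python
def weed_density(counts_per_image, footprint_w_m, footprint_h_m):
    """Average weeds per square metre over a set of images,
    given each image's ground footprint in metres."""
    image_area = footprint_w_m * footprint_h_m
    total_area = image_area * len(counts_per_image)
    return sum(counts_per_image) / total_area

# Hypothetical detection counts from three images each covering 2.0 m x 1.5 m:
density = weed_density([7, 9, 8], 2.0, 1.5)
print(round(density, 2))  # weeds per square metre
```

Comparing this value between robot-treated, chemically treated, and untreated strips gives the kind of efficiency summary the abstract reports.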
Procedia PDF Downloads 87
563 Digital Adoption of Sales Support Tools for Farmers: A Technology Organization Environment Framework Analysis
Authors: Sylvie Michel, François Cocula
Abstract:
Digital agriculture is an approach that exploits information and communication technologies. These encompass data acquisition tools such as mobile applications, satellites, sensors, connected devices, and smartphones. Additionally, it involves transfer and storage technologies such as 3G/4G coverage, low-bandwidth terrestrial or satellite networks, and cloud-based systems. Furthermore, embedded or remote processing technologies, including drones and robots for process automation, along with high-speed communication networks accessible through supercomputers, are integral components of this approach. While farm-level adoption studies of digital agricultural technologies have emerged in recent years, they remain relatively limited in comparison to other agricultural practices. To bridge this gap, this study delves into farmers' intention to adopt digital tools, employing the technology-organization-environment (TOE) framework. A qualitative research design encompassed fifteen semi-structured interviews conducted with key stakeholders both prior to and following the 2020-2021 COVID-19 lockdowns in France. Subsequently, the interview transcripts underwent thorough thematic content analysis, and the data and verbatim quotes were triangulated for validation. A coding process systematically organized the data, ensuring an orderly and structured classification. Our research extends its contribution by delineating sub-dimensions within each primary dimension. A total of nine sub-dimensions were identified, categorized as follows: perceived usefulness for communication, perceived usefulness for productivity, and perceived ease of use constitute the first dimension; technological resources, financial resources, and human capabilities constitute the second; while market pressure, institutional pressure, and the COVID-19 situation constitute the third.
Furthermore, this analysis enriches the TOE framework by incorporating entrepreneurial orientation as a moderating variable. Managerial orientation emerges as a pivotal factor influencing adoption intention, with producers acknowledging the significance of digital sales support tools for combating "greenwashing" and elevating their overall brand image. Specifically, the analysis illustrates that producers recognize the potential of digital tools for saving time and streamlining sales processes, leading to heightened productivity. Moreover, it highlights that the intent to adopt digital sales support tools is influenced by a market mimicry effect, and it demonstrates a negative association between adoption intent and the pressure exerted by institutional partners. Finally, this research establishes a positive link between the intent to adopt digital sales support tools and economic fluctuations, notably during the COVID-19 pandemic. The adoption of sales support tools in agriculture is thus a multifaceted challenge encompassing three dimensions and nine sub-dimensions. By examining the adoption of digital farming technologies at the farm level through the TOE framework, this analysis provides significant insights for policymakers, stakeholders, and farmers, insights that are instrumental in making informed decisions to facilitate a successful digital transition in agriculture and effectively address sector-specific challenges.
Keywords: adoption, digital agriculture, e-commerce, TOE framework
Procedia PDF Downloads 61
562 Chinese Early Childhood Parenting Style as a Moderator of the Development of Social Competence Based on Mindreading
Authors: Arkadiusz Gut, Joanna Afek
Abstract:
The first issue we discuss in this paper is a battery of research demonstrating that culture influences children's performance in tasks testing their theory of mind, also known as mindreading. We devote special attention to research done within Chinese culture, namely, studies with children speaking Cantonese and Mandarin natively and growing up in an environment dominated by the Chinese model of informal home education. Our attention focuses on the differences in the development and functioning of social abilities and competences between children from China and the West. Another matter we turn to is the description of the nature of Chinese early childhood education. We suggest that the differences between the Chinese model and that of the West reveal a set of modifiers responsible for the variation observed in empirical research on children's theory of mind (mindreading). The modifiers we identify are the following: (1) early socialization, that is, the transformation of the child into a member of the family and society that sets special value on the social and physical environment; (2) the Confucian model of education, that is, the Chinese script and tradition that determine a certain way of education in China; (3) the authoritarian style of upbringing, that is, reinforcing conformism, discouraging the voicing of private opinions, and respect for elders; (4) the modesty of children and protectiveness of parents, that is, obedience as a desired characteristic in the child and overprotectiveness of parents, especially mothers; and (5) gender differences, that is, different educational styles for girls and boys. In our study, we conduct a thorough meta-analysis of empirical data on the development of mindreading and ToM (children's theory of mind), as well as a cultural analysis of early childhood education in China.
We support our analyses with questionnaire and narrative studies conducted in China that use the 'Children's Social Understanding Scale' questionnaire, conversations based on the so-called 'Scenarios Presented to Parents', and questions designed to measure the 'my child and I' relation. With our research, we aim to identify the factors in early childhood education that serve as moderators explaining the nature of the development and functioning of social cognition based on mindreading in China. Additionally, our study provides valuable insight for comparative research on social cognition between China and the West.
Keywords: early childhood education, China, mindreading, parenting
Procedia PDF Downloads 386
561 Formal Development of Electronic Identity Card System Using Event-B
Authors: Tomokazu Nagata, Jawid Ahmad Baktash
Abstract:
The goal of this paper is to explore the use of formal methods for an electronic identity card system. Nowadays, one of the core research directions in a constantly growing distributed environment is the improvement of the communication process, and the responsibility for proper verification becomes crucial. Formal methods can play an essential role in the development and testing of systems. The thesis presents two different methodologies for assessing correctness. Our first approach employs abstract interpretation techniques to create a trace-based model for the electronic identity card system. The model was used for building a semi-decidable procedure for verifying the system model. We also developed the code for the eID system, covering three parts: login to the system with an acknowledgment sent from the user side, receipt of all information from the server side, and logout from the system. The new concepts of impasse and spawned sessions that we introduced led our research to original statements about the intruder's knowledge and the eID system coding with respect to secrecy. Furthermore, we demonstrated that there is a bound on the number of sessions needed for the analysis of the system. Electronic identity (eID) cards promise to supply a universal, nation-wide mechanism for user authentication. Most European countries have started to deploy eID for government and private sector applications. Are government-issued electronic ID cards the proper way to authenticate users of online services? We use the eID project as a showcase to discuss eID from an application perspective. The new eID card has interesting design features: it is contactless, it aims to protect people's privacy to the extent possible, and it supports cryptographically strong mutual authentication between users and services. Privacy features include support for pseudonymous authentication and per-service controlled access to individual data items.
The article discusses key concepts, the eID infrastructure, observed and expected problems, and open questions. The core technology seems ready for prime time, and government projects are deploying it to the masses, but application issues may hamper eID adoption for online applications.
Keywords: eID, Event-B, ProB, formal method, message passing
Procedia PDF Downloads 237
560 Inflammatory Changes Caused by Lipopolysaccharide in Odontoblasts
Authors: Virve Pääkkönen, Heidi M. Cuffaro, Leo Tjäderhane
Abstract:
Objectives: Odontoblasts are the outermost cell layer of the dental pulp and form the dentin. The importance of bacterial products, e.g. lipoteichoic acid (LTA), a cell wall component of Gram-positive bacteria, and lipopolysaccharide (LPS), a cell wall component of Gram-negative bacteria, has been indicated in the pathogenesis of pulpitis. Gram-positive bacteria are more prevalent in superficial carious lesions, while the proportion of Gram-negative bacteria is higher in deep lesions. The objective of this study was to investigate the effect of these bacterial products on the inflammatory response of pulp tissue. Interleukins (ILs) were of special interest, as various ILs have been observed in the dentin-pulp complex of carious teeth in vivo. Methods: A tissue culture method was used for testing the effect of LTA and LPS on human odontoblasts. An enzymatic isolation technique was used to extract living odontoblasts for cell cultures. DNA microarray and quantitative PCR (qPCR) were used to characterize changes in the expression profile of the tissue-cultured odontoblasts. Laser microdissection was used to cut healthy and affected dentin and the odontoblast layer directly under the carious lesion for experiments. A cytokine array detecting 80 inflammatory cytokines was used to analyze the protein content of the conditioned culture media as well as of dentin and odontoblasts from the carious teeth. Results: LPS caused increased gene expression of IL-1α and -8 and decreased expression of IL-1β, -12, -15 and -16 after 1 h of treatment, while after 24 h of treatment a decrease of IL-8, -11 and -23 mRNAs was observed. LTA treatment caused cell death in the tissue-cultured odontoblasts but not in the cell culture. The cytokine array revealed at least 2-fold down-regulation of IL-1β, -10 and -12 in response to LPS treatment.
The cytokine array of odontoblasts from carious teeth, as well as of LPS-treated tissue-cultured odontoblasts, revealed increased protein amounts of IL-16, epidermal growth factor (EGF), angiogenin and IGFBP-1, as well as a decreased amount of fractalkine. In carious dentin, increased amounts of IL-1β, EGF and fractalkine were observed, as well as decreased levels of GRO-1 and HGF. Conclusion: LPS caused marked changes in the expression of inflammatory cytokines in odontoblasts. Similar changes were observed in the odontoblasts cut directly from under the carious lesion. These results help to shed light on the inflammatory processes occurring during caries.
Keywords: inflammation, interleukin, lipoteichoic acid, odontoblasts
Procedia PDF Downloads 211
559 Non-Destructive Ultrasound Testing for the Determination of Elastic Characteristics of AlSi7Zn3Cu2Mg Foundry Alloy
Authors: A. Hakem, Y. Bouafia
Abstract:
Characterization of the materials used for various mechanical components is of great importance in their design. Several studies have been conducted by various authors in order to improve their physical and/or chemical properties in general, and their mechanical or metallurgical properties in particular. The foundry alloy AlSi7Zn3Cu2Mg is one of the main components constituting the various mechanisms used in applications and industrial projects. Obtaining a reliable product is not an easy task; the results proposed by different authors are sometimes contradictory. Due to their high mechanical characteristics, these alloys are widely used in engineering. Silicon improves casting properties, and magnesium allows heat treatment. It is thus possible to obtain various degrees of hardening and therefore interesting compromises between tensile strength and yield strength, on one hand, and elongation, on the other. These mechanical characteristics can be further enhanced by a series of mechanical or heat treatments. Given their light weight coupled with high mechanical characteristics, aluminum alloys are widely used in the automotive and aircraft industries. The present study focuses on the influence of heat treatments, which cause significant microstructural changes, usually hardening, through variation of annealing temperatures in increments of 10°C and 20°C, on the evolution of the main elastic characteristics, strength, ductility and structural characteristics of the AlSi7Zn3Cu2Mg foundry alloy sand-cast by gravity. These elastic properties are determined in three directions for each specimen of dimensions 200×150×20 mm³ by the ultrasonic method based on acoustic (elastic) waves. The hardness, micro-hardness and structural characteristics are evaluated by a non-destructive method. The aim of this work is to study the hardening ability of the AlSi7Zn3Cu2Mg alloy by considering ten states.
To improve the mechanical properties obtained with the raw casting, heat treatment for structural hardening should be used; the addition of magnesium is necessary to increase the sensitivity to this specific heat treatment. The treatment begins with homogenization, which generates a diffusion of atoms in a substitutional solid solution, inside a hardening furnace at 500 °C for 8 h, followed immediately by quenching in water at room temperature (20 to 25 °C), then an ageing process for 17 h at room temperature, and annealing at different temperatures (150, 160, 170, 180, 190, 200, 220 and 240 °C) for 20 h in an annealing oven. The specimens were allowed to cool inside the oven.
Keywords: aluminum, foundry alloy, magnesium, mechanical characteristics, silicon
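The ultrasonic determination of elastic properties mentioned in this abstract typically relies on the standard isotropic relations between wave velocities and elastic constants. The abstract does not give its exact data-reduction procedure, so the following is a minimal illustrative sketch; the density and wave velocities are assumed representative values for a cast Al-Si alloy, not the paper's measurements.

```python
def elastic_constants(rho, v_l, v_t):
    """Standard isotropic relations between ultrasonic wave speeds and
    elastic constants (illustrative; the paper's procedure may differ).
    rho : density (kg/m^3)
    v_l : longitudinal wave velocity (m/s)
    v_t : transverse (shear) wave velocity (m/s)
    Returns Young's modulus E (Pa), shear modulus G (Pa), Poisson's ratio nu.
    """
    nu = (v_l**2 - 2 * v_t**2) / (2 * (v_l**2 - v_t**2))  # Poisson's ratio
    g = rho * v_t**2                                      # shear modulus
    e = 2 * g * (1 + nu)                                  # Young's modulus
    return e, g, nu

# Assumed representative values for a cast Al-Si alloy:
e, g, nu = elastic_constants(rho=2680.0, v_l=6400.0, v_t=3100.0)
print(e / 1e9, g / 1e9, nu)   # moduli in GPa, plus Poisson's ratio
```

Measuring the two wave speeds along each of the three specimen directions, as the study does, yields one such triplet per direction and exposes any anisotropy of the casting.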
Procedia PDF Downloads 264
558 Establishing a Communication Framework in Response to the COVID-19 Pandemic in a Tertiary Government Hospital in the Philippines
Authors: Nicole Marella G. Tan, Al Joseph R. Molina, Raisa Celine R. Rosete, Soraya Elisse E. Escandor, Blythe N. Ke, Veronica Marie E. Ramos, Apolinario Ericson B. Berberabe, Jose Jonas D. del Rosario, Regina Pascua-Berba, Eileen Liesl A. Cubillan, Winlove P. Mojica
Abstract:
Emergency risk and health communications play a vital role in any pandemic response. However, the Philippine General Hospital (PGH) lacked a system of information delivery that could effectively fulfill the hospital’s communication needs as a COVID-19 referral hospital. This study aimed to describe the establishment of a communication framework for information dissemination within a tertiary government hospital during the COVID-19 pandemic and to evaluate the perceived usefulness of its outputs. This is a mixed quantitative-qualitative study with two phases. Phase 1 documented the formation and responsibilities of the Information Education Communication (IEC) Committee. Phase 2 evaluated its outputs and outcomes through a hospital-wide survey of 528 healthcare workers (HCWs) using a pre-tested questionnaire. In-depth explanations were obtained from five focus group discussions (FGDs) amongst various HCW subgroups. Descriptive analysis was done using STATA 16, while qualitative data were synthesized thematically. Communication practices in PGH were loosely structured at the beginning of the pandemic, until the establishment of the IEC Committee, which was well-represented by the concerned stakeholders. Nine types of infographics tackled different aspects of the hospital’s health operations, after thorough input from the offices concerned. Internal and external feedback mechanisms ensured the accuracy of the infographics. The majority of survey respondents (98.67%) perceived these as useful in their work or daily lives. FGD participants cited the relevance of the infographics to their occupations, suggested improvements, and hoped that these efforts would be continued in the future. Sustainability and comprehensive reach were the main concerns in this undertaking. The PGH COVID-19 IEC framework was developed through trial and testing, as there were no existing formal structures to communicate health risks and to properly direct the HCWs in the chaotic time of a pandemic.
It is a continuously evolving framework which is perceived as useful by HCWs and is hoped to be sustained in the future.
Keywords: COVID-19, pandemic, health communication, infographics, social media
Procedia PDF Downloads 127
557 Software User Experience Enhancement through Collaborative Design
Authors: Shan Wang, Fahad Alhathal, Daniel Hobson
Abstract:
User-centered design skills play an important role in crafting a positive and intuitive user experience for software applications. Embracing a user-centric design approach involves understanding the needs, preferences, and behaviors of the end-users throughout the design process. This mindset not only enhances the usability of the software but also fosters a deeper connection between the digital product and its users. This paper covers a 6-month knowledge exchange collaboration project between an academic institution and an external industry partner in 2023, which aimed to improve the user experience of a digital platform used as a knowledge management tool: to understand users' preferences for features, identify sources of frustration, and pinpoint areas for enhancement. This research applied one of the most effective methods of implementing user-centered design, co-design workshops for testing user onboarding experiences, which involve the active participation of users in the design process. More specifically, in January 2023, we organized eight workshops with a diverse group of 11 individuals. Throughout these sessions, we accumulated a total of 11 hours of qualitative data in both video and audio formats. Subsequently, we conducted an analysis of user journeys, identifying common issues and potential areas for improvement. This analysis was pivotal in guiding the knowledge management software in prioritizing feature enhancements and design improvements. Employing a user-centered design thinking process, we developed a series of graphic design solutions in collaboration with the software management tool company. These solutions targeted the refinement of onboarding user experiences, workplace interfaces, and interactive design. Some of these design solutions were translated into tangible interfaces for the knowledge management tool.
By actively involving users in the design process and valuing their input, developers can create products that are not only functional but also resonate with the end-users, ultimately leading to greater success in the competitive software landscape. In conclusion, this paper not only contributes insights into designing onboarding user experiences for software within a co-design approach but also presents key theories on leveraging the user-centered design process in software design to enhance overall user experiences.
Keywords: user experiences, co-design, design process, knowledge management tool, user-centered design
Procedia PDF Downloads 68
556 Vibroacoustic Modulation of Wideband Vibrations and its Possible Application for Windmill Blade Diagnostics
Authors: Abdullah Alnutayfat, Alexander Sutin, Dong Liu
Abstract:
Wind turbines have become one of the most popular means of energy production. However, blade failures and maintenance costs have evolved into significant issues in the wind power industry, so it is essential to detect initial blade defects to avoid the collapse of the blades and structure. This paper aims to apply modulation of high-frequency blade vibrations by the low-frequency blade rotation, which is close to the known Vibro-Acoustic Modulation (VAM) method. The high-frequency wideband blade vibration is produced by the interaction of the blade surfaces with environmental air turbulence, and the low-frequency modulation is produced by alternating bending stress due to gravity. The low-frequency load of rotating wind turbine blades ranges between 0.2-0.4 Hz and can reach up to 2 Hz in strong wind. The main difference between this study and previous work on VAM methods is the use of a wideband vibration signal from the blade's natural vibrations. Different features of the vibroacoustic modulation are considered using a simple model of a breathing crack. This model considers a simple mechanical oscillator whose parameters are varied by the low-frequency blade rotation. During the blade's operation, the internal stress caused by the weight of the blade modifies the crack's elasticity and damping. A laboratory experiment using steel samples demonstrates the possibility of VAM using a wideband probe noise signal. A small-amplitude cyclic load was applied as the pump wave to the damaged test sample, and a small transducer generated a wideband probe wave. Demodulation of the received signal was conducted using the Detection of Envelope Modulation on Noise (DEMON) approach. In addition, the experimental results were compared with the modulation index (MI) technique using a harmonic pump wave. The wideband and traditional VAM methods demonstrated similar sensitivity for early detection of invisible cracks.
Importantly, employing a wideband probe signal with the DEMON approach speeds up and simplifies testing, since it eliminates the need to conduct tests repeatedly for various harmonic probe frequencies and to adjust the probe frequency.
Keywords: vibro-acoustic modulation, detection of envelope modulation on noise, damage, turbine blades
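The DEMON processing chain the abstract relies on (envelope detection of the wideband signal, then spectral analysis of the envelope to recover the low-frequency modulation rate) can be sketched as follows. The sampling rate, record length, and the 2 Hz pump rate are illustrative assumptions, not the paper's experimental values; a square-law detector stands in for whatever envelope detector the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2048.0                      # sampling rate, Hz (assumed)
t = np.arange(0, 8.0, 1 / fs)    # 8 s record
f_mod = 2.0                      # low-frequency pump (blade-load) rate, Hz

# Wideband probe noise whose amplitude is modulated by the pump wave:
carrier = rng.standard_normal(t.size)
signal = (1.0 + 0.5 * np.cos(2 * np.pi * f_mod * t)) * carrier

# DEMON: square-law envelope detection, then a spectrum of the envelope
envelope = signal ** 2
envelope -= envelope.mean()                 # remove the DC component
spectrum = np.abs(np.fft.rfft(envelope))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

low_band = freqs < 20.0                     # search only the low-frequency band
detected = freqs[low_band][np.argmax(spectrum[low_band])]
print(detected)                             # recovered modulation rate, Hz
```

The point of the wideband approach is visible here: a single record recovers the modulation rate without sweeping a harmonic probe frequency.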
Procedia PDF Downloads 100
555 Alphabet Recognition Using Pixel Probability Distribution
Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay
Abstract:
Our project topic is “Alphabet Recognition Using Pixel Probability Distribution”. The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR is sometimes applied to signature recognition, which is used in banks and other high-security buildings. One popular mobile application reads a visiting card and stores it directly in the contacts. OCR is also known to be used in radar systems for reading speeding vehicles’ license plates, among many other applications. The implementation of our project was done using Visual Studio and OpenCV (Open Source Computer Vision). Our algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: This module performs database generation. The database was generated using two methods: (a) Run-time generation: the database is generated at compilation time using the built-in fonts of the OpenCV library; human intervention is not necessary for generating this database. (b) Contour detection: a ‘jpeg’ template containing different fonts of an alphabet is converted to a weighted matrix using specialized functions (contour detection and blob detection) of OpenCV. The main advantage of this type of database generation is that the algorithm becomes self-learning, and the final database requires little memory to store (119 kB, precisely). (2) Preprocessing: The input image is pre-processed using image processing concepts such as adaptive thresholding, binarization and dilation, and is made ready for segmentation. Segmentation includes the extraction of lines, words, and letters from the processed text image.
(3) Testing and prediction: The extracted letters are classified and predicted using the neural network algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated using the database and the weight matrix of the segmented image.
Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix
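The adaptive-thresholding step of the preprocessing module can be illustrated with a numpy-only sketch (the project itself used OpenCV's built-in version; the block size, offset, and toy image below are assumptions for illustration). A pixel is marked as foreground when it is darker than its local-neighbourhood mean minus an offset, which makes the binarization robust to uneven illumination.

```python
import numpy as np

def adaptive_threshold(img, block=15, c=2):
    """Mean adaptive threshold (numpy sketch of the OpenCV-style step).
    img : 2-D uint8 array; returns a binary (0/1) array where 1 marks
    pixels darker than their local mean by more than c."""
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=np.uint8)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            local_mean = padded[i:i + block, j:j + block].mean()
            out[i, j] = 1 if img[i, j] < local_mean - c else 0
    return out

# Toy "letter stroke" on a bright background with uneven illumination:
img = np.full((20, 20), 200, dtype=np.uint8)
img += (np.arange(20, dtype=np.uint8) // 2)[None, :]   # brightness gradient
img[5:15, 9:11] = 40                                   # dark vertical stroke
binary = adaptive_threshold(img)
print(binary.sum())          # number of foreground (stroke) pixels
```

A global threshold would misclassify pixels under the gradient; the local mean adapts, so only the dark stroke survives binarization.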
Procedia PDF Downloads 390
554 Transport Hubs as Loci of Multi-Layer Ecosystems of Innovation: Case Study of Airports
Authors: Carolyn Hatch, Laurent Simon
Abstract:
Urban mobility and the transportation industry are undergoing a transformation, shifting from an auto production-consumption model that has dominated since the early 20th century towards new forms of personal and shared multi-modality [1]. This is shaped by key forces such as climate change, which has induced a shift in production and consumption patterns and efforts to decarbonize and improve transport services through, for instance, the integration of vehicle automation, electrification and mobility sharing [2]. Advanced innovation practices and platforms for experimentation and validation of new mobility products and services that are increasingly complex and multi-stakeholder-oriented are shaping this new world of mobility. Transportation hubs – such as airports - are emblematic of these disruptive forces playing out in the mobility industry. Airports are emerging as the core of innovation ecosystems on and around contemporary mobility issues, and increasingly recognized as complex public/private nodes operating in many societal dimensions [3,4]. These include urban development, sustainability transitions, digital experimentation, customer experience, infrastructure development and data exploitation (for instance, airports generate massive and often untapped data flows, with significant potential for use, commercialization and social benefit). Yet airport innovation practices have not been well documented in the innovation literature. This paper addresses this gap by proposing a model of airport innovation that aims to equip airport stakeholders to respond to these new and complex innovation needs in practice. 
The methodology involves: 1 – a literature review bringing together key research and theory on airport innovation management, open innovation and innovation ecosystems, in order to evaluate airport practices through an innovation lens; 2 – an international benchmarking of leading airports and their innovation practices, including such examples as Aéroports de Paris, Schiphol in Amsterdam, Changi in Singapore, and others; and 3 – semi-structured interviews with airport managers on key aspects of organizational practice, facilitated through a close partnership with the Airports Council International (ACI), a major stakeholder in this research project. Preliminary results indicate that the most successful airports are those that have shifted to a multi-stakeholder, platform-ecosystem model of innovation. The recent entrance of new actors into airports (Google, Amazon, Accor, Vinci, Airbnb and others) has forced the opening of organizational boundaries to share and exchange knowledge with a broader set of ecosystem players. This has also led to new forms of governance and intermediation by airport actors to connect complex, highly distributed knowledge, along with new kinds of inter-organizational collaboration, co-creation and collective ideation processes. The leading airports in the case study have demonstrated a unique capacity to bring traditionally siloed activities to “think together”, “explore together” and “act together”, to share data, contribute expertise and pioneer new governance approaches and collaborative practices. In so doing, they have successfully integrated these many disruptive change pathways and steered their implementation and coordination towards innovative mobility outcomes, with positive societal, environmental and economic impacts.
This research has implications for: 1 - innovation theory, 2 - urban and transport policy, and 3 - organizational practice - within the mobility industry and across the economy.
Keywords: airport management, ecosystem, innovation, mobility, platform, transport hubs
Procedia PDF Downloads 182
553 Biomechanical Analysis on Skin and Jejunum of Chemically Prepared Cat Cadavers Used in Surgery Training
Authors: Raphael C. Zero, Thiago A. S. S. Rocha, Marita V. Cardozo, Caio C. C. Santos, Alisson D. S. Fechis, Antonio C. Shimano, Fabrício S. Oliveira
Abstract:
Biomechanical analysis is an important factor in tissue studies. The objective of this study was to determine the feasibility of a new anatomical technique and to quantify the changes in the skin and jejunum resistance of cat cadavers throughout the process. Eight adult cat cadavers were used. For every kilogram of weight, 120 ml of fixative solution (95% 96 GL ethyl alcohol and 5% pure glycerin) was applied via the external common carotid artery. Next, the carcasses were placed in a container with 96 GL ethyl alcohol for 60 days. After fixing, all carcasses were preserved in a 30% sodium chloride solution for 60 days. Before fixation, control samples were collected from fresh cadavers; after fixation, three skin and jejunum fragments from each cadaver were tested monthly for strength and displacement until complete rupture in a universal testing machine. All results were analyzed by F-test (P < 0.05). In the jejunum, the force required to rupture the fresh samples and the samples fixed in alcohol for 60 days was 31.27±19.14 N and 29.25±11.69 N, respectively. For the samples preserved in the sodium chloride solution for 30 and 60 days, the force was 26.17±16.18 N and 30.57±13.77 N, respectively. In relation to the displacement required for the rupture of the samples, the values for fresh specimens and those fixed in alcohol for 60 days were 2.79±0.73 mm and 2.80±1.13 mm, respectively. For the samples preserved for 30 and 60 days in sodium chloride solution, the displacement was 2.53±1.03 mm and 2.83±1.27 mm, respectively. There was no statistical difference between the samples (P=0.68 with respect to strength, and P=0.75 with respect to displacement). In the skin, the force needed to rupture the fresh samples and the samples fixed for 60 days in alcohol was 223.86±131.5 N and 211.86±137.53 N, respectively. For the samples preserved in sodium chloride solution for 30 and 60 days, the force was 227.73±129.06 N and 224.78±143.83 N, respectively.
In relation to the displacement required for the rupture of the samples, the values for fresh specimens and those fixed in alcohol for 60 days were 3.67±1.03 mm and 4.11±0.87 mm, respectively. For the samples preserved for 30 and 60 days in sodium chloride solution, the displacement was 4.21±0.93 mm and 3.93±0.71 mm, respectively. There was no statistical difference between the samples (P=0.65 with respect to strength, and P=0.98 with respect to displacement). The resistance of the skin and intestines of the cat cadavers suffered little change when subjected to alcohol fixation and preservation in sodium chloride solution, each for 60 days, which is promising for use in surgery training. All experimental procedures were approved by the Municipal Legal Department (protocol 02.2014.000027-1). The project was funded by FAPESP (protocol 2015-08259-9).
Keywords: anatomy, conservation, fixation, small animal
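The F-test comparisons reported above correspond to a one-way ANOVA across the preservation stages. The sketch below computes the F statistic from first principles on synthetic groups whose means and standard deviations loosely match the reported jejunum rupture forces; the per-group sample size of 24 and the generated data themselves are assumptions, not the study's data.

```python
import numpy as np

def one_way_f(groups):
    """One-way ANOVA F statistic, computed from first principles:
    between-group mean square over within-group mean square."""
    all_data = np.concatenate(groups)
    grand = all_data.mean()
    k = len(groups)
    n = all_data.size
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic jejunum rupture forces (N): fresh, alcohol-fixed, and
# 30 d / 60 d NaCl groups, with assumed n = 24 per group:
rng = np.random.default_rng(1)
groups = [rng.normal(m, s, 24) for m, s in
          [(31.27, 19.14), (29.25, 11.69), (26.17, 16.18), (30.57, 13.77)]]
f = one_way_f(groups)
print(f)   # small F, consistent with "no statistical difference"
```

With group means this close relative to the spreads, F stays near 1, which is why the study's P-values (0.68, 0.75) fail to reject equality.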
Procedia PDF Downloads 297
552 Multicellular Cancer Spheroids as an in Vitro Model for Localized Hyperthermia Study
Authors: Kamila Dus-Szachniewicz, Artur Bednarkiewicz, Katarzyna Gdesz-Birula, Slawomir Drobczynski
Abstract:
In modern oncology, hyperthermia (HT) is defined as controlled tumor heating. HT treatment temperatures range between 40–48 °C and can selectively damage heat-sensitive cancer cells or limit their further growth, usually with minimal injury to healthy tissues. Despite many advantages, conventional whole-body and regional hyperthermia have clinically relevant side effects, including cardiac and vascular disorders. Additionally, the inaccessibility of deep-seated tumor sites and impaired targeting of micrometastases render HT less effective. It is believed that the above disadvantages can be significantly overcome by the application of biofunctionalized microparticles, which can specifically target tumor sites and be activated by an external stimulus to provoke a sufficient cellular response. In our research, a unique optical tweezers system has enabled capturing silica microparticles, primary cells and tumor spheroids in a highly controllable and reproducible environment, in order to study the impact of localized heat stimulation on normal and pathological cells and within multicellular tumor spheroids. A high-throughput spheroid model was introduced to better mimic the in vivo response of tumors to HT treatment. Additionally, local heating of tumor spheroids was performed under strictly controlled conditions resembling the tumor microenvironment (temperature, pH, hypoxia, etc.), in response to localized and nonhomogeneous hyperthermia in the extracellular matrix, which promotes tumor progression and metastatic spread. The lack of precise control over these well-defined parameters in basic research leads to discrepancies in the response of tumor cells to the new treatment strategy in preclinical animal testing.
The developed approach also enables sorting out subclasses of cells which exhibit partial or total resistance to therapy, in order to understand fundamental aspects of the resistance shown by given tumor cells in response to a given therapy mode and conditions. This work was funded by the National Science Centre (NCN, Poland) under grant no. UMO-2017/27/B/ST7/01255.
Keywords: cancer spheroids, hyperthermia, microparticles, optical tweezers
Procedia PDF Downloads 134
551 Protecting the Health of Astronauts: Enhancing Occupational Health Monitoring and Surveillance for Former NASA Astronauts to Understand Long-Term Outcomes of Spaceflight-Related Exposures
Authors: Meredith Rossi, Lesley Lee, Mary Wear, Mary Van Baalen, Bradley Rhodes
Abstract:
The astronaut community is unique, and may be disproportionately exposed to occupational hazards not commonly seen in other communities. The extent to which the demands of the astronaut occupation and exposure to spaceflight-related hazards affect the health of the astronaut population over the life course is not completely known. A better understanding of the individual, population, and mission impacts of astronaut occupational exposures is critical to providing clinical care, targeting occupational surveillance efforts, and planning for future space exploration. The ability to characterize the risk of latent health conditions is a significant component of this understanding. Provision of health screening services to active and former astronauts ensures individual, mission, and community health and safety. Currently, the NASA-Johnson Space Center (JSC) Flight Medicine Clinic (FMC) provides extensive medical monitoring to active astronauts throughout their careers. Upon retirement, astronauts may voluntarily return to the JSC FMC for an annual preventive exam. However, current retiree monitoring includes only selected screening tests, representing an opportunity for augmentation. The potential long-term health effects of spaceflight demand an expanded framework of testing for former astronauts. The need is two-fold: screening tests widely recommended for other aging populations are necessary to rule out conditions resulting from the natural aging process (e.g., colonoscopy, mammography); and expanded monitoring will increase NASA’s ability to better characterize conditions resulting from astronaut occupational exposures. To meet this need, NASA has begun an extensive exploration of the overall approach, cost, and policy implications of expanding the medical monitoring of former NASA astronauts under the Astronaut Occupational Health program. 
Increasing the breadth of monitoring services will ultimately enrich the existing evidence base of occupational health risks to astronauts. Such an expansion would therefore improve the understanding of the health of the astronaut population as a whole, and the ability to identify, mitigate, and manage such risks in preparation for deep space exploration missions.
Keywords: astronaut, long-term health, NASA, occupational health, surveillance
Procedia PDF Downloads 534
550 The Relationship between Proximity to Sources of Industrial-Related Outdoor Air Pollution and Children Emergency Department Visits for Asthma in the Census Metropolitan Area of Edmonton, Canada, 2004/2005 to 2009/2010
Authors: Laura A. Rodriguez-Villamizar, Alvaro Osornio-Vargas, Brian H. Rowe, Rhonda J. Rosychuk
Abstract:
Introduction/Objectives: The Census Metropolitan Area of Edmonton (CMAE) has important industrial emissions to the air from the Industrial Heartland Alberta (IHA) to the northeast and the coal-fired power plants (CFPP) to the west. The objective of the study was to explore the presence of clusters of children's asthma ED visits in the areas around the IHA and the CFPP. Methods: Retrospective data on children's asthma ED visits were collected at the dissemination area (DA) level for children between 2 and 14 years of age living in the CMAE between April 1, 2004, and March 31, 2010. We conducted a spatial analysis of disease clusters around putative sources with count (ecological) data, using descriptive, hypothesis-testing, and multivariable modeling analyses. Results: The mean crude rate of asthma ED visits was 9.3/1,000 children per year during the study period. The circular spatial scan test for cases and events identified a cluster of children's asthma ED visits in the DA where the CFPP are located, in the Wabamun area. No clusters were identified around the IHA area. The multivariable models suggest that there is a significant decline in risk for children's asthma ED visits as distance increases around the CFPP area; this effect is modified in the SE direction (mean angle 125.58 degrees), where the risk increases with distance. In contrast, the regression models for the IHA suggest that there is a significant increase in risk for children's asthma ED visits as distance increases around the IHA area; this effect is modified in the SW direction (mean angle 216.52 degrees), where the risk increases at shorter distances. Conclusions: Different methods for detecting clusters of disease consistently suggested the existence of a cluster of children's asthma ED visits around the CFPP, but not around the IHA, within the CMAE.
These results are probably explained by the direction of air pollutant dispersion caused by the predominant and subdominant wind directions at each site. The use of different approaches to detect clusters of disease is valuable for a better understanding of the presence, shape, direction and size of clusters of disease around pollution sources.
Keywords: air pollution, asthma, disease cluster, industry
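The circular spatial scan test used above is, in Kulldorff's Poisson formulation, driven by a log-likelihood ratio evaluated for each candidate circle; the circle that maximizes it is the most likely cluster. A minimal sketch of that statistic for a single candidate circle follows; the case counts are hypothetical, not the study's data.

```python
import math

def poisson_scan_llr(c, e, C):
    """Log-likelihood ratio of Kulldorff's Poisson spatial scan statistic
    for one candidate circle.
    c : observed cases inside the circle
    e : expected cases inside the circle (under the null)
    C : total cases in the whole study region
    Returns 0 for circles with no excess risk."""
    if c <= e:                       # only elevated-risk clusters count
        return 0.0
    return c * math.log(c / e) + (C - c) * math.log((C - c) / (C - e))

# Hypothetical DA circle: 30 asthma ED visits observed where 12 were
# expected, out of 500 visits region-wide:
llr = poisson_scan_llr(c=30, e=12, C=500)
print(llr)
```

In the full test, this LLR is computed over many circle centers and radii, and the significance of the maximum is assessed by Monte Carlo replication under the null.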
Procedia PDF Downloads 283
549 The Examination of Cement Effect on Isotropic Sands during Static, Dynamic, Melting and Freezing Cycles
Authors: Mehdi Shekarbeigi
Abstract:
The consolidation of loose substrates as well as substrate layers through the addition of stabilizing materials is one of the most commonly used road construction techniques. Cement, lime and fly ash, as well as asphalt emulsion, are common materials used for soil stabilization to enhance the soil’s strength and durability properties. Cement can be used simply to stabilize permeable materials such as sand within a relatively short time. In this research, typical Portland cement is selected for the stabilization of isotropic sand; the effect of static and cyclic loading on the behavior of these soils has been examined with various percentages of Portland cement. Thus, the soil’s general properties are first investigated, and then static tests, including direct shear, density and uniaxial compression tests, and the California Bearing Ratio, are performed on the samples. After that, the dynamic behavior of cement on silica sand of the same grain size is analyzed. These experiments are conducted on samples with cement contents of 3, 6, and 9 percent, at effective confining pressures of 0 to 1200 kPa in 200 kPa steps, according to American Society for Testing and Materials (ASTM) D3999 standards. Also, to test the effect of temperature, the molded and frost-exposed samples are subjected to 0, 5, 10, and 20 melting and freezing cycles. Results of the static tests showed that increasing the cement percentage increases the soil density and shear strength. The increase in uniaxial compressive strength is higher for samples with higher cement content and lower densities. The results also illustrate the relationship between uniaxial compressive strength and the cement weight parameters. Results of the dynamic experiments indicate that increasing the number of loading cycles and of melting and freezing cycles increases permeability and decreases the applied pressure.
According to the results of this research, it can be stated that samples containing 9% cement have the highest shear modulus and, therefore, the lowest soil permeability; this can be considered the optimal cement content. Also, the increase of effective confining pressure from 400 to 800 kPa increased the shear modulus of the samples by an average of 20 to 30 percent at small strains.
Keywords: cement, isotropic sands, static load, three-axis cycle, melting and freezing cycles
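The shear modulus reported from the cyclic (ASTM D3999-type) tests is, in the usual data reduction, the secant slope of the shear stress-strain hysteresis loop. The following numpy sketch applies a simplified version of that reduction to a synthetic small-strain loop; the backbone modulus of 60,000 kPa, strain amplitude, and damping term are all assumed numbers, not the paper's data.

```python
import numpy as np

def secant_shear_modulus(tau, gamma):
    """Secant shear modulus of one hysteresis loop: slope between the
    loop's stress/strain extremes (simplified ASTM D3999-style reduction).
    tau   : shear stress history (kPa)
    gamma : shear strain history (dimensionless)"""
    return (tau.max() - tau.min()) / (gamma.max() - gamma.min())

# Synthetic loop: linear backbone G = 60,000 kPa plus a small
# out-of-phase (damping) component that opens the loop:
t = np.linspace(0.0, 2.0 * np.pi, 400)
gamma = 1e-4 * np.sin(t)                     # shear strain history
tau = 60000.0 * gamma + 0.5 * np.cos(t)      # shear stress history, kPa
g_secant = secant_shear_modulus(tau, gamma)
print(g_secant)                              # close to 60,000 kPa
```

Repeating this reduction loop-by-loop over a test is what lets the study track how cement content, confining pressure, and freeze-thaw cycling shift the modulus.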
Procedia PDF Downloads 77
548 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems
Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana
Abstract:
Large-scale critical industrial scheduling problems are based on Resource-Constrained Project Scheduling Problems (RCPSP), which necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones) and thus require practical solutions (i.e., modular and computationally efficient, with feasible solutions). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that answers the issues raised by the delivery of complex projects. With three interlinked entities (projects, tasks, resources), each with its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can be easily integrated with other optimization problems, already existing industrial tools, and unique constraints, as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and an application in the defense industry on supply chain and factory relocation. In the first use case, the solution, in addition to the resources’ availability and the tasks’ logical relationships, also integrates several project-specific constraints for outage management, such as handling resource incompatibility, updating task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This time-effectiveness corresponds with the nature of the problem and the requirement to run several scenarios (30-40 simulations) before finalizing the schedules.
The second use case is a factory relocation project where production lines must be moved to a new site while ensuring the continuity of their production. This generates the challenge of merging job-shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while considering the expected production rate. The simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case concerns a future maintenance operation in an NPP. The project contains complex and hard constraints, like a Finish-Start precedence relationship (i.e., successor tasks have to start immediately after their predecessors while respecting all constraints), shareable coactivity for managing workspaces, and the requirement of a specific state of "cyclic" resources (they can have multiple possible states, with only one active at a time) to perform tasks (which can require unique combinations of several cyclic resources). Our solution satisfies the requirement of minimizing the state changes of cyclic resources, coupled with makespan minimization. It solves a case of 80 cyclic resources with 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast and feasible modular approach to various industrial scheduling problems, validated by domain experts and compatible with existing industrial tools. This approach can be further enhanced by the use of machine learning techniques on historically repeated tasks to gain insights for delay risk mitigation measures.
Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP
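The serial greedy heuristic with a per-task dynamic cost function described in this abstract can be sketched as below. The single shared resource, the particular cost function (priority times duration), and the four-task example are illustrative assumptions, far simpler than the authors' industrial solver: the point is only the scheme of repeatedly picking the cheapest precedence-feasible task and placing it at the earliest resource-feasible time.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    duration: int
    demand: int                    # units of the single shared resource
    preds: list = field(default_factory=list)
    priority: float = 1.0          # feeds the dynamic cost function

def greedy_schedule(tasks, capacity):
    """Serial greedy RCPSP heuristic (a sketch, not the authors' code):
    at each step, among precedence-feasible tasks, pick the one with the
    lowest dynamic cost and start it at the earliest time where all its
    predecessors are finished and the resource capacity is respected."""
    start, finish = {}, {}
    usage = {}                                   # time step -> units in use
    while len(finish) < len(tasks):
        ready = [t for t in tasks if t.name not in finish
                 and all(p in finish for p in t.preds)]
        # dynamic cost: favor high-priority, long tasks (illustrative choice)
        task = min(ready, key=lambda x: -x.priority * x.duration)
        s = max([finish[p] for p in task.preds], default=0)
        while any(usage.get(s + d, 0) + task.demand > capacity
                  for d in range(task.duration)):
            s += 1                               # slide right until it fits
        for d in range(task.duration):
            usage[s + d] = usage.get(s + d, 0) + task.demand
        start[task.name], finish[task.name] = s, s + task.duration
    return start, finish

tasks = [Task("A", 3, 2), Task("B", 2, 2), Task("C", 2, 1, preds=["A"]),
         Task("D", 1, 3, preds=["B", "C"])]
start, finish = greedy_schedule(tasks, capacity=3)
makespan = max(finish.values())
print(start, makespan)
```

Because the cost function is re-evaluated each step over the currently ready set, situational rules (paused tasks, updated priorities, resource incompatibilities) can be folded in without changing the overall loop, which is what makes this style of heuristic easy to couple with other optimization modules.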
Procedia PDF Downloads 201
547 Advanced Nuclear Measurements and Systems for Facilitating the Implementation of Safeguards and Safeguards-By-Design in SMR, AMR and Microreactor
Authors: Massimo Morichi
Abstract:
Over the last five years, starting in 2019, several nuclear measurement systems have been conceived and realized for specific nuclear safeguards applications, as well as for nuclear security, implementing innovative technologies and methods for attended and unattended nuclear measurements. Some of those technologies integrate combined gamma and neutron detection systems for both counting and spectroscopic applications, allowing Special Nuclear Material (SNM) verification and quantification through simultaneous gamma and neutron measurements, from standard to high count rates caused by high-flux irradiation. The IAEA has implemented some of these technologies in key international safeguards inspections worldwide, such as a Fast Neutron Collar Monitor for fresh fuel verification of U-235 mass (used during inspections to verify material declarations), or unattended measuring systems with a distinct shift register installed in an anti-tampering sealed housing (remote inspection and continuous monitoring), together with an Unattended Multichannel Analyzer for spectroscopic analysis of SNM items such as canisters. Such developments, realized with integrated mid-resolution scintillators (FWHM < 3.5%) together with organic scintillators such as stilbene detectors or sealed liquid scintillators like EJ-309 with excellent pulse shape discrimination, managed by a fast DAQ and with a high level of system integration, offer in the near term the possibility of wider deployment: reducing the form factor will facilitate their implementation in many critical parts of nuclear fuel operations as well as in the next generation of nuclear reactors. This will facilitate embedding these advanced technical solutions in the next generation of nuclear installations, assuring the implementation of the Safeguards-by-Design requested by the IAEA for all future/novel nuclear installations.
This work presents the most recent designs/systems and provides clear examples of ongoing applications in the fuel cycle (fuel fabrication) as well as for SMRs/AMRs and microreactors. Detailed technology testing and validation in different configurations is provided, together with case studies and operational implications.
Keywords: nuclear safeguards, gamma and neutron detection systems, spectroscopy analysis, nuclear fuel cycle, nuclear power reactors, decommissioning and dismantling, nuclear security
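The pulse shape discrimination mentioned above for organic scintillators such as stilbene or EJ-309 is commonly implemented as a tail-to-total charge ratio, since neutron pulses carry a larger delayed-light component than gamma pulses. The following is a minimal sketch with hypothetical digitized waveforms, gate lengths, and threshold, not the actual DAQ firmware:

```python
def psd_ratio(samples, peak_index, short_gate, long_gate):
    """Tail-to-total PSD parameter: (total - prompt) / total charge of a pulse,
    integrated over a short (prompt) and a long (full) gate from the peak."""
    total = sum(samples[peak_index:peak_index + long_gate])
    prompt = sum(samples[peak_index:peak_index + short_gate])
    return (total - prompt) / total

# Hypothetical digitized pulses (arbitrary ADC units); the neutron pulse has a
# heavier tail because of its larger slow scintillation component.
gamma_pulse   = [0, 10, 100, 60, 20,  8,  3,  1, 0, 0]
neutron_pulse = [0, 10, 100, 60, 30, 20, 14, 10, 7, 5]

g = psd_ratio(gamma_pulse, peak_index=2, short_gate=3, long_gate=8)
n = psd_ratio(neutron_pulse, peak_index=2, short_gate=3, long_gate=8)
is_neutron = n > 0.2  # hypothetical discrimination threshold
```

Events falling above the threshold in the PSD-versus-energy plane would be classified as neutrons, the rest as gammas.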
Procedia PDF Downloads 10
546 Effect of Surfactant Concentration on Dissolution of Hydrodynamically Trapped Sparingly Soluble Oil Micro Droplets
Authors: Adil Mustafa, Ahmet Erten, Alper Kiraz, Melikhan Tanyeri
Abstract:
Work presented here is based on a novel experimental technique used to hydrodynamically trap oil microdroplets inside a microfluidic chip at a junction of microchannels known as the stagnation point. Hydrodynamic trapping has recently been used to trap and manipulate a range of particles, from microbeads to DNA and single cells. Benzyl benzoate (BB) is used as the droplet material. The microdroplets are trapped individually at the stagnation point and their dissolution is observed. Experiments are performed at two concentrations (10 mM and 10 µM) of the AOT surfactant (docusate sodium salt) and two flow rates for each case. Moreover, the experimental data are compared with the Zhang-Yang-Mao (ZYM) model, which describes the dissolution of liquid microdroplets in a host fluid undergoing extensional creeping flow. Industrial processes such as polymer blending systems, in which heat or mass transport occurs, experience extensional flow, and an insight into these phenomena is of significant importance to many industrial processes. The experimental technique exploited here gives an insight into the dissolution of liquid microdroplets under an extensional flow regime. The comparison of our experimental results with the ZYM model reveals that the dissolution of microdroplets at the lower surfactant concentration (10 µM) fits the ZYM model at the saturation concentration value reported in the literature (Cs = 15×10⁻³ kg/m³), while for the higher surfactant concentration (10 mM), which is above the critical micelle concentration (CMC) of the surfactant (5 mM), the data fit the ZYM model at Cs = 45×10⁻³ kg/m³, three times the literature value. The difference in the Cs value from the literature shows an enhancement in the dissolution rate of sparingly soluble BB microdroplets at surfactant concentrations above the CMC. Enhancement in the dissolution of sparingly soluble materials is of great importance in the pharmaceutical industry.
Enhancement in the dissolution of sparingly soluble drugs is a key research area for the drug design industry. The experimental method is also advantageous because it is robust and involves no mechanical contact: the droplets under study are freely suspended in the fluid, in contrast to existing methods used for testing drug dissolution. The experiments also give an insight into CMC measurement for surfactants.
Keywords: extensional flow, hydrodynamic trapping, Zhang-Yang-Mao, CMC
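The link between the fitted saturation concentration and the dissolution rate can be illustrated with a simple diffusion-limited (Epstein-Plesset-type) estimate for a quiescent host fluid. This is not the ZYM model (which accounts for the extensional flow); it only shows that the dissolution time scales inversely with Cs, so the threefold larger fitted Cs above the CMC implies roughly threefold faster dissolution. The diffusivity below is an assumed order-of-magnitude value:

```python
def dissolution_time(R0, rho, D, Cs):
    """Diffusion-limited dissolution time of a droplet of initial radius R0:
    tau = rho * R0^2 / (2 * D * Cs), assuming zero far-field concentration."""
    return rho * R0**2 / (2 * D * Cs)

rho = 1112.0  # kg/m^3, density of benzyl benzoate
D = 5e-10     # m^2/s, assumed diffusion coefficient of BB in water
R0 = 5e-6     # m, a 5-micron droplet

tau_low = dissolution_time(R0, rho, D, 15e-3)   # Cs from the literature
tau_high = dissolution_time(R0, rho, D, 45e-3)  # fitted Cs above the CMC
# tau scales as 1/Cs: tripling Cs triples the dissolution rate
```

Under these assumptions the ratio tau_low / tau_high is exactly 3, mirroring the enhancement reported above.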
Procedia PDF Downloads 346
545 Material Response Characterisation of a PolyJet 3D Printed Human Infant Skull
Authors: G. A. Khalid, R. Prabhu, W. Whittington, M. D. Jones
Abstract:
To establish a causal relationship for infant head injury consequences, the present study addresses the necessary challenges of cranial geometry and the physical response complexities of paediatric head tissues. Herein, we describe a new approach to characterising and understanding infant head impact mechanics: developing printed head models, using high-resolution clinical post-mortem imaging to provide the most complete anatomical representation currently available, and polypropylene polymers matched to biological material response data to replicate the relative mechanical response properties of immature cranial bone, sutures and fontanelles. Additive manufacturing technology was applied to create a physical polymeric model of a newborn infant skull using PolyJet printed materials. Infant skull material responses were matched through a response characterisation study utilising uniaxial tensile testing (1 mm min⁻¹ loading rate) to determine the stiffness, ultimate tensile strength and maximum strain of rigid and rubber-like additively manufactured acrylates. The results of the mechanical experiments confirm that the polymeric materials RGD835 Vero White Plus (White), representing the frontal and parietal bones; RGD8510-DM Rigid Light Grey25 (Grey), representing the occipital bone; and FLX9870-DM (Black), representing the sutures and fontanelles, show a close stiffness (E) correlation at ambient temperature. A 3D physical model of the infant head was subsequently printed from the matched materials and validated against results obtained from a series of Post Mortem Human Surrogate (PMHS) tests. A close correlation was demonstrated between the model impact tests and the PMHS.
This study, therefore, represents a key step towards applying printed physical models to understanding head injury biomechanics, and is useful in efforts to predict and mitigate head injury consequences in infants, whether accidental or inflicted by abuse.
Keywords: infant head trauma, infant skull, material response, post mortem human subjects, PolyJet printing
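The quantities extracted from the uniaxial tensile tests above (stiffness E as the slope of the initial linear region, ultimate tensile strength, and maximum strain) can be reduced from raw stress-strain data as sketched below. The data points are hypothetical, not the study's measurements:

```python
def tensile_summary(strain, stress, linear_n=3):
    """Reduce a stress-strain curve to (E, UTS, max strain). E is the
    least-squares slope of the first linear_n points (the linear region)."""
    xs, ys = strain[:linear_n], stress[:linear_n]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    E = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return E, max(stress), max(strain)

# Hypothetical stress-strain data for a printed acrylate coupon
strain = [0.00, 0.01, 0.02, 0.04, 0.06]  # dimensionless
stress = [0.0, 20.0, 40.0, 55.0, 58.0]   # MPa
E, uts, eps_max = tensile_summary(strain, stress)
# E = 2000 MPa (2 GPa), UTS = 58 MPa, max strain = 6%
```

Matching printed materials to tissue then amounts to comparing the E values obtained this way at the same loading rate.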
Procedia PDF Downloads 141
544 Testing Supportive Feedback Strategies in Second/Foreign Language Vocabulary Acquisition between Typically Developing Children and Children with Learning Disabilities
Authors: Panagiota A. Kotsoni, George S. Ypsilandis
Abstract:
Learning an L2 is a demanding process for all students, and in particular for those with learning disabilities (LD), who demonstrate an inability to catch up with their classmates' progress in a given period of time. This area of study, i.e., examining children with learning disabilities in L2, has not yet attracted the growing interest registered in L1 and thus remains comparatively neglected. It is this scientific field that this study wishes to contribute to. The long-term purpose of this study is to locate effective Supportive Feedback Strategies (SFSs) and add to the quality of second language vocabulary learning in both typically developing (TD) and LD children. Specifically, this study investigates and compares the performance of TD and LD children on two different types of SFSs related to short- and long-term vocabulary retention. In this study, two different SFSs were applied to a total of ten (10) unknown vocabulary items. Both strategies provided morphosyntactic clarifications of new contextualized vocabulary items. The traditional (direct) SFS provided the information on a single hypertext page after a selection on the relevant item. The experimental (engaging) SFS provided the exact same information, split across three successive hypertext pages in the form of a hybrid dialogue, asking the subjects to move on to the next page by selecting the relevant link. It was hypothesized that in this way the subjects would engage in their own learning process by actively asking for more information, which would lead to better retention. The participants were fifty-two (52) foreign language learners (33 TD and 19 LD) aged 9 to 12, attending an English language school at level A1 (CEFR). The design of the study followed a typical pre-post-post test procedure, with measurements after an hour and after a week.
The results indicated statistically significant group differences, with TD children performing significantly better than the LD group in both short- and long-term memory measurements and in both SFSs. As regards the effectiveness of one SFS over the other, the initial hypothesis was not supported by the evidence, as the traditional SFS was more effective than the experimental one in both TD and LD children. This difference proved statistically significant only in the long-term memory measurement and only in the TD group. It may be concluded that the human brain adapts to different SFSs, although it shows a small preference for information provided in a direct manner.
Keywords: learning disabilities, memory, second/foreign language acquisition, supportive feedback
Procedia PDF Downloads 284
543 Performance Tests of Wood Glues on Different Wood Species Used in Wood Workshops: Morogoro Tanzania
Authors: Japhet N. Mwambusi
Abstract:
High deforestation of tropical forests for the solid wood furniture industry is among the contributing agents of climate change. This pressure is indirectly caused by furniture joint failure due to poor gluing technology, based on the improper matching of glues to wood species, which leads to low-quality and weak wood-glue joints. This study was carried out to run performance tests of wood glues on different wood species used in wood workshops in Morogoro, Tanzania, whereby three popular wood species, C. lusitanica, T. grandis and E. maidenii, were tested against five glues found on the market: Woodfix, Bullbond, Ponal, Fevicol and Coral. The findings were needed to develop a guideline for selecting the proper glue for joining a particular wood species. Random sampling was employed to interview carpenters while conducting a survey on their background, such as their education level, and to determine the factors that influence their choice of glue. A Monsanto tensometer was used to determine the bonding strength of the identified wood glues to the different wood species, following the British Standard procedure for testing wood shear strength (BS EN 205). Data obtained from interviewing carpenters were analyzed with the Statistical Package for the Social Sciences (SPSS) to allow comparison of the data, while laboratory data were compiled, related and compared using MS Excel as well as Analysis of Variance (ANOVA). Results revealed that, among the five wood glues tested in the laboratory on three different wood species, Coral performed much better, with average shear strengths of 4.18 N/mm², 3.23 N/mm² and 5.42 N/mm² for cypress, teak and eucalyptus, respectively. This indicates that, for a strong joint to be formed in all three wood species, softwood or hardwood, Coral is the first choice.
The guideline table developed from this research can be useful to carpenters for selecting the proper glue for a particular wood species so as to achieve the required glue-bond strength. This will secure the furniture market and reduce pressure on the forests, because strong joints make furniture last longer. Indeed, this can be a good strategy for slowing climate change in the tropics, which is driven in part by high deforestation for furniture production.
Keywords: climate change, deforestation, gluing technology, joint failure, wood-glue, wood species
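The ANOVA comparison used in the study can be illustrated with a minimal one-way F statistic. The replicate shear-strength values below are hypothetical, chosen near the reported Coral means (N/mm²), and do not reproduce the study's raw data:

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square, for a list of sample groups."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / N
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# Hypothetical Coral shear-strength replicates per wood species (N/mm^2)
cypress    = [4.1, 4.2, 4.25]
teak       = [3.1, 3.3, 3.3]
eucalyptus = [5.3, 5.5, 5.45]

F = one_way_anova_F([cypress, teak, eucalyptus])
```

An F value far above the critical value for (2, 6) degrees of freedom indicates that the mean shear strength genuinely differs between wood species for the same glue.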
Procedia PDF Downloads 241
542 Well Inventory Data Entry: Utilization of Developed Technologies to Progress the Integrated Asset Plan
Authors: Danah Al-Selahi, Sulaiman Al-Ghunaim, Bashayer Sadiq, Fatma Al-Otaibi, Ali Ameen
Abstract:
In light of recent changes affecting the Oil & Gas Industry, optimization measures have become imperative for all companies globally, including Kuwait Oil Company (KOC). To keep abreast of the dynamic market, a detailed Integrated Asset Plan (IAP) was developed to drive optimization across the organization, facilitated through the in-house developed software "Well Inventory Data Entry" (WIDE). This comprehensive and integrated approach enabled the centralization of all planned asset components for better well planning, performance enhancement, and continuous improvement through performance tracking and mid-term forecasting. Traditionally, this was hard to achieve, as various legacy methods were used in the past. This paper briefly describes the methods successfully adopted to meet the company's objective. IAPs were initially designed using computerized spreadsheets. However, as the captured data became more complex and the number of stakeholders requiring and updating this information grew, the need to automate the conventional spreadsheets became apparent. WIDE, already in use elsewhere in the company (namely, in the Workover Optimization project), was utilized to meet the dynamic requirements of the IAP cycle. With the growth of extensive features to enhance the planning process, the tool evolved into a centralized data hub for all asset groups and technical support functions to analyze and infer from, leading WIDE to become the reference two-year operational plan for the entire company. To achieve WIDE's goal of operational efficiency, asset groups continuously add their parameters in a series of predefined workflows, creating a structured process that allows risk factors to be flagged and helps mitigate them. The tool assigns responsibilities to all stakeholders in a way that enables continuous updates of daily performance measures and operational use.
The reliable availability of WIDE, combined with its user-friendliness and easy accessibility, created a platform of cross-functionality amongst all asset groups and technical support groups for updating the contents of their respective planning parameters. The home-grown entity was implemented across the entire company and tailored to feed into the internal processes of several stakeholders. Furthermore, the implementation of change management and root cause analysis techniques captured the dysfunctions of previous plans, which in turn improved the already existing planning mechanisms within the IAP. The detailed elucidation of the two-year plan flagged upcoming risks and foreseeable shortfalls. All results were translated into a series of developments that propelled the tool's capabilities beyond planning and into operations (such as asset production forecasts, setting KPIs, and estimating operational needs). This process exemplifies the ability and reach of applying advanced development techniques to seamlessly integrate the planning parameters of various assets and technical support groups. These techniques enable enhanced, integrated planning data workflows that ultimately lay the foundation for greater accuracy and reliability. As such, benchmarks establishing a set of standard goals are created to ensure the constant improvement of the efficiency of the entire planning and operational structure.
Keywords: automation, integration, value, communication
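The predefined-workflow idea described above, structured planning entries whose missing or conflicting parameters are flagged as risk factors, can be sketched as follows. All field names and rules are hypothetical illustrations; WIDE's actual schema and workflows are internal to KOC:

```python
# Hypothetical required planning parameters for one well-activity entry
REQUIRED = ("well", "activity", "start", "duration_days", "owner")

def flag_risks(entry, rig_busy_until=None):
    """Return a list of risk factors for a planning entry: missing required
    parameters and a simple resource (rig) availability conflict."""
    risks = []
    for field in REQUIRED:
        if field not in entry:
            risks.append(f"missing parameter: {field}")
    if rig_busy_until is not None and entry.get("start", 0) < rig_busy_until:
        risks.append("resource conflict: rig unavailable at planned start")
    return risks

# Example entry missing its 'owner' and clashing with rig availability
entry = {"well": "W-101", "activity": "workover", "start": 12, "duration_days": 5}
risks = flag_risks(entry, rig_busy_until=15)
```

Flagged entries would then be routed back to the responsible stakeholder for mitigation before the plan is finalized.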
Procedia PDF Downloads 147
541 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV
Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran
Abstract:
Ortho-rectification is the process of geometrically correcting an aerial image so that its scale is uniform. The ortho-image formed by the process is corrected for lens distortion, topographic relief, and camera tilt. It can be used to measure true distances, because it is an accurate representation of the Earth's surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired from the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map. However, it is only when the image is ortho-rectified into the same co-ordinate system as the existing map that such a comparison is possible. The video image sequences from the UAV platform must be geo-registered; that is, each video frame must carry the necessary camera information before the ortho-rectification process is performed. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic steps of the real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) finding the interior camera orientation parameters; (3) finding the relative exterior orientation parameters of the video frames with respect to each other; (4) finding the absolute exterior orientation parameters using a self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric map, which can be compared with a well-referenced existing digital map for the purposes of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria, was used to test our method.
Fifteen minutes of video and telemetry data were collected using the UAV, and the data were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the six control points measured by a GPS receiver, is between 3 and 5 meters.
Keywords: geo-referencing, ortho-rectification, video frame, self-calibration
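The four-step procedure can be sketched as a pipeline over each decompiled frame. The first-order radial distortion model (interior orientation) and the flat-scene, plane-to-plane homography (standing in for the exterior orientation and ground projection) are illustrative placeholders, not the authors' self-calibration adjustment:

```python
def undistort(pt, k1):
    """Step 2 sketch: interior orientation as first-order radial
    distortion correction of a normalized image point."""
    x, y = pt
    r2 = x * x + y * y
    return (x * (1 + k1 * r2), y * (1 + k1 * r2))

def apply_homography(H, pt):
    """Steps 3-4 sketch: exterior orientation folded into a single
    frame-to-ground homography, valid for an approximately flat scene."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Step 1 would decompile the video stream into frames; here we rectify a
# single image point from one frame with hypothetical parameters.
H = [[2.0, 0.0, 100.0],   # hypothetical frame-to-ground homography
     [0.0, 2.0, 200.0],
     [0.0, 0.0, 1.0]]
ground = apply_homography(H, undistort((0.10, 0.05), k1=-0.05))
```

Applying this mapping to every pixel of every frame, then mosaicking the results, would yield the 2-D planimetric map described above.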
Procedia PDF Downloads 478