Search results for: input mode
100 Rapid Sexual and Reproductive Health Pathways for Women Accessing Drug and Alcohol Treatment
Authors: Molly Parker
Abstract:
Unintended pregnancy rates in Australia are amongst the highest in the developed world. Women with Substance Use Disorder often engage in riskier sexual behavior with little or no contraceptive use and face disproportionately higher rates of unintended pregnancy and Sexually Transmitted Infections (STIs), while Substance Use in Pregnancy (SUP) is climbing at an alarming rate. In an inner-city Drug and Alcohol (D&A) service, significant barriers to sexual and reproductive health services have been identified, consistent with the published research. Rapid pathways were created for women seeking D&A treatment to be referred to sexual and reproductive health services for the administration of long-acting reversible contraception (LARC) and sexual health screening. For clients attending a D&A service, this is an opportunistic time to offer sexual and reproductive health services. Collaboration and multidisciplinary team input between D&A and sexual and reproductive health services are paramount, with rapid referral pathways identified as the main strategy to improve access to sexual and reproductive health support for this population. With this evidence, a rapid referral pathway was created for women using the D&A service to access LARC, particularly in view of fertility often returning once stable on D&A treatment. A closed-ended survey was used for D&A staff to identify gaps in reproductive health knowledge and views of referral accessibility. Results demonstrated a lack of knowledge of contraception and appropriate referral processes. A closed-ended survey for clients was created to establish the need for and access to services and to quantify data. Follow-up data collection will be reviewed to assess uptake of and satisfaction with the intervention among clients. Access to sexual health screening was also identified as a deficit, particularly concerning given the higher rates of STIs in this cohort. A rapid referral pathway is undergoing implementation, reducing the risks of untreated STIs both pre- and post-conception. Similarly, pre- and post-intervention structured surveys will be used to identify client satisfaction with the pathway. Although currently in progress, the research and pathway are due to be completed by December 2023. This research and the implementation of sexual and reproductive health pathways from the D&A service have significant health and well-being benefits for clients and the wider community, including possible fetal/infancy outcomes. Women now have rapid access to sexual and reproductive health services, with the aim of reducing unplanned pregnancies, the poor outcomes associated with SUP, client/staff trauma from termination of pregnancy or from the assumption of care of the child due to substance use, the financial cost of out-of-home care as required, the harm of untreated STIs to the fetus in pregnancy, and the spread of STIs in the wider community. As the evidence suggests, a streamlined referral process between D&A and sexual and reproductive health services is required, and it has received positive feedback from both clinicians and clients in improving care.
Keywords: substance use in pregnancy, drug and alcohol, substance use disorder, sexual health, reproductive health, contraception, long-acting reversible contraception, neonatal abstinence syndrome, FASD, sexually transmitted infections, sexually transmitted infections pregnancy
Procedia PDF Downloads 65

99 Feasibility of Applying a Hydrodynamic Cavitation Generator as a Method for Intensification of Methane Fermentation Process of Virginia Fanpetals (Sida hermaphrodita) Biomass
Authors: Marcin Zieliński, Marcin Dębowski, Mirosław Krzemieniewski
Abstract:
The anaerobic degradation of substrates is limited especially by the rate and effectiveness of the first (hydrolytic) stage of fermentation. This stage may be intensified through pre-treatment of the substrate aimed at disintegration of the solid phase and destruction of substrate tissues and cells. The most frequently applied criterion for evaluating disintegration outcomes is the increase in biogas recovery, owing to the possibility of its use for energy purposes and, simultaneously, recovery of the input energy consumed for pre-treatment of the substrate before fermentation. Hydrodynamic cavitation is one method of organic substrate disintegration with high implementation potential. Cavitation is the formation of discontinuity cavities filled with vapor or gas in a liquid, induced by a pressure drop to a critical value. It is induced by a varying pressure field: a void occurs in the flow in which the pressure first drops to a value close to the saturated vapor pressure and then increases. The process of cavitation conducted under controlled conditions was found to significantly improve the effectiveness of anaerobic conversion of organic substrates of various characteristics. This phenomenon allows effective damage and disintegration of cellular and tissue structures. Disintegration of structures and release of organic compounds into the dissolved phase has a direct effect on the intensification of biogas production during anaerobic fermentation, on reduced dry matter content in the post-fermentation sludge, as well as on a high degree of its hygienization and its increased susceptibility to dehydration. The hydrodynamic cavitation generator is a device whose efficiency has been confirmed both under laboratory conditions and in systems operating at technical scale. Cavitators, agitators and emulsifiers constructed and tested worldwide so far have been characterized by low efficiency and high energy demand. Many of them proved effective under laboratory conditions but failed under industrial ones. The only task successfully realized by these appliances and utilized on a wider scale is the heating of liquids, which limited their usability to heating installations. The design of the presented cavitation generator achieves satisfactory energy efficiency and enables its use under industrial conditions in depolymerization processes of biomass with various characteristics. Investigations conducted at laboratory and industrial scale confirmed the effectiveness of applying cavitation to biomass destruction. Use of the cavitation generator in laboratory studies for the disintegration of sewage sludge increased biogas production by ca. 30% and shortened the treatment process by ca. 20-25%. Shortening the technological process and increasing wastewater treatment plant effectiveness may delay investments aimed at increasing system output. The use of a mechanical cavitator and the application of a repeated cavitation process (4-6 times) enables significant acceleration of biogas production. In addition, mechanical cavitation accelerates increases in COD and VFA levels.
Keywords: hydrodynamic cavitation, pretreatment, biomass, methane fermentation, Virginia fanpetals
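As a rough illustration of the onset condition described above, the dimensionless cavitation number compares the local pressure margin over the saturated vapor pressure with the dynamic pressure. The Python sketch below computes it for a few assumed throat velocities; all operating values are illustrative and not taken from the study.

```python
# Illustrative sketch: cavitation number for a constriction in a hydraulic system.
# sigma = (p - p_v) / (0.5 * rho * v**2); cavitation is expected when sigma
# falls below a device-specific critical value (often of order 1).
# All numerical values below are assumed for illustration only.

def cavitation_number(p_pa, p_vapor_pa, rho_kg_m3, velocity_m_s):
    """Dimensionless cavitation number of a flowing liquid."""
    return (p_pa - p_vapor_pa) / (0.5 * rho_kg_m3 * velocity_m_s ** 2)

p = 101_325.0   # downstream static pressure [Pa], assumed atmospheric
p_v = 2_339.0   # saturated vapor pressure of water at 20 degrees C [Pa]
rho = 998.0     # density of water at 20 degrees C [kg/m^3]

for v in (5.0, 10.0, 20.0):  # throat velocities [m/s], assumed
    print(f"v = {v:5.1f} m/s -> sigma = {cavitation_number(p, p_v, rho, v):.2f}")
```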
Procedia PDF Downloads 435

98 The Interactive Wearable Toy "+Me", for the Therapy of Children with Autism Spectrum Disorders: Preliminary Results
Authors: Beste Ozcan, Valerio Sperati, Laura Romano, Tania Moretta, Simone Scaffaro, Noemi Faedda, Federica Giovannone, Carla Sogos, Vincenzo Guidetti, Gianluca Baldassarre
Abstract:
+me is an experimental interactive toy with the appearance of a soft, pillow-like panda. Its shape and consistency are designed to elicit emotional attachment in young children: a child can wear it around his/her neck and treat it as a companion (i.e., a transitional object). When caressed on the paws or head, the panda emits appealing, interesting outputs like colored lights or amusing sounds, thanks to embedded electronics. Such sensory patterns can be modified through a wirelessly connected tablet: by this means, an adult caregiver can adapt +me's responses to a child's reactions or requests, for example, changing the light hue or the type of sound. Control of the toy is therefore shared, as it depends on both the child (who handles the panda) and the adult (who manages the tablet and mediates the sensory input-output contingencies). These features make +me a potential tool for therapy with children with Neurodevelopmental Disorders (ND) characterized by impairments in the social area, like Autism Spectrum Disorders (ASD) and Language Disorders (LD): as a proposal, the toy could be used together with a therapist in rehabilitative play activities aimed at encouraging simple social interactions and reinforcing basic relational and communication skills. +me was tested in two pilot experiments, the first involving 15 Typically Developed (TD) children aged 8-34 months, the second involving 7 children with ASD and 7 with LD, aged 30-48 months. In both studies, during a one-to-one, ten-minute activity, a researcher/caregiver plays with the panda and encourages the child to do the same. The purpose of both studies was to ascertain the general acceptability of the device as an interesting toy, that is, an object able to capture the child's attention and to maintain a high motivation to interact with it and with the adult. Behavioral indexes for estimating the interplay between the child, +me, and the caregiver were rated from video recordings of the experimental sessions. Preliminary results show that, on average, participants from the 3 groups exhibit good engagement: they touch, caress, and explore the panda and show enjoyment when they manage to trigger luminous and sound responses. During the experiments, children tend to imitate the caregiver's actions on +me, often looking (and smiling) at him/her. Interesting behavioral differences between the TD, ASD, and LD groups were scored: for example, ASD participants produce fewer smiles, both at the panda and at the caregiver, with respect to the TD group, while LD scores stand between the ASD and TD subjects. These preliminary observations suggest that the interactive toy +me is able to raise and maintain the interest of toddlers and therefore can reasonably be used as a supporting tool during therapy, to stimulate pivotal social skills such as imitation, turn-taking, eye contact, and social smiles. Interestingly, the young age of participants, along with the behavioral differences between groups, seems to suggest a further potential use of the device: a tool for early differential diagnosis (the average age of a child
Keywords: autism spectrum disorders, interactive toy, social interaction, therapy, transitional wearable companion
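The shared-control contingency described above (the child's touches trigger lights and sounds, while the caregiver's tablet adjusts the output parameters) can be sketched in a few lines of Python; every name and value here is hypothetical and not the actual +me firmware.

```python
# Hypothetical sketch of the shared-control contingency described above:
# the child triggers outputs by touching the panda, while the caregiver's
# tablet remotely updates the output parameters. Names and values are
# illustrative only, not the actual +me firmware.

OUTPUT_PARAMS = {            # mutable state, writable from the tablet side
    "light_hue": "warm_white",
    "sound": "chime",
    "volume": 0.5,
}

TOUCH_MAP = {                # touch zone -> output type
    "left_paw": "light",
    "right_paw": "light",
    "head": "sound",
}

def on_tablet_update(key: str, value) -> None:
    """Caregiver adjusts the sensory contingencies over the wireless link."""
    OUTPUT_PARAMS[key] = value

def on_touch(zone: str) -> str:
    """Child's touch produces the currently configured output."""
    kind = TOUCH_MAP.get(zone)
    if kind == "light":
        return f"LED on, hue={OUTPUT_PARAMS['light_hue']}"
    if kind == "sound":
        return f"play '{OUTPUT_PARAMS['sound']}' at volume {OUTPUT_PARAMS['volume']}"
    return "no output"

print(on_touch("left_paw"))            # child caresses a paw
on_tablet_update("light_hue", "blue")  # caregiver changes the hue
print(on_touch("left_paw"))
```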
Procedia PDF Downloads 123

97 Satisfaction Among Preclinical Medical Students with Low-Fidelity Simulation-Based Learning
Authors: Shilpa Murthy, Hazlina Binti Abu Bakar, Juliet Mathew, Chandrashekhar Thummala Hlly Sreerama Reddy, Pathiyil Ravi Shankar
Abstract:
Simulation is defined as a technique that replaces or expands real experiences with guided experiences that interactively imitate real-world processes or systems. Simulation enables learners to train in a safe and non-threatening environment. For decades, simulation has been considered an integral part of clinical teaching and learning strategy in medical education. Several types of simulation are used in medical education and the clinical environment, including full-body mannequins, task trainers, standardized simulated patients, virtual or computer-generated simulation, and hybrid simulation, all of which can be used to facilitate learning. Simulation allows healthcare practitioners to acquire skills and experience while safeguarding patient safety. The recent COVID pandemic also led to an increase in simulation use, as there were limitations on medical student placements in hospitals and clinics. The learning is tailored to the educational needs of students to make the learning experience more valuable. Simulation in the pre-clinical years faces challenges with resource constraints, effective curricular integration, student engagement and motivation, and evidence of educational impact, to mention a few. As instructors, we may rely more on simulation for pre-clinical students, while the students' confidence levels and perceived competence remain to be evaluated. Our research question was whether the implementation of simulation-based learning positively influences preclinical medical students' confidence levels and perceived competence. This study was done to align teaching activities with the students' learning experience, to introduce more low-fidelity simulation-based teaching sessions in the pre-clinical years, and to obtain students' input into curriculum development as part of inclusivity. The study was carried out at International Medical University, involving pre-clinical year (Medical) students who began with low-fidelity simulation-based medical education in their first semester and were gradually introduced to medium fidelity as well. The Student Satisfaction and Self-Confidence in Learning Scale questionnaire from the National League for Nursing was employed to collect responses. The internal consistency reliability of the survey items was tested with Cronbach's alpha using an Excel file. IBM SPSS for Windows version 28.0 was used to analyze the data. Spearman's rank correlation was used to analyze the correlation between students' satisfaction and self-confidence in learning. The significance level was set at a p-value of less than 0.05. The results from this study have prompted the researchers to undertake a larger-scale evaluation, which is currently underway. The current results show that 70% of students agreed that the teaching methods used in the simulation were helpful and effective. The sessions depend on the learning materials provided and on how the facilitators engage the students and make the session more enjoyable. The feedback provided input on the following areas to focus on while designing simulations for pre-clinical students: quality learning materials, an interactive environment, motivating content, the skills and knowledge of the facilitator, and effective feedback.
Keywords: low-fidelity simulation, pre-clinical simulation, student satisfaction, self-confidence
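A minimal Python sketch of the two statistical steps named above, Cronbach's alpha for internal consistency and Spearman's rank correlation between satisfaction and self-confidence, using synthetic Likert-scale responses in place of the study's survey data.

```python
# Illustrative re-creation of the statistical steps described above,
# using synthetic Likert-scale responses rather than the study's data.
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=100)  # shared trait driving both scales
satisfaction = np.clip(np.round(3 + latent[:, None]
                                + rng.normal(scale=0.8, size=(100, 5))), 1, 5)
confidence = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=100)), 1, 5)

print(f"Cronbach's alpha (satisfaction items): {cronbach_alpha(satisfaction):.2f}")
rho, p = spearmanr(satisfaction.mean(axis=1), confidence)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f} (significant if p < 0.05)")
```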
Procedia PDF Downloads 78

96 Lean Comic GAN (LC-GAN): A Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices
Authors: Kaustav Mukherjee
Abstract:
In this paper, we propose a Neural Style Transfer solution whereby we have created a Lightweight Separable Convolution Kernel Based GAN Architecture (SC-GAN), which is very useful for designing filters for mobile phone cameras and edge devices that convert any image to a 2D animated comic style, as in movies like He-Man, Superman, and The Jungle Book. This will help 2D animation artists create new characters from images of real-life persons without having to spend endless hours of manual labour drawing each and every pose of a cartoon. It can even be used to create scenes from real-life images. This will greatly reduce the turnaround time to make 2D animated movies and decrease cost in terms of manpower and time. In addition, being extremely light-weight, it can be used as a camera filter capable of taking comic-style shots using mobile phone cameras or edge-device cameras such as the Raspberry Pi 4 and NVIDIA Jetson Nano. Existing methods like CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices due to their scarcity of resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which makes it ideal and ultra-efficient for designing camera filters on low-resource devices like mobile phones, tablets, and edge devices running an OS or RTOS. Owing to the use of high-resolution input and a larger convolution kernel size, it produces richer-resolution comic-style pictures with 6 times fewer parameters and just 25 extra epochs, trained on a dataset of fewer than 1000 images, which breaks the myth that all GANs need a mammoth amount of data. Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution operation on each of the channels separately; we then use a point-wise convolution with a 1-by-1 kernel to bring the network back to the required channel count. This reduces the number of parameters substantially and makes the model extremely light-weight and suitable for mobile phones and edge devices. The architecture presented in this paper makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, "Optimization for Training Deep Models", p. 320), which lets the network exploit the advantages of batch normalization for easier training while maintaining non-linear feature capture through the learnable parameters.
Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distillation, perceptual loss
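The parameter saving of the depthwise-plus-pointwise factorization described above can be made concrete with a short PyTorch sketch; the channel counts and kernel size below are illustrative and not the actual LC-GAN configuration.

```python
# Depthwise separable convolution as described above: a per-channel
# (depthwise) convolution followed by a 1x1 point-wise convolution.
# Channel counts and kernel size are illustrative, not the LC-GAN layout.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # groups=in_ch convolves each input channel separately
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # 1x1 kernel mixes channels back to the required count
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv2d(64, 128, 3, padding=1)
separable = DepthwiseSeparableConv(64, 128, 3)
print(n_params(standard), "vs", n_params(separable))  # 73856 vs 8960 weights
```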
Procedia PDF Downloads 132

95 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines
Authors: Kamyar Tolouei, Ehsan Moosavi
Abstract:
In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem involving constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally restricts the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important advances in long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue; nonetheless, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply an estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to incorporate grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and minimize the risk of deviation from the production targets, considering grade uncertainty simultaneously while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to incorporate grade uncertainty into the strategic mine schedule and to produce a more profitable and risk-based production schedule. A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model between the augmented Lagrangian relaxation (ALR) method and a metaheuristic algorithm, the Harris Hawks optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, HHO is employed to update the Lagrange coefficients. In addition, a machine learning method called Random Forest is applied to estimate gold grade in a mineral deposit. The Monte Carlo method is used as the simulation method, with 20 realizations. The results indicate that the proposed versions improve considerably on the traditional methods. The outcomes were also compared with the ALR-genetic algorithm and ALR-subgradient methods. To demonstrate the applicability of the model, a case study on an open-pit gold mining operation is implemented. The framework displays the capability to minimize risk and improve the expected net present value and financial profitability of the LTPSOP. The framework could control geological risk more effectively than the traditional procedure by considering grade uncertainty within the hybrid model framework.
Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization
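A minimal sketch of the Random Forest grade-estimation step mentioned above, with synthetic drillhole samples standing in for the real gold deposit data.

```python
# Illustrative sketch of the Random Forest grade-estimation step described
# above, with synthetic drillhole data standing in for the real deposit.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
xyz = rng.uniform(0, 1000, size=(500, 3))  # sample coordinates [m], synthetic
grade = (np.exp(-((xyz[:, 0] - 500) ** 2 + (xyz[:, 1] - 500) ** 2) / 2e5)
         + rng.lognormal(mean=-3.0, sigma=0.5, size=500))  # synthetic Au g/t

X_train, X_test, y_train, y_test = train_test_split(xyz, grade, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out samples: {model.score(X_test, y_test):.2f}")

# Predict block grades on a regular grid, e.g. for one mining bench
grid = np.array([[x, y, 100.0] for x in range(0, 1000, 50)
                               for y in range(0, 1000, 50)])
block_grades = model.predict(grid)
```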
Procedia PDF Downloads 105

94 Benzenepropanamine Analogues as Non-detergent Microbicidal Spermicide for Effective Pre-exposure Prophylaxis
Authors: Veenu Bala, Yashpal S. Chhonker, Bhavana Kushwaha, Rabi S. Bhatta, Gopal Gupta, Vishnu L. Sharma
Abstract:
According to the UNAIDS 2013 estimate, nearly 52% of all individuals living with HIV are now women of reproductive age (15-44 years). Seventy-five percent of HIV acquisitions occur through heterosexual contact and sexually transmitted infections (STIs), attributable to unsafe sexual behaviour. Each year, an estimated 500 million people acquire at least one of four STIs: chlamydia, gonorrhoea, syphilis, and trichomoniasis. Trichomonas vaginalis (TV) is exclusively sexually transmitted in adults, accounting for 30% of STI cases and associated with pelvic inflammatory disease (PID), vaginitis, and pregnancy complications in women. TV infection results in an impaired vaginal milieu, eventually favoring HIV transmission. In the absence of an effective prophylactic HIV vaccine, prevention of new infections has become a priority. It was thought worthwhile to integrate HIV prevention and reproductive health services, including protection against unintended pregnancy, for women, as both are related to unprotected sex. Initially, nonoxynol-9 (N-9) had been proposed as a spermicidal agent with microbicidal activity, but on the contrary, it increased HIV susceptibility due to its surfactant action. Thus, to address the urgent need for novel woman-controlled non-detergent microbicidal spermicides, benzenepropanamine analogues have been synthesized. At first, five benzenepropanamine-dithiocarbamate hybrids were synthesized and evaluated for their spermicidal, anti-Trichomonas, and antifungal activities, along with safety profiling on cervicovaginal cells. To further enhance the scope of the above study, benzenepropanamine was hybridized with thiourea so as to introduce anti-HIV potential. The synthesized hybrid molecules were evaluated for their reverse transcriptase (RT) inhibition, spermicidal, anti-Trichomonas, and antimicrobial activities, as well as their safety against vaginal flora and cervical cells. Simulated vaginal fluid (SVF) stability and the pharmacokinetics of the most potent compound versus N-9 were examined in female New Zealand (NZ) rabbits to observe its absorption into systemic circulation and subsequent exposure in blood plasma through the vaginal wall. The study identified the most promising compound, N-butyl-4-(3-oxo-3-phenylpropyl)piperazine-1-carbothioamide (29), which exhibited a better activity profile than N-9, showing RT inhibition (72.30%), anti-Trichomonas activity (MIC, 46.72 µM against the MTZ-susceptible strain and MIC, 187.68 µM against the resistant strain), spermicidal activity (MEC, 0.01%), and antifungal activity (MIC, 3.12-50 µg/mL) against four fungal strains. Its high safety against vaginal epithelium (HeLa cells), compatibility with vaginal flora (lactobacillus), SVF stability, and minimal vaginal absorption support its suitability for topical vaginal application. A docking study was performed to gain insight into the binding mode and interactions of the most promising compound (29) with HIV-1 reverse transcriptase; it revealed that compound (29) interacts with HIV-1 RT similarly to the standard drug Nevirapine. It may be concluded that hybridization of the benzenepropanamine and thiourea moieties resulted in a novel lead with multiple activities, including RT inhibition. Further lead optimization may yield effective vaginal microbicides having spermicidal, anti-Trichomonas, antifungal, and anti-HIV potential altogether, with enhanced safety to cervico-vaginal cells in comparison to nonoxynol-9.
Keywords: microbicidal, nonoxynol-9, reverse transcriptase, spermicide
Procedia PDF Downloads 344

93 Angiopermissive Foamed and Fibrillar Scaffolds for Vascular Graft Applications
Authors: Deon Bezuidenhout
Abstract:
Pre-seeding with autologous endothelial cells improves the long-term patency of synthetic vascular grafts to levels obtained with autografts, but is limited to a single centre due to resource, time, and other constraints. Spontaneous in vivo endothelialization would obviate the need for pre-seeding, but has been shown to be absent in man due to limited transanastomotic and fallout healing, and the lack of transmural ingrowth due to insufficient porosity. Two types of graft scaffolds with increased interconnected porosity for improved tissue ingrowth and healing are thus proposed and described. Foam-type polyurethane (PU) scaffolds with small, medium, and large interconnected pores were made by phase inversion and spherical porogen extraction, with and without additional surface modification with covalently attached heparin and subsequent loading with and delivery of growth factors. Fibrillar scaffolds were made either by standard electrospinning using degradable PU (Degrapol®) or by dual electrospinning using non-degradable PU. The latter process involves sacrificial fibres that are co-spun with structural fibres and subsequently removed to increase porosity and pore size. Degrapol samples were subjected to in vitro degradation, and all scaffold types were evaluated in vivo for tissue ingrowth and vascularization using a rat subcutaneous model. The foam scaffolds were additionally evaluated in a circulatory (rat infrarenal aortic interposition) model that allows the grafts to be anastomotically and/or ablumenally isolated to discern and determine the endothelialization mode. Foam-type grafts with large (150 µm) pores showed improved subcutaneous healing in terms of vascularization and inflammatory response over smaller pore sizes (60 and 90 µm), and vascularization of the large-porosity scaffolds was significantly increased by more than 70% by heparin modification alone, and by 150% to 400% when combined with growth factors. In the circulatory model, extensive transmural endothelialization (95±10% at 12 w) was achieved. Fallout healing was shown to be sporadic and limited in groups that were ablumenally isolated to prevent transmural ingrowth (16±30% wrapped vs. 80±20% control; p<0.002). Heparinization and GF delivery improved both mural vascularization and lumenal endothelialization. Degrapol electrospun scaffolds showed a decrease in molecular mass and corresponding tensile strength over the first 2 weeks, but very little decrease in mass over the 4-week test period. Studies on the effect of tissue ingrowth, with and without concomitant degradation of the scaffolds, are being used to develop material models for finite element modelling. In the case of the dual-spun scaffolds, the PU fibre fraction could be controlled and was shown to vary linearly with porosity (P = −0.18FF + 93.5, r² = 0.91), which in turn showed an inverse linear correlation with tensile strength and elastic modulus (r² > 0.96). Calculated compliance and burst pressures of the scaffolds increased with fibre fraction, and compliances matching the human popliteal artery (5-10 %/100 mmHg) and high burst pressures (> 2000 mmHg) could be achieved. Increasing porosity (76 to 82 and 90%) resulted in increased tissue ingrowth from 33±7 to 77±20 and 98±1% after 28 days. Transmural endothelialization of highly porous foamed grafts is achievable in a circulatory model, and the enhancement of porosity and tissue ingrowth may hold the key to the development of spontaneously endothelializing electrospun grafts.
Keywords: electrospinning, endothelialization, porosity, scaffold, vascular graft
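The quoted linear fit can be applied directly; a minimal sketch using the coefficients reported above, valid only within the measured range:

```python
# Minimal sketch using the linear relation quoted above,
# P = -0.18*FF + 93.5 (r^2 = 0.91), valid only within the measured range.

def porosity_from_fibre_fraction(ff_percent: float) -> float:
    return -0.18 * ff_percent + 93.5

def fibre_fraction_for_porosity(p_percent: float) -> float:
    return (93.5 - p_percent) / 0.18  # inverse of the fitted line

for p_target in (76, 82, 90):  # porosities studied above [%]
    ff = fibre_fraction_for_porosity(p_target)
    print(f"porosity {p_target}% -> fibre fraction ~{ff:.0f}%")
```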
Procedia PDF Downloads 296

92 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry
Authors: Dhanuj M. Gandikota
Abstract:
Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning/visualization, and ecological modeling. Machine learning (ML) holds potential for addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist in current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest-fidelity source of both canopy and below-canopy structural features, but its usage is limited in both coverage and cost, requiring manual deployment to map out large forested areas. While aerial laser scanning (ALS) remains a reliable avenue of LIDAR active remote sensing, ALS is also cost-restrictive in its deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability for the accurate construction of vegetation 3D point clouds. It provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS demonstrate technical limitations in the capture of valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called deep learning (DL) that shows promise in recent research on 3D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing through learned patterns of tree species' fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions. We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3D tree point clouds (the truth values are given by the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated both through the measurement of error from the original TLS point clouds and through the error in the extraction of key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain from using minimal locally sourced bio-inventory metric information as an input to ML systems to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3D point clouds, as well as the supported potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry
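A common way to score a reconstructed cloud against its TLS truth cloud is the symmetric Chamfer distance; this is one plausible form of the error measurement described above (the abstract does not name its exact metric), shown here with random stand-in clouds.

```python
# A common way to score reconstructed point clouds against the TLS truth
# clouds is the symmetric Chamfer distance; this sketch uses scipy's k-d
# tree for the nearest-neighbour queries (random clouds stand in for data).
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean nearest-neighbour distance, averaged over both directions."""
    d_ab, _ = cKDTree(b).query(a)   # each point of a to its nearest in b
    d_ba, _ = cKDTree(a).query(b)   # and vice versa
    return 0.5 * (d_ab.mean() + d_ba.mean())

rng = np.random.default_rng(1)
tls_cloud = rng.uniform(size=(2048, 3))            # stand-in for TLS truth
reconstructed = tls_cloud + rng.normal(scale=0.01, size=tls_cloud.shape)
print(f"Chamfer distance: {chamfer_distance(tls_cloud, reconstructed):.4f}")
```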
Procedia PDF Downloads 103

91 Practice Based Approach to the Development of Family Medicine Residents' Educational Environment
Authors: Lazzat M. Zhamaliyeva, Nurgul A. Abenova, Gauhar S. Dilmagambetova, Ziyash Zh. Tanbetova, Moldir B. Ahmetzhanova, Tatyana P. Ostretcova, Aliya A. Yegemberdiyeva
Abstract:
Introduction: There are many reasons for the weak training of family doctors in Kazakhstan: the unified national educational program is not focused on competencies, the role of the general practitioner (GP) is not clear, funding for the health care and education system is poor, teaching and assessment methods are outdated, and management is inefficient. We highlight two issues in particular. Firstly, academic teachers of family medicine (FM) in Kazakhstan do not practice as family doctors; most of them are narrow specialists (pediatricians, therapists, surgeons, etc.) who usually hold one-time consultations, while clinical mentors from practical healthcare (non-academic teachers) do not have teaching competencies, and the vast majority of them are also narrow specialists. Secondly, clinical sites (polyclinics) are unprepared for general practice and do not follow the principles of family medicine; residents do not like to be in primary health care (PHC) settings due to the chaos there, as well as the lack of the equipment necessary for mastering and consolidating practical skills. Aim: We present the concept of the family physicians' training office (FPTO), which is being created as a friendly learning environment for young general practitioners and as a means of involving academic teachers of family medicine in the practical work and innovative development of PHC. Methodology: In developing the conceptual framework and identifying practical activities, we drew on the literature, expert input, and interviews. Results: The goal of the FPTO is to create a favorable educational and clinical environment for the development of FM residents' competencies, in which residents, together with academic teachers and clinical mentors, can understand and accept the principles of family medicine, improve clinical knowledge and skills, and gain experience in improving the quality of their practice on a scientific basis. The three main areas of office activity are providing primary care to patients, improving educational services for FM residents and other medical workers, and promoting research and innovation in PHC. The office arranges for residents to see outpatients at least 50% of the time, and teachers of FM departments spend at least a quarter of their working time conducting general medical appointments alongside residents. Taking into account the educational and scientific workload, the population attached to one GP does not exceed 500 persons. The equipment of the office allows FPTO workers to perform invasive and other procedures without sending patients to other clinics. In the office, training for residents is focused on their needs and aimed at achieving the required level of competence. International methodologies and assessment tools are adapted to local conditions and evaluated for their effectiveness and acceptability. Residents and their faculty actively conduct research in the field of family medicine. Conclusions: We propose to change the learning environment in order to create teams of like-minded people and to unite residents and teachers even more for the development of family medicine. The offices will also invest resources in developing and maintaining young doctors' interest in family medicine.
Keywords: educational environment, family medicine residents, family physicians' training office, primary care research
Procedia PDF Downloads 134

90 An Engineer-Oriented Life Cycle Assessment Tool for Building Carbon Footprint: The Building Carbon Footprint Evaluation System in Taiwan
Authors: Hsien-Te Lin
Abstract:
The purpose of this paper is to introduce the BCFES (building carbon footprint evaluation system), which is an LCA (life cycle assessment) tool developed by the Low Carbon Building Alliance (LCBA) in Taiwan. A qualified BCFES for the building industry should fulfill the function of evaluating the carbon footprint throughout all stages of the life cycle of building projects, including the production, transportation and manufacturing of materials, construction, daily energy usage, renovation, and demolition. However, many existing BCFESs are too complicated and not very designer-friendly, creating obstacles to the implementation of carbon reduction policies. One of the greatest obstacles is the misapplication of the carbon footprint inventory standards PAS 2050 or ISO 14067, which are designed for mass-produced goods rather than building projects. When these product-oriented rules are applied to building projects, one must compute a tremendous amount of data on raw materials and the transportation of construction equipment throughout the construction period, based on purchasing lists and construction logs. This verification method is cumbersome by nature and unhelpful to the promotion of low carbon design. With a view to providing an engineer-oriented BCFES with pre-diagnosis functions, a component input/output (I/O) database system and a scenario simulation method for building energy are proposed herein. Most existing BCFESs base their calculations on a product-oriented carbon database for raw materials like cement, steel, glass, and wood. However, data on raw materials is meaningless for the purpose of encouraging carbon reduction design without a feedback mechanism, because an engineering project is designed not in terms of raw materials but rather of building components, such as flooring, walls, roofs, ceilings, roads, or cabinets. The LCBA Database has been compiled from existing carbon footprint databases for raw materials and architectural graphic standards. Project designers can now use the LCBA Database to conduct low carbon design in a much simpler and more efficient way. Daily energy usage throughout a building's life cycle, including air conditioning, lighting, and electric equipment, is very difficult for the building designer to predict. A good BCFES should provide a simplified and designer-friendly method to overcome this obstacle in predicting energy consumption. In this paper, the author has developed a simplified tool, the dynamic Energy Use Intensity (EUI) method, to accurately predict energy usage with simple multiplications and additions, using EUI data and the designed efficiency levels for the building envelope, AC, lighting, and electrical equipment. Remarkably simple to use, it can help designers pre-diagnose hotspots in a building's carbon footprint and further enhance low carbon designs. The BCFES-LCBA offers the advantages of an engineer-friendly component I/O database, simplified energy prediction methods, pre-diagnosis of carbon hotspots, and sensitivity to good low carbon designs, making it an increasingly popular carbon management tool in Taiwan. To date, about thirty projects have been awarded BCFES-LCBA certification, and the assessment has become mandatory in some cities.
Keywords: building carbon footprint, life cycle assessment, energy use intensity, building energy
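A minimal sketch of the "simple multiplications and additions" idea behind the dynamic EUI method; the baseline EUI values and efficiency factors below are illustrative placeholders, not the coefficients used in BCFES-LCBA.

```python
# Sketch of the dynamic EUI idea described above: annual energy use is the
# sum over systems of a baseline EUI, scaled by the designed efficiency
# level, times floor area. The EUI baselines and efficiency factors below
# are illustrative placeholders, not the BCFES-LCBA coefficients.

BASELINE_EUI = {          # kWh / (m^2 * year), assumed values
    "air_conditioning": 60.0,
    "lighting": 25.0,
    "equipment": 30.0,
}

def annual_energy_kwh(floor_area_m2: float, efficiency: dict) -> float:
    """efficiency: system -> design factor (1.0 = baseline, <1.0 = better)."""
    return sum(eui * efficiency.get(system, 1.0) * floor_area_m2
               for system, eui in BASELINE_EUI.items())

design = {"air_conditioning": 0.8, "lighting": 0.7, "equipment": 1.0}
print(f"{annual_energy_kwh(12_000, design):,.0f} kWh/year")  # 12,000 m^2 office
```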
Procedia PDF Downloads 139

89 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus
Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert
Abstract:
Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase to the planning and execution phases and on to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings don't have a BIM model. Creating a compatible BIM for existing buildings is very challenging: it requires special equipment for data capture and effort to convert these data into a BIM model. The main difficulties in such projects are defining the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic, so integrating the existing terrain that surrounds buildings into the digital model is essential for running simulations such as flood simulation, energy simulation, etc. Replicating the physical model and updating its information in real time to create its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to the type of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used to provide a 3D BIM model for the site and the existing buildings, based on the case study of the "École Spéciale des Travaux Publics (ESTP Paris)" school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and a height between 50 and 68 meters. In this work, the campus precise levelling grid is computed according to the NGF-IGN69 altimetric system, and the grid control points are computed according to the Réseau Géodésique Français RGF93 – Lambert 93 French system, with different methods: (i) land topographic surveying using a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, and (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the 4 base corners of each building, etc. Once the input data are identified, the digital model of each building is created, and the DTM is modeled as well. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC), and ReViT (RVT) will be generated. Checking the interoperability between BIM models is very important; in this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
Keywords: building information modeling, digital terrain model, existing buildings, interoperability
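The planimetric part of the georeferencing described above amounts to projecting geographic coordinates into RGF93 / Lambert-93 (EPSG:2154); a minimal pyproj sketch with an assumed sample point, not an actual campus control point:

```python
# Sketch of the planimetric georeferencing step: projecting geographic
# coordinates into the RGF93 / Lambert-93 system (EPSG:2154) used for the
# grid control points. The sample coordinate is illustrative, not an
# actual ESTP campus control point. Altimetry (NGF-IGN69) is handled
# separately by the levelling grid.
from pyproj import Transformer

to_lambert93 = Transformer.from_crs("EPSG:4326", "EPSG:2154", always_xy=True)

lon, lat = 2.33, 48.79  # assumed point in the Île-de-France area
easting, northing = to_lambert93.transform(lon, lat)
print(f"E = {easting:.2f} m, N = {northing:.2f} m (Lambert 93)")
```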
Procedia PDF Downloads 112

88 Characterizing the Spatially Distributed Differences in the Operational Performance of Solar Power Plants Considering Input Volatility: Evidence from China
Authors: Bai-Chen Xie, Xian-Peng Chen
Abstract:
China has become the world's largest energy producer and consumer, and its development of renewable energy is of great significance to global energy governance and the fight against climate change. The rapid growth of solar power in China could help it achieve its ambitious carbon peak and carbon neutrality targets early. However, the non-technical costs of solar power in China are much higher than international levels, meaning that inefficiencies are rooted in poor management and improper policy design, and that efficiency distortions have become a serious challenge to the sustainable development of the renewable energy industry. Unlike fossil energy generation technologies, the output of solar power is closely related to the volatile solar resource, and the spatial unevenness of solar resource distribution leads to potential spatial differences in efficiency. It is necessary to develop an efficiency evaluation method that considers the volatility of solar resources and to explore the mechanisms by which natural geography and the social environment shape the spatial distribution of efficiency, in order to uncover the root causes of managerial inefficiencies. The study treats solar resources as stochastic inputs, introduces a chance-constrained data envelopment analysis model combined with the directional distance function, and measures the solar resource utilization efficiency of 222 solar power plants in representative photovoltaic bases in northwestern China. Through meta-frontier analysis, we measured the characteristics of different power plant clusters, compared the differences among groups, discussed the mechanisms by which environmental factors influence inefficiency, and performed statistical tests through the system generalized method of moments. Rational siting of power plants is a systematic project that requires careful consideration of the full utilization of solar resources, low transmission costs, and guaranteed power consumption. Suitable temperature, precipitation, and wind speed can improve the working performance of photovoltaic modules; a reasonable terrain inclination can reduce land cost; and proximity to cities strongly guarantees the consumption of electricity. The density of electricity demand and of high-tech industries is more important than resource abundance, because they trigger the clustering of power plants, resulting in good demonstration and competition effects. To ensure renewable energy consumption, increased support for rural grids and encouraging direct trading between generators and neighboring users will provide solutions. The study provides proposals for improving the full life-cycle operational activities of solar power plants in China to reduce high non-technical costs and improve competitiveness against fossil energy sources.
Keywords: solar power plants, environmental factors, data envelopment analysis, efficiency evaluation
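A minimal sketch of the deterministic core of a directional distance function DEA score, computed by linear programming with synthetic plant data; the paper's full model additionally treats the solar input as stochastic via chance constraints.

```python
# Sketch of a directional distance function DEA score computed by linear
# programming. This is the deterministic core only; the paper's model
# additionally handles stochastic solar input via chance constraints.
# Data below are synthetic (inputs x, outputs y for a handful of plants).
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, o, gx, gy):
    """max beta s.t. sum(l*x) <= x_o - beta*gx, sum(l*y) >= y_o + beta*gy."""
    n = X.shape[0]
    c = np.zeros(n + 1); c[0] = -1.0            # minimize -beta
    A_in = np.hstack([gx[:, None], X.T])        # inputs: beta*gx + X'l <= x_o
    A_out = np.hstack([gy[:, None], -Y.T])      # outputs: beta*gy - Y'l <= -y_o
    A = np.vstack([A_in, A_out])
    b = np.concatenate([X[o], -Y[o]])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.x[0]                             # beta = inefficiency score

rng = np.random.default_rng(7)
X = rng.uniform(1, 10, size=(6, 2))             # 6 plants, 2 inputs
Y = rng.uniform(1, 10, size=(6, 1))             # 1 output (electricity)
for o in range(len(X)):
    beta = directional_distance(X, Y, o, gx=X[o], gy=Y[o])
    print(f"plant {o}: inefficiency beta = {beta:.3f}")  # 0 = on the frontier
```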
Procedia PDF Downloads 91

87 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit
Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili
Abstract:
Metamaterials cross classic physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, in particular Dennis Gabor's invention: holography. But the major difficulty here is the lack of a suitable recording medium. Some enhancements were therefore essential, and the 2D version of bulk metamaterials was introduced: the so-called metasurface. This new class of interfaces simplifies the problem of the recording medium, with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell's equations. In this context, integral methods are emerging as an important approach to studying electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing the boundary conditions in the form of integral equations. But solving this kind of equation becomes more complicated and time-consuming as the structural complexity increases. Here, the equivalent circuits method offers the most scalable route to developing an integral-method formulation. In fact, to ease the resolution of Maxwell's equations, the method of the Generalised Equivalent Circuit was proposed to carry the resolution from the domain of integral equations over to the domain of equivalent circuits. This technique consists of creating an electric image of the studied structure using the discontinuity plane paradigm while taking its environment into account: the electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources that do not store energy, and the environmental effects are included through an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements that combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface's building block consists of a thin gold film, a SiO₂ dielectric spacer, and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene's chemical potential on the unit-cell input impedance. It was found that the variation of the complex conductivity of graphene allows control of the phase and amplitude of the reflection coefficient at each element of the array. From the results obtained here, we determined that phase modulation is realized by adjusting graphene's complex conductivity. This modulation is a viable alternative to tuning the phase by varying the antenna length, because it offers full 2π reflection phase control.
Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain
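The tuning mechanism described above can be illustrated with the intraband (Kubo) term of graphene's complex surface conductivity as a function of chemical potential; interband contributions are neglected here, and the relaxation time is an assumed illustrative value.

```python
# Sketch of the tuning mechanism described above: the intraband (Kubo/Drude)
# term of graphene's complex surface conductivity versus chemical potential.
# Interband contributions are neglected, and the relaxation time tau is an
# assumed illustrative value.
import numpy as np

e = 1.602176634e-19      # elementary charge [C]
hbar = 1.054571817e-34   # reduced Planck constant [J s]
kB = 1.380649e-23        # Boltzmann constant [J/K]

def sigma_intraband(freq_hz, mu_c_ev, tau_s=1e-13, T=300.0):
    """Intraband surface conductivity of graphene [S] (Kubo formula term)."""
    w = 2 * np.pi * freq_hz
    mu = mu_c_ev * e
    pref = e**2 * kB * T * tau_s / (np.pi * hbar**2 * (1 - 1j * w * tau_s))
    return pref * (mu / (kB * T) + 2 * np.log(np.exp(-mu / (kB * T)) + 1))

f = 1e12  # 1 THz operating frequency, assumed
for mu_c in (0.1, 0.3, 0.5, 0.7):  # chemical potentials [eV], gate-tunable
    s = sigma_intraband(f, mu_c)
    print(f"mu_c = {mu_c:.1f} eV -> sigma = "
          f"{s.real * 1e3:.3f} + {s.imag * 1e3:.3f}j mS")
```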
Procedia PDF Downloads 176

86 Professional Working Conditions, Mental Health and Mobility in the Hungarian Social Sector: Preliminary Findings from a Multi-Method Study
Authors: Ágnes Győri, Éva Perpék, Zsófia Bauer, Zsuzsanna Elek
Abstract:
The aim of the research (funded by Hungarian national grant NFKI-FK 138315) is to examine the professional mobility, mental health, and work environment of social workers with a complex approach. Previous international and Hungarian research has pointed out that those working in the helping professions are strongly exposed to the risk of emotional, mental, and physical exhaustion due to stress. Mental and physical strain, as well as a lack of coping, can cause health problems, and their role in career change and high labor turnover has also been demonstrated. Even though satisfaction with the working conditions of those employed in the human service sector has been researched extensively in the context of the stress burden, there is a lack of large-sample international and Hungarian studies exploring the effects of profession-specific conditions. Nor has it been examined how the specific features of the social profession and mental health affect the career mobility of the professionals concerned. In our research, these factors and their correlations are analyzed by means of a mixed methodology, utilizing the benefits of netnographic big data analysis and a sector-specific quantitative survey. The netnographic analysis of open web content generated inside and outside the social profession offers a holistic overview of the influencing factors related to the mental health and work environment of social workers. On the one hand, the topics and topoi emerging in the external discourse concerning the sector are examined; on the other, the analysis focuses on mentions and streams of comments regarding the profession, burnout, stress, coping, as well as labor turnover and career changes among social professionals. The analysis focuses on new trends and changes in discourse that have emerged during and after the pandemic. In addition to the online conversation analysis, a survey of social professionals with a specific focus has been conducted; the questionnaire is based on input from the first two research phases. The applied approach underlines that the mobility paths of social professionals can only be understood if, apart from general working conditions, the specific features of social work and the effects of certain aspects of mental health (emotional-mental-physical strain, resilience) are taken into account as well. In this paper, the preliminary results from this innovative methodological mix are presented, with the aim of highlighting new opportunities and dimensions in research on social work. The study aims to fill a gap in existing research on both the methodological and empirical levels, and the Hungarian findings can create a feasible and relevant framework for further international investigation and cross-cultural comparative analysis. These results can contribute to the foundation of organizational and policy-level interventions and targeted programs whereby the risk of burnout and the rate of career abandonment can be reduced. Exploring different aspects of resilience and mapping personality strengths can be a starting point for stress-management, motivation-building, and personality-development training for social professionals.
Keywords: burnout, mixed methods, netnography, professional mobility, social work
Procedia PDF Downloads 143

85 Magnesium Nanoparticles for Photothermal Therapy
Authors: E. Locatelli, I. Monaco, R. C. Martin, Y. Li, R. Pini, M. Chiariello, M. Comes Franchini
Abstract:
Despite the many advantages of applying nanomaterials in the field of nanomedicine, increasing concerns have been expressed about their potential adverse effects on human health. There is an urgent need for novel green strategies toward materials with enhanced biocompatibility made using safe reagents. Photothermal ablation therapy, which exploits a localized heat increase of a few degrees to kill cancer cells, has appeared recently as a non-invasive and highly efficient therapy against various cancer types; however, new agents able to generate hyperthermia when irradiated are needed, and they must have proven biocompatibility in order to avoid damage to healthy tissues and prevent toxicity. Recently, there has been increasing interest in magnesium as a biomaterial: it is the fourth most abundant cation in the human body and is essential for human metabolism. However, magnesium nanoparticles (Mg NPs) have had limited diffusion due to the high reduction potential of magnesium cations, which makes NP synthesis challenging. Herein, we report the synthesis of Mg NPs and their surface functionalization to obtain a stable and biocompatible nanomaterial suitable for photothermal ablation therapy against cancer. We synthesized the Mg crystals by reducing MgCl2 with metallic lithium and exploiting naphthalene as an electron carrier: the lithium-naphthalene complex acts as the real reducing agent. Firstly, the nanocrystal particles were coated with the ligand 12-ethoxy ester dodecanehydroxamic acid and then entrapped into water-dispersible polymeric micelles (PMs) made of the FDA-approved PLGA-b-PEG-COOH copolymer using the oil-in-water emulsion technique. Later, we developed a more straightforward methodology by introducing chitosan, a highly biocompatible natural product, at the beginning of the process, simultaneously with the lithium-naphthalene complex, thus obtaining a one-pot procedure for the formation and surface modification of Mg NPs. The obtained Mg NPs were purified and fully characterized, showing diameters in the range of 50-300 nm. Notably, when coated with chitosan, the particles remained stable as a dry powder for more than 10 months. We proved the possibility of generating a temperature rise of a few to several degrees once the Mg NPs were illuminated by an 810 nm diode laser operating in continuous-wave mode: the temperature rise was significant (0-15 °C) and concentration-dependent. We then investigated the potential cytotoxicity of the Mg NPs: we used HN13 epithelial cells, derived from a head and neck squamous cell carcinoma, and the hepa1-6 cell line, derived from a hepatocellular carcinoma, and very low toxicity was observed for both nanosystems. Finally, in vivo photothermal therapy was performed on xenograft hepa1-6 tumor-bearing mice: the animals were treated with chitosan-coated Mg NPs and showed no sign of suffering after the injection. After 12 hours, the tumor was exposed to near-infrared laser light. The results clearly showed extensive damage to tumor tissue after only 2 minutes of laser irradiation at 3 W cm-1, while no damage was reported when the tumor was treated with the laser and saline alone in the control group. Despite the lower photothermal efficiency of Mg with respect to Au NPs, we consider Mg NPs a promising, safe, and green candidate for future clinical translation.
Keywords: chitosan, magnesium nanoparticles, nanomedicine, photothermal therapy
Procedia PDF Downloads 270

84 Holistic Approach to Teaching Mathematics in Secondary School as a Means of Improving Students' Comprehension of Study Material
Authors: Natalia Podkhodova, Olga Sheremeteva, Mariia Soldaeva
Abstract:
Creating favorable conditions for students’ comprehension of mathematical content is one of the primary problems in teaching mathematics in secondary school. Psychology research has demonstrated that positive comprehension becomes possible when new information becomes part of student’s subjective experience and when linkages between the attributes of notions and various ways of their presentations can be established. The fact of comprehension includes the ability to build a working situational model and thus becomes an important means of solving mathematical problems. The article describes the implementation of a holistic approach to teaching mathematics designed to address the primary challenges of such teaching, specifically, the challenge of students’ comprehension. This approach consists of (1) establishing links between the attributes of a notion: the sense, the meaning, and the term; (2) taking into account the components of student’s subjective experience -emotional and value, contextual, procedural, communicative- during the educational process; (3) links between different ways to present mathematical information; (4) identifying and leveraging the relationships between real, perceptual and conceptual (scientific) mathematical spaces by applying real-life situational modeling. The article describes approaches to the practical use of these foundational concepts. Identifying how proposed methods and technology influence understanding of material used in teaching mathematics was the research’s primary goal. The research included an experiment in which 256 secondary school students took part: 142 in the experimental group and 114 in the control group. All students in these groups had similar levels of achievement in math and studied math under the same curriculum. In the course of the experiment, comprehension of two topics -'Derivative' and 'Trigonometric functions'- was evaluated. Control group participants were taught using traditional methods. Students in the experimental group were taught using the holistic method: under the teacher’s guidance, they carried out problems designed to establish linkages between notion’s characteristics, to convert information from one mode of presentation to another, as well as problems that required the ability to operate with all modes of presentation. The use of the technology that forms inter-subject notions based on linkages between perceptional, real, and conceptual mathematical spaces proved to be of special interest to the students. Results of the experiment were analyzed by presenting students in each of the groups with a final test in each of the studied topics. The test included problems that required building real situational models. Statistical analysis was used to aggregate test results. Pierson criterion was used to reveal the statistical significance of results (pass-fail the modeling test). A significant difference in results was revealed (p < 0.001), which allowed the authors to conclude that students in the study group showed better comprehension of mathematical information than those in the control group. Also, it was revealed (used Student’s t-test) that the students of the experimental group performed reliably (p = 0.0001) more problems in comparison with those in the control group. 
The results obtained allow us to conclude that the implemented methods and techniques improved students’ comprehension and assimilation of the study material.
Keywords: comprehension of mathematical content, holistic approach to teaching mathematics in secondary school, subjective experience, technology of the formation of inter-subject notions
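As an illustration of the reported statistics, the sketch below reproduces the two tests named in the abstract on invented data: a Pearson chi-squared test on pass/fail counts from the modeling test, and Student's t-test on per-student problem counts. Only the group sizes (142 and 114) come from the abstract; all counts and scores are hypothetical.

```python
# Hedged sketch of the two reported tests; all data are invented.
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# Hypothetical pass/fail counts on the situational-modeling test
# (rows: experimental n=142, control n=114; columns: pass, fail).
table = np.array([[118, 24],
                  [67, 47]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")

# Hypothetical per-student counts of correctly solved problems.
rng = np.random.default_rng(0)
experimental = rng.normal(7.5, 1.5, 142)
control = rng.normal(6.4, 1.5, 114)
t, p_t = ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p_t:.4g}")
```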
Procedia PDF Downloads 176
83 Numerical Simulation of the Production of Ceramic Pigments Using Microwave Radiation: An Energy Efficiency Study Towards the Decarbonization of the Pigment Sector
Authors: Pedro A. V. Ramos, Duarte M. S. Albuquerque, José C. F. Pereira
Abstract:
Global warming mitigation is one of the main challenges of this century, with the net balance of greenhouse gas (GHG) emissions required to be null or negative by 2050. Industry electrification is one of the main paths to achieving carbon neutrality within the goals of the Paris Agreement. Microwave heating is becoming a popular industrial heating mechanism due to the absence of direct GHG emissions as well as its rapid, volumetric, and efficient heating. In the present study, a mathematical model is used to simulate the microwave-assisted production of two ceramic pigments at high temperatures (above 1200 °C). The two pigments studied were the yellow (Pr, Zr)SiO₂ and the brown (Ti, Sb, Cr)O₂. The chemical conversion of reactants into products was included in the model by using the kinetic triplet obtained with the model-fitting method and experimental data available in the literature. The coupling between the electromagnetic, thermal, and chemical interfaces was also included. The simulations were computed in COMSOL Multiphysics. The geometry includes a moving plunger to allow for cavity impedance matching and thus maximize the electromagnetic efficiency. To accomplish this goal, a MATLAB controller was developed to automatically search for the position of the moving plunger that guarantees maximum efficiency. The power is automatically and continuously adjusted during the transient simulation to impose a stationary regime and total conversion, the two requisites of every converged solution. Both 2D and 3D geometries were used, and a parametric study regarding the axial bed velocity and the heat transfer coefficient at the boundaries was performed. Moreover, a verification and validation study was carried out by comparing the conversion profiles obtained numerically with the experimental data available in the literature; the numerical uncertainty was also estimated to attest to the results' reliability. The results show that the model-fitting method employed in this work is a suitable tool to predict the chemical conversion of reactants into the pigment, showing excellent agreement between the numerical results and the experimental data. Moreover, it was demonstrated that higher velocities lead to higher thermal efficiencies and thus lower energy consumption during the process. This work concludes that the electromagnetic heating of materials having a high loss tangent and low thermal conductivity, like ceramic materials, may be a challenge due to the presence of hot spots, which may jeopardize the product quality or even the experimental apparatus. The MATLAB controller increased the electromagnetic efficiency by 25%, and a global efficiency of 54% was obtained for the brown titanate pigment. This work shows that electromagnetic heating will be a key technology in the decarbonization of the ceramic sector, as reductions of up to 98% in the specific GHG emissions were obtained when compared to the conventional process. Furthermore, numerical simulations appear to be a suitable technique for the design and optimization of microwave applicators, showing high agreement with experimental data.
Keywords: automatic impedance matching, ceramic pigments, efficiency maximization, high-temperature microwave heating, input power control, numerical simulation
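The impedance-matching logic described above, searching for the plunger position that maximizes electromagnetic efficiency, amounts to a one-dimensional maximization. The study's controller was written in MATLAB and coupled to COMSOL; the Python sketch below substitutes a made-up unimodal efficiency curve for the solver, so the positions and values are purely illustrative.

```python
# Minimal sketch of an automatic plunger-position search; the
# efficiency function below is a toy stand-in for the EM solver.
import math

def efficiency(position_mm: float) -> float:
    # Invented unimodal response peaking near 42 mm; a real controller
    # would query the electromagnetic simulation instead.
    return 0.9 * math.exp(-((position_mm - 42.0) / 15.0) ** 2)

def golden_section_max(f, lo: float, hi: float, tol: float = 0.1) -> float:
    """Find the position maximizing f on [lo, hi] for a unimodal f."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

best = golden_section_max(efficiency, 0.0, 100.0)
print(f"plunger at {best:.1f} mm, efficiency {efficiency(best):.3f}")
```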
Procedia PDF Downloads 138
82 Avoidance of Brittle Fracture in Bridge Bearings: Brittle Fracture Tests and Initial Crack Size
Authors: Natalie Hoyer
Abstract:
Bridges in both roadway and railway systems depend on bearings to ensure extended service life and functionality. These bearings enable proper load distribution from the superstructure to the substructure while permitting controlled movement of the superstructure. The design of bridge bearings according to Eurocode DIN EN 1337 and the relevant sections of DIN EN 1993 increasingly requires the use of thick plates, especially for long-span bridges. However, these plate thicknesses exceed the limits specified in the national annex of DIN EN 1993-2. Furthermore, compliance with the DIN EN 1993-1-10 regulations regarding material toughness and through-thickness properties necessitates further modifications. Consequently, these standards cannot be directly applied to the selection of bearing materials without supplementary guidance and design rules. In this context, a recommendation was developed in 2011 to regulate the selection of appropriate steel grades for bearing components. Prior to the initiation of the research project underlying this contribution, this recommendation had only been available as a technical bulletin. Since July 2023, it has been integrated into guideline 804 of the German railway. However, recent findings indicate that certain bridge-bearing components are exposed to high fatigue loads, which must be considered in structural design, material selection, and calculations. Therefore, the German Centre for Rail Traffic Research commissioned a research project with the objective of proposing an extension of the current standards that enables an adequate choice of steel material for bridge bearings to avoid brittle fracture, even for thick plates and components subjected to specific fatigue loads. The results obtained from theoretical considerations, such as finite element simulations and analytical calculations, are validated through large-scale component tests. Additionally, experimental observations are used to calibrate the calculation models and modify the input parameters of the design concept. Within the large-scale component tests, a brittle failure is artificially induced in a bearing component. For this purpose, an artificially generated initial defect is introduced into the specimen at a previously defined hotspot using spark erosion. Then, a dynamic load is applied until crack initiation occurs, so as to achieve realistic conditions in the form of a sharp notch similar to a fatigue crack. This initiation process continues until the crack reaches a predetermined length. Afterward, the actual test begins: the specimen is cooled with liquid nitrogen until a temperature is reached at which brittle fracture is expected. In the next step, the component is subjected to a quasi-static tensile test until failure occurs in the form of a brittle fracture. The proposed paper will present the latest research findings, including the results of the conducted component tests and the derived definition of the initial crack size in bridge bearings.
Keywords: bridge bearings, brittle fracture, fatigue, initial crack size, large-scale tests
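The interplay between initial crack size and brittle fracture in such tests is conventionally framed in linear-elastic fracture mechanics: failure is expected once the stress intensity factor of the initial crack reaches the material's low-temperature fracture toughness. The sketch below illustrates only that bookkeeping; the geometry factor, stress, crack depth, and toughness values are assumptions chosen for illustration, not values from the study.

```python
# Illustrative fracture-mechanics check (not values from the study):
# compare the stress intensity factor of an initial crack with an
# assumed fracture toughness at low temperature.
import math

def stress_intensity(sigma_mpa: float, a_mm: float, Y: float = 1.12) -> float:
    """K_I = Y * sigma * sqrt(pi * a), returned in MPa*sqrt(m)."""
    a_m = a_mm / 1000.0
    return Y * sigma_mpa * math.sqrt(math.pi * a_m)

sigma = 300.0   # assumed tensile stress, MPa
a0 = 5.0        # assumed initial crack depth, mm
K_Ic = 50.0     # assumed fracture toughness, MPa*sqrt(m)

K_I = stress_intensity(sigma, a0)
verdict = "brittle fracture expected" if K_I >= K_Ic else "below toughness"
print(f"K_I = {K_I:.1f} MPa*sqrt(m) -> {verdict}")
```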
Procedia PDF Downloads 45
81 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types
Authors: Qianxi Lv, Junying Liang
Abstract:
Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a 'complex' and 'extreme condition' among cognitive tasks, whereas consecutive interpreting (CI) does not require interpreters to share processing capacity between concurrent tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demand and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity, and sequential organization mechanisms using a self-built inter-modal corpus of transcribed simultaneous and consecutive interpretations, translated speech, and original speech texts, with a total of 321,960 running words. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is also employed. The frequency motif, a sequential unit not bound by grammar, is used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with a high cognitive load, our findings show that CI may tax cognitive resources differently, and perhaps more heavily, and hence yields more lexically and syntactically simplified output. In addition, the sequential features manifest that SI and CI organize the sequences from the source text into the output in different ways, each minimizing the cognitive load. We interpret the results within a framework in which cognitive demand is exerted on both the maintenance and coordination components of working memory. On the one hand, the information maintained in CI is inherently larger in volume compared to SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI means that interpreters keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and generate the information in ways that lessen the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures, and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity
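Two of the lexical measures named above are simple enough to state in a few lines of code. The sketch below computes type-token ratio and a crude lexical density over a toy token list; the tiny function-word list is an illustrative assumption (corpus studies normally use a full stop-word inventory and lemmatized tokens).

```python
# Minimal sketch of two lexical measures, on invented tokens.
FUNCTION_WORDS = {"the", "a", "of", "to", "and", "in", "is", "that", "it", "for"}

def type_token_ratio(tokens: list[str]) -> float:
    """Distinct word forms divided by total tokens; lower = more repetition."""
    return len(set(tokens)) / len(tokens)

def lexical_density(tokens: list[str]) -> float:
    """Share of content words (here: anything not in a small function-word list)."""
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return len(content) / len(tokens)

tokens = "the interpreter keeps a small chunk of information in the focus of attention".split()
print(f"TTR = {type_token_ratio(tokens):.2f}")
print(f"lexical density = {lexical_density(tokens):.2f}")
```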
Procedia PDF Downloads 179
80 The Potential of Rhizospheric Bacteria for Mycotoxigenic Fungi Suppression
Authors: Vanja Vlajkov, Ivana Pajčin, Mila Grahovac, Marta Loc, Dragana Budakov, Jovana Grahovac
Abstract:
The rhizosphere soil is the dynamic environment of plant roots, characterized by the high biological activity of its inhabitants. Rhizospheric bacteria are recognized as effective biocontrol agents and are considered cardinal in alternative strategies for ecological plant disease management. The need to suppress fungal pathogens is an urgent task, not only because of the direct economic losses caused by infection but also due to their ability to produce mycotoxins with harmful effects on human health. Aspergillus and Fusarium species are well-known producers of toxigenic metabolites with a high capacity to colonize crops and enter the food chain. Bacteria belonging to the Bacillus genus have been accepted as plant-beneficial species in agricultural practice and identified as plant growth-promoting rhizobacteria (PGPR). Despite their incontestable potential, the full commercialization of microbial biopesticides is still at a preliminary stage. Thus, there is a constant need to estimate the suitability of novel strains to serve as the central point of a viable bioprocess leading to the development of a market-ready product. In the present study, 76 potential producer strains were isolated from rhizosphere soil sampled from different localities in the Autonomous Province of Vojvodina, Republic of Serbia. The selective isolation process started by resuspending 1 g of each soil sample in 9 ml of saline and incubating at 28 °C for 15 minutes at 150 rpm. After homogenization, thermal treatment at 100 °C for 7 minutes was performed. Dilution series (10⁻¹ to 10⁻³) were prepared, and 500 µl of each was inoculated on nutrient agar plates and incubated at 28 °C for 48 h. Pure cultures of morphologically distinct strains indicating affiliation with the Bacillus genus were obtained by the spread-plate technique. The isolated strains were cultivated in Erlenmeyer flasks for 96 h at 28 °C and 170 rpm. The antagonistic activity screening included two phytopathogenic fungi as test microorganisms: Aspergillus sp. and Fusarium sp. Mycelial growth inhibition was estimated by testing the antimicrobial activity of the cultivation broth using the diffusion method. Against Aspergillus sp., the highest antifungal activity was recorded for the isolates Kro-4a and Mah-1a. In contrast, against Fusarium sp., the following 15 isolates exhibited the highest antagonistic effect: Par-1, Par-2, Par-3, Par-4, Kup-4, Paš-1b, Pap-3, Kro-2, Kro-3a, Kro-3b, Kra-1a, Kra-1b, Šar-1, Šar-2b and Šar-4. One-way ANOVA was performed to determine the statistical significance of the antagonists' effect on inhibition zone diameter. Duncan's multiple range test was conducted to define homogeneous groups of antagonists with the same level of statistical significance regarding the antimicrobial activity of the tested cultivation broth against the tested pathogens. The results point to significant in vitro potential of the isolated strains as biocontrol agents for the suppression of the tested mycotoxigenic fungi. Further research should include the identification and detailed characterization of the most promising isolates and the mode of action of the selected strains as biocontrol agents. Subsequent research should also involve bioprocess optimization steps to fully realize the selected strains' potential as microbial biopesticides and to design cost-effective biotechnological production.
Keywords: Bacillus, biocontrol, bioprocess, mycotoxigenic fungi
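The statistical step, one-way ANOVA over inhibition-zone diameters followed by post-hoc grouping, can be sketched as below. The isolate names come from the abstract, but the replicate measurements are invented; note that Duncan's multiple range test is not available in SciPy, so the comment points to a commonly used substitute.

```python
# Hedged sketch of the reported statistics: one-way ANOVA over
# inhibition-zone diameters (mm) for three isolates; data invented.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
kro_4a = rng.normal(24.0, 1.5, 5)   # made-up replicate measurements
mah_1a = rng.normal(22.5, 1.5, 5)
par_1 = rng.normal(18.0, 1.5, 5)

F, p = f_oneway(kro_4a, mah_1a, par_1)
print(f"F = {F:.2f}, p = {p:.4g}")
# Duncan's multiple range test (used in the study for homogeneous
# groups) is not in SciPy; statsmodels' pairwise Tukey HSD is a
# common substitute for post-hoc grouping.
```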
Procedia PDF Downloads 196
79 A Framework for Automated Nuclear Waste Classification
Authors: Seonaid Hume, Gordon Dobie, Graeme West
Abstract:
Detecting and localizing radioactive sources is a necessity for the safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, their corresponding activity, and ultimately their classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies, and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. In addition, in-cell decommissioning is still in its relative infancy, and few techniques are well developed. As with any repetitive and routine task, there is an opportunity to improve the task of classifying nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. This framework consists of five main stages: 3D spatial mapping and object detection, object classification, radiological mapping, source localisation based on gathered evidence, and finally, waste classification. The first stage of the framework, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for approaches to waste classification are made. Object detection focuses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed any industrial site; the approach can be extended to other commonly occurring primitives such as spheres and cubes. This is in preparation for stage two: characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the detected objects in order to feature-match them against an inventory of items possibly found in that nuclear cell. Many items in nuclear cells are one-offs, have limited or poor drawings available, have been modified since installation, or have complex interiors, which often and inadvertently poses difficulties when accessing certain zones and identifying waste remotely; hence, feature matching may require expert input. The third stage, radiological mapping, similarly characterizes the nuclear cell in terms of radiation fields, including the type of radiation, its activity, and its location within the nuclear cell. The fourth stage of the framework takes the visual map from stage 1, the object characterization from stage 2, and the radiation map from stage 3 and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage involves combining the evidence from the fused data sets to reveal the classification of the waste in Bq/kg, thus enabling better decision-making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case-study data drawn from a decommissioning application at a UK nuclear facility. This framework utilises recent advancements in the detection and mapping of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, more cost-effective, and safer.
Keywords: nuclear decommissioning, radiation detection, object detection, waste classification
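To make the data flow of the five stages concrete, the sketch below models the fusion and classification steps with minimal types; every class, field, and number is an illustrative assumption rather than part of the published framework.

```python
# Structural sketch of the framework's data flow; all values invented.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    shape: str            # e.g. "cylinder" (pipework is targeted first)
    dimensions_m: tuple   # estimated in the characterization stage
    material: str
    mass_kg: float

@dataclass
class RadiationReading:
    position_m: tuple     # 3D location within the cell
    activity_bq: float
    radiation_type: str

def classify_waste(obj: DetectedObject, readings: list[RadiationReading]) -> float:
    """Final stage: fuse object mass with localized activity into Bq/kg."""
    total_activity = sum(r.activity_bq for r in readings)
    return total_activity / obj.mass_kg

pipe = DetectedObject("cylinder", (0.15, 2.0), "steel", mass_kg=55.0)
nearby = [RadiationReading((1.0, 0.5, 2.1), 4.2e6, "gamma")]
print(f"specific activity: {classify_waste(pipe, nearby):.3g} Bq/kg")
```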
Procedia PDF Downloads 200
78 Numerical Prediction of Width Crack of Concrete Dapped-End Beams
Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo
Abstract:
Several methods have been utilized to study the prediction of cracking in concrete structures under loading, and finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends; it has been observed that cracks exceeding the allowable widths are unacceptable in an environment aggressive to reinforcing steel. To simulate the crack width, the discrete crack approach was adopted by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two dapped-end cases were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as vertical and horizontal rings; in the second case, 50% of the vertical stirrups connecting the dapped end to the main part of the beam were replaced by an equivalent area (vertically projected) of diagonal bars. The loading protocol consisted of applying symmetrical loading up to the service load. The models were built using the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional solid elements (SOLID65) capable of cracking in tension and crushing in compression; the Drucker-Prager yield surface was used to include the plastic deformations, and the reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods such as the nodal release technique, adopting softening relationships between the tractions and the separations, which in turn introduce a critical fracture energy, the energy required to break apart the interface surfaces; this technique is called the CZM. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The mode-I-dominated bilinear CZM assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the crack opening was characterized by the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed at the re-entrant corner where the crack initiates. To validate the proposed approach, the results obtained with this procedure were compared with the experimental tests. Good correlation between the experimental and numerical load-displacement curves was observed, and the numerical models also allowed the load-crack width curves to be obtained. In both cases, the proposed model confirmed its capability of predicting the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental for the prediction of crack width. The results regarding crack width, the load-displacement curves, and the predicted location of the crack can be considered good from a practical point of view.
Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis
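The bilinear traction-separation law at the heart of the CZM described above can be written down directly: the normal traction rises linearly to the maximum normal contact stress and then softens to zero at the debonding gap. The sketch below is generic, with parameter values chosen for illustration rather than taken from the study.

```python
# Hedged sketch of a bilinear, mode-I-dominated traction-separation law;
# all parameter values are illustrative assumptions.
def bilinear_traction(delta: float, sigma_max: float,
                      delta_0: float, delta_f: float) -> float:
    """Normal traction vs. opening displacement jump.

    Rises linearly to sigma_max at delta_0, then softens linearly to
    zero at delta_f (complete debonding). The area under the curve is
    the critical fracture energy G_c = 0.5 * sigma_max * delta_f.
    """
    if delta <= 0.0:
        return 0.0
    if delta <= delta_0:
        return sigma_max * delta / delta_0
    if delta <= delta_f:
        return sigma_max * (delta_f - delta) / (delta_f - delta_0)
    return 0.0  # fully debonded

sigma_max = 3.0e6                      # assumed max normal stress, Pa
delta_0, delta_f = 0.02e-3, 0.15e-3    # assumed gaps, m
print(f"G_c = {0.5 * sigma_max * delta_f:.1f} J/m^2")
for d in (0.01e-3, 0.02e-3, 0.08e-3, 0.2e-3):
    t = bilinear_traction(d, sigma_max, delta_0, delta_f)
    print(f"delta = {d:.2e} m -> traction = {t:.3g} Pa")
```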
Procedia PDF Downloads 167
77 The Socio-Economic Impact of the English Leather Glove Industry from the 17th Century to Its Recent Decline
Authors: Frances Turner
Abstract:
Gloves are significant physical objects, being one of the oldest forms of dress. Glove culture is part of every facet of life; its extraordinary history encompasses practicality and symbolism, reflecting a wide range of social practices. The survival of not only the gloves but also associated articles makes it possible to analyse real lives; however, so far this area has been largely neglected. Limited information is available to students, researchers, or those involved with the design and making of gloves. Several museums and independent collectors in England hold collections of gloves (some from as early as the 16th century), machinery, tools, designs and patterns, marketing materials, and significant archives which demonstrate the rich heritage of English glove design and manufacturing, a heritage of national significance and worthy of international interest. Through a glove research network, which now exists thanks to research grant funding, there is potential for the holders of glove collections to make connections and explore links between these resources to promote a stronger understanding of the significance, breadth, and heritage of the English glove industry. The network takes an interdisciplinary approach, bringing together interested parties from academia, museums, and manufacturing with expert knowledge of the production, collection, conservation, and display of English leather gloves. Academics from diverse arts and humanities disciplines benefit from the opportunities to share research and discuss ideas with network members from non-academic contexts, including museums and heritage organisations, industry, and contemporary designers. The fragmented collections, when considered in their entirety, provide an overview of English glove making since earliest times and of those who wore gloves. This paper makes connections and explores links between these resources to promote a stronger understanding of the significance, breadth, and heritage of the English glove industry. The following areas are explored: the current content and status of the individual museum collections; potential links and the sharing of information; social and cultural histories and their relationship to the history of fashion design, manufacturing, and materials; approaches to maintenance and conservation; access to the collections; and strategies for a future understanding of their national significance. The facilitation of knowledge exchange and the exploration of the collections through the network inform organisations' future strategies for the maintenance, access, and conservation of their collections. By involving industry in the network, it is possible to ensure a contemporary perspective on glove-making in addition to the input from heritage partners. The slow fashion movement, awareness of artisan craft, and how these can be preserved and adopted for glove and accessory design are also addressed. Artisan leather glove making was a skilled and significant industry in England that has now declined to the point where little production remains that utilises the specialist skills which have hardly changed since earliest times. Unless this heritage is identified and preserved for future generations, the rich cultural history of gloves may be lost.
Keywords: artisan glove-making skills, English leather gloves, glove culture, the glove network
Procedia PDF Downloads 129
76 The Ductile Fracture of Armor Steel Targets Subjected to Ballistic Impact and Perforation: Calibration of Four Damage Criteria
Authors: Imen Asma Mbarek, Alexis Rusinek, Etienne Petit, Guy Sutter, Gautier List
Abstract:
Over the past two decades, the automotive, aerospace, and army industries have been paying increasing attention to finite element (FE) numerical simulations of the fracture process of their structures. Thanks to numerical simulations, it is nowadays possible to analyze safely, and at a reduced cost, several problems involving costly and dangerous extreme loadings, such as blast or ballistic impact problems. The present paper is concerned with ballistic impact and perforation problems involving the ductile fracture of thin armor steel targets. The target fracture process usually depends on various parameters: the projectile nose shape, the target thickness and its mechanical properties, as well as the impact conditions (friction, oblique/normal impact, etc.). In this work, the investigations concern the normal impact of a conical-head-shaped projectile on thin armor steel targets. The main aim is to establish a comparative study of four fracture criteria that are commonly used in simulations of the fracture process of structures subjected to extreme loadings such as ballistic impact and perforation. Usually, damage initiation results from a complex physical process that occurs at the micromechanical scale. On a macro scale and according to the following fracture models, the variables on which the fracture depends are mainly the stress triaxiality η, the strain rate, the temperature T, and possibly the Lode angle parameter θ. The four failure criteria are: the critical strain to failure model, the Johnson-Cook model, the Wierzbicki model, and the Modified Hosford-Coulomb (MHC) model. SEM observations of the fracture surfaces of tension specimens and of armor steel targets impacted at low and high incident velocities show that the fracture of the specimens is ductile. The failure mode of the targets is petalling with crack propagation, and the fracture surfaces are covered with micro-cavities. The parameters of each ductile fracture model were identified for three armor steels, and the applicability of each criterion was evaluated using experimental investigations coupled with numerical simulations. Two loading paths were investigated in this study under a wide range of strain rates: quasi-static and intermediate uniaxial tension, and quasi-static and dynamic double shear testing, covering various values of the stress triaxiality η and of the Lode angle parameter θ. All experiments were conducted on three different armor steel specimens under quasi-static strain rates ranging from 10⁻⁴ to 10⁻¹ 1/s and at three different temperatures ranging from 297 K to 500 K, allowing the influence of temperature on the fracture process to be drawn. Intermediate tension testing was coupled with dynamic double shear experiments conducted on a Hopkinson tube device, allowing the effect of high strain rate on the damage evolution and the crack propagation to be spotted. The aforementioned fracture criteria are implemented into the FE code ABAQUS via a VUMAT subroutine and coupled with suitable constitutive relations so as to obtain reliable simulations of ballistic impact problems. The calibration of the four damage criteria, as well as a concise evaluation of the applicability of each criterion, is detailed in this work.
Keywords: armor steels, ballistic impact, damage criteria, ductile fracture, SEM
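Of the four criteria, the Johnson-Cook model has a compact closed form that illustrates how triaxiality, strain rate, and temperature enter a failure criterion: the failure strain is εf = [D1 + D2 exp(D3 η)][1 + D4 ln ε̇*][1 + D5 T*]. The sketch below encodes it with placeholder constants of the magnitude reported in the literature for common steels, not the parameters calibrated in this study.

```python
# Hedged sketch of the Johnson-Cook fracture criterion; D1..D5 are
# placeholders, not the calibrated values from this study.
import math

def jc_failure_strain(eta: float, strain_rate: float, T: float,
                      D=(0.05, 3.44, -2.12, 0.002, 0.61),
                      eps0: float = 1.0, T_room: float = 297.0,
                      T_melt: float = 1793.0) -> float:
    """eps_f = [D1 + D2*exp(D3*eta)] * [1 + D4*ln(rate*)] * [1 + D5*T*]."""
    D1, D2, D3, D4, D5 = D
    T_star = (T - T_room) / (T_melt - T_room)  # homologous temperature
    rate_star = strain_rate / eps0             # dimensionless strain rate
    return ((D1 + D2 * math.exp(D3 * eta))
            * (1 + D4 * math.log(rate_star))
            * (1 + D5 * T_star))

# Damage accumulates as D = sum(d_eps / eps_f); fracture when D reaches 1.
print(jc_failure_strain(eta=0.33, strain_rate=1e3, T=400.0))
```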
Procedia PDF Downloads 313
75 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System
Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski
Abstract:
Electrokinetic disintegration is one of the high-voltage electric methods. The design of such systems is exceptionally simple: biomass flows through a system of pipes with electrodes mounted alongside that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between the electrode surface and the pipe jacket, which is the second pole and closes the circuit. The voltage ranges from 10 to 100 kV. The electrodes are supplied from the normal single-phase power grid (230 V, 50 Hz); the current is then converted to 24 V direct current in modules serving the particular electrodes, and this current directly feeds the electrodes. The installation is completely safe because the generated current does not exceed 250 mA and the conductors are grounded. Therefore, there is no risk of electric shock to the personnel, even in the case of failure or incorrect connection. The low current values mean small energy consumption by the electrode, which is extremely low (only 35 W per electrode) compared to other methods of disintegration. The pipes with electrodes, of diameter DN150, are made of acid-proof steel and connected on both sides with 90° elbows ended with flanges. The available S and U types of pipes enable very convenient fitting of the system into existing installations and rooms, or facilitate space management in new applications. The system of pipes for electrokinetic disintegration may be installed horizontally, vertically, or askew, on special stands or directly on the wall of a room. The number of pipes and electrodes is determined by the operating conditions as well as the quantity of substrate, the type of biomass, the dry matter content, the method of disintegration (single-pass or circulatory), the mounting site, etc. The most effective method involves pre-treatment of the substrate, which may be pumped through the disintegration system on the way to the fermentation tank or recirculated in a buffered intermediate tank (substrate mixing tank). The destruction of the biomass structure in the process of electrokinetic disintegration shortens the substrate retention time in the tank and accelerates biogas production. A significant intensification of the fermentation process was observed in systems operating at technical scale, with the greatest increase in biogas production reaching 18%. A secondary effect, yet highly significant for the energy balance, is a tangible decrease in the energy input by the agitators in the tanks: owing to the reduced viscosity of the biomass after disintegration, the savings may reach 20-30% of the previously noted consumption. Other observed phenomena include a reduction in the layer of surface scum, a reduced capability of the sewage for foaming, and a successive decrease in the quantity of bottom sludge banks. Considering the above, the system for electrokinetic disintegration seems a very interesting and valuable solution among the specialist equipment offered for the processing of plant biomass, including Virginia fanpetals, before the process of methane fermentation.
Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals
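The quoted 35 W per electrode invites a quick energy balance: the specific energy input of the pre-treatment stage is simply the total electrode power divided by the substrate flow. In the sketch below, only the per-electrode power comes from the text; the electrode count and flow rate are assumptions chosen for illustration.

```python
# Back-of-the-envelope specific energy input of the disintegration
# stage; electrode count and flow rate are illustrative assumptions.
def specific_energy_kwh_per_m3(n_electrodes: int, power_w: float,
                               flow_m3_per_h: float) -> float:
    return n_electrodes * power_w / 1000.0 / flow_m3_per_h

print(specific_energy_kwh_per_m3(n_electrodes=8, power_w=35.0,
                                 flow_m3_per_h=10.0))
# -> 0.028 kWh per cubic metre of substrate, illustrating why the
#    method's energy demand is described as extremely low.
```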
Procedia PDF Downloads 377
74 Developing an Integrated Clinical Risk Management Model
Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei
Abstract:
Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that compensates for the limitations of each risk assessment and management tool with the advantages of the others. Methods: The procedure was determined in two main stages: development of an initial model during meetings with the professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a quantitative-qualitative study. For the qualitative dimension, the method of focus groups with an inductive approach was used. To evaluate the results of the qualitative study, a quantitative assessment of the two parts of the fourth phase and the seven phases of the research was conducted. Purposive and stratified sampling of the various teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through the application of activity breakdown structure, failure mode and effects analysis (FMEA), healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT) analysis, and the Eindhoven Classification Model (ECM). The model was applied to patients admitted to a day-clinic ward of a public hospital for surgery from October 2012 to June. Statistical Analysis Used: Qualitative data were analyzed through content analysis, and quantitative analysis was done through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patient admission process for surgery was broken down by the focus discussion group (FDG) members into five main phases. Then, with the adopted FMEA methodology, 85 failure modes, along with their causes, effects, and preventive capabilities, were set out in tables. The tables developed to calculate the RPN index contain three criteria for severity, two criteria for probability, and two criteria for preventability. Three failure modes were above the determined significant-risk limit (RPN > 250). After a 3-month period, patient misidentification incidents were the most frequently reported events. Each RPN criterion of the misidentification events was compared, and it was found that the RPN numbers for the three reported misidentification events varied against the scores predicted in the previous phase. The root causes identified through the fault tree were categorized with the ECM. A wrong-side surgery event was selected by the focus discussion group for a proposed improvement action. The most important causes were a lack of planning for the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system within the hospital information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the healthcare industry requires risk managers to have a multifaceted vision. Therefore, applying only retrospective or only prospective tools for risk management does not work, and each organization must provide the conditions for the potential application of these methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance.
Keywords: failure mode and effects analysis, risk management, root cause analysis, model
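The RPN arithmetic used to rank the 85 failure modes is straightforward: the severity, probability, and preventability scores are multiplied, and modes above the 250 threshold are flagged as significant risks. The failure modes and scores in the sketch below are invented for illustration; only the threshold comes from the study.

```python
# Minimal sketch of RPN ranking; failure modes and scores are invented,
# only the RPN > 250 threshold is from the study.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int        # aggregated from the severity criteria
    probability: int     # aggregated from the probability criteria
    preventability: int  # aggregated from the preventability criteria

    @property
    def rpn(self) -> int:
        return self.severity * self.probability * self.preventability

modes = [
    FailureMode("patient misidentification", 9, 6, 7),
    FailureMode("incomplete consent form", 5, 4, 3),
    FailureMode("wrong-side surgery", 10, 3, 9),
]

for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    flag = "SIGNIFICANT" if m.rpn > 250 else "acceptable"
    print(f"{m.name}: RPN = {m.rpn} ({flag})")
```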
Procedia PDF Downloads 249
73 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains
Authors: Jing Jin
Abstract:
The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among the various tiers of suppliers and OEMs. Identifying high-quality information challenges decision-makers. The application of AI/ML models necessitates access to 'high-quality' information to yield the desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information that contributes to stakeholder goals can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?", the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders' decisions regarding information quality in the UK aerospace supply chain system?" This study employs a deductive methodology rooted in positivism, utilizing a cross-sectional approach and a mono-method quantitative instrument: a questionnaire survey. Data were systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, and Tier 0.5, Tier 1, and Tier 2 suppliers. Employing robust statistical methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson's correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers emphasise strong negative influences on the security of accessing information and a negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and the quality dimensions. The moderately high density of the 'information distortion-by-information quality' network underscores the interconnected nature of these factors.
In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across the different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape.
Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry
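Two of the analyses named above are easy to sketch: Pearson's correlation between distortion and a quality dimension, and the density of the 'distortion-by-quality' network. All of the data below are invented for illustration; the Likert scale, node count, and edge count are assumptions, not figures from the study.

```python
# Hedged sketch of two of the reported analyses; all data invented.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical survey scores from one supplier tier (1-7 Likert scale).
distortion = np.array([3, 5, 4, 6, 2, 5, 6, 4, 3, 5])
accuracy   = np.array([4, 6, 5, 6, 3, 5, 7, 4, 4, 6])
r, p = pearsonr(distortion, accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")

# Density of an undirected network: observed ties / possible ties.
n_nodes, n_edges = 12, 38   # e.g. distortion factors + quality dimensions
density = 2 * n_edges / (n_nodes * (n_nodes - 1))
print(f"network density = {density:.2f}")  # ~0.58, i.e. moderately high
```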
Procedia PDF Downloads 64
72 The Influence of Screen Translation on Creative Audiovisual Writing: A Corpus-Based Approach
Authors: John D. Sanderson
Abstract:
The popularity of American cinema worldwide has contributed to the development of sociolects related to specific film genres in other cultural contexts by means of screen translation, in many cases eluding norms of usage in the target language, a process whose result has come to be known as 'dubbese'. A consequence for reception in countries where the consumption of local audiovisual fiction is far lower than that of imported American productions is that this linguistic construct is preferred, even though it differs from common everyday speech. The iconography of film genres such as science-fiction, western, or sword-and-sandal films, for instance, generates linguistic expectations in international audiences, who will accept more easily the sociolects assimilated through the continuous reception of American productions, even if the themes, locations, characters, etc., portrayed on screen may belong in origin to other cultures. And the non-normative language (e.g., calques, semantic loans) used in the preferred mode of linguistic transfer, whether translation for dubbing or subtitling, has in many cases diachronically evolved into the status of a canonized sociolect, not only accepted but also required by foreign audiences of American films. However, a remarkable step forward is taken when this typology of artificial linguistic constructs starts being used creatively by nationals of these target cultural contexts. In the case of Spain, the success of American sitcoms such as Friends in the 1990s led Spanish television scriptwriters to include in national productions lexical and syntactical indirect borrowings (Anglicisms not formally identifiable as such because they include elements from their own language) in order to target the audiences of the former. However, this commercial strategy had already taken place decades earlier, when Spain became a favored location for the shooting of foreign films in the early 1960s. The international popularity of the then newly developed sub-genre known as the Spaghetti Western encouraged Spanish investors to produce their own movies, and local scriptwriters made use of the dubbese developed nationally since the advent of sound film instead of using normative language. As a result, direct Anglicisms, as well as lexical and syntactical borrowings, made up the creative writing of these Spanish productions, which also became commercially successful. Interestingly enough, some of these films were even marketed in English-speaking countries as original westerns (some of the names of actors and directors were anglicized for that purpose) dubbed into English. The analysis of these 'back translations' also foregrounds some semantic distortions that arose in the process. In order to perform the research on these issues, a wide corpus of American films has been used, ranging chronologically from Stagecoach (John Ford, 1939) to Django Unchained (Quentin Tarantino, 2012), together with a shorter corpus of Spanish films produced during the golden age of the Spaghetti Western, from Una tumba para el sheriff (Mario Caiano; in English Lone and Angry Man, credited to William Hawkins) to Tu fosa será la exacta, amigo (Juan Bosch, 1972; in English My Horse, My Gun, Your Widow, credited to John Wood). The methodology of analysis and the conclusions reached could be applied to other genres and other cultural contexts.
Keywords: dubbing, film genre, screen translation, sociolect
Procedia PDF Downloads 171
71 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning
Authors: Xingyu Gao, Qiang Wu
Abstract:
Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database of the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain detailed information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of the patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects cover the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (SHapley Additive exPlanations) method was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance. Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study relies primarily on data from the United States Patent and Trademark Office for artificial intelligence patents; future research could consider more comprehensive data sources for AI patents from a global perspective. While the study takes various factors into account, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered, as they could have an impact on the influence of patents.
Keywords: patent influence, interpretable machine learning, predictive models, SHAP
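The core pipeline, gradient-boosted regression explained with SHAP values, can be sketched in a few lines. The features below are synthetic stand-ins for early patent indicators (claims, citations, owners), and the target is an invented proxy for technical impact; nothing here reproduces the study's data or calibrated model.

```python
# Minimal sketch of the LightGBM + SHAP pipeline on synthetic data.
import numpy as np
import lightgbm as lgb
import shap

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.poisson(3, n),    # stand-in: number of independent claims
    rng.poisson(10, n),   # stand-in: number of backward citations
    rng.poisson(2, n),    # stand-in: number of owners
])
# Invented target loosely mimicking "technical impact".
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 1, n)

model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Mean |SHAP| per feature quantifies each early indicator's contribution.
print(np.abs(shap_values).mean(axis=0))
```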
Procedia PDF Downloads 50