Search results for: testing device
114 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa
Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini
Abstract:
Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen that is present in soil, water, and food. This microbe has been recognized as a representative food-borne spoilage bacterium that can lead to many types of infections. Considering the casualties and property loss caused by P. aeruginosa, the development of a rapid and reliable technique for its detection is crucial. The whole-cell aptasensor, an emerging biosensor that uses an aptamer as a capture probe to bind to the whole cell, has attracted much attention for food-borne pathogen detection due to its convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the ‘T₂-shortening’ effect of magnetic nanoparticles in NMR measurements. Briefly, the transverse relaxation time (T₂) of neighboring water protons is shortened when magnetic nanoparticles cluster due to cross-linking upon the recognition and binding of biological targets, or simply when the concentration of the magnetic nanoparticles increases. Such shortening is related to both the state change (aggregation or dissociation) and the concentration change of the magnetic nanoparticles and can be detected using NMR relaxometry or MRI scanners. In this work, two sizes of magnetic nanoparticles, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were first separately immobilized with the anti-P. aeruginosa aptamer through 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry to capture and enrich P. aeruginosa cells. When incubated with the target, a ‘sandwich’ (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the binding of MN₄₀₀ to P. aeruginosa through aptamer recognition as well as the aggregation of MN₁₀ on the surface of P. aeruginosa.
Because MN₁₀ and MN₄₀₀ differ in saturation magnetization, and hence in their magnetic performance in a magnetic field, the MN₁₀-bacteria-MN₄₀₀ complex, as well as the unreacted MN₄₀₀ in the solution, can be quickly removed by magnetic separation; as a result, only unreacted MN₁₀ remain in the solution. The remaining MN₁₀, which are superparamagnetic and stable in a low magnetic field, serve as the signal readout for the T₂ measurement. Under optimized conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor are demonstrated by detecting real food samples and validated using plate counting methods. Requiring only two steps and less than 2 hours for the detection procedure, this robust aptasensor can detect P. aeruginosa over a wide linear range from 3.1 × 10² cfu/mL to 3.1 × 10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology testing assays. Moreover, the aptasensor has the potential to detect other bacteria or toxins by substituting suitable aptamers. Considering its excellent accuracy, feasibility, and practicality, the whole-cell aptasensor provides a promising platform for the quick, direct, and accurate determination of food-borne pathogens at the cell level.
Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time
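The quantitative readout described above maps a measured T₂ change to a bacterial concentration over the reported log-wide linear range. A minimal sketch of such a calibration, assuming a log-linear relationship between ΔT₂ and concentration; the ΔT₂ readings below are illustrative assumptions, not data from this study:

```python
import numpy as np

# The abstract reports a linear range of 3.1e2 to 3.1e7 cfu/mL.
# Assume delta-T2 varies linearly with log10(concentration) over
# that range (the readings below are made up for illustration).
concentrations = np.array([3.1e2, 3.1e3, 3.1e4, 3.1e5, 3.1e6, 3.1e7])
delta_t2_ms = np.array([12.0, 25.0, 41.0, 55.0, 70.0, 83.0])

# Fit delta_t2 = a * log10(c) + b by least squares
a, b = np.polyfit(np.log10(concentrations), delta_t2_ms, 1)

def estimate_cfu_per_ml(delta_t2):
    """Invert the calibration to estimate bacterial load from a T2 shift."""
    return 10 ** ((delta_t2 - b) / a)
```

A measured ΔT₂ within the calibrated band then yields a concentration estimate by inverting the fitted line.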
Procedia PDF Downloads 153
113 Northern Nigeria Vaccine Direct Delivery System
Authors: Evelyn Castle, Adam Thompson
Abstract:
Background: In 2013, the Kano State Primary Health Care Management Board redesigned its routine immunization supply chain from diffused pull to direct-delivery push. The redesign addressed stockouts and reduced the time health facility staff spent collecting vaccines and reporting on vaccine usage. The health care board sought the help of a third-party logistics provider (3PL) for twice-monthly deliveries from its cold store to 484 facilities across 44 local governments. eHA’s Health Delivery Systems group formed a 3PL to serve 326 of these facilities in partnership with the State. We focused on designing and implementing a technology system throughout. Basic methodologies: GIS Mapping: Planning the delivery of vaccines to hundreds of health facilities requires detailed route planning for delivery vehicles. Mapping the road networks across Kano and Bauchi with a custom routing tool provided information for the optimization of deliveries, reducing the number of kilometers driven each round by 20% and thereby reducing cost and delivery time. Direct Delivery Information System: Vaccine direct deliveries are facilitated through pre-round planning (driven by a health facility database, extensive GIS, and inventory workflow rules), a manager and driver control panel for customizing delivery routines and reporting, a progress dashboard, schedules/routes, packing lists, delivery reports, and driver data collection applications. MOVE: Last Mile Logistics Management System: MOVE has made vaccine supply information management timely, accurate, and actionable. It provides stock management workflow support, alerts management for cold chain exceptions and stockouts, and on-device analytics for health and supply chain staff. The software was built to be offline-first with a user-validated interface and experience. Deployed to hundreds of vaccine storage sites, the improved information tools help facilitate the process of system redesign and change management.
Findings: - Stock-outs reduced from 90% to 33%. - Redesigned current health systems and managed vaccine supply for 68% of Kano’s wards. - Near real-time reporting and data availability to track stock. - Paperwork burdens on health staff have been dramatically reduced. - Medicine is available when the community needs it. - Consistent vaccination dates for children under one to prevent polio, yellow fever, and tetanus. - Higher immunization rates = lower infection rates. - Hundreds of millions of Naira worth of vaccines successfully transported. - Fortnightly service to 326 facilities in 326 wards across 30 Local Government Areas. - 6,031 cumulative deliveries. - Over 3.44 million doses transported. - Minimum travel distance covered in a round of delivery is 2,000 km; the maximum is 6,297 km. - 153,409 km travelled by 6 drivers. - 500 facilities in 326 wards. - Data captured and synchronized for the first time. - Data-driven decision making now possible. Conclusion: eHA’s vaccine direct delivery has met the challenges in Kano and Bauchi States and provided a reliable delivery service of vaccinations that ensures health facilities can run vaccination clinics for children under one. eHA uses innovative technology that delivers vaccines from Northern Nigerian zonal stores straight to healthcare facilities. It has helped healthcare workers spend less time managing supplies and more time delivering care, and will be rolled out nationally across Nigeria.
Keywords: direct delivery information system, health delivery system, GIS mapping, Northern Nigeria, vaccines
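The route planning step described above orders hundreds of facility stops to minimize kilometers driven. eHA’s custom routing tool is not public, so the following is only a generic nearest-neighbour ordering sketch of that idea; the facility names and coordinates are hypothetical:

```python
import math

# Hypothetical (lon, lat) positions for a cold store and facilities.
facilities = {
    "store": (8.52, 12.00),
    "fac_a": (8.60, 12.10),
    "fac_b": (8.40, 11.90),
    "fac_c": (8.55, 12.30),
}

def dist(p, q):
    # Straight-line distance; a real router would use road-network distances.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def plan_route(start, stops):
    """Greedy nearest-neighbour ordering of delivery stops from the cold store."""
    route, remaining = [start], set(stops)
    while remaining:
        here = facilities[route[-1]]
        nxt = min(remaining, key=lambda s: dist(here, facilities[s]))
        route.append(nxt)
        remaining.remove(nxt)
    return route

route = plan_route("store", ["fac_a", "fac_b", "fac_c"])
```

A production planner would refine such an initial ordering (e.g. with 2-opt swaps) and use real road distances, but the greedy pass already illustrates how route ordering cuts total kilometers.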
Procedia PDF Downloads 374
112 Imaging Spectrum of Central Nervous System Tuberculosis on Magnetic Resonance Imaging: Correlation with Clinical and Microbiological Results
Authors: Vasundhara Arora, Anupam Jhobta, Suresh Thakur, Sanjiv Sharma
Abstract:
Aims and Objectives: Intracranial tuberculosis (TB) is one of the most devastating manifestations of TB and a challenging public health issue of considerable importance and magnitude the world over. This study elaborates the imaging spectrum of neurotuberculosis on magnetic resonance imaging (MRI) in 29 clinically suspected cases from a tertiary care hospital. Materials and Methods: A prospective hospital-based evaluation of the MR imaging features of neurotuberculosis in 29 clinically suspected cases was carried out in the Department of Radio-diagnosis, Indira Gandhi Medical Hospital, from July 2017 to August 2018. MR images were obtained on a 1.5 T Magnetom Avanto machine and were analyzed to identify any abnormal meningeal enhancement or parenchymal lesions. Microbiological and biochemical CSF analysis was performed in radiologically suspected cases, and the results were compared with the imaging data. Clinical follow-up of the patients started on anti-tuberculous treatment was done to evaluate the response to treatment and the clinical outcome. Results: The age of the patients ranged from 1 year to 73 years, with a mean age at presentation of 11.5 years. No significant difference in the distribution of cerebral tuberculosis was noted between the two genders. Imaging findings of neurotuberculosis were varied and non-specific, ranging from leptomeningeal enhancement and cerebritis to space-occupying lesions such as tuberculomas and tubercular abscesses. Complications presenting as hydrocephalus (n=7) and infarcts (n=9) were noted in a few of these patients. All 29 patients showed radiological suspicion of CNS tuberculosis: meningitis alone was observed in 11 cases, tuberculomas alone in 4 cases, and meningitis with parenchymal tuberculomas in 11 cases. A tubercular abscess and cerebritis were observed in one case each, and tuberculous arachnoiditis was noted in one patient.
GeneXpert positivity was obtained in 11 out of 29 radiologically suspected patients; none of the patients showed culture positivity. The meningeal form of the disease alone showed the highest GeneXpert positivity rate (n=5), followed by the combination of meningeal and parenchymal forms of the disease (n=4). The parenchymal manifestation of the disease alone showed the lowest positivity rate (n=3) with GeneXpert testing. All 29 patients were started on anti-tubercular treatment based on radiological suspicion of the disease, with clinical improvement observed in 27 treated patients. Conclusions: In our study, a higher incidence of neurotuberculosis was noted in the paediatric population, with predominance of the meningeal form of the disease. GeneXpert positivity was low due to the paucibacillary nature of cerebrospinal fluid (CSF), with even lower positivity of CSF samples in the parenchymal form of the manifestation. MRI showed high accuracy in detecting CNS lesions in neurotuberculosis. Hence, it can be concluded that MRI plays a crucial role in diagnosis because of its inherent sensitivity and specificity and is an indispensable imaging modality. It caters to the need for early diagnosis, given the poor sensitivity of microbiological tests, especially in the parenchymal manifestation of the disease.
Keywords: neurotuberculosis, tubercular abscess, tuberculoma, tuberculous meningitis
Procedia PDF Downloads 173
111 Active Learning through a Game Format: Implementation of a Nutrition Board Game in Diabetes Training for Healthcare Professionals
Authors: Li Jiuen Ong, Magdalin Cheong, Sri Rahayu, Lek Alexander, Pei Ting Tan
Abstract:
Background: Previous programme evaluations from the diabetes training programme conducted in Changi General Hospital revealed that healthcare professionals (HCPs) are keen to receive advanced diabetes training and education, specifically in medical nutrition therapy. HCPs also expressed a preference for interactive activities over didactic teaching methods to enhance their learning. Since the War on Diabetes was initiated by MOH in 2016, HCPs have been challenged to be actively involved in continuous education so as to be better equipped to reduce the growing burden of diabetes. Hence, streamlining training to incorporate an element of fun is of utmost importance. Aim: The nutrition programme incorporates game play using an interactive board game that aims to provide a more conducive and less stressful environment for learning. The board game could be adapted for the training of community HCPs, health ambassadors, or caregivers to cope with the increasing demand of diabetes care in hospital and community settings. Methodology: The stages for a game’s conception (Jaffe, 2001) were adopted in the development of the interactive board game ‘Sweet Score™’. Nutrition concepts and topics in diabetes self-management are embedded into the game elements at varying levels of difficulty (‘Easy,’ ‘Medium,’ ‘Hard’), including activities such as: a) drawing/sculpting (Pictionary-like); b) facts/knowledge (MCQs, true or false, word definitions); c) performing (charades). To study the effects of game play on knowledge acquisition and perceived experiences, participants were randomised into two groups, i.e., a lecture group (control) and a game group (intervention), to test the difference. Results: Participants in both groups (control group, n=14; intervention group, n=13) attempted a pre- and post-workshop quiz to assess the effectiveness of knowledge acquisition. The scores were analysed using a paired t-test.
There was an improvement in quiz scores after attending the game play (mean difference: 4.3, SD: 2.0, P<0.001) and the lecture (mean difference: 3.4, SD: 2.1, P<0.001). However, there was no significant difference in the improvement of quiz scores between game play and lecture (mean difference: 0.9, 95% CI: -0.8 to 2.5, P=0.280). This suggests that game play may be as effective as a lecture in terms of knowledge transfer. All 13 HCPs who participated in the game rated 4 out of 5 on the Likert scale for a favourable learning experience and the relevance of the learning to their job, whereas only 8 out of 14 HCPs in the lecture group reported a high rating in both aspects. Conclusion: There is no known board game currently designed for diabetes training for HCPs. Evaluative data from future training can provide insights and direction to improve the game format and cover other aspects of diabetes management such as self-care, exercise, medications, and insulin management. Further testing of the board game to ensure the learning objectives are met is important and can assist in the development of a well-designed digital game as an alternative training approach during the COVID-19 pandemic. Learning through game play increases opportunities for HCPs to bond, interact, and learn in a relaxed social setting and potentially brings more joy to the workplace.
Keywords: active learning, game, diabetes, nutrition
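The within-group improvements above come from paired t-tests on pre- and post-workshop quiz scores. A minimal sketch of that computation with hypothetical scores (the study’s raw data are not published), computing the t statistic from the per-participant differences:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical pre/post quiz scores for one group of eight participants.
pre = [10, 12, 9, 11, 13, 8, 10, 12]
post = [14, 16, 13, 15, 17, 13, 14, 18]

# Paired t-test works on the per-participant differences.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)

# t = mean(d) / (sd(d) / sqrt(n)), compared to a t distribution with n-1 df.
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))
```

The resulting t statistic is compared to the critical value for n-1 degrees of freedom (about 2.36 for df=7 at a two-sided 0.05 level); in practice one would call `scipy.stats.ttest_rel(pre, post)` to get the p-value directly.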
Procedia PDF Downloads 175
110 Epidemiological Patterns of Pediatric Fever of Unknown Origin
Authors: Arup Dutta, Badrul Alam, Sayed M. Wazed, Taslima Newaz, Srobonti Dutta
Abstract:
Background: In today's world, with modern science and contemporary technology, many diseases can be quickly identified and ruled out, but fever of unknown origin (FUO) in children still presents diagnostic difficulties in clinical settings. Any fever that reaches 38 °C and lasts for more than seven days without a known cause is now classified as a fever of unknown origin (FUO). Despite tremendous progress in the medical sector, FUO persists as a major health issue and a major contributor to morbidity and mortality, particularly in children, and its spectrum is sometimes unpredictable. The etiology is influenced by geographic location, age, socioeconomic level, frequency of antibiotic resistance, and genetic vulnerability. Since there are currently no established diagnostic algorithms, doctors must evaluate each patient individually with extreme caution. A persistent fever poses difficulties for both the patient and the doctor. This prospective observational study was carried out in a Bangladeshi tertiary care hospital from June 2018 to May 2019 with the goal of identifying the epidemiological patterns of fever of unknown origin in pediatric patients. Methods: It was a hospital-based prospective observational study carried out on 106 children (between 2 months and 12 years of age) with prolonged fever of >38.0 °C lasting for more than 7 days without a clear source. Children with additional chronic diseases or known immunodeficiency disorders were excluded. Clinical practices that helped determine the definitive etiology were assessed. Initial testing included a complete blood count, a routine urine examination, PBF, a chest X-ray, CRP measurement, blood cultures, serology, and additional pertinent investigations. The analysis focused mostly on the etiological results. All study data were analyzed with the standard program SPSS 21.
Findings: A total of 106 patients identified as having FUO were assessed, with over half (57.5%) being female and the largest group (40.6%) falling within the 1 to 3-year age range. The study categorized the etiological outcomes into five groups: infections, malignancies, connective tissue conditions, miscellaneous, and undiagnosed. In the study group, infections were found to be the main cause, accounting for 44.3% of cases; undiagnosed cases came in at 31.1%, malignancies at 10.4%, miscellaneous causes at 8.5%, and connective tissue disorders at 4.7%. Hepato-splenomegaly was seen in patients with enteric fever, malaria, acute lymphoid leukemia, lymphoma, and hepatic abscesses, either by itself or in combination with other conditions. About 53% of undiagnosed patients also had hepato-splenomegaly. Conclusion: Infections are the primary cause of pyrexia of unknown origin (PUO) in children, with undiagnosed cases being the second most common category. An incremental approach is beneficial in the diagnostic process: non-invasive examinations are used to diagnose infections and connective tissue disorders, while invasive investigations are used to diagnose cancer and other ailments. According to this study, the prevalence of undiagnosed disease remains remarkable, so an extensive history and physical examination are necessary in order to arrive at a precise diagnosis.
Keywords: children, diagnostic challenges, fever of unknown origin, pediatric fever, undiagnosed diseases
Procedia PDF Downloads 31
109 Investigation of Physical Properties of Asphalt Binder Modified by Recycled Polyethylene and Ground Tire Rubber
Authors: Sajjad H. Kasanagh, Perviz Ahmedzade, Alexander Fainleib, Taylan Gunay
Abstract:
Modification of asphalt is a fundamental method used around the world, mainly for the purpose of providing more durable pavements, which diminishes repair costs during the lifetime of highways. Various polymers such as styrene-butadiene-styrene (SBS) and ethylene vinyl acetate (EVA) make up the greater part of asphalt modifiers overall, generally providing better physical properties by decreasing the temperature dependency of asphalt, which eventually diminishes permanent deformation on highways, such as rutting. However, waste and low-cost materials such as recycled plastics and ground tire rubber have been tried as asphalt modifiers in place of manufactured polymer modifiers in order to decrease the eventual highway cost. On the other hand, the use of recycled plastics has become a worldwide requirement, driven by the awareness of the need to decrease the pollution caused by waste plastics. Hence, many research teams have targeted finding areas in which recycled plastics could be utilized, so as to reduce polymer manufacturing and plastic pollution. To this end, in this paper, a thermoplastic dynamic vulcanizate (TDV) obtained from recycled post-consumer polyethylene and ground tire rubber (GTR) was used to provide an efficient modifier for asphalt that decreases the production cost as well, and might finally provide an ecological solution by diminishing polymer disposal problems. The TDV was synthesized by the chemists in the research group from the abovementioned components, which are considered compatible with the physical characteristics of asphalt materials. TDV-modified asphalt samples with proportions of 3, 4, 5, 6, and 7 wt.% TDV modifier were prepared. Conventional tests, such as penetration, softening point, and rolling thin film oven (RTFO) tests, were performed to obtain the fundamental physical and aging properties of the base and modified binders.
The high-temperature performance grade (PG) of the binders was determined by Superpave tests conducted on original and aged binders. The multiple stress creep and recovery (MSCR) test, a relatively up-to-date method for classifying asphalts that takes account of their elastic properties, was carried out to evaluate the PG-plus grades of the binders. The results obtained from performance grading and MSCR tests were also evaluated together so as to make a comparison between the two methods, both of which aim to determine the rheological parameters of asphalt. The test results revealed that TDV modification leads to a decrease in penetration and an increase in softening point, which indicates an increased stiffness of the asphalt. Dynamic shear rheometer (DSR) results indicate an improvement in PG for the modified binders compared to the base asphalt. MSCR results, which are compatible with the DSR results, also indicate an enhancement of the rheological properties of the asphalt. However, according to the results, the improvement is not as distinct as that observed in the DSR results, since elastic properties are fundamental in MSCR. At the end of the testing program, it can be concluded that TDV can be used as a modifier that provides better rheological properties for asphalt and might diminish plastic waste pollution, since the material is 100% recycled.
Keywords: asphalt, ground tire rubber, recycled polymer, thermoplastic dynamic vulcanizate
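MSCR classification rests on two quantities per creep-recovery cycle: the percent recovery and the non-recoverable creep compliance (Jnr). A sketch of those two standard definitions for a single cycle; the stress and strain values below are illustrative, not measurements from this study:

```python
# One MSCR cycle: 1 s of creep at a fixed shear stress, then 9 s of recovery.
stress_kpa = 3.2          # applied creep stress (the standard uses 0.1 and 3.2 kPa)
strain_initial = 0.0      # strain at the start of the cycle
strain_peak = 0.060       # strain at the end of the 1 s creep phase
strain_recovered = 0.045  # strain remaining at the end of the 9 s recovery phase

# Percent recovery: fraction of the creep strain recovered after unloading.
creep_strain = strain_peak - strain_initial
percent_recovery = 100.0 * (strain_peak - strain_recovered) / creep_strain

# Non-recoverable creep compliance: residual strain per unit stress (1/kPa).
jnr = (strain_recovered - strain_initial) / stress_kpa
```

In the full test these quantities are averaged over ten cycles at each stress level; higher percent recovery and lower Jnr indicate the stronger elastic response that polymer modification is meant to provide.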
Procedia PDF Downloads 220
108 The Impact of ChatGPT on the Healthcare Domain: Perspectives from Healthcare Majors
Authors: Su Yen Chen
Abstract:
ChatGPT has shown both strengths and limitations in clinical, educational, and research settings, raising important concerns about accuracy, transparency, and ethical use. Despite an improved understanding of user acceptance and satisfaction, there is still a gap in how general AI perceptions translate into practical applications within healthcare. This study focuses on examining the perceptions of ChatGPT's impact among 266 healthcare majors in Taiwan, exploring its implications for their career development, as well as its utility in clinical practice, medical education, and research. By employing a structured survey with precisely defined subscales, this research aims to probe the breadth of ChatGPT's applications within healthcare, assessing both the perceived benefits and the challenges it presents. Additionally, to further enhance the comprehensiveness of our methodology, we have incorporated qualitative data collection methods, which provide complementary insights to the quantitative findings. The findings from the survey reveal that perceptions and usage of ChatGPT among healthcare majors vary significantly, influenced by factors such as its perceived utility, risk, novelty, and trustworthiness. Graduate students and those who perceive ChatGPT as more beneficial and less risky are particularly inclined to use it more frequently. This increased usage is closely linked to significant impacts on personal career development. Furthermore, ChatGPT's perceived usefulness and novelty contribute to its broader impact within the healthcare domain, suggesting that both innovation and practical utility are key drivers of acceptance and perceived effectiveness in professional healthcare settings. Trust emerges as an important factor, especially in clinical settings where the stakes are high. The trust that healthcare professionals place in ChatGPT significantly affects its integration into clinical practice and influences outcomes in medical education and research. 
The reliability and practical value of ChatGPT are thus critical for its successful adoption in these areas. However, an interesting paradox arises with regard to ease of use. While making ChatGPT more user-friendly is generally seen as beneficial, it also raises concerns among users who have lower levels of trust and perceive higher risks associated with its use. This complex interplay between ease of use and safety concerns necessitates a careful balance, highlighting the need for robust security measures and clear, transparent communication about how AI systems work and their limitations. The study suggests several strategic approaches to enhance the adoption and integration of AI in healthcare. These include targeted training programs for healthcare professionals to increase familiarity with AI technologies, reduce perceived risks, and build trust. Ensuring transparency and conducting rigorous testing are also vital to foster trust and reliability. Moreover, comprehensive policy frameworks are needed to guide the implementation of AI technologies, ensuring high standards of patient safety, privacy, and ethical use. These measures are crucial for fostering broader acceptance of AI in healthcare, as the study contributes to enriching the discourse on AI's role by detailing how various factors affect its adoption and impact.
Keywords: ChatGPT, healthcare, survey study, IT adoption, behaviour, application, concerns
Procedia PDF Downloads 32
107 Development of a Psychometric Testing Instrument Using Algorithms and Combinatorics to Yield Coupled Parameters and Multiple Geometric Arrays in Large Information Grids
Authors: Laith F. Gulli, Nicole M. Mallory
Abstract:
The undertaking to develop a psychometric instrument is monumental. Understanding the relationship between variables and events is important in the structural and exploratory design of psychometric instruments. Considering this, we describe a method used to group, pair, and combine multiple Philosophical Assumption statements that assisted in the development of a 13-item psychometric screening instrument. We abbreviated our Philosophical Assumptions (PAs) and added parameters, which were then condensed and mathematically modeled in a specific process. This model produced clusters of combinatorics, which were utilized in design and development for 1) information retrieval and categorization, 2) item development, and 3) estimation of interactions among variables and the likelihood of events. The psychometric screening instrument measured Knowledge, Assessment (education), and Beliefs (KAB) of New Addictions Research (NAR), which we called KABNAR. We obtained an overall internal consistency for the seven Likert belief items, as measured by a Cronbach’s α of .81, in the final study of 40 clinicians, calculated with SPSS 14.0.1 for Windows. We constructed the instrument to begin with demographic items (degrees/addictions certifications) for identification of target populations that practiced within Outpatient Substance Abuse Counseling (OSAC) settings. We then devised education items, belief items (seven items), and a modifiable “barrier from learning” item that consisted of six “choose any” choices. We also conceptualized a close relationship between identifying the various degrees and certifications held by Outpatient Substance Abuse Therapists (OSAT) (the demographics domain) and all aspects of their education related to EB-NAR (past and present education and desired future training). We placed a descriptive (PA)1tx in both the demographic and education domains to trace relationships of therapist education within these two domains.
The two perception domains B1/b1 and B2/b2 represented different but interrelated perceptions from the therapist’s perspective. The belief items measured therapist perceptions concerning EB-NAR and therapist perceptions of using EB-NAR during the beginning of outpatient addictions counseling. The PAs were written in simple words and were descriptively accurate and concise. We then devised a list of parameters, appropriately matched them to each PA, and devised descriptive parametric PAs in a domain-categorized information grid. Descriptive parametric PAs were reduced to simple mathematical symbols. This made it easy to use the parametric PAs in algorithms, combinatorics, and clusters to develop larger information grids. Using matching combinatorics, we took paired demographic and education domains with a subscript of 1 and matched them to the column of each B domain with subscript 1. Our algorithmic matching formed larger information grids with organized clusters in columns and rows. We repeated the process using different demographic, education, and belief domains and devised multiple information grids with different parametric clusters and geometric arrays. We found benefit in combining clusters through different geometric arrays, which enabled us to trace parametric variables and concepts. We were able to understand potential differences between dependent and independent variables and trace relationships of maximum likelihood.
Keywords: psychometric, parametric, domains, grids, therapists
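The internal consistency figure reported above (Cronbach’s α of .81 for the seven Likert belief items) follows the standard formula α = (k/(k-1))·(1 − Σ item variances / total-score variance). A minimal sketch of that computation with hypothetical Likert responses (the KABNAR raw data are not available):

```python
from statistics import pvariance

# Hypothetical 1-5 Likert responses: rows = respondents, columns = the
# seven belief items. These values are illustrative only.
scores = [
    [4, 5, 4, 3, 4, 5, 4],
    [3, 4, 3, 3, 3, 4, 3],
    [5, 5, 4, 4, 5, 5, 5],
    [2, 3, 2, 2, 3, 3, 2],
    [4, 4, 4, 3, 4, 4, 4],
]

k = len(scores[0])  # number of items

# Variance of each item across respondents, and of the total score.
item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
total_var = pvariance([sum(row) for row in scores])

# Cronbach's alpha: internal consistency of the k-item scale.
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

With real survey data this reproduces what SPSS reports under “Reliability Analysis”; values around .8, as in the study, are conventionally taken as good internal consistency.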
Procedia PDF Downloads 279
106 Creation of a Test Machine for the Scientific Investigation of Chain Shot
Authors: Mark McGuire, Eric Shannon, John Parmigiani
Abstract:
Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of harvester saw chain can cause the resulting open chain loop to fracture a second time due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can result in a projectile consisting of one to several chain links. This projectile is referred to as a chain shot. It travels at speeds similar to a bullet's but typically has greater mass, and it is a significant safety concern. Numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine with the capability to cause chain shot to occur under carefully controlled conditions and to accurately measure the response. Worldwide, few such test machines exist. Those that do focus on validating the ability of barriers to withstand a chain shot impact rather than on obtaining a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of a comprehensive scientific investigation of chain shot. The machine can test all commercially available saw chains and bars at chain tensions and speeds meeting and exceeding those typically encountered in harvester use, and it accurately measures the corresponding key technical parameters. The test machine was constructed inside a standard shipping container.
This provides space for both an operator station and a test chamber. In order to contain the chain shot under any possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw chain drive unit and bar mounting system is modular and capable of being located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning. Chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated per ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. Measurement output shows the fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will result in a scientific understanding of chain shot and consequently improved products and greater harvester operator safety.
Keywords: chain shot, safety, testing, timber harvesters
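To give a sense of the energies the test chamber must contain, the kinetic energy of a chain-shot projectile follows directly from its mass and the chain speed at release. The link mass, link count, and speed below are illustrative assumptions for a back-of-envelope estimate, not measured values from the machine described above:

```python
# Back-of-envelope kinetic energy of a chain-shot projectile.
chain_speed_m_s = 40.0  # harvester chains run at tens of m/s (assumed value)
link_mass_kg = 0.004    # assumed mass of a single saw-chain link
n_links = 3             # projectile of one to several links

# KE = 1/2 * m * v^2, with the projectile mass being n_links links.
kinetic_energy_j = 0.5 * (n_links * link_mass_kg) * chain_speed_m_s ** 2
```

Because the energy grows with the square of chain speed, testing at and above typical harvester speeds, as the machine does, probes the most dangerous part of the operating envelope.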
Procedia PDF Downloads 153
105 Medical Examiner Collection of Comprehensive, Objective Medical Evidence for Conducted Electrical Weapons and Their Temporal Relationship to Sudden Arrest
Authors: Michael Brave, Mark Kroll, Steven Karch, Charles Wetli, Michael Graham, Sebastian Kunz, Dorin Panescu
Abstract:
Background: Conducted electrical weapons (CEW) are now used in 107 countries and are a common law-enforcement less-lethal force option in the United Kingdom (UK), United States of America (USA), Canada, Australia, New Zealand, and others. Use of these devices is rarely temporally associated with the occurrence of sudden arrest-related deaths (ARD). Because such deaths are uncommon, few Medical Examiners (MEs) ever encounter one, and even fewer offices have established comprehensive investigative protocols. Without sufficient scientific data, the role, if any, played by a CEW in a given case is largely supplanted by conjecture, often defaulting to a CEW-induced fatal cardiac arrhythmia. In addition to the difficulty in investigating individual deaths, the lack of information also detrimentally affects the ability to define and evaluate the ARD cohort generally. More comprehensive, better information leads to better interpretation in individual cases and also to better research. The purpose of this presentation is to provide MEs with a comprehensive, evidence-based checklist to assist in the assessment of CEW-ARD cases. Methods: PUBMED and sociology/criminology databases were queried to find all medical, scientific, electrical, modeling, engineering, and sociology/criminology peer-reviewed literature mentioning CEWs or synonymous terms. Each paper was then individually reviewed to identify those that discussed possible bioelectrical mechanisms relating CEWs to ARD. A Naranjo-type pharmacovigilance algorithm was also employed, when relevant, to identify and quantify possible direct CEW electrical myocardial stimulation. Additionally, CEW operational manuals and training materials were reviewed to allow incorporation of CEW-specific technical parameters. Results: Total relevant PUBMED citations of CEWs numbered fewer than 250, and reports of death were extremely rare. Much relevant information was available from sociology/criminology databases. 
Once the relevant published papers were identified and reviewed, we compiled an annotated checklist of data that we consider critical to a thorough CEW-involved ARD investigation. Conclusion: We have developed an evidence-based checklist that can be used by MEs and their staffs to assist them in identifying, collecting, documenting, maintaining, and objectively analyzing the role, if any, played by a CEW in any specific case of sudden death temporally associated with the use of a CEW. Even in cases where the collected information is deemed by the ME insufficient for formulating an opinion or diagnosis to a reasonable degree of medical certainty, information collected per the checklist will often be adequate for other stakeholders to use as a basis for informed decisions. Having reviewed the appropriate materials in a significant number of cases, we find that careful examination of the heart and brain is usually adequate. Channelopathy testing should be considered in some cases; however, it may be considered cost-prohibitive (approx. $3,000). Law enforcement agencies may want to consider establishing a reserve fund to help manage such rare cases. The expense may forestall the enormous costs associated with incident-precipitated litigation.
Keywords: ARD, CEW, police, TASER
Procedia PDF Downloads 347
104 Hydrodynamic Characterisation of a Hydraulic Flume with Sheared Flow
Authors: Daniel Rowe, Christopher R. Vogel, Richard H. J. Willden
Abstract:
The University of Oxford’s recirculating water flume is a combined wave and current test tank with a 1 m depth, 1.1 m width, and 10 m long working section, and is capable of flow speeds up to 1 m/s. This study documents the hydrodynamic characteristics of the facility in preparation for experimental testing of horizontal-axis tidal stream turbine models. The turbine to be tested has a rotor diameter of 0.6 m and is a modified version of one of two model-scale turbines tested in previous experimental campaigns. An Acoustic Doppler Velocimeter (ADV) was used to measure the flow at high temporal resolution at various locations throughout the flume, enabling the spatial uniformity and turbulence parameters of the flow to be investigated. The mean velocity profiles exhibited high levels of spatial uniformity at the design speed of the flume, 0.6 m/s, with variations in the three-dimensional velocity components on the order of ±1% at the 95% confidence level, along with a modest streamwise acceleration through the measurement domain, a target 5 m working section of the flume. A high degree of uniformity was also apparent for the turbulence intensity, with values ranging between 1% and 2% across the intended swept area of the turbine rotor. The integral scales of turbulence exhibited a far higher degree of variation throughout the water column, particularly in the streamwise and vertical scales. This behaviour is believed to be due to high signal noise content leading to decorrelation in the sampling records. To achieve more realistic levels of vertical velocity shear in the flume, a simple procedure to practically generate target vertical shear profiles in open-channel flows is described. Here, the authors arranged a series of non-uniformly spaced parallel bars placed across the width of the flume and normal to the onset flow. 
By adjusting the resistance grading across the height of the working section, the downstream profiles could be modified accordingly, characterised by changes in the velocity profile power-law exponent, 1/n. Considering the significant temporal variation in a tidal channel, the choice of the exponent denominators n = 6 and n = 9 effectively provides an achievable range around the much-cited value of n = 7 observed at many tidal sites. The resulting flow profiles, which we intend to use in future turbine tests, have been characterised in detail. The results indicate non-uniform vertical shear across the survey area and reveal substantial corner flows, arising from the differential shear between the target vertical and cross-stream shear profiles throughout the measurement domain. In vertically sheared flow, the rotor-equivalent turbulence intensity ranges between 3.0% and 3.8% throughout the measurement domain for both bar arrangements, while the streamwise integral length scale grows from a characteristic dimension on the order of the bar width, similar to the flow downstream of a turbulence-generating grid. The experimental tests are well-defined and repeatable and serve as a reference for other researchers who wish to undertake similar investigations.
Keywords: acoustic Doppler velocimeter, experimental hydrodynamics, open-channel flow, shear profiles, tidal stream turbines
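The power-law shear profile discussed above, u(z) = u_ref (z/z_ref)^(1/n), and the turbulence intensity (standard deviation over mean of the velocity record) can be sketched in a few lines. This is a minimal stdlib illustration, not the authors' analysis code; the reference speed and depth are taken from the flume description:

```python
# Minimal sketch (not the authors' code) of the power-law shear profile
# and a basic turbulence-intensity estimate from an ADV velocity record.
from statistics import mean, pstdev

def power_law_profile(z, u_ref, z_ref, n):
    """Power-law shear: u(z) = u_ref * (z / z_ref) ** (1 / n)."""
    return u_ref * (z / z_ref) ** (1.0 / n)

def turbulence_intensity(u_samples):
    """TI = standard deviation / mean of a streamwise velocity record."""
    return pstdev(u_samples) / mean(u_samples)

# Assumed reference condition: 0.6 m/s at mid-depth (0.5 m) of the 1 m
# deep working section; n = 6 and n = 9 bracket the much-cited n = 7.
for n in (6, 7, 9):
    u_surface = power_law_profile(1.0, 0.6, 0.5, n)
    print(f"1/{n} profile: near-surface speed {u_surface:.3f} m/s")
```

A smaller n produces a more strongly sheared profile, which is why the bar gradings tuned for n = 6 and n = 9 bracket the range of shear expected at tidal sites.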
Procedia PDF Downloads 88
103 A Clinico-Bacteriological Study and Their Risk Factors for Diabetic Foot Ulcer with Multidrug-Resistant Microorganisms in Eastern India
Authors: Pampita Chakraborty, Sukumar Mukherjee
Abstract:
This study was done to determine the bacteriological profile and antibiotic resistance of the isolates and to find out the potential risk factors for infection with multidrug-resistant organisms. Diabetic foot ulcer is a major medical, social, and economic problem and a leading cause of morbidity and mortality, especially in developing countries like India. Twenty-five percent of all diabetic patients develop a foot ulcer at some point in their lives; such ulcers are highly susceptible to infection, which spreads rapidly, leading to overwhelming tissue destruction and subsequent amputation. Infection with multidrug-resistant organisms (MDROs) may increase the cost of management and may cause additional morbidity and mortality. Proper management of these infections requires appropriate antibiotic selection based on culture and antimicrobial susceptibility testing. Early diagnosis of microbial infections aims to institute appropriate antibacterial therapy and avoid further complications. A total of 200 Type 2 Diabetes Mellitus patients with infection were admitted at GD Hospital and Diabetes Institute, Kolkata; 60 of them who developed an ulcer during the year 2013 were included in this study. A detailed clinical history and physical examination were carried out for every subject. Specimens for microbiological studies were obtained from the ulcer region. Gram-negative bacilli were tested for extended-spectrum beta-lactamase (ESBL) production by the double-disc diffusion method. Staphylococcal isolates were tested for susceptibility to oxacillin by the screen agar method and disc diffusion. Potential risk factors for MDRO-positive samples were explored. Gram-negative aerobes were most frequently isolated, followed by gram-positive aerobes. Males were predominant in the study, and the majority of the patients were in the age group of 41-60 years. The presence of neuropathy was observed in 80% of cases, followed by peripheral vascular disease (73%). Proteus spp. 
(22) was the most common pathogen isolated, followed by E. coli (17). Staphylococcus aureus was predominant amongst the gram-positive isolates. S. aureus showed a high rate of resistance to the antibiotics tested (63.6%). Other gram-positive isolates were found to be highly resistant to erythromycin, tetracycline, and ciprofloxacin (40% each). All isolates were found to be sensitive to vancomycin and linezolid. ESBL production was noted in Proteus spp. and E. coli. Approximately 70% of the patients were positive for MDROs. MDRO-infected patients had poor glycemic control (HbA1c 11 ± 2). Infection with MDROs is common in diabetic foot ulcers and is associated with risk factors such as inadequate glycemic control, the presence of neuropathy, osteomyelitis, ulcer size, and an increased requirement for surgical treatment. There is a need for continuous surveillance of resistant bacteria to provide the basis for empirical therapy and reduce the risk of complications.
Keywords: diabetic foot ulcer, bacterial infection, multidrug-resistant organism, extended-spectrum beta-lactamase
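The kind of risk-factor exploration described above (e.g. neuropathy versus MDRO status) is typically summarised from a 2x2 table with an odds ratio and a Pearson chi-square statistic. A minimal sketch follows; the counts are hypothetical illustrations, not the study's data:

```python
# Sketch of a 2x2 risk-factor analysis for MDRO infection. The counts
# below are hypothetical, chosen only to mirror the kind of association
# explored in the study above.

def odds_ratio(a, b, c, d):
    """OR for table [[a, b], [c, d]] = (a*d) / (b*c);
    rows = risk factor present/absent, cols = MDRO+/MDRO-."""
    return (a * d) / (b * c)

def chi_square(a, b, c, d):
    """Pearson chi-square statistic (1 df) for the same 2x2 table."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical: 48 patients with neuropathy (38 MDRO+), 12 without (4 MDRO+).
print(f"OR = {odds_ratio(38, 10, 4, 8):.1f}, "
      f"chi2 = {chi_square(38, 10, 4, 8):.2f}")
```

A chi-square statistic above 3.84 (the 5% critical value at 1 degree of freedom) would indicate a significant association between the risk factor and MDRO infection.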
Procedia PDF Downloads 337
102 (Anti)Depressant Effects of Non-Steroidal Anti-Inflammatory Drugs in Mice
Authors: Horia Păunescu
Abstract:
Purpose: The study aimed to assess the depressant or antidepressant effects of several Nonsteroidal Anti-Inflammatory Drugs (NSAIDs) in mice: the selective cyclooxygenase-2 (COX-2) inhibitor meloxicam, and the non-selective COX-1 and COX-2 inhibitors lornoxicam, sodium metamizole, and ketorolac. The current literature data regarding such effects of these agents are scarce. Materials and methods: The study was carried out on NMRI mice weighing 20-35 g, kept in a standard laboratory environment. The study was approved by the Ethics Committee of the University of Medicine and Pharmacy „Carol Davila”, Bucharest. The study agents were injected intraperitoneally, 10 mL/kg body weight (bw), 1 hour before the assessment of locomotor activity by cage testing (n=10 mice/group) and 2 hours before the forced swimming tests (n=15). The study agents were dissolved in normal saline (meloxicam, sodium metamizole), ethanol 11.8% v/v in normal saline (ketorolac), or water (lornoxicam), respectively. Negative and positive control agents were also given (amitriptyline in the forced swimming test). The cage floor used in the locomotor activity assessment was divided into 20 equal 10 cm squares. The forced swimming test involved partial immersion of the mice in cylinders (15 cm height, 9 cm diameter) filled with water (10 cm depth at 28 °C), where they were left for 6 minutes. The cage endpoint used in the locomotor activity assessment was the number of treaded squares. Four endpoints were used in the forced swimming test (immobility latency for the entire 6 minutes, and immobility, swimming, and climbing scores for the final 4 minutes of the swimming session), recorded by an observer who was "blinded" to the experimental design. 
The statistical analysis used the Levene test for variance homogeneity, followed by ANOVA and post-hoc analysis as appropriate (Tukey or Tamhane tests). Results: No statistically significant increase or decrease in the number of treaded squares was seen in the locomotor activity assessment of any mouse group. In the forced swimming test, amitriptyline showed an antidepressant effect in each experiment at the 10 mg/kg bw dosage. Sodium metamizole was depressant at 100 mg/kg bw (increased the immobility score, p=0.049, Tamhane test), but not at lower dosages (25 and 50 mg/kg bw). Ketorolac showed an antidepressant effect at the intermediate dosage of 5 mg/kg bw, but not at the dosages of 2.5 and 10 mg/kg bw (increased the swimming score, p=0.012, Tamhane test). Meloxicam and lornoxicam did not alter the forced swimming endpoints at any dosage level. Discussion: 1) Certain NSAIDs caused changes in the forced swimming patterns without interfering with locomotion. 2) Sodium metamizole showed a depressant effect, whereas ketorolac proved antidepressant. Conclusion: NSAID-induced mood changes are not class effects of these agents and apparently are independent of the type of inhibited cyclooxygenase (COX-1 or COX-2). Disclosure: This paper was co-financed from the European Social Fund, through the Sectorial Operational Programme Human Resources Development 2007-2013, project number POSDRU /159 /1.5 /S /138907 "Excellence in scientific interdisciplinary research, doctoral and postdoctoral, in the economic, social and medical fields -EXCELIS", coordinator The Bucharest University of Economic Studies.
Keywords: antidepressant, depressant, forced swim, NSAIDs
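The statistical pipeline described above (a Levene-type variance-homogeneity check followed by one-way ANOVA) can be sketched with the standard library alone. The immobility scores below are invented illustrative numbers, not the study's measurements, and the Tukey/Tamhane post-hoc tests are omitted for brevity:

```python
# Stdlib-only sketch of the reported statistical pipeline. The data are
# hypothetical; post-hoc comparisons (Tukey/Tamhane) are not shown.
from statistics import mean, median

def anova_f(*groups):
    """One-way ANOVA: F = between-group MS / within-group MS."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def levene_w(*groups):
    """Brown-Forsythe form of Levene's test: ANOVA on |x - group median|."""
    return anova_f(*[[abs(x - median(g)) for x in g] for g in groups])

# Hypothetical immobility scores (s) for three groups of n = 6 mice.
saline        = [120, 135, 128, 140, 132, 125]
metamizole100 = [150, 162, 158, 149, 155, 161]
amitriptyline = [90, 85, 98, 92, 88, 95]

print(f"Levene W = {levene_w(saline, metamizole100, amitriptyline):.2f}")
print(f"ANOVA  F = {anova_f(saline, metamizole100, amitriptyline):.2f}")
```

If the Levene statistic is small (variances homogeneous), a Tukey post-hoc is appropriate; otherwise a Tamhane-type test, as used in the study, is the usual choice.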
Procedia PDF Downloads 235
101 Applications of Polyvagal Theory for Trauma in Clinical Practice: Auricular Acupuncture and Herbology
Authors: Aurora Sheehy, Caitlin Prince
Abstract:
Within current orthodox medical protocols, trauma and mental health issues are deemed to reside within the realm of cognitive or psychological therapists and are marginalised, in part due to the limited drug options available, which mostly manipulate neurotransmitters or sedate patients to reduce symptoms. By contrast, this research presents examples from clinical practice of how trauma can be assessed and treated physiologically. Adverse Childhood Experiences (ACEs) are a tally of different types of abuse and neglect. The ACE score has been used as a measurable and reliable predictor of the likelihood of developing autoimmune disease, and it is a direct way to reliably demonstrate the health impact of traumatic life experiences. A second assessment tool is Allostatic Load, which refers to the cumulative effects that chronic stress has on mental and physical health: it records the decline of an individual’s physiological capacity to cope with their experience. It uses a specific grouping of serum tests and physical measures, including an assessment of the neuroendocrine, cardiovascular, immune, and metabolic systems. Allostatic Load demonstrates the health impact that trauma has throughout the body. It forms part of an initial intake assessment in clinical practice and could also be used in research to evaluate treatment. Examining medicinal plants for their physiological, neurological, and somatic effects through the lens of Polyvagal theory offers new opportunities for trauma treatments. In situations where Polyvagal theory recommends activities and exercises to enable parasympathetic activation, many herbs that affect Effector Memory T (TEM) cells also enact these responses. Traditional or Indigenous European herbs show the potential to support polyvagal tone through multiple mechanisms. 
As the ventral vagal nerve reaches almost every major organ, plants that act on these tissues can be understood via their polyvagal actions: monoterpenes as agents to improve respiratory vagal tone, cyanogenic glycosides to reset polyvagal tone, volatile oils rich in phenyl methyl esters to improve both sympathetic and parasympathetic tone, and bitters to activate gut function and strongly promote parasympathetic regulation. Auricular Acupuncture uses a system of somatotopic mapping of the auricular surface, overlaid with an image of an inverted foetus with each body organ and system featured. Given that the concha of the auricle is the only place on the body where Vagus Nerve neurons reach the surface of the skin, several investigators have evaluated non-invasive, transcutaneous electrical nerve stimulation (TENS) at auricular points. Drawn from an interdisciplinary evidence base and developed through clinical practice, these assessment and treatment tools are examples of practitioners in the field innovating out of necessity for the best outcomes for patients. This paper draws on case studies to direct future research.
Keywords: polyvagal, auricular acupuncture, trauma, herbs
Procedia PDF Downloads 93
100 qPCR Method for Detection of Halal Food Adulteration
Authors: Gabriela Borilova, Monika Petrakova, Petr Kralik
Abstract:
Nowadays, European producers are increasingly interested in the production of halal meat products. Halal meat has been increasingly appearing in the EU's market network, and meat products from European producers are being exported to Islamic countries. Halal criteria are mainly related to the origin of the muscle used in production, and also to the way products are obtained and processed. Although the EU has legislatively addressed the question of food authenticity, the circumstances of previous years, when products with undeclared horse or poultry meat content appeared on EU markets, raised the question of the effectiveness of control mechanisms. Replacement of expensive or unavailable types of meat with low-priced meat has occurred on a global scale for a long time. Likewise, halal products may be contaminated (falsified) with pork or food components obtained from pigs. These components include collagen, offal, pork fat, mechanically separated pork, emulsifiers, blood, dried blood, dried blood plasma, gelatin, and others. These substances can influence the sensory properties of meat products (color, aroma, flavor, consistency, and texture), or they are added for preservation and stabilization. Food manufacturers sometimes resort to these substances mainly due to their ready availability and low prices. However, the use of these substances is not always declared on the product packaging. Verification of the presence of declared ingredients, including the detection of undeclared ingredients, is among the basic control procedures for determining the authenticity of food. Molecular biology methods, based on DNA analysis, offer rapid and sensitive testing. The PCR method and its modifications can be successfully used to identify animal species in single- and multi-ingredient raw and processed foods, and qPCR is the first choice for food analysis. Like all PCR-based methods, it is simple to implement, and its greatest advantage is the absence of post-PCR visualization by electrophoresis. 
qPCR allows detection of trace amounts of nucleic acids, and by comparing an unknown sample with a calibration curve, it can also provide information on the absolute quantity of individual components in the sample. Our study addresses the problem that most work on the identification and quantification of animal species takes a molecular biological approach based on the construction of specific primers amplifying a selected section of the mitochondrial genome. In addition, the sections amplified in conventional PCR are relatively long (hundreds of bp) and unsuitable for use in qPCR, because when DNA is fragmented, amplification of long target sequences is quite limited. Our study focuses on finding a suitable genomic DNA target and optimizing qPCR to reduce variability and distortion of results, which is necessary for the correct interpretation of quantification results. In halal products, the impact of falsifying meat products by adding components derived from pigs is all the greater because it concerns not just an economic aspect but, above all, religious and social aspects. This work was supported by the Ministry of Agriculture of the Czech Republic (QJ1530107).
Keywords: food fraud, halal food, pork, qPCR
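The calibration-curve quantification mentioned above works by regressing quantification cycle (Cq) against log10 of the standard copy number; the slope also yields the amplification efficiency. A minimal stdlib sketch follows; the dilution series and Cq values are hypothetical, not the study's data:

```python
# Sketch of absolute qPCR quantification via a standard curve.
# Dilution series and Cq values below are hypothetical.
import math

def fit_standard_curve(copies, cq):
    """Least-squares fit of Cq = slope * log10(copies) + intercept."""
    x = [math.log10(c) for c in copies]
    n = len(x)
    mx, my = sum(x) / n, sum(cq) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, cq))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def efficiency(slope):
    """Amplification efficiency E = 10**(-1/slope) - 1 (1.0 == 100%)."""
    return 10 ** (-1.0 / slope) - 1.0

def quantify(cq, slope, intercept):
    """Copy number of an unknown sample from its Cq via the curve."""
    return 10 ** ((cq - intercept) / slope)

# Assumed ten-fold dilution series of a pig-specific target.
copies = [1e6, 1e5, 1e4, 1e3, 1e2]
cqs = [15.1, 18.4, 21.8, 25.1, 28.5]
slope, intercept = fit_standard_curve(copies, cqs)
print(f"slope = {slope:.2f}, efficiency = {efficiency(slope) * 100:.0f}%")
```

A slope near -3.32 corresponds to 100% efficiency; fragmented DNA in processed foods degrades this for long amplicons, which is why short target sequences are preferred for qPCR.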
Procedia PDF Downloads 247
99 Exposing The Invisible
Authors: Kimberley Adamek
Abstract:
According to the Council on Tall Buildings, there has been a rapid increase in the construction of tall or “megatall” buildings over the past two decades. Simultaneously, the New England Journal of Medicine has reported a steady increase in climate-related natural disasters since the 1970s, the eastern expansion of the USA's infamous Tornado Alley being just one of many current issues. In the future, this could mean that tall buildings, which already guide high-speed winds down to pedestrian levels, will have to withstand stronger forces and protect pedestrians in more extreme ways. Although many projects are required to be verified within wind tunnels, and a handful of cities such as San Francisco have included wind testing within building code standards, there are still many examples where wind is only considered for basic loading. This typically results in an increase in structural expense and unwanted mitigation strategies that are proposed late within a project. When building cities, architects rarely consider how each building alters the invisible patterns of wind and how these alterations affect other areas in different ways later on. It is not until these forces move, overpower, and even destroy cities that people take notice. For example, towers have caused winds to blow objects into people (Walkie-Talkie Tower, Leeds, England), caused building parts to vibrate and produce loud humming noises (Beetham Tower, Manchester), and caused wind tunnels in streets, among many other issues. Alternatively, there exist towers that have used their form to naturally draw in air and ventilate entire facilities, eliminating the need for costly HVAC systems (The Met, Thailand), and used their form to increase wind speeds to generate electricity (Bahrain World Trade Center). Wind and weather exist and affect all parts of the world in ways including science, health, war, infrastructure, catastrophes, tourism, shopping, media, and materials. 
Working in partnership with a leading wind engineering company, RWDI, a series of tests, images, and animations documenting discovered interactions of different building forms with wind will be collected to emphasize the possibilities of wind use to architects. A site within San Francisco (chosen for its increasing tower development, consistent wind conditions, and existing strict wind comfort criteria) will host a final design. Iterations of this design will be tested within wind tunnel and computational fluid dynamics systems, which will expose, utilize, and manipulate wind flows to create new forms, technologies, and experiences. Ultimately, this thesis aims to question the degree to which the environment is allowed to permeate building enclosures, uncover new programmatic possibilities for wind in buildings, and push the boundaries of working with the wind to ensure the development and safety of future cities. This investigation will improve and expand upon the traditional understanding of wind in order to give architects, wind engineers, and the general public the ability to broaden their scope and productively utilize this living phenomenon that everyone constantly feels but cannot see.
Keywords: wind engineering, climate, visualization, architectural aerodynamics
Procedia PDF Downloads 361
98 Emerging Identities: A Transformative ‘Green Zone’
Authors: Alessandra Swiny, Yiorgos Hadjichristou
Abstract:
There exists an on-going geographical scar creating a division through the island of Cyprus and its capital, Nicosia. The currently amputated city center is accessed legally by United Nations convoys and infiltrated only by Turkish and Greek Cypriot army scouts and illegal traders and scavengers. On Christmas day 1963 in Nicosia, Captain M. Hobden of the British Army took a green chinagraph pencil and, on a large-scale joint Army-RAF map, marked the division. From then on, this ‘buffer zone’ was called the ‘green line’. This once-dividing form, separating the main communities of Greek and Turkish Cypriots from one another, has now been fully reclaimed by an autonomous intruder. Its most captivating current inhabitant is nature. She keeps taking over: for the past fifty years, indigenous and introduced fauna and flora have thrived; trees emerge from rooftops, and plants, bushes, and flowers grow randomly through the once-bustling market streets, allowing this ‘no man’s land’ to teem with wildlife. And where are its limits? The idea of fluidity is ever present; it encroaches into the urban and built environment that surrounds it, and notions of ownership and permanence are questioned. Its qualities have contributed significantly to the search for new ‘identities’, expressed in the emergence of new living conditions, be they real or surreal. Without being physically reachable, it can be glimpsed through punctured peepholes, military bunker windows that act as enticing portals into an emotional and conceptual level of inhabitation. The zone is mystical and simultaneously suspended in time; it triggers people’s imagination, not just that of the two prevailing communities but also of immigrants, refugees, and visitors; it mesmerizes all who come within its proximity. The paper opens a discussion on the issues and the binary questions raised. What is natural and artificial; what is private and public; what is ephemeral and permanent? 
The ‘green line’ exists in a central fringe condition and can serve in mixing generations and groups of people; mingling functions of living with work and social interaction; and merging nature and the human being in a new-found synergy of human hope and survival, thus allowing new notions of place to be introduced. Questions arise, such as: “Is the impossibility of dwelling made possible by interweaving these ‘in-between conditions’ into eloquently traced spaces?” The methodologies pursued are developed through academic research, professional practice projects, and students’ research/design work. Realized projects, case studies, and other examples cited both nationally and internationally hold global and local applications. Both paths of the research deal with the explorative understanding of the impossibility of dwelling, testing the limits of its autonomy. The expected outcome of the experience evokes in the user a sense of a new urban landscape, created from human topographies that echo the voice of an emerging identity.
Keywords: urban wildlife, human topographies, buffer zone, no man’s land
Procedia PDF Downloads 199
97 Effect of Long Term Orientation and Indulgence on Earnings Management: The Moderating Role of Legal Tradition
Authors: I. Martinez-Conesa, E. Garcia-Meca, M. Barradas-Quiroz
Abstract:
The objective of this study is to assess the impact on earnings management of the two latest Hofstede cultural dimensions: long-term orientation and indulgence. Long-term orientation represents the alignment of a society towards the future, and indulgence expresses the extent to which a society exhibits willingness, or restraint, in realising its impulses. Additionally, this paper tests whether there are relevant differences by examining the moderating role of the legal tradition, Continental versus Anglo-Saxon. Our sample comprises 15 countries: Belgium, Canada, Germany, Spain, France, Great Britain, Hong Kong, India, Japan, Korea, Netherlands, Philippines, Portugal, Sweden, and Thailand, with a total of 12,936 observations from 2003 to 2013. Our results show that managers in countries with high levels of long-term orientation reduce their levels of discretionary accruals. The findings do not confirm an effect of indulgence on earnings management. In addition, our results confirm previous literature regarding the effect of individualism, noting that firms in countries with high levels of collectivism might be more inclined to use earnings discretion to protect the welfare of the collective group of firm stakeholders. Uncertainty avoidance results in downward earnings management, as does high disclosure, suggesting that less manipulation takes place when transparency is higher. Indulgence is the cultural dimension that confronts wellbeing versus survival; the dimension is formulated to include happiness, the perception of control over one's life, and the importance of leisure. Indulgence shows a weak negative correlation with power distance, indicating a slight tendency for more hierarchical societies to be less indulgent. Anglo-Saxon countries show a positive effect of individualism and a negative effect of masculinity, uncertainty avoidance, and disclosure. 
With respect to Continental countries, we see a significant positive effect of individualism and significant negative effects of masculinity, long-term orientation, and indulgence. Therefore, the negative effect on earnings management provoked by higher disclosure and uncertainty avoidance appears only in Anglo-Saxon countries, while the improvement in reporting quality motivated by higher long-term orientation and higher indulgence is dominant in Continental countries. Our results confirm that there is a moderating effect of the legal system on the association between culture and earnings management. This effect is especially relevant for the dimensions related to uncertainty avoidance, long-term orientation, indulgence, and disclosure. The negative effect of long-term orientation on earnings management occurs only in countries with Continental legal systems, because Anglo-Saxon legal systems rest on court decisions and traditions and thus already embody long-term orientation; that is not the case in Continental systems, which depend mainly on the content of the law. Sensitivity analyses using the modified Jones CP model, the standard Jones model, and the standard Jones CP model confirm the robustness of these results. This paper contributes to a better understanding of how earnings management, culture, and legal systems relate to each other, and adds to previous literature by examining the influence of the two latest Hofstede dimensions, not previously studied.
Keywords: Hofstede, long-term orientation, earnings management, indulgence
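The Jones-family models referenced in the sensitivity analysis estimate discretionary accruals as the residual of a regression of scaled total accruals on the inverse of lagged assets, the receivables-adjusted revenue change, and gross PPE (all scaled by lagged assets). A stdlib-only sketch follows; the firm-year figures are invented for illustration, not from the study's sample:

```python
# Sketch of modified-Jones-style discretionary accruals as OLS residuals.
# All firm-year figures below are hypothetical.

def solve(a, b):
    """Solve a @ x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(n):
            if r != i and m[r][i]:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][n] / m[i][i] for i in range(n)]

def ols(X, y):
    """OLS coefficients via the normal equations (X'X) b = X'y."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    return solve(xtx, xty)

# Regressors per firm-year: [1/A_(t-1), (dREV - dREC)/A_(t-1), PPE/A_(t-1)];
# dependent variable: total accruals / A_(t-1).
X = [[0.010, 0.12, 0.55], [0.008, 0.05, 0.60], [0.012, 0.20, 0.48],
     [0.009, 0.08, 0.52], [0.011, 0.15, 0.58], [0.007, 0.03, 0.62]]
y = [-0.04, -0.06, -0.02, -0.05, -0.03, -0.07]

b = ols(X, y)
residuals = [yi - sum(bj * xj for bj, xj in zip(b, xi))
             for yi, xi in zip(y, X)]
print("discretionary accruals (residuals):",
      [round(r, 4) for r in residuals])
```

The residuals are the "discretionary" component: accruals not explained by the normal determinants, whose magnitude proxies for earnings management in studies like the one above.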
Procedia PDF Downloads 240
96 Tensile Behaviours of Sansevieria Ehrenbergii Fiber Reinforced Polyester Composites with Water Absorption Time
Authors: T. P. Sathishkumar, P. Navaneethakrishnan
Abstract:
The research work investigates the variation of tensile properties of sansevieria ehrenbergii fiber (SEF) and SEF-reinforced polyester composites with respect to water absorption time. The experiments were conducted according to ASTM D3379-75 and ASTM D570 standards. The percentage of water absorption for composite specimens was measured according to the ASTM D570 standard. The SE fiber was cut into 30 mm lengths for preparation of the composites. A simple hand lay-up method followed by a compression moulding process was adopted to prepare randomly oriented SEF-reinforced polyester composites at a constant fiber weight fraction of 40%. Surface treatment of the SEFs with various chemicals, namely NaOH, KMnO4, benzoyl peroxide, benzoyl chloride, and stearic acid, was performed before preparing the composites; NaOH was used as a pre-treatment for all the other chemical treatments. The morphology of the tensile-fractured specimens was studied using scanning electron microscopy. The tensile strength of the SEF and SEF-reinforced polymer composites was measured at water absorption times of 4, 8, 12, 16, 20, and 24 hours. The results show that tensile strength dropped off with increasing water absorption time for all composites. The raw fiber showed the highest tensile properties owing to its lowest moisture content; the chemical bonding between the cellulose and cementing materials such as lignin and wax was also strongest at the lowest moisture content. Tensile load was lowest, and elongation highest, for the water-absorbed fibers across the water absorption time range. During this process, the fiber cellulose takes in water and the primary and secondary fiber walls expand. This increases the moisture content in the fibers and, ultimately, the hydrogen cations and hydroxide anions from the water. 
In tensile testing, the water-absorbed fibers showed the highest elongation, through stretching of the expanded cellulose walls, and the bonding strength within the fiber cellulose was low. The load-carrying capability stabilized at 20 hours of water absorption time. This directly affects the interfacial bonding between fiber and matrix and hence the composite strength. The chemically treated fibers carried higher loads at lower elongation, owing to the removal of lignin, hemicellulose and wax. Water absorption time decreases the tensile strength of the composites. The chemically treated SEF reinforced composites showed higher tensile strength than the untreated SEF reinforced composites, due to the larger bonding area at the fiber/matrix interface; this was confirmed by the morphology of the fracture zone of the composites. Intra-fiber debonding occurred through water encapsulation in the fiber cellulose. Among all the composites, tensile strength was highest for the KMnO4-treated SEF reinforced composite, owing to better interfacial fiber-matrix bonding compared with the other treated-fiber composites. The percentage of water absorption of the composites increased with water absorption time. The percentage weight gains of the chemically treated SEF composites at 4 hours, relative to zero water absorption, were 9, 9, 10, 10.8 and 9.5 for NaOH, BP, BC, KMnO4 and SA respectively; at 24 hours they were 5.2, 7.3, 12.5, 16.7 and 13.5 respectively. Hence, by these figures, the lowest long-term weight gain was found for the NaOH-treated SEF composites, while the KMnO4-treated composites combined the highest tensile strength with comparatively rapid water uptake.
Hence, chemically treated SEF reinforced composites are possible materials for automotive applications such as body panels, bumpers and interior parts, and for household applications such as tables and racks.
Keywords: fibres, polymer-matrix composites (PMCs), mechanical properties, scanning electron microscopy (SEM)
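The water-uptake percentages quoted above follow the standard ASTM D570 mass-gain calculation; as a minimal sketch (the specimen masses are made up for illustration, not values from the study):

```python
def weight_gain_percent(dry_mass_g: float, wet_mass_g: float) -> float:
    """ASTM D570 water absorption: percentage weight gain relative
    to the dry (zero-absorption) specimen mass."""
    return 100.0 * (wet_mass_g - dry_mass_g) / dry_mass_g

# Hypothetical specimen: 10.00 g dry, 11.08 g after 4 h immersion
print(round(weight_gain_percent(10.00, 11.08), 1))  # 10.8
```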
95 An Evaluation of a Prototype System for Harvesting Energy from Pressurized Pipeline Networks
Authors: Nicholas Aerne, John P. Parmigiani
Abstract:
There is an increasing desire for renewable and sustainable energy sources to replace fossil fuels. This desire is the result of several factors. First is the role of fossil fuels in climate change. Scientific data clearly show that global warming is occurring, and it has been concluded that human activity, specifically the combustion of fossil fuels, is highly likely a major cause of this warming. Second, despite the current surplus of petroleum, fossil fuels are a finite resource; they will eventually become scarce, and alternatives such as clean or renewable energy will be needed. Third, operations to obtain fossil fuels, such as fracking, off-shore oil drilling, and strip mining, are expensive and harmful to the environment. Given these environmental impacts, there is a need to replace fossil fuels with renewable sources as a primary energy supply. Various sources of renewable energy exist. Many familiar sources draw on the sun and the natural environments of the earth; common examples include solar, hydropower, geothermal heat, ocean waves and tides, and wind energy. Obtaining significant energy from these sources often requires physically large, sophisticated, and expensive equipment (e.g., wind turbines, dams, solar panels). Other sources of renewable energy lie in the man-made environment. An example is municipal water distribution systems. Moving water through the pipelines of these systems typically requires the reduction of hydraulic pressure through pressure reducing valves, which lower upstream supply-line pressures to levels suitable for downstream users. The energy associated with this reduction of pressure is significant but is currently not harvested and is simply lost. While the integrity of municipal water supplies is of paramount importance, one can certainly envision means by which this lost energy could be safely accessed.
This paper provides a technical description and analysis of one such means, developed by the technology company InPipe Energy, to generate hydroelectricity by harvesting energy from pressure reducing valve stations in municipal water distribution systems. Specifically, InPipe Energy proposes to install hydropower turbines in parallel with existing pressure reducing valves. InPipe Energy, in partnership with Oregon State University, has evaluated this approach and built a prototype system at the O. H. Hinsdale Wave Research Lab. The Oregon State University evaluation showed that the prototype system rapidly and safely initiates, maintains, and ceases power production as directed. The outgoing water pressure remained constant at the specified set point throughout all testing; the system replicates the functionality of the pressure reducing valve and ensures accurate control of downstream pressure. At a typical water-distribution-system pressure drop of 60 psi, the prototype, operating at an efficiency of 64%, produced approximately 5 kW of electricity. Based on the results of this study, the proposed method appears to offer a viable means of producing significant amounts of clean renewable energy from existing pressure reducing valves.
Keywords: pressure reducing valve, renewable energy, sustainable energy, water supply
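The reported figures are mutually consistent under the simple incompressible-flow estimate P = η·Δp·Q; a quick sketch (the flow rate is back-calculated from the abstract's numbers, not a value it states):

```python
PSI_TO_PA = 6894.76  # pascals per psi

def hydro_power_kw(delta_p_psi: float, flow_m3_s: float, efficiency: float) -> float:
    """Electrical power recovered across a pressure drop,
    P = efficiency * delta_p * Q, for incompressible flow."""
    return efficiency * delta_p_psi * PSI_TO_PA * flow_m3_s / 1000.0

# Flow rate implied by 5 kW at a 60 psi drop and 64% efficiency:
q = 5.0 * 1000.0 / (0.64 * 60 * PSI_TO_PA)
print(round(q * 1000, 1))  # ~18.9 L/s
print(round(hydro_power_kw(60, q, 0.64), 1))  # 5.0 kW
```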
94 Integrating Data Mining within a Strategic Knowledge Management Framework: A Platform for Sustainable Competitive Advantage within the Australian Minerals and Metals Mining Sector
Authors: Sanaz Moayer, Fang Huang, Scott Gardner
Abstract:
In the highly leveraged business world of today, an organisation’s success depends on how it manages and organises its traditional and intangible assets. In the knowledge-based economy, knowledge as a valuable asset gives enduring capability to firms competing in rapidly shifting global markets. It can be argued that the ability to create unique knowledge assets by configuring ICT and human capabilities will be a defining factor for international competitive advantage in the mid-21st century. The concept of knowledge management (KM) is recognised in the strategy literature, and increasingly by senior decision-makers (particularly in large firms, which can achieve scalable benefits), as an important vehicle for stimulating innovation and organisational performance in the knowledge economy. This thinking has been evident in professional services and other knowledge-intensive industries for over a decade. It highlights the importance of social capital and the value of the intellectual capital embedded in social and professional networks, complementing the traditional focus on the creation of intellectual property assets. Despite the growing interest in KM within professional services, there has been limited discussion in relation to multinational resource-based industries such as mining and petroleum, where the focus has been principally on global portfolio optimization with economies of scale, process efficiencies and cost reduction. The Australian minerals and metals mining industry, although traditionally viewed as capital intensive, employs a significant number of knowledge workers, notably engineers, geologists, highly skilled technicians, and legal, finance, accounting, ICT and contracts specialists, working in projects or functions and representing potential knowledge silos within the organisation. This silo effect arguably inhibits knowledge sharing and retention by disaggregating corporate memory, with increased operational and project continuity risk.
It may also limit the potential for process, product, and service innovation. In this paper, the strategic application of knowledge management, incorporating contemporary ICT platforms and data mining practices, is explored as an important enabler for knowledge discovery, reduction of risk, and retention of corporate knowledge in resource-based industries. With reference to the relevant strategy, management, and information systems literature, this paper highlights possible connections (currently undergoing empirical testing) between a Strategic Knowledge Management (SKM) framework incorporating supportive Data Mining (DM) practices and competitive advantage for multinational firms operating within the Australian resource sector. Based on a review of the relevant literature, we also propose that more effective management of soft- and hard-systems knowledge is crucial for major Australian firms in all sectors seeking to improve organisational performance through the human and technological capability captured in organisational networks.
Keywords: competitive advantage, data mining, mining organisation, strategic knowledge management
93 Towards Automatic Calibration of In-Line Machine Processes
Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales
Abstract:
In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used ‘black-box’ linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a ‘white-box’ rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound, in the first case, and the friction of the material passing through the die, in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, passes over a traction reel and is rewound onto a third reel. The objectives are (i) to train a model to predict the web tension and (ii) to calibrate, i.e., find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units then wind an outer layer around the core, and a final pass is made through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) to calibrate, finding the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel) and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between the expected and real friction on die 2). Modeling the error behavior with explicative rules helps improve the overall process model.
Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained both for the trained models and for the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but it can be done offline just once. The calibration step is much faster and, in under one minute, achieved a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. A fast processing time and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve overall process understanding. This is relevant for the quick optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
Keywords: data model, machine learning, industrial winding, calibration
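The calibration loop described above can be sketched as a Gaussian random search over the inputs of a trained model; the toy model and input statistics below are illustrative stand-ins, not the paper's:

```python
import random

def calibrate(model, target, input_stats, n_samples=10000, seed=42):
    """Random-search calibration: sample each input from a Gaussian
    with its observed (mean, std), keep the input vector whose
    predicted output lies closest to the target."""
    rng = random.Random(seed)
    best_inputs, best_err = None, float("inf")
    for _ in range(n_samples):
        x = [rng.gauss(mu, sigma) for mu, sigma in input_stats]
        err = abs(model(x) - target)
        if err < best_err:
            best_inputs, best_err = x, err
    return best_inputs, best_err

# Toy stand-in for a trained process model (not the paper's model):
model = lambda x: 2.0 * x[0] + 0.5 * x[1]
inputs, err = calibrate(model, target=10.0, input_stats=[(4.0, 1.0), (3.0, 1.0)])
print(err < 0.01)  # True: the search lands very close to the target
```

Heuristics (e.g., shrinking the sampling std around the current best point) would speed this up, as the abstract suggests.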
92 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks
Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez
Abstract:
Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients representing the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs), which have many hidden layers and are trained using new methods, have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessity given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audios and their respective transcriptions. A DNN was initialized with a single hidden layer, and the number of hidden layers was increased during training to five. A refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed; the objective function was the cross-entropy criterion.
(b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system was measured by means of the Word Error Rate (WER). The test dataset was renewed in order to remove the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result obtained an improvement of 6% relative WER. These promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning
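WER, the metric used above, is the word-level Levenshtein distance (substitutions + deletions + insertions) divided by the reference length; a minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# One substitution (estas -> esta) and one deletion (hoy) over 4 words:
print(word_error_rate("hola como estas hoy", "hola como esta"))  # 0.5
```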
91 Application of Alumina-Aerogel in Post-Combustion CO₂ Capture: Optimization by Response Surface Methodology
Authors: S. Toufigh Bararpour, Davood Karami, Nader Mahinpey
Abstract:
The dependence of the global economy on fossil fuels has led to a large growth in the emission of greenhouse gases (GHGs). Among the various GHGs, carbon dioxide is the main contributor to the greenhouse effect due to the sheer volume emitted. To mitigate the threat posed by CO₂, carbon capture and sequestration (CCS) technologies have been studied widely in recent years. For combustion processes, three main CO₂ capture techniques have been proposed: post-combustion, pre-combustion and oxyfuel combustion. Post-combustion is the most commonly used CO₂ capture process, as it can be readily retrofitted into existing power plants. Multiple advantages have been reported for post-combustion capture with solid sorbents, such as high CO₂ selectivity, high adsorption capacity, and low required regeneration energy. Chemical adsorption of CO₂ over alkali-metal-based solid sorbents such as K₂CO₃ is a promising method for the selective capture of dilute CO₂ from the large amount of nitrogen present in the flue gas. To improve CO₂ capture performance, K₂CO₃ is supported on a stable and porous material. Al₂O₃ has commonly been employed as the support and enhances the cyclic CO₂ capture efficiency of K₂CO₃. Different phases of alumina can be obtained by setting the calcination temperature of boehmite at 300, 600 (γ-alumina), 950 (δ-alumina) or 1200 °C (α-alumina). As the calcination temperature increases, the regeneration capacity of alumina increases while the surface area decreases; however, sorbents with lower surface areas also have lower CO₂ capture capacity (except for sorbents prepared with hydrophilic support materials). To resolve this issue, a highly efficient alumina-aerogel support was synthesized with a BET surface area of over 2000 m²/g and then calcined at a high temperature. The synthesized alumina-aerogel was combined with K₂CO₃ at 50 wt% support/K₂CO₃, which resulted in a sorbent with remarkable CO₂ capture performance.
The effect of synthesis conditions such as the type of alcohol, the solvent-to-co-solvent ratio, and the aging time on the performance of the support was investigated. The best support was synthesized using methanol as the solvent, after five days of aging, at a solvent-to-co-solvent (methanol-to-toluene) ratio (v/v) of 1/5. Response surface methodology was used to investigate the effect of operating parameters such as carbonation temperature and H₂O-to-CO₂ flowrate ratio on the CO₂ capture capacity. The maximum CO₂ capture capacity, at the optimum values of the operating parameters, was 7.2 mmol CO₂ per gram of K₂CO₃. The cyclic behavior of the sorbent was examined over 20 carbonation and regeneration cycles. The alumina-aerogel-supported K₂CO₃ showed excellent performance compared with unsupported K₂CO₃ and γ-alumina-supported K₂CO₃. Fundamental performance analyses and long-term thermal and chemical stability tests will be performed on the sorbent in the future. The applicability of the sorbent in a bench-scale process will be evaluated, and a corresponding process model will be established. The fundamental material knowledge and respective process development will be delivered to industrial partners for the design of a pilot-scale testing unit, thereby facilitating the industrial application of alumina-aerogels.
Keywords: alumina-aerogel, CO₂ capture, K₂CO₃, optimization
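Response surface methodology of the kind used here typically fits a second-order polynomial to the measured capacities and locates its stationary point; a sketch on synthetic data (the peak location and curvatures are invented for illustration; only the 7.2 mmol/g optimum echoes the abstract):

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of the second-order RSM model
    z = b0 + b1*x + b2*y + b3*x^2 + b4*y^2 + b5*x*y."""
    X = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
    coeffs, *_ = np.linalg.lstsq(X, z, rcond=None)
    return coeffs

# Synthetic runs: capacity peaking at temperature = 60, ratio = 2
rng = np.random.default_rng(0)
t = rng.uniform(40, 80, 50)       # carbonation temperature
r = rng.uniform(1, 3, 50)         # H2O-to-CO2 flowrate ratio
cap = 7.2 - 0.01 * (t - 60) ** 2 - 0.5 * (r - 2) ** 2
b = fit_quadratic_surface(t, r, cap)
t_opt = -b[1] / (2 * b[3])        # stationary point (cross-term ~ 0 here)
print(round(t_opt, 1))  # 60.0
```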
90 Bacterial Exposure and Microbial Activity in Dental Clinics during Cleaning Procedures
Authors: Atin Adhikari, Sushma Kurella, Pratik Banerjee, Nabanita Mukherjee, Yamini M. Chandana Gollapudi, Bushra Shah
Abstract:
Sharp instruments, drilling machines, and high-speed rotary instruments are routinely used in dental clinics during dental cleaning. These cleaning procedures release large numbers of oral microorganisms, including bacteria, into clinic air and may pose significant occupational bioaerosol exposure risks for dentists, dental hygienists, patients, and dental clinic employees. The two major goals of this study were to quantify volumetric airborne concentrations of bacteria and to assess overall microbial activity in this type of occupational environment. The study was conducted in several dental clinics of southern Georgia; 15 dental cleaning procedures were targeted for sampling of airborne bacteria and testing of overall microbial activity in dust settled on clinic floors. For air sampling, a BioStage viable cascade impactor was utilized, comprising an inlet cone, a precision-drilled 400-hole impactor stage, and a base that holds an agar plate (tryptic soy agar). A high-flow Quick-Take-30 pump connected to this impactor pulls airborne microorganisms at a flow rate of 28.3 L/min through the holes (jets), where they are collected on the agar surface for approximately five minutes. After sampling, agar plates containing the samples were placed in an ice chest with blue ice, and the plates were incubated at 30±2°C for 24 to 72 h. Colonies were counted and converted to airborne concentrations (CFU/m³) followed by positive-hole corrections. The most abundant bacterial colonies (selected by visual screening) were identified by PCR amplicon sequencing of 16S rRNA genes. To understand overall microbial activity on clinic floors and estimate the general cleanliness of clinic surfaces during and after dental cleaning procedures, ATP levels were determined in swabbed dust samples collected from 10 cm² floor surfaces. The concentration of ATP can indicate both the cell viability and the metabolic status of settled microorganisms in this situation.
An ATP measuring kit was used, which utilized the standard luciferin-luciferase fluorescence reaction and a luminometer that quantified ATP levels as relative light units (RLU). Three air and dust samples were collected during each cleaning procedure: at the beginning, during cleaning, and immediately after the procedure was completed (n = 45). Concentrations at the beginning, during, and after dental cleaning procedures were 671±525, 917±1203, and 899±823 CFU/m³, respectively, for airborne bacteria, and 91±101, 243±129, and 139±77 RLU/sample, respectively, for ATP levels. These bacterial concentrations were significantly higher than in typical indoor residential environments. Although an increasing trend in airborne bacteria was observed during cleaning, the data collected at the three time points were not significantly different (ANOVA: p = 0.38), probably due to the high standard deviations of the data. The ATP levels, however, demonstrated a significant difference (ANOVA: p < 0.05), indicating a significant change in microbial activity on floor surfaces during dental cleaning. The most common bacterial genera identified, in order of frequency of occurrence, were Neisseria sp., Streptococcus sp., Chryseobacterium sp., Paenisporosarcina sp., and Vibrio sp. The study concluded that bacterial exposure in dental clinics could be a notable occupational biohazard, and appropriate respiratory protection for employees is urgently needed.
Keywords: bioaerosols, hospital hygiene, indoor air quality, occupational biohazards
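Converting impactor plate counts to the CFU/m³ values reported above divides the colony count by the sampled air volume (28.3 L/min × ~5 min); the positive-hole correction is omitted in this sketch, and the plate count is hypothetical:

```python
def airborne_cfu_per_m3(colonies: int, flow_l_per_min: float = 28.3,
                        minutes: float = 5.0) -> float:
    """Airborne concentration from an impactor plate count:
    CFU/m^3 = colonies / sampled air volume (litres -> m^3)."""
    sampled_m3 = flow_l_per_min * minutes / 1000.0
    return colonies / sampled_m3

# A hypothetical plate count of 95 colonies in a 0.1415 m^3 sample:
print(round(airborne_cfu_per_m3(95)))  # 671 CFU/m^3
```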
89 Early Initiation of Breastfeeding and Its Determinants among Non-Caesarean Deliveries at Primary and Secondary Health Facilities: A Case Observational Study
Authors: Farhana Karim, Abdullah N. S. Khan, Mohiuddin A. K. Chowdhury, Nabila Zaka, Alexander Manu, Shams El Arifeen, Sk Masum Billah
Abstract:
Breastfeeding, an integral part of newborn care, can reduce all-cause neonatal mortality and morbidity by 55-87%. Early initiation of breastfeeding within 1 hour of birth can avert 22% of newborn mortality. Only 45% of the world’s newborns and 42% of newborns in South Asia are put to the breast within one hour of birth. In Bangladesh, only half of mothers practice early initiation of breastfeeding, which is less likely to be practiced if the baby is born in a health facility. This study aims to generate strong evidence on early initiation of breastfeeding practices in government health facilities and to explore the associated factors influencing the practice. The study was conducted in selected health facilities in three neighbouring districts of Northern Bangladesh. A total of 249 normal vaginal delivery cases were observed for 24 hours from the time of birth. The outcome variable was initiation of breastfeeding within 1 hour, while the explanatory variables included type of health facility, privacy, presence of a support person, stage of labour at admission, need for augmentation of labour, complications during delivery, need for episiotomy, spontaneous cry of the newborn, skin-to-skin contact with the mother, post-natal contact with the service provider, receiving a post-natal examination, and counselling on breastfeeding during postnatal contact. Simple descriptive statistics were employed to see the distribution of samples according to socio-demographic characteristics. The Kruskal-Wallis test was carried out to test the equality of medians among two or more categories of each variable, and P-values are reported. A series of simple logistic regressions was conducted with all the potential explanatory variables to identify the determining factors for breastfeeding within 1 hour in a health facility. Finally, multiple logistic regression was conducted including the variables found significant in the bivariate analyses.
Almost 90% of participants initiated breastfeeding at the health facility, and the median time to initiation was 38 minutes. However, delivering in a sub-district hospital significantly delayed breastfeeding initiation in comparison to delivering in a district hospital. Maintenance of adequate privacy and the presence of separate staff for taking care of the newborn significantly shortened the time to initiation. Initiation time was longer if the mother had augmented labour or obstetric complications, or if the newborn needed resuscitation. Initiation was significantly earlier if the baby was placed skin-to-skin on the mother’s abdomen and received a postnatal examination by a provider. After controlling for potential confounders, the odds of initiating breastfeeding within one hour of birth were higher if the mother gave birth in a district hospital (AOR 3.0; 95% CI 1.5, 6.2), privacy was well maintained (AOR 2.3; 95% CI 1.1, 4.5), the baby cried spontaneously (AOR 7.7; 95% CI 3.3, 17.8), the baby was put in skin-to-skin contact with the mother (AOR 4.6; 95% CI 1.9, 11.2), and the baby was examined by a provider in the facility (AOR 4.4; 95% CI 1.4, 14.2). The evidence generated by this study will hopefully direct policymakers to identify and prioritize opportunities for creating and supporting early initiation of breastfeeding in health facilities.
Keywords: Bangladesh, early initiation of breastfeeding, health facility, normal vaginal delivery, skin to skin contact
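The adjusted odds ratios above come from exponentiating logistic-regression coefficients; a sketch reconstructing the district-hospital estimate from an assumed coefficient and standard error (the SE of about 0.37 is back-solved from the reported CI, not a value given in the abstract):

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Adjusted odds ratio with a 95% Wald confidence interval:
    OR = exp(beta), CI = exp(beta -/+ z*se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Assumed values: beta = ln(3.0), SE ~ 0.37 (both hypothetical)
aor, lo, hi = odds_ratio_ci(math.log(3.0), 0.37)
print(round(aor, 1), round(lo, 1), round(hi, 1))  # 3.0 1.5 6.2
```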
88 Functional Plasma-Spray Ceramic Coatings for Corrosion Protection of RAFM Steels in Fusion Energy Systems
Authors: Chen Jiang, Eric Jordan, Maurice Gell, Balakrishnan Nair
Abstract:
Nuclear fusion, one of the most promising options for reliably generating large amounts of carbon-free energy in the future, has seen a plethora of ground-breaking technological advances in recent years. An efficient and durable “breeding blanket”, needed to ensure a reactor’s self-sufficiency by maintaining the optimal coolant temperature as well as by minimizing the radiation dosage behind the blanket, still remains a technological challenge for the various reactor designs for commercial fusion power plants. A relatively new dual-coolant lead-lithium (DCLL) breeder design has exhibited great potential for high-temperature (>700°C), high-thermal-efficiency (>40%) fusion reactor operation. However, the structural material, namely reduced activation ferritic-martensitic (RAFM) steel, is not chemically stable in contact with the molten Pb-17%Li coolant. Thus, to realize this promising reactor design, effective corrosion-resistant coatings on RAFM steels are a pressing need. Solution Spray Technologies LLC (SST) is developing a double-layer ceramic coating design to address the corrosion protection of RAFM steels, using a novel solution and solution/suspension plasma spray technology, through a US Department of Energy-funded project. Plasma spray is a coating deposition method widely used in many energy applications. Novel derivatives of the conventional powder plasma spray process, known as the solution-precursor and solution/suspension-hybrid plasma spray processes, are powerful methods for fabricating thin, dense ceramic coatings with the complex compositions necessary for corrosion protection in DCLL breeders. These processes produce ultra-fine molten splats and allow fine adjustment of the coating chemistry. A thin, dense ceramic coating with a chemistry chosen for superior chemical stability in molten Pb-Li, low activation properties, and good radiation tolerance is ideal for corrosion protection of RAFM steels.
A key challenge is to accommodate the coating’s CTE mismatch with the RAFM substrate through the selection and incorporation of appropriate bond layers, allowing for enhanced coating durability and robustness. Systematic process optimization is being used to define the optimal plasma spray conditions for both the topcoat and the bond layer, and X-ray diffraction and SEM-EDS are applied to validate the chemistry and phase composition of the coatings. The plasma-sprayed double-layer corrosion-resistant coatings were also deposited onto simulated RAFM steel substrates, which are being tested separately under thermal cycling, high-temperature moist-air oxidation, and molten Pb-Li capsule corrosion conditions. Results from this testing on coated samples and comparisons with bare RAFM reference samples will be presented, along with conclusions assessing the viability of the new ceramic coatings as corrosion prevention systems for DCLL breeders in commercial nuclear fusion reactors.
Keywords: breeding blanket, corrosion protection, coating, plasma spray
87 Microfungi on Sandy Beaches: Potential Threats for People Enjoying Lakeside Recreation
Authors: Tomasz Balabanski, Anna Biedunkiewicz
Abstract:
Research on basic bacteriological and physicochemical parameters conducted by state institutions (Provincial Sanitary and Epidemiological Station and District Sanitary and Epidemiological Station) is limited to bathing waters under constant sanitary and epidemiological supervision. Unfortunately, no routine or monitoring tests are carried out for the presence of microfungi. This also applies to beach sand used for recreational purposes. The purpose of this research was to determine the diversity of the mycobiota present on supervised and unsupervised sandy beaches on the shores of lakes at municipal bathing areas used for recreation. The research material consisted of microfungi isolated from April to October 2019 from sandy beaches of supervised and unsupervised lakes located within the administrative boundaries of the city of Olsztyn (North-Eastern Poland, Europe). Four lakes out of the fifteen available (Tyrsko, Kortowskie, Skanda, and Ukiel), whose bathing waters are subjected to routine bacteriological tests, were selected for testing. To compare the diversity of the mycobiota composition on the surface and below the sand mixing layer, samples were taken from two depths (10 cm and 50 cm) using a soil auger. Microfungi from the sand samples were obtained by surface inoculation on an RBC medium from the 1st dilution (1:10). After incubation at 25°C for 96-144 h, the average number of CFU/dm³ was counted. Morphologically differing yeast colonies were passaged onto Sabouraud agar slants with gentamicin and incubated again. For detailed laboratory analyses, culture methods (macro- and micro-cultures) and identification methods recommended in diagnostic mycological laboratories were used. The conducted research yielded 140 yeast isolates.
The total average population was 1.37 × 10² CFU/dm³ before the bathing season (April 2019), 1.64 × 10³ CFU/dm³ during the season (May-September 2019), and 1.60 × 10² CFU/dm³ after the end of the season (October 2019). More microfungi were obtained from the surface layer of sand (100 isolates) than from the deeper layer (40 isolates). The reported microfungi may circulate seasonally between individual elements of the lake ecosystem. From the sand and soil of catchment-area beaches, they can reach bathing waters, stopping periodically on the coastal phyllosphere. The sand of the beaches and the phyllosphere act as a kind of filter for the water reservoir. The presence of microfungi with varying pathogenic potential in these places is of major epidemiological importance. Therefore, full monitoring not only of recreational waters but also of sandy beaches should be treated as an element of constant control by the supervisory institutions that license recreational areas for public use, so that using these places does not carry a risk of infection. Acknowledgment: 'Development Program of the University of Warmia and Mazury in Olsztyn', POWR.03.05.00-00-Z310/17, co-financed by the European Union under the European Social Fund from the Operational Program Knowledge Education Development. Tomasz Bałabański is a recipient of a scholarship from the Programme Interdisciplinary Doctoral Studies in Biology and Biotechnology (POWR.03.05.00-00-Z310/17), which is funded by the European Social Fund.
Keywords: beach, microfungi, sand, yeasts
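As background to the CFU/dm³ figures reported above, the usual plate-count conversion from colonies on a diluted sample can be sketched as below; the colony count, dilution, and plated volume are hypothetical examples, not values from this study.

```python
def cfu_per_dm3(colonies, dilution_factor, plated_volume_ml):
    """Convert a plate colony count to CFU per dm^3 (1 dm^3 = 1000 mL).

    colonies         -- colonies counted on the plate
    dilution_factor  -- e.g. 10 for a 1:10 dilution
    plated_volume_ml -- sample volume spread on the plate, in mL
    """
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * 1000.0

# Hypothetical example: 16 colonies from a 1:10 dilution, 0.1 mL plated.
print(cfu_per_dm3(16, 10, 0.1))  # 1.6e6 CFU/dm^3
```

In practice such counts are averaged over replicate plates, which is why the abstract reports average CFU/dm³ values per sampling period.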
Procedia PDF Downloads 104
86 High School Gain Analytics From National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage
Authors: Andrew Laming, John Hattie, Mark Wilson
Abstract:
Nine Queensland Independent high schools provided deidentified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean = 27) and any ATAR graduates without NAPLAN data (mean = 20). Based on Index of Community Socio-Educational Advantage (ICSEA) predictions, all schools had larger-than-predicted proportions of their students graduating with ATARs. An additional 173 students (14%) did not release their ATARs to their school, requiring these results to be inferred by the schools. Gain was established by first converting each student's strongest NAPLAN domain to a statewide percentile, then subtracting this result from the final ATAR. The resulting 'percentile shift' was corrected for plausible ATAR participation at each NAPLAN level. The strongest NAPLAN domain had the highest correlation with ATAR (R² = 0.58). RESULTS: School mean NAPLAN scores fitted ICSEA closely (R² = 0.97). Schools achieved a mean cohort gain of two ATAR rankings, but only 66% of students gained. This ranged from 46% of top-NAPLAN-decile students gaining, rising to 75% for students outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean of 4.0 percentiles (6.2 percentiles before correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA-1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). Mean ATAR change between years within each school was just 1.1 ATAR ranks. This suggests consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time.
Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R² was 0.33. DISCUSSION: Standardised data like NAPLAN and ATAR offer educators a simple, no-cost progression metric to analyse performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is intuitive to laypeople. The findings may also reduce ATAR/vocational stream mismatch, reveal the proportions of cohorts meeting or falling short of expectations, and demonstrate by how much. Finally, 'crashed' ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile-shift method is neither a value-added measure nor a growth percentile. In the absence of exit NAPLAN testing, this metric cannot discriminate academic gain from legitimate ATAR-maximising strategies. But by controlling for ICSEA, ATAR proportion variation, and student mobility, it uncovers progression-to-ATAR metrics that are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data are available, this analysis offers useful analytics for educators and reasonable predictive power when counselling subsequent cohorts about their ATAR prospects.
Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean
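The core percentile-shift arithmetic described above (strongest NAPLAN domain → statewide percentile → subtracted from final ATAR) can be sketched as follows. This is a minimal illustration: it assumes a statewide score distribution is available, omits the study's correction for plausible ATAR participation at each NAPLAN level, and uses a purely artificial toy distribution.

```python
import numpy as np

def naplan_percentile(score, statewide_scores):
    """Statewide percentile of a student's strongest NAPLAN domain score."""
    statewide = np.asarray(statewide_scores, dtype=float)
    return 100.0 * np.mean(statewide < score)

def percentile_shift(atar, score, statewide_scores):
    """'Percentile shift' gain: final ATAR minus the NAPLAN statewide percentile."""
    return atar - naplan_percentile(score, statewide_scores)

# Toy statewide distribution (uniform scores 0-999, purely illustrative):
# a student at the statewide median (50th percentile) who graduates with
# an ATAR of 55 shows a gain of 5 percentile ranks.
statewide = np.arange(1000)
print(percentile_shift(atar=55.0, score=500, statewide_scores=statewide))  # 5.0
```

Because ATAR is itself a statewide ranking, expressing both quantities on a common percentile scale is what makes the subtraction meaningful.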
Procedia PDF Downloads 68
85 Exploring the Neural Mechanisms of Communication and Cooperation in Children and Adults
Authors: Sara Mosteller, Larissa K. Samuelson, Sobanawartiny Wijeakumar, John P. Spencer
Abstract:
This study was designed to examine how humans teach and learn semantic information and cooperate to jointly achieve sophisticated goals. Specifically, we are measuring individual differences in how these abilities develop from foundational building blocks in early childhood. The current study adapts a paradigm for novel noun learning developed by Samuelson, Smith, Perry, and Spencer (2011) to a hyperscanning paradigm [Cui, Bryant, and Reiss, 2012]. This project measures coordinated brain activity between a parent and child using simultaneous functional near-infrared spectroscopy (fNIRS) in pairs of 2.5-, 3.5-, and 4.5-year-old children and their parents. We are also separately testing pairs of adult friends. Children and parents, or adult friends, are seated across from one another at a table. The parent (in the developmental study) then teaches their child the names of novel toys. An experimenter then tests the child by presenting the objects in pairs and asking the child to retrieve one object by name. Children are asked to choose from both pairs of familiar objects and pairs of novel objects. To explore individual differences in cooperation with the same participants, each dyad also plays a cooperative game of Jenga, in which their joint score is based on how many blocks they can remove from the tower as a team. A preliminary analysis of the noun-learning task showed that, when presented with 6 word-object mappings, children learned an average of 3 new words (50%), and the number of objects learned by each child ranged from 2 to 4. Adults initially learned all of the new words but varied in their later retention of the mappings, which ranged from 50% to 100%. We are currently examining differences in cooperative behavior during the Jenga game, including time spent discussing each move before it is made.
Ongoing analyses are examining the social dynamics that might underlie the differences between successfully learned and unlearned words for each dyad, as well as the developmental differences observed in the study. Additionally, the Jenga game is being used to better understand individual and developmental differences in social coordination during a cooperative task. At a behavioral level, the analysis maps periods of joint visual attention between participants during word learning and the Jenga game, using head-mounted eye trackers to capture each participant's first-person viewpoint during the session. We are also analyzing the coherence in brain activity between participants during novel word learning and Jenga play. The first hypothesis is that joint visual attention during the session will be positively correlated with both the number of words learned and the number of blocks moved during Jenga before the tower falls. The second hypothesis is that successful communication of new words and success in the game will each be positively correlated with synchronized brain activity between the parent and child, or between the adult friends, in cortical regions underlying social cognition, semantic processing, and visual processing. This study probes both the neural and behavioral mechanisms of learning and cooperation in a naturalistic, interactive, and developmental context.
Keywords: communication, cooperation, development, interaction, neuroscience
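Hyperscanning studies in this tradition, such as Cui, Bryant, and Reiss (2012), typically quantify inter-brain coupling with wavelet transform coherence. As a minimal, illustrative stand-in for such synchrony measures, not the study's actual pipeline, a sliding-window Pearson correlation between one fNIRS channel from each dyad member could be sketched as follows:

```python
import numpy as np

def windowed_interbrain_correlation(sig_a, sig_b, window=50, step=25):
    """Sliding-window Pearson correlation between two channel time series,
    one from each member of a dyad (a simple proxy for inter-brain synchrony)."""
    sig_a = np.asarray(sig_a, dtype=float)
    sig_b = np.asarray(sig_b, dtype=float)
    corrs = []
    for start in range(0, len(sig_a) - window + 1, step):
        a = sig_a[start:start + window]
        b = sig_b[start:start + window]
        corrs.append(np.corrcoef(a, b)[0, 1])  # Pearson r for this window
    return np.array(corrs)

# Two perfectly synchronized toy signals correlate at ~1 in every window:
t = np.linspace(0, 10, 500)
r = windowed_interbrain_correlation(np.sin(t), np.sin(t))
print(round(float(r.min()), 3))  # 1.0
```

Windowing matters because synchrony is expected to wax and wane with the task (e.g., rising during joint attention episodes), which a single whole-session correlation would obscure.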
Procedia PDF Downloads 254