Search results for: biocontrol methods
12913 Digital System Design for Strategic Improvement Planning in Education: A Socio-Technical and Iterative Design Approach
Authors: Neeley Current, Fatih Demir, Kenneth Haggerty, Blake Naughton, Isa Jahnke
Abstract:
Educational systems seek reform using data-intensive continuous improvement processes known as strategic improvement plans (SIPs). Schools turn to digital systems to monitor, analyze, and report SIPs. One technical challenge for these digital systems is integrating a highly diverse set of data sources. Another challenge is to create a learnable sociotechnical system that helps administrators, principals, and teachers add, manipulate, and interpret data. This study explores to what extent one particular system is usable and useful for strategic planning activities and whether intended users see the system as achieving the goal of improving workflow related to strategic planning in schools. In a three-phase study, researchers used sociotechnical design methods to understand the current workflow, technology use, and processes of teachers and principals surrounding their strategic improvement planning. Additionally, design review and task analysis usability methods were used to evaluate task completion, usability, and user satisfaction with the system. The resulting socio-technical models illustrate the existing work processes and indicate how, and at which places in the workflow, the newly developed system could have an impact. The results point to the potential of the system but also indicate that it was initially too complicated to use. However, the diverse users see the potential benefits, especially in integrating the diverse set of data sources, and believe that the system could fill a gap for schools in planning and conducting strategic improvement plans.
Keywords: continuous improvement process, education reform, strategic improvement planning, sociotechnical design, software development, usability
Procedia PDF Downloads 297
12912 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator based on the ID-POS database for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of a single supermarket chain. Three primary problems arose during modeling: the incomparability of customer values, multicollinearity between customer value and distance data, and the limited number of valid partial regression coefficients. An improved customer value, Huff's gravity model, and the inverse attractiveness frequency are used to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model that combines all three methods is the most effective for evaluating the influence of other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
Keywords: customer value, Huff's gravity model, POS, retailer
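As a rough illustration of the distance-based attractiveness idea above, the sketch below computes Huff-model patronage probabilities for one customer. It is a generic, minimal implementation that assumes store floor area as the attractiveness proxy and uses illustrative exponents alpha and beta; the abstract does not specify the paper's actual parameterization.

```python
import numpy as np

def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """Huff's gravity model: probability that a customer at a given home
    location patronizes each store, based on store attractiveness and
    the distance from the home to each store."""
    utility = (attractiveness ** alpha) / (distances ** beta)
    return utility / utility.sum()

# Hypothetical example: three stores; attractiveness proxied by floor area (m^2),
# distances in km from one customer's home.
floor_area = np.array([2500.0, 1800.0, 3200.0])
dist_km = np.array([1.2, 0.8, 2.5])

p = huff_probabilities(floor_area, dist_km)
print(p)  # each store's estimated share of this customer's visits
```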
Procedia PDF Downloads 123
12911 Navigating a Changing Landscape: Opportunities for Research Managers
Authors: Samba Lamine Cisse, Cheick Oumar Tangara, Seynabou Sissoko, Mahamadou Diakite, Seydou Doumbia
Abstract:
Introduction: Over the past two decades, the world has been constantly changing, with new trends in project management. These trends are transforming the methods and priorities of research project management. They include the rise of digital technologies, multidisciplinarity, open science, and the pressure for high-impact results. Managers, therefore, find themselves at a crossroads between the challenges and opportunities offered by these new trends. This paper aims to identify the challenges and opportunities they face while proposing strategies for effectively navigating this dynamic context. Methodology: This is a qualitative study based on an analysis of the challenges and opportunities facing the University Clinical Research Center in terms of new technologies and project management methods. This blended approach provides an overview of emerging trends and practices. Results: This article shows how research managers can turn new research trends in their favor and how they can adapt to the changes they face to optimize the productivity of research teams while ensuring the quality and ethics of the work. It also explores the importance of developing skills in data management, international collaboration, and innovation management. Finally, it proposes strategies for responding effectively to the challenges posed by these new trends while strengthening the position of research managers as essential facilitators of scientific progress. Conclusion: Navigating this changing landscape requires research managers to be highly flexible and able to anticipate the realities of their institution. By adopting modern project management methodologies and cultivating a culture of innovation, they can turn challenges into opportunities and propel research toward new horizons. This paper provides a strategic framework for overcoming current obstacles and capitalizing on future developments in research.
Keywords: new trends, research management, opportunities, challenges
Procedia PDF Downloads 11
12910 Knowledge of Trauma-Informed Practice: A Mixed Methods Exploratory Study with Educators of Young Children
Authors: N. Khodarahmi, L. Ford
Abstract:
Decades of research on the impact of trauma in early childhood suggest severe risks to the mental health and the emotional, social, and physical development of a young child. Trauma-exposed students can pose a variety of challenges to schools and educators of young children, and to date, few studies have addressed ECE teachers' role in providing trauma support. The present study aims to contribute to this literature by exploring the beliefs of British Columbia's (BC) early childhood education (ECE) teachers about their level of readiness and capability to work within a trauma-informed practice (TIP) framework to support their trauma-exposed students. Through a sequential mixed-methods approach, a self-report questionnaire and semi-structured interviews will be used to gauge BC ECE teachers' knowledge of TIP, their preparedness, and their ability to use this framework to support their most vulnerable students. Teacher participants will be recruited through the ECEBC organization and various school districts in the Greater Vancouver Area. Questionnaire data will be collected primarily through an online survey tool, whereas interviews will take place in person and be audio-recorded. Data analysis of survey responses will be largely descriptive, whereas interviews, once transcribed, will be analyzed using thematic content analysis to generate themes from teacher responses. Ultimately, this study hopes to highlight the necessity of utilizing the TIP framework in BC ECE classrooms in order to support trauma-exposed students and provide essential resources to compassionate educators of young children.
Keywords: early childhood education, early learning classrooms, refugee students, trauma-exposed students, trauma-informed practice
Procedia PDF Downloads 141
12909 Optimizing Nitrogen Fertilizer Application in Rice Cultivation: A Decision Model for Top and Ear Dressing Dosages
Authors: Ya-Li Tsai
Abstract:
Nitrogen is a vital element crucial for crop growth, significantly influencing crop yield. In rice cultivation, farmers often apply substantial nitrogen fertilizer to maximize yields. However, excessive nitrogen application increases the risk of lodging and pest infestation, leading to yield losses. Additionally, conventional flooded irrigation methods consume significant water resources, necessitating precision agriculture and intelligent water management systems. This study leveraged physiological data and field images captured by unmanned aerial vehicles, considering fertilizer treatment and irrigation as key factors. Statistical models incorporating rice physiological data, yield, and vegetation indices from the image data were developed. Missing physiological data were addressed using multiple imputation and regression methods, and regression models were established using principal component analysis and stepwise regression. Target nitrogen accumulation at key growth stages was identified to optimize fertilizer application, with the difference between actual and target nitrogen accumulation guiding recommendations for ear dressing dosage. Field experiments conducted in 2022 validated the recommended ear dressing dosage, demonstrating no significant difference in final yield compared to traditional fertilizer levels under alternate wetting and drying irrigation. These findings highlight the efficacy of applying recommended dosages based on fertilizer decision models, offering the potential for reduced fertilizer use while maintaining yield in rice cultivation.
Keywords: intelligent fertilizer management, nitrogen top and ear dressing fertilizer, rice, yield optimization
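The dosage rule described above, in which the gap between target and estimated nitrogen accumulation drives the ear-dressing recommendation, could look something like the sketch below. The conversion factors are assumptions for illustration only (urea at 46% N and a 50% nitrogen use efficiency); the paper's actual decision model and coefficients are not given in the abstract.

```python
def recommend_ear_dressing(target_n_uptake, actual_n_uptake,
                           fertilizer_n_content=0.46, n_use_efficiency=0.5):
    """Translate the gap between target and estimated crop nitrogen accumulation
    (kg N/ha) at a key growth stage into an ear-dressing fertilizer dose (kg/ha).

    fertilizer_n_content: N fraction of the fertilizer (0.46 assumes urea).
    n_use_efficiency: assumed fraction of applied N actually taken up by the crop.
    """
    n_gap = max(target_n_uptake - actual_n_uptake, 0.0)   # no dose if target already met
    n_to_apply = n_gap / n_use_efficiency                  # account for losses
    return n_to_apply / fertilizer_n_content               # product dose, kg/ha

# Hypothetical values: target 120 kg N/ha, UAV-estimated uptake 95 kg N/ha
print(round(recommend_ear_dressing(120, 95), 1), "kg/ha of urea")
```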
Procedia PDF Downloads 82
12908 Epidemiological Profile of Tuberculosis Disease in Meknes, Morocco: A Descriptive Analysis, 2016-2020
Authors: A. Lakhal, M. Bahalou, A. Khattabi
Abstract:
Introduction: Tuberculosis is one of the world's deadliest infectious diseases. In Morocco, a total of 30,636 cases of tuberculosis, all forms combined, were reported in 2015, representing an incidence of 89 cases per 100,000 population. The number of deaths from tuberculosis (TB) was 656. In the prefecture of Meknes, the incidence remains high compared to the national level. The objective of this work is to describe the epidemiological profile of tuberculosis in the prefecture of Meknes. Methods: This is a descriptive analysis of TB cases reported between 2016 and 2020 at the regional diagnostic center for tuberculosis and respiratory diseases. We performed the analysis using Microsoft Excel and Epi Info 7. Results: Epidemiological data from 2016 to 2020 report a total of 4,100 new cases of all forms of tuberculosis, with an average of 820 new cases per year. The median age is 32 years. There is a clear male predominance: on average, 58% of cases are male and 42% female. The incidence rate of bacteriologically confirmed tuberculosis increased from 35 cases per 100,000 inhabitants in 2016 to 39.4 in 2020. The confirmation rate for pulmonary tuberculosis decreased from 84% in 2016 to 75% in 2020. Pulmonary involvement predominates, averaging 46% of cases, followed by lymph node involvement (29%) and pleural involvement (10%). Digestive, osteoarticular, genitourinary, and meningeal involvement occurs in 8% of cases. Primary tuberculosis infection occurs in an average of 0.5% of cases. The proportion of HIV-TB co-infections was 2.8% in 2020. Conclusion: The incidence of tuberculosis in Meknes remains high compared to the national level. Thus, it is imperative to reinforce early detection; improve contact tracing and the methods used to detect, confirm, and treat cases; and reduce the proportion of patients lost to follow-up.
Keywords: tuberculosis, epidemiological profile, Meknes, Morocco
Procedia PDF Downloads 157
12907 Chemiluminescent Detection of Microorganisms in Food/Drug Product Using Reducing Agents and Gold Nanoplates
Authors: Minh-Phuong Ngoc Bui, Abdennour Abbas
Abstract:
Microbial spoilage of food and drugs has been a constant nuisance and an unavoidable problem throughout history, affecting food/drug quality and safety in a variety of ways. A simple and rapid test for fungi and bacteria in food/drugs and environmental clinical samples is essential for proper management of contamination. A number of different techniques have been developed for the detection and enumeration of foodborne microorganisms, including plate counting, enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), nucleic acid sensors, and electrical and microscopy methods. However, the significant drawbacks of these techniques are the high demand for operator skill and the time and cost involved. In this report, we introduce a rapid method for the detection of bacteria and fungi in food/drug products using a specific interaction between a reducing agent (tris(2-carboxyethyl)phosphine, TCEP) and microbial surface proteins. The chemical reaction was transferred to a transduction system using gold nanoplate-enhanced chemiluminescence. We optimized the nanoplate synthesis conditions, characterized the chemiluminescence parameters, and optimized conditions for the microbial assay. The new detection method was applied for rapid detection of bacteria (E. coli sp. and Lactobacillus sp.) and fungi (Mucor sp.), with a limit of detection as low as single-digit cells per mL within 10 min using a portable luminometer. We expect our simple and rapid detection method to be a powerful alternative to conventional plate counting and immunoassay methods for rapid screening of microorganisms in food/drug products.
Keywords: microorganism testing, gold nanoplates, chemiluminescence, reducing agents, luminol
Procedia PDF Downloads 299
12906 The Use of Microbiological Methods to Reduce Aflatoxin M1 in Cheese
Authors: Bruna Goncalves, Jennifer Henck, Romulo Uliana, Eliana Kamimura, Carlos Oliveira, Carlos Corassin
Abstract:
Studies have shown evidence of human exposure to aflatoxin M1 due to the consumption of contaminated milk and dairy products (mainly cheeses). This poses a great risk to public health, since milk and milk products are frequently consumed by portions of the population considered immunosuppressed, as well as by children and the elderly. Knowledge of the negative health and economic impacts of aflatoxins has led to investigations of strategies to prevent their formation in food, as well as to eliminate, inactivate, or reduce the bioavailability of these toxins in contaminated products. This study evaluated the effect of microbiological methods using lactic acid bacteria on aflatoxin M1 (AFM1) reduction in Minas Frescal cheese (a typical Brazilian product and among the most consumed cheeses in Brazil) spiked with 1 µg/L AFM1. Inactivated lactic acid bacteria (0.5% v/v of L. rhamnosus and L. lactis) were added during the cheese production process. Nine cheeses were produced, divided into three treatments: negative controls (without AFM1 or lactic acid bacteria), positive controls (AFM1 only), and lactic acid bacteria + AFM1. Samples of cheese were collected on days 2, 10, 20, and 30 after production and submitted to composition analyses and determination of AFM1 by high-performance liquid chromatography. The reductions of AFM1 in cheese by lactic acid bacteria at the end of the trial indicate a potential application of inactivated lactic acid bacteria in reducing the bioavailability of AFM1 in Minas Frescal cheese, without physical-chemical or microbiological modifications during the 30-day experimental period. The authors would like to thank the São Paulo Research Foundation – FAPESP (grants #2017/20081-6 and #2017/19683-1).
Keywords: aflatoxin, milk, minas frescal cheese, decontamination
Procedia PDF Downloads 194
12905 Soil-Less Misting System: A Technology for Hybrid Seed Production in Tomato (Lycopersicon esculentum Mill.)
Authors: K. D. Rajatha, S. Rajendra Prasad, N. Nethra
Abstract:
Aeroponics is an advanced technique for cultivating plants without soil, with minimal water and nutrient consumption. It is a technology that could enable vertical growth in agriculture. It is an eco-friendly approach widely used for the commercial cultivation of vegetables to obtain superior quality and yield. In this context, to harness the potential of the technology, an experiment was designed to evaluate the suitability of the aeroponic method over the conventional method for hybrid seed production in tomato. The experiment was carried out in a Factorial Completely Randomized Design (FCRD) with three replications during 2017-18 at UAS, GKVK, Bengaluru. Nutrients and pH were standardized; among six different nutrient solutions, crop performance was best in Hoagland's solution at a pH between 5.5 and 7. The results of the present study revealed that, between the TAG1F and TAG2F parental lines, TAG1F performed better in both methods of seed production. Between the methods, aeroponics showed better performance for the quality parameters except plant spread, owing to the better availability of nutrients and aeration and the large root biomass in aeroponics. The aeroponic method showed significantly higher plant length (124.9 cm), plant growth rate (0.669), seedling survival rate (100%), earlier flowering (27.5 days), higher fruit weight (121.5 g), 100-seed weight (0.373 g), and total seed yield plant⁻¹ (11.68 g) compared to the conventional method. By providing the best environment for plant growth, the genetic potential of the plant can be fully expressed and harvested. Hence, aeroponics could be a promising tool for the year-round production of quality, healthy hybrid seed under protected cultivation.
Keywords: aeroponics, Hoagland's solution, hybrid seed production, Lycopersicon esculentum
Procedia PDF Downloads 102
12904 Tool for Maxillary Sinus Quantification in Computed Tomography Exams
Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina
Abstract:
The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, to heat or humidify inspired air, for thermoregulation, to impart resonance to the voice, and more. Thus, the real function of the MS is still uncertain. Furthermore, MS anatomy is complex and varies from person to person. Many diseases may affect the development of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables quantitative analysis. However, this is not always possible in the clinical routine, and when it is, it involves much effort and/or time. Therefore, a convenient, robust, and practical tool correlated with MS volume is needed, allowing clinical applicability. Currently, the available methods for MS segmentation are manual or semi-automatic, and manual methods present inter- and intra-individual variability. Thus, the aim of this study was to develop an automatic tool to quantify MS volume in CT scans of the paranasal sinuses. The study was developed with ethical approval from the authors' institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) with features such as pixel value, spatial distribution, and shape. The detected pixels are used as seed points for region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving segmentation accuracy. These steps are applied to all slices of the CT exam to obtain the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist, using Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. The linear regression showed a strong association and low dispersion between variables, the Bland-Altman analyses showed no significant differences between the analyzed methods, and the Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice; thus, it may be useful in the diagnosis and treatment planning of MS diseases.
Keywords: maxillary sinus, support vector machine, region growing, volume quantification
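The detection-then-growing pipeline described above can be sketched as follows. This is a minimal, illustrative Python/scikit-image analogue of the Matlab® tool, not the authors' implementation: the per-pixel features, the flood-fill tolerance, and the morphological cleanup are simplified stand-ins for the features and operators actually used, and the SVM is assumed to have been trained beforehand on labeled pixels.

```python
import numpy as np
from sklearn.svm import SVC
from skimage.segmentation import flood
from skimage.morphology import binary_opening

def slice_features(ct_slice):
    """Per-pixel features: intensity plus normalized (row, col) position,
    a simplified stand-in for the intensity/spatial/shape features in the paper."""
    rows, cols = np.indices(ct_slice.shape)
    return np.stack([ct_slice.ravel(),
                     rows.ravel() / ct_slice.shape[0],
                     cols.ravel() / ct_slice.shape[1]], axis=1)

def segment_sinus(ct_slice, svm: SVC, tolerance=120):
    """Detect candidate sinus pixels with a pre-trained SVM, grow a region
    from each detected seed, and clean the result with a morphological opening."""
    labels = svm.predict(slice_features(ct_slice)).reshape(ct_slice.shape)
    mask = np.zeros_like(ct_slice, dtype=bool)
    for r, c in zip(*np.nonzero(labels)):
        if not mask[r, c]:                       # skip seeds already covered
            mask |= flood(ct_slice, (r, c), tolerance=tolerance)
    return binary_opening(mask)                  # remove small false positives

def sinus_volume(ct_volume, svm, voxel_volume_mm3):
    """Apply the per-slice segmentation to every slice and sum the voxels."""
    voxels = sum(segment_sinus(s, svm).sum() for s in ct_volume)
    return voxels * voxel_volume_mm3
```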
Procedia PDF Downloads 504
12903 Elite Child Athletes Are Our Future: Cardiac Adaptation to Monofin Training in Prepubertal Egyptian Athletes
Authors: Magdy Abouzeid, Nancy Abouzeid, Afaf Salem
Abstract:
Background: The elite child athlete is one who has superior athletic talent. Monofin swimming (with a single-surface swim fin) has already proved to be the most efficient method of swimming for human beings. This is a novel descriptive study examining myocardial function indices in prepubertal monofin children. The aim of the present study was to determine the influence of long-term monofin training (LTMT; 36 weeks, 6 sessions per week, 90 min per session) on myocardial adaptation in elite child monofin athletes. Methods: 14 elite monofin children aged 11.95 ± 1.09 years took part in the LTMT. All subjects underwent two-dimensional, M-mode, and Doppler echocardiography before and after training to evaluate cardiac dimensions and function, including septal and posterior wall thickness. Statistical analysis in SPSS used means ± SD, paired t-tests, and percentage of improvement. Findings: There were significant differences (p < 0.01) and percentage improvements in all echocardiographic parameters after LTMT. Interventricular septal thickness in diastole and systole increased by 27.9% and 42.75%, respectively. Left ventricular end-systolic and end-diastolic dimensions increased by 16.81% and 42.7%, respectively. Posterior wall thickness increased markedly in systole (283.3%) and in diastole (51.78%). Left ventricular mass in diastole and systole increased by 44.8% and 40.1%, respectively. Stroke volume (SV) and resting heart rate (HR) changed significantly, by 25% and 14.7%, respectively. Interpretation: the monofin is a unique swimming tool for creating propulsion and overcoming resistance. Further research is needed to determine the effects of monofin training on the right ventricle in child athletes.
Keywords: prepubertal, monofin training, athlete's heart, elite child athlete, echocardiography
Procedia PDF Downloads 339
12902 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines a method for developing a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani-type. While ANFIS is superior in terms of error minimization, it could generate solutions that are questionable, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their ranges. This shows that the applicability of the ANFIS models highly depends on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE. In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in industry and policy-making sectors. While the methodology does not guarantee a more accurate result than those generated by the Life Cycle Methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
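To make the Sugeno-type inference mentioned above concrete, the sketch below implements a zero-order Sugeno (TSK) system by hand: triangular input membership functions, product firing strengths, and a weighted average of constant rule outputs. The membership breakpoints, the two-rule base, and the GWP outputs are invented for illustration only; they are not the calibrated model from the paper.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def sugeno_predict(inputs, rules):
    """Zero-order Sugeno (TSK) inference: each rule is
    (one (a, b, c) membership triple per input, constant output).
    Firing strength = product of memberships; output = weighted average."""
    strengths, outputs = [], []
    for mfs, out in rules:
        w = np.prod([trimf(x, *abc) for x, abc in zip(inputs, mfs)])
        strengths.append(w)
        outputs.append(out)
    strengths = np.array(strengths)
    return float(np.dot(strengths, outputs) / (strengths.sum() + 1e-12))

# Hypothetical two-rule base mapping (solar irradiation in kWh/m2/yr,
# module efficiency, performance ratio) to GWP in g CO2-eq/kWh.
rules = [
    (((800, 1200, 1600), (0.10, 0.14, 0.18), (0.6, 0.75, 0.9)), 60.0),   # "low irradiation" rule
    (((1400, 1900, 2400), (0.14, 0.18, 0.22), (0.7, 0.85, 1.0)), 30.0),  # "high irradiation" rule
]
print(sugeno_predict((1500, 0.16, 0.8), rules))
```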
Procedia PDF Downloads 164
12901 Effect of In-Season Linear Sprint Training on Sprint Kinematics of Amateur Soccer Players
Authors: Avinash Kharel
Abstract:
Background: Linear sprint training is one possible approach to developing sprint performance, a crucial skill to focus on in soccer. Numerous methods, including various on-field training options, can be employed to attain this goal. However, the effects of in-season linear sprint training on sprint performance and the related kinetic changes are unknown in a professional setting. The study aimed to investigate the effect of in-season linear sprint training on the sprint kinematics of amateur soccer players. Methods: After familiarization, a 4-week training protocol was completed, and sprint performance and force-velocity (FV) profiles were compared before and after the training. Eighteen male amateur soccer players (age: 22 ± 2 years; height: 178 ± 7 cm; body mass: 74 ± 8 kg; 30-m split time: 4.398 ± s) participated in the study. Sprint kinematic variables were measured, including maximum sprint velocity, theoretical maximum force (F0), maximum force output per kilogram of body weight (N/kg), maximum velocity (V0), maximum power output (Pmax, W), the ratio of force to velocity (FV), and the ratio of force to velocity at peak power. Results: Results showed significant improvements in maximum sprint velocity (p < 0.01, ES = 0.89), theoretical maximum force (p < 0.05, ES = 0.50), maximum force output per kilogram of body weight (p < 0.05, ES = 0.42), maximum power output (p < 0.05, ES = 0.52), and the ratio of force to velocity at peak power (RF peak) (p < 0.05, ES = 0.44) post-training. There were no significant changes in the ratio of force to velocity (FV) or maximum velocity (V0) post-training (p > 0.05). Conclusion: These findings suggest that in-season linear sprint training can effectively improve certain sprint kinematic variables in amateur soccer players. Coaches and players should consider incorporating linear sprint training into their in-season training programs to improve sprint performance.
Keywords: sprint performance, training intervention, soccer, kinematics
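For context on how F0, V0, and Pmax relate, the sketch below uses the common linear force-velocity model for sprinting, F(v) = F0·(1 − v/V0), under which maximal power is F0·V0/4. The abstract does not state which profiling method or equations the authors used, so this is an assumed, generic relationship, and the example values are hypothetical.

```python
def fv_profile_summary(F0, V0):
    """Summary quantities under the common linear force-velocity model
    F(v) = F0 * (1 - v / V0): maximal power and the F-V slope."""
    Pmax = F0 * V0 / 4.0          # apex of the parabolic power-velocity curve
    slope = -F0 / V0              # the F-V slope (more negative = more force-oriented)
    return Pmax, slope

# Hypothetical pre/post values for one player (F0 in N/kg, V0 in m/s)
pre = fv_profile_summary(7.2, 8.8)
post = fv_profile_summary(7.6, 8.9)
print(f"Pmax pre: {pre[0]:.1f} W/kg, post: {post[0]:.1f} W/kg")
```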
Procedia PDF Downloads 73
12900 Application of Non-Smoking Areas in Hospitals
Authors: Nur Inayah Ismaniar, Sukri Palutturi, Ansariadi, Atjo Wahyu
Abstract:
Background: In many countries around the world, smoking is now considered a serious problem because its effects can not only lead to addiction but also have the potential to harm health. Public health authorities have concluded that one solution to protect the public from active smokers is to issue a policy requiring public facilities to be completely smoke-free. The hospital is one of the public facilities that has been designated as a smoke-free area. However, the implementation and maintenance of a successful smoke-free hospital program is still considered an ongoing challenge worldwide due to very low levels of adherence. Low compliance with smoke-free policies is also seen in other public facilities. The purpose of this literature review is to examine the level of compliance with Non-Smoking Area policies, how successful these policies have been in reducing smoking activity in hospitals, and which factors lead to compliance in countries around the world. Methods: A literature review was carried out covering articles using all types of research methods, both qualitative and quantitative. The samples comprise all subjects at the research locations, including patients, staff, and hospital visitors. Results: Levels of compliance varied widely across the literature, with the highest reported compliance being 88.4%. Furthermore, several determinants known to affect compliance with Non-Smoking Area policies in hospitals include communication, information, knowledge, perceptions, interventions, attitudes, and support. Obstacles to enforcement are the absence of sanctions against violators of the Non-Smoking Area policy, the ineffectiveness of policymakers' roles in hospitals, and negative perceptions of smoking related to mental health. Conclusion: Violations of the Non-Smoking Area policy are often committed by hospital staff themselves, which makes it difficult for this policy to be fully enforced at various points in the hospital.
Keywords: health policy, non-smoking area, hospital, implementation
Procedia PDF Downloads 89
12899 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection
Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine
Abstract:
Atrial fibrillation (AF) is considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (AF Termination Challenge Database, MIT-BIH AF, Normal Sinus Rhythm RR Interval Database, and MIT-BIH Normal Sinus Rhythm Database) were used for assessment. All time series were segmented into 1-min RR-interval windows, and four specific features were then calculated. Two pattern recognition methods, Principal Component Analysis (PCA) and the Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find important features for discriminating between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection holds several interesting properties and can be implemented with just a few arithmetic operations, which makes it a suitable choice for telecare applications.
Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine
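A minimal sketch of this kind of pipeline is shown below: four per-window RR-interval features, PCA for feature reduction, and a hand-rolled LVQ1 classifier. The four features used here (mean RR, SDNN, RMSSD, pNN50) are assumptions, since the abstract does not name the paper's features, and this single-prototype LVQ1 only illustrates the general method.

```python
import numpy as np
from sklearn.decomposition import PCA

def rr_features(rr):
    """Four per-window features (assumed here: mean RR, SDNN, RMSSD, pNN50);
    the paper uses four unspecified features, so these are illustrative only."""
    diff = np.diff(rr)
    return np.array([rr.mean(), rr.std(),
                     np.sqrt(np.mean(diff ** 2)),
                     np.mean(np.abs(diff) > 0.05)])   # pNN50 with RR in seconds

class LVQ1:
    """Minimal LVQ1: one prototype per class, pulled toward correctly
    classified samples and pushed away from misclassified ones."""
    def fit(self, X, y, lr=0.05, epochs=30):
        self.classes_ = np.unique(y)
        self.protos_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                k = np.argmin(np.linalg.norm(self.protos_ - xi, axis=1))
                sign = 1.0 if self.classes_[k] == yi else -1.0
                self.protos_[k] += sign * lr * (xi - self.protos_[k])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.protos_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Usage sketch: X_raw is an (n_windows, 4) feature matrix, y is 0 = NSR, 1 = AF.
# X = PCA(n_components=2).fit_transform(X_raw)
# model = LVQ1().fit(X, y); predictions = model.predict(X)
```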
Procedia PDF Downloads 267
12898 Exploring the Relationship Between Helicobacter Pylori Infection and the Incidence of Bronchogenic Carcinoma
Authors: Jose R. Garcia, Lexi Frankel, Amalia Ardeljan, Sergio Medina, Ali Yasback, Omar Rashid
Abstract:
Background: Helicobacter pylori (H. pylori) is a gram-negative, spiral-shaped bacterium that affects nearly half of the population worldwide; humans serve as the principal reservoir. Infection rates usually follow an inverse relationship with hygiene practices and are higher in developing countries than in developed countries. Incidence varies significantly by geographic area, race, ethnicity, age, and socioeconomic status. H. pylori is primarily associated with conditions of the gastrointestinal tract such as atrophic gastritis and duodenal peptic ulcers. Infection is also associated with an increased risk of carcinogenesis, as there is evidence that H. pylori infection may lead to gastric adenocarcinoma and mucosa-associated lymphoid tissue (MALT) lymphoma. It has been suggested that H. pylori infection may be considered a systemic condition, leading to various novel associations with several different neoplasms such as colorectal cancer, pancreatic cancer, and lung cancer, although further research is needed. Emerging evidence suggests that H. pylori infection may offer protective effects against Mycobacterium tuberculosis as a result of non-specific induction of interferon-γ (IFN-γ). Similar mechanisms of enhanced immunity may affect the development of bronchogenic carcinoma due to the antiproliferative, pro-apoptotic, and cytostatic functions of IFN-γ. The purpose of this study was to evaluate the correlation between Helicobacter pylori infection and the incidence of bronchogenic carcinoma. Methods: The data were provided by a Health Insurance Portability and Accountability Act (HIPAA) compliant national database and used to evaluate patients infected versus not infected with H. pylori, identified using ICD-10 and ICD-9 codes. Access to the database was granted by Holy Cross Health, Fort Lauderdale, for the purpose of academic research. Standard statistical methods were used. Results: Between January 2010 and December 2019, the query resulted in 163,224 patients in each of the infected and control groups. The two groups were matched by age range and CCI score. The incidence of bronchogenic carcinoma was 1.853% (3,024 patients) in the H. pylori group compared to 4.785% (7,810 patients) in the control group. The difference was statistically significant (p < 2.22x10⁻¹⁶), with an odds ratio of 0.367 (95% CI: 0.353-0.383). The two groups were then matched by treatment and incidence of cancer, which resulted in a total of 101,739 patients analyzed after this match. The incidence of bronchogenic carcinoma was 1.929% (1,962 patients) in the H. pylori and treatment group compared to 4.618% (4,698 patients) in the control group with treatment. The difference was statistically significant (p < 2.22x10⁻¹⁶), with an odds ratio of 0.403 (95% CI: 0.383-0.425).
Keywords: bronchogenic carcinoma, helicobacter pylori, lung cancer, pathogen-associated molecular patterns
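For readers who want to see how an odds ratio of this kind is computed, the sketch below implements the standard crude odds ratio with a Woolf (logit) 95% confidence interval from the 2x2 counts quoted above. Because the published estimates come from a matched analysis, this crude calculation will not reproduce the reported 0.367 exactly; it only illustrates the arithmetic.

```python
import math

def odds_ratio_ci(exposed_cases, exposed_total, control_cases, control_total, z=1.96):
    """Crude odds ratio with a 95% CI (Woolf's logit method) from 2x2 counts."""
    a, b = exposed_cases, exposed_total - exposed_cases      # H. pylori group: cases, non-cases
    c, d = control_cases, control_total - control_cases      # control group: cases, non-cases
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)            # SE of log(OR)
    lo, hi = math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Counts reported in the first comparison of the abstract
print(odds_ratio_ci(3024, 163224, 7810, 163224))
```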
Procedia PDF Downloads 183
12897 Laboratory Simulation of Subway Dynamic Stray Current Interference with Cathodically Protected Structures
Authors: Mohammad Derakhshani, Saeed Reza Allahkaram, Michael Isakani-Zakaria, Masoud Samadian, Hojat Sharifi Rasaey
Abstract:
Dynamic stray currents tend to change their magnitude and polarity with time at their source, which creates anodic and cathodic spots on a nearby interfered structure. To date, one of the biggest known sources of dynamic stray current is DC traction systems. Laboratory simulation is a suitable method for applying theoretical principles in order to identify the effective parameters in stray-current-influenced corrosion. Simulation techniques can be used to test various mitigation methods at a small scale in order to select the most efficient method for field application. In this research, laboratory simulation of the potential fluctuations caused by dynamic stray current on a cathodically protected structure was investigated. A lab model capable of generating static and dynamic DC stray currents and simulating their effects on cathodically protected samples was developed, based on the stray-current-induced (contact-less) polarization technique. Stray current pick-up and discharge spots on an influenced structure were simulated by inducing fluctuations in the sample's stationary potential. Two mitigation methods for dynamic stray current interference on buried structures were investigated: the application of sacrificial anodes as a preferred discharge point for the stray current, and potential-controlled cathodic protection. Results showed that the application of sacrificial anodes can be effective in reducing interference only at the discharge spot, whereas cathodic protection with potential control is more suitable for mitigating dynamic stray current effects.
Keywords: simulation, dynamic stray current, fluctuating potentials, sacrificial anode
Procedia PDF Downloads 300
12896 Developing a Quality Mentor Program: Creating Positive Change for Students in Enabling Programs
Authors: Bianca Price, Jennifer Stokes
Abstract:
Academic and social support systems are critical for students in enabling education; these support systems have the potential to enhance the student experience whilst also serving a vital role in student retention. In the context of international moves toward widening university participation, Australia has developed enabling programs designed to support underrepresented students in accessing higher education. The purpose of this study is to examine the effectiveness of a mentor program based within an enabling course. This study evaluates how the mentor program supports new students to develop social networks, improve retention, and increase satisfaction with the student experience. Guided by Social Learning Theory (SLT), this study highlights the benefits that can be achieved when students engage in peer-to-peer mentoring for both social and learning support. Whilst traditional peer mentoring programs are heavily based on face-to-face contact, the present study explores the difference between mentors who provide face-to-face mentoring and mentoring that takes place in a virtual space, specifically a virtual community in the shape of a Facebook group. This paper explores the differences between these two methods of mentoring within an enabling program. The first method involves traditional face-to-face mentoring provided by alumni students who willingly return to the learning community to provide social support and guidance for new students. The second method requires alumni mentor students to voluntarily join a Facebook group that is specifically designed for enabling students. Using this virtual space, alumni students provide advice, support, and social commentary on how to be successful within an enabling program. Whilst vastly different methods, both of these mentoring approaches provide students with the support tools needed to enhance their student experience and improve their transition into university. To evaluate the impact of each mode, this study uses mixed methods, including a focus group with mentors, in-depth interviews, and netnography of the Facebook group 'Wall'. Netnography is an innovative qualitative research method used to interpret information that is available online to better understand and identify the needs and influences that affect the users of the online space. Through examining the data, this research will reflect upon best practice for engaging students in enabling programs. Findings support the applicability of having both face-to-face and online mentoring available to assist enabling students to make a positive transition into university undergraduate studies.
Keywords: enabling education, mentoring, netnography, social learning theory
Procedia PDF Downloads 121
12895 Exploring the Discrepancy: The Influence of Instagram in Shaping Idealized Lifestyles and Self-Perceptions Among Indian University Students
Authors: Dhriti Kirpalani
Abstract:
This survey aims to explore the impact of Instagram on the perception of lifestyle aspirations (such as social life, fitness, and the fashion trends followed) and the perception of self in relation to an idealized lifestyle. Amidst today's media-saturated environment, university students are constantly exposed to idealized portrayals of lifestyles, often leading to unrealistic expectations and dissatisfaction with their own lives. This study investigates the impact of media on university students' perceptions of their own lifestyle, the discrepancy between their self-perception and the idealized lifestyle, and their mental health. Employing a mixed-methods approach, the study combines quantitative and qualitative data collection methods to understand the issue comprehensively. A literature review was conducted to examine the effects of idealized lifestyle portrayals on Instagram; however, the Indian setting has received less attention. The researchers wish to employ a convenience sampling method among undergraduate students from India. The instruments to be employed for the quantitative analysis are the Negative Social Media Comparison Scale (NSMCS), the Lifestyle Satisfaction Scale (LSS), the Psychological Well-being Scale (PWB), and the Self-Perception Profile for Adolescents (SPPA). The qualitative aspect will include in-depth interviews to provide deeper insights into participants' experiences and the mechanisms by which media influences their lifestyle aspirations and mental health. As an exploratory study, it is grounded in the social comparison theory described by Leon Festinger. The findings aim to inform interventions that promote realistic expectations about lifestyle, reduce the negative effects of media on university students, and improve their mental health and well-being.
Keywords: declined self-perception, idealized lifestyle, Instagram, Indian university students, social comparison
Procedia PDF Downloads 39
12894 Simulation-Based Evaluation of Indoor Air Quality and Comfort Control in Non-Residential Buildings
Authors: Torsten Schwan, Rene Unger
Abstract:
Simulation of thermal and electrical building performance is increasingly becoming part of an integrative planning process. Increasing requirements for energy efficiency, the integration of volatile renewable energy, smart control, and storage management often pose tremendous challenges for building engineers and architects. This mainly affects commercial or non-residential buildings, whose energy consumption characteristics differ significantly from residential ones. This work focuses on the many-objective optimization problem of indoor air quality and comfort, especially in non-residential buildings. Based on a brief description of the interdependencies between different requirements for indoor air treatment, it extends existing Modelica-based building physics models with additional system states to adequately represent indoor air conditions. Interfaces to corresponding HVAC (heating, ventilation, and air conditioning) system and control models enable closed-loop analyses of occupants' requirements and energy efficiency as well as profitability aspects. A complex application scenario of a nearly-zero-energy school building shows the advantages of the presented evaluation process for engineers and architects. In this way, clear identification of air quality requirements in individual rooms, together with a realistic model-based description of occupants' behavior, helps to optimize the HVAC system already in the early design stages. Building planning processes can be greatly improved and accelerated by increasing the integration of advanced simulation methods. Those methods mainly provide suitable answers to engineers' and architects' questions regarding the increasingly rich and complex variety of suitable energy supply solutions.
Keywords: indoor air quality, dynamic simulation, energy efficient control, non-residential buildings
Procedia PDF Downloads 232
12893 Risks in the Islamic Banking Model and Methods Adopted to Manage Them
Authors: K. P. Fasalu Rahman
Abstract:
The Islamic financial services industry includes a large number of institutions, such as investment banks and commercial banks, investment companies, and mutual insurance companies. All of these financial institutions have to deal with many issues and risks in their field of work. Islamic banks should expect to face two types of risks: risks that are similar to those faced by conventional financial intermediaries and risks that are unique to Islamic banks due to their compliance with the Shariah. The use of financial services and products that comply with Shariah principles causes special issues for supervision and risk management. Risks are uncertain future events that could influence the achievement of the bank's objectives, including strategic, operational, financial, and compliance objectives. In Islamic banks, effective risk management deserves special attention. As an operational task, risk management is the identification and classification of risks, and of the methods and processes used in banks to supervise, monitor, and measure them. Compared with conventional banks, Islamic banks face greater difficulties in identifying and managing risks due to the greater complexity arising from the profit and loss sharing (PLS) concept and the nature of the particular risks of Islamic financing. As the development of risk management tools becomes essential, especially in Islamic banking, where most products depend on the PLS principle, identifying and measuring each type of risk is highly important and critical in any Islamic finance-based system. This paper highlights the special and general risks surrounding Islamic banking and investigates in detail the need for risk management in Islamic banks. In addition to analyzing the effectiveness of the risk management strategies currently adopted by Islamic financial institutions, this research also suggests strategies for improving the risk management processes of Islamic banks in the future.
Keywords: Islamic banking, management, risk, risk management
Procedia PDF Downloads 140
12892 Competency Model as a Key Tool for Managing People in Organizations: Presentation of a Model
Authors: Andrea Čopíková
Abstract:
Competency-based management is a new approach to management that addresses an organization's challenges with complexity, with the aim of finding and solving the organization's problems and learning how to avoid them in the future. It teaches organizations to create, beyond the merely temporary state of stability, a vital organization that is permanently able to utilize and profit from internal and external opportunities. The aim of this paper is to propose a process of competency model design, based on which a competency model for a financial department manager in a production company will be created. Competency models are a very useful tool in many personnel processes in any organization. They are used for the acquisition and selection of employees, the design of training and development activities, and employee evaluation, and they can be used as a guide for career planning and as a tool for succession planning, especially for managerial positions. When creating the competency model, the Analytic Hierarchy Process (AHP) and quantitative pairwise comparison (Saaty's method) will be used; pairwise comparison is among the most widely used methods for determining weights and is used within the AHP procedure. The introductory part of the paper presents research results pertaining to the use of competency models in practice, and then the concepts of competency and competency models are explained. The application part describes in detail the proposed methodology for the creation of competency models, based on which the competency model for the position of financial department manager in a foreign manufacturing company will be created. In the conclusion of the paper, the final competency model is shown for the above-mentioned position. The competency model divides the selected competencies into three groups: managerial, interpersonal, and functional. The model describes in detail the individual levels of the competencies, their target values (required levels), and their levels of importance.
Keywords: analytic hierarchy process, competency, competency model, quantitative pairwise comparison
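As a rough illustration of the Saaty pairwise comparison step mentioned above, the sketch below derives priority weights from a comparison matrix via the principal eigenvector and reports a consistency ratio. The example matrix comparing the three competency groups (managerial, interpersonal, functional) is hypothetical; the paper's actual judgments and weights are not given in the abstract.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a Saaty pairwise comparison matrix using the
    principal eigenvector, plus the consistency ratio (CR)."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # Saaty's random index
    return w, ci / ri

# Hypothetical comparison of three competency groups on Saaty's 1-9 scale:
# managerial vs. interpersonal vs. functional.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(A)
print(weights.round(3), round(cr, 3))  # CR below 0.1 indicates acceptable consistency
```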
Procedia PDF Downloads 244
12891 Requests and Responses to Requests in Jordanian Arabic
Authors: Raghad Abu Salma, Beatrice Szczepek Reed
Abstract:
Politeness is one of the most researched areas in pragmatics, as it is key to interpersonal interactional phenomena. Many studies, particularly in linguistics, have focused on developing politeness theories and exploring the linguistic devices used in communication to construct and establish social norms. However, the question of what constitutes polite language remains a point of ongoing debate. Prior research primarily examined politeness in English and its native-speaking communities, oversimplifying the notion of politeness and associating it with surface-level language use. There is also a dearth of literature on politeness in Arabic, particularly in the context of Jordanian Arabic. Prior research investigating politeness in Arabic makes generalized claims about politeness in Arabic without taking linguistic variation into account or providing empirical evidence. This proposed research aims to explore how Jordanian Arabic influences its first-language users in making and responding to requests, exploring participants' perceptions of politeness and the linguistic choices they make in their interactions. The study focuses on Jordanian expatriates living in London, UK, providing an intercultural perspective that prior research does not consider. This study employs a mixed-methods approach combining discourse completion tasks (DCTs) with semi-structured interviews. While DCTs provide insight into participants' linguistic choices, semi-structured interviews glean insight into participants' perceptions of politeness and into how their linguistic choices are shaped by cultural norms and diverse experiences. This paper discusses previous research on politeness in Arabic, identifies research gaps, and discusses different methods for data collection. It also presents preliminary findings from the ongoing study.
Keywords: politeness, pragmatics, Jordanian Arabic, intercultural politeness
Procedia PDF Downloads 79
12890 Characterization of the Microorganisms Associated with Pleurotus ostreatus and Pleurotus tuber-regium Spent Mushroom Substrate
Authors: Samuel E. Okere, Anthony E. Ataga
Abstract:
Introduction: The microbial ecology of Pleurotus ostreatus and Pleurotus tuber-regium spent mushroom substrate (SMS) was characterized to determine other ways of utilizing it. Materials and Methods: The microbiological properties of the spent mushroom substrate were determined using standard methods. This study was carried out at the Microbiology Laboratory, University of Port Harcourt, Rivers State, Nigeria. Results: Quantitative microbiological analysis revealed that Pleurotus ostreatus spent mushroom substrate (POSMS) contained 7.9 x 10⁵ cfu/g of total heterotrophic bacteria and 1.2 x 10³ cfu/g of total fungi, while Pleurotus tuber-regium spent mushroom substrate (PTSMS) contained 1.38 x 10⁶ cfu/g of total heterotrophic bacteria and 9.0 x 10² cfu/g of total fungi. The fungal species encountered in Pleurotus tuber-regium spent mushroom substrate (PTSMS) include Aspergillus and Cladosporium species, while Aspergillus and Penicillium species were encountered in Pleurotus ostreatus spent mushroom substrate (POSMS). The bacterial species encountered in Pleurotus tuber-regium spent mushroom substrate include Bacillus, Acinetobacter, Alcaligenes, Actinobacter, and Pseudomonas species, while Bacillus, Actinobacteria, Aeromonas, Lactobacillus, and Aerococcus species were encountered in Pleurotus ostreatus spent mushroom substrate (POSMS). Conclusion: Based on the findings of this study, it can be concluded that spent mushroom substrate contains microorganisms that can be utilized both in the bioremediation of oil-polluted soils, as it contains important hydrocarbon-utilizing microorganisms such as Penicillium, Aspergillus, and Bacillus species, and as a source of plant growth-promoting rhizobacteria (PGPR) such as Pseudomonas and Bacillus species, which can induce resistance in plants. However, further studies are recommended, especially to molecularly characterize these microorganisms.
Keywords: characterization, microorganisms, mushroom, spent substrate
Procedia PDF Downloads 161
12889 Blueprinting of Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems
Authors: Bassam Istanbouli
Abstract:
With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good-quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes become extremely expensive, slow, and prone to combinatorial effects. Those combinatorial effects impact the whole organizational structure from a management, financial, documentation, and logistics perspective, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be minimized, especially at the time of launching an organization's global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP system for its business needs, and if its business processes are normalized and modular, then the result will most probably be a normalized and modular software system that can be easily modified as the business evolves. Another important goal of this paper is to increase awareness regarding the design of the business processes in a software implementation project: if the blueprints created are normalized, then software developers and configurators can map those modular blueprints into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring, and/or implementing a software system for an organization using two methods: the Software Development Lifecycle (SDLC) method and the Accelerated SAP (ASAP) implementation method. Both methods start with the customer requirements, then the blueprinting of the business processes, and finally the mapping of those processes into a software system. Since those requirements and processes are the starting point of the implementation, normalizing them will result in normalized software.Keywords: blueprint, ERP, modular, normalized
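As a rough illustration of the idea that a normalized, modular blueprint maps onto modular software, the sketch below (an assumption of this edit, not code from the paper) represents each supply chain process step as an independent module, so that changing one step, such as the shipping rule, does not ripple into the others.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProcessStep:
    """One normalized blueprint step: a single concern with an explicit data contract."""
    name: str
    handler: Callable[[dict], dict]  # transforms an order record and returns it

# Each supply chain concern lives in its own module-like function; swapping one
# (e.g., the shipping rule) does not force changes in the others.
def capture_order(order: dict) -> dict:
    order["status"] = "captured"
    return order

def check_inventory(order: dict) -> dict:
    order["in_stock"] = order.get("qty", 0) <= 100  # placeholder stock rule
    return order

def plan_shipment(order: dict) -> dict:
    order["carrier"] = "standard" if order.get("in_stock") else "backorder"
    return order

BLUEPRINT: List[ProcessStep] = [
    ProcessStep("capture_order", capture_order),
    ProcessStep("check_inventory", check_inventory),
    ProcessStep("plan_shipment", plan_shipment),
]

def run_blueprint(order: dict, blueprint: List[ProcessStep]) -> dict:
    """Run the order through each modular step in sequence."""
    for step in blueprint:
        order = step.handler(order)
    return order

print(run_blueprint({"sku": "A-100", "qty": 40}, BLUEPRINT))
```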
Procedia PDF Downloads 13912888 Pregnant Women and Mothers in Prison, Mother and Baby Units and Mental Health
Authors: Rachel Dolan
Abstract:
Background: Over two-thirds of women in prison in England are mothers, and estimates suggest that between 100 and 200 women per year give birth during imprisonment. There are currently six mother and baby units (MBUs) in prisons in England, which admit women and babies up to the age of 18 months. Although there are only 65 places available, and despite their positive impacts, they are rarely full. Mental illness may influence the number of admissions, as may interpretation of the admission criteria. MBUs are the only current alternative to separation for imprisoned mothers and their babies. Aims: To identify the factors that affect the decision to apply for, or be offered, a place in a prison MBU; to measure the impact of a placement upon maternal mental health and wellbeing; and to measure the initial outcomes for mother and child. Methods: A mixed-methods approach is used; 100 pregnant women are currently being recruited from prisons in England. Quantitative measures will establish the prevalence of mental disorder, personality disorder, and substance misuse, and assess quality of life. Qualitative interviews will document the experiences of pregnancy and motherhood in prison. Results: Preliminary quantitative findings suggest that the most prevalent mental disorders are anxiety and depression, and that approximately half the participants meet the criteria for one or more personality disorders. The majority of participants to date have been offered a place in a prison MBU, and those held in a prison with an MBU prior to applying are more likely to be admitted. Those with a previous history of childcare issues who are known to social services are less likely to be offered a place. Qualitative findings suggest that many women are often hungry and uncomfortable during pregnancy, that many feel guilty about having a child in prison, and that feelings of anxiety and worry are exacerbated by a lack of information.Keywords: mothers, prison, mother and baby units, mental health
Procedia PDF Downloads 28612887 Ointment of Rosella Flower Petals Extract (Hibiscus sabdariffa): Pharmaceutical Preparations Formulation Development of Herbs for Antibacterial S. aureus
Authors: Muslihatus Syarifah
Abstract:
Introduction: Rosella flower petals can be used as an antibacterial because they contain alkaloids, flavonoids, phenolics, and terpenoids. S. aureus can cause skin infections, and topical preparations are the most appropriate treatment. An ointment is a topical preparation comprising an active substance and an ointment base; not every base is suitable for every active substance or type of disease. In this study, the flavonoid-containing extract of rosella flower petals (Hibiscus sabdariffa) was formulated into ointments with a variety of different bases in order to identify a suitable base for a rosella flower petal extract ointment. Methods: An experimental post-test control group design was used with four ointment bases: hydrocarbon, absorption, water-removable, and water-soluble. The ointments were then tested against S. aureus at extract concentrations of 1%, 2%, 4%, 8%, 16%, and 32%. Data were analyzed using One-Way ANOVA followed by a post hoc test. Results: Ointments with hydrocarbon, absorption, water-removable, and water-soluble bases showed no change in physical properties during storage. The base affects the physical properties of an ointment, namely adhesion, dispersive power, and pH. Ointments with different extract concentrations also differ in these physical properties: the higher the concentration, the greater the dispersive power, but the lower the adhesion and pH. Conclusion: Differences in base, storage time, and extract concentration can affect the physical properties of the ointment. The extract concentration used in the rosella flower petal ointment formulation is 32%.Keywords: rosella, physical properties, ointments, antibacterial
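A minimal sketch of the statistical analysis named in the methods (One-Way ANOVA followed by a post hoc comparison) is shown below, using SciPy and statsmodels. The measurement values are hypothetical dispersive-power readings, and Tukey's HSD is used as a stand-in because the abstract does not specify which post hoc test was applied.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical dispersive-power measurements (cm) for three extract concentrations;
# the study tested 1%, 2%, 4%, 8%, 16%, and 32% across four ointment bases.
spread_8 = np.array([5.1, 5.3, 5.0, 5.2])
spread_16 = np.array([5.6, 5.8, 5.7, 5.5])
spread_32 = np.array([6.2, 6.4, 6.1, 6.3])

# One-Way ANOVA: do the group means differ between concentrations?
f_stat, p_value = f_oneway(spread_8, spread_16, spread_32)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc comparison (Tukey HSD) to locate which pairs of concentrations differ.
values = np.concatenate([spread_8, spread_16, spread_32])
groups = ["8%"] * 4 + ["16%"] * 4 + ["32%"] * 4
print(pairwise_tukeyhsd(values, groups))
```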
Procedia PDF Downloads 37112886 Medical Workforce Knowledge of Adrenaline (Epinephrine) Administration in Anaphylaxis in Adults Considerably Improved with Training in an UK Hospital from 2010 to 2017
Authors: Jan C. Droste, Justine Burns, Nithin Narayan
Abstract:
Introduction: The life-threatening detrimental effects of inappropriate adrenaline (epinephrine) administration, e.g., giving the wrong dose, in the management of anaphylaxis are well documented in the medical literature. Half of the fatal anaphylactic reactions in the UK are iatrogenic, and the median time to cardio-respiratory arrest can be as short as 5 minutes. It is therefore imperative that hospital doctors of all grades have active and accurate knowledge of the correct route, site, and dose of adrenaline administration. Given this time constraint and the potentially fatal outcome of inappropriately managed anaphylaxis, it is alarming that surveys over the last 15 years have repeatedly shown that only a minority of doctors have accurate knowledge of adrenaline administration as recommended by the UK Resuscitation Council guidelines (2008, updated 2012). This comparison of survey results of the medical workforce over several years in a small NHS District General Hospital was conducted to establish the effect of employing multiple educational methods on knowledge of adrenaline administration in anaphylaxis in adults. Methods: Between 2010 and 2017, several education methods and tools were used repeatedly to inform the medical workforce (doctors and advanced clinical practitioners) in a single district general hospital about the treatment of anaphylaxis in adults. Whilst the senior staff remained largely the same cohort, the junior staff had changed fully by each survey. Examples included: (i) formal teaching in Grand Rounds, during the junior doctors' induction process, and on advanced life support courses; (ii) in-situ simulation training performed by the clinical skills simulation team, comprising several ad hoc sessions and one 3-day event in 2017 that visited 16 separate clinical areas performing an acute anaphylaxis scenario using actors and involving around 100 individuals from multi-disciplinary teams; (iii) hospital-wide distribution of the simulation event via the Trust's Simulation Newsletter; (iv) laminated algorithms attached to the 'crash trolleys'; (v) a short email 'alert' sent to all medical staff 3 weeks prior to the survey detailing the emergency treatment of anaphylaxis; and (vi) the performance of the surveys themselves, which represented a teaching opportunity in which gaps in knowledge could be addressed. Face-to-face surveys were carried out in 2010 (pre-intervention), 2015, and 2017, on the latter two occasions also including advanced clinical practitioners (ACPs). All surveys consisted of convenience samples. If verbal consent to conduct the survey was obtained, the medical practitioners' answers were recorded immediately on a data collection sheet. Results: There was a sustained improvement in the knowledge of the medical workforce from 2010 to 2017: answers improved regarding the correct drug by 11% (84%, 95%, and 95%); the correct route by 20% (76%, 90%, and 96%); the correct site by 40% (43%, 83%, and 83%); and the correct dose by 45% (27%, 54%, and 72%). Overall, knowledge of all components (correct drug, route, site, and dose) improved from 13% in 2010 to 62% in 2017. Conclusion: This survey comparison shows that the knowledge of the medical workforce regarding adrenaline administration for the treatment of anaphylaxis in adults can be considerably improved by employing a variety of educational methods.Keywords: adrenaline, anaphylaxis, epinephrine, medical education, patient safety
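The improvements quoted in the results are percentage-point differences between the 2010 and 2017 surveys. The short sketch below simply recomputes them from the reported percentages; it is illustrative only and uses no data beyond the figures stated in the abstract.

```python
# Reported percentages of correct answers in the 2010, 2015, and 2017 surveys.
survey_results = {
    "drug":  (84, 95, 95),
    "route": (76, 90, 96),
    "site":  (43, 83, 83),
    "dose":  (27, 54, 72),
    "all components": (13, None, 62),  # overall knowledge; the 2015 figure is not reported
}

for component, (y2010, y2015, y2017) in survey_results.items():
    improvement = y2017 - y2010  # percentage-point change from 2010 to 2017
    print(f"{component}: {y2010}% -> {y2017}% (+{improvement} percentage points)")
```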
Procedia PDF Downloads 12812885 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emissions is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region in which they have been calibrated. An alternative solution is presented in this paper, which focuses on utilizing in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points. Sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT) are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. Advantages such as high accuracy and robustness at different operating conditions, low computational time, and the lower number of data points required for calibration establish a platform on which the model-based approach can be used for engine calibration and development. Moreover, the focus of this work is toward establishing a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
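A minimal sketch of the kind of semi-empirical workflow described above, regressing NOx on physically motivated in-cylinder quantities with an ensemble learner, is shown below using scikit-learn. The feature set, the synthetic thermal-NO-style target, and the specific regressor are assumptions for illustration; they are not the paper's dataset, features, or exact model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500  # synthetic steady-state operating points (placeholder for measured data)

# Physically motivated inputs from a predictive combustion model (IVC to EVO):
burned_zone_temp = rng.uniform(1800.0, 2600.0, n)   # K
o2_concentration = rng.uniform(0.05, 0.21, n)       # burned-zone O2 mole fraction
trapped_fuel_mass = rng.uniform(10.0, 80.0, n)      # mg/cycle
egr_rate = rng.uniform(0.0, 0.35, n)                # exhaust gas recirculation fraction

X = np.column_stack([burned_zone_temp, o2_concentration, trapped_fuel_mass, egr_rate])

# Synthetic NOx target (ppm) with a thermal-NO-like exponential temperature
# dependence, standing in for measured engine-out NOx.
nox = 1e9 * np.exp(-38000.0 / burned_zone_temp) * o2_concentration**0.5 \
      * (1.0 - 0.8 * egr_rate) + rng.normal(0.0, 5.0, n)

X_train, X_test, y_train, y_test = train_test_split(X, nox, test_size=0.2, random_state=0)

# Ensemble machine learning regressor, one of the model families named in the abstract.
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_train, y_train)
print("R^2 on held-out operating points:", round(r2_score(y_test, model.predict(X_test)), 3))
```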
Procedia PDF Downloads 11412884 Demand Forecasting to Reduce Dead Stock and Loss Sales: A Case Study of the Wholesale Electric Equipment and Part Company
Authors: Korpapa Srisamai, Pawee Siriruk
Abstract:
The purpose of this study is to forecast product demand and develop appropriate and adequate procurement plans to meet customer needs and reduce costs. When products exceed customer demand or do not move, the company is left with insufficient storage space; moreover, some items deteriorate into dead stock when stored for a long period of time. A case study of a wholesale company for electronic equipment and components, which faces uncertain customer demand, is considered. Customers' actual purchase orders do not equal the forecasts the customers provide. In some cases, customers demand more than forecast, so the product is insufficient to meet their needs; in other cases, customers demand less than estimated, causing insufficient storage space and dead stock. This study aims to reduce both lost sales opportunities and the number of goods remaining in the warehouse, using 30 samples of the company's most popular products. The data were collected over the duration of the study, from January to October 2022. The forecasting methods used are the simple moving average, weighted moving average, and exponential smoothing methods. The economic order quantity and reorder point are calculated to meet customer needs and track results. The research results are very beneficial to the company: it can reduce lost sales opportunities by 20%, so that it has enough products to meet customer needs, and can reduce dead stock of unused products by up to 10%. This enables the company to order products more accurately, increasing profits and storage space.Keywords: demand forecast, reorder point, lost sale, dead stock
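A minimal sketch of the forecasting and inventory formulas named above (simple moving average, weighted moving average, exponential smoothing, economic order quantity, and reorder point) is given below. The demand series, cost parameters, and lead time are illustrative values, not the company's data.

```python
import math

# Illustrative monthly demand for one product (units); not the company's actual data.
demand = [120, 135, 150, 128, 160, 175, 155, 170, 165, 180]

def simple_moving_average(series, window=3):
    """Forecast for the next period = mean of the last `window` observations."""
    return sum(series[-window:]) / window

def weighted_moving_average(series, weights=(0.5, 0.3, 0.2)):
    """More weight on recent periods; weights are ordered newest to oldest and sum to 1."""
    recent = series[-len(weights):][::-1]
    return sum(w * x for w, x in zip(weights, recent))

def exponential_smoothing(series, alpha=0.3):
    """Single exponential smoothing; the last smoothed value is the next-period forecast."""
    forecast = series[0]
    for x in series[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def reorder_point(daily_demand, lead_time_days, safety_stock=0):
    """Reorder when inventory falls to expected lead-time demand plus safety stock."""
    return daily_demand * lead_time_days + safety_stock

print("SMA forecast:", round(simple_moving_average(demand), 1))
print("WMA forecast:", round(weighted_moving_average(demand), 1))
print("SES forecast:", round(exponential_smoothing(demand), 1))
print("EOQ:", round(eoq(annual_demand=1900, order_cost=50, holding_cost=4), 1))
print("Reorder point:", reorder_point(daily_demand=6, lead_time_days=7, safety_stock=12))
```

In a setup like this, the forecast chosen for each product would feed the EOQ and reorder-point calculations to size and time purchase orders.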
Procedia PDF Downloads 121