Search results for: measurement
209 Translation and Validation of the Pain Resilience Scale in a French Population Suffering from Chronic Pain
Authors: Angeliki Gkiouzeli, Christine Rotonda, Elise Eby, Claire Touchet, Marie-Jo Brennstuhl, Cyril Tarquinio
Abstract:
Resilience is a psychological concept of possible relevance to the development and maintenance of chronic pain (CP). It refers to the ability of individuals to maintain reasonably healthy levels of physical and psychological functioning when exposed to an isolated and potentially highly disruptive event. Extensive research in recent years has supported the importance of this concept in the CP literature. Increased levels of resilience were associated with lower levels of perceived pain intensity and better mental health outcomes in adults with persistent pain. The ongoing project seeks to include the concept of pain-specific resilience in the French literature in order to provide more appropriate measures for assessing and understanding the complexities of CP in the near future. To the best of our knowledge, there is currently no validated version of the pain-specific resilience measure, the Pain Resilience Scale (PRS), for French-speaking populations. Therefore, the present work aims to address this gap, firstly by performing a linguistic and cultural translation of the scale into French and secondly by studying the internal validity and reliability of the PRS for French CP populations. The forward-translation back-translation methodology was used to achieve as perfect a cultural and linguistic translation as possible according to the recommendations of the COSMIN (Consensus-based Standards for the selection of health Measurement Instruments) group, and an online survey is currently being conducted among a representative sample of the French population suffering from CP. To date, the survey has involved one hundred respondents, with a total target of around three hundred participants at its completion. We further seek to study the metric properties of the French version of the PRS, "L’Echelle de Résilience à la Douleur spécifique pour les Douleurs Chroniques" (ERD-DC), in French patients suffering from CP, assessing the level of pain resilience in the context of CP. Finally, we will explore the relationship between the level of pain resilience in the context of CP and other variables of interest commonly assessed in pain research and treatment (i.e., general resilience, self-efficacy, pain catastrophising, and quality of life). This study will provide an overview of the methodology used to address our research objectives. We will also present for the first time the main findings and further discuss the validity of the scale in the field of CP research and pain management. We hope that this tool will provide a better understanding of how CP-specific resilience processes can influence the development and maintenance of this disease. This could ultimately result in better treatment strategies specifically tailored to individual needs, thus leading to reduced healthcare costs and improved patient well-being.
Keywords: chronic pain, pain measure, pain resilience, questionnaire adaptation
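For illustration, the internal-consistency part of such a validation is typically summarised with Cronbach's alpha; the minimal sketch below computes it on hypothetical item scores (the 14-item count, the 0–4 response range and the random data are assumptions, not ERD-DC results).

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = item_scores.shape[1]                         # number of scale items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 100 respondents, 14 items rated 0-4
# (random data only exercises the function; real items would be correlated)
rng = np.random.default_rng(0)
scores = rng.integers(0, 5, size=(100, 14)).astype(float)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```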
Procedia PDF Downloads 90

208 Influence of the Nature of Plants on Drainage, Purification Performance and Quality of Biosolids on Faecal Sludge Planted Drying Beds in Sub-Saharan Climate Conditions
Authors: El Hadji Mamadou Sonko, Mbaye Mbéguéré, Cheikh Diop, Linda Strande
Abstract:
In new approaches that are being developed for the treatment of sludge, the valorization of by-products is increasingly encouraged. In this perspective, Echinochloa pyramidalis has been successfully tested in Cameroon. Echinochloa pyramidalis is an efficient forage plant for the treatment of faecal sludge. It provides high removal rates and biosolids of high agronomic value. Thus, before recommending the use of this plant in planted drying beds in Senegal, it should be compared with the plants that have long been used in the field. That is the aim of this study, which examines the influence of the nature of the plants on drainage, purification performance and the quality of the biosolids. Echinochloa pyramidalis, Typha australis, and Phragmites australis are the three macrophytes used in this study. The drainage properties of the beds were monitored through the frequency of clogging, the percentage of recovered leachate and the dryness of the accumulated sludge. The development of the plants was followed through measurement of their density. The purification performances were evaluated from the incoming raw sludge flows and the outflows of leachate for parameters such as Total Solids (TS), Total Suspended Solids (TSS), Total Volatile Solids (TVS), Chemical Oxygen Demand (COD), Total Kjeldahl Nitrogen (TKN), Ammonia (NH₄⁺), Nitrate (NO₃⁻), Total Phosphorus (TP), Orthophosphorus (PO₄³⁻) and Ascaris eggs. The quality of the biosolids accumulated on the beds was measured after 3 months of maturation for parameters such as dryness, C/N ratio, NH₄⁺/NO₃⁻ ratio, ammonia and Ascaris eggs. The results have shown that the recovered leachate volume is about 40.4%, 45.6% and 47.3%; the dryness about 41.7%, 38.7% and 28.7%; and the clogging frequencies about 6.7%, 8.2% and 14.2% on average for the beds planted with Echinochloa pyramidalis, Typha australis and Phragmites australis, respectively. The plants of Echinochloa pyramidalis (198.6 plants/m²) and Phragmites australis (138 plants/m²) have higher densities than Typha australis (90.3 plants/m²). The nature of the plants has no influence on the purification performance, with reduction percentages of around 80% or more for all the parameters monitored, whatever the nature of the plants. However, the concentrations of these various leachate pollutants are above the limit values of the Senegalese standard NS 05-061 for release into the environment. The biosolids harvested after 3 months of maturation are all mature, with C/N ratios around 10 for all the macrophytes. The NH₄⁺/NO₃⁻ ratio is lower than 1 except for the biosolids originating from the Echinochloa pyramidalis beds. The ammonia content is also less than 0.4 g/kg except for biosolids from the Typha australis beds. The biosolids are also rich in mineral elements. Their concentrations of Ascaris eggs are higher than the WHO recommendations despite an inactivation percentage of around 80%. These biosolids must be stored for an additional time or composted. From these results, the use of Echinochloa pyramidalis as the main macrophyte can be recommended for the various planted drying beds in sub-Saharan climate conditions.
Keywords: faecal sludge, nature of plants, quality of biosolids, treatment performances
Procedia PDF Downloads 171

207 Home Environment and Peer Pressure as Predictors of Disruptive Behaviour and Risky Sexual Behaviour of Secondary School Class Two Adolescents in Enugu State, Nigeria
Authors: Dorothy Ebere Adimora
Abstract:
The study investigated the predictive power of home environment and peer pressure on disruptive behaviour and risky sexual behaviour of secondary school class two adolescents in Enugu State, Nigeria. The design of the study is a cross-sectional correlational survey. The study was carried out in the six education zones in Enugu State, Nigeria. Enugu State is divided into six education zones, namely Agbani, Awgu, Enugu, Nsukka, Obollo-Afor and Udi. The population for the study was all the 31,680 senior secondary class two adolescents in 285 secondary schools in Enugu State, Nigeria in the 2014/2015 academic session. The target population was students in SSS 2, senior secondary class two. They constitute one-sixth of the entire student population in the state. The sample of the study was 528; a multi-stage sampling technique was employed to draw the sample. Four research questions and four null hypotheses guided the study. The instruments for data collection were an interview session and a structured questionnaire of four clusters: home environment, peer pressure, risky sexual behaviour and disruptive behaviour disorder questionnaires. The instruments were validated by three experts, two in Psychology and one in Measurement and Evaluation, in the Faculty of Education, University of Nigeria, Nsukka. The reliability coefficient of the instruments was ascertained by subjecting them to a field trial. The adolescents were asked to complete the questionnaire on their home environment, peer pressure, disruptive behaviour disorder and risky sexual behaviours. The risky sexual behaviours were ascertained based on interviews conducted on their actual sexual practice within the past 12 months. The research questions were analyzed using Pearson r and R-square, while the hypotheses were tested using ANOVA and multiple regression analysis at the 0.05 level of significance. The results of this survey revealed that the adolescents are sexually active at very young ages. The mean age at sexual debut for the adolescents covered in this survey is a pointer to the fact that some of them started engaging in sexual activities long ago. It was also found that the adolescents engage in disruptive behaviour as a result of poor home environment factors and association with negative peers. Based on the findings, it was recommended that the adolescents should be exposed to an enhanced home environment, such as parents’ responsiveness, organization of the environment, availability of appropriate learning materials and opportunities for daily stimulation, and offered proper guidance to avoid negative peer influence, which could result in risky sexual behaviour and disruptive behaviour disorder.
Keywords: parenting, peer group, adolescents, sexuality, conduct disorder
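A minimal sketch of the analysis named above (Pearson r, R-square and multiple regression judged at the 0.05 level) could look like the following; the file name and column names are placeholders, not the study's dataset.

```python
import pandas as pd
import statsmodels.api as sm

# Placeholder table standing in for the 528 questionnaire records
df = pd.read_csv("adolescent_survey.csv")  # hypothetical file

# Pearson r and R-square for each predictor against the outcome
for predictor in ["home_environment", "peer_pressure"]:
    r = df[predictor].corr(df["risky_sexual_behaviour"])
    print(predictor, "r =", round(r, 3), "R^2 =", round(r ** 2, 3))

# Multiple regression with both predictors entered together
X = sm.add_constant(df[["home_environment", "peer_pressure"]])
model = sm.OLS(df["risky_sexual_behaviour"], X).fit()
print(model.summary())  # F-test and coefficient p-values judged at alpha = 0.05
```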
Procedia PDF Downloads 482

206 Measuring Oxygen Transfer Coefficients in Multiphase Bioprocesses: The Challenges and the Solution
Authors: Peter G. Hollis, Kim G. Clarke
Abstract:
The overall volumetric oxygen transfer coefficient (KLa) is ubiquitously measured in bioprocesses by analysing the response of dissolved oxygen (DO) to a step change in the oxygen partial pressure in the sparge gas using a DO probe. Typically, the response lag (τ) of the probe has been ignored in the calculation of KLa when τ is less than the reciprocal of KLa, failing which a constant τ has invariably been assumed. These conventions have now been reassessed in the context of multiphase bioprocesses, such as a hydrocarbon-based system. Here, significant variation of τ in response to changes in process conditions has been documented. Experiments were conducted in a 5 L baffled stirred tank bioreactor (New Brunswick) in a simulated hydrocarbon-based bioprocess comprising a C14-20 alkane-aqueous dispersion with suspended non-viable Saccharomyces cerevisiae solids. DO was measured with a polarographic DO probe fitted with a Teflon membrane (Mettler Toledo). The DO concentration response to a step change in the sparge gas oxygen partial pressure was recorded, from which KLa was calculated using a first order model (without incorporation of τ) and a second order model (incorporating τ). τ was determined as the time taken to reach 63.2% of the saturation DO after the probe was transferred from a nitrogen-saturated vessel to an oxygen-saturated bioreactor and is represented as the inverse of the probe constant (KP). The relative effects of the process parameters on KP were quantified using a central composite design with factor levels typical of hydrocarbon bioprocesses, namely 1-10 g/L yeast, 2-20 vol% alkane and 450-1000 rpm. A response surface was fitted to the empirical data, while ANOVA was used to determine the significance of the effects with a 95% confidence interval. KP varied with changes in the system parameters, with the impact of solids loading statistically significant at the 95% confidence level. Increased solids loading reduced KP consistently, an effect which was magnified at high alkane concentrations, with a minimum KP of 0.024 s⁻¹ observed at the highest solids loading of 10 g/L. This KP was 2.8-fold lower than the maximum of 0.0661 s⁻¹ recorded at 1 g/L solids, demonstrating a substantial increase in τ from 15.1 s to 41.6 s as a result of differing process conditions. Importantly, exclusion of KP in the calculation of KLa was shown to under-predict KLa for all process conditions, with an error of up to 50% at the highest KLa values. Accurate quantification of KLa, and therefore KP, has far-reaching impact on industrial bioprocesses to ensure these systems are not transport limited during scale-up and operation. This study has shown the incorporation of τ to be essential to ensure KLa measurement accuracy in multiphase bioprocesses. Moreover, since τ has been conclusively shown to vary significantly with process conditions, it has also been shown that it is essential for τ to be determined individually for each set of process conditions.
Keywords: effect of process conditions, measuring oxygen transfer coefficients, multiphase bioprocesses, oxygen probe response lag
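For orientation, the textbook dynamic gassing-out forms of the two models described above are sketched below, with C* the saturation DO, C₀ the initial DO and Cp the probe reading; this is the standard first-order-probe formulation and not necessarily the exact parameterisation used by the authors.

```latex
% First-order model (probe lag ignored)
\frac{C_p(t)-C_0}{C^{*}-C_0} \;=\; 1 - e^{-K_L a\, t}

% Second-order model (first-order probe response, K_P = 1/\tau)
\frac{C_p(t)-C_0}{C^{*}-C_0} \;=\;
  1 - \frac{K_P\, e^{-K_L a\, t} \;-\; K_L a\, e^{-K_P t}}{K_P - K_L a}
```

When KP is much larger than KLa (a fast probe), the second expression collapses to the first, which is why neglecting τ is only safe for fast probes.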
Procedia PDF Downloads 266

205 Insights into Child Malnutrition Dynamics with the Lens of Women’s Empowerment in India
Authors: Bharti Singh, Shri K. Singh
Abstract:
Child malnutrition is a multifaceted issue that transcends geographical boundaries. Malnutrition not only stunts physical growth but also leads to a spectrum of morbidities and child mortality. It is one of the leading causes of death (~50%) among children under age five. Despite economic progress and advancements in healthcare, child malnutrition remains a formidable challenge for India. The objective is to investigate the impact of women's empowerment on child nutrition outcomes in India from 2006 to 2021. A composite index of women's empowerment was constructed using Confirmatory Factor Analysis (CFA), a rigorous technique that validates the measurement model by assessing how well observed variables represent latent constructs. This approach ensures the reliability and validity of the empowerment index. Secondly, kernel density plots were utilised to visualise the distribution of key nutritional indicators, such as stunting, wasting, and overweight. These plots offer insights into the shape and spread of data distributions, aiding in understanding the prevalence and severity of malnutrition. Thirdly, linear polynomial graphs were employed to analyse how nutritional parameters evolved with the child's age. This technique enables the visualisation of trends and patterns over time, allowing for a deeper understanding of nutritional dynamics during different stages of childhood. Lastly, multilevel analysis was conducted to identify vulnerable levels, including state-level, PSU-level, and household-level factors impacting undernutrition. This approach accounts for hierarchical data structures and allows for the examination of factors at multiple levels, providing a comprehensive understanding of the determinants of child malnutrition. Overall, the utilisation of these statistical methodologies enhances the transparency and replicability of the study by providing clear and robust analytical frameworks for data analysis and interpretation. Our study reveals that NFHS-4 and NFHS-5 exhibit an equal density of severely stunted cases. NFHS-5 indicates a limited decline in wasting among children aged five, while the density of severely wasted children remains consistent across NFHS-3, 4, and 5. In 2019-21, women with higher empowerment had a lower risk of their children being undernourished (regression coefficient = -0.10***; confidence interval [-0.18, -0.04]). Gender dynamics also play a significant role, with male children exhibiting a higher susceptibility to undernourishment. Multilevel analysis suggests household-level vulnerability (intra-class correlation = 0.21), highlighting the need to address child undernutrition at the household level.
Keywords: child nutrition, India, NFHS, women’s empowerment
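A minimal sketch of the household-level random-intercept model behind the reported intra-class correlation is given below; the file and variable names are placeholders rather than NFHS variable codes, and a continuous anthropometric score is assumed as the outcome.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nfhs_children.csv")  # hypothetical extract of child records

# Random intercept for household; empowerment index and child sex as predictors
result = smf.mixedlm("haz_score ~ empowerment_index + child_sex",
                     data=df, groups=df["household_id"]).fit()
print(result.summary())

# Intra-class correlation: share of total variance sitting at the household level
var_household = result.cov_re.iloc[0, 0]
icc = var_household / (var_household + result.scale)
print(f"household-level ICC = {icc:.2f}")  # the abstract reports about 0.21
```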
Procedia PDF Downloads 34

204 Isolation of Clitorin and Manghaslin from Carica papaya L. Leaves by CPC and Its Quantitative Analysis by QNMR
Authors: Norazlan Mohmad Misnan, Maizatul Hasyima Omar, Mohd Isa Wasiman
Abstract:
Papaya (Carica papaya L., Caricaceae) is a tree mainly cultivated for its fruits in many tropical regions, including Australia, Brazil, China, Hawaii, and Malaysia. Besides the fruits, its leaves, seeds, and latex have also been traditionally used for treating diseases and are reported to possess anti-cancer and anti-malaria properties. Its leaves have been reported to contain various chemical compounds such as alkaloids, flavonoids and phenolics. Clitorin and manghaslin are among the major flavonoids present. Thus, the aim of this study is to quantify the purity of these isolated compounds (clitorin and manghaslin) using quantitative Nuclear Magnetic Resonance (qNMR) analysis. Only fresh C. papaya leaves were used for the juice extraction procedure, and the juice was subsequently freeze-dried to obtain a dark green powdered form of the extract prior to Centrifugal Partition Chromatography (CPC) separation. The CPC experiments were performed using a two-phase solvent system comprising ethyl acetate/butanol/water (1:4:5, v/v/v). The upper organic phase was used as the stationary phase, and the lower aqueous phase was employed as the mobile phase. Ten fractions were obtained after a one-hour run. Fraction 6 and fraction 8 were identified as clitorin (m/z 739.21 [M-H]⁻) and manghaslin (m/z 755.21 [M-H]⁻), respectively, based on LCMS data and full NMR analysis (1H NMR, 13C NMR, HMBC, and HSQC). The 1H-qNMR measurements were carried out using a 400 MHz NMR spectrometer (JEOL ECS 400 MHz, Japan), and deuterated methanol was used as the solvent. Quantification was performed using the AQARI method (Accurate Quantitative NMR) with deuterated 1,4-Bis(trimethylsilyl)benzene (BTMSB) as the internal reference substance. This AQARI protocol includes not only NMR measurement but also sample preparation, providing higher precision and accuracy than other qNMR methods. The 90° pulse length and the T1 relaxation times for the compounds and BTMSB were determined prior to the quantification to give the best signal-to-noise ratio. Regions containing the two downfield signals from the aromatic part (6.00–6.89 ppm) and the singlet signal (18H) arising from BTMSB (0.63–1.05 ppm) were selected for integration. The purities of clitorin and manghaslin were calculated to be 52.22% and 43.36%, respectively. Further purification is needed in order to increase their purity. This finding has demonstrated the use of qNMR for quality control and standardization of various plant extracts, which can be applied to NMR fingerprinting of other plant-based products with good reproducibility and in cases where commercial standards are not readily available.
Keywords: Carica papaya, clitorin, manghaslin, quantitative Nuclear Magnetic Resonance, Centrifugal Partition Chromatography
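For illustration, an AQARI-style quantification rests on the general internal-standard qNMR purity relation, sketched below; the integrals, masses and proton counts are illustrative assumptions, not the paper's measured values.

```python
def qnmr_purity(I_a, I_ref, N_a, N_ref, M_a, M_ref, m_a, m_ref, P_ref):
    """Internal-standard qNMR purity (as a fraction).
    I: integrated area, N: protons behind that signal, M: molar mass,
    m: weighed mass, P_ref: certified purity of the reference."""
    return (I_a / I_ref) * (N_ref / N_a) * (M_a / M_ref) * (m_ref / m_a) * P_ref

# Illustrative numbers only (hypothetical integrals and weights)
p = qnmr_purity(I_a=1.00, I_ref=8.9, N_a=2, N_ref=18,
                M_a=740.7, M_ref=222.5, m_a=5.0e-3, m_ref=1.0e-3, P_ref=0.999)
print(f"purity ≈ {p:.1%}")
```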
Procedia PDF Downloads 498

203 Exploring Instructional Designs on the Socio-Scientific Issues-Based Learning Method in Respect to STEM Education for Measuring Reasonable Ethics on Electromagnetic Wave through Science Attitudes toward Physics
Authors: Adisorn Banhan, Toansakul Santiboon, Prasong Saihong
Abstract:
The Socio-Scientific Issues-Based Learning method was compared with blended STEM instruction using a sample of 84 students in two classes at the 11th grade level in Sarakham Pittayakhom School. The two instructional models were delivered through five lesson plans in the context of the electromagnetic wave issue. The research procedures assigned each instructional method to one of two groups: a 40-student experimental group received STEM education instruction (STEMe), and a 40-student control group was taught with the Socio-Scientific Issues-Based Learning (SSIBL) method. Associations between students' learning achievements under each instructional method and their science attitudes toward physics with the STEMe and SSIBL methods were compared. The Measuring Reasonable Ethics Test (MRET) assessed students' reasonable ethics under the STEMe and SSIBL instructional design methods in each group. A pretest-posttest technique was used to monitor and evaluate students' performance in reasonable ethics on the electromagnetic wave issue in the STEMe and SSIBL classes. Students were observed and gained experience with the phenomena being studied through the Socio-Scientific Issues-Based Learning model. This supports the view that STEM is not just teaching about Science, Technology, Engineering, and Mathematics; it is a culture that needs to be cultivated to help create a problem-solving, creative, critical-thinking workforce for tomorrow in physics. Students' attitudes were assessed with the Test Of Physics-Related Attitude (TOPRA), modified from the original Test Of Science-Related Attitude (TOSRA). Comparisons of students' learning achievements between the STEMe and SSIBL instructional methods were analyzed. Associations between students' reasonable ethics and their science attitudes toward physics under the STEMe and SSIBL instructional design methods were examined. The findings show that the efficiency of the SSIBL and STEMe innovations met the criteria, with IOC values above the 80/80 standard level. Students' learning achievements and later outcomes in the control and experimental groups with the SSIBL and STEMe differed statistically significantly at the .05 level. Comparing students' reasonable ethics under the SSIBL and STEMe, students' responses to the instructional activities were higher in the STEMe than in the SSIBL instructional method. For associations with students' later learning achievements under the SSIBL and STEMe, the predictive efficiency values (R²) indicate that 67% and 75% of the variance for the SSIBL, and 74% and 81% for the STEMe, were attributable to their developing reasonable ethics and science attitudes toward physics, respectively.
Keywords: socio-scientific issues-based learning method, STEM education, science attitudes, measurement, reasonable ethics, physics classes
Procedia PDF Downloads 294

202 A Dynamic Mechanical Thermal T-Peel Test Approach to Characterize Interfacial Behavior of Polymeric Textile Composites
Authors: J. R. Büttler, T. Pham
Abstract:
Basic understanding of interfacial mechanisms is of importance for the development of polymer composites. For this purpose, we need techniques to analyze the quality of interphases, their chemical and physical interactions and their strength and fracture resistance. In order to investigate the interfacial phenomena in detail, advanced characterization techniques are favorable. Dynamic mechanical thermal analysis (DMTA) using a rheological system is a sensitive tool. T-peel tests were performed with this system to investigate the temperature-dependent peel behavior of woven textile composites. A model system was made of polyamide (PA) woven fabric laminated with films of polypropylene (PP) or PP modified by grafting with maleic anhydride (PP-g-MAH). Firstly, control measurements were performed with the PP matrices alone. Polymer melt investigations, as well as the extensional stress, extensional viscosity and extensional relaxation modulus at -10 °C, 100 °C and 170 °C, demonstrate similar viscoelastic behavior for films made of PP-g-MAH and its non-modified PP control. Frequency sweeps have shown that PP-g-MAH has a zero phase viscosity of around 1600 Pa·s and the PP control has a similar zero phase viscosity of 1345 Pa·s. Also, the gelation points are similar, at 2.42×10⁴ Pa (118 rad/s) and 2.81×10⁴ Pa (161 rad/s) for the PP control and PP-g-MAH, respectively. Secondly, the textile composite was analyzed. The extensional stress of PA66 fabric laminated with either PP control or PP-g-MAH at -10 °C, 25 °C and 170 °C for strain rates of 0.001–1 s⁻¹ was investigated. The laminates containing the modified PP need more stress for T-peeling. However, the strengthening effect due to the modification decreases with increasing temperature, and at 170 °C, just above the melting temperature of the matrix, the difference disappears. Independent of the matrix used in the textile composite, there is a decrease of extensional stress with increasing temperature. It appears that the more viscous the matrix, the weaker the laminar adhesion. Possibly, the measurement is influenced by the fact that the laminate becomes stiffer at lower temperatures. Adhesive lap-shear testing at room temperature supports the findings obtained with the T-peel test. Additional analysis of the textile composite at the microscopic level ensures that the fibers are well embedded in the matrix. Atomic force microscopy (AFM) imaging of a cross section of the composite shows no gaps between the fibers and matrix. Measurements of the water contact angle show that the MAH-grafted PP is more polar than the virgin PP, and that suggests a more favorable chemical interaction of PP-g-MAH with PA compared to the non-modified PP. In fact, this study indicates that T-peel testing by DMTA is a technique to achieve more insights into polymeric textile composites.
Keywords: dynamic mechanical thermal analysis, interphase, polyamide, polypropylene, textile composite
Procedia PDF Downloads 129

201 Validating the Micro-Dynamic Rule in Opinion Dynamics Models
Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validating the rule. A few studies started bridging this literature gap by experimentally testing the rule. However, in these studies, participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data from experimental questions without testing whether differences existed between them. Indeed, it is possible that different topics could show different dynamics. For example, people may be more prone to accepting someone else's opinion regarding less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions using natural language ('agree' or 'disagree') and the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed the participant someone else's opinion on the same topic and, after a distraction task, we repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree = 1 and disagree = -1) by the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. By analyzing the topics independently, we observed that each one shows a different initial distribution. However, the dynamics (i.e., the properties of the opinion change) appear to be similar between all topics. This suggested that the same micro-dynamic rule could be applied to unpolarized topics. Another important result is that participants who change opinion tend to maintain similar levels of certainty. This is in contrast with typical micro-dynamic rules, where agents move to an average point instead of directly jumping to the opposite continuous opinion. As expected, in the data, we also observed the effect of social influence. This means that exposing someone to 'agree' or 'disagree' influenced participants towards respectively higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than that of social influence. We even observed cases of people who changed from 'agree' to 'disagree,' even if they were exposed to 'agree.' This phenomenon is surprising, as, in the standard literature, the strength of the noise is usually smaller than the strength of social influence. Finally, we also built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance. Furthermore, by iterating the model, we were able to produce polarized states even starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule. This also allows us to build models which are directly grounded on experimental results.
Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule
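A toy sketch of the kind of update rule discussed above (continuous opinion built as stance × certainty, a pull towards the displayed stance, plus noise stronger than that pull) is given below; the parameter values are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(42)
N, STEPS = 200, 5000
opinions = rng.uniform(-10, 10, N)  # continuous opinion = stance * certainty

INFLUENCE = 0.5   # pull towards the displayed 'agree'/'disagree' (assumed strength)
NOISE_SD = 2.0    # random variation, deliberately stronger than the pull

for _ in range(STEPS):
    i, j = rng.choice(N, size=2, replace=False)
    shown_stance = 1.0 if opinions[j] >= 0 else -1.0  # the partner shows only a stance
    opinions[i] += INFLUENCE * shown_stance + rng.normal(0.0, NOISE_SD)
    opinions[i] = np.clip(opinions[i], -10, 10)       # certainty bounded at 10

print("share agreeing after iteration:", np.mean(opinions > 0))
```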
Procedia PDF Downloads 163

200 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning
Authors: Hossein Havaeji, Tony Wong, Thien-My Dao
Abstract:
1. Research Problems and Research Objectives: The Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is a system using BT to drive SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. The costs must be estimated, as they can impact existing cost control strategies. To account for system and deployment costs, it is necessary to overcome the following hurdle. The problem is that the costs of developing and running a BT in SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the importance of the BT installation cost, which has a direct impact on the total costs of SCS. Predicting the BT installation cost in SCS may help managers decide whether BT is to be an economic advantage. The purpose of the research is to identify some main BT installation cost components in SCS needed for deeper cost analysis. We then identify and categorize the main groups of cost components in more detail to utilize them in the prediction process. The second objective is to determine the suitable Supervised Learning technique in order to predict the costs of developing and running BT in SCS in a particular case study. The last aim is to investigate how the running BT cost can be involved in the total cost of SCS.
2. Work Performed: Applied successfully in various fields, Supervised Learning is a method to set the data frame, treat the data, and train/practice the chosen method. It is a learning model directed to make predictions of an outcome measurement based on a set of unforeseen input data. The following steps must be conducted to pursue the objectives of our subject. The first step is to carry out a literature review to identify the different cost components of BT installation in SCS. Based on the literature review, we should choose some Supervised Learning methods which are suitable for BT installation cost prediction in SCS. According to the literature review, some Supervised Learning algorithms which provide us with a powerful tool to classify BT installation components and predict BT installation cost are the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models comes in the third step. Finally, we will propose the best predictive performance to find the minimum BT installation costs in SCS.
3. Expected Results and Conclusion: This study intends to propose a cost prediction of BT installation in SCS with the help of Supervised Learning algorithms. At the first attempt, we will select a case study in the field of BT-enabled SCS, and then use some Supervised Learning algorithms to predict BT installation cost in SCS. We continue to find the best predictive performance for developing and running BT in SCS. Finally, the paper will be presented at the conference.
Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning
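A minimal sketch of the prediction step using one of the algorithms named above (SVR) is shown below; the CSV file, feature names and hyperparameters are placeholders, not the study's case-study data.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

# Hypothetical cost-component table: one row per BT-enabled SCS deployment
df = pd.read_csv("bt_installation_costs.csv")
features = ["hardware", "software_licences", "integration", "training", "network_nodes"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["installation_cost"], test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_train, y_train)
print("MAE on held-out deployments:", mean_absolute_error(y_test, model.predict(X_test)))
```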
Procedia PDF Downloads 122

199 Evaluation Method for Fouling Risk Using Quartz Crystal Microbalance
Authors: Natsuki Kishizawa, Keiko Nakano, Hussam Organji, Amer Shaiban, Mohammad Albeirutty
Abstract:
One of the most important tasks in operating desalination plants using the reverse osmosis (RO) method is preventing RO membrane fouling caused by foulants found in seawater. Optimal design of the pre-treatment process of RO plants enables the reduction of foulants. Therefore, a quantitative evaluation of the fouling risk in pre-treated water, which is fed to the RO stage, is required for optimal design. Some measurement methods for water quality, such as the silt density index (SDI) and total organic carbon (TOC), have been conservatively applied for such evaluations. However, these methods have not been effective in some situations for evaluating the fouling risk of RO feed water. Furthermore, stable management of plants will be possible through alerts and appropriate control of the pre-treatment process if the method can be applied to an inline monitoring system for the fouling risk of RO feed water. The purpose of this study is to develop a method to evaluate the fouling risk of RO feed water. We applied a quartz crystal microbalance (QCM) to measure the amount of foulants found in seawater using a sensor whose surface is coated with a polyamide thin film, which is the main material of a RO membrane. The increase in the weight of the sensor after a certain length of time in which the sample water passes indicates the fouling risk of the sample directly. We classified the values as 'FP: Fouling Potential'. The characteristics of the method are that it measures the very small amount of substances in seawater in a short time (< 2 h) and from a small volume of sample water (< 50 mL). Using some RO cell filtration units, a higher correlation between the pressure increase caused by RO fouling and the FP from the method, compared with SDI and TOC, was confirmed in the laboratory-scale test. Then, to establish the correlation in an actual bench-scale RO membrane module, and to confirm the feasibility of the monitoring system as a control tool for the pre-treatment process, we have started a long-term test at an experimental desalination site by the Red Sea in Jeddah, Kingdom of Saudi Arabia. Implementing inline equipment for the method made it possible to measure FP intermittently (4 times per day) and automatically. Moreover, over two 3-month operations, the RO operation pressure among feed water samples of different qualities was compared. A pressure increase through the RO membrane module was observed in the high-FP RO unit, in which the feed water was treated by a cartridge filter only. On the other hand, no pressure increase was observed in the low-FP RO unit, in which the feed water was treated by an ultrafilter during the operation. Therefore, the correlation for an actual-scale RO membrane was established in two runs with two types of feed water. The result suggested that the FP method enables the evaluation of the fouling risk of RO feed water.
Keywords: fouling, monitoring, QCM, water quality
Procedia PDF Downloads 212

198 A Standard-Based Competency Evaluation Scale for Preparing Qualified Adapted Physical Education Teachers
Authors: Jiabei Zhang
Abstract:
Although adapted physical education (APE) teacher preparation programs are available in the nation, a consistent standards-based competency evaluation scale for the preparation of qualified personnel for teaching children with disabilities in APE cannot be identified in the literature. The purpose of this study was to develop a standard-based competency evaluation scale for assessing qualifications for teaching children with disabilities in APE. Standard-based competencies were reviewed and identified based on research evidence documented as effective in teaching children with disabilities in APE. A standard-based competency scale was developed for assessing qualifications for teaching children with disabilities in APE. This scale included 20 standard-based competencies and a 4-point Likert-type scale for each standard-based competency. The first standard-based competency is knowledge of the causes of disabilities and their effects. The second competency is the ability to assess the physical education skills of children with disabilities. The third competency is the ability to collaborate with other personnel. The fourth competency is knowledge of measurement and evaluation. The fifth competency is the ability to understand federal and state laws. The sixth competency is knowledge of the unique characteristics of all learners. The seventh competency is the ability to write objectives in behavioral terms. The eighth competency is knowledge of developmental characteristics. The ninth competency is knowledge of normal and abnormal motor behaviors. The tenth competency is the ability to analyze and adapt physical education curriculums. The eleventh competency is to understand the history and the philosophy of physical education. The twelfth competency is to understand curriculum theory and development. The thirteenth competency is the ability to utilize instructional designs and plans. The fourteenth competency is the ability to create and implement physical activities. The fifteenth competency is the ability to utilize technology applications. The sixteenth competency is to understand the value of program evaluation. The seventeenth competency is to understand professional standards. The eighteenth competency is knowledge of focused instruction and individualized interventions. The nineteenth competency is the ability to complete a research project independently. The twentieth competency is the ability to teach children with disabilities in APE independently. The 4-point Likert-type scale ranges from 1 for incompetent to 4 for highly competent. This scale is used for assessing whether someone completing all course work is eligible to receive an endorsement for teaching children with disabilities in APE, which is determined based on the grades earned in three courses targeted for each standard-based competency. The mean grade received in the three courses primarily addressing a standard-based competency is marked as a competency level on the above scale. Level 4 is marked for a mean grade of A over the three courses, level 3 for a mean grade of B over the three courses, and so on. One should receive a mean score of 3 (competent level) or higher (highly competent) across 19 standard-based competencies after completing all courses specified for receiving an endorsement for teaching children with disabilities in APE.
The validity, reliability, and objectivity of this standard-based competency evaluation scale are to be documented.
Keywords: evaluation scale, teacher preparation, adapted physical education teachers, and children with disabilities
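A minimal sketch of the grade-to-level arithmetic described above follows; the competency names and course grades are hypothetical.

```python
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1}

def competency_level(course_grades):
    """Mean grade over the three targeted courses, mapped to the 4-point scale."""
    return sum(GRADE_POINTS[g] for g in course_grades) / len(course_grades)

# Hypothetical transcript: three course grades per standard-based competency
transcript = {
    "assessing_PE_skills": ["A", "B", "A"],           # level (4 + 3 + 4) / 3 = 3.67
    "collaboration_with_personnel": ["B", "B", "B"],  # level 3.0
    # ... the remaining competencies would be listed here ...
}
levels = {name: competency_level(grades) for name, grades in transcript.items()}
eligible = all(level >= 3 for level in levels.values())
print(levels)
print("Eligible for the APE endorsement:", eligible)
```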
Procedia PDF Downloads 117

197 The Coaching on Lifestyle Intervention (CooL): Preliminary Results and Implementation Process
Authors: Celeste E. van Rinsum, Sanne M. P. L. Gerards, Geert M. Rutten, Ien A. M. van de Goor, Stef P. J. Kremers
Abstract:
Combined lifestyle interventions have been shown to be effective in changing and maintaining behavioral lifestyle changes and reducing overweight and obesity. A lifestyle coach is expected to promote lifestyle changes in adults related to physical activity and diet. The present Coaching on Lifestyle (CooL) study examined participants' physical activity level and dietary behavioral and motivational changes immediately after the intervention and at 1.5 years after baseline. In the CooL intervention, a lifestyle coach coaches individuals aged eighteen years and older with (a high risk of) obesity in group and individual sessions. In addition, a process evaluation was conducted in order to examine the implementation process and to be able to interpret the changes within the participants. This action-oriented research has a pre-post design. Participants of the CooL intervention (N = 200) completed three questionnaires: at baseline, immediately after the intervention (on average after 44 weeks), and at 1.5 years after baseline. T-tests and linear regressions were conducted to test self-reported changes in physical activity (IPAQ), dietary behaviors, the quality of motivation for physical activity (BREQ-3) and for diet (REBS), body mass index (BMI), and quality of life (EQ-5D-3L). For the process evaluation, we used individual and group interviews, observations and document analyses to gain insight into the implementation process (e.g., the recruitment) and how the intervention was valued by the participants, lifestyle coaches, and referrers. The study is currently ongoing and therefore the results presented here are preliminary. On average, the participants who finished the intervention and those who completed the long-term measurement improved their level of vigorous-intensity physical activity, sedentary behavior, sugar-sweetened beverage consumption and BMI. Mixed results were observed for motivational regulation for physical activity and nutrition. Moreover, an improvement in the quality-of-life dimension anxiety/depression was found, also in the long term. All the other constructs did not show significant change over time. The results of the process evaluation have shown that recruitment of clients was difficult. Participants evaluated the intervention positively, and the lifestyle coaches have continuously adapted the structure and contents of the intervention throughout the study period, based on their experiences and feedback from the research. Preliminary results indicate that the CooL intervention may have beneficial effects on overweight and obese participants in terms of energy balance-related behaviors, weight reduction, and quality of life. Recruitment of participants and embedding the position of the lifestyle coach in traditional care structures are challenging.
Keywords: combined lifestyle intervention, effect evaluation, lifestyle coaching, process evaluation, overweight, the Netherlands
Procedia PDF Downloads 230

196 Metalorganic Chemical Vapor Deposition Overgrowth on the Bragg Grating for Gallium Nitride Based Distributed Feedback Laser
Abstract:
Laser diodes fabricated from the III-nitride material system are emerging solutions for next-generation telecommunication systems and for optical clocks based on Ca at 397 nm, Rb at 420.2 nm and Yb at 398.9 nm combined with 556 nm. Most of the applications, such as communication systems and laser cooling, require single longitudinal optical mode lasers with very narrow linewidth and compact size. In this case, the GaN-based distributed feedback (DFB) laser diode is one of the most effective candidates, since DFB gratings are known to operate with narrow spectra as well as high power and efficiency. Given the wavelength range, the period of the first-order diffraction grating is under 100 nm, and the realization of such gratings is technically difficult due to the narrow line width and the high-quality nitride overgrowth required on the Bragg grating. Some groups have reported GaN DFB lasers with high-order distributed feedback surface gratings, which avoid the overgrowth. However, the strength of coupling is generally lower than that with a Bragg grating embedded into the waveguide within the GaN laser structure by two-step epitaxy. Therefore, the overgrowth-on-grating technology needs to be studied and optimized. Here we propose to fabricate the fine step-shaped structure of a first-order grating by nanoimprint combined with inductively coupled plasma (ICP) dry etching, and then to overgrow a high-quality AlGaN film by metalorganic chemical vapor deposition (MOCVD). A series of gratings with different periods, depths and duty ratios were then designed and fabricated to study the influence of the grating structure on the nano-heteroepitaxy. Moreover, we observe the nucleation and growth process by step-by-step growth to study the growth mode for nitride overgrowth on the grating, under the condition that the grating period is larger than the metal migration length on the surface. The AFM images demonstrate that a smooth surface of the AlGaN film is achieved, with an average roughness of 0.20 nm over 3 × 3 μm². The full width at half maximum (FWHM) of the (002) reflection in the XRD rocking curve is 278 arcsec for the AlGaN film, and the Al content of the film is 8% according to the XRD mapping measurement, which is in accordance with the design values. By observing the samples with growth time changing from 200 s to 400 s and 600 s, the growth mode is summarized in the following steps: initially, nucleation is evenly distributed on the grating structure, as the migration length of Al atoms is low; then, AlGaN grows along the grating top surface; finally, the AlGaN film is formed by lateral growth. This work contributes to realizing a GaN DFB laser by fabricating the grating and overgrowing on the nano-grating patterned substrate at wafer scale; moreover, the growth dynamics have been analyzed as well.
Keywords: DFB laser, MOCVD, nanoepitaxy, III-nitride
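For reference, the sub-100 nm figure follows directly from the first-order Bragg condition; the effective index used below (n_eff ≈ 2.4, typical for a GaN waveguide) is an assumption, not a value reported in the abstract.

```latex
\Lambda \;=\; \frac{m\,\lambda_B}{2\,n_{\mathrm{eff}}}
\qquad\Rightarrow\qquad
\Lambda \;\approx\; \frac{1 \times 398.9\ \mathrm{nm}}{2 \times 2.4} \;\approx\; 83\ \mathrm{nm}
```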
Procedia PDF Downloads 191

195 The Impact of the Global Financial Crisis on the Performance of Czech Industrial Enterprises
Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak
Abstract:
The global financial crisis that erupted in 2008 is associated mainly with the debt crisis. It quickly spread globally through financial markets, international banks and trade links, and affected many economic sectors. Measured by the index of the year-on-year change in GDP and industrial production, the consequences of the global financial crisis manifested themselves with some delay also in the Czech economy. This can be considered a result of the overwhelming export orientation of Czech industrial enterprises. These events offer an important opportunity to study how financial and macroeconomic instability affects corporate performance. Corporate performance factors have long been given considerable attention. It is therefore reasonable to ask whether the findings published in the past are also valid in the times of economic instability and subsequent recession. The decisive factor in effective corporate performance measurement is the existence of an appropriate system of indicators that are able to assess progress in achieving corporate goals. Performance measures may be based on non-financial as well as on financial information. In this paper, financial indicators are used in combination with other characteristics, such as the firm size and ownership structure. Financial performance is evaluated based on traditional performance indicators, namely, return on equity and return on assets, supplemented with indebtedness and current liquidity indices. As investments are a very important factor in corporate performance, their trends and importance were also investigated by looking at the ratio of investments to previous year’s sales and the rate of reinvested earnings. In addition to traditional financial performance indicators, the Economic Value Added was also used. Data used in the research were obtained from a questionnaire survey administered in industrial enterprises in the Czech Republic and from AMADEUS (Analyse Major Database from European Sources), from which accounting data of companies were obtained. Respondents were members of the companies’ senior management. Research results unequivocally confirmed that corporate performance dropped significantly in the 2010-2012 period, which can be considered a result of the global financial crisis and a subsequent economic recession. It was reflected mainly in the decreasing values of profitability indicators and the Economic Value Added. Although the total year-on-year indebtedness declined, intercompany indebtedness increased. This can be considered a result of impeded access of companies to bank loans due to the credit crunch. Comparison of the results obtained with the conclusions of previous research on a similar topic showed that the assumption that firms under foreign control achieved higher performance during the period investigated was not confirmed.
Keywords: corporate performance, foreign control, intercompany indebtedness, ratio of investment
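For reference, the Economic Value Added referred to above is conventionally computed as after-tax operating profit less a charge for the capital employed; the sketch below uses purely illustrative figures, not data from the surveyed firms.

```python
def economic_value_added(nopat: float, invested_capital: float, wacc: float) -> float:
    """EVA = NOPAT - WACC * invested capital (a common formulation)."""
    return nopat - wacc * invested_capital

# Illustrative firm, figures in CZK million
print(economic_value_added(nopat=120.0, invested_capital=1500.0, wacc=0.09))  # -> -15.0
```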
Procedia PDF Downloads 334

194 Comparing Remote Sensing and in Situ Analyses of Test Wheat Plants as Means for Optimizing Data Collection in Precision Agriculture
Authors: Endalkachew Abebe Kebede, Bojin Bojinov, Andon Vasilev Andonov, Orhan Dengiz
Abstract:
Remote sensing has potential application in assessing and monitoring plants' biophysical properties using the spectral responses of plants and soils within the electromagnetic spectrum. However, only a few reports compare the performance of different remote sensing sensors against in-situ field spectral measurement. The current study assessed the potential applications of open data source satellite images (Sentinel 2 and Landsat 9) in estimating the biophysical properties of the wheat crop on a study farm in the village of Ovcha Mogila. Landsat 9 (30 m resolution) and Sentinel-2 (10 m resolution) satellite images with less than 10% cloud cover were extracted from the open data sources for the period of December 2021 to April 2022. An Unmanned Aerial Vehicle (UAV) was used to capture the spectral response of the plant leaves. In addition, a SpectraVue 710s Leaf Spectrometer was used to measure the spectral response of the crop in April at five different locations within the same field. The ten most common vegetation indices were selected and calculated based on the reflectance wavelength range of the remote sensing tools used. Soil samples were collected at eight different locations within the farm plot. The different physicochemical properties of the soil (pH, texture, N, P₂O₅, and K₂O) were analyzed in the laboratory. The finer-resolution images from the UAV and the Leaf Spectrometer were used to validate the satellite images. The performance of the different sensors was compared based on the measured leaf spectral response and the extracted vegetation indices using the five sampling points. A scatter plot with the coefficient of determination (R²) and Root Mean Square Error (RMSE), and the correlation (r) matrix prepared using the Python corr and heatmap functions, were used to compare the performance of the Sentinel 2 and Landsat 9 VIs against the drone and the SpectraVue 710s spectrophotometer. The soil analysis revealed that the study farm plot is slightly alkaline (pH 8.4 to 8.52). The soil texture of the study farm is dominantly Clay and Clay Loam. The vegetation indices (VIs) increased linearly with the growth of the plant. Both the scatter plot and the correlation matrix showed that the Sentinel 2 vegetation indices have a relatively better correlation with the vegetation indices of the Buteo drone compared to Landsat 9. The Landsat 9 vegetation indices align somewhat better with the leaf spectrometer. Generally, Sentinel 2 showed a better performance than Landsat 9. Further study with sufficient field spectral sampling and repeated UAV imaging is required to improve the quality of the current study.
Keywords: landsat 9, leaf spectrometer, sentinel 2, UAV
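A minimal sketch of the comparison workflow described above (an index per sensor, then a correlation matrix rendered with the corr and heatmap calls the abstract refers to) is shown below; the reflectance values for the five sampling points are illustrative, not the study's measurements, and pandas/seaborn are assumed as the underlying libraries.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)

print("toy NDVI:", ndvi(np.array([0.52, 0.55]), np.array([0.08, 0.07])))

# Hypothetical mean NDVI per sensor at the five sampling points
df = pd.DataFrame({
    "sentinel2_ndvi":         [0.41, 0.52, 0.47, 0.55, 0.60],
    "landsat9_ndvi":          [0.38, 0.49, 0.45, 0.50, 0.57],
    "uav_ndvi":               [0.43, 0.54, 0.48, 0.57, 0.62],
    "leaf_spectrometer_ndvi": [0.40, 0.50, 0.46, 0.52, 0.58],
})
sns.heatmap(df.corr(), annot=True, cmap="viridis")  # r matrix between sensors
plt.title("NDVI correlation between sensors (illustrative data)")
plt.show()
print("R^2 Sentinel-2 vs UAV:", df["sentinel2_ndvi"].corr(df["uav_ndvi"]) ** 2)
```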
Procedia PDF Downloads 108

193 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea
Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park
Abstract:
Groundwater is considered a significant source for drinking and irrigation purposes in Miryang city, which is attributed to a limited number of surface water reservoirs and high seasonal variations in precipitation. Population growth, in addition to the expansion of agricultural land uses and industrial development, may affect the quality and management of groundwater. This research utilized multidisciplinary geostatistical approaches, such as multivariate statistics, factor analysis, cluster analysis and kriging techniques, in order to identify the hydrogeochemical processes and characterize the factors controlling the groundwater geochemistry distribution, and to develop risk maps, exploiting data obtained from chemical investigation of groundwater samples in the study area. A total of 79 samples were collected and analyzed using an atomic absorption spectrometer (AAS) for major and trace elements. Chemical maps of the groundwater using a 2-D spatial Geographic Information System (GIS) provided a powerful tool for detecting the potential sites of groundwater that involve a threat of contamination. The GIS computer-based maps showed that a higher rate of contamination was observed in the central and southern areas, with a relatively lower extent in the northern and southwestern parts. It could be attributed to the effect of irrigation, residual saline water, municipal sewage and livestock wastes. At well elevations over 85 m, the scatter diagram indicates that the groundwater of the research area was mainly influenced by saline water and NO₃. The pH measurements revealed slightly acidic conditions due to dissolved atmospheric CO₂ in the soil, while the saline water had a major impact on the higher values of TDS and EC. Based on the cluster analysis results, the groundwater has been categorized into three groups: the Ca-HCO₃ type of fresh water, the Na-HCO₃ type slightly influenced by sea water, and the Ca-Cl and Na-Cl types, which are heavily affected by saline water. The most predominant water type in the study area was Ca-HCO₃. Contamination sources and chemical characteristics were identified from the factor analysis interrelationships and the cluster analysis. The chemical elements belonging to factor 1 were related to the effect of sea water, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution, and location of groundwater contamination were mapped using kriging methods. Thus, the geostatistical model provided more accurate results for identifying the sources of contamination and evaluating the groundwater quality. GIS was also a creative tool to visualize and analyze the issues affecting water quality in Miryang city.
Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, Kriging techniques
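A minimal sketch of the clustering step that separates the water types follows; KMeans is used here purely as a stand-in (the abstract does not name the exact clustering algorithm), and the file and ion column names are placeholders for the 79 analysed samples.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("miryang_groundwater.csv")  # hypothetical lab-results table
ions = ["Ca", "Na", "Mg", "K", "HCO3", "Cl", "SO4", "NO3"]

X = StandardScaler().fit_transform(df[ions])  # standardise the ion concentrations
df["water_group"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(df.groupby("water_group")[ions].mean())  # e.g. Ca-HCO3, Na-HCO3, Na-Cl patterns
```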
Procedia PDF Downloads 169

192 Examination of Corrosion Durability Related to Installed Environments of Steel Bridges
Authors: Jin-Hee Ahn, Seok-Hyeon Jeon, Young-Bin Lee, Min-Gyun Ha, Yu-Chan Hong
Abstract:
The corrosion durability of steel bridges can generally be affected by the atmospheric environment in which the bridge is installed, since corrosion is related to environmental factors such as humidity, temperature, airborne salt, and chemical components such as SO₂ and chlorides. Thus, atmospheric environmental conditions should be measured to estimate the corrosion condition of steel bridges, in addition to measuring the actual corrosion damage of the structural members of the bridge. In this study, therefore, atmospheric corrosion monitoring was conducted using an atmospheric corrosion monitoring sensor, a hygrometer, a thermometer and an airborne salt collection device to examine the corrosion durability of steel bridges. As the target steel bridge for corrosion durability monitoring, a cable-stayed bridge with truss steel members was selected. This cable-stayed bridge was located on the coast, connecting islands. In particular, atmospheric corrosion monitoring was carried out according to the orientation of the structural members, since the cable-stayed bridge with truss-type girders consists of structural members facing various directions. For atmospheric corrosion monitoring, the daily average electricity (corrosion current) was measured at each monitored member to evaluate the corrosion environment and corrosion level of structural members with various orientations, which have different corrosion environments in the same installation area. To relate corrosion durability to the monitoring data for each monitored member, monitoring steel plates were additionally installed on the same members. The carbon steel monitoring plates were fabricated with a width of 60 mm and a thickness of 3 mm. Their surfaces were cleaned of rust by blasting, and their weights were measured before installation on each structural member. After a 3-month exposure period in the actual atmospheric corrosion environment at the bridge, the surface condition of the atmospheric corrosion monitoring sensors and the monitoring steel plates was inspected for corrosion damage. When severe deterioration of the atmospheric corrosion monitoring sensors or corrosion damage of the monitoring steel plates was found, they were replaced or collected. From the 3-month exposure tests on the actual steel bridge, with its structural members facing various directions, rust was found on the surface of the monitoring steel plates, and visual inspection revealed differences in the corrosion rate depending on the direction of the structural member. The daily average electricity (corrosion current) also changed depending on the direction of the structural member. However, it is difficult to identify the relative differences in the corrosion durability of steel structural members using short-term monitoring results. After long-term exposure tests in these corrosion environments, the differences in corrosion durability depending on the installation conditions of steel bridges can be clearly evaluated.
Acknowledgements: This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1B03028755).
Keywords: corrosion, atmospheric environments, steel bridge, monitoring
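As a note on how the before/after plate weights translate into a corrosion rate, the standard mass-loss relation (the ASTM G1 form) is sketched below with illustrative numbers rather than the bridge data; the coupon area and exposure time are assumptions.

```python
def corrosion_rate_mm_per_year(mass_loss_g: float, area_cm2: float,
                               hours: float, density_g_cm3: float = 7.85) -> float:
    """Mass-loss corrosion rate, CR = K*W/(A*T*D), with K = 8.76e4 giving mm/year."""
    return 8.76e4 * mass_loss_g / (area_cm2 * hours * density_g_cm3)

# Hypothetical 60 mm x 60 mm carbon steel coupon, both faces, about 3 months outdoors
area_cm2 = 2 * 6.0 * 6.0
print(corrosion_rate_mm_per_year(mass_loss_g=0.12, area_cm2=area_cm2, hours=3 * 30 * 24))
```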
Procedia PDF Downloads 362
191 Comparison of Microstructure, Mechanical Properties and Residual Stresses in Laser and Electron Beam Welded Ti–5Al–2.5Sn Titanium Alloy
Authors: M. N. Baig, F. N. Khan, M. Junaid
Abstract:
Titanium alloys are widely employed in aerospace, medical, chemical, and marine applications. These alloys offer many advantages, such as low specific weight, a high strength-to-weight ratio, excellent corrosion resistance, a high melting point, and good fatigue behavior. These attractive properties make titanium alloys unique, and they therefore require special attention in all areas of processing, especially welding. In this work, 1.6 mm thick sheets of Ti-5Al-2.5Sn, an alpha titanium (α-Ti) alloy, were welded using electron beam welding (EBW) and laser beam welding (LBW) to achieve a full-penetration bead-on-plate (BoP) configuration. The weldments were studied using polarized optical microscopy, SEM, EDS, and XRD. The microhardness distribution across the weld zone and the smooth and notch tensile strengths of the weldments were also recorded. Residual stresses, determined by the hole-drilling strain measurement (HDSM) method, and the deformation patterns of the weldments were measured in order to compare the two welding processes. The fusion zone widths of the EBW and LBW weldments were found to be approximately equal, owing to the fairly similar, high power densities of the two processes. A relatively low oxide content, and consequently a higher joint quality, was achieved in the EBW weldment compared with LBW because of the vacuum environment and the absence of any shielding gas. However, a wider heat-affected zone and a partial α′-martensitic transformation in the fusion zone of the EBW weldment were observed because of the lower cooling rates associated with EBW compared with LBW. The microstructure in the fusion zone of the EBW weldment comprised both acicular α and α′ martensite within the prior β grains, whereas a complete α′-martensitic transformation was observed within the fusion zone of the LBW weldment. The hardness of the fusion zone in the EBW weldment was found to be lower than that of the LBW weldment because of the observed microstructural differences. The notch tensile specimen of the LBW weldment exhibited higher load capacity, ductility, and absorbed energy compared with the EBW specimen due to the presence of the high-strength α′-martensitic phase. It was observed that the sheet deformation and deformation angle in the EBW weldment were greater than in the LBW weldment because of the relatively higher heat retention in EBW, which led to larger thermal strains and hence larger deformations and deformation angles. The lowest residual stresses, which were tensile in nature, were found in the LBW weldments, owing to the high power density and higher cooling rates associated with the LBW process. The EBW weldment exhibited the highest compressive residual stresses, due to which the service life of the EBW weldment is expected to improve.
Keywords: laser and electron beam welding, microstructure and mechanical properties, residual stress and distortions, titanium alloys
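Because the residual stresses were obtained with the hole-drilling strain measurement method, the two principal residual stresses can be recovered from the three relieved strains of a standard rosette using the uniform-stress blind-hole relations, in the spirit of ASTM E837. A minimal sketch of that calculation follows; the relieved strains and the calibration constants Ā and B̄ are placeholders, not data from this study, and sign conventions should be checked against the standard before any real evaluation.

```python
import math

def principal_residual_stresses(e1, e2, e3, A_bar, B_bar):
    """Uniform-stress blind-hole relations (in the spirit of ASTM E837):
    the two principal stresses are (e1 + e3)/(4*A_bar) -/+
    sqrt((e3 - e1)**2 + (e1 + e3 - 2*e2)**2) / (4*B_bar), and the principal
    direction beta follows from tan(2*beta) = (e1 + e3 - 2*e2) / (e3 - e1)."""
    p = (e1 + e3) / (4.0 * A_bar)
    q = math.hypot(e3 - e1, e1 + e3 - 2.0 * e2) / (4.0 * B_bar)
    beta = 0.5 * math.degrees(math.atan2(e1 + e3 - 2.0 * e2, e3 - e1))
    return sorted((p - q, p + q), reverse=True), beta

# Placeholder relieved strains and calibration constants (negative by convention);
# real values depend on the rosette geometry, hole size/depth and the material.
strains = (-120e-6, -100e-6, -40e-6)
A_bar, B_bar = -0.9e-12, -1.9e-12          # assumed, in 1/Pa

(sig_max, sig_min), beta_deg = principal_residual_stresses(*strains, A_bar, B_bar)
print(f"sigma_max = {sig_max/1e6:.1f} MPa, sigma_min = {sig_min/1e6:.1f} MPa, "
      f"beta = {beta_deg:.1f} deg")
```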
Procedia PDF Downloads 229
190 A Comparative Study of the Tribological Behavior of Bilayer Coatings for Machine Protection
Authors: Cristina Diaz, Lucia Perez-Gandarillas, Gonzalo Garcia-Fuentes, Simone Visigalli, Roberto Canziani, Giuseppe Di Florio, Paolo Gronchi
Abstract:
During their lifetime, industrial machines are often subjected to extreme chemical, mechanical, and thermal conditions. In some cases, the loss of efficiency comes from the degradation of the surface as a result of its exposure to abrasive environments that can cause wear. This is a common problem to be solved in industries of diverse nature, such as the food, paper, or concrete industries, among others. For this reason, a good selection of the material is of high importance. In the machine design context, stainless steels such as AISI 304 and 316 are widely used. However, the severity of the external conditions can require additional protection for the steel, and coating solutions are sometimes demanded in order to extend the lifespan of these materials. Therefore, the development of effective coatings with high wear resistance is of utmost technological relevance. In this research, bilayer coatings made of titanium-tantalum, titanium-niobium, titanium-hafnium, and titanium-zirconium have been developed using a magnetron sputtering configuration by PVD (physical vapor deposition) technology. Their tribological behavior has been measured and evaluated under different environmental conditions. Two kinds of steel were used as substrates: AISI 304 and AISI 316. For comparison with these materials, a titanium alloy substrate was also employed. Regarding the characterization, the wear rate and friction coefficient were evaluated with a tribo-tester, using a pin-on-ball configuration with different lubricants such as tomato sauce, wine, olive oil, wet compost, a mix of sand and concrete with water, and NaCl, to approximate real extreme conditions. In addition, topographical images of the wear tracks were obtained in order to gain more insight into the wear behavior, and scanning electron microscope (SEM) images were taken to evaluate the adhesion and quality of the coatings. The characterization was completed with the measurement of nanoindentation hardness and elastic modulus. Concerning the results, the coating thicknesses varied from 100 nm (Ti-Zr layer) to 1.4 µm (Ti-Hf layer), and SEM images confirmed that the addition of the Ti layer improved the adhesion of the coatings. Moreover, the results pointed out that these coatings increased the wear resistance in comparison with the original substrates under environments of different severity. Furthermore, the nanoindentation hardness results showed an improvement of the elastic strain to failure and a high modulus of elasticity (approximately 200 GPa). In conclusion, Ti-Ta, Ti-Zr, Ti-Nb, and Ti-Hf are very promising and effective coatings in terms of tribological behavior, considerably improving the wear resistance and friction coefficient of typically used machine materials.
Keywords: coating, stainless steel, tribology, wear
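The wear rate from such tribo-tests is commonly reported as a specific (Archard-type) wear rate, i.e. the worn volume normalised by the applied load and sliding distance. A short sketch of that calculation follows; the wear-track geometry, load, and sliding distance are illustrative assumptions, not values from this study.

```python
import math

def specific_wear_rate(wear_volume_mm3, load_n, sliding_distance_m):
    """Archard-type specific wear rate k = V / (F * s), in mm^3 / (N * m)."""
    return wear_volume_mm3 / (load_n * sliding_distance_m)

# Assumed circular wear track with a shallow cross-section measured by profilometry.
track_radius_mm = 3.0
cross_section_area_mm2 = 1.2e-3          # hypothetical average track cross-section
wear_volume = 2 * math.pi * track_radius_mm * cross_section_area_mm2

k = specific_wear_rate(wear_volume, load_n=5.0, sliding_distance_m=100.0)
print(f"Specific wear rate: {k:.2e} mm^3/(N*m)")
```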
Procedia PDF Downloads 151
189 Diagnostic Performance of Mean Platelet Volume in the Diagnosis of Acute Myocardial Infarction: A Meta-Analysis
Authors: Kathrina Aseanne Acapulco-Gomez, Shayne Julieane Morales, Tzar Francis Verame
Abstract:
Mean platelet volume (MPV) is the most accurate measure of the size of platelets and is routinely measured by most automated hematological analyzers. Several studies have shown associations between MPV and cardiovascular risks and outcomes. Although its measurement may provide useful data, MPV remains a diagnostic tool that is yet to be included in routine clinical decision making. The aim of this systematic review and meta-analysis is to determine summary estimates of the diagnostic accuracy of mean platelet volume for the diagnosis of myocardial infarction among adult patients with angina and/or its equivalents, in terms of sensitivity, specificity, diagnostic odds ratio, and likelihood ratios, and to determine the difference in mean MPV values between those with MI and the non-MI controls. The primary search was done in the electronic databases PubMed, Cochrane Review CENTRAL, HERDIN (Health Research and Development Information Network), Google Scholar, the Philippine Journal of Pathology, and the Philippine College of Physicians Philippine Journal of Internal Medicine. The reference lists of original reports were also searched. Cross-sectional, cohort, and case-control articles studying the diagnostic performance of mean platelet volume in the diagnosis of acute myocardial infarction in adult patients were included in the study. Studies were included if: (1) CBC was taken upon presentation to the ER or upon admission (within 24 hours of symptom onset); (2) myocardial infarction was diagnosed with serum markers, ECG, or according to guidelines accepted by the cardiology societies (American Heart Association (AHA), American College of Cardiology (ACC), European Society of Cardiology (ESC)); and (3) outcomes were measured as a significant difference and/or sensitivity and specificity. The authors independently screened all the potential studies identified by the search for inclusion. Eligible studies were appraised using well-defined criteria. Any disagreement between the reviewers was resolved through discussion and consensus. The overall mean MPV value of those with MI (9.702 fL; 95% CI 9.07-10.33) was higher than that of the non-MI control group (8.85 fL; 95% CI 8.23-9.46). Interpretation of the calculated t-value of 2.0827 showed that there was a significant difference in the mean MPV values of those with MI and those of the non-MI controls. The summary sensitivity (Se) and specificity (Sp) for MPV were 0.66 (95% CI 0.59-0.73) and 0.60 (95% CI 0.43-0.75), respectively. The pooled diagnostic odds ratio (DOR) was 2.92 (95% CI 1.90-4.50). The positive likelihood ratio of MPV in the diagnosis of myocardial infarction was 1.65 (95% CI 1.20-22.27), and the negative likelihood ratio was 0.56 (95% CI 0.50-0.64). The intended role for MPV in the diagnostic pathway of myocardial infarction would perhaps be best as a triage tool. With a DOR of 2.92, MPV values can discriminate between those who have MI and those without. For a patient with angina presenting with an elevated MPV value, the odds of MI are 1.65 times higher. Thus, it is implied that the decision to treat a patient with angina or its equivalents as a case of MI could be supported by an elevated MPV value.
Keywords: mean platelet volume, MPV, myocardial infarction, angina, chest pain
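The reported likelihood ratios and diagnostic odds ratio follow directly from the pooled sensitivity and specificity, which makes the summary estimates straightforward to sanity-check. A short sketch using the pooled point estimates quoted in the abstract:

```python
def diagnostic_summary(sensitivity, specificity):
    """Positive/negative likelihood ratios and diagnostic odds ratio from Se and Sp."""
    lr_pos = sensitivity / (1.0 - specificity)          # LR+ = Se / (1 - Sp)
    lr_neg = (1.0 - sensitivity) / specificity          # LR- = (1 - Se) / Sp
    dor = lr_pos / lr_neg                               # DOR = LR+ / LR-
    return lr_pos, lr_neg, dor

# Pooled point estimates from the meta-analysis: Se = 0.66, Sp = 0.60.
lr_pos, lr_neg, dor = diagnostic_summary(0.66, 0.60)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}, DOR = {dor:.2f}")
# Reproduces the reported values of roughly 1.65, 0.56 and 2.9.
```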
Procedia PDF Downloads 87
188 Fabrication of SnO₂ Nanotube Arrays for Enhanced Gas Sensing Properties
Authors: Hsyi-En Cheng, Ying-Yi Liou
Abstract:
Metal-oxide semiconductor (MOS) gas sensors are widely used in the gas-detection market due to their high sensitivity, fast response, and simple device structures. However, the high working temperature of MOS gas sensors makes them difficult to integrate with appliances or consumer goods. One-dimensional (1-D) nanostructures are considered to have the potential to lower their working temperature due to their large surface-to-volume ratio, confined electrical conduction channels, and small feature sizes. Unfortunately, the difficulty of fabricating 1-D nanostructure electrodes has hindered the development of low-temperature MOS gas sensors. In this work, we propose a method for fabricating nanotube arrays, and SnO₂ nanotube-array sensors with different wall thicknesses were successfully prepared and examined. The fabrication of the SnO₂ nanotube arrays combines the techniques of a barrier-free anodic aluminum oxide (AAO) template and atomic layer deposition (ALD) of SnO₂. First, a 1.0 µm Al film was deposited on an ITO glass substrate by electron beam evaporation and then anodically oxidized in 5 wt% phosphoric acid solution at 5 °C under a constant voltage of 100 V to form porous aluminum oxide. Once the Al film was fully oxidized, a 15-min over-anodization and a 30-min post-anodization chemical dissolution were used to remove the barrier oxide at the bottom end of the pores and generate a barrier-free AAO template. ALD using TiCl₄ and H₂O as reactants then followed to grow a thin layer of SnO₂ on the template and form SnO₂ nanotube arrays. After removing the surface layer of SnO₂ by H₂ plasma and dissolving the template in 5 wt% phosphoric acid solution at 50 °C, upright-standing SnO₂ nanotube arrays on ITO glass were produced. Finally, an Ag top electrode with a line width of 5 μm was printed on the nanotube arrays to form the SnO₂ nanotube-array sensor. Two SnO₂ nanotube arrays with wall thicknesses of 30 and 60 nm were produced in this experiment for the evaluation of gas sensing ability. Flat SnO₂ films with thicknesses of 30 and 60 nm were also examined for comparison. The results show that the properties of the ALD SnO₂ films were related to the deposition temperature. The films grown at 350 °C had a low electrical resistivity of 3.6×10⁻³ Ω·cm and were, therefore, used for the nanotube-array sensors. The carrier concentration and mobility of the SnO₂ films, characterized by an Ecopia HMS-3000 Hall-effect measurement system, were 1.1×10²⁰ cm⁻³ and 16 cm²/V·s, respectively. The electrical resistance of the SnO₂ film and nanotube-array sensors in air and in a 5% H₂-95% N₂ gas mixture was monitored with a Picotest M3510A 6½-digit multimeter. It was found that, at 200 °C, the 30-nm-wall SnO₂ nanotube-array sensor shows the highest response to 5% H₂, followed by the 30-nm SnO₂ film sensor, the 60-nm SnO₂ film sensor, and the 60-nm-wall SnO₂ nanotube-array sensor. However, at temperatures below 100 °C, all the samples were insensitive to the 5% H₂ gas. Further investigation of sensors with thinner SnO₂ is necessary to improve the sensing ability at temperatures below 100 °C.
Keywords: atomic layer deposition, nanotube arrays, gas sensor, tin dioxide
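For an n-type oxide such as SnO₂ exposed to a reducing gas like H₂, the sensor response is commonly expressed as the ratio of the resistance in air to the resistance in the test gas. A minimal sketch of that calculation from logged resistance values follows; the numbers are placeholders, not measurements from this work.

```python
def sensor_response(r_air_ohm, r_gas_ohm):
    """Response S = R_air / R_gas for an n-type oxide sensing a reducing gas."""
    return r_air_ohm / r_gas_ohm

# Hypothetical multimeter readings at 200 C, in air and in 5% H2 / 95% N2.
r_air, r_gas = 1.8e6, 4.5e5
print(f"Response S = {sensor_response(r_air, r_gas):.1f}")
# The relative change (R_air - R_gas) / R_air is another common way to report it.
```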
Procedia PDF Downloads 243
187 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations
Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh
Abstract:
Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor by considering both quantitative and qualitative image evaluation parameters for clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 prostate-specific membrane antigen (⁶⁸Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ positron emission tomography/computed tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at an interval of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR), and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and tumor SUV from the clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced a noise level approximately equal to that of OSEM at an increased acquisition duration (5 min/bp). For the β-value of 400 at the 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at the 5 min/bp duration. In both phantom and clinical data, an increase in the β-value translates into a decrease in SUV. The lowest levels of SUV and noise were reached with the highest β-value (β = 500), resulting in the highest SNR and lowest SBR, because the noise reduction exceeds the SUV reduction at the highest β-value. When comparing BSREM reconstructions with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM for all qualitative features. Conclusions: The BSREM algorithm, using a larger number of iterations, leads to better quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten the acquisition time.
Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy
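BSREM's noise control comes from the relative difference penalty (the relative difference prior of Nuyts et al.), in which β scales the penalty strength and an edge-preservation parameter γ controls how strongly large neighbour differences are penalised. Below is a small sketch of that penalty for a 2-D image, assuming a 4-neighbourhood with unit weights; the γ value and the toy image are illustrative and do not reproduce the scanner's implementation.

```python
import numpy as np

def relative_difference_penalty(img, beta, gamma=2.0, eps=1e-9):
    """Relative difference prior: beta * sum over neighbour pairs of
    (x_j - x_k)^2 / (x_j + x_k + gamma * |x_j - x_k| + eps)."""
    penalty = 0.0
    for axis in (0, 1):                       # 4-neighbourhood: down and right pairs
        a = img
        b = np.roll(img, -1, axis=axis)
        diff = (a - b)[:-1, :] if axis == 0 else (a - b)[:, :-1]
        s = (a + b)[:-1, :] if axis == 0 else (a + b)[:, :-1]
        penalty += np.sum(diff**2 / (s + gamma * np.abs(diff) + eps))
    return beta * penalty

rng = np.random.default_rng(0)
image = rng.poisson(50, size=(64, 64)).astype(float)   # toy activity image
for beta in (100, 300, 500):
    print(beta, relative_difference_penalty(image, beta))
```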
Procedia PDF Downloads 98
186 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring
Authors: Zheng Wang, Zhenhong Li, Jon Mills
Abstract:
Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high temporal-resolution continuous GBSAR data, including the extreme demand on computational random-access memory (RAM), the delay in producing displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit, where the images within a window form a basic unit. With this strategy, the RAM requirement is reduced to only one unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain keeps temporarily coherent pixels that are present only in certain units rather than throughout the whole observation period. The chain supports real-time processing of the continuous data, and the delay in creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on the stack of images in a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence. The temporally averaged images are then processed by a dedicated interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially coherent pixels. Experiments were conducted using both synthetic and real-world GBSAR data. Displacement time series at the sub-millimetre level are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring across a wide range of scientific and practical applications.
Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring
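At the core of both chains is the interferometric step that converts a phase difference between acquisitions into a line-of-sight displacement. A minimal sketch of that conversion follows, assuming a Ku-band GBSAR wavelength of roughly 17 mm (an assumption; the instrument used in the study is not specified), with the sign convention noted in the comments.

```python
import numpy as np

WAVELENGTH_M = 0.0172   # assumed Ku-band wavelength (~17.4 GHz), not from the paper

def los_displacement(phase_master_rad, phase_slave_rad, wavelength_m=WAVELENGTH_M):
    """Convert an interferometric phase difference into line-of-sight displacement:
    d = -(lambda / (4*pi)) * wrapped(phi_slave - phi_master).
    The sign convention (motion towards or away from the radar) varies by system."""
    dphi = np.angle(np.exp(1j * (phase_slave_rad - phase_master_rad)))  # wrap to (-pi, pi]
    return -wavelength_m / (4.0 * np.pi) * dphi

# Toy example: a 1 rad phase change corresponds to ~1.4 mm of LOS motion at this wavelength.
print(los_displacement(0.0, 1.0) * 1000, "mm")
```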
Procedia PDF Downloads 163
185 The Impact of Emotional Intelligence on Organizational Performance
Authors: El Ghazi Safae, Cherkaoui Mounia
Abstract:
Within companies, emotions have long been overlooked as key elements of successful management systems, seen instead as factors that disturb judgment, provoke reckless acts, or negatively affect decision-making. Management systems were influenced by the Taylorist image of the worker, which made work regular and plain and treated employees as executing machines. Recently, however, in a globalized economy characterized by a variety of uncertainties, emotions have proved to be useful, even necessary, elements for attaining high-level management. The work of Elton Mayo and Kurt Lewin revealed the importance of emotions, and since then emotions have attracted considerable attention. These studies have shown that emotions influence, directly or indirectly, many organizational processes, for example, the quality of interpersonal relationships, job satisfaction, absenteeism, stress, leadership, performance, and team commitment. Emotions have become fundamental and indispensable to individual performance and hence to management efficiency. The idea that a person's potential is associated with intellectual intelligence, measured by IQ, as the main factor of social, professional, and even sentimental success is the main assumption that needs to be questioned. The literature on emotional intelligence has made clear that success at work does not depend only on intellectual intelligence but also on other factors. Several studies investigating the impact of emotional intelligence on performance have shown that emotionally intelligent managers perform better, attain remarkable results, are able to achieve organizational objectives, influence the mood of their subordinates, and create a friendly work environment. An improvement in the emotional intelligence of managers is therefore linked to the professional development of the organization and not only to the personal development of the manager. In this context, it is worth questioning the importance of emotional intelligence. Does it impact organizational performance? What is the importance of emotional intelligence, and how does it impact organizational performance? The literature highlights that emotional intelligence is difficult to conceptualize and measure. Efforts to measure emotional intelligence have identified three prominent models: the ability model, the mixed model, and the trait model. The first treats emotional intelligence as a cognitive skill, the second mixes emotional skills with personality-related aspects, and the third is intertwined with personality traits. However, despite strong claims about the importance of emotional intelligence in the workplace, few studies have empirically examined its impact on organizational performance, partly because, even though the concept of performance is at the heart of all evaluation processes of companies and organizations, performance remains a multidimensional concept whose vagueness many authors emphasize. Given the above, this article provides an overview of the research related to emotional intelligence, focusing particularly on studies that investigated its impact on organizational performance, in order to contribute to the emotional intelligence literature, highlight its importance, and show how it impacts companies' performance.
Keywords: emotions, performance, intelligence, firms
Procedia PDF Downloads 108
184 Shear Strength Envelope Characteristics of Lime-Treated Clays
Authors: Mohammad Moridzadeh, Gholamreza Mesri
Abstract:
The effectiveness of lime treatment of soils has commonly been evaluated in terms of improved workability and increased undrained unconfined compressive strength in connection with road and airfield construction. The most common method of strength measurement has been the unconfined compression test. However, if the objective of lime treatment is to improve the long-term stability of first-time or reactivated landslides in stiff clays and shales, permanent changes in the size and shape of clay particles must be realized to increase the drained frictional resistance. Lime-soil interactions that may produce less platy and larger soil particles begin and continue with time in the highly alkaline pH environment. In this research, pH measurements are used to monitor the chemical environment and the progress of the reactions. Atterberg limits are measured to identify changes in particle size and shape indirectly. Also, fully softened and residual strength measurements are used to examine the improvement in frictional resistance due to lime-soil interactions. The main variables are soil plasticity and mineralogy, lime content, water content, and curing period. The effect of lime on frictional resistance is examined using samples of clays with different mineralogy and characteristics, which may react with lime to various extents. Drained direct shear tests on reconstituted lime-treated clay specimens with various properties have been performed to measure the fully softened shear strength. To measure the residual shear strength, drained multiple-reversal direct shear tests on precut specimens were conducted; in this way, soil particles are oriented along the direction of shearing to the maximum possible extent and provide the minimum frictional resistance. This condition is applicable to reactivated landslides and to part of first-time landslides. The Brenna clay, the highly plastic lacustrine clay of Lake Agassiz that causes slope instability along the banks of the Red River, is one of the soil samples used in this study. The Brenna Formation, characterized as a uniform, soft to firm, dark grey glaciolacustrine clay with little or no visible stratification, is full of slickensided surfaces. The major source of sediment for the Brenna Formation was the highly plastic montmorillonitic Pierre Shale bedrock. The other soil used in this study, the Beaumont clay, is one of the main sources of slope instability in the Harris County Flood Control District (HCFCD). The shear strengths of the untreated and treated clays were obtained under various normal pressures to evaluate the nonlinearity of the shear strength envelope.
Keywords: Brenna clay, friction resistance, lime treatment, residual
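In such direct shear programmes, the frictional resistance at each normal stress is usually summarised as a secant friction angle, which also makes the nonlinearity of the strength envelope easy to see. A minimal sketch of that reduction follows, with hypothetical shear-box results (not data from this study) and assuming zero effective cohesion.

```python
import math

def secant_friction_angle(shear_stress_kpa, normal_stress_kpa):
    """Secant friction angle (degrees), assuming c' = 0: phi' = atan(tau / sigma'_n)."""
    return math.degrees(math.atan(shear_stress_kpa / normal_stress_kpa))

# Hypothetical fully softened and residual strengths at three normal stresses.
tests = [  # (normal stress kPa, fully softened tau kPa, residual tau kPa)
    (50, 32, 12),
    (100, 58, 22),
    (200, 105, 40),
]
for sn, tau_fs, tau_r in tests:
    print(f"sigma'n = {sn:3d} kPa: phi'_fs = {secant_friction_angle(tau_fs, sn):.1f} deg, "
          f"phi'_r = {secant_friction_angle(tau_r, sn):.1f} deg")
```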
Procedia PDF Downloads 159
183 Quantum Mechanics as a Limiting Case of Relativistic Mechanics
Authors: Ahmad Almajid
Abstract:
The idea of unifying quantum mechanics with general relativity is still a dream for many researchers, as physics offers only two paths: Einstein's path, which is mainly based on particle mechanics, and the path of Paul Dirac and others, which is based on wave mechanics. The incompatibility of the two approaches is due to the radical difference in their initial assumptions and in the mathematical nature of each approach. Logical thinking in modern physics leads us to two problems: (1) in quantum mechanics, despite its success, the problem of measurement and the problem of interpreting the wave function remain obscure; (2) in special relativity, despite the success of the equivalence of rest mass and energy, the fact that the energy becomes infinite at the speed of light is contrary to logic, because the speed of light is not infinite and the mass of the particle is not infinite either. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light, and in order to solve these problems, one must understand well how to move from relativistic mechanics to quantum mechanics, or rather, how to unify them in a way different from Dirac's method, in order to go along with God or Nature, since, as Einstein said, "God doesn't play dice." From de Broglie's hypothesis about wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, we arrive at new equations in quantum mechanics. In this paper, the two problems in modern physics mentioned above are solved; it can be said that this new approach to quantum mechanics will enable us to unify it with general relativity quite simply. If experiments prove the validity of the results of this research, we will be able in the future to transport matter at speeds close to the speed of light. Finally, this research yielded three important results: (1) the Lorentz quantum factor; (2) Planck energy as a limiting case of Einstein energy; (3) real quantum mechanics, in which new equations for quantum mechanics match and exceed Dirac's equations; these equations have been reached in a way completely different from Dirac's method. These equations show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax: while Bohr suggested that unobserved particles are in a probabilistic state, Einstein made his famous claim ("God does not play dice"). Thus, Einstein was right, especially when he did not accept the principle of indeterminacy in quantum theory, although experiments support quantum mechanics. However, the results of our research indicate that God really does not play dice; when the electron disappears, it turns into amicable particles or an elastic medium, according to the above equations. Likewise, Bohr was also right when he indicated that there must be a science like quantum mechanics to monitor and study the motion of subatomic particles, but the picture in front of him was blurry and unclear, so he resorted to the probabilistic interpretation.
Keywords: lorentz quantum factor, new, planck's energy as a limiting case of einstein's energy, real quantum mechanics, new equations for quantum mechanics
Procedia PDF Downloads 79
182 Aerosol Characterization in a Coastal Urban Area in Rimini, Italy
Authors: Dimitri Bacco, Arianna Trentini, Fabiana Scotto, Flavio Rovere, Daniele Foscoli, Cinzia Para, Paolo Veronesi, Silvia Sandrini, Claudia Zigola, Michela Comandini, Marilena Montalti, Marco Zamagni, Vanes Poluzzi
Abstract:
The Po Valley, in the north of Italy, is one of the most polluted areas in Europe. The air quality of the area is linked not only to anthropogenic activities but also to its geographical characteristics and stagnant weather conditions, with frequent inversions, especially in the cold season. Even the coastal areas present high values of particulate matter (PM10 and PM2.5) because the area enclosed between the Adriatic Sea and the Apennines does not favor the dispersion of air pollutants. The aim of the present work was to identify the main sources of particulate matter in Rimini, a tourist city in northern Italy. Two sampling campaigns were carried out in 2018, one in winter (60 days) and one in summer (30 days), at four sites: an urban background, a city hotspot, a suburban background, and a rural background. The samples were characterized in terms of the ionic composition of the particulate matter and of the main anhydrosugars, in particular levoglucosan, a marker of biomass burning, because one of the most important anthropogenic sources in the area, both in winter and, surprisingly, even in summer, is biomass burning. Furthermore, three sampling points were chosen in order to maximize the contribution of a specific biomass source: a point in a residential area (domestic cooking and domestic heating), a point in the agricultural area (weed fires), and a point in the tourist area (restaurant cooking). At these sites, the analyses were enriched with the quantification of the carbonaceous component (organic and elemental carbon) and with measurement of the particle number concentration and aerosol size distribution (6-600 nm). The results showed a very significant impact of biomass combustion due to domestic heating in the winter period, together with many intense peaks attributable to episodic wood fires. In the summer season, an appreciable signal linked to biomass combustion was also measured, although much less intense than in winter, attributable to domestic cooking activities. A further interesting result was the verification of the total absence of a sea-salt contribution in the finer particulate fraction (PM2.5), whereas in PM10 the contribution becomes appreciable only under particular wind conditions (strong wind from the north and north-east). Finally, it is interesting to note that in a small town like Rimini, the traffic source in summer appears to be even more relevant than that measured in a much larger city (Bologna), owing to tourism.
Keywords: aerosol, biomass burning, seacoast, urban area
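Levoglucosan is usually exploited quantitatively by scaling its concentration to the organic carbon attributable to biomass burning with a literature-derived emission ratio. The sketch below only illustrates that style of calculation; the conversion factor and the concentrations are assumptions, not values reported by this study, and published levoglucosan-to-OC ratios vary widely with fuel and combustion conditions.

```python
def biomass_burning_oc(levoglucosan_ug_m3, oc_to_levo_ratio=7.35):
    """Estimate OC attributable to biomass burning as levoglucosan * ratio.
    The ratio 7.35 is a commonly cited literature value and is an assumption here."""
    return levoglucosan_ug_m3 * oc_to_levo_ratio

# Hypothetical winter and summer levoglucosan levels and total OC (ug/m3).
samples = {"winter": (0.60, 8.0), "summer": (0.05, 3.0)}
for season, (levo, total_oc) in samples.items():
    oc_bb = biomass_burning_oc(levo)
    print(f"{season}: OC_bb ~ {oc_bb:.2f} ug/m3 ({100 * oc_bb / total_oc:.0f}% of OC)")
```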
Procedia PDF Downloads 129
181 Verification of Low-Dose Diagnostic X-Ray as a Tool for Relating Vital Internal Organ Structures to External Body Armour Coverage
Authors: Natalie A. Sterk, Bernard van Vuuren, Petrie Marais, Bongani Mthombeni
Abstract:
Injuries to the internal structures of the thorax and abdomen remain a leading cause of death among soldiers. Body armour is a standard-issue piece of military equipment designed to protect the vital organs against ballistic and stab threats. When configured for maximum protection, the excessive weight and size of the armour may limit soldier mobility and increase physical fatigue and discomfort. Providing soldiers with more armour than necessary may, therefore, hinder their ability to react rapidly in life-threatening situations. The capability to determine the optimal trade-off between the amount of essential anatomical coverage and the hindrance to soldier performance may significantly enhance the design of armour systems. The current study aimed to develop and pilot a methodology for relating internal anatomical structures to actual armour plate coverage in real time using low-dose diagnostic X-ray scanning. Several pilot scanning sessions were held at the Lodox Systems (Pty) Ltd head office in South Africa. Testing involved using the Lodox eXero-dr to scan dummy trunk rigs at various measurement angles and heights, as well as human participants wearing correctly fitted body armour while positioned in supine, prone shooting, seated, and kneeling shooting postures. The sizing and metrics obtained from the Lodox eXero-dr were then verified against a board with known dimensions. The results indicated that the low-dose diagnostic X-ray can clearly identify the vital internal structures of the aortic arch, heart, and lungs in relation to the position of the external armour plates. Further testing is still required in order to fully and accurately identify the inferior liver boundary, inferior vena cava, and spleen. The scans produced in the supine, prone, and seated postures provided superior image quality compared with the kneeling posture. The distances from the X-ray source and detector to the object must be standardised to control for possible magnification changes and to allow comparisons. To account for this, specific scanning heights and angles were identified to allow parallel scanning of the relevant areas. The low-dose diagnostic X-ray provides a non-invasive, safe, and rapid technique for relating vital internal structures to external structures. This capability can be used for the re-evaluation of the anatomical coverage required for essential protection while optimising armour design and fit for soldier performance.
Keywords: body armour, low-dose diagnostic X-ray, scanning, vital organ coverage
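The magnification concern noted above follows from simple projection geometry: an object between the source and the detector is imaged enlarged by the ratio of the source-detector distance to the source-object distance. A small sketch of that correction follows; the distances used are illustrative and do not describe the Lodox scanner geometry.

```python
def true_size(measured_size_mm, source_detector_mm, source_object_mm):
    """Correct a projected measurement for geometric magnification:
    M = SDD / SOD, true size = measured size / M."""
    magnification = source_detector_mm / source_object_mm
    return measured_size_mm / magnification

# Illustrative geometry: organ plane 150 mm above the detector, SDD of 1000 mm.
sdd = 1000.0
sod = sdd - 150.0
print(f"A 120 mm projected width corresponds to ~{true_size(120, sdd, sod):.1f} mm")
```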
Procedia PDF Downloads 124
180 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model
Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle
Abstract:
In the workplace, the exposure level to airborne contaminants should be evaluated because of health and safety issues. This can be done by numerical models or experimental measurements, but the numerical approach is useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness in predicting inhalation exposure in many situations. However, since the WMR is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two air change rates (air changes per hour, ACH). The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, the particles were generated until the steady-state condition was reached (emission period); then the generation was stopped, and concentration measurements continued until the background concentration was reached (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0, and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively. The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared in the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model. However, there is still a difference between the actual value and the predicted value. In the emission period, the modified WMR results closely follow the experimental data. However, the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the deposition rate from the mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant and will affect the airborne concentration in occupational settings, and it should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model
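The modification described above amounts to adding a first-order deposition loss term to the standard well-mixed room mass balance, dC/dt = G/V - (ACH + k_d)·C, which has a simple closed form for both the emission and decay periods. A minimal sketch under those assumptions follows; only the 0.512 m³ chamber volume and the ACH of 3.0 come from the abstract, while the generation rate and deposition rate constant are illustrative.

```python
import math

V = 0.512        # chamber volume, m^3 (from the abstract)
ACH = 3.0        # air changes per hour (one of the tested conditions)
k_dep = 0.5      # assumed first-order deposition rate constant, 1/h
G = 2.0e5        # assumed particle generation rate, particles/h

loss = ACH + k_dep                    # total first-order loss rate, 1/h

def emission_concentration(t_h, c0=0.0):
    """C(t) during generation: approaches the steady state G / (V * loss)."""
    c_ss = G / (V * loss)
    return c_ss + (c0 - c_ss) * math.exp(-loss * t_h)

def decay_concentration(t_h, c_start):
    """C(t) after generation stops: pure exponential decay at the total loss rate."""
    return c_start * math.exp(-loss * t_h)

c_end_emission = emission_concentration(2.0)          # assumed 2 h emission period
for t in (0.0, 0.5, 1.0, 2.0):
    print(f"decay t = {t:.1f} h: C = {decay_concentration(t, c_end_emission):.3g} #/m^3")
```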
Procedia PDF Downloads 103