Search results for: skeletal measurements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2986

496 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis

Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio

Abstract:

Frequent measurements of product stream quality create a data overload that becomes increasingly difficult to handle. In the current study, plant history data with multiple variables were successfully treated by principal component analysis to detect abnormal process behavior, particularly in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by an industrial on-stream x-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-dimensional multivariate model based on the principal component analysis algorithm was constructed. Normal operating conditions were defined through control limits assigned to the squared score values on the x-axis and to the residual values on the y-axis. 80 percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent. Model testing showed successful application of the control limits to detect abnormal behavior of the copper solvent extraction process as early warnings. Compared to the conventional technique of analyzing one variable at a time, the proposed model allows on-line detection of a process failure using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring of both the process streams’ composition and the final product quality. Defining the normal operating conditions of the process supports reliable decision making in a process control room. Thus, industrial x-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. It is recommended that the additional multivariate process control and monitoring procedures be applied separately for the major components and for the impurities. Principal component analysis may be utilized not only to control the content of major elements in process streams, but also for continuous monitoring of the plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust, and cheap application with automation abilities.
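
The monitoring logic described here can be sketched in a few lines. The following is a minimal illustration, not the authors' code: a two-component PCA model is fitted on 80% of a mean-centered, normalized concentration matrix, and new samples are flagged when either the squared-score statistic (Hotelling's T²) or the squared residual (Q) exceeds its control limit. The data, component count, and limits are placeholders.

```python
# Minimal sketch of PCA-based process monitoring with T^2 and Q limits.
# X rows = time points, columns = metal concentrations (placeholder data).
import numpy as np

def fit_pca_monitor(X_train, n_components=2):
    # Mean-center and normalize (autoscale) the training data
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    Z = (X_train - mu) / sigma
    # Principal components via SVD
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                        # loadings
    lam = (S[:n_components] ** 2) / (len(Z) - 1)   # score variances
    return mu, sigma, P, lam

def monitor(x, mu, sigma, P, lam, t2_limit, q_limit):
    z = (x - mu) / sigma
    t = z @ P                                # scores
    t2 = np.sum(t**2 / lam)                  # Hotelling's T^2 (squared scores)
    resid = z - t @ P.T                      # part not explained by the model
    q = resid @ resid                        # Q statistic (squared residual)
    return t2 > t2_limit or q > q_limit      # early warning if a limit is exceeded

# 80/20 split as in the study; random data stands in for plant history
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
split = int(0.8 * len(X))
mu, sigma, P, lam = fit_pca_monitor(X[:split])
flags = [monitor(x, mu, sigma, P, lam, t2_limit=10.0, q_limit=8.0) for x in X[split:]]
print(sum(flags), "early warnings out of", len(flags), "test samples")
```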

Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction

Procedia PDF Downloads 309
495 Characterizing Solid Glass in Bending, Torsion and Tension: High-Temperature Dynamic Mechanical Analysis up to 950 °C

Authors: Matthias Walluch, José Alberto Rodríguez, Christopher Giehl, Gunther Arnold, Daniela Ehgartner

Abstract:

Dynamic mechanical analysis (DMA) is a powerful method to characterize viscoelastic properties and phase transitions for a wide range of materials. It is often used to characterize polymers and their temperature-dependent behavior, including thermal transitions like the glass transition temperature Tg, via determination of storage and loss moduli in tension (Young’s modulus, E) and shear or torsion (shear modulus, G) or other testing modes. While production and application temperatures for polymers are often limited to several hundred degrees, the material properties of glasses usually require characterization at temperatures exceeding 600 °C. This contribution highlights a high-temperature setup for rotational and oscillatory rheometry as well as for DMA in different modes. The implemented standard convection oven enables the characterization of glass in different loading modes at temperatures up to 950 °C. Three-point bending, tension, and torsional measurements on different glasses, with E and G moduli as a function of frequency and temperature, are presented. Additional tests include superimposing several frequencies in a single temperature sweep (“multiwave”). This type of test considerably reduces the experiment time and allows evaluation of structural changes of the material and their frequency dependence. Furthermore, DMA in torsion and tension was performed to determine the complex Poisson’s ratio as a function of frequency and temperature within a single test definition. Tests were performed in a frequency range from 0.1 to 10 Hz and at temperatures up to the glass transition. While variations in the frequency did not reveal significant changes of the complex Poisson’s ratio of the glass, a monotonic increase of this parameter was observed with increasing temperature. This contribution outlines the possibilities of DMA in bending, tension, and torsion for an extended temperature range. It allows the precise mechanical characterization of material behavior from room temperature up to the glass transition and the softening temperature interval. Compared to other thermo-analytical methods, like differential scanning calorimetry (DSC), where mechanical stress is neglected, the frequency dependence links measurement results (e.g., relaxation times) to real applications.
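
For an isotropic, linear viscoelastic solid, the complex Poisson's ratio can be obtained from the tensile and torsional moduli measured in the same sweep via the standard isotropic relation below; the abstract does not spell out the exact formula used, so this is the commonly assumed one:

```latex
% Isotropic linear viscoelasticity assumed; not the paper's stated formula.
E^{*}(\omega,T) = 2\,G^{*}(\omega,T)\,\bigl(1+\nu^{*}(\omega,T)\bigr)
\quad\Longrightarrow\quad
\nu^{*}(\omega,T) = \frac{E^{*}(\omega,T)}{2\,G^{*}(\omega,T)} - 1
```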

Keywords: dynamic mechanical analysis, oscillatory rheometry, Poisson's ratio, solid glass, viscoelasticity

Procedia PDF Downloads 83
494 Effect of Male and Female Early Childhood Teachers' Educational Practices on Children's Social Adaptation

Authors: Therese Besnard

Abstract:

Internationally, in early childhood education (ECE), the great majority of teachers are women. Some groups believe that a greater male teacher presence in ECE would be beneficial for children, specifically for boys, as it could offer a positive male model. It is a common belief that children would benefit from being exposed to both male and female models. Some believe that women are naturally better suited than men to offer quality care to young children. Some authors argue that, after equivalent training, differences in educational practices are purely individual and do not depend on the teacher’s gender. Others believe that a greater male presence in ECE would increase the risk of pedophilia or child abuse. The few scientific studies in this area suggest that differences could exist between male and female ECE teachers, in particular when it comes to play, which is the mainstay of the ECE educational program. Male teachers describe themselves as being more playful and having a greater tendency to initiate physical and rough-and-tumble play than female teachers, who describe themselves as favoring games that are calmer and focused on social interaction. Observed directly, male teachers appear more actively engaged in play with children and propose more motor play than female teachers. Furthermore, children who have both male and female teachers for one year show fewer behavior difficulties than children with only female teachers. Despite this variety of viewpoints, we do not know whether the educational practices of male ECE teachers (emotional support, classroom organization, or instructional support) differ from those of female teachers, nor whether these practices are linked with children’s adaptation. This study compares the educational practices of 37 ECE teachers (57% male) and analyzes the link with children's social adaptation (n=221). Educational practices were assessed through observational measurements with the Classroom Assessment Scoring System (CLASS) in a natural class environment. Child social adaptation was assessed with the Social Competence and Behavior Evaluation (SCBE). Observational data reveal no differences between men and women on the CLASS scales. Results from multilevel model analyses suggest that the ability to propose good classroom organization and to give good instructional support is linked with better child social adaptation, and this holds for both male and female teachers. The results are discussed on the basis of their potential impact on future educational interventions.

Keywords: child social adaptation, early childhood education, educational practices, male teachers

Procedia PDF Downloads 373
493 Structure and Magnetic Properties of M-Type Sr-Hexaferrite with Ca, La Substitutions

Authors: Eun-Soo Lim, Young-Min Kang

Abstract:

M-type Sr-hexaferrite (SrFe₁₂O₁₉) has been studied over the past decades because it is the most utilized material in permanent magnets due to its low price, outstanding chemical stability, and appropriate hard magnetic properties. Many attempts have been made to improve the intrinsic magnetic properties of M-type Sr-hexaferrites (SrM), such as by improving the saturation magnetization (MS) and crystalline anisotropy through cation substitution. It is well proved that Ca-La-Co substitution is one of the most successful approaches, leading to a significant enhancement in the crystalline anisotropy without reducing MS; thus, Ca-La-Co-doped SrM has been commercialized in high-grade magnet products. In this research, the effects of separate doping of Ca and La into the SrM lattice were studied under the assumption that these elements could substitute at both the Fe and Sr sites. Hexaferrite samples of stoichiometric SrFe₁₂O₁₉ (SrM), Ca-substituted SrM with formulae Sr₁₋ₓCaₓFe₁₂Oₐ (x = 0.1, 0.2, 0.3, 0.4) and SrFe₁₂₋ₓCaₓOₐ (x = 0.1, 0.2, 0.3, 0.4), and La-substituted SrM, Sr₁₋ₓLaₓFe₁₂Oₐ (x = 0.1, 0.2, 0.3, 0.4) and SrFe₁₂₋ₓLaₓOₐ (x = 0.1, 0.2, 0.3, 0.4), were prepared by conventional solid-state reaction processes. X-ray diffraction (XRD) with a Cu Kα radiation source (λ = 0.154056 nm) was used for phase analysis. Microstructural observation was conducted with a field emission scanning electron microscope (FE-SEM). M-H measurements were performed using a vibrating sample magnetometer (VSM) at 300 K. An almost pure M-type phase could be obtained in all series of hexaferrites calcined at > 1250 °C. Small amounts of Fe₂O₃ phase were detected in the XRD patterns of the Sr₁₋ₓCaₓFe₁₂Oₐ (x = 0.2, 0.3, 0.4) and Sr₁₋ₓLaₓFe₁₂Oₐ (x = 0.1, 0.2, 0.3, 0.4) samples. Also, small amounts of unidentified secondary phases, without the Fe₂O₃ phase, were found in the SrFe₁₂₋ₓCaₓOₐ (x = 0.4) and SrFe₁₂₋ₓLaₓOₐ (x = 0.3, 0.4) samples. Although Ca substitution (x) into the SrM structure did not exhibit a clear tendency in the cell parameter change in either series of samples, Sr₁₋ₓCaₓFe₁₂Oₐ and SrFe₁₂₋ₓCaₓOₐ, the cell volume slightly decreased with Ca doping in the Sr₁₋ₓCaₓFe₁₂Oₐ samples and increased in the SrFe₁₂₋ₓCaₓOₐ samples. Considering the relative ion sizes of Sr²⁺ (0.113 nm), Ca²⁺ (0.099 nm), and Fe³⁺ (0.064 nm), these results imply that Ca substitutes at both the Sr and Fe sites in SrM. A clear tendency in the cell parameter change was observed in the case of La substitution into the Sr site of SrM (Sr₁₋ₓLaₓFe₁₂Oₐ): the cell volume decreased with increasing x, owing to the similar but smaller ion size of La³⁺ (0.106 nm) compared with Sr²⁺. In the case of SrFe₁₂₋ₓLaₓOₐ, the cell volume first decreased at x = 0.1 and then remained almost constant as x increased from 0.2 to 0.4. These results indicate that La substitutes only at the Sr site in the SrM structure. In addition, the microstructure and magnetic properties of these samples, and the correlations between them, will be presented.
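
Since the M-type structure is hexagonal, the cell volumes compared above follow directly from the refined lattice parameters via V = (√3/2)·a²·c. A small sketch of that arithmetic, with placeholder parameters rather than the paper's refined values:

```python
# Comparing hexagonal unit-cell volumes across doping levels.
# Lattice parameters below are illustrative SrM-like values, not the paper's data.
import math

def hex_cell_volume(a_nm, c_nm):
    # Volume of a hexagonal unit cell: V = (sqrt(3)/2) * a^2 * c
    return (math.sqrt(3) / 2) * a_nm**2 * c_nm

undoped = hex_cell_volume(0.5884, 2.3050)
doped   = hex_cell_volume(0.5880, 2.3020)
print(f"relative volume change: {(doped - undoped) / undoped:.4%}")
```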

Keywords: M-type hexaferrite, substitution, cell parameter, magnetic properties

Procedia PDF Downloads 211
492 GC-MS-Based Untargeted Metabolomics to Study the Metabolism of Pectobacterium Strains

Authors: Magdalena Smoktunowicz, Renata Wawrzyniak, Malgorzata Waleron, Krzysztof Waleron

Abstract:

Pectobacterium spp. were previously classified in the genus Erwinia, founded in 1917 to unite all Gram-negative, fermentative, non-sporulating, peritrichously flagellated plant-pathogenic bacteria known at that time. Following the work of Waldee (1945), the Approved Lists of Bacterial Names, and the bacteriology manuals of 1980, they were described under either the genus Erwinia or Pectobacterium. The genus Pectobacterium was formally described in 1998, based on 265 Pectobacterium strains. Currently, there are 21 species of Pectobacterium, including Pectobacterium betavasculorum, described in 2003, which causes soft rot on sugar beet tubers. Biochemical experiments have shown that these bacteria are Gram-negative, catalase-positive, oxidase-negative, and facultatively anaerobic, utilize gelatin, and cause symptoms of soft rot on potato and sugar beet tubers. The mere fact of growth on sugar beet may indicate a metabolism characteristic of this species alone. Metabolomics, broadly defined as the biology of metabolic systems, allows comprehensive measurements of metabolites. Metabolomics and genomics are complementary tools for the identification of metabolites and their reactions, and thus for the reconstruction of metabolic networks. The aim of this study was to apply GC-MS-based untargeted metabolomics to study the metabolism of P. betavasculorum under different growing conditions. The metabolomic profiles of the biomass and of the culture media were determined. For sample preparation, the following protocol was used: 900 µl of a methanol:chloroform:water mixture (10:3:1, v/v) was added to 900 µl of biomass from the bottom of the tube, and likewise to 900 µl of nutrient medium separated from the bacterial biomass. After centrifugation (13,000 × g, 15 min, 4 °C), 300 µL of the obtained supernatants were concentrated in a rotary vacuum concentrator and evaporated to dryness. Afterwards, a two-step derivatization procedure was performed before the GC-MS analyses. The obtained results were subjected to statistical calculations using both uni- and multivariate tests. The results were evaluated using the KEGG database to assess which metabolic pathways are activated, and which genes are responsible for them, during the metabolism of the substrates present in the growing environment. The observed metabolic changes, combined with biochemical and physiological tests, may enable pathway discovery, regulatory inference, and understanding of the homeostatic abilities of P. betavasculorum.
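
As a concrete illustration of the univariate part of such an analysis, the sketch below compares each metabolite between two growth conditions with Welch's t-tests and a Benjamini-Hochberg correction. The data, replicate counts, and thresholds are invented, and the abstract does not specify which particular tests were used:

```python
# Generic per-metabolite screening between two conditions with FDR control.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_metabolites = 50
cond_a = rng.normal(size=(6, n_metabolites))   # 6 replicates, condition A
cond_b = rng.normal(size=(6, n_metabolites))   # 6 replicates, condition B

# Welch's t-test per metabolite
p = np.array([stats.ttest_ind(cond_a[:, j], cond_b[:, j], equal_var=False).pvalue
              for j in range(n_metabolites)])

# Benjamini-Hochberg adjustment: q_(i) = p_(i) * m / i, made monotone
order = np.argsort(p)
ranked = p[order] * n_metabolites / (np.arange(n_metabolites) + 1)
q = np.minimum.accumulate(ranked[::-1])[::-1]
significant = order[q < 0.05]                  # indices of changed metabolites
print(len(significant), "metabolites pass q < 0.05")
```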

Keywords: GC-MS chromatography, metabolomics, metabolism, Pectobacterium strains, Pectobacterium betavasculorum

Procedia PDF Downloads 78
491 Interpretation of Two Indices for the Prediction of Cardiovascular Risk in Pediatric Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity and weight gain are associated with increased risk of developing cardiovascular diseases and the progression of liver fibrosis. The aspartate transaminase-to-platelet count ratio index (APRI) and the fibrosis-4 index (FIB-4) were primarily considered formulas capable of differentiating hepatitis from cirrhosis. Recently, they have found clinical use as measures of liver fibrosis and cardiovascular risk. However, their status in children has not yet been evaluated in detail. The aim of this study is to determine APRI and FIB-4 status in obese (OB) children and to compare them with values found in children with normal body mass index (N-BMI). A total of sixty-eight children examined in the outpatient clinics of the Pediatrics Department of Tekirdag Namik Kemal University Medical Faculty were included in the study. Two groups were constituted. The first group comprised thirty-five children with N-BMI, whose age- and sex-dependent BMI percentiles varied between 15 and 85. The second group comprised thirty-three OB children whose BMI percentile values were between 95 and 99. Anthropometric measurements and routine biochemical tests were performed. Using these parameters, values for the related indices, BMI, APRI, and FIB-4, were calculated. Appropriate statistical tests were used for the evaluation of the study data. The statistical significance level was accepted as p<0.05. In the OB group, the values found for APRI and FIB-4 were higher than those calculated for the N-BMI group. However, there was no statistically significant difference between the N-BMI and OB groups in terms of APRI and FIB-4. A similar pattern was detected for triglyceride (TRG) values. The correlation coefficient and degree of significance between APRI and FIB-4 were r=0.336 and p=0.065 in the N-BMI group. On the other hand, they were r=0.707 and p=0.001 in the OB group. Associations of these two indices with TRG showed that this parameter was strongly correlated (p<0.001) with both APRI and FIB-4 in the OB group, whereas no correlation was found in children with N-BMI. Triglycerides are associated with an increased risk of fatty liver, which can progress to severe clinical problems such as steatohepatitis, which can lead to liver fibrosis. Triglycerides are also an independent risk factor for cardiovascular disease. In conclusion, the lack of correlation between TRG and APRI as well as FIB-4 in children with N-BMI, along with the detection of strong correlations of TRG with these indices in OB children, indicates a possible onset of a tendency towards the development of fatty liver in OB children. This finding also points to a potential risk of cardiovascular pathologies in OB children. The nature of the difference between the APRI-FIB-4 correlations in the N-BMI and OB groups (no correlation versus high correlation) may indicate the importance of including the age and alanine transaminase parameters, in addition to AST and PLT, in the formula designed for FIB-4.
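
For reference, the two indices follow their published definitions (Wai et al., 2003 for APRI; Sterling et al., 2006 for FIB-4). A minimal sketch with illustrative values, not data from this study:

```python
# APRI and FIB-4 from routine laboratory values (published definitions).
import math

def apri(ast_u_l, ast_uln_u_l, platelets_10e9_l):
    # AST-to-platelet ratio index
    return (ast_u_l / ast_uln_u_l) * 100 / platelets_10e9_l

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    # Fibrosis-4 index: uses age and ALT in addition to AST and platelets
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

print(apri(ast_u_l=28, ast_uln_u_l=40, platelets_10e9_l=300))               # ~0.23
print(fib4(age_years=10, ast_u_l=28, alt_u_l=22, platelets_10e9_l=300))     # ~0.20
```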

Keywords: APRI, children, FIB-4, obesity, triglycerides

Procedia PDF Downloads 348
490 Maneuvering of a One-Degree-of-Freedom Articulated Vehicle: Modeling and Experimental Verification

Authors: Mauricio E. Cruz, Ilse Cervantes, Manuel J. Fabela

Abstract:

The evaluation of the maneuverability of road vehicles is generally carried out with specialized computer programs due to the advantages they offer over the experimental method. These programs are based on purely geometric considerations of the characteristics of the vehicles, such as the main dimensions, the location of the axles, and the points of articulation, without considering parameters such as the weight distribution and magnitude, tire properties, etc. In this paper, we address the problem of maneuverability of a semi-trailer truck navigating urban streets, maneuvering yards, and parking lots, using the Ackermann principle to propose a kinematic model with which, through geometric considerations, it is possible to determine the space necessary to maneuver safely. The model was experimentally validated by conducting maneuverability tests with an articulated vehicle. The measurements were made with a GPS that provides the position, trajectory, and speed of the vehicle; an inertial measurement unit (IMU) that measures the accelerations and angular speeds of the semi-trailer; and an instrumented steering wheel that measures the steering wheel angle, its angular velocity, and the torque applied to it. To obtain the steering angle of the tires, a parameterization of the complete travel of the steering wheel and its equivalent at the tires was carried out. For the tests, three different angles were selected, and three turns were made for each angle in both directions of rotation (left and right turn). The results showed that the proposed kinematic model achieved 95% accuracy for speeds below 5 km/h. The experiments revealed that tighter maneuvers significantly increased the space required and that the vehicle maneuverability was limited by the size of the semi-trailer. The maneuverability was also tested as a function of the vehicle load, and three different load levels were used: light, medium, and heavy. It was found that the internal turning radii also increased with the load, probably due to changes in the tires' adhesion to the pavement, since heavier loads had larger wheel-road contact surfaces. The load was found to be an important factor affecting the precision of the model (up to 30%) and should therefore be considered. The model obtained is expected to be used to improve maneuverability through a robust control system.
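
The kind of Ackermann-based kinematic model described can be sketched compactly. The following is an illustrative tractor/semi-trailer model with a hitch-on-rear-axle simplification and invented dimensions; it is not the authors' exact formulation:

```python
# Kinematic tractor/semi-trailer model (Ackermann steering, on-axle hitch).
import math

def step(state, v, delta, dt, L=3.8, Lt=8.0):
    """Advance the articulated-vehicle state by dt seconds.
    state = (x, y, theta, theta_t): tractor rear-axle position, tractor
    heading, trailer heading. v = speed [m/s], delta = steer angle [rad],
    L = tractor wheelbase [m], Lt = hitch-to-trailer-axle distance [m]."""
    x, y, th, tht = state
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
    th += v / L * math.tan(delta) * dt        # Ackermann steering
    tht += v / Lt * math.sin(th - tht) * dt   # trailer follows the hitch
    return (x, y, th, tht)

# Low-speed turn (kinematic models are accurate below ~5 km/h per the paper)
state = (0.0, 0.0, 0.0, 0.0)
for _ in range(600):                          # 60 s at dt = 0.1 s
    state = step(state, v=1.2, delta=math.radians(20), dt=0.1)
# The swept path between the tractor and trailer trajectories gives the
# space necessary to maneuver safely.
print(state)
```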

Keywords: articulated vehicle, experimental validation, kinematic model, maneuverability, semi-trailer truck

Procedia PDF Downloads 117
489 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables that depend on ground behavior are required. The random set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the random set method, with a relatively small number of simulations compared to fully probabilistic methods, smooth extremes of the system responses are obtained. The random set approach has therefore been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables, and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the belief and plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared to the in situ measurements, and good agreement was observed. The comparison also showed that the random set finite element method is suitable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
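
A schematic sketch of this procedure is given below: each input variable carries interval-valued focal elements with probability assignments, every combination of focal elements is propagated through the model via its interval corners, and the resulting masses are accumulated into belief and plausibility bounds on the response. The "model" here is a monotonic stand-in for the finite element calculation, and all numbers are illustrative:

```python
# Random set propagation: focal elements -> belief/plausibility bounds.
from itertools import product

# Two input variables, each with two focal elements: (lower, upper, mass)
friction = [(28.0, 34.0, 0.6), (30.0, 38.0, 0.4)]   # e.g. friction angle [deg]
modulus  = [(40.0, 80.0, 0.5), (60.0, 120.0, 0.5)]  # e.g. soil modulus [MPa]

def model(phi, E):
    # Placeholder for the FE model: horizontal displacement of the top point [cm]
    return 500.0 / E + 0.3 * (40.0 - phi)

pairs = []
for (f_lo, f_hi, m1), (e_lo, e_hi, m2) in product(friction, modulus):
    # Evaluate the model at all interval corners; for a monotonic model the
    # min and max bound the response for this combination of focal elements.
    corners = [model(phi, E) for phi in (f_lo, f_hi) for E in (e_lo, e_hi)]
    pairs.append((min(corners), max(corners), m1 * m2))

threshold = 10.0                                     # allowable displacement [cm]
bel = sum(m for lo, hi, m in pairs if hi <= threshold)  # certainly satisfactory
pl  = sum(m for lo, hi, m in pairs if lo <= threshold)  # possibly satisfactory
print(f"P(failure) between {1 - pl:.2f} and {1 - bel:.2f}")
```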

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 268
488 Simultaneous Measurement of Wave Pressure and Wind Speed with the Specific Instrument and the Unit of Measurement Description

Authors: Branimir Jurun, Elza Jurun

Abstract:

The focus of this paper is the description of an instrument called 'Quattuor 45' and the definition of wave pressure measurement. Special attention is given to the measurement of wave pressure created by increasing wind speed, obtained with the 'Quattuor 45' in the investigated area. The study begins with theoretical considerations and numerous up-to-date investigations related to waves approaching the coast. The detailed schematic view of the instrument is enriched with ground-plan and side-view pictures. Horizontal stability of the instrument is achieved by a mooring that relies on two concrete blocks. Vertical wave peak monitoring is ensured by one float above the instrument. The synthesis of horizontal stability and vertical wave peak monitoring makes it possible to create a representative database for wave pressure measurement. The instrument 'Quattuor 45' is named after the way its database is acquired. Namely, the electronic part of the instrument consists of the main 'Arduino' chip, its memory, four load cells with the appropriate modules, and a wind speed sensor (anemometer). The 'Arduino' chip is programmed to store two readings from each load cell and two readings from the anemometer on an SD card every second. The next part of the research is dedicated to data processing. All measured results are stored automatically in the database, after which detailed processing is carried out in MS Excel. The result of the wave pressure measurement is expressed in the measurement unit kN/m². This paper also suggests a graphical presentation of the results as a multi-line graph. The wave pressure is presented on the left vertical axis, while the wind speed is shown on the right vertical axis. The time of measurement is displayed on the horizontal axis. The paper proposes an algorithm for wind speed measurements, showing the results for two characteristic winds in the Adriatic Sea, called 'Bura' and 'Jugo'. The first of them is a northern wind that reaches high speeds, causing low and extremely steep waves, where the wave pressure is relatively weak. On the other hand, the southern wind 'Jugo' has a lower speed than the northern wind, but due to its constant duration and sustained speed, it causes extremely long and high waves that produce extremely high wave pressure.
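
The data reduction implied by the kN/m² unit can be sketched as follows: the four load-cell forces are summed and divided by the area of the sensing surface. The plate area and readings below are assumptions for illustration; the instrument's actual geometry is not specified here:

```python
# Wave pressure [kN/m^2] from four load-cell forces over an assumed plate area.
def wave_pressure_kn_m2(load_cell_newtons, plate_area_m2=0.45 * 0.45):
    total_force_kn = sum(load_cell_newtons) / 1000.0
    return total_force_kn / plate_area_m2

# One second of logging: one reading per load cell [N], values invented
sample = [112.0, 118.0, 95.0, 101.0]
print(f"{wave_pressure_kn_m2(sample):.2f} kN/m^2")
```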

Keywords: instrument, measuring unit, waves pressure metering, wind seed measurement

Procedia PDF Downloads 197
487 Weight Loss and Symptom Improvement in Women with Secondary Lymphedema Using Semaglutide

Authors: Shivani Thakur, Jasmin Dominguez Cervantes, Ahmed Zabiba, Fatima Zabiba, Sandhini Agarwal, Kamalpreet Kaur, Hussein Maatouk, Shae Chand, Omar Madriz, Tiffany Huang, Saloni Bansal

Abstract:

The prevalence of lymphedema in women in rural communities highlights the importance of developing effective treatment and prevention methods. Subjects with secondary lymphedema in California’s Central Valley were surveyed at 6 surgical clinics to assess demographics and symptoms of lymphedema. Additionally, subjects on semaglutide treatment for obesity and/or T2DM were monitored for their diabetes management, weight loss progress, and lymphedema symptoms, compared to subjects who were not treated with semaglutide. The subjects were followed for 12 months. Subjects treated with semaglutide completed pre-treatment questionnaires and follow-up post-treatment questionnaires at 3, 6, 9, and 12 months, along with medical assessment. The untreated subjects completed similar questionnaires. The questionnaires investigated subjective feelings regarding lymphedema symptoms and management using a Likert scale; quantitative leg measurements were collected, and blood work was reviewed at these appointments. Paired difference t-tests, chi-squared tests, and independent sample t-tests were performed. 50 subjects, aged 18-75 years, completed the surveys evaluating secondary lymphedema: 90% female, 69% Hispanic, 45% Spanish speaking, 42% disabled, 57% employed, 54% with an income below 30 thousand dollars, and an average BMI of 40. Both treatment and non-treatment groups noted that the most common symptoms were leg swelling (x̄=3.2, SD=1.3), leg pain (x̄=3.2, SD=1.6), loss of daily function (x̄=3, SD=1.4), and negative body image (x̄=4.4, SD=0.54). Subjects with >3 months of semaglutide treatment, compared to the untreated group, demonstrated the following: 55% of subjects in the treated group had a 10% weight loss vs 3% in the untreated group (average BMI reduction of 11% vs 2.5% untreated, p<0.05), and improved subjective feelings about their lymphedema symptoms: leg swelling (x̄=2.4, SD=0.45 vs x̄=3.2, SD=1.3, p<0.05), leg pain (x̄=2.2, SD=0.45 vs x̄=3.2, SD=1.6, p<0.05), and heaviness (x̄=2.2, SD=0.45 vs x̄=3, SD=1.56, p<0.05). Improvement in diabetes management was demonstrated by an average 0.9% decrease in A1C values, compared to 0.1% in the untreated group, p<0.05. In comparison to untreated subjects, treated subjects on semaglutide showed a 6 cm decrease in the circumference of the leg, knee, calf, and ankle, compared to 2 cm in untreated subjects, p<0.05. Semaglutide was shown to significantly improve weight loss, T2DM management, leg circumference, and the functional, physical, and psychosocial symptoms of secondary lymphedema.

Keywords: diabetes, secondary lymphedema, semaglutide, obesity

Procedia PDF Downloads 61
486 Forced-Choice Measurement Models of Behavioural, Social, and Emotional Skills: Theory, Research, and Development

Authors: Richard Roberts, Anna Kravtcova

Abstract:

Introduction: The realisation that personality can change over the course of a lifetime has led to a new companion model to the Big Five: the behavioural, emotional, and social skills approach (BESSA). BESSA hypothesizes that this set of skills represents how the individual is thinking, feeling, and behaving when the situation calls for it, as opposed to traits, which represent how someone tends to think, feel, and behave averaged across situations. The five major skill domains share parallels with the Big Five Factor (BFF) model: creativity and innovation (openness), self-management (conscientiousness), social engagement (extraversion), cooperation (agreeableness), and emotional resilience (emotional stability) skills. We point to noteworthy limitations in the current operationalisation of BESSA skills (i.e., via Likert-type items) and offer a different measurement approach: forced choice. Method: In this forced-choice paradigm, individuals were given three skill items (e.g., managing my time) and asked to select the one they believed they were “best at” and the one they were “worst at”. Thurstonian IRT models allow these responses to be placed on a normative scale. Two multivariate studies (N = 1178) were conducted with a 22-item forced-choice version of the BESSA, a published measure of the BFF, and various criteria. Findings: Confirmatory factor analysis of the forced-choice assessment showed acceptable model fit (RMSEA < 0.06), while reliability estimates were reasonable (around 0.70 for each construct). Convergent validity evidence was as predicted (correlations between 0.40 and 0.60 for corresponding BFF and BESSA constructs). Notable was the extent to which the forced-choice BESSA assessment improved upon test-criterion relationships over and above the BFF. For example, typical regression models find BFF personality accounting for 25% of the variance in life satisfaction scores; both studies showed incremental gains over the BFF exceeding 6% (i.e., BFF and BESSA together accounted for over 31% of the variance in both studies). Discussion: Forced-choice measurement models offer the promise of creating equated test forms that may unequivocally measure skill gains and are less prone to fakability and reference-bias effects. Implications for practitioners are discussed, especially those interested in selection, succession planning, and training and development. We also discuss how the forced-choice method can be applied to other constructs like emotional immunity, cross-cultural competence, and self-estimates of cognitive ability.
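
The incremental-validity comparison quoted above (25% of variance for the BFF alone versus over 31% with BESSA added) amounts to a hierarchical regression with an R² increment. A minimal sketch on simulated data, showing only the analysis logic:

```python
# Hierarchical regression: R^2 for BFF alone vs. BFF + BESSA (simulated data).
import numpy as np

rng = np.random.default_rng(2)
n = 500
bff = rng.normal(size=(n, 5))                    # Big Five trait scores
bessa = 0.5 * bff + rng.normal(size=(n, 5))      # correlated skill scores
y = bff @ np.full(5, 0.3) + bessa @ np.full(5, 0.2) + rng.normal(size=n)

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_bff = r_squared(bff, y)
r2_both = r_squared(np.hstack([bff, bessa]), y)
print(f"BFF alone: {r2_bff:.3f}, BFF + BESSA: {r2_both:.3f}, "
      f"increment: {r2_both - r2_bff:.3f}")
```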

Keywords: Big Five, forced-choice method, BFF, methods of measurements

Procedia PDF Downloads 94
485 Effective Apixaban Clearance with Cytosorb Extracorporeal Hemoadsorption

Authors: Klazina T. Havinga, Hilde R. H. de Geus

Abstract:

Introduction: Pre-operative coagulation management of patients prescribed Apixaban, a new oral anticoagulant (a factor Xa inhibitor), is difficult, especially when chronic kidney disease (CKD) causes drug overdose. Apixaban is not dialyzable due to its high level of protein binding. An antidote, Andexanet α, is available but expensive and has an unfavorably short half-life. We report the successful extracorporeal removal of Apixaban prior to emergency surgery with the CytoSorb® hemoadsorption device. Methods: An 89-year-old woman with CKD, with an Apixaban prescription for atrial fibrillation, presented at the ER with traumatic rib fractures, a flail chest, and an unstable spinal fracture (T12) for which emergency surgery was indicated. However, due to very high Apixaban levels, this surgery had to be postponed. Based on the Apixaban-specific anti-factor Xa activity (AFXaA) measurements at admission and 10 hours later, complete clearance was expected after 48 hours. In order to enhance Apixaban removal, reduce the time to operation, and thereby reduce pulmonary complications, CRRT with a CytoSorb® cartridge was initiated. AFXaA was measured frequently, as a substitute for Apixaban drug concentrations, pre- and post-adsorber, in order to calculate the adsorber-related clearance. Results: The admission AFXaA concentration, as a substitute for the Apixaban drug level, was 218 ng/ml, which decreased to 157 ng/ml after ten hours. Due to sustained anticoagulation effects, surgery was again postponed. However, the AFXaA levels decreased quickly to sub-therapeutic levels once CRRT (Multifiltrate Pro, Fresenius Medical Care; blood flow 200 ml/min, dialysate flow 4000 ml/h, prescribed renal dose 51 ml/kg/h) with a CytoSorb® cartridge connected in series into the circuit was initiated (within 5 hours). The adsorber-related (indirect) Apixaban clearance was calculated every half hour (Cl = Qe × (AFXaA_pre − AFXaA_post) / AFXaA_pre, with Qe the plasma flow rate calculated with Ht = 0.38 and a system blood flow rate of 200 ml/min): 100 ml/min, 72 ml/min, and 57 ml/min. Although, as expected, the adsorber-related clearance decreased quickly due to saturation of the beads, the reduction rate achieved still resulted in a very rapid decrease in AFXaA levels. Surgery was ordered and was possible within 5 hours after CytoSorb initiation. Conclusion: The CytoSorb® hemoadsorption device enabled rapid correction of Apixaban-associated anticoagulation.
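
The clearance formula quoted in the abstract can be restated explicitly. In the sketch below, the blood flow (200 ml/min) and hematocrit (0.38) follow the case description, while the pre/post-adsorber AFXaA pairs are illustrative values chosen to reproduce the reported 100, 72, and 57 ml/min:

```python
# Adsorber-related clearance: Cl = Qe * (pre - post) / pre.
def adsorber_clearance(afxaa_pre, afxaa_post, blood_flow_ml_min=200.0, hct=0.38):
    qe = blood_flow_ml_min * (1.0 - hct)          # plasma flow rate [ml/min]
    return qe * (afxaa_pre - afxaa_post) / afxaa_pre

# Extraction falls as the sorbent beads saturate, so clearance declines:
for pre, post in [(150.0, 29.0), (120.0, 50.0), (100.0, 54.0)]:
    print(f"{adsorber_clearance(pre, post):.0f} ml/min")
```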

Keywords: Apixaban, CytoSorb, emergency surgery, Hemoadsorption

Procedia PDF Downloads 155
484 Hybrid Materials Obtained via the Sol-Gel Route, by the Action of Tetraethylorthosilicate with 1,3,4-Thiadiazole 2,5-Bifunctional Compounds

Authors: Afifa Hafidh, Fathi Touati, Ahmed Hichem Hamzaoui, Sayda Somrani

Abstract:

The objective of the present study was to synthesize and characterize silica hybrid materials using the sol-gel technique and to investigate their properties. Silica materials were successfully fabricated using various bifunctional 1,3,4-thiadiazoles and tetraethoxysilane (TEOS) as co-precursors via a facile one-pot sol-gel pathway. TEOS was introduced at room temperature with 1,3,4-thiadiazole 2,5-difunctional adducts, in ethanol as solvent and using HCl as catalyst. The sol-gel process led to the formation of monolithic, coloured, transparent gels. TEOS was used as the principal network-forming agent. The incorporation of the 1,3,4-thiadiazole molecules was realized by attachment of the latter onto the silica matrix. This allowed covalent linkage between the organic and inorganic phases and led to the formation of Si-N and Si-S bonds. The prepared hybrid materials were characterized by Fourier transform infrared spectroscopy, ²⁹Si and ¹³C NMR, scanning electron microscopy, and nitrogen adsorption-desorption measurements. The optical and magnetic properties of the hybrids were studied by ultraviolet-visible spectroscopy and electron paramagnetic resonance, respectively. It was shown in this work that the heterocyclic moieties were successfully attached to the hybrid skeleton. The formation of the Si network composed of cyclic units (Q3 structures) connected by oxygen bridges (Q4 structures) was proved by ²⁹Si NMR spectroscopy. The Brunauer-Emmett-Teller nitrogen adsorption-desorption method shows that all the prepared xerogels have type IV isotherms and are mesoporous solids. The specific surface area and pore volume of these materials are considerable. The obtained results show that all the materials are paramagnetic semiconductors. The data obtained by ²⁹Si nuclear magnetic resonance and Fourier transform infrared spectroscopy show that the Si-OH and Si-NH groups existing in the silica hybrids can participate in adsorption interactions. The obtained materials, containing reactive centers, could exhibit adsorption of metal ions due to the presence of OH and NH functionality in the mesoporous framework. Our design of a simple method to prepare hybrid materials may stimulate interest in the development of mesoporous hybrid systems and their future use in the environmental domain.

Keywords: hybrid materials, sol-gel process, 1,3,4-thiadiazole, TEOS

Procedia PDF Downloads 180
483 Optical and Surface Characteristics of Direct Composite, Polished and Glazed Ceramic Materials After Exposure to Tooth Brush Abrasion and Staining Solution

Authors: Maryam Firouzmandi, Moosa Miri

Abstract:

Aim and background: Esthetic and structural reconstruction of anterior teeth may require the application of different restoration materials. In this regard, a combination of a direct composite veneer and a ceramic crown is a common treatment option. Despite the initial matching, their long-term harmony in terms of optical and surface characteristics is a matter of concern. The purpose of this study is to evaluate and compare the optical and surface characteristics of direct composite, polished ceramic, and glazed ceramic materials after exposure to toothbrush abrasion and staining solution. Materials and methods: Ten 2-mm-thick disk-shaped specimens were prepared from IPS Empress Direct composite and twenty specimens from IPS e.max CAD blocks. The composite specimens and ten of the ceramic specimens were polished using D&Z composite and ceramic polishing kits. The other ten ceramic specimens were glazed with glazing liquid. Baseline measurements of roughness, CIELab coordinates, and luminance were recorded. Then the specimens underwent thermocycling, toothbrushing, and coffee staining, after which the final measurements were recorded. The color coordinates were used to calculate ΔE76, ΔE00, the translucency parameter, and the contrast ratio. Data were analyzed by one-way ANOVA and post hoc LSD tests. Results: The baseline and final roughness of the study groups were not different. At baseline, the order of roughness for the study groups was as follows: composite < glazed ceramic < polished ceramic; after aging, no difference between the ceramic groups was detected. The comparison of baseline and final luminance was similar to that of roughness but in reverse order. Unlike the roughness change, which was comparable between the groups, the change in luminance of the glazed ceramic group was higher than in the other groups. ΔE76 and ΔE00 in the composite group were 18.35 and 12.84, in the glazed ceramic group 1.3 and 0.79, and in the polished ceramic group 1.26 and 0.85. The values for the composite group were significantly different from those of the ceramic groups. The translucency of the composite at baseline was significantly higher than at the final measurement, but there was no significant difference between these values in the ceramic groups. The composite was more translucent than the ceramic at both the baseline and final measurements. Conclusion: The glazed ceramic surface was smoother than the polished ceramic. Aging did not change the roughness. The optical properties (color and translucency) of the composite were influenced by aging. The luminance of the composite, glazed ceramic, and polished ceramic decreased after aging, but the reduction in the glazed ceramic was more pronounced.
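
The color metrics named above are computed directly from the CIELab coordinates: ΔE76 is the 1976 CIELAB color difference, and the translucency parameter (TP) is conventionally the color difference of the same specimen measured over black and white backings. A small sketch with illustrative coordinates, not the study's data:

```python
# CIELAB 1976 color difference and translucency parameter.
import math

def delta_e76(lab1, lab2):
    # sqrt(dL^2 + da^2 + db^2)
    return math.dist(lab1, lab2)

def translucency_parameter(lab_black, lab_white):
    # Color difference of one specimen over black vs. white backings
    return delta_e76(lab_black, lab_white)

baseline = (78.2, 1.5, 18.0)       # (L*, a*, b*) before aging
aged     = (74.0, 2.1, 22.5)       # after thermocycling, brushing, staining
print(f"dE76 = {delta_e76(baseline, aged):.2f}")
print(f"TP = {translucency_parameter((70.1, 1.0, 16.2), (74.8, 1.2, 17.9)):.2f}")
```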

Keywords: ceramic, tooth-brush abrasion, staining solution, composite resin

Procedia PDF Downloads 185
482 Adequacy of Antenatal Care and Its Relationship with Low Birth Weight in Botucatu, São Paulo, Brazil: A Case-Control Study

Authors: Cátia Regina Branco da Fonseca, Maria Wany Louzada Strufaldi, Lídia Raquel de Carvalho, Rosana Fiorini Puccini

Abstract:

Background: Birth weight reflects gestational conditions and development during the fetal period. Low birth weight (LBW) may be associated with antenatal care (ANC) adequacy and quality. The purpose of this study was to analyze ANC adequacy and its relationship with LBW in the Unified Health System in Brazil. Methods: A case-control study was conducted in Botucatu, São Paulo, Brazil, from 2004 to 2008. Data were collected from secondary sources (the Live Birth Certificate) and primary sources (the official medical records of pregnant women). The study population consisted of two groups, each with 860 newborns. The case group comprised newborns weighing less than 2,500 grams, while the control group comprised live newborns weighing 2,500 grams or more. Adequacy of ANC was evaluated according to three measures: 1. adequacy of the number of ANC visits adjusted for gestational age; 2. the modified Kessner Index; and 3. a summary measure of the adequacy of ANC laboratory studies and exams according to parameters defined by the Ministry of Health in the Program for Prenatal and Birth Care Humanization. Results: Analyses revealed that LBW was associated with the number of ANC visits adjusted for gestational age (OR = 1.78, 95% CI 1.32-2.34) and with the ANC laboratory studies and exams summary measure (OR = 4.13, 95% CI 1.36-12.51). According to the modified Kessner Index, 64.4% of antenatal care in the LBW group was adequate, with no differences between groups. Conclusions: Our data corroborate the association between an inadequate number of ANC visits, inadequate laboratory studies and exams, and an increased risk of LBW newborns. No association was found between the modified Kessner Index, as a measure of ANC adequacy, and LBW. This finding reveals the low coverage of basic actions already well regulated in the Brazilian Health System. Despite the association found in this study, we cannot conclude that LBW would be prevented by adequate ANC alone, as LBW is associated with factors of complex and multifactorial etiology. The results could be used to plan monitoring measures and evaluate health care programs for pregnancy, delivery, and newborns, with a focus on reducing LBW rates.
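
As background for the odds ratios quoted above, a case-control OR and its 95% confidence interval are derived from a 2×2 exposure table as sketched below; the counts are invented for illustration and are not the study's data:

```python
# Odds ratio with 95% CI from a 2x2 case-control table (Woolf method).
import math

def odds_ratio_ci(a, b, c, d):
    """a, b = exposed cases/controls; c, d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

print(odds_ratio_ci(a=120, b=90, c=740, d=770))
```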

Keywords: low birth weight, antenatal care, prenatal care, adequacy of health care, health evaluation, public health system

Procedia PDF Downloads 431
481 Performance Evaluation of the CSAN Pronto Point-of-Care Whole Blood Analyzer for Regular Hematological Monitoring During Clozapine Treatment

Authors: Farzana Esmailkassam, Usakorn Kunanuvat, Zahraa Mohammed Ali

Abstract:

Objective: A key barrier in clozapine treatment of treatment-resistant schizophrenia (TRS) is the frequent blood draws required to monitor neutropenia, the main drug side effect. WBC and ANC monitoring must occur throughout treatment. Accurate WBC and ANC counts are necessary for clinical decisions to halt, modify, or continue clozapine treatment. The CSAN Pronto point-of-care (POC) analyzer generates white blood cell (WBC) and absolute neutrophil counts (ANC) through image analysis of capillary blood. POC monitoring offers significant advantages over central laboratory testing. This study evaluated the performance of the CSAN Pronto against the Beckman DxH900 laboratory hematology analyzer. Methods: Forty venous samples (EDTA whole blood) with varying concentrations of WBC and ANC, as established on the DxH900 analyzer, were tested in duplicate on three CSAN Pronto analyzers. Additionally, venous and capillary samples were concomitantly collected from 20 volunteers and assessed on the CSAN Pronto and the DxH900 analyzer. The analytical performance was also evaluated, including precision, using liquid quality controls (QCs) as well as patient samples near the medical decision points, and linearity, using a mix of high and low patient samples to create five concentrations. Results: In the precision study for QCs and whole blood, WBC and ANC showed CVs inside the limits established according to manufacturer and laboratory acceptability standards. WBC and ANC were found to be linear across the measurement range, with a correlation of 0.99. WBC and ANC from all analyzers correlated well with the DxH900 in venous samples across the tested sample ranges, with a correlation of >0.95. The mean bias in ANC obtained on the CSAN Pronto versus the DxH900 was 0.07 × 10⁹ cells/L (95% limits of agreement: −0.25 to 0.49) for concentrations <4.0 × 10⁹ cells/L, which include the decision-making cut-offs for continuing clozapine treatment. The mean bias in WBC obtained on the CSAN Pronto versus the DxH900 was 0.34 × 10⁹ cells/L (95% limits of agreement: −0.13 to 0.72) for concentrations <5.0 × 10⁹ cells/L. The mean bias was higher (−11% for ANC, 5% for WBC) at higher concentrations. The correlations between capillary and venous samples showed more variability, with a mean bias of 0.20 × 10⁹ cells/L for ANC. Conclusions: The CSAN Pronto showed acceptable performance in WBC and ANC measurements from venous and capillary samples and was approved for clinical use. This testing will facilitate treatment decisions and improve clozapine uptake and compliance.
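
The bias figures above follow the usual Bland-Altman convention: the mean of the paired differences, with 95% limits of agreement at the mean ± 1.96 standard deviations. A minimal sketch on simulated paired ANC values:

```python
# Bland-Altman mean bias and 95% limits of agreement (simulated pairs).
import numpy as np

rng = np.random.default_rng(3)
dxh900 = rng.uniform(0.5, 4.0, size=40)           # reference ANC [10^9/L]
csan = dxh900 + rng.normal(0.07, 0.19, size=40)   # POC analyzer readings

diff = csan - dxh900
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"mean bias {bias:.2f} x 10^9/L, "
      f"95% LoA {bias - loa:.2f} to {bias + loa:.2f}")
```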

Keywords: absolute neutrophil counts, clozapine, point of care, white blood cells

Procedia PDF Downloads 94
480 Evaluation of Human Amnion Hemocompatibility as a Substitute for Vessels

Authors: Ghasem Yazdanpanah, Mona Kakavand, Hassan Niknejad

Abstract:

Objectives: An important issue in tissue engineering (TE) is hemocompatibility. Currently engineered vessels are seriously at risk of thrombus formation and stenosis. The amnion (AM) is the innermost layer of the fetal membranes and consists of epithelial and mesenchymal sides. It has the advantages of low immunogenicity, anti-inflammatory and anti-bacterial properties, as well as good mechanical properties. We recently introduced the amnion as a natural biomaterial for tissue engineering. In this study, we have evaluated the hemocompatibility of amnion as a potential biomaterial for tissue engineering. Materials and methods: Amnions were derived from placentas of elective caesarean deliveries at gestational ages of 36 to 38 weeks. The extracted amnions were washed with cold PBS to remove blood remnants. Blood samples were obtained from healthy adult volunteers who had not previously taken anticoagulants. The blood samples were maintained in sterile tubes containing sodium citrate. Plasma or platelet-rich plasma (PRP) was collected by centrifuging the blood samples at 600 g for 10 min. The hemocompatibility of the AM samples (n=7) was evaluated by measuring the activated partial thromboplastin time (aPTT), prothrombin time (PT), hemolysis, and platelet aggregation. P-selectin was also assessed by ELISA. Both the epithelial and mesenchymal sides of the amnion were evaluated. Glass slide and expanded polytetrafluoroethylene (ePTFE) samples served as controls. Results: In comparison with glass as control (13.3 ± 0.7 s), the prothrombin time increased significantly when each side of the amnion was in contact with plasma (p<0.05). There was no significant difference in PT between the epithelial and mesenchymal surfaces (17.4 ± 0.7 s vs. 15.8 ± 0.7 s, respectively). However, the aPTT was not significantly changed after incubation of plasma with the amnion epithelial or mesenchymal surfaces or glass (28.61 ± 1.39 s, 31.4 ± 2.66 s, and 30.76 ± 2.53 s, respectively, p>0.05). The amnion surfaces, ePTFE, and glass samples induced considerably less hemolysis than water (p<0.001), with no differences detected among them. Platelet aggregation measurements showed that platelets were less stimulated by the amnion epithelial and mesenchymal sides than by ePTFE and glass. In addition, the reduction in the amount of P-selectin, a platelet activation factor, after incubation of the samples with PRP indicated that amnion has a weaker stimulatory effect on platelets than ePTFE and glass. Conclusion: Amnion as a natural biomaterial has the potential to be used in tissue engineering. Our results suggest that amnion has appropriate hemocompatibility to be employed as a vascular substitute.

Keywords: amnion, hemocompatibility, tissue engineering, biomaterial

Procedia PDF Downloads 395
479 Storm-Runoff Simulation Approaches for External Natural Catchments of Urban Sewer Systems

Authors: Joachim F. Sartor

Abstract:

According to German guidelines, external natural catchments are larger sub-catchments without significant portions of impervious area, which possess a surface drainage system and empty into a sewer network. Basically, such catchments should be disconnected from sewer networks, particularly from combined systems. If this is not possible due to local conditions, their flow hydrographs have to be considered in the design of sewer systems, because the impact may be significant. Since there is a lack of sufficient measurements of storm-runoff events for such catchments, and hence of verified simulation methods to analyze their design flows, German standards give only general advice and demand special consideration in such cases. Compared to urban sub-catchments, external natural catchments exhibit greatly different flow characteristics. With increasing area size, their hydrological behavior approximates that of rural catchments; e.g., sub-surface flow may prevail and lag times are comparably long. There are few observed peak flow values, and only simple (mostly empirical) approaches are offered in the literature for Central Europe. Most of them are at least helpful to cross-check results that are achieved by simulation lacking calibration. Using storm-runoff data from five monitored rural watersheds in the west of Germany, with catchment areas between 0.33 and 1.07 km², the author investigated, by multiple-event simulation, three different approaches to determine the rainfall excess: the modified SCS variable runoff coefficient methods by Lutz and Zaiß, as well as the soil moisture model by Ostrowski. Selection criteria for storm events from continuous precipitation data were taken from the recommendations of M 165, and the runoff concentration method (parallel cascades of linear reservoirs) from a DWA working report to which the author had contributed. In general, the two runoff coefficient methods showed results that are of sufficient accuracy for most practical purposes. The soil moisture model showed no significantly better results, at least not to such a degree that it would justify the additional data collection that its parameter determination requires. In particular, typical convective summer events after long dry periods, which are often decisive for sewer networks (not so much for rivers), showed discrepancies between simulated and measured flow hydrographs.
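
For readers unfamiliar with the runoff concentration method mentioned, the sketch below routes a block of rainfall excess through two parallel cascades of linear reservoirs (a fast surface and a slow sub-surface component). The cascade parameters and flow split are illustrative, not calibrated values from the study:

```python
# Runoff concentration via parallel cascades of linear reservoirs (S = k*Q).
import numpy as np

def cascade(inflow, n, k, dt):
    """Route inflow [m^3/s] through n serial linear reservoirs with S = k*Q."""
    q = np.array(inflow, dtype=float)
    for _ in range(n):
        out = np.zeros_like(q)
        s = 0.0
        for i, qi in enumerate(q):
            s += (qi - s / k) * dt        # dS/dt = I - Q, with Q = S/k
            out[i] = s / k
        q = out
    return q

dt = 600.0                                # 10-minute steps [s]
excess = np.zeros(60)
excess[0:6] = 5.0                         # 1 h block of rainfall excess [m^3/s]

# Parallel cascades: fast (surface) and slow (sub-surface) components
q_total = (0.7 * cascade(excess, n=3, k=1800.0, dt=dt)
           + 0.3 * cascade(excess, n=4, k=7200.0, dt=dt))
print(f"peak outflow: {q_total.max():.2f} m^3/s")
```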

Keywords: external natural catchments, sewer network design, storm-runoff modelling, urban drainage

Procedia PDF Downloads 151
478 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study

Authors: D. M. Samartsev, A. G. Copping

Abstract:

As with all industries, architects are using increasing amounts of automation within practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often personal and lacking in objective figures and measurements. This results in confusion and creates barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions in the area of design automation. This paper proposes the use of a framework to quantify the progress of automation within the design process. A reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, locations, and projects. The methodology is informed by the design of this framework, taking on aspects of a systematic review but compressed in time to allow an initial set of data to verify the validity of the framework. Such a framework of quantification enables various practical uses, such as predicting which tasks in the architectural industry will be automated, as well as making more informed decisions on the subject of automation at multiple levels, ranging from individual decisions to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task that needs to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can then be recursively split further as required. Each task is then assigned a series of milestones that allow for the objective analysis of its automation progress. By combining these two approaches, it is possible to create a data structure that describes how much of the architectural design process is automated. The data gathered in the paper serves the dual purpose of validating the framework and giving insights into the current state of automation within the architectural design process. The framework can be interrogated in many ways, and preliminary analysis shows that almost 40% of the architectural design process had been automated in some practical fashion at the time of writing, with the rate of progress slowly increasing over the years and the majority of tasks in the design process reaching a new milestone in automation in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework are examined in this paper, as well as further areas of study.
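
A sketch of the data structure implied by this description: a recursively split work breakdown of the design process, with a milestone-based automation score per leaf task and parent scores aggregated from children. Task names, scores, and the unweighted-mean aggregation are invented for illustration; the paper's actual framework may weight tasks differently:

```python
# Recursive work-breakdown tree with per-task automation scores.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    automation: float = 0.0          # fraction of milestones reached (leaf only)
    subtasks: list["Task"] = field(default_factory=list)

    def score(self) -> float:
        # A parent task's automation is the mean of its children's scores
        if not self.subtasks:
            return self.automation
        return sum(t.score() for t in self.subtasks) / len(self.subtasks)

design = Task("Design a building", subtasks=[
    Task("Concept design", subtasks=[
        Task("Massing studies", 0.75),     # e.g. generative design tools
        Task("Brief analysis", 0.25),
    ]),
    Task("Technical design", subtasks=[
        Task("Structural sizing", 0.50),
        Task("Drawing production", 0.60),
    ]),
])
print(f"Overall automation: {design.score():.0%}")
```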

Keywords: analysis, architecture, automation, design process, technology

Procedia PDF Downloads 104
477 Preventive Effect of Locoregional Analgesia Techniques on Chronic Post-Surgical Neuropathic Pain: A Prospective Randomized Study

Authors: Beloulou Mohamed Lamine, Bouhouf Attef, Meliani Walid, Sellami Dalila, Lamara Abdelhak

Abstract:

Introduction: Post-surgical chronic pain (PSCP) is a pathological condition with a rather complex etiopathogenesis that extensively involves sensitization processes and neuronal damage. A neuropathic component of this pain is almost always present, with variable expression depending on the type of surgery. Objective: To assess the presumed beneficial effect of regional anesthesia-analgesia techniques (RAAT) on the development of post-surgical chronic neuropathic pain (PSCNP) in various surgical procedures. Patients and methods: A comparative study involving 510 patients distributed across five surgical models (mastectomy, thoracotomy, hernioplasty, cholecystectomy, and major abdominal-pelvic surgery) and randomized into two groups: Group A (240), receiving conventional postoperative analgesia, and Group B (270), receiving balanced analgesia including the implementation of a regional anesthesia-analgesia technique (RAAT). These patients were followed longitudinally over a 6-month period, with post-surgical chronic neuropathic pain (PSCNP) defined by a neuropathic pain score DN2 ≥ 3. Comparative measurements through univariate and multivariate analyses were performed to identify associations between the development of PSCNP and certain predictive factors, including the presumed preventive (protective) effect of RAAT. Results: At the 6th month post-surgery, 419 patients were analyzed (Group A = 196 and Group B = 223). The incidence of post-surgical chronic pain was 32.2% (n=135). Among these patients with chronic pain, the prevalence of neuropathic pain was 37.8% (95% CI: [29.6; 46.5]), with n=51/135. It was significantly lower in Group B than in Group A, with respective percentages of 31.4% vs. 48.8% (p-value = 0.035). The most significant differences were observed in breast and thoracopulmonary surgeries. In a multiple regression analysis, two predictors of PSCNP were identified: the presence of preoperative pain at the surgical site as a risk factor (OR: 3.198; 95% CI [1.326; 7.714]) and RAAT as a protective factor (OR: 0.408; 95% CI [0.173; 0.961]). Conclusion: The neuropathic component of PSCNP can be observed in different types of surgeries. Regional analgesia, included in a multimodal approach to postoperative pain management, has proven effective for acute pain and seems to have a preventive impact on the development of PSCNP and its neuropathic nature or component, particularly in surgeries that are more prone to chronicization.

Keywords: chronic postsurgical pain, postsurgical chronic neuropathic pain, regional anesthesia and analgesia techniques (RAAT), neuropathic pain score DN2, preventive impact

Procedia PDF Downloads 27
476 Evaluating Radiation Dose for Interventional Radiologists Performing Spine Procedures

Authors: Kholood A. Baron

Abstract:

While the number of radiologists specialized in spine interventional procedures in Kuwait is limited, the number of patients demanding these procedures is increasing rapidly. Due to this high demand, radiologists' workloads are increasing, which might represent a radiation exposure concern. During these procedures, the doctor's hands are in very close proximity to the main radiation beam, if not within it. The aim of this study is to measure the radiation dose received by radiologists during several spine interventional procedures. Methods: Two doctors carrying different workloads were included: DR1 performed procedures in the morning and afternoon shifts, while DR2 performed procedures in the morning shift only. Comparing the radiation exposure received by each doctor's hand allows an assessment of radiation safety and helps to establish workload regulations for radiologists carrying a heavy schedule of such procedures. Entrance skin dose (ESD) was measured via a thermoluminescent dosimeter (TLD) placed at the right wrist of each radiologist. DR1 covered the morning shift in one hospital (Mubarak Al-Kabeer Hospital) and the afternoon shift in another (Dar Alshifa Hospital); the TLD chip was placed in his gloves during both shifts for a whole week. Since DR2 covered the morning shift only, in Al Razi Hospital, he wore the TLD during the morning shift for a week. It is worth mentioning that DR1 performed 4-5 spine procedures per day in the morning and the same number in the afternoon, while DR2 performed 5-7 procedures per day. This procedure was repeated for 4 consecutive weeks in order to calculate the ESD a hand receives in a month. Results: In general, the radiation dose the hand received in a week ranged from 0.12 to 1.12 mSv. The ESD values for DR1 for the four consecutive weeks were 1.12, 0.32, 0.83, and 0.22 mSv; for a month (4 weeks) this totals 2.49 mSv, which corresponds to 27.39 mSv per year (11 months, since each radiologist has 45 days of leave per year). For DR2, the weekly ESD values were 0.43, 0.74, 0.12, and 0.61 mSv, totaling 1.90 mSv per month, or 20.9 mSv per year. These values are below the standard level and far below the maximum limit of 500 mSv per year set by the International Commission on Radiological Protection (ICRP). However, it is worth noting that DR1 was a senior consultant and hence needed less fluoroscopy time during each procedure. This is evident from the low ESD values of the second week (0.32 mSv) and the fourth week (0.22 mSv), even though he was performing nearly 10-12 procedures per day, 5 days a week. These values were lower than or in the same range as those for DR2, who was a junior consultant. This highlights the importance of increasing radiologists' skills and awareness of the effect of fluoroscopy time. In conclusion, the radiation dose that radiologists received during spine interventional radiology in our setting was below standard dose limits.
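
The dose arithmetic above can be checked directly; a small script using only the weekly values reported in the abstract:

```python
# Weekly TLD readings summed to a monthly ESD and scaled to 11 working months per year.
dr1_weekly = [1.12, 0.32, 0.83, 0.22]  # mSv, four consecutive weeks
dr2_weekly = [0.43, 0.74, 0.12, 0.61]  # mSv

for name, weeks in [("DR1", dr1_weekly), ("DR2", dr2_weekly)]:
    monthly = sum(weeks)   # 4 weeks ~ 1 working month
    yearly = monthly * 11  # 45 days of annual leave ~ 11 working months
    print(f"{name}: {monthly:.2f} mSv/month, {yearly:.2f} mSv/year")
# DR1: 2.49 mSv/month, 27.39 mSv/year
# DR2: 1.90 mSv/month, 20.90 mSv/year -> both far below the 500 mSv/year extremity limit
```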

Keywords: radiation protection, interventional radiology dosimetry, ESD measurements, radiologist radiation exposure

Procedia PDF Downloads 58
475 Co-Gasification of Petroleum Waste and Waste Tires: A Numerical and CFD Study

Authors: Thomas Arink, Isam Janajreh

Abstract:

The petroleum industry generates significant amounts of waste in the form of drill cuttings, contaminated soil, and oily sludge. Drill cuttings are a product of off-shore drilling rigs and contain wet soil and total petroleum hydrocarbons (TPH). Contaminated soil comes from different on-shore sites and also contains TPH. The oily sludge is mainly residue or tank-bottom sludge from storage tanks. The two main treatment methods currently used are incineration and thermal desorption (TD). Thermal desorption is a method in which the waste material is heated to 450 °C in an anaerobic environment to release volatiles; the condensed volatiles can be used as a liquid fuel. For the thermal desorption unit, dry contaminated soil is mixed with moist drill cuttings to generate a suitable mixture. Thermogravimetric analysis (TGA) of the TD feedstock showed that less than 50% of the TPH is released; the discharged material is stored in landfill. This study proposes co-gasification of petroleum waste with waste tires as an alternative to thermal desorption. Co-gasification with a high-calorific material is necessary since the petroleum waste consists of more than 60 wt% ash (soil/sand), causing its calorific value to be too low for gasification on its own. Since the gasification process occurs at 900 °C and higher, close to 100% of the TPH can be released, according to the TGA. This work consists of three parts: 1. a mathematical gasification model, 2. a reactive-flow CFD model, and 3. experimental work on a drop tube reactor. Extensive material characterization was done by means of proximate analysis (TGA), ultimate analysis (CHNOS flash analysis), and calorific value measurements (bomb calorimeter) to provide the input parameters of the mathematical and CFD models. The mathematical model is a zero-dimensional model based on Gibbs energy minimization with Lagrange multipliers; it is used to find the product species composition (molar fractions of CO, H2, CH4, etc.) for different tire/petroleum feedstock mixtures and equivalence ratios. The results of the mathematical model act as a reference for the CFD model of the drop-tube reactor. With the CFD model, the efficiency and product species composition can be predicted for different mixtures and particle sizes. Finally, both models are verified by experiments on a drop tube reactor (1540 mm long, 66 mm inner diameter, 1400 K maximum temperature).
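
A minimal sketch of the kind of zero-dimensional equilibrium calculation described above: total Gibbs energy is minimized subject to element balances (the Lagrange-multiplier conditions are handled here by scipy's constrained optimizer). The species set, standard-state Gibbs energies, and feed are illustrative placeholders, not values from the paper:

```python
# Gibbs energy minimization for an ideal C/H/O gas mixture with element-balance
# constraints; all thermodynamic inputs below are placeholders for illustration.
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 1173.0  # J/(mol K); ~900 C gasification temperature
species = ["CO", "CO2", "H2", "H2O", "CH4"]
g0 = np.array([-200e3, -396e3, 0.0, -228e3, -20e3])  # placeholder mu0(T), J/mol
# element balance matrix (rows C, H, O; columns follow `species`)
A = np.array([[1, 1, 0, 0, 1],
              [0, 0, 2, 2, 4],
              [1, 2, 0, 1, 0]], dtype=float)
n_feed = np.array([0.3, 0.2, 0.25, 0.2, 0.05])  # assumed feed, mol
b = A @ n_feed                                  # conserved element totals

def total_gibbs(n):
    n = np.clip(n, 1e-12, None)  # keep the log well defined
    return np.sum(n * (g0 + R * T * np.log(n / n.sum())))

res = minimize(total_gibbs, x0=np.full(5, 0.2), method="SLSQP",
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               bounds=[(1e-12, None)] * 5)
print(dict(zip(species, np.round(res.x / res.x.sum(), 3))))  # equilibrium mole fractions
```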

Keywords: computational fluid dynamics (CFD), drop tube reactor, gasification, Gibbs energy minimization, petroleum waste, waste tires

Procedia PDF Downloads 520
474 An Advanced Numerical Tool for the Design of Through-Thickness Reinforced Composites for Electrical Applications

Authors: Bing Zhang, Jingyi Zhang, Mudan Chen

Abstract:

Fibre-reinforced polymer (FRP) composites have been extensively utilised in various industries due to their high specific strength, e.g., aerospace, renewable energy, automotive, and marine. However, they have considerably lower electrical conductivity than metals, especially in the out-of-plane direction. Conductive metal strips or meshes are therefore typically employed to protect lightweight composite structures that may be subjected to lightning strikes, such as composite wings. Unfortunately, this approach erodes the lightweight advantage of FRP composites, thereby limiting their potential applications. Extensive studies have been undertaken to improve the electrical conductivity of FRP composites. The authors are amongst the pioneers who use through-thickness reinforcement (TTR) to tailor the electrical conductivity of composites. Compared to conventional approaches using conductive fillers, the through-thickness reinforcement approach has been proven to offer a much larger improvement in the through-thickness conductivity of composites. In this study, an advanced high-fidelity numerical modelling strategy is presented to investigate the effects of through-thickness reinforcement on both the in-plane and out-of-plane electrical conductivities of FRP composites. The critical micro-structural features of through-thickness reinforced composites incorporated in the modelling framework are: 1) the fibre waviness formed due to TTR insertion; 2) the resin-rich pockets formed by resin flow during the curing process following TTR insertion; 3) the fibre crimp, i.e., fibre distortion in the thickness direction of the composite caused by TTR insertion forces. In addition, each interlaminar interface is described separately. An IMA/M21 composite laminate with a quasi-isotropic stacking sequence is employed to calibrate and verify the modelling framework. The modelling results agree well with experimental measurements for both in-plane and out-of-plane conductivities. It has been found that the presence of conductive TTR can increase the out-of-plane conductivity by around one order of magnitude, even at a TTR areal density of only 0.1%, whereas the improvement in the in-plane conductivity is much smaller. This numerical tool provides a valuable reference for the design of through-thickness reinforced composites when exploring their electrical applications. Parametric studies are undertaken using the numerical tool to investigate critical parameters that affect the electrical conductivities of composites, including TTR material, TTR areal density, stacking sequence, and interlaminar conductivity. Suggestions regarding the design of electrically functional through-thickness reinforced composites are derived from the numerical modelling campaign.
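
For intuition only, a back-of-envelope parallel rule-of-mixtures estimate (all property values assumed, and far simpler than the high-fidelity model presented here) shows why a small areal density of conductive TTR can raise the out-of-plane conductivity by roughly an order of magnitude:

```python
# Parallel conduction paths: laminate matrix-dominated path plus conductive TTR rods.
# All values are assumptions chosen for illustration, not the paper's measurements.
sigma_laminate_z = 5.0  # S/m, assumed out-of-plane conductivity of the unpinned laminate
sigma_ttr = 5.0e4       # S/m, assumed axial conductivity of a conductive TTR rod
v_ttr = 0.001           # TTR areal density of 0.1%, as in the abstract

sigma_eff = (1 - v_ttr) * sigma_laminate_z + v_ttr * sigma_ttr
print(f"{sigma_eff:.1f} S/m, ~{sigma_eff / sigma_laminate_z:.0f}x improvement")
# -> ~55 S/m, about one order of magnitude, consistent with the trend reported above
```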

Keywords: composite structures, design, electrical conductivity, numerical modelling, through-thickness reinforcement

Procedia PDF Downloads 88
473 Physical Properties Characterization of Shallow Aquifer and Groundwater Quality Using Geophysical Method Based on Electrical Resistivity Tomography in Arid Region, Southeastern Area of Tunisia: A Case Study of the Smar Aquifer

Authors: Nesrine Frifita

Abstract:

In recent years, serious interest in groundwater resources has led to more intensive studies of the depth, thickness, geometry, and properties of aquifers. Geophysical methods are commonly used to investigate the subsurface; however, determining the exact location of groundwater within subsurface layers remains a problem to be resolved. An even bigger problem is the quality of the groundwater, which is at risk of pollution, especially given the water shortages affecting arid regions under marked climate change. The present study was conducted using electrical resistivity tomography (ERT) in the Jeffara coastal area in southeastern Tunisia to image the potential shallow aquifer and study its physical properties. The purpose of this study is to understand the characteristics and depth of the Smar aquifer, so that it can serve as a reference for groundwater drilling, guiding farmers and improving the livelihoods of the inhabitants of nearby cities. The Wenner-Schlumberger array was used for data acquisition, as it is suitable for obtaining deeper profiles in areas with homogeneous layers. Six electrical resistivity profiles were carried out in the Smar watershed using 72 electrodes with 4 and 5 m spacing. The resistivity measurements were interpreted by a least-squares inversion technique using the RES2DINV program. Findings show that the Smar aquifer is about 31 m thick and extends to 36.5 m depth in the downstream area of Oued Smar. The defined depth and geometry of the Smar aquifer indicate that the sedimentary cover thins toward the coast and that the shallow aquifer becomes deeper toward the west. The resistivity values show significant contrast, even reaching < 1 Ωm in ERT1; such low resistivity can be related to saline water, which foretells a risk of pollution and poor groundwater quality. The ERT1 geoelectrical model defines an unsaturated zone, while under the ERT3 site the geoelectrical model presents a saturated zone whose low resistivity values indicate local surface water coming from the nearby Office of the National Sanitation Utility (ONAS); this water can be a source of recharge of the studied shallow aquifer and may further deteriorate the groundwater quality in the region.
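
For readers unfamiliar with the inversion step, here is a sketch of one iteration of the smoothness-constrained least-squares scheme on which RES2DINV-type inversions are built, shown on a toy linear problem (a real ERT inversion uses a nonlinear forward model and thousands of model cells):

```python
# Damped (smoothness-constrained) least-squares update, the core of RES2DINV-style
# inversion; the forward model and sizes here are toy placeholders.
import numpy as np

def gauss_newton_step(m, d, forward, J, lam, C):
    """Solve (J^T J + lam C^T C) dm = J^T (d - forward(m)) for the model update."""
    r = d - forward(m)               # data residual
    lhs = J.T @ J + lam * (C.T @ C)  # normal equations with roughness penalty
    return m + np.linalg.solve(lhs, J.T @ r)

# toy linear "forward model" standing in for the resistivity forward solver
G = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.4],
              [0.1, 0.6, 1.0]])
C = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])     # first-difference roughness operator
m_true = np.array([10.0, 3.0, 50.0])  # cell resistivities, ohm-m
d = G @ m_true                        # synthetic "measured" data
m = np.full(3, 5.0)                   # homogeneous starting model
for _ in range(5):
    m = gauss_newton_step(m, d, lambda x: G @ x, G, lam=0.01, C=C)
print(np.round(m, 2))                 # approaches m_true as lam -> 0
```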

Keywords: electrical resistivity tomography, groundwater, recharge, smar aquifer, southeastern tunisia

Procedia PDF Downloads 74
472 Developing a Framework for Assessing and Fostering the Sustainability of Manufacturing Companies

Authors: Ilaria Barletta, Mahesh Mani, Björn Johansson

Abstract:

The concept of sustainability encompasses economic, environmental, social, and institutional considerations; sustainable manufacturing (SM) is, therefore, a multi-faceted concept. It broadly implies the development and implementation of technologies, projects, and initiatives that are concerned with the life cycle of products and services and are able to bring positive impacts to the environment, company stakeholders, and profitability. Because of this, achieving SM-related goals requires a holistic, life-cycle-thinking approach from manufacturing companies. Further, such an approach must rely on a logic of continuous improvement and ease of implementation in order to be effective. Currently, the academic literature offers no comprehensively structured framework that supports manufacturing companies in identifying the issues and capabilities that can either hinder or foster sustainability. This scarcity of support extends to difficulties in obtaining quantifiable measurements with which to objectively evaluate solutions and programs and to identify improvement areas within SM for standards conformance. To bridge this gap, this paper proposes a framework for assessing and continuously improving the sustainability of manufacturing companies. The framework addresses strategies and projects for SM and operates in three sequential phases: analysis of the issues, design of solutions, and continuous improvement. Interviews, observations, and questionnaires are the research methods to be used for the implementation of the framework. Different decision-support methods - either existing or novel - can be 'plugged into' each of the phases; these methods can assess anything from business capabilities to process maturity. In particular, the authors are working on the development of a sustainable manufacturing maturity model (SMMM) as decision support within the 'continuous improvement' phase. The SMMM, inspired by previous maturity models, comprises four maturity levels ranging from 'non-existing' to 'thriving'. Aggregate findings from the use of the framework should ultimately reveal to managers and CEOs a roadmap for achieving SM goals and identify the maturity of their companies' processes and capabilities. Two case studies from manufacturing companies in Australia are currently being used to develop and test the framework. The use of this framework will bring two main benefits: enabling visual, intuitive internal sustainability benchmarking, and raising awareness of improvement areas that lead companies towards an increasingly developed SM.
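
A minimal sketch of how the four-level SMMM scale could be encoded and used for internal benchmarking (the intermediate level names and capability labels are assumptions; the paper names only the endpoints 'non-existing' and 'thriving'):

```python
# Hypothetical encoding of a four-level maturity scale and a simple weakest-link query.
from enum import IntEnum

class Maturity(IntEnum):
    NON_EXISTING = 0  # named in the paper
    EMERGING = 1      # intermediate names are assumptions
    ESTABLISHED = 2
    THRIVING = 3      # named in the paper

assessment = {
    "life-cycle data collection": Maturity.EMERGING,
    "energy and resource monitoring": Maturity.ESTABLISHED,
    "stakeholder engagement": Maturity.NON_EXISTING,
}
weakest = min(assessment, key=assessment.get)
print(f"priority improvement area: {weakest} ({assessment[weakest].name})")
```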

Keywords: life cycle management, continuous improvement, maturity model, sustainable manufacturing

Procedia PDF Downloads 266
471 Satellite Multispectral Remote Sensing of Ozone Pollution

Authors: Juan Cuesta

Abstract:

Satellite observation is a fundamental component of air pollution monitoring systems, such as the large-scale Copernicus Programme. Next-generation satellite sensors, already in orbit or planned for the future, offer great potential for observing major air pollutants, such as tropospheric ozone, with unprecedented spatial and temporal coverage. However, the satellite approaches developed so far for remote sensing of tropospheric ozone are based solely on measurements from a single instrument in a specific spectral range, either thermal infrared or ultraviolet. These methods are only sensitive to tropospheric ozone no lower than 3 to 4 km above the surface, thus limiting their applicability to ozone pollution analysis. Indeed, no current observation in a single spectral domain provides enough information to accurately measure ozone in the atmospheric boundary layer. To overcome this limitation, we have developed a multispectral synergism approach, called IASI+GOME2, at the Laboratoire Interuniversitaire des Systèmes Atmosphériques (LISA). This method is based on the synergy of thermal infrared and ultraviolet observations from, respectively, the Infrared Atmospheric Sounding Interferometer (IASI) and the Global Ozone Monitoring Experiment-2 (GOME-2) sensors on board the MetOp satellites, in orbit since 2007. IASI+GOME2 enabled the first satellite observation of ozone plumes located between the surface and 3 km altitude (what we call the lowermost troposphere), as it offers significant sensitivity in this layer. This represents a major advance for the observation of ozone in the lowermost troposphere and its application to air quality analysis. The ozone abundance derived by IASI+GOME2 shows good agreement with independent ozonesonde observations (a low mean bias, a linear correlation larger than 0.8, and a mean precision of about 16%) around the world during all seasons. Using IASI+GOME2, lowermost-tropospheric ozone pollution plumes are quantified both in terms of concentrations and in terms of the amounts of ozone photochemically produced during transport, enabling the characterization of ozone pollution events such as those that occurred during the lockdowns linked to the COVID-19 pandemic. The present paper will show the IASI+GOME2 multispectral approach for observing lowermost-tropospheric ozone from space and give an overview of several applications on different continents and at the global scale.
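
The validation statistics quoted above (mean bias, linear correlation, precision) can be computed as follows; the arrays are placeholders standing in for collocated IASI+GOME2 retrievals and ozonesonde measurements:

```python
# Standard retrieval-vs-sonde validation metrics; data values are hypothetical.
import numpy as np

retrieved = np.array([28.0, 31.5, 25.2, 35.1, 30.3])  # hypothetical ozone columns, DU
sonde = np.array([27.1, 33.0, 24.0, 36.4, 29.5])      # collocated ozonesonde values, DU

bias = np.mean(retrieved - sonde)                             # mean bias
r = np.corrcoef(retrieved, sonde)[0, 1]                       # linear correlation
precision = 100 * np.std(retrieved - sonde) / np.mean(sonde)  # 1-sigma precision, %
print(f"bias = {bias:+.2f} DU, r = {r:.2f}, precision = {precision:.1f} %")
```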

Keywords: ozone pollution, multispectral synergism, satellite, air quality

Procedia PDF Downloads 81
470 An Evaporation Study of 1-Ethyl-3-methylimidazolium Chloride

Authors: Kirill D. Semavin, Norbert S. Chilingarov, Eugene V. Skokan

Abstract:

Ionic liquids (ILs) based on the imidazolium cation are well known. Changing the anion and the substituents on the imidazolium ring can lead to different physical and chemical properties of the ILs. Importantly, such ILs with a halide anion are characterized by low thermal stability. The data on the thermal stability of 1-ethyl-3-methylimidazolium chloride are ambiguous: in recent works, the thermal stability of this IL was investigated by thermogravimetric analysis, and the results obtained are contradictory. Moreover, the most recent study showed that the observed onset temperature of decomposition depends significantly on the experimental conditions, for example, the heating rate of the sample. The vapor pressure of this IL has not been reported in the literature. In this study, the vapor pressure of 1-ethyl-3-methylimidazolium chloride was obtained by Knudsen effusion mass spectrometry (KEMS). The samples of [EMIm]Cl (purity > 98%) were supplied by Sigma-Aldrich and were additionally dried under dynamic vacuum (T = 60 °C). Preliminary sample handling was carried out in a glove box. The evaporation studies of [EMIm]Cl were carried out by KEMS using original research equipment based on a commercial MI1201 magnetic mass spectrometer. The stainless steel effusion cell had an effective evaporation-to-effusion area ratio of more than 6000. The cell temperature, measured by a Pt/Pt-Rh (10%) thermocouple, was controlled by a Termodat 128K5 device with an accuracy of ±1 K. In the first step of this study, the optimal experiment temperature and sample heating rate were determined: 449 K and 5 K/min, respectively. Under these conditions the sample decomposes, but experimental measurements of the vapor pressure remain possible: the thermodynamic activity of [EMIm]Cl stays close to 1, and the decomposition products do not affect it during the first 50 hours of the experiment, which allows the saturated vapor pressure of the IL to be determined. The electron ionization mass spectra show that the decomposition of [EMIm]Cl proceeds via two pathways. Nonetheless, the MALDI mass spectra of the starting sample and of the residue in the cell were similar, which means that the main decomposition products are gaseous under the experimental conditions. This result allows us to obtain information about the kinetics of [EMIm]Cl decomposition. Thus, the original KEMS-based procedure made it possible to determine the IL vapor pressure under decomposition conditions. The loss of sample mass due to evaporation was also determined.
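
The abstract does not spell out the working equation, but KEMS vapor pressures are conventionally derived from the Knudsen effusion relation; a sketch with hypothetical orifice parameters and mass-loss rate (only R, T, and M are taken from known values):

```python
# Knudsen effusion relation: p = (dm/dt) / (s * C) * sqrt(2 * pi * R * T / M).
# Orifice area, transmission coefficient, and mass-loss rate below are hypothetical.
import math

R = 8.314        # J/(mol K)
T = 449.0        # K, the experiment temperature reported above
M = 146.62e-3    # kg/mol, molar mass of [EMIm]Cl (C6H11ClN2)
dm_dt = 1.0e-9   # kg/s, hypothetical steady mass-loss rate through the orifice
s = 2.0e-7       # m^2, hypothetical orifice area
clausing = 0.95  # hypothetical transmission (Clausing) coefficient

p = dm_dt / (s * clausing) * math.sqrt(2.0 * math.pi * R * T / M)
print(f"p = {p:.2e} Pa")
```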

Keywords: ionic liquids, Knudsen effusion mass spectrometry, thermal stability, vapor pressure

Procedia PDF Downloads 187
469 The Influence of Cognitive Load in the Acquisition of Words through Sentence or Essay Writing

Authors: Breno Barreto Silva, Agnieszka Otwinowska, Katarzyna Kutylowska

Abstract:

Research comparing lexical learning following the writing of sentences and of longer texts with keywords is limited and contradictory. One possibility is that the recursivity of writing may enhance processing and increase lexical learning; another is that the higher cognitive load of complex-text writing (e.g., essays), at least when timed, may hinder the learning of words. In our study, we selected two sets of 10 academic keywords matched for part of speech, length (number of characters), frequency (SUBTLEXus), and concreteness, and we asked 90 L1-Polish advanced-level English majors to use the keywords when writing sentences, timed essays (60 minutes), or untimed essays. First, all participants wrote a timed control essay (60 minutes) without keywords. Then different groups produced Timed essays (60 minutes; n=33), Untimed essays (n=24), or Sentences (n=33) using the two sets of glossed keywords (counterbalanced). The comparability of the participants in the three groups was ensured by matching them for proficiency in English (LexTALE) and for several measures derived from the control essay: VocD (productive lexical diversity), normed errors (productive accuracy), words per minute (productive written fluency), and holistic scores (overall quality of production). We measured lexical learning (depth and breadth) via an adapted Vocabulary Knowledge Scale (VKS) and a free association test. Cognitive load was measured in the three essay types (Control, Timed, Untimed) using the normed number of errors and holistic scores (TOEFL criteria). The numbers of errors and essay scores were obtained from two raters (interrater reliability Pearson's r = .78-.91). Generalized linear mixed models showed no difference in the breadth and depth of keyword knowledge after writing Sentences, Timed essays, or Untimed essays. The task-based measurements showed that Control and Timed essays had similar holistic scores, but that Untimed essays were of better quality than Timed essays; Untimed essays were also the most accurate, and Timed essays the most error-prone. In conclusion, using keywords in Timed, but not Untimed, essays increased cognitive load, leading to more errors and lower quality. Still, writing sentences and essays yielded similar lexical learning, and the difference in cognitive load between Timed and Untimed essays did not affect lexical acquisition.
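
As a small illustration of the inter-rater reliability check mentioned above, Pearson's r between two raters' holistic scores can be computed as follows (the scores here are placeholders, not the study's data):

```python
# Inter-rater reliability as a Pearson correlation between two raters' scores.
import numpy as np

rater1 = np.array([4.0, 3.5, 5.0, 2.5, 4.5, 3.0])  # placeholder holistic scores
rater2 = np.array([4.5, 3.0, 5.0, 3.0, 4.0, 3.5])
r = np.corrcoef(rater1, rater2)[0, 1]
print(f"inter-rater r = {r:.2f}")  # the study reports r = .78-.91 across measures
```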

Keywords: learning academic words, writing essays, cognitive load, english as an L2

Procedia PDF Downloads 73
468 Mapping the Turbulence Intensity and Excess Energy Available to Small Wind Systems over 4 Major UK Cities

Authors: Francis C. Emejeamara, Alison S. Tomlin, James Gooding

Abstract:

Due to the highly turbulent nature of urban air flows, and because turbines are likely to be located within the roughness sublayer of the urban boundary layer, proposed urban wind installations face major challenges compared to rural installations. The challenge of operating within turbulent winds can, however, be counteracted by the development of suitable gust-tracking control solutions. In order to assess the cost effectiveness of such controls, a detailed understanding of the urban wind resource, including its turbulent characteristics, is required. Estimating the ambient turbulence and the total kinetic energy available at different control response times is essential in evaluating the potential performance of wind systems within the urban environment, should effective control solutions be employed. However, high-resolution wind measurements within the urban roughness sublayer are uncommon, and detailed CFD modelling approaches are too computationally expensive to apply routinely on a city-wide scale. This paper therefore presents an alternative semi-empirical methodology for estimating the excess energy content (EEC) present in the complex and gusty urban wind. An analytical methodology for predicting the total wind energy available at a potential turbine site is proposed by assessing the relationship between turbulence intensity and EEC for different control response times. The semi-empirical model is then combined with an analytical methodology initially developed to predict mean wind speeds at various heights within the built environment, based on detailed mapping of its aerodynamic characteristics. The additional estimates of turbulence intensity and EEC allow a more complete assessment of the available wind resource. The methodology is applied to 4 UK cities, with results showing the potential of mapping turbulence intensities and the total wind energy available at different heights within each city. Considering the effect of ambient turbulence and the choice of wind system, the wind resource over neighbourhood regions (at 250 m uniform resolution) and building rooftops within the 4 cities was assessed, with results highlighting the promise of mapping potential turbine sites within each city.
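
A sketch of the two quantities being mapped, turbulence intensity and EEC, under common resource-assessment definitions (the paper's exact formulation may differ); the EEC proxy here compares the mean cubed wind speed resolved at the controller's response time with the cube of the mean flow:

```python
# Turbulence intensity TI = sigma_u / U, and a proxy for the excess energy content
# available to a gust-tracking controller with response time tau; data are synthetic.
import numpy as np

def turbulence_intensity(u):
    """TI: standard deviation of wind speed over its mean."""
    return np.std(u) / np.mean(u)

def excess_energy_proxy(u, dt, tau):
    """Mean cubed speed resolved at response time tau, minus the cube of the mean
    (proportional to the extra power per unit 0.5*rho*A)."""
    n = max(1, int(tau / dt))
    m = len(u) // n
    u_tau = u[: m * n].reshape(m, n).mean(axis=1)  # block-average to the response time
    return np.mean(u_tau ** 3) - np.mean(u_tau) ** 3

rng = np.random.default_rng(1)
u = np.clip(5.0 + rng.normal(0.0, 1.2, 36000), 0.1, None)  # synthetic 10 Hz record, 1 h
print(f"TI = {turbulence_intensity(u):.2f}")
for tau in (1.0, 10.0):
    print(f"EEC proxy at {tau} s response: {excess_energy_proxy(u, 0.1, tau):.2f}")
# slower control (larger tau) resolves fewer gusts, so the available excess energy drops
```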

Keywords: excess energy content, small-scale wind, turbulence intensity, urban wind energy, wind resource assessment

Procedia PDF Downloads 474
467 Experimental Study on Heat and Mass Transfer of Humidifier for Fuel Cell

Authors: You-Kai Jhang, Yang-Cheng Lu

Abstract:

This study makes three major contributions: the design of a new planar-membrane humidifier for the proton exchange membrane fuel cell (PEMFC), an index to measure the effectiveness (εT) of that humidifier, and an air compressor system that allows planar-membrane humidifier experiments to be replicated. The PEMFC, as a renewable energy technology, has become more and more important in recent years due to its reliability and durability. To maintain the efficiency of the fuel cell, the membrane of the PEMFC needs to be kept in a well-hydrated condition; maintaining proper membrane humidity is one of the key issues in optimizing a PEMFC. We developed a new humidifier to recycle water vapor from the cathode air outlet so as to maintain the moisture content of the cathode air inlet in a PEMFC. By measuring parameters such as the dry-side air outlet dew point temperature, dry-side air inlet temperature and humidity, wet-side air inlet temperature and humidity, and the differential pressure between the dry and wet sides, we calculated indices including the dew point approach temperature (DPAT), water flux (J), water recovery ratio (WRR), effectiveness (εT), and differential pressure (ΔP). Using these indices, we discuss six topics: sealing effects, flow rate effects, flow direction effects, channel effects, temperature effects, and humidity effects. Gas cylinders are used as the air supply in many humidifier studies; however, a gas cylinder depletes quickly at a 1 kW air flow rate, which makes replication difficult. In order to ensure highly stable air quality and better replication of experimental data, this study designed an air supply system to overcome this difficulty. The experimental results show that the best (lowest) rate of pressure loss of the humidifier is 0.133×10³ Pa(g)/min at a sealing torque of 25 N·m, and that humidifier performance is best at air flow rates of 30-40 LPM. The counter-flow humidifier moisturizes the dry-side inlet air more effectively than the parallel-flow humidifier. From the performance measurements of channel plates with various rib widths, it is found that the narrower the rib width, the better the humidifier performance. Increasing the channel width at the same hydraulic diameter (Dh) yields higher εT and lower ΔP. Moreover, increasing the dry-side air inlet temperature or humidity leads to lower εT; when the dry-side air inlet temperature exceeds 50 °C, this effect becomes even more pronounced.
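
The performance indices are not defined explicitly in the abstract; the following sketch uses the conventional definitions for membrane humidifiers (assumed here), with the effectiveness εT expressed in terms of humidity ratios:

```python
# Assumed conventional definitions for membrane-humidifier indices:
#   DPAT = T_dew(wet, in) - T_dew(dry, out)                   (smaller -> better)
#   J    = vapor flow gained by the dry stream
#   WRR  = J / vapor flow supplied on the wet side
#   eps_T = (w_dry_out - w_dry_in) / (w_wet_in - w_dry_in)    (effectiveness)

def effectiveness(w_dry_in, w_dry_out, w_wet_in):
    """Humidity-ratio effectiveness, analogous to heat-exchanger effectiveness."""
    return (w_dry_out - w_dry_in) / (w_wet_in - w_dry_in)

def water_recovery_ratio(mdot_v_dry_in, mdot_v_dry_out, mdot_v_wet_in):
    """Fraction of the wet-side vapor transferred to the dry stream."""
    return (mdot_v_dry_out - mdot_v_dry_in) / mdot_v_wet_in

# hypothetical operating point; humidity ratios in kg water per kg dry air
print(effectiveness(w_dry_in=0.002, w_dry_out=0.012, w_wet_in=0.018))  # -> 0.625
print(water_recovery_ratio(1.0e-4, 4.0e-4, 6.0e-4))                    # -> 0.5 (kg/s)
```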

Keywords: PEM fuel cell, water management, membrane humidifier, heat and mass transfer, humidifier performance

Procedia PDF Downloads 176