425 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator
Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić
Abstract:
Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, ultimately, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, performed on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated CIRS 062QA phantom and the QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software that enables fast and simple evaluation of CT QA parameters using the phantom provided with the machine. In addition, international recommendations contain further tests, which were done with the CIRS phantom, and legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%.
Spatial and contrast resolution tests must match the results obtained at commissioning; otherwise, the machine requires service. The image noise test result must fall within 20% of the base value. Slice thickness must meet the manufacturer's specifications, and with longitudinal transfer of a loaded table, the vertical deviation must not exceed 2 mm. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement is that the clinic set up its own QA programme with a minimum of testing; it remains the user's decision whether to implement additional testing, as recommended by international organizations, to improve the overall quality of the radiation treatment planning procedure. The quality of the CT images used for radiation treatment planning influences the delineation of the tumor, the calculation accuracy of the treatment planning system, and, finally, the delivery of radiation treatment to the patient. Keywords: CT simulator, radiotherapy, quality control, QA programme
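The tolerance checks listed in the Results can be sketched in code. This is a minimal, illustrative script: the limit values (5 HU, 10 HU, 5%, 20%) come from the abstract, while the function name, data layout, and sample numbers are hypothetical.

```python
# Illustrative sketch of the QC tolerance checks described in the Results.
# Limit values come from the abstract; function and field names are hypothetical.

def check_ct_qc(measured, baseline):
    """Compare measured QC values against commissioning baselines."""
    results = {}
    # CT number accuracy: within +/- 5 HU of the commissioning value
    results["ct_number"] = abs(measured["ct_number"] - baseline["ct_number"]) <= 5.0
    # Field uniformity: +/- 10 HU in the selected ROIs
    results["uniformity"] = all(
        abs(m - b) <= 10.0 for m, b in zip(measured["roi_hu"], baseline["roi_hu"])
    )
    # CT-to-ED conversion curve: each point within 5% of the commissioning curve
    results["ct_to_ed"] = all(
        abs(m - b) / abs(b) <= 0.05
        for m, b in zip(measured["ed_curve"], baseline["ed_curve"])
    )
    # Image noise: within 20% of the base value
    results["noise"] = abs(measured["noise"] - baseline["noise"]) / baseline["noise"] <= 0.20
    return results

measured = {"ct_number": 3.0, "roi_hu": [2.0, -4.0], "ed_curve": [1.02, 1.48], "noise": 11.0}
baseline = {"ct_number": 0.0, "roi_hu": [0.0, 0.0], "ed_curve": [1.00, 1.50], "noise": 10.0}
print(check_ct_qc(measured, baseline))
```

A periodic QC run would then flag any check returning False for service follow-up.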
Procedia PDF Downloads 534
424 Scale up of Isoniazid Preventive Therapy: A Quality Management Approach in Nairobi County, Kenya
Authors: E. Omanya, E. Mueni, G. Makau, M. Kariuki
Abstract:
HIV infection is the strongest risk factor for a person to develop TB. Isoniazid preventive therapy (IPT) for people living with HIV (PLHIV) not only reduces the individual patient's risk of developing active TB but also mitigates cross-infection. In Kenya, six months of IPT was recommended through the National TB, Leprosy and Lung Disease Program to treat latent TB. In spite of this recommendation by the national government, uptake of IPT among PLHIV remained low in Kenya at the end of 2015. The USAID/Kenya and East Africa Afya Jijini project, which supports 42 TB/HIV health facilities in Nairobi County, began addressing low uptake of IPT through Quality Improvement (QI) teams set up at the facility level. Quality is characterized by WHO as one of the four main connectors between health system building blocks and health system outputs. Afya Jijini implements the Kenya Quality Model for Health, in which QI teams are formed at the county, sub-county, and facility levels. The teams review facility performance to identify gaps in service delivery and use QI tools to monitor and improve performance. Afya Jijini supported the formation of these teams in 42 facilities and built the teams' capacity to review data and use QI principles to identify and address performance gaps. When the QI teams began working on improving IPT uptake among PLHIV, uptake was at 31.8%. The teams first conducted a root cause analysis using cause-and-effect diagrams, which help the teams brainstorm on and identify barriers to IPT uptake among PLHIV at the facility level. This is a participatory process in which program staff provide technical support to the QI teams in problem identification and problem-solving. The gaps identified were inadequate knowledge and skills on the use of IPT among health care workers, lack of awareness of IPT among patients, inadequate monitoring and evaluation tools, and poor quantification and forecasting of IPT commodities.
In response, Afya Jijini trained over 300 health care workers on the administration of IPT, supported patient education, supported quantification and forecasting of IPT commodities, and provided IPT data collection tools to help facilities monitor their performance. The facility QI teams held monthly meetings to monitor progress on the implementation of IPT and took corrective action when necessary. IPT uptake improved from 31.8% to 61.2% during the second year of the Afya Jijini project and to 80.1% during the third year of the project's support. The use of QI teams and root cause analysis to identify and address service delivery gaps, in addition to targeted program interventions and continual performance reviews, can be successful in increasing the uptake of TB-related services at health facilities. Keywords: isoniazid, quality, health care workers, people living with HIV
Procedia PDF Downloads 99
423 Teaching Academic Writing for Publication: A Liminal Threshold Experience Towards Development of Scholarly Identity
Authors: Belinda du Plooy, Ruth Albertyn, Christel Troskie-De Bruin, Ella Belcher
Abstract:
In the academy, scholarliness or intellectual craftsmanship is considered the highest level of achievement, culminating in consistent, successful publication in impactful, peer-reviewed journals and books. Scholarliness implies rigorous methods, systematic exposition, in-depth analysis and evaluation, and the highest level of critical engagement and reflexivity. However, being a scholar does not happen automatically when one becomes an academic or completes graduate studies. A graduate qualification is an indication of one's level of research competence but does not necessarily prepare one for the type of scholarly writing for publication required after a postgraduate qualification has been conferred. Scholarly writing for publication requires a high-level skill set and a specific mindset, which must be intentionally developed. The rite of passage to becoming a scholar is an iterative process with liminal spaces, thresholds, transitions, and transformations. The journey from researcher to published author is often fraught with rejection, insecurity, and disappointment and requires resilience and tenacity from those who eventually triumph. It cannot be achieved without support, guidance, and mentorship. In this article, the authors use collective auto-ethnography (CAE) to describe the phases and types of liminality encountered during the liminal journey toward scholarship. The authors speak as long-time facilitators of Writing for Academic Publication (WfAP) capacity development events (training workshops and writing retreats) presented at South African universities. Their WfAP facilitation practice is structured around experiential learning principles that allow them to act as critical reading partners and reflective witnesses for the writer-participants of their WfAP events.
They identify three facilitation features essential to holding a generative, liminal, and transformational writing space for novice academic writers, enabling their safe passage through the various liminal spaces they encounter during their scholarly development journey. These features are: facilitators should be agents of disruption and liminality while also guiding writers through these liminal spaces; there should be mutual trust and respect, and shared responsibility and accountability, so that writers produce publication-worthy scholarly work; and this can only be accomplished with the continued application of high levels of sensitivity and discernment by WfAP facilitators. These are key features of successful WfAP scholarship training events, where focused, individual input triggers personal and professional transformational experiences, which in turn translate into high-quality scholarly outputs. Keywords: academic writing, liminality, scholarship, scholarliness, threshold experience, writing for publication
Procedia PDF Downloads 44
422 Hybrid versus Cemented Fixation in Total Knee Arthroplasty: Mid-Term Follow-Up
Authors: Pedro Gomes, Luís Sá Castelo, António Lopes, Marta Maio, Pedro Mota, Adélia Avelar, António Marques Dias
Abstract:
Introduction: Total Knee Arthroplasty (TKA) has contributed to the improvement of patients' quality of life, although it has been associated with some complications, including component loosening and polyethylene wear. To prevent these complications, various fixation techniques have been employed. Hybrid TKA, with a cemented tibial and a cementless femoral component, has shown favourable outcomes, although consensus is still lacking in the literature. Objectives: To evaluate the clinical and radiographic results of hybrid versus cemented TKA at an average 5-year follow-up and to analyse the survival rates. Methods: A retrospective study of 125 TKAs performed in 92 patients at our institution between 2006 and 2008, with a minimum follow-up of 2 years. The same prosthesis was used in all knees. Hybrid fixation was performed in 96 knees, with a mean follow-up of 4.8±1.7 years (range, 2-8.3 years), and 29 TKAs received fully cemented fixation, with a mean follow-up of 4.9±1.9 years (range, 2-8.3 years). Selection for hybrid fixation was nonrandomized and based on femoral component fit. The Oxford Knee Score (OKS, 0-48) was used for clinical assessment, and the Knee Society Roentgenographic Evaluation Scoring System for the radiographic outcome. The survival rate was calculated using the Kaplan-Meier method, with failure defined as revision of either the tibial or femoral component, for aseptic causes alone and for all causes (aseptic and infection). Survivorship data were analysed using the log-rank test. SPSS (v22) was used for statistical analysis. Results: The hybrid group consisted of 72 females (75%) and 24 males (25%), with a mean age of 64±7 years (range, 50-78 years). The preoperative diagnosis was osteoarthritis (OA) in 94 knees (98%), rheumatoid arthritis (RA) in 1 knee (1%), and posttraumatic arthritis (PTA) in 1 knee (1%). The fully cemented group consisted of 23 females (79%) and 6 males (21%), with a mean age of 65±7 years (range, 47-78 years).
The preoperative diagnosis was OA in 27 knees (93%) and PTA in 2 knees (7%). The Oxford Knee Scores were similar between the 2 groups (hybrid 40.3±2.8 versus cemented 40.2±3). The percentage of radiolucencies seen on the femoral side was slightly higher in the cemented group (20.7%) than in the hybrid group (11.5%, p=0.223). In the cemented group, there were significantly more Zone 4 radiolucencies than in the hybrid group (13.8% versus 2.1%, p=0.026). Revisions for all causes were performed in 4 of the 96 hybrid TKAs (4.2%) and 1 of the 29 cemented TKAs (3.5%). The reason for revision was aseptic loosening in 3 hybrid TKAs and 1 cemented TKA; revision was performed for infection in 1 hybrid TKA. The hybrid group demonstrated a 7-year survival rate of 93% for all-cause failures and 94% for aseptic loosening. No significant difference in survivorship was seen between the groups for all-cause or aseptic failures. Conclusions: Hybrid TKA yields intermediate-term results and survival rates similar to fully cemented total knee arthroplasty and remains a viable option in knee joint replacement surgery. Keywords: hybrid, survival rate, total knee arthroplasty, orthopaedic surgery
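The survival rates above were obtained with the Kaplan-Meier method, which can be sketched in a few lines of plain Python. This is a minimal illustration of the estimator itself; the event times below are invented, not the study's data.

```python
# Minimal Kaplan-Meier estimator, as used for the survival analysis above.
# The follow-up times and events below are illustrative, not the study's data.

def kaplan_meier(times, events):
    """Return (time, survival probability) points.
    times: follow-up time per knee; events: 1 = revision (failure), 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, e in data if tt == t)
        if deaths > 0:
            # Survival drops only at times where a failure occurred
            survival *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
        i += removed
    return curve

# Example: 10 knees, revisions (event=1) at years 2 and 5, the rest censored.
times  = [2, 3, 4, 5, 5, 6, 6, 7, 8, 8]
events = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(kaplan_meier(times, events))
```

Censored knees (lost to follow-up or still unrevised) leave the risk set without lowering the survival curve, which is what distinguishes this estimator from a naive failure fraction.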
Procedia PDF Downloads 594
421 Comparison of GIS-Based Soil Erosion Susceptibility Models Using Support Vector Machine, Binary Logistic Regression and Artificial Neural Network in the Southwest Amazon Region
Authors: Elaine Lima Da Fonseca, Eliomar Pereira Da Silva Filho
Abstract:
The modeling of areas susceptible to soil loss by hydro-erosive processes provides a simplified instrument of reality for predicting future behavior from the observation and interaction of a set of geoenvironmental factors. The models of areas with potential for soil loss will be obtained through binary logistic regression, artificial neural networks, and support vector machines. The municipality of Colorado do Oeste, in the south of the western Amazon, was chosen because of soil degradation caused by anthropogenic activities, such as agriculture, road construction, overgrazing, and deforestation, and because of its environmental and socioeconomic configuration. Initially, a soil erosion inventory map will be constructed through various field investigations, including the use of remotely piloted aircraft, orbital imagery, and the PLANAFLORO/RO database. 100 sampling units with the presence of erosion will be selected based on the assumptions indicated in the literature and, to complement the dichotomous analysis, 100 units with no erosion will be randomly designated. The next step will be the selection of the predictive parameters that jointly, directly, or indirectly exert some influence on the mechanism of occurrence of soil erosion events. The chosen predictors are altitude, slope, aspect (orientation of the slope), slope curvature, compound topographic index, stream power index, lineament density, normalized difference vegetation index, drainage density, lithology, soil type, erosivity, and ground surface temperature. After evaluating the relative contribution of each predictor variable, the erosion susceptibility model will be applied to the municipality of Colorado do Oeste, Rondônia, through the SPSS Statistics 26 software.
The model will be evaluated through the Cox & Snell R², the Nagelkerke R², the Hosmer-Lemeshow test, the log-likelihood value, and the Wald test, in addition to analysis of the confusion matrix, the ROC curve, and cumulative gains, according to the model specification. The synthesis map of potential soil erosion risk resulting from the models will be validated by means of Kappa indices, accuracy, and sensitivity, as well as by field verification of the erosion susceptibility classes using drone photogrammetry. Thus, it is expected to obtain a map of the following erosion susceptibility classes: very low, low, moderate, high, and very high. This map may constitute a screening tool to identify areas where more detailed investigations need to be carried out, so that social resources can be applied more efficiently. Keywords: modeling, susceptibility to erosion, artificial intelligence, Amazon
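The Cox & Snell and Nagelkerke R² statistics named above follow directly from the log-likelihoods of the null and fitted logistic models. A minimal sketch, using the standard formulas with illustrative numbers (the 100 erosion / 100 no-erosion design makes the null model a balanced coin, so its log-likelihood is n·ln 0.5; the fitted log-likelihood of -80 is assumed for illustration):

```python
import math

# Sketch of the model-fit statistics named above, computed from the null and
# fitted model log-likelihoods. The fitted log-likelihood value is illustrative.

def cox_snell_r2(ll_null, ll_model, n):
    """Cox & Snell R^2 = 1 - exp(2 * (LL_null - LL_model) / n)."""
    return 1.0 - math.exp(2.0 * (ll_null - ll_model) / n)

def nagelkerke_r2(ll_null, ll_model, n):
    """Nagelkerke R^2 rescales Cox & Snell to a 0..1 maximum."""
    max_r2 = 1.0 - math.exp(2.0 * ll_null / n)
    return cox_snell_r2(ll_null, ll_model, n) / max_r2

# 200 sample units: 100 with erosion, 100 without (balanced dichotomous sample)
n = 200
ll_null = n * math.log(0.5)   # null model predicts 0.5 for every unit
ll_model = -80.0              # assumed fitted model log-likelihood
print(round(cox_snell_r2(ll_null, ll_model, n), 3))
print(round(nagelkerke_r2(ll_null, ll_model, n), 3))
```

Because the Cox & Snell statistic cannot reach 1 even for a perfect model, the Nagelkerke version is usually reported alongside it.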
Procedia PDF Downloads 66
420 Advancing the Analysis of Physical Activity Behaviour in Diverse, Rapidly Evolving Populations: Using Unsupervised Machine Learning to Segment and Cluster Accelerometer Data
Authors: Christopher Thornton, Niina Kolehmainen, Kianoush Nazarpour
Abstract:
Background: Accelerometers are widely used to measure physical activity behaviour, including in children. The traditional method for processing acceleration data uses cut points, relying on calibration studies that relate the quantity of acceleration to energy expenditure. Because these relationships do not generalise across diverse populations, they must be parametrised for each subpopulation, including different age groups, which is costly and makes studies across diverse populations difficult. A data-driven approach that allows physical activity intensity states to emerge from the data under study, without relying on parameters derived from external populations, offers a new perspective on this problem and potentially improved results. We evaluated the data-driven approach in a diverse population with a range of rapidly evolving physical and mental capabilities, namely very young children (9-38 months old), for whom this new approach may be particularly appropriate. Methods: We applied an unsupervised machine learning approach, a hidden semi-Markov model (HSMM), to segment and cluster the accelerometer data recorded from 275 children with a diverse range of physical and cognitive abilities. The HSMM was configured to identify a maximum of six physical activity intensity states, and the output of the model was the time spent by each child in each of the states. For comparison, we also processed the accelerometer data using published cut points with available thresholds for the population. This provided time estimates for each child's sedentary time (SED), light physical activity (LPA), and moderate-to-vigorous physical activity (MVPA). Data on the children's physical and cognitive abilities were collected using the Paediatric Evaluation of Disability Inventory (PEDI-CAT).
Results: The HSMM identified two inactive states (INS, comparable to SED), two lightly active, long-duration states (LAS, comparable to LPA), and two short-duration, high-intensity states (HIS, comparable to MVPA). Overall, the children spent on average 237/392 minutes per day in INS/SED, 211/129 minutes per day in LAS/LPA, and 178/168 minutes per day in HIS/MVPA. We found that INS overlapped with 53% of SED, LAS overlapped with 37% of LPA, and HIS overlapped with 60% of MVPA. We also examined the correlation between the time spent by a child in either HIS or MVPA and their physical and cognitive abilities. We found that HIS was more strongly correlated with physical mobility (R² = 0.5 for HIS versus 0.28 for MVPA), cognitive ability (R² = 0.31 versus 0.15), and age (R² = 0.15 versus 0.09), indicating increased sensitivity to key attributes associated with a child's mobility. Conclusion: An unsupervised machine learning technique can segment and cluster accelerometer data according to the intensity of movement at a given time. It provides a potentially more sensitive, appropriate, and cost-effective approach to analysing physical activity behaviour in diverse populations, compared to the current cut-points approach. This, in turn, supports research that is more inclusive of diverse populations. Keywords: physical activity, machine learning, under 5s, disability, accelerometer
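The R² comparison reported above is the squared Pearson correlation between time in a state and an ability score. A minimal, self-contained sketch of that computation; the paired values below are invented for illustration, not the study's measurements.

```python
# Sketch of the R-squared comparison reported above: the squared Pearson
# correlation between time in a high-intensity state and an ability score.
# The paired values are hypothetical.

def r_squared(x, y):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov * cov / (var_x * var_y)

minutes_in_his  = [120, 150, 180, 200, 240]  # hypothetical minutes/day in HIS
mobility_score  = [35, 40, 48, 55, 62]       # hypothetical PEDI-CAT mobility scores
print(round(r_squared(minutes_in_his, mobility_score), 2))
```

A larger R² for HIS than for MVPA against the same ability score is what the study reads as greater sensitivity of the data-driven states.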
Procedia PDF Downloads 210
419 Applications of Digital Tools, Satellite Images and Geographic Information Systems in Data Collection of Greenhouses in Guatemala
Authors: Maria A. Castillo H., Andres R. Leandro, Jose F. Bienvenido B.
Abstract:
During the last 20 years, the globalization of economies, population growth, and the increase in the consumption of fresh agricultural products have generated greater demand for ornamentals, flowers, fresh fruits, and vegetables, mainly from tropical areas. This market situation has demanded greater competitiveness and control over production, with more efficient protected-agriculture technologies that provide greater productivity and make it possible to guarantee the required quality and quantity in a constant and sustainable way. Guatemala, located in the north of Central America, is one of the largest exporters of agricultural products in the region and exports fresh vegetables, flowers, fruits, ornamental plants, and foliage, most of which are grown in greenhouses. Although there are no official agricultural statistics on greenhouse production, several theses and conference reports have presented consistent estimates. A wide range of protection structures and roofing materials are used, from the most basic and simple ones for rain control to highly technical and automated structures connected with remote sensors for monitoring and control of crops. Given this breadth of technological models, it is necessary to analyze georeferenced data on the cultivated area, the different existing models, and the covering materials, integrated with altitude, climate, and soil data. The georeferenced registration of the production units, data collection with digital tools, the use of satellite images, and geographic information systems (GIS) provide reliable tools for producing more complete, agile, and dynamic information maps. This study details a proposed methodology for gathering georeferenced data on high protection structures (greenhouses) in Guatemala, structured in four phases: diagnosis of available information, definition of the geographic frame, selection of satellite images, and integration with a geographic information system (GIS).
It takes particular account of the current lack of complete data, which must be overcome to obtain a reliable decision-making system; this gap is addressed by the proposed methodology. A summary of the results is presented for each phase, and finally, an evaluation with some improvements and tentative recommendations for further research is added. The main contribution of this study is to propose a methodology that reduces the gap in georeferenced data on protected agriculture in this specific area, where data are not generally available, and provides data of better quality, traceability, accuracy, and certainty for strategic agricultural decision-making, applicable to other crops, production models, and similar or neighboring geographic areas. Keywords: greenhouses, protected agriculture, GIS, Guatemala, satellite image, digital tools, precision agriculture
Procedia PDF Downloads 194
418 The Lonely Entrepreneur: Antecedents and Effects of Social Isolation on Entrepreneurial Intention and Output
Authors: Susie Pryor, Palak Sadhwani
Abstract:
The purpose of this research is to provide the foundations for a broad research agenda examining the role loneliness plays in entrepreneurship. While qualitative research in entrepreneurship incidentally captures the existence of loneliness as a part of the lived reality of entrepreneurs, to the authors' knowledge, no academic work has to date explored this construct in this context. Moreover, many of the groups reporting high levels of loneliness (women, ethnic minorities, immigrants, and people with low income or low education) are those currently driving small business growth in the United States. Loneliness is a persistent state of emotional distress which results from feelings of estrangement and rejection or develops in the absence of social relationships and interactions. Empirical work finds links between loneliness and depression, suicide and suicide ideation, anxiety, hostility and passiveness, lack of communication and adaptability, shyness, poor social skills and unrealistic social perceptions, self-doubts, fear of rejection, and negative self-evaluation. Lonely individuals have been found to exhibit lower levels of self-esteem, higher levels of introversion, lower affiliative tendencies, less assertiveness, higher sensitivity to rejection, a heightened external locus of control, intensified feelings of regret and guilt over past events, and rigid and overly idealistic goals concerning the future. These characteristics are likely to impact entrepreneurs and their work. Research identifies some key dangers of loneliness. Loneliness damages human love and intimacy, can disturb and distract individuals from channeling creative and effective energies in a meaningful way, may result in the formation of premature, poorly thought out, and at times even irresponsible decisions, and can produce hardened and desensitized individuals with compromised health and quality of life.
The current study utilizes meta-analysis and text analytics to distinguish loneliness from related constructs (e.g., social isolation) and to categorize the antecedents and effects of loneliness across subpopulations. This work has the potential to contribute materially to the field of entrepreneurship by cleanly defining constructs and providing foundational background for future research. It offers a richer understanding of the evolution of loneliness and related constructs over the life cycle of entrepreneurial start-up and development. Further, it suggests preliminary avenues for exploration and methods of discovery that will result in knowledge useful to the field of entrepreneurship, serving entrepreneurs and those who work with them, as well as academics interested in the topics of loneliness and entrepreneurship. The study adopts a grounded theory approach. Keywords: entrepreneurship, grounded theory, loneliness, meta-analysis
Procedia PDF Downloads 112
417 The Participation of Experts in the Criminal Policy on Drugs: The Proposal of a Cannabis Regulation Model in Spain by the Cannabis Policy Studies Group
Authors: Antonio Martín-Pardo
Abstract:
With regard to the context of this paper, it is noteworthy that the current criminal policy model in which we are immersed, labeled by part of the doctrine as the citizen-security model, is characterized by a marked tendency to discredit expert knowledge. At the time of legislative drafting, this type of technical knowledge has been displaced by common sense and the daily experience of the people, as well as by excessive attention to the short-term political effects of the law. Despite this adverse criminal-policy scene, valuable efforts can still be found on the part of experts to bring some rationality to legislative development. This is the case of the proposal for a new cannabis regulation model in Spain carried out by the Cannabis Policy Studies Group (hereinafter referred to as 'GEPCA'). GEPCA is a multidisciplinary group composed of authors with different orientations, trajectories, and interests, but with a common minimum objective: the conviction that the current situation regarding cannabis is unsustainable and that a rational legislative solution must be given to the growing social pressure for the regulation of its consumption and production. This paper details the main lines of this technical proposal for the purpose of its dissemination and discussion at the Congress. The basic methodology of the proposal is inductive-expository. Firstly, we offer a brief but solid contextualization of the situation of cannabis in Spain. This contextualization touches on issues such as the national regulatory situation and its relationship with the international context; the criminal, judicial, and penitentiary impact of the supply and consumption of cannabis; and the therapeutic use of the substance, among others. Secondly, we address the substance of the proposal, detailing the three main cannabis access channels that are proposed.
Namely: the regulated market, associations of cannabis users, and personal self-cultivation. For each of these options, especially the first two, special attention is paid both to the production and processing of the substance and to the necessary administrative control of the activity. Finally, in a third block, some notes are given on a series of subjects that surround the access options mentioned above and give fullness and coherence to the proposal. Among those related issues are the consumption and possession of the substance; the advertising and promotion of cannabis; consumption in areas of special risk (e.g., work or driving); the tax regime; and the need to articulate evaluation instruments for the entire process. The main conclusion drawn from the analysis of the proposal is the unsustainability of the current repressive system, clearly unsuccessful, and the need to develop new access routes to cannabis that guarantee both public health and the rights of people who have freely chosen to consume it. Keywords: cannabis regulation proposal, cannabis policies studies group, criminal policy, expertise participation
Procedia PDF Downloads 119
416 Evaluation of Ocular Changes in Hypertensive Disorders of Pregnancy
Authors: Rajender Singh, Nidhi Sharma, Aastha Chauhan, Meenakshi Barsaul, Jyoti Deswal, Chetan Chhikara
Abstract:
Introduction: Pre-eclampsia and eclampsia are hypertensive disorders of pregnancy with multisystem involvement and are common causes of morbidity and mortality in obstetrics. It is believed that changes in the retinal arterioles may indicate similar changes in the placenta. Therefore, this study was undertaken to evaluate the ocular manifestations in cases of pre-eclampsia and eclampsia and to deduce any association between retinal changes and blood pressure, severity of disease, gravidity, proteinuria, and other lab parameters, so that a better approach could be devised to ensure maternal and fetal well-being. Materials and Methods: This was a hospital-based cross-sectional study conducted over a period of one year, from April 2021 to May 2022. 350 admitted patients with diagnosed pre-eclampsia, eclampsia, or pre-eclampsia superimposed on chronic hypertension were included in the study. A pre-structured proforma was used. After taking consent and an ocular history, a bedside examination was done to record visual acuity, pupillary size, corneal curvature, field of vision, and intraocular pressure. Dilated fundus examination was done with direct and indirect ophthalmoscopes. Age, parity, blood pressure, proteinuria, platelet count, and liver and kidney function tests were recorded. Only the patients with positive findings were followed up 72 hours and 6 weeks after termination of pregnancy. Results: The mean age of the patients was 26.18±4.33 years (range 18-39 years). 157 (44.9%) were primigravida, while 193 (55.1%) were multigravida. 53 (15.1%) patients had eclampsia, 128 (36.5%) had mild pre-eclampsia, 128 (36.5%) had severe pre-eclampsia, and 41 (11.7%) had chronic hypertension with superimposed pre-eclampsia. Retinal changes were found in 208 patients (59.42%), and grade I changes were the most common: 82 (23.14%) patients had grade I changes, 75 (21.4%) had grade II changes, 41 (11.71%) had grade III changes, and 11 (3.14%) had serous retinal detachment (grade IV changes).
36 patients had unaided visual acuity worse than 6/9; of these, 17 had refractive error and 19 (5.4%) had varying degrees of retinal changes. 3 (0.85%) of the 350 patients had an abnormal field of vision in both eyes; all 3 had eclampsia and bilateral exudative retinal detachment. At day 4, retinopathy had resolved in 10 patients, and 3 patients had improvement in visual acuity. At 6 weeks, retinopathy had resolved spontaneously in all patients except for the persistence of grade II changes in 23 patients with chronic hypertension with superimposed pre-eclampsia, while visual acuity and field of vision returned to normal in all patients. Pupillary size, intraocular pressure, and corneal curvature were within normal limits at all examinations. The study showed a statistically significant positive correlation between fundus findings and severity of disease (p<0.05) and mean arterial pressure (p<0.005). Primigravida patients had more retinal changes than multigravida patients. A significant association was found between fundus changes and thrombocytopenia and deranged liver and kidney function tests (p<0.005). Conclusion: As the severity of pre-eclampsia and eclampsia increases, the incidence of retinopathy also increases, affecting the visual acuity and visual fields of the patients. Thus, timely ocular examination should be done in all such cases to prevent complications. Keywords: eclampsia, hypertensive, ocular, pre-eclampsia
Procedia PDF Downloads 78
415 Analyzing Concrete Structures by Using Laser-Induced Breakdown Spectroscopy
Authors: Nina Sankat, Gerd Wilsch, Cassian Gottlieb, Steven Millar, Tobias Guenther
Abstract:
Laser-Induced Breakdown Spectroscopy (LIBS) is a combination of laser ablation and optical emission spectroscopy, which in principle can simultaneously analyze all elements of the periodic table. Materials can be analyzed in terms of chemical composition in a two-dimensional, time-efficient, and minimally destructive manner. These advantages predestine LIBS as a monitoring technique in the field of civil engineering. The decreasing service life of concrete infrastructure is a continuously growing problem. A variety of intruding, harmful substances can damage the reinforcement or the concrete itself. To ensure a sufficient service life, regular monitoring of the structure is necessary. LIBS offers many applications for the successful examination of the condition of concrete structures. Among these applications are the 2D evaluation of chlorine, sodium, and sulfur concentrations, the identification of carbonation depths, and the representation of the heterogeneity of concrete. LIBS obtains this information by using a pulsed laser with short pulses of low energy (a few mJ), which is focused on the surface of the analyzed specimen; only optical access is needed. Because of the high power density (a few GW/cm²), a minimal amount of material is vaporized and transformed into a plasma. This plasma emits light depending on the chemical composition of the vaporized material. By analyzing the emitted light, information for every measurement point is gained. The chemical composition of the scanned area is visualized in a 2D map with spatial resolutions down to 0.1 mm x 0.1 mm. These 2D maps can be converted into classic depth profiles, as typically seen for chloride-concentration results provided by chemical analysis such as potentiometric titration.
However, the 2D visualization offers many advantages, such as illustrating chlorine-carrying cracks, directly imaging the carbonation depth, and, in general, allowing the separation of the aggregates from the cement paste. By calibrating the LIBS system, not only qualitative but also quantitative results can be obtained. These quantitative results can also be referred to the cement paste alone, excluding the aggregates. An additional advantage of LIBS is its mobility. With the mobile system located at BAM, on-site measurements are feasible. The mobile LIBS system has already been used to obtain chloride, sodium, and sulfur concentrations on site at parking decks, bridges, and sewage treatment plants, even under harsh conditions such as ongoing construction work or rough weather. All these prospects make LIBS a promising method to secure the integrity of infrastructure in a sustainable manner.
Keywords: concrete, damage assessment, harmful substances, LIBS
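The 2D-map-to-depth-profile conversion mentioned above can be sketched as a simple row average: each row of the elemental map corresponds to one depth step (0.1 mm spatial resolution, per the abstract), and averaging across the lateral positions yields the classic depth profile. The map values below are hypothetical intensities, not measured data.

```python
# Sketch: collapse a LIBS 2D chlorine map (rows = depth steps, columns
# = lateral positions) into a depth profile by row-averaging. The
# 0.1 mm step matches the resolution stated in the abstract; the
# intensity values are hypothetical.

def depth_profile(cl_map, step_mm=0.1):
    """Return (depth_mm, mean_signal) per row of a 2D concentration map."""
    return [(i * step_mm, sum(row) / len(row)) for i, row in enumerate(cl_map)]

# hypothetical 4 x 5 chlorine map: signal decays with depth
cl_map = [
    [0.9, 1.0, 0.8, 0.9, 1.0],
    [0.6, 0.7, 0.5, 0.6, 0.7],
    [0.3, 0.4, 0.2, 0.3, 0.4],
    [0.1, 0.1, 0.1, 0.1, 0.1],
]
for depth, mean in depth_profile(cl_map):
    print(f"depth {depth:.1f} mm: mean Cl signal {mean:.2f}")
```

Averaging over the lateral direction is also what makes the 2D map comparable with titration-based profiles, which inherently mix aggregate and paste over each depth slice.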
Procedia PDF Downloads 176
414 Numerical Investigation of the Boundary Conditions at Liquid-Liquid Interfaces in the Presence of Surfactants
Authors: Bamikole J. Adeyemi, Prashant Jadhawar, Lateef Akanji
Abstract:
Liquid-liquid interfacial flow is an important process with applications across many spheres. One such application is residual oil mobilization, where crude oil and low salinity water are emulsified, owing to lowered interfacial tension, under conditions of low shear rates. The amphiphilic components (asphaltenes and resins) in crude oil are considered to assemble at the interface between the two immiscible liquids. To justify emulsification, drag, and snap-off suppression as the main effects of low salinity water, mobilization of residual oil is visualized as thickening and slip of the wetting phase at the brine/crude oil interface, which results in the squeezing and drag of the non-wetting phase to the pressure sinks. Meanwhile, defining the boundary conditions for such a system can be very challenging, since the interfacial dynamics depend not only on interfacial tension but also on the flow rate. Hence, understanding the flow boundary condition at the brine/crude oil interface is an important step towards defining the influence of low salinity water composition on residual oil mobilization. This work presents a numerical evaluation of three slip boundary conditions that may apply at liquid-liquid interfaces. A mathematical model was developed to describe the evolution of a viscoelastic interfacial thin liquid film. The base model is developed by the asymptotic expansion of the full Navier-Stokes equations for fluid motion due to gradients of surface tension. This model was upscaled to describe the dynamics of the film surface deformation. Subsequently, Jeffrey’s model was integrated into the formulations to account for viscoelastic stress within a long-wave approximation of the Navier-Stokes equations. To study the fluid response to a prescribed disturbance, a linear stability analysis (LSA) was performed. The dispersion relation and the corresponding characteristic equation for the growth rate were obtained.
Three boundary conditions (slip, 1; locking, -1; and no-slip, 0) were examined using the resulting characteristic equation. Also, the dynamics of the evolved interfacial thin liquid film were numerically evaluated by considering the influence of the boundary conditions. The linear stability analysis shows that the boundary conditions of such systems are greatly impacted by the presence of amphiphilic molecules when three different values of interfacial tension are tested. The results for the slip and locking conditions are consistent with the fundamental solution representation of the diffusion equation, where there is film decay. The interfacial films at both boundary conditions respond to exposure time in a similar manner, with an increasing growth rate that results in the formation of more droplets with time. By contrast, the no-slip boundary condition yielded unbounded growth and was not affected by interfacial tension.
Keywords: boundary conditions, liquid-liquid interfaces, low salinity water, residual oil mobilization
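The numerical step implied above — extracting a growth rate from a characteristic equation for each boundary condition — can be sketched generically. The paper's actual characteristic equation is not given in the abstract, so the quadratic below is purely illustrative: it only demonstrates the standard LSA workflow of assembling the polynomial for a wavenumber k and a slip coefficient beta (slip 1, no-slip 0, locking -1), then taking the root with the largest real part as the growth rate.

```python
import cmath

# Illustrative sketch only (NOT the paper's characteristic equation):
# a hypothetical quadratic a*s^2 + b*s + c = 0 in the growth rate s,
# parameterized by wavenumber k, slip coefficient beta, and a stand-in
# interfacial-tension parameter sigma.

def growth_rate(k, beta, sigma=1.0):
    """Largest real part among the roots of the hypothetical
    characteristic equation; positive means the perturbation grows."""
    a = 1.0
    b = sigma * k ** 2 - beta   # hypothetical damping term
    c = -beta * k ** 2          # hypothetical boundary forcing term
    disc = cmath.sqrt(b * b - 4 * a * c)
    roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
    return max(r.real for r in roots)

for name, beta in [("slip", 1), ("no-slip", 0), ("locking", -1)]:
    rates = [round(growth_rate(k, beta), 3) for k in (0.5, 1.0, 2.0)]
    print(f"{name:8s} growth rates: {rates}")
```

In a real LSA the coefficients come from the dispersion relation of the long-wave film equations; the workflow (roots of the characteristic polynomial, dominant real part, sweep over k and boundary condition) is the same.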
Procedia PDF Downloads 129
413 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design, and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues of the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined exhaustively by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying or adapting the predefined profiles, and design results are affected, positively or negatively, without any warning about it. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest variables during energy modelling and when interpreting calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC system, so occupants cannot interact with it.
More in detail, starting from the adopted schedules, created from questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request. Then the different consumption entries are analyzed, and for the most interesting cases the calibration indexes are also compared. Moreover, the same simulations are carried out for the optimal refurbishment solution, and the variation in the predicted energy savings and global cost reduction is evidenced. This parametric study aims to underline the effect of the modelling assumptions made during the description of thermal zones on the evaluation of performance indexes.
Keywords: energy simulation, modelling calibration, occupant behavior, university building
Procedia PDF Downloads 141
412 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Using High-Viscosity Liquids
Authors: Ayalew Yimam Ali
Abstract:
The Y-shaped microchannel system is used to mix fluids of low or high viscosity, and laminar flow with high-viscosity water-glycerol fluids makes mixing at the entrance Y-junction region a challenging issue. Acoustic streaming (AS) is a time-averaged, steady, second-order flow phenomenon that can produce a rolling motion in the microchannel when a low-frequency acoustic transducer induces an acoustic wave in the flow field; it is a promising strategy to enhance diffusive mass transfer and mixing performance under laminar flow conditions. In this study, molds for the 3D trapezoidal structure, with 3D sharp-edge tip angles of 30° and a 0.3 mm spine sharp-edge tip depth, were manufactured from PMMA (polymethylmethacrylate) glass with advanced CNC cutting tools, and the microchannel was fabricated from PDMS (polydimethylsiloxane); the structure extends longitudinally along the top surface of the Y-junction mixing region to visualize the 3D rolling steady acoustic streaming and to evaluate mixing performance with high-viscosity miscible fluids. The 3D acoustic streaming flow patterns and mixing enhancement were investigated using the micro-particle image velocimetry (μPIV) technique with different spine depth lengths, channel widths, high volume flow rates, oscillation frequencies, and amplitudes. The velocity and vorticity flow fields show that a pair of 3D counter-rotating streaming vortices is created around the trapezoidal spine structure, with vorticity maps up to 8 times higher than in the case without acoustic streaming in the Y-junction with the high-viscosity water-glycerol mixture fluids.
The mixing experiments were performed using a fluorescent green dye solution with de-ionized water on one inlet side and de-ionized water-glycerol with different mass-weight percentage ratios on the other inlet side of the Y-channel, and performance was evaluated via the degree of mixing at different amplitudes, flow rates, frequencies, and spine sharp-tip edge angles using the grayscale value of pixel intensity in MATLAB. The degree of mixing (M) was found to improve significantly with acoustic streaming, to 96.8% from 67.42% without acoustic streaming, in the case of a 0.0986 μl/min flow rate, 12 kHz frequency, and 40 V oscillation amplitude at y = 2.26 mm. The results suggest the creation of a new 3D steady streaming rolling motion at high volume flow rates around the entrance junction mixing region, which promotes the mixing of two similar high-viscosity fluids inside the microchannel that laminar flow alone is unable to mix.
Keywords: nanofabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement
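The grayscale-intensity mixing index described above (computed in the study with MATLAB) can be sketched in a few lines. A common definition, assumed here since the abstract does not state the exact formula, is M = 1 - sigma/sigma0: the standard deviation of pixel intensity across the channel cross-section, normalized by the fully unmixed value. The pixel rows below are hypothetical.

```python
import math

# Sketch of a degree-of-mixing index from grayscale pixel intensities.
# ASSUMED definition: M = 1 - sigma/sigma0, where sigma is the pixel
# intensity standard deviation of the measured cross-section and
# sigma0 that of the completely unmixed state. Pixel data hypothetical.

def degree_of_mixing(pixels, unmixed_pixels):
    def std(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return 1.0 - std(pixels) / std(unmixed_pixels)

unmixed = [0, 0, 0, 0, 255, 255, 255, 255]          # dye on one side only
well_mixed = [120, 130, 125, 128, 131, 126, 124, 129]
print(f"M = {degree_of_mixing(well_mixed, unmixed):.3f}")
```

M approaches 1 for a uniform cross-section and 0 for the unmixed inlet condition, which is the behavior behind the reported jump from 67.42% to 96.8%.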
Procedia PDF Downloads 33
411 An Unusual Case of Wrist Pain: Idiopathic Avascular Necrosis of the Scaphoid, Preiser’s Disease
Authors: Adae Amoako, Daniel Montero, Peter Murray, George Pujalte
Abstract:
We present the case of a 42-year-old, right-handed Caucasian male who presented to a medical orthopedics clinic with left wrist pain. The patient indicated that the pain had started two months prior to the visit. He could only remember helping a friend move furniture prior to the onset of pain. Examination of the left wrist showed limited extension compared to the right. There was clicking with flexion and extension of the wrist on the dorsal aspect. Mild tenderness was noticed over the distal radioulnar joint. There was pain on ulnar and radial deviation on provocation. Initial 4-view x-rays of the left wrist showed mild radiocarpal and scapho-trapezium-trapezoid (ST-T) osteoarthritis, with subchondral cysts seen in the lunate and scaphoid and no obvious fractures. The patient was initially put in a wrist brace, and diclofenac topical gel was prescribed for pain control, as the patient could not take non-steroidal anti-inflammatory drugs (NSAIDs) due to gastritis. Despite diclofenac topical gel use and bracing, symptoms remained, and a steroid injection with 1 mL of lidocaine and 10 mg of triamcinolone acetonide was performed under fluoroscopy. He obtained some relief, but after 3 months the injection had to be repeated. On 2-month follow-up after the initial evaluation, symptoms persisted. Magnetic resonance imaging (MRI) was obtained, which showed an abnormal T1 hypodense signal involving the proximal pole of the scaphoid and proximal articular collapse of the scaphoid, with marked irregularity of the overlying cartilage, suggesting a remote injury, findings consistent with avascular necrosis of the proximal pole of the scaphoid. A month later, the left proximal pole of the scaphoid was debrided, and an intercompartmental supraretinacular artery vascularized pedicle bone graft reconstruction of the proximal pole of the left scaphoid was done. A non-vascularized autograft from the left radius was also applied.
He was put in a thumb spica cast with the interphalangeal joint free for 6 weeks. On 6-week follow-up after surgery, the patient was healing well and could make a composite fist with his left hand. The diagnosis of Preiser’s disease is primarily based on radiological findings. Because necrosis happens over a period of time, most cases of avascular necrosis (AVN) are diagnosed at the late stages of the disease. There appear to be no specific guidelines on the management of AVN of the scaphoid. In the past, immobilization and arthroscopic debridement have been used. Radial osteotomy has also been tried. Vascularized bone grafts have also been used to treat Preiser’s disease. In our patient, we used three of these treatment modalities, starting with conservative management with topical NSAIDs and immobilization, followed by debridement with vascularized bone grafts.
Keywords: wrist pain, avascular necrosis of the scaphoid, Preiser’s disease, vascularized bone grafts
Procedia PDF Downloads 295
410 Numerical Investigation of Combustion Chamber Geometry on Combustion Performance and Pollutant Emissions in an Ammonia-Diesel Common Rail Dual-Fuel Engine
Authors: Youcef Sehili, Khaled Loubar, Lyes Tarabet, Mahfoudh Cerdoun, Clement Lacroix
Abstract:
As emissions regulations grow more stringent and traditional fuel sources become increasingly scarce, incorporating carbon-free fuels in the transportation sector emerges as a key strategy for mitigating the impact of greenhouse gas emissions. While the utilization of hydrogen (H2) presents significant technological challenges, as evident in the engine limitation known as knocking, ammonia (NH3) provides a viable alternative that overcomes this obstacle and offers convenient transportation, storage, and distribution. Moreover, the implementation of a dual-fuel engine using ammonia as the primary gas is promising, delivering both ecological and economic benefits. However, when employing this combustion mode, substituting ammonia at high rates adversely affects combustion performance and leads to elevated emissions of unburnt NH3, especially under high loads, so this combustion mode requires special treatment. This study aims to simulate combustion in a common rail direct injection (CRDI) dual-fuel engine, considering the baseline geometry of the combustion chamber as well as fifteen (15) alternative proposed geometries, to determine the configuration that exhibits superior engine performance under high-load conditions. The research presented here focuses on improving the understanding of the equations and mechanisms involved in the combustion of finely atomized jets of liquid fuel and on mastering the CONVERGE code, which facilitates the simulation of this combustion process. By analyzing the effect of piston bowl shape on the performance and emissions of a diesel engine operating in dual-fuel mode, this work combines knowledge of combustion phenomena with proficiency in the calculation code. To select the optimal geometry, the swirl, tumble, and squish flow patterns were evaluated for the fifteen (15) studied geometries.
Variations in in-cylinder pressure, heat release rate, turbulence kinetic energy, turbulence dissipation rate, and emission rates were observed, while thermal efficiency and specific fuel consumption were estimated as functions of crankshaft angle. To maximize thermal efficiency, a synergistic approach involving the enrichment of intake air with oxygen (O2) and the enrichment of the primary fuel with hydrogen (H2) was implemented. Based on the results obtained, it is worth noting that the proposed geometry (T8_b8_d0.6/SW_8.0) outperformed the others in terms of flow quality and reduction of emitted pollutants, with a reduction of more than 90% in unburnt NH3 and an impressive improvement in engine efficiency of more than 11%.
Keywords: ammonia, hydrogen, combustion, dual-fuel engine, emissions
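The swirl evaluation used above to rank bowl geometries can be sketched from in-cylinder flow-field data. A common definition, assumed here since the abstract does not give one, is the swirl ratio: the charge's angular momentum about the cylinder axis divided by its moment of inertia, normalized by the engine angular speed. The cell data (mass, radius, tangential velocity) are hypothetical.

```python
import math

# Sketch of a swirl-ratio evaluation from CFD cell data. ASSUMED
# definition: (L / I) / omega_engine, where L is angular momentum
# about the cylinder axis and I the moment of inertia of the charge.
# The three cells below are hypothetical.

def swirl_ratio(cells, rpm):
    """cells: list of (mass_kg, radius_m, v_tangential_m_per_s)."""
    omega_engine = 2 * math.pi * rpm / 60
    L = sum(m * r * vt for m, r, vt in cells)   # angular momentum
    I = sum(m * r * r for m, r, _ in cells)     # moment of inertia
    return (L / I) / omega_engine

cells = [(1e-6, 0.01, 3.0), (1e-6, 0.02, 6.5), (1e-6, 0.03, 9.0)]
print(f"swirl ratio = {swirl_ratio(cells, 2000):.2f}")
```

Tumble is computed the same way about an axis perpendicular to the cylinder axis, and squish from the radial velocity toward the bowl; ranking the sixteen geometries then reduces to comparing these scalars.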
Procedia PDF Downloads 74
409 Effect of Phenolic Acids on Human Saliva: Evaluation by Diffusion and Precipitation Assays on Cellulose Membranes
Authors: E. Obreque-Slier, F. Orellana-Rodríguez, R. López-Solís
Abstract:
Phenolic compounds are secondary metabolites present in some foods and beverages, such as wine. Polyphenols comprise two main groups: flavonoids (anthocyanins, flavanols, and flavonols) and non-flavonoids (stilbenes and phenolic acids). Phenolic acids are low-molecular-weight non-flavonoid compounds that are usually grouped into benzoic acids (gallic, vanillic, and protocatechuic acids) and cinnamic acids (ferulic, p-coumaric, and caffeic acids). Likewise, tannic acid is an important polyphenol constituted mainly of gallic acid. Phenolic compounds are responsible for important properties in foods and drinks, such as color, aroma, bitterness, and astringency. Astringency is a drying, roughing, and sometimes puckering sensation that is experienced on the various oral surfaces during or immediately after tasting foods. Astringency perception has been associated with interactions between flavanols present in some foods and salivary proteins. Despite the quantitative relevance of phenolic acids in food and beverages, there is no information about their effect on salivary proteins and, consequently, on the sensation of astringency. The objective of this study was to assess the interaction of several phenolic acids (gallic, vanillic, protocatechuic, ferulic, p-coumaric, and caffeic acids) with saliva. Tannic acid was used as a control. Thus, solutions of each phenolic acid (5 mg/mL) were mixed with human saliva (1:1 v/v). After incubation for 5 min at room temperature, 15-μL aliquots of the mixtures were dotted on a cellulose membrane and allowed to diffuse. The dry membrane was fixed in 50 g/L trichloroacetic acid, rinsed in 800 mL/L ethanol, stained for protein with Coomassie blue for 20 min, destained with several rinses of 73 g/L acetic acid, and dried under a heat lamp. Both the diffusion area and the stain intensity of the protein spots served as semi-qualitative estimates of protein-tannin interaction (diffusion test).
The remainder of each whole saliva-phenol solution mixture from the diffusion assay was centrifuged, and 15-μL aliquots of each supernatant were dotted on a cellulose membrane, allowed to diffuse, and processed for protein staining, as indicated above. In this latter assay, reduced protein staining was taken as indicative of protein precipitation (precipitation test). The diffusion of the salivary protein was restricted by the presence of each phenolic acid (anti-diffusive effect), while tannic acid did not alter the diffusion of the salivary protein. By contrast, the phenolic acids did not provoke precipitation of the salivary protein, while tannic acid produced precipitation of salivary proteins. In addition, binary mixtures (mixtures of two components) of various phenolic acids with gallic acid restricted the diffusion of saliva; a similar effect was observed for the corresponding individual phenolic acids. Conversely, binary mixtures of phenolic acids with tannic acid, as well as tannic acid alone, did not affect the diffusion of the saliva but provoked an evident precipitation. In summary, phenolic acids showed a relevant interaction with the salivary proteins, suggesting that these wine compounds can also contribute to the sensation of astringency.
Keywords: astringency, polyphenols, tannins, tannin-protein interaction
Procedia PDF Downloads 246
408 Ganga Rejuvenation through Forestation and Conservation Measures in Riverscape
Authors: Ombir Singh
Abstract:
In spite of the religious and cultural predominance of the river Ganga in the Indian ethos, fragmentation and degradation of the river have continued down the ages. Recognizing the national concern over the environmental degradation of the river and its basin, the Ministry of Water Resources, River Development & Ganga Rejuvenation (MoWR, RD&GR), Government of India, has initiated a number of pilot schemes for the rejuvenation of the river Ganga under the ‘Namami Gange’ Programme. Considering the diversity, complexity, and intricacies of forest ecosystems, the pivotal multiple functions performed by them, and their inter-connectedness with highly dynamic river ecosystems, the ministry has planned forestry interventions all along the river Ganga, from its origin at Gaumukh, Uttarakhand, to its mouth at Ganga Sagar, West Bengal. To that end, the Forest Research Institute (FRI), in collaboration with the National Mission for Clean Ganga (NMCG), has prepared a Detailed Project Report (DPR) on Forestry Interventions for Ganga. The Institute adopted an extensive consultative process at the national and state levels involving various stakeholders relevant in the context of the river Ganga, and employed a science-based methodology, including the use of remote sensing and GIS technologies for geo-spatial analysis, modeling, and prioritization of sites for the proposed forestation and conservation interventions. Four sets of field data formats were designed to obtain field-based information for the forestry interventions, mainly plantations and conservation measures along the river course. In response, the five stakeholder State Forest Departments submitted more than 8,000 data sheets to the Institute. In order to analyze the voluminous field data received from the five participating states, the Institute also developed software to collate and analyze the data and generate reports on proposed sites in the Ganga basin.
FRI has developed potential plantation and treatment models for the proposed forestry and other conservation measures in the three major types of landscape components visualized in the Ganga riverscape: (i) natural, (ii) agricultural, and (iii) urban landscapes. The suggested plantation models broadly vary between the Uttarakhand Himalayas and the Ganga Plains across the five participating states. Besides extensive plantations in the three types of landscapes within the riverscape, various conservation measures, such as soil and water conservation, riparian wildlife management, wetland management, and bioremediation and bio-filtration, and supporting activities, such as policy and law intervention, concurrent research, monitoring and evaluation, and mass awareness campaigns, have been envisioned in the DPR. The DPR also incorporates the details of the implementation mechanism and the budget provisioned for the different components of the project, besides the state-wise allocation of budget to the five implementing agencies, the national partner organizations, and the nodal ministry.
Keywords: conservation, Ganga, river, water, forestry interventions
Procedia PDF Downloads 149
407 Molecular Dynamics Simulation Study of the Influence of Potassium Salts on the Adsorption and Surface Hydration Inhibition Performance of Hexane-1,6-Diamine Clay Mineral Inhibitor onto Sodium Montmorillonite
Authors: Justine Kiiza, Xu Jiafang
Abstract:
The world’s demand for energy is increasing rapidly due to population growth and a reduction in shallow conventional oil and gas reservoirs, forcing a shift to deeper and mostly unconventional reserves like shale oil and gas. Most shale formations contain a large amount of expansive sodium montmorillonite (Na-Mnt) with high water adsorption and hydration; when the drilling fluid filtrate enters a formation with high Mnt content, the wellbore wall can become unstable due to hydration and swelling, resulting in shrinkage, sticking, balling, time wasting, etc., and, in extreme cases, well collapse, causing complex downhole accidents and high well costs. Recently, polyamines like 1,6-hexanediamine (HEDA) have been used as typical drilling fluid shale inhibitors to minimize and/or curb clay mineral swelling and maintain wellbore stability. However, their application is limited to shallow drilling due to their sensitivity to elevated temperature and pressure. Inorganic potassium salts, i.e., KCl, have long been applied to restrict shale formation hydration expansion in deep wells, but their use is limited due to toxicity. Understanding the adsorption behaviour of HEDA on Na-Mnt surfaces in the presence of organo-salts, i.e., organic potassium salts such as HCO₂K, the main component of organo-salt drilling fluids, is of great significance in explaining the inhibitory performance of polyamine inhibitors. Molecular dynamics (MD) simulations were applied to investigate the influence of HCO₂K and KCl on the adsorption mechanism of HEDA on the Na-Mnt surface. Simulation results showed that HEDA adsorbs mainly through its terminal amine groups, with a flat-lying hydrophobic alkyl chain. Its interaction with the clay surface decreased the number of H-bonds between H₂O and the clay and neutralized the negative charge of the Mnt surface, thus weakening the surface hydration ability of Na-Mnt.
The introduction of HCO₂K greatly improved the inhibition ability: the coordination of interlayer ions with H₂O decreased as they were replaced by K+, and H₂O-HCOO⁻ coordination reduced H₂O-Mnt interactions, further decreasing the mobility and transport capability of the H₂O molecules. KCl, by contrast, showed little inhibition ability and caused more hydration with time. HCO₂K can therefore be used as an alternative to toxic KCl for offshore drilling, with the maximum concentration noted in this study being 1.65 wt%. This study provides a theoretical elucidation of the inhibition mechanism and adsorption characteristics of the HEDA inhibitor on Na-Mnt surfaces in the presence of K+ salts, and may provide more insight into the evaluation, selection, and molecular design of new high-performance clay-swelling-inhibitive water-based drilling fluid (WBDF) systems used in complex offshore oil and gas drilling well sections.
Keywords: shale, hydration, inhibition, polyamines, organo-salts, simulation
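The H-bond counting behind the H₂O-clay analysis above is usually done with a geometric criterion on each MD frame. The abstract does not state the exact cutoffs, so the commonly used ones are assumed here: donor-acceptor distance below 3.5 Å and a near-linear donor-H···acceptor angle (≥ 150°). Coordinates are hypothetical.

```python
import math

# Sketch of a geometric hydrogen-bond criterion for MD trajectories.
# ASSUMED cutoffs (common defaults, not from the abstract): donor-
# acceptor distance < 3.5 A and D-H...A angle >= 150 degrees.

def is_hbond(d, h, a, r_cut=3.5, ang_cut=150.0):
    """d, h, a: (x, y, z) of donor O, its H, and acceptor O, in Angstrom."""
    def sub(p, q): return tuple(pi - qi for pi, qi in zip(p, q))
    def norm(v): return math.sqrt(sum(x * x for x in v))
    def dot(u, v): return sum(ui * vi for ui, vi in zip(u, v))
    if norm(sub(a, d)) >= r_cut:
        return False
    hd, ha = sub(d, h), sub(a, h)          # vectors H->D and H->A
    cos_t = dot(hd, ha) / (norm(hd) * norm(ha))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    return angle >= ang_cut

# linear O-H...O arrangement, 2.8 A donor-acceptor separation
donor, hydrogen, acceptor = (0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (2.8, 0.0, 0.0)
print(is_hbond(donor, hydrogen, acceptor))
```

Summing this predicate over all water-clay donor/acceptor pairs per frame, and averaging over the trajectory, gives the H-bond counts whose decrease the study attributes to HEDA and HCO₂K.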
Procedia PDF Downloads 48
406 Evaluation of the Boiling Liquid Expanding Vapor Explosion Thermal Effects in Hassi R'Mel Gas Processing Plant Using Fire Dynamics Simulator
Authors: Brady Manescau, Ilyas Sellami, Khaled Chetehouna, Charles De Izarra, Rachid Nait-Said, Fati Zidani
Abstract:
During a fire in an oil and gas refinery, several thermal accidents can occur and cause serious damage to people and the environment. Among these accidents, the BLEVE (Boiling Liquid Expanding Vapor Explosion) is the most frequently observed and remains a major concern for risk decision-makers. It corresponds to a violent vaporization of explosive nature following the rupture of a vessel containing a liquid at a temperature significantly higher than its normal boiling point at atmospheric pressure. Its effects on the environment generally appear in three ways: blast overpressure, radiation from the fireball if the liquid involved is flammable, and fragment hazards. In order to estimate the potential damage that would be caused by such an explosion, risk decision-makers often use quantitative risk analysis (QRA). This analysis is a rigorous and advanced approach that requires reliable data in order to obtain a good estimate and control of risks. However, in most cases, the data used in QRA are obtained from empirical correlations. These empirical correlations generally overestimate BLEVE effects because they are based on simplifications and do not take into account real parameters like the geometry effect. Considering that these risk analyses are based on an assessment of BLEVE effects on human life and plant equipment, more precise and reliable data should be provided. From this point of view, CFD modeling of BLEVE effects appears as a solution to the limitations of empirical laws. In this context, the main objective is to develop a numerical tool to predict BLEVE thermal effects using the CFD code FDS version 6. Simulations are carried out with a mesh size of 1 m. The fireball source is modeled as a vertical release of hot fuel over a short time. The modeling of the fireball dynamics is based on single-step combustion using an EDC model coupled with the default LES turbulence model.
Fireball characteristics (diameter, height, heat flux, and lifetime) obtained from the large-scale BAM experiment are used to demonstrate the ability of FDS to simulate the various stages of the BLEVE phenomenon, from ignition up to total burnout. The influence of release parameters such as the injection rate and the radiative fraction on the fireball heat flux is also presented. Predictions are very encouraging and show good agreement with the BAM experimental data. In addition, a numerical study is carried out on an operational propane accumulator in an Algerian gas processing plant of the SONATRACH company, located in the Hassi R’Mel gas field (the largest gas field in Algeria).
Keywords: BLEVE effects, CFD, FDS, fireball, LES, QRA
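The empirical correlations that the abstract contrasts with the CFD predictions are typically simple power laws in the released mass M. A widely cited form (e.g., in the CCPS guidelines) gives the peak fireball diameter as D = 5.8·M^(1/3) and the duration as t = 0.45·M^(1/3) for M < 30,000 kg (2.6·M^(1/6) above). The propane mass below is a hypothetical value, not the accumulator's actual inventory.

```python
# Sketch of the classic empirical fireball correlations (CCPS-style
# power laws in the released mass M, kg). These are the simplified
# relations that, per the abstract, tend to overestimate BLEVE effects
# because they ignore geometry. The mass used is hypothetical.

def fireball_diameter(mass_kg):
    """Peak fireball diameter (m): D = 5.8 * M^(1/3)."""
    return 5.8 * mass_kg ** (1 / 3)

def fireball_duration(mass_kg):
    """Fireball duration (s): 0.45*M^(1/3) below 30,000 kg, else 2.6*M^(1/6)."""
    if mass_kg < 30000:
        return 0.45 * mass_kg ** (1 / 3)
    return 2.6 * mass_kg ** (1 / 6)

m = 10000.0  # hypothetical propane release, kg
print(f"D ~ {fireball_diameter(m):.0f} m, t ~ {fireball_duration(m):.1f} s")
```

A CFD study like the one described replaces these single-number estimates with resolved, time-dependent heat-flux fields, but the correlations remain the usual QRA baseline against which such simulations are compared.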
Procedia PDF Downloads 186
405 Life Cycle Assessment Applied to Supermarket Refrigeration System: Effects of Location and Choice of Architecture
Authors: Yasmine Salehy, Yann Leroy, Francois Cluzel, Hong-Minh Hoang, Laurence Fournaison, Anthony Delahaye, Bernard Yannou
Abstract:
Taking into consideration the whole life cycle of a product is now an important step in the eco-design of a product or a technology. Life cycle assessment (LCA) is a standard tool to evaluate the environmental impacts of a system or a process. Despite the improvement in refrigerant regulation through protocols, the environmental damage of refrigeration systems remains important and needs to be reduced. In this paper, the environmental impacts of refrigeration systems in a typical supermarket are compared using the LCA methodology under different conditions. The system is used to provide cold at two temperature levels, medium and low, over a service life of 15 years. The most commonly used architectures of supermarket cold production systems are investigated: centralized direct expansion systems and indirect systems using a secondary loop to transport the cold. The variation of the power needed during seasonal changes and during the daily opening/closing periods of the supermarket is considered. R134a as the primary refrigerant and two types of secondary fluids are considered. The composition of each system and the leakage rate of the refrigerant through its life cycle are taken from the literature and industrial data. Twelve scenarios are examined, based on the variation of three parameters: (1) location: France (Paris), Spain (Toledo), and Sweden (Stockholm); (2) source of electricity: photovoltaic panels or the low-voltage electric network; and (3) architecture: direct and indirect refrigeration systems. OpenLCA and SimaPro software and different impact assessment methods were compared; the CML method is used to evaluate the midpoint environmental indicators. This study highlights the significant contribution of electric consumption to environmental damage compared to the impacts of refrigerant leakage.
The secondary loop allows lowering the refrigerant amount in the primary loop, which results in a decrease in the climate change indicators compared to the centralized direct systems. However, an exhaustive cost evaluation (CAPEX and OPEX) of both systems shows higher costs for the indirect systems. A significant difference between the countries was noticed, mostly due to differences in electricity production. In Spain, using photovoltaic panels efficiently reduces the environmental impacts and the related costs; this scenario is the best alternative among those compared. Sweden is the country with the lowest environmental impacts. For both France and Sweden, the use of photovoltaic panels does not bring a significant difference, due to lower sunlight exposure than in Spain. Alternative solutions exist to reduce the impact of refrigerating systems, and a brief introduction to them is presented. Keywords: eco-design, industrial engineering, LCA, refrigeration system
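The twelve scenarios described above arise from a full factorial combination of the three parameters (3 locations × 2 electricity sources × 2 architectures). A minimal sketch of that enumeration, with labels reconstructed from the abstract (the exact scenario naming in the study is an assumption):

```python
from itertools import product

# Factor levels as described in the abstract; labels are illustrative.
locations = ["France (Paris)", "Spain (Toledo)", "Sweden (Stockholm)"]
electricity = ["photovoltaic panels", "low-voltage grid"]
architectures = ["direct expansion", "indirect (secondary loop)"]

# Full factorial design: 3 x 2 x 2 = 12 scenarios.
scenarios = [
    {"location": loc, "electricity": elec, "architecture": arch}
    for loc, elec, arch in product(locations, electricity, architectures)
]
```

Each scenario dictionary would then be fed to the LCA model (e.g., the CML midpoint indicators) as one run.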
Procedia PDF Downloads 189
404 Evaluation of Some Serum Proteins as Markers for Myeloma Bone Disease
Authors: V. T. Gerov, D. I. Gerova, I. D. Micheva, N. F. Nazifova-Tasinova, M. N. Nikolova, M. G. Pasheva, B. T. Galunska
Abstract:
Multiple myeloma (MM) is the most frequent plasma cell (PC) dyscrasia that involves the skeleton. Myeloma bone disease (MBD) is characterized by osteolytic bone lesions resulting from increased osteoclast activity that is not followed by reactive bone formation, due to osteoblast suppression. Skeletal complications cause significant adverse effects on quality of life and lead to increased morbidity and mortality. Studies over the last decade have revealed the implication of different proteins in osteoclast activation and osteoblast inhibition. The aim of the present study was to determine serum levels of periostin, sRANKL and osteopontin and to evaluate their role as bone markers in MBD. Materials and methods: Thirty-two newly diagnosed MM patients (mean age: 62.2 ± 10.7 years) and 33 healthy controls (mean age: 58.9 ± 7.5 years) were enrolled in the study. According to IMWG criteria, 28 patients had symptomatic MM and 4 had monoclonal gammopathy of undetermined significance (MGUS). With respect to bone involvement, all symptomatic patients were divided into two groups (G): 9 patients with 0-3 osteolytic lesions (G1) and 19 patients with >3 osteolytic lesions and/or pathologic fractures (G2). Blood samples were drawn for routine laboratory analysis and for measurement of periostin, sRANKL and osteopontin serum levels by ELISA kits (Shanghai Sunred Biological Technology, China). Descriptive analysis, the Mann-Whitney test for assessing differences between groups, and non-parametric correlation analysis were performed using GraphPad Prism v8.01. Results: The median serum levels of periostin, sRANKL and osteopontin of MM patients were significantly higher compared to controls (554.7 pg/ml (IQR=424.0-720.6) vs 396.9 pg/ml (IQR=308.6-471.9), p=0.0001; 8.9 pg/ml (IQR=7.1-10.5) vs 5.6 pg/ml (IQR=5.1-6.4), p<0.0001; and 514.0 ng/ml (IQR=469.3-754.0) vs 387.0 ng/ml (IQR=335.9-441.9), p<0.0001, respectively).
Statistical significance was found for all tested bone markers between symptomatic MM patients and controls: G1 vs controls (p<0.03), G2 vs controls (p<0.0001) for periostin; G1 vs controls (p<0.0001), G2 vs controls (p<0.0001) for sRANKL; G1 vs controls (p=0.002), G2 vs controls (p<0.0001) for osteopontin; as well as between symptomatic MM patients and MGUS patients: G1 vs MGUS (p<0.003), G2 vs MGUS (p=0.003) for periostin; G1 vs MGUS (p<0.05), G2 vs MGUS (p<0.001) for sRANKL; G1 vs MGUS (p=0.011), G2 vs MGUS (p=0.0001) for osteopontin. No differences were detected between MGUS patients and controls or between patients in the G1 and G2 groups. Spearman correlation analysis revealed moderate positive correlations between periostin and beta-2-microglobulin (r=0.416, p=0.018), percentage of bone marrow myeloma PC (r=0.432, p=0.014), and serum total protein (r=0.427, p=0.015). Osteopontin levels were also positively related to beta-2-microglobulin (r=0.540, p=0.0014), percentage of bone marrow myeloma PC (r=0.423, p=0.016), and serum total protein (r=0.413, p=0.019). Serum sRANKL was only related to beta-2-microglobulin levels (r=0.398, p=0.024). Conclusion: In the present study, serum levels of periostin, sRANKL and osteopontin in newly diagnosed MM patients were evaluated. They gradually increased from MGUS to more advanced stages of MM, reflecting the severity of bone destruction. These results support the idea that new protein markers could be used in monitoring MBD as the most severe complication of MM. Keywords: myeloma bone disease, periostin, sRANKL, osteopontin
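The Spearman correlations reported above relate ranked marker levels to ranked clinical values. The study used GraphPad Prism; as a plain-Python illustration of what Spearman's rho computes (all function names here are illustrative, not from the study), rank both variables with midranks for ties and take the Pearson correlation of the ranks:

```python
def rank(values):
    """Assign 1-based ranks, averaging ranks over tied values (midranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A perfectly monotone pair of variables yields rho = 1 regardless of the raw scale, which is why rho suits skewed marker distributions like these.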
Procedia PDF Downloads 57
403 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm
Abstract:
Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally-occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordinal form, we propose a method based on ordinal decision-trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data.
The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually. Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension
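The abstract does not give the exact WIGR formula, so the sketch below only illustrates the general idea of a severity-aware ordinal split criterion: replace plain entropy with an impurity that charges more for mixing distant classes (here, expected pairwise ordinal distance), then normalise the gain by split information as in C4.5. All names and the impurity choice are assumptions, not the paper's definition:

```python
from collections import Counter
from math import log2

def ordinal_impurity(labels):
    """Expected pairwise ordinal distance between class labels.

    A Gini-style impurity that grows with the *severity* of confusion:
    mixing classes 0 and 3 costs more than mixing 0 and 1.
    """
    n = len(labels)
    if n == 0:
        return 0.0
    counts = Counter(labels)
    return sum(
        (ci / n) * (cj / n) * abs(i - j)
        for i, ci in counts.items()
        for j, cj in counts.items()
    )

def weighted_gain_ratio(parent, partitions):
    """Severity-weighted gain of a candidate split, normalised by
    split information (the C4.5-style penalty for many-way splits)."""
    n = len(parent)
    children = sum(len(p) / n * ordinal_impurity(p) for p in partitions)
    gain = ordinal_impurity(parent) - children
    split_info = -sum(
        (len(p) / n) * log2(len(p) / n) for p in partitions if p
    )
    return gain / split_info if split_info else 0.0
```

A tree builder would evaluate `weighted_gain_ratio` for each candidate attribute and split on the maximiser, exactly as C4.5 does with the ordinary gain ratio.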
Procedia PDF Downloads 100
402 Assessing the Structure of Non-Verbal Semantic Knowledge: The Evaluation and First Results of the Hungarian Semantic Association Test
Authors: Alinka Molnár-Tóth, Tímea Tánczos, Regina Barna, Katalin Jakab, Péter Klivényi
Abstract:
Supported by neuroscientific findings, the so-called Hub-and-Spoke model of the human semantic system is based on two subcomponents of semantic cognition, namely the semantic control process and semantic representation. Our semantic knowledge is multimodal in nature, as the knowledge system stored in relation to a concept is extensive and broad, while different aspects of the concept may be relevant depending on the purpose. The motivation of our research is to develop a new diagnostic measurement procedure, based on the preservation of semantic representation, that is appropriate to the specificities of the Hungarian language and can be used to compare the non-verbal semantic knowledge of healthy and aphasic persons. The development of the test will broaden the Hungarian clinical diagnostic toolkit, which will allow for more specific therapy planning. The sample of healthy persons (n=480) was determined according to the latest census data to ensure representativeness. Based on the concept of the Pyramids and Palm Trees Test, and according to the characteristics of the Hungarian language, we developed a test based on different types of semantic information, in which the subjects are presented with three pictures: they have to choose, from the two lower options, the one that best fits the target word above, based on the semantic relation defined. We measured 5 types of semantic knowledge representations: associative relations, taxonomy, motional representations, and concrete as well as abstract verbs. As the first step in our data analysis, we examined whether our results were normally distributed, and since they were not (p < 0.05), we used nonparametric statistics for the rest of the analysis. Using descriptive statistics, we determined the frequency of correct and incorrect responses, and with this knowledge we could later adjust or remove items of questionable reliability.
The reliability was tested using Cronbach's α, and all results were in an acceptable range of reliability (α = 0.6-0.8). We then tested for potential gender differences using the Mann-Whitney U test and found no difference between the two genders (p > 0.05). Likewise, age had no effect on the results according to one-way ANOVA (p > 0.05); however, the level of education did influence the results (p < 0.05). The relationships between the subtests were examined with the nonparametric Spearman's rho correlation matrix, showing statistically significant correlations between the subtests (p < 0.05) and signifying a linear relationship between the measured semantic functions. A 5% significance level was used in all cases. The research will contribute to the expansion of the clinical diagnostic toolkit and will be relevant for the individualised design of treatment procedures. The use of a non-verbal test procedure will allow an early assessment of the most severe language conditions, which is a priority in differential diagnosis. The measurement of reaction time is expected to advance prodrome research, as the tests can be easily conducted in the subclinical phase. Keywords: communication disorders, diagnostic toolkit, neurorehabilitation, semantic knowledge
Procedia PDF Downloads 103
401 Interpersonal Competence Related to the Practice Learning of Occupational Therapy Students in Hong Kong
Authors: Lik Hang Gary Wong
Abstract:
Background: Practice learning is crucial for preparing healthcare professionals to meet real challenges upon graduation. Students are required to demonstrate their competence in managing interpersonal challenges, such as teamwork with other professionals and communicating well with service users, during the placement. Such competence precedes clinical practice, and it may eventually affect students' actual performance in a clinical context. Unfortunately, there have been limited studies investigating how such competence affects students' performance in practice learning. Objectives: The aim of this study is to investigate how self-rated interpersonal competence affects students' actual performance during clinical placement. Methods: Forty occupational therapy students from Hong Kong were recruited for this study. Prior to the clinical placement (level two or above), they completed an online survey that included the Interpersonal Communication Competence Scale (ICCS), measuring self-perceived competence in interpersonal communication. Near the end of their placement, the clinical educator rated students' performance with the Student Practice Evaluation Form - Revised edition (SPEF-R), which measures the eight core competency domains required of an entry-level occupational therapist. This study adopted a cross-sectional observational design. Pearson correlation and multiple regression were conducted to examine the relationship between students' interpersonal communication competence and their actual performance in clinical placement. Results: The ICCS total scores were significantly correlated with all the SPEF-R domains, with correlation coefficients r ranging from 0.39 to 0.51. The strongest association was found with the co-worker communication domain (r = 0.51, p < 0.01), followed by the information gathering domain (r = 0.50, p < 0.01).
With the ICCS total score as the independent variable and the ratings in the various SPEF-R domains as the dependent variables in multiple regression analyses, the interpersonal competence measure was identified as a significant predictor of co-worker communication (R² = 0.33, β = 0.014, SE = 0.006, p = 0.026), information gathering (R² = 0.27, β = 0.018, SE = 0.007, p = 0.011), and service provision (R² = 0.17, β = 0.017, SE = 0.007, p = 0.020). Moreover, some specific communication skills appeared to be especially important to clinical practice. For example, immediacy, which reflects whether the students were readily approachable on all social occasions, correlated with all the SPEF-R domains, with r-values ranging from 0.33 to 0.45. Other sub-skills, such as empathy, interaction management, and supportiveness, were also significantly correlated with most of the SPEF-R domains. Meanwhile, the ICCS scores correlated differently with the co-worker communication domain (r = 0.51, p < 0.01) and the communication with the service user domain (r = 0.39, p < 0.05), suggesting that different communication skill sets are required for different interpersonal contexts within the workplace. Conclusion: Students' self-perceived interpersonal communication competence could predict their actual performance during clinical placement. Moreover, some specific communication skills were more important for co-worker communication than for daily interaction with service users. There are implications for how to better prepare students to meet future challenges upon graduation. Keywords: interpersonal competence, clinical education, healthcare professional education, occupational therapy, occupational therapy students
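For a single predictor, as in the ICCS-to-domain regressions above, the reported R² is simply the squared Pearson r, and the slope β is cov(x, y)/var(x). A minimal sketch of both quantities (illustrative helpers, not the study's statistical pipeline):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ols_slope_r2(x, y):
    """Simple (one-predictor) OLS: slope beta = cov/var, and R^2 = r^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum(
        (a - mx) ** 2 for a in x
    )
    return beta, pearson_r(x, y) ** 2
```

So, for instance, the co-worker communication figures above (r = 0.51, R² ≈ 0.26 before other predictors enter) illustrate why multiple regression R² can exceed the squared simple correlation once additional covariates are included.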
Procedia PDF Downloads 72
400 Bacterial Diversity in Human Intestinal Microbiota and Correlations with Nutritional Behavior, Physiology, Xenobiotics Intake and Antimicrobial Resistance in Obese, Overweight and Eutrophic Individuals
Authors: Thais O. de Paula, Marjorie R. A. Sarmiento, Francis M. Borges, Alessandra B. Ferreira-Machado, Juliana A. Resende, Dioneia E. Cesar, Vania L. Silva, Claudio G. Diniz
Abstract:
Obesity is currently a worldwide public health threat, being considered a pandemic multifactorial disease related to the human gut microbiota (GM). Moreover, GM is considered an important reservoir of antimicrobial resistance genes (ARG), and little is known about GM and ARG in obesity, considering the altered physiology and xenobiotics intake. As regional and social behavior may play important roles in GM modulation, and as most studies are based on small sample sizes and varied methodological approaches that make data comparison difficult, this study focused on the investigation of GM bacterial diversity in obese (OB), overweight (OW) and eutrophic (ET) individuals, considering their nutritional, clinical and social characteristics, and on a comparative screening of ARG related to their physiology and xenobiotics intake. The microbial community was assessed by FISH at the phylum level, and by PCR-DGGE followed by dendrogram evaluation (UPGMA method), from the fecal metagenome of 72 volunteers classified according to their body mass index (BMI). Nutritional, clinical and social parameters and xenobiotics intake were recorded for correlation analysis. The fecal metagenome was also used as template for PCR targeting 59 different ARG. Overall, 62% of OB individuals were hypertensive, compared with 12% of OW and 4% of ET individuals. Most of the OB individuals were rated as low income (80%). Lower relative bacterial densities were observed in the OB compared to ET individuals for almost all studied taxa (p < 0.05), with the Firmicutes/Bacteroidetes ratio increased in the OB group. OW individuals showed bacterial densities representative of a GM more similar to that of the OB group. All participants clustered into 3 different groups based on the PCR-DGGE fingerprint patterns (C1, C2, C3), with OB mostly grouped in C1 (83.3%) and ET mostly grouped in C3 (50%); cluster C2 showed to be transitional. Among the 27 ARG detected, a cluster of 17 was observed in all groups, suggesting a common core.
In general, ARG were observed mostly within OB individuals, followed by OW and ET. The ratio between ARG and bacterial groups may suggest that ARG were more related to enterobacteria. Positive correlations were observed between ARG and BMI, calorie intake and xenobiotics intake (especially use of sweeteners). As with the nutritional and clinical characteristics, our data may suggest that the GM of OW individuals behaves in a heterogeneous pattern, occasionally resembling either the OB or the ET group. Regardless of the regional and social behaviors of our population, the methodological approaches in this study were complementary and confirmatory. The imbalance of GM over the health-disease interface in obesity is a matter of fact, but its influence on the host's physiology is still to be clearly elucidated to help understand the multifactorial etiology of obesity. Although the results agree with observations that GM is altered in obesity, the altered physiology in OB individuals seems to be also associated with increased xenobiotics intake, which may push the GM towards antimicrobial resistance, as observed through the fecal metagenome and ARG screening. Support: FAPEMIG, CNPQ, CAPES, PPGCBIO/UFJF. Keywords: antimicrobial resistance, bacterial diversity, gut microbiota, obesity
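The dendrogram evaluation named above uses UPGMA, i.e., average-linkage agglomerative clustering of fingerprint distances: repeatedly merge the two closest clusters and replace their distances to every other cluster by the member-count-weighted average. A minimal sketch on a hypothetical distance matrix (the study's actual DGGE similarity values are not reproduced here):

```python
def upgma(dist, labels):
    """Average-linkage clustering of a symmetric distance matrix.

    Returns the dendrogram as nested tuples of the input labels.
    """
    # cluster id -> (subtree, member count)
    clusters = {i: (labels[i], 1) for i in range(len(labels))}
    d = {(i, j): dist[i][j] for i in clusters for j in clusters if i < j}
    next_id = len(labels)
    while len(clusters) > 1:
        a, b = min(d, key=d.get)          # closest pair of clusters
        ta, na = clusters.pop(a)
        tb, nb = clusters.pop(b)
        for c in clusters:
            dac = d.pop((min(a, c), max(a, c)))
            dbc = d.pop((min(b, c), max(b, c)))
            # UPGMA update: size-weighted average distance to the merge
            d[(c, next_id)] = (na * dac + nb * dbc) / (na + nb)
        d.pop((a, b))
        clusters[next_id] = ((ta, tb), na + nb)
        next_id += 1
    (tree, _), = clusters.values()
    return tree
```

With fingerprints whose pairwise distances mirror the reported pattern (OB and OW profiles close together, ET further away), the two obesity-related profiles merge first.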
Procedia PDF Downloads 170
399 A Comparative Human Rights Analysis of Expulsion as a Counterterrorism Instrument: An Evaluation of Belgium
Authors: Louise Reyntjens
Abstract:
Where criminal law used to be the traditional response to cope with the terrorist threat, European governments are increasingly relying on administrative paths. The reliance on immigration law fits into this trend, as terrorism is seen as a menace to civilization emanating from abroad. In this context, the expulsion of dangerous aliens, immigration law's core task, is put forward as a key security tool. Governments all over Europe are focusing on removing dangerous individuals from their territory rather than bringing them to justice. This research reflects on the consequences for the expelled individuals' fundamental rights. For this, the author selected four European countries for a comparative study: Belgium, France, the United Kingdom and Sweden. All these countries face similar social and security issues, igniting the recourse to immigration law as a counterterrorism tool. Yet they adopt very different approaches: the United Kingdom positions itself on the repressive side of the spectrum; Sweden, on the other hand, also 'securitized' its immigration policy after the recent terrorist attack in Stockholm, but remains on the tolerant side of the spectrum; Belgium and France are situated in between. This paper addresses the situation in Belgium. In 2017, the Belgian parliament introduced several legislative changes by which it considerably expanded and facilitated the possibility to expel unwanted aliens. First, the expulsion measure was subjected to new and questionable definitions: a serious attack on the nation's safety used to be required to expel certain categories of aliens; presently, mere suspicions suffice to fulfil the new definition of a 'serious threat to national security'. This definition fails to satisfy the principle of legality: neither the law nor the preparatory works clarify what is meant by 'a threat to national security'.
This creates the risk of submitting this concept's interpretation almost entirely to the discretion of the immigration authorities. Secondly, in the name of intervening more quickly and efficiently, the automatic suspensive appeal for expulsions was abolished. The European Court of Human Rights nonetheless requires such an automatic suspensive appeal under Articles 13 and 3 of the Convention. Whether this procedural reform will stand to endure is thus questionable. This contribution also raises questions regarding expulsion's efficacy as a key security tool. In a globalized and mobilized world, particularly in a European Union with no internal boundaries, questions can be raised about the usefulness of this measure. Even more so, by simply expelling a dangerous individual, States avoid their responsibility and shift the risk to another State. Criminal law might in these instances be more capable of providing a conclusive and long-term response. This contribution explores the human rights consequences of expulsion as a security tool in Belgium and offers a critical view of its efficacy for protecting national security. Keywords: Belgium, counter-terrorism and human rights, expulsion, immigration law
Procedia PDF Downloads 127
398 Association of Zinc with New Generation Cardiovascular Risk Markers in Childhood Obesity
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Zinc is a vital element required for growth and development, a fact that makes it particularly important for children. It maintains normal cellular structure and functions. This essential element appears to have protective effects against coronary artery disease and cardiomyopathy. Higher serum zinc levels are associated with a lower risk of cardiovascular diseases (CVDs), and there is a significant association between low serum zinc levels and heart failure. Zinc may therefore be a potential biomarker of cardiovascular health. High sensitive cardiac troponin T (hs-cTnT) and cardiac myosin binding protein C (cMyBP-C) are new generation markers used for the prediagnosis, diagnosis, and prognosis of CVDs. The aim of this study is to determine zinc as well as new generation cardiac marker profiles in children with normal body mass index (N-BMI), obese (OB) children, morbidly obese (MO) children, and children with metabolic syndrome (MetS) findings, and to investigate the associations among them. Four study groups were constituted. The study protocol was approved by the institutional Ethics Committee of Tekirdag Namik Kemal University, and parents of the participants filled in informed consent forms to participate in the study. Group 1 is composed of 44 children with N-BMI. Groups 2 and 3 comprised 43 OB and 45 MO children, respectively. Forty-five MO children with MetS findings were included in Group 4. World Health Organization age- and sex-adjusted BMI percentile tables were used to constitute the groups; the percentile ranges were 15-85, 95-99, and above 99 for N-BMI, OB, and MO, respectively. Criteria for MetS findings were determined. Routine biochemical analyses, including zinc, were performed. Hs-cTnT and cMyBP-C concentrations were measured by kits based on the enzyme-linked immunosorbent assay principle. Appropriate statistical tests within the scope of SPSS were used for the evaluation of the study data; p<0.05 was accepted as statistically significant.
The four groups were matched for age and gender. Decreased zinc concentrations were measured in Groups 2, 3, and 4 compared to Group 1. The groups did not differ from one another in terms of hs-cTnT. There were statistically significant differences between the cMyBP-C levels of the MetS group and those of the N-BMI and OB groups, with an increasing trend going from the N-BMI group to the MetS group. There were statistically significant negative correlations between zinc and both hs-cTnT and cMyBP-C concentrations in the MetS group. In conclusion, the inverse correlations detected between zinc and the new generation cardiac markers (hs-cTnT and cMyBP-C) point out that decreased levels of this physiologically essential trace element accompany increased levels of hs-cTnT as well as cMyBP-C in children with MetS. This finding emphasizes that both zinc and these new generation cardiac markers may be evaluated as biomarkers of cardiovascular health during severe childhood obesity precipitated by MetS findings, and suggests them as messengers of future risk in the adult lives of children with MetS. Keywords: cardiac myosin binding protein-C, cardiovascular diseases, children, high sensitive cardiac troponin T, obesity
Procedia PDF Downloads 111
397 Knowledge, Attitude, and Practices of Nurses on the Pain Assessment and Management in Level 3 Hospitals in Manila
Authors: Florence Roselle Adalin, Misha Louise Delariarte, Fabbette Laire Lagas, Sarah Emanuelle Mejia, Lika Mizukoshi, Irish Paullen Palomeno, Gibrianne Alistaire Ramos, Danica Pauline Ramos, Josefina Tuazon, Jo Leah Flores
Abstract:
Pain, often a missed and undertreated symptom, affects the quality of life of individuals. Nurses are key players in providing effective pain management to decrease the morbidity and mortality of patients in pain, and their knowledge of and attitude toward pain greatly affect their ability to assess and manage it. The Pain Society of the Philippines has recognized the inadequacy and inaccessibility of data on the knowledge, skills, and attitude of nurses on pain management in the country. This study may be the first of its kind in the country, giving it the potential to contribute greatly to nursing education and practice by providing valuable baseline data. Objectives: This study aims to describe the level of knowledge and attitude, and the current practices, of nurses on pain assessment and management, and to determine the relationship of nurses' knowledge and attitude with years of experience, training on pain management, and clinical area of practice. Methodology: A survey research design was employed. Four hospitals were selected through purposive sampling. A total of 235 Medical-Surgical Unit and Intensive Care Unit (ICU) nurses participated in the study. The tool used is a combination of a demographic survey, the Nurses' Knowledge and Attitude Survey Regarding Pain (NKASRP), and the Acute Pain Evidence Based Practice Questionnaire (APEBPQ), with self-report questions on non-pharmacologic pain management. The data obtained were analysed using descriptive statistics, two-sample t-tests for clinical areas and training, and Pearson product-moment correlation to identify the relationship of level of knowledge and attitude with years of experience. Results and Analysis: The mean knowledge and attitude score of the nurses was 47.14%. The majority answered 'most of the time' or 'all the time' on 84.12% of practice items on pain assessment, implementation of non-pharmacologic interventions, evaluation and documentation.
Three of 19 practice items describing morphine and opioid administration in special populations were done only 'a little of the time'. The most utilized non-pharmacologic interventions were deep breathing exercises (79.66%), massage therapy (27.54%), and ice therapy (26.69%). There was no significant relationship between knowledge scores and years of clinical experience (p = 0.05, r = -0.09). Moreover, there was not enough evidence to show a difference in nurses' knowledge and attitude scores in relation to training (p = 0.41) or area of clinical practice (Medical-Surgical or ICU) (p = 0.53). Conclusion and Recommendations: Findings of the study showed that the level of knowledge and attitude of nurses on pain assessment and management is suboptimal, and that there is no relationship between nurses' knowledge and attitude and years of experience. It is recommended that further studies look into the nursing curriculum on pain education, culture-specific pain management protocols, and evidence-based practices in the country. Keywords: knowledge and attitude, nurses, pain management, practices on pain management
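The two-sample t-tests used above compare mean scores between groups (e.g., trained vs. untrained nurses). As an illustration, a sketch of Welch's unequal-variance form of the statistic (whether the study pooled variances is not stated, so this is an assumption):

```python
def welch_t(x, y):
    """Welch's two-sample t statistic and Welch-Satterthwaite df."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # Sample (n-1) variances of each group
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    se2 = vx / nx + vy / ny              # squared standard error of the difference
    t = (mx - my) / se2 ** 0.5
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df
```

The p-values reported above (0.41 and 0.53) correspond to |t| values well inside the null distribution for the study's sample sizes.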
Procedia PDF Downloads 348
396 The Distribution and Environmental Behavior of Heavy Metals in Jajarm Bauxite Mine, Northeast Iran
Authors: Hossein Hassani, Ali Rezaei
Abstract:
Heavy metals are naturally occurring elements that have a high atomic weight and a density at least five times greater than that of water. Their multiple industrial, domestic, agricultural, medical, and technological applications have led to their wide distribution in the environment, raising concerns over their potential effects on human health and the environment. Environmental protection against various pollutants, such as heavy metals released by industries, mines and modern technologies, is a concern for researchers and industry. In order to assess the contamination of soils, the distribution and environmental behavior of heavy metals have been investigated. The Jajarm bauxite mine is the most important bauxite deposit discovered in Iran, with reserves of about 22 million tons; its main mineral is diaspore. To estimate the heavy metal concentrations of the Jajarm bauxite mine area and to evaluate the pollution level, 50 samples were collected and analyzed for the heavy metals As, Cd, Cu, Hg, Ni and Pb with an Inductively Coupled Plasma-Mass Spectrometer (ICP-MS). In this study, evaluation criteria including the contamination factor (CF), average concentration (AV), enrichment factor (EF) and geoaccumulation index (GI) were determined to assess the risk of pollution from these heavy metals in the Jajarm bauxite mine. In the studied samples, the average recorded concentrations of arsenic, cadmium, copper, mercury, nickel and lead are 18, 0.11, 12, 0.07, 58 and 51 mg/kg, respectively. Comparison of the average heavy metal concentrations in the samples with the world averages for uncontaminated soils shows that the average Pb and As concentrations exceed those reference values.
The contamination factor for the studied elements was calculated on the basis of soil background concentrations and categorized relative to the world average for uncontaminated soils according to the Hakanson classification. The calculated modified degree of contamination for the average of the soil samples in the study area falls in the intermediate class (1.55-2.0), based on background values and world average values for uncontaminated soils. The contamination factor results show that, at some sampling stations, the average lead and arsenic values exceed background values, indicating non-natural metal concentrations within the study area attributable to mining and mineral extraction. The geoaccumulation index values for the soil samples indicate that the soils are uncontaminated with respect to copper, nickel, cadmium, arsenic, lead and mercury. In general, the results indicate that the Jajarm bauxite mine area is uncontaminated by heavy metal pollution and that extracting ore from the mine does not create environmental hazards in the region. Keywords: enrichment factor, geoaccumulation index, heavy metals, Jajarm bauxite mine, pollution
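The indices named above have standard definitions: the Hakanson contamination factor CF = C_sample / C_background; the Müller geoaccumulation index Igeo = log₂(C_sample / (1.5 · C_background)), where the factor 1.5 absorbs natural background variation; and the enrichment factor EF normalises sample and background by a conservative reference element (commonly Al or Fe). A sketch with hypothetical inputs, not the study's measured values:

```python
from math import log2

def contamination_factor(c_sample, c_background):
    """Hakanson CF: sample concentration over background concentration."""
    return c_sample / c_background

def geoaccumulation_index(c_sample, c_background):
    """Mueller Igeo; the 1.5 factor allows for natural background variation.

    Igeo <= 0 is conventionally read as 'uncontaminated'.
    """
    return log2(c_sample / (1.5 * c_background))

def enrichment_factor(c, ref, c_bg, ref_bg):
    """(C/ref) in the sample divided by (C/ref) in the background,
    with ref a conservative reference element such as Al or Fe."""
    return (c / ref) / (c_bg / ref_bg)
```

For instance, a Pb sample at exactly 1.5 times background sits on the Igeo = 0 boundary between the uncontaminated and uncontaminated-to-moderately-contaminated classes.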
Procedia PDF Downloads 291