Search results for: optimized criteria
315 Convective Boiling of CO₂/R744 in Macro and Micro-Channels
Authors: Adonis Menezes, J. C. Passos
Abstract:
The current panorama of heat transfer technology and the scarcity of information about the convective boiling of CO₂ and hydrocarbons in small-diameter channels motivated this work. Among non-halogenated refrigerants, CO₂/R744 has distinct thermodynamic properties compared to other fluids. R744 operates at significantly higher pressures and temperatures than other refrigerants, and this represents a challenge for the design of new evaporators, since existing systems must normally be resized to meet the specific characteristics of R744, which creates the need for new design and optimization criteria. To carry out the convective boiling tests of CO₂, an experimental apparatus capable of storing m = 10 kg of saturated CO₂ at T = -30 °C in an accumulator tank was used; this fluid was then pumped by a three-piston positive displacement pump whose controlled outlet pressure could reach up to P = 110 bar. The high-pressure saturated fluid passed through a Coriolis-type flow meter, with mass velocities varying between G = 20 kg/m².s and G = 1000 kg/m².s. The fluid was then sent to the first test section, of circular cross-section with diameter D = 4.57 mm, where the inlet and outlet temperatures and pressures were controlled and heating was provided by the Joule effect using a direct current source with a maximum heat flux of q = 100 kW/m². The second test section used seven parallel channels with a square cross-section of D = 2 mm each; this section also had temperature and pressure control at the inlet and outlet, and heating was again provided by a direct current source, with a maximum heat flux of q = 20 kW/m². The two-phase fluid was then directed to a parallel-plate heat exchanger to return it to the liquid state, so that it could flow back to the accumulator tank and continue the cycle. The multi-channel test section has a viewing section; a high-speed CMOS camera was used for image acquisition, making it possible to observe the flow patterns. The experiments presented in this report were conducted rigorously, enabling the development of a database on the convective boiling of R744 in macro- and micro-channels. The analysis prioritized the processes from the onset of convective boiling until wall dryout in the subcritical regime. R744 resurfaces as an excellent alternative to chlorofluorocarbon refrigerants due to its negligible ODP (Ozone Depletion Potential) and GWP (Global Warming Potential), among other advantages. The results of the experimental tests were very promising for the use of CO₂ in micro-channels under convective boiling and served as a basis for determining the flow pattern map and a correlation for the heat transfer coefficient in the convective boiling of CO₂.
Keywords: convective boiling, CO₂/R744, macro-channels, micro-channels
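To illustrate how a heat transfer coefficient of the kind correlated in this study is typically obtained from the measured quantities (imposed heat flux, wall and saturation temperatures), the sketch below applies Newton's law of cooling; the temperature values are placeholders, not data from the experiments.

```python
# Illustrative only: local heat transfer coefficient from an imposed heat flux
# and measured wall/saturation temperatures (placeholder values, not study data).

def heat_transfer_coefficient(q_flux_w_m2, t_wall_c, t_sat_c):
    """h = q'' / (T_wall - T_sat), in W/(m^2.K)."""
    return q_flux_w_m2 / (t_wall_c - t_sat_c)

q_flux = 100e3                  # W/m^2, maximum Joule-effect heat flux of the first test section
t_wall, t_sat = -24.0, -30.0    # degC, hypothetical wall and saturation temperatures

h = heat_transfer_coefficient(q_flux, t_wall, t_sat)
print(f"h = {h:.0f} W/(m^2.K)")   # 100000 / 6 ≈ 16667 W/(m^2.K)
```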
314 Bioclimatic Devices in the Historical Rural Building: A Carried out Analysis on Some Rural Architectures in Puglia
Authors: Valentina Adduci
Abstract:
This ongoing research aims to define the criteria of environmental sustainability of rural buildings in Puglia in general, and of the manor farm in particular. The main part of the study analyzes the relationship and dependence between the rural building and the landscape, which, after many stratifications, remains clearly identifiable and is sometimes characterized in a positive way. The location of the manor farm, in fact, is often conditioned by the infrastructural network and by the structure of the agricultural landscape. The manor farm, free from the constraints imposed by dense urban patterns, developed according to a settlement logic that gives priority to environmental aspects. These vernacular architectures are the most valuable example of how our ancestors planned their dwellings according to nature. The 237 farms analyzed have been mapped in a GIS; a symbol was assigned to each of them to identify the architectural typology, and a different color to indicate the historical period of construction. A datasheet template was drawn up, which made possible a deeper understanding of each manor farm. This method provides a faster comparison of the most recurring characters across all the buildings considered, except for those farms that benefited from special geographical conditions, such as proximity to the road network or to waterways. Some of the most frequently recurring features derived from the statistical study of the examined buildings are: south-east orientation of the main facade; placement of the sheep pen on sloping ground exposed to the south; larger windowed surface on the south elevation; smaller windowed surface on the north elevation; presence of shielding vegetation near the elevations most exposed to solar radiation; food storage rooms located on the ground floor or in the basement; animal shelters located on the north side of the farm; presence of tanks and wells, sometimes combined with a very accurate storm-water channeling system; thick masonry walls, inside which hollow spaces were often created to house stairwells or food storage depots; exclusive use of local building materials. The research aims to trace the ancient use of bioclimatic construction techniques in Apulian rural architecture and to distinguish those that derive from empirical knowledge from those that respond to an already codified design. These constructive expedients are especially useful for obtaining effective passive cooling, promoting natural ventilation, and building ingenious systems for the recovery and preservation of rainwater, and they are still found in some of the manor farms analyzed, most of which are today in a serious state of neglect.
Keywords: bioclimatic devices, farmstead, rural landscape, sustainability
313 Will My Home Remain My Castle? Tenants’ Interview Topics regarding an Eco-Friendly Refurbishment Strategy in a Neighborhood in Germany
Authors: Karin Schakib-Ekbatan, Annette Roser
Abstract:
According to the Federal Government’s plans, the German building stock should be virtually climate neutral by 2050. Thus, the “EnEff.Gebäude.2050” funding initiative was launched, complementing the projects of the Energy Transition Construction research initiative. Beyond the construction and renovation of individual buildings, solutions must be found at the neighborhood level. The subject of the presented pilot project is a building ensemble from the Wilhelminian period in Munich, which is planned to be refurbished based on a socially compatible, energy-saving, technically innovative modernization concept. The building ensemble, with about 200 apartments, belongs to a building cooperative. To create an optimized network and possible synergies between researchers and projects of the funding initiative, a scientific accompanying research programme was established for cross-project analyses of findings and results, in order to identify further research needs and trends. Thus, the project is characterized by an interdisciplinary approach that combines constructional, technical, and socio-scientific expertise, based on a participatory understanding of research that involves the tenants at an early stage. The research focus is on gaining insights into the tenants’ comfort requirements, attitudes, and energy-related behaviour. Both qualitative and quantitative methods are applied, based on the Technology Acceptance Model (TAM). The core of the refurbishment strategy is a wall heating system intended to replace conventional radiators. Wall heating provides comfortable and consistent radiant heat instead of convection heat, which often causes drafts and dust turbulence. Besides comfort and health, an advantage of wall heating systems is energy-saving operation. All apartments would be supplied by a uniform basic temperature control system (a perceived room temperature of around 18 °C / 64.4 °F), which could be adapted to individual preferences via individual heating options (e.g. infrared heating). The new heating system would affect the furnishing of the walls, in that the wall surface could not be covered too much with cupboards or pictures. Measurements and simulations of the energy consumption of an installed wall heating system are currently being carried out in a show apartment in this neighborhood to investigate energy-related and economic aspects as well as thermal comfort. In March, interviews were conducted with a total of 12 people in 10 households. The interviews were analyzed with MAXQDA. The main issue raised in the interviews was the fear of reduced self-efficacy within their own walls (not having sufficient individual control over the room temperature or being very limited in furnishing). Other issues concerned the impact that the construction works might have on daily life, such as noise or dirt. Despite their basically positive attitude towards a climate-friendly refurbishment concept, tenants were very concerned about the further development of the project, and they expressed a great need for information events. The results of the interviews will be used for project-internal discussions on technical and psychological aspects of the refurbishment strategy, in order to design accompanying workshops with the tenants as well as to prepare a written survey involving all households of the neighborhood.
Keywords: energy efficiency, interviews, participation, refurbishment, residential buildings
312 Thinking Historiographically in the 21st Century: The Case of Spanish Musicology, a History of Music without History
Authors: Carmen Noheda
Abstract:
This text offers a reflection on how the history of music is studied, by examining historiographical production in Spain at the turn of the century. Based on concepts developed by the historical theorist Jörn Rüsen, the article focuses on the following aspects: the theoretical artifacts that structure the interpretation of the limits of writing the history of music, the narrative patterns used to give meaning to the discourse of history, and the orientation context that functions as a source of criteria of significance for both interpretation and representation. This analysis intends to show that historical music theory is not only a means to explore abstractly the complex questions connected to the production of historical knowledge, but also a tool for obtaining concrete images of the intellectual practice of professional musicologists. Writing about the historiography of contemporary Spanish music is a task that requires both knowledge of the history that is being written and investigated and familiarity with current theoretical trends and methodologies, which allow the different tendencies that have arisen in recent decades to be recognized and defined. With the objective of carrying out these premises, this project takes as its point of departure the 'immediate historiography' in relation to Spanish music at the beginning of the 21st century. The hesitation that Spanish musicology has shown in opening itself to new anthropological and sociological approaches, along with its rigidity in the face of the multiple shifts in dynamic forms of thinking about history, has produced a standstill whose consequences can be seen in the delayed reception of the historiographical revolutions that emerged in the last century. Methodologically, this essay is underpinned by Rüsen’s notion of the disciplinary matrix, which is an important contribution to the understanding of historiography. Combined with his parallel conception of differing paradigms of historiography, it is useful for analyzing present-day forms of thinking about the history of music. Following these theories, the article will first address the characteristics and identification of present historiographical currents in Spanish musicology and thereby carry out an analysis based on the theories of Rüsen. Finally, it will establish some considerations for the future of musical historiography, whose atrophy has not only fostered the maintenance of an ingrained positivist tradition but has also implied, in the case of Spain, an absence of methodological schools and an insufficient participation in international theoretical debates. An update of fundamental concepts has become necessary in order to understand that thinking historically about music demands that we remember that subjects are always linked by reciprocal interdependencies that structure and define what it is possible to create. In this sense, the fundamental aim of this research departs from the recognition that the history of music is embedded in the conditions that make it conceivable, communicable and comprehensible within a society.
Keywords: historiography, Jörn Rüsen, Spanish musicology, theory of history of music
311 Establishment of Farmed Fish Welfare Biomarkers Using an Omics Approach
Authors: Pedro M. Rodrigues, Claudia Raposo, Denise Schrama, Marco Cerqueira
Abstract:
Farmed fish welfare is a very recent concept, widely discussed among the scientific community. Consumers’ interest in farmed animal welfare standards has increased significantly in recent years, posing a huge challenge to producers, who must maintain an equilibrium between good welfare principles and productivity while simultaneously achieving public acceptance. The major bottleneck of standard aquaculture is that it considerably impairs fish welfare throughout the production cycle and, with this, the quality of fish protein. Welfare assessment in farmed fish is undertaken through the evaluation of fish stress responses. Primary and secondary stress responses include the release of cortisol and of glucose and lactate into the blood stream, respectively, which are currently the most commonly used indicators of stress exposure. However, the reliability of these indicators is highly dubious, due to the high variability of fish responses to an acute stress and the adaptation of the animal to a repetitive chronic stress. Our objective is to use comparative proteomics to identify and validate a fingerprint of proteins that can present a more reliable alternative to the already established welfare indicators. In this way, culture conditions will improve and there will be a better understanding of the mechanisms and metabolic pathways involved in the welfare of the produced organisms. Due to its high economic importance in Portuguese aquaculture, gilthead seabream was the species elected for this study. Protein extracts from gilthead seabream muscle, liver and plasma, from fish reared for a three-month period under optimized culture conditions (control) and induced stress conditions (handling, high densities, and hypoxia), are collected and used to identify a putative fish welfare protein marker fingerprint using a proteomics approach. Three tanks per condition and three biological replicates per tank are used for each analysis. Briefly, proteins from the target tissue/fluid are extracted using standard established protocols. Protein extracts are then separated using 2D-DIGE (difference gel electrophoresis). Proteins differentially expressed between control and induced stress conditions are identified by mass spectrometry (LC-MS/MS) using the NCBInr databank (taxonomic level Actinopterygii) and the Mascot search engine. The statistical analysis is performed in the R software environment, using a one-tailed Mann-Whitney U-test (p < 0.05) to assess which proteins were differentially expressed in a statistically significant way. Validation of these proteins will be done by comparing the RT-qPCR (quantitative reverse transcription polymerase chain reaction) gene expression pattern with the proteomic profile. Cortisol, glucose, and lactate are also measured in order to confirm or refute the reliability of these indicators. The liver proteins identified under handling and high-density induced stress conditions are involved in several metabolic pathways, including primary metabolism (i.e. glycolysis, gluconeogenesis), ammonia metabolism, cytoskeleton proteins, signaling proteins, and lipid transport. Validation of these proteins, as well as identical analyses in muscle and plasma, are underway. Proteomics is a promising high-throughput technique that can be successfully applied to identify putative welfare protein biomarkers in farmed fish.
Keywords: aquaculture, fish welfare, proteomics, welfare biomarkers
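As an illustration of the differential-expression test described above (a one-tailed Mann-Whitney U-test at p < 0.05 between control and stressed fish), the following sketch uses Python/SciPy instead of the R environment mentioned in the abstract; the spot-volume values are invented placeholders, not measured data.

```python
# Minimal sketch of the differential-expression test described above:
# one-tailed Mann-Whitney U-test comparing a protein spot's normalized volume
# in control vs. stressed fish (placeholder values, 3 tanks x 3 replicates each).
from scipy.stats import mannwhitneyu

control  = [1.02, 0.98, 1.05, 0.97, 1.01, 1.00, 0.99, 1.03, 1.04]
stressed = [1.35, 1.28, 1.41, 1.22, 1.30, 1.38, 1.25, 1.33, 1.29]

# alternative="greater" tests whether the spot is up-regulated under stress
u_stat, p_value = mannwhitneyu(stressed, control, alternative="greater")
print(f"U = {u_stat}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Spot flagged as differentially expressed (candidate welfare marker).")
```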
310 A Therapeutic Approach for Bromhidrosis with Glycopyrrolate 2% Cream: Clinical Study of 20 Patients
Authors: Vasiliki Markantoni, Eftychia Platsidaki, Georgios Chaidemenos, Georgios Kontochristopoulos
Abstract:
Introduction: Bromhidrosis, also known as osmidrosis, is a common distressing condition with a significant negative effect on patients’ quality of life. Its etiology is multifactorial. It usually affects the axillae, genital skin, breasts and soles, areas where apocrine glands are mostly distributed. Therapeutic options include topical antibacterial agents, antiperspirants and neuromuscular blocking agents (toxins). In this study, we aimed to evaluate the efficacy and possible complications of topical glycopyrrolate, an anticholinergic agent, for the treatment of bromhidrosis. Glycopyrrolate, applied topically as a cream, solution or spray at concentrations between 0.5% and 4%, has been successfully used to treat different forms of focal hyperhidrosis. Materials and Methods: Twenty patients, six males and fourteen females, meeting the criteria for bromhidrosis were treated with topical glycopyrrolate for two months. The average age was 36. Eleven patients had bromhidrosis located in the axillae, four in the soles, four in both axillae and soles, and one in the genital folds. Glycopyrrolate was applied topically as a 2% cream formulated in Fitalite. During the first month, patients used the cream every night and thereafter twice daily. The degree of malodor was assessed subjectively by the patients and graded as ‘none’, ‘mild’, ‘moderate’, and ‘severe’, with corresponding scores of 0, 1, 2, and 3, respectively. The modified Dermatology Life Quality Index (DLQI) was used to assess quality of life. Clinical efficacy was graded by the patients on a scale of excellent, good, fair and poor. At the end, patients were asked to evaluate whether they were totally satisfied, partially satisfied or unsatisfied, and possible side effects during the treatment were recorded. Results: All patients were satisfied at the end of the treatment. No patient described the response as no improvement. The subjectively assessed bromhidrosis score was remarkably improved after the first month of treatment and improved slightly more after the second month. The DLQI score also improved in all patients. Adverse effects were reported in 2 patients. In the first case, local irritation was reported; this was classed as mild (erythema and desquamation), appeared during the second month of treatment and was treated with low-potency topical corticosteroids. In the second case, mydriasis was reported, which resolved without specific treatment once we stressed the importance of careful hygiene after cream application so as not to contaminate the periocular skin or ocular surface. Conclusions: Dermatologists often encounter patients with bromhidrosis and should therefore be aware of treatment options. To the best of our knowledge, this is the first study to evaluate the use of topical glycopyrrolate as a therapeutic approach for bromhidrosis. Our findings suggest that topical glycopyrrolate has an excellent safety profile and demonstrate encouraging results for the management of this distressful condition.
Keywords: bromhidrosis, glycopyrrolate, topical treatment, osmidrosis
309 Thermal Characterisation of Multi-Coated Lightweight Brake Rotors for Passenger Cars
Authors: Ankit Khurana
Abstract:
Sufficient heat storage capacity, or the ability to dissipate heat, is the most decisive parameter for the effective and efficient functioning of friction-based brake disc systems. The primary aim of the research was to analyse the effect of multiple coatings on the surface of lightweight disc rotors, which not only reduces vehicle mass but also augments heat transfer. This research is intended to give the automotive community a clear view of the thermal aspects of a braking system. The results of the project indicate that, with modern coating technologies, a brake system’s thermal limitations can be removed and, together with forced convection, heat transfer processes can improve drastically, leading to an increased lifetime of the brake rotor. Other advantages of modifying the surface of a lightweight rotor substrate are a reduced overall vehicle weight, a lower risk of thermal brake failure (brake fade and fluid vaporization), longer component life, and lower noise and vibration. A mathematical model was constructed in MATLAB encompassing the thermal characteristics of the proposed coatings and substrate materials, required to approximate the heat flux values in free and forced convection environments; this resembles a real braking event and could readily be transferred to a full-scale model of the alloy brake rotor part in ABAQUS. The finite element model of the brake rotor was built in a constrained environment such that the nodal temperatures between the contact surfaces of the coatings and the substrate (wrought aluminum alloy) behave as an amalgamated solid brake rotor element. Initial results were obtained for a plasma electrolytic oxidized (PEO) substrate, in which the aluminum alloy grows a hard ceramic oxide layer on its transitional phase. The rotor was modelled and then evaluated for a constant-g braking event (based on the mathematical heat flux input and convective surroundings), which showed the need to deposit a sacrificial conducting coat above the PEO layer in order to prevent premature thermal degradation of the barrier coating. A Taguchi study was then used to identify the critical factors that may influence the maximum operating temperature of a multi-coated brake disc by simulating brake tests: a) an Alpine descent lasting 50 seconds; b) an Autobahn stop lasting 3.53 seconds; c) six repeated high-speed stops in accordance with FMVSS 135, lasting 46.25 seconds. Thermal barrier coating thickness and vane heat transfer coefficient were the two most influential factors, and, accounting for their design and manufacturing constraints, a final optimized model was obtained which survived the six high-speed stop test per the FMVSS 135 specifications. The simulation data highlighted the merits of preferring wrought aluminum alloy 7068 over grey cast iron and aluminum metal matrix composite, in coherence with the multiple coating depositions.
Keywords: lightweight brakes, surface modification, simulated braking, PEO, aluminum
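As a rough illustration of the heat-flux boundary condition fed to models of a constant-g braking event, the sketch below estimates the average flux into one rotor from the vehicle's kinetic energy; every vehicle and brake parameter shown is an assumption for illustration, not a value from the study.

```python
# Rough estimate of the average heat flux into a brake rotor during a constant-g stop.
# All vehicle/brake parameters below are illustrative assumptions, not study data.
import math

m_vehicle = 1500.0      # kg, vehicle mass
v0 = 27.8               # m/s (~100 km/h), initial speed
decel_g = 0.8           # constant deceleration in g
g = 9.81

share_front_axle = 0.7  # fraction of braking energy taken by the front axle
n_front_rotors = 2
r_out, r_in = 0.15, 0.09                        # m, rubbed annulus radii
a_rubbed = 2 * math.pi * (r_out**2 - r_in**2)   # both faces of one rotor

t_stop = v0 / (decel_g * g)                     # stop duration, s
e_kinetic = 0.5 * m_vehicle * v0**2             # J
e_per_rotor = e_kinetic * share_front_axle / n_front_rotors
q_avg = e_per_rotor / (a_rubbed * t_stop)       # W/m^2, average flux

print(f"t_stop = {t_stop:.2f} s, q_avg = {q_avg/1e3:.0f} kW/m^2")
```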
308 Sustainable Harvesting, Conservation and Analysis of Genetic Diversity in Polygonatum Verticillatum Linn.
Authors: Anchal Rana
Abstract:
The Indian Himalayas, with their diverse climatic conditions, are home to many rare and endangered medicinal flora. One such species is Polygonatum verticillatum Linn., popularly known as King Solomon’s Seal or Solomon’s Seal. Its mention as a remarkable medicinal herb goes back 5000 years in the Indian Materia Medica as a component of Ashtavarga, a poly-herbal formulation comprising eight herbs and described as the world’s first revitalizing and rejuvenating nutraceutical food, now commercialised under the name ‘Chaywanprash’. It is an erect, tall (60 to 120 cm) perennial herb with sessile, linear leaves and white pendulous flowers. The species grows well in an altitude range of 1600 to 3600 m amsl and propagates mostly through rhizomes. The rhizomes are a potential source of significant phytochemicals such as flavonoids, phenolics, lectins, terpenoids, allantoin, diosgenin, β-sitosterol and quinine. The presence of such phytochemicals makes the species valuable for its antioxidant, cardiotonic, demulcent, diuretic, energizer, emollient, aphrodisiac, appetizer, galactagogue, etc. properties. Having substantial concentrations of macro- and micronutrients, the species also has good prospects of being used as a diet supplement. However, due to unscientific and indiscriminate uprooting, it has been assigned the status of ‘vulnerable’ and ‘endangered’ in the Conservation Assessment and Management Plan (CAMP) process conducted by the Foundation for Revitalisation of Local Health Traditions (FRLHT) during 2010, according to IUCN Red List criteria. Further, destructive harvesting, land use disturbances, heavy livestock grazing, climatic changes and habitat fragmentation have substantially contributed to the decline of the species. It therefore became imperative to conserve the diversity of the species and make judicious use of it in future research and commercial programmes and schemes. A gene bank was therefore established at the High Altitude Herbal Garden of the Forest Research Institute, Dehradun, India, situated at Chakarata (30°42'52.99''N, 77°51'36.77''E, 2205 m amsl), consisting of 149 accessions collected from thirty-one geographical locations spread over the three Himalayan states of Jammu and Kashmir, Himachal Pradesh, and Uttarakhand. The present investigation covers the sampling and collection of divergent germplasm followed by planting and cultivation techniques. The ultimate aim is to analyse the genetic diversity of the species and capture promising genotypes for a further genetic improvement programme, so as to contribute towards sustainable development and healthcare.
Keywords: Polygonatum verticillatum Linn., phytochemicals, genetic diversity, conservation, gene bank
307 Determinants of Quality of Life in Patients with Atypical Parkinsonian Syndromes: 1-Year Follow-Up Study
Authors: Tatjana Pekmezovic, Milica Jecmenica-Lukic, Igor Petrovic, Vladimir Kostic
Abstract:
Background: The group of atypical parkinsonian syndromes (APS) includes a variety of rare neurodegenerative disorders characterized by reduced life expectancy, increasing disability, and considerable impact on health-related quality of life (HRQoL). Aim: In this study we wanted to answer two questions: a) which demographic and clinical factors are the main contributors to HRQoL in our cohort of patients with APS, and b) how does the quality of life of these patients change over a 1-year follow-up period. Patients and Methods: We conducted a prospective cohort study in a hospital setting. The initial study comprised all consecutive patients referred to the Department of Movement Disorders, Clinic of Neurology, Clinical Centre of Serbia, Faculty of Medicine, University of Belgrade (Serbia), from January 31, 2000 to July 31, 2013, with initial diagnoses of ‘Parkinson’s disease’, ‘parkinsonism’, ‘atypical parkinsonism’ and ‘parkinsonism plus’ made within the first 8 months from the appearance of the first symptom(s). The patients were afterwards followed regularly at 4-6 month intervals, and the diagnoses were eventually established for 46 patients fulfilling the criteria for clinically probable progressive supranuclear palsy (PSP) and 36 patients for probable multiple system atrophy (MSA). Health-related quality of life was assessed using the SF-36 questionnaire (Serbian translation). Hierarchical multiple regression analysis was conducted to identify predictors of the composite scores of the SF-36. The importance of changes in quality of life scores of patients with APS between baseline and the follow-up time-point was quantified using the Wilcoxon signed ranks test. The magnitude of any difference in quality of life was calculated as an effect size (ES). Results: The final models of the hierarchical regression analysis showed that apathy, measured by the Apathy Evaluation Scale (AES), accounted for 59% of the variance in the Physical Health Composite Score of the SF-36 and 14% of the variance in the Mental Health Composite Score of the SF-36 (p<0.01). Changes in HRQoL were assessed in 52 patients with APS who completed the 1-year follow-up period. The analysis of the magnitude of changes in HRQoL during the one-year follow-up period has shown a sustained medium ES (0.50-0.79) for both the Physical and Mental Health composite scores and total quality of life, as well as for Physical Health, Vitality, Role Emotional and Social Functioning. Conclusion: This study provides insight into new potential predictors of HRQoL and its changes over time in patients with APS. Additionally, the identification of both prognostic markers of poor HRQoL and the magnitude of its changes should be considered when developing comprehensive treatment-related strategies and health care programs aimed at improving HRQoL and well-being in patients with APS.
Keywords: atypical parkinsonian syndromes, follow-up study, quality of life, APS
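As a sketch of how an effect size in the medium range reported above (0.50-0.79) can be derived from a Wilcoxon signed-rank comparison of baseline and follow-up scores, the code below uses the normal approximation r = |z|/sqrt(n); the score values are invented placeholders, not patient data.

```python
# Effect size for a paired (baseline vs. 1-year) comparison of SF-36 scores,
# using the Wilcoxon signed-rank test and r = |z| / sqrt(n).
# The scores below are placeholders, not data from the APS cohort.
import math
from scipy.stats import norm, wilcoxon

baseline  = [62, 55, 70, 48, 66, 59, 73, 51, 64, 58, 69, 54]
follow_up = [53, 57, 63, 44, 60, 60, 63, 48, 59, 66, 57, 43]

n = len(baseline)
res = wilcoxon(baseline, follow_up)
# Convert the two-sided p-value to an approximate |z| and then to r
z = norm.isf(res.pvalue / 2)
r = z / math.sqrt(n)
print(f"W = {res.statistic}, p = {res.pvalue:.4f}, effect size r = {r:.2f}")
# With these placeholder scores r comes out around 0.6, i.e. a medium effect
# by the thresholds quoted in the abstract.
```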
306 Diagnostic Performance of Mean Platelet Volume in the Diagnosis of Acute Myocardial Infarction: A Meta-Analysis
Authors: Kathrina Aseanne Acapulco-Gomez, Shayne Julieane Morales, Tzar Francis Verame
Abstract:
Mean platelet volume (MPV) is the most accurate measure of the size of platelets and is routinely measured by most automated hematological analyzers. Several studies have shown associations between MPV and cardiovascular risks and outcomes. Although its measurement may provide useful data, MPV remains a diagnostic tool that is yet to be included in routine clinical decision making. The aim of this systematic review and meta-analysis is to determine summary estimates of the diagnostic accuracy of mean platelet volume for the diagnosis of myocardial infarction among adult patients with angina and/or its equivalents, in terms of sensitivity, specificity, diagnostic odds ratio, and likelihood ratios, and to determine the difference in mean MPV values between those with MI and non-MI controls. The primary search was done through the electronic databases PubMed, Cochrane CENTRAL, HERDIN (Health Research and Development Information Network), Google Scholar, Philippine Journal of Pathology, and the Philippine College of Physicians Philippine Journal of Internal Medicine. The reference lists of original reports were also searched. Cross-sectional, cohort, and case-control articles studying the diagnostic performance of mean platelet volume in the diagnosis of acute myocardial infarction in adult patients were included. Studies were included if: (1) CBC was taken upon presentation to the ER or upon admission (within 24 hours of symptom onset); (2) myocardial infarction was diagnosed with serum markers, ECG, or according to guidelines accepted by the cardiology societies (American Heart Association (AHA), American College of Cardiology (ACC), European Society of Cardiology (ESC)); and (3) outcomes were measured as a significant difference and/or sensitivity and specificity. The authors independently screened all potential studies identified by the search for inclusion. Eligible studies were appraised using well-defined criteria. Any disagreement between the reviewers was resolved through discussion and consensus. The overall mean MPV value of those with MI (9.702 fl; 95% CI 9.07-10.33) was higher than that of the non-MI control group (8.85 fl; 95% CI 8.23-9.46). Interpretation of the calculated t-value of 2.0827 showed that there was a significant difference in the mean MPV values between those with MI and the non-MI controls. The summary sensitivity (Se) and specificity (Sp) for MPV were 0.66 (95% CI 0.59-0.73) and 0.60 (95% CI 0.43-0.75), respectively. The pooled diagnostic odds ratio (DOR) was 2.92 (95% CI 1.90-4.50). The positive likelihood ratio of MPV in the diagnosis of myocardial infarction was 1.65 (95% CI 1.20-22.27), and the negative likelihood ratio was 0.56 (95% CI 0.50-0.64). The intended role for MPV in the diagnostic pathway of myocardial infarction would perhaps be best as a triage tool. With a DOR of 2.92, MPV values can discriminate between those who have MI and those without. For a patient with angina presenting with an elevated MPV value, it is 1.65 times more likely that he has MI. Thus, it is implied that the decision to treat a patient with angina or its equivalents as a case of MI could be supported by an elevated MPV value.
Keywords: mean platelet volume, MPV, myocardial infarction, angina, chest pain
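The internal consistency of the pooled estimates quoted above can be checked directly, since the likelihood ratios and the diagnostic odds ratio follow from sensitivity and specificity alone; a minimal sketch using the summary values from the meta-analysis:

```python
# Reproduce the pooled likelihood ratios and diagnostic odds ratio
# from the summary sensitivity and specificity reported above.
sensitivity = 0.66
specificity = 0.60

lr_positive = sensitivity / (1 - specificity)   # ≈ 1.65
lr_negative = (1 - sensitivity) / specificity   # ≈ 0.57
dor = lr_positive / lr_negative                 # ≈ 2.9

# Matches the reported pooled values (LR+ 1.65, LR- 0.56, DOR 2.92) within rounding.
print(f"LR+ = {lr_positive:.2f}, LR- = {lr_negative:.2f}, DOR = {dor:.2f}")
```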
305 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandria: Spatial-Ecological Modeling
Authors: Mohammed El Raey, Moustafa Osman Mohammed
Abstract:
Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability combined with a spatial-ecological model gives attention to urban environments in design review and management so as to comply with the Earth system. Natural exchange patterns of ecosystems have consistent and periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex. The other analytical approach is Failure Mode and Effects Analysis (FMEA) for critical components. The plant safety parameters are identified for the engineering topology employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed and assessed in the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and of the concentration of chlorine gas in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The prediction of accident consequences is traced in risk contour concentration lines. The local greenhouse effect is predicted, with relevant conclusions. The spatial-ecological model also predicts the distribution schemes from the perspective of pollutants, considering multiple factors in a multi-criteria analysis. The data extend input–output analysis to evaluate the spillover effect, and Monte Carlo simulations and sensitivity analyses were conducted. These unique structures are balanced within “equilibrium patterns”, such as the biosphere, and collectively form a composite index of many distributed feedback flows. These dynamic structures are related to their physical and chemical properties and enable a gradual and prolonged incremental pattern. While this spatial model structure argues from ecology, resource savings, static load design, financial and other pragmatic reasons, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structures for developing urban environments using optimization software, applied as an example of an integrated industrial structure where the process is based on engineering topology as an optimization approach to systems ecology.
Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology
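A minimal sketch of the ground-level Gaussian plume calculation referred to above for a continuous chlorine release; the source strength, wind speed, effective release height and the Briggs-type rural dispersion coefficients are illustrative assumptions, not parameters from the Alexandria assessment.

```python
# Ground-level concentration downwind of a continuous point release,
# using the standard Gaussian plume formula with ground reflection.
# Q, u, H and the sigma formulas below are illustrative assumptions only.
import numpy as np

def gaussian_plume(x, y, z, Q, u, H, sigma_y, sigma_z):
    """C(x, y, z) in kg/m^3 for a continuous release of Q kg/s in wind u m/s."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # reflection at the ground
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

Q = 2.0      # kg/s of chlorine (assumed release rate)
u = 3.0      # m/s wind speed (assumed)
H = 2.0      # m effective release height (assumed)

x = np.array([200.0, 500.0, 1000.0])        # m downwind
# Rough rural, neutral-stability dispersion coefficients (Briggs-type, assumed)
sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)
sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)

c_centerline = gaussian_plume(x, 0.0, 0.0, Q, u, H, sigma_y, sigma_z)
for xi, ci in zip(x, c_centerline):
    print(f"x = {xi:6.0f} m : C ≈ {ci * 1e6:8.1f} mg/m^3")
```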
304 Semantic Differential Technique as a Kansei Engineering Tool to Enquire Public Space Design Requirements: The Case of Parks in Tehran
Authors: Nasser Koleini Mamaghani, Sara Mostowfi
Abstract:
The complexity of public space design makes it difficult for designers to simultaneously consider all the issues needed for thorough decision-making. Among public spaces, the public space around people’s homes is the most prominent space affecting people’s daily life. For recreational public spaces in cities, the main purpose is to design for experiences that enable a deep feeling of peace and a moment away from hectic daily life. Respecting human emotions and restoring natural environments, although difficult and to some extent out of reach, are key issues for designing such spaces. In this paper we propose to analyse the structure of recreational public spaces and the related emotional impressions, and to investigate how these structures influence people’s choice of public spaces by using differential semantics. According to the Kansei methodology, in order to evaluate a situation appropriately, the assessment variables must be adapted to the user’s mental scheme. This means that the first step has to be the identification of a space’s conceptual scheme. In our case study, 32 Kansei words and 4 different locations, each offering a different sensory experience, were selected. The 4 locations were all parks in the city of Tehran (Iran), each with a unique structure and artifacts such as fountains, lighting, sculptures, and music. Each of these parks has a different combination and structure of environmental and artificial elements such as fountains, lighting, sculpture, music (sound) and so forth. The first was park No. 1, a park with a natural environment; the selected space was a fountain with moving lights and a sculpture. The second was park No. 2, in which there are different styles of park construction from different countries; the selected space featured traditional Iranian architecture with a fountain and trees. The third was park No. 3, a park with a modern environment and spaces, which includes a fountain that moves according to music and lighting. The fourth was park No. 4, a park built around the four elements of water, fire, earth and wind; the selected space was fountains squirting water from the ground up. 80 participants (55 males and 25 females) aged 20-60 years took part in this experiment. Each person completed the questionnaire in the park he/she was in. A five-point semantic differential scale was used to determine the relation between space details and adjectives (Kansei words). The collected data were analyzed by a multivariate statistical technique (factor analysis using SPSS Statistics). Finally, the results of this analysis provide criteria that can serve as inspiration in future space design for creating pleasant feelings in users.
Keywords: environmental design, differential semantics, Kansei engineering, subjective preferences, space
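A minimal sketch of the analysis step described above, i.e. factor analysis of the five-point semantic-differential ratings of the Kansei words; it uses scikit-learn in place of SPSS, and the rating matrix is random placeholder data rather than the 80 collected questionnaires.

```python
# Factor analysis of a participants x Kansei-words rating matrix (5-point scale).
# Placeholder random ratings stand in for the 80 questionnaires; the study used
# SPSS, here scikit-learn is used purely for illustration.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_participants, n_kansei_words = 80, 32
ratings = rng.integers(1, 6, size=(n_participants, n_kansei_words)).astype(float)

X = StandardScaler().fit_transform(ratings)
fa = FactorAnalysis(n_components=4, random_state=0)   # e.g. one latent factor per theme
fa.fit(X)

# Loadings show which Kansei words cluster on which latent emotional factor
loadings = fa.components_.T                           # shape (n_kansei_words, n_factors)
top_words = np.argsort(-np.abs(loadings[:, 0]))[:5]
print("Kansei words loading most strongly on factor 1:", top_words)
```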
303 Anti-DNA Antibodies from Patients with Schizophrenia Hydrolyze DNA
Authors: Evgeny A. Ermakov, Lyudmila P. Smirnova, Valentina N. Buneva
Abstract:
Schizophrenia is associated with dysregulation of neurotransmitter processes in the central nervous system and with disturbances in the humoral immune system, resulting in the formation of antibodies (Abs) to various components of the nervous tissue. Abs to different neuronal receptors and to DNA have been detected in the blood of patients with schizophrenia. Abs hydrolyzing DNA have been detected in pools of polyclonal autoantibodies in autoimmune and infectious diseases; such catalytic Abs were named abzymes. It is believed that DNA-hydrolyzing abzymes are cytotoxic, cause nuclear DNA fragmentation and induce cell death by apoptosis. Abzymes with DNase activity are interesting because of their mechanism of formation and the possibility of their use as diagnostic markers. Therefore, in this work we set the following goals: to determine the level of anti-DNA Abs in the serum of patients with schizophrenia and to study the DNA-hydrolyzing activity of IgG from patients with schizophrenia. Materials and methods: Our study included 41 patients with a verified diagnosis of paranoid or simple schizophrenia and 24 healthy donors. Electrophoretically and immunologically homogeneous IgGs were obtained by sequential affinity chromatography of the serum proteins on protein G-Sepharose and gel filtration. The levels of anti-DNA Abs were determined using ELISA. DNA-hydrolyzing activity was detected as the degree of conversion of supercoiled pBluescript DNA into circular and linear forms; the hydrolysis products were analyzed by agarose electrophoresis followed by ethidium bromide staining. To attribute the registered catalytic activity directly to the antibodies, we applied a number of strict criteria: electrophoretic homogeneity of the antibodies, gel filtration (acid shock analysis) and in situ activity. Statistical analysis was performed in Statistica 9.0 using the non-parametric Mann-Whitney test. Results: The sera of approximately 30% of schizophrenia patients displayed a higher level of Abs interacting with single-stranded DNA (ssDNA) and double-stranded DNA (dsDNA) compared with healthy donors. The average level of Abs interacting with ssDNA was only 1.1-fold lower than that for dsDNA. IgGs of patients with schizophrenia were shown to possess DNA-hydrolyzing activity. Using affinity chromatography, electrophoretic analysis of the homogeneity of the isolated IgG, gel filtration under acid shock conditions and in situ DNase activity analysis, we proved that the observed activity is an intrinsic property of the studied antibodies. We showed that the relative DNase activity of IgG in patients with schizophrenia averaged 55.4±32.5%, whereas IgG of healthy donors showed much lower activity (on average 9.1±6.5%). It should be noted that the DNase activity of IgG in patients with schizophrenia with negative symptoms was significantly higher (73.3±23.8%) than in patients with positive symptoms (43.3±33.1%). Conclusion: Anti-DNA Abs of patients with schizophrenia not only bind DNA but quite efficiently hydrolyze the substrate. The data show a correlation between the level of DNase activity and the leading symptoms of patients with schizophrenia.
Keywords: anti-DNA antibodies, abzymes, DNA hydrolysis, schizophrenia
302 A Comparison of Three Different Modalities in Improving Oral Hygiene in Adult Orthodontic Patients: An Open-Label Randomized Controlled Trial
Authors: Umair Shoukat Ali, Rashna Hoshang Sukhia, Mubassar Fida
Abstract:
Introduction: The objective of the study was to compare outcomes in terms of the Bleeding Index (BI), Gingival Index (GI), and Orthodontic Plaque Index (OPI) with video graphics and plaque disclosing tablets (PDT) versus verbal instructions in adult orthodontic patients undergoing fixed appliance treatment (FAT). Materials and Methods: Adult orthodontic patients who fulfilled the inclusion criteria were recruited from outpatient orthodontic clinics and randomly allocated to three groups, i.e., video, PDT, and verbal groups. We included patients of both genders undergoing FAT for six months, with all teeth bonded mesial to the first molars and no co-morbid conditions such as rheumatic fever and diabetes mellitus. Subjects who had gingivitis, as assessed by the BI, GI, and OPI, were recruited. We excluded subjects having > 2 mm of clinical attachment loss, pregnant and lactating females, any history of periodontal therapy within the last six months, and any consumption of antibiotics or anti-inflammatory drugs within the last month. Pre- and post-interventional measurements of BI, GI, and OPI were taken at two time points only. The primary outcome of this trial was the mean change in BI, GI, and OPI in the three study groups. A computer-generated randomization list was used to allocate subjects to one of the three study groups using randomly permuted blocks of 6 and 9. Neither the investigator nor the participants were blinded. Results: A total of 99 subjects were assessed for eligibility, of whom 96 were randomized, as three declined to be part of this trial. This resulted in an equal number of participants (32) being analyzed in each of the three groups. The mean change in the oral hygiene index scores was assessed, and we found no statistically significant difference among the three intervention groups. Pre- and post-interventional results showed statistically significant improvement in the oral hygiene indices for the video and PDT groups. No statistically significant effect of age, gender, or education level on the oral hygiene indices was found. Simple linear regression showed that the video group produced a significantly higher mean OPI change compared to the other groups. No harm was observed during the trial. Conclusions: Visual aids performed better than verbal instructions. Gender, age, and education level had no statistically significant impact on the oral hygiene indices. Longer follow-ups will be required to see the long-term effects of these interventions. Trial Registration: NCT04386421. Funding: Aga Khan University and Hospital (URC 183022)
Keywords: oral hygiene, orthodontic treatment, adults, randomized clinical trial
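The allocation step described above, a computer-generated list built from randomly permuted blocks of sizes 6 and 9 with a 1:1:1 ratio across the video, PDT and verbal groups, can be sketched as follows; this illustrates the method only and is not the list actually used in the trial.

```python
# Generate a 1:1:1 randomization list for the video, PDT and verbal groups
# using randomly permuted blocks of size 6 and 9 (illustrative sketch only).
import random

def permuted_block_list(n_subjects, groups=("video", "PDT", "verbal"),
                        block_sizes=(6, 9), seed=42):
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_subjects:
        size = rng.choice(block_sizes)            # block size is a multiple of 3
        block = list(groups) * (size // len(groups))
        rng.shuffle(block)                        # permute within the block
        allocation.extend(block)
    # The last block may be truncated, so final group counts can differ slightly.
    return allocation[:n_subjects]

allocation = permuted_block_list(96)
print(allocation[:12])
print({g: allocation.count(g) for g in ("video", "PDT", "verbal")})
```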
301 Electroactive Ferrocenyl Dendrimers as Transducers for Fabrication of Label-Free Electrochemical Immunosensor
Authors: Sudeshna Chandra, Christian Gäbler, Christian Schliebe, Heinrich Lang
Abstract:
Highly branched dendrimers provide structural homogeneity, controlled composition, sizes comparable to biomolecules, internal porosity and multiple functional groups for conjugation reactions. Electro-active dendrimers containing multiple redox units have generated great interest for use as electrode modifiers in the development of biosensors. The electron transfer between the redox-active dendrimers and the biomolecules plays a key role in developing a biosensor. Ferrocenes have multiple, electrochemically equivalent redox units that can act as an electron “pool” in a system. A ferrocenyl-terminated polyamidoamine dendrimer is capable of transferring multiple electrons under the same applied potential. Therefore, they can be used for dual purposes: building a film over the electrode for immunosensors and immobilizing biomolecules for sensing. Electrochemical immunosensors thus developed offer fast and sensitive analysis, are inexpensive, and involve no prior sample pre-treatment. Electrochemical amperometric immunosensors are even more promising because they can achieve a very low detection limit with high sensitivity. Detection of cancer biomarkers at an early stage can provide crucial information for foundational life-science research, clinical diagnosis and prevention of disease. An elevated concentration of biomarkers in body fluid is an early indication of some types of cancerous disease, and among all the biomarkers, IgG is the most common and extensively used clinical cancer biomarker. We present an IgG (immunoglobulin) electrochemical immunosensor using a newly synthesized redox-active ferrocenyl dendrimer of generation 2 (G2Fc) as the glassy carbon electrode (GCE) material for immobilizing the antibody. The electrochemical performance of the modified electrodes was assessed in both aqueous and non-aqueous media using varying scan rates to elucidate the reaction mechanism. The potential shift was found to be higher in the aqueous electrolyte due to the presence of more hydrogen bonds, which reduced the electrostatic attraction within the amido groups of the dendrimers. Cyclic voltammetric studies of the G2Fc-modified GCE in 0.1 M PBS solution of pH 7.2 showed a pair of well-defined redox peaks. The peak current decreased significantly with the immobilization of the anti-goat IgG. After the immunosensor was blocked with BSA, a further decrease in the peak current was observed due to the attachment of the protein BSA to the immunosensor. A significant decrease in the current signal of the BSA/anti-IgG/G2Fc/GCE was observed upon immobilizing IgG, which may be due to the formation of immunoconjugates that block the tunneling of mass and electron transfer. The current signal was found to be directly related to the amount of IgG captured on the electrode surface. With increasing IgG concentration, a growing amount of immunoconjugates forms, decreasing the peak current. The incubation time and concentration of the antibody were optimized for better analytical performance of the immunosensor. The developed amperometric immunosensor is sensitive to IgG concentrations as low as 2 ng/mL. Tailoring of redox-active dendrimers enhances the electroactivity of the system and enlarges the sensor surface for binding the antibodies. It may be assumed that both electron transfer and diffusion contribute to the signal transformation between the dendrimers and the antibody.
Keywords: ferrocenyl dendrimers, electrochemical immunosensors, immunoglobulin, amperometry
300 Interconnections of Circular Economy, Circularity, and Sustainability: A Systematic Review and Conceptual Framework
Authors: Anteneh Dagnachew Sewenet, Paola Pisano
Abstract:
The concepts of circular economy, circularity, and sustainability are interconnected and promote a more sustainable future. However, previous studies have mainly focused on each concept individually, neglecting the relationships and gaps in the existing literature. This study aims to integrate and link these concepts in order to expand the theoretical and practical methods of scholars and professionals in pursuit of sustainability. The aim of this systematic literature review is to comprehensively analyze and summarize the interconnections between circular economy, circularity, and sustainability. Additionally, it seeks to develop a conceptual framework that can guide practitioners and serve as a basis for future research. The review employed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. A total of 78 articles were analyzed, drawn from the Scopus and Web of Science databases. The analysis involved summarizing and systematizing the conceptualizations of circularity and its relationship with the circular economy and long-term sustainability. The review provided a comprehensive overview of the interconnections between circular economy, circularity, and sustainability. Key themes, theoretical frameworks, empirical findings, and conceptual gaps in the literature were identified. Through a rigorous analysis of scholarly articles, the study highlighted the importance of integrating these concepts for a more sustainable future. This study contributes to the existing literature by integrating and linking the concepts of circular economy, circularity, and sustainability. It expands the theoretical understanding of how these concepts relate to each other and provides a conceptual framework that can guide future research in this field. The findings emphasize the need for a holistic approach to achieving sustainability goals. The data collection for this review involved identifying relevant articles in the Scopus and Web of Science databases. Articles were selected based on predefined inclusion and exclusion criteria. The PRISMA protocol guided the systematic analysis of the selected articles, including summarizing and systematizing their content. This study addressed the question of how circularity is conceptualized and related to both the circular economy and long-term sustainability. It aimed to identify the interconnections between these concepts and bridge the gap in the existing literature. The review provided a comprehensive analysis of the interconnections between the circular economy, circularity, and sustainability. It presented a conceptual framework that can guide practitioners in implementing circular economy strategies and serve as a basis for future research. By integrating these concepts, scholars and professionals can enhance theoretical and practical methods in pursuit of a more sustainable future. The findings emphasize the importance of taking a holistic approach to achieve sustainability goals and highlight conceptual gaps that can be addressed in future studies.
Keywords: circularity, circular economy, sustainability, innovation
299 Item-Trait Pattern Recognition of Replenished Items in Multidimensional Computerized Adaptive Testing
Authors: Jianan Sun, Ziwen Ye
Abstract:
Multidimensional computerized adaptive testing (MCAT) is a popular research topic in psychometrics. It is important for practitioners to know clearly the item-trait patterns of administered items when a test like MCAT is operated. Item-trait pattern recognition refers to detecting which latent traits in a psychological test are measured by each of the specified items. If the item-trait patterns of the replenished items in the MCAT item pool are well detected, the interpretability of the items can be improved, which in turn helps the abilities of the examinees attending the MCAT to be estimated accurately. This research explores solving the item-trait pattern recognition problem for replenished items in the MCAT item pool from the perspective of statistical variable selection. A popular multidimensional item response theory model, the multidimensional two-parameter logistic model, is assumed to fit the MCAT response data. The proposed method uses the least absolute shrinkage and selection operator (LASSO) to detect the item-trait patterns of replenished items based on the essential information of item responses and ability estimates of examinees collected from a designed MCAT procedure. Several advantages of the proposed method are outlined. First, the proposed method does not strictly depend on the relative order between the replenished items and the selected operational items, so it allows the replenished items to be mixed into the operational items in a reasonable order, for example when considering content constraints or other test requirements. Second, the LASSO used in this research improves the interpretability of the multidimensional replenished items in MCAT. Third, the proposed method exploits the advantages of shrinkage-based variable selection, so it can help to check item quality and the key dimensional features of replenished items, and it saves time and labor in response data collection compared with the traditional factor analysis method. Moreover, the proposed method makes sure the dimensions of replenished items are recognized consistently with the dimensions of operational items in the MCAT item pool. Simulation studies are conducted to investigate the performance of the proposed method under different conditions, varying the dimensionality of the item pool, latent trait correlation, item discrimination, test lengths and item selection criteria in MCAT. Results show that the proposed method can accurately detect the item-trait patterns of the replenished items in two-dimensional and three-dimensional item pools. Selecting enough operational items from an item pool consisting of highly discriminating items by Bayesian A-optimality in MCAT can improve the recognition accuracy of the item-trait patterns of replenished items for the proposed method. The pattern recognition accuracy for conditions with correlated traits is better than that with independent traits, especially for item pools consisting of comparatively low discriminating items. To sum up, the proposed data-driven method based on the LASSO can accurately and efficiently detect the item-trait patterns of replenished items in MCAT.
Keywords: item-trait pattern recognition, least absolute shrinkage and selection operator, multidimensional computerized adaptive testing, variable selection
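A minimal sketch of the variable-selection idea described above: for one replenished item, an L1-penalized (LASSO) logistic regression of the item responses on the examinees' ability estimates shrinks the slopes of unmeasured dimensions to zero, and the surviving non-zero slopes give the item-trait pattern. The data are simulated placeholders and the M2PL estimation details are simplified relative to the study.

```python
# LASSO-style detection of which latent traits a replenished item measures.
# Simulated placeholder data: theta_hat are ability estimates from the MCAT,
# responses are 0/1 answers to one replenished item that truly loads on trait 1 only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_examinees, n_traits = 1000, 3
theta_hat = rng.normal(size=(n_examinees, n_traits))       # ability estimates
true_a = np.array([1.2, 0.0, 0.0])                         # item truly loads on trait 1
true_d = -0.3
p = 1.0 / (1.0 + np.exp(-(theta_hat @ true_a + true_d)))   # M2PL response probabilities
responses = rng.binomial(1, p)

# L1 penalty (LASSO) shrinks slopes on unmeasured traits toward exactly zero
lasso_2pl = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso_2pl.fit(theta_hat, responses)

pattern = (np.abs(lasso_2pl.coef_[0]) > 1e-6).astype(int)
print("Estimated slopes:", np.round(lasso_2pl.coef_[0], 3))
print("Detected item-trait pattern:", pattern)              # typically [1 0 0]
```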
298 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients
Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho
Abstract:
Multiple sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of detail it provides, is the gold-standard exam for the diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, nearly 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for future analysis of the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to extract important information manually. This manual analysis is prone to errors and is time-consuming due to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation have been used extensively to assist doctors in quantitative analyses for disease diagnosis and monitoring. Thus, the purpose of this work was to evaluate brain volume in MRI of MS patients. We used MRI scans with 30 slices from five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and brain volume quantification. The first image processing step was brain extraction by skull stripping from the original image. In the skull stripper for brain MRI images, the algorithm registers a grayscale atlas image to the grayscale patient image. The associated brain mask is propagated using the registration transformation. This mask is then eroded and used for a refined brain extraction based on level sets (edge of the brain-skull border with dedicated expansion, curvature, and advection terms). In the second step, brain volume quantification was performed by counting the voxels belonging to the segmentation mask and converting to cc. We observed an average brain volume of 1469.5 cc. We conclude that the automatic method applied in this work can be used for brain extraction and brain volume quantification in MRI. The development and use of computer programs can help health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and brain lesion quantification, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper
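A minimal sketch of the final quantification step described above: counting the voxels of the binary brain mask and converting to cubic centimetres using the voxel dimensions in the image header. The file name and the use of nibabel for NIfTI I/O are illustrative assumptions.

```python
# Brain volume in cc from a binary brain mask produced by skull stripping.
# The file name is a placeholder; nibabel is assumed for NIfTI I/O.
import nibabel as nib
import numpy as np

mask_img = nib.load("patient01_brain_mask.nii.gz")      # hypothetical mask file
mask = mask_img.get_fdata() > 0                         # binary brain mask

dx, dy, dz = mask_img.header.get_zooms()[:3]            # voxel size in mm
voxel_volume_mm3 = dx * dy * dz

brain_volume_cc = mask.sum() * voxel_volume_mm3 / 1000.0  # 1 cc = 1000 mm^3
print(f"Brain volume ≈ {brain_volume_cc:.1f} cc")
```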
Procedia PDF Downloads 146297 Planning Railway Assets Renewal with a Multiobjective Approach
Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida
Abstract:
Transportation infrastructure systems are fundamental to modern society and the economy. However, they need modernization, maintenance, and reinforcement interventions, which require large investments. In many countries, accumulated intervention delays arise from aging and intense use and are magnified by past financial constraints. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructures (e.g., railways, roads). This problem requires considering financial aspects as well as operational constraints under a multidimensional framework. The present research introduces a linear programming multiobjective model for managing railway infrastructure asset renewal. The model aims at minimizing three objectives: (i) yearly investment peak, by evenly spreading investment throughout multiple years; (ii) total cost, which includes extra maintenance costs incurred from renewal backlogs; (iii) priority delays related to work start postponements on the higher-priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by having railway line sections under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of the priority works, was the motivation for pursuing the research now presented. The methodology, inspired by a real case study and tested with real data, reflects aspects of the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services. The model cannot, therefore, allow an accumulation of works on the same line, which could cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed over 5 years, belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest in the world), so considerably larger problems are not expected to appear in real life; an average backlog of 25 years and a project horizon of ten years were considered. Despite the very large increase in the number of decision variables (200 times as large), the computational time did not increase very significantly. It is thus expected that virtually any real-life problem can be treated on a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in the maximum yearly investment (i.e., degradation of objective i), solutions improve considerably in the remaining two objectives.Keywords: transport infrastructure, asset renewal, railway maintenance, multiobjective modeling
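A heavily simplified, weighted-sum sketch of this kind of renewal-scheduling model is shown below; it is not the authors' formulation, and all section data, weights, and variable names are assumptions for illustration.

```python
# Simplified sketch of a renewal-scheduling model in the spirit of the paper:
# binary x[s, y] = 1 if section s is renewed in year y. A weighted sum of
# (i) the yearly investment peak, (ii) extra-maintenance (backlog) cost of
# waiting, and (iii) priority-weighted start delays is minimized.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

years = range(5)
sections = {          # hypothetical: renewal cost, yearly backlog cost, priority weight
    "S1": {"cost": 10.0, "backlog": 1.5, "priority": 3},
    "S2": {"cost": 6.0,  "backlog": 0.8, "priority": 1},
    "S3": {"cost": 8.0,  "backlog": 1.2, "priority": 2},
}

prob = LpProblem("railway_renewal", LpMinimize)
x = {(s, y): LpVariable(f"x_{s}_{y}", cat=LpBinary) for s in sections for y in years}
peak = LpVariable("investment_peak", lowBound=0)

# Each section is renewed exactly once within the planning horizon.
for s in sections:
    prob += lpSum(x[s, y] for y in years) == 1

# The peak variable bounds the investment of every year (objective i).
for y in years:
    prob += lpSum(sections[s]["cost"] * x[s, y] for s in sections) <= peak

# Weighted-sum objective; the weights w1..w3 are arbitrary placeholders.
w1, w2, w3 = 1.0, 0.5, 0.2
backlog_cost = lpSum(sections[s]["backlog"] * y * x[s, y] for s in sections for y in years)
priority_delay = lpSum(sections[s]["priority"] * y * x[s, y] for s in sections for y in years)
prob += w1 * peak + w2 * backlog_cost + w3 * priority_delay

prob.solve(PULP_CBC_CMD(msg=False))
schedule = {s: next(y for y in years if x[s, y].value() > 0.5) for s in sections}
print("Start year per section:", schedule, "| peak investment:", peak.value())
```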
Procedia PDF Downloads 145296 Flood Vulnerability Zoning for Blue Nile Basin Using Geospatial Techniques
Authors: Melese Wondatir
Abstract:
Flooding ranks among the most destructive natural disasters, impacting millions of individuals globally and resulting in substantial economic, social, and environmental repercussions. This study's objective was to create a comprehensive model that assesses the Nile River basin's susceptibility to flood damage and improves existing flood risk management strategies. Authorities responsible for enacting policies and implementing measures may benefit from this research by acquiring essential information about flooding, including its scope and the susceptible areas. The identification of severe flood damage locations and efficient mitigation techniques was made possible by the use of geospatial data. Slope, elevation, distance from the river, drainage density, topographic wetness index, rainfall intensity, distance from roads, NDVI, soil type, and land use type were all used throughout the study to determine the vulnerability to flood damage. The Analytic Hierarchy Process (AHP) and geospatial approaches were used to rank the elements according to their significance in predicting flood damage risk. The analysis finds that the most important parameters determining the region's vulnerability are distance from the river, topographic wetness index, rainfall, and elevation, respectively. The consistency ratio (CR) value obtained in this case is 0.000866 (<0.1), which signifies the acceptance of the derived weights. Furthermore, 10.84 m², 83331.14 m², 476987.15 m², 24247.29 m², and 15.83 m² of the region show varying degrees of vulnerability to flooding: very low, low, medium, high, and very high, respectively. Due to their close proximity to the river, the northwestern regions of the Nile River basin, especially those close to Sudanese cities such as Khartoum, are more vulnerable to flood damage, according to the research findings. Furthermore, the area under the ROC curve (AUC) demonstrates that the categorized vulnerability map achieves an accuracy rate of 91.0% based on 117 sample points. By putting into practice strategies that account for the topographic wetness index, rainfall patterns, elevation fluctuations, and distance from the river, vulnerable settlements in the area can be protected, and the impact of future flood occurrences can be greatly reduced. Furthermore, the research findings highlight the urgent requirement for infrastructure development and effective flood management strategies in the northern and western regions of the Nile River basin, particularly in proximity to major towns such as Khartoum. Overall, the study recommends prioritizing high-risk locations and developing a complete flood risk management plan based on the vulnerability map.Keywords: analytic hierarchy process, Blue Nile Basin, geospatial techniques, flood vulnerability, multi-criteria decision making
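The AHP weighting and consistency check mentioned above can be illustrated with a small sketch; the 4x4 pairwise comparison matrix below is hypothetical and does not reproduce the study's actual judgments or its CR value.

```python
# Minimal sketch of AHP weighting: derive criterion weights from a pairwise
# comparison matrix via the principal eigenvector and check consistency with
# CR = CI / RI (accepted when CR < 0.1). The matrix is illustrative only.
import numpy as np

A = np.array([             # e.g., distance to river, TWI, rainfall, elevation
    [1,   2,   3,   4  ],
    [1/2, 1,   2,   3  ],
    [1/3, 1/2, 1,   2  ],
    [1/4, 1/3, 1/2, 1  ],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue index
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                         # normalized criterion weights

n = A.shape[0]
lambda_max = eigvals[k].real
CI = (lambda_max - n) / (n - 1)                  # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]              # Saaty's random index
CR = CI / RI
print("weights:", np.round(weights, 3), "CR:", round(CR, 4))
```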
Procedia PDF Downloads 70295 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents
Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat
Abstract:
This research reports a systematic top-down approach for designing neoteric hydrophobic solvents, particularly deep eutectic solvents (DESs) and ionic liquids (ILs), as furfural extractants from aqueous media for the application of sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene as the application's conventional benchmark for comparison. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity levels. Additionally, for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecylphosphonium bis(trifluoromethylsulfonyl)imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene's 81.2%, while also possessing the desired properties. These solvents were then characterized thoroughly in terms of their physical, thermal, and critical properties, as well as their cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions and exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15–100 °C), pH levels (1–13), and furfural concentrations (0.1–2.0 wt%) with a remarkable equilibrium time of only 2 minutes, and, most notably, demonstrated high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated over multiple extraction-regeneration cycles, with limited leaching into the aqueous phase (≈0.1%). Moreover, the extraction performance of the solvents was modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by their low absolute average relative deviations, with values of 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including their high efficiency in both extraction and regeneration processes and their stability and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components like thymol and decanoic acid. The exceptional efficacy of the newly developed neoteric solvents marks a notable advance, providing a green and sustainable alternative for furfural production from biowaste.Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents
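The two metrics quoted above, extraction efficiency and the absolute average relative deviation (AARD) used to score the MNLR/ANN models, can be computed as in the following sketch; all numerical values are placeholders, not the study's measurements.

```python
# Minimal sketch: extraction efficiency from furfural concentrations and the
# absolute average relative deviation (AARD) used to score model predictions.
import numpy as np

def extraction_efficiency(c_feed, c_raffinate):
    """Percent furfural removed from the aqueous phase."""
    return 100.0 * (c_feed - c_raffinate) / c_feed

def aard(measured, predicted):
    """Absolute average relative deviation, in percent."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    return 100.0 * np.mean(np.abs(predicted - measured) / measured)

efficiency_exp = np.array([94.1, 92.8, 95.0, 93.5])      # hypothetical measurements
efficiency_model = np.array([93.2, 93.5, 94.1, 94.0])    # hypothetical model output
print("E% example:", round(extraction_efficiency(1.0, 0.059), 1))
print("AARD% of model:", round(aard(efficiency_exp, efficiency_model), 2))
```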
Procedia PDF Downloads 70294 The Impact of Physical Exercise on Gestational Diabetes and Maternal Weight Management: A Meta-Analysis
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Physiological changes during pregnancy, such as alterations in the circulatory, respiratory, and musculoskeletal systems, can negatively impact daily physical activity. This reduced activity is often associated with an increased risk of adverse maternal health outcomes, particularly gestational diabetes mellitus (GDM) and excessive weight gain. This meta-analysis aims to evaluate the effectiveness of structured physical exercise interventions during pregnancy in reducing the risk of GDM and managing maternal weight gain. A comprehensive search was conducted across six major databases: PubMed, Cochrane Library, EMBASE, Web of Science, ScienceDirect, and ClinicalTrials.gov, covering the period from database inception until 2023. Randomized controlled trials (RCTs) that explored the effects of physical exercise programs on pregnant women with low physical activity levels were included. Search records were managed using EndNote, and the meta-analysis was performed using RevMan (Review Manager). RCTs involving healthy pregnant women with low levels of physical activity or sedentary lifestyles were selected. These RCTs had to incorporate structured exercise programs during pregnancy and report outcomes related to GDM and maternal weight gain. From an initial pool of 5,112 articles, 65 RCTs (involving 11,400 pregnant women) met the inclusion criteria. Data extraction was performed, followed by a quality assessment of the selected studies using the Cochrane Risk of Bias tool. The meta-analysis was conducted using RevMan software, where pooled relative risks (RR) and weighted mean differences (WMD) were calculated using a random-effects model to address heterogeneity across studies. Sensitivity analyses, subgroup analyses (based on factors such as exercise intensity, duration, and pregnancy stage), and publication bias assessments were also conducted. Structured physical exercise during pregnancy led to a significant reduction in the risk of developing GDM (RR = 0.68; P < 0.001), particularly when the exercise program was performed throughout the pregnancy (RR = 0.62; P = 0.035). In addition, maternal weight gain was significantly reduced (WMD = −1.18 kg; 95% CI −1.54 to −0.85; P < 0.001). No significant adverse effects were reported for either the mother or the neonate, confirming that exercise interventions are safe for both. This meta-analysis highlights the positive impact of regular moderate physical activity during pregnancy in reducing the risk of GDM and managing maternal weight gain. These findings suggest that physical exercise should be encouraged as a routine part of prenatal care. However, more research is required to refine exercise recommendations and determine the most effective interventions based on individual risk factors and pregnancy stages.Keywords: gestational diabetes, maternal weight management, meta-analysis, randomized controlled trials
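The random-effects pooling of relative risks described above can be sketched as follows, assuming the DerSimonian-Laird estimator (a common choice in RevMan); the per-study RRs and confidence intervals are invented for illustration, not the review's data.

```python
# Minimal sketch of random-effects pooling of study relative risks (RR) on the
# log scale using the DerSimonian-Laird between-study variance estimator.
import numpy as np

rr = np.array([0.55, 0.72, 0.81, 0.60, 0.70])          # hypothetical study RRs
ci_low = np.array([0.35, 0.50, 0.60, 0.40, 0.52])
ci_high = np.array([0.86, 1.04, 1.09, 0.90, 0.94])

y = np.log(rr)                                          # log relative risks
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)    # SE from 95% CI width
w = 1 / se**2                                           # fixed-effect weights

# Cochran's Q and between-study variance tau^2 (DerSimonian-Laird).
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (se**2 + tau2)                               # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"Pooled RR = {np.exp(y_re):.2f} "
      f"(95% CI {np.exp(y_re - 1.96*se_re):.2f}-{np.exp(y_re + 1.96*se_re):.2f})")
```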
Procedia PDF Downloads 11293 Assumption of Cognitive Goals in Science Learning
Authors: Mihail Calalb
Abstract:
The aim of this research is to identify ways of achieving sustainable conceptual understanding within science lessons. For this purpose, a set of teaching and learning strategies, part of the theory of visible teaching and learning (VTL), is studied. As a result, a new didactic approach named "learning by being" is proposed, and its correlation with the educational paradigms currently existing in the science teaching domain is analysed. In the context of VTL, the author describes the main strategies of "learning by being", such as guided self-scaffolding, structuring of information, and recurrent use of previous knowledge or help seeking. Due to the synergy effect of these learning strategies applied simultaneously in class, the impact factor of learning by being on the cognitive achievement of students is up to 93% (the benchmark level is 40% when an experienced teacher consistently applies the same conventional strategy over two academic years). The key idea in "learning by being" is the assumption by the student of cognitive goals. From this perspective, the article discusses the role of the student's personal learning effort within several teaching strategies employed in VTL. The research results emphasize that three mandatory student-related moments are present in each constructivist teaching approach: a) students' personal learning effort, b) student-teacher mutual feedback, and c) metacognition. Thus, a successful educational strategy will aim to involve students in the class process as deeply as possible in order to make them not only know the learning objectives but also assume them. In this way, we come to the ownership of cognitive goals, or students' deep intrinsic motivation. A series of approaches are inherent to the students' ownership of cognitive goals: independent research (with an impact factor on cognitive achievement equal to 83%, according to the results of VTL); knowledge of success criteria (impact factor: 113%); ability to reveal similarities and patterns (impact factor: 132%). Although it is generally accepted that the school is a public service, it does not belong to the entertainment industry, and in most cases education declared as student-centered actually hides the central role of the teacher. Even if there is a proliferation of constructivist concepts, mainly at the level of science education research, we have to underline that conventional or frontal teaching will never disappear. Research results show that no modern method can replace an experienced teacher with strong pedagogical content knowledge. Such a teacher will inspire and motivate his/her students to love and learn physics. The teacher is precisely the condensation point for an efficient didactic strategy, be it constructivist or conventional. In this way, we could speak about "hybridized teaching", where both the student and the teacher have their share of responsibility. In conclusion, the core of the "learning by being" approach is guided learning effort, which corresponds to the notion of a teacher-student harmonic oscillator, in which both guidance from the teacher and the student's effort are equally important.Keywords: conceptual understanding, learning by being, ownership of cognitive goals, science learning
Procedia PDF Downloads 167292 Stochastic Nuisance Flood Risk for Coastal Areas
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts' experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures, without accounting for torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas of the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when a torrential rainstorm combined with high tide and sea level rise temporarily exceeds a given threshold. In South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rain falling at a rate greater than 0.3 inches per hour or three inches in a single day. Data from the Florida Climate Center, 1970 to 2020, show 371 events with more than 3 inches of rain in a day across 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method developed uses the Failure Mode and Effect Analysis (FMEA) method from the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related to the U.S. Census Bureau's annual population estimates. Data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service's National Resources Inventory (NRI) and locally by the South Florida Water Management District (SFWMD) track development and land use/land cover changes over time. The intent is to include temporal trends in population density growth and the impact on land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion provides awareness to local municipalities on their flood-risk assessment and gives insight into flood management actions and watershed development.Keywords: flood risk, nuisance flooding, urban flooding, FMEA
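As a loose illustration of an FMEA-style scoring of this kind (not the authors' actual criteria), the risk for a neighborhood could be treated as the product of a consequence score (CoNF) and a probability score (PoNF); the 1-10 scales, scoring functions, and inputs below are all assumptions.

```python
# Minimal sketch of an FMEA-style nuisance-flood risk score: risk is treated
# here as CoNF score x PoNF score on assumed 1-10 scales.

def conf_score(mitigation_level):
    """Higher community mitigation (0-9) -> lower consequence score (1-10)."""
    return max(1, 10 - mitigation_level)

def ponf_score(torrential_days_per_year, tide_exceedance_days):
    """Map yearly counts of torrential-rain days (>3 in/day) and tide-threshold
    exceedances (>1.7 ft above MHHW) onto a 1-10 probability score."""
    freq = torrential_days_per_year + tide_exceedance_days
    return min(10, 1 + freq // 3)

def nuisance_flood_risk(mitigation_level, torrential_days, tide_days):
    return conf_score(mitigation_level) * ponf_score(torrential_days, tide_days)

# Example: modest mitigation, ~6 torrential-rain days and ~9 high-tide days/year.
print("Risk score:", nuisance_flood_risk(mitigation_level=3,
                                         torrential_days=6, tide_days=9))
```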
Procedia PDF Downloads 98291 Simplified Modeling of Post-Soil Interaction for Roadside Safety Barriers
Authors: Charly Julien Nyobe, Eric Jacquelin, Denis Brizard, Alexy Mercier
Abstract:
The performance of roadside safety barriers depends largely on the dynamic interactions between post and soil. These interactions play a key role in the response of barriers to crash testing. In the literature, soil-post interaction is modeled in crash test simulations using three approaches. Many researchers have initially used the finite element approach, in which the post is embedded in a continuum soil modelled by solid finite elements. This method represents a more comprehensive and detailed approach, employing a mesh-based continuum to model the soil's behavior and its interaction with the post. Although this method takes all soil properties into account, it is nevertheless very costly in terms of simulation time. In the second approach, all the points of the post located at a predefined depth are fixed. Although this approach reduces CPU computing time, it overestimates the soil-post stiffness. The third approach involves modeling the post as a beam supported by a set of nonlinear springs in the horizontal directions. For support in the vertical direction, the posts are constrained at a node at ground level. This approach is less costly, but the literature does not provide a simple procedure to determine the constitutive law of the springs. The aim of this study is to propose a simple and low-cost procedure to obtain the constitutive law of the nonlinear springs that model the soil-post interaction. To achieve this objective, we first present a procedure to obtain the constitutive law of the nonlinear springs from the simulation of a soil compression test. The test consists of compressing the soil contained in a tank with a rigid solid, up to a vertical displacement of 200 mm. The resultant force exerted by the ground on the rigid solid and its vertical displacement are extracted, and a force-displacement curve is determined. The proposed procedure for replacing the soil with springs must be tested against a reference model. The reference model consists of a wooden post embedded in the ground and impacted by an impactor. Two simplified models with springs are studied. In the first model, called the Kh-Kv model, the springs are attached to the post in the horizontal and vertical directions. The second, the Kh model, is the one described in the literature. The two simplified models are compared with the reference model according to several criteria: the displacement of a node located at the top of the post in the vertical and horizontal directions, the displacement of the post's center of rotation, and the impactor velocity. The results given by both simplified models are very close to the reference model results. It is noticeable that the Kh-Kv model is slightly better than the Kh model. Further, the former model is more interesting than the latter as it involves fewer arbitrary conditions. The simplified models also reduce the simulation time by a factor of 4. The Kh-Kv model can therefore be used as a reliable tool to represent the soil-post interaction in future research and development of road safety barriers.Keywords: crash tests, nonlinear springs, soil-post interaction modeling, constitutive law
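One simple way to turn the extracted force-displacement curve into a nonlinear spring law is a piecewise-linear lookup with a local tangent stiffness, sketched below under the assumption of purely illustrative sample data.

```python
# Minimal sketch: convert the force-displacement curve extracted from the soil
# compression simulation into a nonlinear spring constitutive law, here as a
# piecewise-linear lookup plus a tangent-stiffness estimate.
import numpy as np

# Hypothetical curve from the compression test (displacement in mm, force in kN).
displacement = np.array([0.0, 20.0, 50.0, 100.0, 150.0, 200.0])
force = np.array([0.0, 4.0, 12.0, 30.0, 55.0, 85.0])

def spring_force(u_mm):
    """Nonlinear spring law: interpolated reaction force at displacement u."""
    return np.interp(u_mm, displacement, force)

def tangent_stiffness(u_mm, du=1.0):
    """Local stiffness dF/du (kN/mm) around displacement u, by central difference."""
    return (spring_force(u_mm + du) - spring_force(u_mm - du)) / (2 * du)

for u in (30.0, 120.0):
    print(f"u = {u:5.1f} mm  F = {spring_force(u):6.2f} kN  "
          f"k_t = {tangent_stiffness(u):5.3f} kN/mm")
```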
Procedia PDF Downloads 30290 Measuring Biobased Content of Building Materials Using Carbon-14 Testing
Authors: Haley Gershon
Abstract:
The transition from using fossil fuel-based building material to formulating eco-friendly and biobased building materials plays a key role in sustainable building. The growing demand on a global level for biobased materials in the building and construction industries heightens the importance of carbon-14 testing, an analytical method used to determine the percentage of biobased content that comprises a material’s ingredients. This presentation will focus on the use of carbon-14 analysis within the building materials sector. Carbon-14, also known as radiocarbon, is a weakly radioactive isotope present in all living organisms. Any fossil material older than 50,000 years will not contain any carbon-14 content. The radiocarbon method is thus used to determine the amount of carbon-14 content present in a given sample. Carbon-14 testing is performed according to ASTM D6866, a standard test method developed specifically for biobased content determination of material in solid, liquid, or gaseous form, which requires radiocarbon dating. Samples are combusted and converted into a solid graphite form and then pressed onto a metal disc and mounted onto a wheel of an accelerator mass spectrometer (AMS) machine for the analysis. The AMS instrument is used in order to count the amount of carbon-14 present. By submitting samples for carbon-14 analysis, manufacturers of building materials can confirm the biobased content of ingredients used. Biobased testing through carbon-14 analysis reports results as percent biobased content, indicating the percentage of ingredients coming from biomass sourced carbon versus fossil carbon. The analysis is performed according to standardized methods such as ASTM D6866, ISO 16620, and EN 16640. Products 100% sourced from plants, animals, or microbiological material are therefore 100% biobased, while products sourced only from fossil fuel material are 0% biobased. Any result in between 0% and 100% biobased indicates that there is a mixture of both biomass-derived and fossil fuel-derived sources. Furthermore, biobased testing for building materials allows manufacturers to submit eligible material for certification and eco-label programs such as the United States Department of Agriculture (USDA) BioPreferred Program. This program includes a voluntary labeling initiative for biobased products, in which companies may apply to receive and display the USDA Certified Biobased Product label, stating third-party verification and displaying a product’s percentage of biobased content. The USDA program includes a specific category for Building Materials. In order to qualify for the biobased certification under this product category, examples of product criteria that must be met include minimum 62% biobased content for wall coverings, minimum 25% biobased content for lumber, and a minimum 91% biobased content for floor coverings (non-carpet). As a result, consumers can easily identify plant-based products in the marketplace.Keywords: carbon-14 testing, biobased, biobased content, radiocarbon dating, accelerator mass spectrometry, AMS, materials
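A small sketch of the reporting logic described above follows: percent biobased content expressed as the share of biomass-derived carbon in total organic carbon, compared against the USDA BioPreferred minimums quoted for the building-materials category. The sample carbon values are illustrative only, and the calculation omits the method-specific pMC corrections of ASTM D6866.

```python
# Minimal sketch: percent biobased content as the share of biomass-derived
# carbon in total organic carbon, checked against the USDA minimums quoted above.
biobased_minimums = {          # product category -> minimum % biobased content
    "wall coverings": 62,
    "lumber": 25,
    "floor coverings (non-carpet)": 91,
}

def percent_biobased(biomass_carbon, fossil_carbon):
    """Percent of total organic carbon coming from biomass sources."""
    return 100.0 * biomass_carbon / (biomass_carbon + fossil_carbon)

def meets_usda_minimum(category, biomass_carbon, fossil_carbon):
    content = percent_biobased(biomass_carbon, fossil_carbon)
    return content, content >= biobased_minimums[category]

content, eligible = meets_usda_minimum("wall coverings",
                                       biomass_carbon=70, fossil_carbon=30)
print(f"Wall covering sample: {content:.0f}% biobased, eligible: {eligible}")
```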
Procedia PDF Downloads 158289 Debriefing Practices and Models: An Integrative Review
Authors: Judson P. LaGrone
Abstract:
Simulation-based education was once a luxury component of nursing curricula but now serves as a vital element of an individual's learning experience. A debriefing occurs after the simulation scenario or clinical experience is completed, allowing the instructor(s) or trained professional(s) to act as a debriefer and guide a reflection with the purpose of acknowledging, assessing, and synthesizing the thought process, decision-making process, and actions/behaviors performed during the scenario or clinical experience. Debriefing is a vital component of the simulation process and educational experience, allowing the learner(s) to progressively build upon past experiences and current scenarios within a safe and welcoming environment with a guided dialog to enhance future practice. The aim of this integrative review was to assess current practices of debriefing models in simulation-based education for health care professionals and students. The following databases were utilized for the search: CINAHL Plus, Cochrane Database of Systematic Reviews, EBSCO (ERIC), PsycINFO (Ovid), and Google Scholar. The advanced search options were useful to narrow down the search for articles (full text, Boolean operators, English language, peer-reviewed, published in the past five years). Key terms included debrief, debriefing, debriefing model, debriefing intervention, psychological debriefing, simulation, simulation-based education, simulation pedagogy, health care professional, nursing student, and learning process. Included studies focus on debriefing after clinical scenarios involving nursing students, medical students, and interprofessional teams, conducted between 2015 and 2020. Common themes were identified after the analysis of articles matching the search criteria. Several debriefing models are addressed in the literature, with similar effectiveness for participants in clinical simulation-based pedagogy. Themes identified included (a) the importance of debriefing in simulation-based pedagogy, (b) the environment in which debriefing takes place, (c) the individuals who should conduct the debrief, (d) the length of the debrief, and (e) the methodology of the debrief. Debriefing models supported by theoretical frameworks and facilitated by trained staff are vital for a successful debriefing experience. Models ranged from self-debriefing, facilitator-led debriefing, video-assisted debriefing, and rapid cycle deliberate practice to reflective debriefing. A recurring finding centered on the emphasis on continued research into systematic tool development and analysis of the validity and effectiveness of current debriefing practices. There is a lack of consistency in debriefing models across nursing curricula, with an increasing proportion of faculty ill-prepared to facilitate the debriefing phase of the simulation.Keywords: debriefing model, debriefing intervention, health care professional, simulation-based education
Procedia PDF Downloads 142288 Effect of Green Synthesized Metal Nanoparticles on Gene Expression in an In-Vitro Model of Non-alcoholic Steatohepatitis
Authors: Nendouvhada Livhuwani Portia, Nicole Sibuyi, Kwazikwakhe Gabuza, Adewale Fadaka
Abstract:
Metabolic dysfunction-associated steatotic liver disease (MASLD) is a chronic condition characterized by excessive fat accumulation in the liver, distinct from conditions caused by alcohol, viral hepatitis, or medications. MASLD is often linked with metabolic syndrome, including obesity, diabetes, hyperlipidemia, and hypertriglyceridemia. This disease can progress to metabolic dysfunction-associated steatohepatitis (MASH), marked by liver inflammation and scarring, potentially leading to cirrhosis. However, only 43-44% of patients with steatosis develop MASH, and 7-30% of those with MASH progress to cirrhosis. The exact mechanisms underlying MASLD and its progression remain unclear, and there are currently no specific therapeutic strategies for MASLD/MASH. While anti-obesity and anti-diabetic medications can slow progression, they do not fully treat or reverse the disease. As an alternative, green-synthesized metal nanoparticles (MNPs) are emerging as potential treatments for liver diseases due to their anti-diabetic, anti-inflammatory, and anti-obesity properties with minimal side effects. MNPs such as gold nanoparticles (AuNPs) and silver nanoparticles (AgNPs) have been shown to improve metabolic processes by lowering blood glucose, body fat, and inflammation. This study aimed to explore the effects of green-synthesized MNPs on gene expression in an in vitro model of MASH using C3A/HepG2 liver cells. The MASH model was created by exposing these cells to free fatty acids (FFAs) followed by lipopolysaccharide (LPS) to induce inflammation. Cell viability was assessed with the Water-Soluble Tetrazolium (WST)-1 assay, and lipid accumulation was measured using the Oil Red O (ORO) assay. Additionally, mitochondrial membrane potential was assessed by the tetramethylrhodamine methyl ester (TMRE) assay, and inflammation was measured with an Enzyme-Linked Immunosorbent Assay (ELISA). The study synthesized AuNPs from Carpobrotus edulis fruit (CeF) and avocado seed (AvoSE) and AgNPs from Salvia africana-lutea (SAL) using optimized conditions. The MNPs were characterized by UV-Vis spectrophotometry and Dynamic Light Scattering (DLS). The nanoparticles were tested at various concentrations for their impact on the MASH model induced in C3A/HepG2 cells. Among the MNPs tested, AvoSE-AuNPs showed the most promise. They reduced cell proliferation and intracellular lipid content more effectively than CeF-AuNPs and SAL-AgNPs. Molecular analysis using real-time polymerase chain reaction revealed that AvoSE-AuNPs could potentially reverse MASH effects by reducing the expression of key pro-inflammatory and metabolic genes, including tumor necrosis factor-alpha (TNF-α), Fas cell surface death receptor (FAS), peroxisome proliferator-activated receptor (PPAR)-α, PPAR-γ, and sterol regulatory element-binding protein 1 (SREBP-1). Further research is needed to confirm the molecular mechanisms behind the effects of these MNPs and to identify the specific phytochemicals responsible for their synthesis and bioactivities.Keywords: gold nanoparticles, green nanotechnology, metal nanoparticles, obesity
Procedia PDF Downloads 25287 Laparoscopic Resection Shows Comparable Outcomes to Open Thoracotomy for Thoracoabdominal Neuroblastomas: A Meta-Analysis and Systematic Review
Authors: Peter J. Fusco, Dave M. Mathew, Chris Mathew, Kenneth H. Levy, Kathryn S. Varghese, Stephanie Salazar-Restrepo, Serena M. Mathew, Sofia Khaja, Eamon Vega, Mia Polizzi, Alyssa Mullane, Adham Ahmed
Abstract:
Background: Laparoscopic (LS) removal of neuroblastomas in children has been reported to offer favorable outcomes compared to the conventional open thoracotomy (OT) procedure. Critical perioperative measures such as blood loss, operative time, length of stay, and time to postoperative chemotherapy have all favored laparoscopic resection over its more invasive counterpart. Herein, a pairwise meta-analysis was performed comparing perioperative outcomes between LS and OT in thoracoabdominal neuroblastoma cases. Methods: A comprehensive literature search was performed on the PubMed, Ovid EMBASE, and Scopus databases to identify studies comparing the outcomes of pediatric patients with thoracoabdominal neuroblastomas undergoing resection via OT or LS. After deduplication, 4,227 studies were identified and subjected to initial title screening with exclusion and inclusion criteria to ensure relevance. When studies contained overlapping cohorts, only the larger series were included. Primary outcomes included estimated blood loss (EBL), hospital length of stay (LOS), and mortality, while secondary outcomes were tumor recurrence, post-operative complications, and operation length. The "meta" and "metafor" packages in R, version 4.0.2, were used to pool risk ratios (RR) or standardized mean differences (SMD), together with their 95% confidence intervals, in a random-effects model via the Mantel-Haenszel method. Heterogeneity between studies was assessed using the I² test, while publication bias was assessed via funnel plot. Results: The pooled analysis included 209 patients from 5 studies (141 OT, 68 LS). Of the included studies, 2 originated from the United States, 1 from Toronto, 1 from China, and 1 from a Japanese center. Mean age across study cohorts ranged from 2.4 to 5.3 years, with female patients comprising between 30.8% and 50% of the study populations. No statistically significant difference was found between the two groups for LOS (SMD -1.02; p=0.083), mortality (RR 0.30; p=0.251), recurrence (RR 0.31; p=0.162), post-operative complications (RR 0.73; p=0.732), or operation length (SMD -0.07; p=0.648). Of note, LS appeared to be protective with respect to EBL, although this did not reach statistical significance (SMD -0.4174; p=0.051). Conclusion: Despite promising literature assessing LS removal of pediatric neuroblastomas, the results showed it was not superior to OT for any of the explored perioperative outcomes. Given the limited comparative data on the subject, randomized trials are necessary to strengthen the conclusions reached.Keywords: laparoscopy, neuroblastoma, thoracoabdominal, thoracotomy
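The heterogeneity assessment mentioned in the methods (Cochran's Q and the I² statistic) can be sketched as follows; the five studies' effect sizes and standard errors below are invented for illustration, not the analysis data.

```python
# Minimal sketch of the heterogeneity check: Cochran's Q and I^2 for a set of
# study effect sizes (here standardized mean differences) and standard errors.
import numpy as np

smd = np.array([-1.30, -0.80, -1.10, -0.60, -1.25])   # hypothetical per-study SMDs
se = np.array([0.40, 0.35, 0.50, 0.30, 0.45])

w = 1 / se**2                                          # inverse-variance weights
pooled = np.sum(w * smd) / np.sum(w)                   # fixed-effect pooled SMD
Q = np.sum(w * (smd - pooled) ** 2)                    # Cochran's Q
df = len(smd) - 1
I2 = max(0.0, (Q - df) / Q) * 100                      # % of variation due to
                                                       # between-study heterogeneity
print(f"Pooled SMD (fixed) = {pooled:.2f}, Q = {Q:.2f}, I^2 = {I2:.1f}%")
```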
Procedia PDF Downloads 132286 Healthcare Associated Infections in an Intensive Care Unit in Tunisia: Incidence and Risk Factors
Authors: Nabiha Bouafia, Asma Ben Cheikh, Asma Ammar, Olfa Ezzi, Mohamed Mahjoub, Khaoula Meddeb, Imed Chouchene, Hamadi Boussarsar, Mansour Njah
Abstract:
Background: Hospital-acquired infections (HAI) cause significant morbidity and mortality and increase length of stay and hospital costs, especially in the intensive care unit (ICU), because of the debilitated immune systems of its patients and their exposure to invasive devices. The aims of this study were to determine the rate and the risk factors of HAI in the ICU of a university hospital in Tunisia. Materials/Methods: A prospective study was conducted in the 8-bed adult medical ICU of a University Hospital (Sousse, Tunisia) during 14 months, from September 15th, 2015 to November 15th, 2016. Patients admitted for more than 48h were included. Their surveillance was stopped after discharge from the ICU or death. HAIs were defined according to standard Centers for Disease Control and Prevention criteria. Risk factors were analyzed by conditional stepwise logistic regression. A p-value of < 0.05 was considered significant. Results: During the study, 192 patients were admitted for more than 48 hours. Their mean age was 59.3 ± 18.20 years, and 57.1% were male. Acute respiratory failure was the main reason for admission (72%). The mean SAPS II score calculated at admission was 32.5 ± 14 (range: 6-78). Exposure to mechanical ventilation (MV) and to a central venous catheter was observed in 169 (88%) and 144 (75%) patients, respectively. Seventy-three patients (38.02%) developed 94 HAIs. The incidence density of HAIs was 41.53 per 1000 patient-days. The mortality rate in patients with HAIs was 65.8% (n=48). Regarding the type of infection, Ventilator-Associated Pneumonia (VAP) and Central Venous Catheter-Associated Infections (CVC-AI) were the most frequent, with incidence densities of 14.88/1000 days of MV for VAP and 20.02/1000 CVC-days for CVC-AI. There were 5 Peripheral Venous Catheter-Associated Infections, 2 urinary tract infections, and 21 other HAIs. Gram-negative bacteria were the most common organisms identified in HAIs: multidrug-resistant Acinetobacter baumannii (45%) and Klebsiella pneumoniae (10.96%) were the most frequently isolated. Univariate analysis showed that transfer from another hospital department (p=0.001), intubation (p < 10⁻⁴), tracheostomy (p < 10⁻⁴), age (p=0.028), grade of acute respiratory failure (p=0.01), duration of sedation (p < 10⁻⁴), number of CVCs (p < 10⁻⁴), length of mechanical ventilation (p < 10⁻⁴) and length of stay (p < 10⁻⁴) were associated with a high risk of HAIs in the ICU. Multivariate analysis reveals that the independent risk factors for HAIs are: transfer from another hospital department: OR=13.44, 95% CI [3.9, 44.2], p < 10⁻⁴; duration of sedation: OR=1.18, 95% CI [1.049, 1.325], p=0.006; high number of CVCs: OR=2.78, 95% CI [1.73, 4.487], p < 10⁻⁴; and length of stay in the ICU: OR=1.14, 95% CI [1.066, 1.22], p < 10⁻⁴. Conclusion: Prevention of nosocomial infections in ICUs is a priority for health care systems all around the world. Yet, their control requires an understanding of the epidemiological data collected in these units.Keywords: healthcare associated infections, incidence, intensive care unit, risk factors
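The surveillance metric used above, incidence density per 1000 patient-days or device-days, is a simple ratio; the sketch below uses placeholder counts, not the unit's actual surveillance data.

```python
# Minimal sketch of the surveillance metric: incidence density per 1000
# patient-days or device-days (ventilator-days, catheter-days).

def incidence_density(events, exposure_days, per=1000):
    """Events per `per` days of exposure (patient-days or device-days)."""
    return per * events / exposure_days

# Hypothetical example: 50 HAIs over 1200 patient-days, 12 VAP episodes over
# 800 ventilator-days, 15 CVC-associated infections over 750 catheter-days.
print(f"HAI incidence density: {incidence_density(50, 1200):.2f} /1000 patient-days")
print(f"VAP incidence density: {incidence_density(12, 800):.2f} /1000 MV-days")
print(f"CVC-AI incidence density: {incidence_density(15, 750):.2f} /1000 CVC-days")
```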
Procedia PDF Downloads 369