Search results for: higher education students’ performance
688 Enhancing Air Quality: Investigating Filter Lifespan and Byproducts in Air Purification Solutions
Authors: Freja Rydahl Rasmussen, Naja Villadsen, Stig Koust
Abstract:
Air purifiers have become widely implemented in a range of settings, including households, schools, institutions, and hospitals, as they tackle the pressing issue of indoor air pollution. With their ability to enhance indoor air quality and create healthier environments, air purifiers are particularly vital when ventilation options are limited. These devices incorporate a diverse array of technologies, including HEPA filters, activated carbon filters, UV-C light, photocatalytic oxidation, and ionizers, each designed to combat specific pollutants and improve air quality within enclosed spaces. However, the safety of air purifiers has not been investigated thoroughly, and many questions still arise when applying them. Certain air purification technologies, such as UV-C light or ionization, can unintentionally generate undesirable byproducts that negatively affect indoor air quality and health. It is well established that these technologies can inadvertently generate nanoparticles or convert common gaseous compounds into harmful ones, thus exacerbating air pollution. However, the formation of byproducts can vary across products, necessitating further investigation. There is particular concern about the formation of the carcinogenic substance formaldehyde from common gases like acetone. Many air purifiers use mechanical filtration to remove particles, dust, and pollen from the air. Filters need to be replaced periodically for optimal efficiency, resulting in an additional cost for end-users. Currently, there are no guidelines for filter lifespan, and replacement recommendations rely solely on manufacturers. A market screening revealed that manufacturers' recommended lifespans vary greatly (from 1 month to 10 years), and there is a need for general recommendations to guide consumers. Activated carbon filters are used to adsorb various types of chemicals that can pose health risks or cause unwanted odors. These filters have a certain capacity before becoming saturated. If not replaced in a timely manner, the filter loses adsorption efficiency and the adsorbed substances are likely to be released through off-gassing. The goal of this study is to investigate the lifespan of filters as well as the potentially harmful effects of air purifiers. Understanding the lifespan of filters used in air purifiers and the potential formation of harmful byproducts is essential for ensuring their optimal performance, guiding consumers in their purchasing decisions, and establishing industry standards for safer and more effective air purification solutions. At this time, a selection of air purifiers has been chosen, and test methods have been established. In the following 3 months, the tests will be conducted, and the results will be ready for presentation later.
Keywords: air purifiers, activated carbon filters, byproducts, clean air, indoor air quality
Procedia PDF Downloads 70
687 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. Ground movement of aircraft at an airport, that is, allocating a path for each aircraft to follow in order to reach its destination (e.g., runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide a conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase the accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which considers the extra time that is needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence of the aircraft that take off is optimized and has to be respected. QPPTW involves searching for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time that is needed to reach the target, so that delays caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing the departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
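To make the routing idea concrete, the following is a minimal sketch of an A-star search over a taxiway graph whose edges are only available inside conflict-free time windows. It is illustrative only: the graph representation, function names, and the simple per-node dominance rule are assumptions and do not reproduce the authors' implementation, which keeps richer labels and the pushback, stand-holding, and runway-sequencing constraints described above.

```python
import heapq

def a_star_qpptw(graph, free_windows, source, target, t_start, est_time):
    """Route one aircraft on a taxiway graph while honouring time windows.

    graph[u]             -> list of (v, traversal_time) edges
    free_windows[(u, v)] -> sorted list of (earliest, latest) intervals in which
                            edge (u, v) is free of already-routed traffic
    est_time[u]          -> heuristic estimate of remaining travel time to target
    """
    # Priority = elapsed (conflict-aware) time + estimated remaining time.
    open_set = [(t_start + est_time[source], t_start, source, [source])]
    best_arrival = {}

    while open_set:
        f, t, node, path = heapq.heappop(open_set)
        if node == target:
            return t, path                       # quickest conflict-free arrival
        if best_arrival.get(node, float("inf")) <= t:
            continue                             # already reached this node earlier
        best_arrival[node] = t

        for nxt, dt in graph[node]:
            for w_start, w_end in free_windows.get((node, nxt), [(0.0, float("inf"))]):
                depart = max(t, w_start)         # hold (e.g. at the stand) until the edge is free
                if depart + dt <= w_end:         # traversal must fit inside the window
                    arrive = depart + dt
                    heapq.heappush(open_set,
                                   (arrive + est_time[nxt], arrive, nxt, path + [nxt]))
                    break                        # take the earliest feasible window

    return None, None

# Tiny illustration: route from stand "A" to runway "R" on a toy graph.
graph = {"A": [("B", 60)], "B": [("R", 90)], "R": []}
windows = {("A", "B"): [(0, 1e9)], ("B", "R"): [(200, 1e9)]}   # edge B->R blocked until t = 200
est = {"A": 150, "B": 90, "R": 0}
print(a_star_qpptw(graph, windows, "A", "R", t_start=0, est_time=est))
```

The key difference from plain A-star is that the priority combines the elapsed, conflict-aware time with an estimate of the remaining travel time, so the expansion is directed toward the target while waiting imposed by previously routed aircraft is still accounted for.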
Procedia PDF Downloads 227
686 The Application of Raman Spectroscopy in Olive Oil Analysis
Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli
Abstract:
Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of such constituents is generally the result of a complex combination of genetic, agronomical and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in its biochemical composition needs to be investigated. By selecting fruits from four different cultivars similarly grown and harvested, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, is able to discriminate the different cultivars, also as a function of the harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples, according to the cultivar and the maturation stage, was obtained. Moreover, by using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed models to be built, based on partial least squares regression, that were able to predict the relative amount of the main fatty acids and the main carotenoids in EVOO, with high coefficients of determination. Besides genetic factors, climatic parameters, such as light exposure, distance from the sea, temperature, and amount of precipitation, could have a strong influence on EVOO composition in terms of both major and minor compounds. This suggests that the Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed. In particular, a dual approach was used, combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOOs based on their regional geographical origin was obtained. Raman spectra were obtained with a Super Labram spectrometer equipped with an Argon laser (514.5 nm wavelength). Analyses of stable isotope ratios were performed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that RR spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes it possible to assess specific samples' content and allows oils to be classified according to their geographical and varietal origin.
Keywords: authentication, chemometrics, olive oil, raman spectroscopy
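As an illustration of the chemometric step, a partial least squares (PLS) regression that maps Raman spectra to a compositional target can be sketched as follows. The data below are synthetic stand-ins; the number of latent variables, the spectral length, and the target variable are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

# Synthetic stand-in data: 60 EVOO samples x 500 Raman intensities,
# target = relative content of one fatty acid (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                 # baseline-corrected Raman spectra
y = X[:, 120] * 0.6 + X[:, 305] * 0.3 + rng.normal(scale=0.1, size=60)

pls = PLSRegression(n_components=5)            # latent variables would be tuned by cross-validation
y_cv = cross_val_predict(pls, X, y, cv=10)     # cross-validated predictions
print(f"cross-validated R^2 = {r2_score(y, y_cv.ravel()):.2f}")
```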
Procedia PDF Downloads 331
685 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics
Authors: Maria Arechavaleta, Mark Halpin
Abstract:
In the United States, costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision question that drives the total system cost is: how much unserved (or curtailed) energy is acceptable? Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases total cost and provides a benefit which is difficult to quantify accurately. An approach to quantify the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics are in the form of curves, with each point on the curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper. These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (probably a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and which are consistent with their available funds.
Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems
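A minimal sketch of the statistical indices described above, computed from minute-level consumption and production curves passed through a simple battery model, might look like the following. The profile shapes, the battery model, and the full-battery starting condition are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def unserved_energy_stats(consumption_kw, production_kw, battery_kwh, dt_h=1/60):
    """Run a week of minute-level data through a simple battery model and return
    the loss-of-energy probability and the expected unserved energy (kWh)."""
    soc = battery_kwh                          # start with a full battery (illustrative choice)
    unserved = np.zeros_like(consumption_kw)
    for i, (load, pv) in enumerate(zip(consumption_kw, production_kw)):
        net = (pv - load) * dt_h               # kWh surplus (+) or deficit (-) in this minute
        soc = min(battery_kwh, soc + net) if net >= 0 else soc + net
        if soc < 0:                            # battery empty: the remaining deficit is unserved
            unserved[i] = -soc
            soc = 0.0
    loep = np.mean(unserved > 0)               # fraction of minutes with curtailed consumption
    eue = unserved.sum()                       # unserved energy over the recorded week
    return loep, eue

# Illustrative one-week profiles (10,080 minutes); real curves come from site measurements.
minutes = np.arange(7 * 24 * 60)
load = 0.8 + 0.4 * np.sin(2 * np.pi * minutes / 1440)                              # kW
pv = np.clip(3.0 * np.sin(2 * np.pi * (minutes % 1440 - 360) / 1440), 0, None)     # kW
print(unserved_energy_stats(load, pv, battery_kwh=10.0))
```

Re-running the same calculation for incrementally larger panel or battery sizes gives the incremental benefit (reduced curtailment) that can be set against the incremental cost.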
Procedia PDF Downloads 233
684 Argos System: Improvements and Future of the Constellation
Authors: Sophie Baudel, Aline Duplaa, Jean Muller, Stephan Lauriol, Yann Bernard
Abstract:
Argos is the main satellite telemetry system used by the wildlife research community, since its creation in 1978, for animal tracking and scientific data collection all around the world, to analyze and understand animal migrations and behavior. Marine mammal biology is one of the major disciplines that has benefited from Argos telemetry, and conversely, the marine mammal biologists' community has contributed a lot to the growth and development of Argos use cases. The Argos constellation, with 6 satellites in orbit in 2017 (Argos 2 payload on NOAA 15 and NOAA 18, Argos 3 payload on NOAA 19, SARAL, METOP A and METOP B), is being extended in the following years with the Argos 3 payload on METOP C (launch in October 2018), and Argos 4 payloads on Oceansat 3 (launch in 2019), CDARS in December 2021 (to be confirmed), METOP SG B1 in December 2022, and METOP-SG-B2 in 2029. Argos 4 will allow more frequency bands (600 kHz for Argos4NG, instead of 110 kHz for Argos 3), a new modulation dedicated to animal (sea turtle) tracking allowing very low transmission power transmitters (50 to 100 mW) with very low data rates (124 bps), enhancement of high data rates (1200-4800 bps), and improved downlink performance, all contributing to enhancing the system capacity (50,000 active beacons per month instead of 20,000 today). In parallel to this 'institutional Argos' constellation, in the context of a miniaturization trend in the space industry intended to reduce costs and multiply the satellites to serve more and more societal needs, the French Space Agency CNES, which designs the Argos payloads, is innovating and launching the Argos ANGELS project (Argos NEO Generic Economic Light Satellites). ANGELS will lead to a nanosatellite prototype with an Argos NEO instrument (30 cm x 30 cm x 20 cm) that will be launched in 2019. In the meantime, the design of the renewal of the Argos constellation, called Argos For Next Generations (Argos4NG), is on track and will be operational in 2022. Based on Argos 4 and benefitting from the feedback of the ANGELS project, this constellation will allow a revisit time of fewer than 20 minutes on average between two satellite passes and will also bring more frequency bands to improve the overall capacity of the system. The presentation will then be an overview of the Argos system, present and future, and the new capacities coming with it. On top of that, use cases of two Argos hardware modules will be presented: the goniometer pathfinder, allowing the recovery of Argos beacons at sea or on the ground in a 100 km radius horizon-free circle around the beacon location, and the new Argos 4 chipset called 'Artic', already available and tested by several manufacturers.
Keywords: Argos satellite telemetry, marine protected areas, oceanography, maritime services
Procedia PDF Downloads 179
683 Experiences of Discrimination and Coping Strategies of Second Generation Academics during the Career-Entry Phase in Austria
Authors: R. Verwiebe, L. Seewann, M. Wolf
Abstract:
This presentation addresses marginalization and discrimination as experienced by young academics with a migrant background in the Austrian labor market. Focusing on second generation academics of Central Eastern European and Turkish descent, we explore two major issues. First, we ask whether their career entry and everyday professional life entail origin-specific barriers. As educational residents, they show competences which, when lacking, tend to be drawn upon to explain discrimination: excellent linguistic skills, accredited high-level training, and networks. Second, we concentrate on how this group reacts to discrimination and overcomes experiences of marginalization. To answer these questions, we utilize recent sociological and social psychological theories that focus on the diversity of individual experiences. This distinguishes us from a long tradition of research that has dealt with the motives that inform discrimination, but has less often considered the effects on those concerned. Similarly, applied coping strategies have less often been investigated, though they may provide unique insights into current problematic issues. Building upon the present literature, we follow recent discrimination research incorporating the concepts of 'multiple discrimination', 'subtle discrimination', and 'visual social markers'. 21 problem-centered interviews are the empirical foundation underlying this study. The interviewees completed their entire educational career in Austria, graduated from different universities and disciplines and are working in their first post-graduate jobs (career entry phase). In our analysis, we combined thematic charting with a coding method. The results emanating from our empirical material indicated a variety of discrimination experiences, ranging from barely perceptible disadvantages to directly articulated and overt marginalization. The spectrum of experiences covered stereotypical suppositions at job interviews, the disavowal of competencies, symbolic or social exclusion by new colleagues, restricted professional participation (e.g. customer contact) and non-recruitment due to religious or ethnic markers (e.g. headscarves). In these experiences, the role of the academics' education level, networks, or competences seemed to be minimal, as negative prejudice on the basis of visible 'social markers' operated 'ex-ante'. The coping strategies identified in overcoming such barriers are: an increased emphasis on effort, avoidance of potentially marginalizing situations, direct resistance (mostly in the form of verbal opposition) and dismissal of negative experiences by ignoring or ironizing the situation. In some cases, the academics drew on their specific competences, such as an intellectual approach of studying specialist literature, a focus on their intercultural competences, or plans to migrate back to their parents' country of origin. Our analysis further suggests a distinction between reactive (i.e. to act on and respond to experienced discrimination) and preventative strategies (applied to obviate discrimination) of coping. In light of our results, we would like to stress that the tension between educational and professional success experienced by academics with a migrant background – and the barriers and marginalization they continue to face – are essential issues to be introduced to socio-political discourse.
It seems imperative to publicly accentuate the growing social, political and economic significance of this group, their educational aspirations, as well as their experiences of achievement and difficulties.
Keywords: coping strategies, discrimination, labor market, second generation university graduates
Procedia PDF Downloads 221
682 The Recorded Interaction Task: A Validation Study of a New Observational Tool to Assess Mother-Infant Bonding
Authors: Hannah Edwards, Femke T. A. Buisman-Pijlman, Adrian Esterman, Craig Phillips, Sandra Orgeig, Andrea Gordon
Abstract:
Mother-infant bonding is a term which refers to the early emotional connectedness between a mother and her infant. Strong mother-infant bonding promotes higher quality mother and infant interactions, including prolonged breastfeeding, secure attachment and increased sensitive parenting and maternal responsiveness. Strengthening of all such interactions leads to improved social behavior, and emotional and cognitive development throughout childhood, adolescence and adulthood. The positive outcomes observed following strong mother-infant bonding emphasize the need to screen new mothers for disrupted mother-infant bonding, and in turn the need for a robust, valid tool to assess mother-infant bonding. A recent scoping review conducted by the research team identified four tools to assess mother-infant bonding, all of which employed self-rating scales. Thus, whilst these tools demonstrated both adequate validity and reliability, they rely on self-reported information from the mother. As such, this may reflect a mother's perception of bonding with her infant, rather than her actual behavior. Therefore, a new tool to assess mother-infant bonding has been developed. The Recorded Interaction Task (RIT) addresses shortcomings of previous tools by employing observational methods to assess bonding. The RIT focuses on the common interaction between mother and infant of changing a nappy, at the target age of 2-6 months, which is visually recorded and then later assessed. Thirteen maternal and seven infant behaviors are scored on the RIT Observation Scoring Sheet, and a final combined score of mother-infant bonding is determined. The aim of the current study was to assess the content validity and inter-rater reliability of the RIT. A panel of six experts with specialized expertise in bonding and infant behavior were consulted. Experts were provided with the RIT Observation Scoring Sheet, a visual recording of a nappy change interaction, and a feedback form. Experts scored the mother and infant interaction on the RIT Observation Scoring Sheet and completed the feedback form, which collected their opinions on the validity of each item on the RIT Observation Scoring Sheet and the RIT as a whole. Twelve of the 20 items on the RIT Observation Scoring Sheet were scored 'Valid' by all (n=6) or most (n=5) experts. Two items received a 'Not valid' score from one expert. The remainder of the items received a mixture of 'Valid' and 'Potentially Valid' scores. Few changes were made to the RIT Observation Scoring Sheet following expert feedback, including rewording of items for clarity and the exclusion of an item focusing on behavior deemed not relevant for the target infant age. The overall ICC for single rater absolute agreement was 0.48 (95% CI 0.28 – 0.71). Experts' (n=6) ratings were less consistent for infant behavior (ICC 0.27 (-0.01 – 0.82)) compared to mother behavior (ICC 0.55 (0.28 – 0.80)). Whilst previous tools employ self-report methods to assess mother-infant bonding, the RIT utilizes observational methods. The current study highlights adequate content validity and moderate inter-rater reliability of the RIT, supporting its use in future research. A convergent validity study comparing the RIT against an existing tool is currently being undertaken to confirm these results.
Keywords: content validity, inter-rater reliability, mother-infant bonding, observational tool, recorded interaction task
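For readers unfamiliar with the reliability index reported above, a single-rater, absolute-agreement ICC (the Shrout-Fleiss ICC(2,1) form, which matches the "single rater absolute agreement" wording) can be computed from the two-way ANOVA mean squares as in the sketch below. The item-by-expert scores are hypothetical, not the study's ratings.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: array of shape (n_targets, k_raters)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)                               # per target (item)
    col_means = ratings.mean(axis=0)                               # per rater (expert)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)           # between-targets mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)           # between-raters mean square
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                                # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: 20 RIT items scored by 6 experts on a 0-2 scale.
rng = np.random.default_rng(1)
item_level = rng.integers(0, 3, size=(20, 1))
scores = np.clip(item_level + rng.integers(-1, 2, size=(20, 6)), 0, 2)
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```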
Procedia PDF Downloads 180
681 Identifying Risk Factors for Readmission Using Decision Tree Analysis
Authors: Sıdıka Kaya, Gülay Sain Güven, Seda Karsavuran, Onur Toka
Abstract:
This study is part of an ongoing research project supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 114K404, and participation in this conference was supported by Hacettepe University Scientific Research Coordination Unit under Project Number 10243. Evaluation of hospital readmissions is gaining importance in terms of quality and cost, and is becoming the target of national policies. In Turkey, the topic of hospital readmission is relatively new on the agenda, and very few studies have been conducted on this topic. The aim of this study was to determine 30-day readmission rates and risk factors for readmission. Whether readmission was planned, related to the prior admission, and avoidable or not was also assessed. The study was designed as a 'prospective cohort study.' 472 patients hospitalized in internal medicine departments of a university hospital in Turkey between February 1, 2015 and April 30, 2015 were followed up. Analyses were conducted using IBM SPSS Statistics version 22.0 and SPSS Modeler 16.0. The average age of the patients was 56, and 56% of the patients were female. Among these patients, 95 were readmitted. The overall readmission rate was calculated as 20% (95/472). However, only 31 readmissions were unplanned. The unplanned readmission rate was 6.5% (31/472). Out of the 31 unplanned readmissions, 24 were related to the prior admission. Only 6 related readmissions were avoidable. To determine risk factors for readmission, we constructed a Chi-square automatic interaction detector (CHAID) decision tree algorithm. CHAID decision trees are nonparametric procedures that make no assumptions about the underlying data. The algorithm determines how independent variables best combine to predict a binary outcome based on 'if-then' logic by partitioning each independent variable into mutually exclusive subsets based on homogeneity of the data. The independent variables we included in the analysis were: clinic of the department, occupied beds/total number of beds in the clinic at the time of discharge, age, gender, marital status, educational level, distance to residence (km), number of people living with the patient, any person to help his/her care at home after discharge (yes/no), regular source (physician) of care (yes/no), day of discharge, length of stay, ICU utilization (yes/no), total comorbidity score, mean scores for each of the 3 dimensions of the Readiness for Hospital Discharge Scale (patient's personal status, patient's knowledge, and patient's coping ability), and number of daycare admissions within 30 days of discharge. In the analysis, to balance the data, we included all 95 readmitted patients (46.12%) but only 111 (53.88%) of our 377 non-readmitted patients. The risk factors for readmission were found to be total comorbidity score, gender, patient's coping ability, and patient's knowledge. The strongest identifying factor for readmission was the comorbidity score. If a patient's comorbidity score was higher than 1, the risk for readmission increased. The results of this study need to be validated on other datasets with more patients. However, we believe that this study will guide further studies of readmission, and that CHAID is a useful tool for identifying risk factors for readmission.
Keywords: decision tree, hospital, internal medicine, readmission
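A decision-tree risk model of this kind can be prototyped as in the following sketch. Note that CHAID itself is not available in scikit-learn, so a CART classifier is used here as a stand-in, and the patient records, feature names and cut-offs are hypothetical rather than the study's data.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical records covering a few of the predictors listed in the abstract.
df = pd.DataFrame({
    "comorbidity_score": [0, 2, 1, 3, 0, 4, 1, 2, 0, 3, 2, 1],
    "gender":            [0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0],   # 0 = male, 1 = female
    "coping_ability":    [8.1, 4.2, 6.5, 3.9, 7.7, 2.8, 5.0, 6.1, 8.4, 3.5, 4.8, 7.0],
    "knowledge":         [7.5, 5.1, 6.0, 4.0, 8.2, 3.1, 5.5, 6.6, 7.9, 4.4, 5.2, 6.8],
    "readmitted":        [0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0],
})
X, y = df.drop(columns="readmitted"), df["readmitted"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree, feature_names=list(X.columns)))   # human-readable if-then rules
print("held-out accuracy:", tree.score(X_te, y_te))
```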
Procedia PDF Downloads 256
680 Effect of Non-Thermal Plasma, Chitosan and Polymyxin B on Quorum Sensing Activity and Biofilm of Pseudomonas aeruginosa
Authors: Alena Cejkova, Martina Paldrychova, Jana Michailidu, Olga Matatkova, Jan Masak
Abstract:
The increasing resistance of pathogenic microorganisms to many antibiotics is a serious threat to the treatment of infectious diseases and the cleaning of medical instruments. It should be added that the resistance of microbial populations growing in biofilms is often up to 1000 times higher compared to planktonic cells. Biofilm formation in a number of microorganisms is largely influenced by the quorum sensing regulatory mechanism. Finding external factors, such as natural substances or physical processes, that can interfere effectively with quorum sensing signal molecules should reduce the ability of the cell population to form biofilm and increase the effectiveness of antibiotics. The present work is devoted to the effect of chitosan, as a representative of natural substances with anti-biofilm activity, and non-thermal plasma (NTP), alone or in combination with polymyxin B, on biofilm formation by Pseudomonas aeruginosa. Particular attention was paid to the influence of these agents on the level of quorum sensing signal molecules (acyl-homoserine lactones) during planktonic and biofilm cultivations. Opportunistic pathogenic strains of Pseudomonas aeruginosa (DBM 3081, DBM 3777, ATCC 10145, ATCC 15442) were used as model microorganisms. Cultivations of planktonic and biofilm populations in 96-well microtiter plates on a horizontal shaker were used for determination of the antibiotic and anti-biofilm activity of chitosan and polymyxin B. Biofilm-growing cells on titanium alloy, which is used for the preparation of joint replacements, were exposed to non-thermal plasma generated by a cometary corona with a metallic grid for 15 and 30 minutes. Cultivation then followed in fresh LB medium with or without chitosan or polymyxin B for the next 24 h. Biofilms were quantified by the crystal violet assay. Metabolic activity of the cells in biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide) colorimetric test, based on the reduction of MTT into formazan by the dehydrogenase system of living cells. Activity of N-acyl homoserine lactones (AHLs), compounds involved in the regulation of biofilm formation, was determined using an Agrobacterium tumefaciens strain harboring a traG::lacZ/traR reporter gene responsive to AHLs. The experiments showed that both chitosan and non-thermal plasma reduce the AHL level and thus biofilm formation and stability. The effectiveness of both agents was somewhat strain dependent. During the eradication of P. aeruginosa DBM 3081 biofilm on titanium alloy induced by chitosan (45 mg/l), there was an 80% decrease in AHLs. Applying chitosan or NTP to the P. aeruginosa DBM 3777 biofilm did not cause a significant decrease in AHLs; however, the combination of both (chitosan 55 mg/l and NTP 30 min) resulted in a 70% decrease in AHLs. The combined application of NTP and polymyxin B allowed the antibiotic concentration to be reduced while achieving the same level of AHL inhibition in P. aeruginosa ATCC 15442. The results show that non-thermal plasma and chitosan have considerable potential for the eradication of highly resistant P. aeruginosa biofilms, for example on medical instruments or joint implants.
Keywords: anti-biofilm activity, chitosan, non-thermal plasma, opportunistic pathogens
Procedia PDF Downloads 199
679 Investigating the Thermal Comfort Properties of Mohair Fabrics
Authors: Adine Gericke, Jiri Militky, Mohanapriya Venkataraman
Abstract:
Mohair, obtained from the Angora goat, is a luxury fiber and recognized as one of the best quality natural fibers. Expansion of the use of mohair into technical and functional textile products creates the need for a better understanding of how the use of mohair in fabrics will impact its thermo-physiological comfort related properties. Despite its popularity, very little information is available on the quantification of the thermal and moisture management properties of mohair fabrics. This study investigated the effect of fibrous matter composition and fabric structural parameters on conductive and convective heat transfer to attain more information on the thermal comfort properties of mohair fabrics. Dry heat transfer through textiles may involve conduction through the fibrous phase, radiation through fabric interstices and convection of air within the structure. Factors that play a major role in heat transfer by conduction are fabric areal density (g/m2) and derived quantities such as cover factor and porosity. Convective heat transfer through fabrics is found in environmental conditions where there is wind flow or the object is moving (e.g. running or walking). The thermal comfort properties of mohair fibers were objectively evaluated firstly in comparison with other textile fibers and secondly in a variety of fabric structures. Two sample sets were developed for this purpose, with fiber content, yarn structure and fabric design as the main variables. SEM and microscopic images were obtained to closely examine the physical structures of the fibers and fabrics. Thermal comfort properties such as thermal resistance and thermal conductivity, as well as fabric thickness, were measured on the well-known Alambeta test instrument. Clothing insulation (clo) was calculated from the above. The thermal properties of fabrics under heat convection were evaluated using a laboratory model device developed at the Technical University of Liberec (referred to as the TP2-instrument). The effects of the different variables on fabric thermal comfort properties were analyzed statistically using TIBCO Statistica Software. The results showed that fabric structural properties, specifically sample thickness, played a significant role in determining the thermal comfort properties of the fabrics tested. It was found that, regarding thermal resistance related to conductive heat flow, the effect of fiber type was not always statistically significant, probably as a result of the amount of trapped air within the fabric structure. The very low thermal conductivity of air, compared to that of the fibers, had a significant influence on the total conductivity and thermal resistance of the samples. This was confirmed by the high correlation of these factors with sample thickness. Regarding convective heat flow, the most important factor influencing the ability of the fabric to allow dry heat to move through the structure was again fabric thickness. However, it would be wrong to totally disregard the effect of fiber composition on the thermal resistance of textile fabrics. In this study, the samples containing mohair or mohair/wool were consistently thicker than the others, even though weaving parameters were kept constant. This can be ascribed to the physical properties of the mohair fibers, which render them exceptionally effective at trapping air among fibers (in a yarn) as well as among yarns (inside a fabric structure).
The thicker structures trap more air to provide higher thermal insulation, but also prevent the free flow of air that allows thermal convection.
Keywords: mohair fabrics, convective heat transfer, thermal comfort properties, thermal resistance
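As a rough illustration of the conversion mentioned above (thermal resistance from the Alambeta measurements and its expression in clo, using the standard definition 1 clo = 0.155 m²·K/W), with illustrative numbers rather than the measured values:

```latex
R = \frac{h}{\lambda}, \qquad
I_{\mathrm{clo}} = \frac{R}{0.155\ \mathrm{m^{2}\,K\,W^{-1}}}, \qquad
\text{e.g. } h = 2\ \mathrm{mm},\ \lambda = 0.045\ \mathrm{W\,m^{-1}\,K^{-1}}
\;\Rightarrow\; R \approx 0.044\ \mathrm{m^{2}\,K\,W^{-1}} \approx 0.29\ \text{clo}
```

where h is the fabric thickness and λ its effective thermal conductivity; the strong dependence of R on h is why thickness dominates the conductive results.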
Procedia PDF Downloads 139
678 Efficacy of Pooled Sera in Comparison with Commercially Acquired Quality Control Sample for Internal Quality Control at the Nkwen District Hospital Laboratory
Authors: Diom Loreen Ndum, Omarine Njimanted
Abstract:
With increasing automation in clinical laboratories, the requirements for quality control materials have greatly increased in order to monitor daily performance. The constant use of commercial control material is not economically feasible for many developing countries because of non-availability or the high cost of the materials. Therefore, the preparation and use of in-house quality control serum would be a very cost-effective measure with respect to laboratory needs. The objective of this study was to determine the efficacy of in-house prepared pooled sera with respect to commercially acquired control samples for routine internal quality control at the Nkwen District Hospital Laboratory. This was an analytical study; serum was taken from leftover serum samples of 5 healthy adult blood donors at the blood bank of Nkwen District Hospital, which had been screened negative for human immunodeficiency virus (HIV), hepatitis C virus (HCV) and hepatitis B surface antigen (HBsAg), and pooled together in a sterile container. From the pooled sera, sixty aliquots of 150 µL each were prepared. Forty aliquots of 150 µL each of the commercially acquired sample were prepared after reconstitution and stored in a deep freezer at −20°C until required for analysis. The study ran from 9 June to 12 August 2022. Every day, alongside the commercial control sample, one aliquot of pooled sera was removed from the deep freezer and allowed to thaw before being analyzed for the following parameters: blood urea, serum creatinine, aspartate aminotransferase (AST), alanine aminotransferase (ALT), potassium and sodium. After obtaining the first 20 values for each parameter of the pooled sera, the mean, standard deviation and coefficient of variation were calculated, and a Levey-Jennings (L-J) chart established. The mean and standard deviation for the commercially acquired control sample were provided by the manufacturer. The following results were observed: pooled sera had a lower standard deviation for creatinine, urea and AST than the commercially acquired control samples. There was a statistically significant difference (p<0.05) between the mean values of creatinine, urea and AST for the in-house quality control when compared with the commercial control. The coefficients of variation for the parameters of both the commercial control and the in-house control samples were less than 30%, which is an acceptable difference. The L-J charts revealed shifts and trends (warning signs), so troubleshooting and corrective measures were taken. In conclusion, an in-house quality control sample prepared from pooled serum can be a good control sample for routine internal quality control.
Keywords: internal quality control, levey-jennings chart, pooled sera, shifts, trends, westgard rules
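The core calculation behind a Levey-Jennings chart (mean, SD and CV from the first 20 baseline results, then flagging new results against the control limits) can be sketched as follows. The creatinine values and the reduced two-rule Westgard check are illustrative assumptions, not the laboratory's data or full rule set.

```python
import numpy as np

def lj_limits(baseline):
    """Control limits from the first 20 in-house pooled-serum results."""
    mean, sd = np.mean(baseline), np.std(baseline, ddof=1)
    cv = 100 * sd / mean
    return mean, sd, cv

def westgard_flags(value, mean, sd):
    """Very reduced Westgard check on a single new result (1_2s warning, 1_3s rejection)."""
    z = (value - mean) / sd
    if abs(z) > 3:
        return "reject (1_3s)"
    if abs(z) > 2:
        return "warning (1_2s)"
    return "in control"

# Hypothetical creatinine results (micromol/L) for the pooled serum.
baseline = np.array([78, 80, 79, 82, 77, 81, 80, 79, 83, 78,
                     80, 81, 79, 77, 82, 80, 78, 81, 79, 80])
mean, sd, cv = lj_limits(baseline)
print(f"mean = {mean:.1f}, SD = {sd:.1f}, CV = {cv:.1f} %")
print(westgard_flags(88, mean, sd))
```

Plotting daily results against the mean and the ±1SD, ±2SD and ±3SD lines gives the L-J chart itself; runs of points on one side of the mean indicate shifts, and steadily drifting points indicate trends.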
Procedia PDF Downloads 76
677 Modeling, Topology Optimization and Experimental Validation of Glass-Transition-Based 4D-Printed Polymeric Structures
Authors: Sara A. Pakvis, Giulia Scalet, Stefania Marconi, Ferdinando Auricchio, Matthijs Langelaar
Abstract:
In recent developments in the field of multi-material additive manufacturing, differences in material properties are exploited to create printed shape-memory structures, which are referred to as 4D-printed structures. New printing techniques allow for the deliberate introduction of prestresses in the specimen during manufacturing, and, in combination with the right design, this enables new functionalities. This research focuses on bi-polymer 4D-printed structures, where the transformation process is based on a heat-induced glass transition in one material lowering its Young's modulus, combined with an initial prestress in the other material. Upon the decrease in stiffness, the prestress is released, which results in the realization of an essentially pre-programmed deformation. As the design of such functional multi-material structures is crucial but far from trivial, a systematic methodology to design 4D-printed structures is developed, where a finite element model is combined with a density-based topology optimization method to describe the material layout. This modeling approach is verified by a convergence analysis and validated by comparing its numerical results to analytical and published data. Specific aspects that are addressed include the interplay between the definition of the prestress and the material interpolation function used in the density-based topology description, the inclusion of a temperature-dependent stiffness relationship to simulate the glass transition effect, and the importance of the consideration of geometric nonlinearity in the finite element modeling. The efficacy of topology optimization to design 4D-printed structures is explored by applying the methodology to a variety of design problems, both in 2D and 3D settings. Bi-layer designs composed of thermoplastic polymers are printed by means of fused deposition modeling (FDM) technology. Acrylonitrile butadiene styrene (ABS) polymer undergoes the glass transition transformation, while polyurethane (TPU) polymer is prestressed by means of the 3D-printing process itself. Tests inducing shape transformation in the printed samples through heating are performed to calibrate the prestress and validate the modeling approach by comparing the numerical results to the experimental findings. Using the experimentally obtained prestress values, more complex designs have been generated through topology optimization, and samples have been printed and tested to evaluate their performance. This study demonstrates that by combining topology optimization and 4D-printing concepts, stimuli-responsive structures with specific properties can be designed and realized.
Keywords: 4D-printing, glass transition, shape memory polymer, topology optimization
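For context, a commonly used density-based material interpolation of the kind referred to above is the SIMP scheme, here written with a temperature-dependent modulus to reflect the glass-transition-induced stiffness drop; the authors' exact interpolation function may differ:

```latex
E(\rho, T) = E_{\min} + \rho^{\,p}\,\bigl(E_{0}(T) - E_{\min}\bigr),
\qquad 0 \le \rho \le 1,\quad p > 1
```

where ρ is the element density design variable, p the penalization exponent, E₀(T) the temperature-dependent Young's modulus of the solid material, and E_min a small residual stiffness that keeps the stiffness matrix non-singular.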
Procedia PDF Downloads 207
676 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of the extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The fundamental frequencies extracted are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, a new anomaly detector strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with accuracy and an F1 score greater than 96% with the proposed method.
Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
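As an illustration of the kind of baseline the proposed OCCNN2 detector is compared against, the sketch below builds a simple PCA reconstruction-error detector on tracked fundamental frequencies. The frequency values, threshold quantile, and simulated damage shift are synthetic stand-ins, and the sketch does not implement the two-step OCCNN2 boundary estimation itself.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_detector(f_train, n_components=2, quantile=0.99):
    """Baseline PCA detector on the four tracked fundamental frequencies.
    Returns the fitted model and a reconstruction-error threshold."""
    pca = PCA(n_components=n_components).fit(f_train)
    recon = pca.inverse_transform(pca.transform(f_train))
    err = np.linalg.norm(f_train - recon, axis=1)
    return pca, np.quantile(err, quantile)

def is_anomaly(pca, threshold, f_new):
    recon = pca.inverse_transform(pca.transform(f_new))
    return np.linalg.norm(f_new - recon, axis=1) > threshold

# Synthetic stand-in for the tracked frequencies (Hz) in healthy conditions.
rng = np.random.default_rng(2)
healthy = rng.normal([3.9, 5.0, 9.8, 10.3], 0.03, size=(500, 4))
damaged = healthy[:50] - [0.15, 0.10, 0.20, 0.25]    # stiffness loss lowers the frequencies
pca, thr = fit_pca_detector(healthy)
print("false alarms :", is_anomaly(pca, thr, healthy).mean())
print("detections   :", is_anomaly(pca, thr, damaged).mean())
```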
Procedia PDF Downloads 122
675 First-Trimester Screening of Preeclampsia in a Routine Care
Authors: Tamar Grdzelishvili, Zaza Sinauridze
Abstract:
Introduction: Preeclampsia is a complication of the second trimester of pregnancy, which is characterized by high morbidity and multiorgan damage. Many complex pathogenic mechanisms are now implicated as being responsible for this disease (1). Preeclampsia is one of the leading causes of maternal mortality worldwide. The statistics alone convey the seriousness of this pathology: about 100,000 women die of preeclampsia every year. It occurs in 3-14% of pregnant women (varying significantly depending on racial origin or ethnicity and geographical region), in 75% of cases in a mild form and in 25% in a severe form. With severe pre-eclampsia-eclampsia, perinatal mortality increases by 5 times and stillbirth by 9.6 times. Considering that the only way to treat the disease is to end the pregnancy, the priority is timely diagnosis and prevention of the disease. Identifying pregnant women at high risk of PE and giving prophylaxis would reduce the incidence of preterm PE. The first-trimester screening model developed by the Fetal Medicine Foundation (FMF), which uses Bayes' theorem to combine maternal characteristics and medical history together with measurements of mean arterial pressure, uterine artery pulsatility index, and serum placental growth factor, has been proven to be effective and to have screening performance superior to that of the traditional risk-factor-based approach for the prediction of PE (2). Methods: Retrospective single-center screening study. The study population consisted of women from the Tbilisi maternity hospital “Pineo medical ecosystem” who met the following criteria: they spoke Georgian, English, or Russian and agreed to participate in the study after discussing informed consent and answering questions. Prior to the study, informed consent forms approved by the Institutional Review Board were obtained from the study subjects. Early assessment of preeclampsia was performed between 11-13 weeks of pregnancy. The following were evaluated: anamnesis, dopplerography of the uterine artery, mean arterial blood pressure, and the biochemical parameter pregnancy-associated plasma protein A (PAPP-A). Individual risk assessment was performed with the Fast Screen 3.0 software (Thermo Fisher Scientific). Results: A total of 513 women were recruited, and through the study, 51 women were diagnosed with preeclampsia (34.5% in the pregnant women with high risk, 6.5% in the pregnant women with low risk; P<0.0001). Conclusions: First-trimester screening combining maternal factors with uterine artery Doppler, blood pressure, and pregnancy-associated plasma protein-A is useful to predict PE in a routine care setting. Larger studies are needed for final conclusions. The research is still ongoing.
Keywords: first-trimester, preeclampsia, screening, pregnancy-associated plasma protein
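In general terms, screening models of this kind update a prior risk derived from maternal characteristics and history with the likelihood of the measured markers (here uterine artery Doppler, mean arterial pressure, and PAPP-A). Written as a generic Bayes update, purely for orientation, since the FMF algorithm itself uses a more elaborate formulation of this idea:

```latex
\text{posterior odds of PE} \;=\; \text{prior odds (maternal factors)} \times
\frac{p(\text{markers} \mid \text{PE})}{p(\text{markers} \mid \text{no PE})}
```

Women whose posterior risk exceeds a chosen cut-off are classified as high risk and offered prophylaxis and closer follow-up.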
Procedia PDF Downloads 75
674 Comparative Study of Outcome of Patients with Wilms Tumor Treated with Upfront Chemotherapy and Upfront Surgery in Alexandria University Hospitals
Authors: Golson Mohamed, Yasmine Gamasy, Khaled EL-Khatib, Anas Al-Natour, Shady Fadel, Haytham Rashwan, Haytham Badawy, Nadia Farghaly
Abstract:
Introduction: Wilms tumor is the most common malignant renal tumor in children. Much progress has been made in the management of patients with this malignancy over the last 3 decades. Today, treatments are based on several trials and studies conducted by the International Society of Pediatric Oncology (SIOP) in Europe and the National Wilms Tumor Study Group (NWTS) in the USA. It is necessary for us to understand why we follow either of the protocols: NWTS, which follows the upfront surgery principle, or SIOP, which follows the upfront chemotherapy principle, in all stages of the disease. Objective: The aim is to assess the outcome of patients treated with preoperative chemotherapy and patients treated with upfront surgery in order to compare their effect on overall survival. Study design: To decide which protocol to follow, a study was carried out on the records of patients aged 1 day to 18 years old suffering from Wilms tumor who were admitted to the Alexandria University Hospital pediatric oncology, pediatric urology and pediatric surgery departments, with a retrospective survey of records from 2010 to 2015, and design and editing of the transfer sheet following a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow approach. Data were fed to the computer and analyzed using the IBM SPSS software package version 20.0 (11). Qualitative data were described using number and percent. Quantitative data were described using range (minimum and maximum), mean, standard deviation and median. Comparison between different groups regarding categorical variables was tested using the Chi-square test. When more than 20% of the cells had an expected count of less than 5, correction for chi-square was conducted using Fisher's exact test or Monte Carlo correction. The distributions of quantitative variables were tested for normality using the Kolmogorov-Smirnov test, Shapiro-Wilk test, and D'Agostino test; if these revealed a normal data distribution, parametric tests were applied. If the data were abnormally distributed, non-parametric tests were used. For normally distributed data, comparison between two independent populations was done using the independent t-test. For abnormally distributed data, comparison between two independent populations was done using the Mann-Whitney test. Significance of the obtained results was judged at the 5% level. Results: A statistically significant difference was observed in survival between the two studied groups, favoring upfront chemotherapy (86.4%) as compared to the upfront surgery group (59.3%), where P=0.009. Regarding complications, 20 cases (74.1%) out of 27 were complicated in the group of patients treated with upfront surgery, while 30 cases (68.2%) out of 44 had complications in patients treated with upfront chemotherapy. Also, the incidence of intraoperative complication (rupture) was lower in the upfront chemotherapy group as compared to the upfront surgery group. Conclusion: Upfront chemotherapy has superiority over upfront surgery, as patients who started with upfront chemotherapy showed a higher survival rate, a lower percentage of complications, less need for radiotherapy, and a lower rate of recurrence.
Keywords: Wilms tumor, renal tumor, chemotherapy, surgery
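The test-selection rule described above (chi-square for categorical comparisons, switching to Fisher's exact test when more than 20% of cells have expected counts below 5) can be sketched as follows. The 2x2 counts are reconstructed from the reported group sizes and survival percentages (44 and 27 patients; 86.4% vs. 59.3% survival) purely for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def compare_groups(table):
    """Compare a 2x2 outcome table (e.g. survived/died by treatment arm),
    switching to Fisher's exact test when expected counts are small."""
    table = np.asarray(table)
    chi2, p, dof, expected = chi2_contingency(table)
    if (expected < 5).mean() > 0.20:          # >20% of cells with expected count < 5
        _, p = fisher_exact(table)
        return "Fisher's exact", p
    return "Chi-square", p

# Counts reconstructed from the reported percentages:
# rows = upfront chemotherapy / upfront surgery, columns = survived / died.
print(compare_groups([[38, 6], [16, 11]]))
```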
Procedia PDF Downloads 316
673 Co-pyrolysis of Sludge and Kaolin/Zeolite to Stabilize Heavy Metals
Authors: Qian Li, Zhaoping Zhong
Abstract:
Sewage sludge, a typical solid waste, has inevitably been produced in enormous quantities in China. Worse still, the amount of sewage sludge produced has been increasing due to rapid economic development and urbanization. Compared to conventional methods of treating sewage sludge, pyrolysis has been considered an economical and ecological technology because it can significantly reduce the sludge volume, completely kill pathogens, and produce valuable solid, gas, and liquid products. However, the large-scale utilization of sludge biochar has been limited due to the considerable risk posed by heavy metals in the sludge. Heavy metals enriched in pyrolytic biochar can be divided into exchangeable, reducible, oxidizable, and residual forms. The residual form of heavy metals is the most stable and cannot be used by organisms. Kaolin and zeolite are environmentally friendly inorganic minerals with high surface area and heat resistance, so they exhibit enormous potential to immobilize heavy metals. In order to reduce the risk of heavy metals leaching from the pyrolysis biochar, this study pyrolyzed sewage sludge mixed with kaolin/zeolite in a small rotary kiln. The influences of additives and pyrolysis temperature on the leaching concentration and morphological transformation of heavy metals in pyrolysis biochar were investigated. The potential mechanism of stabilizing heavy metals in the co-pyrolysis of sludge blended with kaolin/zeolite was explained by scanning electron microscopy, X-ray diffraction, and specific surface area and porosity analysis. The European Community Bureau of Reference sequential extraction procedure was applied to analyze the forms of heavy metals in the sludge and the pyrolysis biochar. All concentrations of heavy metals were determined by flame atomic absorption spectrophotometry. Compared with the proportions of heavy metals associated with the F4 fraction in pyrolytic carbon prepared without additional agents, those in carbon obtained by co-pyrolysis of sludge and kaolin/zeolite increased. Increasing the additive dosage could improve the proportions of the stable fraction of various heavy metals in biochar. Kaolin exhibited a better effect on stabilizing heavy metals than zeolite. Aluminosilicate additives with excellent adsorption performance could capture more of the released heavy metals during sludge pyrolysis. The heavy metal ions would then react with the oxygen ions of the additives to form silicates and aluminates, causing the conversion of heavy metals from unstable fractions (sulfate, chloride, etc.) to stable fractions (silicate, aluminate, etc.). This study reveals that the efficiency of stabilizing heavy metals depends on the formation of stable mineral compounds containing heavy metals in the pyrolysis biochar.
Keywords: co-pyrolysis, heavy metals, immobilization mechanism, sewage sludge
Procedia PDF Downloads 65
672 Contamination by Heavy Metals of Some Environmental Objects in Adjacent Territories of Solid Waste Landfill
Authors: D. Kekelidze, G. Tsotadze, G. Maisuradze, L. Akhalbedashvili, M. Chkhaidze
Abstract:
Statement of Problem: The problem of solid wastes, which are dangerous sources of environmental pollution, is an urgent issue for Georgia, as there are no waste-treatment or waste-incineration plants. Urban peripheral and rural areas, frequently along small rivers, are occupied by landfills without any permission. The study of the pollution of some environmental objects in the territories adjacent to the solid waste landfill in Tbilisi was carried out in 2020-2021, within the framework of the project “Ecological monitoring of the landfills surrounding areas and population health risk assessment”. Research objects: This research had the goal of assessing the ecological state of environmental objects (soil cover and surface water) in the territories adjacent to the solid waste landfill, on the basis of changes in heavy metal (HM) concentrations with distance from the landfill. An open sanitary landfill for solid domestic waste in Tbilisi is located at the suburb of Lilo, surrounded by densely populated villages. The content of the following HM was determined in soil and river water samples: Pb, Cd, Cu, Zn, Ni, Co, Mn. Methodology: The HM content in the samples was measured using flame atomic absorption spectrophotometry (Perkin-Elmer AAnalyst 200 spectrophotometer) in accordance with ISO 11466 and GOST R 53218-2008. Results and discussion: The data obtained confirmed migration of HM mainly as a function of distance from the landfill, which can be explained by their areal emissions and open storage; they could also get into the soil cover under the influence of wind and precipitation. The concentrations of Pb, Cd, Cu and Zn always increase with proximity to the landfill. High concentrations of Pb and Cd are characteristic of the soil covers of the adjacent territories around the landfill at distances of 250 and 500 meters. They create a dangerous zone, since they can later migrate into plants and enter rivers and lakes. Concentrations higher than the maximum permissible concentrations (MPC) for surface waters of Georgia are observed for Pb and Cd. One of the reasons for the low concentration of HM in river water may be high turbidity: as is known, suspended particles are good natural sorbents, which cause low concentrations of dissolved forms. The concentrations of Cu, Ni and Mn increase in winter, since in this season the rivers switch to groundwater feeding. Conclusion: The soil covers of the areas adjacent to the landfill in Lilo are contaminated with HM. High concentrations in soils are characteristic of lead and cadmium. Concentrations elevated in comparison with the MPC for surface waters adopted in Georgia are also observed for Pb and Cd at checkpoints along and 1000 m downstream of the landfill. The data obtained confirm migration of HM to the territories adjacent to the landfill and to the Lochini River. Since the migration and toxicity of metals also depend on the presence of their mobile forms in water bodies, samples of bottom sediments should be taken too. Bottom sediments reflect a long-term picture of pollution; they accumulate HM and represent a constant source of secondary pollution of water bodies. The study of the physicochemical forms of metals is one of the priority areas for further research.
Keywords: landfill, pollution, heavy metals, migration
Procedia PDF Downloads 99
671 Catalytic Dehydrogenation of Formic Acid into H2/CO2 Gas: A Novel Approach
Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy
Abstract:
Finding a sustainable alternative energy to fossil fuel is an urgent need as various environmental challenges arise around the world. Therefore, formic acid (FA) decomposition has become an attractive field that lies at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a new energy vector. Liquid FA features a considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/kg, which places it in the prime seat as an energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H2 production, which plays a key role in the hydrogenation reactions of biomass into higher-value components. It is reported elsewhere in the literature that catalytic decomposition of FA is usually performed in poorly designed setups using simple glassware under magnetic stirring, thus demanding further energy investment to retain the used catalyst. This work suggests an approach that integrates the design of a novel catalyst featuring magnetic properties with a robust setup that minimizes experimental and measurement discrepancies. One of the most prominent active species for the dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of engrafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H2 gas from FA. Using an ordinary magnet to collect the spent catalyst renders core-shell magnetic nanoparticles the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. Through a novel approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions. The produced gas (H2+CO2) was measured by connecting the gas outlet to a measuring system based on the amount of displaced water. The novelty of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at mild steady-state temperatures, continuous gas measurement, and collection of the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and a CHN elemental analyzer provided us with details of catalyst preparation and opened new avenues to alter the nanostructure of the catalyst framework. Consequently, the introduction of amine groups led to appreciable improvements in the dispersion of the doped metals and eventually to nearly complete conversion (100%) of FA after 7 hours. The relative importance of the process parameters, such as temperature (35-85°C), stirring speed (150-450 rpm), catalyst loading (50-200 mg), and Pd doping ratio (0.75-1.80 wt.%), on gas yield was assessed by a Taguchi design-of-experiment based model. Experimental results showed that operating at the lower temperature range (35-50°C) yielded more gas, while the catalyst loading and Pd doping wt.% were found to be the most significant factors, with P-values of 0.026 and 0.031, respectively.
Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles
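To illustrate the kind of screening a Taguchi design-of-experiment model performs, the sketch below lays the four factors on a standard L9(3^4) orthogonal array and computes the main effect (level-mean range) of each factor on gas yield. The nine yield values are hypothetical placeholders; the real responses, signal-to-noise analysis and ANOVA come from the experimental rig.

```python
import numpy as np
import pandas as pd

# Standard L9(3^4) orthogonal array (factor levels coded 0, 1, 2).
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
factors = ["temperature", "stirring", "catalyst_loading", "Pd_wt%"]

# Hypothetical gas yields (mL) for the nine runs.
yield_ml = np.array([410, 520, 600, 450, 580, 430, 500, 390, 470])

runs = pd.DataFrame(L9, columns=factors).assign(gas_yield=yield_ml)
for f in factors:
    effect = runs.groupby(f)["gas_yield"].mean()
    print(f, "level means:", effect.round(1).tolist(),
          "| range (effect size):", round(effect.max() - effect.min(), 1))
```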
Procedia PDF Downloads 50670 Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand
Authors: Wen Liu, Errol Haarhoff, Lee Beattie
Abstract:
In 2010, New Zealand’s central government re-organised local government arrangements in Auckland by amalgamating the previous regional council and seven supporting local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand’s total population). This includes addressing Auckland’s strategic urban growth management and setting its urban planning policy directions for the next 40 years, expressed in the first ever spatial plan in the region, the Auckland Plan (2012). The Auckland Plan supports a compact city model by concentrating the larger part of future urban growth and development in, and around, existing and proposed transit centres, with the intention that Auckland becomes a globally competitive city and achieves ‘the most liveable city in the world’. Turning that vision into reality is operationalised through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the ‘rule book’ on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad literature on urban growth management, one significant issue stands out about intensification: the ‘gap’ between strategic planning and what has actually been achieved is evident in the argument for the ‘compact’ urban form. Although the compact city model may have a wide range of merits, the extent to which these are actualised largely relies on how intensification is actually delivered. The transformation of the rhetoric of the residential intensification model into reality is of profound importance, yet it has enjoyed limited empirical analysis. In Auckland, the establishment of the Auckland Plan set up the strategies to deliver intensification across diverse arenas. Nonetheless, planning policy by itself does not necessarily achieve the envisaged objectives; delivering a planning system with the capacity to enhance and sustain plan implementation is another demanding agenda. Though the Auckland Plan provides a wide-ranging strategic context, its actual delivery is beholden to the Unitary Plan. However, questions have been asked about whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan’s policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, which makes it doubtful whether the main principles of the intensification strategies can be realised. This raises questions over whether the Auckland Plan’s policy goals can be achieved in practice, including delivering a ‘quality compact city’ and residential intensification. Taking Auckland as an example of a traditionally sprawling city, this article investigates the efficacy of plan making and implementation directed towards higher-density development. It explores the process of plan development, plan making and the implementation frameworks of the first ever spatial plan in Auckland, so as to explicate the objectives and processes involved, and to consider whether this will facilitate decision-making processes to realise the anticipated intensive urban development.Keywords: urban intensification, sustainable development, plan making, governance and implementation
Procedia PDF Downloads 555669 Improved Functions For Runoff Coefficients And Smart Design Of Ditches & Biofilters For Effective Flow detention
Authors: Thomas Larm, Anna Wahlsten
Abstract:
An international literature study has been carried out to compare commonly used methods for the dimensioning of transport systems and stormwater facilities for flow detention. The focus of the literature study regarding the calculation of design flow and detention has been the widely used Rational Method and its underlying parameters. The impact of chosen design parameters such as return time, rain intensity, runoff coefficient, and climate factor has been studied. The parameters used in the calculations have been analyzed with regard to how they can be calculated and within what limits they can be used. Data used in different countries have been specified, e.g., recommended rainfall return times, estimated runoff times, and climate factors used for different cases and time periods. The literature study concluded that the runoff coefficient is the most uncertain parameter and also the one that most affects the calculated flow and the required detention volume. Proposals have been developed for new runoff coefficients, including a new proposed method with equations for calculating runoff coefficients as a function of return time (years) and rain intensity (l/s/ha), respectively. It is also suggested, contrary to what many design manuals recommend, that the use of the Rational Method need not be limited to a specific catchment size. The proposed relationships between return time or rain intensity and runoff coefficients need further investigation, including quantification of the uncertainties. Examples of parameters that have not yet been considered are the influence on the runoff coefficients of different dimensioning rain durations and of the degree of water saturation of green areas, which will be investigated further. The influence of climate effects and design rain on the dimensioning of the stormwater facilities grassed ditches and biofilters (bioretention systems) has been studied, focusing on flow detention capacity. We have investigated how the calculated runoff coefficients, accounting for climate effects and the influence of an increased return time, affect the inflow to and dimensioning of the stormwater facilities. We have developed a smart design of ditches and biofilters that achieves both high treatment and high flow detention effects and compared these with the effects of dry and wet ponds. Studies of biofilters have generally focused on the treatment of pollutants, whereas their effect on flow volume and how their flow detention capability can be improved is only rarely studied. For both the new type of stormwater ditches and the biofilters, their performance under larger design rains and a future climate must be simulated in a model, as these conditions cannot be tested in the field. The stormwater model StormTac Web has been used on case studies. The results showed that the new smart design of ditches and biofilters had a flow detention capacity similar to that of dry and wet ponds for the same facility area.Keywords: runoff coefficients, flow detention, smart design, biofilter, ditch
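For readers unfamiliar with the Rational Method the study builds on, a minimal sketch of the design-flow calculation Q = C·i·A is given below, with the runoff coefficient written as a simple assumed function of rain intensity. The coefficient equation, the climate factor, and the numbers are placeholders, not the relationships fitted by the authors.

```python
# Rational Method design flow: Q = C * i * A
# Units: i in l/s/ha, A in ha, Q in l/s. The runoff-coefficient function below is
# an assumed placeholder, not the relationship fitted in the study.

def runoff_coefficient(intensity_l_s_ha: float, c_base: float = 0.60,
                       slope: float = 0.0005, c_max: float = 0.95) -> float:
    """Illustrative runoff coefficient increasing mildly with rain intensity."""
    return min(c_max, c_base + slope * intensity_l_s_ha)

def design_flow(intensity_l_s_ha: float, area_ha: float,
                climate_factor: float = 1.25) -> float:
    """Design flow in l/s, with a climate factor applied to the rain intensity."""
    i = intensity_l_s_ha * climate_factor
    c = runoff_coefficient(i)
    return c * i * area_ha

# Example: a 2 ha catchment with a 150 l/s/ha design rain intensity.
print(f"Q = {design_flow(intensity_l_s_ha=150.0, area_ha=2.0):.0f} l/s")
```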
Procedia PDF Downloads 85668 Stochastic Modelling for Mixed Mode Fatigue Delamination Growth of Wind Turbine Composite Blades
Authors: Chi Zhang, Hua-Peng Chen
Abstract:
With increasingly strained resources in the world, renewable and clean energy has been considered as an alternative to traditional sources. One practical example of using wind energy is the wind turbine, which has gained increasing attention in recent research. Like most offshore structures, the blades, which are the most critical components of the wind turbine, are subjected to millions of loading cycles during their service life. To operate safely in marine environments, the blades are typically made from fibre-reinforced composite materials to resist fatigue delamination and the harsh environment. The fatigue crack development of blades is uncertain because of the indeterminate mechanical properties of composites and the uncertainties of the offshore environment, such as wave loads, wind loads, and humidity. There are three main delamination failure modes for composite blades, and the most common failure type in practice is mixed mode loading, typically a combination of opening (mode 1) and shear (mode 2). However, the fatigue crack development for mixed mode cannot be predicted deterministically because of the various uncertainties of realistic operating conditions. Therefore, selecting an effective stochastic model to evaluate the mixed mode behaviour of wind turbine blades is a critical issue. In previous studies, the gamma process has been considered an appropriate stochastic approach, as it simulates a deterioration process that proceeds in one direction, matching the realistic situation of fatigue damage in wind turbine blades. On the basis of existing studies, various Paris law equations are discussed to simulate the propagation of fatigue crack growth. This paper develops a Paris model combined with stochastic deterioration modelling based on the gamma process for predicting fatigue crack performance over the design service life. A numerical example of wind turbine composite materials is investigated to predict the mixed mode crack depth by the Paris law and the probability of fatigue failure by the gamma process. The probability of failure curves under different situations are obtained from the stochastic deterioration model for comparison. Compared with experimental results, the gamma process can take the uncertainties into consideration for crack propagation under mixed mode loading, and the stochastic deterioration process agrees well with the realistic crack process for composite blades. Finally, based on the results predicted by the gamma stochastic model, assessment strategies for composite blades are developed to reduce total lifecycle costs and increase resistance to fatigue crack growth.Keywords: reinforced fibre composite, wind turbine blades, fatigue delamination, mixed failure mode, stochastic process
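The following sketch illustrates, under assumed constants, how Paris-law crack growth can be combined with gamma-distributed increments to estimate a probability of failure. It is a generic demonstration of the approach; the Paris constants, stress range, gamma-process scatter, and design life are invented and are not the paper's calibrated values.

```python
# Illustrative stochastic crack-growth model: Paris law with gamma-process scatter.
# All constants are assumed for demonstration, not the paper's calibrated values.
import math
import random

C, m = 1e-12, 3.0            # assumed Paris-law constants, da/dN = C * (dK)^m
Y = 1.12                     # geometry factor
d_sigma = 80.0               # assumed effective mixed-mode stress range (MPa)
a0, a_crit = 0.5e-3, 10e-3   # initial and critical crack depth (m)
block = 1.0e6                # cycles per simulation block
k_block = 1.0                # gamma shape per block (controls deterioration scatter)

def cycles_to_failure(max_blocks=200):
    a = a0
    for b in range(1, max_blocks + 1):
        dK = Y * d_sigma * math.sqrt(math.pi * a)             # stress-intensity range
        da_mean = C * dK ** m * block                         # mean Paris increment per block
        a += random.gammavariate(k_block, da_mean / k_block)  # gamma-distributed increment
        if a >= a_crit:
            return b * block
    return float("inf")

lives = [cycles_to_failure() for _ in range(5000)]
design_life = 1.5e7                                           # assumed design life in cycles
pof = sum(life <= design_life for life in lives) / len(lives)
print(f"P(fatigue failure within {design_life:.1e} cycles) = {pof:.2f}")
```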
Procedia PDF Downloads 412667 A Study of Kinematical Parameters in Instep Kicking in Soccer
Authors: Abdolrasoul Daneshjoo
Abstract:
Introduction: Soccer is a game that draws great attention in many countries, especially Brazil. Among the different skills in soccer, kicking plays an essential role in the success of a team. Points are gained by sending the ball over the goal line, achieved through shooting skill during attacks or during penalty kicks. Accordingly, identifying the factors that affect instep kicking at different distances, whether shooting with maximum force and high accuracy, passing, or taking penalty kicks, may assist coaches and players in raising the quality of skill performance. Purpose: The aim of the present study was to examine several kinematical parameters of the instep kick from distances of 3 and 5 meters among male and female elite soccer players. Methods: 24 subjects with a right dominant lower limb (12 males and 12 females) from among Tehran elite soccer players participated in this study, with means and standard deviations of age (22.5 ± 1.5) and (22.08 ± 1.31) years, height (179.5 ± 5.81) and (164.3 ± 4.09) cm, weight (69.66 ± 4.09) and (53.16 ± 3.51) kg, %BMI (21.06 ± 0.731) and (19.67 ± 0.709), and playing history of (4 ± 0.73) and (3.08 ± 0.66) years, respectively. They had at least two years of continuous playing experience in the Tehran soccer league. To capture the players' kicks, a Kinemetrix motion analysis system with three cameras operating at 500 Hz was used. Five reflective markers were placed laterally on the kicking leg over anatomical points (the iliac crest, greater trochanter, lateral epicondyle of the femur, lateral malleolus, and lateral aspect of the distal head of the fifth metatarsus). The instep kick was filmed with a one-step approach at an angle of 30 to 45 degrees from the stationary ball. Three kicks were filmed, and one kick was selected for further analysis. Using Kinemetrix 3D motion analysis software, the positions of the markers were analyzed. Descriptive statistics were used to describe the means and standard deviations, while analysis of variance and independent t-tests (P < 0.05) were used to compare the kinematic parameters between the two genders. Results and Discussion: Among the evaluated parameters, the knee acceleration, the thigh angular velocity, and the knee angle showed significant relationships with the outcome of the kick. When comparing performance at 5 m between the two genders, significant differences were observed in the internal-external displacement of the toe, ankle, and hip, the velocity of the toe and ankle, the acceleration of the toe, and the angular velocity of the pelvis and thigh before ball contact. Significant differences were also found in the internal-external displacement of the toe, the ankle, the knee, the hip, and the iliac crest, the velocity of the toe and the ankle, the acceleration of the ankle, and the angular velocity of the pelvis and the knee.Keywords: biomechanics, kinematics, soccer, instep kick, male, female
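As an illustration of how such joint kinematics are derived from marker trajectories, the sketch below computes a knee angle and its angular velocity from invented 2D hip, knee, and ankle coordinates sampled at 500 Hz; the real analysis used the 3D Kinemetrix pipeline, so this is only a generic example of the method.

```python
# Minimal sketch: knee angle and angular velocity from marker positions (2D, illustrative).
# Coordinates are invented; a real analysis would use the full 3D marker trajectories.
import math

def knee_angle(hip, knee, ankle):
    """Included angle (degrees) at the knee between thigh and shank vectors."""
    thigh = (hip[0] - knee[0], hip[1] - knee[1])
    shank = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = thigh[0] * shank[0] + thigh[1] * shank[1]
    norm = math.hypot(*thigh) * math.hypot(*shank)
    return math.degrees(math.acos(dot / norm))

fs = 500.0  # sampling frequency (Hz), matching the camera rate described
# Three consecutive frames of (x, y) positions in metres (illustrative values).
hip_xy   = [(0.00, 0.90), (0.01, 0.90), (0.02, 0.90)]
knee_xy  = [(0.05, 0.50), (0.07, 0.51), (0.09, 0.52)]
ankle_xy = [(0.02, 0.10), (0.08, 0.11), (0.15, 0.13)]

angles = [knee_angle(h, k, a) for h, k, a in zip(hip_xy, knee_xy, ankle_xy)]
# Central-difference angular velocity (deg/s) at the middle frame.
omega = (angles[2] - angles[0]) * fs / 2.0
print(f"knee angles: {[round(a, 1) for a in angles]}  angular velocity ~ {omega:.0f} deg/s")
```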
Procedia PDF Downloads 414666 Endometrial Ablation and Resection Versus Hysterectomy for Heavy Menstrual Bleeding: A Systematic Review and Meta-Analysis of Effectiveness and Complications
Authors: Iliana Georganta, Clare Deehan, Marysia Thomson, Miriam McDonald, Kerrie McNulty, Anna Strachan, Elizabeth Anderson, Alyaa Mostafa
Abstract:
Context: A meta-analysis of randomized controlled trials (RCTs) comparing hysterectomy with endometrial ablation and resection in the management of heavy menstrual bleeding. Objective: To evaluate the clinical efficacy, satisfaction rates and adverse events of hysterectomy compared to more minimally invasive techniques in the treatment of HMB. Evidence Acquisition: A literature search was performed for all RCTs and quasi-RCTs comparing hysterectomy with endometrial ablation, endometrial resection, or both. The search had no language restrictions and was last updated in June 2020 using MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, PubMed, Google Scholar, PsycINFO, ClinicalTrials.gov and the EU Clinical Trials Register. In addition, a manual search of the abstract databases of the European Haemophilia Conference on women's health was performed, and further studies were identified from the references of acquired papers. The primary outcomes were patient-reported and objective reduction in heavy menstrual bleeding up to 2 years and beyond 2 years. Secondary outcomes included satisfaction rates, pain, short- and long-term adverse events, quality of life and sexual function, further surgery, duration of surgery and hospital stay, and time to return to work and normal activities. Data were analysed using RevMan software. Evidence synthesis: 12 studies and a total of 2028 women were included (hysterectomy: n = 977 women vs endometrial ablation or resection: n = 1051 women). Hysterectomy was compared with endometrial ablation only in five studies (Lin, Dickersin, Sesti, Jain, Cooper), with endometrial resection only in five studies (Gannon, Schulpher, O’Connor, Crosignani, Zupi), and with a mixture of ablation and resection in two studies (Elmantwe, Pinion). Of the 12 studies, 10 reported women’s perception of bleeding symptoms as improved. Meta-analysis showed that women in the hysterectomy group were more likely to show improvement in bleeding symptoms when compared with endometrial ablation or resection up to 2-year follow-up (RR 0.75, 95% CI 0.71 to 0.79, I² = 95%). Objective outcomes of improvement in bleeding also favoured hysterectomy. Patient satisfaction was higher after hysterectomy within the 2-year follow-up (RR: 0.90, 95% CI: 0.86 to 0.94, I²: 58%); however, there was no significant difference between the two groups at more than 2 years of follow-up. Sepsis (RR: 0.03, 95% CI 0.002 to 0.56; 1 study), wound infection (RR: 0.05, 95% CI: 0.01 to 0.28, I²: 0%, 3 studies) and urinary tract infection (UTI) (RR: 0.20, 95% CI: 0.10 to 0.42, I²: 0%, 4 studies) all favoured hysteroscopic techniques. Fluid overload (RR: 7.80, 95% CI: 2.16 to 28.16, I²: 0%, 4 studies) and perforation (RR: 5.42, 95% CI: 1.25 to 23.45, I²: 0%, 4 studies), however, favoured hysterectomy in the short term. Conclusions: This meta-analysis has demonstrated that endometrial ablation and endometrial resection are both viable options when compared with hysterectomy for the treatment of heavy menstrual bleeding. Hysteroscopic procedures had better outcomes in the short term, with fewer adverse events including wound infection, UTI and sepsis. Hysterectomy performed better on longer-term outcomes such as recurrence of symptoms, overall satisfaction at two years, and the need for further treatment or surgery.Keywords: menorrhagia, hysterectomy, ablation, resection
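The pooling arithmetic behind figures such as RR 0.75 (95% CI 0.71 to 0.79) can be sketched with a fixed-effect inverse-variance model on the log scale, as below. The per-study event counts are invented, and the actual analysis was performed in RevMan, which may use different weighting choices.

```python
# Fixed-effect inverse-variance pooling of risk ratios on the log scale (illustrative data).
import math

# (events_hysterectomy, n_hysterectomy, events_ablation_or_resection, n_ablation_or_resection)
studies = [(80, 100, 70, 105), (150, 170, 160, 200), (45, 60, 50, 80)]  # invented counts

log_rr, weights = [], []
for e1, n1, e2, n2 in studies:
    rr = (e1 / n1) / (e2 / n2)
    var = 1/e1 - 1/n1 + 1/e2 - 1/n2           # variance of log RR
    log_rr.append(math.log(rr))
    weights.append(1 / var)

pooled = sum(w * lr for w, lr in zip(weights, log_rr)) / sum(weights)
se = math.sqrt(1 / sum(weights))
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

q = sum(w * (lr - pooled) ** 2 for w, lr in zip(weights, log_rr))   # Cochran's Q
i2 = max(0.0, (q - (len(studies) - 1)) / q) * 100 if q > 0 else 0.0
print(f"pooled RR = {math.exp(pooled):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I^2 = {i2:.0f}%")
```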
Procedia PDF Downloads 154665 Saline Aspiration Negative Intravascular Test: Mitigating Risk with Injectable Fillers
Authors: Marcelo Lopes Dias Kolling, Felipe Ferreira Laranjeira, Guilherme Augusto Hettwer, Pedro Salomão Piccinini, Marwan Masri, Carlos Oscar Uebel
Abstract:
Introduction: Injectable fillers are among the most common nonsurgical cosmetic procedures, with significant growth yearly. Knowledge of the rheological and mechanical characteristics of fillers, of facial anatomy, and of injection technique is essential for safety. Concepts such as the use of cannula versus needle, aspiration before injection, and facial danger zones have been well discussed. In the case of an accidental intravascular puncture, the pressure inside the vessel may not be sufficient to push blood into the syringe due to the characteristics of the filler product; this is especially true for calcium hydroxyapatite (CaHA) or hyaluronic acid (HA) fillers with high G’. Since the viscoelastic properties of normal saline are much lower than those of fillers, aspiration with saline prior to filler injection may decrease the risk of a false negative aspiration and its subsequent catastrophic effects. We discuss a technique to add an additional safety step to the procedure with saline aspiration prior to injection, a ‘reverse Seldinger’ technique for intravascular access, which we term SANIT: Saline Aspiration Negative Intravascular Test. Objectives: To demonstrate the author’s (PSP) technique, which adds an additional safety step to the process of filler injection with both CaHA and HA in order to decrease the risk of intravascular injection. Materials and Methods: Normal skin cleansing and topical anesthesia with prilocaine/lidocaine cream are performed, and the facial subunits to be treated are marked. A 3 mL Luer lock syringe is filled with 2 mL of 0.9% normal saline and fitted with a 27G needle, which is turned one half rotation. When a cannula is to be used, the Luer lock syringe is attached to a 27G 4 cm single-hole disposable cannula. After skin puncture, the 3 mL syringe is advanced with the plunger pulled back (negative pressure). Progress is made to the desired depth, all the while aspirating. Once the desired location of filler injection is reached, the syringe is exchanged for the syringe containing the filler, securely grabbing the hub of the needle and taking care not to dislodge the needle tip. Prior to this, we remove 0.1 mL of filler to allow space inside the syringe for aspiration. We again aspirate and inject in a retrograde fashion. SANIT is especially useful for CaHA, since its G’ is much higher than that of HA, and thus reflux of blood into the syringe is less likely to occur. Results: The technique has been used safely for the past two years with no adverse events; the increase in cost is negligible (only the cost of 2 mL of normal saline). Over 100 patients (over 300 syringes) have been treated with this technique. The risk of accidental intravascular puncture has been calculated to be between 1:6410 and 1:40882 syringes among expert injectors; however, the consequences of intravascular injection can be catastrophic even with board-certified physicians. Conclusions: While the risk of intravascular filler injection is low, the consequences can be disastrous. We believe that adding the SANIT technique can help further mitigate risk with no significant untoward effects, and it could be considered by all practitioners performing injectable fillers. Further follow-up is ongoing.Keywords: injectable fillers, safety, saline aspiration, injectable filler complications, hyaluronic acid, calcium hydroxyapatite
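To put the quoted per-syringe risk in perspective, the short calculation below estimates the chance of at least one accidental intravascular puncture over an assumed career volume of syringes; the career volume of 5,000 syringes and the independence assumption are illustrative only.

```python
# Cumulative probability of at least one accidental intravascular puncture,
# assuming independent per-syringe risks spanning the quoted literature range.
n_syringes = 5000                        # assumed career volume (illustrative)
for per_syringe_risk in (1 / 6410, 1 / 40882):
    p_at_least_one = 1 - (1 - per_syringe_risk) ** n_syringes
    print(f"risk {per_syringe_risk:.6f}/syringe -> P(>=1 event in {n_syringes}) = {p_at_least_one:.1%}")
```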
Procedia PDF Downloads 149664 Increasing Prevalence of Multi-Allergen Sensitivities in Patients with Allergic Rhinitis and Asthma in Eastern India
Authors: Sujoy Khan
Abstract:
There is rising concern about increasing allergies affecting both adults and children in rural and urban India. A recent report on adults in a densely populated North Indian city showed sensitization rates for house dust mite, parthenium, and cockroach of 60%, 40% and 18.75%, which is now comparable to allergy prevalence in cities in the United States. Data from patients residing in the eastern part of India are scarce. A retrospective study (over 2 years) was done on patients with allergic rhinitis and asthma in whom allergen-specific IgE levels were measured, to determine the aeroallergen sensitization pattern in a large metropolitan city of East India. Total IgE and allergen-specific IgE levels were measured using ImmunoCAP (Phadia 100, Thermo Fisher Scientific, Sweden) with region-specific aeroallergens: Dermatophagoides pteronyssinus (d1); Dermatophagoides farinae (d2); cockroach (i206); grass pollen mix (gx2) consisting of Cynodon dactylon, Lolium perenne, Phleum pratense, Poa pratensis, Sorghum halepense, Paspalum notatum; tree pollen mix (tx3) consisting of Juniperus sabinoides, Quercus alba, Ulmus americana, Populus deltoides, Prosopis juliflora; food mix 1 (fx1) consisting of peanut, hazelnut, Brazil nut, almond, coconut; mould mix (mx1) consisting of Penicillium chrysogenum, Cladosporium herbarum, Aspergillus fumigatus, Alternaria alternata; animal dander mix (ex1) consisting of cat, dog, cow and horse dander; and weed mix (wx1) consisting of Ambrosia elatior, Artemisia vulgaris, Plantago lanceolata, Chenopodium album, Salsola kali, following the manufacturer’s instructions. As the IgE levels were not uniformly distributed, median values were used to represent the data. 92 patients with allergic rhinitis and asthma (united airways disease), including 21 children (age < 12 years), were studied over 2 years with total IgE and allergen-specific IgE levels measured. The median IgE level was higher in 2016 than in 2015, with 60% of patients (adults and children) being sensitized to house dust mite (dual positivity for Dermatophagoides pteronyssinus and farinae). Of 11 children in 2015, whose total IgE ranged from 16.5 to >5000 kU/L, 36% were polysensitized (≥4 allergens) and 55% were sensitized to dust mites. Of 10 children in 2016, whose total IgE levels ranged from 37.5 to 2628 kU/L, 20% were polysensitized and 60% were sensitized to dust mites. Mould sensitivity was 10% among the children studied in both years. A consistent finding was that ragweed sensitization (molecular homology to Parthenium hysterophorus) appeared to be increasing across all age groups and throughout the year, as reported previously by us, where 25% of patients were sensitized. In the study sample overall, sensitizations to dust mite, cockroach, and parthenium were important risks in our patients with moderate to severe asthma, which reinforces the importance of controlling indoor exposure to these allergens. Sensitizations to dust mite, cockroach and parthenium allergens are important predictors of asthma morbidity not only among children but also among adults in Eastern India.Keywords: aeroallergens, asthma, dust mite, parthenium, rhinitis
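A small sketch of the summary statistics described (median total IgE for a skewed distribution and per-allergen sensitization rates) is shown below. The IgE values and the 0.35 kUA/L positivity cut-off are assumptions for illustration, not patient data from this study.

```python
# Summary statistics of the kind described: median total IgE and sensitization rates.
# Values and the 0.35 kUA/L positivity cut-off are illustrative assumptions.
from statistics import median

total_ige = [16.5, 120, 450, 37.5, 980, 2628, 75, 310, 5000, 64]   # kU/L, invented
specific_ige = {                                                    # kUA/L, invented
    "d1_dust_mite": [0.1, 5.2, 12.0, 0.7, 3.3, 0.2, 8.4, 0.05, 1.1, 0.3],
    "wx1_weed_mix": [0.02, 0.9, 0.4, 2.1, 0.1, 0.6, 0.03, 4.8, 0.2, 0.1],
}

print(f"median total IgE = {median(total_ige):.1f} kU/L")
for allergen, values in specific_ige.items():
    rate = sum(v >= 0.35 for v in values) / len(values)
    print(f"{allergen}: sensitization rate = {rate:.0%}")
```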
Procedia PDF Downloads 198663 Solid State Drive End to End Reliability Prediction, Characterization and Control
Authors: Mohd Azman Abdul Latif, Erwan Basiron
Abstract:
A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders is embarking on reliability prediction from the beginning of new product development, identifying critical-to-reliability parameters, performing full-blown characterization to embed margin into product reliability, and establishing controls to ensure that product reliability is sustainable in mass production. The paper will discuss a comprehensive development framework, covering the SSD end to end from design to assembly, in-line inspection and in-line testing, that will be able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD will go through intense reliability margin investigation with a focus on assembly process attributes, process equipment control and in-process metrology, while also taking the forward-looking product roadmap into account. Once these pillars are completed, the next step is to perform process characterization and build up a reliability prediction model. Next, for the design validation process, the reliability prediction tool, specifically a solder joint simulator, will be established. The SSDs will be stratified into non-operating and operating tests with a focus on solder joint reliability and connectivity/component latent failures, using prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs will be subjected to physical solder joint analysis, namely Dye and Pry (DP), and to cross-section analysis. The results will be fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven working, it will be subjected to the monitor phase, whereby Design for Assembly (DFA) rules will be updated. At this stage, the design change and the process and equipment parameters are in control. Predictable product reliability at early product development will enable on-time sample qualification delivery to the customer, will optimize product development validation and the effective use of development resources, and will avoid forced late investment to bandage end-of-life product failures. Understanding the critical-to-reliability parameters earlier will allow focus on increasing the product margin, which will increase customer confidence in product reliability.Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control
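The abstract does not state which solder-joint life model underlies the TCT-based prediction, so the sketch below uses a commonly cited Norris-Landzberg (modified Coffin-Manson) acceleration factor purely as a generic illustration. The exponents, activation term, and cycling conditions are textbook assumptions, not the team's model.

```python
# Generic Norris-Landzberg (modified Coffin-Manson) acceleration factor for a
# Temperature Cycle Test (TCT). Constants below are commonly quoted textbook values
# for SnPb solder and are assumptions, not the development team's calibrated model.
import math

def norris_landzberg_af(dt_test, dt_field, f_test, f_field, tmax_test_c, tmax_field_c,
                        n=1.9, m=1/3, ea_over_k=1414.0):
    """Acceleration factor AF = N_field / N_test."""
    thermal = (dt_test / dt_field) ** n
    freq = (f_field / f_test) ** m
    arrhenius = math.exp(ea_over_k * (1 / (tmax_field_c + 273.15) - 1 / (tmax_test_c + 273.15)))
    return thermal * freq * arrhenius

# Assumed conditions: -40..+85 C test at 24 cycles/day vs 20..55 C field at 1 cycle/day.
af = norris_landzberg_af(dt_test=125, dt_field=35, f_test=24, f_field=1,
                         tmax_test_c=85, tmax_field_c=55)
cycles_tested = 1000
print(f"AF ~ {af:.1f}; {cycles_tested} TCT cycles ~ {cycles_tested * af:.0f} field cycles")
```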
Procedia PDF Downloads 170662 Investigation of Mass Transfer for RPB Distillation at High Pressure
Authors: Amiza Surmi, Azmi Shariff, Sow Mun Serene Lock
Abstract:
In recent decades, there has been a significant emphasis on the pivotal role of Rotating Packed Beds (RPBs) in absorption processes, encompassing the removal of Volatile Organic Compounds (VOCs) from groundwater, deaeration, CO2 absorption, desulfurization, and similar critical applications. The primary focus is elevating mass transfer rates, enhancing separation efficiency, curbing power consumption, and mitigating pressure drops. Additionally, substantial efforts have been invested in exploring the adaptation of RPB technology for offshore deployment. This comprehensive study delves into the intricacies of nitrogen removal under low-temperature and high-pressure conditions, employing the high-gravity principle via an innovative RPB distillation concept, with a specific emphasis on optimizing mass transfer. To the authors' knowledge, no cryogenic experimental testing of nitrogen removal via RPB has previously been conducted. The research identifies pivotal process control factors through meticulous experimental testing, with pressure, reflux ratio, and reboil ratio emerging as critical determinants in achieving the desired separation performance. The results are remarkable, with nitrogen removal reaching less than one mole% in the Liquefied Natural Gas (LNG) product and less than three mole% methane in the nitrogen-rich gas stream. The study further examines the mass transfer coefficient, revealing a noteworthy trend of decreasing Number of Transfer Units (NTU) and Area of Transfer Units (ATU) as the rotational speed escalates. Notably, the condenser and reboiler impose varying demands depending on the operating pressure, with the lower pressure of 12 bar requiring a more substantial duty than 15-bar operation of the RPB. In pursuit of optimal energy efficiency, a meticulous sensitivity analysis is conducted, pinpointing the combination of pressure and rotating speed that minimizes overall energy consumption. These findings underscore the efficiency of the RPB distillation approach in effecting efficient separation, even when operating under the challenging conditions of low temperature and high pressure. This achievement is attributed to a rigorous process control framework that diligently manages the operational pressure and temperature profile of the RPB. Nonetheless, the study's conclusions point towards the need for further research to address potential scaling challenges and associated risks, paving the way for the industrial implementation of this transformative technology.Keywords: mass transfer coefficient, nitrogen removal, liquefaction, rotating packed bed
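The NTU values referred to can be illustrated with the standard log-mean driving-force bookkeeping. The sketch below assumes straight operating and equilibrium lines and invented terminal compositions and packing depth, so it shows only the form of the calculation rather than the study's measured data.

```python
# Number of transfer units (NTU) from terminal compositions using a log-mean
# driving force, assuming straight operating and equilibrium lines. Compositions
# and the packing depth are invented for illustration.
import math

def ntu_log_mean(y_in, y_out, y_star_in, y_star_out):
    d1 = y_in - y_star_in            # driving force at the inlet end
    d2 = y_out - y_star_out          # driving force at the outlet end
    dlm = (d1 - d2) / math.log(d1 / d2)
    return (y_in - y_out) / dlm

# Assumed nitrogen mole fractions in the vapour and at equilibrium with the liquid.
ntu = ntu_log_mean(y_in=0.30, y_out=0.03, y_star_in=0.22, y_star_out=0.005)
packing_depth = 0.05                 # assumed radial packing depth (m)
print(f"NTU ~ {ntu:.2f}, transfer-unit 'height' ~ {packing_depth / ntu * 1000:.1f} mm")
```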
Procedia PDF Downloads 49661 Psychological Functioning of Youth Experiencing Community and Collective Violence in Post-conflict Northern Ireland
Authors: Teresa Rushe, Nicole Devlin, Tara O Neill
Abstract:
In this study, we sought to examine associations between childhood experiences of community and collective violence and psychological functioning in young people who grew up in post-conflict Northern Ireland. We hypothesized that those who grew up with such experiences would demonstrate internalizing and externalizing difficulties in early adulthood and, furthermore, that these difficulties would be mediated by adverse childhood experiences occurring within the home environment. As part of the Northern Ireland Childhood Adversity Study, we recruited 213 young people aged 18-25 years (108 males) who grew up in the post-conflict society of Northern Ireland using purposive sampling. Participants completed a digital questionnaire to measure adverse childhood experiences as well as aspects of psychological functioning. We employed the Adverse Childhood Experiences International Questionnaire (ACE-IQ) adaptation of the original Adverse Childhood Experiences Questionnaire (ACE), as it additionally measured aspects of witnessing community violence (e.g., seeing someone being beaten or killed, fights) and experiences of collective violence (e.g., exposure to war, terrorism, or battles involving police or gangs) during the first 18 years of life. 51% of our sample reported experiences of community and/or collective violence (N=108). Compared to young people with no such experiences (N=105), they also reported significantly more adverse experiences indicative of household dysfunction (e.g., family substance misuse, mental illness or domestic violence in the family, incarceration of a family member) but not more experiences of abuse or neglect. As expected, young people who grew up with community and/or collective violence reported significantly higher anxiety and depression scores and were more likely to engage in acts of deliberate self-harm (internalizing symptoms). They also started drinking and taking drugs at a younger age and were significantly more likely to have been in trouble with the police (externalizing symptoms). When the type of violence exposure was separated by whether the violence was witnessed (community violence) or more directly experienced (collective violence), we found community and collective violence to have similar effects on externalizing symptoms, but for internalizing symptoms, we found evidence of a differential effect. Collective violence was associated with depressive symptoms, whereas witnessing community violence was associated with anxiety-type symptoms and deliberate self-harm. However, when experiences of household dysfunction were entered into the models predicting anxiety, depression, and deliberate self-harm, none of the main effects remained significant. This suggests that internalizing-type symptoms are mediated by immediate family-level experiences. By contrast, significant community and collective violence effects on externalizing behaviours (younger initiation of alcohol use, younger initiation of drug use, and getting into trouble with the police) persisted after controlling for family-level factors and thus are directly associated with growing up with community and collective violence. Given the cross-sectional nature of our study, we cannot comment on the direction of the effect. However, post-hoc correlational analyses revealed associations between externalizing behaviours and personal factors, including greater risk-taking and a younger age at puberty. The implications of the findings will be discussed in relation to interventions for young people and families living with community and collective violence.Keywords: community and collective violence, adverse childhood experiences, youth, psychological wellbeing
Procedia PDF Downloads 83660 A Study on the Chemical Composition of Kolkheti's Sphagnum Peat Peloids to Evaluate the Perspective of Use in Medical Practice
Authors: Al. Tsertsvadze, L. Ebralidze, I. Matchutadze, D. Berashvili, A. Bakuridze
Abstract:
Peatlands are landscape elements formed over very long periods by physical, chemical, biological, and geological processes. In the temperate zone of the Caucasus, the Kolkheti lowlands are distinguished by a diversity of relict plants, a high degree of endemism, and orographic, climatic, landscape, and other characteristics that support high levels of biodiversity. The unique properties of the Kolkheti region lead to the formation of special, so-called endemic peat peloids. The composition and properties of peloids strongly depend on the peat-forming plants. Peat is considered a unique complex of raw materials, which can be used in different fields of industry: agriculture, metallurgy, energy, biotechnology, the chemical industry, and health care. Peats are formed in permanent wetland areas, where the remains of higher plants decay in the anaerobic zone with the participation of microorganisms. The peat mass absorbs soil and groundwater. Peloids are predominantly rich in humic substances, which are characterized by high biological activity. Humic acids stimulate enzymatic activity and regenerative processes and have anti-inflammatory activity. The objects of the research were Kolkheti peat peloids (Ispani, Anaklia, Churia, Chirukhi, Peranga) at different formation phases. Given the specific physical and chemical properties of the research objects, the aim of the research was to develop analytical methods for studying their chemical composition. The research was carried out using modern instrumental methods of analysis: ultraviolet-visible and infrared spectroscopy, scanning electron microscopy, centrifugation, a drying oven, an Ultra-Turrax homogenizer, a pH meter, fluorescence spectrometry, gas chromatography-mass spectrometry (GC-MS/MS), and gas chromatography. Based on the research, the ratio between organic and inorganic substances, the spectrum of micro- and macro-elements, and the mineral content were determined. The content of organic nitrogen was determined using the Kjeldahl method. The total amino acid composition was studied by a spectrophotometric method using standard solutions of glutamic and aspartic acids. Fatty acids were determined using gas chromatography (GC). Based on the results obtained, we can conclude that the method is valid for identifying fatty acids in the research objects. The content of organic substances in the research objects was determined using GC-MS. Using modern instrumental methods of analysis, the chemical composition of the research objects was studied. Each research object is predominantly rich in a broad spectrum of organic (fatty acids, amino acids, carbocyclic and heterocyclic compounds, organic acids and their esters, steroids) and inorganic (micro- and macro-elements, minerals) substances. The modified methods used in the presented research may be utilized for the evaluation of cosmetological, balneological, and pharmaceutical preparations based on Kolkheti's sphagnum peat peloids.Keywords: modern analytical methods, natural resources, peat, chemistry
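The Kjeldahl determination of organic nitrogen mentioned above reduces to a simple titration calculation; the sketch below shows that arithmetic with invented titration volumes, acid normality, and sample mass.

```python
# Kjeldahl organic-nitrogen calculation (titration arithmetic only; values invented).
def kjeldahl_nitrogen_percent(v_sample_ml, v_blank_ml, acid_normality, sample_mass_g):
    """%N = (V_sample - V_blank) * N_acid * 1.4007 / sample mass (g)."""
    return (v_sample_ml - v_blank_ml) * acid_normality * 1.4007 / sample_mass_g

n_percent = kjeldahl_nitrogen_percent(v_sample_ml=8.4, v_blank_ml=0.3,
                                      acid_normality=0.1, sample_mass_g=0.500)
print(f"organic nitrogen ~ {n_percent:.2f} %")
```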
Procedia PDF Downloads 125659 Neuroprotection against N-Methyl-D-Aspartate-Induced Optic Nerve and Retinal Degeneration Changes by Philanthotoxin-343 to Alleviate Visual Impairments Involve Reduced Nitrosative Stress
Authors: Izuddin Fahmy Abu, Mohamad Haiqal Nizar Mohamad, Muhammad Fattah Fazel, Renu Agarwal, Igor Iezhitsa, Nor Salmah Bakar, Henrik Franzyk, Ian Mellor
Abstract:
Glaucoma is the leading global cause of irreversible blindness. Currently, the available treatment strategy only involves lowering intraocular pressure (IOP); however, the condition often progresses despite lowered or normal IOP in some patients. N-methyl-D-aspartate receptor (NMDAR) excitotoxicity often occurs in neurodegeneration-related glaucoma; thus, it is a relevant target for developing a therapy based on a neuroprotection approach. This study investigated the effects of Philanthotoxin-343 (PhTX-343), an NMDAR antagonist, on neuroprotection in NMDA-induced glaucoma to alleviate visual impairments. Male Sprague-Dawley rats were equally divided: groups 1 (control) and 2 (glaucoma) were intravitreally injected with phosphate-buffered saline (PBS) and NMDA (160 nM), respectively, while group 3 was pre-treated with PhTX-343 (160 nM) 24 hours prior to NMDA injection. Seven days post-treatment, the rats were subjected to visual behaviour assessments and subsequently euthanized to harvest their retina and optic nerve tissues for histological analysis and for determination of nitrosative stress levels using a 3-nitrotyrosine ELISA. Visual behaviour assessments via open field, object, and colour recognition tests demonstrated poor visual performance in glaucoma rats, indicated by high exploratory behaviour. PhTX-343 pre-treatment appeared to preserve visual abilities, as all test results were significantly improved (p < 0.05). H&E staining of the retina showed a marked reduction of ganglion cell layer thickness in the glaucoma group; in contrast, PhTX-343 significantly increased it by 1.28-fold (p < 0.05). PhTX-343 also increased the number of cell nuclei/100 μm² within the inner retina by 1.82-fold compared to the glaucoma group (p < 0.05). Toluidine blue staining of optic nerve tissues showed that PhTX-343 reduced degeneration changes compared to the glaucoma group, which exhibited vacuolation throughout the sections. PhTX-343 also decreased the retinal 3-nitrotyrosine concentration by 1.74-fold compared to the glaucoma group (p < 0.05). All results in the PhTX-343 group were comparable to control (p > 0.05). We conclude that PhTX-343 protects against NMDA-induced changes and visual impairments in the rat model by reducing nitrosative stress levels.Keywords: excitotoxicity, glaucoma, nitrosative stress, NMDA receptor, N-methyl-D-aspartate, philanthotoxin, visual behaviour
Procedia PDF Downloads 135