Search results for: reduce intracranial adaptive capacity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10560


720 Poster for Sickle Cell Disease and Barriers to Care in South Yorkshire from 2017 to 2023

Authors: Amardass Dhami, Clare Samuelson

Abstract:

Background: Sickle cell disease (SCD) is a complex, multisystem condition that significantly impacts patients' quality of life, characterized by acute illness episodes, progressive organ damage, and reduced life expectancy. In the UK, over 13,000 individuals are affected, with South Yorkshire having the fifth highest prevalence, including approximately 800 patients. Retinal complications in SCD can manifest as either proliferative or non-proliferative disease, with proliferative changes being more prevalent. These retinal issues can cause significant morbidity, including visual loss and increased care requirements, underscoring the need for regular monitoring. An integrated approach was applied to ensure timely interventions, ultimately enhancing patient outcomes and reducing ‘did not attend’ (DNA) rates. Aim: To assess the factors which may influence attendance at Haematology and Ophthalmology Clinics, with particular attention to the influence of deprivation on non-attendance. Method: A retrospective study of 84 eligible patients from the regional tertiary Centre for Sickle Cell Care (Sheffield Teaching Hospital) from 2017 to 2023. The study focused on the incidence of sickle cell eye disease, specifically examining the outcomes of patients who attended the combined haematology and ophthalmology clinics. Patients who did not attend either clinic were excluded from the analysis to ensure a clear understanding of the combined clinic's impact. These data were then compared with the United Kingdom’s Index of Multiple Deprivation (IMD) datasets to assess whether inequalities of care affected this population. Results: The study found that the effectiveness of the combined haematology and ophthalmology clinics was reduced following the intervention; DNA rates increased to 40% for the haematology clinic.
Additionally, a significant proportion of the cohort was classified as residing in areas of deprivation, suggesting a possible link between socioeconomic factors and non-attendance rates. Conclusion: These findings underscore the challenges of integrating care for SCD patients, particularly in relation to socioeconomic barriers. Despite the intent to streamline care and improve patient outcomes, the increase in DNA rates points to the need for further investigation into the underlying causes of non-attendance. Addressing these issues, especially in deprived areas, could enhance the effectiveness of combined clinics and ensure that patients receive the necessary monitoring and interventions for their eye health and overall well-being. Future strategies may need to focus on improving accessibility, outreach, and support for patients to mitigate the impact of socioeconomic factors on healthcare attendance.

Keywords: south yorkshire, sickle cell anemia, deprivation, factors, haematology

Procedia PDF Downloads 13
719 Composition, Velocity, and Mass of Projectiles Generated from a Chain Shot Event

Authors: Eric Shannon, Mark J. McGuire, John P. Parmigiani

Abstract:

A hazard associated with the use of timber harvesters is chain shot. Harvester saw chain is subjected to large dynamic mechanical stresses, which can cause it to fracture. The resulting open loop of saw chain can fracture a second time and create a projectile consisting of several saw-chain links, referred to as a chain shot. Its high kinetic energy enables it to penetrate operator enclosures, making it a significant hazard. Accurate data on projectile composition, mass, and speed are needed for the design both of operator enclosures resistant to projectile penetration and of saw chain resistant to fracture. The work presented here contributes to providing this data through the use of a test machine designed and built at Oregon State University. The machine’s enclosure is a standard shipping container. To safely contain any anticipated chain shot, the container was lined with both 9.5 mm AR500 steel plates and 50 mm high-density polyethylene (HDPE). During normal operation, projectiles are captured virtually undamaged in the HDPE, enabling subsequent analysis. Standard harvester components are used for bar mounting and chain tensioning, and standard guide bars and saw chains are used. An electric motor with a flywheel drives the system. Testing procedures follow ISO Standard 11837. Chain speed at break was approximately 45.5 m/s. Data was collected using both a 75 cm solid bar (Oregon 752HSFB149) and a 90 cm solid bar (Oregon 902HSFB149). Saw chains used were 89 Drive Link .404”-18HX loops made from factory spools. Standard 16-tooth sprockets were used. Projectile speed was measured using both a high-speed camera and a chronograph, and both rotational and translational kinetic energy were calculated. For this study, 50 chain shot events were executed. Results showed that projectiles consisted of a variety of combinations of drive links, tie straps, and cutter links.
Most common (occurring in 60% of the events) was a drive-link / tie-strap / drive-link combination with a mass of approximately 10.33 g. Projectile mass varied from a minimum of 2.99 g, corresponding to a drive link only, to a maximum of 18.91 g, corresponding to a drive-link / tie-strap / drive-link / cutter-link / drive-link combination. Projectile translational speed was measured to be approximately 270 m/s, with a rotational speed of approximately 14,000 r/s. The calculated translational and rotational kinetic energy magnitudes each average over 600 J. This study provides useful information for both timber harvester manufacturers and saw chain manufacturers to design products that reduce the hazards associated with timber harvesting.
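The reported energies follow from the standard kinetic energy relations. Below is a minimal sketch using the values quoted above for the most common projectile; the moment of inertia of a multi-link projectile is not given in the abstract, so it is left as a free parameter, and the reported `r/s` rotational unit is not converted.

```python
def kinetic_energy(mass_kg, v_mps, inertia_kgm2, omega_radps):
    """Return (translational, rotational) kinetic energy of a projectile:
    KE_trans = 1/2 m v^2, KE_rot = 1/2 I w^2."""
    ke_trans = 0.5 * mass_kg * v_mps ** 2
    ke_rot = 0.5 * inertia_kgm2 * omega_radps ** 2
    return ke_trans, ke_rot

# Reported values for the most common projectile (drive-link/tie-strap/drive-link):
mass = 10.33e-3   # kg
v = 270.0         # m/s, measured translational speed
ke_trans, _ = kinetic_energy(mass, v, 0.0, 0.0)
```

For this particular 10.33 g projectile the translational term alone is roughly 377 J; the >600 J averages quoted above cover all 50 events, including heavier projectiles, plus the rotational term.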

Keywords: chain shot, timber harvesters, safety, testing

Procedia PDF Downloads 146
718 First-Trimester Screening of Preeclampsia in a Routine Care

Authors: Tamar Grdzelishvili, Zaza Sinauridze

Abstract:

Introduction: Preeclampsia is a complication of the second trimester of pregnancy, characterized by high morbidity and multiorgan damage. Many complex pathogenic mechanisms are now implicated as responsible for this disease (1). Preeclampsia is one of the leading causes of maternal mortality worldwide; the statistics are enough to convey the seriousness of this pathology: about 100,000 women die of preeclampsia every year. It occurs in 3-14% of pregnant women (varying significantly with racial origin, ethnicity, and geographical region), in 75% of cases in a mild form and in 25% in a severe form. With severe preeclampsia-eclampsia, perinatal mortality increases 5-fold and stillbirth 9.6-fold. Considering that the only way to treat the disease is to end the pregnancy, timely diagnosis and prevention are essential. Identification of pregnant women at high risk for PE and giving prophylaxis would reduce the incidence of preterm PE. The first-trimester screening model developed by the Fetal Medicine Foundation (FMF), which uses Bayes' theorem to combine maternal characteristics and medical history with measurements of mean arterial pressure, uterine artery pulsatility index, and serum placental growth factor, has been proven to be effective and to have screening performance superior to that of the traditional risk-factor-based approach for the prediction of PE (2). Methods: Retrospective single-center screening study. The study population consisted of women from the Tbilisi maternity hospital “Pineo medical ecosystem” who met the following criteria: they spoke Georgian, English, or Russian and agreed to participate in the study after discussing informed consent and answering questions. Prior to the study, informed consent forms approved by the Institutional Review Board were obtained from the study subjects. Early assessment of preeclampsia was performed between 11-13 weeks of pregnancy.
The following were evaluated: anamnesis, dopplerography of the uterine artery, mean arterial blood pressure, and a biochemical parameter, pregnancy-associated plasma protein A (PAPP-A). Individual risk assessment was performed with the Fast Screen 3.0 software (Thermo Fisher Scientific). Results: A total of 513 women were recruited; over the course of the study, 51 women were diagnosed with preeclampsia (34.5% of the pregnant women at high risk, 6.5% of the pregnant women at low risk; P<0.0001). Conclusions: First-trimester screening combining maternal factors with uterine artery Doppler, blood pressure, and pregnancy-associated plasma protein-A is useful for predicting PE in a routine care setting. Larger patient studies are needed for final conclusions; the research is still ongoing.
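The FMF model referred to above is Bayesian: a prior risk derived from maternal characteristics is updated with evidence from the biophysical and biochemical markers. The sketch below shows only the generic odds-form Bayes update; the actual FMF algorithm (and the Fast Screen software used here) employs a competing-risks survival model, and all numbers are illustrative rather than taken from the study.

```python
def update_risk(prior_risk, likelihood_ratios):
    """Combine a prior risk with independent likelihood ratios via
    Bayes' theorem in odds form: posterior odds = prior odds x LR1 x LR2 ...
    Schematic only; the FMF algorithm itself is a survival-time model."""
    odds = prior_risk / (1.0 - prior_risk)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Illustrative: 1-in-100 background risk, a raised uterine artery PI (LR 3)
# and a low PAPP-A (LR 2) lift the posterior risk to about 5.7%.
risk = update_risk(0.01, [3.0, 2.0])
```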

Keywords: first-trimester, preeclampsia, screening, pregnancy-associated plasma protein

Procedia PDF Downloads 77
717 Plastic Pollution: Analysis of the Current Legal Framework and Perspectives on Future Governance

Authors: Giorgia Carratta

Abstract:

Since the beginning of mass production, plastic items have been crucial in our daily lives. Thanks to their physical and chemical properties, plastic materials have proven almost irreplaceable in a number of economic sectors such as packaging, automotive, building and construction, textiles, and many others. At the same time, the disruptive consequences of plastic pollution have been progressively brought to light in all environmental compartments. The overaccumulation of plastics in the environment, and its adverse effects on habitats, wildlife, and (most likely) human health, represents a call to action for decision-makers around the globe. From a regulatory perspective, plastic production is an unprecedented challenge at all levels of governance. At the international level, the design of new legal instruments, the amendment of existing ones, and coordination among the several relevant policy areas require considerable effort. Under the pressure of both increasing scientific evidence and a concerned public opinion, countries seem to be moving slowly towards the discussion of a new international ‘plastic treaty.’ However, whether, how, and with what scope such an instrument would be adopted remains to be seen. Additionally, governments are establishing region-based strategies that take into account the specificities of the plastic issue in a given geographical area. Thanks to the new Circular Economy Action Plan, approved in March 2020 by the European Commission, EU countries are slowly but steadily shifting to a carbon-neutral, circular economy in an attempt to reduce the pressure on natural resources and, in parallel, facilitate sustainable economic growth. In this context, the EU Plastic Strategy promises to change the way plastic is designed, produced, used, and treated after consumption. In the EU27 Member States alone, almost 26 million tons of plastic waste are generated every year, of which 24.9% is still destined for landfill.
Positive effects of the Strategy also include more effective protection of the environment, especially the marine environment, the reduction of greenhouse gas emissions, a reduced need for imported fossil energy sources, and more sustainable production and consumption patterns. As promising as it may sound, the road ahead is still long: the need to implement these measures in domestic legislation makes their outcome difficult to predict at the moment. An analysis of the current international and European Union legal framework on plastic pollution, binding and voluntary instruments included, could serve to detect ‘blind spots’ in current governance as well as to facilitate the development of policy interventions along the plastic value chain, where they appear most needed.

Keywords: environmental law, European Union, governance, plastic pollution, sustainability

Procedia PDF Downloads 107
716 Predictors for Success in Methadone Maintenance Treatment Clinic: 24 Years of Experience

Authors: Einat E. Peles, Shaul Schreiber, Miriam Adelson

Abstract:

Background: Since it was established more than 50 years ago, methadone maintenance treatment (MMT) has been the most effective treatment for opioid addiction, a chronic relapsing brain disorder that has become an epidemic in Western societies. Treatment consists of a daily, individually optimized dose of methadone (a long-acting mu opioid receptor full agonist), accompanied by psychosocial therapy. It is well established that the longer the retention in treatment, the better the outcomes and survival. MMT reduces the likelihood of the infectious diseases and overdose deaths associated with drug injecting, enhances social rehabilitation, reduces criminal activity, and leads to a healthy, productive life. Aim: To evaluate predictors of long-term retention in treatment, we analyzed our prospective follow-up of a major MMT clinic affiliated with a large tertiary medical center. Population and Methods: Between June 25, 1993, and June 24, 2016, all 889 patients (≥ 18 y) ever admitted to the clinic were prospectively followed up until May 2017. Duration in treatment, from first admission until the patient quit treatment or until the end of follow-up (24 years), was used to calculate cumulative retention in treatment using survival analyses (Kaplan-Meier) with log-rank tests and Cox regression for multivariate analyses. Results: Of the 889 patients, 25.2% were female; females were admitted to treatment at a younger age (35.0 ± 7.9 vs. 40.6 ± 9.8, p < .0005) but started opioid usage at the same age (22.3 ± 6.9). In addition to opioid use, on admission to MMT 58.5% had urine positive for benzodiazepines, 25% for cocaine, 12.4% for cannabis, and 6.9% for amphetamines. Hepatitis C antibody tested positive in 55% of the patients and HIV in 7.8%. Of all patients, 75.7% stayed at least one year in treatment, and of them, 67.7% stopped opioid usage (based on urine tests); a net reduction was observed in all other substance abuse (the proportion of those who stopped minus the proportion of those who started).
Long-term retention over up to 24 years of follow-up was 8.0 years (95% Confidence Interval (CI) 7.4-8.6). Predictors of longer retention in treatment (Cox regression) were being older on admission (≥ 30 y), Odds Ratio (OR) = 1.4 (CI 1.1-1.8); not abusing opioids after one year, OR = 1.8 (CI 1.5-2.1); not abusing benzodiazepines after one year, OR = 1.7 (CI 1.4-2.1); and treatment with a methadone dose ≥ 100 mg/day, OR = 1.8 (CI 1.5-2.3). Conclusions: Treating and following patients over 24 years indicates success on two main outcomes: a high rate of retention after one year (75.7%) and a high proportion of opiate abuse cessation (67.7%). As expected, longer cumulative retention was associated with treatment with an adequately high methadone dose, which successfully resulted in opioid cessation. Based on these findings, and in order to reduce morbidity and mortality, we find the establishment of more MMT clinics within general hospitals a most urgent necessity.
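The cumulative retention figure above is a Kaplan-Meier estimate, which treats patients still in treatment at the end of follow-up as censored observations rather than dropouts. A minimal sketch of the estimator on toy data (the durations below are illustrative, not the study's):

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate.
    durations: time in treatment; observed: True if the patient left
    treatment (event), False if censored at end of follow-up.
    Returns [(event_time, survival_probability), ...]."""
    pairs = sorted(zip(durations, observed))
    n_at_risk = len(pairs)
    survival = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        events = 0
        removed = 0
        # Group all subjects tied at time t (events and censorings).
        while i < len(pairs) and pairs[i][0] == t:
            events += pairs[i][1]
            removed += 1
            i += 1
        if events:
            survival *= 1 - events / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
    return curve

# Toy data: years in treatment; True = left treatment, False = censored.
curve = kaplan_meier([1, 2, 2, 3, 5, 8], [True, True, False, True, False, False])
```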

Keywords: methadone maintenance treatment, epidemic, opioids, retention

Procedia PDF Downloads 192
715 Upon Poly(2-Hydroxyethyl Methacrylate-Co-3, 9-Divinyl-2, 4, 8, 10-Tetraoxaspiro (5.5) Undecane) as Polymer Matrix Ensuring Intramolecular Strategies for Further Coupling Applications

Authors: Aurica P. Chiriac, Vera Balan, Mihai Asandulesa, Elena Butnaru, Nita Tudorachi, Elena Stoleru, Loredana E. Nita, Iordana Neamtu, Alina Diaconu, Liliana Mititelu-Tartau

Abstract:

The interest in studying ‘smart’ materials is entirely justified, and in this context investigations were carried out on poly(2-hydroxyethyl methacrylate-co-3, 9-divinyl-2, 4, 8, 10-tetraoxaspiro (5.5) undecane), a macromolecular compound with sensitivity to pH and temperature, gel formation capacity, binding properties, amphiphilicity, and good oxidative and thermal stability. Physico-chemical characteristics in terms of molecular weight, temperature-sensitive abilities, and thermal stability, as well as rheological, dielectric, and spectroscopic properties, were evaluated in correlation with further coupling capabilities. Differential scanning calorimetry indicated a Tg at 36.6 °C and a melting point at Tm = 72.8 °C for the studied copolymer; up to 200 °C, two exothermic processes (at 99.7 °C and 148.8 °C) were registered, with weight losses of about 4% and 19.27%, respectively, indicating thermal decomposition processes (and not thermal transition phenomena) owing to scission of the functional groups and breakage of the macromolecular chains. At the same time, rheological studies (rotational tests) confirmed the non-Newtonian shear-thinning fluid behavior of the copolymer solution. The dielectric properties of the copolymer were evaluated in order to investigate the relaxation processes; two relaxation processes below the Tg were registered and attributed to localized motions of polar groups from the side chains of the macromolecules, or parts of them, without disturbing the main chains. According to the literature, and confirmed by our investigations, the β-relaxation is assigned to the rotation of the ester side group, and the γ-relaxation corresponds to the rotation of hydroxymethyl side groups.
Fluorescence spectroscopy confirmed the copolymer structure, the spiroacetal moiety adopting a more stable, lower-energy axial conformation capable of specific interactions with molecules from the environment, phenomena underlined by the different shapes of the emission spectra of the copolymer. The copolymer was also used as a template for the incorporation of indomethacin as a model drug, and the biocompatible character of the complex was confirmed. The release behavior of the bioactive compound depended on the copolymer matrix composition, with increasing amounts of the 3, 9-divinyl-2, 4, 8, 10-tetraoxaspiro (5.5) undecane comonomer attenuating the drug release. At the same time, the in vivo studies showed no significant differences in leucocyte formula elements, GOT, GPT, and LDH levels, or immune parameters (OC, PC, and BC) between the control mice group and the groups treated with the copolymer samples, with or without drug, attesting to the biocompatibility of the polymer samples. The investigation of the physico-chemical characteristics of poly(2-hydroxyethyl methacrylate-co-3, 9-divinyl-2, 4, 8, 10-tetraoxaspiro (5.5) undecane) in terms of temperature-sensitive abilities and rheological and dielectric properties provides useful information for further specific uses of this polymeric compound.

Keywords: bioapplications, dielectric and spectroscopic properties, dual sensitivity at pH and temperature, smart materials

Procedia PDF Downloads 282
714 Stochastic Modelling for Mixed Mode Fatigue Delamination Growth of Wind Turbine Composite Blades

Authors: Chi Zhang, Hua-Peng Chen

Abstract:

With increasingly strained resources in the world, renewable and clean energy has been considered as an alternative to traditional sources. One practical example of using wind energy is the wind turbine, which has gained increasing attention in recent research. Like most offshore structures, the blades, which are the most critical components of the wind turbine, will be subjected to millions of loading cycles during their service life. To operate safely in marine environments, the blades are typically made from fibre-reinforced composite materials to resist fatigue delamination and harsh environments. The fatigue crack development of blades is uncertain because of the indeterminate mechanical properties of composites and the uncertainties of the offshore environment, such as wave loads, wind loads, and humidity. There are three main delamination failure modes for composite blades, and the most common failure type in practice involves mixed-mode loading, typically a combination of opening (mode 1) and shear (mode 2). However, fatigue crack development under mixed-mode loading cannot be predicted deterministically because of the various uncertainties in realistic practical situations. Therefore, selecting an effective stochastic model to evaluate the mixed-mode behaviour of wind turbine blades is a critical issue. In previous studies, the gamma process has been considered an appropriate stochastic approach, as it simulates a deterioration process that proceeds in one direction, matching the realistic situation of fatigue damage in wind turbine blades. On the basis of existing studies, various Paris law equations are discussed to simulate the propagation of fatigue crack growth. This paper develops a Paris model with stochastic deterioration modelling based on the gamma process for predicting fatigue crack performance over the design service life.
A numerical example of wind turbine composite materials is investigated to predict the mixed-mode crack depth by the Paris law and the probability of fatigue failure by the gamma process. Probability-of-failure curves under different situations are obtained from the stochastic deterioration model for comparison. Compared with experimental results, the gamma process can take the uncertainties of mixed-mode crack propagation into consideration, and the stochastic deterioration process agrees better with the realistic crack process for composite blades. Finally, according to the results predicted by the gamma stochastic model, assessment strategies for composite blades are developed to reduce total lifecycle costs and increase resistance to fatigue crack growth.
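The probability-of-failure curves described above come from treating crack growth as a gamma process: deterioration is monotone, and each increment is gamma-distributed. The sketch below shows only this gamma-process part as a Monte Carlo estimate; the paper couples it with a Paris law for mixed-mode growth, and all parameter values here are illustrative, not taken from the study.

```python
import random

def simulate_failure_prob(a0, a_crit, years, shape_per_yr, scale,
                          n_sims=2000, seed=1):
    """Monte Carlo estimate of P(crack depth exceeds a_crit within `years`)
    when growth follows a stationary gamma process: each yearly increment
    is Gamma(shape_per_yr, scale), so growth is always non-negative."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_sims):
        a = a0
        for _ in range(years):
            a += rng.gammavariate(shape_per_yr, scale)
            if a >= a_crit:
                failures += 1
                break
    return failures / n_sims

# Illustrative parameters: 0.5 mm initial crack, 10 mm critical depth,
# 20-year design life, mean growth shape*scale = 0.3 mm/yr.
p = simulate_failure_prob(a0=0.5, a_crit=10.0, years=20,
                          shape_per_yr=1.5, scale=0.2)
```

Sweeping `a_crit` or the design life and re-running gives the kind of probability-of-failure curves used for the comparisons above.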

Keywords: reinforced fibre composite, wind turbine blades, fatigue delamination, mixed failure mode, stochastic process

Procedia PDF Downloads 413
713 Crash and Injury Characteristics of Riders in Motorcycle-Passenger Vehicle Crashes

Authors: Z. A. Ahmad Noor Syukri, A. J. Nawal Aswan, S. V. Wong

Abstract:

The motorcycle has become one of the most common types of vehicles used on the road, particularly in Asia, including Malaysia, due to its convenient size and affordable price. This study focuses only on crashes involving motorcycles and passenger cars, comprising 43 real-world crashes obtained through an in-depth crash investigation process from June 2016 to July 2017. The study collected and analyzed vehicle and site parameters obtained during crash investigation and injury information acquired from the patient-treating hospital. The investigation team, consisting of two personnel, was stationed at the Emergency Department of the treatment facility and was dispatched to the crash scene upon receiving notification of a relevant crash. The injury information retrieved was coded by level of severity using the Abbreviated Injury Scale (AIS) and classified into different body regions. The data revealed that weekend crashes were significantly higher during the night-time period, while on weekdays crash occurrence was highest during morning hours (the commuting-to-work period). Bad weather conditions had a minimal effect on the occurrence of motorcycle-passenger vehicle crashes, and nearly 90% involved motorcycles with single riders. Riders up to 25 years old were heavily involved in crashes with passenger vehicles (60%), followed by the 26-55 year age group with 35%. Male riders were dominant in each of the age segments. The majority of the crashes involved side impacts, followed by rear impacts, and cars outnumbered the other passenger vehicle types in terms of crash involvement with motorcycles. The investigation data also revealed that passenger vehicles were most often the at-fault party (62%) in crashes with motorcycles, and most of the crashes involved situations in which both vehicles were travelling in the same direction and one of them was in a turning maneuver.
More than 80% of the involved motorcycle riders were assigned a yellow severity level during the triage process. The study also found that nearly 30% of the riders sustained injuries to the lower extremities, while MAIS level 3 injuries were recorded for all body regions except the thorax. The results showed that crashes in which the motorcycle was at fault were more likely to occur at night and in rainy conditions. These crashes were also more likely to involve other types of passenger vehicles rather than cars and more likely to result in a higher Injury Severity Score (ISS > 6) for the involved rider. To reduce motorcycle fatalities, the characteristics concerned must first be understood, and focus may be given to crashes involving passenger vehicles as the most dominant crash partner on Malaysian roads.
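The AIS and ISS values used above are related by a standard rule: the ISS is the sum of the squares of the highest AIS grades in the three most severely injured body regions, capped at 75 when any region scores AIS 6 (unsurvivable). A minimal sketch of that rule (the region names in the example are illustrative):

```python
def injury_severity_score(region_ais):
    """ISS from a mapping of body region -> highest AIS in that region.
    An AIS of 6 in any region sets the ISS to its maximum of 75;
    otherwise ISS is the sum of squares of the three highest regional AIS."""
    scores = list(region_ais.values())
    if any(s == 6 for s in scores):
        return 75
    top3 = sorted(scores, reverse=True)[:3]
    return sum(s * s for s in top3)

# e.g. lower extremity AIS 3, thorax AIS 2, head AIS 1 -> 9 + 4 + 1 = 14
iss = injury_severity_score({"lower extremity": 3, "thorax": 2, "head": 1})
```

An ISS above 6, the threshold mentioned above, is thus reached as soon as a rider has, for example, one AIS 2 injury plus any second injured region, or any single AIS 3 injury.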

Keywords: motorcycle crash, passenger vehicle, in-depth crash investigation, injury mechanism

Procedia PDF Downloads 322
712 Health Inequalities in the Global South: Identification of Poor People with Disabilities in Cambodia to Generate Access to Healthcare

Authors: Jamie Lee Harder

Abstract:

In the context of rapidly changing social and economic circumstances in the developing world, this paper analyses access to public healthcare for poor people with disabilities in Cambodia. Like other countries of South East Asia, Cambodia is developing at a rapid pace. Cambodia's historical past, however, has reset former social policy structures to zero, forcing the country and its citizens to implement new public health policies aligned with the needs of social care, healthcare, and urban planning. In this context, the role of people with disabilities (PwDs) is crucial, as new developments should, and can, take their specific needs into consideration from the beginning onwards. This paper is based on qualitative research with expert interviews and focus group discussions in Cambodia. During the fieldwork it became clear that the identification tool for the poorest households (HHs) does not count disability as a financial risk for falling into poverty, neither through sickness nor through the higher health expenditures and/or lower income associated with the disability. The social risk group of poor PwDs faces several barriers in accessing public healthcare. Urbanization, socio-economic health status, and opportunities for education all influence social status and have an impact on the health situation of these individuals. Cambodia has various difficulties in providing access for people with disabilities, mostly due to barriers regarding finances, geography, quality of care, poor knowledge of their rights, and negative social and cultural beliefs. Shortened budgets and a lack of prioritization create a need for reorientation among local communities, international and national non-governmental organizations, and social policy. The poorest HHs are identified with a questionnaire, the IDPoor program, for which the Ministry of Planning is responsible.
The identified HHs receive an ‘Equity Card’ which, among other benefits, provides free-of-charge access to public healthcare centers and hospitals. The dataset usually does not include information about disability status. Four focus group discussions (FGDs) with 28 participants revealed various barriers to accessing public healthcare, barriers that go far beyond a missing ramp at the healthcare center. The contents of the FGDs were confirmed and reiterated during the expert interviews with the local Ministries, NGOs, international organizations, and private persons working in the field. The participants of the FGDs faced, and continue to face, high discrimination, low capacity to work and earn their own income, dependency on others, and lower social competence in their lives. When discussing their health situation, we identified a large difference between those who are identified and hold an Equity Card and those who do not. Participants reported high costs without IDPoor identification, positive experiences when going to the health center in terms of attitude and treatment, low satisfaction with specific treatment capacities, negative rumors, and discrimination, with the consequence that many fear seeking treatment. The problem of access to public healthcare by risk groups can be adapted to situations in other countries.

Keywords: access, disability, health, inequality, Cambodia

Procedia PDF Downloads 151
711 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site conditions. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, namely Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes 3 to 5.8, recorded over a hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. This database was chosen because of the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for them.
Accuracy of the models in predicting intensity measures, generalization capability for future data, and usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms. However, the conventional method is a better tool when limited data are available.
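The contrast drawn above between linear regression and tree-based learners can be reproduced on synthetic data: a response with curved magnitude scaling and log-distance attenuation defeats a plain linear fit in (M, R) but is captured by a random forest. This is a sketch on fabricated data, not the study's 4,528-record database, and it omits the random-effects adjustment for event and site terms.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Fabricated ground-motion-like data: magnitude M in [3, 5.8],
# hypocentral distance R in [4, 500] km, mirroring the ranges above.
n = 2000
M = rng.uniform(3.0, 5.8, n)
R = rng.uniform(4.0, 500.0, n)
# ln(PGA)-like response: curved magnitude scaling plus geometric
# spreading in ln(R), with Gaussian noise.
y = 1.2 * M - 0.08 * (M - 4.5) ** 2 - 1.5 * np.log(R) + rng.normal(0.0, 0.3, n)
X = np.column_stack([M, R])

# Train on the first 1500 records, evaluate R^2 on the held-out 500.
lin = LinearRegression().fit(X[:1500], y[:1500])
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:1500], y[:1500])
r2_lin = lin.score(X[1500:], y[1500:])
r2_rf = rf.score(X[1500:], y[1500:])
```

The forest learns the log-distance attenuation from the data alone, whereas the linear model would need ln(R) supplied as a pre-defined term, which is exactly the "no pre-defined equations" point made above.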

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 122
710 Kinematical Analysis of Tai Chi Chuan Players during Gait and Balance Test and Implication in Rehabilitation Exercise

Authors: Bijad Alqahtani, Graham Arnold, Weijie Wang

Abstract:

Background: Tai Chi Chuan (TCC) is a type of traditional Chinese martial art and is considered beneficial to physical fitness. Advanced motion analysis techniques are routinely used in clinical assessment; however, so far, little research has been done on the biomechanical assessment of TCC players in terms of gait and balance using motion analysis. Objectives: The aim of this study was to investigate whether TCC improves lower limb condition and balance ability using state-of-the-art motion analysis technologies, i.e., a motion capture system, electromyography, and force platforms. Methods: Twenty TCC participants (9 male, 11 female), aged 42-77 years and weighing 56.2-119 kg, and eighteen non-TCC participants (7 male, 11 female), aged 43-78 years and weighing 50-110 kg, age-matched as a control group, were recruited in this study. Their gait and balance data were collected using Vicon Nexus® to obtain gait parameters and kinematic parameters of the hip, knee, and ankle joints in three planes for both limbs. Participants stood on force platforms to perform a single-leg balance test. They were then asked to walk along a 10 m walkway at a comfortable speed. Participants performed 5 trials of the single-leg balance test on the dominant side, 3 trials of the four-square step balance test, and 10 trials of walking. From the recorded trials, three good ones were analyzed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g., walking speed, cadence, and stride length, and joint parameters, e.g., joint angles, forces, and moments. Results: Comparing the temporal-spatial variables of the TCC subjects with those of the non-TCC subjects, a significant difference (p < 0.05) was found between the groups.
Moreover, TCC participants showed significant differences in ankle, hip, and knee joint kinematics in the sagittal, coronal, and transverse planes, such as ankle angle in the transverse plane (19.90±19.54 deg for TCC vs. 15.34±6.50 deg for non-TCC) and knee angle in the transverse plane (14.96±6.40 deg for TCC vs. 17.63±5.79 deg for non-TCC). The results also showed a significant difference between groups in the single-leg balance test: TCC participants maintained single-leg stance for a longer duration (20.85±10.53 s) compared to the non-TCC group (13.39±8.78 s). In contrast, there was no significant difference between groups in the four square step balance test. Conclusion—Our results showed significant differences between Tai Chi Chuan and non-Tai Chi Chuan participants in various aspects of gait analysis and balance testing. Based on these findings in biomechanical parameters such as joint kinematics, gait parameters, and the single-leg stance balance test, Tai Chi Chuan could improve lower limb condition and reduce the risk of falls for the elderly.
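The group comparisons above can be sketched with a Welch two-sample t-test (which does not assume equal group variances); the stance-time samples below are hypothetical illustrations, not the study's raw data:

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic and (Welch-Satterthwaite)
    degrees of freedom; variances are not assumed equal."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical single-leg stance times (s) for TCC and non-TCC groups
tcc = [20.0, 21.0, 22.0, 23.0]
non_tcc = [13.0, 14.0, 15.0, 16.0]
t_stat, dof = welch_t(tcc, non_tcc)
```

With real data, `t_stat` would be compared against the t distribution with `dof` degrees of freedom to obtain the reported p < 0.05.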

Keywords: gait analysis, kinematics, single leg stance, Tai Chi Chuan

Procedia PDF Downloads 127
709 Nuclear Near Misses and Their Learning for Healthcare

Authors: Nick Woodier, Iain Moppett

Abstract:

Background: It is estimated that one in ten patients admitted to hospital will suffer an adverse event in their care. While the majority of these will result in low harm, patients are being significantly harmed by the processes meant to help them. Healthcare, therefore, seeks to make improvements in patient safety by taking learning from other industries that are perceived to be more mature in their management of safety events. Of particular interest to healthcare are ‘near misses,’ those events that almost happened but for an intervention. Healthcare does not have any guidance as to how best to manage and learn from near misses to reduce the chances of harm to patients. The authors, as part of a larger study of near-miss management in healthcare, sought to learn from the UK nuclear sector to develop principles for how healthcare can identify, report, and learn from near misses to improve patient safety. The nuclear sector was chosen as an exemplar due to its status as an ultra-safe industry. Methods: A Grounded Theory (GT) methodology, augmented by a scoping review, was used. Data collection included interviews, scenario discussion, field notes, and the literature. The review protocol is accessible online. The GT aimed to develop theories about how nuclear manages near misses, with a focus on defining them and clarifying how best to support reporting and analysis to extract learning. Near misses related to radiation release or exposure were focused on. Results: Eight nuclear interviews, spanning nuclear power, decommissioning, weapons, and propulsion, contributed to the GT. The scoping review identified 83 articles across a range of safety-critical industries, with only six focused on nuclear. The GT identified that nuclear has a particular focus on precursors and low-level events, with regulation supporting their management.
Exploration of definitions led to recognition of the importance of several interventions in a sequence of events, interventions that should not rely solely on humans, as humans cannot be assumed to be robust barriers. Regarding reporting and analysis, no consistent methods were identified; for learning, however, the role of operating experience learning groups was identified as an exemplar. The safety culture across nuclear was nonetheless heard to vary, which undermined reporting of near misses and other safety events. Some parts of the industry described that their focus on near misses is new and that, despite potential risks existing, progress to mitigate hazards is slow. Conclusions: Healthcare often sees ‘nuclear,’ as well as other ultra-safe industries such as ‘aviation,’ as homogeneous. However, the findings here suggest significant differences in safety culture and maturity across various parts of the nuclear sector. Healthcare can take learning from some aspects of near-miss management in nuclear, such as how near misses are defined and how learning is shared through operating experience networks. However, healthcare also needs to recognise that variability exists across industries and that healthcare may, by comparison, be more mature in some areas of safety.

Keywords: culture, definitions, near miss, nuclear safety, patient safety

Procedia PDF Downloads 104
708 Youth Health Promotion Project for Indigenous People in Canada: Together against Bullying and Cyber-Dependence

Authors: Mohamed El Fares Djellatou, Fracoise Filion

Abstract:

The Ashukin program, whose name means ‘bridge’ in the Naskapi and Atikamekw languages, has been designed to offer a partnership between nursing students and an indigenous community. The students design a health promotion project tailored to the needs of the community. The issues of intimidation in primary school and cyber-dependence in high school were among the concerns in a rural Atikamekw community. The goal of the project was to have a conversation with indigenous youths, aged 10-16 years old, on the challenges presented by intimidation and cyber-dependence, as well as to promote healthy relationships online and within the community. Methods: Multiple progressive inquiry questions (PIQs) were used to assess the feasibility and importance of this project for the Atikamekw nation and to determine a plan to follow. The theoretical foundations guiding the conception of the project were the Population Health Promotion Model (PHPM), the First Nations Holistic Lifelong Learning Model, and the Medicine Wheel. A broad array of social determinants of health were addressed, including healthy childhood development, personal health practices and coping skills, and education. The youths were encouraged to participate in interactive educational sessions, using PowerPoint presentations and pamphlets as the main strategies. Additional tools such as cultural artworks and physical activities were introduced to strengthen inter-relational and team spirit within the Indigenous population. A quality assurance tool (QAT) was developed specifically to determine the appropriateness of these health promotion tools. Improvements were guided by the feedback provided by the indigenous schools’ teachers and social workers who completed the QATs.
After the educational sessions, quantitative results showed that 93.48% of primary school students were able to identify the different types of intimidation, 72.65% recognized more than two strategies, and 52.1% were able to list at least four resources to defuse intimidation. On the other hand, around 75% of the adolescents were able to name at least three negative effects of cyber-dependence, and 50% listed three strategies to reduce it. This project was meant to create a bridge, through health promotion, with the First Nation, a population known to be disadvantaged due to systemic health inequity and disparities. Culturally safe care was proposed to deal with the two identified priority issues, and an educational toolkit was given to both schools to ensure the sustainability of the project. The project was self-financed through fundraising activities, and it yielded better results than expected.

Keywords: indigenous, first nation, bullying, cyber-dependence, internet addiction, intimidation, youth, adolescents, school, community nursing, health promotion

Procedia PDF Downloads 98
707 Understanding Evidence Dispersal Caused by the Effects of Using Unmanned Aerial Vehicles in Active Indoor Crime Scenes

Authors: Elizabeth Parrott, Harry Pointon, Frederic Bezombes, Heather Panter

Abstract:

Unmanned aerial vehicles (UAVs) are having a profound effect on policing, forensic, and fire service procedures worldwide. These intelligent devices have already proven useful in photographing and recording large-scale outdoor and indoor sites using orthomosaic and three-dimensional (3D) modelling techniques, for the purpose of capturing and recording sites during and post-incident. UAVs are becoming an established tool as they extend the reach of the photographer and offer new perspectives without the expense and restrictions of deploying full-scale aircraft. 3D reconstruction quality is directly linked to the resolution of the captured images; therefore, close-proximity flights are required for more detailed models. As technology advances, deployment of UAVs in confined spaces is becoming more common. With this in mind, this study investigates the effects of UAV operation within active crime scenes with regard to the dispersal of particulate evidence. To date, little consideration has been given to the potential effects of using UAVs within active crime scenes aside from a legislative point of view. Although the technology can potentially reduce the likelihood of contamination by replacing some of the roles of investigating practitioners, there is a risk of evidence dispersal caused by the strong airflow beneath the UAV from the downwash of the propellers. The initial results of this study are therefore presented to determine the flight height of least effect, and the commercial propeller type that generates the smallest amount of disturbance, within the dataset tested. In this study, a range of commercially available 4-inch propellers were chosen as a starting point due to their common availability, and their small size makes them well suited for operation within confined spaces.
To perform the testing, a rig was configured to support a single motor and propeller, powered by a standalone mains power supply and controlled via a microcontroller. This mimicked a complete throttle cycle and controlled the device to ensure repeatability, removing the variance introduced by battery packs and complex UAV structures and allowing a more robust setup. The only changing factors were therefore the propeller and the operating height. The results were calculated via computer vision analysis of the recorded dispersal of the sample particles placed below the arm-mounted propeller. The aim of this initial study is to give practitioners an insight into the technology to use when operating within confined spaces, as well as to recognize some of the issues caused by UAVs within active crime scenes.

Keywords: dispersal, evidence, propeller, UAV

Procedia PDF Downloads 163
706 Biorefinery as Extension to Sugar Mills: Sustainability and Social Upliftment in the Green Economy

Authors: Asfaw Gezae Daful, Mohsen Alimandagari, Kathleen Haigh, Somayeh Farzad, Eugene Van Rensburg, Johann F. Görgens

Abstract:

The sugar industry has to 're-invent' itself to ensure long-term economic survival and opportunities for job creation and enhanced community-level impacts, given increasing pressure from fluctuating and low global sugar prices, increasing energy prices, and sustainability demands. We propose biorefineries annexed to existing sugar mills, using low-value lignocellulosic biomass (sugarcane bagasse, leaves, and tops), for re-vitalisation of the sugar industry, producing a spectrum of high-value platform chemicals along with biofuel, bioenergy, and electricity. This presents an opportunity for greener products that mitigate climate change and overcome economic challenges. Xylose from labile hemicellulose remains largely underutilized, and its conversion to value-added products is a major challenge. Insight is required into pretreatment and/or extraction to optimize the production of cellulosic ethanol together with lactic acid, furfural, or biopolymers from sugarcane bagasse, leaves, and tops. Experimental conditions for alkaline and pressurized hot water extraction, and for dilute acid and steam explosion pretreatment, of sugarcane bagasse and harvest residues were investigated to serve as a basis for developing various process scenarios under a sugarcane biorefinery scheme. Dilute acid and steam explosion pretreatment were optimized for maximum hemicellulose recovery, combined sugar yield, and solids digestibility. An optimal range of conditions for alkaline and liquid hot water extraction of hemicellulosic biopolymers, as well as conditions for acceptable enzymatic digestibility of the solid residue after such extraction, was established. Using data from the above, a series of energy-efficient biorefinery scenarios is under development and is modeled using Aspen Plus® software to simulate potential factories, to better understand the biorefinery processes and estimate the CAPEX and OPEX, environmental impacts, and overall viability.
A rigorous and detailed sustainability assessment methodology was formulated to address all pillars of sustainability. This work is ongoing; to date, models have been developed for some of the processes, which can ultimately be combined into biorefinery scenarios. This will allow systematic comparison of a series of biorefinery scenarios to assess their potential to reduce negative impacts and maximize benefits across social, economic, and environmental factors on a lifecycle basis.

Keywords: biomass, biorefinery, green economy, sustainability

Procedia PDF Downloads 514
705 Predicting Growth of Eucalyptus Marginata in a Mediterranean Climate Using an Individual-Based Modelling Approach

Authors: S.K. Bhandari, E. Veneklaas, L. McCaw, R. Mazanec, K. Whitford, M. Renton

Abstract:

Eucalyptus marginata, E. diversicolor and Corymbia calophylla form widespread forests in south-west Western Australia (SWWA). These forests have economic and ecological importance, and therefore tree growth and sustainable management are of high priority. This paper aimed to analyse and model the growth of these species at both the stand and individual levels, but this presentation will focus on predicting the growth of E. marginata at the individual tree level. More specifically, the study investigated how well individual E. marginata tree growth could be predicted from the diameter and height of the tree at the start of the growth period, and whether this prediction could be improved by also accounting for competition from neighbouring trees in different ways. The study also investigated how many neighbouring trees, or what neighbourhood distance, needed to be considered when accounting for competition. To achieve this aim, Pearson correlation coefficients were examined among competition indices (CIs) and between CIs and dbh growth, and the competition index that best predicts the diameter growth of individual trees was selected for E. marginata forest managed under different thinning regimes at Inglehope in SWWA. Furthermore, individual tree growth models were developed using simple linear regression, multiple linear regression, and linear mixed effects modelling approaches. Individual tree growth models were developed for thinned and unthinned stands separately. The developed models were validated using two approaches. In the first approach, models were validated using a subset of the data that was not used in model fitting. In the second approach, the model of one growth period was validated with the data of another growth period. Tree size (diameter and height) was a significant predictor of growth. This prediction improved when competition was included in the model.
The fit statistic (coefficient of determination) of the models ranged from 0.31 to 0.68. The models with spatial competition indices validated as more accurate than those with non-spatial indices. The model prediction can be optimized if 10 to 15 competitors (by number), or the competitors within roughly 10 m of the base of the subject tree (by distance), are included in the model, which can reduce the time and cost of collecting information about competitors. As competition from neighbours was a significant predictor with a negative effect on growth, it is recommended to include neighbourhood competition when predicting growth and to consider thinning treatments to minimize the effect of competition on growth. These modelling approaches are likely to be useful tools for the conservation and sustainable management of E. marginata forests in SWWA. As a next step in optimizing the number and distance of competitors, further studies with larger plots and a larger number of plots than used in the present study are recommended.
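As a sketch of how a spatial competition index feeds an individual-tree growth model, the snippet below computes a Hegyi-type distance-weighted size ratio for neighbours within 10 m and fits a linear growth model by least squares. The index choice, the synthetic stand, and the coefficients are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

def hegyi_ci(dbh, coords, subject, radius=10.0):
    """Distance-weighted size-ratio competition index for one subject
    tree, counting competitors within `radius` metres (Hegyi-type)."""
    d = np.linalg.norm(coords - coords[subject], axis=1)
    mask = (d > 0) & (d <= radius)          # exclude the subject itself
    return float(np.sum(dbh[mask] / (dbh[subject] * d[mask])))

# Hypothetical stand: dbh (cm) and x,y stem positions (m)
rng = np.random.default_rng(0)
coords = rng.uniform(0, 50, size=(40, 2))
dbh = rng.uniform(10, 60, size=40)
ci = np.array([hegyi_ci(dbh, coords, i) for i in range(len(dbh))])

# Synthetic diameter growth: positive size effect, negative competition effect
growth = 0.1 * dbh - 0.5 * ci + rng.normal(0, 0.2, size=40)

# Multiple linear regression: growth ~ b0 + b1*dbh + b2*CI
X = np.column_stack([np.ones_like(dbh), dbh, ci])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
```

On real data the negative sign of the fitted competition coefficient is what quantifies the growth-suppressing effect of neighbours reported above.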

Keywords: competition, growth, model, thinning

Procedia PDF Downloads 128
704 Measurement of Magnetic Properties of Grain-Oriented Electrical Steels at Low and High Fields Using a Novel Single Sheet Tester

Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa

Abstract:

Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at high flux densities suitable for its typical applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and low-frequency magnetic shielding in magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, B-H loop, AC relative permeability, and specific power loss of conventional grain-oriented (CGO) and high-permeability grain-oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. Forty strips (20 CGO and 20 HGO), each 305 mm x 30 mm x 0.27 mm, from one supplier were tested. The HGO and CGO strips had average grain sizes of 9 mm and 4 mm, respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer in which LabVIEW version 8.5 from National Instruments (NI) was installed, a NI 4461 data acquisition (DAQ) card, an impedance matching transformer to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding, covering the entire length of the plastic former, was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card.
The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for calculation of the magnetic field strength and flux density, respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, so that measurements are repeatable and comparable. The low-noise NI 4461 card, with 24-bit resolution, a sampling rate of 204.8 kHz, and a 92 kHz bandwidth, was chosen to minimize the influence of thermal noise on the measurements. In order to reduce environmental noise, the yokes, sample, and search coil carrier were placed in a noise-shielding chamber. HGO was found to have better magnetic properties in both the high and low magnetisation regimes. This is attributed to the larger grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO thus outperforms CGO in both low and high magnetic field applications.
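The two acquired signals map to H and B through Ampère's and Faraday's laws. A minimal sketch of that post-processing is below; the winding counts and shunt value follow the abstract, while the 0.27 m effective magnetic path length and the 30 mm x 0.27 mm strip cross-section are assumptions for illustration:

```python
import numpy as np

def h_and_b(v_shunt, v_sec, fs, r_shunt=4.7, n_pri=100, n_sec=500,
            path_len=0.27, area=30e-3 * 0.27e-3):
    """Field strength H (A/m) and flux density B (T) from the shunt
    and secondary voltages sampled at rate fs (Hz). The path length
    and cross-sectional area are assumed values."""
    i_mag = v_shunt / r_shunt                     # magnetising current (A)
    h = n_pri * i_mag / path_len                  # Ampere's law
    # Faraday's law: B follows from integrating the induced EMF
    dt = 1.0 / fs
    b = np.concatenate([[0.0],
                        np.cumsum(0.5 * (v_sec[1:] + v_sec[:-1])) * dt])
    b /= n_sec * area
    b -= b.mean()                                 # remove integration offset
    return h, b
```

In the described setup this computation would run on each acquired frame, with the feedback loop adjusting the drive voltage until the secondary waveform is sinusoidal at the target peak flux density.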

Keywords: flux density, electrical steel, LabVIEW, magnetization

Procedia PDF Downloads 291
703 The Composition of Biooil during Biomass Pyrolysis at Various Temperatures

Authors: Zoltan Sebestyen, Eszter Barta-Rajnai, Emma Jakab, Zsuzsanna Czegeny

Abstract:

Extraction of the energy content of lignocellulosic biomass is one of the possible pathways to reduce the greenhouse gas emissions derived from the burning of fossil fuels. The application of bioenergy can mitigate a country's energy dependency on foreign natural gas and petroleum. The diversity of plant materials makes the utilization of raw biomass in power plants difficult. This problem can be overcome by the application of thermochemical techniques. Pyrolysis is the thermal decomposition of raw materials under an inert atmosphere at high temperatures, which produces pyrolysis gas, biooil, and charcoal. The energy content of these products can be exploited by further utilization. The differences in the chemical and physical properties of raw biomass materials can be reduced by torrefaction. Torrefaction is a promising mild thermal pretreatment method performed at temperatures between 200 and 300 °C in an inert atmosphere. From a chemical point of view, the goal of the pretreatment is the removal of water and of the acidic groups of hemicelluloses, or of the whole hemicellulose fraction, with minor degradation of cellulose and lignin in the biomass. Thus, the stability of the biomass against biodegradation increases, and its energy density increases as well. The volume of the raw material decreases, so the expenses of transportation and storage are reduced. Biooil is the major product during pyrolysis and an important by-product during torrefaction of biomass. The composition of biooil mostly depends on the quality of the raw materials and the applied temperature. In this work, thermoanalytical techniques have been used to study the qualitative and quantitative composition of the pyrolysis and torrefaction oils of a woody sample (black locust) and two herbaceous samples (rape straw and wheat straw).
The biooil contains C5 and C6 anhydrosugar molecules originating from hemicellulose and cellulose, respectively, as well as aromatic compounds originating from lignin. In this study, special emphasis was placed on the formation of the lignin monomeric products. The structure of the lignin fraction is different in wood and in herbaceous plants. According to the thermoanalytical studies, the decomposition of lignin starts above 200 °C and ends at about 500 °C. The lignin monomers are present among the components of the torrefaction oil even at relatively low temperatures. We established that the concentration and the composition of the lignin products vary significantly with the applied temperature, indicating that different decomposition mechanisms dominate at low and high temperatures. The evolution of the decomposition products, as well as the thermal stability of the samples, was measured by thermogravimetry/mass spectrometry (TG/MS). The differences in the structure of the lignin products of woody and herbaceous samples were characterized by pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS). As a statistical method, principal component analysis (PCA) was used to find correlations between the composition of the lignin products of the biooil and the applied temperatures.
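A PCA of this kind can be sketched directly from the singular value decomposition of the centred data matrix; the marker compounds, peak areas, and temperatures below are hypothetical illustrations, not the study's measured data:

```python
import numpy as np

# Hypothetical Py-GC/MS peak areas of four lignin markers (e.g. guaiacol,
# syringol, 4-vinylphenol, vanillin) at five treatment temperatures (°C)
temps = np.array([225, 250, 275, 300, 500])
X = np.array([
    [0.2, 0.1, 0.05, 0.1],
    [0.5, 0.3, 0.10, 0.2],
    [1.0, 0.8, 0.30, 0.4],
    [2.1, 1.9, 0.90, 0.8],
    [4.0, 4.2, 2.50, 1.5],
])

Xc = X - X.mean(axis=0)           # centre each variable
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                    # sample coordinates on the PCs
explained = S**2 / np.sum(S**2)   # variance share per component
```

Plotting the first principal component scores against temperature is the kind of view that reveals whether product composition shifts systematically between the low- and high-temperature decomposition regimes.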

Keywords: pyrolysis, torrefaction, biooil, lignin

Procedia PDF Downloads 328
702 Curcumin Nanomedicine: A Breakthrough Approach for Enhanced Lung Cancer Therapy

Authors: Shiva Shakori Poshteh

Abstract:

Lung cancer is a highly prevalent and devastating disease, representing a significant global health concern with profound implications for healthcare systems and society. Its high incidence, mortality rates, and late-stage diagnosis contribute to its formidable nature. To address these challenges, nanoparticle-based drug delivery has emerged as a promising therapeutic strategy. Curcumin (CUR), a natural compound derived from turmeric, has garnered attention as a potential nanomedicine for lung cancer treatment. Nanoparticle formulations of CUR offer several advantages, including improved drug delivery efficiency, enhanced stability, controlled release kinetics, and targeted delivery to lung cancer cells. CUR exhibits a diverse array of effects on cancer cells. It induces apoptosis by upregulating pro-apoptotic proteins, such as Bax and Bak, and downregulating anti-apoptotic proteins, such as Bcl-2. Additionally, CUR inhibits cell proliferation by modulating key signaling pathways involved in cancer progression. It suppresses the PI3K/Akt pathway, crucial for cell survival and growth, and attenuates the mTOR pathway, which regulates protein synthesis and cell proliferation. CUR also interferes with the MAPK pathway, which controls cell proliferation and survival, and modulates the Wnt/β-catenin pathway, which plays a role in cell proliferation and tumor development. Moreover, CUR exhibits potent antioxidant activity, reducing oxidative stress and protecting cells from DNA damage. Utilizing CUR as a standalone treatment is limited by poor bioavailability, lack of targeting, and degradation susceptibility. Nanoparticle-based delivery systems can overcome these challenges. They enhance CUR’s bioavailability, protect it from degradation, and improve absorption. 
Further, nanoparticles enable targeted delivery to lung cancer cells through surface modifications or ligand-based targeting, ensure sustained release of CUR to prolong therapeutic effects and reduce administration frequency, and facilitate penetration through the tumor microenvironment, thereby enhancing CUR’s access to cancer cells. Thus, nanoparticle-based CUR delivery systems promise to improve lung cancer treatment outcomes. This article provides an overview of lung cancer, explores CUR nanoparticles as a treatment approach, discusses the benefits and challenges of nanoparticle-based drug delivery, and highlights prospects for CUR nanoparticles in lung cancer treatment. Future research aims to optimize these delivery systems for improved efficacy and patient prognosis in lung cancer.

Keywords: lung cancer, curcumin, nanomedicine, nanoparticle-based drug delivery

Procedia PDF Downloads 72
701 A Qualitative Exploration of the Beliefs and Experiences of HIV-Related Self-Stigma Amongst Young Adults Living with HIV in Zimbabwe

Authors: Camille Rich, Nadine Ferris France, Ann Nolan, Webster Mavhu, Vongai Munatsi

Abstract:

Background and Aim: Zimbabwe has one of the highest HIV rates in the world, with a 12.7% adult prevalence rate. Young adults are a key group affected by HIV, and one-third of all new infections in Zimbabwe are amongst people aged 18-24 years. Stigma remains one of the main barriers to managing and reducing the HIV crisis, especially for young adults. There are several types of stigma, including enacted stigma, the outward discrimination towards someone, and self-stigma, the negative self-judgments one directs at oneself. Self-stigma can have severe consequences, including feelings of worthlessness, shame, suicidal thoughts, and avoidance of medical help. This can have detrimental effects on those living with HIV. However, the unique beliefs and impacts of self-stigma amongst key groups living with HIV have not yet been explored. Therefore, the focus of this study is on the beliefs and experiences of HIV-related self-stigma as experienced by young adults living in Harare, Zimbabwe. Research Methods: A qualitative approach was taken for this study, using sixteen semi-structured interviews with young adults (18-24 years) who are living with HIV in Harare. Participants were conveniently and purposefully sampled as members of Africa, an organization dedicated to young people living with HIV. Interviews were conducted over Zoom due to the COVID-19 pandemic, recorded, and then coded using the software NVivo. The data were analyzed using both inductive and deductive thematic analysis to find common themes. Results: All of the participants experienced HIV-related self-stigma, and both beliefs and experiences were explored. These negative self-perceptions included beliefs of worthlessness, hopelessness, and negative body image. The young adults described believing they were not good enough to be around HIV-negative people or that they could never be loved due to their HIV status.
Developing self-stigmatizing thoughts came from internalizing negative cultural values, stereotypes about people living with HIV, and adverse experiences. Three main themes of self-stigmatizing experiences emerged: disclosure difficulties, relationship complications, and being isolated. Fear of telling someone their status, rejection in a relationship, and being excluded by others due to their HIV status contributed to their self-stigma. These experiences caused feelings of loneliness, sadness, shame, fear, and low self-worth. Conclusions: This study explored the beliefs and experiences of HIV-related self-stigma of these young adults. The emergence of negative self-perceptions demonstrated deep-rooted beliefs of HIV-related self-stigma that adversely impact the participants. The negative self-perceptions and self-stigmatizing experiences caused the participants to feel worthless, hopeless, shameful, and alone, negatively impacting their physical and mental health, personal relationships, and sense of self-identity. These results can now be used to pursue interventions that target the specific beliefs and experiences of young adults living with HIV and reduce the adverse consequences of self-stigma.

Keywords: beliefs, HIV, self-stigma, stigma, Zimbabwe

Procedia PDF Downloads 115
700 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova

Abstract:

The contamination of food by microbial agents is a common problem in the industry, especially regarding the elaboration of animal source products. Incorrect manipulation of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or, at least, slow down the growth of pathogens, especially deteriorating, infectious, or toxigenic bacteria. These methods are usually carried out under low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows the prediction of the impact of both the biotic and abiotic factors associated with animal source food processing. In order to accomplish this objective, the authors propose a three-dimensional differential equation model whose components are: bacterial growth; release, production, and artificial incorporation of bacteriocins; and changes in the pH level of the medium. These three dimensions are constantly influenced by the temperature of the medium. Secondly, this model is adapted to an idealized situation of cross-contamination in animal source food processing, the study agents being both the animal product and the contact surface. Thirdly, the stochastic simulations and the parametric sensitivity analysis are compared with reference data. The main result obtained from the analysis and simulations of the mathematical model was the finding that, although bacterial growth can be stopped at lower temperatures, even lower ones are needed to eradicate it.
However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials, while, on the other hand, higher temperatures accelerate bacterial growth. In other respects, the use of bacteriocins is an effective alternative in the short and medium terms. Moreover, a low pH level is an indicator of bacterial growth, since many deteriorating bacteria are lactic acid bacteria. Lastly, the processing times are a secondary agent of concern once the rest of the aforementioned agents are under control. Our main conclusion is that adapting a mathematical model to the context of the industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing method times. In addition, the proposed mathematical model provides a logistic framework of broad application, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact of allergenic foods.
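As a hedged illustration of the kind of three-state model described, and not the authors' actual equations, the sketch below couples logistic bacterial growth (with a temperature-dependent rate), bacteriocin accumulation, and medium acidification, integrated by forward Euler; every coefficient is an assumed placeholder:

```python
def growth_rate(T, mu_opt=0.8, T_min=4.0, T_opt=37.0):
    """Square-root-style temperature dependence of the specific growth
    rate (an assumed, illustrative form; per hour)."""
    if T <= T_min:
        return 0.0
    return mu_opt * ((T - T_min) / (T_opt - T_min)) ** 2

def simulate(T, hours=48.0, dt=0.01):
    """Toy three-state model at temperature T (°C): bacterial count n,
    bacteriocin concentration c, medium pH. Forward-Euler integration;
    all rate constants are illustrative placeholders."""
    n, c, ph = 1e3, 0.0, 6.5
    K, k_prod, k_inhib, k_acid = 1e8, 1e-9, 0.5, 2e-9
    mu = growth_rate(T)
    for _ in range(int(hours / dt)):
        dn = mu * n * (1 - n / K) - k_inhib * c * n   # growth minus inhibition
        dc = k_prod * n                               # bacteriocin release
        dph = -k_acid * n                             # acidification tracks biomass
        n = max(n + dn * dt, 0.0)
        c += dc * dt
        ph = max(ph + dph * dt, 3.5)                  # crude lower bound on pH
    return n, c, ph
```

Running the sketch at a refrigeration temperature versus an ambient one reproduces the qualitative behaviour reported: low temperature halts growth without eradicating the population, while the pH drop accompanies heavy growth.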

Keywords: bacteriocins, cross-contamination, mathematical model, temperature

Procedia PDF Downloads 144
699 Enhancing Algal Bacterial Photobioreactor Efficiency: Nutrient Removal and Cost Analysis Comparison for Light Source Optimization

Authors: Shahrukh Ahmad, Purnendu Bose

Abstract:

Algal-bacterial photobioreactors (ABPBRs) have emerged as a promising technology for sustainable biomass production and wastewater treatment. Nutrient removal is seldom performed in sewage treatment plants, so large volumes of nutrient-rich wastewater are discharged, which can lead to eutrophication; this is why the ABPBR plays a vital role in wastewater treatment. However, improving the efficiency of the ABPBR remains a significant challenge. This study aims to enhance ABPBR efficiency by focusing on two key aspects: nutrient removal and cost-effective optimization of the light source. By integrating nutrient removal and cost analysis for light source optimization, this study proposes practical strategies for improving ABPBR efficiency. To reduce organic carbon and convert ammonia to nitrates, domestic wastewater from a 130 MLD sewage treatment plant (STP) was aerated with a hydraulic retention time (HRT) of 2 days. The treated supernatant had approximate nitrate and phosphate values of 16 ppm as N and 6 ppm as P, respectively. This supernatant was then fed into the ABPBR, and the removal of nutrients (nitrate as N and phosphate as P) was observed using different colored LED bulbs, namely white, blue, red, yellow, and green. The ABPBR operated with a 9-hour light and 3-hour dark cycle, using only one color of bulb per cycle. The study found that the white LED bulb, with a photosynthetic photon flux density (PPFD) value of 82.61 µmol m⁻² s⁻¹, exhibited the highest removal efficiency. It achieved a removal rate of 91.56% for nitrate and 86.44% for phosphate, surpassing the other colored bulbs. Conversely, the green LED bulbs showed the lowest removal efficiencies, with 58.08% for nitrate and 47.48% for phosphate at an HRT of 5 days.
The quantum PAR (photosynthetically active radiation) meter measured the photosynthetic photon flux density for each colored bulb setting inside the photo chamber, confirming that the white LED bulbs operated over a wider wavelength band than the others. Furthermore, a cost comparison was conducted for each colored bulb setting. The study revealed that the white LED bulb had the lowest average cost (Indian Rupee) per unit light intensity (µmol m⁻² s⁻¹), at 19.40, while the green LED bulbs had the highest, at 115.11 INR per unit light intensity. Based on these comparative tests, it was concluded that the white LED bulbs were the most efficient and cost-effective light source for an algal photobioreactor. They can be effectively utilized for nutrient removal from secondary treated wastewater, which helps improve overall wastewater quality before it is discharged back into the environment.
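The two figures of merit used to compare the LED colors can be sketched in a few lines. The influent concentrations and reported percentages come from the abstract; the effluent concentration and the implied average bulb cost below are back-calculated for illustration only:

```python
# Sketch of the two comparison metrics: percent nutrient removal and
# average cost per unit light intensity (PPFD).
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percent removal of a nutrient across the photobioreactor."""
    return 100.0 * (c_in - c_out) / c_in

def cost_intensity_ratio(avg_cost_inr: float, ppfd: float) -> float:
    """Average cost (INR) per umol m^-2 s^-1 of PPFD."""
    return avg_cost_inr / ppfd

# White LED: influent 16 ppm N; the reported 91.56 % removal implies
# an effluent of roughly 1.35 ppm N (back-calculated, not measured).
print(round(removal_efficiency(16.0, 1.35), 1))

# White LED ratio reported as 19.40 INR per unit PPFD at 82.61 umol m^-2 s^-1,
# implying an average cost of about 1603 INR (assumed for illustration).
print(round(cost_intensity_ratio(1602.6, 82.61), 2))
```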

Keywords: algal bacterial photobioreactor, domestic wastewater, nutrient removal, led bulbs

Procedia PDF Downloads 79
698 The Impacts of Export in Stimulating Economic Growth in Ethiopia: ARDL Model Analysis

Authors: Natnael Debalklie Teshome

Abstract:

The purpose of the study was to empirically investigate the impacts of export performance and its volatility on economic growth in the Ethiopian economy. To do so, time-series data for the sample period 1974/75 – 2017/18 were collected from the databases and annual reports of the IMF, WB, NBE, MoFED, UNCTAD, and EEA. The extended Cobb-Douglas production function of the neoclassical growth model, framed under endogenous growth theory, was used to consider both the performance and instability aspects of export. First, unit root tests were conducted using the ADF and PP tests, and the data were found to be stationary, with a mix of I(0) and I(1). Then, the bound test and the Wald test were employed, and the results showed that long-run co-integration exists among the study variables. All the diagnostic test results also reveal that the model fulfills the criteria of a best-fitted model. Therefore, the ARDL model and the VECM were applied to estimate the long-run and short-run parameters, while the Granger causality test was used to test causality between the study variables. The empirical findings of the study reveal that, in the long run, only export and the coefficient of variation had significant impacts on RGDP, positive and negative respectively, while the other variables were found to have an insignificant impact on the economic growth of Ethiopia. In the short run, except for gross capital formation and the coefficient of variation, which have a highly significant positive impact, all other variables have a strongly significant negative impact on RGDP. This shows that exports had a strong, significant impact in both the short-run and long-run periods. However, their positive and statistically significant impact is observed only in the long run. Similarly, there was highly significant export fluctuation in both periods, while significant commodity concentration (CCI) was observed only in the short run.
Moreover, the Granger causality test reveals that unidirectional causality running from export performance to RGDP exists in the long run, and from both export and RGDP to CCI in the short run. Therefore, the export-led growth strategy should be sustained and strengthened. In addition, boosting the industrial sector is vital to bring about structural transformation. Hence, the government should provide incentive schemes and supportive measures to exporters to extract the spillover effects of exports. Greater emphasis should also be given to price-oriented diversification and to specialization in the major primary products in which the country has a comparative advantage, in order to reduce value-based instability in the export earnings of the country. The government should also strive to increase capital formation and human capital development, by enhancing investment in technology and in the quality of education, to accelerate the economic growth of the country.
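The unit root step above can be illustrated with a minimal Dickey-Fuller regression on synthetic data. This is a simplified sketch of the idea behind the ADF/PP tests, not the authors' estimation: the full ADF test adds lagged differences and deterministic terms, and inference uses Dickey-Fuller critical values rather than the normal distribution.

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic, no-constant case: regress
    dy_t on y_{t-1} and return the t-statistic of the slope.
    A strongly negative value argues against a unit root."""
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    beta = (ylag @ dy) / (ylag @ ylag)          # OLS slope without intercept
    resid = dy - beta * ylag
    s2 = (resid @ resid) / (len(dy) - 1)        # residual variance
    se = np.sqrt(s2 / (ylag @ ylag))            # standard error of the slope
    return beta / se

rng = np.random.default_rng(0)
e = rng.standard_normal(500)
random_walk = np.cumsum(e)   # I(1) series: t-statistic stays near zero
stationary = e               # I(0) series: strongly negative t-statistic
print(df_tstat(random_walk), df_tstat(stationary))
```

A mix of I(0) and I(1) variables, as found here, is precisely the situation in which the ARDL bounds-testing approach is appropriate.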

Keywords: export, economic growth, export diversification, instability, co-integration, granger causality, Ethiopian economy

Procedia PDF Downloads 77
697 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics

Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí

Abstract:

A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest in order to achieve efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. However, 3D fabric reinforcement is of particular interest because of the versatility in the weaving pattern, with binder yarn and in-plane yarn arrangements, to manufacture thick composite parts, overcome delamination limitations, improve toughness, etc. To predict the permeability based on the available pore spaces between the inter-yarn spaces, unit-cell-based computational fluid dynamics models have been used with the Stokes-Darcy formulation. Typically, the preform consists of an arrangement of yarns with spacing on the order of mm, wherein each yarn consists of thousands of filaments with spacing on the order of μm. The fluid flow during infusion exchanges mass between the intra- and inter-yarn channels, meaning there is no dead end to the flow between the mesopores in the inter-yarn space and the micropores within the yarn. Several studies have employed the Brinkman equation to take into account the flow through dual-scale porosity reinforcement when estimating permeability. Furthermore, to reduce the computational effort of dual-scale flow, a scale separation criterion based on the ratio of yarn permeability to yarn spacing was also proposed to delimit the dual-scale and negligible micro-scale flow regimes for the prediction of mesoscale permeability. In the present work, the influence of intra-yarn permeability on mesoscale permeability has been investigated through a systematic study of weft and warp yarn spacing in the plain weave, as well as the position of the binder yarn and the number of in-plane yarn layers in the 3D woven fabric.
The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with idealized yarn geometry implemented using the open-source software TexGen. Additionally, a scale separation criterion has been established based on various configurations of yarn permeability for the 3D fabric, with both isotropic and anisotropic yarns from Gebart's model. It was observed that the mesoscale permeability Kxx varies within 30% when isotropic porous yarns are considered for a 3D fabric with binder yarn. Furthermore, the permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern, and number of layers, with the aim of obtaining improved permeability and reduced void content during the LCM process.
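Since Gebart's model supplies the intra-yarn permeability, a brief sketch may help. The expressions below are the standard Gebart (1992) analytical formulas for a hexagonal fibre arrangement; the fibre radius and volume fraction are illustrative values, not those used in the study:

```python
import math

# Gebart's analytical model for intra-yarn (micro-scale) permeability,
# hexagonal fibre packing. Inputs: fibre radius r (m), fibre volume fraction vf.
def gebart_hexagonal(r_fibre: float, vf: float):
    """Return (K_parallel, K_perpendicular) in m^2."""
    c = 53.0                                    # shape factor, hexagonal array
    c1 = 16.0 / (9.0 * math.pi * math.sqrt(6.0))
    vf_max = math.pi / (2.0 * math.sqrt(3.0))   # maximum packing fraction
    k_par = (8.0 * r_fibre**2 / c) * (1.0 - vf) ** 3 / vf**2
    k_perp = c1 * (math.sqrt(vf_max / vf) - 1.0) ** 2.5 * r_fibre**2
    return k_par, k_perp

# Illustrative glass-fibre-like values (assumed, not from the study).
k_par, k_perp = gebart_hexagonal(r_fibre=3.5e-6, vf=0.6)
print(f"K_par = {k_par:.3e} m^2, K_perp = {k_perp:.3e} m^2")
```

The axial permeability exceeds the transverse one, which is the anisotropic yarn behaviour referred to in the scale separation study above.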

Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding

Procedia PDF Downloads 96
696 Use of Cassava Waste and Its Energy Potential

Authors: I. Inuaeyen, L. Phil, O. Eni

Abstract:

Fossil fuels have been the main source of global energy for many decades, accounting for about 80% of the global energy need. This is beginning to change, however, with increasing concern about greenhouse gas emissions, which come mostly from fossil fuel combustion. Greenhouse gases such as carbon dioxide are responsible for driving climate change. As a result, there has been a shift towards cleaner and renewable energy sources as a strategy for stemming greenhouse gas emissions into the atmosphere. The production of bio-products such as bio-fuel, bio-electricity, bio-chemicals, and bio-heat using biomass materials in accordance with the bio-refinery concept holds great potential for reducing the high dependence on fossil fuels. The bio-refinery concept promotes the efficient utilisation of biomass material for the simultaneous production of a variety of products in order to minimize or eliminate waste materials. This will ultimately reduce greenhouse gas emissions into the environment. In Nigeria, cassava solid waste from cassava processing facilities has been identified as a vital feedstock for the bio-refinery process. Cassava is a staple food in Nigeria and one of the foodstuffs most widely cultivated by farmers across the country. As a result, there is an abundant supply of cassava waste in Nigeria. In this study, the aim is to explore opportunities for converting cassava waste to a range of bio-products such as butanol, ethanol, electricity, heat, methanol, and furfural using a combination of biochemical, thermochemical, and chemical conversion routes. The best process scenario will be identified through the evaluation of economic analysis, energy efficiency, life cycle analysis, and social impact. The study will be carried out by developing a model representing different process options for cassava waste conversion to useful products. The model will be developed using the Aspen Plus process simulation software.
Process economic analysis will be done using the Aspen Icarus software. So far, a comprehensive survey of the literature has been conducted. This includes studies on the conversion of cassava solid waste to a variety of bio-products using different conversion techniques, cassava waste production in Nigeria, and the modelling and simulation of waste conversion to useful products, among others. Also, the statistical distribution of cassava solid waste production in Nigeria has been established, and key literature with useful parameters for developing the different cassava waste conversion processes has been identified. In future work, detailed modelling of the different process scenarios will be carried out, and the models will be validated using data from the literature and demonstration plants. A techno-economic comparison of the various process scenarios will then be carried out to identify the best scenario, using process economics, life cycle analysis, energy efficiency, and social impact as the performance indexes.

Keywords: bio-refinery, cassava waste, energy, process modelling

Procedia PDF Downloads 373
695 Flexible Ethylene-Propylene Copolymer Nanofibers Decorated with Ag Nanoparticles as Effective 3D Surface-Enhanced Raman Scattering Substrates

Authors: Yi Li, Rui Lu, Lianjun Wang

Abstract:

With the rapid development of the chemical industry, the consumption of volatile organic compounds (VOCs) has increased extensively. In the process of VOC production and application, plenty of them are released into the environment. As a result, this has led to pollution problems not only in soil and groundwater but also to risks for human health. Thus, it is important to develop a sensitive and cost-effective analytical method for trace VOC detection in the environment. Surface-enhanced Raman spectroscopy (SERS), one of the most sensitive optical analytical techniques, with rapid response, pinpoint accuracy, and noninvasive detection, has been widely used for ultratrace analysis. Based on plasmon resonance at the nanoscale metallic surface, SERS can detect even single molecules owing to the abundant nanogaps (i.e., 'hot spots') on the nanosubstrate. In this work, self-supported flexible silver nitrate (AgNO3)/ethylene-propylene copolymer (EPM) hybrid nanofibers were fabricated by electrospinning. After in-situ chemical reduction using ice-cold sodium borohydride as the reducing agent, numerous silver nanoparticles formed on the nanofiber surface. By adjusting the reduction time and AgNO3 content, the morphology and dimensions of the silver nanoparticles could be controlled. According to the principles of solid-phase extraction, hydrophobic substances preferentially partition into the hydrophobic EPM membrane in an aqueous environment, while water and other polar components are excluded from the analytes. Through enrichment by the EPM fibers, the number of hydrophobic molecules located at the 'hot spots' generated by the criss-crossed nanofibers is greatly increased, further enhancing the SERS signal intensity. The as-prepared Ag/EPM hybrid nanofibers were first employed to detect a common SERS probe molecule (p-aminothiophenol), with the detection limit down to 10⁻¹² M, demonstrating excellent SERS performance.
To further study the application of the fabricated substrate for monitoring hydrophobic substances in water, several typical VOCs, such as benzene, toluene, and p-xylene, were selected as model compounds. The results showed that the characteristic peaks of these target analytes in the mixed aqueous solution could be distinguished even at a concentration of 10⁻⁶ M after a multi-peak Gaussian fitting process, including C-H bending (850 cm⁻¹) and C-C ring stretching (1581 cm⁻¹, 1600 cm⁻¹) of benzene; C-H bending (844 cm⁻¹, 1151 cm⁻¹), C-C ring stretching (1001 cm⁻¹), and CH3 bending vibration (1377 cm⁻¹) of toluene; and C-H bending (829 cm⁻¹) and C-C stretching (1614 cm⁻¹) of p-xylene. The SERS substrate has the remarkable advantage of combining the enrichment capacity of the EPM with the Raman enhancement of the Ag nanoparticles. Meanwhile, the huge specific surface area resulting from electrospinning is beneficial for increasing the number of adsorption sites and promotes 'hot spot' formation. In summary, this work shows powerful potential for the rapid, on-site, and accurate detection of trace VOCs using a portable Raman spectrometer.
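The multi-peak Gaussian fitting step used to resolve overlapping Raman bands can be sketched on a synthetic spectrum. The two peak positions below (near the 1001 cm⁻¹ toluene ring-stretching band, plus an assumed neighbour at 1030 cm⁻¹) and the noise level are illustrative, not the measured spectrum:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-Gaussian model for a pair of overlapping Raman bands.
def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    g = lambda a, c, w: a * np.exp(-((x - c) ** 2) / (2.0 * w**2))
    return g(a1, c1, w1) + g(a2, c2, w2)

# Synthetic spectrum: peak centres, widths, and noise are assumed values.
x = np.linspace(950, 1080, 400)          # Raman shift axis, cm^-1
rng = np.random.default_rng(1)
true_params = (1.0, 1001.0, 4.0, 0.6, 1030.0, 5.0)
y = two_gaussians(x, *true_params) + 0.02 * rng.standard_normal(x.size)

# Fit with rough initial guesses for amplitudes, centres, and widths.
popt, _ = curve_fit(two_gaussians, x, y, p0=(0.8, 995.0, 5.0, 0.5, 1035.0, 5.0))
print(round(popt[1], 1), round(popt[4], 1))   # recovered peak centres
```

Decomposing the spectrum this way is what allows closely spaced analyte bands to be assigned individually, as in the peak list above.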

Keywords: electrospinning, ethylene-propylene copolymer, silver nanoparticles, SERS, VOCs

Procedia PDF Downloads 160
694 Challenging Conventions: Rethinking Literature Review Beyond Citations

Authors: Hassan Younis

Abstract:

Purpose: The objective of this study is to review influential papers in the sustainability and supply chain studies domain, leveraging insights from this review to develop a structured framework for academics and researchers. This framework aims to assist scholars in identifying the most impactful publications for their scholarly pursuits. Subsequently, the study applies the developed framework to selected scholarly articles within the sustainability and supply chain studies domain to evaluate its efficacy, practicality, and reliability. Design/Methodology/Approach: Utilizing the "Publish or Perish" tool, a search was conducted to locate papers incorporating "sustainability" and "supply chain" in their titles. After rigorous filtering steps, a panel of university professors identified five crucial criteria for evaluating research robustness: average yearly citation counts (25%), scholarly contribution (25%), alignment of findings with objectives (15%), methodological rigor (20%), and journal impact factor (15%). These five evaluation criteria are abbreviated as the "ACMAJ" framework. Each paper then received a tiered score (1-3) for each criterion; the scores were normalized within their categories and summed using weighted averages to calculate a Final Normalized Score (FNS). This systematic approach allows for objective comparison and ranking of the research based on its impact, novelty, rigor, and publication venue. Findings: The study's findings highlight the lack of structured frameworks for assessing influential sustainability research in supply chain management, which often results in a dependence on citation counts alone. In response, a complete model that incorporates five essential criteria has been proposed. Through a methodical trial on selected academic articles in the field of sustainability and supply chain studies, the model demonstrated its effectiveness as a tool for identifying and selecting influential research papers that warrant additional attention.
This work aims to fill a significant gap in existing techniques by providing a more comprehensive approach to identifying and ranking influential papers in the field. Practical Implications: The developed framework helps scholars identify the most influential sustainability and supply chain publications. Its validation serves the academic community by offering a credible tool and helping researchers, students, and practitioners find and choose influential papers. This approach supports literature reviews in the field and guides study selection. Analysis of major trends and topics deepens our grasp of the changing terrain of this critical study area. Originality/Value: The framework stands as a unique contribution to academia, offering scholars an important new tool to identify and validate influential publications. Its distinctive capacity to efficiently guide scholars, learners, and professionals in selecting noteworthy publications, coupled with the examination of key patterns and themes, adds depth to our understanding of the evolving landscape in this critical field of study.
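The ACMAJ scoring can be sketched directly from the weights given above. The abstract does not fully specify how scores are "normalized within category"; as an assumption, each tier (1-3) is divided here by the category maximum of 3, so a paper scoring the top tier on every criterion gets an FNS of 1.0:

```python
# Weights for the five ACMAJ criteria, as stated in the abstract.
WEIGHTS = {
    "avg_yearly_citations": 0.25,
    "scholarly_contribution": 0.25,
    "findings_objectives_alignment": 0.15,
    "methodological_rigor": 0.20,
    "journal_impact_factor": 0.15,
}

def final_normalized_score(tiers: dict) -> float:
    """Weighted average of tier scores (1-3), each normalized to [0, 1].
    The divide-by-3 normalization is an assumption, not spelled out
    in the abstract."""
    assert set(tiers) == set(WEIGHTS), "one tier score per criterion"
    return sum(WEIGHTS[k] * tiers[k] / 3.0 for k in WEIGHTS)

# Hypothetical paper: tier scores assigned by a review panel.
paper = {
    "avg_yearly_citations": 3,
    "scholarly_contribution": 2,
    "findings_objectives_alignment": 3,
    "methodological_rigor": 2,
    "journal_impact_factor": 1,
}
print(round(final_normalized_score(paper), 3))
```

Ranking candidate papers by FNS then gives the objective comparison the framework is designed for.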

Keywords: supply chain management, sustainability, framework, model

Procedia PDF Downloads 52
693 Mangroves in the Douala Area, Cameroon: The Challenges of Open Access Resources for Forest Governance

Authors: Bissonnette Jean-François, Dossa Fabrice

Abstract:

The project focuses on analyzing the spatial and temporal evolution of mangrove forest ecosystems near the city of Douala, Cameroon, in response to increasing human and environmental pressures. The selected study area, located in the Wouri River estuary, has a unique combination of economic importance and ecological prominence. The study gathered valuable insights by conducting semi-structured interviews with resource operators and local officials. A thorough analysis of socio-economic data, farmer surveys, and satellite-derived information was carried out using quantitative approaches in Excel and SPSS. Simultaneously, qualitative data were subjected to rigorous classification and correlation with other sources. The use of ArcGIS and CorelDraw facilitated the visual representation of the gradual changes seen in the various land cover classifications. The research reveals the complex processes that characterize the mangrove ecosystems of Manoka and Cape Cameroon Islands. The lack of regulation of urbanization and the continuous growth of infrastructure have led to a significant increase in land conversion, with negative impacts on natural landscapes and forests. Repeated instances of flooding and coastal erosion have further shaped landscape alterations, fostering the proliferation of water and mudflat areas. The unregulated use of mangrove resources is a significant factor in the degradation of these ecosystems. Activities including the use of wood for smoking and fishing, together with the coastal pollution resulting from the absence of waste collection, have had a significant influence. In addition, forest operators contribute to the degradation of vegetation, exacerbating the harmful impact of invasive species on the ecosystem. Strategic interventions are necessary to guarantee the sustainable management of these ecosystems.
The proposals include advocating for sustainable wood exploitation using appropriate techniques, along with regeneration, and enforcing rules to prevent the overexploitation of wood. By implementing these measures, the ecological balance can be preserved, safeguarding the long-term viability of these precious ecosystems. On a conceptual level, this paper uses the framework developed by Elinor Ostrom and her colleagues to investigate the consequences of open-access resources, where local actors have not been able to enforce measures to prevent the overexploitation of mangrove wood resources. Governmental authorities have demonstrated limited capacity to enforce the sustainable management of wood resources and have not been able to establish effective relationships with the local fishing communities or with the communities involved in the purchase of wood. As a result, wood resources in the mangrove areas remain largely open to access, while the authorities monitor neither the volumes of wood extracted nor the methods of exploitation. There have been only limited and one-off attempts at forest restoration, with no significant consequence for mangrove forest dynamics.

Keywords: mangroves, forest management, governance, open access resources, Cameroon

Procedia PDF Downloads 63
692 Design of Evaluation for Ehealth Intervention: A Participatory Study in Italy, Israel, Spain and Sweden

Authors: Monika Jurkeviciute, Amia Enam, Johanna Torres Bonilla, Henrik Eriksson

Abstract:

Introduction: Many evaluations of eHealth interventions conclude that the evidence for improved clinical outcomes is limited, especially when the intervention is short, such as one year. Often, the evaluation design does not address the feasibility of achieving clinical outcomes. Evaluations are designed to reflect the clinical goals of the intervention without utilizing the opportunity to illuminate effects on organizations and costs. A comprehensive evaluation design can better support decision-making regarding the effectiveness and potential transferability of eHealth. Hence, the purpose of this paper is to present a feasible and comprehensive evaluation design for an eHealth intervention, including the design process in different contexts. Methodology: The situation of limited feasibility of clinical outcomes was foreseen in the European Union funded project "DECI" ("Digital Environment for Cognitive Inclusion"), run under the "Horizon 2020" program with the aim of defining and testing a digital environment platform, within corresponding care models, that helps elderly people live independently. A complex intervention of eHealth implementation into elaborate care models in four different countries was planned for one year. To design the evaluation, a participative approach was undertaken using Pettigrew's lens of change and transformation, covering context, process, and content. Through a series of workshops, observations, interviews, and document analysis, as well as a review of the scientific literature, a comprehensive evaluation design was created. Findings: The findings indicate that, in order to obtain evidence on clinical outcomes, eHealth interventions should last longer than one year. The content of the comprehensive evaluation design includes a collection of qualitative and quantitative methods for data gathering that illuminates non-medical aspects.
Furthermore, it contains communication arrangements to discuss the results and continuously improve the evaluation design, as well as procedures for monitoring and improving the data collection during the intervention. The process of the comprehensive evaluation design consists of four stages: (1) analysis of the current state in the different contexts, including measurement systems, stakeholder expectations and profiles, organizational ambitions to change due to eHealth integration, and the organizational capacity to collect data for evaluation; (2) a workshop with project partners to discuss the as-is situation in relation to the project goals; (3) development of general and customized sets of relevant performance measures, questionnaires, and interview questions; (4) setting up procedures and monitoring systems for the interventions. Lastly, strategies are presented for handling challenges during the evaluation design process in the four countries. The evaluation design needs to consider contextual factors such as project limitations and differences between pilot sites in terms of eHealth solutions, patient groups, care models, and national and organizational cultures and settings. This implies the need for a flexible approach to evaluation design, to enable judgment of the effectiveness and the potential for adoption and transferability of eHealth. In summary, this paper provides learning opportunities for future evaluation designs of eHealth interventions in different national and organizational settings.

Keywords: ehealth, elderly, evaluation, intervention, multi-cultural

Procedia PDF Downloads 324
691 Bituminous Geomembranes: Sustainable Products for Road Construction and Maintenance

Authors: Ines Antunes, Andrea Massari, Concetta Bartucca

Abstract:

The role of greenhouse gases (GHGs) in the atmosphere has been well known since the 19th century; however, researchers began to relate them to climate change only in the second half of the following century. From that point, scientists started to correlate the presence of GHGs such as CO₂ with global warming phenomena. This has raised the awareness not only of experts in the field but also of public opinion, which is becoming more and more sensitive to environmental pollution and sustainability issues. Nowadays, the reduction of GHG emissions is one of the principal objectives of EU nations. The target is an 80% reduction of emissions by 2050, reaching the important goal of carbon neutrality. The road sector is responsible for an important share of those emissions (about 20%). Most of this is due to traffic, but a considerable contribution also comes, directly or indirectly, from road construction and maintenance. The choice of raw materials and the reuse of post-consumer plastic, as well as smarter road design, can contribute substantially to reducing the carbon footprint. Bituminous membranes can be successfully used as reinforcement systems in asphalt layers to improve road pavement performance against cracking. Composite materials coupling membranes with grids and/or fabrics are able to combine the improved tensile properties of the reinforcement with the stress-absorbing and waterproofing effects of membranes. Polyglass, with its brand dedicated to road construction and maintenance, Polystrada, has done more than this. The company's target was not only to focus sustainability on the final application but also to implement a greener mentality from cradle to grave. Starting from production, Polyglass has made important improvements aimed at increasing efficiency and minimizing waste.
The installation of a trigeneration plant, the use of selected production scraps inside the products, and the reduction of emissions into the environment are among the company's main efforts to reduce the impact of final product build-up. Moreover, installing Polystrada products brings a significant improvement in road lifetime. This has an impact not only on the number of maintenance or renewal operations required (build less) but also on the traffic density caused by works and road deviations during operations. At the end of a road's life, Polystrada products can be 100% recycled and milled with the classical systems in use, without changing the normal maintenance procedures. In this work, all these contributions were quantified in terms of CO₂ emissions through an LCA analysis. The data obtained were compared with a classical system and with the standard production of a membrane. The results show that the use of Polyglass products for road maintenance and construction gives a significant reduction of emissions when the membrane is installed under the road wearing course.

Keywords: CO₂ emission, LCA, maintenance, sustainability

Procedia PDF Downloads 65