Search results for: systolic blood pressure; percent body fat and vital capacity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15279

129 ESRA: An End-to-End System for Re-identification and Anonymization of Swiss Court Decisions

Authors: Joel Niklaus, Matthias Sturmer

Abstract:

The publication of judicial proceedings is a cornerstone of many democracies. It enables the court system to be held accountable by ensuring that justice is administered in accordance with the laws. Equally important is privacy, as a fundamental human right (Article 12 of the Universal Declaration of Human Rights). Therefore, it is important that the parties (especially minors, victims, or witnesses) involved in these court decisions be anonymized securely. Today, the anonymization of court decisions in Switzerland is performed either manually or semi-automatically using primitive software. While much research has been conducted on anonymization for tabular data, the literature on anonymization for unstructured text documents is thin and virtually non-existent for court decisions. In 2019, it was shown that manual anonymization is not secure enough: in 21 of 25 attempted Swiss federal court decisions, the pharmaceutical companies, pharmaceuticals, and legal parties involved could be manually re-identified. This was achieved by linking the decisions with external databases using regular expressions. An automated re-identification system serves as an automated test for the safety of existing anonymizations and thus promotes the right to privacy. Manual anonymization is very expensive (recurring annual costs of over CHF 20M in Switzerland alone, according to an estimation). Consequently, many Swiss courts publish only a fraction of their decisions. An automated anonymization system reduces these costs substantially, leading to more capacity for publishing court decisions much more comprehensively. For the re-identification system, topic modeling with latent Dirichlet allocation is used to cluster over 500K Swiss court decisions into meaningful related categories. 
A comprehensive knowledge base with publicly available data (such as social media, newspapers, government documents, geographical information systems, business registers, online address books, obituary portal, web archive, etc.) is constructed to serve as an information hub for re-identifications. For the actual re-identification, a general-purpose language model is fine-tuned on the respective part of the knowledge base for each category of court decisions separately. The input to the model is the court decision to be re-identified, and the output is a probability distribution over named entities constituting possible re-identifications. For the anonymization system, named entity recognition (NER) is used to recognize the tokens that need to be anonymized. Since the focus lies on Swiss court decisions in German, a corpus for Swiss legal texts will be built for training the NER model. The recognized named entities are replaced by the category determined by the NER model and an identifier to preserve context. This work is part of an ongoing research project conducted by an interdisciplinary research consortium. Both a legal analysis and the implementation of the proposed system design ESRA will be performed within the next three years. This study introduces the system design of ESRA, an end-to-end system for re-identification and anonymization of Swiss court decisions. Firstly, the re-identification system tests the safety of existing anonymizations and thus promotes privacy. Secondly, the anonymization system substantially reduces the costs of manual anonymization of court decisions and thus introduces a more comprehensive publication practice.
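The replacement step described above (swapping each recognized entity for its NER category plus an identifier, to preserve context) can be sketched in a few lines of Python. This is an illustrative sketch only: the entity list, category names, and placeholder format are hypothetical and not taken from the actual ESRA implementation.

```python
def anonymize(text, entities):
    """Replace recognized entities with category placeholders plus an identifier.

    entities: list of (surface_form, category) pairs, assumed to be the output
    of a NER model (hypothetical format, for illustration only).
    """
    mapping = {}   # surface form -> placeholder
    counters = {}  # category -> running identifier
    for surface, category in entities:
        if surface not in mapping:
            counters[category] = counters.get(category, 0) + 1
            mapping[surface] = "[%s_%d]" % (category, counters[category])
    # Replace longer surface forms first so substrings do not clobber them;
    # a production system would use token offsets instead of str.replace.
    for surface in sorted(mapping, key=len, reverse=True):
        text = text.replace(surface, mapping[surface])
    return text
```

Assigning a per-category counter rather than a single opaque token is what preserves context: a reader can still tell that two mentions refer to the same (anonymized) party.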

Keywords: artificial intelligence, courts, legal tech, named entity recognition, natural language processing, privacy, topic modeling

Procedia PDF Downloads 138
128 The Stability of Vegetable-Based Synbiotic Drink during Storage

Authors: Camelia Vizireanu, Daniela Istrati, Alina Georgiana Profir, Rodica Mihaela Dinica

Abstract:

Globally, there is great interest in promoting the consumption of fruit and vegetables to improve health. Due to their content of essential compounds such as antioxidants, important amounts of fruits and vegetables should be included in the daily diet. Juices are good sources of vitamins and can also help increase overall fruit and vegetable consumption. Starting from this trend (the introduction of vegetables and fruits into the daily diet) as well as the desire to diversify the range of functional products for both adults and children, a fermented juice based on root vegetables was made using probiotic microorganisms, with potential beneficial effects in the diet of children, vegetarians, and people with lactose intolerance. The three vegetables selected for this study, red beet, carrot, and celery, bring a significant contribution of functional compounds such as carotenoids, flavonoids, betalain, vitamins B and C, minerals, and fiber. Through fermentation, the functional value of the vegetable juice increases due to the improved stability of these compounds. The combination of probiotic microorganisms and vegetable fibers resulted in a nutrient-rich synbiotic product. The stability of the nutritional and sensory qualities of the obtained synbiotic product has been tested throughout its shelf life. 
The evaluation of the physico-chemical changes of the synbiotic drink during storage confirmed that: (i) vegetable juice enriched with honey and vegetable pulp is an important source of nutritional compounds, especially carbohydrates and fiber; (ii) the microwave treatment used to inhibit pathogenic microflora did not significantly affect nutritional compounds in the vegetable juice, as vitamin C concentration remained at baseline and beta-carotene concentration increased due to increased bioavailability; (iii) fermentation improved the nutritional quality of the vegetable juice by increasing the content of B vitamins, polyphenols, and flavonoids, and the juice retained good antioxidant capacity throughout the shelf life; (iv) the FTIR and Raman spectra corroborated the results obtained using physicochemical methods. Based on the analysis of IR absorption frequencies, the most striking bands belong to the frequencies 3330 cm⁻¹, 1636 cm⁻¹ and 1050 cm⁻¹, specific for groups of compounds such as polyphenols, carbohydrates, fatty acids, and proteins. Statistical data processing revealed a good correlation between the content of flavonoids, betalain, β-carotene, ascorbic acid and polyphenols, the fermented juice having a stable antioxidant activity. Also, principal components analysis showed that there was a negative correlation between the evolution of the concentration of B vitamins and antioxidant activity. Acknowledgment: This study has been funded by the Francophone University Agency, Project Réseau régional dans le domaine de la santé, la nutrition et la sécurité alimentaire (SaIN), No. at Dunarea de Jos University of Galati 21899/ 06.09.2017 and by the Sectorial Operational Programme Human Resources Development of the Romanian Ministry of Education, Research, Youth and Sports through the Financial Agreement POSDRU/159/1.5/S/132397 ExcelDOC.
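The correlations reported above rest on the Pearson coefficient. As a minimal sketch, with made-up illustrative numbers rather than the study's data, the coefficient between two measured storage-time series can be computed as:

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical measurements over storage time:
# total polyphenols (mg/L) vs. antioxidant activity (% inhibition)
polyphenols = [412.0, 405.0, 398.0, 390.0, 386.0]
antioxidant = [78.0, 76.5, 74.8, 72.9, 72.1]
r = pearson_r(polyphenols, antioxidant)
```

A value of r near +1 would correspond to the "good correlation" reported between polyphenol content and antioxidant activity, while the negative correlation observed for B vitamins would show up as r < 0.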

Keywords: bioactive compounds, fermentation, synbiotic drink from vegetables, stability during storage

Procedia PDF Downloads 141
127 Study of the Diaphragm Flexibility Effect on the Inelastic Seismic Response of Thin Wall Reinforced Concrete Buildings (TWRCB): A Purpose to Reduce the Uncertainty in the Vulnerability Estimation

Authors: A. Zapata, Orlando Arroyo, R. Bonett

Abstract:

Over the last two decades, the growing demand for housing in Latin American countries has led to the development of construction projects based on low- and medium-rise buildings with thin reinforced concrete walls. This system, known as Thin Wall Reinforced Concrete Buildings (TWRCB), uses walls with thicknesses from 100 to 150 millimetres, with flexural reinforcement formed by welded wire mesh (WWM) with diameters between 5 and 7 millimetres, arranged in one or two layers. These walls often have irregular structural configurations, including combinations of rectangular shapes. Experimental and numerical research conducted in regions where this structural system is commonplace indicates inherent weaknesses, such as limited ductility due to the WWM reinforcement and thin element dimensions. Because of its complexity, numerical analyses have relied on two-dimensional models that do not explicitly account for the floor system, even though it plays a crucial role in distributing seismic forces among the resisting elements; instead, these analyses assume a rigid diaphragm hypothesis. To study this effect, two case-study buildings were selected, with the low-rise and mid-rise characteristics of TWRCB in Colombia. The buildings were analyzed in OpenSees using the MVLEM-3D element for walls and shell elements to simulate the slabs, so that the diaphragm coupling effect is included in the nonlinear behaviour. Three cases are considered: a) models without a slab, b) models with rigid slabs, and c) models with flexible slabs. Incremental static (pushover) and nonlinear dynamic analyses were carried out using a set of 44 far-field ground motions from FEMA P-695, scaled by factors of 1.0 and 1.5 to consider the probability of collapse for the design base earthquake (DBE) and the maximum considered earthquake (MCE), according to the site locations and hazard zone of the archetypes in the Colombian NSR-10. 
Base shear capacity, maximum roof displacement, individual wall base shear demands, and probabilities of collapse were calculated to evaluate the effect of absent, rigid, and flexible slabs on the nonlinear behaviour of the archetype buildings. The pushover results show that the buildings exhibit an overstrength between 1.1 and 2 when the slab is considered explicitly, depending on the plan configuration of the structural walls; additionally, the nonlinear behaviour without a slab is more conservative than when the slab is represented. Including the flexible slab in the analysis highlights the importance of considering the slab's contribution to the distribution of shear forces between structural elements according to their design resistance and rigidity. The dynamic analysis revealed that including the slab reduces the collapse probability of this system, owing to lower displacements and deformations, enhancing the safety of residents and the seismic performance. Including the slab in the model is therefore important to capture the real effect of coupling on the distribution of shear forces in the walls, to estimate the correct nonlinear behaviour of this system, and to proportion the correct resistance and rigidity of the elements in design, reducing the possibility of damage to the elements during an earthquake.
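The overstrength factor quoted above is simply the peak base shear reached in the pushover analysis divided by the design base shear. A minimal sketch, with hypothetical numbers rather than the study's results:

```python
def overstrength(pushover_base_shear, design_base_shear):
    # Overstrength factor: peak base shear reached along the pushover curve
    # divided by the design base shear
    return max(pushover_base_shear) / design_base_shear

# Hypothetical pushover curve (base shear in kN, sampled at increasing roof
# displacement) and a hypothetical design base shear of 100 kN
curve = [0.0, 50.0, 120.0, 150.0, 140.0]
omega = overstrength(curve, 100.0)
```

A value of omega between 1.1 and 2, as reported, means the structure mobilizes 10% to 100% more base shear than it was designed for before its capacity degrades.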

Keywords: thin wall reinforced concrete buildings, coupling slab, rigid diaphragm, flexible diaphragm

Procedia PDF Downloads 59
126 Keratin Reconstruction: Evaluation of Green Peptides Technology on Hair Performance

Authors: R. Di Lorenzo, S. Laneri, A. Sacchi

Abstract:

Hair surface properties affect hair texture and shine, whereas the healthy state of the hair cortex sways hair ends. Even if cosmetic treatments are intrinsically safe, they can potentially damage the hair fibers. Loss of luster, frizz, split ends, and other hair problems are particularly prevalent among people who repeatedly alter the natural style of their hair or among people with intrinsically weak hair. Technological and scientific innovations in hair care thus become invaluable allies to preserve its natural well-being and shine. The study evaluated restoring keratin-like ingredients that improve the structural integrity of hair fibers, increase tensile strength, and improve hair manageability and moisturization. The hair shaft is composed of 65-95% keratin, which gives the hair resistance, elasticity, and plastic properties and also contributes to its waterproofing. Providing exogenous keratin is, therefore, a practical approach to protect and nourish the hair. Analyzing the amino acid composition of keratin reveals a high frequency of hydrophobic amino acids, confirming the critical role of interactions, mainly hydrophobic, between cosmetic products and hair. The active ingredient analyzed comes from vegetable proteins through an enzymatic cut process that selects only oligo- and polypeptides (> 3500 kDa) rich in amino acids with apolar hydrocarbon or sulfur-containing side chains. These are the amino acids most expressed in the keratin structure of the hair shaft, which ensures the greatest possible compatibility with the target substrate. Given the biological variability of the sources, it is difficult to define a constant and reproducible molecular formula for the product; still, it consists of hydroxypropyltrimonium vegetable peptides with keratin-like performance. 
Twenty natural hair tresses (30 cm in length and 0.50 g in weight) were treated with the investigated products (5% v/v aqueous solution) following a specific protocol and compared with non-treated (Control) and benchmark-keratin-treated strands (Benchmark). Their brightness, moisture content, cortical and surface integrity, and tensile strength were evaluated and statistically compared. Keratin-like treated hair tresses showed better results than the other two groups (Control and Benchmark). The product improves the surface with significant regularization of the cuticle closure, improves the filling of the cortex and the peri-medullar area, gives a highly organized and tidy structure, delivers a significant amount of sulfur to the hair, provides more efficient moisturization and imbibition power, and increases hair brightness. The quaternized hydroxypropyltrimonium group added to the C-terminal end interacts with the negative charges that form on the hair after washing, when it is disheveled and tangled. These interactions anchor the product to the hair surface, keeping the cuticles adhered to the shaft. Their small size allows the peptides to penetrate and give body to the hair, together with a conditioning effect that gives an image of healthy hair. The results suggest that the product is a valid ally in numerous restructuring/conditioning, shaft-protection, and straightener/dryer-damage-prevention hair care products.

Keywords: conditioning, hair damage, hair, keratin, polarized light microscopy, scanning electron microscope, thermogravimetric analysis

Procedia PDF Downloads 112
125 Unique Interprofessional Mental Health Education Model: A Pre/Post Survey

Authors: Michele L. Tilstra, Tiffany J. Peets

Abstract:

Interprofessional collaboration in behavioral healthcare education is increasingly recognized for its value in training students to address diverse client needs. While interprofessional education (IPE) is well-documented in occupational therapy education to address physical health, limited research exists on collaboration with counselors to address mental health concerns and the psychosocial needs of individuals receiving care. Counseling education literature primarily examines the collaboration of counseling students with psychiatrists, psychologists, social workers, and marriage and family therapists. This pretest/posttest survey research study explored changes in attitudes toward interprofessional teams among 56 Master of Occupational Therapy (MOT) (n = 42) and Counseling and Human Development (CHD) (n = 14) students participating in the Counselors and Occupational Therapists Professionally Engaged in the Community (COPE) program. The COPE program was designed to strengthen the behavioral health workforce in high-need and high-demand areas. Students accepted into the COPE program were divided into small MOT/CHD groups to complete multiple interprofessional multicultural learning modules using videos, case studies, and online discussion board posts. The online modules encouraged reflection on various behavioral healthcare roles, benefits of team-based care, cultural humility, current mental health challenges, personal biases, power imbalances, and advocacy for underserved populations. Using the Student Perceptions of Interprofessional Clinical Education- Revision 2 (SPICE-R2) scale, students completed pretest and posttest surveys using a 5-point Likert scale (Strongly Agree = 5 to Strongly Disagree = 1) to evaluate their attitudes toward interprofessional teamwork and collaboration. 
The SPICE-R2 measured three different factors: interprofessional teamwork and team-based practice (Team), roles/responsibilities for collaborative practice (Roles), and patient outcomes from collaborative practice (Outcomes). The mean total scores for all students improved from 4.25 (pretest) to 4.43 (posttest); Roles improved from 3.88 to 4.30 and Outcomes from 4.08 to 4.36, while Team declined slightly from 4.66 to 4.58. A paired t-test analysis of the total mean scores resulted in a t-statistic of 2.54, which exceeded both the one-tail and two-tail critical values, indicating statistical significance (p = .001). When the factors of the SPICE-R2 were analyzed separately, only Roles (t = 4.08, p = .0001) and Outcomes (t = 3.13, p = .002) were statistically significant. The item 'I understand the roles of other health professionals' showed the most improvement, from a mean score across all students of 3.76 (pretest) to 4.46 (posttest). The significant improvement in students' attitudes toward interprofessional teams suggests that the unique integration of OT and CHD students in the COPE program effectively develops a better understanding of the collaborative roles necessary for holistic client care. These results support the importance of IPE through structured, engaging interprofessional experiences. Such experiences are essential for enhancing students' readiness for collaborative practice and align with accreditation standards requiring interprofessional education in OT and CHD programs to prepare practitioners for team-based care. The findings contribute to the growing body of evidence supporting the integration of IPE in behavioral healthcare curricula to improve holistic client care and encourage students to engage in collaborative practice across healthcare settings.
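The paired t-test used above compares each student's pretest and posttest scores on the same scale. A minimal sketch of the statistic follows; the score lists are hypothetical, since the study's raw data are not given in the abstract.

```python
import math

def paired_t(pre, post):
    # Paired-samples t statistic: mean of the per-subject differences
    # divided by the standard error of those differences
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical pretest/posttest means on a 5-point Likert scale
pre = [3.5, 4.0, 4.5, 3.8]
post = [4.2, 4.5, 4.6, 4.4]
t_stat = paired_t(pre, post)
```

The resulting t statistic would then be compared against the critical value for n - 1 degrees of freedom, as the study does for its Total, Roles, and Outcomes scores.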

Keywords: behavioral healthcare, counseling education, interprofessional education, mental health education, occupational therapy education

Procedia PDF Downloads 24
124 'Sextually' Active: Teens, 'Sexting' and Gendered Double Standards in the Digital Age

Authors: Annalise Weckesser, Alex Wade, Clara Joergensen, Jerome Turner

Abstract:

Introduction: Digital mobile technologies afford Generation M a number of opportunities in terms of communication, creativity and connectivity in their social interactions. Yet these young people’s use of such technologies is often the source of moral panic with accordant social anxiety especially prevalent in media representations of teen ‘sexting,’ or the sending of sexually explicit images via smartphones. Thus far, most responses to youth sexting have largely been ineffective or unjust with adult authorities sometimes blaming victims of non-consensual sexting, using child pornography laws to paradoxically criminalise those they are designed to protect, and/or advising teenagers to simply abstain from the practice. Prevention strategies are further skewed, with sex education initiatives often targeted at girls, implying that they shoulder the responsibility of minimising the risks associated with sexting (e.g. revenge porn and sexual predation). Purpose of Study: Despite increasing public interest and concern about ‘teen sexting,’ there remains a dearth of research with young people regarding their experiences of navigating sex and relationships in the current digital media landscape. Furthermore, young people's views on sexting are rarely solicited in the policy and educational strategies aimed at them. To address this research-policy-education gap, an interdisciplinary team of four researchers (from anthropology, media, sociology and education) have undertaken a peer-to-peer research project to co-create a sexual health intervention. Methods: In the winter of 2015-2016, the research team conducted serial group interviews with four cohorts of students (aged 13 to 15) from a secondary school in the West Midlands, UK. To facilitate open dialogue, girls and boys were interviewed separately, and each group consisted of no more than four pupils. 
The team employed a range of participatory techniques to elicit young people's views on sexting, its consequences, and its interventions. A final focus group session was conducted with all 14 male and female participants to explore developing a peer-to-peer 'safe sexting' education intervention. Findings: This presentation will highlight the ongoing, 'old school' sexual double standards at work within this new digital frontier. In the sharing of 'nudes' (teens' preferred term to 'sexting') via social media apps (e.g. Snapchat and WhatsApp), girls felt sharing images was inherently risky and feared being blamed and 'slut-shamed.' In contrast, boys were seen to gain in social status if they accumulated nudes of female peers. Further, if boys had nudes of themselves shared without consent, they felt they were expected to simply 'tough it out.' The presentation will also explore what forms of support teens desire to help them in their day-to-day navigation of these digitally mediated, heteronormative performances of teen femininity and masculinity expected of them. Conclusion: This is the first research project within the UK conducted with, rather than about, teens and the phenomenon of sexting. It marks a timely and important contribution to the nascent but growing body of knowledge on gender, sexual politics and the digital mobility of sexual images created by and circulated amongst young people.

Keywords: teens, sexting, gender, sexual politics

Procedia PDF Downloads 217
123 Economic Analysis of a Carbon Abatement Technology

Authors: Hameed Rukayat Opeyemi, Pericles Pilidis Pagone Emmanuele, Agbadede Roupa, Allison Isaiah

Abstract:

Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959, and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero emission power plant. The advanced zero emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process. The membrane separation process was first introduced in 1899, when Walther Hermann Nernst investigated electric current between metals and solutions. He found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In the bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low-carbon cycle known as the advanced zero emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway, and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP drew a lot of attention because of its ability to capture ~100% of CO2; it also boasts an estimated 30-50% cost reduction compared to other carbon abatement technologies, its efficiency penalty is not as large as those of its counterparts, and it achieves almost zero NOx emissions due to very low nitrogen concentrations in the working fluid. The advanced zero emission power plant differs from a conventional gas turbine in that its combustor is substituted with the mixed conductive membrane (MCM) reactor. 
The MCM reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air preheater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger, and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the temperature is also increased to facilitate oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to the inlet of the LTHX. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle was developed in Fortran, and the economic analysis was conducted using Excel and MATLAB, followed by an optimization case study. Four possible layouts were considered: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating (sequential burning) layout (AZEP 85%, 85% CO2 capture), and the pre-expansion reheating (sequential burning) layout with flue gas turbine (AZEP 85%, 85% CO2 capture). This paper discusses a Monte Carlo risk analysis of these four layouts of the AZEP cycle.
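A Monte Carlo risk analysis of this kind propagates uncertainty in economic inputs through the cash-flow model and inspects the resulting distribution. A minimal sketch, sampling only the discount rate; all figures below are hypothetical and unrelated to the actual AZEP cost data:

```python
import random

def monte_carlo_npv(capex, annual_cash, years, rate_mu, rate_sigma,
                    runs=10000, seed=1):
    # Sample an uncertain discount rate and return the distribution of
    # net present values for a flat annual cash flow
    rng = random.Random(seed)
    npvs = []
    for _ in range(runs):
        r = max(0.001, rng.gauss(rate_mu, rate_sigma))  # sampled discount rate
        npv = -capex + sum(annual_cash / (1 + r) ** t
                           for t in range(1, years + 1))
        npvs.append(npv)
    return npvs

# Hypothetical plant economics: CAPEX of 1000, cash flow of 200 per year for
# 10 years, discount rate uncertain around 8% with a 1% standard deviation
npvs = monte_carlo_npv(1000.0, 200.0, 10, 0.08, 0.01)
```

In a fuller analysis, capital cost, fuel price, and the CO2 capture penalty would each get their own distribution, and the layouts would be ranked by the spread of their NPV distributions rather than by a single point estimate.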

Keywords: gas turbine, global warming, green house gas, fossil fuel power plants

Procedia PDF Downloads 383
122 Magnesium Nanoparticles for Photothermal Therapy

Authors: E. Locatelli, I. Monaco, R. C. Martin, Y. Li, R. Pini, M. Chiariello, M. Comes Franchini

Abstract:

Despite the many advantages of the application of nanomaterials in the field of nanomedicine, increasing concerns have been expressed about their potential adverse effects on human health. There is urgency for novel green strategies toward novel materials with enhanced biocompatibility using safe reagents. Photothermal ablation therapy, which exploits a localized heat increase of a few degrees to kill cancer cells, has appeared recently as a non-invasive and highly efficient therapy against various cancer types; however, new agents able to generate hyperthermia when irradiated are needed, and they must have precise biocompatibility in order to avoid damage to healthy tissues and prevent toxicity. Recently, there has been increasing interest in magnesium as a biomaterial: it is the fourth most abundant cation in the human body, and it is essential for human metabolism. However, magnesium nanoparticles (Mg NPs) have had limited diffusion due to the high reduction potential of magnesium cations, which makes NP synthesis challenging. Herein, we report the synthesis of Mg NPs and their surface functionalization to obtain a stable and biocompatible nanomaterial suitable for photothermal ablation therapy against cancer. We synthesized the Mg crystals by reducing MgCl2 with metallic lithium and exploiting naphthalene as an electron carrier: the lithium-naphthalene complex acts as the real reducing agent. Firstly, the nanocrystal particles were coated with the ligand 12-ethoxy ester dodecanehydroxamic acid and then entrapped into water-dispersible polymeric micelles (PMs) made of the FDA-approved PLGA-b-PEG-COOH copolymer using the oil-in-water emulsion technique. Later, we developed a more straightforward methodology by introducing chitosan, a highly biocompatible natural product, at the beginning of the process, simultaneously with the lithium-naphthalene complex, thus obtaining a one-pot procedure for the formation and surface modification of Mg NPs. 
The obtained Mg NPs were purified and fully characterized, showing diameters in the range of 50-300 nm. Notably, when coated with chitosan, the particles remained stable as a dry powder for more than 10 months. We proved the possibility of generating a temperature rise of a few to several degrees once the Mg NPs were illuminated using an 810 nm diode laser operating in continuous wave mode: the temperature rise proved significant (0-15 °C) and concentration dependent. We then investigated the potential cytotoxicity of the Mg NPs using HN13 epithelial cells, derived from a head and neck squamous cell carcinoma, and the Hepa1-6 cell line, derived from a hepatocellular carcinoma; very low toxicity was observed for both nanosystems. Finally, in vivo photothermal therapy was performed on xenograft Hepa1-6 tumor-bearing mice: the animals were treated with chitosan-coated Mg NPs and showed no sign of suffering after the injection. After 12 hours, the tumor was exposed to near-infrared laser light. The results clearly showed extensive damage to the tumor tissue after only 2 minutes of laser irradiation at 3 W cm⁻¹, while no damage was reported when the tumor was treated with the laser and saline alone in the control group. Despite the lower photothermal efficiency of Mg with respect to Au NPs, we consider Mg NPs a promising, safe, and green candidate for future clinical translation.

Keywords: chitosan, magnesium nanoparticles, nanomedicine, photothermal therapy

Procedia PDF Downloads 259
121 Intensification of Wet Air Oxidation of Landfill Leachate Reverse Osmosis Concentrates

Authors: Emilie Gout, Mathias Monnot, Olivier Boutin, Pierre Vanloot, Philippe Moulin

Abstract:

Water is a precious resource, and treating industrial wastewater remains a considerable technical challenge of our century. The effluent considered in this study is landfill leachate treated by reverse osmosis (RO). Nowadays, in most developed countries, sanitary landfilling is the main method of dealing with municipal solid waste. Rainwater percolates through the solid waste, generating leachates mostly comprised of organic and inorganic matter. As leachate ages, its composition varies, becoming more and more bio-refractory. RO is already used for landfill leachates as it generates good-quality permeate. However, its main drawback is the production of highly polluted concentrates that cannot be discharged into the environment or reused, which is an important industrial issue. It is against this background that the study of coupling RO with wet air oxidation (WAO) was undertaken, to intensify and optimize the processes to meet current regulations for water discharge into the environment. WAO is widely studied for effluents containing bio-refractory compounds. Oxidation consists of a destruction reaction capable of mineralizing the recalcitrant organic fraction of the pollution into carbon dioxide and water when complete. The WAO process in subcritical conditions requires high energy consumption, but it can be autothermal within a certain range of chemical oxygen demand (COD) concentrations (10-100 g.L⁻¹). Appropriate COD concentrations are reached in landfill leachate RO concentrates. Therefore, the purpose of this work is to report the mineralization performance of WAO on RO concentrates. The coupling of RO/WAO has shown promising results in previous works on both synthetic and real effluents in terms of total organic carbon (TOC) reduction by WAO and retention by RO. Non-catalytic WAO with air as the oxidizer was performed in a lab-scale stirred autoclave (1 L) on landfill leachate RO concentrates collected in different seasons at a sanitary landfill in southern France. 
The yield of WAO depends on operating parameters such as total pressure, temperature, and time. The composition of the effluent is also an important aspect of process intensification. An experimental design methodology was used to minimize the number of experiments whilst finding the operating conditions achieving the best pollution reduction. The design led to a set of 18 experiments, and the responses used to assess process efficiency are pH, conductivity, turbidity, COD, TOC, and inorganic carbon. A 70% oxygen excess was chosen for all the experiments. The first experiments showed that COD and TOC abatements of at least 70% were obtained after 90 min at 300 °C and 20 MPa, which attested to the possibility of treating RO leachate concentrates with WAO. In order to meet French regulations and validate process intensification with industrial effluents, continuous experiments in a bubble column are foreseen, and further analyses will be performed, such as biological oxygen demand and a study of the gas composition. Meanwhile, other industrial effluents are being treated to compare RO-WAO performances. These effluents, coming from the pharmaceutical, petrochemical, and tertiary wastewater industries, present different specific pollutants that will provide a better comprehension of the hybrid process and prove the intensification and feasibility of the process at an industrial scale. Acknowledgments: This work has been supported by the French National Research Agency (ANR) for the Project TEMPO under the reference number ANR-19-CE04-0002-01.
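Two of the calculations above are easy to make concrete: enumerating an experimental design and computing an abatement. The factor names and levels below are hypothetical, since the paper does not list its design; a 2x3x3 full factorial simply happens to reproduce the quoted 18 runs.

```python
import itertools

def full_factorial(levels):
    # levels: dict mapping factor name -> list of levels;
    # returns every combination as a list of dicts (one dict per run)
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(levels[n] for n in names))]

def abatement(initial, final):
    # Fractional reduction, e.g. of COD or TOC, achieved by the oxidation step
    return (initial - final) / initial

# Hypothetical factors: temperature (degC), total pressure (MPa), time (min)
design = full_factorial({"T": [250, 300], "P": [15, 20, 25], "t": [30, 60, 90]})
```

An abatement of 0.7 or more, as reported for COD and TOC at 300 °C and 20 MPa, means at least 70% of the initial pollutant load was mineralized.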

Keywords: hybrid process, landfill leachates, process intensification, reverse osmosis, wet air oxidation

Procedia PDF Downloads 125
120 Aeroelastic Stability Analysis in Turbomachinery Using Reduced Order Aeroelastic Model Tool

Authors: Chandra Shekhar Prasad, Ludek Pesek Prasad

Abstract:

Present-day aero-engine fan blades, turboprop propellers, and gas or steam turbine low-pressure blades are getting bigger and lighter, and thus become more flexible. Therefore, flutter, forced blade response and vibration-related failure of high-aspect-ratio blades are of main concern for designers and need to be addressed properly in order to achieve a successful component design. At the preliminary design stage, a large number of design iterations is needed to achieve a flutter-free, safe design. Most of the numerical methods used for aeroelastic analysis are field-based methods such as the finite difference method, finite element method, finite volume method, or coupled schemes. These numerical schemes are used to solve the coupled fluid-structure equations based on the full Navier-Stokes (NS) equations along with the equations of structural mechanics. Schemes of this type provide very accurate results if modeled properly; however, they are computationally very expensive and need large computing resources along with considerable expertise. Therefore, they are not the first choice for aeroelastic analysis during the preliminary design phase. A reduced order aeroelastic model (ROAM) with acceptable accuracy and fast execution is more suited to this stage. Similar ROAMs are being used by other researchers for aeroelastic and forced response analysis of turbomachinery. In the present paper, a new medium-fidelity ROAM is successfully developed and implemented in a numerical tool to simulate aeroelastic stability phenomena in turbomachinery as well as in flexible wings. A hybrid flow solver is developed, based on a viscous-inviscid coupled 3D panel method (PM) and 3D discrete vortex particle method (DVM), with viscous parameters estimated using a boundary layer (BL) approach. This method can simulate flow separation and is a good compromise between accuracy and speed compared to CFD.
In the second phase of the research work, the flow solver (PM) will be coupled with a reduced-order non-linear beam element method (BEM) based FEM structural solver (with multibody capabilities) to perform complete aeroelastic simulations of steam turbine bladed disks, propellers, fan blades, aircraft wings, etc. A partitioned coupling approach is used for the fluid-structure interaction (FSI). The numerical results are compared with experimental data for different test cases; for the blade cascade test case, experimental data are obtained from in-house lab experiments at IT CAS. Furthermore, the results from the new aeroelastic model will be compared with classical CFD-CSD based aeroelastic models. The proposed methodology for the aeroelastic stability analysis of gas or steam turbine blades, propellers, or fan blades will provide researchers and engineers with a fast, cost-effective and efficient tool for aeroelastic (classical flutter) analysis of different designs at the preliminary design stage, where large numbers of design iterations are required in a short time frame.
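A partitioned FSI coupling of the kind described above can be sketched as a fixed-point iteration that alternates between the two solvers and under-relaxes the interface update. The solvers below are toy linear placeholders, not the authors' PM/BEM implementations:

```python
def fluid_load(deflection):
    # Placeholder fluid solver: aerodynamic load proportional to deflection.
    return -4.0 * deflection

def structural_response(load, stiffness=10.0):
    # Placeholder structural solver: static deflection = load / stiffness.
    return load / stiffness

def partitioned_fsi(d0=1.0, relax=0.5, tol=1e-8, max_iter=100):
    """Fixed-point iteration with under-relaxation at the FSI interface."""
    d = d0
    for _ in range(max_iter):
        f = fluid_load(d)               # fluid solver: loads from motion
        d_new = structural_response(f)  # structural solver: motion from loads
        if abs(d_new - d) < tol:
            return d_new
        d = d + relax * (d_new - d)     # under-relaxed interface update
    return d

print(partitioned_fsi())  # converges toward the coupled equilibrium (d = 0)
```

Under-relaxation is the standard way to stabilize such staggered iterations when the added-mass effect of the fluid would otherwise make the plain fixed-point loop diverge.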

Keywords: aeroelasticity, beam element method (BEM), discrete vortex particle method (DVM), classical flutter, fluid-structure interaction (FSI), panel method, reduce order aeroelastic model (ROAM), turbomachinery, viscous-inviscid coupling

Procedia PDF Downloads 255
119 Electroactive Ferrocenyl Dendrimers as Transducers for Fabrication of Label-Free Electrochemical Immunosensor

Authors: Sudeshna Chandra, Christian Gäbler, Christian Schliebe, Heinrich Lang

Abstract:

Highly branched dendrimers provide structural homogeneity, controlled composition, size comparable to biomolecules, internal porosity and multiple functional groups for conjugation reactions. Electroactive dendrimers containing multiple redox units have generated great interest for use as electrode modifiers in the development of biosensors. The electron transfer between the redox-active dendrimers and the biomolecules plays a key role in developing a biosensor. Ferrocenes have multiple and electrochemically equivalent redox units that can act as an electron “pool” in a system. The ferrocenyl-terminated polyamidoamine dendrimer is capable of transferring multiple electrons under the same applied potential. Therefore, they can be used for dual purposes: one in building a film over the electrode for immunosensors and the other for immobilizing biomolecules for sensing. Electrochemical immunosensors thus developed offer fast and sensitive analysis, are inexpensive, and involve no prior sample pre-treatment. Electrochemical amperometric immunosensors are even more promising because they can achieve a very low detection limit with high sensitivity. Detection of cancer biomarkers at an early stage can provide crucial information for fundamental life-science research, clinical diagnosis and prevention of disease. An elevated concentration of biomarkers in body fluid is an early indication of some types of cancerous disease, and among all the biomarkers, IgG is the most common and extensively used clinical cancer biomarker. We present an IgG (=immunoglobulin) electrochemical immunosensor using a newly synthesized redox-active ferrocenyl dendrimer of generation 2 (G2Fc) as glassy carbon electrode material for immobilizing the antibody. The electrochemical performance of the modified electrodes was assessed in both aqueous and non-aqueous media using varying scan rates to elucidate the reaction mechanism.
The potential shift was found to be higher in an aqueous electrolyte due to the presence of more hydrogen bonding, which reduced the electrostatic attraction within the amido groups of the dendrimers. The cyclic voltammetric studies of the G2Fc-modified GCE in 0.1 M PBS solution of pH 7.2 showed a pair of well-defined redox peaks. The peak current decreased significantly with the immobilization of the anti-goat IgG. After the immunosensor was blocked with BSA, a further decrease in the peak current was observed due to the attachment of the protein BSA to the immunosensor. A significant decrease in the current signal of the BSA/anti-IgG/G2Fc/GCE was observed upon immobilizing IgG, which may be due to the formation of immunoconjugates that block the tunneling of mass and electron transfer. The current signal was found to be directly related to the amount of IgG captured on the electrode surface. With an increase in the concentration of IgG, an increasing amount of immunoconjugates forms, which decreases the peak current. The incubation time and concentration of the antibody were optimized for better analytical performance of the immunosensor. The developed amperometric immunosensor is sensitive to IgG concentrations as low as 2 ng/mL. Tailoring of redox-active dendrimers provides enhanced electroactivity to the system and enlarges the sensor surface for binding the antibodies. It may be assumed that both electron transfer and diffusion contribute to the signal transduction between the dendrimers and the antibody.
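The inverse relationship between peak current and IgG concentration is what makes quantification possible: in practice such sensors are calibrated by fitting the peak current against the logarithm of concentration and inverting the fit for unknowns. A minimal sketch with purely hypothetical calibration points (not the study's data):

```python
import math

# Hypothetical calibration points: (IgG concentration, ng/mL; peak current, µA).
# Current decreases as more immunoconjugates form on the electrode.
calibration = [(2, 9.0), (10, 7.6), (50, 6.2), (250, 4.8)]

# Least-squares line: peak current vs log10(concentration).
xs = [math.log10(c) for c, _ in calibration]
ys = [i for _, i in calibration]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def concentration_from_current(i_peak):
    """Invert the calibration line to estimate an unknown concentration."""
    return 10 ** ((i_peak - intercept) / slope)
```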

Keywords: ferrocenyl dendrimers, electrochemical immunosensors, immunoglobulin, amperometry

Procedia PDF Downloads 326
118 A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process

Authors: Reyna Singh, David Lokhat, Milan Carsky

Abstract:

The world crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high-value hydrocarbon liquid product from the Direct Coal Liquefaction (DCL) process at comparatively mild operating conditions. A temperature-staged hydrogenation approach was investigated. In a two-reactor lab-scale pilot plant facility, the objectives included maximising thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage, and subsequently promoting hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high-grade, pulverized bituminous coal on a moisture-free basis with a size fraction of < 100 μm, and Tetralin mixed in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe3O4) at 0.25 wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100 barg. In the first stage, temperatures of 250℃ and 300℃ and reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first-stage liquid product was pumped into the second-stage vertical reactor, which was designed to counter-currently contact the hydrogen-rich gas stream and incoming liquid flow in the fixed catalyst bed. Two commercial hydrotreating catalysts, Cobalt-Molybdenum (CoMo) and Nickel-Molybdenum (NiMo), were compared in terms of their conversion, selectivity and HDS performance at temperatures 50℃ higher than the respective first-stage tests. The catalysts were activated at 300°C with a hydrogen flowrate of approximately 10 ml/min prior to the testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser. The liquid was collected and sampled for analysis using Gas Chromatography-Mass Spectrometry (GC-MS).
Internal standard quantification methods for the sulphur content, the BTX (benzene, toluene, and xylene) and alkene quality, and the alkanes and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products were guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, increased coal-to-liquid conversion was favoured by the lower operating temperature of 250℃, 60 minutes of reaction time and a system catalysed by magnetite. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long-chain alkanes undecane and dodecane, the unsaturated alkenes octene and nonene, and PAH compounds such as indene. The second-stage product distribution showed an increase in the BTX quality of the liquid product and in branched-chain alkanes, and a reduction in the sulphur concentration. In HDS performance and selectivity to the production of long- and branched-chain alkanes, NiMo performed better than CoMo; CoMo was selective to a higher concentration of cyclohexane. Over 16 days on stream each, NiMo had a higher activity than CoMo. The potential of the said process to cover the demand for low-sulphur crude diesel and solvents through the production of high-value hydrocarbon liquid is thus demonstrated.

Keywords: catalyst, coal, liquefaction, temperature-staged

Procedia PDF Downloads 637
117 Review of Urbanization Pattern in Kabul City

Authors: Muhammad Hanif Amiri, Edris Sadeqy, Ahmad Freed Osman

Abstract:

Urbanization is a controversial issue all around the world. Substandard and unplanned urbanization has many implications for the social, cultural and economic situation of the population. Unplanned and illegal construction has become a critical issue in Afghanistan, particularly in Kabul city. In addition, the lack of municipal bylaws, poor municipal governance, lack of development policies and strategies, budget limitations, the low professional capacity of the private sector involved in development and poor coordination among stakeholders are the other factors which have made the problem more complicated. The main purpose of this research paper is to review the urbanization pattern of Kabul city, find improvement solutions and evaluate the increase in population density, which has caused vast illegal and unplanned development that is turning Kabul city as a whole into a slum area. The Kabul city Master Plan was reviewed in the year 1978 and revised for a planned population of 2 million. In 2001, the interim administration took office and the city received an influx of returnees from neighboring countries and other provinces of Afghanistan, mostly in search of employment opportunities, security and a better quality of life; Kabul therefore faced extraordinary population growth. According to the Central Statistics Organization of Afghanistan, the population of Kabul was estimated at approximately 5 million in 2015; a new Master Plan was prepared in 2009, but the existing challenges have not yet been resolved.
On the other hand, 70% of Kabul's population lives in unplanned (slum) areas and faces a shortage of drinking water, the absence of sewerage and drainage networks, the absence of a proper management system for solid waste collection, a lack of public transportation and traffic management, environmental degradation and a shortage of social infrastructure. Although there are many problems in Kabul city, the development of 22 townships is still in progress, which has attracted a large population. The research is completed with a detailed analysis of four main issues, namely the elimination of duplicated administrations, the development of regions, the rehabilitation and improvement of infrastructure, and the prevention of new township establishment in Kabul's central core, in order to mitigate the problems and constraints and to find the point of departure for an objective-based future development of Kabul city. The conclusion is framed to reflect stage-wise development in light of the prepared policy and strategies, the development of a procedure for the improvement of infrastructure, the conduct of a preliminary EIA, the definition of the scope of stakeholders' contributions and the preparation of a project list for initial development. In conclusion, this paper will help the transformation of Kabul city.

Keywords: development of regions, illegal construction, population density, urbanization pattern

Procedia PDF Downloads 304
116 Understanding Natural Resources Governance in Canada: The Role of Institutions, Interests, and Ideas in Alberta's Oil Sands Policy

Authors: Justine Salam

Abstract:

As a federal state, Canada’s constitutional arrangements regarding the management of natural resources are unique because they give complete ownership and control of natural resources to the provinces (subnational level). However, the province of Alberta—home to the third largest oil reserves in the world—lags behind comparable jurisdictions in levying royalties on oil corporations, especially oil sands royalties. While Albertans own the oil sands, scholars have argued that natural resource exploitation in Alberta benefits corporations and industry more than it does Albertans. This study provides a systematic understanding of the causal factors affecting royalties in Alberta, mapping the dynamics of power and how they manifest themselves during policy-making. Mounting domestic and global public pressure led Alberta to review its oil sands royalties twice in less than a decade through publicly commissioned Royalty Review Panels, first in 2007 and again in 2015. The Panels’ task was to research best practices and to provide policy recommendations to the Government through public consultations with Albertans, industry, non-governmental organizations, and First Nations peoples. Both times, the Panels recommended a relative increase to oil sands royalties. However, irrespective of the Reviews’ recommendations, neither the right-wing 2007 Progressive Conservative Party (PC) nor the left-wing 2015 New Democratic Party (NDP) government—both committed to increasing oil sands royalties—increased royalty intake. Why did two consecutive governments at opposite ends of the political spectrum fail to act on the recommendations put forward by the Panels? Through a qualitative case-study analysis, this study assesses domestic and global causal factors for Alberta’s inability to raise oil sands royalties significantly after the two Reviews through an institutions, interests, and ideas framework. Indeed, causal factors can be global (e.g.
market and price fluctuation) or domestic (e.g. oil companies’ influence on the Alberta government). The institutions, interests, and ideas framework sits at the intersection of the public policy, comparative studies, and political economy literatures, and therefore draws multi-faceted insights into the analysis. To account for institutions, the study proposes to review international trade agreement documents such as the North American Free Trade Agreement (NAFTA), because they have embedded Alberta’s oil sands in American energy security policy and tied Canadian and Albertan oil policy into legal international nodes. To account for interests, such as how the oil lobby or the environmental lobby can penetrate governmental decision-making spheres, the study draws on the Oil Sands Oral History project, a database of interviews with government officials and oil industry leaders at a pivotal time in Alberta’s oil industry, 2011-2013. Finally, to account for ideas, such as how narratives of Canada as a global ‘energy superpower’ and the importance of ‘energy security’ have dominated and polarized public discourse, the study relies on content analysis of Alberta-based pro-industry newspapers to trace the prevalence of these narratives. By systematically mapping the nodes and dynamics of power at play in Alberta, the study sheds light on the factors that influence royalty policy-making in one of the largest industries in Canada.

Keywords: Alberta Canada, natural resources governance, oil sands, political economy

Procedia PDF Downloads 119
115 Facies, Diagenetic Analysis and Sequence Stratigraphy of Habib Rahi Formation Dwelling in the Vicinity of Jacobabad Khairpur High, Southern Indus Basin, Pakistan

Authors: Muhammad Haris, Syed Kamran Ali, Mubeen Islam, Tariq Mehmood, Faisal Shah

Abstract:

Jacobabad Khairpur High, part of the Sukkur rift zone, is the boundary separating the Central and Southern Indus Basins, formed as a result of post-Jurassic uplift after the deposition of the Middle Jurassic Chiltan Formation. The Habib Rahi Formation of Middle to Late Eocene age outcrops in the vicinity of Jacobabad Khairpur High; a section at Rohri near Sukkur was measured in detail for lithofacies, microfacies, diagenetic analysis and sequence stratigraphy. The Habib Rahi Formation is richly fossiliferous and consists mostly of limestone with subordinate clays and marl. The total thickness of the formation in this section is 28.8 m. The bottom of the formation is not exposed, while the upper contact with the Sirki Shale of Middle Eocene age is unconformable in places. The section was measured using the Jacob’s Staff method, and traverses were made perpendicular to the strike. Four different lithofacies were identified based on outcrop geology, which include coarse-grained limestone facies (HR-1 to HR-5), massive bedded limestone facies (HR-6 to HR-7), micritic limestone facies (HR-8 to HR-13) and algal dolomitic limestone facies (HR-14). A total of 14 rock samples were collected from the outcrop for detailed petrographic studies, and thin sections of the respective samples were prepared and analyzed under the microscope. On the basis of Dunham’s (1962) classification system, after studying textures, grain size, and fossil content, and using Folk’s (1959) classification system, after reviewing allochem types, four microfacies were identified. These microfacies include HR-MF 1: Benthonic Foraminiferal Wackestone/Biomicrite Microfacies; HR-MF 2: Foraminiferal Nummulites Wackestone-Packstone/Biomicrite Microfacies; HR-MF 3: Benthonic Foraminiferal Packstone/Biomicrite Microfacies; and HR-MF 4: Bioclasts Carbonate Mudstone/Micrite Microfacies. The abundance of larger benthic foraminifera (LBF), including Assilina sp., A. spiral abrade, A. granulosa, A. dandotica, A. laminosa, Nummulites sp., N.
fabiani, N. striatus, N. globulus, Textularia, bioclasts, and red algae indicates a shallow marine (tidal flat) environment of deposition. Based on variations in rock types, grain size, and marine fauna, the Habib Rahi Formation shows progradational stacking patterns, which indicate coarsening-upward cycles. A second-order sea-level rise is identified (spanning the Ypresian to Bartonian ages), representing the Transgressive System Tract (TST), together with a third-order Regressive System Tract (RST) (spanning the Bartonian to Priabonian ages). Diagenetic processes include replacement of fossils by mud, dolomitization, pressure-dissolution-associated stylolite features and filling with dark organic matter. The presence of the microfossils Nummulites striatus, N. fabiani, and Assilina dandotica signifies a Bartonian to Priabonian age for the Habib Rahi Formation.

Keywords: Jacobabad Khairpur High, Habib Rahi Formation, lithofacies, microfacies, sequence stratigraphy, diagenetic history

Procedia PDF Downloads 447
114 Converting Urban Organic Waste into Aquaculture Feeds: A Two-Step Bioconversion Approach

Authors: Aditi Chitharanjan Parmar, Marco Gottardo, Giulia Adele Tuci, Francesco Valentino

Abstract:

The generation of urban organic waste is a significant environmental problem due to the potential release of leachate and/or methane into the environment. This contributes to climate change while discarding a valuable resource that could be used in various ways. This research addresses the issue by proposing a two-step approach linking biowaste management to the aquaculture industry via single cell protein (SCP) production. A mixture of food waste and municipal sewage sludge (FW-MSS) was first subjected to mesophilic (37°C) anaerobic fermentation to produce a liquid stream rich in short-chain fatty acids (SCFAs), which are important building blocks for the subsequent microbial biomass growth. Under stable fermentation activity (after 1 week of operation), the average SCFA value was 21.3 ± 0.4 g COD/L, with a CODSCFA/CODSOL ratio of 0.77 COD/COD. This indicated that the strategy of accumulating SCFAs from the biowaste mixture by applying a short hydraulic retention time (HRT; 4 days) and a medium organic loading rate (OLR; 7-12 g VS/L d) in the lab-scale (V = 4 L) continuous stirred tank reactor (CSTR) was successful. The SCFA-rich effluent was then utilized as feedstock for the growth of a mixed microbial consortium able to store polyhydroxyalkanoates (PHA), a class of biopolymers that is completely biodegradable in nature and produced as an intracellular carbon/energy reserve. Given the demonstrated antimicrobial and immunomodulatory effects of intracellular PHA on various fish species, the PHA-producing culture was intended to be utilized as SCP in aquaculture.
The growth of PHA-storing biomass was obtained in a 2-L sequencing batch reactor (SBR), fully aerobic and set at 25°C; to stimulate a storage response (PHA production) in the cells, feast-famine conditions were adopted, consisting of an alternation of cycles during which the biomass was exposed to an initial abundance of substrate (feast phase) followed by a starvation period (famine phase). To avoid the proliferation of other bacteria unable to store PHA, the SBR was maintained at a low HRT (2 days). Alongside the stable growth of the mixed microbial consortium (the growth yield was estimated to be 0.47 COD/COD), the feast-famine strategy enhanced the PHA production capacity, leading to a final PHA content in the biomass of 16.5 wt%, which is suitable for use as SCP. In fact, by incorporating the waste-derived PHA-rich biomass into fish feed at 20 wt%, the final feed could contain a PHA content of around 3.0 wt%, within the recommended range (0.2-5.0 wt%) for promoting fish health. Proximate analysis of the PHA-rich biomass revealed a good crude protein level (around 51 wt%) and the presence of all the essential amino acids (EAA), together accounting for 31% of the SCP total amino acid composition. This suggests that the waste-derived SCP was a source of good-quality protein with good nutritional value. This approach offers a sustainable solution for urban waste management, potentially establishing a sustainable waste-to-value conversion route by connecting waste management to the growing aquaculture and fish feed production sectors.
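The feed-blending figure is simple proportional arithmetic; a one-line sketch makes the mass balance explicit (the function name is ours):

```python
def pha_in_feed(pha_content_wt, inclusion_wt):
    """PHA fraction of the final feed when PHA-rich biomass is blended in.

    pha_content_wt: PHA content of the biomass, wt% (e.g. 16.5)
    inclusion_wt:   biomass inclusion level in the feed, wt% (e.g. 20)
    """
    return pha_content_wt * inclusion_wt / 100.0

# 16.5 wt% PHA biomass blended at 20 wt% gives roughly the 3 wt% level
# reported, inside the recommended 0.2-5.0 wt% window.
print(pha_in_feed(16.5, 20))  # → 3.3
```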

Keywords: feed supplement, nutritional value, polyhydroxyalkanoates (PHA), single cell protein (SCP), urban organic waste

Procedia PDF Downloads 21
113 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation

Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy

Abstract:

The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation, which allows the presentation of large and intricate datasets in a simple map interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well-informed. This paper is a case study of how GIS spatial analysis techniques were successfully applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment faces numerous obstacles, whether topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes absolutely paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of the single most favourable pipeline route across challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative and subjective, and is liable to bias towards the discipline and expertise involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites an automated, multi-criteria, quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, in a way hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area.
Geocost is defined as a numerical penalty score representing the hazard posed to the pipeline by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows). All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least-geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending the escarpment, but the vulnerability of these spurs to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and, of course, the gas export pipeline operator guided the analyses and the assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
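Least-cost routing over a composite geocost map is, at its core, a shortest-path computation on a weighted grid. A minimal sketch using Dijkstra's algorithm on a toy 4-connected grid (a real analysis would run on gridded geocost rasters inside GIS software; the grid and costs below are invented for illustration):

```python
import heapq

def least_geocost_route(geocost, start, end):
    """Dijkstra least-cost path over a 4-connected geocost grid.

    geocost: 2D list of per-cell penalty scores; start/end: (row, col).
    Returns (total geocost along the route, list of cells on the route).
    """
    rows, cols = len(geocost), len(geocost[0])
    dist = {start: geocost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == end:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + geocost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Reconstruct the route from the terminal back to the start.
    route, cell = [end], end
    while cell != start:
        cell = prev[cell]
        route.append(cell)
    return dist[end], route[::-1]

# Toy composite geocost map: the high scores mark a hazardous canyon,
# so the least-geocost route detours around it rather than crossing.
grid = [
    [1, 1, 1],
    [9, 9, 1],
    [1, 1, 1],
]
cost, route = least_geocost_route(grid, (0, 0), (2, 0))  # cost 7, detour route
```

The direct crossing would cost 1 + 9 + 1 = 11, so the algorithm prefers the seven-cell detour of total geocost 7, exactly the trade-off between route length and hazard avoidance described above.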

Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis

Procedia PDF Downloads 387
112 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation with an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current ‘beaker’ method were to reduce the amount of radioactive substance handled by laboratory personnel, to ease the adjustability of the instrumentation within a glove-box environment and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion, creating a linear concentration gradient inside a 200 μm × 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient.
Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be generated in well under one second, a more time-efficient gradient generation process than other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
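The boundary-position readout can be illustrated with a deliberately simplified model (not the authors' calibration): assume the titrant concentration rises linearly from zero to a known maximum along the channel, and that the fluorophore marks the position where the local base concentration matches the free acid concentration. All numbers below are hypothetical.

```python
def free_acidity(c_naoh_max, x_boundary, channel_length):
    """Free acid concentration inferred from the neutralization boundary.

    Simplified model: the NaOH concentration rises linearly from 0 to
    c_naoh_max (mol/L) over the channel length, and neutralization is
    observed where the local base concentration equals the acid
    concentration (all units must match: here cm and mol/L).
    """
    return c_naoh_max * x_boundary / channel_length

# Hypothetical reading: boundary seen 3 cm along a 5 cm channel
# carrying a 0-5 M NaOH gradient implies 3 M free nitric acid,
# at the upper end of the validated 0.5-3 M range.
print(free_acidity(5.0, 3.0, 5.0))  # → 3.0
```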

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 378
111 National Accreditation Board for Hospitals and Healthcare Reaccreditation, the Challenges and Advantages: A Qualitative Case Study

Authors: Narottam Puri, Gurvinder Kaur

Abstract:

Background: The National Accreditation Board for Hospitals & Healthcare Providers (NABH) is India’s apex standard-setting accrediting body in health care, which evaluates and accredits healthcare organizations. NABH requires accredited organizations to become reaccredited every three years. It is often thought that once the initial accreditation is complete, the foundation is set and reaccreditation is a much simpler process. Fortis Hospital, Shalimar Bagh, part of the Fortis Healthcare group, is a 262-bed, multi-specialty tertiary care hospital. The hospital was successfully accredited in the year 2012. On completion of its first cycle, the hospital underwent a reaccreditation assessment in the year 2015. This paper aims to gain a better understanding of the challenges that accredited hospitals face when preparing for a renewal of their accreditation. Methods: The study was conducted using a cross-sectional mixed-methods approach; semi-structured interviews were conducted with the senior leadership team and staff members, including doctors and nurses. Documents collated by the QA team while preparing for the re-assessment were reviewed to understand the challenges: data on quality indicators (the method of collection, analysis, trending, and continual incremental improvements made over time), minutes of meetings, amendments made to existing policies, and new policies drafted. Results: The senior leadership was concerned about the cost of accreditation and its impact on the quality of health care services, considering the staff effort and time it consumed. The management was, however, in favor of continuing with the accreditation, since it offered competitive advantage and strengthened community confidence, besides better pay rates from the payors. The clinicians regarded it as an increased non-clinical workload. 
Doctors felt accountable within a professional framework to themselves, the patient and family, their peers, and their profession, but not to accreditation bodies, and raised concerns about how the quality indicators were measured. The departmental leaders had a positive perception of accreditation. They agreed that it ensured high standards of care and improved management of their functional areas. However, they were reluctant to spare people for QA activities due to staffing issues. With staff turnover, a lot of work was lost as ‘sticky’ knowledge and had to be redone. Listing the continual quality improvement initiatives over the last three years was a challenge in itself. Conclusion: The success of any quality assurance reaccreditation program depends almost entirely on the commitment and interest of the administrators, nurses, paramedical staff, and clinicians. The leader of the quality movement is critical in propelling the effort and building momentum. Leaders need to recognize skepticism and resistance and consider ways in which staff can become positively engaged. Involvement of all the functional owners is the starting point towards building ownership and accountability for standards compliance. Creativity plays a very valuable role: communication by mail series, WhatsApp groups, quizzes, events, and every other form helps. Leaders must be able to generate interest and commitment without burdening clinical and administrative staff with an activity they neither understand nor believe in.

Keywords: NABH, reaccreditation, quality assurance, quality indicators

Procedia PDF Downloads 212
110 Targeting Violent Extremist Narratives: Applying Network Targeting Techniques to the Communication Functions of Terrorist Groups

Authors: John Hardy

Abstract:

Over the last decade, the increasing utility of extremist narratives to the operational effectiveness of terrorist organizations has been evidenced by the proliferation of inspired or affiliated attacks across the world. Famous examples such as regional al-Qaeda affiliates and the self-styled “Islamic State” demonstrate the effectiveness of leveraging communication technologies to disseminate propaganda, recruit members, and orchestrate attacks. Terrorist organizations with the capacity to harness the communicative power offered by digital communication technologies and effective political narratives have held an advantage over their targets in recent years. Terrorists have leveraged the perceived legitimacy of grass-roots actors to appeal to a global audience of potential supporters and enemies alike, and have wielded a proficiency in profile-raising that remains unmatched by counter-terrorism narratives around the world. In contrast, many attempts at propagating official counter-narratives have been received by target audiences as illegitimate, top-down and impersonally bureaucratic. However, the benefits provided by widespread communication and extremist narratives have come at an operational cost. Terrorist organizations now face a significant challenge in protecting their access to communications technologies and their authority over the content they create and endorse. The dissemination of effective narratives has emerged as a core function of terrorist organizations with international reach via inspired or affiliated attacks. As such, it has become a critical function which can be targeted by intelligence and security forces. This study applies network targeting principles which have been used by coalition forces against a range of non-state actors in the Middle East and South Asia to the communicative function of terrorist organizations. 
This illustrates both a conceptual link between functional targeting and operational disruption in the abstract and a tangible impact on the operational effectiveness of terrorists through degraded communicative ability and legitimacy. Two case studies highlight the utility of applying functional targeting against terrorist organizations. The first is the targeted killing of Anwar al-Awlaki, an al-Qaeda propagandist who crafted a permissive narrative and effective propaganda videos to attract recruits who committed inspired terrorist attacks in the US and overseas. The second is a series of operations against Islamic State propagandists in Syria, including the capture or deaths of a cadre of high-profile members such as Junaid Hussain, Abu Mohammad al-Adnani, Neil Prakash, and Rachid Kassim. This group of Islamic State propagandists was linked to a significant rise in affiliated and enabled terrorist attacks and was subsequently targeted by law enforcement and military agencies. In both cases, the disruption of communication between the terrorist organization and recruits degraded both communicative and operational functions. The effect of functional targeting on member recruitment and operational tempo suggests that narratives are a critical function which can be leveraged against terrorist organizations. Further application of network targeting methods to terrorist narratives may enhance the efficacy of a range of counter-terrorism techniques employed by security and intelligence agencies.

Keywords: countering violent extremism, counter terrorism, intelligence, terrorism, violent extremism

Procedia PDF Downloads 281
109 Physical Aspects of Shape Memory and Reversibility in Shape Memory Alloys

Authors: Osman Adiguzel

Abstract:

Shape memory alloys belong to a class of smart materials that exhibit a peculiar property called the shape memory effect. This property is characterized by the recoverability of two distinct shapes of the material at different temperatures. These materials are often called smart materials due to their functionality and their capacity to respond to changes in the environment. Shape memory materials are used as shape memory devices in many interdisciplinary fields such as medicine, bioengineering, metallurgy, the building industry and many engineering fields. The shape memory effect is performed thermally, by heating and cooling after initial cooling and stressing treatments, and this behavior is called thermoelasticity. The effect is based on martensitic transformations, which are characterized by changes in the crystal structure of the material; the shape memory effect is the result of successive thermally and stress-induced martensitic transformations. Shape memory alloys exhibit thermoelasticity and superelasticity by means of deformation in the low-temperature product phase and the high-temperature parent phase region, respectively. Superelasticity is performed by stressing and releasing the material in the parent phase region. Loading and unloading paths are different in the stress-strain diagram, and the cycling loop reveals energy dissipation. The strain energy is stored after release, and these alloys are mainly used as deformation-absorbent materials in the control of civil structures subjected to seismic events, due to their absorption of strain energy during a disaster or earthquake. 
Thermally induced martensitic transformation occurs on cooling, along with lattice twinning through cooperative movements of atoms by means of lattice-invariant shears: ordered parent phase structures turn into twinned martensite structures, and the twinned structures turn into detwinned structures by means of stress-induced martensitic transformation when the material is stressed in the martensitic condition. The thermally induced transformation occurs with cooperative movements of atoms in two opposite <110>-type directions on the {110}-type planes of the austenite matrix, which is the basal plane of martensite. Copper-based alloys exhibit this property in the metastable β-phase region, which has bcc-based structures in the high-temperature parent phase field. Lattice-invariant shear and twinning are not uniform in copper-based ternary alloys and give rise to the formation of complex layered structures, depending on the stacking sequences on the close-packed planes of the ordered parent phase lattice. In the present contribution, x-ray diffraction and transmission electron microscopy (TEM) studies were carried out on two copper-based CuAlMn and CuZnAl alloys. X-ray diffraction profiles and electron diffraction patterns reveal that both alloys exhibit superlattice reflections inherited from the parent phase due to the displacive character of the martensitic transformation. X-ray diffractograms taken over a long time interval show that the diffraction angles and the intensities of the diffraction peaks change with aging duration at room temperature. In particular, some of the successive peak pairs providing a special relation between Miller indices come close to each other. This result points to a rearrangement of atoms in a diffusive manner.
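Since the result turns on diffraction peaks drifting as atoms rearrange, a minimal Bragg's-law sketch may help fix ideas: a small change in interplanar spacing d shifts the peak position 2θ. The wavelength (Cu Kα) and the d-spacings below are assumed for illustration, not values from the alloys studied.

```python
import math

# Illustrative Bragg's-law calculation: how a small change in interplanar
# spacing d (e.g. from atomic rearrangement during aging) shifts the
# diffraction angle 2-theta. All numerical values here are assumptions.

def two_theta_deg(wavelength_ang, d_ang, order=1):
    """Diffraction angle 2-theta (degrees) from Bragg's law n*lambda = 2 d sin(theta)."""
    return 2 * math.degrees(math.asin(order * wavelength_ang / (2 * d_ang)))

before = two_theta_deg(1.5406, 2.087)  # hypothetical d-spacing before aging
after = two_theta_deg(1.5406, 2.075)   # slightly contracted spacing after aging
print(round(before, 2), round(after, 2))  # the peak moves to a higher angle
```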

Keywords: shape memory effect, martensitic transformation, reversibility, superelasticity, twinning, detwinning

Procedia PDF Downloads 172
108 Investigating Links in Achievement and Deprivation (ILiAD): A Case Study Approach to Community Differences

Authors: Ruth Leitch, Joanne Hughes

Abstract:

This paper presents the findings of a three-year government-funded study (ILiAD) that aimed to understand the reasons for differential educational achievement within and between socially and economically deprived areas in Northern Ireland. Previous international studies have concluded that there is a positive correlation between deprivation and underachievement. Our preliminary secondary data analysis suggested that the factors involved in educational achievement within multiply deprived areas may be more complex than this, with some areas of high multiple deprivation having high levels of student attainment, whereas other, less deprived areas demonstrated much lower levels of student attainment, as measured by outcomes on high-stakes national tests. The study proposed that no single explanation, or disparate set of explanations, could easily account for the linkage between levels of deprivation and patterns of educational achievement. Using a social capital perspective that centres on the connections within and between individuals and social networks in a community as a valuable resource for educational achievement, the ILiAD study involved a multi-level case study analysis of seven community sites in Northern Ireland, selected on the basis of religious composition (housing areas are largely segregated by religious affiliation), measures of multiple deprivation and differentials in educational achievement. The case study approach involved three interconnecting levels of qualitative data collection and analysis - what we have termed Micro (or community/grassroots level) understandings, Meso (or school level) explanations and Macro (or policy/structural) factors. The analysis combines a statistical mapping of factors with qualitative, in-depth data interpretation which, together, allow for deeper understandings of the dynamics and contributory factors within and between the case study sites. 
Thematic analysis of the qualitative data reveals cross-cutting factors (e.g. demographic shifts and loss of community, the place of the school in the community, parental capacity), while analytic case studies of the explanatory factors associated with each community site also permit a comparative element. Issues arising from the qualitative analysis are classified either as drivers or as inhibitors of educational achievement within and between communities. Key issues emerging as inhibitors or drivers of attainment include: the legacy of the community conflict in Northern Ireland, not least in terms of inter-generational stress and its association with substance abuse and mental health issues; differing discourses on notions of ‘community’ and ‘achievement’ within and between community sites; levels of inter-agency and intra-agency collaboration and joined-up working; the relationship between the home/school/community triad; and school leadership and school ethos. At this stage, the balance of these factors can be conceptualized in terms of bonding social capital (or the lack of it) within families, within schools, within each community and within agencies, and also bridging social capital between the home, school and community, between different communities, and between key statutory and voluntary organisations. The presentation will outline the study rationale and methodology, present some cross-cutting findings, and use an illustrative case study of the findings from one community site to underscore the importance of attending to community differences when trying to engage in research to understand and improve educational attainment for all.

Keywords: educational achievement, multiple deprivation, community case studies, social capital

Procedia PDF Downloads 366
107 Mitigating Urban Flooding through Spatial Planning Interventions: A Case of Bhopal City

Authors: Rama Umesh Pandey, Jyoti Yadav

Abstract:

Flooding is one of the waterborne disasters that cause extensive destruction in urban areas. Developing countries are at a higher risk of such damage, and more than half of global flooding events take place in Asian countries, including India. Urban flooding is more a human-induced disaster than a natural one: besides meteorological and hydrological causes, it is highly influenced by anthropogenic factors. Unplanned urbanization and poor management of cities magnify the impact manifold and cause huge loss of life and property in urban areas. It is an irony that urban areas face water scarcity in summer and flooding during the monsoon. This paper is an attempt to highlight the factors responsible for flooding in a city, especially from an urban planning perspective, and to suggest mitigating measures through spatial planning interventions. The analysis was done in two stages: first, assessing the impacts of previous flooding events, and second, analyzing the factors responsible for flooding at the macro and micro levels in cities. Bhopal, a city in Central India with a population of nearly two million, was selected for the study. The city has been experiencing flooding during heavy monsoon rains. The factors responsible for urban flooding were identified through a literature review as well as case studies from different cities across the world and India. The factors thus identified were analyzed for both macro- and micro-level influences. At the macro level, the previous flooding events that caused huge destruction were analyzed and the most affected areas of Bhopal city were identified. Since the identified area falls within the catchment of a drain, that catchment area was delineated for the study. The factors analyzed were: rainfall pattern, to calculate the return period using Weibull’s formula; imperviousness, through mapping in ArcGIS; and runoff discharge, using the Rational method. 
The catchment was divided into micro-watersheds, and the micro-watershed with the maximum impervious surface was selected to analyze the coverage and effect of physical infrastructure such as storm water management, the sewerage system and solid waste management practices. The area was further analyzed to assess the extent of violation of building byelaws and development control regulations and of encroachment over the natural water streams. The analysis revealed that the main issues are: lack of a sewerage system; inadequate storm water drains; inefficient solid waste management in the study area; violation of building byelaws by extending building structures either onto the drain or onto the road; and encroachment by slum dwellers along or onto the drain, reducing its width and capacity. Other factors include faulty culvert design resulting in a backwater effect, and roads at a higher level than the plinth of houses, which leads to submersion of their ground floors. The study recommends spatial planning interventions for mitigating urban flooding and strategies for managing excess rainwater during the monsoon season. Recommendations have also been made for efficient land use management to mitigate waterlogging in areas vulnerable to flooding.
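The two formulas named in the analysis can be sketched as follows; the record length, event rank, runoff coefficient, rainfall intensity and catchment area below are illustrative values, not Bhopal data.

```python
# Sketch of the two hydrological formulas named in the study: the Weibull
# plotting-position formula for rainfall return period and the Rational
# method for peak runoff. All numbers are invented for illustration.

def weibull_return_period(n_years, rank):
    """Return period T = (n + 1) / m for the m-th largest annual rainfall
    in an n-year record (Weibull plotting position)."""
    return (n_years + 1) / rank

def rational_peak_runoff(c, i_mm_per_hr, area_ha):
    """Rational method Q = C * i * A, returned in m^3/s.
    C: runoff coefficient (higher for impervious surfaces),
    i: rainfall intensity in mm/h, A: catchment area in hectares."""
    return c * i_mm_per_hr * area_ha / 360.0  # 1/360 converts mm/h * ha to m^3/s

print(weibull_return_period(30, 2))         # 2nd-largest event in a 30-year record
print(rational_peak_runoff(0.85, 50, 120))  # heavily paved 120 ha catchment
```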

Keywords: mitigating strategies, spatial planning interventions, urban flooding, violation of development control regulations

Procedia PDF Downloads 315
106 The Multiplier Effects of Intelligent Transport System to Nigerian Economy

Authors: Festus Okotie

Abstract:

Nigeria is the giant of Africa, with great and diverse transport potential yet to be fully tapped and explored. It is the most populous nation in Africa, with nearly 200 million people; the sixth largest oil producer overall and the largest in Africa, with proven oil and gas reserves of 37 billion barrels and 192 trillion cubic feet; over 300 square kilometers of arable land; and significant deposits of largely untapped minerals. A World Bank indicator measuring trading across borders ranked Nigeria 183 out of 185 countries in 2017, although different governments in the past made efforts through interventions such as the 2007 ports reforms led by Ngozi Okonjo-Iweala, a former Minister of Finance and World Bank managing director, to resolve some of the challenges, such as infrastructure shortcomings, policy and regulatory inconsistencies, and overlapping and duplicated roles among the different MDAs. The intelligent transport system (ITS) is one of the fundamental structures smart nations and cities are using to improve the living conditions of their citizens and achieve sustainability. Examples of its benefits include tracking high-pedestrian areas, traffic patterns and railway stations, and planning and scheduling bus times; it also enhances interoperability, creates alerts on transport situations and can swiftly share information among different platforms and transport modes. It also offers a comprehensive approach to risk management, putting emergency procedures and response capabilities in place and identifying dangers, including vandalism or violence, fare evasion, and medical emergencies. The Nigerian transport system is urgently in need of modern infrastructure such as ITS. Smart city transport technology helps cities to function productively, while improving services for businesses and the lives of citizens. 
This technology has the ability to improve travel across traditional modes of transport, such as cars and buses, with immediate benefits for city dwellers, and it also helps in managing conditions such as dangerous weather, heavy traffic, and unsafe speeds that can result in accidents and loss of lives. Intelligent transportation systems help in traffic control, for example by permitting traffic lights to react to changing traffic patterns instead of working on a fixed schedule. Intelligent transportation systems are very important to Nigeria’s transportation sector and would require trained personnel to drive their efficiency to greater heights, because the purpose of introducing them is to add value and, at the same time, reduce motor vehicle miles and traffic congestion, which is a major challenge around Tin Can Island and Apapa Port, a major transportation hub in Nigeria. There is an urgent need for the federal government, state governments and houses of assembly to organise a national transportation workshop to begin the process of addressing the challenges in the nation’s transport sector, and to sponsor bills that will facilitate the implementation of policies promoting intelligent transportation systems, given their potential to create thousands of jobs for citizens, provide farmers with better access to cities, and deliver better living conditions for Nigerians.

Keywords: intelligent, transport, system, Nigeria

Procedia PDF Downloads 100
105 A Multimodal Discourse Analysis of Gender Representation on Health and Fitness Magazine Cover Pages

Authors: Nashwa Elyamany

Abstract:

In visual cultures, namely that of the United States, media representations are such influential and pervasive reflections of societal norms and expectations that they shape the manner in which both genders view themselves. Health and fitness magazines fall within the realm of visual culture. Since the main goal of communication is to ensure proper dissemination of information so that the target audience grasps the intended messages, it is imperative that magazine publishers, editors, advertisers and image producers use the different modes of communication within their reach to convey messages to their readers and viewers. A rapidly waxing flow of multimodality floods popular discourse, particularly health and fitness magazine cover pages. The use of well-crafted cover lines and visual images is imbued with agendas, consumerist ideologies and properties capable of effectively conveying implicit and explicit meaning to potential readers and viewers. In essence, the primary goal of this thesis is to interrogate the multi-semiotic operations and manifestations of hegemonic masculinity and femininity in male and female body culture, particularly on the cover pages of the twin American magazines Men's Health and Women's Health, using corpora spanning from 2011 to mid-2016. The researcher explores the semiotic resources that contribute to shaping and legitimizing a new form of postmodern, consumerist, gendered discourse that positions the reader-viewer ideologically. Methodologically, the analysis is carried out on the macro and micro levels: on the macro level, the researcher takes a critical stance to illuminate the ideological nature of the multimodal ensemble of the cover pages, and, on the micro level, seeks to put forward new theoretical and methodological routes through which the semiotic choices invested in the media texts can be more objectively scrutinized. 
On the macro level, a 'themes' analysis is initially conducted to isolate the overarching themes that dominate the fitness discourse on the cover pages under study. It is argued that variation in the frequencies of these themes indicates, broadly speaking, which facets of hegemonic masculinity and femininity are infused into the fitness discourse on the cover pages. On the micro level, this research work encompasses three sub-levels of analysis. The researcher follows an SF-MMDA approach, drawing on a trio of analytical frameworks: Halliday's SFG for the verbal analysis; Kress & van Leeuwen's VG for the visual analysis; and CMT, in relation to Sperber & Wilson's RT, for the pragma-cognitive analysis of multimodal metaphors and metonymies. The data are presented as detailed descriptions in conjunction with frequency tables, ANOVA with alpha = 0.05, and MANOVA in the multiple phases of analysis. Insights and findings from this multi-faceted, social-semiotic analysis are interpreted in light of Cultivation Theory, Self-objectification Theory and the literature to date. Implications for future research include the implementation of a multi-dimensional approach whereby linguistic and visual analytical models are deployed with special regard to cultural variation.

Keywords: gender, hegemony, magazine cover page, multimodal discourse analysis, multimodal metaphor, multimodal metonymy, systemic functional grammar, visual grammar

Procedia PDF Downloads 327
104 Monte Carlo Risk Analysis of a Carbon Abatement Technology

Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele

Abstract:

Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959 and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted from power plants, one of which is the less-researched advanced zero emission power plant (AZEP). The advanced zero emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process, first introduced in 1899 when Walther Hermann Nernst investigated electric current between metals and solutions: he found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In a bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low-carbon cycle known as the AZEP cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway, and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP has drawn a lot of attention because of its ability to capture ~100% of CO2; it also boasts an estimated 30-50% cost reduction compared to other carbon abatement technologies, a smaller efficiency penalty than its counterparts, and almost zero NOx emissions due to very low nitrogen concentrations in the working fluid. The advanced zero emission power plant differs from a conventional gas turbine in that its combustor is substituted with the mixed conductive membrane reactor (MCM reactor). 
The MCM reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air pre-heater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the higher temperature also facilitates oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to its inlet. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air passing through it. The AZEP cycle model was developed in Fortran, and the economic analysis was conducted using Excel and MATLAB, followed by an optimization case study. This paper discusses a techno-economic and Monte Carlo risk analysis of four possible layouts of the AZEP cycle: the simple bleed gas heat exchange layout (100% CO2 capture); the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture); the pre-expansion reheating (sequential burning) layout, AZEP 85% (85% CO2 capture); and the pre-expansion reheating (sequential burning) layout with flue gas turbine, AZEP 85% (85% CO2 capture).
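A minimal sketch of the kind of Monte Carlo risk analysis described can be given by sampling uncertain inputs and inspecting the spread of a cost metric; the toy cost model, the distributions for fuel price, post-capture efficiency and CO2 credit, and all numbers below are invented for illustration and are not the authors' model.

```python
import random

# Toy Monte Carlo risk analysis: propagate uncertainty in a few inputs
# through a simple cost proxy and summarise the output distribution.
# Every distribution and constant here is an assumption for illustration.

random.seed(42)

def cost_of_electricity(fuel_price, efficiency, co2_credit):
    """Toy levelised-cost proxy: fuel cost scaled by cycle efficiency,
    minus a credit for captured CO2 (arbitrary monetary units)."""
    return fuel_price / efficiency - co2_credit

samples = []
for _ in range(10_000):
    fuel = random.gauss(30.0, 3.0)              # fuel price, assumed normal
    eta = random.uniform(0.44, 0.50)            # efficiency after capture penalty
    credit = random.triangular(2.0, 6.0, 4.0)   # CO2 credit, triangular
    samples.append(cost_of_electricity(fuel, eta, credit))

samples.sort()
mean = sum(samples) / len(samples)
p90 = samples[int(0.9 * len(samples))]  # 90th-percentile cost (a risk measure)
print(round(mean, 1), round(p90, 1))
```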

Keywords: gas turbine, global warming, green house gases, power plants

Procedia PDF Downloads 457
103 Soil Composition in Different Agricultural Crops under Application of Swine Wastewater

Authors: Ana Paula Almeida Castaldelli Maciel, Gabriela Medeiros, Amanda de Souza Machado, Maria Clara Pilatti, Ralpho Rinaldo dos Reis, Silvio Cesar Sampaio

Abstract:

Sustainable agricultural systems are crucial to ensuring global food security and the long-term production of nutritious food. Comprehensive soil and water management practices, including nutrient management, balanced fertilizer use, and appropriate waste management, are essential for sustainable agriculture. Swine wastewater (SWW) treatment has become a significant focus due to environmental concerns related to heavy metals, antibiotics, resistant pathogens, and nutrients. In South America, small farms use soil to dispose of animal waste, a practice that is expected to increase with global pork production. The potential of SWW as a nutrient source is promising, contributing to global food security, nutrient cycling, and mineral fertilizer reduction. Short- and long-term studies have evaluated the effects of SWW on soil and plant parameters, such as nutrients, heavy metals, organic matter (OM), cation exchange capacity (CEC), and pH. Although promising results have been observed in short- and medium-term applications, long-term applications require more attention due to heavy metal concentrations. Organic soil amendment strategies, due to their economic and ecological benefits, are commonly used to reduce the bioavailability of heavy metals. However, the rate of degradation and initial levels of OM must be monitored to avoid changes in soil pH and release of metals. The study aimed to evaluate the long-term effects of SWW application on soil fertility parameters, focusing on calcium (Ca), magnesium (Mg), and potassium (K), in addition to CEC and OM. Experiments were conducted at the Universidade Estadual do Oeste do Paraná, Brazil, using 24 drainage lysimeters over nine years, with different application rates of SWW and mineral fertilization. Principal Component Analysis (PCA) was then conducted to derive composite variables, known as principal components (PCs), and to limit the dimensionality to be evaluated. 
The retained PCs were then correlated with the original variables to identify the level of association between each variable and each PC. Data were interpreted using Analysis of Variance (ANOVA) for general linear models (GLM). As OM was not measured in the 2007 soybean experiment, it was assessed separately from the PCA to avoid loss of information. The PCA and ANOVA indicated that crop type, SWW, and mineral fertilization significantly influenced soil nutrient levels. Soybeans showed higher Ca and Mg concentrations and higher CEC. The application of SWW influenced K levels, with higher concentrations observed with SWW from biodigesters and at higher doses of swine manure. Variability in nutrient concentrations in SWW due to factors such as animal age and feed composition makes standard recommendations challenging. OM levels increased in SWW-treated soils, improving soil fertility and structure. In conclusion, the application of SWW can increase soil fertility and crop productivity while reducing environmental risks. However, careful management and long-term monitoring are essential to optimize benefits and minimize adverse effects.
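To illustrate the PCA step in miniature, the sketch below runs a two-variable PCA on invented Ca and Mg values, using the closed-form eigenvalues of a 2x2 covariance matrix; the real study used many variables and standard statistical software.

```python
import math

# Toy 2-variable PCA in the spirit of summarising correlated soil-fertility
# variables (e.g. Ca and Mg) into principal components. Data are invented.

ca = [4.1, 5.0, 3.8, 6.2, 5.5, 4.7]   # hypothetical Ca values (cmolc/dm3)
mg = [1.2, 1.6, 1.1, 2.0, 1.8, 1.4]   # hypothetical Mg values (cmolc/dm3)

def covariance(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

# 2x2 covariance matrix [[sxx, sxy], [sxy, syy]]
sxx, syy, sxy = covariance(ca, ca), covariance(mg, mg), covariance(ca, mg)

# Eigenvalues of a symmetric 2x2 matrix in closed form; these are the
# variances carried by the two principal components.
trace, det = sxx + syy, sxx * syy - sxy * sxy
lam1 = trace / 2 + math.sqrt(trace * trace / 4 - det)
lam2 = trace / 2 - math.sqrt(trace * trace / 4 - det)

explained = lam1 / (lam1 + lam2)  # share of total variance on PC1
print(round(explained, 3))
```

With strongly correlated inputs like these, PC1 captures nearly all the variance, which is exactly why PCA is useful for limiting the dimensionality to be evaluated.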

Keywords: contamination, water research, biodigester, nutrients

Procedia PDF Downloads 37
102 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration

Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure to energise the bottom hole tooling, excessive quantities of high-quality drilling fluid (water) would be required for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling has been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool life, the percussive bottom hole assembly requires high-quality fluid with minimal solids loading; any recycled fluid must have a solids cut point below 40 microns and a solids concentration below 400 ppm before it can be used to re-energise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for particle size distributions. These data are critical in optimising solids control strategies and cuttings dewatering techniques. The optimisation of deployable solids control equipment is discussed, including how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings.
Key results were the successful pre-aggregation of fines through the selection and use of high molecular weight anionic polyacrylamide flocculants and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub-40-micron solids loading within prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. Xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle-laden returned drilling fluid be used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency introduced into cuttings representativity by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data on the biasing of sample geochemical signatures by particle size distribution are presented, showing that, depending on the solids control and dewatering techniques used, these techniques can unduly influence top-of-hole analysis. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling is presented, contributing towards the successful advancement of coiled tubing based greenfields mineral exploration.
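The power-law relationship reported for the cuttings' particle size distributions can be fitted by ordinary least squares in log-log space. The sketch below uses hypothetical sieve data (the sizes and percent-passing values are assumptions, not the paper's measurements) to estimate the fines fraction below the 40-micron cut point.

```python
import numpy as np

# Hypothetical sieve data: particle size (microns) and cumulative
# percent passing -- illustrative values, not measured cuttings data.
size = np.array([20, 40, 75, 150, 300, 600, 1180], dtype=float)
passing = np.array([2.0, 5.0, 12.0, 28.0, 55.0, 82.0, 98.0])

# A power law P = a * d**b is linear in log-log space:
# log P = log a + b * log d, so fit it with a first-degree polyfit.
b, log_a = np.polyfit(np.log(size), np.log(passing), 1)
a = np.exp(log_a)

# Estimate the cumulative fines fraction below the 40-micron cut point.
fines_pct = a * 40.0 ** b
print(f"exponent b = {b:.2f}, fines below 40 um ~ {fines_pct:.1f}%")
```

The fitted exponent characterises how rapidly fines accumulate towards the cut point, which is the kind of information used to size flocculant dosing and centrifuge throughput.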

Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control

Procedia PDF Downloads 236
101 Traumatic Brain Injury Neurosurgical Care Continuum Delays in Mulago Hospital in Kampala Uganda

Authors: Silvia D. Vaca, Benjamin J. Kuo, Joao Ricardo Nickenig Vissoci, Catherine A. Staton, Linda W. Xu, Michael Muhumuza, Hussein Ssenyonjo, John Mukasa, Joel Kiryabwire, Henry E. Rice, Gerald A. Grant, Michael M. Haglund

Abstract:

Background: Patients with traumatic brain injury (TBI) can develop rapid neurological deterioration from swelling and intracranial hematomas, which can result in focal tissue ischemia, brain compression, and herniation. Moreover, delays in management increase the risk of secondary brain injury from hypoxemia and hypotension. Therefore, in TBI patients with subdural hematomas (SDHs) and epidural hematomas (EDHs), surgical intervention is both necessary and time sensitive. Significant delays are seen along the care continuum in low- and middle-income countries (LMICs), largely due to limited healthcare capacity to address the disproportionately high rates of TBI in Sub-Saharan Africa (SSA). While many LMICs have subsidized systems to offset surgical costs, the burden on patients of securing funds for medications, supplies, and CT diagnostics poses a significant challenge to timely surgical intervention. In Kampala, Uganda, the challenge of obtaining timely CT scans is twofold: logistical and financial barriers. These bottlenecks contribute significantly to care continuum delays and are associated with poor TBI outcomes. Objective: The objectives of this study are to 1) describe the temporal delays through a modified three delays model that fits the context of neurosurgical interventions for TBI patients in Kampala and 2) investigate the association between delays and mortality. Methods: Prospective data were collected for 563 TBI patients presenting to a tertiary hospital in Kampala from 1 June to 30 November 2016. Four time intervals were constructed from five time points: injury, hospital arrival, neurosurgical evaluation, CT results, and definitive surgery. Time interval differences among mild, moderate, and severe TBI and their association with mortality were analyzed.
Results: The mortality rate of all TBI patients presenting to Mulago National Referral Hospital (MNRH) was 9.6%, ranging from 4.7% for mild and moderate TBI patients receiving surgery to 81.8% for severe TBI patients who failed to receive surgery. The duration from injury to surgery varied considerably across TBI severity, with the largest gap seen between mild TBI (174 hours) and severe TBI (69 hours) patients. Further analysis revealed care continuum differences for intervals 3 (neurosurgical evaluation to CT result) and 4 (CT result to surgery) between severe TBI patients (7 hours for interval 3 and 24 hours for interval 4) and mild TBI patients (19 hours for interval 3 and 96 hours for interval 4). These post-arrival delays were associated with mortality for mild (p=0.05) and moderate TBI (p=0.03) patients. Conclusions: To our knowledge, this is the first analysis using a modified 'three delays' framework to analyze the care continuum of TBI patients in Uganda from injury to surgery. We found significant associations between delays and mortality for mild and moderate TBI patients. As it currently stands, poorer outcomes were observed for those mild and moderate TBI patients who were managed non-operatively or failed to receive surgery while surgical services were shunted to more severely ill patients. Although this triage was well intentioned, high mortality rates were still observed for the severe TBI patients managed surgically. These results suggest the need for future research to optimize triage practices, understand delay contributors, and improve pre-hospital logistical referral systems.
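The interval construction described in the methods (four consecutive intervals across five time points) can be sketched with plain datetimes. The time points below are hypothetical values for a single illustrative patient, not study data.

```python
from datetime import datetime

# Hypothetical care-continuum time points for one TBI patient
# (names and times are illustrative, not study data).
timepoints = {
    "injury":     datetime(2016, 6, 1, 8, 0),
    "arrival":    datetime(2016, 6, 1, 14, 0),
    "neuro_eval": datetime(2016, 6, 2, 9, 0),
    "ct_result":  datetime(2016, 6, 3, 4, 0),
    "surgery":    datetime(2016, 6, 4, 4, 0),
}

# Four consecutive intervals across the five time points, in hours.
order = ["injury", "arrival", "neuro_eval", "ct_result", "surgery"]
intervals = {
    f"interval_{i + 1}":
        (timepoints[b] - timepoints[a]).total_seconds() / 3600
    for i, (a, b) in enumerate(zip(order, order[1:]))
}
print(intervals)
```

Computing these per patient, grouped by TBI severity, yields the interval comparisons (e.g. intervals 3 and 4) that the analysis associates with mortality.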

Keywords: care continuum, global neurosurgery, Kampala Uganda, LMIC, Mulago, traumatic brain injury

Procedia PDF Downloads 197
100 Geotechnical Challenges for the Use of Sand-sludge Mixtures in Covers for the Rehabilitation of Acid-Generating Mine Sites

Authors: Mamert Mbonimpa, Ousseynou Kanteye, Élysée Tshibangu Ngabu, Rachid Amrou, Abdelkabir Maqsoud, Tikou Belem

Abstract:

The management of mine wastes (waste rocks and tailings) containing sulphide minerals such as pyrite and pyrrhotite represents the main environmental challenge for the mining industry. Indeed, acid mine drainage (AMD) can be generated when these wastes are exposed to water and air. AMD is characterized by low pH and high concentrations of heavy metals, which are toxic to plants, animals, and humans. It affects the quality of the ecosystem through water and soil pollution. Different techniques involving soil materials can be used to control AMD generation, including impermeable covers (compacted clays) and oxygen barriers. The latter group includes covers with capillary barrier effects (CCBE), multilayered covers in which a moisture retention layer plays the role of an oxygen barrier. Once AMD is produced at a mine site, it must be treated so that the final effluent complies with regulations and can be discharged into the environment. Active neutralization with lime is one of the treatment methods used. This treatment produces sludge that is usually stored in sedimentation ponds. Other sludge management alternatives have been examined in recent years, including sludge co-disposal with tailings or waste rocks, disposal in underground mine excavations, and storage in technical landfill sites. Considering the ability of AMD neutralization sludge to maintain an alkaline to neutral pH for decades or even centuries, due to the excess alkalinity induced by residual lime within the sludge, valorization of sludge in specific applications could be an interesting management option. If done efficiently, the reuse of sludge could free up storage ponds and thus reduce the environmental impact. It should be noted that mixtures of sludge and soils could potentially constitute usable materials in CCBE for the rehabilitation of acid-generating mine sites, while sludge alone is not suitable for this purpose.
However, the high water content of the sludge (up to 300%), even after sedimentation, can constitute a geotechnical challenge. Adding lime to the mixtures can reduce the water content and improve the geotechnical properties. The objective of this paper is to investigate the impact of the sludge content (30, 40, and 50%) in sand-sludge mixtures (SSM) on their hydrogeotechnical properties (compaction, shrinkage behaviour, saturated hydraulic conductivity, and water retention curve). The impact of lime addition (dosages from 2% to 6%) on the moisture content, dry density after compaction, and saturated hydraulic conductivity of SSM was also investigated. Results showed that adding sludge to sand significantly improves the saturated hydraulic conductivity and water retention capacity, but shrinkage increased with sludge content. The dry density after compaction of lime-treated SSM increases with the lime dosage but remains lower than the optimal dry density of the untreated mixtures. The saturated hydraulic conductivity of lime-treated SSM after 24 hours of curing decreases by 3 orders of magnitude. Considering the hydrogeotechnical properties obtained with these mixtures, it would be possible to design CCBE whose moisture retention layer is made of SSM. Physical laboratory models confirmed the performance of such CCBE.

Keywords: mine waste, AMD neutralization sludge, sand-sludge mixture, hydrogeotechnical properties, mine site reclamation, CCBE

Procedia PDF Downloads 32