Search results for: pollution load
53 Targeting Peptide Based Therapeutics: Integrated Computational and Experimental Studies of Autophagic Regulation in Host-Parasite Interaction
Authors: Vrushali Guhe, Shailza Singh
Abstract:
Cutaneous leishmaniasis is a neglected tropical disease present worldwide, caused by the protozoan parasite Leishmania major. The therapeutic armamentarium for leishmaniasis shows several limitations, as the available drugs have toxic effects and the parasite is developing increasing resistance. Thus, the identification of novel therapeutic targets is of paramount importance. Previous studies have shown that autophagy, a cellular process, can either facilitate infection or aid in the elimination of the parasite, depending on the specific parasite species and host background in leishmaniasis. In the present study, our objective was to target the essential autophagy protein ATG8, which plays a crucial role in the survival, infection dynamics, and differentiation of the Leishmania parasite. ATG8 in Leishmania major and its homologue, LC3, in Homo sapiens act as autophagic markers. The present study demonstrated the crucial role of the ATG8 protein as a potential target for combating Leishmania major infection. Through bioinformatics analysis, we identified non-conserved motifs within the ATG8 protein of Leishmania major that are not present in LC3 of Homo sapiens. Against these two non-conserved motifs, we generated a library of 60 peptides on the basis of physicochemical properties. These peptides underwent a filtering process based on various parameters, including feasibility of synthesis and purification, compatibility with Selected Reaction Monitoring (SRM)/Multiple Reaction Monitoring (MRM), hydrophobicity, hydropathy index, average molecular weight (Mw average), monoisotopic molecular weight (Mw monoisotopic), theoretical isoelectric point (pI), and half-life. Further filtering using molecular docking and molecular dynamics simulations shortlisted three peptides. The direct interaction between ATG8 and the shortlisted peptides was confirmed through Surface Plasmon Resonance (SPR) experiments.
Notably, these peptides exhibited a remarkable ability to penetrate the parasite membrane and exert profound effects on Leishmania major. Treatment with these peptides significantly impacted parasite survival, leading to alterations in the cell cycle and morphology. Furthermore, the peptides were found to modulate autophagosome formation, particularly under starved conditions, suggesting their involvement in disrupting the regulation of autophagy within Leishmania major. In vitro studies demonstrated that the selected peptides effectively reduced the parasite load within infected host cells. Encouragingly, these findings were corroborated by in vivo experiments, which showed a reduction in parasite burden upon peptide administration. Additionally, the peptides were observed to affect the levels of LC3-II within host cells. In conclusion, our findings highlight the efficacy of these novel peptides in targeting Leishmania major’s ATG8 and disrupting parasite survival. These results provide valuable insights into the development of innovative therapeutic strategies against leishmaniasis via targeting the autophagy protein ATG8 of Leishmania major.
Keywords: ATG8, leishmaniasis, surface plasmon resonance, MD simulation, molecular docking, peptide designing, therapeutics
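The physicochemical filtering step described above can be illustrated with a minimal sketch. This is not the authors' pipeline: the sequences, thresholds, and the restriction to two of the listed properties (average molecular weight and the Kyte-Doolittle hydropathy index, GRAVY) are assumptions for illustration only.

```python
# Illustrative sketch of peptide-library filtering on physicochemical
# properties. Sequences and thresholds are hypothetical placeholders.

# Average residue masses (Da); one water molecule added per peptide for termini.
RESIDUE_MASS = {
    'A': 71.08, 'R': 156.19, 'N': 114.10, 'D': 115.09, 'C': 103.14,
    'E': 129.12, 'Q': 128.13, 'G': 57.05, 'H': 137.14, 'I': 113.16,
    'L': 113.16, 'K': 128.17, 'M': 131.19, 'F': 147.18, 'P': 97.12,
    'S': 87.08, 'T': 101.10, 'W': 186.21, 'Y': 163.18, 'V': 99.13,
}
WATER = 18.02

# Kyte-Doolittle hydropathy values per residue.
KD = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'E': -3.5, 'Q': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def mw_average(seq):
    """Average molecular weight of a peptide (Da)."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

def gravy(seq):
    """Grand average of hydropathy: mean Kyte-Doolittle value over the sequence."""
    return sum(KD[aa] for aa in seq) / len(seq)

def shortlist(peptides, max_mw=2000.0, max_gravy=0.5):
    """Keep peptides below an MW cap and not overly hydrophobic (hypothetical cutoffs)."""
    return [p for p in peptides
            if mw_average(p) <= max_mw and gravy(p) <= max_gravy]
```

In practice such a pass would be one of several filters, applied before the docking and simulation stages mentioned in the abstract.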
Procedia PDF Downloads 80
52 Clinical Presentation and Immune Response to Intramammary Infection of Holstein-Friesian Heifers with Isolates from Two Staphylococcus aureus Lineages
Authors: Dagmara A. Niedziela, Mark P. Murphy, Orla M. Keane, Finola C. Leonard
Abstract:
Staphylococcus aureus is the most frequent cause of clinical and subclinical bovine mastitis in Ireland. Mastitis caused by S. aureus is often chronic and tends to recur after antibiotic treatment. This may be due to several virulence factors, including attributes that enable the bacterium to internalize into bovine mammary epithelial cells, where it may evade antibiotic treatment, or to evade the host immune response. Four bovine-adapted lineages (CC71, CC97, CC151 and ST136) were identified among a collection of Irish S. aureus mastitis isolates. Genotypic variation of mastitis-causing strains may contribute to different presentations of the disease, including differences in milk somatic cell count (SCC), the main method of mastitis detection. The objective of this study was to investigate the influence of bacterial strain and lineage on the host immune response, employing cell culture methods in vitro as well as an in vivo infection model. Twelve bovine-adapted S. aureus strains were examined for internalization into bovine mammary epithelial cells (bMEC) and for their ability to induce an immune response from bMEC (using qPCR and ELISA). The in vitro studies found differences in a variety of virulence traits between the lineages. Strains from lineages CC97 and CC71 internalized into bMEC more efficiently than those from CC151 and ST136. CC97 strains also induced immune genes in bMEC more strongly than strains from the other three lineages. One strain each of CC151 and CC97 that differed in their ability to cause an immune response in bMEC were selected on the basis of the above in vitro experiments. Fourteen first-lactation Holstein-Friesian cows were purchased from two farms on the basis of low SCC (less than 50,000 cells/ml) and infection-free status. Seven cows were infected with 1.73 × 10² c.f.u. of the CC97 strain (Group 1) and another seven with 5.83 × 10² c.f.u. of the CC151 strain (Group 2).
The contralateral quarter of each cow was inoculated with PBS (vehicle). Clinical signs of infection (temperature, milk and udder appearance, milk yield) were monitored for 30 days. Blood and milk samples were taken to determine bacterial counts in milk, SCC, white blood cell populations and cytokines. Differences in disease presentation in vivo between the groups were observed: two animals from Group 2 developed clinical mastitis and required antibiotic treatment, while one animal from Group 1 did not develop an infection for the duration of the study. Fever (temperature > 39.5 °C) was observed in three animals from Group 2 and in none from Group 1. Significant differences in SCC and bacterial load between the groups were observed in the initial stages of infection (week 1). Data are also being collected on the cytokines and chemokines secreted during the course of infection. The results of this study suggest that a strain from lineage CC151 may cause more severe clinical mastitis, while a strain from lineage CC97 may cause mild, subclinical mastitis. Diversity between strains of S. aureus may therefore influence the clinical presentation of mastitis, which in turn may influence disease detection and treatment needs.
Keywords: bovine mastitis, host immune response, host-pathogen interactions, Staphylococcus aureus
Procedia PDF Downloads 157
51 Modeling and Energy Analysis of Limestone Decomposition with Microwave Heating
Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira
Abstract:
The energy transition is spurred by structural changes in energy demand, supply, and prices. Microwave technology was first proposed as a faster alternative for cooking food, after it was found that food heats almost instantly when interacting with high-frequency electromagnetic waves. The dielectric properties account for a material’s ability to absorb electromagnetic energy and dissipate this energy in the form of heat. Many energy-intensive industries could benefit from electromagnetic heating, since many of their raw materials are dielectric at high temperatures. Limestone, a sedimentary rock, is a dielectric material used intensively in the cement industry to produce unslaked lime. A 3D numerical model was implemented in COMSOL Multiphysics to study the continuous processing of limestone under microwave heating. The model solves the two-way coupling between the energy equation and Maxwell’s equations, as well as the coupling between the heat transfer and chemistry interfaces. Complementarily, a controller was implemented to optimize the overall heating efficiency and control the stability of the numerical model. It does so by continuously matching the cavity impedance and predicting the energy required by the system, avoiding energy inefficiencies. This controller was developed in MATLAB and successfully fulfilled all these goals. The influence of the limestone load on thermal decomposition and overall process efficiency was the main object of this study. The procedure considered the verification and validation of the chemical kinetics model separately from the coupled model. The chemical model was found to correctly describe the chosen kinetic equation, and the coupled model successfully solved the equations describing the numerical model. The interaction between the flow of material and the Poynting vector of the electric field was found to influence limestone decomposition, as a result of the low dielectric properties of limestone.
The numerical model considered this effect and took advantage of this interaction. The model proved highly unstable when solving non-linear temperature distributions: limestone has a dielectric loss response that increases with temperature and a low thermal conductivity, so it is prone to thermal runaway under electromagnetic heating, as well as to numerical instabilities. Five different scenarios were tested, considering material fill ratios of 30%, 50%, 65%, 80%, and 100%. Simulating the tube rotation for mixing enhancement proved beneficial and crucial for all loads considered: when a uniform temperature distribution is accomplished, the interaction between the electromagnetic field and the material is facilitated. The results pointed out the inefficient development of the electric field within the bed for the 30% fill ratio. The thermal efficiency showed a propensity to stabilize around 90% for loads higher than 50%. The process accomplished a maximum microwave efficiency of 75% for the 80% fill ratio, indicating that this is the optimal fill of the tube. Electric field peak detachment was observed for the case with a 100% fill ratio, explaining its lower efficiency compared to the 80% case. Microwave technology was thus demonstrated to be an important ally for the decarbonization of the cement industry.
Keywords: CFD numerical simulations, efficiency optimization, electromagnetic heating, impedance matching, limestone continuous processing
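The chemical kinetics verified separately in the study can be sketched in miniature. This is not the COMSOL model from the abstract: it assumes a generic first-order Arrhenius rate law for calcination (CaCO₃ → CaO + CO₂) at a fixed temperature, with hypothetical kinetic parameters, integrated by explicit Euler.

```python
import math

# Minimal calcination-kinetics sketch. A and EA are illustrative
# placeholders, not the parameters used in the study.
R = 8.314   # J/(mol K), universal gas constant
A = 1.0e7   # 1/s, pre-exponential factor (hypothetical)
EA = 1.8e5  # J/mol, activation energy (hypothetical)

def rate_constant(temp_k):
    """Arrhenius rate constant k(T) = A * exp(-Ea / (R T))."""
    return A * math.exp(-EA / (R * temp_k))

def conversion(temp_k, t_end, dt=0.1):
    """Integrate d(alpha)/dt = k(T) * (1 - alpha) from alpha = 0 by explicit Euler."""
    alpha, t = 0.0, 0.0
    k = rate_constant(temp_k)
    while t < t_end:
        alpha += dt * k * (1.0 - alpha)
        t += dt
    return min(alpha, 1.0)
```

The strong temperature sensitivity of k(T) is what makes the coupled electromagnetic problem prone to thermal runaway: a locally hotter region absorbs more power, reacts faster, and heats further.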
Procedia PDF Downloads 175
50 A Community Solution to Address Extensive Nitrate Contamination in the Lower Yakima Valley Aquifer
Authors: Melanie Redding
Abstract:
Historic widespread nitrate contamination of the Lower Yakima Valley aquifer in Washington State initiated a community-based effort to reduce nitrate concentrations to below drinking-water standards. This group commissioned studies on characterizing local nitrogen sources, deep soil assessments, drinking water, and nitrate concentrations at the water table. Nitrate is the most prevalent groundwater contaminant, with common sources including animal and human waste, fertilizers, plants and precipitation. It is challenging to address groundwater contamination when common sources, such as agriculture, on-site sewage systems, and animal production, are widespread. Remediation is not possible, so mitigation is essential. The Lower Yakima Valley covers over 175,000 acres, with a population of 56,000 residents. Approximately 25% of the population does not have access to safe, clean drinking water, and 20% of the population is at or below the poverty level. Agriculture is the primary economic land-use activity; irrigated agriculture and livestock production make up the largest percentage of acreage and of nitrogen load. Commodities include apples, grapes, hops, dairy, silage corn, triticale, alfalfa and cherries. These commodities are important to the economic viability of the residents of the Lower Yakima Valley, as well as of Washington State. Mitigation of nitrate in groundwater is challenging: the goal is to ensure everyone has safe drinking water, yet there are no easy remedies, due to the extent and pervasiveness of the contamination. Monitoring at the water table indicates that 45% of the 30 spatially distributed monitoring wells exceeded the drinking water standard, which shows that multiple sources are impacting water quality. Washington State has several areas with extensive groundwater nitrate contamination, and the groundwater in these areas continues to degrade over time.
However, the Lower Yakima Valley is succeeding in addressing this health issue for the following reasons: the community is engaged and committed; there is one common goal; there has been extensive public education and outreach to citizens; and credible data have been generated using sound scientific methods. Work in this area is continuing as an ambient groundwater monitoring network is established to assess the condition of the aquifer over time. Nitrate samples are being collected from 170 wells, spatially distributed across the aquifer. This research entails quarterly sampling for two years to characterize seasonal variability, continuing annually thereafter. This assessment will provide the data to statistically determine trends in nitrate concentrations across the aquifer over time. Thirty-three of these wells are monitoring wells that are screened across the aquifer; the water quality from these wells is indicative of activities at the land surface. Additional work is being conducted to identify land-use management practices that are effective in limiting nitrate migration through the soil column. Tracking nitrate in the soil column every season is an important component of bridging land-use practices with the fate and transport of nitrate through the subsurface. Patience, tenacity, and the ability to think outside the box are essential for dealing with widespread nitrate contamination of groundwater.
Keywords: community, groundwater, monitoring, nitrate
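The abstract does not name the statistical method for detecting trends in the well time series; one common choice for water-quality data is the Mann-Kendall test. The sketch below shows only its S statistic and a sign-based trend call (no significance test); the sample series are hypothetical quarterly readings for a single well.

```python
# Mann-Kendall S statistic sketch for a single well's nitrate series.
# This is an assumption about method choice, not the study's stated approach.

def mann_kendall_s(series):
    """S = sum of sign(x_j - x_i) over all pairs i < j."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

def trend_direction(series):
    """Crude trend call from the sign of S (a full test would also need the variance of S)."""
    s = mann_kendall_s(series)
    if s > 0:
        return "increasing"
    if s < 0:
        return "decreasing"
    return "no trend"
```

A rank-based test like this is attractive for groundwater data because it makes no distributional assumptions and tolerates the seasonal, non-normal behavior quarterly sampling is designed to capture.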
Procedia PDF Downloads 177
49 Environmental Impacts Assessment of Power Generation via Biomass Gasification Systems: Life Cycle Analysis (LCA) Approach for Tars Release
Authors: Grâce Chidikofan, François Pinta, A. Benoist, G. Volle, J. Valette
Abstract:
Statement of the problem: biomass gasification systems may be relevant for decentralized power generation from the recoverable agricultural and wood residues available in rural areas. In recent years, many systems have been implemented all over the world, especially in Cambodia and India. Although they have many positive effects, these systems can also affect the environment and human health. Indeed, during the process of biomass gasification, black wastewater containing tars is produced and generally discharged into the local environment, either into rivers or onto soil. However, in most environmental assessment studies of biomass gasification systems, the impact of these releases is underestimated, due to the difficulty of identifying their chemical substances. This work deals with the analysis of the environmental impacts of tars from wood gasification in terms of human toxicity cancer effect, human toxicity non-cancer effect, and freshwater ecotoxicity. Methodology: a Life Cycle Assessment (LCA) approach was adopted. The inventory of tar chemical substances was based on experimental data from a downdraft gasification system. Six samples were characterized, from two batches of raw material: one batch made of three wood species (oak, plane tree and pine) at 25% moisture content, and a second batch made of oak at 11% moisture content. The tests were carried out for different gasifier load rates, in the ranges 50-75% and 50-100% respectively. To choose the environmental impact assessment method, we compared the methods available in the SimaPro tool (8.2.0) that take into account most of the chemical substances. The environmental impacts for 1 kg of tars discharged were characterized with the ILCD 2011+ method (V.1.08). Findings: experimental results revealed 38 important chemical substances, in proportions varying from one test to another. Only 30 are characterized by the ILCD 2011+ method, which is one of the best-performing methods.
The results show that wood species and moisture content have no significant impact on the human toxicity non-cancer effect (HTNCE) and freshwater ecotoxicity (FWE) for release into water. For the human toxicity cancer effect (HTCE), a small gap is observed between the impact factors of the two batches: 3.08E-7 CTUh/kg against 6.58E-7 CTUh/kg. On the other hand, it was found that the risk of negative effects is higher when tars are released into water than onto soil, for all impact categories. Indeed, considering the set of samples, the average impact factor obtained for HTNCE varies from 1.64E-7 to 1.60E-8 CTUh/kg, and for HTCE it varies between 4.83E-07 CTUh/kg and 2.43E-08 CTUh/kg; the variability of these impact factors is relatively low for these two impact categories. Concerning FWE, the variability of the impact factor is very high: 1.3E+03 CTUe/kg for tars released into water against 2.01E+01 CTUe/kg for tars released onto soil. Concluding statement: the results of this study show that the environmental impacts of the tar emissions of biomass gasification systems can be substantial, and it is important to investigate ways to reduce them. For environmental research, these results represent an important step towards a global environmental assessment of the studied systems. They could be used to better manage the wastewater containing tars, so as to reduce as far as possible the impacts of the numerous systems still running all over the world.
Keywords: biomass gasification, life cycle analysis, LCA, environmental impact, tars
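The characterization step that produces such per-kilogram impact factors can be sketched simply: each substance in 1 kg of tars contributes its mass multiplied by a characterization factor (e.g. CTUh/kg for human toxicity in ILCD 2011+). The substance names, mass fractions, and factors below are hypothetical placeholders, not the study's data.

```python
# LCA characterization sketch: impact per kg of tars = sum of
# CF_i * m_i over substances with a known characterization factor.

def impact_per_kg(mass_fractions, char_factors):
    """Sum CF_i * m_i over substances present in both tables (per kg of tars)."""
    return sum(mass_fractions[s] * char_factors[s]
               for s in mass_fractions if s in char_factors)

# Hypothetical tar composition (kg substance per kg tars) and factors.
composition = {"phenol": 0.30, "naphthalene": 0.20, "unknown": 0.50}
ctu_factors = {"phenol": 1.0e-7, "naphthalene": 5.0e-7}

# Substances without a characterization factor ("unknown") drop out of the
# sum, mirroring the 8 of 38 substances not covered by ILCD 2011+.
```

This also makes visible why uncharacterized substances lead to underestimated impacts: their mass contributes nothing to the total.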
Procedia PDF Downloads 280
48 Company-Independent Standardization of Timber Construction to Promote Urban Redensification of Housing Stock
Authors: Andreas Schweiger, Matthias Gnigler, Elisabeth Wieder, Michael Grobbauer
Abstract:
Especially in the Alpine region, the areas available for new residential development are limited. One possible solution is to exploit the potential of existing settlements. Urban redensification, especially the addition of floors to existing buildings, requires efficient, lightweight constructions with short construction times. This topic is being addressed in the five-year Alpine Building Centre. The focus of this cooperation between Salzburg University of Applied Sciences and RSA GH Studio iSPACE is on transdisciplinary research in the fields of building and energy technology, building envelopes and geoinformation, as well as the transfer of research results to industry. One development objective is a wood panel construction system with a high degree of prefabrication, to optimize construction quality, construction time and applicability for small and medium-sized enterprises. The system serves as a reliable working basis for mastering the complex building task of redensification. The technical solution is the development of an open system in timber frame and solid wood construction, suitable for adding a maximum of two storeys to residential buildings. The applicability of the system is mainly influenced by the existing building stock; therefore, timber frame and solid timber construction are combined where necessary to bridge large spans of the existing structure while keeping the dead weight as low as possible. Escape routes are usually constructed in reinforced concrete and lie outside the system boundary. Thus, within the framework of the legal and normative requirements of timber construction, a hybrid construction method for redensification was created. The component structure, load-bearing structure and detail constructions are developed in accordance with the relevant requirements. The results are directly applicable in individual cases, with the exception of the required verifications.
In order to verify the practical suitability of the developed system, stakeholder workshops are held on the one hand, and on the other the system is applied in the planning of a two-storey extension. A company-independent construction standard offers the possibility of cooperation and of bundling capacities, in order to handle larger construction volumes in collaboration with several companies. Numerous further developments can take place on the basis of the system, which is under an open license. The construction system will support planners and contractors from design to execution. In this context, open means publicly published and freely usable and modifiable for one's own use, as long as the authorship and any deviations are mentioned. The companies are provided with a system manual, which contains the system description and an application manual. This manual will facilitate the selection of the correct component cross-sections for specific construction projects by means of the full set of component and detail specifications. This presentation highlights the initial situation, the motivation and the approach, but especially the technical solution as well as the possibilities for its application. After an explanation of the objectives and working methods, the component and detail specifications are presented as work results, together with their application.
Keywords: redensification, SME, urban development, wood building system
Procedia PDF Downloads 111
47 Analyzing the Heat Transfer Mechanism in a Tube Bundle Air-PCM Heat Exchanger: An Empirical Study
Authors: Maria De Los Angeles Ortega, Denis Bruneau, Patrick Sebastian, Jean-Pierre Nadeau, Alain Sommier, Saed Raji
Abstract:
Phase change materials (PCM) present attractive features that make them a passive solution for thermal comfort in buildings during summertime. They show a large storage capacity per unit volume in comparison with other structural materials like bricks or concrete. If their use is matched with the peak load periods, they can contribute to the reduction of the primary energy consumption related to cooling applications. Despite these promising characteristics, they present some drawbacks. Commercial PCMs, such as paraffins, offer a low thermal conductivity, affecting the overall performance of the system. In some cases, the material can be enhanced by adding other elements that improve the conductivity, but in general, a design of the unit that optimizes the thermal performance is sought. The material selection is the departure point of the design stage, and it does not leave much room for optimization: the PCM melting point depends strongly on the atmospheric characteristics of the building location, and the selected value must lie between the maximum and minimum temperatures reached during the day. The geometry of the PCM containers and their geometrical distribution are design parameters as well. They significantly affect the heat transfer, and therefore these phenomena must be studied exhaustively. During its lifetime, an air-PCM unit in a building must cool the space during the daytime, while the melting of the PCM occurs; at night, the PCM must be regenerated to be ready for the next use; and when the system is not in service, a minimal amount of thermal exchange is desired. These functions involve both sensible and latent heat storage and release, so different types of mechanisms drive the heat transfer phenomena. An experimental test was designed to study the heat transfer phenomena occurring in a circular tube bundle air-PCM exchanger.
An in-line arrangement was selected as the geometrical distribution of the containers. To allow visual observation, the container material and a section of the test bench were transparent. Instruments were placed on the bench for measuring temperature and velocity, and the PCM properties were obtained through differential scanning calorimetry (DSC) tests. The evolution of the temperature during both cycles, melting and solidification, was obtained. The results showed phenomena at a local level (tubes) and at an overall level (exchanger), with conduction and convection appearing as the main heat transfer mechanisms. From these results, two approaches to analyzing the heat transfer were followed. The first approach described the phenomena in a single tube as a series of thermal resistances, where purely conduction-controlled heat transfer was assumed in the PCM. For the second approach, the temperature measurements were used to find significant dimensionless numbers and parameters, such as the Stefan, Fourier and Rayleigh numbers, and the melting fraction. These approaches allowed us to identify the heat transfer phenomena during both cycles. The presence of natural convection during melting could be inferred from the influence of the Rayleigh number on the correlations obtained.
Keywords: phase change materials, air-PCM exchangers, convection, conduction
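The dimensionless groups used in the second analysis approach have standard definitions for melting-PCM problems, sketched below. The definitions are textbook forms; the property values used in the test are illustrative, not measured data from the study.

```python
# Standard dimensionless groups for a melting-PCM analysis.

def stefan_number(cp, delta_t, latent_heat):
    """Ste = cp * dT / L : ratio of sensible to latent heat."""
    return cp * delta_t / latent_heat

def fourier_number(alpha, t, length):
    """Fo = alpha * t / L^2 : dimensionless time for heat conduction."""
    return alpha * t / length**2

def rayleigh_number(g, beta, delta_t, length, nu, alpha):
    """Ra = g * beta * dT * L^3 / (nu * alpha) : buoyancy vs. diffusion."""
    return g * beta * delta_t * length**3 / (nu * alpha)
```

A Rayleigh number well above the critical value for the container geometry is the usual signal that natural convection, not pure conduction, governs the melt, which is consistent with the influence reported in the correlations.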
Procedia PDF Downloads 178
46 Snake Locomotion: From Sinusoidal Curves and Periodic Spiral Formations to the Design of a Polymorphic Surface
Authors: Ennios Eros Giogos, Nefeli Katsarou, Giota Mantziorou, Elena Panou, Nikolaos Kourniatis, Socratis Giannoudis
Abstract:
In the context of the postgraduate course Productive Design at the Department of Interior Architecture of the University of West Attica in Athens, under the guidance of Professors Nikolaos Kourniatis and Socratis Giannoudis, kinetic mechanisms with parametric models were examined for their further application in the design of objects. In the first phase, the students studied a motion mechanism chosen from daily experience and then analyzed its geometric structure in relation to the geometric transformations involved. In the second phase, the students modeled it parametrically in the Grasshopper3d algorithmic processor for Rhino and planned its application in an everyday object. For the project presented, our team began by studying the movement of living beings, specifically the snake. By studying the snake and the role that the environment plays in its movement, four basic typologies were recognized: serpentine, concertina, sidewinding and rectilinear locomotion, as well as the snake's ability to perform spiral formations. Most typologies are characterized by ripples, a series of sinusoidal curves. For the application of the snake movement in a polymorphic space divider, the use of a coil-type joint was studied. In Grasshopper, the desired motion of the polymorphic surface was simulated by applying a coil to a sinusoidal curve and to a spiral curve. It was important throughout the process that the points corresponding to the nodes of the real object remain constant in number, as well as the distances between them; the elasticity of the construction had to be achieved through a modular movement of the coil and not through an elastic element (material) at the nodes. Using a mesh (repeating coil), the whole construction is transformed into a supporting body and combines functionality with aesthetics. The set of elements functions as a vertical spatial network, where each element contributes to its coherence and stability.
Depending on the positions of the elements relative to the level of support, different perspectives are created in the visual perception of the adjacent space. For the implementation of the model at 1:3 scale (0.50 m × 2.00 m), the load-bearing structure studied has aluminum rods of Φ6 mm for the basic pillars and Φ2.50 mm for the secondary columns. The filling elements and nodes are of similar material and were made of MDF surfaces. During the design process, four trapezoidal patterns were selected, which function as filling elements, and to support their assembly a different facet was engraved on each. The nodes have holes through which the rods pass, while their connection point with the patterns has a half-carved recess; the patterns have a corresponding recess. The nodes are of two different types, depending on the column that passes through them. The patterns and nodes were designed to be cut and engraved using a laser cutter, and the patterns were attached to the nodes using glue. The parameters participate in the design as mechanisms that generate complex forms and structures through the repetition of constantly changing versions of the parts that compose the object.
Keywords: polymorphic, locomotion, sinusoidal curves, parametric
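The parametric idea tested in Grasshopper can be sketched outside that environment: sample a fixed number of node points along a sinusoidal guide curve, then wind a small coil (helix) along the axis. Keeping the node count constant mirrors the stated constraint that the real object's nodes stay fixed in number; all amplitudes, radii, and turn counts below are illustrative.

```python
import math

# Sketch of the sinusoid-plus-coil geometry; not the actual Grasshopper
# definition, whose dimensions are not given in the abstract.

def sinusoid_nodes(n, length=2.0, amplitude=0.25, periods=2):
    """n evenly spaced node points (x, y) on y = A * sin(2*pi*k*x / L)."""
    return [(x, amplitude * math.sin(2 * math.pi * periods * x / length))
            for x in (i * length / (n - 1) for i in range(n))]

def coil_points(n, radius=0.05, turns=10, length=2.0):
    """n points (x, y, z) of a helix of given radius wound along the x axis."""
    pts = []
    for i in range(n):
        t = i / (n - 1)
        angle = 2 * math.pi * turns * t
        pts.append((t * length,
                    radius * math.cos(angle),
                    radius * math.sin(angle)))
    return pts
```

In the actual project the coil would be wound along the sinusoid rather than a straight axis, but the sampling logic, a constant point count with parameterized spacing, is the same.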
Procedia PDF Downloads 105
45 Partnering With Key Stakeholders for Successful Implementation of Inhaled Analgesia for Specific Emergency Department Presentations
Authors: Sarah Hazelwood, Janice Hay
Abstract:
Methoxyflurane is an inhaled analgesic administered via a disposable inhaler, which has been used in Australia for 40 years for the management of pain in children & adults. However, there is a lack of data for methoxyflurane as a frontline analgesic medication within the emergency department (ED). This study investigated the usefulness of methoxyflurane in a private inner-city ED. The study concluded that the inclusion of all key stakeholders in the prescribing, administering & use of this new process led to comprehensive uptake & vastly positive outcomes for consumers & health professionals. Method: a 12-week prospective pilot study was completed utilizing patients presenting to the ED in pain (numeric pain rating score > 4) who fit the requirements for methoxyflurane use (as outlined in the Australian Prescriber information package). Nurses completed a formatted spreadsheet for each interaction where methoxyflurane was used. Patient demographics, day, time, initial numeric pain score, analgesic response time, reason for use, staff concerns (free text), patient feedback (free text), & discharge time were documented. When a clinical concern was raised, the researcher retrieved & reviewed the patient notes. Results: 140 methoxyflurane inhalers were used. 60% of patients were 31 years of age or over (n=82), with 16% aged 70+. The gender split was 51% male, 49% female. Trauma-related pain (57%) saw the highest use, with the evening hours (1500-2259) seeing the greatest numbers used (39%). Tuesday, Thursday & Sunday shared the highest daily use throughout the study. A minimum numeric pain score of 4/10 was recorded for 9% of patients (n=13), with moderate scores of 5-7/10 given by almost 50% of patients. In only 3 instances did pain scores increase after use of methoxyflurane; all other entries showed pain scores below the initial rating. Patients & staff noted an obvious analgesic response within 3 minutes of administration (n=96, 81%).
Nurses documented a change in patient vital signs for 4 of the 15 patient-related concerns; the remaining concerns were due to “gagging” on the taste or “having a coughing episode”; one patient tried to leave the department before the procedure was attended (very euphoric state). Upon review of the staff concerns, no adverse events had occurred & a return to therapeutic vitals occurred within 10 minutes. Length of stay was compared with similar presentations (such as dislocated shoulder or ankle fracture) & showed an average 40-minute decrease in time to discharge. Methoxyflurane treatment was rated positively by > 80% of patients, with the remaining feedback relating to mild & transient concerns. Staff similarly noted a positive response to methoxyflurane as an analgesic & as an added tool for frontline analgesic purposes. Conclusion: methoxyflurane should be used on suitable patient presentations requiring immediate, short-term pain relief. As a highly portable, non-narcotic avenue to treat pain, it showed obvious therapeutic benefit, positive feedback, & a shorter length of stay in the ED. By partnering with key stakeholders, this study determined that methoxyflurane use decreased workload, decreased wait time to analgesia, and increased patient satisfaction.
Keywords: analgesia, benefits, emergency, methoxyflurane
Procedia PDF Downloads 123

44 Analysis of Composite Health Risk Indicators Built at a Regional Scale and Fine Resolution to Detect Hotspot Areas
Authors: Julien Caudeville, Muriel Ismert
Abstract:
Analyzing the relationship between environment and health has become a major preoccupation for public health, as evidenced by the emergence of the French national plans for health and environment. These plans have identified two priorities: (1) to identify and manage geographic areas where hotspot exposures are suspected of generating a potential hazard to human health; (2) to reduce exposure inequalities. At the regional scale and the fine resolution required for exposure outcomes, environmental monitoring networks are not sufficient to characterize the multidimensionality of the exposure concept. In an attempt to increase the representativeness of spatial exposure assessment approaches, composite risk indicators can be built using additional available databases and theoretical frameworks for combining risk factors. To achieve these objectives, combining data processing and transfer modeling with a spatial approach is a fundamental prerequisite, which implies the need to first overcome several scientific limitations: defining the variables of interest and the indicators that can be built to describe the global source-to-effect chain; linking and processing data from different sources and different spatial supports; and developing adapted methods to improve the representativeness and resolution of spatial data. A GIS-based modeling platform for quantifying human exposure to chemical substances (PLAINE: environmental inequalities analysis platform) was used to build health risk indicators within the Lorraine region (France). Those indicators combine chemical substances (in soil, air, and water) and noise risk factors. Tools were developed using modeling, spatial analysis, and geostatistical methods to build and discretize the variables of interest from different supports and resolutions onto a 1 km² regular grid covering the Lorraine region.
For example, surface soil concentrations were estimated by developing a kriging method able to integrate areal and point spatial supports. Then, an exposure model developed by INERIS was used to assess the transfer from soil to individual exposure through ingestion pathways. We used the distance from polluted soil sites to build a proxy for contaminated sites. The air indicator combined modeled concentrations and estimated emissions to take into account 30 pollutants in the analysis. For water, drinking water concentrations were compared to drinking water standards to build a score spatialized using a map of water distribution service units. The Lden (day-evening-night) indicator was used to map noise around road infrastructures. Aggregation of the different risk factors was performed using different methodologies in order to discuss the impact of weighting and aggregation procedures on the effectiveness of risk maps for decision-making aimed at safeguarding citizens' health. The results make it possible to identify pollutant sources, determinants of exposure, and potential hotspot areas. A diagnostic tool was developed for stakeholders to visualize and analyze the composite indicators in an operational and accurate manner. The designed support system will be used in many applications and contexts: (1) mapping environmental disparities throughout the Lorraine region; (2) identifying vulnerable populations and determinants of exposure to set priorities and targets for pollution prevention, regulation, and remediation; (3) providing an exposure database to quantify relationships between environmental indicators and cancer mortality data provided by French Regional Health Observatories.
Keywords: health risk, environment, composite indicator, hotspot areas
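The sensitivity of a composite risk map to the weighting and aggregation rule can be sketched in a few lines. The factor scores, weights, and hotspot threshold below are invented for illustration and are not PLAINE outputs; the point is that a linear weighted sum and a geometric mean can rank the same grid cells differently.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1 km^2 grid cells with four risk-factor scores
# (soil, air, water, noise) -- values are illustrative only.
n_cells = 1000
factors = rng.random((n_cells, 4))

# Min-max normalization so every factor lies in [0, 1]
span = factors.max(axis=0) - factors.min(axis=0)
norm = (factors - factors.min(axis=0)) / span

weights = np.array([0.3, 0.3, 0.2, 0.2])  # assumed expert weights

# Two common aggregation rules whose choice affects the final map:
linear = norm @ weights                       # weighted arithmetic mean
geometric = np.prod(norm ** weights, axis=1)  # weighted geometric mean

# Hotspots: cells above the 95th percentile of the composite score
hotspots = np.flatnonzero(linear >= np.quantile(linear, 0.95))
print(len(hotspots))  # ~5% of cells flagged
```

The geometric mean penalizes cells with one very low factor, so the two rules flag partly different hotspot sets; this is exactly the aggregation-procedure impact the abstract proposes to discuss.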
43 Simple Finite-Element Procedure for Modeling Crack Propagation in Reinforced Concrete Bridge Deck under Repetitive Moving Truck Wheel Loads
Authors: Rajwanlop Kumpoopong, Sukit Yindeesuk, Pornchai Silarom
Abstract:
Modeling cracks in concrete is complicated by its strain-softening behavior, which requires the use of sophisticated energy criteria from fracture mechanics to assure stable and convergent solutions in finite-element (FE) analysis, particularly for relatively large structures. However, for small-scale structures such as beams and slabs, a simpler approach that relies on retaining some shear stiffness in the cracking plane has been adopted in the literature to model the strain-softening behavior of concrete under monotonically increasing loading. According to the shear-retention approach, each element is assumed to be an isotropic material prior to cracking of the concrete. Once an element cracks, the isotropic element is replaced with an orthotropic element whose new orthotropic stiffness matrix is formulated with respect to the crack orientation. A shear transfer factor of 0.5 is used parallel to the crack plane. The shear-retention approach is adopted in this research to model cracks in an RC bridge deck, with some modifications to take into account the effect of repetitive moving truck wheel loads, as they cause fatigue cracking of concrete. The first modification is the introduction of fatigue test data for concrete and reinforcing steel and the Palmgren-Miner linear criterion of cumulative damage into the conventional FE analysis. For a given loading, the number of cycles to failure of each concrete or RC element can be calculated from the fatigue (S-N) curves of concrete and reinforcing steel. The elements with the minimum number of cycles to failure are the failed elements. For the elements that do not fail, damage is accumulated according to the Palmgren-Miner linear criterion of cumulative damage. The stiffness of each failed element is modified, and the procedure is repeated until the deck slab fails. The total number of load cycles to failure of the deck slab can then be obtained, from which the S-N curve of the deck slab can be simulated.
The second modification concerns the shear transfer factor. Moving loads cause continuous rubbing of the crack interfaces, which greatly reduces the shear transfer mechanism. It is therefore conservatively assumed in this study that the analysis is conducted with a shear transfer factor of zero for the case of moving loading. A customized FE program was developed in MATLAB to accommodate these modifications. The developed procedure has been validated against the fatigue test of a 1/6.6-scale AASHTO bridge deck under both fixed-point repetitive loading and moving loading reported in the literature. Results show good agreement both between experimental and simulated S-N curves and between observed and simulated crack patterns. A significant contribution of the developed procedure is a series of S-N relations that can now be simulated at any desired level of cracking, in addition to the experimentally derived S-N relation at failure of the deck slab. This permits the systematic investigation of crack propagation, or deterioration, of RC bridge decks, which appears to be useful information for highway agencies seeking to prolong the life of their bridge decks.
Keywords: bridge deck, cracking, deterioration, fatigue, finite-element, moving truck, reinforced concrete
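The cumulative-damage step of the procedure can be sketched as follows: damage accumulates as D = Σ nᵢ/Nᵢ, and an element is considered failed when D ≥ 1. The S-N relation used here (an Aas-Jakobsen-type form with stress ratio R = 0) and the loading history are assumptions for illustration, not the paper's calibrated curves.

```python
def cycles_to_failure(stress_ratio, b=0.0685):
    """Assumed concrete S-N form: S = 1 - b*log10(N), so log10(N) = (1 - S)/b.
    stress_ratio S is the max stress divided by static strength."""
    return 10.0 ** ((1.0 - stress_ratio) / b)

def miner_damage(blocks):
    """Palmgren-Miner rule: blocks is a list of (stress_ratio, applied cycles);
    returns the accumulated damage D = sum(n_i / N_i)."""
    return sum(n / cycles_to_failure(s) for s, n in blocks)

# Hypothetical loading history for one concrete element
history = [
    (0.80, 300),      # 300 cycles at 80% of static strength
    (0.70, 10_000),   # 10,000 cycles at 70%
    (0.60, 100_000),  # 100,000 cycles at 60%
]
D = miner_damage(history)
print(f"accumulated damage D = {D:.3f}, failed: {D >= 1.0}")
```

In the FE procedure, an element whose D reaches 1 has its stiffness modified and the analysis is repeated; the sketch above is only the bookkeeping applied per element at each load block.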
42 Experimental Study on Granulated Steel Slag as an Alternative to River Sand
Authors: K. Raghu, M. N. Vathhsala, Naveen Aradya, Sharth
Abstract:
River sand is the most preferred fine aggregate for mortar and concrete. It is a product of natural weathering of rocks over millions of years and is mined from river beds. Sand mining has disastrous environmental consequences; the excessive mining of river beds is creating an ecological imbalance, which has led to restrictions on sand mining imposed by the ministry of environment. Driven by the acute need for sand, stone dust or manufactured sand, prepared by crushing and screening coarse aggregate, has been used as sand in the recent past. However, manufactured sand is also a natural material and has its own quarrying and quality issues. To reduce the burden on the environment, alternative materials to be used as fine aggregates are being extensively investigated all over the world. Considering the quantity required, quality, and properties, a global consensus has emerged on one material: granulated slag. Granulated slag has been proven suitable for replacing natural sand and crushed fine aggregates. In developed countries, the use of granulated slag as fine aggregate to replace natural sand is well established and in regular practice. In the present paper, granulated slag is investigated for use in mortar. Slags are the main by-products generated during iron and steel production in the steel industry. Over the past decades, steel production has increased and, consequently, the higher volumes of by-products and residues generated have driven the reuse of these materials in increasingly efficient ways. In recent years, new technologies have been developed to improve the recovery rates of slags. Increased slag recovery and use in different fields of application, such as cement making, construction, and fertilizers, helps preserve natural resources.
In addition to protecting the environment, these practices produce economic benefits by providing sustainable solutions that can allow the steel industry to achieve its ambitious target of "zero waste" in the coming years. Slags are generated at two different stages of steel production, iron making and steel making, known as BF (blast furnace) slag and steel slag, respectively. Slagging agents or fluxes, such as limestone, dolomite, and quartzite, are added to BF or steel-making furnaces in order to remove impurities from the ore, scrap, and other ferrous charges during smelting. Slag formation is the result of a complex series of physical and chemical reactions between the non-metallic charge (limestone, dolomite, fluxes), the energy sources (coal, coke, oxygen, etc.), and refractory materials. Because of the high temperatures (about 1500 °C) during their generation, slags do not contain any organic substances. Since slags are lighter than the liquid metal, they float and are easily removed. The slags protect the metal bath from the atmosphere and maintain its temperature. These slags are tapped in the liquid state and either solidified in air after dumping in a pit or granulated by impinging water systems. Generally, BF slags are granulated and used in cement making due to their high cementitious properties, while steel slags are mostly dumped due to unfavourable physico-chemical conditions. The increasing dumping of steel slag not only occupies a great deal of land but also wastes resources and can potentially have an impact on the environment through water pollution. Since BF slag contains little Fe, it can be used directly; it has found wide application in cement production, road construction, civil engineering works, fertilizer production, landfill daily cover, and soil reclamation outside the iron and steel making process.
Keywords: steel slag, river sand, granulated slag, environmental
41 Decision Making on Smart Energy Grid Development for Availability and Security of Supply Achievement Using Reliability Merits
Authors: F. Iberraken, R. Medjoudj, D. Aissani
Abstract:
The development of the smart grid concept is built around two separate definitions: the European one, oriented towards sustainable development, and the American one, oriented towards reliability and security of supply. In this paper, we investigate reliability merits that enable decision-makers to provide a high quality of service. The approach is based, on the one hand, on system behavior, using interruption and failure modeling and forecasting, and, on the other hand, on the contribution of information and communication technologies (ICT) to mitigating catastrophic events such as blackouts. It was found that this concept has been adopted by developing and emerging countries for short- and medium-term planning, followed by the sustainability concept for long-term planning. This work highlights reliability merits such as benefits, opportunities, costs, and risks, considered as consistent units for measuring power customer satisfaction. From the decision-making point of view, we use the analytic hierarchy process (AHP) to achieve customer satisfaction based on the reliability merits and the contribution of the available energy resources. Nowadays, fossil and nuclear resources certainly dominate energy production, but great advances have already been made toward cleaner ones. It was demonstrated that these resources are not only environmentally but also economically and socially sustainable. The paper is organized as follows: Section one is devoted to the introduction, where an implicit review of smart grid development is given for the two main concepts (for the USA and European countries). The AHP method and the BOCR (benefits, opportunities, costs, risks) development of reliability merits against power customer satisfaction are developed in section two. The benefits are expressed by the high level of availability, the applicability of maintenance actions, and power quality.
Opportunities are highlighted by the implementation of ICT in data transfer and processing, the mastering of peak demand control, the decentralization of production, and power system management under fault conditions. Costs are evaluated using cost-benefit analysis, including the investment expenditures in network security (the network becoming a target for hackers and terrorists) and the profits of operating as decentralized systems, with reduced energy not supplied, thanks to the availability of storage units fed from renewable resources and to power line communication (CPL) enabling the power dispatcher to manage load shedding optimally. Regarding risks, we raise the question of citizens' willingness to contribute financially to the system and to the restructuring of the utility. What is the degree of their agreement with the guarantees proposed by the managers about information integrity? From a technical point of view, do they have sufficient information and knowledge to operate a smart home within a smart system? In section three, the AHP method is applied to achieve power customer satisfaction with the main energy resources as alternatives, using knowledge from a country that is well advanced in its energy transition. Results and discussion are given in section four. We conclude that the choice of a given resource depends on the attitude of the decision maker (prudent, optimistic, or pessimistic), and that the status quo is neither sustainable nor satisfactory.
Keywords: reliability, AHP, renewable energy resources, smart grids
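The core AHP computation used in such a study can be sketched in a few lines: priority weights for the four BOCR merits are taken as the normalized principal eigenvector of a pairwise comparison matrix, and the judgments are checked with Saaty's consistency ratio. The comparison values below are invented for illustration, not the paper's judgments.

```python
import numpy as np

# Pairwise comparison matrix over Benefits, Opportunities, Costs, Risks
# (hypothetical Saaty-scale judgments; A[i][j] = importance of i over j)
A = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [1/2, 1.0, 2.0, 3.0],
    [1/3, 1/2, 1.0, 2.0],
    [1/4, 1/3, 1/2, 1.0],
])

# Priorities = normalized principal right eigenvector of A
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency: CI = (lambda_max - n) / (n - 1), CR = CI / RI
n = A.shape[0]
CI = (vals[k].real - n) / (n - 1)
RI = 0.90                      # Saaty's random index for n = 4
CR = CI / RI                   # CR < 0.1 means acceptably consistent
print(w.round(3), round(CR, 3))
```

In a full BOCR application these weights would then multiply the alternative scores (the candidate energy resources) computed the same way one level down the hierarchy.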
40 A Next-Generation Pin-On-Plate Tribometer for Use in Arthroplasty Material Performance Research
Authors: Lewis J. Woollin, Robert I. Davidson, Paul Watson, Philip J. Hyde
Abstract:
Introduction: In-vitro testing of arthroplasty materials is of paramount importance in ensuring that they can withstand the performance requirements encountered in-vivo. One common machine used for in-vitro testing is the pin-on-plate tribometer, an early-stage screening device that generates data on the wear characteristics of arthroplasty bearing materials. These devices test vertically loaded rotating cylindrical pins acting against reciprocating plates, representing the bearing surfaces. In this study, a pin-on-plate machine has been developed that provides several improvements over current technology, thereby advancing arthroplasty bearing research. Historically, pin-on-plate tribometers have been used to investigate the performance of arthroplasty bearing materials under conditions commonly encountered during a standard gait cycle; nominal operating pressures of 2-6 MPa and an operating frequency of 1 Hz are typical. There has been increased interest in using pin-on-plate machines to test more representative in-vivo conditions, due to the drive to test 'beyond compliance', as well as their testing speed and economic advantages over hip simulators. Current pin-on-plate machines do not accommodate the increased performance requirements associated with more extreme kinematic conditions; therefore, a next-generation pin-on-plate tribometer has been developed to bridge the gap between current technology and future research requirements. Methodology: The design was driven by several physiologically relevant requirements. Firstly, an increased loading capacity was essential to replicate the peak pressures that occur in the natural hip joint during running and chair-rising, as well as to improve understanding of wear rates in obese patients. Secondly, the introduction of mid-cycle load variation was essential, as it allows an approximation of the loads present in a gait cycle to be applied and enables testing of the fatigue properties of materials.
Finally, the rig must be validated against previous-generation pin-on-plate and arthroplasty wear data. Results: The resulting machine is a twelve-station device split into three sets of four stations, providing increased testing capacity compared to most current pin-on-plate tribometers. The loading of the pins is generated by a pneumatic system, which can produce contact pressures of up to 201 MPa on a 3.2 mm² round pin face. This greatly exceeds the contact pressures currently achievable in the literature and opens new research avenues, such as testing rim wear of mal-positioned hip implants. Additionally, the contact pressure of each set can be changed independently of the others, allowing multiple loading conditions to be tested simultaneously. Using pneumatics also allows the applied pressure to be switched on and off mid-cycle, another feature not currently reported elsewhere, which allows investigation of intermittent loading and material fatigue. The device is currently undergoing a series of validation tests using ultra-high-molecular-weight polyethylene pins and 316L stainless steel plates (polished to Ra < 0.05 µm). The operating pressures will be between 2-6 MPa at 1 Hz, allowing validation of the machine against results previously reported in the literature. The successful production of this next-generation pin-on-plate tribometer will, following its validation, unlock multiple previously unavailable research avenues.
Keywords: arthroplasty, mechanical design, pin-on-plate, total joint replacement, wear testing
39 Pre-conditioning and Hot Water Sanitization of Reverse Osmosis Membrane for Medical Water Production
Authors: Supriyo Das, Elbir Jove, Ajay Singh, Sophie Corbet, Noel Carr, Martin Deetz
Abstract:
Water is a critical commodity in the healthcare and medical field. The utility of medical-grade water spans from the washing of surgical equipment and drug preparation to key elements of life-saving therapy such as hydrotherapy and hemodialysis. Properly treated medical water reduces the bioburden load and mitigates the risk of infection, ensuring patient safety. However, any compromised condition during the production of medical-grade water can create a favorable environment for microbial growth, putting patient safety at high risk. Therefore, proper upstream treatment of medical water is essential before its application in the healthcare, pharma, and medical space. Reverse osmosis (RO) is one of the most preferred treatments within the healthcare industries and is recommended by all international pharmacopeias to achieve the quality level demanded by global regulatory bodies. The RO process can remove up to 99.5% of constituents from feed water sources, eliminating bacteria, proteins, and particles of 100 Dalton and above. The combination of RO with other downstream water treatment technologies, such as electrodeionization and ultrafiltration, meets the quality requirements of various pharmacopeia monographs for producing highly purified water or water for injection for medical use. In the reverse osmosis process, water from a liquid with a high concentration of dissolved solids is forced to flow through a specially engineered semi-permeable membrane to the low-concentration side, resulting in high-quality water. However, these specially engineered RO membranes need to be sanitized, either chemically or at high temperature, at regular intervals to keep the bioburden at the minimum required level. In this paper, we discuss DuPont's FilmTec heat sanitizable reverse osmosis (HSRO) membrane for the production of medical-grade water.
An HSRO element must be pre-conditioned prior to initial use by exposure to hot water (80-85 °C) to stabilize its performance and meet the manufacturer's specifications. Without pre-conditioning, the membrane shows variations in feed pressure operation and salt rejection. The paper discusses the critical variables of the pre-conditioning step that can affect the overall performance of the HSRO membrane and presents data supporting the need for pre-conditioning of HSRO elements. Our preliminary data suggest that there can be up to a 35% reduction in flow due to the initial heat treatment, which is also accompanied by an increase in salt rejection. The paper goes into detail about the fundamental understanding of the performance change of the HSRO membrane after the pre-conditioning step and its effect on the quality of the medical water produced. The paper also discusses another critical point: regular hot water sanitization of these HSRO membranes. Regular hot water sanitization (at 80-85 °C) is necessary to keep the membrane free of bioburden; however, it can negatively impact the performance of the membrane over time. We present several data points on hot water sanitization using FilmTec HSRO elements and challenge their robustness to produce quality medical water. The last part of this paper discusses the construction details of the FilmTec HSRO membrane and the features that make it suitable for pre-conditioning and sanitization at high temperatures.
Keywords: heat sanitizable reverse osmosis, HSRO, medical water, hemodialysis water, water for injection, pre-conditioning, heat sanitization
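The two routine performance figures tracked in this kind of study, observed salt rejection and relative flow change after pre-conditioning, follow from simple definitions. The conductivity and flow values below are illustrative placeholders, not FilmTec measurements.

```python
def salt_rejection(feed_cond, permeate_cond):
    """Observed rejection (%) from feed and permeate conductivity
    (or any consistent concentration measure)."""
    return (1.0 - permeate_cond / feed_cond) * 100.0

def flow_change(flow_after, flow_before):
    """Relative permeate-flow change (%), e.g. after heat pre-conditioning;
    negative values mean a flow reduction."""
    return (flow_after - flow_before) / flow_before * 100.0

# Hypothetical readings: 500 uS/cm feed, 2.5 uS/cm permeate
print(salt_rejection(500.0, 2.5))   # high-rejection membrane

# Hypothetical flows: 10.0 -> 6.5 L/min after initial heat treatment,
# matching the order of the ~35% reduction discussed above
print(flow_change(6.5, 10.0))
```

In practice both figures would be normalized for temperature and pressure before trending them across sanitization cycles; that correction is omitted here for brevity.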
38 Distribution System Modelling: A Holistic Approach for Harmonic Studies
Authors: Stanislav Babaev, Vladimir Cuk, Sjef Cobben, Jan Desmet
Abstract:
The procedures for performing harmonic studies of medium-voltage distribution feeders have been relatively mature since the early 1980s. The efforts of electric power engineers and researchers were mainly focused on handling large harmonic non-linear loads connected sparsely at a few buses of medium-voltage feeders. In order to assess the impact of these loads on the voltage quality of the distribution system, specific modeling and simulation strategies were proposed. These methodologies could deliver reasonable estimation accuracy given the requirements of minimal computational effort and reduced complexity. To uphold these requirements, certain analysis assumptions were made, which became de facto standards in guidelines for harmonic analysis. Typical assumptions include balanced study conditions and a negligible impact of the frequency-dependent impedance characteristics of various power system components. In the latter, skin and proximity effects are usually omitted, and resistance and reactance values are modeled based on theoretical equations. Further simplifications of the modeling routine have led to the commonly accepted practice of neglecting phase-angle diversity effects. This is mainly associated with the developed load models, which only in a handful of cases represent the complete harmonic behavior of a device or account for the harmonic interaction between grid harmonic voltages and harmonic currents. While these modeling practices proved reasonably effective at medium-voltage levels, similar approaches have been adopted for low-voltage distribution systems.
Given modern conditions, with the massive increase in residential electronic devices, the recent and ongoing boom in electric vehicles, and the large-scale installation of distributed solar power, the harmonics in today's low-voltage grids are characterized by a high degree of variability and demonstrate substantial diversity, leading to a certain level of cancellation effects. It is obvious that new modeling algorithms overcoming the previous assumptions have to be adopted. In this work, a simulation approach aimed at relaxing some of these typical assumptions is proposed. A practical low-voltage feeder is modeled in PowerFactory. In order to demonstrate the importance of the diversity effect and harmonic interaction, previously developed measurement-based models of a photovoltaic inverter and a battery charger are used as loads. A Python-based script supplying a varying background voltage distortion profile and the associated harmonic current response of the loads is used as the core of the unbalanced simulation. Furthermore, the impact of the uncertainty in the feeder frequency-impedance characteristics on total harmonic distortion levels is shown, along with scenarios involving linear resistive loads, which further alter the impedance of the system. The comparative analysis demonstrates significant differences from cases where all the assumptions are in place, and the results indicate that new modeling and simulation procedures need to be adopted for low-voltage distribution systems with high penetration of non-linear loads and renewable generation.
Keywords: electric power system, harmonic distortion, power quality, public low-voltage network, harmonic modelling
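The diversity (cancellation) effect described above can be illustrated by summing harmonic currents as phasors rather than as magnitudes, and then computing total harmonic distortion. The 3rd-harmonic magnitudes and angles below are invented numbers, not taken from the measurement-based PV-inverter and battery-charger models used in the paper.

```python
import numpy as np

def thd(magnitudes):
    """THD (%) relative to the fundamental; magnitudes[0] is the fundamental,
    the rest are harmonic magnitudes."""
    h = np.asarray(magnitudes, dtype=float)
    return np.sqrt(np.sum(h[1:] ** 2)) / h[0] * 100.0

# Two non-linear loads, 3rd-harmonic currents as complex phasors (A),
# with deliberately different phase angles (assumed values)
i3_pv = 1.2 * np.exp(1j * np.deg2rad(20))    # e.g. a PV inverter
i3_ev = 1.0 * np.exp(1j * np.deg2rad(170))   # e.g. a battery charger

arithmetic_sum = abs(i3_pv) + abs(i3_ev)     # no diversity: 2.2 A
phasor_sum = abs(i3_pv + i3_ev)              # with diversity: much smaller
print(round(phasor_sum, 3), "A vs", arithmetic_sum, "A")

# Feeder-level current THD with the diversity-reduced 3rd harmonic
# (10 A fundamental, assumed 2nd and 5th harmonic magnitudes)
print(round(thd([10.0, 0.8, phasor_sum, 0.3]), 2), "%")
```

Neglecting the phase angles (the "commonly accepted practice" criticized above) would use the arithmetic sum and so overestimate the 3rd-harmonic contribution to feeder THD.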
37 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level
Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni
Abstract:
In various sports and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating significant knee joint movement. In doing so, this joint is commonly the source of anterior knee pain related to instability of normal patellar tracking and excessive pressure syndrome. One well-observed explanation for the instability of normal patellar tracking is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanisms mediating ligament and tendon injuries can be of great help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems for the management of knee joint disorders. This damage mechanism, specifically under excessive mechanical loading, has been linked to the micro level of the fibered structure, precisely to the tropocollagen molecules and their cross-link density. We argue that defining a clear framework from the bottom (micro level) to the top (macro level) of the soft tissue hierarchy may elucidate the essential underpinnings of the state of ligament damage. To this end, in this study a multiscale fibril-reinforced hyperelastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stress/strain response of the patellofemoral ligaments and tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function was obtained through a coarse-graining procedure linking nanoscale collagen features to tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure.
The model was then implemented in a real structure of the patellofemoral ligaments and patellar tendon (OpenKnee) and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with coefficients of determination (R²) of 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon, respectively. The best prediction of the yield strength and strain compared with the reported experimental data was obtained when the cross-link density between the tropocollagen molecules of the fibril was equal to 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation in the patellofemoral ligaments was located at the femoral insertions, while damage in the patellar tendon occurred in the middle of the structure. These predicted findings show a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Damage initiation and propagation were also documented with this model, in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model ligaments from the bottom up, with damage predicted as a function of tropocollagen cross-link density. This approach appears more meaningful for a realistic simulation of a damage process or repair attempt than certain published studies.
Keywords: tropocollagen, multiscale model, fibrils, knee ligaments
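A minimal sketch of the two ingredients the framework couples: a fibril yield strength that grows with tropocollagen cross-link density, and a rule-of-mixtures tissue stress. The linear yield law and all constants below are assumptions for illustration; the paper's actual density function was obtained by coarse-grained molecular dynamics and is not reproduced here.

```python
def fibril_yield_mpa(beta, s0=5.0, k=6.0):
    """Assumed linear law: fibril yield strength (MPa) rising with
    cross-link density beta. s0 and k are illustrative constants."""
    return s0 + k * beta

def tissue_stress(phi_fibril, sigma_fibril, sigma_matrix):
    """Rule of mixtures: volume-fraction-weighted constituent stresses."""
    return phi_fibril * sigma_fibril + (1.0 - phi_fibril) * sigma_matrix

# Reported cross-link densities: ~5.5 (ligaments) vs 12 (patellar tendon);
# fibril volume fraction and matrix stress are hypothetical.
for name, beta in [("patellofemoral ligament", 5.5), ("patellar tendon", 12.0)]:
    sy = fibril_yield_mpa(beta)
    print(name, round(tissue_stress(0.7, sy, 2.0), 1), "MPa at fibril yield")
```

Under any monotonically increasing yield law of this kind, the higher cross-link density reported for the patellar tendon implies a stiffer, stronger tissue than the patellofemoral ligaments, which is the correlation the abstract highlights.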
36 Fair Federated Learning in Wireless Communications
Authors: Shayan Mohajer Hamidi
Abstract:
Federated Learning (FL) has emerged as a promising paradigm for training machine learning models on distributed data without the need for centralized data aggregation. In the realm of wireless communications, FL has the potential to leverage the vast amounts of data generated by wireless devices to improve model performance and enable intelligent applications. However, the fairness aspect of FL in wireless communications remains largely unexplored. This abstract presents an idea for fair federated learning in wireless communications, addressing the challenges of imbalanced data distribution, privacy preservation, and resource allocation. Firstly, the proposed approach aims to tackle the issue of imbalanced data distribution in wireless networks. In typical FL scenarios, the distribution of data across wireless devices can be highly skewed, resulting in unfair model updates. To address this, we propose a weighted aggregation strategy that assigns higher importance to devices with fewer samples during the aggregation process. By incorporating fairness-aware weighting mechanisms, the proposed approach ensures that each participating device's contribution is proportional to its data distribution, thereby mitigating the impact of data imbalance on model performance. Secondly, privacy preservation is a critical concern in federated learning, especially in wireless communications where sensitive user data is involved. The proposed approach incorporates privacy-enhancing techniques, such as differential privacy, to protect user privacy during the model training process. By adding carefully calibrated noise to the gradient updates, the proposed approach ensures that the privacy of individual devices is preserved without compromising the overall model accuracy. 
Moreover, the approach considers the heterogeneity of devices in terms of computational capabilities and energy constraints, allowing devices to adaptively adjust the level of privacy preservation to strike a balance between privacy and utility. Thirdly, efficient resource allocation is crucial for federated learning in wireless communications, as devices operate under limited bandwidth, energy, and computational resources. The proposed approach leverages optimization techniques to allocate resources effectively among the participating devices, considering factors such as data quality, network conditions, and device capabilities. By intelligently distributing the computational load, communication bandwidth, and energy consumption, the proposed approach minimizes resource wastage and ensures a fair and efficient FL process in wireless networks. To evaluate the performance of the proposed fair federated learning approach, extensive simulations and experiments will be conducted. The experiments will involve a diverse set of wireless devices, ranging from smartphones to Internet of Things (IoT) devices, operating in various scenarios with different data distributions and network conditions. The evaluation metrics will include model accuracy, fairness measures, privacy preservation, and resource utilization. The expected outcomes of this research include improved model performance, fair allocation of resources, enhanced privacy preservation, and a better understanding of the challenges and solutions for fair federated learning in wireless communications. The proposed approach has the potential to revolutionize wireless communication systems by enabling intelligent applications while addressing fairness concerns and preserving user privacy.
Keywords: federated learning, wireless communications, fairness, imbalanced data, privacy preservation, resource allocation, differential privacy, optimization
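A minimal sketch of the first two mechanisms described above: fairness-aware aggregation weights that favor devices with fewer samples, plus clipped, Gaussian-noised updates in the spirit of differential privacy. The inverse-size weight law, clipping threshold, and noise scale are our assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

def fair_aggregate(updates, sample_counts, clip=1.0, noise_std=0.01):
    """Aggregate per-device model updates (1-D vectors).

    Weights are proportional to 1/n_k, so small-data devices get MORE
    influence (the fairness-aware weighting sketched above); each update
    is norm-clipped and perturbed with Gaussian noise before aggregation.
    """
    counts = np.asarray(sample_counts, dtype=float)
    w = (1.0 / counts) / np.sum(1.0 / counts)   # inverse-size weights
    agg = np.zeros_like(updates[0])
    for wi, u in zip(w, updates):
        norm = np.linalg.norm(u)
        u = u * min(1.0, clip / max(norm, 1e-12))   # clip before noising
        agg += wi * (u + rng.normal(0.0, noise_std, size=u.shape))
    return agg

# Three devices with highly skewed data (10 vs 1000 samples each)
updates = [rng.normal(0.0, 0.1, 5) for _ in range(3)]
g = fair_aggregate(updates, [10, 1000, 1000])
print(g.shape)
```

Note the contrast with standard FedAvg, which weights by n_k and would let the two large devices dominate; here the 10-sample device receives roughly 98% of the weight, which is deliberately extreme and would in practice be tempered by the utility considerations the abstract mentions.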
Procedia PDF Downloads 75
35 Finite Element Analysis of Mini-Plate Stabilization of Mandible Fracture
Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski
Abstract:
The aim of the presented investigation is to identify possible mechanical issues of the mini-plate connection used to treat mandible fractures and to assess the impact of different factors on the stresses and displacements within the bone-stabilizer system. The mini-plate osteosynthesis technique is a common type of internal fixation in which metal plates are connected to the fractured bone parts by a set of screws. Two types of plate application methodology used by maxillofacial surgeons were investigated in this work; the patterns differ in the location and number of plates. The bone geometry was modeled on the basis of computed tomography scans of a hospitalized patient taken just after mini-plate application. The solid volume geometry, consisting of cortical and cancellous bone, was created from the resulting point cloud. The temporomandibular joint and muscle system were simulated to imitate the behavior of the real masticatory system. Meshing and finite element analysis were performed in ANSYS. To simulate realistic connection behavior, nonlinear contact conditions were used between the connecting elements and the bones. The influence of initial compression of the connected bone parts, or of a gap between them, was analyzed. Nonlinear material properties of the bone tissues and an elastic-plastic model of the titanium alloy were used. Three loading cases were investigated, each assuming a force of magnitude 100 N acting on the left molars, the right molars, or the incisors. The stress distribution within the connecting plate shows that compression of the bone parts in the connection results in high stress concentration in the plate and the screws; however, the maximum stress levels do not exceed the yield limit of the material (titanium). There are no significant differences between the negative offset (gap) and no-offset conditions. The location of the external force influences the magnitude of stresses around both the plate and the bone parts. 
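The yield check described above rests on comparing an equivalent stress against the titanium yield limit. As a minimal illustration of the von Mises criterion the study's post-processing relies on (the stress values below are made up, not results from the paper):

```python
import math

def von_mises(s11, s22, s33, s12, s23, s13):
    """Von Mises equivalent stress from the six components of the
    Cauchy stress tensor (MPa in, MPa out)."""
    return math.sqrt(0.5 * ((s11 - s22) ** 2 + (s22 - s33) ** 2 +
                            (s33 - s11) ** 2) +
                     3.0 * (s12 ** 2 + s23 ** 2 + s13 ** 2))

# Uniaxial tension: the equivalent stress equals the applied stress.
print(von_mises(100.0, 0, 0, 0, 0, 0))  # 100.0
# A combined state with shear raises the equivalent stress.
print(von_mises(100.0, 0, 0, 50.0, 0, 0) > 100.0)  # True
```

The computed value would then be compared against the yield strength of the titanium alloy (on the order of 800-900 MPa for Ti-6Al-4V, an assumed figure here).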
The two-plate system generally gives lower von Mises stress under the same loading than the one-plate approach. The von Mises stress distribution within the cortical bone shows a reduction of the high-stress field for the cases without compression (neutral initial contact). For initial prestressing, there is a visible, significant stress increase around the fixing holes at the bottom mini-plate due to the assembly stress. This local stress concentration may be the reason for bone destruction in those regions. The performed calculations prove that the bone-mini-plate system is able to properly stabilize the fractured mandible bone. There is a strong dependency between the mini-plate location and the stress distribution within the stabilizer structure and the surrounding bone tissue. The results (stresses within the bone tissues and within the devices, relative displacements of the bone parts at the interface) corresponding to different models of the connection provide a basis for the mechanical optimization of mini-plate connections. The results of the performed numerical simulations were compared to clinical observations. They provide information helpful for better understanding the load transfer in the mandible with the stabilizer and for improving stabilization techniques.
Keywords: finite element modeling, mandible fracture, mini-plate connection, osteosynthesis
Procedia PDF Downloads 246
34 Partial Discharge Characteristics of Free-Moving Particles in HVDC-GIS
Authors: Philipp Wenger, Michael Beltle, Stefan Tenbohlen, Uwe Riechert
Abstract:
The integration of renewable energy introduces new challenges to the transmission grid, as power generation is located far from load centers. The associated long-range power transmission increases the demand for high voltage direct current (HVDC) transmission lines and DC distribution grids. HVDC gas-insulated switchgears (GIS) are considered a key technology due to the combination of DC technology with the long operational experience of AC-GIS. To ensure the long-term reliability of such systems, insulation defects must be detected at an early stage. Operational experience with AC systems has provided evidence that most failures attributable to breakdowns of the insulation system can be detected and identified beforehand via partial discharge (PD) measurements. In AC systems, the identification of defects relies on the phase-resolved partial discharge pattern (PRPD). Since there is no phase information in DC systems, this method cannot be transferred to DC PD diagnostics. Furthermore, the behaviour of, e.g., free-moving particles differs significantly under DC: under the influence of a constant direct electric field, charge carriers can accumulate on particle surfaces. As a result, a particle can lift off, oscillate between the inner conductor and the enclosure, or bounce rapidly at just one electrode, which is known as firefly motion. Depending on the motion and the relative position of the particle to the electrodes, broadband electromagnetic PD pulses are emitted, which can be recorded by ultra-high frequency (UHF) measuring methods. PDs are often accompanied by light emission at the particle's tip, which enables optical detection. This contribution investigates the PD characteristics of free-moving metallic particles in a commercially available 300 kV SF6-insulated HVDC-GIS. The influences of various defect parameters on the particle motion and the PD characteristics are evaluated experimentally. 
Several particle geometries, such as cylinders, lamellae, spirals and spheres with different lengths, diameters and weights, are investigated. The applied DC voltage is increased stepwise from the inception voltage up to UDC = ±400 kV. Different physical detection methods are used simultaneously in a time-synchronized setup. Firstly, the electromagnetic waves emitted by the particle are recorded by a UHF measuring system. Secondly, a photomultiplier tube (PMT) detects light emission with wavelengths in the range of λ = 185…870 nm. Thirdly, a high-speed camera (HSC) tracks the particle's motion trajectory with high accuracy. Furthermore, an electrically insulated electrode is attached to the grounded enclosure and connected to a current shunt in order to detect low-frequency ion currents; the shunt measuring system's sensitivity is in the range of 10 nA at a measuring bandwidth of bw = DC…1 MHz. Currents of charge carriers generated at the particle's tip migrate through the gas gap to the electrode and can be recorded by the current shunt. All recorded PD signals are analyzed in order to identify characteristic properties of different particles, e.g. the repetition rates and amplitudes of successive pulses, characteristic frequency ranges, and the detected signal energy of single PD pulses. Concluding, an advanced understanding of the underlying physical phenomena of particle motion in a direct electric field can be derived.
Keywords: current shunt, free moving particles, high-speed imaging, HVDC-GIS, UHF
Procedia PDF Downloads 160
33 Multifield Problems in 3D Structural Analysis of Advanced Composite Plates and Shells
Authors: Salvatore Brischetto, Domenico Cesare
Abstract:
Major improvements in future aircraft and spacecraft could depend on an increasing use of conventional and unconventional multilayered structures embedding composite materials, functionally graded materials, piezoelectric or piezomagnetic materials, and soft foam or honeycomb cores. Layers made of such materials can be combined in different ways to obtain structures that are able to fulfill several structural requirements. The next generation of aircraft and spacecraft will be manufactured as multilayered structures under the action of a combination of two or more physical fields. In multifield problems for multilayered structures, several physical fields (thermal, hygroscopic, electric and magnetic) interact with each other with different levels of influence and importance. An exact 3D shell model is proposed here for these types of analyses. This model is based on a coupled system including the 3D equilibrium equations, the 3D Fourier heat conduction equation, the 3D Fick diffusion equation, and the electric and magnetic divergence equations. The set of partial differential equations, of second order in z, is written using a mixed curvilinear orthogonal reference system valid for spherical and cylindrical shell panels, cylinders and plates. The order of the partial differential equations is reduced to first order by doubling the number of variables. The solution in the thickness direction z is obtained by means of the exponential matrix method and the correct imposition of interlaminar continuity conditions in terms of displacements, transverse stresses, electric and magnetic potentials, temperature, moisture content and transverse normal multifield fluxes. The investigated structures have simply supported sides in order to obtain a closed-form solution in the in-plane directions. Moreover, a layerwise approach is proposed which allows a correct 3D description of multilayered anisotropic structures subjected to field loads. 
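The variable-doubling and exponential-matrix steps described above can be written schematically as follows. The notation is generic and illustrative, not the authors' exact formulation:

```latex
% Second-order system in the thickness coordinate z:
%   \mathbf{A}\,\mathbf{U}'' + \mathbf{B}\,\mathbf{U}' + \mathbf{C}\,\mathbf{U} = \mathbf{0}
% Doubling the variables, \mathbf{V} = (\mathbf{U},\,\mathbf{U}')^{T}, gives a first-order system:
\mathbf{V}' = \mathbf{D}\,\mathbf{V},
\qquad
\mathbf{D} =
\begin{pmatrix}
\mathbf{0} & \mathbf{I} \\
-\mathbf{A}^{-1}\mathbf{C} & -\mathbf{A}^{-1}\mathbf{B}
\end{pmatrix}
% whose solution across a layer of thickness h is the exponential matrix:
\mathbf{V}(h) = e^{\mathbf{D}h}\,\mathbf{V}(0),
\qquad
e^{\mathbf{D}h} = \sum_{n=0}^{\infty} \frac{(\mathbf{D}h)^{n}}{n!}
```

Interlaminar continuity conditions then link the state vector at the top of one layer to the bottom of the next, so the through-thickness solution becomes a product of layer transfer matrices.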
Several results will be proposed in tabular and graphical form to evaluate displacements, stresses and strains when mechanical loads, temperature gradients, moisture content gradients, electric potentials and magnetic potentials are applied at the external surfaces of the structures in steady-state conditions. When piezoelectric and piezomagnetic layers are included in the multilayered structures, so-called smart structures are obtained. In this case, a free vibration analysis in open- and closed-circuit configurations and a static analysis for sensor and actuator applications will be proposed. The proposed results will be useful for better understanding the physical and structural behaviour of multilayered advanced composite structures in the case of multifield interactions. Moreover, these analytical results could be used as reference solutions by scientists interested in the development of 3D and 2D numerical shell/plate models based, for example, on the finite element approach or on the differential quadrature methodology. The correct imposition of geometrical and load boundary conditions, interlaminar continuity conditions, and the description of the zigzag behaviour due to transverse anisotropy will also be discussed and verified.
Keywords: composite structures, 3D shell model, stress analysis, multifield loads, exponential matrix method, layerwise approach
Procedia PDF Downloads 67
32 Wind Turbine Scaling for the Investigation of Vortex Shedding and Wake Interactions
Authors: Sarah Fitzpatrick, Hossein Zare-Behtash, Konstantinos Kontis
Abstract:
Traditionally, the focus of horizontal axis wind turbine (HAWT) blade aerodynamic optimisation studies has been the outer working region of the blade. However, recent works seek to better understand, and thus improve upon, the performance of the inboard blade region to enhance power production, maximise load reduction and better control the wake behaviour. This paper presents the design considerations and characterisation of a wind turbine wind tunnel model devised to further the understanding and fundamental definition of horizontal axis wind turbine root vortex shedding and interactions. Additionally, the application of passive and active flow control mechanisms (vortex generators and plasma actuators) to allow for the manipulation and mitigation of unsteady aerodynamic behaviour at the blade inboard section is investigated. A static, modular blade wind turbine model has been developed for use in the University of Glasgow's de Havilland closed-return, low-speed wind tunnel. The model components, which comprise a half-span blade, hub, nacelle and tower, are scaled using the equivalent full-span radius, R, for appropriate Mach and Strouhal numbers, and to achieve a Reynolds number in the range of 1.7×10⁵ to 5.1×10⁵ for operational speeds up to 55 m/s. The half blade is constructed to be modular and fully dielectric, allowing for the integration of flow control mechanisms with a focus on plasma actuators. Investigations of root vortex shedding and the subsequent wake characteristics using qualitative methods (smoke visualisation, tufts and china clay flow) and quantitative methods (including particle image velocimetry (PIV), hot wire anemometry (HWA), and laser Doppler anemometry (LDA)) were conducted over a range of blade pitch angles from 0 to 15 degrees and a range of Reynolds numbers. 
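The Reynolds-number target quoted above follows from the standard chord-based definition. The sketch below is illustrative only: the 0.135 m reference length is an assumed value chosen to land near the abstract's upper bound, not a dimension from the model.

```python
def reynolds(velocity, length, nu=1.46e-5):
    """Chord-based Reynolds number Re = U*c/nu.

    nu defaults to the kinematic viscosity of air at roughly
    15 degrees C (m^2/s).
    """
    return velocity * length / nu

def strouhal(freq, length, velocity):
    """Strouhal number St = f*L/U, used alongside Re for dynamic scaling."""
    return freq * length / velocity

# Assumed 0.135 m reference chord at the tunnel's 55 m/s upper
# operational speed lands near the abstract's upper Re bound.
print(reynolds(55.0, 0.135))  # ≈ 5.1e5
```

Matching both Re and St exactly at model scale is generally impossible, which is why the abstract reports the achievable Re range rather than full similarity.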
This allowed for the identification of shed vortical structures from the maximum chord position, from the transitional region where the blade aerofoil blends into a cylindrical joint, and from the blade-nacelle connection. Analysis of the trailing vorticity interactions between the wake core and the freestream shows that vortex meander and diffusion are notably affected by the Reynolds number. It is hypothesized that the vorticity shed from the blade root region directly influences and exacerbates the nacelle wake expansion in the downstream direction. As the design of the inboard blade region is, by necessity, driven by function rather than aerodynamic optimisation, a study is undertaken on the application of flow control mechanisms to manipulate the observed vortex phenomena. The designed model allows for the effective investigation of shed vorticity and wake interactions, with a focus on the accurate geometry of a root region representative of small- to medium-power commercial HAWTs. The studies undertaken allow for an enhanced understanding of the interplay of shed vortices and their subsequent effect in the near and far wake. This highlights areas of interest within the inboard blade area for the potential use of passive and active flow control devices that produce a more desirable wake quality in this region.
Keywords: vortex shedding, wake interactions, wind tunnel model, wind turbine
Procedia PDF Downloads 235
31 Circular Nitrogen Removal, Recovery and Reuse Technologies
Authors: Lina Wu
Abstract:
The excessive discharge of nitrogen in sewage greatly intensifies the eutrophication of water bodies and threatens water quality, and nitrogen pollution control has become a global concern. The concentration of nitrogen in water is reduced by converting ammonia, nitrate and nitrite nitrogen into nitrogen-containing gas through biological treatment, physicochemical treatment and oxidation technologies. However, some wastewater with high ammonia nitrogen content, including landfill leachate, is difficult to treat by traditional nitrification and denitrification. The core of denitrification is that denitrifying bacteria convert the nitrite and nitrate produced by nitrification into nitrogen gas under anoxic conditions; however, the low carbon-to-nitrogen ratio of such wastewater does not meet the conditions for denitrification. Many studies have shown that naturally autotrophic anammox bacteria can combine nitrite and ammonia nitrogen without a carbon source through their functional genes to achieve total nitrogen removal, which is very suitable for removing nitrogen from leachate. In addition, the process saves considerable aeration energy compared with the traditional nitrogen removal process. Therefore, anammox plays an important role in nitrogen conversion and energy saving. Short-cut (partial) nitrification and denitrification coupled with anammox ensures total nitrogen removal and improves the removal efficiency, meeting society's need for an ecologically friendly and cost-effective nutrient removal treatment technology. In recent years, research has found that the algal-bacterial symbiotic system offers further advantages for water treatment, because this process not only helps to improve the efficiency of wastewater treatment but also allows carbon dioxide reduction and resource recovery. 
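The nitrogen conversions underlying the coupled process above can be summarized by two basic reactions. These are standard textbook stoichiometries (growth terms omitted), not values taken from the abstract:

```latex
% Partial nitritation: roughly half of the ammonium is oxidized to nitrite
\mathrm{NH_4^+ + 1.5\,O_2 \;\longrightarrow\; NO_2^- + H_2O + 2\,H^+}
% Anammox: ammonium is oxidized anaerobically with nitrite as the
% electron acceptor, with no organic carbon source required
\mathrm{NH_4^+ + NO_2^- \;\longrightarrow\; N_2 + 2\,H_2O}
```

Because only about half the ammonium requires aeration and no external organic carbon is consumed, the route saves both oxygen and carbon dosing relative to full nitrification-denitrification.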
Microalgae use carbon dioxide dissolved in water or released through bacterial respiration to produce oxygen for bacteria through photosynthesis under light, and bacteria, in turn, provide metabolites and inorganic carbon sources for the growth of microalgae; this may allow the algal-bacterial symbiotic system to save most or all of the aeration energy consumption. It has become a trend to make microalgae and light-avoiding anammox bacteria play synergistic roles by adjusting the light-to-dark ratio. Microalgae in the outer layer of the light-exposed granules block most of the light and provide cofactors and amino acids that promote nitrogen removal. In particular, Myxococcota MYX1 can degrade the extracellular proteins produced by microalgae, providing amino acids for the entire bacterial community, which helps anammox bacteria save metabolic energy and adapt to light. As a result, initiating and maintaining a process that combines dominant algae with anaerobic nitrogen-removing bacterial communities has great potential for treating landfill leachate. Chlorella shows an excellent removal effect and can withstand extreme environments of high ammonia nitrogen, high salinity and low temperature. It is therefore urgent to study whether an algae-sludge mixture rich in denitrifying bacteria and Chlorella can greatly improve the efficiency of landfill leachate treatment in an anaerobic environment where photosynthesis is stopped. The optimal dilution of simulated landfill leachate can be found by determining the treatment effect of the same batch of bacteria-algae mixtures under different initial ammonia nitrogen concentrations and comparing the results. 
High-throughput sequencing technology was used to analyze the changes in microbial diversity, related functional genera and functional genes under the optimal conditions, providing a theoretical and practical basis for the engineering application of the novel bacteria-algae symbiosis system in biogas slurry treatment and resource utilization.
Keywords: nutrient removal and recovery, leachate, anammox, partial nitrification, algae-bacteria interaction
Procedia PDF Downloads 40
30 Industrial Waste to Energy Technology: Engineering Biowaste as High Potential Anode Electrode for Application in Lithium-Ion Batteries
Authors: Pejman Salimi, Sebastiano Tieuli, Somayeh Taghavi, Michela Signoretto, Remo Proietti Zaccaria
Abstract:
The increasing growth of industrial waste due to large production quantities leads to numerous environmental and economic challenges, such as climate change, soil and water contamination, and human disease. Energy recovery from waste can be applied to produce heat or electricity; this strategy reduces the energy produced from coal or other fuels and directly reduces greenhouse gas emissions. Among different industries, leather manufacturing plays a very important role worldwide from a socio-economic point of view. Even though the leather industry uses a by-product of the meat industry as its raw material, it is considered an activity demanding integrated pollution prevention and control. Along the entire process from raw skins/hides to finished leather, a huge amount of solid and water waste is generated. Solid wastes include fleshings, raw trimmings, shavings, buffing dust, etc. One of the most abundant solid wastes generated throughout leather tanning is shaving waste. Leather shaving is a mechanical process that aims at reducing the tanned skin to a specific thickness before post-tanning and finishing. This waste consists mainly of collagen and tanning agent. At present, most of the world's leather processing is chrome-tanned; consequently, large amounts of chromium-containing shaving wastes need to be treated. The major concern in the management of this kind of solid waste is its chrome content, which makes conventional disposal methods, such as landfilling and incineration, impracticable. Therefore, many efforts have been made in recent decades to promote eco-friendly/alternative leather production and more effective waste management. 
Herein, shaving waste resulting from a metal-free tanning technology is proposed as a low-cost precursor for the preparation of carbon materials as anodes for lithium-ion batteries (LIBs). In line with the philosophy of reduced environmental impact, deionized water and carboxymethyl cellulose (CMC) have been used for preparing fully sustainable and environmentally friendly LIB anodes, as alternatives to the toxic/teratogenic N-methyl-2-pyrrolidone (NMP) and the biologically hazardous polyvinylidene fluoride (PVdF), respectively. Furthermore, moving towards reduced cost, we employed the water solvent and the fluoride-free bio-derived CMC binder together with LiFePO₄ (LFP) when a full cell was considered. These choices bring green LIBs closer to the 2030 goal of 100 $ kWh⁻¹. Besides, the preparation of the water-based electrodes does not require a controlled environment, and due to the higher vapour pressure of water in comparison with NMP, water-based electrode drying is much faster. This has an important consequence, namely reduced energy consumption for electrode preparation. The electrode derived from leather waste demonstrated a discharge capacity of 735 mAh g⁻¹ after 1000 charge and discharge cycles at 0.5 A g⁻¹. This promising performance is ascribed to the synergistic effect of defects, interlayer spacing, heteroatom doping (N, O, and S), high specific surface area, and the hierarchical micro/mesopore structure of the biochar. Interestingly, these features of activated biochars derived from the leather industry open the way for possible applications in other EESDs as well.
Keywords: biowaste, lithium-ion batteries, physical activation, waste management, leather industry
Procedia PDF Downloads 170
29 Integrated Approach Towards Safe Wastewater Reuse in Moroccan Agriculture
Authors: Zakia Hbellaq
Abstract:
The Mediterranean region is considered a hotspot for climate change. Morocco is a semi-arid Mediterranean country facing water shortages and poor water quality, and its limited water resources constrain the activities of various economic sectors. Most of Morocco's territory lies in arid and desert areas. The potential water resources are estimated at 22 billion m³, equivalent to about 700 m³/inhabitant/year, and Morocco is in a state of structural water stress. Strictly speaking, the Kingdom of Morocco is among the "very riskiest" countries according to the World Resources Institute (WRI), which calculates water stress risk for 167 countries. The WRI's results rank Morocco among the riskiest countries in terms of water scarcity, with a score of 3.89 out of 5, placing it 23rd out of 167 countries, which indicates that the demand for water exceeds the available resources. Agriculture, with a score of 3.89, is most affected by water stress from irrigation and places a heavy burden on the water table. Irrigation is an unavoidable technical need and has undeniable economic and social benefits given the available resources and climatic conditions. Irrigation, and therefore the agricultural sector, currently uses 86% of the country's water resources, while industry uses 5.5%. Although its development has undeniable economic and social benefits, it also contributes to the overexploitation of most groundwater resources and to an alarming decline in levels and deterioration of water quality in some aquifers. In this context, REUSE is one of the proposed solutions to reduce the water footprint of the agricultural sector and alleviate the shortage of water resources. Indeed, wastewater reuse, also known as REUSE (reuse of treated wastewater), is a step forward not only for the circular economy but also for the future, especially in the context of climate change. 
In particular, water reuse provides an alternative to existing water supplies and can be used to improve water security, sustainability, and resilience. However, given the introduction of organic trace pollutants (organic micro-pollutants), the uptake of emerging contaminants, and salinity concerns, innovative capabilities must be mobilized to overcome these problems and ensure food and health safety. To this end, attention will be paid to an integrated approach based on the reinforcement and optimization of the treatments proposed for the elimination of the organic load, with particular attention to the elimination of emerging pollutants. Since membrane bioreactors (MBRs) as stand-alone technologies are not able to meet the requirements of the WHO guidelines, they will be combined with heterogeneous Fenton processes using persulfate or hydrogen peroxide oxidants; similarly, adsorption and filtration will be applied as tertiary treatments. In addition, crop performance will be evaluated in terms of yield, productivity, quality, and safety, through the optimization of Trichoderma sp. strains that will be used to increase crop resistance to abiotic stresses, as well as through modern omics tools, such as transcriptomic analysis using RNA sequencing and methylation analysis, to identify adaptive traits and the associated genetic diversity that is tolerant/resistant/resilient to biotic and abiotic stresses. Ensuring this approach will undoubtedly alleviate water scarcity and, likewise, reduce the negative and harmful impact of wastewater irrigation on the condition of crops and the health of their consumers.
Keywords: water scarcity, food security, irrigation, agricultural water footprint, reuse, emerging contaminants
Procedia PDF Downloads 160
28 Tensile and Direct Shear Responses of Basalt-Fibre Reinforced Composite Using Alkali Activated Binder
Authors: S. Candamano, A. Iorfida, L. Pagnotta, F. Crea
Abstract:
Basalt fabric reinforced cementitious composites (FRCM) have attracted great attention because they are effective in structural strengthening and eco-efficient. In this study, the authors investigate their mechanical behavior when an alkali-activated binder with tuned properties, containing high amounts of industrial by-products such as ground granulated blast furnace slag, is used. The reinforcement is a balanced, coated bidirectional fabric made of basalt fibres and stainless steel micro-wire, with a mesh size of 8x8 mm and an equivalent design thickness of 0.064 mm. Mortar mixes were prepared by keeping the water/(reactive powders) and sand/(reactive powders) ratios constant at 0.53 and 2.7, respectively. Tensile tests were carried out on composite specimens of nominal dimensions 500 mm x 50 mm x 10 mm, with 6 embedded rovings in the loading direction. Direct shear tests (DST), aimed at characterizing the stress-transfer mechanism and failure modes of basalt-FRCM composites, were carried out on a brickwork substrate using an externally bonded basalt-FRCM composite strip 10 mm thick and 50 mm wide, with a bonded length of 300 mm. The mortars exhibit, after 28 days of curing, a compressive strength of 32 MPa and a flexural strength of 5.5 MPa. The main hydration product is a poorly crystalline C-A-S-H gel. The constitutive behavior of the composite has been identified by means of direct tensile tests, with response curves showing a tri-linear behavior. The first linear phase represents the uncracked stage (I), the second (II) is characterized by crack development, and the third (III) corresponds to the fully cracked stage up to failure. All specimens exhibited a crack pattern throughout the gauge length, and failure occurred as a result of sequential tensile failure of the fibre bundles after reaching the ultimate tensile strength. The behavior is mainly governed by crack development (II) and widening (III) up to failure. 
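The tri-linear response just described can be represented as a piecewise-linear stress-strain curve. In this sketch the stage-I and ultimate breakpoints use the averages reported for this composite, while the stage-II end point is a placeholder assumption, since the abstract does not report it:

```python
def trilinear_stress(eps, points):
    """Piecewise-linear (tri-linear) constitutive curve through the
    origin and the given (strain, stress) breakpoints.

    points -- [(e1, s1), (e2, s2), (e3, s3)]: the ends of stages
    I, II and III.
    """
    prev_e, prev_s = 0.0, 0.0
    for e, s in points:
        if eps <= e:
            # linear interpolation within the current stage
            return prev_s + (s - prev_s) * (eps - prev_e) / (e - prev_e)
        prev_e, prev_s = e, s
    raise ValueError("strain beyond ultimate strain")

# Stage-I end (0.026%, 173 MPa) and ultimate point (2.20%, 456 MPa)
# from the test averages; the stage-II end point is a placeholder.
curve = [(0.00026, 173.0), (0.006, 280.0), (0.022, 456.0)]
print(trilinear_stress(0.00013, curve))  # ≈ 86.5, midway up stage I
```

Such a fitted curve is the usual way FRCM tensile data are idealized for design, with the stage-III slope corresponding to the reported cracked-stage modulus.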
The main average values related to the stages are σI = 173 MPa and εI = 0.026%, the stress and strain at the transition point between stages I and II, corresponding to first mortar cracking, and σu = 456 MPa and εu = 2.20%, the ultimate tensile strength and strain, respectively. The tensile modulus of elasticity in stage III is EIII = 41 GPa. All single-lap shear test specimens failed due to composite debonding. Debonding occurred at the internal fabric-to-matrix interface and was the result of fracture of the matrix between the fibre bundles. For all specimens, transversal cracks were visible on the external surface of the composite and involved only the external matrix layer. This cracking appears when the interfacial shear stresses increase and slippage of the fabric at the internal matrix layer interface occurs. Since the external matrix layer is bonded to the reinforcement fabric, it translates with the slipped fabric. An average peak load of around 945 N, a peak stress of around 308 MPa, and a global slip of around 6 mm were measured. The preliminary test results allow affirming that alkali-activated binders can be considered a potentially valid alternative to traditional mortars in designing FRCM composites.
Keywords: alkali activated binders, basalt-FRCM composites, direct shear tests, structural strengthening
Procedia PDF Downloads 123
27 Tensile and Bond Characterization of Basalt-Fabric Reinforced Alkali Activated Matrix
Authors: S. Candamano, A. Iorfida, F. Crea, A. Macario
Abstract:
Recently, basalt fabric reinforced cementitious composites (FRCM) have attracted great attention because they are effective in structural strengthening and cost- and environment-efficient. In this study, the authors investigate their mechanical behavior when an inorganic matrix belonging to the family of alkali-activated binders is used. In particular, the matrix has been designed to contain high amounts of industrial by-products and waste, such as ground granulated blast furnace slag (GGBFS) and fly ash. Fresh-state properties, such as workability, as well as the mechanical properties and shrinkage behavior of the matrix, have been measured, while microstructures and reaction products were analyzed by scanning electron microscopy and X-ray diffractometry. The reinforcement is a balanced, coated bidirectional fabric made of basalt fibres and stainless steel micro-wire, with a mesh size of 8x8 mm and an equivalent design thickness of 0.064 mm. Mortar mixes were prepared by keeping the water/(reactive powders) and sand/(reactive powders) ratios constant at 0.53 and 2.7, respectively. An experimental campaign based on direct tensile tests on composite specimens and single-lap shear bond tests on a brickwork substrate was thus carried out to investigate their mechanical behavior under tension, the stress-transfer mechanism and the failure modes. Tensile tests were carried out on composite specimens of nominal dimensions 500 mm x 50 mm x 10 mm, with 6 embedded rovings in the loading direction. Direct shear tests (DST) were carried out on a brickwork substrate using an externally bonded basalt-FRCM composite strip 10 mm thick and 50 mm wide, with a bonded length of 300 mm. The mortars exhibit, after 28 days of curing, an average compressive strength of 32 MPa and a flexural strength of 5.5 MPa. The main hydration product is a poorly crystalline aluminium-modified calcium silicate hydrate (C-A-S-H) gel. 
The constitutive behavior of the composite has been identified by means of direct tensile tests, with response curves showing a tri-linear behavior. Test results indicate that the behavior is mainly governed by crack development (II) and widening (III) up to failure. The ultimate tensile strength and strain were σu = 456 MPa and εu = 2.20%, respectively. The tensile modulus of elasticity in stage III was EIII = 41 GPa. All single-lap shear test specimens failed due to composite debonding. Debonding occurred at the internal fabric-to-matrix interface and was the result of fracture of the matrix between the fibre bundles. For all specimens, transversal cracks were visible on the external surface of the composite and involved only the external matrix layer. This cracking appears when the interfacial shear stresses increase and slippage of the fabric at the internal matrix layer interface occurs. Since the external matrix layer is bonded to the reinforcement fabric, it translates with the slipped fabric. An average peak load of around 945 N, a peak stress of around 308 MPa, and a global slip of around 6 mm were measured. The preliminary test results allow affirming that alkali-activated materials can be considered a potentially valid alternative to traditional mortars in designing FRCM composites.
Keywords: alkali-activated binders, basalt-FRCM composites, direct shear tests, structural strengthening
Procedia PDF Downloads 129
Design Challenges for Severely Skewed Steel Bridges
Authors: Muna Mitchell, Akshay Parchure, Krishna Singaraju
Abstract:
There is an increasing need for medium- to long-span steel bridges with complex geometry due to site restrictions in developed areas. One of the solutions to grade separations in congested areas is to use longer spans on skewed supports that avoid at-grade obstructions, limiting impacts to the foundation. Where vertical clearances are also a constraint, continuous steel girders can be used to reduce superstructure depths. Combining continuous long steel spans with severe skews can resolve these constraints, at a cost: the behavior of skewed girders is challenging to analyze and design, with subsequent complexity during fabrication and construction. As part of a corridor improvement project, Walter P Moore designed two 1700-foot side-by-side bridges carrying four lanes of traffic in each direction over a railroad track. The bridges consist of prestressed concrete girder approach spans and three-span continuous steel plate girder units. The roadway design added complex geometry to the bridge, with horizontal and vertical curves combined with superelevation transitions within the plate girder units. The substructure at the steel units was skewed approximately 56 degrees to satisfy the existing railroad right-of-way requirements. A horizontal point of curvature (PC) near the end of the steel units required the use of flared girders and chorded slab edges. Due to the flared girder geometry, the cross-frame spacing in each bay is unique. Staggered cross frames were provided based on AASHTO LRFD and NCHRP guidelines for high-skew steel bridges. Skewed steel bridges develop significant forces in the cross frames and rotation in the girder webs due to differential displacements along the girders under dead and live loads. In addition, under thermal loads, skewed steel bridges expand and contract not along the alignment parallel to the girders but along the diagonal connecting the acute corners, resulting in horizontal displacement both along and perpendicular to the girders.
AASHTO LRFD recommends a 95-degree Fahrenheit temperature differential for the design of joints and bearings. The live load and thermal loads resulted in significant horizontal forces and rotations in the bearings that necessitated the use of High-Load Multi-Rotational (HLMR) bearings. A unique bearing layout was selected to minimize the effect of thermal forces. The span length, width, skew, and roadway geometry of the bridges also required modular bridge joint systems (MBJS) with inverted-T bent caps to accommodate movement in the steel units. 2D and 3D finite element analysis models were developed to accurately determine the forces and rotations in the girders, cross frames, and bearings and to estimate thermal displacements at the joints. This paper covers the decision-making process for developing the framing plan, bearing configurations, joint type, and analysis models involved in the design of the high-skew three-span continuous steel plate girder bridges.
Keywords: complex geometry, continuous steel plate girders, finite element structural analysis, high skew, HLMR bearings, modular joint
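The diagonal expansion behavior described above can be illustrated with a first-order calculation. This is a sketch, not the authors' analysis: it assumes the AASHTO steel expansion coefficient (6.5e-6/°F), the 95 °F design differential, the 56-degree skew quoted above, and a hypothetical 600 ft expansion length along the diagonal.

```python
import math

# Illustrative first-order estimate of thermal movement in a skewed steel
# unit, which expands along the diagonal joining the acute corners rather
# than along the girder lines. Assumptions (not from the paper): alpha for
# steel = 6.5e-6 per degF, delta_T = 95 degF, 600 ft expansion length.

ALPHA = 6.5e-6   # coefficient of thermal expansion for steel, 1/degF
DELTA_T = 95.0   # AASHTO LRFD design temperature differential, degF

def thermal_movement(length_in, skew_deg):
    """Free thermal movement (inches) along the diagonal, resolved into
    components parallel and perpendicular to the girder lines."""
    delta = ALPHA * DELTA_T * length_in  # total movement along the diagonal
    theta = math.radians(skew_deg)
    return delta, delta * math.cos(theta), delta * math.sin(theta)

total, parallel, perpendicular = thermal_movement(600 * 12, 56.0)
print(f"total {total:.2f} in, along girders {parallel:.2f} in, "
      f"perpendicular {perpendicular:.2f} in")
```

At a 56-degree skew the perpendicular component exceeds the component along the girders, which is why the bearings must be checked for transverse movement and not just longitudinal expansion.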
Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability to distinguish between controls and patients using mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with the 6 motion parameters, were regressed out of the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with 21 components. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language.
The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM to obtain a ranking of the most predictive variables. We then built two new classifiers on only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable in both cases was the sensorimotor network I. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the network that best discriminated between controls and early MS was the sensorimotor I. Similar importance values were obtained for the sensorimotor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
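The pipeline described above (75/25 split, RF with Gini-based importances, SVM with recursive feature elimination) can be sketched as follows. The authors worked in R; this is an equivalent Python/scikit-learn sketch run on synthetic stand-in data, since the 37x15 connectivity dataset is not public. Note that RFE requires a linear kernel to expose coefficients; an RBF-SVM, as in the paper, would instead be wrapped in an accuracy-based elimination loop.

```python
# Sketch (assumed equivalent, synthetic data): RF feature ranking via Gini
# importance, and SVM feature ranking via recursive feature elimination.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 15))        # 37 subjects x 15 network mean signals
y = np.array([0] * 19 + [1] * 18)    # 19 controls, 18 early-MS patients
X[y == 1, 0] += 2.5                  # make feature 0 strongly discriminative

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.75, random_state=0, stratify=y)

# RF: rank features by mean decrease in Gini impurity
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
rf_rank = np.argsort(rf.feature_importances_)[::-1]

# SVM: rank features by recursive elimination (linear kernel for coef_)
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
svm_rank = np.argsort(rfe.ranking_)

print("RF top feature:", rf_rank[0], "| SVM-RFE top feature:", svm_rank[0])
```

On the synthetic data both rankings recover the planted discriminative feature, mirroring how both selection approaches in the study converged on the sensorimotor I network.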
Design and Implementation of an Affordable Electronic Medical Records in a Rural Healthcare Setting: A Qualitative Intrinsic Phenomenon Case Study
Authors: Nitika Sharma, Yogesh Jain
Abstract:
Introduction: An efficient information system helps improve service delivery and provides the foundation for the policy and regulation of the other building blocks of a health system. Health care organizations require the integrated working of their various sub-systems. Efficient EMR software boosts teamwork amongst these sub-systems, thereby resulting in improved service delivery. Although there has been a huge impetus for EMR under the Digital India initiative, it has still not been mandated in India and is generally implemented only in well-funded public or private healthcare organizations. Objective: The study was conducted to understand the factors that lead to the successful adoption of an affordable EMR in a low-level healthcare organization. It intended to understand the design of the EMR and the solutions to the challenges faced in its adoption. Methodology: The study was conducted in a non-profit registered healthcare organization that has been providing healthcare facilities to more than 2500 villages, including certain areas that are difficult to access. The data were collected with the help of field notes, in-depth interviews, and participant observation. A total of 16 participants using the EMR, drawn from different departments, were enrolled via a purposive sampling technique. The participants included in the study had been working in the organization since before the implementation of the EMR system. The study was conducted over a one-month period, from 25 June to 20 July 2018. Ethical approval was obtained from the institute, along with the prior consent of the participants. Data analysis: A document of more than 4000 words was obtained after transcribing and translating the answers of the respondents. It was further analyzed by focused coding, a line-by-line review of the transcripts underlining words, phrases, or sentences that might suggest themes, to conduct a thematic narrative analysis.
Results: The answers were thematically grouped under four headings: 1. governance of the organization, 2. architecture and design of the software, 3. features of the software, 4. challenges faced in adoption and the solutions to address them. It was inferred that the successful implementation was attributable to the easy and comprehensive design of the system, which has facilitated not only easy data storage and retrieval but also contributes to a decision support system for the staff. Portability has led to increased acceptance by physicians. The proper division of labor, increased efficiency of staff, incorporation of auto-correction features, and facilitation of task shifting have led to increased acceptance amongst the users of various departments. Geographical inhibitions, low computer literacy, and high patient load were the major challenges faced during implementation. Despite the dual efforts made by both the architects and the administrators to combat these challenges, the organization still faces certain ongoing challenges. Conclusion: Whenever any new technology is adopted, there are certain innovators, early adopters, late adopters, and laggards; the same pattern was followed in the adoption of this software. The challenges were overcome with the joint efforts of the organization's administrators and users. This case study thereby provides a framework for implementing similar systems in the public sector of countries that are struggling to digitize healthcare amid a crunch of human and financial resources.
Keywords: EMR, healthcare technology, e-health, EHR