Search results for: load disaggregation
170 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, with high-dimensional state and action spaces. This gives rise to two problems: first, analytic solutions may not be possible; second, in simulation-based approaches, searching the entirety of the problem space can be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. The premise is that a learned model can be used to help find new failure scenarios, making better use of simulations. In spite of these strengths, AST fails to find particularly sparse failures and can be inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used to alleviate this overuse of information. That is, information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively to find a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using “knows what it knows” (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes.
Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, thus demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, distinct failure modes found, and the relative effect of learning after a number of trials.
Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
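The KWIK mechanism described above can be sketched in a few lines. The following is an illustrative toy, not the authors' implementation: a learner that reports a mean-reward estimate only once it has seen enough samples, and a multi-fidelity query routine that falls back to the expensive high-fidelity simulator only when the low-fidelity learner admits ignorance (all names and the mean-estimation model are assumptions for illustration).

```python
class KWIKMeanLearner:
    """Minimal 'knows what it knows' estimator: it returns a prediction
    only after observing at least `m` samples for a state; otherwise it
    answers None ("I don't know") and requests more data."""

    def __init__(self, m=3):
        self.m = m
        self.samples = {}

    def update(self, state, reward):
        self.samples.setdefault(state, []).append(reward)

    def predict(self, state):
        obs = self.samples.get(state, [])
        if len(obs) < self.m:
            return None  # KWIK: admit ignorance rather than guess
        return sum(obs) / len(obs)


def multi_fidelity_query(state, low, high, high_sim, expensive_calls):
    """If the low-fidelity learner already 'knows' this state, reuse its
    estimate; otherwise fall back to the expensive high-fidelity simulator
    and record the cost of doing so."""
    est = low.predict(state)
    if est is not None:
        return est
    r = high_sim(state)
    expensive_calls.append(state)  # track high-fidelity usage
    high.update(state, r)
    return r
```

The point of the sketch is the sample-saving pattern: states the cheap learner can already answer never reach the high-fidelity simulator.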
Procedia PDF Downloads 158
169 Quantifying Fatigue during Periods of Intensified Competition in Professional Ice Hockey Players: Magnitude of Fatigue in Selected Markers
Authors: Eoin Kirwan, Christopher Nulty, Declan Browne
Abstract:
The professional ice hockey season consists of approximately 60 regular-season games, with periods of fixture congestion occurring several times in the average season. These periods of congestion provide limited time for recovery, exposing the athletes to the risk of competing whilst not fully recovered. Although a body of research is growing with respect to monitoring fatigue, particularly during periods of congested fixtures in team sports such as rugby and soccer, it has received little to no attention thus far in ice hockey athletes. Consequently, there is limited knowledge of monitoring tools that might effectively detect a fatigue response and the magnitude of fatigue that can accumulate when recovery is limited by competitive fixtures. The benefit of quantifying and establishing fatigue status is the ability to optimise training and provide pertinent information on player health, injury risk, availability, and readiness. Some commonly used methods to assess the fatigue and recovery status of athletes include the use of perceived fatigue and wellbeing questionnaires, tests of muscular force, and ratings of perceived exertion (RPE). These measures are widely used in popular team sports such as soccer and rugby and show promise as assessments of fatigue and recovery status for ice hockey athletes. As part of a larger study, this study explored the magnitude of changes in adductor muscle strength after game play and throughout a period of fixture congestion and examined the relationship between internal game load and perceived wellbeing with adductor muscle strength. Methods: Eight professional ice hockey players from a British Elite League club volunteered to participate (age = 29.3 ± 2.49 years, height = 186.15 ± 6.75 cm, body mass = 90.85 ± 8.64 kg). Prior to and after competitive games, each player performed trials of the adductor squeeze test at 0˚ hip flexion, with the lead investigator using hand-held dynamometry.
Rate of perceived exertion was recorded for each game, and from data on total ice time, individual session RPE was calculated. After each game, players completed a 5-point questionnaire to assess perceived wellbeing. Data were collected from six competitive games, one practice, and 36 hours post the final game, over a 10-day period. Results: Pending final data collection in February. Conclusions: Pending final data collection in February.
Keywords: congested fixtures, fatigue monitoring, ice hockey, readiness
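Session RPE internal load, as referenced above, is conventionally computed as the CR-10 rating multiplied by exposure time; a minimal sketch, with ice time standing in for session duration as described in the abstract (the function name and the 0–10 bound check are illustrative assumptions):

```python
def session_rpe_load(rpe, ice_time_minutes):
    """Internal game load in arbitrary units (AU): CR-10 RPE multiplied
    by exposure time; here ice time stands in for session duration."""
    if not 0 <= rpe <= 10:
        raise ValueError("CR-10 RPE must lie between 0 and 10")
    return rpe * ice_time_minutes

# e.g. a player reporting RPE 7 with 18.5 min of ice time
load = session_rpe_load(7, 18.5)  # 129.5 AU
```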
Procedia PDF Downloads 144
168 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics
Authors: Maria Arechavaleta, Mark Halpin
Abstract:
In the United States, the costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision that drives the total system cost is how much unserved (or curtailed) energy is acceptable. Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases the total cost and provides a benefit that is difficult to quantify accurately. An approach to quantify the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics are in the form of curves, with each point on the curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper.
These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (typically a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and which are consistent with their available funds.
Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems
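The loss-of-energy probability and expected unserved energy indices described above can be computed from paired minute-level curves with a simple energy-balance sweep. The sketch below makes several simplifying assumptions not stated in the abstract (an ideal battery with no charge/discharge power limits or losses, and storage initially full):

```python
def unserved_energy_stats(production_kw, consumption_kw, battery_kwh,
                          step_minutes=1.0):
    """Sweep paired per-interval production/consumption curves (kW)
    through an idealised battery model and return
    (loss-of-energy probability, expected unserved energy in kWh)."""
    dt = step_minutes / 60.0          # hours per interval
    soc = battery_kwh                 # state of charge; start full (assumption)
    shortfall_intervals = 0
    unserved = 0.0
    for p, c in zip(production_kw, consumption_kw):
        soc += (p - c) * dt           # net energy into storage this interval
        if soc > battery_kwh:         # surplus beyond capacity is spilled
            soc = battery_kwh
        if soc < 0:                   # demand the system could not serve
            unserved += -soc
            shortfall_intervals += 1
            soc = 0.0
    n = len(production_kw)
    return shortfall_intervals / n, unserved
```

Re-running the sweep for incrementally larger battery or array sizes gives the incremental benefit-cost information the paper describes.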
Procedia PDF Downloads 235
167 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine
Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland
Abstract:
The force-velocity profile of an individual has been shown to influence success in ballistic movements, independent of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty in obtaining force data at high velocities, may bring into question the accuracy of the F-V slope, along with predictions pertaining to the maximum force that the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as V₀, has been shown to be relatively poor in comparison to F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of a novel instrumented leg press machine which enables the assessment of force and velocity data at loads equivalent to ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, were evaluated. Sixteen male strength-trained individuals (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg.
Visits two to four saw the participants carry out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% 1RM. IsoMax was recorded during each testing visit prior to dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring force- and velocity-related variables across a range of loads, including velocities closer to V₀, when compared to some of the findings within the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for SFV (the slope of the F-V profile) and F₀, respectively, with reliability for V₀ being good using a linear model but poor using a 2nd-order polynomial model. As such, a polynomial regression model may be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities, due to only a 5% difference between F₀ and obtained IsoMax values, with a linear model being best suited to predict V₀.
Keywords: force-velocity, leg-press, power-velocity, profiling, reliability
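For readers unfamiliar with F-V profiling, the linear model F = F₀ − a·v underlying F₀, V₀, and maximal power can be fitted by ordinary least squares; a minimal illustrative sketch, not the authors' analysis code:

```python
def fv_profile(forces, velocities):
    """Ordinary least-squares fit of the linear force-velocity model
    F = F0 - a*v, returning (F0, V0, Pmax)."""
    n = len(forces)
    mv = sum(velocities) / n
    mf = sum(forces) / n
    sxy = sum((v - mv) * (f - mf) for v, f in zip(velocities, forces))
    sxx = sum((v - mv) ** 2 for v in velocities)
    slope = sxy / sxx            # = -a (negative for a valid profile)
    f0 = mf - slope * mv         # intercept: force at zero velocity
    v0 = -f0 / slope             # velocity at zero force
    pmax = f0 * v0 / 4.0         # apex of the parabolic P-v relation
    return f0, v0, pmax
```

With data clustered far from V₀, the extrapolated intercept v0 is exactly the quantity whose reliability the abstract questions.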
Procedia PDF Downloads 60
166 Diagnostic Delays and Treatment Dilemmas: A Case of Drug-Resistant HIV and Tuberculosis
Authors: Christi Jackson, Chuka Onaga
Abstract:
Introduction: We report a case of delayed diagnosis of extra-pulmonary INH-mono-resistant tuberculosis (TB) in a South African patient with drug-resistant HIV. Case Presentation: A 36-year-old male was initiated on first-line (NNRTI-based) anti-retroviral therapy (ART) in September 2009 and switched to second-line (PI-based) ART in 2011, according to local guidelines. He was following up at the outpatient wellness unit of a public hospital, where he was diagnosed with protease inhibitor resistant HIV in March 2016. He had an HIV viral load (HIVVL) of 737000 copies/mL and a CD4 count of 10 cells/µL and presented with complaints of productive cough, weight loss, chronic diarrhoea, and a septic buttock wound. Several investigations were done on sputum, stool, and pus samples, but all were negative for TB. The patient was treated with antibiotics, and the cough and the buttock wound improved. He was subsequently started on a third-line ART regimen of Darunavir, Ritonavir, Etravirine, Raltegravir, Tenofovir, and Emtricitabine in May 2016. He continued losing weight, became too weak to stand unsupported, and started complaining of abdominal pain. Further investigations were done in September 2016, including a urine specimen for line probe assay (LPA), which showed M. tuberculosis sensitive to rifampicin but resistant to INH. A lymph node biopsy also showed histological confirmation of TB. Management and outcome: He was started on Rifabutin, Pyrazinamide, and Ethambutol in September 2016, and Etravirine was discontinued. After 6 months on ART and 2 months on TB treatment, his HIVVL had dropped to 286 copies/mL, his CD4 count had improved to 179 cells/µL, and he showed clinical improvement. Pharmacy supply of his individualised drugs was unreliable and presented some challenges to continuity of treatment. He successfully completed his treatment in June 2017 while still maintaining virological suppression.
Discussion: Several laboratory-related factors delayed the diagnosis of TB, including the unavailability of urine-lipoarabinomannan (LAM) and urine-GeneXpert (GXP) tests at this facility. Once the diagnosis was made, it presented a treatment dilemma due to the expected drug-drug interactions between his third-line ART regimen and his INH-resistant TB regimen, and specialist input was required. Conclusion: TB is more difficult to diagnose in patients with severe immunosuppression; therefore, additional tests like urine-LAM and urine-GXP can be helpful in expediting the diagnosis in these cases. Patients with non-standard drug regimens should always be discussed with a specialist in order to avoid potentially harmful drug-drug interactions.
Keywords: drug-resistance, HIV, line probe assay, tuberculosis
Procedia PDF Downloads 173
165 Experimental Analysis of the Performance of a System for Freezing Fish Products Equipped with a Modulating Vapour Injection Scroll Compressor
Authors: Domenico Panno, Antonino D’amico, Hamed Jafargholi
Abstract:
This paper presents an experimental analysis of the performance of a system for freezing fish products equipped with a modulating vapour injection scroll compressor operating with R448A refrigerant. Freezing is a critical process for the preservation of seafood products, as it influences quality, food safety, and environmental sustainability. The use of a modulating scroll compressor with vapour injection, combined with the R448A refrigerant, is proposed as a solution to optimize the performance of the system, reducing energy consumption and mitigating the environmental impact. The vapour injection modulating scroll compressor represents an advanced technology that allows the compressor capacity to be adjusted based on the actual cooling needs of the system. Vapour injection allows the optimization of the refrigeration cycle, reducing the evaporation temperature and improving the overall efficiency of the system. The use of R448A refrigerant, with a low Global Warming Potential (GWP), is part of an environmental sustainability perspective, helping to reduce the climate impact of the system. The aim of this research was to evaluate the performance of the system through a series of experiments conducted on a pilot plant for the freezing of fish products. Several operational variables were monitored and recorded, including evaporation temperature, condensation temperature, energy consumption, and the freezing time of seafood products. The results of the experimental analysis highlighted the benefits deriving from the use of the modulating vapour injection scroll compressor with the R448A refrigerant. In particular, a significant reduction in energy consumption was recorded compared to conventional systems. The modulating capacity of the compressor made it possible to adapt cold production to variations in the thermal load, ensuring optimal operation of the system and reducing energy waste.
Furthermore, the use of an electronic expansion valve demonstrated greater precision in the control of the evaporation temperature, with minimal deviation from the desired set point. This helped ensure better quality of the final product, reducing the risk of damage due to temperature changes and ensuring uniform freezing of the fish products. The freezing time of seafood was significantly reduced thanks to the configuration of the entire system, allowing for faster production and greater production capacity of the plant. In conclusion, the use of a modulating vapour injection scroll compressor operating with R448A has proven effective in improving the performance of a system for freezing fish products. This technology offers an optimal balance between energy efficiency, temperature control, and environmental sustainability, making it an advantageous choice for food industries.
Keywords: scroll compressor, vapour injection, refrigeration system, EER
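The EER mentioned in the keywords is simply the ratio of useful cooling delivered to electrical power drawn, and energy-saving comparisons against a conventional reference system follow directly from it; a minimal sketch (function names and figures are illustrative, not the paper's data):

```python
def eer(cooling_capacity_w, electrical_input_w):
    """Energy Efficiency Ratio: useful cooling delivered per unit of
    electrical power drawn by the compressor/system."""
    return cooling_capacity_w / electrical_input_w


def percent_energy_saving(baseline_kwh, measured_kwh):
    """Relative energy saving of the vapour-injection system versus a
    conventional reference system over the same freezing batch."""
    return 100.0 * (baseline_kwh - measured_kwh) / baseline_kwh

# e.g. 10 kW of cooling for 4 kW of electrical input gives EER = 2.5
example_eer = eer(10_000, 4_000)
```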
Procedia PDF Downloads 48
164 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers
Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal
Abstract:
Background: We aim to develop an integrated device comprising a single-probe EEG sensor and a CCD-based motion sensor for a more objective measure of Attention-Deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of the beta wave) for the assessment of attention during a given structured task (painting three segments of a circle using three different colors, namely red, green, and blue), the CCD sensor depicts the movement pattern of the subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e., whether symptoms are better explained by another condition). Methods: We have used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the head of the subjects (3- to 5-year-old pre-schoolers). During the painting of three segments of a circle using three distinct colors (red, green, and blue), the absolute power of the delta and beta EEG waves from the subjects was found to be correlated with relaxation and attention/cognitive load conditions, respectively. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in a continuous performance task (CPT), i.e., separation of the variously colored balls from one table to another. We have used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD. We have also compared our scale with clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we have found a significant correlation between the objective assessment of the ADHD subjects and the clinician's conventional evaluation.
Conclusion: MAHD, the integrated device, is intended as an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.
Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test
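The spectral-density step on which MAHD's attention measure relies amounts to computing absolute band power of the EEG signal; below is a pure-Python periodogram sketch for illustration only (a real pipeline would typically use Welch averaging, and the band edges and ratio measure are assumptions, not the authors' method):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Absolute power in [f_lo, f_hi] Hz via a naive DFT periodogram
    (one-sided, for a real signal of length n sampled at fs Hz)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            im = sum(-x * math.sin(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            power += 2.0 * (re * re + im * im) / (n * n)
    return power


def beta_power(signal, fs):
    """Attention proxy: absolute power in the beta band (13-30 Hz)."""
    return band_power(signal, fs, 13, 30)
```

A pure 20 Hz tone of unit amplitude, for example, carries power 0.5, all of which falls inside the beta band.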
Procedia PDF Downloads 99
163 Measurement of Magnetic Properties of Grain-Oriented Electrical Steels at Low and High Fields Using a Novel Single Sheet Tester
Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa
Abstract:
Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at high flux densities suitable for its typical applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and low-frequency magnetic shielding in magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, B-H loop, AC relative permeability, and specific power loss of conventional grain-oriented (CGO) and high-permeability grain-oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. 40 strips, comprising 20 CGO and 20 HGO, each 305 mm x 30 mm x 0.27 mm, from a supplier were tested. The HGO and CGO strips had average grain sizes of 9 mm and 4 mm, respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer in which LabVIEW version 8.5 from National Instruments (NI) was installed, an NI 4461 data acquisition (DAQ) card, an impedance matching transformer, to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding, covering the entire length of the plastic former, was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card.
The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for calculation of magnetic field strength and flux density, respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, in order to have repeatable and comparable measurements. The low-noise NI 4461 card, with 24-bit resolution, a sampling rate of 204.8 kHz, and a 92 kHz bandwidth, was chosen to take the measurements in order to minimize the influence of thermal noise. In order to reduce environmental noise, the yokes, sample, and search coil carrier were placed in a noise-shielding chamber. HGO was found to have better magnetic properties in both the high and low magnetisation regimes. This is attributed to the larger grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO is therefore better than CGO in both low and high magnetic field applications.
Keywords: flux density, electrical steel, LabVIEW, magnetization
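From the two acquired voltages, field strength and flux density follow from Ampère's and Faraday's laws; a sketch using the winding and circuit values quoted above (the 305 mm strip length used as the magnetic path length is an assumption, since the effective path depends on the yoke geometry):

```python
def field_strength(v_shunt, n_primary=100, r_shunt=4.7, path_length=0.305):
    """Magnetic field strength H = N1*I/lm (A/m), with the primary
    current I recovered from the shunt-resistor voltage drop."""
    return n_primary * (v_shunt / r_shunt) / path_length


def flux_density(v_secondary, dt, n_secondary=500, area=30e-3 * 0.27e-3):
    """Flux density from Faraday's law, B(t) = (1/(N2*A)) * integral of
    the induced secondary voltage; rectangular-rule integration over
    samples spaced dt seconds apart. Returns the B(t) trace in tesla."""
    b, trace = 0.0, []
    for v in v_secondary:
        b += v * dt / (n_secondary * area)
        trace.append(b)
    return trace
```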
Procedia PDF Downloads 291
162 A Stochastic Vehicle Routing Problem with Ordered Customers and Collection of Two Similar Products
Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis
Abstract:
The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering or collecting products to or from customers who are scattered in a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles start their routes from a depot and visit the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that are delivered or collected. In the present work, we present a specific capacitated stochastic vehicle routing problem which has many realistic applications. We develop and analyze a mathematical model for a specific vehicle routing problem in which a vehicle starts its route from a depot and visits N customers according to a particular sequence in order to collect from them two similar but not identical products. We name these products product 1 and product 2. Each customer possesses items of either product 1 or product 2 with known probabilities. The number of items of product 1 or product 2 that each customer possesses is a discrete random variable with known distribution. The actual quantity and the actual type of product that each customer possesses are revealed only when the vehicle arrives at the customer's site. It is assumed that the vehicle has two compartments. We name these compartments compartment 1 and compartment 2. It is assumed that compartment 1 is suitable for loading product 1 and compartment 2 is suitable for loading product 2. However, it is permitted to load items of product 1 into compartment 2 and items of product 2 into compartment 1. These actions cause costs that are due to extra labor. The vehicle is allowed during its route to return to the depot to unload the items of both products.
The travel costs between consecutive customers and the travel costs between the customers and the depot are known. The objective is to find the optimal routing strategy, i.e., the routing strategy that minimizes the total expected cost among all possible strategies for servicing all customers. It is possible to develop a suitable dynamic programming algorithm for the determination of the optimal routing strategy. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure. Specifically, it is shown that for each customer the optimal actions are characterized by some critical integers. This structural result enables us to design a special-purpose dynamic programming algorithm that operates only over strategies having this structural property. Extensive numerical results provide strong evidence that the special-purpose dynamic programming algorithm is considerably more efficient than the initial dynamic programming algorithm. Furthermore, if we consider the same problem without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed if N is less than or equal to eight.
Keywords: dynamic programming, similar products, stochastic demands, stochastic preferences, vehicle routing problem
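To illustrate the kind of backward dynamic programme involved, the following is a much-simplified single-product, single-compartment sketch (not the authors' two-product model): customers are visited in a fixed order, demands are revealed on arrival, and before each leg the vehicle either drives directly to the next customer or restocks at the depot first, with insufficient capacity forcing a depot round trip. All costs and distributions below are illustrative.

```python
def vrp_expected_cost(demand_pmfs, Q, travel_next, to_depot):
    """Backward DP for a simplified stochastic-collection VRP.
    demand_pmfs[j]: dict {demand: probability} at customer j (demand <= Q);
    Q: vehicle capacity; travel_next[j]: cost of leg j -> j+1;
    to_depot[j]: cost between customer j and the depot.
    Returns V, where V[j][q] is the minimal expected cost-to-go after
    serving customer j with free capacity q."""
    N = len(demand_pmfs)
    V = [[0.0] * (Q + 1) for _ in range(N)]
    for q in range(Q + 1):
        V[N - 1][q] = to_depot[N - 1]      # after the last customer, return
    for j in range(N - 2, -1, -1):
        for q in range(Q + 1):
            # option 1: drive straight to customer j+1
            direct = travel_next[j]
            for d, p in demand_pmfs[j + 1].items():
                if d <= q:
                    direct += p * V[j + 1][q - d]
                else:  # capacity short: forced round trip to the depot
                    direct += p * (2 * to_depot[j + 1]
                                   + V[j + 1][Q - (d - q)])
            # option 2: restock at the depot on the way to j+1
            restock = to_depot[j] + to_depot[j + 1]
            for d, p in demand_pmfs[j + 1].items():
                restock += p * V[j + 1][Q - d]
            V[j][q] = min(direct, restock)
    return V
```

In such models, the optimal action switches from "restock" to "go direct" once the free capacity q exceeds a critical value, which is the threshold structure the abstract refers to.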
Procedia PDF Downloads 257
161 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design
Authors: Mohammad Bagher Anvari, Arman Shojaei
Abstract:
Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to their underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage.
In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.
Keywords: incremental launching, bridge construction, finite element model, optimization
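The benefit of a light launching nose can be illustrated with the classical cantilever moment at the launching support; the sketch below uses textbook assumptions (uniform line loads, a statically determinate overhang), not the paper's FE model, and all numbers are illustrative:

```python
def cantilever_moment(x, q_deck, q_nose, l_nose):
    """Support bending moment of the deck+nose cantilever during
    launching, for total overhang x: the lighter nose (line load q_nose,
    length l_nose) occupies the tip, the deck (line load q_deck) the
    rest. Loads in kN/m, lengths in m -> moment in kN*m."""
    if x <= l_nose:
        return q_nose * x ** 2 / 2.0
    a = x - l_nose                  # deck portion of the overhang
    # deck triangle-load moment plus the nose block at lever arm a + l_nose/2
    return q_deck * a ** 2 / 2.0 + q_nose * l_nose * (a + l_nose / 2.0)
```

Sweeping this expression over nose length and weight is the essence of the kind of nose-deck optimization the abstract describes, here in a drastically reduced form.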
Procedia PDF Downloads 104
160 Mechanical Properties of Carbon Fibre Reinforced Thermoplastic Composites Consisting of Recycled Carbon Fibres and Polyamide 6 Fibres
Authors: Mir Mohammad Badrul Hasan, Anwar Abdkader, Chokri Cherif
Abstract:
With the increasing demand for and use of carbon fibre reinforced composites (CFRC), the disposal of carbon fibres (CF) and end-of-life composite parts is gaining tremendous importance, especially with regard to sustainability. Furthermore, a number of processes (e.g., pyrolysis, solvolysis, etc.) are currently available to obtain recycled CF (rCF) from end-of-life CFRC. Since CF waste and rCF are neither allowed to be thermally degraded nor landfilled (EU Directive 1999/31/EC), profitable recycling and re-use concepts are urgently necessary. Currently, the market for materials based on rCF mainly consists of random mats (nonwovens) made from short fibres. The strengths of composites that can be achieved from injection-molded components and from nonwovens are between 200 and 404 MPa; such materials are characterized by low performance and are suitable for non-structural applications, for example in aircraft and vehicle interiors. On the contrary, spinning rCF into yarn constructions offers good potential for higher CFRC material properties due to the high fibre orientation and compaction of rCF. However, no investigation has yet been reported on the direct comparison of the mechanical properties of thermoplastic CFRC manufactured from virgin CF filament yarn and from spun yarns of staple rCF. There is a lack of understanding of the level of performance of composites that can be achieved from hybrid yarns consisting of rCF and PA 6 fibres. Against this backdrop, extensive research work is being carried out at the Institute of Textile Machinery and High Performance Material Technology (ITM) on the development of new thermoplastic CFRC from hybrid yarns consisting of rCF. For this purpose, a process chain has been developed at the ITM, starting from fibre preparation through to the manufacturing of hybrid yarns consisting of staple rCF mixed with thermoplastic fibres. The objective is to apply such hybrid yarns to the manufacturing of load-bearing textile-reinforced thermoplastic CFRCs.
In this paper, the development of innovative multi-component core-sheath hybrid yarn structures consisting of staple rCF and polyamide 6 (PA 6) on a DREF-3000 friction spinning machine is reported. Furthermore, unidirectional (UD) CFRCs are manufactured from the developed hybrid yarns, and the mechanical properties of the composites, such as tensile and flexural properties, are analyzed. The results show that the UD composite manufactured from the developed hybrid yarns consisting of staple rCF possesses approximately 80% of the tensile strength and E-modulus of those produced from virgin CF filament yarn. The results show the considerable potential of the DREF-3000 friction spinning process to develop composites from rCF for high-performance applications.
Keywords: recycled carbon fibres, hybrid yarn, friction spinning, thermoplastic composite
Procedia PDF Downloads 255
159 Characterization of Himalayan Phyllite with Reference to Foliation Planes
Authors: Divyanshoo Singh, Hemant Kumar Singh, Kumar Nilankar
Abstract:
Major engineering constructions and foundations (e.g., dams, tunnels, bridges, underground caverns, etc.) in and around the Himalayan region of Uttarakhand are not only confined to hard and crystalline rocks but also extend into weak and anisotropic rocks. While constructing within such anisotropic rocks, engineers often encounter geotechnical complications such as structural instability, slope failure, and excessive deformation. These complexities arise mainly due to inherent anisotropy, such as layering/foliations, preferred mineral orientations, and geo-mechanical anisotropy, present within the rocks, which varies when measured in different directions. Of all the inherent anisotropies present within rocks, major geotechnical complexities arise mainly from the unfavourable orientation of weak planes (bedding/foliation). The orientation of such weak planes strongly affects the fracture patterns, failure mechanism, and strength of rocks. This calls for an improved understanding of the physico-mechanical behavior of anisotropic rocks with different orientations of weak planes. Therefore, in this study, block samples of phyllite belonging to the Chandpur Group of the Lesser Himalaya were collected from the Srinagar area of Uttarakhand, India, to investigate the effect of foliation angle on the physico-mechanical properties of the rock. The collected block samples were core-drilled to a diameter of 50 mm at different foliation angles β (the angle between the foliation plane and the drilling direction), i.e., 0°, 30°, 60°, and 90°. Before testing, the drilled core samples were oven-dried at 110 °C to achieve uniformity. Physical and mechanical properties such as seismic wave velocity, density, uniaxial compressive strength (UCS), point load strength (PLS), and Brazilian tensile strength (BTS) were determined on the prepared core specimens. The results indicate that seismic wave velocities (P-wave and S-wave) decrease with increasing β angle.
As the β angle increases, the number of foliation planes that the wave must pass through increases, causing dissipation of wave energy with increasing β. Maximum strength for UCS, PLS, and BTS was found at a β angle of 90°. However, minimum strength for UCS and BTS was found at a β angle of 30°, which differs from PLS, where minimum strength was found at a β angle of 0°. Furthermore, the failure modes also correspond to the strength of the rock, with failure along foliation planes and non-central failure characteristic of low strength values, and multiple fractures and central failure characteristic of high strength values. Thus, this study will provide a better understanding of the anisotropic features of phyllite for the purpose of major engineering constructions and foundations within the Himalayan region.
Keywords: anisotropic rocks, foliation angle, physico-mechanical properties, phyllite, Himalayan region
Procedia PDF Downloads 59
158 DIF-JACKET: A Thermal Protective Jacket for Firefighters
Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves
Abstract:
Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, with some of them eventually losing their lives. Although thermal protective clothing research and development has been searching for solutions to minimize firefighters' heat load and skin burns, currently available commercial solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat strokes are still frequent. Taking this into account, a consortium composed of Portuguese entities has joined synergies to develop an innovative protective clothing system, following a procedure based on the application of numerical models to optimize the design and using a combination of protective clothing components disposed in different layers. Recently, it has been shown that Phase Change Materials (PCMs) can contribute to the reduction of potential heat hazards in fire extinguishing operations, and consequently, their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics and, at the same time, complying with the international standard for protective clothing for firefighters (laboratory test methods and performance requirements for wildland firefighting clothing). The incorporation of PCMs into the firefighter's protective jacket will result in the absorption of heat from the fire and consequently increase the time that the firefighter can be exposed to it. According to the project studies and developments, to favor a higher use of the PCM storage capacity and to exploit its high thermal inertia more efficiently, the PCM layer should be closer to the external heat source. Therefore, at this stage, to integrate PCMs into firefighting clothing, a mock-up of a vest specially designed to protect the torso (back, chest, and abdomen) and to be worn over a fire-resistant jacket was envisaged.
Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound, and radiofrequency. Concerning firefighters' protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely thermal and water-vapor resistances. The impact of the most promising solutions on thermal comfort was evaluated to refine the performance of the global solutions. Results obtained with an experimental bench-scale model and numerical simulation regarding the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.
Keywords: firefighters, multilayer system, phase change material, thermal protective clothing
Procedia PDF Downloads 166
157 Life Cycle Assessment of Today's and Future Electricity Grid Mixes of EU27
Authors: Johannes Gantner, Michael Held, Rafael Horn, Matthias Fischer
Abstract:
At the United Nations Climate Change Conference 2015, a global agreement on the reduction of climate change was achieved, stating CO₂ reduction targets for all countries. For instance, the EU targets a 40 percent reduction in emissions by 2030 compared to 1990. In order to achieve this ambitious goal, the environmental performance of the different European electricity grid mixes is crucial. First, electricity is directly needed for everyone's daily life (e.g. heating, plug loads, mobility), so a reduction of the environmental impacts of the electricity grid mix reduces the overall environmental impacts of a country. Secondly, the manufacturing of every product depends on electricity, so a reduction of the environmental impacts of the electricity mix results in a further decrease of the environmental impacts of every product. As a result, meeting the two-degree goal depends heavily on the decarbonization of the European electricity mixes. Currently, the production of electricity in the EU27 is based on fossil fuels and therefore carries a high GWP impact per kWh. Due to the importance of the environmental impacts of the electricity mix, not only today but also in the future, time-dynamic Life Cycle Assessment models for all EU27 countries were set up within the European research projects CommONEnergy and Senskin. Methodologically, a combination of scenario modeling and life cycle assessment according to ISO 14040 and ISO 14044 was conducted. Based on EU27 trends regarding energy, transport, and buildings, the different national electricity mixes were investigated, taking into account future changes such as the amount of electricity generated in each country, changes in electricity carriers, the COP of the power plants, distribution losses, and imports and exports. As results, time-dynamic environmental profiles for the electricity mixes of each country and for Europe overall were set up.
Thereby, for each European country, the decarbonization strategies of the electricity mix are critically investigated in order to identify decisions that can lead to negative environmental effects, for instance on the reduction of the global warming potential of the electricity mix. For example, the withdrawal of the nuclear energy program in Germany, with simultaneous compensation of the missing energy by non-renewable energy carriers like lignite and natural gas, resulted in an increase in the global warming potential of the electricity grid mix. Only after two years was this increase counterbalanced by the higher share of renewable energy carriers such as wind power and photovoltaics. Finally, as an outlook, a first qualitative picture is provided, illustrating from an environmental perspective which country has the highest potential for low-carbon electricity production and therefore how investments in a connected European electricity grid could decrease the environmental impacts of the electricity mix in Europe.
Keywords: electricity grid mixes, EU27 countries, environmental impacts, future trends, life cycle assessment, scenario analysis
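The accounting step underlying this kind of scenario comparison can be sketched as a share-weighted sum of per-carrier emission factors. The factors and carrier shares below are illustrative assumptions for the sketch, not values from the study:

```python
# Minimal sketch: grid-mix GWP as a share-weighted sum of per-carrier
# emission factors. All numbers are illustrative assumptions.

EMISSION_FACTORS = {  # kg CO2-eq per kWh (illustrative values)
    "lignite": 1.1, "natural_gas": 0.49, "nuclear": 0.012,
    "wind": 0.011, "photovoltaic": 0.045,
}

def mix_gwp(shares: dict[str, float]) -> float:
    """Return grid-mix GWP in kg CO2-eq/kWh for carrier shares summing to 1."""
    if abs(sum(shares.values()) - 1.0) > 1e-9:
        raise ValueError("carrier shares must sum to 1")
    return sum(share * EMISSION_FACTORS[c] for c, share in shares.items())

# Scenario comparison: nuclear phase-out compensated by lignite/gas
before = mix_gwp({"nuclear": 0.30, "lignite": 0.30, "natural_gas": 0.20,
                  "wind": 0.15, "photovoltaic": 0.05})
after = mix_gwp({"nuclear": 0.00, "lignite": 0.45, "natural_gas": 0.35,
                 "wind": 0.15, "photovoltaic": 0.05})
print(f"{before:.3f} -> {after:.3f} kg CO2-eq/kWh")
```

A time-dynamic model repeats this sum for each year of the scenario, which is how a temporary GWP increase after a phase-out can later be offset by growing renewable shares.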
Procedia PDF Downloads 187
156 Redesigning Clinical and Nursing Informatics Capstones
Authors: Sue S. Feldman
Abstract:
As clinical and nursing informatics mature, an area that has received a great deal of attention is the value of capstone projects. Capstones are meant to address authentic and complex domain-specific problems. While capstone projects have not always been essential in graduate clinical and nursing informatics education, employers want to see evidence of a prospective employee's knowledge and skills as an indication of employability. Capstones can be organized in many ways: a single course over a single semester, multiple courses over multiple semesters, as a targeted demonstration of skills, as a synthesis of prior knowledge and skills, mentored by a single person or by various people, submitted as an assignment or presented in front of a panel. Because of the potential for capstones to enhance the educational experience, and as a mechanism for the application of knowledge and demonstration of skills, a rigorous capstone can accelerate a graduate's potential in the workforce. In 2016, the capstone at the University of Alabama at Birmingham (UAB) could feel the external forces of a maturing clinical and nursing informatics discipline. While the program had offered a capstone course for many years, it lacked the depth of knowledge and demonstration of skills being asked for by those hiring in a maturing informatics field. Since the program is online, all capstones have always been in the online environment. While this modality did not change, other contributors to instruction changed. Pre-2016, the instruction modality was self-guided: students checked in with a single instructor, and that instructor monitored progress across all capstones toward a PowerPoint and written-paper deliverable. At the time, enrollment was low, and the maturing field had not yet exerted enough pressure.
By 2017, doubled enrollment and the increased demand for a more rigorously trained workforce led to restructuring the capstone so that graduates would acquire and retain the skills learned in the capstone process. There were three major changes: the capstone was broken up into a 3-course sequence (meaning it lasted about 10 months instead of 14 weeks), deliverables were broken into many chunks, and each faculty member advised a cadre of about 5 students through the capstone process. Literature suggests that chunking, i.e., breaking up complex projects (the capstone in one summer) into smaller, more manageable pieces (portions of the capstone across 3 semesters), can increase and sustain learning while allowing for increased rigor. By doing this, the teaching responsibility was shared across faculty, with each semester's course taught by a different faculty member. This change facilitated delving much deeper into instruction and produced a significantly more rigorous final deliverable. Having students advised across the faculty not only shared the load but also shared the success of students. Furthermore, it meant that students could be placed with an academic advisor who had expertise in their capstone area, further increasing the rigor of the entire capstone process and project and increasing student knowledge and skills.
Keywords: capstones, clinical informatics, health informatics, informatics
Procedia PDF Downloads 133
155 Low Frequency Ultrasonic Degassing to Reduce Void Formation in Epoxy Resin and Its Effect on the Thermo-Mechanical Properties of the Cured Polymer
Authors: A. J. Cobley, L. Krishnan
Abstract:
The demand for multi-functional lightweight materials in sectors such as automotive, aerospace, and electronics is growing, and for this reason fibre-reinforced epoxy polymer composites are widely utilized. The fibre reinforcement is mainly responsible for the strength and stiffness of the composite, whilst the main role of the epoxy polymer matrix is to distribute the applied load among the fibres as well as to protect the fibres from harmful environmental conditions. The superior properties of fibre-reinforced composites are achieved by combining the best properties of both constituents. Although factors such as the chemical nature of the epoxy and how it is cured have a strong influence on the properties of the epoxy matrix, the method of mixing and degassing the resin can also have a significant impact. The production of a fibre-reinforced epoxy polymer composite usually begins with the mixing of the epoxy pre-polymer with a hardener and accelerator. Mechanical methods of mixing are often employed for this stage, but such processes naturally introduce air into the mixture, which, if it becomes entrapped, leads to voids in the subsequently cured polymer. Therefore, degassing is normally utilised after mixing, often by placing the epoxy resin mixture in a vacuum chamber. Although this is reasonably effective, it is an additional process stage; if a method of mixing could be found that simultaneously degassed the resin mixture, this would lead to shorter production times, more effective degassing, and fewer voids in the final polymer. In this study, the effect of four different methods of mixing and degassing the pre-polymer with hardener and accelerator was investigated. The first two methods were manual stirring and magnetic stirring, both followed by vacuum degassing. The other two techniques were ultrasonic mixing/degassing using a 40 kHz ultrasonic bath and a 20 kHz ultrasonic probe.
The cured cast resin samples were examined under a scanning electron microscope (SEM) and an optical microscope, and with ImageJ analysis software, to study morphological changes, void content, and void distribution. Three-point bending tests and differential scanning calorimetry (DSC) were also performed to determine the thermal and mechanical properties of the cured resin. It was found that the use of the 20 kHz ultrasonic probe for mixing/degassing gave the lowest percentage of voids of all the mixing methods in the study. In addition, the percentage of voids found when employing a 40 kHz ultrasonic bath to mix/degas the epoxy polymer mixture was only slightly higher than when magnetic-stirrer mixing followed by vacuum degassing was utilized. The effect of ultrasonic mixing/degassing on the thermal and mechanical properties of the cured resin will also be reported. The results suggest that low frequency ultrasound is an effective means of mixing/degassing a pre-polymer mixture and could enable a significant reduction in production times.
Keywords: degassing, low frequency ultrasound, polymer composites, voids
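The image-based void-content measurement mentioned above reduces to an area-fraction calculation on a thresholded micrograph. A minimal sketch of that idea, with a synthetic image and threshold value that are assumptions for illustration (the paper's actual ImageJ workflow is not specified here):

```python
# Sketch of image-based void-content estimation: threshold a grayscale
# micrograph and report the dark-pixel area fraction as percentage voids.
import numpy as np

def void_fraction(gray: np.ndarray, threshold: int = 60) -> float:
    """Percentage of pixels darker than `threshold` (voids image as dark)."""
    return 100.0 * np.count_nonzero(gray < threshold) / gray.size

# Synthetic 100x100 "micrograph": bright resin with one dark 10x10 void
img = np.full((100, 100), 200, dtype=np.uint8)
img[40:50, 40:50] = 10
print(f"{void_fraction(img):.1f}% voids")  # 100 dark pixels / 10000 = 1.0%
```

In practice the threshold would be set per image (e.g. by Otsu's method) and the fraction averaged over several cross-sections.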
Procedia PDF Downloads 296
154 Possibility of Membrane Filtration for the Treatment of Effluent from Digestate
Authors: Marcin Debowski, Marcin Zielinski, Magdalena Zielinska, Paulina Rusanowska
Abstract:
The problem of digestate management is one of the most important factors influencing the development and operation of a biogas plant. Turbidity and bacterial contamination negatively affect the growth of algae, which can limit the use of the effluent in the production of algae biomass on a large scale. These problems can be overcome by cultivating algae species resistant to environmental factors, such as Chlorella sp. or Scenedesmus sp., or by reducing the load of organic compounds to prevent bacterial contamination. The effluent requires dilution and/or purification. One of the methods of effluent treatment is the use of membrane technologies such as microfiltration (MF), ultrafiltration (UF), nanofiltration (NF), and reverse osmosis (RO), depending on the membrane pore size and the cut-off point. Membranes are a physical barrier to solids and particles larger than the size of the pores. MF membranes have the largest pores and are used to remove turbidity, suspensions, bacteria, and some viruses. UF membranes also remove colour, odour, and organic compounds with high molecular weight. In the treatment of wastewater or other waste streams, MF and UF can provide a sufficient degree of purification. NF membranes are used to remove natural organic matter from waters, water disinfection by-products, and sulfates. RO membranes are applied to remove monovalent ions such as Na⁺ or K⁺. The effluent after UF was used as a medium for the cultivation of two microalgae: Chlorella sp. and Phaeodactylum tricornutum. Growth rates of Chlorella sp. and P. tricornutum were similar: 0.216 d⁻¹ and 0.200 d⁻¹ (Chlorella sp.) and 0.128 d⁻¹ and 0.126 d⁻¹ (P. tricornutum), on synthetic medium and permeate from UF, respectively. The final biomass composition was also similar, regardless of the medium. Removal of nitrogen was 92% and 71% by Chlorella sp. and P. tricornutum, respectively. The fermentation effluents after UF and dilution were also used for the cultivation of the alga Scenedesmus sp.
that is resistant to environmental conditions. The authors recommend the development of a biorefinery based on the production of algae for biogas production. There are examples of using a multi-stage membrane system to purify the liquid fraction of digestate. After initial UF, RO is used to remove ammonium nitrogen and COD. To obtain a permeate with an ammonium nitrogen concentration allowing its discharge into the environment, it was necessary to apply three-stage RO. The composition of the permeate after two-stage RO was: COD 50–60 mg/dm³, dry solids 0 mg/dm³, ammonium nitrogen 300–320 mg/dm³, total nitrogen 320–340 mg/dm³, total phosphorus 53 mg/dm³. The composition of the permeate after three-stage RO, however, was: COD < 5 mg/dm³, dry solids 0 mg/dm³, ammonium nitrogen 0 mg/dm³, total nitrogen 3.5 mg/dm³, total phosphorus < 0.05 mg/dm³. The last stage of RO might be replaced by an ion exchange process. The negative aspect of membrane filtration systems is the fact that the permeate is about 50% of the introduced volume; the remainder is the retentate. The management of the retentate might involve recirculation to the biogas plant.
Keywords: digestate, membrane filtration, microalgae cultivation, Chlorella sp.
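The multi-stage RO behaviour described above can be sketched with the standard observed-rejection relation, R = 1 - Cp/Cf, applied stage by stage. The rejection values and feed concentration below are illustrative assumptions, not the study's measurements:

```python
# Sketch of staged RO: each stage's permeate feeds the next stage.
# Observed rejection R = 1 - Cp/Cf, so Cp = Cf * (1 - R).

def permeate_conc(feed_conc: float, rejection: float) -> float:
    """Permeate concentration for one membrane stage."""
    return feed_conc * (1.0 - rejection)

def staged_ro(feed_conc: float, rejections: list[float]) -> float:
    """Pass the feed through successive stages; return final permeate conc."""
    c = feed_conc
    for r in rejections:
        c = permeate_conc(c, r)
    return c

# Illustrative: ammonium nitrogen through three stages at 90% rejection each
print(staged_ro(1000.0, [0.9, 0.9, 0.9]), "mg/dm3 remaining")
# Volume recovery at ~50% per stage: 0.5**3 = 12.5% of feed as final permeate
```

This also makes the abstract's drawback concrete: concentration falls geometrically with stages, but so does the recovered permeate volume.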
Procedia PDF Downloads 353
153 Seismic Reinforcement of Existing Japanese Wooden Houses Using Folded Exterior Thin Steel Plates
Authors: Jiro Takagi
Abstract:
Approximately 90 percent of the casualties in the near-fault-type Kobe earthquake in 1995 resulted from the collapse of wooden houses, although a limited number of collapses of this building type were reported in the more recent offshore-type Tohoku earthquake in 2011 (excluding direct damage by the tsunami). The Kumamoto earthquake in 2016 also revealed the vulnerability of old wooden houses in Japan. There are approximately 24.5 million wooden houses in Japan, and roughly 40 percent of them are considered to have inadequate seismic-resisting capacity. Therefore, seismic strengthening of these wooden houses is an urgent task. However, it has not been carried out quickly, for various reasons including cost and inconvenience during the reinforcing work. Residents typically spend their money on improvements that more directly affect their daily housing environment (such as interior renovation, equipment renewal, and placement of thermal insulation) rather than on strengthening against extremely rare events such as large earthquakes. Considering this tendency of residents, a new approach to developing a seismic strengthening method for wooden houses is needed. The seismic reinforcement method developed in this research uses folded galvanized thin steel plates as both shear walls and the new exterior architectural finish. The existing finish is not removed. Because galvanized steel plates are aesthetic and durable, they are commonly used in modern Japanese buildings on roofs and walls. Residents could perceive a physical change through the reinforcement, as the existing exterior walls are covered with steel plates. Moreover, this exterior reinforcement can be installed with outdoor work only, thereby reducing inconvenience for residents, since they would not be required to move out temporarily during construction. The durability of the exterior is enhanced, and the reinforcing work can be done efficiently since perfect water protection is not required for the new finish.
In this method, the entire exterior surface functions as shear walls, and thus the pull-out force induced by seismic lateral load is significantly reduced compared with a typical reinforcement scheme of adding braces in selected frames. Consequently, the reinforcing details of the anchors to the foundations are less difficult. In order to attach the exterior galvanized thin steel plates to the houses, new wooden beams are placed next to the existing beams. In this research, steel connections between the existing and new beams are developed, which contain a gap for the existing finish between the two beams. The thin steel plates are screwed to the new beams and the connecting vertical members. The seismic-resisting performance of the shear walls with thin steel plates is experimentally verified for both the frames and the connections. It is confirmed that the performance is high enough for bracing typical wooden houses.
Keywords: experiment, seismic reinforcement, thin steel plates, wooden houses
Procedia PDF Downloads 226
152 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented in order to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region in which they were calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions under different ambient conditions. Its advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
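The semi-empirical idea of the abstract (regress NOx on in-cylinder physical quantities, then combine individual learners into an ensemble) can be sketched as follows. The synthetic data, the two chosen learners, and the simple averaging ensemble are all assumptions for illustration; they are not the paper's actual models:

```python
# Sketch: predict a NOx index from physical features (burned-zone
# temperature, O2 fraction) with two simple learners, then average them.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic features: [burned-zone temperature (K), O2 mass fraction]
X = np.column_stack([rng.uniform(1800, 2600, 200), rng.uniform(0.05, 0.2, 200)])
y = 0.002 * X[:, 0] + 8.0 * X[:, 1] + rng.normal(0, 0.05, 200)  # "NOx index"

def fit_linear(X, y):
    """Least-squares linear model with intercept."""
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xq: np.column_stack([Xq, np.ones(len(Xq))]) @ coef

def fit_knn(X, y, k=5):
    """k-nearest-neighbour regressor (mean of the k closest samples)."""
    def predict(Xq):
        d = np.linalg.norm(X[None, :, :] - Xq[:, None, :], axis=2)
        return y[np.argsort(d, axis=1)[:, :k]].mean(axis=1)
    return predict

models = [fit_linear(X, y), fit_knn(X, y)]

def ensemble(Xq):
    """Ensemble prediction: unweighted mean of the individual learners."""
    return np.mean([m(Xq) for m in models], axis=0)

Xq = np.array([[2200.0, 0.12]])  # one query operating point
print(float(ensemble(Xq)[0]))
```

A production version would add feature scaling and weight the learners by cross-validated error, but the structure (physical features in, ensemble of statistical learners out) is the same.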
Procedia PDF Downloads 114
151 Degradation of the Cu-DOM Complex by Bacteria: A Way to Increase Phytoextraction of Copper in a Vineyard Soil
Authors: Justine Garraud, Hervé Capiaux, Cécile Le Guern, Pierre Gaudin, Clémentine Lapie, Samuel Chaffron, Erwan Delage, Thierry Lebeau
Abstract:
The repeated use of Bordeaux mixture (copper sulphate) and other chemical forms of copper (Cu) has led to its accumulation in wine-growing soils for more than a century, to the point of modifying the ecosystem of these soils. Phytoextraction of copper could progressively reduce the Cu load in these soils, and even allow copper to be recycled (e.g. as a micronutrient in animal nutrition) by cultivating the extracting plants in the inter-rows of the vineyards. Soil clean-up usually requires several years because the chemical speciation of Cu in solution is dominated by forms complexed with dissolved organic matter (DOM) that are not phytoavailable, unlike the "free" forms (Cu²⁺). Indeed, more than 98% of the Cu in solution is bound to DOM. The selection and inoculation in vineyard soils of bacteria (bioaugmentation) able to degrade Cu-DOM complexes could increase the phytoavailable pool of Cu²⁺ in the soil solution (in addition to bacteria which first mobilize Cu in solution from the soil bearing phases) in order to increase phytoextraction performance. In this study, seven Cu-accumulating plants potentially usable in inter-rows were tested for their Cu phytoextraction capacity in hydroponics (ray-grass, brown mustard, buckwheat, hemp, sunflower, oats, and chicory). A bacterial consortium was also tested: Pseudomonas sp., previously studied for its ability to mobilize Cu through the pyoverdine siderophore (a complexing agent) and potentially to degrade Cu-DOM complexes, and a second bacterium (to be selected) able to promote the survival of Pseudomonas sp. following its inoculation into soil. An interaction network method was used, based on the notions of co-occurrence and, therefore, of bacterial abundance found in the same soils. Bacteria from the EcoVitiSol project (Alsace, France) were targeted. The final step consisted of coupling the bacterial consortium with the chosen plant in soil pots.
The degradation of Cu-DOM complexes is measured on the basis of the absorption index at 254 nm, which gives insight into the aromaticity of the DOM. The "free" Cu in solution (from the mobilization of Cu and/or the degradation of Cu-DOM complexes) is assessed by measuring pCu. Finally, Cu accumulation in plants is measured by ICP-AES. The selection of the plant is currently being finalized. The interaction network method identified the best positive interactions of Flavobacterium sp. with Pseudomonas sp. These bacteria are both PGPR (plant growth-promoting rhizobacteria), with the ability to improve plant growth and to mobilize Cu from the soil bearing phases (siderophores). These bacteria are also known to degrade phenolic groups, which are highly present in DOM; they could therefore contribute to the degradation of Cu-DOM complexes. The results of the upcoming bacteria-plant coupling tests in pots will also be presented.
Keywords: Cu-DOM complexes, bioaugmentation, phytoavailability, phytoextraction
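The pCu measurement mentioned above is, by definition, the negative decadic logarithm of the free Cu²⁺ activity. A minimal sketch of how total dissolved Cu, the DOM-bound fraction, and pCu relate; the concentration and bound fraction used below are illustrative assumptions (and activity is approximated by concentration):

```python
# Sketch: pCu = -log10 of the free Cu2+ molar concentration, where the
# free fraction is what remains after complexation by DOM.
import math

CU_MOLAR_MASS = 63.55  # g/mol

def pcu(total_cu_mg_per_l: float, bound_fraction: float) -> float:
    """pCu from total dissolved Cu (mg/L) and the fraction bound to DOM."""
    free_molar = total_cu_mg_per_l / 1000.0 / CU_MOLAR_MASS * (1.0 - bound_fraction)
    return -math.log10(free_molar)

# With 98% of 1 mg/L Cu bound to DOM, only 2% is free:
print(round(pcu(1.0, 0.98), 2))  # 6.5
```

This shows why DOM binding matters for phytoextraction: degrading the complexes lowers pCu (more free Cu²⁺) without changing total Cu.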
Procedia PDF Downloads 83
150 Design, Development and Qualification of a Magnetically Levitated Blower for CO₂ Scrubbing in Manned Space Missions
Authors: Larry Hawkins, Scott K. Sakakura, Michael J. Salopek
Abstract:
The Marshall Space Flight Center is designing and building a next-generation CO₂ removal system, the Four Bed Carbon Dioxide Scrubber (4BCO₂), which will use the International Space Station (ISS) as a testbed. The current ISS CO₂ removal system has faced many challenges in both performance and reliability. Given that CO₂ removal is an integral Environmental Control and Life Support System (ECLSS) subsystem, the 4BCO₂ scrubber has been designed to eliminate the shortfalls identified in the current ISS system. One of the key required upgrades was to improve the performance and reliability of the blower that provides the airflow through the CO₂ sorbent beds. A magnetically levitated blower, capable of higher airflow and pressure than the previous system, was developed to meet this need. The design and qualification testing of this next-generation blower are described here. The new blower features a high-efficiency permanent magnet motor, a five-axis active magnetic bearing system, and a compact controller containing both a variable speed drive and a magnetic bearing controller. The blower uses a centrifugal impeller to pull air from the inlet port and drive it through an annular space around the motor and magnetic bearing components to the exhaust port. Technical challenges of the blower and controller development include survival of the blower system under launch random vibration loads, operation in microgravity, packaging under strict size and weight requirements, and successful operation during 4BCO₂ operational changeovers. An ANSYS structural dynamic model of the controller was used to predict the response to the NASA-defined random vibration spectrum and to drive minor design changes. The simulation results are compared to measurements from qualification testing of the controller on a vibration table. Predicted blower performance is compared to flow-loop testing measurements.
The dynamic response of the system to valve changeovers is presented and discussed using high-bandwidth measurements from dynamic pressure probes, magnetic bearing position sensors, and actuator coil currents. The results presented in the paper show that the blower controller will survive launch vibration levels, the blower flow meets the requirements, and the magnetic bearings have adequate load capacity and control bandwidth to maintain the desired rotor position during the valve changeover transients.
Keywords: blower, carbon dioxide removal, environmental control and life support system, magnetic bearing, permanent magnet motor, validation testing, vibration
Procedia PDF Downloads 136
149 Distribution, Source Apportionment and Assessment of Pollution Level of Trace Metals in Water and Sediment of a Riverine Wetland of the Brahmaputra Valley
Authors: Kali Prasad Sarma, Sanghita Dutta
Abstract:
Deepor Beel (DB) is the lone Ramsar site and an important wetland of the Brahmaputra valley in the state of Assam. The local people from fourteen peripheral villages traditionally utilize the wetland for harvesting vegetables, flowers, aquatic seeds, medicinal plants, fish, molluscs, fodder for domestic cattle, etc. Therefore, it is of great importance to understand the concentration and distribution of trace metals in the water-sediment system of the beel in order to protect its ecological environment. DB lies between 26°05′26′′N to 26°09′26′′N latitudes and 90°36′39′′E to 91°41′25′′E longitudes. Water samples from the surface layer of water up to 40 cm deep and sediment samples from the top 5 cm layer of surface sediments were collected. The trace metals in waters and sediments were analysed using ICP-OES. The organic carbon was analysed using a TOC analyser. The different minerals present in the sediments were confirmed by the X-ray diffraction method (XRD). SEM images were recorded for the samples using SEM attached with an energy dispersive X-ray unit, with an accelerating voltage of 20 kV. All the statistical analyses were performed using SPSS 20.0 for Windows. In the present research, the distribution, source apportionment, temporal and spatial variability, extent of pollution, and ecological risk of eight toxic trace metals in the sediments and water of DB were investigated. The average concentrations of chromium (Cr) (both seasons), copper (Cu) and lead (Pb) (pre-monsoon), and zinc (Zn) and cadmium (Cd) (post-monsoon) in sediments were higher than the consensus-based threshold effect concentration (TEC). The persistent exposure to toxic trace metals in sediments poses a potential threat, especially to sediment-dwelling organisms. The degree of pollution in DB sediments for Pb, cobalt (Co), Zn, Cd, Cr, Cu, and arsenic (As) was assessed using the Enrichment Factor (EF), Geo-accumulation index (Igeo), and Pollution Load Index (PLI).
The results indicated that contamination of surface sediments in DB is dominated by Pb and Cd and, to a lesser extent, by Co, Fe, Cu, Cr, As, and Zn. A significant positive correlation among the element pairs Co/Fe and Zn/As in water, and Cr/Zn and Fe/As in sediments, indicates a similar source of origin for these metals. The effects of interaction among trace metals between water and sediments show significant variations (F = 94.02, P < 0.001), suggesting maximum mobility of trace metals in DB sediments and water. The source apportionment of the heavy metals was carried out using Principal Component Analysis (PCA). SEM-EDS detected the presence of Cd, Cu, Cr, Zn, Pb, As, and Fe in the sediment samples. The average concentrations of Cd, Zn, Pb, and As in the bed sediments of DB were found to be higher than the crustal abundance. The EF values indicate that Cd and Pb are significantly enriched. Source apportionment of the eight metals using PCA revealed that Cd was anthropogenic in origin; Pb, As, Cr, and Zn had mixed sources; whereas Co, Cu, and Fe were natural in origin.
Keywords: Deepor Beel, enrichment factor, principal component analysis, trace metals
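The three pollution indices used in the assessment above (EF, Igeo, PLI) follow standard textbook definitions, which can be sketched as below. This is a generic illustration with invented concentrations, not the study's data; Fe is assumed to be the conservative reference element for EF.

```python
import math

def enrichment_factor(c_metal, c_fe, bg_metal, bg_fe):
    """EF = (M/Fe)_sample / (M/Fe)_background, with Fe assumed as the
    conservative reference element."""
    return (c_metal / c_fe) / (bg_metal / bg_fe)

def geoaccumulation_index(c_metal, bg_metal):
    """Igeo = log2(Cn / (1.5 * Bn)); the factor 1.5 compensates for
    natural fluctuations in the background value."""
    return math.log2(c_metal / (1.5 * bg_metal))

def pollution_load_index(conc, background):
    """PLI = n-th root of the product of contamination factors CF = C/B."""
    cfs = [c / b for c, b in zip(conc, background)]
    return math.prod(cfs) ** (1.0 / len(cfs))

# Invented Pb example (mg/kg): 40 in sediment vs. a crustal background
# of 20, normalised by Fe (30000 sample, 45000 background).
print(round(enrichment_factor(40.0, 30000.0, 20.0, 45000.0), 2))  # -> 3.0
```

EF values near 1 suggest a crustal origin, while values well above 1 (as reported here for Cd and Pb) point to anthropogenic enrichment.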
Procedia PDF Downloads 290
148 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction
Authors: Alisawi Alaa T., Collins P. E. F.
Abstract:
The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and broader considerations of safety. The level of slope stability risk should be identified due to its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these various methods on the basis of hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by identifying the equilibrium of the shear stress and shear strength. The slope is considered stable when the movement resistance forces are greater than those that drive the movement, with a factor of safety (the ratio of the resisting to the driving forces) that is greater than 1.00.
However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards, assessment of the level of slope stability hazard, development of a sophisticated and practical hazard analysis method, linkage of the failure type of specific landslide conditions to the appropriate solution, and application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through a geographical information system (GIS) and the inverse distance weighted (IDW) spatial interpolation technique. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.
Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard
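As a minimal sketch of the limit-equilibrium idea described above (factor of safety as the ratio of resisting to driving forces), the infinite-slope model gives the factor of safety in closed form. The soil parameters below are hypothetical; real projects would use method-of-slices or the numerical codes listed above for finite slopes.

```python
import math

def fos_infinite_slope(c_eff, phi_deg, gamma, depth, beta_deg, u=0.0):
    """Factor of safety for an infinite slope (limit equilibrium):
    FS = (c' + (gamma*z*cos^2(beta) - u) * tan(phi'))
         / (gamma*z*sin(beta)*cos(beta)).
    FS > 1 means the resisting forces exceed the driving forces."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    sigma_n = gamma * depth * math.cos(beta) ** 2 - u      # effective normal stress
    tau = gamma * depth * math.sin(beta) * math.cos(beta)  # driving shear stress
    return (c_eff + sigma_n * math.tan(phi)) / tau

# Hypothetical dry slope: c' = 5 kPa, phi' = 30 deg, unit weight
# 18 kN/m^3, failure surface 3 m deep, slope angle 25 deg.
print(round(fos_infinite_slope(5.0, 30.0, 18.0, 3.0, 25.0), 2))  # -> 1.48
```

Raising the pore pressure `u` or steepening the slope drives FS toward 1.0, which is exactly the stability threshold the abstract describes.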
Procedia PDF Downloads 101
147 Comparison of Microstructure, Mechanical Properties and Residual Stresses in Laser and Electron Beam Welded Ti–5Al–2.5Sn Titanium Alloy
Authors: M. N. Baig, F. N. Khan, M. Junaid
Abstract:
Titanium alloys are widely employed in aerospace, medical, chemical, and marine applications. These alloys offer many advantages, such as low specific weight, high strength-to-weight ratio, excellent corrosion resistance, high melting point, and good fatigue behavior. These attractive properties make titanium alloys very unique, and therefore they require special attention in all areas of processing, especially welding. In this work, 1.6 mm thick sheets of Ti-5Al-2.5Sn, an alpha titanium (α-Ti) alloy, were welded using electron beam welding (EBW) and laser beam welding (LBW) processes to achieve a full-penetration bead-on-plate (BoP) configuration. The weldments were studied using a polarized optical microscope, SEM, EDS, and XRD. The microhardness distribution across the weld zone and the smooth and notch tensile strengths of the weldments were also recorded. Residual stresses, using the hole-drill strain measurement (HDSM) method, and deformation patterns of the weldments were measured for the purpose of comparing the two welding processes. Fusion zone widths of both EBW and LBW weldments were found to be approximately equivalent owing to the fairly similar high power densities of both processes. Relatively less oxide content and consequently higher joint quality were achieved in the EBW weldment as compared to LBW due to the vacuum environment and the absence of any shielding gas. However, an increase in heat-affected zone width and a partial α′-martensitic transformation in the fusion zone of the EBW weldment were observed because of the lower cooling rates associated with EBW as compared with LBW. The microstructure in the fusion zone of the EBW weldment comprised both acicular α and α′ martensite within the prior β grains, whereas complete α′-martensitic transformation was observed within the fusion zone of the LBW weldment. The hardness of the fusion zone in the EBW weldment was found to be lower than that of the fusion zone of the LBW weldment due to the observed microstructural differences.
The notch tensile specimen of LBW exhibited higher load capacity, ductility, and absorbed energy as compared with the EBW specimen due to the presence of the high-strength α′-martensitic phase. It was observed that the sheet deformation and deformation angle in the EBW weldment were greater than in the LBW weldment due to relatively more heat retention in EBW, which led to more thermal strains and hence higher deformations and deformation angle. The lowest residual stresses were found in LBW weldments, and these were tensile in nature. This was owing to the high power density and higher cooling rates associated with the LBW process. The EBW weldment exhibited the highest compressive residual stresses, due to which the service life of the EBW weldment is expected to improve.
Keywords: laser and electron beam welding, microstructure and mechanical properties, residual stress and distortions, titanium alloys
Procedia PDF Downloads 229
146 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in the Coastal Zone of Alexandria: Spatial-Ecological Modeling
Authors: Mohammed El Raey, Moustafa Osman Mohammed
Abstract:
Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability with a spatial-ecological model gives attention to urban environments in design review management to comply with the Earth system. The natural exchange patterns of ecosystems have consistent and periodic cycles to preserve energy flows and materials in the Earth system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex. The other analytical approach is Failure Mode and Effect Analysis (FMEA) for critical components. The plant safety parameters are identified for the engineering topology employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed, and assessed for the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and concentration of chlorine gas in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The prediction of accident consequences is traced in risk contour concentration lines. The local greenhouse effect is predicted with relevant conclusions. The spatial-ecological model also predicts the distribution schemes from the perspective of pollutants, considering the multiple factors of a multi-criteria analysis. The data extend the input–output analysis to evaluate the spillover effect, and Monte Carlo simulations and sensitivity analysis were conducted. Their unique structure is balanced within "equilibrium patterns", such as the biosphere, and collectively forms a composite index of many distributed feedback flows. These dynamic structures are related through their physical and chemical properties and enable a gradual and prolonged incremental pattern.
While this spatial model structure argues from ecology, resource savings, static load design, financial, and other pragmatic reasons, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structure for developing urban environments using optimization software, applied as an example of an integrated industrial structure where the process is based on engineering topology as an optimization approach to systems ecology.
Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology
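The Gaussian plume calculation referenced above can be sketched as follows. The release rate, wind speed, stack height, and power-law dispersion coefficients below are illustrative placeholders, not the study's chlorine scenario; real assessments take the dispersion parameters σy and σz from atmospheric stability-class tables (e.g., Pasquill-Gifford).

```python
import math

def gaussian_plume(q, u, x, y, z, h, a_y, b_y, a_z, b_z):
    """Ground-reflected Gaussian plume concentration (g/m^3):
    C = Q/(2*pi*u*sy*sz) * exp(-y^2/(2*sy^2))
        * [exp(-(z-h)^2/(2*sz^2)) + exp(-(z+h)^2/(2*sz^2))].
    Dispersion widths follow a power law sy = a_y*x**b_y (likewise sz),
    whose constants depend on atmospheric stability class."""
    sy = a_y * x ** b_y
    sz = a_z * x ** b_z
    lateral = math.exp(-y * y / (2.0 * sy * sy))
    vertical = (math.exp(-((z - h) ** 2) / (2.0 * sz * sz))
                + math.exp(-((z + h) ** 2) / (2.0 * sz * sz)))  # ground reflection
    return q / (2.0 * math.pi * u * sy * sz) * lateral * vertical

# Hypothetical release (not the paper's scenario): 100 g/s of chlorine,
# 3 m/s wind, receptor 500 m downwind on the centreline at ground level,
# 10 m release height, illustrative power-law coefficients.
c = gaussian_plume(100.0, 3.0, 500.0, 0.0, 0.0, 10.0, 0.08, 0.9, 0.06, 0.9)
print(f"{c:.2e} g/m^3")
```

Evaluating this function over a grid of (x, y) receptors is what produces the risk contour concentration lines mentioned in the abstract.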
Procedia PDF Downloads 82
145 Biosensor: An Approach towards Sustainable Environment
Authors: Purnima Dhall, Rita Kumar
Abstract:
Introduction: The River Yamuna flows through the national capital territory (NCT) and is also the primary source of drinking water for the city. Delhi discharges about 3,684 MLD of sewage through its 18 drains into the Yamuna. Water quality monitoring is an important aspect of water management concerning pollution control. Public concern and legislation are nowadays demanding better environmental control. Conventional methods for estimating BOD5 have various drawbacks, as they are expensive, time-consuming, and require the use of highly trained personnel. Stringent forthcoming regulations on wastewater have necessitated the urge to develop analytical systems that contribute to greater process efficiency. Biosensors offer the possibility of real-time analysis. Methodology: In the present study, a novel rapid method for the determination of biochemical oxygen demand (BOD) has been developed. Using the developed method, the BOD of a sample can be determined within 2 hours, as compared to 3-5 days with the standard BOD 3-5 day assay. Moreover, the test is based on a specified consortium instead of undefined seeding material; therefore, it minimizes the variability among the results. The device is coupled to software which automatically calculates the dilution required, so prior dilution of the sample is not required before BOD estimation. The developed BOD biosensor makes use of immobilized microorganisms to sense the biochemical oxygen demand of industrial wastewaters having low, moderate, or high biodegradability. The method is quick, robust, online, and less time-consuming. Findings: The results of extensive testing of the developed biosensor on drains demonstrate that the BOD values obtained by the device correlated with conventional BOD values; the observed R² value was 0.995. The reproducibility of the measurements with the BOD biosensor was within a percentage deviation of ±10%.
Advantages of the developed BOD biosensor:
• Determines the water pollution quickly, in 2 hours;
• Determines the water pollution of all types of waste water;
• Has a prolonged shelf life of more than 400 days;
• Enhanced repeatability and reproducibility values;
• Elimination of COD estimation.
Distinctiveness of the technology:
• Bio-component: can determine the BOD load of all types of waste water;
• Immobilization: increased shelf life (> 400 days), extended stability and viability;
• Software: reduces manual errors and the estimation time.
Conclusion: The BOD biosensor can be used to measure the BOD value of real wastewater samples. The BOD biosensor showed good reproducibility in the results. This technology is useful in deciding treatment strategies well ahead, thus facilitating the discharge of properly treated water to common water bodies. The developed technology has been transferred to M/s Forbes Marshall Pvt Ltd, Pune.
Keywords: biosensor, biochemical oxygen demand, immobilized, monitoring, Yamuna
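The reported agreement between biosensor and conventional BOD values (R² = 0.995) is the coefficient of determination from a least-squares fit, which can be computed as sketched below. The paired readings are invented for illustration, not the study's data.

```python
def r_squared(measured, reference):
    """Coefficient of determination between two paired series via the
    squared Pearson correlation (equivalent to R^2 of a simple
    least-squares linear fit)."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(reference) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, reference))
    sxx = sum((x - mx) ** 2 for x in measured)
    syy = sum((y - my) ** 2 for y in reference)
    return sxy * sxy / (sxx * syy)

# Invented paired readings (mg/L): biosensor vs. conventional 3-5 day BOD.
sensor = [48.0, 110.0, 160.0, 240.0, 310.0]
conventional = [50.0, 105.0, 165.0, 235.0, 315.0]
print(round(r_squared(sensor, conventional), 3))  # -> 0.998
```

An R² this close to 1 is what justifies substituting the 2-hour biosensor reading for the multi-day reference assay in routine monitoring.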
Procedia PDF Downloads 279
144 X-Ray Detector Technology Optimization in CT Imaging
Authors: Aziz Ikhlef
Abstract:
Most multi-slice CT scanners are built with detectors composed of scintillator–photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise is improved because of the reduction of the load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the clinical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper will present an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed, and crosstalk, to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
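The residual-signal requirement for fast-kVp switching discussed above can be illustrated with a single-exponential model of the scintillator's primary decay. This is a deliberately simplified sketch (real afterglow has multiple decay components), and the decay times and view period below are representative values, not from the paper.

```python
import math

def residual_fraction(decay_time_us, view_period_us):
    """Fraction of the previous view's light output still being emitted
    when the next sample is read, assuming a single-exponential
    primary decay with the given time constant."""
    return math.exp(-view_period_us / decay_time_us)

# A fast ceramic scintillator (~3 us primary decay) vs. a slow one
# (~100 us), both sampled at a ~200 us view period (illustrative):
print(f"fast: {residual_fraction(3.0, 200.0):.1e}")
print(f"slow: {residual_fraction(100.0, 200.0):.3f}")  # -> slow: 0.135
```

The slow material leaves roughly 13% of the 140 kVp view bleeding into the 80 kVp view, which is exactly the spectral cross-contamination the fast primary speed is meant to suppress.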
Procedia PDF Downloads 274
143 Comparative Vector Susceptibility for Dengue Virus and Their Co-Infection in A. aegypti and A. albopictus
Authors: Monika Soni, Chandra Bhattacharya, Siraj Ahmed Ahmed, Prafulla Dutta
Abstract:
Dengue is now a globally important arboviral disease. Extensive vector surveillance has already established A. aegypti as a primary vector, but A. albopictus is now accelerating the situation through gradual adaptation to human surroundings. Global destabilization and a gradual climatic shift with rising temperature have significantly expanded the geographic range of these species. These versatile vectors also host Chikungunya, Zika, and yellow fever virus. The biggest challenge faced by endemic countries now is the upsurge in co-infections reported with multiple serotypes and virus co-circulation. To foster vector control interventions and mitigate the disease burden, there is a surge in demand for knowledge on vector susceptibility and viral tolerance in response to multiple infections. To address our understanding of transmission dynamics and reproductive fitness, both vectors were exposed to single and dual combinations of all four dengue serotypes by artificial feeding and followed up to the third generation. Artificial feeding showed a significant difference in feeding rate between the two species, with A. albopictus being a poor artificial feeder (35-50%) compared to A. aegypti (95-97%). Robust sequential screening of viral antigen in mosquitoes was performed using Dengue NS1 ELISA, RT-PCR, and quantitative PCR. To observe viral dissemination in different mosquito tissues, an indirect immunofluorescence assay was performed. Results showed that both vectors were initially infected with all dengue (1-4) serotypes and their co-infection combinations (D1 and D2, D1 and D3, D1 and D4, D2 and D4). In the case of DENV-2, there was a significant difference in the peak titer observed at the 16th day post infection. But when exposed to dual infections, A. aegypti supported all combinations of virus, whereas A. albopictus only continued single infections in successive days. There was a significant negative effect on the fecundity and fertility of both vectors compared to the control (P_ANOVA < 0.001).
In the case of dengue-2-infected mosquitoes, fecundity in the parent generation was significantly higher (P_Bonferroni < 0.001) for A. albopictus compared to A. aegypti, but there was a complete loss of fecundity from the second to third generation for A. albopictus. It was observed that A. aegypti became infected with multiple serotypes frequently, even at low viral titres, compared to A. albopictus. A possible reason for this could be the presence of Wolbachia infection in A. albopictus, the mosquito innate immune response, small RNA interference, etc. Based on the observations, it could be anticipated that transovarial transmission may not be an important phenomenon for clinical disease outcome, due to the absence of viral positivity by the third generation. Also, Dengue NS1 ELISA can be used for preliminary viral detection in mosquitoes, as more than 90% of the samples were found positive compared to RT-PCR and viral load estimation.
Keywords: co-infection, dengue, reproductive fitness, viral quantification
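Viral load estimation by quantitative PCR, as used in the screening above, rests on a linear standard curve relating the Ct value to the log10 copy number. A hedged sketch follows; the slope and intercept are typical illustrative values, not the study's calibration.

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Estimate copy number from a qPCR Ct value using a linear
    standard curve Ct = slope*log10(copies) + intercept. A slope of
    -3.32 corresponds to 100% amplification efficiency (illustrative
    values, not the study's calibration)."""
    return 10 ** ((ct - intercept) / slope)

def efficiency(slope):
    """Amplification efficiency implied by the standard-curve slope:
    E = 10^(-1/slope) - 1 (1.0 means a perfect doubling per cycle)."""
    return 10 ** (-1.0 / slope) - 1.0

# With these curve constants, a Ct of 28.04 maps to ~10^3 copies.
print(round(copies_from_ct(28.04)))  # -> 1000
```

Because Ct scales with the logarithm of the template amount, a shift of about 3.3 cycles corresponds to a tenfold change in viral load, which is why even "low titre" infections remain quantifiable.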
Procedia PDF Downloads 203
142 Investigation of Residual Stress Relief by in-situ Rolling Deposited Bead in Directed Laser Deposition
Authors: Ravi Raj, Louis Chiu, Deepak Marla, Aijun Huang
Abstract:
Hybridization of the directed laser deposition (DLD) process using an in-situ micro-roller to impart a vertical compressive load on the deposited bead at elevated temperatures can relieve tensile residual stresses incurred in the process. To investigate this stress relief mechanism and its relationship with the in-situ rolling parameters, a fully coupled dynamic thermo-mechanical model is presented in this study. A single bead deposition of Ti-6Al-4V alloy, with an in-situ roller made of mild steel moving at a constant speed with a fixed nominal bead reduction, is simulated using the explicit solver of the finite element software Abaqus. The thermal model includes laser heating during the deposition process and the heat transfer between the roller and the deposited bead. The laser heating is modeled as a moving heat source with a Gaussian distribution, applied along the pre-formed bead's surface using the VDFLUX Fortran subroutine. The bead's cross-section is assumed to be semi-elliptical. The interfacial heat transfer between the roller and the bead is considered in the model. In addition, the roller is cooled internally by axial water flow, modeled using convective heat transfer. The mechanical model for the bead and substrate includes the effects of rolling along with the deposition process, and their elastoplastic material behavior is captured using J2 plasticity theory. The model accounts for strain, strain rate, and temperature effects on the yield stress based on the Johnson-Cook theory. The various aspects of this material behavior are captured in the FE software using the subroutines VUMAT for elastoplastic behavior, VUHARD for yield stress, and VUEXPAN for thermal strain. The roller is assumed to be elastic and does not undergo any plastic deformation. Also, contact friction at the roller-bead interface is considered in the model.
Based on the thermal results of the bead, the distance between the roller and the deposition nozzle (roller offset) can be determined to ensure rolling occurs around the beta-transus temperature for the Ti-6Al-4V alloy. It was identified that the roller offset and the nominal bead height reduction are crucial parameters that influence the residual stresses in the hybrid process. The results obtained from a simulation at a roller offset of 20 mm and a nominal bead height reduction of 7% reveal that the tensile residual stresses decrease by about 52% due to in-situ rolling throughout the deposited bead. This model can be used to optimize the rolling parameters to minimize the residual stresses in the hybrid DLD process with in-situ micro-rolling.
Keywords: directed laser deposition, finite element analysis, hybrid in-situ rolling, thermo-mechanical model
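The Johnson-Cook yield model implemented in the VUHARD subroutine has the standard form σ = (A + Bεⁿ)(1 + C ln ε̇*)(1 − T*ᵐ). A minimal sketch follows, using representative Ti-6Al-4V constants from the open literature rather than the paper's calibration; it shows the thermal softening that makes rolling near the beta transus effective.

```python
import math

def johnson_cook_stress(eps, eps_rate, temp_k,
                        A=997.9, B=653.1, n=0.45, C=0.0198, m=0.7,
                        rate_ref=1.0, t_room=298.0, t_melt=1878.0):
    """Johnson-Cook flow stress (MPa):
    sigma = (A + B*eps**n) * (1 + C*ln(eps_rate/rate_ref)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room)/(T_melt - T_room).
    Constants are representative Ti-6Al-4V values, not the paper's."""
    t_star = max(0.0, (temp_k - t_room) / (t_melt - t_room))
    strain_term = A + B * eps ** n           # strain hardening
    rate_term = 1.0 + C * math.log(eps_rate / rate_ref)  # rate sensitivity
    temp_term = 1.0 - t_star ** m            # thermal softening
    return strain_term * rate_term * temp_term

# Flow stress drops sharply as the bead is rolled while still hot:
print(round(johnson_cook_stress(0.05, 1.0, 298.0)))   # room temperature
print(round(johnson_cook_stress(0.05, 1.0, 1150.0)))  # near the beta transus
```

The large reduction in flow stress at elevated temperature is why timing the roller contact via the roller offset matters: the same nominal bead reduction produces far more plastic (stress-relieving) deformation when the bead is still hot.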
Procedia PDF Downloads 111
141 Improvement of Activity of β-galactosidase from Kluyveromyces lactis via Immobilization on Polyethylenimine-Chitosan
Authors: Carlos A. C. G. Neto, Natan C. G. e Silva, Thaís de O. Costa, Luciana R. B. Gonçalves, Maria V. P. Rocha
Abstract:
β-galactosidases (E.C. 3.2.1.23) are enzymes that have attracted interest for catalyzing the hydrolysis of lactose and for producing galacto-oligosaccharides by favoring transgalactosylation reactions. These enzymes, when immobilized, can have some enzymatic characteristics substantially improved, and the coating of supports with multifunctional polymers is a promising alternative to enhance the stability of the biocatalysts, among which polyethylenimine (PEI) stands out. PEI has certain favorable properties, such as being a flexible polymer that adapts to the structure of the enzyme, giving greater stability, especially for multimeric enzymes such as β-galactosidases. Besides that, it protects them from environmental variations. The use of a chitosan support coated with PEI could improve the catalytic efficiency of β-galactosidase from Kluyveromyces lactis in the transgalactosylation reaction for the production of prebiotics such as lactulose, since this strain is more effective in the hydrolysis reaction. In this context, the aim of the present work was first to develop biocatalysts of β-galactosidase from K. lactis immobilized on chitosan coated with PEI, determining the immobilization parameters and the operational and thermal stability, and then to apply them in hydrolysis and transgalactosylation reactions to produce lactulose using whey as a substrate. The immobilization of β-galactosidase on chitosan, previously functionalized with 0.8% (v/v) glutaraldehyde and then coated with a 10% (w/v) PEI solution, was evaluated using an enzymatic load of 10 mg protein per gram of support. Subsequently, the hydrolysis and transgalactosylation reactions were conducted at 50 °C and 120 RPM for 20 minutes, using whey supplemented with fructose at a ratio of 1:2 lactose/fructose, totaling 200 g/L. Operational stability studies were performed under the same conditions for 10 cycles. Thermal stability tests of the biocatalysts were conducted at 50 ºC in 50 mM phosphate buffer, pH 6.6, with 0.1 mM MnCl₂.
The biocatalyst whose support was coated was named CHI_GLU_PEI_GAL, and the one that was not coated was named CHI_GLU_GAL. The coating of the support with PEI considerably improved the immobilization parameters. The immobilization yield increased from 56.53% to 97.45%, the biocatalyst activity from 38.93 U/g to 95.26 U/g, and the efficiency from 3.51% to 6.0% for the uncoated and coated supports, respectively. The biocatalyst CHI_GLU_PEI_GAL was better than CHI_GLU_GAL in the hydrolysis of lactose and the production of lactulose, converting 97.05% of lactose at 5 min of reaction and producing 7.60 g/L lactulose in the same time interval. The CHI_GLU_PEI_GAL biocatalyst was stable in the lactose hydrolysis reactions during the 10 cycles evaluated, converting 73.45% of lactose even after the tenth cycle; in lactulose production it was stable up to the fifth cycle evaluated, producing 10.95 g/L lactulose. However, the thermal stability of the CHI_GLU_GAL biocatalyst was superior, with a half-life time 6 times higher, probably because the enzyme was immobilized by covalent bonding, which is stronger than adsorption (CHI_GLU_PEI_GAL). Therefore, the strategy of coating supports with PEI has proven to be effective for the immobilization of β-galactosidase from K. lactis, considerably improving the immobilization parameters as well as the catalytic action of the enzyme. Besides that, the process can be economically viable due to the use of an industrial residue as substrate.
Keywords: β-galactosidase, immobilization, Kluyveromyces lactis, lactulose, polyethylenimine, transgalactosylation reaction, whey
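Immobilization parameters like those reported above are commonly computed from simple activity balances. The sketch below uses invented activity values, and the definitions follow common practice in the immobilization literature; the paper may define its parameters slightly differently.

```python
def immobilization_yield(offered_u, supernatant_u):
    """IY (%): fraction of the offered enzyme activity that left the
    solution, i.e. bound to the support."""
    return (offered_u - supernatant_u) / offered_u * 100.0

def expressed_activity(observed_u, support_g):
    """Observed biocatalyst activity per gram of support (U/g)."""
    return observed_u / support_g

def immobilization_efficiency(observed_u, offered_u, supernatant_u):
    """Fraction (%) of the theoretically immobilized activity that the
    biocatalyst actually expresses; losses typically come from enzyme
    distortion on the support and diffusional limitations."""
    return observed_u / (offered_u - supernatant_u) * 100.0

# Invented example: 1000 U offered to 1 g of support, 25.5 U left in
# the supernatant, 58 U measured on the final biocatalyst.
print(round(immobilization_yield(1000.0, 25.5), 2))  # -> 97.45
print(round(immobilization_efficiency(58.0, 1000.0, 25.5), 2))
```

A high yield combined with a low expressed efficiency, as in the coated support above (97.45% vs. 6.0%), indicates that most of the enzyme binds but only a fraction of its activity is accessible, which is typical for large multimeric enzymes on porous supports.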
Procedia PDF Downloads 112