Search results for: electric discharge machining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2448

288 Congenital Diaphragmatic Hernia Outcomes in a Low-Volume Center

Authors: Michael Vieth, Aric Schadler, Hubert Ballard, J. A. Bauer, Pratibha Thakkar

Abstract:

Introduction: Congenital diaphragmatic hernia (CDH) is a condition in which abdominal contents herniate into the thoracic cavity, requiring postnatal surgical repair. Previous literature suggests improved CDH outcomes at high-volume regional referral centers compared to low-volume centers. The purpose of this study was to compare CDH outcomes at Kentucky Children’s Hospital (KCH), a low-volume center, with those of the Congenital Diaphragmatic Hernia Study Group (CDHSG). Methods: A retrospective chart review of neonates with CDH was performed at KCH for 2007-2019; patients were subdivided into two cohorts, those requiring ECMO therapy and those not. Basic demographic data and measures of mortality and morbidity, including ventilator days and length of stay, were compared to the CDHSG. For the ECMO cohort, measures of morbidity were collected, including duration of ECMO, clinical bleeding, intracranial hemorrhage, sepsis, need for continuous renal replacement therapy (CRRT), need for sildenafil at discharge, timing of surgical repair, and total ventilator days. Statistical analysis was performed with IBM SPSS Statistics version 28, using one-sample t-tests and the one-sample Wilcoxon signed-rank test as appropriate. Results: There were 27 neonatal patients with CDH at KCH from 2007-2019; 9 of the 27 required ECMO therapy. Birth weight and gestational age were similar between KCH and the CDHSG (2.99 kg vs 2.92 kg, p = 0.655; 37.0 weeks vs 37.4 weeks, p = 0.51). About half of the patients were inborn in both cohorts (52% vs 56%, p = 0.676). The KCH cohort had significantly more Caucasian patients (96% vs 55%, p < 0.001). Unadjusted mortality was similar in both groups (KCH 70% vs CDHSG 72%, p = 0.857). Using ECMO utilization (KCH 78% vs CDHSG 52%, p = 0.118) and need for surgical repair (KCH 95% vs CDHSG 85%, p = 0.060) as proxies for severity, mortality was comparable between groups. No significant difference was noted in pulmonary outcomes such as average ventilator days (KCH 43.2 vs CDHSG 17.3, p = 0.078) or home oxygen dependency (KCH 44% vs CDHSG 24%, p = 0.108). Average length of hospital stay at KCH was similar to the CDHSG (64.4 vs 49.2 days, p = 1.000). Conclusion: Our study suggests that outcomes in CDH patients are independent of a center’s case-volume status. Management of CDH with a standardized approach at a low-volume center can yield similar outcomes. These data support treating patients with CDH at low-volume centers rather than transferring them to higher-volume centers.
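
The one-sample comparisons above can be sketched as follows; the KCH birth-weight values below are hypothetical, and only the CDHSG reference mean of 2.92 kg comes from the abstract:

```python
import math
import statistics

# Hypothetical KCH birth weights (kg); only the CDHSG reference mean
# of 2.92 kg is taken from the abstract.
kch = [2.8, 3.1, 2.9, 3.4, 2.6, 3.0, 2.7, 3.2, 3.1, 2.9]
mu0 = 2.92

xbar = statistics.mean(kch)
s = statistics.stdev(kch)          # sample standard deviation
n = len(kch)
t = (xbar - mu0) / (s / math.sqrt(n))   # one-sample t statistic
print(f"mean = {xbar:.2f} kg, t = {t:.3f}")  # compare |t| to a t-table critical value
```

With real data, the t statistic would be compared against the t distribution with n-1 degrees of freedom, which is what SPSS reports as the p-value.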

Keywords: ECMO, case volume, congenital diaphragmatic hernia, congenital diaphragmatic hernia study group, neonate

Procedia PDF Downloads 96
287 Lipid Extraction from Microbial Cell by Electroporation Technique and Its Influence on Direct Transesterification for Biodiesel Synthesis

Authors: Abu Yousuf, Maksudur Rahman Khan, Ahasanul Karim, Amirul Islam, Minhaj Uddin Monir, Sharmin Sultana, Domenico Pirozzi

Abstract:

In recent biodiesel research, traditional feedstocks such as edible or plant oils, animal fats, and waste cooking oil have been replaced by microbial oil. Well-known microbial oil producers include microalgae, oleaginous yeasts, and seaweeds. Conventional transesterification of microbial oil to biodiesel is slow, energy-intensive, cost-ineffective, and environmentally unfriendly: it involves several steps, including microbial biomass drying, cell disruption, oil extraction, solvent recovery, oil separation, and transesterification. Therefore, direct transesterification for biodiesel synthesis has been studied over the last few years. It combines all steps in a single reactor, eliminating biomass drying, oil extraction, and separation from solvent. Although it appears to be a faster and more cost-effective process, a number of difficulties must be solved to make it applicable at large scale. The main challenges are disrupting microbial cells in bulk volume and accelerating the esterification reaction, because the water content of the medium slows the reaction rate. Several methods have been proposed, but none is mature enough to implement at large scale, and extracting the maximum lipid from microbial cells (yeast, fungi, algae) with minimum energy input remains a great challenge. Electroporation produces a significant increase in cell conductivity and permeability through the application of an external electric field. It alters the size and structure of the cells to increase their porosity and disrupts the microbial cell walls within a few seconds, so that intracellular lipid leaks into the solution. Incorporating electroporation therefore contributes to the direct transesterification of microbial lipids by increasing the efficiency of biodiesel production.
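
The cell-disruption step can be put in rough numbers with the steady-state Schwan equation for the induced transmembrane potential of a spherical cell in a uniform field; the cell radius and poration threshold below are illustrative assumptions, not values from the abstract:

```python
# Steady-state Schwan equation: dV = 1.5 * r * E * cos(theta), maximal at the
# cell poles (cos(theta) = 1). Radius and threshold are assumed values.
def field_for_poration(radius_m, threshold_v=1.0):
    """Field strength (V/m) needed to reach threshold_v at the cell poles."""
    return threshold_v / (1.5 * radius_m)

r_yeast = 2.5e-6  # m, typical oleaginous yeast radius (assumed)
e_req = field_for_poration(r_yeast)
print(f"required field = {e_req/1e5:.1f} kV/cm")  # 1e5 V/m = 1 kV/cm
```

This is why electroporation pulses are specified in kV/cm: smaller cells need proportionally stronger fields to reach the same transmembrane potential.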

Keywords: biodiesel, electroporation, microbial lipids, transesterification

Procedia PDF Downloads 281
286 Investigating Constructions and Operation of Internal Combustion Engine Water Pumps

Authors: Michał Gęca, Konrad Pietrykowski, Grzegorz Barański

Abstract:

The water pump in a compression-ignition internal combustion engine transports hot coolant along a system of ducts from the engine block to the radiator, where the coolant temperature is lowered. This part needs to maintain a constant volumetric flow rate, and its power should be regulated to avoid a significant drop in pressure if the coolant flow decreases. Internal combustion engine cooling systems use centrifugal pumps. The paper investigates 4 constructions of engine pumps, taken from diesel engines with a maximum power of 75 kW. Each has a different rotor shape, diameter, and width. A test stand was created and the geometry inside all 4 engine blocks was mapped. For a given pump speed, set on the inverter of the electric drive motor, the valve position was changed and volumetric flow rate, pressure, and power were recorded. Pump speed was regulated from 1200 RPM to 7000 RPM in steps of 300 RPM. The volumetric flow rates, pressure drops, and efficiencies were determined for each pump speed, and the operating maps of each pump were drawn accordingly. The aim of our research was to select a pump for an aircraft compression-ignition engine; the pressure drop at a given flow through the block and radiator of the designed aircraft engine was calculated. The water pump should be lightweight and have a low power demand, which affects the shape of the rotor and the bearings. The pump flow rate was assumed to be 3 kg/s (from a previous AVL BOOST research model) with a temperature difference of 5°C between the inlet (90°C) and outlet (95°C). Increasing pump speed beyond the operating point defined by the required pressure and volumetric flow rate does not increase the flow power, while pump efficiency decreases. The maximum total pump efficiency (PCC) is 45-50%. When the pump is driven at low speed with the valve 90% closed, its overall efficiency drops to 15-20%. Acknowledgement: This work was realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15, financed by the Polish National Centre for Research and Development.
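
A rough back-of-the-envelope check of the operating point described above, assuming pure-water coolant properties; the pressure rise and shaft power are illustrative values, not the authors' measurements:

```python
# Coolant-circuit sizing sketch. The 3 kg/s flow and 5 degC temperature
# difference come from the abstract; cp, rho, dp, and p_shaft are assumed.
cp = 4186.0      # J/(kg*K), water
rho = 1000.0     # kg/m^3
m_dot = 3.0      # kg/s
dT = 5.0         # K (90 -> 95 degC across the engine)

heat_rejected = m_dot * cp * dT          # W carried to the radiator
q_v = m_dot / rho                        # volumetric flow, m^3/s

# Pump efficiency at an assumed operating point (illustrative numbers):
dp = 1.2e5       # Pa pressure rise across the pump (assumed)
p_shaft = 800.0  # W shaft power (assumed)
eta = dp * q_v / p_shaft                 # hydraulic power / shaft power
print(f"heat rejected = {heat_rejected/1000:.1f} kW, pump efficiency = {eta:.0%}")
```

With these assumed numbers the efficiency lands at 45%, consistent with the 45-50% maximum reported in the abstract.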

Keywords: aircraft engine, diesel engine, flow, water pump

Procedia PDF Downloads 252
285 Blood Flow Estimator of the Left Ventricular Assist Device Based in Look-Up-Table: In vitro Tests

Authors: Tarcisio F. Leao, Bruno Utiyama, Jeison Fonseca, Eduardo Bock, Aron Andrade

Abstract:

This work presents a blood flow estimator based on a look-up table (LUT) for control of a left ventricular assist device (LVAD). This device has been used as a bridge to transplantation or as destination therapy to treat patients with heart failure (HF). Destination therapy requires a high-performance LVAD; thus, stable control is important to keep an adequate interaction between heart and device. LVAD control provides an adequate cardiac output while sustaining appropriate blood flow and perfusion pressure, an approach also described as physiologic control. Because of thrombus formation and reduced system reliability, sensors are not desirable for measuring these variables (blood flow and pressure), so control systems that estimate blood flow have been researched instead. The LVAD used in the study is composed of a centrifugal blood pump, a controller, and a power supply. The technique uses pump and actuator (motor) parameters of the LVAD, such as speed and electric current: the estimator relates electromechanical torque (motor or actuator) to hydraulic power (blood pump) via the LUT. An in vitro mock loop was used to evaluate deviations between estimated and actual blood flow. A solution of glycerin (50%) in water was used to simulate blood viscosity at 45% hematocrit. Tests were carried out with varying hematocrit: 25%, 45%, and 58%, corresponding to 40%, 50%, and 60% glycerin in water, respectively. A test with bovine blood (42% hematocrit) was also carried out. The mock loop is composed of a reservoir, tubes, pressure and flow sensors, and the fluid (or blood), in addition to the LVAD. The LUT-based estimator is patented in Brazil, number BR1020160068363. The mean deviation is 0.23 ± 0.07 L/min for the estimated mean flow; the largest mean deviation was 0.5 L/min considering hematocrit variation. This estimator achieved a deviation adequate for the implementation of physiologic control. Future works will evaluate the flow estimation performance in the LVAD control system.
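
The estimation chain (speed and current to electromechanical power, then flow via a calibration table) can be sketched as follows; the torque constant and table entries are illustrative assumptions, not the patented calibration data:

```python
# Minimal LUT-based flow estimator sketch: motor speed and current give
# electromechanical power, which a calibration table maps to pump flow.
import math

def em_power(speed_rpm, current_a, kt=0.05):
    """Electromechanical power (W) from an assumed torque constant kt (N*m/A)."""
    omega = speed_rpm * 2.0 * math.pi / 60.0   # rad/s
    return kt * current_a * omega

# Calibration LUT: electromechanical power (W) -> flow (L/min), assumed values.
lut = [(2.0, 1.0), (4.0, 2.5), (6.0, 4.0), (8.0, 5.0)]

def estimate_flow(power_w):
    """Piecewise-linear interpolation over the LUT (clamped at the ends)."""
    if power_w <= lut[0][0]:
        return lut[0][1]
    for (p0, f0), (p1, f1) in zip(lut, lut[1:]):
        if power_w <= p1:
            return f0 + (f1 - f0) * (power_w - p0) / (p1 - p0)
    return lut[-1][1]

p = em_power(2000, 0.5)   # ~5.2 W at 2000 RPM, 0.5 A
print(f"estimated flow = {estimate_flow(p):.2f} L/min")
```

In practice the table would be populated from mock-loop calibration runs at known flows, and separate tables (or a correction term) would account for viscosity/hematocrit.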

Keywords: blood pump, flow estimator, left ventricular assist device, look-up-table

Procedia PDF Downloads 186
284 Static Charge Control Plan for High-Density Electronics Centers

Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda

Abstract:

Ensuring a safe environment for sensitive electronic boards in places with severe size limitations poses two major difficulties: controlling charge accumulation in floating floors and preventing excess charge generation due to air cooling flows. In this paper, we discuss these mechanisms and possible solutions to prevent them. An experiment was made in the control room of a Cherenkov telescope, where six racks of 2x1x1 m size with independent cooling units are located. The room is 10x4x2.5 m, and the electronics include high-speed digitizers, trigger circuits, etc. The floor used in this room was antistatic, but it was a raised floor mounted in a floating design to facilitate cable handling and maintenance. The tests were made by measuring the contact voltage acquired by a person walking along the room in footwear of different qualities. In addition, we measured the voltage accumulated on a person in other situations, such as running or sitting down on and standing up from an office chair. The voltages were recorded in real time with an electrostatic voltmeter and dedicated control software. Peak voltages as high as 5 kV were measured at ambient humidity above 30%, which is within the range of class 3A according to the HBM standard. To complete the results, we repeated the experiment in different spaces with alternative floor types, such as synthetic and earthenware floors, obtaining peak voltages much lower than those measured with the floating synthetic floor. The grounding quality achieved with floating floors can hardly beat that typically encountered in standard floors glued directly onto a solid substrate. On the other hand, the air ventilation used to prevent overheating of the boards probably contributed significantly to the charge accumulated in the room. During the assessment of the quality of static charge control, it is necessary to guarantee that the tests are made under repeatable conditions. One of the major difficulties encountered during these assessments is that electrostatic voltmeters may provide different values depending on humidity conditions and ground resistance quality. In addition, the use of certified antistatic footwear may mask deficiencies in the charge control. In this paper, we show how we defined protocols to guarantee that electrostatic readings are reliable. We believe this can be helpful not only for qualifying static charge control in a laboratory but also for assessing any procedure aimed at minimizing the risk of electrostatic discharge events.
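
For reference, the HBM standard models a body discharge as a 100 pF capacitor discharging through 1.5 kΩ into the device, so a measured body voltage translates directly into a stress current. A minimal first-order sketch:

```python
import math

# Human Body Model discharge network: 100 pF through 1.5 kOhm (standard values).
C = 100e-12   # F
R = 1.5e3     # Ohm

def hbm_current(v_charge, t):
    """Discharge current (A) at time t for a first-order RC approximation."""
    return (v_charge / R) * math.exp(-t / (R * C))

v = 5000.0    # the 5 kV peak body voltage measured in the tests above
print(f"peak current = {hbm_current(v, 0.0):.2f} A, tau = {R * C * 1e9:.0f} ns")
```

A 5 kV body voltage thus corresponds to a peak stress of about 3.3 A with a 150 ns time constant, which is why class-3A events are a real hazard for exposed boards.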

Keywords: electrostatics, ESD protocols, HBM, static charge control

Procedia PDF Downloads 129
283 Contribution at Dimensioning of the Energy Dissipation Basin

Authors: M. Aouimeur

Abstract:

The environmental risks of a dam, and particularly the security of the valley downstream of it, constitute a very complex problem. Integrated management and risk-sharing are becoming more and more indispensable. Defining the concept of "vulnerability" can help in assessing the efficiency of protective measures and in characterizing each valley with respect to flood risk. Security can be enhanced through integrated land management, and the social sciences may be associated with operational civil protection systems, in particular warning networks. The passage of extreme floods at the dam site can cause the rupture of the structure and important damage downstream. The river bed can be damaged by erosion if it is not well protected, and scouring and flooding problems may be encountered in the area downstream of the dam. Therefore, the protection of the dam is crucial: it must have an energy dissipator in a specific place. The dissipation basin plays a very important role in the security of the dam and the protection of the environment against floods downstream. It dissipates the potential energy created by the dam when an extreme flood passes over the weir, regulates in a natural and safer manner the discharge and the elevation of the water level at the crest of the weir, and reduces the flow velocity downstream of the dam to a value matching that of the river bed. The problem in dimensioning a classical dissipation basin lies in determining the parameters necessary for sizing this structure. This communication presents a simple, fast, and complete graphical method, together with a methodology that determines the main features of the hydraulic jump, i.e., the parameters necessary for sizing the classical dissipation basin. The graphical method takes into account the constraints imposed by the terrain and by practice, such as the topography of the site, the preservation of environmental equilibrium, and technical and economic considerations. The methodology imposes the head loss ΔH dissipated by the hydraulic jump as a free design hypothesis in order to determine all the other parameters of the classical dissipation basin: ΔH can be set equal to a selected value or to a certain percentage of the total upstream head created by the dam. With the dimensionless parameter ΔH⁺ = ΔH/k, where k is the critical depth, the elaborated graphical representation allows the other dimensionless parameters to be found; multiplying them by k gives the main characteristics of the hydraulic jump, the parameters necessary for dimensioning the classical dissipation basin. This solution is often preferred for sizing the dissipation basins of small concrete dams. Verification of the results and their comparison with practical data confirm the validity and reliability of the elaborated graphical method.
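
The hydraulic-jump quantities the method relies on follow from the classical momentum relations for a rectangular channel; the inlet depth and Froude number below are illustrative values, not data from the communication:

```python
import math

# Classical hydraulic-jump relations used when sizing a stilling basin.
def sequent_depth(y1, fr1):
    """Depth after the jump, from the Belanger momentum equation."""
    return y1 / 2.0 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)

def head_loss(y1, y2):
    """Energy dissipated across the jump (m of head)."""
    return (y2 - y1) ** 3 / (4.0 * y1 * y2)

y1, fr1 = 0.5, 6.0            # m upstream depth, upstream Froude number (assumed)
y2 = sequent_depth(y1, fr1)
dh = head_loss(y1, y2)
print(f"y2 = {y2:.2f} m, head loss = {dh:.2f} m")  # y2 = 4.00 m, head loss = 5.36 m
```

Dividing y1, y2, and ΔH by the critical depth k gives the dimensionless quantities that the graphical method tabulates.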

Keywords: dimensioning, energy dissipation basin, hydraulic jump, protection of the environment

Procedia PDF Downloads 583
282 Photoelectrical Stimulation for Cancer Therapy

Authors: Mohammad M. Aria, Fatma Öz, Yashar Esmaeilian, Marco Carofiglio, Valentina Cauda, Özlem Yalçın

Abstract:

Photoelectrical stimulation of cells with semiconducting organic polymers has shown promising applications in neuroprosthetics, such as retinal prostheses. Photoelectrical stimulation of cell membranes can be induced through a photoelectric charge-separation mechanism in the semiconductor material, and it can alter the intracellular calcium level both by stimulating voltage-gated ion channels and by increasing the intracellular level of reactive oxygen species (ROS). On the other hand, targeting voltage-gated ion channels in cancer cells to induce apoptosis through the alteration of calcium signaling is an effective mechanism that has been described before. In this regard, remote control of voltage-gated ion channels with photoactive organic polymers, aimed at altering intracellular calcium, can be a novel technology for cancer therapy. In this study, we used P (ITO (indium tin oxide)/P3HT (poly(3-hexylthiophene-2,5-diyl))) and PN (ITO/ZnO/P3HT) photovoltaic junctions to stimulate MDA-MB-231 breast cancer cells. We show that photostimulation of breast cancer cells through the photocapacitive current generated by the photovoltaic junctions excites the cells and alters intracellular calcium, based on calcium imaging (at 8 mW/cm² green light intensity and 10-50 ms light pulses), conditions already reported to stimulate neurons safely. The control group did not undergo light treatment and was cultured in T-75 flasks. We detected 20-30% cell death for ITO/P3HT and 51-60% cell death for ITO/ZnO/P3HT samples in the light-treated MDA-MB-231 cell group. Western blot analysis demonstrated poly(ADP-ribose) polymerase (PARP)-activated cell death in the light-treated group. Furthermore, Annexin V and PI fluorescent staining indicated both apoptosis and necrosis in treated cells. In conclusion, our findings reveal that photoelectrical stimulation of cells (through prolonged overstimulation) can induce cell death in cancer cells.

Keywords: Ca²⁺ signaling, cancer therapy, electrically excitable cells, photoelectrical stimulation, voltage-gated ion channels

Procedia PDF Downloads 177
281 Measuring the Effect of Ventilation on Cooking in Indoor Air Quality by Low-Cost Air Sensors

Authors: Andres Gonzalez, Adam Boies, Jacob Swanson, David Kittelson

Abstract:

Concern about indoor air quality (IAQ) has been increasing due to its risk to human health. Smoking, sweeping, and stove and stovetop use are the activities with a major contribution to indoor air pollution; outdoor air pollution also affects IAQ. The most important factors affecting IAQ during cooking activities are the materials, fuels, foods, and ventilation. Low-cost, mobile air quality monitoring (LCMAQM) sensors are an accessible technology for assessing IAQ, because of their lower cost compared to conventional instruments. IAQ was assessed with LCMAQM sensors during cooking activities in University of Minnesota graduate housing, evaluating different ventilation systems. The gases measured are carbon monoxide (CO) and carbon dioxide (CO2). The particle metrics measured are particulate matter smaller than 2.5 micrometers (PM2.5) and lung-deposited surface area (LDSA). The measurements were conducted during April 2019 in the Como Student Community Cooperative (CSCC), a graduate housing complex at the University of Minnesota, using an electric stove for cooking. The amount and type of food and oil used for cooking are the same for each measurement. There are six measurements: two experiments measure air quality without any ventilation, two use an extractor as mechanical ventilation, and two use the extractor together with open windows as combined mechanical and natural ventilation. The results show that natural ventilation is the most efficient system for controlling particles and CO2: compared to no ventilation, it reduces concentrations by 79% for LDSA and 55% for PM2.5, while the CO2 concentration is reduced by 35%. A well-mixed vessel model was implemented to assess particle formation and decay rates. Removal rates by the extractor were significantly higher for LDSA, which is dominated by smaller particles, than for PM2.5, but in both cases much lower than for natural ventilation. There was significant day-to-day variation in particle concentrations under nominally identical conditions. This may be related to the fat content of the food; further research is needed to assess the impact of the fat content of food on particle generation.
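
A minimal version of the well-mixed vessel analysis: once the source stops, the concentration decays exponentially, so the combined loss rate (air exchange plus deposition) can be fitted from two readings. The concentrations below are assumed, not measured values from the study:

```python
import math

# Well-mixed vessel: after the cooking source stops, dC/dt = -lam * C,
# so C(t) = C0 * exp(-lam * t) and lam can be fitted from two points.
def decay_rate(c0, c1, dt_h):
    """First-order loss rate (1/h) from two concentrations dt_h hours apart."""
    return math.log(c0 / c1) / dt_h

c_peak, c_later = 120.0, 40.0   # ug/m^3 PM2.5, assumed readings 30 min apart
lam = decay_rate(c_peak, c_later, 0.5)
print(f"loss rate = {lam:.2f} 1/h")
```

Comparing the fitted rate across the three ventilation configurations is what yields the relative removal efficiencies reported above.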

Keywords: cooking, indoor air quality, low-cost sensor, ventilation

Procedia PDF Downloads 113
280 Effects of Plasma Technology in Biodegradable Films for Food Packaging

Authors: Viviane P. Romani, Bradley D. Olsen, Vilásia G. Martins

Abstract:

Biodegradable films for food packaging have gained growing attention due to the environmental pollution caused by synthetic films and the interest in a better use of natural resources. Important research advances have been made in the development of materials from proteins, polysaccharides, and lipids. However, the commercial use of this new generation of sustainable food-packaging materials is still limited by their low mechanical and barrier properties, which could compromise food quality and safety. Thus, strategies to improve the performance of these materials have been tested, such as chemical modification, incorporation of reinforcing structures, and others. Cold plasma is a versatile, fast, and environmentally friendly technology. It consists of a partially ionized gas containing free electrons, ions, radicals, and neutral particles able to react with polymers and initiate different reactions, leading to polymer degradation, functionalization, etching, and/or cross-linking. In the present study, biodegradable films from fish protein, prepared by the casting technique, were plasma-treated in an AC glow-discharge apparatus. The reactor was first evacuated to ~7 Pa and the films were exposed to air plasma for 2, 5, and 8 min. The films were evaluated for their mechanical properties and water vapor permeability (WVP), and changes in the protein structure were observed using scanning electron microscopy (SEM) and X-ray diffraction (XRD). Potential cross-links and the elimination of surface defects by etching might be the reason for the observed increase in tensile strength and decrease in elongation at break. Among the exposure times tested, no differences were observed at the longer times. The X-ray pattern showed a broad peak at 2θ = 19.51°, which corresponds to a distance of 4.6 Å by Bragg's law; this distance corresponds to the average backbone distance within the α-helix. Thus, the changes observed in the films might indicate that the helical configuration of the fish protein was disturbed by the plasma treatment. SEM images showed surface damage in the films after 5 and 8 min of plasma treatment, indicating that 2 min was the most adequate treatment time. Plasma was verified to remove water from the films, since a weight loss of 4.45% was recorded for films treated for 2 min; however, after 24 h at 50% relative humidity, the lost water was recovered. WVP increased from 0.53 to 0.65 g.mm/h.m².kPa after 2 min of plasma treatment, which is desirable for some food applications that require water passage through the packaging. In general, plasma technology affects the properties and structure of fish protein films. Since this technology changes the surface of polymers, these films might be used to develop multilayer materials, as well as to incorporate active substances at the surface to obtain active packaging.
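
The d-spacing quoted above can be checked with Bragg's law, d = λ / (2 sin θ); the Cu Kα wavelength is an assumption here, since the abstract does not state the X-ray source:

```python
import math

# Bragg's-law check of the quoted d-spacing. Cu K-alpha assumed.
wavelength = 1.5406  # angstrom, Cu K-alpha (assumed source)

def d_spacing(two_theta_deg):
    """Interplanar distance (angstrom) from a 2-theta diffraction angle."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

print(f"d = {d_spacing(19.51):.2f} angstrom")  # ~4.55, consistent with the ~4.6 A quoted
```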

Keywords: fish protein films, food packaging, improvement of properties, plasma treatment

Procedia PDF Downloads 163
279 Designing of Induction Motor Efficiency Monitoring System

Authors: Ali Mamizadeh, Ires Iskender, Saeid Aghaei

Abstract:

Energy is one of the highest-priority issues in the world. Energy demand is rapidly increasing with the growing population and industry, and the usable energy sources in the world will be insufficient to meet the need; therefore, the efficient and economical use of energy sources is gaining importance. Among electricity-consuming machines, electrical machines consume about 40% of the total electrical energy used by electrical devices, and 96% of this consumption belongs to induction motors. Induction motors are the workhorses of industry, with very large application areas in industrial and urban systems such as water pumping and distribution, and the steel and paper industries. Monitoring and control of motors have an important effect on motor operating performance, on drive selection, and on the replacement strategy management of electrical machines. A sensorless system for monitoring and calculating the efficiency of induction motors is studied here. The IEEE equivalent circuit is used in the design of this study. The terminal current and voltage of the induction motor are used in this method to measure its efficiency: the motor nameplate information together with the measured current and voltage is used to calculate the losses of the induction motor accurately, and hence its input and output power. In the proposed method, the efficiency of the induction motor is monitored online, without disconnecting the motor from the drive and without adding any connection at the motor terminal box. The proposed system measures the efficiency accurately, including all losses, without using a torque meter or a speed sensor. It uses an embedded architecture and does not need a computer connection to measure and log the data. Conclusions regarding the efficiency, the accuracy, and the technical and economic benefits of the proposed method are presented. Experimental verification was obtained on a three-phase, 1.1 kW, 2-pole induction motor. The proposed method can be used for optimal control of induction motors, efficiency monitoring, and motor replacement strategy.
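
The input-power and loss-segregation arithmetic behind such an efficiency estimate can be sketched as follows; all numeric values are illustrative, not the authors' test data:

```python
import math

# Sensorless efficiency sketch: input power from terminal measurements,
# output power after subtracting estimated losses. Numbers are illustrative.
def input_power(v_ll, i_line, pf):
    """Three-phase input power (W) from line voltage, line current, power factor."""
    return math.sqrt(3.0) * v_ll * i_line * pf

def efficiency(p_in, stator_cu, rotor_cu, core, fric_wind, stray):
    """Efficiency after subtracting the five segregated loss components."""
    losses = stator_cu + rotor_cu + core + fric_wind + stray
    return (p_in - losses) / p_in

p_in = input_power(400.0, 2.4, 0.82)        # ~1.36 kW (illustrative operating point)
eta = efficiency(p_in, stator_cu=90.0, rotor_cu=45.0,
                 core=35.0, fric_wind=15.0, stray=12.0)  # W, assumed losses
print(f"P_in = {p_in:.0f} W, efficiency = {eta:.1%}")
```

In the actual method, the individual loss terms would come from the equivalent-circuit parameters and nameplate data rather than being assumed directly.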

Keywords: induction motor, efficiency, power losses, monitoring, embedded design

Procedia PDF Downloads 349
276 Valorization of Lignocellulosic Wastes – Evaluation of Its Toxicity When Used in Adsorption Systems

Authors: Isabel Brás, Artur Figueirinha, Bruno Esteves, Luísa P. Cruz-Lopes

Abstract:

Agricultural lignocellulosic by-products are receiving increased attention, namely in the search for filter materials that retain contaminants from water. These by-products, specifically almond and hazelnut shells, are abundant in Portugal, since almond and hazelnut production is an important local activity. Hazelnut and almond shells have lignin, cellulose, and hemicelluloses as main constituents, along with water-soluble extractives and tannins. During the adsorption of heavy metals from contaminated waters, water-soluble compounds can leach from the shells and have a negative impact on the environment. The chemical characterization of treated water by itself may not reveal the environmental impact of discharges when the parameters comply with legal quality standards for water; only biological systems can detect the toxic effects of the water constituents. Therefore, the evaluation of toxicity by biological tests is very important when deciding on suitability for safe water discharge or for irrigation. The main purpose of the present work was to assess, with short-term acute toxicity tests, the potential impacts of waters after treatment for heavy metal removal by hazelnut and almond shell adsorption systems. To conduct the study, water at pH 6 containing 25 mg.L-1 of lead was treated with 10 g of shell per litre of wastewater for 24 hours; this procedure was followed for each shell type. Afterwards, the water was collected for toxicological assays, namely bacterial resistance, seed germination, the Lemna minor L. test, and plant growth. The effect on isolated bacterial strains was determined by the disc diffusion method, and the germination index was evaluated using lettuce seeds, with temperature and humidity controlled during 7 days of germination. For a higher aquatic organism, Lemna was exposed to the shell solutions for 4 days under controlled light and temperature. For higher terrestrial plants, biomass production was evaluated after 14 days of tomato germination in soil, with controlled humidity, light, and temperature. The toxicity tests of water treated with shells revealed only limited effects on the tested organisms, with the assays showing behaviour close to that of the controls, leading to the conclusion that this use of the shells may not be considered to pose a serious risk to the environment.
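
The batch adsorption step can be summarized with the usual mass balance; the equilibrium concentration Ce below is hypothetical, since the abstract does not report it, while the initial concentration and dosage come from the text:

```python
# Batch adsorption mass balance for the setup described above:
# 25 mg/L Pb(II), 10 g of shell per litre, 24 h contact time.
c0 = 25.0    # mg/L initial Pb(II) (from the abstract)
ce = 4.0     # mg/L after 24 h (hypothetical reading, not reported)
dose = 10.0  # g of shell per litre of wastewater (from the abstract)

q_e = (c0 - ce) / dose          # mg of Pb adsorbed per g of shell
removal = (c0 - ce) / c0        # fraction of Pb removed
print(f"q_e = {q_e:.2f} mg/g, removal = {removal:.0%}")
```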

Keywords: lignocellulosic wastes, adsorption, acute toxicity tests, risk assessment

Procedia PDF Downloads 367
277 Bridging the Gap: Living Machine in Educational Nature Preserve Center

Authors: Zakeia Benmoussa

Abstract:

Pressure on freshwater systems comes from removing too much water to grow crops and from contamination by economic activities, land-use practices, and human waste. This paper focuses on how water management can influence the design, implementation, and impacts of the ecological principles of biomimicry as sustainable methods for recycling wastewater. In Texas, United States of America, and in particular in the lower area of the Trinity River refuge, there is a true example of the diversity to be found in that area, whether one explores the lands or the waterways. However, as the Trinity River supplies water to the state’s residents, the lower part of the river in Liberty County presents several problems of wastewater discharge into the river. Therefore, conservation efforts are particularly important in the Trinity River basin. Clearly, alternative ways must be considered to conserve water to meet future demands, and a system other than conventional water treatment should be provided. Mimicking ecosystem technologies out of context is not enough, but if we incorporate plants into building architecture, then in addition to their beauty they can filter waste, absorb excess water, and purify air. By providing an architectural proposal for a center, a living system can be explored through several methods that influence natural resources at the micro scale in order to impact sustainability at the macro scale. The center consists of an ecological program of plant and water biomimicry study, which becomes a living organism that purifies the river water in a natural way through architecture. Consequently, a rich and beautiful natural area could be used as an educational destination for observation and adventure, as well as providing unpolluted fresh water to the major cities of Texas. These facts raise several questions: Why is conservation so rarely practiced by those who must extract a living from the land? Are we sufficiently enlightened to realize that we must now challenge that dogma? Do architects respond to the environment and reflect on it in the correct way through their public projects? The method adopted in this paper consists of general research into, and careful study of, the living machine system and how to integrate it at the architectural level, and finally the consolidation of all the conclusions formed into a design proposal. To summarise, this paper attempts to provide a sustainable alternative perspective, bridging physical and mental interaction with biodiversity to enhance nature through architecture.

Keywords: biodiversity, design with nature, sustainable architecture, wastewater treatment

Procedia PDF Downloads 297
276 Polymeric Composites with Synergetic Carbon and Layered Metallic Compounds for Supercapacitor Application

Authors: Anukul K. Thakur, Ram Bilash Choudhary, Mandira Majumder

Abstract:

In this technologically driven world, it is requisite to develop better, faster, and smaller electronic devices for various applications to keep pace with fast-developing modern life. It is also necessary to develop sustainable and clean sources of energy in an era when the environment is threatened by pollution and its severe consequences. The supercapacitor has gained tremendous attention in recent years because of its many attractive properties: it is essentially maintenance-free, offers high specific power and power density with excellent pulse charge/discharge characteristics, exhibits a long cycle life, requires only a very simple charging circuit, and operates safely. Binary and ternary composites of conducting polymers with carbon and layered transition metal dichalcogenides have shown tremendous progress in the last few decades. Compared with their bulk counterparts, conducting polymers in composite form have gained more attention because of their high electrical conductivity, large surface area, short ion-transport lengths, and superior electrochemical activity. These properties make them very suitable for several energy storage applications. Carbon materials, on the other hand, have also been studied intensively, owing to their rich specific surface area, very light weight, excellent chemo-mechanical properties, and wide operating temperature range. They have been extensively employed in the fabrication of carbon-based energy storage devices and as electrode materials in supercapacitors. Incorporation of carbon materials into polymers increases the electrical conductivity of the resulting polymeric composite due to the high electrical conductivity, high surface area, and interconnectivity of the carbon.
Furthermore, polymeric composites based on layered transition metal dichalcogenides such as molybdenum disulfide (MoS2) are also considered important because these materials are thin, indirect-band-gap semiconductors with a band gap of around 1.2 to 1.9 eV. Amongst the various 2D materials, MoS2 has received much attention because of its unique structure, consisting of a graphene-like hexagonal arrangement of Mo and S atoms stacked layer by layer to give S-Mo-S sandwiches held together by weak van der Waals forces. It shows higher intrinsic fast ionic conductivity than oxides and higher theoretical capacitance than graphite.

Keywords: supercapacitor, layered transition-metal dichalcogenide, conducting polymer, ternary, carbon

Procedia PDF Downloads 256
275 Numerical Investigation of Multiphase Flow Structure for the Flue Gas Desulfurization

Authors: Cheng-Jui Li, Chien-Chou Tseng

Abstract:

This study adopts the Computational Fluid Dynamics (CFD) technique to build a multiphase flow numerical model in which the interface between the flue gas and the desulfurization liquid is traced by an Eulerian-Eulerian model. Inside the tower, contact between the desulfurization liquid flowing from the spray nozzles and the flue gas flow triggers chemical reactions that remove sulfur dioxide from the exhaust gas. Experimental observations of an industrial-scale plant show that the desulfurization mechanism depends on the level of mixing between the flue gas and the desulfurization liquid. In order to significantly improve the desulfurization efficiency, the mixing efficiency and the residence time can be increased by perforated sieve trays. Hence, the purpose of this research is to investigate the flow structure of sieve trays for flue gas desulfurization by numerical simulation. In this study, there is an outlet at the top of the FGD tower to discharge the clean gas, and the FGD tower has a deep tank at the bottom, which is used to collect the slurry liquid. In the major desulfurization zone, the desulfurization liquid and flue gas form a complex mixing flow. There are four perforated plates in the major desulfurization zone, spaced 0.4 m from each other, and the spray array, which includes 33 nozzles, is placed above the top sieve tray. Each nozzle injects desulfurization liquid consisting of a Mg(OH)2 solution. For each sieve tray, the outside diameter, the hole diameter, and the porosity are 0.6 m, 20 mm, and 34.3%, respectively. The flue gas flows into the FGD tower through the space between the major desulfurization zone and the deep tank and finally leaves clean. The desulfurization liquid and the liquid slurry go to the bottom tank and are discharged as waste. When the desulfurization solution flow impacts a sieve tray, its downward momentum is transferred to the upper surface of the sieve tray.
As a result, a thin liquid layer develops above the sieve tray, the so-called slurry layer, within which the liquid volume fraction is around 0.3 to 0.7. Therefore, the liquid phase cannot be treated as a discrete phase under the Eulerian-Lagrangian framework. Besides, there is a liquid column through the sieve trays; the downward liquid column becomes narrow as it interacts with the upward gas flow. After the flue gas flows into the major desulfurization zone, the flow direction of the flue gas is upward (+y) in the region between the liquid column and the solid boundary of the FGD tower. As a result, the flue gas near the liquid column may be rolled down to the slurry layer, developing a vortex or circulation zone between any two sieve trays. The vortex structure between two sieve trays results in a sufficiently large two-phase contact area, and it also increases the number of times the flue gas interacts with the desulfurization liquid. In this way, the sieve trays improve the two-phase mixing, which may improve the SO2 removal efficiency.
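As a quick consistency check (not part of the paper's CFD model, and assuming a uniform hole pattern), the stated tray diameter, hole diameter, and porosity imply a hole count per sieve tray:

```python
import math

# Sieve-tray geometry as reported: 0.6 m outside diameter,
# 20 mm hole diameter, 34.3% porosity (open-area fraction)
tray_diameter = 0.6    # m
hole_diameter = 0.020  # m
porosity = 0.343

tray_area = math.pi * (tray_diameter / 2) ** 2   # total tray area
hole_area = math.pi * (hole_diameter / 2) ** 2   # area of a single hole

# Number of holes needed so that the open area equals porosity * tray area
n_holes = porosity * tray_area / hole_area
print(round(n_holes))  # 309
```

So each tray carries on the order of three hundred 20 mm holes to reach the quoted open-area fraction.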

Keywords: Computational Fluid Dynamics (CFD), Eulerian-Eulerian Model, Flue Gas Desulfurization (FGD), perforated sieve tray

Procedia PDF Downloads 284
274 Food Waste and Sustainable Management

Authors: Farhana Nosheen, Moeez Ahmad

Abstract:

Throughout the food chain, food waste from initial agricultural production to final household consumption has become a serious concern for global sustainability because of its adverse impacts on food security, natural resources, the environment, and human health. About a third of tomatoes (Lycopersicon esculentum L.) delivered to processing plants end up as processing waste, and the amount of such waste material is estimated to have increased with the emergence of mechanical harvesting. Experiments were conducted to determine the nutritional profile and antioxidant activity of tomato processing waste and to explore its main bioactive compound, lycopene. The tomato variety ‘SAHARA F1’ was used to prepare the tomato waste. The tomatoes were properly cleaned and unwanted impurities were removed. The tomatoes were then blanched at 90 °C for 5 minutes, after which the skin was removed and the remaining part was passed through an electric pulper; the pulp and seeds were collected separately. The seeds and skins were mixed and stored in a sterilized jar. The samples of tomato waste were found to contain 89.11±0.006 g/100g moisture, 10.13±0.115 g/100g protein, 2.066±0.57 g/100g fat, 4.81±0.10 g/100g crude fiber, 4.06±0.057 g/100g ash, and 78.92±0.066 g/100g NFE. The results confirmed that tomato waste contains a considerable amount of lycopene (51.0667±0.00577 mg/100g) and exhibits good antioxidant properties. Total phenolics averaged 122.9600±0.01000 mg GAE/100g, of which flavonoids accounted for 41.5367±0.00577 mg QE/100g. The antioxidant activity of tomato processing waste was found to be 0.6833±0.00577 mmol Trolox/100g. Unsaturated fatty acids represent the major portion of total fatty acids, linoleic acid being the predominant one. The mineral analysis of tomato waste showed good amounts of potassium (3030.1767 mg/100g) and calcium (131.80 mg/100g).
These findings suggest that tomato processing waste is rich in nutrients, antioxidants, fatty acids, and minerals. It is recommended that this waste be sun-dried for use in animal feed formulations; it can also be used to make other products, such as lycopene tea and several other health-beneficial products.

Keywords: food waste, tomato, bioactive compound, sustainable management

Procedia PDF Downloads 109
273 The Development of a Digitally Connected Factory Architecture to Enable Product Lifecycle Management for the Assembly of Aerostructures

Authors: Nicky Wilson, Graeme Ralph

Abstract:

Legacy aerostructure assembly is defined by large components, low build rates, and manual assembly methods. With increasing demand for commercial aircraft and emerging markets such as the eVTOL (electric vertical take-off and landing) market, current methods of manufacturing are not capable of efficiently hitting these higher-rate demands. This project looks at how legacy manufacturing processes can be rate-enabled by taking a holistic view of data usage, focusing on how data can be collected to enable fully integrated digital factories and supply chains. The study focuses on how data flows both up and down the supply chain to create a digital thread specific to each part and assembly, while enabling machine learning through real-time, closed-loop feedback systems. The study also develops a bespoke architecture to enable connectivity both within the factory and with the wider PLM (product lifecycle management) system, moving away from the traditional point-to-point systems used to connect IO devices towards a hub-and-spoke architecture that exploits report-by-exception principles. This paper outlines the key issues facing legacy aircraft manufacturers, focusing on what future manufacturing will look like when adopting Industry 4.0 principles. The research also defines the data architecture of a PLM system to enable the transfer and control of a digital thread within the supply chain and proposes a standardised communications protocol as a scalable solution for connecting IO devices within a production environment. This research comes at a critical time for aerospace manufacturers, who are seeing a shift towards the integration of digital technologies within legacy production environments while build rates continue to grow. It is vital that manufacturing processes become more efficient in order to meet these demands while also securing future work for many manufacturers.

Keywords: Industry 4, digital transformation, IoT, PLM, automated assembly, connected factories

Procedia PDF Downloads 79
272 Study on Control Techniques for Adaptive Impact Mitigation

Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty

Abstract:

Progress in the fields of sensors, electronics, and computing results in ever more frequent applications of adaptive techniques for dynamic response mitigation. When it comes to systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezoelectric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second considered system is a safety air cushion applied for the evacuation of people from heights during a fire. For both systems, it is possible to ensure adaptive performance, but the realization of each system’s adaptation is completely different. The reason for this lies in the technical limitations of the specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with a minimal change of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas of thousands of square centimetres. Because of these facts, the two shock-absorbing systems are controlled using completely different approaches. The pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled using a semi-passive technique, where adaptation is provided by prediction of the entire impact mitigation process. Similarities between the two approaches, including the applied models, algorithms, and equipment, are discussed.
The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.

Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber

Procedia PDF Downloads 91
271 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud

Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal

Abstract:

Ever since the idea of using computing services as a commodity that can be delivered like other utilities (e.g., electricity and telephone) was floated, the scientific fraternity has diverted its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, resulting in a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business, and finance, to name a few. The smart grid is another discipline that stands to benefit greatly from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution systems and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing, and demand-response features. However, security issues such as confidentiality, integrity, availability, accountability, and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed for the cloud, but hackers/intruders still manage to bypass its security. Therefore, precise intrusion detection systems need to be developed to secure critical information infrastructure like the smart grid cloud.
Considering the success of artificial neural networks in building robust intrusion detection, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.

Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid

Procedia PDF Downloads 318
270 Modeling of the Biodegradation Performance of a Membrane Bioreactor to Enhance Water Reuse in the Agri-Food Industry: Poultry Slaughterhouse as an Example

Authors: masmoudi Jabri Khaoula, Zitouni Hana, Bousselmi Latifa, Akrout Hanen

Abstract:

Mathematical modeling has become an essential tool for sustainable wastewater management, particularly for the simulation and optimization of the complex processes involved in activated sludge systems. In this context, the activated sludge model ASM3h was used for the simulation of a membrane bioreactor (MBR), as it integrates biological wastewater treatment with physical separation by membrane filtration. In this study, an MBR with a working volume of 12.5 L was fed continuously with poultry slaughterhouse wastewater (PSWW) for 50 days at a feed rate of 2 L/h, corresponding to a hydraulic retention time (HRT) of 6.25 h. Throughout its operation, high removal efficiency of organic pollutants was observed, with 84% COD removal. Moreover, the MBR generated a treated effluent that complies with the limits for discharge into the public sewer according to the Tunisian standards set in March 2018. For the nitrogenous compounds, average concentrations of nitrate and nitrite in the permeate reached 0.26±0.3 mg/L and 2.2±2.53 mg/L, respectively. The simulation of the MBR process was performed using SIMBA software v5.0. The state variables employed in the steady-state calibration of ASM3h were determined using physical and respirometric methods. The model calibration was performed using experimental data obtained during the first 20 days of MBR operation. Afterwards, the kinetic parameters of the model were adjusted, and the simulated values of COD, N-NH4+ and N-NOx were compared with those reported from the experiment. Good predictions were observed for the COD, N-NH4+ and N-NOx concentrations, with 467 g COD/m³, 110.2 g N/m³ and 3.2 g N/m³ compared to the experimental values of 436.4 g COD/m³, 114.7 g N/m³ and 3 g N/m³, respectively. For the validation of the model under dynamic simulation, the results obtained during the second treatment phase of 30 days were used.
It was demonstrated that the model simulated the conditions accurately, yielding a similar pattern in the variation of the COD concentration. On the other hand, an underestimation of the N-NH4+ concentration was observed during the simulation compared to the experimental results, and the measured N-NO3 concentrations were lower than the predicted ones. This difference could be explained by the fact that the ASM models were mainly designed for the simulation of biological processes in activated sludge systems; in addition, more treatment time could be required by the autotrophic bacteria to achieve complete and stable nitrification. Overall, this study demonstrated the effectiveness of mathematical modeling in predicting the performance of MBR systems with respect to organic pollution; the model can be further improved for the simulation of nutrient removal over a longer treatment period.
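The steady-state calibration comparison above can be sketched as a relative-error calculation (values taken from the abstract; percent relative error is one simple goodness-of-fit measure, not necessarily the criterion the authors used):

```python
# Simulated vs measured effluent values from the steady-state calibration (g/m3)
simulated = {"COD": 467.0, "N-NH4+": 110.2, "N-NOx": 3.2}
measured  = {"COD": 436.4, "N-NH4+": 114.7, "N-NOx": 3.0}

# Percent relative error of each simulated value against the measurement
rel_error = {k: abs(simulated[k] - measured[k]) / measured[k] * 100 for k in measured}
for name, err in rel_error.items():
    print(f"{name}: {err:.1f}% relative error")
```

All three errors come out within about 7%, consistent with the "good prediction" the abstract reports.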

Keywords: activated sludge model (ASM3h), membrane bioreactor (MBR), poultry slaughter wastewater (PSWW), reuse

Procedia PDF Downloads 58
269 To Compare the Visual Outcome, Safety and Efficacy of Phacoemulsification and Small-Incision Cataract Surgery (SICS) at CEITC, Bangladesh

Authors: Rajib Husain, Munirujzaman Osmani, Mohammad Shamsal Islam

Abstract:

Purpose: To compare the safety, efficacy, and visual outcomes of phacoemulsification vs. manual small-incision cataract surgery (SICS) for the treatment of cataract in Bangladesh. Objectives: 1. To assess the visual outcome after cataract surgery. 2. To understand the post-operative complications and early rehabilitation. 3. To identify which surgical procedure is more attractive to patients. 4. To identify which surgical procedure incurs fewer complications. 5. To find out the socio-economic and demographic characteristics of the study patients. Setting: Chittagong Eye Infirmary and Training Complex, Chittagong, Bangladesh. Design: Retrospective, randomised comparison of 300 patients with visually significant cataracts. Method: The present study was designed as retrospective hospital-based research. The sample size was 300, the study period was from July 2012 to July 2013, and patients were assigned randomly to receive either phacoemulsification or manual small-incision cataract surgery (SICS). Preoperative and post-operative data were collected through a well-designed collection format. Three follow-ups were performed: i) at discharge, ii) at 1-3 weeks, and iii) at 4-11 weeks postoperatively. All preoperative and surgical complications, uncorrected and best-corrected visual acuity (BCVA), and astigmatism were taken into consideration for comparison of outcomes. Result: Nearly 95% of patients were more than 40 years of age. About 52% of patients were female, and 48% were male. 52% (N=157) of patients came to have their first eye operated on, while 48% (N=143) returned to have their second eye operated on. Postoperatively, five eyes (3.33%) developed corneal oedema with >10 Descemet's folds, and six eyes (4%) had corneal oedema with <10 Descemet's folds after phacoemulsification. After SICS, seven eyes (4.66%) developed corneal oedema with >10 Descemet's folds and eight eyes (5.33%) had corneal oedema with <10 Descemet's folds.
However, both the uncorrected and corrected (4-11 weeks) visual acuities were better in the eyes that had phacoemulsification (p=0.02 and p=0.03), and there was less astigmatism (p=0.001) at 4-11 weeks in the eyes that had phacoemulsification. At the final follow-up, best-corrected visual acuity (BCVA) showed a good outcome in 95% (N=253), borderline in 3.10% (N=40), and poor in 1.6% (N=7). Individual surgeon outcomes were similar: 95% (BCVA) for SICS and 96% (BCVA) for phacoemulsification at the 4-11 week follow-up. Conclusion: The outcomes of cataract surgery, both phacoemulsification and SICS, at CEITC were satisfactory according to WHO norms. Both phacoemulsification and manual small-incision cataract surgery (SICS) show excellent visual outcomes with low complication rates and good rehabilitation. Phacoemulsification is a significantly faster, more modern technology-based surgical procedure for cataract treatment.

Keywords: phacoemulsification, SICS, cataract, Bangladesh, visual outcome of SICS

Procedia PDF Downloads 348
268 Environmental Conditions Simulation Device for Evaluating Fungal Growth on Wooden Surfaces

Authors: Riccardo Cacciotti, Jiri Frankl, Benjamin Wolf, Michael Machacek

Abstract:

Moisture fluctuations govern the occurrence of fungi-related problems in buildings, which may pose significant health risks for users and even lead to structural failures. Several numerical engineering models attempt to capture the complexity of mold growth on building materials. From real-life observations, in cases with suppressed daily variations of boundary conditions, e.g., in crawlspaces, mold growth model predictions correspond well with the observed mold growth. On the other hand, in cases with substantial diurnal variations of boundary conditions, e.g., in the ventilated cavity of a cold flat roof, mold growth predicted by the models is significantly overestimated. This study, funded by the Grant Agency of the Czech Republic (GAČR 20-12941S), aims at gaining a better understanding of mold growth behavior on solid wood under varying boundary conditions. In particular, the experimental investigation focuses on the response of mold to changing conditions in the boundary layer and its influence on heat and moisture transfer across the surface. The main result is the design and construction, at the facilities of ITAM (Prague, Czech Republic), of an innovative device allowing for the simulation of changing environmental conditions in buildings. It consists of a closed circuit of square section with overall dimensions of roughly 200 × 180 cm and a cross section of roughly 30 × 30 cm. The circuit is thermally insulated and equipped with an electric fan to control the air flow inside the tunnel and a heat and humidity exchange unit to control the internal RH and variations in temperature. Several measuring points, including an anemometer, temperature and humidity sensors, and a load cell in the test section for recording mass changes, are provided to monitor the variations of these parameters during the experiments. The research is ongoing and is expected to provide the final results of the experimental investigation at the end of 2022.

Keywords: moisture, mold growth, testing, wood

Procedia PDF Downloads 133
267 Applications and Development of a Plug Load Management System That Automatically Identifies the Type and Location of Connected Devices

Authors: Amy Lebar, Kim L. Trenbath, Bennett Doherty, William Livingood

Abstract:

Plug and process loads (PPLs) account for 47% of U.S. commercial building energy use. There is huge potential to reduce whole-building consumption by targeting PPLs for energy savings measures or implementing some form of plug load management (PLM). Despite this potential, there has yet to be a widely adopted commercial PLM technology. This paper describes the Automatic Type and Location Identification System (ATLIS), a PLM system framework with automatic and dynamic load detection (ADLD). ADLD gives PLM systems the ability to automatically identify devices as they are plugged into the outlets of a building. The ATLIS framework takes advantage of smart, connected devices to identify device locations in a building, meter and control their power, and communicate this information to a central database. ATLIS includes five primary capabilities: location identification, communication, control, energy metering, and data storage. A laboratory proof of concept (PoC) demonstrated all but the data storage capability, and these capabilities were validated using an office building scenario. The PoC can identify when a device is plugged into an outlet and the location of the device in the building. When a device is moved, the PoC’s dashboard and database are automatically updated with the new location. The PoC implements device controls from the system dashboard so that devices maintain correct schedules regardless of where they are plugged in within a building. ATLIS’s primary application is improved PLM, but other applications include asset management, energy audits, and interoperability for grid-interactive efficient buildings. A system like ATLIS could also be used to direct power to critical devices, such as ventilators, during a brownout or blackout. Such a framework is an opportunity to make PLM more widespread and reduce the amount of energy consumed by PPLs in current and future commercial buildings.

Keywords: commercial buildings, grid-interactive efficient buildings (GEB), miscellaneous electric loads (MELs), plug loads, plug load management (PLM)

Procedia PDF Downloads 132
266 Effect of Roasting Temperature on the Proximate, Mineral and Antinutrient Content of Pigeon Pea (Cajanus cajan) Ready-to-Eat Snack

Authors: Olaide Ruth Aderibigbe, Oluwatoyin Oluwole

Abstract:

Pigeon pea is one of the minor leguminous plants; though underutilised, it is used traditionally by farmers to alleviate hunger and malnutrition. Pigeon pea is cultivated in Nigeria by subsistence farmers. It is rich in protein and minerals; however, its utilisation as food is common only among the poor and rural populace who cannot afford expensive sources of protein. One of the factors contributing to its limited use is its high antinutrient content, which makes it indigestible, especially when eaten by children. The development of value-added products that can reduce the antinutrient content and make the nutrients more bioavailable will increase the utilisation of the crop and contribute to the reduction of malnutrition. This research, therefore, determined the effects of different roasting temperatures (130 °C, 140 °C, and 150 °C) on the proximate, mineral, and antinutrient components of a pigeon pea snack. The brown variety of pigeon pea seeds was purchased from a local market (Otto) in Lagos, Nigeria. The seeds were cleaned, washed, and soaked in 50 ml of water containing sugar and salt (4:1) for 15 minutes, and thereafter roasted at 130 °C, 140 °C, and 150 °C in an electric oven for 10 minutes. Proximate, mineral, phytate, tannin, and alkaloid content analyses were carried out in triplicate following standard procedures. The results of the three replicates were pooled and expressed as mean±standard deviation; a one-way analysis of variance (ANOVA) and the Least Significant Difference (LSD) test were carried out. The roasting temperatures significantly (P<0.05) affected the protein, ash, fibre, and carbohydrate contents of the snack. The ready-to-eat snack prepared by roasting at 150 °C had significantly higher protein (23.42±0.47%) than the ones roasted at 130 °C and 140 °C (18.38±1.25% and 20.63±0.45%, respectively).
The same trend was observed for the ash content (3.91±0.11 at 150 °C, 2.36±0.15 at 140 °C, and 2.26±0.25 at 130 °C), while the fibre and carbohydrate contents were highest at a roasting temperature of 130 °C. Iron, zinc, and calcium were not significantly (P<0.05) affected by the different roasting temperatures. Antinutrients decreased with increasing temperature. Phytate levels recorded were 0.02±0.00, 0.06±0.00, and 0.07±0.00 mg/g; tannin levels were 0.50±0.00, 0.57±0.00, and 0.68±0.00 mg/g; while alkaloid levels were 0.51±0.01, 0.78±0.01, and 0.82±0.01 mg/g for 150 °C, 140 °C, and 130 °C, respectively. These results show that roasting at a high temperature (150 °C) can be utilised as a processing technique for increasing the protein and decreasing the antinutrient content of pigeon pea.
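The abstract reports only group means ± standard deviations from triplicate analyses, but a one-way ANOVA F statistic can be reconstructed from those summary statistics alone. The sketch below does this for the protein values; it is an illustration built from the published summaries, not the authors' own computation:

```python
# Protein content (%) at 130, 140 and 150 deg C, reported as mean +/- SD, n = 3
means = [18.38, 20.63, 23.42]
sds   = [1.25, 0.45, 0.47]
n, k = 3, 3            # replicates per group, number of groups
N = n * k              # total observations

grand_mean = sum(means) / k
# Between-group and within-group sums of squares from summary statistics
ss_between = n * sum((m - grand_mean) ** 2 for m in means)
ss_within = sum((n - 1) * s ** 2 for s in sds)
f_stat = (ss_between / (k - 1)) / (ss_within / (N - k))
print(f"F({k - 1},{N - k}) = {f_stat:.1f}")
```

The resulting F(2,6) of roughly 29 is far above the 5% critical value of 5.14, in line with the significant (P<0.05) effect of roasting temperature reported for protein.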

Keywords: antinutrients, pigeon pea, protein, roasting, underutilised species

Procedia PDF Downloads 143
265 Geological Structure Identification in the Semilir Formation: Correlating Geological and Geophysical (Very Low Frequency) Data for Disaster Zonation Using Current Density Parameters and Geological Surface Information

Authors: E. M. Rifqi Wilda Pradana, Bagus Bayu Prabowo, Meida Riski Pujiyati, Efraim Maykhel Hagana Ginting, Virgiawan Arya Hangga Reksa

Abstract:

The VLF (Very Low Frequency) method is an electromagnetic method that uses low frequencies between 10 and 30 kHz, which results in fairly deep penetration. In this study, the VLF method was used for the zonation of disaster-prone areas by identifying geological structures in the form of faults. Data acquisition was carried out in the Trimulyo region, Jetis District, Bantul Regency, Special Region of Yogyakarta, Indonesia, along 8 measurement paths. This study uses wave transmitters from Japan and Australia to obtain tilt and ellipticity values, which can be used to create RAE (Rapat Arus Ekuivalen, or equivalent current density) sections identifying areas that are easily crossed by electric current. These sections indicate the existence of geological structures in the form of faults in the study area, characterized by high RAE values. In the data processing of the VLF method, a tilt vs. ellipticity graph and a Moving Average (MA) tilt vs. MA ellipticity graph were obtained for each path, showing a fluctuating pattern with no intersections at all. Data processing used MATLAB software. Areas with low RAE values of 0%-6% indicate a medium with low conductivity and high resistivity and can be interpreted as sandstone, claystone, and tuff lithologies, which are part of the Semilir Formation. In contrast, high RAE values of 10%-16%, indicating a medium with high conductivity and low resistivity, can be interpreted as a fault zone filled with fluid. The existence of the fault zone is supported by the discovery of a normal fault on the surface with strike N55°W and dip 63°E at coordinates X=433256 and Y=9127722, so that activities of residents in the zone, such as housing, mining, and other activities, can be avoided to reduce the risk of natural disasters.
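The moving-average step named in the abstract (the "Moving Average (MA) Tilt" profiles) can be sketched as follows; the tilt values below are hypothetical illustration data, not measurements from the survey:

```python
def moving_average(values, window=3):
    """Centered moving average; endpoints average over the available neighbours."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

tilt = [2.0, 4.0, 6.0, 4.0, 2.0]  # hypothetical tilt (%) along one profile
print(moving_average(tilt))
```

Smoothing of this kind suppresses station-to-station noise in the tilt/ellipticity profiles before they are filtered into current-density sections.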

Keywords: current density, faults, very low frequency, zonation

Procedia PDF Downloads 175
264 Thermoelectric Cooler as a Heat Transfer Device for Thermal Conductivity Testing

Authors: Abdul Murad Zainal Abidin, Azahar Mohd, Nor Idayu Arifin, Siti Nor Azila Khalid, Mohd Julzaha Zahari Mohamad Yusof

Abstract:

A thermoelectric cooler (TEC) is an electronic component that uses the Peltier effect to create a temperature difference by transferring heat between the electrical junctions of two different types of materials. A TEC can also be used for heating, by reversing the electric current flow, and even for power generation. A heat flow meter (HFM) is an instrument for measuring the thermal conductivity of building materials. During a test, water is used as the heat transfer medium to cool the HFM. Existing re-circulating coolers on the market are very costly, and the alternative is to use piped tap water to extract heat from the HFM. However, the tap water temperature is not low enough for sufficient heat transfer to take place. The operating temperature for the isothermal plates in the HFM is 40°C within a range of ±0.02°C; when the temperature exceeds this range, the HFM stops working and the test cannot be conducted. The aim of the research is to develop a low-cost, energy-efficient TEC prototype that enables heat transfer without compromising the function of the HFM. The objectives are a) to assess the potential of the TEC as a cooling device by evaluating its cooling rate, and b) to determine the water savings from using the TEC compared to normal tap water. Four (4) Peltier modules were used, two (2) of which served as pre-coolers. The cooling water is re-circulated from the reservoir into the HFM using a water pump. The thermal conductivity readings, water flow rate, and power consumption were measured while the HFM was operating. The measured data showed an average cooling temperature difference (ΔTave) of 2.42°C and an average cooling rate of 0.031°C/min. The water savings from using the TEC with water re-circulation are projected to be 8,332.8 litres/year. The results suggest the prototype has achieved the stated objectives.
Further research will compare the cooling rate of the TEC prototype against conventional tap water and optimize its design and performance in terms of size and portability. Possible applications of the prototype could also be extended to portable storage for medicines and beverages.
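The projected water savings follow from simple flow-rate accounting: once the cooling water is re-circulated, the tap water that would otherwise run to the drain during each test is saved. The sketch below illustrates the calculation with hypothetical operating figures (flow rate, test duration, and test count are assumptions, not values reported in the study):

```python
def annual_water_savings(tap_flow_l_per_min, hours_per_test, tests_per_year):
    """Litres of tap water no longer sent to drain once cooling water is re-circulated."""
    return tap_flow_l_per_min * 60 * hours_per_test * tests_per_year

# Hypothetical operating figures (illustrative only):
savings = annual_water_savings(tap_flow_l_per_min=2.0, hours_per_test=3, tests_per_year=50)
# 2 L/min x 60 min/h x 3 h/test x 50 tests/year = 18000 L/year
```

With the plant-specific flow rate and testing schedule, the same arithmetic yields the 8,332.8 litres/year figure reported above.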

Keywords: energy efficiency, thermoelectric cooling, pre-cooling device, heat flow meter, sustainable technology, thermal conductivity

Procedia PDF Downloads 155
263 Evaluation of Regional Anaesthesia Practice in Plastic Surgery: A Retrospective Cross-Sectional Study

Authors: Samar Mousa, Ryan Kerstein, Mohanad Adam

Abstract:

Regional anaesthesia has been associated with favourable outcomes in patients undergoing a wide range of surgeries. Beneficial effects have been demonstrated in postoperative respiratory and cardiovascular endpoints, 7-day survival, time to ambulation and hospital discharge, and postoperative analgesia. Our project aimed to assess regional anaesthesia practice in the plastic surgery department of Buckinghamshire Trust and to find ways to improve the service in collaboration with the anaesthesia team. It is a retrospective study accompanied by a questionnaire completed by plastic surgeons and anaesthetists to capture the reasoning behind the numbers. The study period was 1/3/2022 to 23/5/2022 (12 weeks). The operative notes of all patients who had an operation under plastic surgery, whether emergency or elective, were reviewed. The criteria for suitable candidates for regional block were set by the consultant anaesthetists as follows: age above 16, single surgical site (arm, forearm, leg, foot), no drug allergy, no pre-existing neuropathy, no bleeding disorders, not on anti-coagulation, and no infection at the site of the block. Over the 12 weeks, 1061 operations were performed by plastic surgeons. After excluding cases under local anaesthesia, 319 cases remained, of which 102 patients were suitable candidates for regional block according to the criteria above. However, only seven of these patients had their operations under regional block; the rest had general anaesthesia that could easily have been avoided. An online questionnaire was completed by both plastic surgeons and anaesthetists of different training levels to find the reasons behind the clear preference for general over regional anaesthesia, even when this was against the patients' interest.
The questionnaire covered: training level, time taken to give GA or RA, factors that influence the decision, the estimated percentage of RA candidates who had GA, reasons behind this percentage, and recommendations. Forty-four clinicians completed the questionnaire, including 23 plastic surgeons and 21 anaesthetists; by training level, there were 21 consultants, 4 associate specialists, 9 registrars, and 10 senior house officers. The actual percentage of patients who were good candidates for RA but had GA instead was 93%, whereas respondents estimated it at 10-30%. 29% of respondents attributed this to surgeons preferring GA for their operations without medical justification, 37% to anaesthetists preferring to give GA even when the patient is a suitable candidate for RA, 22.6% to patients refusing RA, and 11.3% to other causes. The recommendations fell into five main areas: protocols and pathways for regional blocks, more training opportunities for anaesthetists in regional blocks, a dedicated block room in the hospital, better communication between surgeons and anaesthetists, and patient education about the benefits of regional blocks.

Keywords: regional anaesthesia, regional block, plastic surgery, general anaesthesia

Procedia PDF Downloads 84
262 A LED Warning Vest as Safety Smart Textile and Active Cooperation in a Working Group for Building a Normative Standard

Authors: Werner Grommes

Abstract:

The Institute of Occupational Safety and Health participates in a working group developing a normative standard for illuminated warning vests and has carried out extensive experiments and measurements as groundwork for this cooperation. Intelligent car headlamps can suppress conventional warning vests with retro-reflective stripes, treating them as a source of disturbing light; illuminated warning vests are therefore required for occupational safety. However, such vests must not pose any danger to the wearer or other persons. The risks of the batteries (lithium types), the maximum brightness (glare), and possible interference radiation from the electronics affecting implant wearers must be taken into account, and all-around visibility as well as the required viewing range play an important role. For this study, extensive luminance measurements of commercially available LED and electroluminescent warning vests were made, and their electromagnetic interference fields and aspects of electrical safety were measured. The results showed that the LED lighting is far too bright and causes strong glare. The integrated controls, with pulse modulation and switching regulators, cause electromagnetic interference fields. Rechargeable lithium batteries can explode depending on the temperature range, and electroluminescence brings further hazards. A test method was developed for evaluating visibility at distances of 50, 100, and 150 m, including interviews with test persons, and a measuring method was developed for detecting glare effects at close range, with assignment of the maximum permissible luminance. The electromagnetic interference fields were tested in the time and frequency domains, and a risk and hazard analysis was prepared for the use of lithium batteries. The range of values for luminance and the risk analysis for lithium batteries were discussed in the standards working group and will be integrated into the standard.
This paper gives a brief overview of illuminated warning vests, taking into account the risks and hazards for the vest wearer and others.

Keywords: illuminated warning vest, optical tests and measurements, risks, hazards, optical glare effects, LED, E-light, electric luminescent

Procedia PDF Downloads 113
261 Life Cycle Assessment of Biogas Energy Production from a Small-Scale Wastewater Treatment Plant in Central Mexico

Authors: Joel Bonales, Venecia Solorzano, Carlos Garcia

Abstract:

A great percentage of the wastewater generated in developing countries does not receive any treatment, which leads to numerous environmental impacts. In response, a paradigm shift has been proposed from the current wastewater treatment model, based on large-scale plants, towards a model based on small and medium scales. Nevertheless, small-scale wastewater treatment (SS-WWT) with novel technologies such as anaerobic digesters, as well as the utilization of derivative co-products such as biogas, still presents diverse environmental impacts which must be assessed. This study consisted of a Life Cycle Assessment (LCA) of a small-scale wastewater treatment plant (SS-WWTP) which treats wastewater from a small commercial block in the city of Morelia, Mexico. The treatment performed in the SS-WWTP uses anaerobic and aerobic digesters with a daily capacity of 5,040 L. Two scenarios were analyzed: the current plant conditions and a hypothetical energy use of the biogas obtained in situ. Furthermore, two allocation criteria were applied: full impact allocation to the system's main product (treated water), and substitution credits for replacing Mexican grid electricity (biogas) and clean water pumping (treated water). The results showed that the analyzed plant had larger impacts per volume of wastewater treated than reported in the literature, which may imply that the plant is currently operating inefficiently. The impacts were concentrated in the aerobic digestion and electric generation phases due to the plant's particular configuration. Additional findings show that the allocation criteria applied are crucial for the interpretation of impacts and that the energy use of the biogas obtained in this plant can help mitigate the associated climate change impacts. It is concluded that SS-WWT is an environmentally sound alternative for wastewater treatment from a systemic perspective.
However, such studies must be careful in the selection of allocation criteria and replaced products, since these factors strongly influence the results of the assessment.
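The substitution-credit allocation described above amounts to subtracting the avoided burdens of the replaced products (grid electricity, clean water pumping) from the system's direct impact. A minimal sketch, with purely illustrative impact values that are not results from this study:

```python
def net_impact(direct_impact, credits):
    """Net life-cycle impact after subtracting substitution credits for replaced products."""
    return direct_impact - sum(credits.values())

# Hypothetical kg CO2-eq per m3 of treated water (illustrative values only):
direct = 1.20
credits = {
    "grid_electricity_replaced_by_biogas": 0.35,
    "clean_water_pumping_replaced": 0.10,
}
net = net_impact(direct, credits)  # 1.20 - 0.45 = 0.75 kg CO2-eq/m3
```

The choice of which products are credited, and their assumed impact factors, directly shifts the net result, which is why the abstract stresses careful selection of allocation criteria.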

Keywords: biogas, life cycle assessment, small scale treatment, wastewater treatment

Procedia PDF Downloads 124
260 Performance Assessment of Horizontal Axis Tidal Turbine with Variable Length Blades

Authors: Farhana Arzu, Roslan Hashim

Abstract:

Renewable energy is the only alternative source of energy that can meet the current energy demand while preserving a healthy environment and future growth, which is considered essential for sustainable development. Marine renewable energy is one of the major means to meet this demand, and turbines (both horizontal- and vertical-axis) play a vital role in the extraction of tidal energy. The influence of swept area on the performance of a tidal turbine is a vital factor to study for reducing the relatively high cost of power generation in the marine industry. This study investigates the performance of a variable-length-blade tidal turbine, a concept that has already proved an efficient way to improve energy extraction in the wind industry. Variable blade length exploits the idea of increasing swept area by extending the turbine blades when the tidal stream velocity falls below the rated condition, maximizing energy capture, while the blades retract above the rated condition. A three-bladed horizontal-axis variable-length-blade tidal turbine was modelled by modifying a standard fixed-length-blade turbine. A numerical investigation based on classical blade element momentum theory was carried out using the QBlade software to predict performance. The results obtained from QBlade were compared with available published results, with very good agreement. Three major performance parameters (thrust, moment, and power coefficients) and the power output for different blade extensions were studied and compared with a standard fixed-blade baseline turbine under the same operating conditions. Substantial improvement in the performance coefficients is observed with increasing swept area of the turbine rotor. Power generation increases to a great extent when operating below the rated tidal stream velocity, reducing the associated cost per unit of electric power generated.
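The effect of blade extension on power capture follows directly from the actuator-disc relation P = ½ρACpv³, since swept area A = πR² grows with the square of the rotor radius. A minimal sketch, with assumed (not study-reported) radii and power coefficient:

```python
import math

def tidal_power(rho, radius, cp, v):
    """Hydrodynamic power extracted by a rotor: P = 0.5 * rho * (pi * R^2) * Cp * v^3."""
    area = math.pi * radius ** 2
    return 0.5 * rho * area * cp * v ** 3

rho = 1025.0   # seawater density, kg/m^3
cp = 0.40      # assumed power coefficient
v = 1.5        # below-rated stream velocity, m/s

p_fixed = tidal_power(rho, radius=8.0, cp=cp, v=v)     # baseline blade length (hypothetical)
p_extended = tidal_power(rho, radius=9.0, cp=cp, v=v)  # blades extended by 1 m (hypothetical)
gain = p_extended / p_fixed  # power scales with swept area: (9/8)^2
```

At constant Cp, a modest radial extension thus yields a quadratic gain in captured power, which is the rationale for extending blades below the rated velocity.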

Keywords: variable length blade, performance, tidal turbine, power generation

Procedia PDF Downloads 276
259 Computational Study of Composite Films

Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova

Abstract:

Composite and nanocomposite films represent a class of promising materials and are frequently studied for their mechanical, electrical, and other properties. The most interesting are probably the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component, described by the filling factor. At small filling factors, the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have essentially dielectric properties. The conductivity of the films increases with increasing filling factor until a transition into a metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally activated tunnelling between individual metal objects to ohmic conductivity, is especially important. The physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which depend on the technology used. In our contribution, composite structures were studied with the methods of computational physics. The study consists of two parts: -Generation of simulated composite and nanocomposite films, using techniques based on hard-sphere or soft-sphere models as well as on atomic modelling, followed by characterization of the prepared composite structures by image analysis of their sections or projections. The choice of morphological methods must be analysed, however, as standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films.
-Study of charge transport in the composites by the kinetic Monte Carlo method, since there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structure of the conducting paths in them, in dependence on the technology of composite film preparation.
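The hard-sphere generation step mentioned above can be sketched as random sequential placement with overlap rejection. The snippet below is a minimal illustrative version (sphere count, radius, and box size are arbitrary assumptions, not parameters from the study), together with the resulting filling factor of the metal component:

```python
import math
import random

def generate_hard_spheres(n, radius, box, max_tries=10000, seed=1):
    """Place n non-overlapping equal spheres in a cubic box (hard-sphere model)."""
    random.seed(seed)
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        # Candidate center kept inside the box so the whole sphere fits.
        c = tuple(random.uniform(radius, box - radius) for _ in range(3))
        # Reject candidates that overlap an already placed sphere.
        if all(math.dist(c, other) >= 2 * radius for other in centers):
            centers.append(c)
    return centers

def filling_factor(centers, radius, box):
    """Volume fraction occupied by the metal component."""
    return len(centers) * (4 / 3) * math.pi * radius ** 3 / box ** 3

centers = generate_hard_spheres(n=50, radius=2.0, box=40.0)
ff = filling_factor(centers, 2.0, 40.0)
```

Sections or projections of such a simulated structure can then be fed to the image-analysis stage, and the placed centers can serve as sites for a kinetic Monte Carlo transport simulation.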

Keywords: composite films, computer modelling, image analysis, nanocomposite films

Procedia PDF Downloads 393