Search results for: miscellaneous electric loads (MELs)
197 Multi-Size Continuous Particle Separation on a Dielectrophoresis-Based Microfluidics Chip
Authors: Arash Dalili, Hamed Tahmouressi, Mina Hoorfar
Abstract:
Advances in lab-on-a-chip (LOC) devices have led to significant progress in the manipulation, separation, and isolation of particles and cells. Among the different active and passive particle manipulation methods, dielectrophoresis (DEP) has been proven to be a versatile mechanism as it is label-free, cost-effective, simple to operate, and has high manipulation efficiency. DEP has been applied to a wide range of biological and environmental applications. A popular form of DEP device is the continuous manipulation of particles by using co-planar slanted electrodes, which utilizes a sheath flow to focus the particles into one side of the microchannel. When particles enter the DEP manipulation zone, the negative DEP (nDEP) force generated by the slanted electrodes deflects the particles laterally towards the opposite side of the microchannel. The lateral displacement of the particles depends on multiple parameters, including the geometry of the electrodes, the width, length, and height of the microchannel, the size of the particles, and the throughput. In this study, COMSOL Multiphysics® modeling along with experimental studies is used to investigate the effect of the aforementioned parameters. The electric field between the electrodes and the induced DEP force on the particles are modelled in COMSOL Multiphysics®. The simulation model is used to show the effect of the DEP force on the particles and how the geometry of the electrodes (the width of the electrodes and the gap between them) plays a role in the manipulation of polystyrene microparticles. The simulation results show that increasing the electrode width up to a certain limit, which depends on the height of the channel, increases the induced DEP force. Also, decreasing the gap between the electrodes leads to a stronger DEP force. Based on these results, criteria for the fabrication of the electrodes were established, and soft lithography was used to fabricate interdigitated slanted electrodes and microchannels. Experimental studies were run to find the effect of the flow rate, the geometrical parameters of the microchannel (length, width, and height), and the electrodes' angle on the displacement of 5 µm, 10 µm, and 15 µm polystyrene particles. An empirical equation is developed to predict the displacement of the particles under different conditions. It is shown that the displacement of the particles is greater for longer channels, lower channel heights, lower flow rates, and bigger particles. On the other hand, the effect of the angle of the electrodes on the displacement of the particles was negligible. Based on the results, we have developed an optimum design (in terms of efficiency and throughput) for the three-size separation of particles.
Keywords: COMSOL Multiphysics, Dielectrophoresis, Microfluidics, Particle separation
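For reference, the induced force discussed above is usually written with the standard time-averaged DEP expression below; the notation is generic and not taken from the authors' model.

```latex
% Time-averaged DEP force on a spherical particle of radius r suspended in a
% medium of permittivity \varepsilon_m (standard textbook form)
\[
  \mathbf{F}_{\mathrm{DEP}} = 2\pi\varepsilon_m r^{3}\,
  \operatorname{Re}\!\left[K(\omega)\right]\nabla\lvert\mathbf{E}_{\mathrm{rms}}\rvert^{2},
  \qquad
  K(\omega)=\frac{\varepsilon_p^{*}-\varepsilon_m^{*}}{\varepsilon_p^{*}+2\varepsilon_m^{*}}
\]
```

A negative real part of the Clausius-Mossotti factor K(ω) gives nDEP, pushing particles away from the high-field regions at the electrode edges, and the r³ dependence is consistent with larger particles being deflected farther.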
Procedia PDF Downloads 186
196 Design Development and Qualification of a Magnetically Levitated Blower for CO₂ Scrubbing in Manned Space Missions
Authors: Larry Hawkins, Scott K. Sakakura, Michael J. Salopek
Abstract:
The Marshall Space Flight Center is designing and building a next-generation CO₂ removal system, the Four Bed Carbon Dioxide Scrubber (4BCO₂), which will use the International Space Station (ISS) as a testbed. The current ISS CO₂ removal system has faced many challenges in both performance and reliability. Given that CO₂ removal is an integral Environmental Control and Life Support System (ECLSS) subsystem, the 4BCO₂ scrubber has been designed to eliminate the shortfalls identified in the current ISS system. One of the key required upgrades was to improve the performance and reliability of the blower that provides the airflow through the CO₂ sorbent beds. A magnetically levitated blower, capable of higher airflow and pressure than the previous system, was developed to meet this need. The design and qualification testing of this next-generation blower are described here. The new blower features a high-efficiency permanent magnet motor, a five-axis active magnetic bearing system, and a compact controller containing both a variable speed drive and a magnetic bearing controller. The blower uses a centrifugal impeller to pull air from the inlet port and drive it through an annular space around the motor and magnetic bearing components to the exhaust port. Technical challenges of the blower and controller development include survival of the blower system under launch random vibration loads, operation in microgravity, packaging under strict size and weight requirements, and successful operation during 4BCO₂ operational changeovers. An ANSYS structural dynamic model of the controller was used to predict the response to the NASA-defined random vibration spectrum and to drive minor design changes. The simulation results are compared to measurements from qualification testing of the controller on a vibration table. Predicted blower performance is compared to flow loop testing measurements. The dynamic response of the system to valve changeovers is presented and discussed using high-bandwidth measurements from dynamic pressure probes, magnetic bearing position sensors, and actuator coil currents. The results presented in the paper show that the blower controller will survive launch vibration levels, the blower flow meets the requirements, and the magnetic bearings have adequate load capacity and control bandwidth to maintain the desired rotor position during the valve changeover transients.
Keywords: blower, carbon dioxide removal, environmental control and life support system, magnetic bearing, permanent magnet motor, validation testing, vibration
Procedia PDF Downloads 135
195 Experimental Analysis of Supersonic Combustion Induced by Shock Wave at the Combustion Chamber of the 14-X Scramjet Model
Authors: Ronaldo de Lima Cardoso, Thiago V. C. Marcos, Felipe J. da Costa, Antonio C. da Oliveira, Paulo G. P. Toro
Abstract:
The 14-X is a strategic project of the Brazilian Air Force Command to develop a technological demonstrator of a hypersonic air-breathing propulsion system based on supersonic combustion, designed to fly in the Earth's atmosphere at an altitude of 30 km and Mach number 10. The 14-X is under development at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies. The program began in 2007 and was planned to have three stages: development of the waverider configuration, development of the scramjet configuration, and finally the ground tests in the hypersonic shock tunnel T3. The model based on the 14-X scramjet was installed in the test section of the hypersonic shock tunnel so as to reproduce and test the flight conditions at the inlet of the combustion chamber. Experimental studies with a hypersonic shock tunnel require special data acquisition techniques. To measure the pressure along the geometry of the tested model, 30 PCB® model 122A22 pressure transducers were used. The piezoelectric crystals of a pressure transducer produce an electric current when subjected to a pressure variation (PCB® PIEZOTRONIC, 2016). The signals of the pressure transducers were read with an oscilloscope. After the studies began, it was observed that the pressure inside the combustion chamber was lower than expected. One solution to increase the pressure inside the combustion chamber was to install an obstacle to provide higher temperature and pressure. Emission spectroscopy was selected to confirm whether combustion occurs. The region analyzed by the emission spectroscopy system is the edge of the obstacle installed inside the combustion chamber. The emission spectroscopy technique was used to observe the emission of OH*, confirming whether or not combustion of the mixture of supersonic atmospheric air and hydrogen fuel takes place inside the combustion chamber of the model. This paper shows the results of experimental studies of supersonic combustion induced by a shock wave performed at the Hypersonic Shock Tunnel T3 using the 14-X scramjet model. It also provides important data on the combustion studies using the model based on the 14-X engine (second stage of the 14-X Program), indicating possible corrections to be made in the next stages of the program or in other models for experimental study.
Keywords: 14-X, experimental study, ground tests, scramjet, supersonic combustion
Procedia PDF Downloads 387
194 Fatigue Truck Modification Factor for Design Truck (CL-625)
Authors: Mohamad Najari, Gilbert Grondin, Marwan El-Rich
Abstract:
Design trucks in standard codes are selected based on the amount of damage they cause on structures (specifically bridges) and roads, so as to represent the real traffic loads. A limited number of trucks are run on a bridge one at a time, and the damage on the bridge is recorded for each truck. One design truck is also run on the same bridge n times (n being the number of trucks used previously) to calculate the damage of the design truck on the same bridge. To make these damages equal, a reduction factor is needed for that specific design truck in the codes. As the limited number of trucks cannot be an exact representation of real traffic throughout the life of the structure, these reduction factors are not accurately calculated and should be modified accordingly. Starting in July 2004, vehicle load data were collected at six weigh-in-motion (WIM) sites owned by Alberta Transportation for eight consecutive years. This database includes more than 200 million trucks. Having these data gives the opportunity to compare the effect of any standard fatigue truck's weight and of the real traffic load on the fatigue life of bridges, which leads to a modification of the fatigue truck factor in the code. To calculate the damage for each truck, the truck is run on the bridge, the moment history of the detail under study is recorded, stress range cycles are counted, and the damage is then calculated using available S-N curves. A 2,000-line FORTRAN code has been developed to perform the analysis and calculate the damage of the trucks in the database for all eight fatigue categories according to the Canadian Institute of Steel Construction (CSA S-16). Stress cycles are counted using the rainflow counting method. The modification factors for the design truck (CL-625) are calculated for two different bridge configurations and ten span lengths varying from 1 m to 200 m. The two considered bridge configurations are a single-span bridge and a four-span bridge. This was found to be sufficient and representative for a simply supported span, the positive moment in end spans of bridges with two or more spans, the positive moment in interior spans of bridges with three or more spans, and the negative moment at an interior support of multi-span bridges. The moment history of the mid-span is recorded for the single-span bridge, while the exterior positive moment, interior positive moment, and support negative moment are recorded for the four-span bridge. The influence lines are expressed by a polynomial expression obtained from a regression analysis of the influence lines obtained from SAP2000. It is found that for the design truck (CL-625) the fatigue truck factor varies from 0.35 to 0.55 depending on span length and bridge configuration. Detailed results will be presented in upcoming papers. This code can be used for any design truck available in standard codes.
Keywords: bridge, fatigue, fatigue design truck, rain flow analysis, FORTRAN
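As a rough illustration of the damage-accumulation step described above, the sketch below applies Miner's rule to counted stress-range cycles and derives a damage-equivalence factor; the S-N constant, the slope of 3, and the cycle counts are placeholders, not the CSA S-16 category values or the WIM data.

```python
# Hedged sketch: Palmgren-Miner damage for counted stress-range cycles and a
# damage-equivalence factor for a design truck. The S-N constant A, the slope
# m = 3, and the example cycle counts are illustrative placeholders only.
from collections import Counter

def miner_damage(cycles, A=3.93e12, m=3.0):
    """cycles: {stress_range_MPa: count}; returns sum(n_i / N_i) with N = A / S^m."""
    return sum(n / (A / s**m) for s, n in cycles.items() if s > 0)

traffic_cycles = Counter({40.0: 1200, 55.0: 300, 80.0: 25})   # e.g. from rainflow counts of recorded trucks
design_cycles = Counter({60.0: 1525})                         # design truck run once per recorded truck

D_traffic = miner_damage(traffic_cycles)
D_design = miner_damage(design_cycles)

# Equivalence factor: load scaling of the design truck that reproduces the
# real-traffic damage, assuming damage scales with the cube of the stress range.
factor = (D_traffic / D_design) ** (1.0 / 3.0)
print(f"traffic damage {D_traffic:.3e}, design-truck damage {D_design:.3e}, factor {factor:.2f}")
```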
Procedia PDF Downloads 521
193 Coulomb-Explosion Driven Proton Focusing in an Arched CH Target
Authors: W. Q. Wang, Y. Yin, D. B. Zou, T. P. Yu, J. M. Ouyang, F. Q. Shao
Abstract:
The high-energy-density state, i.e., matter and radiation at energy densities in excess of 10^11 J/m^3, is relevant to materials science, nuclear physics, astrophysics, and geophysics. Laser-driven particle beams are better suited to heat the matter as a trigger due to their unique properties of ultrashort duration and low emittance. Compared to X-ray and electron sources, proton and ion beams can more easily generate uniformly heated large-volume material because of their highly localized energy deposition. With the construction of state-of-the-art high-power laser facilities, the creation of extreme conditions of high temperature and high density in laboratories becomes possible. It has been demonstrated that, on a picosecond time scale, solid-density material can be isochorically heated to over 20 eV by the ultrafast proton beam generated from spherically shaped targets. For the above-mentioned technique, the proton energy density plays a crucial role in the formation of warm dense matter states. Recently, several methods have been devoted to realizing the focusing of the accelerated protons, involving externally applied static fields or specially designed targets interacting with single or multiple laser pulses. In previous works, two co-propagating or opposite-direction laser pulses were employed to strike a submicron plasma shell. However, ultra-high pulse intensities, accurate temporal synchronization, and undesirable transverse instabilities over long times are still intractable for current experimental implementations. A mechanism for the focusing of laser-driven proton beams from two-ion-species arched targets is investigated by multi-dimensional particle-in-cell simulations. When an intense linearly polarized laser pulse impinges on the thin arched target, all electrons are completely evacuated, leading to a Coulomb-explosion electric field originating mostly from the heavier carbon ions. The lighter protons, in the reference frame moving at the ion sound speed, are accelerated and effectively focused by this radially isotropic field. At a laser intensity of 2.42×10^21 W/cm^2, a ballistic proton bunch with an energy density as high as 2.15×10^17 J/m^3 is produced, and the highest proton energy and the focusing position agree well with theory.
Keywords: Coulomb explosion, focusing, high-energy-density, ion acceleration
Procedia PDF Downloads 344
192 Plasma Arc Burner for Pulverized Coal Combustion
Authors: Gela Gelashvili, David Gelenidze, Sulkhan Nanobashvili, Irakli Nanobashvili, George Tavkhelidze, Tsiuri Sitchinava
Abstract:
The development of a new, highly efficient plasma arc combustion system for pulverized coal is presented. As is well known, coal is one of the main energy carriers by means of which electric and heat energy is produced in thermal power stations. The quality of the extracted coal is decreasing very rapidly. Therefore, difficulties associated with its firing and complete combustion arise, and thermo-chemical preparation of pulverized coal becomes necessary. Usually, other organic fuels (mazut fuel oil or natural gas) are added to low-quality coal for this purpose. The fraction of additional organic fuels varies within the 35-40% range. This decreases dramatically the economic efficiency of such systems. At the same time, the emission of noxious substances into the environment increases. Because of all this, intense development of plasma combustion systems for pulverized coal is taking place all over the world. These systems are equipped with non-transferred plasma arc torches. They allow practically complete combustion of pulverized coal (without organic additives) in boilers and an increase in energetic and financial efficiency. At the same time, the emission of noxious substances into the environment decreases dramatically. However, non-transferred plasma torches have numerous drawbacks, e.g., complicated construction, low service life (especially in the case of high power), instability of the plasma arc and, most importantly, up to 30% energy loss due to anode cooling. For these reasons, intense development of new plasma technologies that are free from these shortcomings is taking place. In our proposed system, the pulverized coal-air mixture passes through the plasma arc area, which burns between two carbon electrodes directly in the pulverized coal muffle burner. Consumption of the carbon electrodes is low, and they do not need a cooling system; but the main advantage of this method is that the radiation of the plasma arc directly impacts the coal-air mixture, which accelerates the process of thermo-chemical preparation of the coal for burning. To ensure the stability of the plasma arc in such difficult conditions, we have developed a power source that maintains a fixed current during fluctuations in the arc resistance, automatically compensated by the voltage change, as well as regulation of the plasma arc length over a wide range. Our combustion system, in which the plasma arc acts directly on the pulverized coal-air mixture, is simple. This should allow a significant improvement of pulverized coal combustion (especially for low-quality coal) and of its economic efficiency. Preliminary experiments demonstrated the successful functioning of the system.
Keywords: coal combustion, plasma arc, plasma torches, pulverized coal
Procedia PDF Downloads 161
191 Optimized Renewable Energy Mix for Energy Saving in Waste Water Treatment Plants
Authors: J. D. García Espinel, Paula Pérez Sánchez, Carlos Egea Ruiz, Carlos Lardín Mifsut, Andrés López-Aranguren Oliver
Abstract:
This paper briefly describes three main actions on a Waste Water Treatment Plant (WWTP) for reducing its energy consumption: optimization of the biological reactor in the aeration stage by including new control algorithms and introducing new efficient equipment, the installation of an innovative hybrid system with zero grid injection (formed by 100 kW of PV generation and 5 kW of mini-wind generation), and an intelligent management system that controls load consumption and energy generation in the optimum way. This project, called RENEWAT and part of the European Commission LIFE 2013 call, has the main objective of reducing energy consumption through different actions on the processes which take place in a WWTP and of introducing renewable energies in these treatment plants, with the purpose of promoting the use of treated waste water for irrigation and decreasing CO₂ emissions. Waste water treatment is always required before waste water can be reused for irrigation or discharged into water bodies. However, the energy demand of the treatment process is high enough to make the price of treated water exceed that of drinking water. This makes any policy to encourage the reuse of treated water very difficult, with a great impact on the water cycle, particularly in areas suffering from water stress or scarcity. The cost of treating waste water involves another climate-change-related burden: the energy necessary for the process is obtained mainly from the electricity grid, which, in most cases in Europe, is energy obtained from the burning of fossil fuels. The innovative part of this project is based on the implementation, adaptation, and integration of solutions to this problem, together with a new concept for the integration of energy input and operative energy demand. Moreover, there is an important qualitative leap between the technologies currently used and the technologies proposed in the project, which gives it an innovative character, due to the fact that there are no similar previous experiences of a WWTP including intelligent discrimination of energy sources, integrating renewable ones (PV and wind) and the grid.
Keywords: aeration system, biological reactor, CO2 emissions, energy efficiency, hybrid systems, LIFE 2013 call, process optimization, renewable energy sources, waste water treatment plants
Procedia PDF Downloads 352
190 Analysis of Road Risk in Four French Overseas Territories Compared with Metropolitan France
Authors: Mohamed Mouloud Haddak, Bouthayna Hayou
Abstract:
Road accidents in French overseas territories have been understudied, with relevant data often collected late and incompletely. Although these territories account for only 3% to 4% of road traffic injuries in France, their unique characteristics merit closer attention. Despite lower mobility and, consequently, lower exposure to road risks, the actual road risk in Overseas France is as high as or even higher than in Metropolitan France. Significant disparities exist not only between Metropolitan France and the overseas territories but also among the overseas territories themselves. The varying population densities in these regions do not fully explain these differences, as each territory has its own distinct vulnerabilities and road safety challenges. This analysis, based on BAAC data files from 2005 to 2018 for both Metropolitan France and the overseas departments and regions, examines key variables such as gender, age, type of road user, type of obstacle hit, type of trip, road category, traffic conditions, weather, and location of accidents. Logistic regression models were built for each region to investigate the risk factors associated with fatal road accidents, focusing on the probability of being killed versus injured. Due to insufficient data, Mayotte and the Overseas Communities (French Polynesia and New Caledonia) were not included in the models. The findings reveal that road safety is worse in the overseas territories compared to Metropolitan France, particularly for vulnerable road users such as pedestrians and motorized two-wheelers. These territories present an accident profile that sits between that of Metropolitan France and middle-income countries. A pressing need exists to standardize accident data collection between Metropolitan and Overseas France to allow for more detailed comparative analyses. Further epidemiological studies could help identify the specific road safety issues unique to each territory, particularly with regard to socio-economic factors such as social cohesion, which may influence road safety outcomes. Moreover, the lack of data on new modes of travel, such as electric scooters, and the absence of socio-economic details of accident victims complicate the evaluation of emerging risk factors. Additional research, including sociological and psychosocial studies, is essential for understanding road users' behavior and perceptions of road risk, which could also provide valuable insights into accident trends in peri-urban areas in France.
Keywords: multivariate logistic regression, French overseas regions, road safety, road traffic accidents, territorial inequalities
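A minimal sketch of the kind of per-region model described above is shown below; the file name and column names are hypothetical placeholders, not the actual BAAC coding scheme.

```python
# Hedged sketch: logistic regression for the probability of being killed rather
# than injured, fitted separately per region. 'baac_region.csv' and the column
# names are invented for illustration, not the real BAAC variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("baac_region.csv")   # one row per casualty in one region

model = smf.logit(
    "killed ~ C(gender) + C(age_group) + C(road_user_type) + C(obstacle_type)"
    " + C(trip_type) + C(road_category) + C(weather) + C(location)",
    data=df,
).fit()

print(model.summary())        # coefficients on the log-odds scale
print(np.exp(model.params))   # odds ratios for each risk factor
```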
Procedia PDF Downloads 10
189 Evaluation of Batch Splitting in the Context of Load Scattering
Authors: S. Wesebaum, S. Willeke
Abstract:
Production companies are faced with an increasingly turbulent business environment, which demands very high flexibility in production volumes and delivery dates. If decoupling by storage stages is not possible (e.g., at a contract manufacturing company) or is undesirable from a logistical point of view, load scattering affects the production processes. 'Load' characterizes the timing and quantity of production orders (e.g., in hours of work content) arriving at workstations in production, which results in specific capacity requirements. Insufficient coordination between load (capacity demand) and capacity supply results in heavy load scattering, which can be described by deviations and uncertainties in the input behavior of a capacity unit. In order to respond to fluctuating loads, companies try to implement consistent and realizable input behavior using the capacity supply available. For example, a uniform and high level of equipment capacity utilization keeps production costs down. In contrast, strong load scattering at workstations leads to performance loss or disproportionately fluctuating work in process (WIP), whereby the logistics objectives are affected negatively. Options for reducing load scattering are, e.g., shifting the start and end dates of orders, batch splitting, outsourcing of operations, or shifting operations to other workstations. This leads to an adjustment of load to capacity supply, and thus to a reduction of load scattering. If the adaptation of load to capacity cannot be achieved completely, flexible capacity may have to be used to ensure that the performance of a workstation does not decrease for a given load. Whereas the use of flexible capacities normally raises costs, an adjustment of load to capacity supply reduces load scattering and, in consequence, costs. The literature mostly contains qualitative statements describing load scattering. Quantitative evaluation methods that describe load mathematically are rare. In this article, the authors discuss existing approaches for calculating load scattering and their various disadvantages, such as the lack of a possibility for normalization. These approaches are the basis for the development of our mathematical quantification approach for describing load scattering, which compensates for the disadvantages of the current quantification approaches. After presenting our mathematical quantification approach, the method of batch splitting will be described. Batch splitting allows the adaptation of load to capacity in order to reduce load scattering. After describing the method, it will be explicitly analyzed in the context of the logistic curve theory by Nyhuis, using the stretch factor α1, in order to evaluate the impact of the method of batch splitting on load scattering and on logistic curves. The conclusion of this article will show how the methods and approaches presented can help companies in a turbulent environment to quantify the occurring work load scattering accurately and to apply an efficient method for adjusting work load to capacity supply. In this way, the achievement of the logistical objectives is increased without causing additional costs.
Keywords: batch splitting, production logistics, production planning and control, quantification, load scattering
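To make the idea concrete, the sketch below uses the coefficient of variation of the work content arriving per period as a simple, normalizable scattering measure and shows how splitting large batches levels the input; this is an illustration only, not the quantification approach or the stretch-factor analysis developed in the paper.

```python
# Hedged sketch: a normalizable load-scattering measure (coefficient of
# variation of the per-period input load at one workstation) and the effect of
# splitting large batches. All numbers are invented for illustration.
import statistics

def load_scattering(load_per_period):
    """Coefficient of variation of the period loads (0 = perfectly level input)."""
    mean = statistics.fmean(load_per_period)
    return statistics.pstdev(load_per_period) / mean if mean else 0.0

# Work content (hours) arriving per day at one workstation
before = [2, 2, 38, 2, 2, 2, 38, 2]       # two large batches dominate two days
after = [12, 12, 12, 12, 12, 12, 7, 9]    # same total load, batches split across days

print(f"scattering before splitting: {load_scattering(before):.2f}")
print(f"scattering after  splitting: {load_scattering(after):.2f}")
```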
Procedia PDF Downloads 399
188 Residual Plastic Deformation Capacity in Reinforced Concrete Beams Subjected to Drop Weight Impact Test
Authors: Morgan Johansson, Joosef Leppanen, Mathias Flansbjer, Fabio Lozano, Josef Makdesi
Abstract:
Concrete is commonly used for protective structures, and how impact loading affects different types of concrete structures is an important issue. Often, the knowledge gained from static loading is also used in the design of impulse-loaded structures. A large plastic deformation capacity is essential to obtain a large energy absorption in an impulse-loaded structure. However, the structural response of an impact-loaded concrete beam may be very different compared to a statically loaded beam. Consequently, the plastic deformation capacity and failure modes of the concrete structure can be different when subjected to dynamic loads, and hence it is not certain that the observations obtained from static loading are also valid for dynamic loading. The aim of this paper is to investigate the residual plastic deformation capacity in reinforced concrete beams subjected to drop weight impact tests. A test series consisting of 18 simply supported beams (0.1 x 0.1 x 1.18 m, ρs = 0.7%) with a span length of 1.0 m, subjected to a point load at the beam mid-point, was carried out. Twelve (2 x 6) beams were first subjected to drop weight impact tests and thereafter statically tested until failure. The drop weight had a mass of 10 kg and was dropped from 2.5 m or 5.0 m. During the impact tests, a high-speed camera was used at 5,000 fps, and for the static tests, a camera was used at 0.5 fps. Digital image correlation (DIC) analyses were conducted, and from these the velocities of the beam and the drop weight, as well as the deformations and crack propagation of the beam, were effectively measured. Additionally, for the static tests, the applied load and midspan deformation were measured. The load-deformation relations for the beams subjected to an impact load were compared with those of 6 reference beams that were subjected to static loading only. The crack patterns obtained were compared using DIC, and it was concluded that the resulting crack formation depended much on the test method used. For the static tests, only bending cracks occurred. For the impact-loaded beams, though, distinctive diagonal shear cracks also formed below the zone of impact, and less wide shear cracks were observed in the region half-way to the support. Furthermore, due to wave propagation effects, bending cracks developed in the upper part of the beam during initial loading. The results showed that the plastic deformation capacity increased for beams subjected to drop weight impact tests from the high drop height of 5.0 m. For beams subjected to an impact from the low drop height of 2.5 m, though, the plastic deformation capacity was of the same order of magnitude as for the statically loaded reference beams. The beams tested were designed to fail due to bending when subjected to a static load. However, for the impact-tested beams, one beam exhibited a shear failure at a significantly reduced load level when it was tested statically, indicating that there might be a risk of reduced residual load capacity for impact-loaded structures.
Keywords: digital image correlation (DIC), drop weight impact, experiments, plastic deformation capacity, reinforced concrete
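For orientation, the nominal impact velocity and kinetic energy of the 10 kg drop weight follow directly from the two drop heights, neglecting air resistance and rig friction (these values are derived here and are not quoted in the abstract):

```latex
% Nominal impact conditions of the 10 kg drop weight (losses neglected)
\[
  v = \sqrt{2gh}, \qquad E_k = \tfrac{1}{2}mv^{2} = mgh
\]
\[
  h = 2.5\,\mathrm{m}:\; v \approx 7.0\,\mathrm{m/s},\; E_k \approx 245\,\mathrm{J};
  \qquad
  h = 5.0\,\mathrm{m}:\; v \approx 9.9\,\mathrm{m/s},\; E_k \approx 490\,\mathrm{J}
\]
```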
Procedia PDF Downloads 145
187 Experimental and Numerical Investigation on the Torque in a Small Gap Taylor-Couette Flow with Smooth and Grooved Surface
Authors: L. Joseph, B. Farid, F. Ravelet
Abstract:
Fundamental studies have been performed on bifurcation, instabilities, and turbulence in Taylor-Couette flow and applied to many engineering applications, such as astrophysical models of accretion disks, shrouded fans, and electric motors. Assessing the performance of such rotating machinery requires a better understanding of the fluid flow distribution in order to quantify the power losses and the heat transfer distribution. The present investigation is focused on Taylor-Couette flow with a high gap ratio (narrow gap) and high rotational speeds, for smooth and grooved surfaces. So far, little work has been done on very narrow gaps with very high rotation rates and, to the best of our knowledge, none with this combination and a grooved surface. We study numerically the turbulent flow between two coaxial cylinders, where R1 and R2 are the inner and outer radii, respectively, and only the inner cylinder is rotating. The gap between the rotor and the stator varies between 0.5 and 2 mm, which corresponds to a radius ratio η = R1/R2 between 0.96 and 0.99 and an aspect ratio Γ = L/d between 50 and 200, where L is the length of the rotor and d the gap between the two cylinders. The scaling of the torque with the Reynolds number is determined at different gaps for different smooth and grooved surfaces (and also with different numbers of grooves). The fluid in the gap is air. Re varies between 8,000 and 30,000. Another dimensionless parameter that plays an important role in distinguishing the flow regime is the Taylor number, which corresponds to the ratio between the centrifugal forces and the viscous forces (from 6.7 × 10^5 to 4.2 × 10^7). The torque is first evaluated with RANS and U-RANS models and compared to empirical models and experimental results. A mesh convergence study has been done for each rotor-stator combination. The torque results are compared for different meshes in 2D. For the smooth surfaces, the models used overestimate the torque compared to the empirical equations found in the literature. The models closest to the empirical correlations are those resolving the equations near the wall. The greatest torque is achieved with the grooved surface. The tangential velocity in the gap was always higher between the rotor and the stator than at the rotor wall, and it was greatest in the grooves, in the recirculation zones. In order to avoid end-wall effects, long cylinders (100 mm) are used in our setup, and the torque is measured by a co-rotating torquemeter. The rotor is driven by the air turbine of an automotive turbo-compressor for high angular velocities. The experimental measurements are performed at rotational speeds of up to 50,000 rpm. The first experimental results are in agreement with the numerical ones. Currently, a quantitative study is being performed on the grooved surface to determine the effect of the number of grooves on the torque, both experimentally and numerically.
Keywords: Taylor-Couette flow, high gap ratio, grooved surface, high speed
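As a quick reference for the dimensionless groups quoted above, the sketch below evaluates the radius ratio, the gap Reynolds number, and one common form of the Taylor number for an air-filled gap; the outer radius and the operating point are assumed example values within the quoted ranges, not data from the study.

```python
# Hedged sketch: dimensionless groups for a narrow-gap Taylor-Couette setup in
# air. R2, the gap, and the speed are assumed example values; note also that
# several Taylor-number definitions exist and only one common form is used here.
import math

nu = 1.5e-5          # kinematic viscosity of air, m^2/s (approx., ambient)
R2 = 0.050           # outer radius, m (assumed)
d = 1.0e-3           # gap, m (within the 0.5-2 mm range)
R1 = R2 - d
omega = 30000 * 2 * math.pi / 60.0      # 30,000 rpm in rad/s

eta = R1 / R2                           # radius ratio
Re = omega * R1 * d / nu                # gap Reynolds number
Ta = (omega**2 * R1 * d**3) / nu**2     # one common Taylor-number form

print(f"eta = {eta:.3f}, Re = {Re:.0f}, Ta = {Ta:.2e}")
```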
Procedia PDF Downloads 406
186 A Comprehensive Analysis of Factors Leading to Fatal Road Accidents in France and Its Overseas Territories
Authors: Bouthayna Hayou, Mohamed Mouloud Haddak
Abstract:
Road accidents in French overseas territories have been understudied, with relevant data often collected late and incompletely. Although these territories account for only 3% to 4% of road traffic injuries in France, their unique characteristics merit closer attention. Despite lower mobility and, consequently, lower exposure to road risks, the actual road risk in Overseas France is as high as or even higher than in Metropolitan France. Significant disparities exist not only between Metropolitan France and the overseas territories but also among the overseas territories themselves. The varying population densities in these regions do not fully explain these differences, as each territory has its own distinct vulnerabilities and road safety challenges. This analysis, based on BAAC data files from 2005 to 2018 for both Metropolitan France and the overseas departments and regions, examines key variables such as gender, age, type of road user, type of obstacle hit, type of trip, road category, traffic conditions, weather, and location of accidents. Logistic regression models were built for each region to investigate the risk factors associated with fatal road accidents, focusing on the probability of being killed versus injured. Due to insufficient data, Mayotte and the Overseas Communities (French Polynesia and New Caledonia) were not included in the models. The findings reveal that road safety is worse in the overseas territories compared to Metropolitan France, particularly for vulnerable road users such as pedestrians and motorized two-wheelers. These territories present an accident profile that sits between that of Metropolitan France and middle-income countries. A pressing need exists to standardize accident data collection between Metropolitan and Overseas France to allow for more detailed comparative analyses. Further epidemiological studies could help identify the specific road safety issues unique to each territory, particularly with regard to socio-economic factors such as social cohesion, which may influence road safety outcomes. Moreover, the lack of data on new modes of travel, such as electric scooters, and the absence of socio-economic details of accident victims complicate the evaluation of emerging risk factors. Additional research, including sociological and psychosocial studies, is essential for understanding road users' behavior and perceptions of road risk, which could also provide valuable insights into accident trends in peri-urban areas in France.
Keywords: multivariate logistic regression, Overseas France, road safety, road traffic accident, territorial inequalities
Procedia PDF Downloads 10
185 Impact on the Yield of Flavonoid and Total Phenolic Content from Pomegranate Fruit by Different Extraction Methods
Authors: Udeshika Yapa Bandara, Chamindri Witharana, Preethi Soysa
Abstract:
Pomegranate fruits are used in cancer treatment in Ayurveda in Sri Lanka. Due to the therapeutic effects of phytochemicals, this study focused on the anti-cancer properties of the constituents in the parts of the pomegranate fruit. Furthermore, the method of extraction is a crucial step in phytochemical analysis; therefore, this study also focused on different extraction methods. Five techniques were applied to the peel and the pericarp to evaluate the most effective extraction method: boiling with an electric burner (BL), sonication (SN), microwaving (MC), heating in a 50°C water bath (WB), and sonication followed by microwaving (SN-MC). The polyphenolic and flavonoid contents were evaluated to identify the best extraction method for polyphenols. The total phenolic content was measured spectrophotometrically by the Folin-Ciocalteu method and expressed as gallic acid equivalents (w/w% GAE). The total flavonoid content was also determined spectrophotometrically with the aluminium chloride colourimetric assay and expressed as quercetin equivalents (w/w% QE). Pomegranate juice was prepared as fermented juice (with Saccharomyces bayanus) and fresh juice. Powdered seeds were refluxed, filtered, and freeze-dried. Two grams of freeze-dried powder of each component were dissolved in 100 ml of de-ionized water for extraction. For the comparison of antioxidant activity and total phenolic content, the polyphenols were removed with a polyvinylpolypyrrolidone (PVPP) column, and the fermented and fresh juices were tested for 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity before and after the removal of polyphenols. For the peel samples of the pomegranate fruit, the total phenolic and flavonoid contents were highest with sonication (SN). For the pericarp, the total phenolic and flavonoid contents were also highest with sonication (SN). A significant difference (P < 0.05) in total phenolic and flavonoid contents was observed between the five extraction methods for both peel and pericarp samples. Fermented juice had greater polyphenolic and flavonoid contents than fresh juice. After removing the polyphenols from the fermented and fresh juices using the PVPP column, low antioxidant activity was observed in the DPPH assay. Seeds had very low total phenolic and flavonoid contents according to the results. Although pomegranate peel is the main waste component of the fruit, it has excellent polyphenolic and flavonoid contents compared to the other parts of the fruit, regardless of the method of extraction. Polyphenols play a major role in antioxidant activity.
Keywords: antioxidant activity, flavonoids, polyphenols, pomegranate
Procedia PDF Downloads 161
184 Climate Change Effects of Vehicular Carbon Monoxide Emission from Road Transportation in Part of Minna Metropolis, Niger State, Nigeria
Authors: H. M. Liman, Y. M. Suleiman, A. A. David
Abstract:
Poor air quality, often considered one of the greatest environmental threats facing the world today, is caused largely by the emission of carbon monoxide into the atmosphere. The principal air pollutant is carbon monoxide. One prominent source of carbon monoxide emission is the transportation sector. Not much was known about the emission levels of carbon monoxide, the primary pollutant from road transportation, in the study area. Therefore, this study assessed the levels of carbon monoxide emission from road transportation in Minna, Niger State. The database contains the carbon monoxide data collected. An MSA Altair gas alert detector was used to take the carbon monoxide emission readings in parts per million (ppm) for the peak and off-peak periods of vehicular movement at the road intersections. Their Global Positioning System (GPS) coordinates were recorded in the Universal Transverse Mercator (UTM) system. Bar charts were plotted using the carbon monoxide emission levels recorded in the field against the scientifically established, internationally accepted safe limit of 8.7 ppm of carbon monoxide in the atmosphere. Further statistical analysis was carried out on the data recorded in the field using the Statistical Package for the Social Sciences (SPSS) software and Microsoft Excel to show the variance of the emission levels of each of the parameters in the study area. The results established that the emission levels of atmospheric carbon monoxide from road transportation in the study area exceeded the internationally accepted safe limit of 8.7 ppm. In addition, the variations in the average CO emission levels between the four parameters showed that the morning peak had the highest average emission level (24.5 ppm), followed by the evening peak (22.84 ppm), the morning off-peak (15.33 ppm), and, lowest of all, the evening off-peak (12.94 ppm). Based on these results, recommendations for mitigating poor air quality by reducing carbon monoxide emissions from transportation include the following: introducing urban mass transit, which would reduce the amount of traffic on the roads and hence the emissions from the many vehicles that would otherwise have been on the road, while also providing a cheaper means of transportation for the masses; and encouraging the use of vehicles powered by alternative energy sources such as solar, electricity, and biofuel, which, unlike fossil-fuelled diesel and petrol vehicles, emit little or no carbon monoxide.
Keywords: carbon monoxide, climate change emissions, road transportation, vehicular
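A small sketch of the comparison described above, using the reported period means against the 8.7 ppm limit (the figures are taken directly from the abstract):

```python
# Hedged sketch: comparing the reported mean CO levels per observation period
# against the cited 8.7 ppm safe limit.
SAFE_LIMIT_PPM = 8.7
mean_co_ppm = {
    "morning peak": 24.5,
    "evening peak": 22.84,
    "morning off-peak": 15.33,
    "evening off-peak": 12.94,
}

for period, ppm in mean_co_ppm.items():
    print(f"{period:18s}: {ppm:5.2f} ppm ({ppm / SAFE_LIMIT_PPM:.1f}x the safe limit)")
```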
Procedia PDF Downloads 375
183 Achieving Product Robustness through Variation Simulation: An Industrial Case Study
Authors: Narendra Akhadkar, Philippe Delcambre
Abstract:
In power protection and control products, assembly process variations due to the individual parts manufactured from single- or multi-cavity tooling are a major problem. The dimensional and geometrical variations on the individual parts, in the form of manufacturing tolerances and assembly tolerances, are sources of clearance in the kinematic joints, polarization effects in the joints, and tolerance stack-up. All these variations adversely affect the quality, functionality, cost, and time-to-market of the product. Variation simulation analysis may be used in the early product design stage to predict such uncertainties. Usually, variations exist in both manufacturing processes and materials. In tolerance analysis, the effects of the dimensional and geometrical variations of the individual parts on the functional characteristics (conditions) of the final assembled products are studied. A functional characteristic of the product may be affected by a set of interrelated dimensions (functional parameters) that usually form a geometrical closure in a 3D chain. In power protection and control products, the prerequisite is that when a fault occurs in the electrical network, the product must react quickly and break the circuit to clear the fault. Usually, the response time is in milliseconds. Any failure in clearing the fault may result in severe damage to the equipment or network, and human safety is at stake. In this article, we have investigated two important functional characteristics that are associated with the robust performance of the product. It is demonstrated that the experimental data obtained at the Schneider Electric Laboratory prove the very good prediction capabilities of the variation simulation performed using CETOL (tolerance analysis software) in an industrial context. In particular, this study allows design engineers to better understand the critical parts in the product that need to be manufactured with good, capable tolerances. Conversely, some parts are not critical for the functional characteristics (conditions) of the product and may allow some reduction of the manufacturing cost while still ensuring robust performance. Capable tolerancing is one of the most important aspects of product and manufacturing process design. In the case of the miniature circuit breaker (MCB), the product's quality and robustness are mainly impacted by two aspects: (1) the allocation of design tolerances between the components of a mechanical assembly and (2) the manufacturing tolerances in the intermediate machining steps of component fabrication.
Keywords: geometrical variation, product robustness, tolerance analysis, variation simulation
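The sketch below illustrates the general idea of variation simulation with a Monte Carlo tolerance stack-up on a simple linear 1-D chain; the dimensions, tolerances, and functional-gap limit are invented for illustration and are not the MCB data or the CETOL model used in the study.

```python
# Hedged sketch: Monte Carlo stack-up on a 1-D dimensional chain, estimating
# how often an assumed functional clearance falls below its limit. All numbers
# are placeholders; tolerances are treated as +/- 3-sigma normal variations.
import random

N = 100_000
nominals = [12.00, 3.50, 8.20]       # mm, contributing part dimensions (assumed)
tolerances = [0.05, 0.02, 0.04]      # mm, +/- tolerance on each dimension
HOUSING = 24.00                      # mm, assumed enclosing dimension
GAP_MIN = 0.25                       # mm, required functional clearance (assumed)

failures = 0
for _ in range(N):
    parts = [random.gauss(nom, tol / 3.0) for nom, tol in zip(nominals, tolerances)]
    gap = HOUSING - sum(parts)       # functional characteristic of this sample
    if gap < GAP_MIN:
        failures += 1

print(f"estimated non-conformance rate: {failures / N:.4%}")
```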
Procedia PDF Downloads 164
182 Biosorption as an Efficient Technology for the Removal of Phosphate, Nitrate and Sulphate Anions in Industrial Wastewater
Authors: Angel Villabona-Ortíz, Candelaria Tejada-Tovar, Andrea Viera-Devoz
Abstract:
Wastewater treatment is an issue of vital importance in these times, when the impacts of human activities are most evident; it has become an essential task for the normal functioning of society. However, these activities put entire ecosystems at risk, in time destroying the possibility of sustainable development. Various conventional technologies are used to remove pollutants from water. Agro-industrial waste has the potential to be used as a renewable raw material for the production of energy and chemical products, and its use is beneficial since value-added products are generated from materials that were previously unused. Considering the benefits that the use of residual biomass brings, this project proposes the use of agro-industrial residues from corn crops for the production of natural adsorbents aimed at the remediation of water bodies contaminated with large nutrient loads. The adsorption capacity of two biomaterials obtained from the processing of corn stalks was evaluated by batch system tests. A biochar impregnated with sulfuric acid and thermally activated was synthesized. In addition, cellulose was extracted from the corn stalks and chemically modified with cetyltrimethylammonium chloride in order to quaternize the surface of the adsorbent. The adsorbents obtained were characterized by thermogravimetric analysis (TGA), scanning electron microscopy (SEM), Fourier-transform infrared spectroscopy (FTIR), Brunauer-Emmett-Teller (BET) analysis, and X-ray diffraction (XRD), which showed favorable characteristics for the cellulose extraction process. Higher adsorption capacities for the nutrients were obtained with the biochar, with phosphate being the anion with the best removal percentages. The effect of the initial adsorbate concentration was evaluated, showing that the Freundlich isotherm better describes the adsorption process in most systems. The adsorbent-phosphate/nitrate systems fit the pseudo-first-order kinetic model better, while the adsorbent-sulfate systems showed a better fit to the pseudo-second-order model, which indicates that both physical and chemical interactions take place in the process. Multicomponent adsorption tests revealed that phosphate anions have a higher affinity for both adsorbents. Furthermore, the negative values of the thermodynamic parameters standard enthalpy (ΔH°) and standard entropy (ΔS°), together with the values of the standard Gibbs free energy (ΔG°), indicate that the adsorption of anions with biochar and modified cellulose is exothermic and spontaneous. The use of the evaluated biomaterials is recommended for the treatment of industrial effluents contaminated with sulfate, nitrate, and phosphate anions.
Keywords: adsorption, biochar, modified cellulose, corn stalks
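For reference, the isotherm, kinetic, and thermodynamic relations mentioned above have the standard forms below (generic notation, not the fitted parameters of the study):

```latex
% Freundlich isotherm, pseudo-first/second-order kinetics, and the Gibbs relation
\[
  q_e = K_F\, C_e^{1/n} \qquad \text{(Freundlich)}
\]
\[
  q_t = q_e\left(1 - e^{-k_1 t}\right) \quad \text{(pseudo-first order)},
  \qquad
  q_t = \frac{k_2 q_e^{2} t}{1 + k_2 q_e t} \quad \text{(pseudo-second order)}
\]
\[
  \Delta G^{\circ} = \Delta H^{\circ} - T\,\Delta S^{\circ}
\]
```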
Procedia PDF Downloads 182
181 Solid Waste and Its Impact on Human Health
Authors: Waseem Akram, Hafiz Azhar Ali Khan
Abstract:
Unplanned urbanization, together with the change from a simple to a more technologically advanced lifestyle and the flow of rural masses to urban areas, has played a vital role in piling up loads of solid waste in our environment. Cities and towns have expanded beyond their boundaries, and uncontrolled population expansion has added to the overall environmental burden. Thus, today, indifference remains one of the biggest problems, a result of the non-responsive behavior of the people. Every day, a huge amount of solid waste is thrown into the streets, onto the roads, into parks, and into all those places that are frequently visited by human beings. In many countries of the world, this behavior has led to serious health concerns and environmental issues. Over 80% of the products sold in the market are packed in plastic bags. None of these bags are later recycled; they simply become a permanent environmental concern as they fly around, choke drainage lines, are burnt and release toxic gases into the environment, or form heaps of dumps. The lack of classification of the daily waste generated from houses and other places leads to severe clogging of the sewerage lines and the formation of ponding areas, which ultimately favor vector-borne diseases and sometimes become a cause of transmission of the polio virus. Solid waste heaps were checked at different places in the cities. On visual assessment, all of the waste was classified into plastic bags, paper, broken plastic pots, clay pots, steel boxes, wrappers, etc. All solid waste dumping sites in the cities, and the waste thrown outside the trash containers, usually contained wrappers, plastic bags, and unconsumed food products. Insect populations seen at these sites included house flies, bugs, cockroaches, and mosquito larvae breeding in water-filled wrappers, containers, or plastic bags. The populations of mosquitoes, cockroaches, and houseflies were relatively very high at dumping sites close to human settlements. These populations have been associated with cases of dengue, malaria, dysentery, gastroenteritis, and skin allergies during the monsoon and summer seasons. Thus, the dumping of huge amounts of solid waste in and near residential areas results in serious environmental concerns, the circulation of bad smells, and health-related issues. In some places, the waste is burnt to get rid of mosquitoes through smoke, which ultimately releases toxic material into the atmosphere. Therefore, a proper environmental strategy is needed to minimize the environmental burden, promote the concept of recycled products, and thus reduce the disease burden.
Keywords: solid waste accumulation, disease burden, mosquitoes, vector borne diseases
Procedia PDF Downloads 278
180 Life Cycle Carbon Dioxide Emissions from the Construction Phase of Highway Sector in China
Authors: Yuanyuan Liu, Yuanqing Wang, Di Li
Abstract:
Mitigating carbon dioxide (CO2) emissions from road construction activities is one of the potential pathways to deal with climate change, given the sector's high use of materials, machinery energy consumption, and large quantities of vehicle and equipment fuels for transportation and on-site construction activities. Aiming to assess the environmental impact of road infrastructure construction activities and to identify hotspots of emission sources, this study developed a life-cycle CO2 emissions assessment framework covering the three stages of material production, to-site transportation, and on-site transportation, under the guidance of the principles of the LCA standard ISO 14040. A streamlined inventory analysis of the sub-processes of each stage was then conducted based on the budget files of highway project cases in China. The calculation results were normalized to a functional unit expressed as tons per km per lane. A comparison was then made between the emissions from each stage and sub-process to identify the major contributors over the whole highway life cycle. In addition, the calculated results were compared with results from other countries to understand the level of CO2 emissions associated with Chinese road infrastructure internationally. The results showed that the material production stage produces most of the CO2 emissions (more than 80%), with the production of cement and steel accounting for large quantities of carbon emissions. The life cycle CO2 emissions of the fuel and electric energy associated with to-site and on-site transportation vehicles and equipment are a minor component of the total life cycle CO2 emissions from highway construction activities. Bridges and tunnels are the dominant carbon contributors compared to the road segments. The life cycle CO2 emissions of road segments in highway projects in China are slightly higher than the estimates for highways in European countries and the USA, at about 1,500 tons per km per lane. In particular, the life cycle CO2 emissions of road pavement in the majority of cities all over the world are about 500 tons per km per lane. However, there are obvious differences between cities when the estimation of the life cycle CO2 emissions of highway projects includes bridges and tunnels. The findings of the study could offer decision makers a more comprehensive reference for understanding the contribution of road infrastructure to climate change, especially the contribution from road infrastructure construction activities in China. In addition, the identified hotspots of emission sources provide insights into how to reduce road carbon emissions for the development of sustainable transportation.
Keywords: carbon dioxide emissions, construction activities, highway, life cycle assessment
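A small sketch of the normalization to the functional unit described above is shown below; the stage totals, project length, and lane count are placeholder values, not results from the study.

```python
# Hedged sketch: normalizing stage-level construction emissions to the
# functional unit (tons CO2 per km per lane). All numbers are illustrative.
stage_emissions_tCO2 = {                 # total emissions of one example project
    "material production": 52_000,
    "to-site transportation": 6_500,
    "on-site construction": 4_800,
}
length_km, lanes = 10.5, 4               # assumed project length and lane count

total = sum(stage_emissions_tCO2.values())
for stage, e in stage_emissions_tCO2.items():
    print(f"{stage:24s}: {e / (length_km * lanes):7.1f} t/km/lane ({e / total:.0%})")
print(f"{'total':24s}: {total / (length_km * lanes):7.1f} t/km/lane")
```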
Procedia PDF Downloads 269
179 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network; they generate enormous volumes of real-time, high-speed data to help organizations and companies make intelligent decisions. Integrating these enormous data from multiple sources and transferring them to the appropriate client is fundamental to IoT development. The handling of this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained, and to provide energy-efficient communication, they go to sleep or come online/wake up periodically and aperiodically, depending on the traffic load, in order to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, then the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of the device from the sink node becomes greater than required, the connection is lost. Due to such disconnections, other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data. Due to this dynamic nature of IoT devices, we do not know the actual reason for abnormal data. If data are of poor quality, decisions are likely to be unsound. It is highly important to process data and estimate data quality before bringing them to use in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic, and statistical methods to perform analysis on stored data in the data processing layer, without focusing on the challenges and issues that arise from the dynamic nature of IoT devices and how it impacts data quality. A comprehensive review of the impact of the dynamic nature of IoT devices on data quality is done in this research, and a data quality model that can deal with this challenge and produce good-quality data is presented. This research presents the data quality model for sensors monitoring water quality. DBSCAN clustering and weather sensors are used to build the data quality model for the sensors monitoring water quality. An extensive study has been done on the relationship between the data of the weather sensors and of the sensors monitoring the water quality of lakes and beaches. A detailed theoretical analysis is presented, covering the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. This model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness and the patterns of missing values, and checks the accuracy of the data with the help of the clusters' positions. Finally, a statistical analysis is performed on the clusters formed as a result of DBSCAN, and consistency is evaluated through the coefficient of variation (CoV).
Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)
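A minimal sketch of the screening step described above is given below: DBSCAN flags outliers in joint readings and the coefficient of variation summarizes the consistency of each resulting cluster. The synthetic features are placeholders, not the actual sensor schema or the model developed in the research.

```python
# Hedged sketch: DBSCAN-based outlier flagging plus per-cluster coefficient of
# variation (CoV) on synthetic water-quality readings. Feature names, the
# injected anomalies, and the DBSCAN parameters are invented for illustration.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
readings = np.column_stack([
    rng.normal(21.0, 0.8, 500),      # e.g. water temperature, deg C
    rng.normal(120.0, 15.0, 500),    # e.g. turbidity proxy
])
readings[::97] += [8.0, 200.0]       # inject a few abnormal readings

X = StandardScaler().fit_transform(readings)
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X)

print("flagged outliers:", int(np.sum(labels == -1)))
for k in sorted(set(labels) - {-1}):
    cluster = readings[labels == k]
    cov = cluster.std(axis=0) / cluster.mean(axis=0)   # per-feature CoV
    print(f"cluster {k}: size={len(cluster)}, CoV={np.round(cov, 3)}")
```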
Procedia PDF Downloads 139
178 Energy Efficiency Measures in Canada's Iron and Steel Industry
Authors: A. Talaei, M. Ahiduzzaman, A. Kumar
Abstract:
In Canada, an increase in the production of iron and steel is anticipated to satisfy the increasing demand for iron and steel in the oil sands and automobile industries. It is predicted that GHG emissions from the iron and steel sector will show a continuous increase until 2030 and that, with emissions of 20 million tonnes of carbon dioxide equivalent, the sector will account for more than 2% of total national GHG emissions, or 12% of industrial emissions (i.e., a 25% increase from 2010 levels). Therefore, there is an urgent need to improve the energy intensity and to implement energy efficiency measures in the industry to reduce the GHG footprint. This paper analyzes the current energy consumption in the Canadian iron and steel industry and identifies energy efficiency opportunities to improve the energy intensity and mitigate greenhouse gas emissions from this industry. In order to do this, a demand tree is developed representing the different iron and steel production routes and the technologies within each route. The main energy consumer within the industry is found to be fired heaters, accounting for 81% of overall energy consumption, followed by motor systems and steam generation, each accounting for 7% of total energy consumption. Eighteen different energy efficiency measures are identified which will help improve efficiency in the various subsectors of the industry. In the sintering process, heat recovery from coolers provides a high potential for energy saving and can be integrated in both new and existing plants. Coke dry quenching (CDQ) has the same advantages. Within the blast furnace iron-making process, the injection of large amounts of coal into the furnace appears to be more effective than any other option in this category. In addition, because coal-powered electricity is being phased out in Ontario (where the majority of iron and steel plants are located), there will be surplus coal that could be used in iron and steel plants. In the steel-making processes, the recovery of Basic Oxygen Furnace (BOF) gas and scrap preheating provide considerable potential for energy savings in BOF and Electric Arc Furnace (EAF) steel-making processes, respectively. However, despite the energy savings potential, BOF gas recovery is not applicable in existing plants using steam recovery processes. Given that the share of EAF in steel production is expected to increase, the application potential of the technology will be limited. On the other hand, the long lifetime of the technology and the expected capacity increase of EAF make scrap preheating a justified energy-saving option. This paper presents the results of the assessment of the above-mentioned options in terms of costs and GHG mitigation potential.
Keywords: Iron and Steel Sectors, Energy Efficiency Improvement, Blast Furnace Iron-making Process, GHG Mitigation
Procedia PDF Downloads 396177 Field Performance of Cement Treated Bases as a Reflective Crack Mitigation Technique for Flexible Pavements
Authors: Mohammad R. Bhuyan, Mohammad J. Khattak
Abstract:
Deterioration of flexible pavements due to crack reflection from the soil-cement base layer is a major concern around the globe. The service life of flexible pavement diminishes significantly because of reflective cracks, and highway agencies have struggled for decades to prevent or mitigate these cracks in order to increase pavement service lives. The root cause of reflective cracking is the shrinkage cracking that occurs in soil-cement bases during the cement hydration process. The primary factor that causes the shrinkage is the cement content of the soil-cement mixture. With increasing cement content, the soil-cement base gains the strength and durability necessary to withstand traffic loads, but at the same time, higher cement content creates more shrinkage, resulting in more reflective cracks in pavements. Historically, various states of the USA have used soil-cement bases for constructing flexible pavements. The state of Louisiana (USA) had been using 8 to 10 percent cement content to manufacture soil-cement bases. Such traditional soil-cement bases yield a 2.0 MPa (300 psi) 7-day compressive strength and are termed cement stabilized design (CSD). As these CSD bases generate significant reflective cracking, another soil-cement base design has been utilized, adding 4 to 6 percent cement content, called cement treated design (CTD), which yields a 1.0 MPa (150 psi) 7-day compressive strength. The reduction of cement content in the CTD base is expected to minimize shrinkage cracking, thus increasing pavement service lives. Hence, this research study evaluates the long-term field performance of CTD bases with respect to CSD bases used in flexible pavements. The Pavement Management System of the state of Louisiana was utilized to select flexible pavement projects with CSD and CTD bases that had good historical records and time-series distress performance data. It should be noted that the state collects roughness and distress data for each 1/10th-mile section every two years. In total, 120 CSD and CTD projects were analyzed in this research, where more than 145 miles (CTD) and 175 miles (CSD) of roadway data were accepted for performance evaluation and benefit-cost analyses. Here, the service life extension and the area based on distress performance were considered as benefits. It was found that CTD bases increased pavement service lives by 1 to 5 years based on transverse cracking compared to CSD bases, whereas the service lives based on longitudinal and alligator cracking, rutting, and roughness index remained the same. Hence, CTD bases provide some service life extension (2.6 years on average) for the controlling distress, transverse cracking, while being less expensive due to their lower cement content. Consequently, CTD bases were found to be 20% more cost-effective than the traditional CSD bases when both were compared by the net benefit-cost ratio obtained from all distress types.Keywords: cement treated base, cement stabilized base, reflective cracking, service life, flexible pavement
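An illustrative sketch of the net benefit-cost comparison described above, where benefits combine service-life extension with a distress-area term; every number is an assumption, so the computed advantage will not reproduce the study's 20% figure.

```python
# Illustrative net benefit-cost comparison of the two base types; the
# benefit scores and relative costs below are assumptions, not the
# study's inputs, so the resulting advantage is only indicative.
def net_benefit_cost_ratio(benefit_score, cost):
    return (benefit_score - cost) / cost

# Assumed benefit scores: base service life of 12 yr for CSD, +2.6 yr for CTD,
# plus an equal distress-area benefit of 4 units for both designs.
csd = net_benefit_cost_ratio(benefit_score=12.0 + 4.0, cost=11.0)
ctd = net_benefit_cost_ratio(benefit_score=14.6 + 4.0, cost=10.3)   # cheaper: less cement
print(f"net BCR -- CSD: {csd:.2f}, CTD: {ctd:.2f}")
```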
Procedia PDF Downloads 166176 A Green Optically Active Hydrogen and Oxygen Generation System Employing Terrestrial and Extra-Terrestrial Ultraviolet Solar Irradiance
Authors: H. Shahid
Abstract:
Due to ozone layer depletion on Earth, the incoming ultraviolet (UV) radiation is recorded at high index levels, such as 25 in southern Peru (13.5° S, 3360 m a.s.l.). The planned human habitation of Mars, where UV radiation is also quite high, is likewise under discussion. Exposure to UV is hazardous to health and is avoided using UV filters. On the other hand, artificial UV sources are in use for water thermolysis to generate hydrogen and oxygen, which are later used as fuels. This paper presents the utility of employing UVA (315-400 nm) and UVB (280-315 nm) electromagnetic radiation from the solar spectrum to design and implement an optically active hydrogen and oxygen generation system via thermolysis of desalinated seawater. The proposed system finds its utility on Earth and can be deployed in the future on Mars (UVB). In this system, using Fresnel lens arrays as an optical filter and via active tracking, the ultraviolet light from the sun is concentrated and then allowed to fall on two sub-systems of the proposed system. The first sub-system generates electrical energy by using UV-based tandem photovoltaic cells such as GaAs/GaInP/GaInAs/GaInAsP, and the second elevates the temperature of the water to lower the electric potential required to electrolyze it. An empirical analysis is performed at 30 atm, and the electrical potential is observed to be the main controlling factor for the rate of production of hydrogen and oxygen and hence the operating point (Q-point) of the proposed system. The hydrogen production rate of a commercial system in static mode (650 °C, 0.6 V) is taken as a reference. A solid oxide electrolyzer cell (SOEC) is used in the proposed (UV) system for hydrogen and oxygen production. To achieve the same amount of hydrogen as in the reference system, with a minimum chamber operating temperature of 850 °C in static mode, the corresponding required electrical potential is calculated as 0.3 V. In practice, however, the hydrogen production rate is observed to be lower than that of the reference system at 850 °C and 0.3 V. It has been shown empirically that hydrogen production can be enhanced by raising the electrical potential to 0.45 V, which increases the production rate to the same level as that of the reference system. Therefore, 850 °C and 0.45 V are assigned as the Q-point of the proposed system, which is actively stabilized via proportional-integral-derivative controllers that adjust the axial position of the lens arrays for both subsystems. The functionality of the controllers is based on keeping the chamber fixed at 850 °C (minimum operating temperature) and 0.45 V (the Q-point) to realize the same hydrogen production rate as the reference system.Keywords: hydrogen, oxygen, thermolysis, ultraviolet
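A minimal sketch, under a toy first-order thermal model and assumed gains, of the PID regulation described above: the controller moves the Fresnel lens array along its axis to hold the chamber at the 850 °C set point (only the temperature loop is shown; the 0.45 V regulation would follow the same pattern).

```python
# Toy PID loop: adjust the axial lens position to hold the chamber at 850 °C.
# Gains, the lens-to-temperature mapping, and the time constants are assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.5, ki=0.02, kd=0.0, dt=1.0)   # assumed gains, 1 s control step
temperature = 300.0                          # °C, assumed initial chamber temperature
for _ in range(600):
    lens_position = pid.update(850.0, temperature)       # axial lens offset (toy units)
    equilibrium = 25.0 + 12.0 * lens_position             # toy lens-to-temperature mapping
    temperature += 0.1 * (equilibrium - temperature)      # first-order thermal response
print(f"chamber temperature after 10 min ≈ {temperature:.0f} °C")
```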
Procedia PDF Downloads 133175 A Cloud-Based Federated Identity Management in Europe
Authors: Jesus Carretero, Mario Vasile, Guillermo Izquierdo, Javier Garcia-Blas
Abstract:
Currently, there is a so-called ‘identity crisis’ in cybersecurity caused by the substantial security, privacy, and usability shortcomings encountered in existing systems for identity management. Federated Identity Management (FIM) could be a solution for this crisis, as it is a method that facilitates the management of identity processes and policies among collaborating entities without enforcing a global consistency, which is difficult to achieve when legacy ID systems exist. To cope with this problem, the Connecting Europe Facility (CEF) initiative proposed in 2014 a federated solution in anticipation of the adoption of Regulation (EU) N°910/2014, the so-called eIDAS Regulation. At present, a network of eIDAS Nodes is being deployed at the European level so that every citizen recognized by a member state is also recognized within the trust network at the European level, enabling the consumption of services in other member states that until now were not allowed or whose concession was tedious. This is a very ambitious approach, since it enables cross-border authentication of member-state citizens without the need to unify the authentication method (eID scheme) of the member state in question. However, this federation is currently managed by member states and is initially applied only to citizens and public organizations. The goal of this paper is to present the results of a European project, named eID@Cloud, that focuses on the integration of eID in five cloud platforms belonging to authentication service providers of different EU member states that act as Service Providers (SP) for private entities. We propose an initiative based on a private eID scheme for both natural and legal persons. The methodology followed in the eID@Cloud project is that each Identity Provider (IdP) is subscribed to an eIDAS Node Connector, which requests authentication and is in turn subscribed to an eIDAS Node Proxy Service, which issues authentication assertions. To cope with high loads, load balancing is supported in the eIDAS Node. The eID@Cloud project is still ongoing, but we already have some important outcomes. First, we have deployed the federated identity nodes and tested them from the security and performance points of view. The pilot prototype has shown the feasibility of deploying this kind of system, ensuring good performance due to the replication of the eIDAS nodes and the load-balancing mechanism. Second, our solution avoids the propagation of identity data out of the native domain of the user or entity being identified, which avoids problems well known in cybersecurity due to network interception, man-in-the-middle attacks, etc. Last, but not least, this system allows any country or collectivity to connect easily, providing incremental development of the network and avoiding difficult political negotiations to agree on a single authentication format (which would be a major stopper).Keywords: cybersecurity, identity federation, trust, user authentication
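A highly simplified sketch of how authentication requests can be spread across replicated proxy nodes, in the spirit of the load balancing mentioned above; the node addresses, request fields, and mock assertion are placeholders and do not represent the actual eIDAS SAML exchange.

```python
# Simplified dispatch of authentication requests from an IdP-side connector
# to replicated proxy nodes using round-robin load balancing. Hostnames,
# request fields, and the assertion structure are illustrative placeholders.
import itertools
from dataclasses import dataclass

@dataclass
class AuthRequest:
    citizen_id: str
    member_state: str

class ProxyPool:
    """Round-robin load balancing across replicated proxy-service nodes."""
    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)

    def authenticate(self, request: AuthRequest) -> dict:
        node = next(self._cycle)   # pick the next replica
        # A real node would validate the request and issue a signed assertion;
        # here we only return a mock record naming the node that handled it.
        return {"issuer": node, "subject": request.citizen_id,
                "member_state": request.member_state, "status": "Success"}

pool = ProxyPool(["proxy-node-1.example.eu", "proxy-node-2.example.eu"])  # hypothetical hosts
for i in range(4):
    print(pool.authenticate(AuthRequest(citizen_id=f"ES/AT/{i:04d}", member_state="ES")))
```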
Procedia PDF Downloads 166174 Numerical Investigation of Combustion Chamber Geometry on Combustion Performance and Pollutant Emissions in an Ammonia-Diesel Common Rail Dual-Fuel Engine
Authors: Youcef Sehili, Khaled Loubar, Lyes Tarabet, Mahfoudh Cerdoun, Clement Lacroix
Abstract:
As emissions regulations grow more stringent and traditional fuel sources become increasingly scarce, incorporating carbon-free fuels in the transportation sector emerges as a key strategy for mitigating the impact of greenhouse gas emissions. While the utilization of hydrogen (H2) presents significant technological challenges, as evident in the engine limitation known as knocking, ammonia (NH3) provides a viable alternative that overcomes this obstacle and offers convenient transportation, storage, and distribution. Moreover, the implementation of a dual-fuel engine using ammonia as the primary gas is promising, delivering both ecological and economic benefits. However, when employing this combustion mode, the substitution of ammonia at high rates adversely affects combustion performance and leads to elevated emissions of unburnt NH3, especially under high loads, so this combustion mode requires special treatment. This study aims to simulate combustion in a common rail direct injection (CRDI) dual-fuel engine, considering the baseline geometry of the combustion chamber as well as fifteen (15) alternative proposed geometries, to determine the configuration that exhibits superior engine performance under high-load conditions. The research presented here focuses on improving the understanding of the equations and mechanisms involved in the combustion of finely atomized jets of liquid fuel and on mastering the CONVERGE™ code, which facilitates the simulation of this combustion process. By analyzing the effect of the piston bowl shape on the performance and emissions of a diesel engine operating in dual-fuel mode, this work combines knowledge of combustion phenomena with proficiency in the calculation code. To select the optimal geometry, an evaluation of the swirl, tumble, and squish flow patterns was conducted for the fifteen (15) studied geometries. Variations in in-cylinder pressure, heat release rate, turbulence kinetic energy, turbulence dissipation rate, and emission rates were observed, while thermal efficiency and specific fuel consumption were estimated as functions of crankshaft angle. To maximize thermal efficiency, a synergistic approach involving the enrichment of intake air with oxygen (O2) and the enrichment of the primary fuel with hydrogen (H2) was implemented. Based on the results obtained, it is worth noting that the proposed geometry (T8_b8_d0.6/SW_8.0) outperformed the others in terms of flow quality, reduction of emitted pollutants (with a reduction of more than 90% in unburnt NH3), and engine efficiency, with an impressive improvement of more than 11%.Keywords: ammonia, hydrogen, combustion, dual-fuel engine, emissions
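The efficiency figures quoted above ultimately come down to an energy balance between brake output and the chemical energy of the diesel pilot, ammonia, and hydrogen admitted per unit time. The sketch below shows that bookkeeping with typical lower heating values; the power and fuel-flow operating points are invented placeholders, not the study's results.

```python
# Brake thermal efficiency for a diesel pilot plus ammonia (and optional H2)
# charge. LHVs are typical literature values; the operating points are assumed.
LHV = {"diesel": 42.5e6, "nh3": 18.6e6, "h2": 120.0e6}   # J/kg, lower heating values

def brake_thermal_efficiency(p_brake_kw, fuel_flows_kg_per_h):
    """fuel_flows_kg_per_h: dict of fuel name -> mass flow in kg/h."""
    fuel_power_w = sum(m / 3600.0 * LHV[f] for f, m in fuel_flows_kg_per_h.items())
    return p_brake_kw * 1e3 / fuel_power_w

baseline = brake_thermal_efficiency(40.0, {"diesel": 4.0, "nh3": 12.0})              # assumed point
optimized = brake_thermal_efficiency(40.0, {"diesel": 3.4, "nh3": 10.4, "h2": 0.15})  # assumed point
print(f"baseline eta = {baseline:.3f}, optimized eta = {optimized:.3f}, "
      f"gain = {(optimized / baseline - 1):.1%}")
```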
Procedia PDF Downloads 74173 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation
Authors: Li-hsing Shih, Wei-Jen Hsu
Abstract:
Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology development and decreasing cost. Governments of many countries have introduced regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study forecasts the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, with five product attributes selected for both EVs and internal combustion engine vehicles (ICEVs). A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities of the two vehicles are calculated for a respondent, revealing his or her choice; once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers' choices. This study adopts a learning-curve assumption to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents using their part-worth utility functions. For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained from a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a single-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.Keywords: conjoint model, electric vehicle, learning curve, Monte Carlo simulation
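A minimal sketch, under invented utilities and price data, of the three-phase procedure: additive part-worth utilities (phase 1), a learning-curve price forecast for 2030 (phase 2), and a Monte Carlo pass that turns the price uncertainty into a distribution of market shares (phase 3).

```python
# Sketch of the three-phase forecast. The utility values, learning rate,
# prices, and respondent counts below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n_respondents = 1000

# Phase 1: additive conjoint model -- per-respondent part-worth utilities for
# a price attribute (utility per $1,000) and a constant for each technology.
beta_price = rng.normal(-0.10, 0.03, n_respondents)     # price sensitivity (assumed)
base_util_ev = rng.normal(1.0, 1.5, n_respondents)      # non-price utility of an EV (assumed)
base_util_icev = rng.normal(1.5, 1.0, n_respondents)    # non-price utility of an ICEV (assumed)

# Phase 2: learning curve p = p0 * (cumulative volume ratio)**(-b); an assumed
# 18% learning rate gives b = -log2(1 - 0.18). Uncertainty enters via the volume ratio.
p0, learning_rate = 45.0, 0.18                          # price in $1,000 (assumed)
b = -np.log2(1 - learning_rate)
volume_ratio_2030 = rng.lognormal(mean=np.log(8), sigma=0.4, size=1000)
ev_prices_2030 = p0 * volume_ratio_2030 ** (-b)

# Phase 3: Monte Carlo over the simulated price; each respondent picks the
# vehicle with the higher total utility.
icev_price = 30.0
shares = []
for p_ev in ev_prices_2030:
    u_ev = base_util_ev + beta_price * p_ev
    u_icev = base_util_icev + beta_price * icev_price
    shares.append(np.mean(u_ev > u_icev))
print(f"EV share in 2030: median {np.median(shares):.1%}, "
      f"90% interval [{np.percentile(shares, 5):.1%}, {np.percentile(shares, 95):.1%}]")
```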
Procedia PDF Downloads 69172 Simulation of Technological, Energy and GHG Comparison between a Conventional Diesel Bus and E-bus: Feasibility to Promote E-bus Change in High Lands Cities
Authors: Riofrio Jonathan, Fernandez Guillermo
Abstract:
Renewable energy represented around 80% of the energy matrix for power generation in Ecuador during 2020, so current public policies focus on taking advantage of the high share of renewable sources to carry out several electrification projects. These projects are part of the portfolio sent to the United Nations Framework Convention on Climate Change (UNFCCC) as a commitment to reduce greenhouse gas (GHG) emissions under the established nationally determined contribution (NDC). In this sense, the Ecuadorian Organic Energy Efficiency Law (LOEE), published in 2019, promotes e-mobility as one of its main milestones; in fact, it states that new vehicles for urban and interurban use must be e-buses from 2025 onward. As a result, and for a successful implementation of this technological change in the national context, it is important to carry out surveys focused on technical and geographical aspects to maintain the quality of service in both the electricity and transport sectors. Therefore, this research presents a technological and energy comparison between a conventional diesel bus and its equivalent e-bus. Both vehicles fulfill all the technical requirements to operate in the case-study city, Ambato, in the province of Tungurahua, Ecuador. In addition, the analysis includes the development of a model for the energy estimation of both technologies, applied specifically to a highland city such as Ambato. The altimetry of the most important bus routes in the city varies from 2557 to 3200 m a.s.l. at the lowest and highest points, respectively. These operating conditions add a degree of novelty to this work. The technical specifications of the diesel buses are defined following the common features of buses registered in Ambato, whereas the specifications for the e-buses come from the most common units introduced in Latin America, because there is not enough evidence from similar cities at the moment. The results will be valuable input for decision-makers since electricity demand forecasts, energy savings, costs, and greenhouse gas emissions are computed. Indeed, GHG accounting is important because it supports reporting under the transparency framework that is part of the Paris Agreement. Finally, the presented results correspond to stage I of the project “Analysis and Prospective of Electromobility in Ecuador and Energy Mix towards 2030” supported by the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ).Keywords: high altitude cities, energy planning, NDC, e-buses, e-mobility
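A minimal sketch of the kind of longitudinal energy model such a comparison relies on, where the road-grade term dominates given Ambato's 2557-3200 m altitude range; the vehicle parameters and the three-segment profile are assumptions, not the project's data, and regenerative braking is ignored.

```python
# Toy route-level traction energy model: rolling, aerodynamic, and grade work
# summed over route segments. All parameter values are assumptions.
import numpy as np

def traction_energy_kwh(speeds_ms, elevations_m, distances_m,
                        mass_kg=18000, crr=0.008, cd=0.65, area_m2=8.0, rho=0.90):
    """Sum of rolling, aerodynamic, and grade work along a route (no regeneration)."""
    g = 9.81
    e = 0.0
    for v, dz, dx in zip(speeds_ms, np.diff(elevations_m), distances_m):
        grade_force = mass_kg * g * dz / dx            # climbing (or descending) force
        rolling = crr * mass_kg * g
        aero = 0.5 * rho * cd * area_m2 * v ** 2       # rho reduced for high altitude
        f = max(rolling + aero + grade_force, 0.0)     # clip: no regeneration in this sketch
        e += f * dx
    return e / 3.6e6                                    # J -> kWh

# Toy 3-segment profile (assumed): 1 km each, climbing from 2557 m
energy = traction_energy_kwh(speeds_ms=[8, 8, 8],
                             elevations_m=[2557, 2580, 2610, 2650],
                             distances_m=[1000, 1000, 1000])
print(f"traction energy ≈ {energy:.1f} kWh; divide by drivetrain efficiency for battery draw")
```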
Procedia PDF Downloads 151171 Athletics and Academics: A Mixed Methods Enquiry on University/College Student Athletes' Experiences
Authors: Tshepang Tshube
Abstract:
The primary purpose of this study was to examine student-athletes’ experiences, particularly an in-depth account of balancing school and sport. The secondary objective was to assess student-athletes’ susceptibility to the effects of the “dumb-jock” stereotype threat and to determine the strength of athletic and academic identity as predicted by the extent to which the stereotype is perceived by student-athletes. Sub-objectives were to (a) examine the support structures available to student-athletes in their respective academic institutions, (b) establish the most effective ways to address student-athletes’ learning needs, (c) establish the crucial entourage members who play a pivotal role in student-athletes’ academic pursuits, and (d) identify unique and effective ways lecturers and coaches can contribute to student-athletes’ learning experiences. To achieve the above objectives, the study used a mixed methods approach. A total of 110 student-athletes from colleges and universities in Botswana completed an online survey, followed by semi-structured interviews with eight student-athletes and four coaches. The online survey assessed student-athletes’ demographic variables and measured athletic identity (AIMS), academic identity (modified from AIMS), and perceived stereotype threat. Student-athletes reported a slightly higher academic identity (M = 5.9, SD = .85) compared to athletic identity (M = 5.4, SD = 1.0). For stereotype threat, student-athletes reported a moderate mean (M = 3.6, SD = .82), just above the midpoint of the 7-point scale. A univariate ANOVA was conducted to determine whether there was any significant difference between university and college brackets in Botswana with regard to three variables: athletic identity, student identity, and stereotype threat. The only significant difference was in academic identity (post hoc Tukey, student identity: Bracket A < Bracket B, Bracket C), with Bracket A schools being the least athletically competitive; Brackets C and B are the most athletically competitive brackets in Botswana. Follow-up interviews with student-athletes and coaches were conducted, lasting an average of 55 minutes. Following the interviews, all recordings were transcribed, the first step in the qualitative data analysis process. The researcher and an independent academic with experience in qualitative research independently listened to all recordings of the interviews and read the transcripts several times. The qualitative results indicate that even though student-athletes reported a slightly higher student identity, there are parallels between sport and academic structures on college campuses. The results also provide evidence of a lack of academic support for student-athletes. It is therefore crucial for student-athletes to have access to academic support services (e.g., tutoring, flexible study times, and reduced academic loads) to meet their academic needs. Coaches and lecturers play a fundamental role in supporting student-athletes, and coaches’ and professors’ confidence in student-athletes’ academic ability enhances student-athletes’ academic confidence. Results are discussed within stereotype threat theory.Keywords: athletic identity, collegiate sport, stereotype threat, student athletes
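A brief sketch of the bracket comparison reported above (one-way ANOVA followed by a Tukey HSD post hoc on academic identity); the simulated scores merely mimic the reported pattern (Bracket A below Brackets B and C), and the bracket sizes are assumed, not the study's data.

```python
# One-way ANOVA with Tukey HSD on simulated academic-identity scores.
# Group means and the 40/35/35 split across brackets are assumptions.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
data = pd.DataFrame({
    "bracket": ["A"] * 40 + ["B"] * 35 + ["C"] * 35,   # assumed split of the 110 respondents
    "academic_identity": np.concatenate([
        rng.normal(5.5, 0.8, 40),   # Bracket A lower, matching the reported Tukey result
        rng.normal(6.1, 0.8, 35),
        rng.normal(6.1, 0.8, 35),
    ]),
})
groups = [g["academic_identity"].values for _, g in data.groupby("bracket")]
print(f_oneway(*groups))                                        # omnibus F test
print(pairwise_tukeyhsd(data["academic_identity"], data["bracket"]))  # pairwise comparisons
```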
Procedia PDF Downloads 462170 Testing of Infill Walls with Joint Reinforcement Subjected to in Plane Lateral Load
Authors: J. Martin Leal-Graciano, Juan J. Pérez-Gavilán, A. Reyes-Salazar, J. H. Castorena, J. L. Rivera-Salas
Abstract:
The experimental results on the global behavior of twelve 1:2-scaled reinforced concrete frames subjected to in-plane lateral load are presented. The main objective was to generate experimental evidence about the use of steel bars within mortar bed joints as shear reinforcement in infill walls. Similar to the Canadian and New Zealand standards, the Mexican code includes specifications for this type of reinforcement. However, these specifications were obtained through experimental studies of load-bearing walls, mainly confined walls. Little information is found in the existing literature about the effects of joint reinforcement on the seismic behavior of infill masonry walls. Consequently, the Mexican code establishes the same equations to estimate the contribution of joint reinforcement for both confined walls and infill walls. A confined masonry construction and a reinforced concrete frame infilled with masonry walls have similar appearances. However, substantial differences exist between these two construction systems, mainly related to the sequence of construction and to how these structures support vertical and lateral loads. To achieve the stated objective, ten reinforced concrete frames with masonry infill walls were built and tested in pairs, with both specimens in each pair having identical characteristics except that one of them included joint reinforcement. The variables between pairs were the type of units, the size of the columns of the frame, and the aspect ratio of the wall. All cases included tie-columns and tie-beams on the perimeter of the wall to anchor the joint reinforcement. In addition, two bare frames with characteristics identical to those of the infilled frames were tested; the purpose was to investigate the effects of the infill wall on the behavior of the system under in-plane lateral load. The experimental results were also compared with the predictions of the Mexican code. All the specimens were tested as cantilevers under reversible cyclic lateral load. To simulate gravity load, a constant vertical load was applied at the top of the columns. The results indicate that the contribution of the joint reinforcement to lateral strength depends on the size of the columns of the frame. Larger columns produce a failure mode that is predominantly sliding, and sliding inhibits the formation of new inclined cracks, which are necessary to activate (deform) the joint reinforcement. Regarding the effects of joint reinforcement on the performance of confined masonry walls, many findings were confirmed for infill walls: this type of reinforcement increases the lateral strength of the wall, produces more distributed cracking, and reduces the width of the cracks. Moreover, it reduces the ductility demand of the system at maximum strength. The prediction of the lateral strength provided by the Mexican code is appropriate in some cases; however, the effect of the size of the columns on the contribution of joint reinforcement needs to be better understood.Keywords: experimental study, infill wall, infilled frame, masonry wall
Procedia PDF Downloads 77169 Luminescent Dye-Doped Polymer Nanofibers Produced by Electrospinning Technique
Authors: Monica Enculescu, A. Evanghelidis, I. Enculescu
Abstract:
Among the numerous methods for obtaining polymer nanofibers, the electrospinning technique distinguishes itself through the growing interest driven by its proven utility, which has led to the development and improvement of the method and to the appearance of novel materials. In particular, the production of polymeric nanofibers into which different dopants are introduced has been intensively studied in recent years because of the increased interest in obtaining functional electrospun nanofibers. Electrospinning is a facile method of obtaining polymer nanofibers with diameters from tens of nanometers to micrometric sizes that are cheap, flexible, scalable, functional, and biocompatible. Besides their multiple applications in medicine, polymeric nanofibers obtained by electrospinning permit the manipulation of light at nanometric dimensions when doped with organic dyes or different nanoparticles. It is a simple technique that uses an electric field to draw fine polymer nanofibers from solutions and does not require complicated devices or high temperatures. Different morphologies of the electrospun nanofibers can be obtained for the same polymeric host when different parameters of the electrospinning process are used. Consequently, the optical properties of the electrospun nanofibers can be tuned (e.g., changing the wavelength of the emission peak) by varying the parameters of the fabrication method. We focus on obtaining doped polymer nanofibers with enhanced optical properties using the electrospinning technique. The aim of this paper is to produce dye-doped polymer nanofiber mats incorporating uniformly dispersed dyes. Transmission and fluorescence of the fibers will be evaluated by spectroscopy methods. The morphological properties of the electrospun dye-doped polymer fibers will be evaluated using scanning electron microscopy (SEM). We will tailor the luminescent properties of the material by doping the polymer (polyvinylpyrrolidone or polymethylmethacrylate) with different dyes (coumarins, rhodamines, and sulforhodamines). The tailoring takes into consideration the possibility of changing the luminescent properties of electrospun polymeric nanofibers doped with different dyes by using different parameters for the electrospinning technique (electric voltage, distance between electrodes, flow rate of the solution, etc.). Furthermore, we can evaluate the influence of the concentration of the dyes on the emissive properties of dye-doped polymer nanofibers by using different concentrations. The advantages offered by the electrospinning technique when producing polymeric fibers are given by the simplicity of the method, the tunability of the morphology allowed by the possibility of controlling all the process parameters (temperature, viscosity of the polymeric solution, applied voltage, distance between electrodes, etc.), and the absence of the harsh and supplementary chemicals used in traditional nanofabrication techniques. Acknowledgments: The authors acknowledge the financial support received through IFA CEA Project No. C5-08/2016.Keywords: electrospinning, luminescence, polymer nanofibers, scanning electron microscopy
Procedia PDF Downloads 212168 Design Approach to Incorporate Unique Performance Characteristics of Special Concrete
Authors: Devendra Kumar Pandey, Debabrata Chakraborty
Abstract:
The advancement of various concrete ingredients such as plasticizers, additives, and fibers has enabled concrete technologists to develop many viable varieties of special concrete in recent decades. Such varieties offer significant enhancements in the green as well as hardened properties of concrete, and a prudent selection of the appropriate type of concrete can resolve many design and application issues in construction projects. This paper focuses on the usage of self-compacting concrete, high early strength concrete, structural lightweight concrete, fiber reinforced concrete, high performance concrete, and ultra-high strength concrete in structures. The modified properties of strength at various ages, flowability, porosity, equilibrium density, flexural strength, elasticity, permeability, etc., need to be carefully studied and incorporated into the design of the structures. The paper demonstrates various mixture combinations and the concrete properties that can be leveraged. The selection of such products based on the end use of structures is proposed in order to efficiently utilize the modified characteristics of these concrete varieties. The study involves mapping the characteristics with benefits and savings for the structure from a design perspective. Self-compacting concrete in a structure is characterized by high shuttering loads, a better finish, and the feasibility of closer reinforcement spacing; the structural design procedures can be modified to specify higher formwork strength, greater height of vertical members, cover reduction, and increased ductility, and the transverse reinforcement can be spaced at closer intervals compared to regular structural concrete. Structural lightweight concrete allows structures to be designed for reduced dead load and increased insulation; member dimensions and steel requirements can be reduced in proportion to the roughly 25 to 35 percent reduction in dead load due to the self-weight of the concrete. Steel fiber reinforced concrete can be used to design grade slabs without primary reinforcement because of its 70 to 100 percent higher tensile strength; the design procedures incorporate reductions in thickness and joint spacing. High performance concrete extends the life of structures through improved paste characteristics and durability achieved by incorporating supplementary cementitious materials. Often, these mixes are also designed for slower heat generation in the initial phase of hydration, so the structural designer can incorporate the slow development of strength in the design and specify 56- or 90-day strength requirements. For designing high-rise building structures, the creep and elasticity properties of such concrete also need to be considered. Lastly, certain structures require performance under loading conditions much earlier than the final maturity of concrete; high early strength concrete has been designed to cater to a variety of usages at ages as early as 8 to 12 hours. Therefore, an understanding of the performance specifications of special concrete opens the door to a superior structural design approach.Keywords: high performance concrete, special concrete, structural design, structural lightweight concrete
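As a quick check on the dead-load figure quoted above, the sketch below compares the self-weight of a slab cast in normal-weight versus structural lightweight concrete; the densities and slab dimensions are illustrative assumptions.

```python
# Self-weight comparison of a slab in normal-weight vs. structural lightweight
# concrete; densities and slab size are typical/assumed values for illustration.
DENSITY_NORMAL = 2400.0       # kg/m3, typical normal-weight concrete
DENSITY_LIGHTWEIGHT = 1750.0  # kg/m3, typical structural lightweight concrete

def slab_self_weight_kn(span_m, width_m, thickness_m, density):
    return span_m * width_m * thickness_m * density * 9.81 / 1000.0

normal = slab_self_weight_kn(8.0, 3.0, 0.20, DENSITY_NORMAL)
light = slab_self_weight_kn(8.0, 3.0, 0.20, DENSITY_LIGHTWEIGHT)
print(f"self-weight: {normal:.0f} kN vs {light:.0f} kN "
      f"({(1 - light / normal):.0%} reduction)")   # falls in the quoted 25-35% range
```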
Procedia PDF Downloads 305