Search results for: velocity vectors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1828

298 Risk Management and Resiliency: Evaluating Walmart’s Global Supply Chain Leadership Using the Supply Chain Resilience Assessment and Management Framework

Authors: Meghan Biallas, Amanda Hoffman, Tamara Miller, Kimmy Schnibben, Janaina Siegler

Abstract:

This paper assesses Walmart’s supply chain resiliency amidst continuous supply chain disruptions. It aims to evaluate how Walmart can use supply chain resiliency theory to retain its status as a global supply chain leader. The Bloomberg terminal was used to organize Walmart’s 754 Tier-1 suppliers by the size of their relationship to Walmart. Additional data from IBISWorld and Statista were also used in the analysis. This research focused on the ten Tier-1 suppliers with the greatest percentage of their revenue attributed to Walmart. This paper also applied the firms’ information to the Supply Chain Resilience Assessment and Management (SCRAM) framework for supply chain resiliency to evaluate the firm’s capabilities, vulnerabilities, and gaps. A rubric was created to quantify Walmart’s risks using four pillars: flexibility, velocity, visibility, and collaboration. Information and examples were drawn from Walmart’s 10-K filing. For each example, a rating of 1 indicated “high” resiliency, 0 indicated “medium” resiliency, and -1 indicated “low” resiliency. Findings from this study include the following: (1) Walmart has maintained its leadership through its ability to remain resilient with regard to visibility, efficiency, capacity, and collaboration. (2) Walmart is experiencing increases in supply chain costs due to internal factors affecting the company and external factors affecting its suppliers. (3) A number of emerging supply chain risks among Walmart’s suppliers could jeopardize Walmart’s standing as a supply chain leader in the future. Using the SCRAM framework, this paper assesses how Walmart measures up to supply chain resiliency theory, identifying areas of strength as well as areas where Walmart can improve in order to remain a global supply chain leader.
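A minimal sketch of the rubric arithmetic, assuming a simple per-pillar average (the aggregation rule and the example ratings below are illustrative assumptions; the paper specifies only the 1/0/-1 scale and the four pillars):

```python
# Minimal sketch of the SCRAM-style rubric: each example drawn from the
# 10-K is rated 1 (high), 0 (medium), or -1 (low) resiliency under one of
# the four pillars. The aggregation below (mean per pillar, then overall
# mean) is an assumption for illustration; the paper does not state it.
ratings = {
    "flexibility":   [1, 0, -1],      # hypothetical example ratings
    "velocity":      [1, 1, 0],
    "visibility":    [1, 1, 1],
    "collaboration": [0, 1, 1],
}

pillar_scores = {p: sum(r) / len(r) for p, r in ratings.items()}
overall = sum(pillar_scores.values()) / len(pillar_scores)
print(pillar_scores)
print(f"overall resiliency score: {overall:.2f}")
```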

Keywords: supply chain resiliency, zone of balanced resilience, supply chain resilience assessment and management, supply chain theory

Procedia PDF Downloads 95
297 Dynamic and Thermal Characteristics of Three-Dimensional Turbulent Offset Jet

Authors: Ali Assoudi, Sabra Habli, Nejla Mahjoub Saïd, Philippe Bournot, Georges Le Palec

Abstract:

Studying the flow characteristics of a turbulent offset jet is an important topic among researchers across the world because of its various engineering applications. Common examples include injection and carburetor systems, entrainment and mixing processes in gas turbine and boiler combustion chambers, thrust-augmenting ejectors for V/STOL aircraft and HVAC systems, environmental discharges, film cooling, and many others. An offset jet is formed when a jet discharges into a medium above a horizontal solid wall that is parallel to the axis of the jet exit but offset from it by a certain distance. The structure of a turbulent offset jet can be described by three main regions. Close to the nozzle exit, an offset jet possesses characteristic features similar to those of free jets. Then, the entrainment of fluid between the jet, the offset wall, and the bottom wall creates a low-pressure zone, forcing the jet to deflect towards the wall and eventually attach to it at the impingement point. This is referred to as the Coanda effect. Further downstream, after the reattachment point, the offset jet has the characteristics of a wall jet flow. The offset jet therefore combines characteristics of free, impinging, and wall jets, and it is relatively more complex than these types of flows. The present study examines the dynamic and thermal evolution of a 3D turbulent offset jet with different offset height ratios (the ratio of the distance from the jet exit to the impingement bottom wall to the jet nozzle diameter). To achieve this purpose, a numerical study was conducted to investigate a three-dimensional offset jet flow by solving the governing Navier–Stokes equations by means of the finite volume method and the RSM second-order turbulence closure model. A detailed discussion is provided of the flow and thermal characteristics in the form of streamlines, mean velocity vectors, pressure fields, and Reynolds stresses.

Keywords: offset jet, offset ratio, numerical simulation, RSM

Procedia PDF Downloads 289
296 Scientific and Regulatory Challenges of Advanced Therapy Medicinal Products

Authors: Alaa Abdellatif, Gabrièle Breda

Abstract:

Background. Advanced therapy medicinal products (ATMPs) are innovative therapies that mainly target orphan diseases and high unmet medical needs. ATMPs include gene therapy medicinal products (GTMP), somatic cell therapy medicinal products (CTMP), and tissue-engineered therapies (TEP). Since legislation opened the way in 2007, 25 ATMPs have been approved in the EU, roughly the same number as have been approved by the U.S. Food and Drug Administration. However, not all of the approved ATMPs have successfully reached the market and retained their approval. Objectives. We aim to understand, using a systemic approach, all the factors limiting market access for these very promising therapies, so that these problems can be overcome in the future with scientific, regulatory, and commercial innovations. In contrast to recent reviews that focus on specific countries, products, or dimensions, we address all the challenges faced by ATMP development today. Methodology. We used mixed methods and a multi-level approach for data collection. First, we performed an updated academic literature review on ATMP development and its scientific and market access challenges (papers published between 2018 and April 2023). Second, we analyzed industry feedback from cell and gene therapy webinars and white papers published by providers and pharmaceutical companies. Finally, we established a comparative analysis of the regulatory guidelines published by the EMA and the FDA for ATMP approval. Results. Among the main challenges in bringing these therapies to market are high development costs: developing ATMPs is expensive due to the need for specialized manufacturing processes. Furthermore, the regulatory pathways for ATMPs are often complex and can vary between countries, making it challenging to obtain approval and ensure compliance with different regulations. As a result of the high costs associated with ATMPs, challenges in obtaining reimbursement from healthcare payers lead to limited patient access to these treatments. ATMPs are often developed for orphan diseases, which means that the patient population available for clinical trials is limited, making it challenging to demonstrate safety and efficacy. In addition, the complex manufacturing processes required for ATMPs can make it challenging to scale up production to meet demand, which can limit availability and increase costs. Finally, ATMPs face safety and efficacy challenges: serious adverse events, such as toxicity related to the use of viral vectors or cell therapy, and issues with starting materials and donor-related aspects. Conclusion. Our mixed-method analysis found that ATMPs face a number of challenges in their development, regulatory approval, and commercialization, and that addressing these challenges requires collaboration between industry, regulators, healthcare providers, and patient groups. This first analysis will help us to address, for each challenge, proper and innovative solutions in order to increase the number of ATMPs approved and reaching patients.

Keywords: advanced therapy medicinal products (ATMPs), product development, market access, innovation

Procedia PDF Downloads 58
295 Fluvial Stage-Discharge Rating of a Selected Reach of Jamuna River

Authors: Makduma Zahan Badhan, M. Abdul Matin

Abstract:

A study has been undertaken to develop a fluvial stage-discharge rating curve for the Jamuna River. Past cross-sectional surveys of the Jamuna River reach between Sirajgonj and Tangail have been analyzed. The analysis includes the estimation of discharge carrying capacity, possible maximum scour depth, and sediment transport capacity of the selected reaches. To predict the discharge and sediment carrying capacity, streamflow data, including cross-sectional area, top width, water surface slope, and median diameter of the bed material at selected stations, have been collected, with some quantities calculated from reduced-level data. A well-known resistance equation has been adopted and modified to a simple form for use in the present analysis. The modified resistance equation has been used to calculate the mean velocity through the channel sections. In addition, a sediment transport equation has been applied to predict the transport capacity of the various sections. Results show that the existing drainage sections of the Jamuna channel reach under study have adequate carrying capacity under existing bank-full conditions, but these reaches are subject to bed erosion even in low-flow situations. Regarding sediment transport rate, the channel flow is estimated to carry a relatively high concentration of bed material. Finally, stage-discharge curves for the various sections have been developed. Based on the stage-discharge rating data of the various sections, the water surface profile and sediment-rating curve of the Jamuna River have been developed, and flooding conditions have been analyzed from the predicted water surface profile.
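The abstract does not name the resistance equation it modifies; as an illustration only, a Manning-type resistance equation over a wide rectangular section can generate such a rating curve (all numbers below are hypothetical, not survey data):

```python
import numpy as np

# Minimal sketch of a stage-discharge rating built from a Manning-type
# resistance equation (used here purely for illustration; the paper's
# modified resistance equation is not specified).
def rating_curve(stages, bed_level, width, slope, n=0.03):
    """Return discharge (m^3/s) for each stage (m), assuming a wide
    rectangular approximation of the channel section."""
    qs = []
    for h in stages:
        depth = max(h - bed_level, 0.0)
        area = width * depth                  # flow area
        radius = area / (width + 2 * depth)   # hydraulic radius
        v = (1.0 / n) * radius**(2 / 3) * np.sqrt(slope)  # mean velocity
        qs.append(area * v)
    return np.array(qs)

# Illustrative (hypothetical) numbers, not data from the surveyed reach:
print(rating_curve(stages=[6.0, 8.0, 10.0], bed_level=5.0,
                   width=300.0, slope=7e-5))
```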

Keywords: discharge rating, flow profile, fluvial, sediment rating

Procedia PDF Downloads 170
294 Characterization of Atmospheric Aerosols by Developing a Cascade Impactor

Authors: Sapan Bhatnagar

Abstract:

Micron-sized particles emitted from different sources and produced by combustion have serious negative effects on human health and the environment. They can penetrate deep into our lungs through the respiratory system. Determination of the amount of particulate matter present per cubic meter of atmosphere is necessary to monitor, regulate, and model atmospheric particulate levels. A cascade impactor is used to collect atmospheric particulates, and their concentrations in different size ranges can be determined by gravimetric analysis. Cascade impactors have been used for the classification of particles by aerodynamic size. They operate on the principle of inertial impaction. An impactor consists of a number of stages, each having an impaction plate and a nozzle, with the stages connected in series with smaller and smaller cutoff diameters. The air stream passes through the nozzle and past the plate. Particles in the stream with large enough inertia impact upon the plate, and smaller particles pass on to the next stage. By designing each successive stage with a higher air stream velocity in the nozzle, smaller-diameter particles are collected at each stage. Particles too small to be impacted on the last collection plate are collected on a backup filter. The impactor presented here consists of four stages, each made of steel, with cutoff diameters below 10 microns. Each stage has a collection plate soaked with oil to prevent bounce, which allows the impactor to function at high mass concentrations. Even after the plate is coated with particles, incoming particles still encounter a wet surface, which significantly reduces particle bounce. The particles too small to be impacted on the last collection plate are then collected on a backup filter (a microglass fiber filter); the fibers provide a larger surface area to which particles may adhere, and voids in the filter media aid in reducing particle re-entrainment.
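A minimal sketch of the stage-design arithmetic behind inertial impaction, assuming the standard round-jet 50% Stokes number Stk50 of about 0.24; the flow rate and nozzle diameters below are illustrative, not the authors' design values:

```python
import math

# Minimal sketch of the stage cutoff (d50) calculation for a round-jet
# impactor stage: particles with Stokes number above Stk50 impact on the
# plate, so shrinking the nozzle (raising the jet velocity) lowers d50.
def cutoff_diameter(flow_lpm, nozzle_d_m, mu=1.81e-5, rho_p=1000.0,
                    stk50=0.24, slip=1.0):
    """Return d50 (m) for one stage; larger particles impact the plate."""
    q = flow_lpm / 1000.0 / 60.0                 # volumetric flow, m^3/s
    u = q / (math.pi * nozzle_d_m**2 / 4.0)      # jet velocity in nozzle
    # Stk = rho_p * d^2 * u * slip / (9 * mu * W); solve for d at Stk50
    return math.sqrt(9.0 * mu * nozzle_d_m * stk50 / (rho_p * u * slip))

for d_nozzle in (5e-3, 3e-3, 1.5e-3):            # successively smaller jets
    print(f"{cutoff_diameter(30.0, d_nozzle) * 1e6:.2f} um")
```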

Keywords: aerodynamic diameter, cascade, environment, particulates, re-entrainment

Procedia PDF Downloads 308
293 Computational Feasibility Study of a Torsional Wave Transducer for Tissue Stiffness Monitoring

Authors: Rafael Muñoz, Juan Melchor, Alicia Valera, Laura Peralta, Guillermo Rus

Abstract:

A torsional piezoelectric ultrasonic transducer design is proposed to measure shear moduli in soft tissue with direct access availability, using the shear wave elastography technique. The measurement of the shear moduli of tissues is a challenging problem, mainly due to a) the difficulty of isolating a pure shear wave, given the interference of multiple waves of different types (P, S, even guided) emitted by the transducers and reflected at geometric boundaries, and b) the highly attenuating nature of soft tissue. An immediate application that overcomes these drawbacks is the measurement of changes in cervix stiffness to estimate the gestational age at delivery. The design has been optimized using a finite element model (FEM) and a semi-analytical estimator of the probability of detection (POD) to determine a suitable geometry, materials, and generated waves. The technique is based on measuring the time of flight between emitter and receiver to infer the shear wave velocity. Current research is centered on prototype testing and validation. The geometric optimization of the transducer was able to suppress the compressional wave emission, generating a quite pure torsional shear wave. Mechanical and electromagnetic coupling between the emitter and receiver signals is currently the research focus. Conclusions: the design overcomes the main problems described. The almost pure torsional shear wave, along with the short time of flight, avoids the possibility of multiple wave interference. The short propagation distance reduces the effect of attenuation and allows the emission of very low energies, assuring good biological safety for human use.
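A minimal sketch of the time-of-flight inference the technique relies on, using the standard relation mu = rho * c_s^2 (the distance, delay, and density below are illustrative, not the authors' measurements):

```python
# Minimal sketch: the shear wave speed follows from the emitter-receiver
# distance and the measured delay, and the shear modulus from
# mu = rho * c_s**2. Numbers are illustrative only.
def shear_modulus(distance_m, time_of_flight_s, density=1000.0):
    c_s = distance_m / time_of_flight_s      # shear wave speed, m/s
    return density * c_s**2                  # shear modulus, Pa

# e.g., a 10 mm path travelled in 2.5 ms -> c_s = 4 m/s -> mu = 16 kPa
print(shear_modulus(0.010, 2.5e-3))
```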

Keywords: cervix ripening, preterm birth, shear modulus, shear wave elastography, soft tissue, torsional wave

Procedia PDF Downloads 334
292 Investigations of Bergy Bits and Ship Interactions in Extreme Waves Using Smoothed Particle Hydrodynamics

Authors: Mohammed Islam, Jungyong Wang, Dong Cheol Seo

Abstract:

The Smoothed Particle Hydrodynamics (SPH) method is a novel, meshless, Lagrangian numerical method that has shown promise for accurately predicting the hydrodynamics of water-structure interactions in violent flow conditions. The main goal of this study is to build confidence in the versatility of an SPH-based tool, to use it as a complement to physical model testing capabilities and to support the research needed for the performance evaluation of ships and offshore platforms exposed to extreme and harsh environments. In the current endeavor, an open-source SPH-based tool was used and validated for modeling and predicting the hydrodynamic interactions of a 6-DOF ship and bergy bits. The study involved the modeling of a modern generic drillship and simplified bergy bits in floating and towing scenarios and in regular and irregular wave conditions. The predictions were validated using model-scale measurements of a moored ship towed at multiple oblique angles approaching a floating bergy bit in waves. Overall, this study provides a thorough comparison between the model-scale measurements and the predictions from the SPH tool in terms of performance and accuracy. The SPH-predicted ship motions and forces were primarily within ±5% of the measurements. The velocity and pressure distributions and the wave characteristics over the free surface depict realistic interactions of the wave, the ship, and the bergy bit. This work identifies and presents several challenges in preparing the input file, particularly in defining the mass properties of complex geometry, the computational requirements, and the post-processing of the outcomes.
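For orientation, a minimal sketch of the particle summation at the core of any SPH solver, using the standard 2D cubic spline kernel; this illustrates the method itself, not code from the specific open-source tool used in the study:

```python
import numpy as np

# Minimal sketch of the core SPH idea: field values at a particle are
# weighted sums over neighbours using a smoothing kernel of support 2h.
def cubic_spline_w(r, h):
    """Standard cubic spline kernel W(r, h), 2D normalization."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def density_at(i, positions, masses, h):
    """SPH density summation for particle i (includes self-contribution)."""
    return sum(m_j * cubic_spline_w(np.linalg.norm(positions[i] - x_j), h)
               for x_j, m_j in zip(positions, masses))
```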

Keywords: SPH, ship and bergy bit, hydrodynamic interactions, model validation, physical model testing

Procedia PDF Downloads 120
291 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge with detecting small buildings in densely populated regions lies in both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials are difficult to separate due to their similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher-quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality-check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
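A minimal sketch of the deliberate-overfitting idea, assuming a PyTorch-style segmentation setup; `model` and `loader` are placeholders, and the authors' actual network is a Mask R-CNN variant:

```python
import torch
from torch import nn, optim

# Minimal sketch of fine-tuning a pre-trained model until it overfits a
# small regional sample: a long schedule with no early stopping and no
# validation-based checkpointing, which is normally avoided but is the
# point here.
def finetune_until_overfit(model: nn.Module, loader, epochs=200, lr=1e-4):
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()          # per-pixel building-mask loss
    model.train()
    for _ in range(epochs):                   # deliberately long schedule
        for images, masks in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            opt.step()
        # NOTE: intentionally no validation check: the goal is a model
        # specialized (overfitted) to this one region's building style.
    return model
```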

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 157
290 Sensitivity Analysis and Solitary Wave Solutions to the (2+1)-Dimensional Boussinesq Equation in Dispersive Media

Authors: Naila Nasreen, Dianchen Lu

Abstract:

This paper explores the dynamical behavior of the (2+1)-dimensional Boussinesq equation, a nonlinear water wave equation used to model wave packets in dispersive media with weak nonlinearity. This equation describes how long waves generated in shallow water propagate under the influence of gravity. The (2+1)-dimensional Boussinesq equation combines the two-way propagation of the classical Boussinesq equation with a dependence on a second spatial variable, as occurs in the two-dimensional Kadomtsev-Petviashvili equation. It provides a description of the head-on collision of oblique waves and possesses some interesting properties. The governing model is treated with the assistance of the Riccati equation mapping method, a relatively simple integration tool. Solutions have been extracted in different forms: solitary wave solutions as well as hyperbolic and periodic solutions. Moreover, a sensitivity analysis is demonstrated for the wave profiles of the designed dynamical structural system, where the soliton wave velocity and wave number parameters regulate the water wave singularity. In addition to being helpful for elucidating nonlinear partial differential equations, the method in use reproduces previously extracted solutions and extracts fresh exact solutions. Assuming suitable values for the parameters, various graphs of different shapes are sketched to illustrate the visual form of the results. This paper's findings support the efficacy of the approach taken in elucidating nonlinear dynamical behavior. We believe this research will be of interest to a wide variety of engineers who work with engineering models. The findings show the effectiveness, simplicity, and generalizability of the chosen computational approach, even when applied to complicated systems in a variety of fields, especially in ocean engineering.
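For orientation, a commonly cited form of the governing equation and of the auxiliary Riccati equation on which the mapping method is built (the exact normalization used by the authors may differ):

```latex
% A commonly cited form of the (2+1)-dimensional Boussinesq equation
% (coefficients \alpha, \beta vary across the literature):
u_{tt} - u_{xx} - u_{yy} - \alpha\,(u^{2})_{xx} - \beta\,u_{xxxx} = 0 .
% The Riccati equation mapping method expands travelling-wave solutions
% u(\xi) in powers of a function R(\xi) satisfying the Riccati equation
R'(\xi) = a + b\,R(\xi) + c\,R^{2}(\xi),
% whose tanh/coth, tan/cot, and rational solutions yield the solitary,
% periodic, and rational wave families respectively.
```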

Keywords: (2+1)-dimensional Boussinesq equation, solitary wave solutions, Riccati equation mapping approach, nonlinear phenomena

Procedia PDF Downloads 72
289 The Effects of Impact Forces and Kinematics of Two Different Stance Positions in Straight Punch Techniques in Boxing

Authors: Bergun Meric Bingul, Cigdem Bulgan, Ozlem Tore, Mensure Aydin, Erdal Bal

Abstract:

The aim of the study was to compare impact forces and some kinematic parameters between two different stance positions for the straight punch in boxing. Nine elite boxing athletes from the Turkish National Team (mean age ± SD 19.33±2.11 years, mean height 174.22±3.79 cm, mean weight 66.0±6.62 kg) participated in this study voluntarily. The boxing athletes each performed one trial of the straight punch technique on a sandbag for each of the two stance positions (orthodox and southpaw). The trials were recorded at a frequency of 120 Hz using eight synchronized high-speed cameras (Oqus 7+), which were placed approximately at right angles to one another. Three-dimensional motion analysis was performed with a motion capture system (Qualisys, Sweden). Data were transferred to Windows-based data acquisition software, QTM (Qualisys Track Manager). An 11-segment model was used for the determination of the kinematic variables (calf, leg, punch, upper arm, lower arm, trunk). The sandbag was also markered for the calculation of the impact forces. The wand calibration method (with a T stick) was used for field calibration. The mean velocity and acceleration of the punch, the mean acceleration of the sandbag, and the angles of the trunk, shoulder, hip, and knee were calculated. Differences between stances were compared with the Wilcoxon test using SPSS 20.0. According to the results, a statistically significant difference was found in the trunk angle on the sagittal plane (yz) (p<0.05). Significant differences were also found in sandbag acceleration and impact forces between stance positions (p<0.05). The boxing athletes achieved greater impact forces and accelerations in the orthodox stance position. It is therefore recommended to use an orthodox stance instead of a southpaw stance in the straight punch technique, especially for creating greater impact forces.
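A minimal sketch of how an impact force can be inferred from the marker data: differentiating the sandbag trajectory twice and applying F = ma (the 120 Hz sampling rate comes from the paper; the sandbag mass and trajectory below are hypothetical):

```python
import numpy as np

# Minimal sketch: differentiate sandbag marker positions twice to get
# acceleration, then take the peak of F = m * a.
def impact_force(positions_m, mass_kg, fs_hz=120.0):
    """positions_m: (N, 3) marker trajectory; returns peak |F| in N."""
    vel = np.gradient(positions_m, 1.0 / fs_hz, axis=0)    # m/s
    acc = np.gradient(vel, 1.0 / fs_hz, axis=0)            # m/s^2
    return mass_kg * np.max(np.linalg.norm(acc, axis=1))

# Illustrative trajectory and mass only, not the study's measurements:
traj = np.zeros((24, 3))
traj[10:, 0] = 0.002 * np.arange(14)**2     # bag starts moving mid-record
print(impact_force(traj, mass_kg=40.0))
```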

Keywords: boxing, impact force, kinematics, straight punch, orthodox, southpaw

Procedia PDF Downloads 301
288 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

Evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the previously existing 3 phyla. Contrary to common credence, natural selection, or survival of the fittest, cannot account for the dominant evolution vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory explanation for these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but an accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution as a natural origin and development of life is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and the biosphere's intelligence. The intelligent matter that conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. The information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolution theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere's computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. It logically resolves many puzzling problems of the current state of evolution theory, such as speciation, as a result of GM purposeful design; the evolution development vector, as a need for growing global intelligence; punctuated equilibrium, occurring when the two conditions a) and b) above are met; the Cambrian explosion; and mass extinctions, occurring when more intelligent species should replace outdated creatures.

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 147
287 Experimental Validation of Computational Fluid Dynamics Used for Pharyngeal Flow Patterns during Obstructive Sleep Apnea

Authors: Pragathi Gurumurthy, Christina Hagen, Patricia Ulloa, Martin A. Koch, Thorsten M. Buzug

Abstract:

Obstructive sleep apnea (OSA) is a sleep disorder in which the patient suffers disturbed airflow during sleep due to partial or complete occlusion of the pharyngeal airway. Recently, numerical simulations have been used to better understand the mechanism of pharyngeal collapse. However, to gain confidence in the solutions so obtained, experimental validation is required. Therefore, in this study, an experimental validation of the computational fluid dynamics (CFD) used for the study of human pharyngeal flow patterns during OSA is performed. The stationary incompressible Navier-Stokes equations, solved using the finite element method, were used to numerically study the flow patterns in a computed tomography-based human pharynx model. The inlet flow rate was set to 250 ml/s with a flat velocity profile maintained at the inlet, and the outlet pressure was set to 0 Pa. The experimental technique used for the validation of the CFD flow patterns is phase contrast MRI (PC-MRI). Using the same computed tomography data of the human pharynx as in the simulations, a phantom for the experiment was 3D printed. Glycerol in water (55.27% by weight) was used as the test fluid at 25°C. Inflow conditions matching the CFD study were produced using an MRI-compatible flow pump (CardioFlow-5000MR, Shelley Medical Imaging Technologies). The entire experiment was performed on a 3 T MR system (Ingenia, Philips) with a 108-channel body coil using an RF-spoiled gradient echo sequence. A comparison of the axial velocity obtained in the pharynx from the numerical simulations and from PC-MRI shows good agreement. The regions of jet impingement and recirculation also coincide, thereby validating the numerical simulations. Hence, the experimental validation demonstrates the reliability and correctness of the numerical simulations.
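A minimal sketch of the inlet-condition arithmetic: converting the prescribed 250 ml/s into a flat-profile inlet velocity and a Reynolds number (the inlet diameter and fluid properties are illustrative assumptions, not the phantom's actual geometry):

```python
import math

# Minimal sketch of the boundary-condition arithmetic; only the 250 ml/s
# flow rate comes from the paper, the rest are assumed values.
Q = 250e-6                    # inlet flow rate, m^3/s (250 ml/s)
d = 0.02                      # assumed inlet diameter, m
rho, mu = 1130.0, 8e-3        # assumed glycerol-water density, viscosity

area = math.pi * d**2 / 4.0
u_inlet = Q / area            # flat-profile inlet velocity, m/s
re = rho * u_inlet * d / mu   # Reynolds number at the inlet
print(f"u = {u_inlet:.3f} m/s, Re = {re:.0f}")
```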

Keywords: computational fluid dynamics, experimental validation, phase contrast-MRI, obstructive sleep apnea

Procedia PDF Downloads 298
286 Computational Fluid Dynamics Model of Various Types of Rocket Engine Nozzles

Authors: Konrad Pietrykowski, Michal Bialy, Pawel Karpinski, Radoslaw Maczka

Abstract:

The nozzle is the element of a rocket engine in which the potential energy of the gases generated during combustion is converted into the kinetic energy of the gas stream. The design parameters of the nozzle have a decisive influence on the ballistic characteristics of the engine. Designing a nozzle assembly is, therefore, one of the most critical stages in developing a rocket engine design. The paper presents the results of simulations of three types of rocket propulsion nozzles. Calculations were made using CFD (Computational Fluid Dynamics) in ANSYS Fluent software. The nozzle types differ in shape: the analysis covered a conical nozzle, a bell-type nozzle with a conical supersonic part, and a bell-type nozzle. Calculation results are presented in the form of pressure, velocity, and turbulence kinetic energy distributions in the longitudinal section. The development of these quantities along the nozzles is also presented. The results show that the conical nozzle generates strong turbulence in the critical section, which negatively affects the flow of the working medium. In the case of the bell nozzle, the contouring of the wall eliminated the flow disturbances in the critical section. This reduces the probability of waves forming before or after the trailing edge. The most sophisticated design is the bell-type nozzle, which maximizes performance without adding extra weight. The bell-type nozzle can be used as a starter and auxiliary engine nozzle due to its advantages. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
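For orientation, a minimal sketch of the quasi-1D isentropic relation that governs the energy conversion in any converging-diverging nozzle: the area ratio A/A* fixes the exit Mach number (gamma = 1.2 is an assumed value for hot combustion gases, not a parameter from the paper):

```python
import math

# Minimal sketch of the isentropic area-Mach relation for quasi-1D flow:
# A/A* = (1/M) * [ (2/(g+1)) * (1 + (g-1)/2 * M^2) ]^((g+1)/(2(g-1)))
def area_ratio(mach, gamma=1.2):
    """A/A* for a given Mach number (isentropic, quasi-1D flow)."""
    t = 1.0 + 0.5 * (gamma - 1.0) * mach**2
    return (1.0 / mach) * (2.0 * t / (gamma + 1.0)) ** (
        (gamma + 1.0) / (2.0 * (gamma - 1.0)))

for m in (1.0, 2.0, 3.0, 4.0):
    print(f"M = {m}: A/A* = {area_ratio(m):.2f}")
```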

Keywords: computational fluid dynamics, nozzle, rocket engine, supersonic flow

Procedia PDF Downloads 140
285 Ultra-High Voltage Energization of Electrostatic Precipitators for Coal Fired Boilers

Authors: Mads Kirk Larsen

Abstract:

Strict air pollution control is today high on the agenda worldwide. Reducing particulate emissions lowers not only the mg/Nm³ figure; parts of the mercury and other hazardous matter attached to the particles are also removed. Furthermore, it becomes possible to catch the fine particles (PM2.5). For particulate control, electrostatic precipitators (ESPs) are still the preferred choice, and much effort has gone into improving their efficiencies. Many ESPs have seen electrical upgrading by changing the traditional single-phase power system into either 3-phase or SMPS (high-frequency) units. However, there exists a fourth type of power supply: the pulse type. This is unfortunately widely unknown but may be of great benefit to power plants. The FLSmidth type is called COROMAX® and is a high-voltage pulse generator for precipitators using a semiconductor switch operating at medium potential. The generated high-voltage pulses have a rated amplitude of 80 kV and a duration of 75 μs and are superimposed on a variable base voltage with a 60 kV rating, thereby achieving a peak voltage of 140 kV. COROMAX® has the ability to increase the voltage beyond the natural spark limit inside the precipitator. Voltage levels may often be twice as high after installation of COROMAX®. This increases the migration velocity and thereby the efficiency. As the collection efficiency is proportional to the voltage peak and mean values, the collection efficiency for fine particles also increases; tests have shown 80% removal of particles smaller than 0.07 micron. Another great advantage is the indifference to back-corona. Simultaneously with the emission reduction, the power consumption is also reduced. A further advantage of the COROMAX® system is that the emission can be improved without changing the internal parts or enlarging the ESP. Recently, more than 150 units have been installed in China, where emissions have been reduced to ultra-low levels.
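A minimal sketch of the voltage arithmetic stated above, together with the classical scaling of migration velocity with the square of the electric field in the field-charging regime (the 70 kV comparison baseline is an illustrative assumption, not FLSmidth data):

```python
# Minimal sketch: 80 kV pulses on a 60 kV base give the stated 140 kV
# peak, and migration velocity scales roughly as w ~ E^2 when both
# charging and collecting fields scale with the applied voltage.
pulse_kv, base_kv = 80.0, 60.0
peak_kv = base_kv + pulse_kv            # 140 kV peak on the electrodes
print(f"peak voltage: {peak_kv:.0f} kV")

# Relative migration velocity if pulsing doubles the usable voltage
# compared with an assumed spark-limited 70 kV DC operation:
w_ratio = (peak_kv / 70.0) ** 2
print(f"migration velocity gain: ~{w_ratio:.1f}x")
```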

Keywords: electrostatic precipitator, high resistivity dust, micropulse energization, particulate removal

Procedia PDF Downloads 283
284 Analysis of Aerodynamic Forces Acting on a Train Passing Through a Tornado

Authors: Masahiro Suzuki, Nobuyuki Okura

Abstract:

The crosswind effect on ground transportation has been extensively investigated for decades. The effect of tornadoes, however, has hardly been studied, despite the fact that even heavy ground vehicles, namely trains, have been overturned by tornadoes in the past, with casualties. Therefore, the aerodynamic effects of a tornado on a train were studied by several approaches in this study. First, an experimental facility was developed to clarify the aerodynamic forces acting on a vehicle running through a tornado. Our experimental set-up consists of two apparatus: a tornado simulator and a moving model rig. PIV measurements showed that the tornado simulator can generate a swirling-flow field similar to those of natural tornadoes. The flow field has a maximum tangential velocity of 7.4 m/s and a vortex core radius of 96 mm. The moving model rig makes a 1/40-scale model train, as a single-car or three-car unit, run through the swirling flow at a maximum speed of 4.3 m/s. The model car has 72 pressure ports on its surface to estimate the aerodynamic forces. The experimental results show that the aerodynamic forces vary in magnitude and direction depending on the location of the vehicle in the flow field. Second, the aerodynamic forces on the train were estimated using the Rankine vortex model, a simple tornado model widely used in the field of civil engineering. The estimated aerodynamic forces on the middle car were in fairly good agreement with the experimental results. The effects of the vortex core radius and the path of the train on the aerodynamic forces were investigated using the Rankine vortex model. The results show that the side and lift forces increase as the vortex core radius increases, while the yawing moment is maximal when the core radius is 0.3875 times the car length. Third, a computational simulation was conducted to clarify the flow field around the train. The simulated results qualitatively agreed with the experimental ones.
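A minimal sketch of the Rankine vortex profile used for the estimates, with the simulator's reported parameters (maximum tangential velocity 7.4 m/s, core radius 96 mm) as defaults:

```python
import numpy as np

# Minimal sketch of the Rankine vortex: solid-body rotation inside the
# core, free-vortex (1/r) decay outside; tangential velocity peaks at
# the core radius.
def rankine_tangential(r, v_max=7.4, core_radius=0.096):
    """Tangential velocity (m/s) at radius r (m) from the vortex center."""
    r = np.asarray(r, dtype=float)
    inside = v_max * r / core_radius                       # solid body
    outside = v_max * core_radius / np.where(r > 0, r, np.inf)  # 1/r decay
    return np.where(r <= core_radius, inside, outside)

print(rankine_tangential([0.05, 0.096, 0.2]))   # peak at the core radius
```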

Keywords: aerodynamic force, experimental method, tornado, train

Procedia PDF Downloads 220
283 A Laboratory Study into the Effects of Surface Waves on Freestyle Swimming

Authors: Scott Draper, Nat Benjanuvatra, Grant Landers, Terry Griffiths, Justin Geldard

Abstract:

Open water swimming has been an Olympic sport since 2008 and is growing in popularity worldwide as a low-impact form of exercise. Unlike pool swimmers, open water swimmers experience a range of environmental conditions, including surface waves, variable water temperature, aquatic life, and ocean currents. This presentation describes experimental research into how freestyle swimming behaviour and performance are influenced by surface waves. A group of 12 swimmers was instructed to swim freestyle in the 54 m long wave flume located at The University of Western Australia's Coastal and Offshore Engineering Laboratory. A variety of regular waves were simulated, varying in height (up to 0.3 m), period (1.25-4 s), and direction (with or against the swimmer). The swimmers' velocity and acceleration were determined from video recordings and from inertial sensors attached to five different parts of the swimmer's body, respectively. The results illustrate how the swimmer's stroke rate and the wave encounter frequency influence forward speed, and how particular wave conditions can benefit or hinder performance. Comparisons to simplified mathematical models provide insight into several aspects of performance, including: (i) how much faster swimmers can travel when swimming with as opposed to against the waves, and (ii) why swimmers of lesser ability are expected to be affected proportionally more by waves than elite swimmers. These findings have implications across the spectrum from elite to 'weekend' swimmers, including how they are coached and their ability to win (or just successfully complete) iconic open water events such as the Rottnest Channel Swim held annually in Western Australia.
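A minimal sketch of the wave-encounter frequency that interacts with stroke rate, assuming deep-water linear waves; the numbers are illustrative, not measurements from the flume study:

```python
import math

# Minimal sketch: for deep-water waves of period T and wavelength
# L = g*T^2/(2*pi), a swimmer at speed U meets crests at
# f_e = 1/T - U/L when following the waves and f_e = 1/T + U/L when
# swimming against them.
def encounter_frequency(wave_period_s, swim_speed_ms, following=True):
    g = 9.81
    wavelength = g * wave_period_s**2 / (2.0 * math.pi)
    doppler = swim_speed_ms / wavelength
    f_wave = 1.0 / wave_period_s
    return f_wave - doppler if following else f_wave + doppler

# 2.5 s waves, 1.5 m/s swimmer: fewer crests per second when following
print(encounter_frequency(2.5, 1.5, following=True))
print(encounter_frequency(2.5, 1.5, following=False))
```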

Keywords: open water, surface waves, wave height/length, wave flume, stroke rate

Procedia PDF Downloads 96
282 Quantitative Evaluation of Mitral Regurgitation by Using Color Doppler Ultrasound

Authors: Shang-Yu Chiang, Yu-Shan Tsai, Shih-Hsien Sung, Chung-Ming Lo

Abstract:

Mitral regurgitation (MR) is a heart disorder in which the mitral valve does not close properly when the heart pumps out blood. MR is the most common form of valvular heart disease in the adult population. The diagnostic echocardiographic finding of MR is straightforward due to the well-known clinical evidence. In determining MR severity, quantification of the sonographic findings is useful for clinical decision making. Clinically, the vena contracta is a standard for MR evaluation. The vena contracta is the point in a blood stream where the diameter of the stream is smallest and the velocity is at its maximum. The quantification of the vena contracta, i.e., the vena contracta width (VCW) at the mitral valve, can serve as a numeric measurement for severity assessment. However, manually delineating the VCW may not be accurate enough; the result depends strongly on operator experience. Therefore, this study proposed an automatic method to quantify the VCW to evaluate MR severity. In color Doppler ultrasound, the VCW can be observed where blood flowing toward the probe appears as red or yellow areas, with the corresponding brightness representing the flow rate. In the experiment, colors were first transformed into HSV (hue, saturation, and value) to align closely with the way human vision perceives red and yellow. Using an ellipse to fit the high-flow-rate area in the left atrium, the angle between the mitral valve and the ultrasound probe was calculated to obtain the vertical shortest diameter as the VCW. Taking the manual measurement as the standard, the method achieved differences of only 0.02 cm (0.38 vs. 0.36) to 0.03 cm (0.42 vs. 0.45). The results showed that the proposed automatic VCW extraction can be efficient and accurate for clinical use. The process also has the potential to reduce intra- and inter-observer variability in measuring subtle distances.
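A minimal sketch of the pipeline described above, assuming OpenCV; the hue and saturation thresholds are illustrative placeholders, not the authors' calibrated values:

```python
import cv2
import numpy as np

# Minimal sketch: convert the Doppler frame to HSV, keep red/yellow
# high-flow pixels, and fit an ellipse whose shortest axis approximates
# the VCW direction (in pixels).
def estimate_vcw_axis(bgr_frame: np.ndarray):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # red wraps around hue 0; yellow sits near hue 30 (OpenCV scale 0-179)
    mask = cv2.inRange(hsv, (0, 80, 80), (35, 255, 255)) | \
           cv2.inRange(hsv, (160, 80, 80), (179, 255, 255))
    pts = cv2.findNonZero(mask)
    if pts is None or len(pts) < 5:          # fitEllipse needs >= 5 points
        return None
    (cx, cy), (ax1, ax2), angle = cv2.fitEllipse(pts)
    return min(ax1, ax2)                     # shortest diameter, pixels

# The pixel result would still need the probe-angle correction and a
# pixel-to-cm scale factor before comparison with the manual VCW.
```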

Keywords: mitral regurgitation, vena contracta, color Doppler, image processing

Procedia PDF Downloads 356
281 Studying the Evolution of Soot and Precursors in Turbulent Flames Using Laser Diagnostics

Authors: Muhammad A. Ashraf, Scott Steinmetz, Matthew J. Dunn, Assaad R. Masri

Abstract:

This study focuses on the evolution of soot and soot precursors in three different piloted diffusion turbulent flames. The fuel compositions are as follows: flame A (ethylene/nitrogen, 2:3 by volume), flame B (ethylene/air, 2:3 by volume), and flame C (pure methane). These flames are stabilized using a 4 mm diameter jet surrounded by a pilot annulus with an outer diameter of 15 mm. The pilot issues combustion products from stoichiometric premixed flames of hydrogen, acetylene, and air. In all cases, the jet Reynolds number is 10,000, and air flows in the coflow stream at a velocity of 5 m/s. Time-resolved laser-induced fluorescence (LIF) is collected in two wavelength bands, in the visible (445 nm) and UV (266 nm) regions, along with laser-induced incandescence (LII). The combined results are employed to study the concentration, size, and growth of soot and its precursors. A set of four fast photomultiplier tubes is used to record the emission data in the temporal domain. A 266 nm laser pulse preferentially excites smaller nanoparticles, which emit a fluorescence spectrum that is analysed to track the presence, evolution, and destruction of nanoparticles. A 1064 nm laser pulse excites sufficiently large soot particles, and the resulting incandescence is collected at 1064 nm. At downstream and outer radial locations, intermittency becomes a relevant factor. Therefore, the data collected in the turbulent flames are conditioned to account for intermittency, so that the resulting mean profiles for scattering, fluorescence, and incandescence are shown for the events that contain traces of soot. It is found that in the upstream regions of the ethylene-air and ethylene-nitrogen flames, the presence of soot precursors is rather similar. However, further downstream, soot concentration grows larger in the ethylene-air flame.
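A minimal sketch of the intermittency conditioning described above: mean profiles are formed only from shots whose signal exceeds a soot-presence threshold (the arrays and threshold are illustrative):

```python
import numpy as np

# Minimal sketch of intermittency-conditioned averaging: keep only the
# shots containing soot, and report both the conditioned mean and the
# intermittency (fraction of sooting events).
def conditioned_mean(signals, threshold):
    signals = np.asarray(signals, dtype=float)
    sooting = signals > threshold            # events containing soot
    intermittency = sooting.mean()           # fraction of sooting events
    mean = signals[sooting].mean() if sooting.any() else np.nan
    return mean, intermittency

shots = [0.0, 0.1, 3.2, 0.0, 4.1, 2.8]       # hypothetical LII signals
print(conditioned_mean(shots, threshold=1.0))   # -> (~3.37, 0.5)
```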

Keywords: laser induced incandescence, laser induced fluorescence, soot, nanoparticles

Procedia PDF Downloads 129
280 Design, Development and Analysis of Combined Darrieus and Savonius Wind Turbine

Authors: Ashish Bhattarai, Bishnu Bhatta, Hem Raj Joshi, Nabin Neupane, Pankaj Yadav

Abstract:

This report concerns the design, development, and analysis of a combined Darrieus and Savonius wind turbine. Vertical axis wind turbines (VAWTs) are of two types, viz. Darrieus (lift type) and Savonius (drag type). The problem associated with the Darrieus is its lack of self-starting, while the Savonius has low efficiency. The design uses 3 straight Darrieus blades with a NACA (National Advisory Committee for Aeronautics) 0018 cross-section placed circumferentially, and a helically twisted Savonius blade to obtain an even torque distribution. This unique design allows the Savonius to self-start the wind turbine, which the Darrieus cannot achieve on its own. All parts of the wind turbine were designed in CAD software, and simulation data were obtained via a CFD (Computational Fluid Dynamics) approach. The design was also imported to a FlashForge Finder to 3D print the wind turbine profile, and finally, testing was carried out. The plastic material used for the Savonius was ABS (acrylonitrile butadiene styrene) and that for the Darrieus was PLA (polylactic acid). From the data obtained experimentally, the hybrid VAWT so fabricated was found to operate at a low cut-in speed of 3 m/s, and the maximum power output was found to be 7.5537 watts at a wind speed of 6 m/s. The maximum speed of the rotor blade was recorded to be 431 rpm (rotations per minute) at a wind velocity of 6 m/s, signifying its potential for wind power production. Moreover, the data obtained from both processes, when analyzed through graph plots, show similar trends in slope, and the difference between the experimental and theoretical data reflects mechanical losses. The objective is to eliminate the need for external motors for self-starting purposes and to study the performance of the model. The testing of the model was carried out at different wind velocities.
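A minimal sketch of two standard performance numbers implied by the reported data, the power coefficient and the tip-speed ratio; only the power output, wind speed, and rpm come from the paper, while the rotor radius and height are assumed for illustration:

```python
import math

# Minimal sketch: Cp = P / (0.5 * rho * A * V^3), TSR = omega * R / V.
P, V, rpm = 7.5537, 6.0, 431.0        # reported output, wind speed, rpm
rho = 1.225                           # air density, kg/m^3
R, H = 0.25, 0.40                     # ASSUMED rotor radius and height, m

A = 2 * R * H                         # swept area, straight-bladed VAWT
cp = P / (0.5 * rho * A * V**3)
tsr = (rpm * 2 * math.pi / 60.0) * R / V
print(f"Cp ~ {cp:.3f}, TSR ~ {tsr:.2f}")
```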

Keywords: VAWT, Darrieus, Savonius, helical blades, CFD, flash forge finder, ABS, PLA

Procedia PDF Downloads 187
279 Modeling and Simulation of Turbulence Induced in Nozzle Cavitation and Its Effects on Internal Flow in a High Torque Low Speed Diesel Engine

Authors: Ali Javaid, Rizwan Latif, Syed Adnan Qasim, Imran Shafi

Abstract:

To control combustion inside a direct injection diesel engine, fuel atomization is the best tool. Controlling combustion helps in reducing emissions and improves efficiency. Cavitation is one of the most important factors that significantly affect the nature of the spray before it is injected into the combustion chamber. Typical fuel injector nozzles are small and operate at very high pressure, which limits the study of internal nozzle behavior, especially in the case of diesel engines. Simulating cavitation in a fuel injector helps in understanding the phenomenon and assists in further development. Parameters differ considerably between high-speed and high-torque low-speed diesel engines. The objective of this study is to simulate the internal spray characteristics of a low-speed high-torque diesel engine. In-nozzle cavitation strongly affects parameters such as the mass flow rate, velocity, and momentum flux of the fuel to be injected into the combustion chamber. The external spray dynamics, and subsequently the air-fuel mixing, depend on many parameters of the fuel-injecting nozzle. The approach used to model the turbulence induced by in-nozzle cavitation for a high-torque low-speed diesel engine is the homogeneous equilibrium model (HEM). The governing equations were modeled using Matlab. The complete model was extensively evaluated by performing 3-D time-dependent simulations in OpenFOAM, an open-source CFD (Computational Fluid Dynamics) flow solver. The results so obtained will be analyzed for better evaporation in the near-nozzle region. The proposed analyses will further help in achieving better engine efficiency, lower emissions, and improved fuel economy.
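For orientation, a minimal sketch of the mixture closure that defines the homogeneous equilibrium model: both phases share velocity, pressure, and temperature, so the mixture density follows from the vapour volume fraction alone (the property values below are illustrative for diesel fuel, not the authors' inputs):

```python
# Minimal sketch of the HEM mixture closure: liquid and vapour are
# treated as a single fluid in mechanical and thermal equilibrium, so
# rho_mix = alpha * rho_vapour + (1 - alpha) * rho_liquid.
def hem_mixture_density(alpha, rho_liquid=830.0, rho_vapour=0.14):
    """Mixture density (kg/m^3) for vapour volume fraction alpha in [0, 1]."""
    return alpha * rho_vapour + (1.0 - alpha) * rho_liquid

for alpha in (0.0, 0.2, 0.8, 1.0):
    print(alpha, hem_mixture_density(alpha))
```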

Keywords: cavitation, HEM model, nozzle flow, open foam, turbulence

Procedia PDF Downloads 269
278 Brain-Computer Interface System for Lower Extremity Rehabilitation of Chronic Stroke Patients

Authors: Marc Sebastián-Romagosa, Woosang Cho, Rupert Ortner, Christy Li, Christoph Guger

Abstract:

Neurorehabilitation based on brain-computer interfaces (BCIs) shows important rehabilitation effects for patients after stroke. Previous studies have shown improvements for patients in the chronic stage and/or with severe hemiparesis, who are particularly challenging for conventional rehabilitation techniques. For this publication, seven stroke patients in the chronic phase with hemiparesis in the lower extremity were recruited. All of them participated in 25 BCI sessions, about 3 times a week. The BCI system was based on motor imagery (MI) of paretic ankle dorsiflexion and healthy wrist dorsiflexion, with functional electrical stimulation (FES) and avatar feedback. Assessments were conducted to evaluate the changes in motor function before, during, and after the rehabilitation training. The primary measures used for the assessment were the 10-meter walking test (10MWT), range of motion (ROM) of ankle dorsiflexion, and Timed Up and Go (TUG). Results show a significant increase in gait speed in the primary measure, 10MWT fast velocity, of 0.18 m/s, IQR = [0.12 to 0.2], P = 0.016. The speed in the TUG also increased significantly, by 0.1 m/s, IQR = [0.09 to 0.11], P = 0.031. The active ROM increased by 4.65°, IQR = [1.67 to 7.4], after rehabilitation training, P = 0.029. These functional improvements persisted at least one month after the end of the therapy. These outcomes show the feasibility of this BCI approach for chronic stroke patients and further support the growing consensus that these types of tools might develop into a new paradigm of rehabilitation tools for stroke patients. However, the results come from only seven chronic stroke patients, so the authors believe that this approach should be further validated in broader randomized controlled studies involving more patients. MI and FES-based non-invasive BCIs are showing improvement in the gait rehabilitation of patients in the chronic stage after stroke. This could have an impact on the rehabilitation techniques used for these patients, especially when they are severely impaired and their mobility is limited.

Keywords: neuroscience, brain computer interfaces, rehabilitation, stroke

Procedia PDF Downloads 78
277 Ulnar Nerve Changes Associated with Carpal Tunnel Syndrome and Effect on Median versus Ulnar Comparative Studies

Authors: Emmanuel K. Aziz Saba, Sarah S. El-Tawab

Abstract:

Objectives: Carpal tunnel syndrome (CTS) has been found to be associated with high pressure within Guyon's canal. The aim of this study was to assess the involvement of sensory and/or motor ulnar nerve fibers in patients with CTS and whether this affects the accuracy of the median versus ulnar sensory and motor comparative tests. Patients and methods: The present study included 145 CTS hands and 71 asymptomatic control hands. Clinical examination was done for all patients. The following tests were done for the patients and controls: (1) Sensory conduction studies: median nerve, ulnar nerve, dorsal ulnar cutaneous nerve, and median versus ulnar digit four (D4) sensory comparative study; (2) Motor conduction studies: median nerve, ulnar nerve, and median versus ulnar motor comparative study. Results: There were no statistically significant differences between the patients and the control group as regards the parameters of the ulnar motor study and the dorsal ulnar cutaneous sensory conduction study. It was found that 17 CTS hands (11.7%), in 17 different patients, had ulnar sensory abnormalities. The median versus ulnar sensory and motor comparative studies were abnormal in all of these 17 CTS hands. There were statistically significant negative correlations between median motor latency and the ulnar sensory amplitudes recorded from both D5 and D4. There were statistically significant positive correlations between median sensory conduction velocity and the ulnar sensory nerve action potential amplitudes recorded from both D5 and D4. Conclusions: There is ulnar sensory nerve abnormality among CTS patients. This abnormality affects the amplitude of the ulnar sensory nerve action potential. Abnormalities in the ulnar nerve occur in moderate and severe degrees of CTS. This does not affect the accuracy and validity of the median versus ulnar sensory and motor comparative tests for use in the electrophysiological diagnosis of CTS.

Keywords: carpal tunnel syndrome, ulnar nerve, median nerve, median versus ulnar comparative study, dorsal ulnar cutaneous nerve

Procedia PDF Downloads 548
276 Evaluate Effects of Different Curing Methods on Compressive Strength, Modulus of Elasticity and Durability of Concrete

Authors: Dhara Shah, Chandrakant Shah

Abstract:

The construction industry utilizes plenty of water in the name of curing. Looking at the present scenario, the day is not far off when the construction industry will have to switch over to an alternative, self-curing system, not only to save water for the sustainable development of the environment but also to enable indoor and outdoor construction activities even in water-scarce areas. At the same time, curing is essential for the development of proper strength and durability. IS 456-2000 recommends a curing period of 7 days for ordinary Portland cement concrete, and 10 to 14 days for concrete prepared using mineral admixtures or blended cements. But, being the last act in the concreting operations, curing is often neglected or not fully done. Consequently, the quality of the hardened concrete suffers, more so if the freshly laid concrete is exposed to environmental conditions of low humidity, high wind velocity, and high ambient temperature. To avoid the adverse effects of neglected or insufficient curing, which is considered a universal phenomenon, concrete technologists and research scientists have come up with curing compounds. Concrete is said to be self-cured if it is able to retain its water content to sustain the chemical reactions that develop its strength. Curing compounds are liquids that are either incorporated in the concrete or sprayed directly onto concrete surfaces, where they dry to form a relatively impermeable membrane that retards the loss of moisture from the concrete. They are an efficient and cost-effective means of curing concrete and may be applied to freshly placed concrete or to concrete that has been partially cured by some other means. However, they may affect the bond between the concrete and subsequent surface treatments; special care in the choice of a suitable compound needs to be exercised in such circumstances. Curing compounds are generally formulated from wax emulsions, chlorinated rubbers, synthetic and natural resins, and PVA emulsions. Their effectiveness varies quite widely, depending on the material and the strength of the emulsion.

Keywords: curing methods, self-curing compound, compressive strength, modulus of elasticity, durability

Procedia PDF Downloads 313
275 Determination of the Phosphate Activated Glutaminase Localization in the Astrocyte Mitochondria Using Kinetic Approach

Authors: N. V. Kazmiruk, Y. R. Nartsissov

Abstract:

Phosphate activated glutaminase (GA, E.C. 3.5.1.2) plays a key role in glutamine/glutamate homeostasis in the mammalian brain, catalyzing the hydrolytic deamidation of glutamine to glutamate and ammonium ions. GA is mainly localized in mitochondria, where it exists in a catalytically active form on the inner mitochondrial membrane (IMM) and in a soluble form, which is supposed to be dormant. At present, the exact localization of the membrane glutaminase active site remains a controversial and unresolved issue. The first hypothesis, called c-side localization, suggests that the catalytic site of GA faces the inter-membrane space and that the products of the deamidation reaction have immediate access to cytosolic metabolism. According to the alternative m-side localization hypothesis, GA orients toward the matrix, making glutamate and ammonium directly available for tricarboxylic acid cycle metabolism in the mitochondria. In our study, we used a multi-compartment kinetic approach to simulate the metabolism of glutamate and glutamine in the astrocytic cytosol and mitochondria. We used the physiologically important ratio between the glutamine concentration inside the mitochondrial matrix [Gln_mit] and that in the cytosol [Gln_cyt] as a marker for the precise functioning of the system. Since this ratio directly depends on the flow parameters of the mitochondrial glutamine carrier (MGC), the key observation was the dependence of the [Gln_mit]/[Gln_cyt] ratio on the maximal velocity of the MGC at different initial concentrations of mitochondrial glutamate. Another important task was to observe the same dependence at different inhibition constants of the soluble GA. The simulation results confirmed the experimental c-side localization hypothesis, in which the glutaminase active site faces the outer surface of the IMM. Moreover, for such a localization of the enzyme, a 3-fold decrease in ammonium production was predicted.
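A minimal sketch of the multi-compartment idea, assuming Michaelis-Menten transport by the MGC and first-order consumption by the membrane GA; all rate constants are illustrative, not the paper's values, but the sketch shows how the [Gln_mit]/[Gln_cyt] ratio tracks the carrier's maximal velocity:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal two-compartment sketch: cytosolic glutamine is supplied at a
# constant rate, carried into the matrix by the MGC (Michaelis-Menten),
# and consumed there by glutaminase (first order). The steady-state
# [Gln_mit]/[Gln_cyt] ratio is then read off for each carrier Vmax.
def rhs(t, y, vmax_mgc, km_mgc=2.0, k_ga=0.5, supply=0.4):
    gln_cyt, gln_mit = y
    transport = vmax_mgc * gln_cyt / (km_mgc + gln_cyt)   # MGC flux
    d_cyt = supply - transport                            # cytosol balance
    d_mit = transport - k_ga * gln_mit                    # matrix balance
    return [d_cyt, d_mit]

for vmax in (0.5, 1.0, 2.0):
    sol = solve_ivp(rhs, (0.0, 500.0), [1.0, 0.1], args=(vmax,), rtol=1e-8)
    gln_cyt, gln_mit = sol.y[:, -1]
    print(f"Vmax={vmax}: [Gln_mit]/[Gln_cyt] = {gln_mit / gln_cyt:.2f}")
```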

Keywords: glutamate metabolism, glutaminase, kinetic approach, mitochondrial membrane, multi-compartment modeling

Procedia PDF Downloads 100
274 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities

Authors: Anudeep Appe, Bhanu Poluparthi, Lakshmi Kasivajjula, Udai Mv, Sobha Bagadi, Punya Modi, Aditya Singh, Hemanth Gunupudi, Spenser Troiano, Jeff Paul, Justin Stovall, Justin Yamamoto

Abstract:

The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework that helps identify the factors impacting the market share of a healthcare provider facility or hospital (from here on termed a facility) is of key importance. This pilot study aims at developing a data-driven machine learning regression framework that aids strategists in formulating key decisions to improve the facility's market share, which in turn improves the quality of healthcare services. The US (United States) healthcare business is chosen for the study, and data spanning 60 key facilities in Washington State over about 3 years of history are considered. In the current analysis, market share is defined as the ratio of the facility's encounters to the total encounters among the group of potential competitor facilities. The study proposes a two-pronged approach: competitor identification, followed by a regression approach to evaluate and predict market share. A model-agnostic technique, SHAP, is leveraged to quantify the relative importance of the features impacting market share. Typical techniques in the literature for quantifying the degree of competitiveness among facilities use an empirical method to calculate a competitive factor that reflects the severity of competition. The proposed method instead identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust because it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of the facilities, along with a placeholder to factor in various business rules (for example, quantifying patient exchanges, provider references, and sister facilities). Multiple groups of competitors among the facilities are identified. Leveraging the identified competitors, a Random Forest regression model is developed and fine-tuned to predict market share. To identify the key drivers of market share at an overall level, the permutation feature importance of the attributes is calculated. For the relative quantification of features at a facility level, SHAP (SHapley Additive exPlanations), a model-agnostic explainer, is incorporated; this helps to identify and rank the attributes at each facility that impact its market share. The approach thus amalgamates two popular and efficient modeling practices, viz. machine learning with graphs and tree-based regression techniques, to reduce bias. With these, we help drive strategic business decisions.
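A minimal sketch of the regression-and-explanation step, assuming scikit-learn and the shap package; the features and data below are synthetic placeholders, not the facility dataset:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Minimal sketch: fit a Random Forest to predict market share, rank
# overall drivers with permutation importance, and attribute per-facility
# contributions with SHAP.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                       # facility features
y = 0.5 * X[:, 0] - 0.2 * X[:, 2] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Overall key drivers: permutation feature importance
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("permutation importances:", imp.importances_mean.round(3))

# Facility-level attribution: SHAP values, one row per facility
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("SHAP values for first facility:", shap_values[0].round(3))
```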

Keywords: competition, DAGs, facility, healthcare, machine learning, market share, random forest, SHAP

Procedia PDF Downloads 75
273 Study of a Lean Premixed Combustor: A Thermo Acoustic Analysis

Authors: Minoo Ghasemzadeh, Rouzbeh Riazi, Shidvash Vakilipour, Alireza Ramezani

Abstract:

In this study, the thermoacoustic oscillations of a lean premixed combustor have been investigated, and a one-dimensional code was developed for this purpose. The linearized equations of motion are solved for perturbations with time dependence e^(iωt). Two flame models are considered, and the effects of mean flow and boundary conditions are also investigated. After manipulating the flame heat-release equation together with the equations for the flow perturbations within the main components of the combustor model (plenum, premix duct, and combustion chamber), and applying proper boundary conditions between the components, a system of eight homogeneous equations is obtained. This simplification for the main components of the combustor model is convenient since low-frequency acoustic waves are not affected by bends; moreover, some elements of the combustor are smaller than the wavelength of the propagated acoustic perturbations. A convection time is also introduced to characterize the time required for the acoustic velocity fluctuations to travel from the point of injection to the flame front in the combustion chamber. The influence of an extended flame model on the acoustic frequencies of the combustor was also investigated by including the effect of flame speed, as a function of the equivalence-ratio perturbation, on the rate of flame heat release. The above system of equations has an associated eigenvalue equation with complex roots: the sign of the imaginary part of a root determines whether the disturbance grows or decays, and the real part gives the frequency of the mode. The results show reasonable agreement between the dominant frequencies predicted by the present model and those calculated in previous related studies.
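
As a concrete illustration of the modal search, consider the sketch below: with time dependence e^(iωt), nontrivial solutions of the homogeneous system M(ω)x = 0 exist only where det M(ω) = 0, so mode finding reduces to locating complex roots. A toy one-term dispersion relation stands in for the determinant of the study's 8×8 system; the sound speed, duct length, and coupling term are all illustrative placeholders.

```python
# Toy modal search: with time dependence e^(i*w*t), nontrivial solutions of
# M(w) x = 0 require det M(w) = 0. The one-term dispersion relation below is
# an illustrative stand-in for the study's 8x8 combustor determinant.
import numpy as np
from scipy.optimize import newton

c, L = 340.0, 0.5   # sound speed (m/s) and duct length (m): placeholder values

def dispersion(w):
    # Closed-duct mode equation with a small imaginary coupling term.
    return np.sin(w * L / c) - 0.05j

w = newton(dispersion, x0=2100.0 + 0j, tol=1e-12)  # complex root search
print(f"mode frequency = {w.real / (2 * np.pi):.1f} Hz, Im(w) = {w.imag:.2f}")
print("growing" if w.imag < 0 else "decaying", "disturbance for e^(i*w*t)")
```

Under the e^(iωt) convention, a negative imaginary part means the perturbation amplitude grows in time, exactly the growth/decay criterion stated in the abstract.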

Keywords: combustion instability, dominant frequencies, flame speed, premixed combustor

Procedia PDF Downloads 367
272 An Intelligent Controller Augmented with Variable Zero Lag Compensation for Antilock Braking System

Authors: Benjamin Chijioke Agwah, Paulinus Chinaenye Eze

Abstract:

The antilock braking system (ABS) is one of the important contributions of the automobile industry, designed to ensure road safety by keeping vehicles steerable and stable during emergency braking. This paper presents a wheel-slip-based intelligent controller with variable zero lag compensation for ABS. The requirement is to achieve fast, accurate wheel-slip tracking during hard braking, eliminate chattering, and improve transient and steady-state performance, while shortening the stopping distance using an effective braking torque below the maximum allowable torque to bring the braking vehicle to a stop. The dynamics of a vehicle braking from a velocity of 30 m s⁻¹ on a straight line were derived and modelled in the MATLAB/Simulink environment to represent a conventional ABS without a controller. Simulation results indicated that the system without a controller was unable to track the desired wheel slip, and the stopping distance was 135.2 m. Hence, an intelligent controller based on fuzzy logic (FLC) was designed, with a variable zero lag compensator (VZLC) added to enhance the performance of the FLC control variable by eliminating steady-state error and providing improved bandwidth to suppress the effects of high-frequency noise such as chattering during braking. The simulation results showed that the FLC-VZLC provided fast tracking of the desired wheel slip, eliminated chattering, and reduced the stopping distance by 70.5% (39.92 m), 63.3% (49.59 m), 57.6% (57.35 m) and 50% (69.13 m) on dry, wet, cobblestone and snow road surfaces, respectively. Overall, the proposed system used an effective braking torque below the maximum allowable braking torque to achieve efficient wheel-slip tracking and robust control performance on different road surfaces.
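
For orientation, the sketch below implements a standard quarter-car braking model with wheel-slip tracking. A crude integral torque rule stands in for the paper's FLC-VZLC controller, and the vehicle parameters and Burckhardt-type friction curve are illustrative values, not the paper's.

```python
# Minimal quarter-car ABS sketch: track a desired wheel slip during braking.
# The integral torque rule below is a crude placeholder for the FLC-VZLC;
# all vehicle parameters and the friction curve are illustrative.
import numpy as np

m, R, J, g = 342.0, 0.33, 1.13, 9.81  # quarter mass (kg), wheel radius (m),
                                      # wheel inertia (kg m^2), gravity (m/s^2)
v = 30.0                              # initial speed, 30 m/s as in the paper
omega = v / R                         # initially rolling without slip
Tb, x, dt = 0.0, 0.0, 1e-3
lambda_ref = 0.15                     # desired slip, near peak friction (dry)

def mu(lam):
    # Burckhardt-type friction-slip curve for dry asphalt (illustrative fit).
    return 1.28 * (1.0 - np.exp(-24.0 * lam)) - 0.52 * lam

while v > 0.5:
    lam = max((v - omega * R) / max(v, 1e-6), 0.0)  # longitudinal wheel slip
    Fx = mu(lam) * m * g                            # tyre braking force
    # Integral slip-tracking rule, clipped below a maximum allowable torque.
    Tb = float(np.clip(Tb + 20000.0 * (lambda_ref - lam) * dt, 0.0, 2000.0))
    x += v * dt                                     # distance travelled
    v += -(Fx / m) * dt                             # vehicle deceleration
    omega = max(omega + ((Fx * R - Tb) / J) * dt, 0.0)  # wheel spin dynamics

print(f"stopping distance ≈ {x:.1f} m")
```

With friction near unity, the result lands in the ~40 m range, the same order as the controlled dry-road stopping distance reported in the abstract.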

Keywords: ABS, fuzzy logic controller, variable zero lag compensator, wheel slip tracking

Procedia PDF Downloads 132
271 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to characterizing and optimizing the effective surface area, pore size, and porosity of materials for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants, and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract information on adsorbate and adsorbent properties from the isotherm data, but they typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high-surface-area solids; it assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers; these additional layers do not interact with the first layer, and their energetics equal those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas is identical for all adsorption sites, so the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. More accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique, flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data: dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both the dynamic and the measurement residual squared errors. The methodology has been validated by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K, and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modelling provides information on the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes), and the thermodynamic variations (Gibbs free energies) of the adsorption sites.
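
For reference, the sketch below recovers constant Langmuir and BET parameters from synthetic isotherm data via the classical linearized forms; the FLS-PVLR method described above generalizes this single least-squares fit by letting the parameters drift smoothly with pressure. All data and parameter values here are synthetic.

```python
# Constant-parameter Langmuir and BET fits via their linearized forms.
# Synthetic data; the FLS-PVLR extension would replace each single fit
# with smoothly pressure-varying parameters.
import numpy as np

# Synthetic isotherm: true Langmuir with qm = 2.0 mmol/g, K = 0.8 /bar.
P = np.linspace(0.05, 1.0, 20)
q = 2.0 * 0.8 * P / (1 + 0.8 * P) + np.random.default_rng(1).normal(0, 0.01, 20)

# Linearized Langmuir: P/q = 1/(K*qm) + P/qm  ->  slope = 1/qm.
slope, icept = np.polyfit(P, P / q, 1)
qm, K = 1 / slope, slope / icept
print(f"Langmuir fit: qm = {qm:.2f} mmol/g, K = {K:.2f} /bar")

# Linearized BET: x/(q*(1-x)) = 1/(qm*C) + (C-1)/(qm*C) * x, with x = P/P0.
P0 = 2.0                                  # assumed saturation pressure (bar)
x = P / P0
s_bet, i_bet = np.polyfit(x, x / (q * (1 - x)), 1)
qm_bet, C = 1 / (s_bet + i_bet), 1 + s_bet / i_bet
print(f"BET fit: qm = {qm_bet:.2f} mmol/g, C = {C:.1f}")
```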

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 215
270 Field Study on Thermal Performance of a Green Office in Bangkok, Thailand: A Possibility of Increasing Temperature Set-Points

Authors: T. Sikram, M. Ichinose, R. Sasaki

Abstract:

In the tropics, the indoor thermal environment is usually maintained in cooling mode to provide comfort all year. Actual indoor thermal performance sometimes differs from the standard or from the original design because of operation, maintenance, and utilization. Field studies of the thermal environment in green buildings are still limited in this region, even as the number of green buildings continues to increase. This study aims to clarify thermal performance and subjective perception in a green building by testing different temperature set-points. A Thai green office was investigated twice, in October 2018 and in May 2019. Indoor environment variables (temperature, relative humidity, and air velocity) were recorded continuously. The temperature set-point, normally 23 °C, was changed to 24 °C and 25 °C. This range of set-points produced average room temperatures of 22.7 to 24.6 °C and average relative humidities of 55% to 62%. The thermal environment shifted slightly outside the ASHRAE comfort zone when the set-point was increased. Based on the thermal sensation votes, votes for feeling cold decreased by 30% and 18% for the +1 °C and +2 °C changes, respectively. The predicted mean vote (PMV) shows that most of the calculated median values were negative; the values approached the optimal neutral value (0) when the set-point was 25 °C. The neutral temperature decreased slightly at the warmer set-points. Reports of building-related symptoms also decreased continuously as the temperature was raised, and symptoms associated with the cooler condition received more votes than those associated with the warmer conditions. In sum, for this green office, the temperature set-point could feasibly be raised by 1 °C (to 24 °C) to reduce cold sensitivity, discomfort, and symptoms. All results support the policy of operating this office at a warmer temperature to make it "a better green building".
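
For context, the sketch below computes PMV and PPD with the standard Fanger/ISO 7730 model for the three set-points; the assumed metabolic rate, clothing insulation, air speed, and humidity are illustrative office values, not the study's measurements.

```python
# PMV/PPD per the Fanger model (ISO 7730). Inputs below are illustrative
# office values, not the study's measured data.
import math

def pmv_ppd(ta, tr, vel, rh, met=1.1, clo=0.6, wme=0.0):
    pa = rh * 10 * math.exp(16.6536 - 4030.183 / (ta + 235))  # vapour pressure, Pa
    icl = 0.155 * clo            # clothing insulation, m2.K/W
    m = met * 58.15              # metabolic rate, W/m2
    mw = m - wme * 58.15
    fcl = 1 + 1.29 * icl if icl <= 0.078 else 1.05 + 0.645 * icl
    hcf = 12.1 * math.sqrt(vel)  # forced convection coefficient
    taa, tra = ta + 273, tr + 273
    # Iterate for the clothing surface temperature.
    tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)
    p1 = icl * fcl
    p2, p3, p4 = p1 * 3.96, p1 * 100, p1 * taa
    p5 = 308.7 - 0.028 * mw + p2 * (tra / 100) ** 4
    xn, xf = tcla / 100, tcla / 50
    while abs(xn - xf) > 0.00015:
        xf = (xf + xn) / 2
        hc = max(hcf, 2.38 * abs(100 * xf - taa) ** 0.25)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100 + p3 * hc)
    tcl = 100 * xn - 273
    # Heat-loss components.
    hl1 = 3.05e-3 * (5733 - 6.99 * mw - pa)           # skin diffusion
    hl2 = 0.42 * (mw - 58.15) if mw > 58.15 else 0.0  # sweating
    hl3 = 1.7e-5 * m * (5867 - pa)                    # latent respiration
    hl4 = 0.0014 * m * (34 - ta)                      # dry respiration
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100) ** 4)   # radiation
    hl6 = fcl * hc * (tcl - ta)                       # convection
    ts = 0.303 * math.exp(-0.036 * m) + 0.028
    pmv = ts * (mw - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)
    ppd = 100 - 95 * math.exp(-0.03353 * pmv ** 4 - 0.2179 * pmv ** 2)
    return pmv, ppd

for setpoint in (23, 24, 25):
    pmv, ppd = pmv_ppd(ta=setpoint, tr=setpoint, vel=0.1, rh=60)
    print(f"set-point {setpoint} °C: PMV = {pmv:+.2f}, PPD = {ppd:.0f}%")
```

Consistent with the abstract, the PMV moves from negative (cool) toward neutral as the set-point rises under these assumed conditions.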

Keywords: thermal environment, green office, temperature set-point, comfort

Procedia PDF Downloads 103
269 Reinforcing Effects of Natural Micro-Particles on the Dynamic Impact Behaviour of Hybrid Bio-Composites Made of Short Kevlar Fibers Reinforced Thermoplastic Composite Armor

Authors: Edison E. Haro, Akindele G. Odeshi, Jerzy A. Szpunar

Abstract:

Hybrid bio-composites are developed for use in protective armor through the positive hybridization offered by reinforcing high-density polyethylene (HDPE) with short Kevlar fibers and palm wood micro-fillers. The manufacturing process involved a combination of extrusion and compression molding techniques. The mechanical behavior of Kevlar-fiber-reinforced HDPE with and without palm wood filler additions is compared, and the effect of the weight fraction of the added palm wood micro-fillers is determined. The Young's modulus was found to increase as the weight fraction of organic micro-particles increased; however, the flexural strength decreased with increasing weight fraction of added micro-fillers. The interfacial interactions between the components were investigated using scanning electron microscopy, and the influence of the size, random alignment, and distribution of the natural micro-particles was evaluated. Ballistic impact and dynamic shock loading tests were performed to determine the optimum proportions of short Kevlar fibers and organic micro-fillers needed to improve the impact strength of the HDPE. The results indicate positive hybridization through deposition of the organic micro-fillers on the surface of the short Kevlar fibers reinforcing the thermoplastic matrix, leading to enhanced mechanical strength and dynamic impact behavior. Therefore, these hybrid bio-composites are promising materials for applications involving high-velocity impacts.

Keywords: hybrid bio-composites, organic micro-fillers, dynamic shock loading, ballistic impacts, energy absorption

Procedia PDF Downloads 100