Search results for: physico-chemical parameters
6293 Twisted Bilayer Crescent Chiral Metasurface
Authors: Semere Araya Asefa
Abstract:
We describe twisted bilayer crescent metasurfaces in which the optical properties of the two layers are coupled, enhancing circular dichroism. The interactions between the metasurface layers give rise to circular dichroism, and we evaluate the parameters that affect the chiroptical response of the crescent.
Keywords: chiroptical response, chiral metasurface, circular dichroism, chiral sensing
Procedia PDF Downloads 80
6292 Comprehensive Analysis and Optimization of Alkaline Water Electrolysis for Green Hydrogen Production: Experimental Validation, Simulation Study, and Cost Analysis
Authors: Umair Ahmed, Muhammad Bin Irfan
Abstract:
This study focuses on the design and optimization of an alkaline water electrolyser for the production of green hydrogen. The aim is to enhance the durability and efficiency of this technology while simultaneously reducing the cost associated with the production of green hydrogen. The experimental results obtained from the alkaline water electrolyser are compared with simulated results from Aspen Plus software, allowing a comprehensive analysis and evaluation. To achieve the aforementioned goals, several design and operational parameters are investigated. The electrode material, electrolyte concentration, and operating conditions are carefully selected to maximize the efficiency and durability of the electrolyser. Additionally, cost-effective materials and manufacturing techniques are explored to decrease the overall production cost of green hydrogen. The experimental setup includes a carefully designed alkaline water electrolyser in which various performance parameters (such as hydrogen production rate, current density, and voltage) are measured. These experimental results are then compared with simulated data obtained using Aspen Plus software. The simulation model is developed based on fundamental principles and validated against the experimental data. The comparison between experimental and simulated results provides valuable insight into the performance of an alkaline water electrolyser. It helps to identify the areas where improvements can be made, both in design and in operation, to enhance the durability and efficiency of the system. Furthermore, the simulation results allow a cost analysis, providing an estimate of the overall production cost of green hydrogen. This study aims to develop a comprehensive understanding of alkaline water electrolysis technology. The findings of this research can contribute to the development of more efficient and durable electrolyser technology while reducing the associated cost.
Ultimately, these advancements can pave the way for a more sustainable and economically viable hydrogen economy.
Keywords: sustainable development, green energy, green hydrogen, electrolysis technology
Procedia PDF Downloads 90
6291 Analysis of Energy Flows as an Approach for the Formation of a Monitoring System in Sustainable Regional Development
Authors: Inese Trusina, Elita Jermolajeva
Abstract:
Global challenges require a transition from the existing linear economic model to a model that considers nature as a life support system for development on the way to social well-being, within the ecological economics paradigm. The article presents basic definitions for the development of a formalized description of sustainable development monitoring. It provides examples of calculating the monitoring parameters for the Baltic Sea region countries and their primary interpretation.
Keywords: sustainability, development, power, ecological economics, regional economic, monitoring
Procedia PDF Downloads 120
6290 The Effect of Eight Weeks of Technical Training Performed with Restricted Visual Input on Postural Sway and Technical Parameters in 10-12-Year-Old Soccer Players
Authors: Nurtekin Erkmen, Turgut Kaplan, Halil Taskin, Ahmet Sanioglu, Gokhan Ipekoglu
Abstract:
The aim of this study was to determine the effects of an 8-week soccer-specific technical training program with limited visual perception on postural control and technical parameters in 10-12-year-old soccer players. Subjects were 24 male young soccer players (age: 11.00 ± 0.56 years, height: 150.5 ± 4.23 cm, body weight: 41.49 ± 7.56 kg), randomly divided into two groups: training and control. Balance performance was measured with the Biodex Balance System (BBS). Short pass, speed dribbling, 20 m speed with ball, ball control, and juggling tests were used to measure the players' technical performance with a ball. Subjects performed soccer training 3 times per week for 8 weeks. In each session, the training group committed soccer-specific technical drills for 20 min with limited visual perception and the control group with normal visual perception. Data were analyzed with the t-test for independent samples and the Mann-Whitney U test between groups, and with the paired t-test and Wilcoxon test between pre- and post-tests. No significant difference was found between the training and control groups after training in balance scores with eyes open or eyes closed, or in the limits of stability (LOS) test (p>0.05). After eight weeks of training, there was no significant difference in balance score with eyes open for either the training or the control group (p>0.05). Balance scores decreased in both the training and control groups after the training (p<0.05). The completion time of the LOS test shortened in both groups after training (p<0.05). The training developed speed dribbling performance in the training group (p<0.05). On the other hand, players' performance in both groups improved in 20 m speed with a ball after the eight-week training (p<0.05).
In conclusion, the results of this study indicate that soccer-specific training with limited visual perception may not improve balance performance in 10-12-year-old soccer players, but it does develop speed dribbling performance.
Keywords: young soccer players, visual perception, postural control, technical
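As an illustrative sketch of the statistical workflow described in this abstract, the snippet below runs an independent-samples t-test between groups and a paired Wilcoxon signed-rank test between pre- and post-test scores. The data are hypothetical placeholders, not the study's measurements.

```python
# Hypothetical speed-dribbling times (s): between-group and pre/post comparisons.
from scipy import stats

training_post = [12.1, 11.8, 12.4, 11.9, 12.0, 11.7, 12.3, 11.6, 12.2, 11.9, 12.0, 11.8]
control_post  = [12.6, 12.4, 12.8, 12.5, 12.7, 12.3, 12.9, 12.2, 12.6, 12.5, 12.4, 12.7]

# Between-group comparison (independent samples t-test)
t_stat, p_between = stats.ttest_ind(training_post, control_post)

# Within-group pre/post comparison (paired, non-parametric Wilcoxon test)
training_pre = [13.0, 12.7, 13.2, 12.9, 13.1, 12.6, 13.3, 12.5, 13.0, 12.8, 12.9, 12.7]
w_stat, p_within = stats.wilcoxon(training_pre, training_post)

print(p_between < 0.05, p_within < 0.05)
```

A significance threshold of p < 0.05 matches the criterion used throughout the abstract.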
Procedia PDF Downloads 469
6289 Impact of Varying Malting and Fermentation Durations on Specific Chemical, Functional Properties, and Microstructural Behaviour of Pearl Millet and Sorghum Flour Using Response Surface Methodology
Authors: G. Olamiti, T. K. Takalani, D. Beswa, A. I. O. Jideani
Abstract:
The study investigated the effects of malting and fermentation times on selected chemical and functional properties and the microstructural behaviour of Agrigreen and Babala pearl millet cultivar flours and sorghum flour using response surface methodology (RSM). A Central Composite Rotatable Design (CCRD) was performed on two independent variables, malting and fermentation times (h), at intervals of 24, 48, and 72 h. The dependent parameters, pH, titratable acidity (TTA), water absorption capacity (WAC), oil absorption capacity (OAC), bulk density (BD), dispersibility, and the microstructural behaviour of the flours, showed significant differences at p < 0.05 with malting and fermentation time. Babala flour exhibited a higher pH value of 4.78 at 48 h malting and 81.94 h fermentation times. Agrigreen flour showed a higher TTA value of 0.159% at 81.94 h malting and 48 h fermentation times. WAC content was also higher in malted and fermented Babala flour, at 2.37 ml g⁻¹ for 81.94 h malting and 48 h fermentation time. Sorghum flour exhibited the least OAC content, 1.67 ml g⁻¹, at 14 h malting and 48 h fermentation times. Agrigreen flour recorded the least bulk density, 0.53 g ml⁻¹, for 72 h malting and 24 h fermentation time. Sorghum flour exhibited the highest dispersibility, 56.34%, after 24 h malting and 72 h fermentation time. The response surface plots showed that increased malting and fermentation times influenced the dependent parameters. The microstructure of the malted and fermented pearl millet and sorghum flours showed isolated, oval, spherical, or polygonal to smooth surfaces. The optimal malting and fermentation times were 32.24 h and 63.32 h for Agrigreen, 35.18 h and 34.58 h for Babala, and 36.75 h and 47.88 h for sorghum, with a high desirability of 1.00. Validation of the optimum malting and fermentation times (h) against the dependent parameters agreed well with the experimental values.
Food processing companies can use the study's findings to improve food processing and quality.
Keywords: pearl millet, malting, fermentation, microstructural behaviour
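A minimal sketch of the second-order response-surface model that underlies CCRD/RSM, y = b0 + b1·x1 + b2·x2 + b11·x1² + b22·x2² + b12·x1·x2, with x1 = malting time and x2 = fermentation time in coded units. The design and response values below are synthetic, for illustration only.

```python
# Fit a full quadratic response surface by least squares on a coded design.
import numpy as np

# Coded design points (a face-centred composite design as an example)
x1 = np.array([-1, -1, 1, 1, -1, 1, 0, 0, 0, 0, 0])
x2 = np.array([-1, 1, -1, 1, 0, 0, -1, 1, 0, 0, 0])

# Synthetic response generated from known coefficients (no noise)
true_b = np.array([4.5, 0.3, -0.2, -0.1, 0.05, 0.15])  # b0, b1, b2, b11, b22, b12
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
y = X @ true_b

# Least-squares fit recovers the coefficients on this noiseless example
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(b_hat, 3))
```

In practice the fitted surface is then maximized (or minimized) over the design region to obtain optimum processing times, as the abstract reports.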
Procedia PDF Downloads 71
6288 The Budget Impact of the DISCERN™ Diagnostic Test for Alzheimer’s Disease in the United States
Authors: Frederick Huie, Lauren Fusfeld, William Burchenal, Scott Howell, Alyssa McVey, Thomas F. Goss
Abstract:
Alzheimer’s Disease (AD) is a degenerative brain disease characterized by memory loss and cognitive decline that presents a substantial economic burden for patients and health insurers in the US. This study evaluates the payer budget impact of the DISCERN™ test in the diagnosis and management of patients with symptoms of dementia evaluated for AD. DISCERN™ comprises three assays that assess critical factors related to AD that regulate memory, formation of synaptic connections among neurons, and levels of amyloid plaques and neurofibrillary tangles in the brain, and can provide a quicker, more accurate diagnosis than tests in the current diagnostic pathway (CDP). An Excel-based model with a three-year horizon was developed to assess the budget impact of DISCERN™ compared with the CDP in a Medicare Advantage plan with 1M beneficiaries. Model parameters were identified through a literature review and were verified through consultation with clinicians experienced in the diagnosis and management of AD. The model assesses direct medical costs/savings for patients in the following categories:
• Diagnosis: costs of diagnosis using DISCERN™ and the CDP.
• False Negative (FN) diagnosis: incremental cost of care avoidable with a correct AD diagnosis and appropriately directed medication.
• True Positive (TP) diagnosis: AD medication costs; the cost of a later TP diagnosis with the CDP versus DISCERN™ in the year of diagnosis; and savings from the delay in AD progression due to appropriate AD medication in patients who are correctly diagnosed after a FN diagnosis.
• False Positive (FP) diagnosis: cost of AD medication for patients who do not have AD.
A one-way sensitivity analysis was conducted to assess the effect of varying key clinical and cost parameters ±10%. An additional scenario analysis was developed to evaluate the impact of individual inputs.
In the base scenario, DISCERN™ is estimated to decrease costs by $4.75M over three years, equating to approximately $63.11 saved per test per year for a cohort followed over three years. While the diagnosis cost is higher with DISCERN™ than with CDP modalities, this cost is offset by the higher overall costs associated with the CDP due to the longer time needed to receive a TP diagnosis and the larger number of patients who receive a FN diagnosis and progress more rapidly than if they had received appropriate AD medication. The sensitivity analysis shows that the three parameters with the greatest impact on savings are: reduced sensitivity of DISCERN™, improved sensitivity of the CDP, and a reduction in the percentage of disease progression that is avoided with appropriate AD medication. A scenario analysis in which DISCERN™ reduces utilization of computed tomography from 21% in the base case to 16%, magnetic resonance imaging from 37% to 27%, and cerebrospinal fluid biomarker testing, positron emission tomography, electroencephalograms, and polysomnography testing from 4%, 5%, 10%, and 8%, respectively, in the base case to 0%, results in an overall three-year net savings of $14.5M. DISCERN™ improves the rate of accurate, definitive diagnosis of AD earlier in the disease and may generate savings for Medicare Advantage plans.
Keywords: Alzheimer’s disease, budget, dementia, diagnosis
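A heavily simplified sketch of the kind of per-category budget-impact arithmetic this abstract describes: extra diagnosis cost of a new test traded off against savings from avoided false-negative care costs. All inputs are hypothetical placeholders, not the study's model parameters.

```python
# Toy net-budget-impact calculation (positive result = new test saves money).
def net_budget_impact(n_tested, cost_new_test, cost_cdp,
                      fn_rate_cdp, fn_rate_new, cost_per_fn):
    diagnosis_delta = n_tested * (cost_new_test - cost_cdp)  # extra diagnosis spend
    fn_avoided = n_tested * (fn_rate_cdp - fn_rate_new)      # fewer false negatives
    fn_savings = fn_avoided * cost_per_fn                    # avoided FN care costs
    return fn_savings - diagnosis_delta

# Hypothetical inputs: 25,000 tests, $700 new test vs $500 CDP workup,
# FN rate 30% vs 10%, $2,000 of avoidable care cost per false negative.
savings = net_budget_impact(25_000, 700, 500, 0.30, 0.10, 2_000)
print(savings)            # net savings over the cohort
print(savings / 25_000)   # per-test savings
```

The published model is of course richer (TP timing, FP medication costs, a three-year horizon), but the sign of such a trade-off is what the sensitivity analysis probes.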
Procedia PDF Downloads 138
6287 Effect of Floods on Water Quality: A Global Review and Analysis
Authors: Apoorva Bamal, Agnieszka Indiana Olbert
Abstract:
Floods are known to be among the most devastating hydro-climatic events, impacting a wide range of stakeholders through environmental, social, and economic losses. Flood hazards vary in degree and strength with inundation duration and level of impact. Among the domains impacted by floods, environmental degradation in the form of water quality deterioration is one of the most affected yet least highlighted across the world. The degraded water quality is caused by numerous natural and anthropogenic factors, both point and non-point sources of pollution. It is therefore essential to understand the nature and source of water pollution due to flooding. The major impact of floods is not only on physico-chemical water quality parameters but also on biological elements, with a vivid influence on the aquatic ecosystem. This deteriorated water quality affects many water-use categories, viz. agriculture, drinking water, aquatic habitat, and miscellaneous services requiring appropriate water quality. This study identifies, reviews, evaluates, and assesses multiple studies conducted across the world to determine the impact of floods on water quality. With a detailed statistical analysis of the most relevant studies, it provides a synopsis of the methods used to assess the impact of floods on water quality in different geographies and identifies the remaining gaps. According to the majority of the studies, different flood magnitudes have varied impacts on water quality parameters, leading to values that are either increased or decreased relative to those recommended for the various categories. There is also an evident shift of biological elements in the impacted waters, leading to a change in the phenology and inhabitants of the affected water body.
The physical, chemical, and biological degradation of water quality by floods depends on the flood's duration, extent, magnitude, and flow direction. This research therefore provides an overview of the multiple impacts of floods on water quality, along with a roadmap toward an efficient and uniform linkage of floods and impacted water quality dynamics.
Keywords: floods, statistical analysis, water pollution, water quality
Procedia PDF Downloads 81
6286 Leptin Levels in Cord Blood and Their Associations with the Birth of Small, Large and Appropriate for Gestational Age Infants in Southern Sri Lanka
Authors: R. P. Hewawasam, M. H. A. D. de Silva, M. A. G. Iresha
Abstract:
In recent years, childhood obesity has increased to pan-epidemic proportions, along with a concomitant increase in obesity-associated morbidity. Birth weight is an important determinant of later adult health, with neonates at both ends of the birth weight spectrum at risk of future health complications. Consequently, infants who are born large for gestational age (LGA) are more likely to be obese in childhood and adolescence and are at risk of cardiovascular and metabolic complications later in life. Adipose tissue plays a role in linking events in fetal growth to the subsequent development of adult diseases. In addition to its role as a storage depot for fat, adipose tissue produces and secretes a number of hormones important in modulating metabolism and energy homeostasis. Cord blood leptin level has been positively correlated with fetal adiposity at birth. It is established that Asians have lower skeletal muscle mass, lower bone mineral content, and excess body fat for a given body mass index, indicating a genetic predisposition to obesity. To our knowledge, no study has been conducted in Sri Lanka to determine the relationship between the adipocytokine profile in cord blood and anthropometric parameters in newborns. Thus, the objective of this study was to establish this relationship for the Sri Lankan population in order to implement awareness programs to minimize childhood obesity in the future. Umbilical cord blood was collected from 90 newborns (male 40, female 50; gestational age 35-42 weeks) after double clamping the umbilical cord before separation of the placenta, and the concentration of leptin was measured by ELISA. Anthropometric parameters of the newborn, such as birth weight, length, ponderal index, and occipital-frontal, chest, hip, and calf circumferences, were measured.
Pearson’s correlation was used to assess the relationship between leptin and anthropometric parameters, while the Mann-Whitney U test was used to assess the differences in cord blood leptin levels between small for gestational age (SGA), appropriate for gestational age (AGA), and LGA infants. There was a significant difference (P < 0.05) between the cord blood leptin concentrations of LGA infants (12.67 ng/mL ± 2.34) and AGA infants (7.10 ng/mL ± 0.90). However, no significant difference was observed between the leptin levels of SGA infants (8.86 ng/mL ± 0.70) and AGA infants. In both male and female neonates, umbilical leptin levels showed significant positive correlations (P < 0.05) with birth weight, pre-pregnancy maternal weight, and pre-pregnancy BMI among infants of large and appropriate gestational ages. The increased leptin concentrations in the cord blood of large for gestational age infants suggest that leptin may be involved in regulating fetal growth. The leptin concentration of the Sri Lankan population did not deviate significantly from published data for Asian populations. Fetal leptin may be an important predictor of neonatal adiposity; however, interventional studies are required to assess its impact on the possible risk of childhood obesity.
Keywords: appropriate for gestational age, childhood obesity, leptin, anthropometry
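An illustrative sketch of the two analyses named in this abstract, a Mann-Whitney U test between LGA and AGA leptin levels and a Pearson correlation of leptin with birth weight, on synthetic values that are not the study's measurements.

```python
# Group comparison and correlation, as described in the abstract (toy data).
from scipy import stats

lga_leptin = [11.2, 13.5, 12.8, 14.1, 12.0, 13.0, 11.8, 12.5]  # ng/mL, hypothetical
aga_leptin = [6.8, 7.4, 7.1, 6.5, 7.9, 7.2, 6.9, 7.5]          # ng/mL, hypothetical

# Non-parametric between-group comparison
u_stat, p_value = stats.mannwhitneyu(lga_leptin, aga_leptin, alternative="two-sided")

# Hypothetical paired leptin / birth-weight values for the correlation
leptin = [6.8, 7.4, 8.1, 9.0, 10.2, 11.5, 12.3, 13.1]
birth_weight = [2.9, 3.0, 3.2, 3.3, 3.6, 3.8, 4.0, 4.2]        # kg

r, p_corr = stats.pearsonr(leptin, birth_weight)
print(p_value < 0.05, r > 0)
```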
Procedia PDF Downloads 188
6285 Magnetic Properties of Nickel Oxide Nanoparticles in the Superparamagnetic State
Authors: Navneet Kaur, S. D. Tiwari
Abstract:
Superparamagnetism is an interesting phenomenon observed in small particles of magnetic materials. It arises from a reduction in particle size. In the superparamagnetic state, as the thermal energy overcomes the magnetic anisotropy energy, the magnetic moment vectors of the particles flip their magnetization direction between states of minimum energy. Superparamagnetic nanoparticles have attracted researchers for many applications, such as information storage, magnetic resonance imaging, biomedical applications, and sensors. For information storage, thermal fluctuations lead to loss of data, so nanoparticles should have a high blocking temperature. To achieve this, nanoparticles should have a higher magnetic moment and magnetic anisotropy constant. In this work, the magnetic anisotropy constant of an antiferromagnetic nanoparticle system is determined. Magnetic studies on nanoparticles of NiO (nickel oxide) are well reported. This antiferromagnetic nanoparticle system has a high blocking temperature and a magnetic anisotropy constant of order 10⁵ J/m³. The magnetic study of NiO nanoparticles in the superparamagnetic region is presented. NiO particles of two different sizes, i.e., 6 and 8 nm, were synthesized using the chemical route. These particles were characterized by X-ray diffraction, transmission electron microscopy, and superconducting quantum interference device magnetometry. The magnetization vs. applied magnetic field and temperature data for both samples confirm their superparamagnetic nature. The blocking temperature for the 6 and 8 nm particles is found to be 200 and 172 K, respectively. Magnetization vs. applied magnetic field data of NiO are fitted to an appropriate magnetic expression using a non-linear least squares fit method. The roles of particle size distribution and magnetic anisotropy are taken into account in the magnetization expression. The source code is written in the Python programming language.
This fitting provides the magnetic anisotropy constant for NiO and other magnetic fit parameters. The estimated particle size distribution matches well with the transmission electron micrographs. The values of the magnetic anisotropy constant for the 6 and 8 nm particles are found to be 1.42 × 10⁵ and 1.20 × 10⁵ J/m³, respectively. The obtained magnetic fit parameters are verified using the Néel model. It is concluded that the effect of magnetic anisotropy should not be ignored while studying the magnetization process of nanoparticles.
Keywords: anisotropy, superparamagnetic, nanoparticle, magnetization
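In the spirit of the Python non-linear least-squares fit this abstract describes, here is a minimal sketch fitting a Langevin-type magnetization curve with `scipy.optimize.curve_fit`. The parameter values and data are synthetic; the full analysis would additionally include a particle moment (size) distribution and terms specific to antiferromagnetic NiO.

```python
# Fit M(H) = Ms * L(a*H), where L(x) = coth(x) - 1/x is the Langevin function.
import numpy as np
from scipy.optimize import curve_fit

def magnetization(H, Ms, a):
    x = a * H
    return Ms * (1.0 / np.tanh(x) - 1.0 / x)

H = np.linspace(0.1, 5.0, 50)    # applied field (arbitrary units), avoiding H = 0
M = magnetization(H, 2.0, 1.5)   # synthetic noiseless "data" with Ms = 2.0, a = 1.5

popt, pcov = curve_fit(magnetization, H, M, p0=[1.0, 1.0])
print(np.round(popt, 3))         # recovered (Ms, a)
```

With the fitted moment and the measured blocking temperature, the anisotropy constant then follows from the Néel relaxation framework mentioned above.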
Procedia PDF Downloads 134
6284 On the Mathematical Modelling of Aggregative Stability of Disperse Systems
Authors: Arnold M. Brener, Lesbek Tashimov, Ablakim S. Muratov
Abstract:
The paper deals with a special model for coagulation kernels that introduces new control parameters into the Smoluchowski equation for binary aggregation. On the basis of this model, a new approach to evaluating the aggregative stability of disperse systems is proposed. With the help of this approach, simple estimates of the aggregative stability of various types of hydrophilic nano-suspensions have been obtained.
Keywords: aggregative stability, coagulation kernels, disperse systems, mathematical model
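A toy sketch of the binary-aggregation (Smoluchowski) dynamics underlying such models, with a constant coagulation kernel K and explicit Euler time stepping. The kernel choice, truncation size, and parameters are illustrative only, not the paper's model.

```python
# dn_k/dt = 0.5 * sum_{i+j=k} K n_i n_j  -  n_k * K * sum_j n_j  (truncated at kmax)
import numpy as np

def smoluchowski_step(n, K, dt):
    kmax = len(n)
    dn = np.zeros_like(n)
    for k in range(kmax):
        # gain: collisions of sizes (i+1) + (k-i) = k+1, i.e. indices i and k-1-i
        gain = 0.5 * sum(K * n[i] * n[k - 1 - i] for i in range(k))
        loss = n[k] * K * n.sum()
        dn[k] = gain - loss
    return n + dt * dn

n = np.zeros(50)
n[0] = 1.0                       # start from monomers only
for _ in range(200):
    n = smoluchowski_step(n, K=1.0, dt=0.01)

sizes = np.arange(1, len(n) + 1)
mass = float((sizes * n).sum())
print(round(mass, 6))            # total mass, conserved up to truncation error
```

Mass conservation is the standard sanity check for such schemes; a kernel with control parameters, as in the paper, would replace the constant K by K(i, j).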
Procedia PDF Downloads 309
6283 CFD Simulation of a Large Scale Unconfined Hydrogen Deflagration
Authors: I. C. Tolias, A. G. Venetsanos, N. Markatos
Abstract:
In the present work, CFD simulations of a large-scale open deflagration experiment are performed. A stoichiometric hydrogen-air mixture occupies a 20 m hemisphere. Two combustion models, the Eddy Dissipation Model and a multi-physics combustion model based on Yakhot’s equation for the turbulent flame speed, are compared and evaluated against the experiment. The values of the models’ critical parameters are investigated. The effect of the turbulence model is also examined: the k-ε model and an LES approach were tested.
Keywords: CFD, deflagration, hydrogen, combustion model
Procedia PDF Downloads 502
6282 Coupling Static Multiple Light Scattering Technique with the Hansen Approach to Optimize Dispersibility and Stability of Particle Dispersions
Authors: Guillaume Lemahieu, Matthias Sentis, Giovanni Brambilla, Gérard Meunier
Abstract:
Static Multiple Light Scattering (SMLS) has been shown to be a straightforward technique for the characterization of colloidal dispersions without dilution, as multiply scattered light in backscattered and transmitted modes is directly related to the concentration and size of the scatterers present in the sample. The use of SMLS for stability measurement of various dispersion types has already been widely described in the literature. Indeed, starting from a homogeneous dispersion, the variation of backscattered or transmitted light can be attributed to destabilization phenomena such as migration (sedimentation, creaming) or particle size variation (flocculation, aggregation). To investigate the dispersibility of colloidal suspensions further, an experimental set-up for “at the line” SMLS experiments has been developed to understand the impact of formulation parameters on particle size and dispersibility. The SMLS experiment is performed at a high acquisition rate (up to 10 measurements per second), without dilution, and under direct agitation. Using this experimental device, SMLS detection can be combined with the Hansen approach to optimize the dispersing and stabilizing properties of TiO₂ particles. The dispersibility and stability spheres generated are clearly separated, arguing that lower stability is not necessarily a consequence of poor dispersibility. Beyond this clarification, the combined SMLS-Hansen approach is a major step toward optimizing the dispersibility and stability of colloidal formulations by finding solvents with the best compromise between dispersing and stabilizing properties.
Such studies can be used to find better dispersion media, greener and cheaper solvents to optimize particle suspensions, reduce the content of costly stabilizing additives, or satisfy evolving product regulatory requirements in the various industrial fields that use suspensions (paints & inks, coatings, cosmetics, energy).
Keywords: dispersibility, stability, Hansen parameters, particles, solvents
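A short sketch of the Hansen distance calculation that underlies the "Hansen sphere" approach described above: a solvent lies inside a sphere of radius R0 around the (δD, δP, δH) centre when RED = Ra/R0 < 1. The HSP values below are illustrative placeholders, not measured parameters for TiO₂ or any product.

```python
# Hansen distance Ra and relative energy difference (RED) between a sphere
# centre and a candidate solvent.
def hansen_distance(d1, d2):
    dD1, dP1, dH1 = d1
    dD2, dP2, dH2 = d2
    # Standard Hansen metric: the dispersion term carries a factor of 4.
    return (4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2) ** 0.5

particle_center = (17.0, 8.0, 6.0)   # hypothetical sphere centre (δD, δP, δH), MPa^0.5
radius = 7.0                          # hypothetical interaction radius R0
solvent = (15.8, 8.8, 19.4)           # roughly ethanol-like values

ra = hansen_distance(particle_center, solvent)
red = ra / radius                     # RED < 1: inside the sphere (good solvent)
print(round(ra, 2), round(red, 2))
```

Separate dispersibility and stability spheres, as in the abstract, simply mean computing two such distances per solvent and seeking solvents inside both.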
Procedia PDF Downloads 110
6281 Numerical Investigation of Fluid Outflow through a Retinal Hole after Scleral Buckling
Authors: T. Walczak, J. K. Grabski, P. Fritzkowski, M. Stopa
Abstract:
The objectives of the study are i) to perform numerical simulations that permit an analysis of the dynamics of subretinal fluid when an implant has induced scleral intussusception and ii) to assess the impact of the physical parameters of the model on the flow rate. Computer simulations were created using the finite element method (FEM), based on a model that takes into account the interaction of a viscous fluid (subretinal fluid) with a hyperelastic body (retina). The purpose of the calculation was to investigate the dependence of the flow rate of subretinal fluid through a hole in the retina on different factors, such as the viscosity of the subretinal fluid, the material parameters of the retina, and the offset of the implant from the retina’s hole. These simulations were performed for different speeds of eye movement that reflect the behavior of the eye during reading, REM, and saccadic movements. As in other works in the field of subretinal fluid flow, stationary, one-sided, forced fluid flow was assumed in the considered area simulating the subretinal space. Additionally, a hyperelastic material model of the retina and a parameterized geometry of the considered model were adopted. The calculations also examined the influence of the direction of gravity, determined by the position of the patient’s head, on the trend of fluid outflow. The simulations revealed that fluid outflow from the retina becomes significant at an eyeball movement speed of 100°/sec. This speed is greater than in reading but is four times less than saccadic movement. Increasing the viscosity of the fluid increased the beneficial effect. Further, the simulation results suggest that moderate eye movement speed is optimal and that the conventional prescription of avoiding routine eye movement following retinal detachment surgery should be relaxed.
Additionally, to verify the numerical results, some calculations were repeated using a meshless method (the method of fundamental solutions), which is relatively fast and easy to implement. The paper has been supported by grant 02/21/DSPB/3477.
Keywords: CFD simulations, FEM analysis, meshless method, retinal detachment
Procedia PDF Downloads 343
6280 Co-Evolution of Urban Lake System and Rapid Urbanization: Case of Raipur, Chhattisgarh
Authors: Kamal Agrawal, Ved Prakash Nayak, Akshay Patil
Abstract:
Raipur is known as a city of water bodies. The city once had around 200 man-made and natural lakes of varying sizes. These structures were constructed to collect rainwater and control flooding in the city. Due to the transition from community participation to state government control, as well as rapid urbanization, Raipur now has only about 80 lakes left. Rapid and unplanned growth has resulted in pollution, encroachment, and eutrophication of the city's lakes. The state government keeps these lakes in good condition by cleaning them and proposing lakefront developments. However, maintaining individual lakes is insufficient because urban lakes are not distinct entities: each is a system comprising the lake, its shore, its catchment, and other components, while the urban lake system (ULS) is a combination of multiple such lake systems interacting in a complex urban setting. Thus, the project aims to propose a co-evolution model for the urban lake system (ULS) and rapid urbanization in Raipur. The goals are to comprehend the ULS and to identify the elements and dimensions of urbanization that influence it; to evaluate the impact of rapid urbanization on the ULS, and vice versa, in the study area; to determine how to maximize the positive impact while minimizing the negative impact identified in the study area; and to propose short-, medium-, and long-term planning interventions to support the ULS's co-evolution with rapid urbanization. A complexity approach is used to investigate the ULS. It is a technique for understanding large, complex systems: systems with many interconnected and interdependent elements and dimensions. Thus, the elements of the ULS and of rapid urbanization are identified through a literature study in order to evaluate statements of their impacts (beneficial/adverse) on one another. Rapid urbanization has been identified as having elements such as demography, urban legislation, informal settlement, urban infrastructure, and tourism.
Similarly, the catchment area of the lake, the lake's water quality, the water spread area, and lakefront developments are all being impacted by rapid urbanization. These nine elements serve as parameters for the subsequent analysis; the elements are limited to physical parameters only. A study area within the city has been designated based on the definition provided by the National Plan for the Conservation of Aquatic Ecosystems. Three lakes are found within a one-kilometer radius, constituting a small urban lake system. Because the condition of a lake is directly related to the condition of its catchment area, the catchment area of these three lakes is delineated as the study area. Data are collected to identify impact statements, and the interdependence diagram generated between the parameters yields results in terms of the interlinking of each parameter and its impact on the system as a whole. The planning interventions proposed for the ULS and rapid urbanization co-evolution model include spatial proposals as well as policy recommendations for the short, medium, and long term. The next step of this study will be to determine how to implement the proposed interventions based on the availability of resources, funds, and governance patterns.
Keywords: urban lake system, co-evolution, rapid urbanization, complex system
Procedia PDF Downloads 73
6279 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator
Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty
Abstract:
Verification and validation of simulated process models is the most important phase of the simulator life cycle. Evaluation of simulated process models using verification and validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady-state and transient conditions. The process of verification and validation helps to qualify the process simulator for its intended purpose, whether that is providing comprehensive training or design verification. In general, model verification is carried out by comparing simulated component characteristics with the original requirements to ensure that each step in the model development process completely incorporates all the design requirements. Validation testing is performed by comparing the simulated process parameters with the actual plant process parameters, either in standalone mode or in integrated mode. A full scope replica operator training simulator for the PFBR (Prototype Fast Breeder Reactor), named KALBR-SIM (Kalpakkam Breeder Reactor Simulator), has been developed at IGCAR, Kalpakkam, India, with the main participants being engineers/experts from the modeling team and the process design and instrumentation and control design teams. This paper discusses the verification and validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, and the reference documents and standards used.
It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on the experts' comments, final qualification of the simulator for the intended purpose, and the difficulties faced while coordinating the various activities.
Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state
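A hedged sketch of one common quantitative check used in validation testing of the kind described above: comparing a simulated process parameter trend against reference plant values with an error metric and an acceptance tolerance. All values and the tolerance are hypothetical.

```python
# RMSE between simulated and reference plant values, checked against a tolerance.
import math

def rmse(simulated, reference):
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(simulated, reference)) / len(reference))

# Hypothetical steady-state parameter trend (e.g. a temperature in degrees C)
plant     = [420.0, 452.0, 478.0, 495.0, 503.0]
simulator = [421.5, 450.8, 479.2, 494.1, 504.0]

error = rmse(simulator, plant)
tolerance = 2.0                 # hypothetical acceptance criterion
print(error < tolerance)        # True: the model passes this particular check
```

Actual acceptance criteria for a training simulator come from the applicable standards and design documents the paper refers to, not from a single RMSE figure.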
Procedia PDF Downloads 266
6278 Performance Analysis of Microelectromechanical Systems-Based Piezoelectric Energy Harvester
Authors: Sanket S. Jugade, Swapneel U. Naphade, Satyabodh M. Kulkarni
Abstract:
Microscale energy harvesters can convert ambient mechanical vibrations into electrical energy. Such devices have great potential for low-powered electronics in remote environments, such as powering wireless sensor nodes for the Internet of Things, or lighting on highways or ships. In this paper, a microelectromechanical systems (MEMS)-based energy harvester has been modeled using analytical and finite element methods (FEM). The device consists of a microcantilever with a proof mass attached to its free end and a polyvinylidene fluoride (PVDF) piezoelectric thin film deposited on the surface of the microcantilever in a unimorph or bimorph configuration. For the analytical method, the energy harvester was modeled as an equivalent electrical system in SIMULINK. The finite element model was developed and analyzed using the commercial package COMSOL Multiphysics. Modal analysis was performed first to find the fundamental natural frequency and its variation with the geometrical parameters of the system. Harmonic analysis was then performed to find the input mechanical power, output electrical voltage, and power over a range of excitation frequencies and base acceleration values. The variation of output power with load resistance, PVDF film thickness, and damping values was also determined. The FEM results were then validated against those of the analytical model. Finally, the performance of the device was optimized with respect to various electromechanical parameters. For a unimorph configuration consisting of a single-crystal silicon microcantilever of dimensions 8 mm × 2 mm × 80 µm and a proof mass of 9.32 mg, with optimal values of PVDF film thickness and load resistance of 225 µm and 20 MΩ respectively, the maximum electrical power generated for a base excitation of 0.2 g at 630 Hz is 0.9 µW.
Keywords: bimorph, energy harvester, FEM, harmonic analysis, MEMS, PVDF, unimorph
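The fundamental frequency reported in this abstract can be cross-checked with the standard lumped-parameter model of a tip-loaded cantilever. The sketch below (assumed Young's modulus and density for single-crystal silicon, PVDF film stiffening neglected, so only order-of-magnitude agreement with the reported 630 Hz should be expected) uses the device dimensions stated above:

```python
import math

# Assumed material properties (not stated in the abstract)
E_SI = 169e9       # Young's modulus of single-crystal silicon, Pa (orientation-dependent)
RHO_SI = 2330.0    # density of silicon, kg/m^3

def cantilever_tip_mass_frequency(L, b, t, m_tip):
    """First natural frequency (Hz) of a cantilever with a tip proof mass.

    Lumped model: f = (1/2*pi) * sqrt(k / m_eff) with k = 3*E*I/L^3
    and m_eff = m_tip + (33/140) * m_beam.
    """
    I = b * t**3 / 12.0                   # area moment of inertia of the cross-section
    k = 3.0 * E_SI * I / L**3             # static tip stiffness
    m_beam = RHO_SI * L * b * t           # distributed beam mass
    m_eff = m_tip + (33.0 / 140.0) * m_beam
    return math.sqrt(k / m_eff) / (2.0 * math.pi)

# Dimensions from the abstract: 8 mm x 2 mm x 80 um cantilever, 9.32 mg proof mass
f1 = cantilever_tip_mass_frequency(8e-3, 2e-3, 80e-6, 9.32e-6)
print(f"estimated fundamental frequency: {f1:.0f} Hz")
```

The bare-silicon estimate lands in the same few-hundred-hertz range as the reported 630 Hz resonance; the PVDF layer and exact mode shape account for the remainder.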
Procedia PDF Downloads 190
6277 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures
Authors: Francesca Marsili
Abstract:
The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments based on the engineer's past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and the determination of actions and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. the results of the updating depend on the engineer's previous experience; 2. the updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that deserves particular attention in relation to the object of this study is Case-Based Reasoning (CBR).
In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will then be composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system is a good candidate for automating the modelling of variables because: 1. engineers already draw an estimate of the material properties from the experience collected during the assessment of similar structures, or from similar cases collected in the literature or in databases; 2. material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. the system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help spread probabilistic reliability assessment of existing buildings in common engineering practice, and help target the best interventions and further tests on the structure; CBR is a technique which may help achieve this.
Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures
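The prior-to-posterior step that each stored case embodies can be illustrated with the conjugate normal-normal update for a material strength parameter. The numbers below are purely illustrative, not from any assessed structure:

```python
import math

def update_normal(mu0, sd0, xbar, s, n):
    """Posterior mean/sd of a normally distributed material parameter.

    Prior N(mu0, sd0^2); n test results with sample mean xbar and
    known test scatter s. Precisions add; the posterior mean is the
    precision-weighted average of prior mean and sample mean.
    """
    prec_prior = 1.0 / sd0**2
    prec_data = n / s**2
    mu_post = (prec_prior * mu0 + prec_data * xbar) / (prec_prior + prec_data)
    sd_post = math.sqrt(1.0 / (prec_prior + prec_data))
    return mu_post, sd_post

# Illustrative case: prior concrete strength 30 +/- 5 MPa from design documents,
# updated with 4 core tests averaging 26 MPa (test scatter 4 MPa)
mu, sd = update_normal(30.0, 5.0, 26.0, 4.0, 4)
print(f"posterior: {mu:.2f} MPa +/- {sd:.2f} MPa")
```

The posterior mean falls between the prior mean and the test average, and the posterior scatter is always smaller than the prior scatter, which is exactly the behaviour a CBR case library would record.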
Procedia PDF Downloads 337
6276 Investigating the Dynamic Plantar Pressure Distribution in Individuals with Multiple Sclerosis
Authors: Hilal Keklicek, Baris Cetin, Yeliz Salci, Ayla Fil, Umut Altinkaynak, Kadriye Armutlu
Abstract:
Objectives and Goals: Spasticity is a common symptom characterized by a velocity-dependent increase in tonic stretch reflexes (muscle tone) in patients with multiple sclerosis (MS). Hypertonic muscles affect normal plantigrade contact by disturbing the accommodation of the foot to the ground while walking. It is important to know the differences between healthy and neurologic foot features for the management of spasticity-related deformities and/or the determination of rehabilitation purposes and contents. This study was planned with the aim of investigating the dynamic plantar pressure distribution in individuals with MS and determining the differences from healthy individuals (HI). Methods: Fifty-five individuals with MS (108 feet with spasticity according to the Modified Ashworth Scale) and 20 HI (40 feet) participated in the study. A dynamic pedobarograph was utilized for the evaluation of dynamic loading parameters. Participants were instructed to walk at their self-selected speed seven times to eliminate the learning effect. The parameters were divided into two categories, maximum loading pressure (N/cm²) and time of maximum pressure (ms), collected from the heel medial, heel lateral, midfoot, and the heads of the first, second, third, fourth, and fifth metatarsal bones. Results: There were differences between the groups in the maximum loading pressure of the heel medial (p < .001), heel lateral (p < .001), midfoot (p = .041), and fifth metatarsal areas (p = .036). There were also differences between the groups in the time of maximum pressure of all metatarsal areas, the midfoot, the heel medial, and the heel lateral (p < .001) in favor of the HI. Conclusions: The study provided basic data about foot pressure distribution in individuals with MS. The results primarily showed that spasticity of the lower extremity muscles disrupted posteromedial foot loading.
Secondly, according to the study results, spasticity led to inappropriate timing during load transfer from the hindfoot to the forefoot.
Keywords: multiple sclerosis, plantar pressure distribution, gait, norm values
Procedia PDF Downloads 321
6275 Development of Power System Stability by Reactive Power Planning in a Wind Power Plant with Doubly Fed Induction Generators
Authors: Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee, Oriol Gomis Bellmunt, Vinicius Albernaz Lacerda Freitas
Abstract:
The use of distributed and renewable sources in power systems has grown significantly in recent years. Among the most popular sources are wind farms, which have grown massively. However, when wind farms are connected to the grid, they can cause problems such as reduced voltage stability, frequency fluctuations, and reduced dynamic stability. Variable-speed (asynchronous) generators are used due to the uncontrollability of wind speed, especially doubly fed induction generators (DFIGs). The most important disadvantage of DFIGs is their sensitivity to voltage drops. In the case of faults, a large volume of reactive power is induced; therefore, FACTS devices such as SVCs and STATCOMs are suitable for improving system output performance. They increase the capacity of lines and also help the system ride through network fault conditions. In this paper, in addition to modeling the reactive power control system in a DFIG with a converter, FACTS devices have been used in a DFIG wind turbine to improve the stability of a power system containing two synchronous sources. Optimal control systems have been designed for the FACTS devices employed, to minimize the fluctuations caused by system disturbances. For this purpose, a suitable method is proposed for the selection of the nine parameters of the phase lead-phase lag (MPSH) compensators of the reactive power compensators. The design algorithm is formulated as an optimization problem searching for the optimal parameters of the controller. Simulation results show that the proposed controller improves the stability of the network and that the fluctuations settle at the desired speed.
Keywords: renewable energy sources, optimization, wind power plant, stability, reactive power compensator, doubly fed induction generator, optimal control, genetic algorithm
Procedia PDF Downloads 95
6274 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data
Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali
Abstract:
This paper presents a unique approach to evapotranspiration (ET) prediction using a support vector machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve the spatial and temporal resolution of ET predictions. In the model development, key input features are measured and computed using mathematical formulations such as Penman-Monteith (FAO56) and the soil water balance (SWB), which include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field dataset is split into training, test, and validation sets in three proportion combinations, and kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. The paper also outlines existing methods and machine learning techniques for determining evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of performance evaluation metrics (R², RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors
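The modeling pipeline described above (sensor features in, kernel SVM regression out, evaluated by R²) can be sketched with scikit-learn. The data below are synthetic stand-ins generated for illustration; the feature set, kernel choice, and hyperparameter values are assumptions, not the paper's actual configuration:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for four sensor features: radiation, temperature, humidity, wind
n = 400
X = rng.uniform([0, 5, 20, 0], [30, 40, 95, 6], size=(n, 4))
# Synthetic "reference ET" target with a smooth dependence plus noise (illustrative only)
y = (0.05 * X[:, 0] + 0.08 * X[:, 1] - 0.02 * X[:, 2]
     + 0.3 * X[:, 3] + rng.normal(0, 0.1, n))

# 70/30 train/test split, echoing the paper's use of split proportions
split = int(0.7 * n)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X[:split], y[:split])
r2 = r2_score(y[split:], model.predict(X[split:]))
print(f"held-out R^2: {r2:.3f}")
```

Scaling the features before the RBF kernel matters here, since the raw sensor units differ by orders of magnitude.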
Procedia PDF Downloads 69
6273 Gamma Irradiated Sodium Alginate and Phosphorus Fertilizer Enhances Seed Trigonelline Content, Biochemical Parameters and Yield Attributes of Fenugreek (Trigonella foenum-graecum L.)
Authors: Tariq Ahmad Dar, Moinuddin, M. Masroor A. Khan
Abstract:
There is considerable need to enhance the content and yield of the active constituents of medicinal plants, keeping in view their massive demand worldwide. Different strategies have been employed to enhance the active constituents of medicinal plants, and the use of phytohormones has proved effective in this regard. Gamma-irradiated sodium alginate (ISA) is known to elicit an array of plant defense responses and biological activities in plants. Considering its medicinal importance, a pot experiment was conducted to explore the effect of ISA and phosphorus on the growth, yield, and quality of fenugreek (Trigonella foenum-graecum L.). ISA spray treatments (0, 40, 80, and 120 mg L⁻¹) were applied alone and in combination with 40 kg P ha⁻¹ (P40). Crop performance was assessed in terms of plant growth characteristics, physiological attributes, seed yield, and seed trigonelline content. Of the ten treatments, P40 + 80 mg L⁻¹ of ISA proved the best. The results showed that foliar spray of ISA, alone or in combination with P40, augmented the vegetative growth, enzymatic activities, trigonelline content, trigonelline yield, and economic yield of fenugreek. Application of 80 mg L⁻¹ of ISA with P40 gave the best results for almost all the parameters studied, compared to the control or to 80 mg L⁻¹ of ISA applied alone. This treatment increased the total content of chlorophyll, carotenoids, leaf N, P, and K, and trigonelline compared to the control by 24.85 and 27.40%, 15 and 23.52%, 18.70 and 16.84%, 15.88 and 18.92%, and 12 and 14.44%, at 60 and 90 DAS respectively. The combined application of 80 mg L⁻¹ of ISA with P40 resulted in the maximum increase in seed yield, trigonelline content, and trigonelline yield, by 146, 34, and 232.41% respectively, over the control.
Gel permeation chromatography revealed the formation of low-molecular-weight fractions in the ISA samples, containing oligomers of even less than 20,000 molecular weight, which might be responsible for the plant growth promotion observed in this study. Trigonelline content was determined by reverse-phase high-performance liquid chromatography (HPLC) with a C-18 column.
Keywords: gamma-irradiated sodium alginate, phosphorus, gel permeation chromatography, HPLC, trigonelline content, yield
Procedia PDF Downloads 321
6272 ARGO: An Open Designed Unmanned Surface Vehicle Mapping Autonomous Platform
Authors: Papakonstantinou Apostolos, Argyrios Moustakas, Panagiotis Zervos, Dimitrios Stefanakis, Manolis Tsapakis, Nektarios Spyridakis, Mary Paspaliari, Christos Kontos, Antonis Legakis, Sarantis Houzouris, Konstantinos Topouzelis
Abstract:
For years, unmanned and remotely operated robots have been used as tools in industry, research, and education. The rapid development and miniaturization of sensors that can be attached to remotely operated vehicles in recent years has allowed industry leaders and researchers to utilize them as an affordable means of data acquisition in air, on land, and at sea. Despite recent developments in ground and unmanned airborne vehicles, only a small number of unmanned surface vehicle (USV) platforms target the mapping and monitoring of environmental parameters for research and industry purposes. The ARGO project has developed an open-design USV equipped with a multi-level control hardware architecture and state-of-the-art sensors and payloads for the autonomous monitoring of environmental parameters in large sea areas. The proposed USV is a catamaran-type vehicle controlled over a wireless radio link (5G) from a ground-based control station for long-range mapping. The ARGO USV has propulsion control using two fully redundant electric trolling motors with active vector thrust for omnidirectional movement, navigation with an open-source autopilot system and a high-accuracy GNSS device, and communication via a 2.4 GHz digital link able to provide 20 km of line-of-sight (LoS) range. The 3-meter dual-hull design and composite structure offer well above 80 kg of usable payload capacity. Furthermore, solar and friction energy harvesting methods provide clean energy to the propulsion system. The design is highly modular: each component or payload can be replaced or modified according to the desired task (industrial or research). The system can be equipped with a multiparameter sonde measuring up to 20 water parameters simultaneously, such as conductivity, salinity, turbidity, and dissolved oxygen. Furthermore, a high-end multibeam echo sounder can be installed at a specific boat datum for high-resolution shallow-water seabed mapping.
The system is designed to operate in the Aegean Sea. The developed USV is planned to be utilized as a system for autonomous data acquisition, mapping, and monitoring of bathymetry and various environmental parameters. The ARGO USV can operate in small or large ports, with the high maneuverability and endurance needed to map large geographical extents at sea. The system presents state-of-the-art solutions in the following areas: i) on-board/real-time data processing and analysis capabilities; ii) an energy-independent and environmentally friendly platform made entirely with the latest aeronautical and marine materials; iii) the integration of advanced-technology sensors, all in one system (photogrammetric and radiometric footprint, as well as its connection with various environmental and inertial sensors); and iv) the information management application. The ARGO web-based application enables the system to depict the results of the data acquisition process in near real time. All the recorded environmental variables and indices are presented, allowing users to remotely access all the raw and processed information using the implemented web-based GIS application.
Keywords: marine environment monitoring, unmanned surface vehicle, bathymetry mapping, sea environmental monitoring
Procedia PDF Downloads 139
6271 Assessment of Advanced Oxidation Process Applicability for Household Appliances Wastewater Treatment
Authors: Pelin Yılmaz Çetiner, Metin Mert İlgün, Nazlı Çetindağ, Emine Birci, Gizemnur Yıldız Uysal, Özcan Hatipoğlu, Ehsan Tuzcuoğlu, Gökhan Sır
Abstract:
Water scarcity is an inevitable problem affecting more and more people day by day. It is a worldwide crisis and a consequence of rapid population growth, urbanization, and overexploitation. Thus, solutions providing for the reclamation of wastewater are the desired approach. Wastewater contains various substances such as organics, soaps and detergents, solvents, biological substances, and inorganic substances. The physical properties of wastewater differ according to its origin, such as commercial, domestic, or hospital use. The treatment strategy for this type of wastewater should therefore be comprehensively investigated so that it can be properly treated. The advanced oxidation process emerges as a promising method, based on the formation of reactive hydroxyl radicals that are highly effective at oxidizing organic pollutants. This process has an advantage over other methods such as coagulation, flocculation, sedimentation, and filtration since it does not cause any undesirable by-products. The present study aimed to investigate the applicability of the advanced oxidation process for the treatment of household appliance wastewater. For this purpose, laboratory studies were carried out to ensure that the formed radicals effectively attack the organic pollutants. The effect of the process parameters was then comprehensively studied using response surface methodology with a Box-Behnken experimental design. The final chemical oxygen demand (COD) was the main output used to identify the optimum point providing the expected COD removal. Linear alkylbenzene sulfonate (LAS), total dissolved solids (TDS), and color were measured at the optimum point providing the expected COD removal.
Finally, the present study pointed out that the advanced oxidation process can be an efficient choice for treating household appliance wastewater, with the optimum process parameters providing the expected COD removal.
Keywords: advanced oxidation process, household appliances wastewater, modelling, water reuse
Procedia PDF Downloads 64
6270 Seasonal Variability of Picoeukaryotes Community Structure Under Coastal Environmental Disturbances
Authors: Benjamin Glasner, Carlos Henriquez, Fernando Alfaro, Nicole Trefault, Santiago Andrade, Rodrigo De La Iglesia
Abstract:
A central question in ecology refers to the relative importance that local-scale variables have over community composition when compared with regional-scale variables. In coastal environments, a strong seasonal abiotic influence dominates these systems, weakening the impact of other parameters like micronutrients. Since the industrial revolution, micronutrients such as trace metals have increased in the ocean as pollutants, with strong effects upon biotic entities and biological processes in coastal regions. Coastal picoplankton communities have been characterized as a cyanobacteria-dominated fraction, but in recent years the eukaryotic component of this size fraction has gained relevance due to its strong influence on the carbon cycle, although its diversity patterns and responses to disturbances are poorly understood. South Pacific upwelling coastal environments represent an excellent model for studying seasonal changes due to the strong differences in the availability of macro- and micronutrients between seasons. In addition, some well-constrained coastal bays of this region have been subjected to strong disturbances due to trace metal inputs. In this study, we aim to compare the influence of seasonality and trace metal concentrations on the community structure of planktonic picoeukaryotes. To describe seasonal patterns in the study area, satellite data over a six-year time series and in-situ measurements with a traditional oceanographic approach (CTDO equipment) were used. In addition, trace metal concentrations were analyzed through ICP-MS analysis for the same region. For biological data collection, field campaigns were performed in 2011-2012, and the picoplankton community was described by flow cytometry and taxonomic characterization with next-generation sequencing of ribosomal genes. The relation between the abiotic and biotic components was finally determined by multivariate statistical analysis.
Our data show strong seasonal fluctuations in abiotic parameters such as photosynthetically active radiation and sea surface temperature, with a clear differentiation of seasons. Trace metal analysis, however, identifies strong differentiation within the study area, dividing it into two zones based on trace metal concentrations. The biological data indicate no major changes in diversity but a significant fluctuation in evenness and community structure. These changes are related mainly to regional parameters, like temperature, but by analyzing the influence of metals on picoplankton community structure, we identify a differential response of some plankton taxa to metal pollution. We propose that some picoeukaryotic plankton groups respond differentially to metal inputs by changing their nutritional status and/or requirements under disturbance, as a derived outcome of toxic effects and tolerance.
Keywords: picoeukaryotes, plankton communities, trace metals, seasonal patterns
Procedia PDF Downloads 173
6269 Reverse Osmosis Application on Sewage Tertiary Treatment
Authors: Elisa K. Schoenell, Cristiano De Oliveira, Luiz R. H. Dos Santos, Alexandre Giacobbo, Andréa M. Bernardes, Marco A. S. Rodrigues
Abstract:
Water is an indispensable natural resource which must be preserved for human activities as well as for ecosystems. However, sewage discharge has been contaminating water resources. Conventional treatment, such as physicochemical treatment followed by biological processes, has not been efficient at completely degrading persistent organic compounds, such as medicines and hormones. Therefore, the use of advanced technologies for sewage treatment has become urgent and necessary. The aim of this study was to apply reverse osmosis (RO) as tertiary treatment of sewage from a wastewater treatment plant (WWTP) in southern Brazil. 200 L of sewage pre-treated in a wetland with aquatic macrophytes was collected. The sewage was treated in an RO pilot plant using a DOW FILMTEC BW30-4040 polyamide membrane with a 7.2 m² membrane area. In order to avoid damage to the equipment, the system contains a pleated polyester filter with a 5 µm pore size. A pressure of 8 bar was applied until a concentration factor of 5 was achieved, obtaining 80% recovery of permeate, with a 10 L·min⁻¹ concentrate flow rate. Samples of the sewage pre-treated at the WWTP, and of the permeate and concentrate generated by the RO, were analyzed for physicochemical parameters and by gas chromatography (GC) for qualitative analysis of organic compounds. The results proved that the sewage treated at the WWTP does not comply with the phosphorus and nitrogen limits of Brazilian legislation. Besides this, many organic compounds were found in this sewage, such as benzene, which is carcinogenic. Analyzing the permeate results, it was verified that RO as a sewage tertiary treatment was efficient at removing physicochemical parameters, achieving 100% removal of iron, copper, zinc, and phosphorus, 98% removal of color, 91% of BOD, and 62% of ammoniacal nitrogen.
RO was also capable of removing organic compounds; however, some organic compounds were still present in the RO permeate, showing that RO did not have the capacity to remove all organic compounds from the sewage. It should be noted that the permeate showed lower peak intensities in the chromatogram in comparison to the WWTP sewage. It is also important to note that the concentrate generated by RO needs treatment before its disposal in the environment.
Keywords: organic compounds, reverse osmosis, sewage treatment, tertiary treatment
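The reported figures are internally consistent: under an ideal-rejection mass balance, a concentration factor of 5 follows directly from 80% permeate recovery. A minimal sketch of that balance (assuming ideal solute rejection, which real membranes only approximate):

```python
def concentration_factor(recovery, rejection=1.0):
    """Concentrate-side concentration factor for a recovery ratio r = Qp/Qf.

    A feed-side solute mass balance with rejection R gives
    CF = (1 - r * (1 - R)) / (1 - r); for ideal rejection (R = 1)
    this reduces to CF = 1 / (1 - r).
    """
    return (1.0 - recovery * (1.0 - rejection)) / (1.0 - recovery)

# 80% permeate recovery, as in the pilot run described above
cf = concentration_factor(0.8)
print(f"concentration factor at 80% recovery: {cf:.1f}")  # -> 5.0
```

Lower rejection values yield a smaller concentration factor for the same recovery, since part of the solute leaves with the permeate.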
Procedia PDF Downloads 202
6268 Surface Roughness in the Incremental Forming of Drawing Quality Cold Rolled CR2 Steel Sheet
Authors: Zeradam Yeshiwas, A. Krishnaia
Abstract:
The aim of this study is to verify the resulting surface roughness of parts formed by the single-point incremental forming (SPIF) process for an ISO 3574 drawing-quality cold-rolled CR2 steel. The chemical composition of drawing-quality cold-rolled CR2 steel comprises 0.12% carbon, 0.5% manganese, 0.035% sulfur, and 0.04% phosphorus, with the remainder iron and negligible impurities. The experiments were performed on a 3-axis vertical CNC milling machining center equipped with a tool setup comprising a fixture and forming tools specifically designed and fabricated for the process. The CNC milling machine was used to transfer the tool path code generated in the Mastercam 2017 environment into three-dimensional motions through the linear incremental progress of the spindle. Blanks of drawing-quality cold-rolled CR2 steel sheet, 1 mm thick, were fixed along their periphery by a fixture, and hardened high-speed steel (HSS) tools with hemispherical tips of 8, 10, and 12 mm diameter were employed to fabricate sample parts. To investigate the surface roughness, hyperbolic-cone specimens were fabricated based on the chosen experimental design. The effect of process parameters on the surface roughness was studied using three important process parameters: tool diameter, feed rate, and step depth. A Taylor-Hobson Surtronic 3+ profilometer was used to determine the surface roughness of the fabricated parts in terms of the arithmetic mean deviation (Rₐ). In this instrument, a small tip is dragged across a surface while its deflection is recorded. Finally, the optimum process parameters and the main factor affecting surface roughness were found using the Taguchi design of experiments and ANOVA.
A Taguchi experimental design with three factors and three levels for each factor, the standard orthogonal array L9 (3³), was selected for the study using the array selection table. The arithmetic mean deviation (Rₐ) was measured for each combination of the control factors; four roughness measurements were taken per component and averaged. Since the lowest value of surface roughness is desired, the ''smaller-the-better'' equation was used for the calculation of the S/N ratio. The effect of each control factor on the surface roughness was analyzed with an ''S/N response table''. Optimum surface roughness was obtained at a feed rate of 1500 mm/min, a tool diameter of 12 mm, and a step depth of 0.5 mm. The ANOVA result shows that step depth is the most significant factor affecting surface roughness (91.1% contribution).
Keywords: incremental forming, SPIF, drawing quality steel, surface roughness, roughness behavior
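The smaller-the-better criterion used above can be written out directly. The Rₐ repeat values below are hypothetical placeholders, not the measured data from this study:

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio in dB:
    S/N = -10 * log10( (1/n) * sum(y_i^2) ).
    """
    n = len(values)
    return -10.0 * math.log10(sum(y * y for y in values) / n)

# Hypothetical Ra readings (um) for one parameter combination: four repeats
ra_repeats = [1.42, 1.38, 1.51, 1.45]
print(f"S/N = {sn_smaller_is_better(ra_repeats):.2f} dB")
```

Lower Rₐ gives a higher (less negative) S/N, so the optimum level of each factor is the one maximizing the mean S/N in the response table.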
Procedia PDF Downloads 62
6267 Heat-Induced Uncertainty of Industrial Computed Tomography Measuring a Stainless Steel Cylinder
Authors: Verena M. Moock, Darien E. Arce Chávez, Mariana M. Espejel González, Leopoldo Ruíz-Huerta, Crescencio García-Segundo
Abstract:
Uncertainty analysis in industrial computed tomography is commonly tied to traceable metrological tools, which offer precision measurements of external part features. Unfortunately, there is no such reference tool for internal measurements that would profit from the unique imaging potential of X-rays. Uncertainty approximations for computed tomography are still based on general aspects of the industrial machine and do not adapt to acquisition parameters or part characteristics. The present study investigates the impact of the acquisition time on the dimensional uncertainty when measuring a stainless steel cylinder with a circular tomography scan. The authors develop the figure difference method for X-ray radiography to evaluate the volumetric differences introduced within the projected absorption maps of the metal workpiece. The dimensional uncertainty is dominantly influenced by photon energy dissipated as heat, causing thermal expansion of the metal, as monitored by an infrared camera within the industrial tomograph. With the proposed methodology, we are able to show evolving temperature differences throughout the tomography acquisition. This is an early study showing that the number of projections in computed tomography induces dimensional error due to energy absorption. The error magnitude depends on the thermal properties of the sample and the acquisition parameters, introducing apparent, non-uniform, unwanted volumetric expansion. We introduce infrared imaging for the experimental display of metrological uncertainty in a particular metal part of symmetric geometry. We assess that the current results are of fundamental value for reaching a balance between the number of projections and the uncertainty tolerance when performing X-ray dimensional exploration for precision measurements with industrial tomography.
Keywords: computed tomography, digital metrology, infrared imaging, thermal expansion
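The magnitude of the heat-induced error can be gauged with the linear thermal expansion relation ΔL = α·L·ΔT. The expansion coefficient and dimensions below are illustrative assumptions for an austenitic stainless steel part, not values from the study:

```python
ALPHA_SS = 16e-6  # 1/K, typical of austenitic stainless steel (assumed value)

def thermal_expansion_um(length_mm, delta_t_kelvin):
    """Dimensional change in micrometres for a given temperature rise,
    using the linear relation dL = alpha * L * dT."""
    return ALPHA_SS * (length_mm * 1e-3) * delta_t_kelvin * 1e6

# A 20 mm cylinder dimension warming by 2 K over a long scan
print(f"expansion: {thermal_expansion_um(20.0, 2.0):.2f} um")  # -> 0.64 um
```

A warming of only a couple of kelvin thus shifts a 20 mm dimension by roughly half a micrometre, already comparable to the tolerances targeted in dimensional CT metrology.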
Procedia PDF Downloads 121
6266 Development of Hydrodynamic Drag Calculation and Cavity Shape Generation for Supercavitating Torpedoes
Authors: Sertac Arslan, Sezer Kefeli
Abstract:
In this paper, the supercavitation phenomenon and supercavity shape design parameters are first explained; then, drag force calculation methods for high-speed supercavitating torpedoes are investigated with numerical techniques and verified against empirical studies. To reach very high speeds, such as 200 or 300 knots, for underwater vehicles, the hydrodynamic hull drag force, which is proportional to the density of water (ρ) and the square of the speed, must be reduced. Conventional heavyweight torpedoes reach up to ~50 knots with classic underwater hydrodynamic techniques. To exceed 50 knots and approach 200 knots, however, hydrodynamic viscous forces must be reduced or eliminated completely. This requirement revives the supercavitation phenomenon, which could be applied to conventional torpedoes. Supercavitation is the use of cavitation effects to create a gas bubble, allowing the torpedo to move at very high speed through the water inside a fully developed cavitation bubble. When the torpedo moves in a cavitation envelope generated by a cavitator in the nose section and a solid-fuel rocket engine in the rear section, it can be termed a supercavitating torpedo. There are two types of cavitation: natural and ventilated. In this study, a disk cavitator is modeled with natural cavitation, and the parameters of the supercavitation phenomenon are studied. Moreover, the drag force for a disk-shaped cavitator is calculated with numerical techniques and compared with empirical studies. Drag forces are calculated with computational fluid dynamics methods and different empirical methods, and the numerical calculation method is developed by comparison with the empirical results. In the verification study, the cavitation number (σ), drag coefficient (CD), drag force (D), and cavity wall velocity (U) are compared.
Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavity flows
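The quantities the verification study compares can be sketched in a few lines. A minimal illustration using the standard cavitation number definition and the widely used empirical disk-cavitator relation CD ≈ CD0·(1 + σ); the pressures, cavitator diameter, and water density below are assumed for illustration, not taken from the paper:

```python
import math

RHO_SEAWATER = 1025.0  # kg/m^3 (assumed)
CD0_DISK = 0.82        # empirical zero-sigma drag coefficient of a disk cavitator

def cavitation_number(p_inf: float, p_cavity: float, rho: float, v: float) -> float:
    """sigma = (p_inf - p_c) / (0.5 * rho * v^2), pressures in Pa, v in m/s."""
    return (p_inf - p_cavity) / (0.5 * rho * v * v)

def disk_drag(v: float, d_cavitator: float, sigma: float,
              rho: float = RHO_SEAWATER) -> float:
    """Drag force (N) on a disk cavitator, using CD(sigma) = CD0 * (1 + sigma)."""
    area = math.pi * (d_cavitator / 2.0) ** 2
    cd = CD0_DISK * (1.0 + sigma)
    return 0.5 * rho * v * v * cd * area

v = 200 * 0.5144  # 200 knots in m/s
sigma = cavitation_number(p_inf=201_000.0, p_cavity=2_300.0, rho=RHO_SEAWATER, v=v)
print(f"sigma = {sigma:.4f}, drag = {disk_drag(v, 0.05, sigma) / 1000:.1f} kN")
```

At 200 knots the dynamic pressure is so large that σ becomes very small, which is why natural supercavitation is attainable at these speeds with a compact cavitator.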
Procedia PDF Downloads 188
6265 Urban Flood Risk Mapping–a Review
Authors: Sherly M. A., Subhankar Karmakar, Terence Chan, Christian Rau
Abstract:
Floods are among the most frequent natural disasters, causing widespread devastation, economic damage, and threats to human lives. The hydrologic impacts of climate change and the intensification of urbanization are two root causes of increased flood occurrence, and recent research trends are oriented toward understanding these aspects. Due to rapid urbanization, the populations of cities across the world have increased exponentially, leading to improperly planned development. Climate change driven by natural and anthropogenic pressures on the environment has resulted in spatiotemporal changes in rainfall patterns. The combined effect of both aggravates the vulnerability of urban populations to floods. In this context, efficient and effective flood risk management, with flood risk mapping as its core component, is essential for the prevention and mitigation of flood disasters. Urban flood risk mapping involves zoning an urban region based on its flood risk, which depicts the spatiotemporal pattern of the frequency and severity of hazards, the exposure to hazards, and the degree of vulnerability of the population in socio-economic, environmental, and infrastructural terms. Although vulnerability is a key component of risk, its assessment and mapping are often less advanced than hazard mapping and quantification. A synergistic effort from technical experts and social scientists is vital for the effectiveness of flood risk management programs. Despite an increasing volume of quality research on urban flood risk, a comprehensive multidisciplinary approach to flood risk mapping remains neglected, leaving many of the input parameters and definitions of flood risk concepts imprecise. Thus, the objectives of this review are to introduce and precisely define the relevant input parameters, concepts, and terms in urban flood risk mapping, along with its methodology, current status, and limitations.
The review also aims at providing thought-provoking insights to potential future researchers and flood management professionals.
Keywords: flood risk, flood hazard, flood vulnerability, flood modeling, urban flooding, urban flood risk mapping
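The zoning idea described above is often composed as risk = hazard × exposure × vulnerability per zone. A minimal sketch of that composition, with hypothetical zone names, normalized scores, and class thresholds (none of these come from the review):

```python
# Composite flood-risk zoning sketch: risk = hazard * exposure * vulnerability.
# All scores are normalized to [0, 1]; zones and thresholds are hypothetical.

def risk_index(hazard: float, exposure: float, vulnerability: float) -> float:
    """Composite flood risk on a 0-1 scale."""
    return hazard * exposure * vulnerability

def risk_class(r: float) -> str:
    """Map a composite index to a qualitative zoning class."""
    if r >= 0.5:
        return "high"
    if r >= 0.2:
        return "moderate"
    return "low"

zones = {
    "riverfront": (0.9, 0.8, 0.8),  # frequent hazard, dense, socio-economically vulnerable
    "uptown":     (0.3, 0.6, 0.4),  # rare hazard, moderate exposure
}
for name, (h, e, v) in zones.items():
    r = risk_index(h, e, v)
    print(f"{name}: index={r:.3f}, class={risk_class(r)}")
```

The multiplicative form makes the review's point concrete: a zone with high hazard but near-zero vulnerability scores low, so neglecting the vulnerability layer distorts the whole map.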
Procedia PDF Downloads 590
6264 Sludge Densification: Emerging and Efficient Way to Look at Biological Nutrient Removal Treatment
Authors: Raj Chavan
Abstract:
Currently, there are over 14,500 Water Resource Recovery Facilities (WRRFs) in the United States, with about 35% of them having some type of nutrient limit in place. These WRRFs account for about 1% of overall power demand and 2% of total greenhouse gas (GHG) emissions in the United States, and they contribute 10 to 15% of the overall nutrient load to surface waters. The evolution of densification technologies toward more compact and energy-efficient nutrient removal processes has been shaped by a number of factors. Existing facilities that require capacity expansion, or biomass densification for higher treatability within the same footprint, are being subjected to more stringent nutrient removal requirements prior to surface water discharge. Densification of activated sludge has recently received widespread interest as a means of achieving process intensification and nutrient removal at WRRFs. At the core of the technology are the aerobic sludge granules where the biological processes occur. There is considerable interest in the prospect of producing granular sludge in continuous (or traditional) activated sludge (CAS) processes, or of densifying biomass by converting activated sludge flocs into denser aggregates, as a highly effective intensification technique. This presentation will provide a fundamental understanding of densification by presenting insights and practical issues. The topics to be discussed include the methods used to generate and retain densified granules; the mechanisms that allow biological flocs to densify; the role that physical selectors play in the densification of biological flocs; viable approaches for managing densified biological flocs; the effects of physical selector design parameters on the retention of densified biological flocs; and, finally, operational solutions for customizing the flocs and granules required to meet performance and capacity targets.
In addition, it will present case studies where biological and physical parameters were used to generate aerobic granular sludge in a continuous flow system.
Keywords: densification, aerobic granular sludge, nutrient removal, intensification
Procedia PDF Downloads 186