Search results for: simulated annealing optimization
3315 Fluid-Structure Interaction Study of Fluid Flow past Marine Turbine Blade Designed by Using Blade Element Theory and Momentum Theory
Authors: Abu Afree Andalib, M. Mezbah Uddin, M. Rafiur Rahman, M. Abir Hossain, Rajia Sultana Kamol
Abstract:
This paper deals with the analysis of flow past a marine turbine blade designed using blade element theory and momentum theory for application in the field of renewable energy. The designed blade is analyzed for various parameters using the FSI module of Ansys. Computational Fluid Dynamics is used to study the fluid flow past the blade and other fluidic phenomena such as lift, drag, pressure differentials, and energy dissipation in water. The Finite Element Analysis (FEA) module of Ansys is used to analyze structural parameters such as stress and stress density, localization points, deflection, and force propagation. A fine mesh is used in every case for greater accuracy in the results, within the limits of available computational power. The relevance of design, search, and optimization with respect to complex fluid flow and structural modeling is considered and analyzed. The relevance of design and optimization of the complex fluid flow for minimum drag force is likewise analyzed using the Ansys Adjoint Solver module. A graphical comparison of the above-mentioned parameters obtained by CFD and FEA, and subsequently by the FSI technique, is presented, and significant conformity is found between the results.
Keywords: blade element theory, computational fluid dynamics, finite element analysis, fluid-structure interaction, momentum theory
Procedia PDF Downloads 301
3314 Assessment of Hypersaline Outfalls via Computational Fluid Dynamics Simulations: A Case Study of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser
Authors: Mitchell J. Baum, Badin Gibbes, Greg Collecutt
Abstract:
This study details a three-dimensional field-scale numerical investigation conducted for the Gold Coast Desalination Plant (GCDP) offshore multiport brine diffuser. Quantitative assessment of diffuser performance with regard to trajectory, dilution, and mapping of seafloor concentration distributions was conducted for 100% plant operation. The quasi-steady Computational Fluid Dynamics (CFD) simulations were performed using the Reynolds-averaged Navier-Stokes equations with a k-ω shear stress transport turbulence closure scheme. The study complements a field investigation, which measured brine plume characteristics under similar conditions. CFD models used an iterative mesh in a domain 400 m long and 200 m wide, with an average depth of 24.2 m. Acoustic Doppler current profiler measurements conducted in the companion field study exhibited considerable variability over the water column. The effect of this vertical variability on simulated discharge outcomes was examined. Seafloor slope was also accommodated in the model. Ambient currents varied predominantly in the longshore direction, perpendicular to the diffuser structure. Under these conditions, the alternating port orientation of the GCDP diffuser resulted in simultaneous subjection to co-propagating and counter-propagating ambient regimes. Results from quiescent ambient simulations suggest broad agreement with empirical scaling arguments traditionally employed in design and regulatory assessments. Simulated dynamic ambient regimes showed that the influence of ambient crossflow upon jet trajectory, dilution, and seafloor concentration is significant. The effect of ambient flow structure and its subsequent influence on jet dynamics is discussed, along with the implications of using these different simulation approaches to inform regulatory decisions.
Keywords: computational fluid dynamics, desalination, field-scale simulation, multiport brine diffuser, negatively buoyant jet
Procedia PDF Downloads 214
3313 An Integration of Genetic Algorithm and Particle Swarm Optimization to Forecast Transport Energy Demand
Authors: N. R. Badurally Adam, S. R. Monebhurrun, M. Z. Dauhoo, A. Khoodaruth
Abstract:
Transport energy demand is vital for the economic growth of any country. Globalisation and better standards of living play an important role in transport energy demand. Recently, transport energy demand in Mauritius has increased significantly, leading to overuse of natural resources and thereby contributing to global warming. Forecasting the transport energy demand is therefore important for controlling and managing the demand. In this paper, we develop a model to predict the transport energy demand. The model is based on a system of five stochastic differential equations (SDEs) consisting of five endogenous variables (fuel price, population, gross domestic product (GDP), number of vehicles, and transport energy demand) and three exogenous parameters (crude birth rate, crude death rate, and labour force). An interval of seven years is used to avoid distortion of the results, since Mauritius is a developing country. Data available for Mauritius from 2003 to 2009 are used to obtain the values of the design variables by applying a genetic algorithm. The model is verified and validated for 2010 to 2012 by substituting the coefficient values obtained by the GA into the model and using particle swarm optimisation (PSO) to predict the values of the exogenous parameters. This model will help to control the transport energy demand in Mauritius, which will in turn move Mauritius towards becoming a pollution-free country and decrease its dependence on fossil fuels.
Keywords: genetic algorithm, modeling, particle swarm optimization, stochastic differential equations, transport energy demand
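As an illustration of the calibration step described above, the sketch below uses a simple genetic algorithm to fit model coefficients to historical demand. The one-equation demand model and all numbers are hypothetical placeholders, not the paper's five-SDE system or its Mauritius data.

```python
import numpy as np

years = np.arange(2003, 2010)
demand_obs = np.array([310., 325., 338., 352., 370., 385., 401.])  # hypothetical demand (ktoe)
gdp = np.linspace(4.0, 6.0, len(years))                            # hypothetical GDP index

def simulate(coeffs):
    a, b = coeffs
    return a * gdp + b            # stand-in for integrating the five-SDE system

def fitness(coeffs):
    return -np.mean((simulate(coeffs) - demand_obs) ** 2)  # negative MSE

rng = np.random.default_rng(0)
pop = rng.uniform(-100, 100, size=(50, 2))       # 50 candidate coefficient pairs
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][-25:]      # keep the fitter half
    children = parents[rng.integers(0, 25, 25)] + rng.normal(0, 1.0, (25, 2))  # mutate
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(ind) for ind in pop])]
print("fitted coefficients (a, b):", best)
```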
Procedia PDF Downloads 369
3312 Multi Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite
Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar
Abstract:
This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method is based on Taguchi's design-of-experiments method and adopts Grey Relational Analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters, considering weighted output response characteristics, using grey relational analysis. The output response characteristics considered are surface roughness, burr height, and hole diameter error, under the experimental conditions of cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of orthogonal array, design of experiments, and grey relational analysis was used to ascertain the best possible drilling process parameters that give minimum surface roughness, burr height, and hole diameter error. The results reveal that the combination of Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.
Keywords: metal matrix composite, drilling, optimization, step drill, surface roughness, burr height, hole diameter error
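The grey relational step the abstract relies on can be sketched compactly: normalize the smaller-is-better responses, compute grey relational coefficients, and average them into one grade per run. The four runs and equal weights below are illustrative; the study used an L27 array with weighted responses.

```python
import numpy as np

# rows = experimental runs, cols = [Ra (um), burr height (mm), dia. error (mm)]
X = np.array([[1.8, 0.30, 0.050],
              [2.4, 0.22, 0.065],
              [1.5, 0.41, 0.048],
              [2.1, 0.27, 0.052]])
norm = (X.max(axis=0) - X) / (X.max(axis=0) - X.min(axis=0))  # smaller-is-better
delta = 1.0 - norm                       # deviation from the ideal sequence
zeta = 0.5                               # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc @ np.array([1/3, 1/3, 1/3])  # equal weights stand in for the paper's weighting
print("grey relational grades:", grade.round(3), "best run:", grade.argmax() + 1)
```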
Procedia PDF Downloads 319
3311 Relay-Augmented Bottleneck Throughput Maximization for Correlated Data Routing: A Game Theoretic Perspective
Authors: Isra Elfatih Salih Edrees, Mehmet Serdar Ufuk Türeli
Abstract:
In this paper, an energy-aware method is presented, integrating energy-efficient relay-augmented techniques for correlated data routing with the goal of optimizing bottleneck throughput in wireless sensor networks. The system tackles the dual challenge of throughput optimization while considering sensor network energy consumption. A unique routing metric has been developed to enable throughput maximization while minimizing energy consumption by exploiting data correlation patterns. The paper introduces a game-theoretic framework to address the NP-complete optimization problem inherent in throughput-maximizing correlation-aware routing with energy limitations. By creating an algorithm that blends energy-aware route selection strategies with best response dynamics, this framework provides a local solution. The suggested technique considerably raises the bottleneck throughput for each source in the network while reducing energy consumption, by choosing the best routes that strike a compromise between throughput enhancement and energy efficiency. Extensive numerical analyses verify the efficiency of the method. The outcomes demonstrate the significant decrease in energy consumption attained by the energy-efficient relay-augmented bottleneck throughput maximization technique, in addition to confirming the anticipated throughput benefits.
Keywords: correlated data aggregation, energy efficiency, game theory, relay-augmented routing, throughput maximization, wireless sensor networks
Procedia PDF Downloads 82
3310 Self-Energy Sufficiency Assessment of the Biorefinery Annexed to a Typical South African Sugar Mill
Authors: M. Ali Mandegari, S. Farzad, , J. F. Görgens
Abstract:
Sugar is one of the main agricultural industries in South Africa, and the livelihoods of approximately one million South Africans depend indirectly on the sugar industry, which is struggling economically and must re-invent itself in order to ensure long-term sustainability. A second-generation biorefinery is defined as a process that uses waste fibrous material for the production of biofuel, chemicals, animal feed, and electricity. Bioethanol is by far the most widely used biofuel for transportation worldwide, and many of the challenges facing bioethanol production have been solved. A biorefinery annexed to an existing sugar mill for the production of bioethanol and electricity is proposed to the sugar industry and is addressed in this study. Since flowsheet development is the key element of the bioethanol process, in this work a biorefinery (bioethanol and electricity production) annexed to a typical South African sugar mill, taking 65 ton/h of dry sugarcane bagasse and tops/trash as feedstock, was simulated. Aspen Plus™ V8.6 was applied as the simulator, and a realistic simulation development approach was followed to reflect the practical behaviour of the plant. The latest results of other researchers concerning pretreatment, hydrolysis, fermentation, enzyme production, bioethanol production, and other supplementary units such as evaporation, water treatment, boiler, and steam/electricity generation were adopted to establish a comprehensive biorefinery simulation. Steam explosion with SO2 was selected for pretreatment due to minimal inhibitor production, and a simultaneous saccharification and fermentation (SSF) configuration was adopted for the enzymatic hydrolysis and fermentation of the cellulose and hydrolysate. Bioethanol purification was simulated by two distillation columns with a side stream, and fuel-grade bioethanol (99.5%) was achieved using a molecular sieve in order to minimize capital and operating costs. The boiler and steam/power generation units were also modelled using industrial design data. The results indicate that the annexed biorefinery can be self-energy sufficient when 35% of the feedstock (tops/trash) bypasses the biorefinery process and is loaded directly into the boiler to produce sufficient steam and power for the sugar mill and the biorefinery plant.
Keywords: biorefinery, self-energy sufficiency, tops/trash, bioethanol, electricity
Procedia PDF Downloads 538
3309 The Role of Metaheuristic Approaches in Engineering Problems
Authors: Ferzat Anka
Abstract:
Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and cause inefficient use of resources. In particular, different approaches may be required for solving the complex and global engineering problems that we frequently encounter in real life. The bigger and more complex a problem, the harder it is to solve. Such problems are called Nondeterministic Polynomial time (NP-hard) problems in the literature. The main reasons for recommending metaheuristic algorithms for various problems are their use of simple concepts, simple mathematical equations and structures, and non-derivative mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded even in many hardware devices. Accordingly, this approach can also be used in trending application areas such as IoT, big data, and parallel architectures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for solving large-scale optimization problems. This study is focused on a new metaheuristic method that has been merged with a chaotic approach. It is based on chaos theory and helps the relevant algorithm to improve population diversity and convergence speed. The approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced metaheuristic algorithm inspired by nature. ChOA identifies four types of chimps in a group (attacker, barrier, chaser, and driver) and proposes a suitable mathematical model for them based on the various intelligence and sexual motivations of chimpanzees. However, this algorithm is not very successful in terms of convergence rate and escaping local optimum traps when solving high-dimensional problems. Although it and some of its variants use some strategies to overcome these problems, these are observed to be insufficient. Therefore, in this study, a newly extended variant is described. In the algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of search agents, and a dynamic switching mechanism is provided for the transition phases. This flexible structure solves the slow convergence problem of ChOA and improves its accuracy on multidimensional problems. It thus aims to achieve success in solving global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy and solves the slow convergence problem of ChOA; 2) it proposes new hybrid movement strategy models for the position updates of search agents; 3) it achieves success in solving global, complex, and constrained problems; 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions, as well as 2 classical constrained engineering problems. The proposed algorithm is compared with ChOA, and several well-known variants (Weighted-ChOA, Enhanced-ChOA) are used. In addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison, since its working model is similar. The obtained results show that the proposed algorithm performs better than, or equivalently to, the compared algorithms.
Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems
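For orientation, the sketch below implements a simplified leader-guided position update in the spirit of published ChOA descriptions (four leaders and a logistic chaotic coefficient). It is not the Ex-ChOA variant, and the coefficient schedule and bounds are assumptions.

```python
import numpy as np

def sphere(x):                               # toy benchmark function
    return np.sum(x ** 2)

rng = np.random.default_rng(1)
dim, n, iters = 5, 20, 100
pop = rng.uniform(-10, 10, (n, dim))
m = 0.7                                      # chaotic coefficient seed
for t in range(iters):
    f = 2.5 - 2.5 * t / iters                # assumed decaying driving coefficient
    fit = np.apply_along_axis(sphere, 1, pop)
    leaders = pop[np.argsort(fit)[:4]]       # attacker, barrier, chaser, driver
    m = 4.0 * m * (1.0 - m)                  # logistic chaotic map
    new_pop = []
    for x in pop:
        estimates = []
        for leader in leaders:
            r1, r2 = rng.random(dim), rng.random(dim)
            a, c = 2 * f * r1 - f, 2 * r2
            d = np.abs(c * leader - m * x)   # distance to each leader
            estimates.append(leader - a * d)
        new_pop.append(np.mean(estimates, axis=0))
    pop = np.clip(np.array(new_pop), -10, 10)
print("best value:", min(sphere(x) for x in pop))
```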
Procedia PDF Downloads 77
3308 Research on Public Space Optimization Strategies for Existing Settlements Based on Intergenerational Friendliness
Authors: Huanhuan Qiang, Sijia Jin
Abstract:
Population aging has become a global trend, and China has entered an aging society, implementing an active aging system focused on home- and community-based care. However, most urban communities where elderly people live face issues such as monotonous planning, unappealing landscapes, and inadequate aging infrastructure, which do not meet the requirements for active aging. Intergenerational friendliness and mutual assistance are key components of China's active aging policy framework. Therefore, residential development should prioritize enhancing intergenerational friendliness. Residential and public spaces are central to community life and well-being, offering new and challenging venues for improving relationships among residents of different ages. They are crucial for developing intergenerational communities with diverse generations and non-blood relationships. This paper takes the Maigaoqiao community in Nanjing, China, as a case study, examining intergenerational interactions in public spaces. Based on Maslow's hierarchy of needs and using time-geography analysis, it identifies the spatiotemporal behavior characteristics of intergenerational groups in outdoor activities. It then constructs an intergenerational-friendliness evaluation system and an IPA quadrant model for public spaces in residential areas. Lastly, it explores optimization strategies for public spaces to promote intergenerationally friendly interactions, focusing on five aspects: accessibility, safety, functionality, a sense of belonging, and interactivity.
Keywords: intergenerational friendliness, demand theory, spatiotemporal behavior, IPA analysis, existing residential public space
Procedia PDF Downloads 4
3307 Structural Damage Detection via Incomplete Model Data Using Output Data Only
Authors: Ahmed Noor Al-qayyim, Barlas Özden Çağlayan
Abstract:
Structural failure is caused mainly by damage that often occurs in structures. Many researchers focus on developing efficient tools to detect damage in structures at an early stage. In past decades, a subject that has received considerable attention in the literature is damage detection based on variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique that detects the damage location for an incomplete structural system using output data only. The method locates the damage from free vibration test data using the "Two Points Condensation (TPC) technique". This method creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimization of the equation of motion using the measured test data. The current stiffness matrices are compared with the original (undamaged) stiffness matrices; large percentage changes in the matrices' coefficients indicate the location of the damage. The TPC technique is applied to experimental data from a simply supported steel beam model after inducing a thickness change in one element. Two cases are considered, and the method detects the damage and determines its location accurately in both. In addition, the results illustrate that these stiffness-matrix changes can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency proves that this technique can also be used for large structures.
Keywords: damage detection, optimization, signals processing, structural health monitoring, two points-condensation
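The final comparison step lends itself to a small sketch: flag damage when the percentage change between the baseline and currently identified condensed stiffness matrices exceeds a threshold. The matrices and the 10% threshold below are hypothetical.

```python
import numpy as np

def damage_indicator(K_ref, K_cur, threshold=10.0):
    """Percentage change per coefficient between reference and current 2-DOF stiffness."""
    change = 100.0 * np.abs(K_cur - K_ref) / np.abs(K_ref)
    return change, bool(change.max() > threshold)

K_ref = np.array([[2.0e6, -1.0e6], [-1.0e6, 2.0e6]])   # undamaged (hypothetical, N/m)
K_cur = np.array([[1.7e6, -0.9e6], [-0.9e6, 2.0e6]])   # identified from test data
change, damaged = damage_indicator(K_ref, K_cur)
print(change.round(1), "-> damaged:", damaged)          # largest change points at the damaged DOF pair
```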
Procedia PDF Downloads 365
3306 First Order Moment Bounds on DMRL and IMRL Classes of Life Distributions
Authors: Debasis Sengupta, Sudipta Das
Abstract:
The class of life distributions with decreasing mean residual life (DMRL) is well known in the field of reliability modeling. It contains the IFR class of distributions and is contained in the NBUE class of distributions. While upper and lower bounds on the reliability function of aging classes such as IFR, IFRA, NBU, NBUE, and HNBUE have been discussed in the literature for a long time, no analogous result is available for the DMRL class. We obtain upper and lower bounds for the reliability function of the DMRL class in terms of the first-order finite moment. The lower bound is obtained by showing that, for any fixed time, the minimization of the reliability function over the class of all DMRL distributions with a fixed mean is equivalent to its minimization over a smaller class of distributions of a special form. Optimization over this restricted set can be carried out algebraically. Likewise, the maximization of the reliability function over the class of all DMRL distributions with a fixed mean turns out to be a parametric optimization problem over the class of DMRL distributions of a special form. The constructive proofs also establish that both the upper and lower bounds are sharp. Further, the DMRL upper bound coincides with the HNBUE upper bound, and the lower bound coincides with the IFR lower bound. We also prove a pair of sharp upper and lower bounds for the reliability function when the distribution has increasing mean residual life (IMRL) with a fixed mean. This result is proved in a similar way. These inequalities fill a long-standing void in the literature on life distribution modeling.
Keywords: DMRL, IMRL, reliability bounds, hazard functions
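For reference, the mean residual life function on which both classes are defined is standard textbook material (not quoted from the paper):

```latex
% Mean residual life of a lifetime X with survival function \bar{F}(t) = P(X > t):
m(t) = \mathbb{E}\left[X - t \mid X > t\right]
     = \frac{1}{\bar{F}(t)} \int_t^{\infty} \bar{F}(u)\, du, \qquad \bar{F}(t) > 0.
% X is DMRL if m(t) is nonincreasing in t, and IMRL if m(t) is nondecreasing.
% The first moment entering the bounds is \mu = m(0) = \int_0^{\infty} \bar{F}(u)\, du.
```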
Procedia PDF Downloads 397
3305 Study of Parameters Affecting the Electrostatic Attraction Force
Authors: Vahid Sabermand, Yousef Hojjat, Majid Hasanzadeh
Abstract:
This paper contains two main parts. In the first part, we simulate and study three types of electrode patterns used in various industries for the suspension and handling of semiconductors and glass, and we select the best pattern by evaluating the electrostatic force; this was the comb-pattern electrode. In the second part, we investigate the parameters affecting the magnitude of the electrostatic force, such as the gap between surface and electrode (g), the electrode width (w), the gap between electrodes (t), the surface permittivity, and the electrode length, as well as methods of improving the adhesion force by changing these values.
Keywords: electrostatic force, electrostatic adhesion, electrostatic chuck, electrostatic application in industry, electroadhesive grippers
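As a rough orientation for the parameter study, a parallel-plate estimate of the attraction pressure shows the scaling with gap and voltage; comb (interdigitated) electrodes require a fringing-field model, so the sketch below is only indicative and all values are assumed.

```python
# Parallel-plate estimate: P = eps0 * epsr * V^2 / (2 * g^2).
# This illustrates why a smaller electrode-surface gap g raises the force.
eps0 = 8.854e-12   # vacuum permittivity (F/m)
epsr = 3.0         # hypothetical surface relative permittivity
V = 2000.0         # applied voltage (V), assumed
for g in (50e-6, 100e-6, 200e-6):   # electrode-surface gap (m)
    P = eps0 * epsr * V ** 2 / (2.0 * g ** 2)
    print(f"gap {g * 1e6:.0f} um -> pressure {P:.1f} Pa")
```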
Procedia PDF Downloads 403
3304 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation
Authors: Ekin Nurbaş
Abstract:
One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DoA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing-based DoA estimation methods have been proposed, and it has been found that Compressive Sensing (CS)-based algorithms achieve significant performance for DoA estimation, even in scenarios with multiple coherent sources. On the other hand, the Genetic Algorithm, a method that provides a solution strategy inspired by natural selection, has been used in sparse representation problems in recent years and provides significant performance improvements. With all of this in consideration, this paper proposes a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for Direction of Arrival (DoA) estimation in the Compressive Sensing (CS) framework. In this method, we formulate a multi-objective optimization problem by splitting the norm-minimization and reconstruction-loss-minimization parts of the Compressive Sensing algorithm. With the help of the Genetic Algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is selected with multiple MCDM methods. Moreover, the performance of the proposed method is compared with CS-based methods in the literature.
Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing
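A minimal sketch of the objective split follows: f1 is the sparsity norm and f2 the reconstruction loss, candidates are Pareto-filtered, and a simple ideal-point rule stands in for the paper's multiple MCDM methods. The random dictionary and candidate set replace the GA evolution for brevity; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 30))           # hypothetical steering dictionary: 8 sensors, 30 grid angles
x_true = np.zeros(30); x_true[4] = 2.0     # one source in grid cell 4
y = A @ x_true
# stand-in for a GA population: random sparse-ish candidate solutions
cands = rng.standard_normal((200, 30)) * (rng.random((200, 30)) ** 4)
f1 = np.abs(cands).sum(axis=1)                       # sparsity objective ||x||_1
f2 = np.linalg.norm(y - cands @ A.T, axis=1)         # reconstruction loss ||y - Ax||_2
objs = np.column_stack([f1, f2])
pareto = [i for i in range(len(objs))                # non-dominated filter
          if not any((objs[j] <= objs[i]).all() and (objs[j] < objs[i]).any()
                     for j in range(len(objs)))]
P = objs[pareto]
Pn = (P - P.min(0)) / (P.max(0) - P.min(0) + 1e-12)  # normalise both objectives
best = pareto[int(np.argmin(np.linalg.norm(Pn, axis=1)))]  # closest to ideal point
print("selected candidate index:", best)
```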
Procedia PDF Downloads 147
3303 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation
Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu
Abstract:
This work presents a particle swarm optimization-trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to monitor the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is derived from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller shows satisfactory performance in controlling and tracking the load power effectively and smoothly, as compared to the PSO-PID control technique. This study will benefit the design of supervisory controllers in nuclear engineering research fields for control applications.
Keywords: machine learning, neural network, pressurized water reactor, supervisory controller
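The PSO-tuning loop can be sketched as below: particles carry (Kp, Ki, Kd), and the cost is the integral absolute error of a closed-loop step response. A first-order lag stands in for the PWR core transfer function, and all gains, bounds, and PSO coefficients are assumptions.

```python
import numpy as np

dt, T = 0.01, 10.0
def iae(gains):
    """Integral absolute error of a unit-step response under PID control."""
    Kp, Ki, Kd = gains
    y, integ, e_prev, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y                            # unit step reference
        integ += e * dt
        u = Kp * e + Ki * integ + Kd * (e - e_prev) / dt
        e_prev = e
        y += dt * (-y + u) / 1.5               # plant stand-in: G(s) = 1 / (1.5 s + 1)
        cost += abs(e) * dt
    return cost

rng = np.random.default_rng(2)
n, dim = 20, 3
x = rng.uniform(0, 5, (n, dim)); v = np.zeros((n, dim))
pbest, pcost = x.copy(), np.array([iae(p) for p in x])
gbest = pbest[pcost.argmin()]
for _ in range(50):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 10)
    cost = np.array([iae(p) for p in x])
    improved = cost < pcost
    pbest[improved], pcost[improved] = x[improved], cost[improved]
    gbest = pbest[pcost.argmin()]
print("tuned Kp, Ki, Kd:", gbest.round(3), "IAE:", round(pcost.min(), 4))
```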
Procedia PDF Downloads 156
3302 Reducing The Frequency of Flooding Accompanied by Low pH Wastewater In 100/200 Unit of Phosphate Fertilizer 1 Plant by Implementing The 3R Program (Reduce, Reuse and Recycle)
Authors: Pradipta Risang Ratna Sambawa, Driya Herseta, Mahendra Fajri Nugraha
Abstract:
In 2020, PT Petrokimia Gresik implemented a program to increase the ROP (Run of Pile) production rate at the Phosphate Fertilizer 1 plant, causing an increase in scrubbing water consumption in the 100/200 unit area. This increase in water consumption results in a higher discharge of wastewater, which can in turn cause local flooding, especially during the rainy season. The 100/200 area of the Phosphate Fertilizer 1 plant is close to the warehouse and is a frequent passing area for trucks transporting raw materials. This causes the pH of the wastewater to become acidic (at the worst point down to pH 1). The problems of flooding and exposure to acidic wastewater in the 100/200 area of the Phosphate Fertilizer 1 plant were then resolved by PT Petrokimia Gresik through wastewater optimization steps called the 3R program (Reduce, Reuse, and Recycle). The 3R program consists of a water consumption reduction program based on the liquid/gas ratio in the scrubbing unit of the 100/200 Phosphate Fertilizer 1 plant; the creation of a wastewater interconnection line so that wastewater from unit 100/200 can be used as scrubbing water in the Phonska 1, Phonska 2, Phonska 3, and unit 300 Phosphate Fertilizer 1 plants; and an increase in scrubbing effectiveness guided by scrubbing effectiveness simulations. Through this series of wastewater optimization programs, PT Petrokimia Gresik has succeeded in reducing NaOH consumption for neutralization by up to 2,880 kg/day, equivalent to savings of up to 314,359.76 dollars/year, and reducing process water consumption by up to 600 m3/day, equivalent to savings of up to 63,739.62 dollars/year.
Keywords: fertilizer, phosphate fertilizer, wastewater, wastewater treatment, water management
Procedia PDF Downloads 26
3301 Simulation and Controller Tuning in a Photo-Bioreactor by Applying the Taguchi Method
Authors: Hosein Ghahremani, MohammadReza Khoshchehre, Pejman Hakemi
Abstract:
This study involves numerical simulations of a vertical plate-type photo-bioreactor to investigate the performance of the microalga Spirulina, together with the control and optimization of the digital controller parameters by the Taguchi method, carried out using MATLAB and Qualitek-4 software. Since, in addition to parameters such as temperature, dissolved carbon dioxide, and biomass, new physical parameters such as light intensity and physiological conditions like photosynthetic efficiency and light inhibition are involved in the biological process, control faces many challenges. Photo-bioreactors not only facilitate the commercial production of microalgae as feed for aquaculture and food supplements in an efficient way, but are also used as a possible platform for the production of active molecules such as antibiotics or innovative anti-tumor agents, for carbon dioxide removal, and for the removal of heavy metals from wastewater. A digital controller is designed to control the light in the bioreactor, and the microalgae growth rate and carbon dioxide concentration inside the bioreactor are investigated. The optimal values of the controller parameters, obtained from the S/N and ANOVA analyses in Qualitek-4, were compared with those from the reaction curve, Cohen-Coon, and Ziegler-Nichols methods. Based on the sum of squared error obtained for each of the control methods mentioned, the Taguchi method was selected as the best method for controlling the light intensity of the photo-bioreactor. Compared to the other control methods listed, this method showed higher stability and a shorter response interval.
Keywords: photo-bioreactor, control and optimization, light intensity, Taguchi method
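The Taguchi ranking step can be illustrated with the smaller-is-better signal-to-noise ratio applied to an error-based response such as the sum of squared control error; the trial values below are placeholders, not the study's data.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-is-better S/N ratio: -10 log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# hypothetical squared-error responses from repeated runs per tuning method
trials = {"reaction curve": [0.42, 0.45],
          "cohen-coon": [0.38, 0.40],
          "taguchi-tuned": [0.21, 0.24]}
for name, y in trials.items():
    print(f"{name:15s} S/N = {sn_smaller_is_better(y):6.2f} dB")  # higher is better
```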
Procedia PDF Downloads 394
3300 Supramolecular Chemistry and Packing of FAMEs in the Liquid Phase for Optimization of Combustion and Emission
Authors: Zeev Wiesman, Paula Berman, Nitzan Meiri, Charles Linder
Abstract:
Supramolecular chemistry refers to the domain of chemistry beyond that of molecules and focuses on chemical systems made up of a discrete number of assembled molecular subunits or components. The self-arrangement of biodiesel components is closely related to their physical properties in combustion systems and emissions. Due to technological difficulties, knowledge regarding the molecular packing of FAMEs (biodiesel) in the liquid phase is limited. Spectral tools such as X-ray and NMR are known to provide evidence of molecular structure organization. Recently, our research group reported that, using a 1H time-domain NMR methodology based on relaxation times and self-diffusion coefficients, FAME clusters with different mobilities can be accurately studied in the liquid phase. Head-to-head dimerization with a quasi-smectic cluster organization, based on molecular motion analysis, was clearly demonstrated. These findings about the assembly/packing of the FAME components are directly associated with the fluidity/viscosity of the biodiesel. Furthermore, these findings may provide information on micro/nanoparticles that are formed in the delivery and injection systems of various combustion systems (affected by thermodynamic conditions). Various parameters relevant to combustion, such as distillation/liquid-gas phase transition, cetane number/ignition delay, soot, and oxidation/NOx emission, may be predicted. These data may open the window for further optimization of FAME/diesel mixtures in terms of combustion and emissions.
Keywords: supramolecular chemistry, FAMEs, liquid phase, fluidity, LF-NMR
Procedia PDF Downloads 341
3299 Low Voltage Ride through Capability Techniques for DFIG-Based Wind Turbines
Authors: Sherif O. Zain Elabideen, Ahmed A. Helal, Ibrahim F. El-Arabawy
Abstract:
Due to the drastic increase in installed wind turbine capacity, grid codes are imposing increasing restrictions, with the aim of treating wind turbines like other conventional sources sooner. In this paper, an intensive review is presented of different techniques used to add low voltage ride through capability to Doubly Fed Induction Generator (DFIG) wind turbines. A system model with a 1.5 MW DFIG wind turbine is constructed and simulated using MATLAB/SIMULINK to explore the effectiveness of the reviewed techniques.
Keywords: DFIG, grid side converters, low voltage ride through, wind turbine
Procedia PDF Downloads 425
3298 Simulation of Flow Patterns in Vertical Slot Fishway with Cylindrical Obstacles
Authors: Mohsen Solimani Babarsad, Payam Taheri
Abstract:
Numerical results of a study of vertical slot fishways with and without cylinders are presented. The simulated results and the measured data in the fishways are compared to validate the application of the model. This investigation is made using FLUENT V6.3, a Computational Fluid Dynamics solver. Advantages of using these types of numerical tools are the possibility of avoiding the limitations of the St. Venant equations, and the fact that turbulence can be modeled by means of different models such as the k-ε model. In general, the present study has demonstrated that the CFD model can be useful for the analysis and design of vertical slot fishways with cylinders.
Keywords: slot fishway, CFD, k-ε model, St. Venant equations
Procedia PDF Downloads 363
3297 Multi-Criteria Decision Making Network Optimization for Green Supply Chains
Authors: Bandar A. Alkhayyal
Abstract:
Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. There are now major efforts underway to create a circular economy to reduce non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products to transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature; the increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimization of the pricing policy for remanufactured products, maximizing total profit and minimizing product recovery costs, was examined and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models. Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case study system created from actual data from sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive study of the quantitative evaluation and performance of the model has been done using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models, to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains
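The strategic-planning layer maps naturally onto a transportation-style linear program: route end-of-life product from collection centers to remanufacturing facilities at minimum cost, subject to supply and capacity limits. The sketch below uses hypothetical costs and volumes, and plain LP rather than the paper's physical programming formulation.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],        # cost[i, j]: collection center i -> facility j
                 [5.0, 3.0, 7.0]])
supply = np.array([120.0, 80.0])         # end-of-life units gathered at each center
capacity = np.array([90.0, 70.0, 60.0])  # remanufacturing capacity at each facility
c = cost.ravel()                         # decision variables x[i, j], flattened row-major
A_eq = np.kron(np.eye(2), np.ones(3))    # each center ships out exactly its supply
A_ub = np.tile(np.eye(3), (1, 2))        # each facility receives at most its capacity
res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply, bounds=(0, None))
print(res.x.reshape(2, 3), "total cost:", res.fun)
```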
Procedia PDF Downloads 160
3296 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework
Authors: Iulia E. Falcan
Abstract:
The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation, at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE, and 4) dispatchable power sources, such as biomass. This paper uses NASA-derived hourly data on weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators for Electricity (ENTSO-E), to develop a stochastic optimization model. This model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technologies portfolio. While this issue has been addressed before, it was done so using deterministic models that extrapolated historic data on weather patterns and power demand, as well as ignoring the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data, and thus expected power output from RE technologies, rather than assuming known future power output. The latter level of uncertainty is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels (1%, 5%, and 10%), important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization
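For readers unfamiliar with the risk constraint, the standard Rockafellar-Uryasev form of CVaR (textbook material, not quoted from the paper) is:

```latex
% Conditional Value at Risk of a loss L (here, unserved energy) at confidence level \alpha:
\mathrm{CVaR}_\alpha(L) = \min_{\eta \in \mathbb{R}}
    \left\{ \eta + \frac{1}{1-\alpha}\, \mathbb{E}\big[(L - \eta)^+\big] \right\}.
% A constraint of the form CVaR_alpha(L) <= epsilon remains convex (linear under
% scenario sampling), which keeps the portfolio optimization tractable; the 1%, 5%,
% and 10% thresholds in the study correspond to different settings of this risk budget.
```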
Procedia PDF Downloads 170
3295 A Simulation Study of Direct Injection Compressed Natural Gas Spark Ignition Engine Performance Utilizing Turbulent Jet Ignition with Controlled Air Charge
Authors: Siyamak Ziyaei, Siti Khalijah Mazlan, Petros Lappas
Abstract:
Compressed Natural Gas (CNG) mainly consists of methane (CH₄) and has a low carbon-to-hydrogen ratio relative to other hydrocarbons. As a result, it has the potential to reduce CO₂ emissions by more than 20% relative to conventional fuels like diesel or gasoline. Although Natural Gas (NG) has environmental advantages compared to other hydrocarbon fuels, whether gaseous or liquid, its main component, CH₄, burns at a slower rate than conventional fuels. A higher pressure and a leaner cylinder environment will accentuate the slow-burning characteristic of CH₄. Lean combustion and high compression ratios are well-known methods for increasing the efficiency of internal combustion engines. In order to achieve successful CNG lean combustion in Spark Ignition (SI) engines, a strong ignition system is essential to avoid engine misfires, especially in ultra-lean conditions. Turbulent Jet Ignition (TJI) is an ignition system that employs a pre-combustion chamber to ignite the lean fuel mixture in the main combustion chamber, using a fraction of the total fuel per cycle. TJI enables ultra-lean combustion by providing distributed ignition sites through orifices. The fast burn rate provided by TJI enables an ordinary SI engine to be comparable to other combustion systems, such as Homogeneous Charge Compression Ignition (HCCI) or Controlled Auto-Ignition (CAI), in terms of thermal efficiency, through increased levels of dilution, without the need for sophisticated control systems. Due to the physical geometry of TJIs, which contain small orifices connecting the pre-chamber to the main chamber, scavenging is one of the main factors that reduce TJI performance. Specifically, providing the right mixture of fuel and air has been identified as a key challenge, the reason being the insufficient amount of air pushed into the pre-chamber during each compression stroke. There is also the problem that residual combustion gases such as CO₂, CO, and NOx from the previous combustion cycle dilute the pre-chamber fuel-air mixture, preventing rapid combustion in the pre-chamber. An air-controlled active TJI is presented in this paper to address these issues. By supplying air to the pre-chamber at a sufficient pressure, residual gases are exhausted and the air-fuel ratio is controlled within the pre-chamber, thereby improving the quality of combustion. This paper investigates the 3D-simulated combustion characteristics of a Direct Injection (DI-CNG) fuelled SI engine with a pre-chamber equipped with an air channel, using AVL FIRE software. Experiments and simulations were performed at the Worldwide Mapping Point (WWMP), at 1500 Revolutions Per Minute (RPM) and 3.3 bar Indicated Mean Effective Pressure (IMEP), using only conventional spark plugs as the baseline. After validating the simulation data, baseline engine conditions were set for all simulation scenarios at λ=1. Following that, pre-chambers with and without an auxiliary fuel supply were simulated. In the simulated DI-CNG SI engine, active TJI was observed to perform better than passive TJI and the spark plug. In conclusion, the active pre-chamber with an air channel demonstrated improved thermal efficiency (ηth) over its counterparts and conventional spark ignition systems.
Keywords: turbulent jet ignition, active air control turbulent jet ignition, pre-chamber ignition system, active and passive pre-chamber, thermal efficiency, methane combustion, internal combustion engine combustion emissions
Procedia PDF Downloads 87
3294 Research on the Function Optimization of China-Hungary Economic and Trade Cooperation Zone
Authors: Wenjuan Lu
Abstract:
China and Hungary have risen from a friendly and comprehensive cooperative relationship to a comprehensive strategic partnership in recent years, and economic and trade relations between the two countries have developed smoothly. As an important country along the 'Belt and Road', Hungary has strong economic complementarity with China and has unique advantages in receiving China's industrial transfer and supporting its economic transformation and development. The construction of the China-Hungary Economic and Trade Cooperation Zone, initiated by the 'Sino-Hungarian Borsod Industrial Zone' and the 'Hungarian Central European Trade and Logistics Cooperation Park', has promoted infrastructure construction, optimized production capacity, promoted industrial restructuring, and formed brand and agglomeration effects. Enhancing the influence of Chinese companies in the European market has also promoted economic development in Hungary and, more broadly, in Central and Eastern Europe. However, as the China-Hungary Economic and Trade Cooperation Zone is still in its infancy, there are still shortcomings, such as its small scale, single function, and lack of a prominent platform. In the future, based on the needs of China's cooperation with the '17+1' countries and of China-Hungary cooperation, in addition to appropriately expanding the scale and number of economic and trade cooperation zones, it would be better to focus on optimizing and adjusting their functions and highlighting differentiated economic and trade cooperation. The differentiated functions of the trade zones strengthen the multi-faceted cooperation of the economic and trade cooperation zones and highlight their role as platforms for cooperation in information, capital, and services.
Keywords: 'One Belt, One Road' Initiative, China-Hungary economic and trade cooperation zone, function optimization, Central and Eastern Europe
Procedia PDF Downloads 180
3293 A User-Directed Approach to Optimization via Metaprogramming
Authors: Eashan Hatti
Abstract:
In software development, programmers often must choose between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance; high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as input and produces low-level, performant code as output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language's library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer, and in turn program performance, falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot's main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This takes the optimization workload off the compiler developers' hands and gives it to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or "levels", one for metaprogramming and the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot's key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
Keywords: optimization, metaprogramming, logic programming, abstraction
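Peridot's own syntax is not shown here, so as a language-neutral illustration of "an optimization implemented as a metaprogram living in user code", the sketch below registers a rewrite rule over Python ASTs; the rule itself (folding x * 2 into x + x) is arbitrary and has none of Peridot's unification or non-determinism machinery.

```python
import ast

class FoldDouble(ast.NodeTransformer):
    """A library-level rewrite rule: fold `x * 2` into `x + x`."""
    def visit_BinOp(self, node):
        self.generic_visit(node)                     # rewrite children first
        if (isinstance(node.op, ast.Mult)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 2):
            return ast.copy_location(
                ast.BinOp(left=node.left, op=ast.Add(), right=node.left), node)
        return node

tree = ast.parse("y = (a + b) * 2")
tree = ast.fix_missing_locations(FoldDouble().visit(tree))
print(ast.unparse(tree))   # y = a + b + (a + b)
```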
Procedia PDF Downloads 88
3292 Performance Analysis of Double Gate FinFET at Sub-10NM Node
Authors: Suruchi Saini, Hitender Kumar Tyagi
Abstract:
With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. While scaling the device, several short-channel effects occur. To minimize these scaling limitations, some device architectures have been developed in the semiconductor industry. FinFET is one of the most promising structures. The double-gate 2D Fin field-effect transistor has the benefit of suppressing short-channel effects (SCEs) and functions well at sub-14 nm technology nodes. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field-effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field-effect transistor are determined at the 10 nm technology node. The performance parameters are extracted in terms of threshold voltage, transconductance, leakage current, and current on-off ratio. In this paper, device performance is analyzed for different structure parameters. The use of the Id-Vg curve is a robust technique of significant importance in transistor modeling, circuit design, performance optimization, and the quality control of electronic devices and integrated circuits, as well as in understanding field-effect transistors. The FinFET structure is optimized to increase the current on-off ratio and transconductance. Through this analysis, the impact of different channel widths and source and drain lengths on the Id-Vg characteristics and transconductance is examined. Device performance was affected by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes. For every set of simulations, the device characteristics are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor to consider; it is therefore crucial to minimize the off-state current to maximize circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized with a channel width of 3 nm for a gate length of 10 nm, but there is no significant effect of source and drain length on the current on-off ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. This research also concludes that a transconductance of 340 S/m is achieved with a fin width of 3 nm at a gate length of 10 nm, and 2380 S/m for a source and drain extension length of 5 nm.
Keywords: current on-off ratio, FinFET, short-channel effects, transconductance
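The extraction step implied above is simple to sketch: transconductance is the derivative gm = dId/dVg, and the on/off ratio compares the on-state current to the off-state leakage. The logistic transfer curve below is a hypothetical stand-in for MuGFET output, not 10 nm FinFET data.

```python
import numpy as np

Vg = np.linspace(0.0, 0.7, 71)                        # gate sweep (V)
Vt, Ion, Ioff = 0.35, 6e-5, 1e-9                      # assumed threshold, on current, leakage (A)
Id = Ioff + Ion / (1.0 + np.exp(-(Vg - Vt) / 0.03))   # smooth synthetic transfer curve
gm = np.gradient(Id, Vg)                              # transconductance dId/dVg (S)
print(f"peak gm = {gm.max():.2e} S, on/off ratio = {Id[-1] / Id[0]:.1e}")
```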
Procedia PDF Downloads 61
3291 Numerical Modelling of Wind Dispersal of Seeds of the Bromeliad Tillandsia recurvata (L.) L. Attached to Electric Power Lines
Authors: Bruna P. De Souza, Ricardo C. De Almeida
Abstract:
In some cities in the State of Paraná, Brazil, and in other countries, atmospheric bromeliads (Tillandsia spp., Bromeliaceae) are considered weeds on trees, electric power lines, satellite dishes, and other artificial supports. In this study, a numerical model was developed to simulate the wind dispersal of seeds of the Tillandsia recurvata species, with the objective of evaluating seed displacement in the city of Ponta Grossa, PR, Brazil, a region considered to be already infested. The model simulates the dispersal of each individual seed, integrating parameters from the atmospheric boundary layer (ABL) and the local wind, simulated by the Weather Research and Forecasting (WRF) mesoscale atmospheric model for the period 2012 to 2015. The dispersal model also incorporates the approximate number of bromeliads and source height data collected from the most infested electric power lines. The seeds' terminal velocity, an important input that was not available in the literature, was measured in an experiment with fifty-one seeds of Tillandsia recurvata. Wind is the main dispersal agent acting on plumed seeds, whereas atmospheric turbulence is a determinant factor in transporting seeds to distances beyond 200 meters, as well as in introducing random variability into the seed dispersal process. Such variability was added to the model through the application of an inverse fast Fourier transform to the energy spectra of the wind velocity components, based on boundary-layer meteorology theory and estimated from micrometeorological parameters produced by the WRF model. Seasonal and annual wind means were obtained from the surface wind data simulated by WRF for Ponta Grossa. The mean wind direction is assumed to be the most probable direction of the bromeliad seed trajectories. Moreover, the atmospheric turbulence effect and dispersal distances were analyzed in order to identify likely regions of infestation around the Ponta Grossa urban area. It is important to mention that this model could be applied to any species and location, as long as the seeds' biological data and meteorological data for the region of interest are available.
Keywords: atmospheric turbulence, bromeliad, numerical model, seed dispersal, terminal velocity, wind
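The turbulence-synthesis step can be sketched as follows: assign random phases to a prescribed energy spectrum and inverse-FFT to obtain a fluctuating velocity record. The Kolmogorov-like -5/3 spectrum and unit-variance scaling are generic assumptions, not the paper's WRF-derived spectra.

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt = 4096, 0.1                                 # samples and time step (s)
freqs = np.fft.rfftfreq(n, dt)
S = np.zeros_like(freqs)
S[1:] = freqs[1:] ** (-5.0 / 3.0)                 # Kolmogorov-like inertial-range spectrum
amp = np.sqrt(S)                                  # spectral amplitudes (zero mean: S[0] = 0)
phase = rng.uniform(0.0, 2.0 * np.pi, len(freqs)) # random phases give the variability
u = np.fft.irfft(amp * np.exp(1j * phase), n)     # fluctuating velocity component
u /= u.std()                                      # placeholder scaling to unit variance
print("synthetic gust record, first samples:", u[:5].round(3))
```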
Procedia PDF Downloads 141
3290 Flow Sheet Development and Simulation of a Bio-refinery Annexed to Typical South African Sugar Mill
Authors: M. Ali Mandegari, S. Farzad, J. F. Görgens
Abstract:
Sugar is one of the main agricultural industries in South Africa, and the livelihoods of approximately one million South Africans depend indirectly on the sugar industry, which is struggling economically and must re-invent itself in order to ensure long-term sustainability. A second-generation bio-refinery is defined as a process that uses waste fibrous material for the production of bio-fuel, chemicals, animal feed, and electricity. Bio-ethanol is by far the most widely used bio-fuel for transportation worldwide, and many of the challenges facing bio-ethanol production have been solved. A bio-refinery annexed to an existing sugar mill for the production of bio-ethanol and electricity is proposed to the sugar industry and is addressed in this study. Since flow-sheet development is the key element of the bio-ethanol process, in this work a bio-refinery (bio-ethanol and electricity production) annexed to a typical South African sugar mill, taking 65 ton/h of dry sugarcane bagasse and tops/trash as feedstock, was simulated. Aspen Plus™ V8.6 was applied as the simulator, and a realistic simulation development approach was followed to reflect the practical behaviour of the plant. The latest results of other researchers concerning pretreatment, hydrolysis, fermentation, enzyme production, bio-ethanol production, and other supplementary units such as evaporation, water treatment, boiler, and steam/electricity generation were adopted to establish a comprehensive bio-refinery simulation. Steam explosion with SO2 was selected for pretreatment due to minimal inhibitor production, and a simultaneous saccharification and fermentation (SSF) configuration was adopted for the enzymatic hydrolysis and fermentation of the cellulose and hydrolysate. Bio-ethanol purification was simulated by two distillation columns with a side stream, and fuel-grade bio-ethanol (99.5%) was achieved using a molecular sieve in order to minimize capital and operating costs. The boiler and steam/power generation units were also modelled using industrial design data. The results indicate that 256.6 kg of bio-ethanol per ton of feedstock and 31 MW of surplus power are attained from the bio-refinery, while the process consumes 3.5, 3.38, and 0.164 GJ per ton of feedstock of hot utility, cold utility, and electricity, respectively. The developed simulation is a starting point for a variety of analyses and developments in further studies.
Keywords: bio-refinery, bagasse, tops, trash, bio-ethanol, electricity
Procedia PDF Downloads 533
3289 Optimization of Lead Bioremediation by Marine Halomonas sp. ES015 Using Statistical Experimental Methods
Authors: Aliaa M. El-Borai, Ehab A. Beltagy, Eman E. Gadallah, Samy A. ElAssar
Abstract:
Bioremediation technology is now used for treatment instead of traditional metal removal methods. A strain isolated from Marsa Alam, Red Sea, Egypt showed high resistance to high lead concentrations and was identified by the 16S rRNA gene sequencing technique as Halomonas sp. ES015. Medium optimization was carried out using a Plackett-Burman design, and the most significant factors were yeast extract, casamino acid, and inoculum size. The optimized medium obtained by the statistical design raised the removal efficiency from 84% to 99% at an initial lead concentration of 250 ppm. Moreover, a Box-Behnken experimental design was applied to study the relationship between yeast extract concentration, casamino acid concentration, and inoculum size. The optimized medium increased the removal efficiency to 97% at an initial lead concentration of 500 ppm. Halomonas sp. ES015 cells immobilized on sponge cubes, using the optimized medium in a loop bioremediation column, showed relatively constant lead removal efficiency when reused over six successive cycles across the time intervals studied. The metal removal efficiency was also not affected by flow rate changes. Finally, the results of this research point to the possibility of lead bioremediation by free or immobilized cells of Halomonas sp. ES015, whether in batch cultures or in semi-continuous cultures using column technology.
Keywords: bioremediation, lead, Box-Behnken, Halomonas sp. ES015, loop bioremediation, Plackett-Burman
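The screening stage can be sketched by constructing the standard 12-run Plackett-Burman matrix (cyclic shifts of the usual generator row plus an all-minus run); the factor assignment and the effect estimate shown in the comments are illustrative assumptions.

```python
import numpy as np

# Standard 12-run Plackett-Burman generator row (11 two-level factors, coded +1/-1).
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
design = np.array([np.roll(gen, i) for i in range(11)] + [-np.ones(11, dtype=int)])
print(design.shape)   # (12, 11): 12 runs, up to 11 factors
# Columns would be assigned to factors such as yeast extract, casamino acid,
# and inoculum size. Main-effect estimate for column j, given measured removal
# efficiencies `responses` (length 12):
#   effect_j = responses[design[:, j] == 1].mean() - responses[design[:, j] == -1].mean()
```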
Procedia PDF Downloads 196
3288 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem
Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee
Abstract:
Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over past years for both the static and the dynamic environment (denoted SWTA and DWTA, respectively). Due to the requirement that the problem be solved within a relevant computational time, WTA has suffered from poor solution efficiency. As a result, SWTA and DWTA problems have been solved only in limited battlefield situations. In this paper, the general situation under continuous time is considered through the Time-based Weapon Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: a decomposed opt-opt algorithm, a decomposed opt-greedy algorithm, and a greedy algorithm. Although the TWTA optimization model works inefficiently when the problem is large, the decomposed opt-opt algorithm, based on the linearization and decomposition method, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long to solve with the optimization model, several greedy-based algorithms are proposed. These show lower performance values than the decomposed opt-opt algorithm, but require very short computation times. Hence, this paper proposes an improved method by applying decomposition to TWTA, and more practical and effectual methods can be developed for using TWTA on the battlefield.
Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research
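Among the proposed heuristics, the greedy idea is the easiest to sketch: repeatedly commit the highest-value launcher-target pair that violates no assignment constraint. The value matrix is hypothetical, and the time dimension (engagement windows) of TWTA is omitted for brevity.

```python
def greedy_wta(value, n_launchers, n_targets):
    """Greedy one-to-one assignment: value[i][j] is the expected
    destruction value of launcher i engaging target j."""
    pairs = sorted(((value[i][j], i, j)
                    for i in range(n_launchers) for j in range(n_targets)),
                   reverse=True)
    used_l, used_t, plan = set(), set(), []
    for v, i, j in pairs:
        if i not in used_l and j not in used_t:
            plan.append((i, j, v))
            used_l.add(i); used_t.add(j)
    return plan

value = [[0.9, 0.4, 0.6],   # hypothetical engagement values
         [0.5, 0.8, 0.3],
         [0.2, 0.7, 0.9]]
print(greedy_wta(value, 3, 3))   # [(0, 0, 0.9), (2, 2, 0.9), (1, 1, 0.8)]
```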
Procedia PDF Downloads 336
3287 Robotic Arm-Automated Spray Painting with One-Shot Object Detection and Region-Based Path Optimization
Authors: Iqraq Kamal, Akmal Razif, Sivadas Chandra Sekaran, Ahmad Syazwan Hisaburi
Abstract:
Painting plays a crucial role in the aerospace manufacturing industry, serving both protective and cosmetic purposes for components. However, the traditional manual painting method is time-consuming and labor-intensive, posing challenges for the sector in achieving higher efficiency. Additionally, current automated robot path planning has been a bottleneck for spray painting processes, as typical manual teaching methods are time-consuming, error-prone, and skill-dependent. Therefore, it is essential to develop automated tool path planning methods to replace manual ones, reducing costs and improving product quality. Focusing on flat panel painting in aerospace manufacturing, this study aims to address issues related to unreliable part identification techniques caused by the high-mixture, low-volume nature of the industry. The proposed solution involves using a spray gun and a UR10 robotic arm with a vision system that utilizes one-shot object detection (OS2D) to identify parts accurately. Additionally, the research optimizes path planning by concentrating on the region of interest, specifically the identified part, rather than uniformly covering the entire painting tray.
Keywords: aerospace manufacturing, one-shot object detection, automated spray painting, vision-based path optimization, deep learning, automation, robotic arm
Procedia PDF Downloads 82
3286 35 MHz Coherent Plane Wave Compounding High Frequency Ultrasound Imaging
Authors: Chih-Chung Huang, Po-Hsun Peng
Abstract:
Ultrasound transient elastography has become a valuable tool for many clinical diagnoses, such as liver disease and breast cancer. Pathological tissue can be distinguished by elastography because its stiffness differs from that of the surrounding normal tissues. An ultrafast frame rate of ultrasound imaging is needed for the transient elastography modality. The elastography obtained in an ultrafast system suffers from low resolution, which affects the robustness of the transient elastography. To overcome these problems, a coherent plane-wave compounding technique has been proposed for conventional ultrasound systems operating at around 3-15 MHz. The purpose of this study is to develop a novel beamforming technique for high-frequency ultrasound coherent plane-wave compounding imaging; the simulated results will provide the standards for hardware development. Plane-wave compounding imaging produces a series of low-resolution images, each formed by firing all elements of an array transducer in one shot at a different inclination angle and receiving the echoes by conventional beamforming, and compounds them coherently. Simulations of plane-wave compounding images and focused-transmit images were performed using Field II. All images were produced from point spread functions (PSFs) and cyst phantoms with a 64-element linear array working at 35 MHz center frequency, 55% bandwidth, and a pitch of 0.05 mm. The F-number is 1.55 in all simulations. PSFs and cyst phantoms were simulated using single, 17, and 43 plane-wave transmission angles (with successive plane waves separated by 0.75 degrees), as well as focused transmission. The resolution and contrast of the images improved with the number of plane-wave firing angles. The lateral resolutions for the different methods were measured by the -10 dB lateral beam width. Comparing the plane-wave compounding image and the focused-transmit image, both exhibited the same lateral resolution of 70 um when 37 angles were compounded, and the lateral resolution reached 55 um when 47 angles were compounded. All the results show the potential of using high-frequency plane-wave compound imaging for characterizing the elastic properties of microstructured tissue, such as the eye, skin, and vessel walls, in the future.
Keywords: plane wave imaging, high frequency ultrasound, elastography, beamforming
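The compounding step itself reduces to a coherent (complex, pre-envelope) sum of the low-resolution frames formed at each steering angle; the sketch below abstracts away the per-angle delay-and-sum beamforming and uses random data purely to show the data flow and array shapes.

```python
import numpy as np

def compound(lowres_frames):
    """Coherently sum complex beamformed frames, then take the envelope."""
    return np.abs(np.sum(lowres_frames, axis=0))

rng = np.random.default_rng(4)
# (n_angles, depth samples, lateral samples) complex RF frames, one per steering angle
frames = rng.standard_normal((43, 64, 64)) + 1j * rng.standard_normal((43, 64, 64))
image = compound(frames)
print(image.shape)   # (64, 64) compounded high-resolution image
```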
Procedia PDF Downloads 539