Search results for: finite volume algorithm
5233 Detection of Curvilinear Structure via Recursive Anisotropic Diffusion
Authors: Sardorbek Numonov, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Dongeun Choi, Byung-Woo Hong
Abstract:
The detection of curvilinear structures often plays an important role in image analysis. In particular, localizing the fissures in chest CT imagery is considered a crucial step in the diagnosis of chronic respiratory diseases: the lung is divided into five lobes by fissures that appear as linear features. However, these characteristic linear features are often subtle due to high intensity variability, pathological deformation, or image noise introduced during the imaging procedure, which leads to uncertainty in the quantification of anatomical or functional properties of the lung. It is therefore desirable to enhance the linear features present in chest CT images so that the lobes can be delineated more distinctly. We propose a recursive diffusion process that favors coherent features based on an anisotropic analysis of the structure tensor. The local image features associated with certain scales and directions can be characterized by the eigenanalysis of the structure tensor, which is conventionally regularized via isotropic diffusion filters. However, the isotropic diffusion filters involved in the computation of the structure tensor generally blur geometrically significant structure, degrading the discriminative power of the features. It is thus necessary to take the local scale and direction of the features into consideration when computing the structure tensor. We therefore apply an anisotropic diffusion that accounts for the scale and direction of the features in the computation of the structure tensor; the eigenanalysis of the resulting tensor in turn determines the shape of the anisotropic diffusion kernel.
The recursive application of the anisotropic diffusion, with a kernel whose shape is derived from the structure tensor, leads to an anisotropic scale-space in which the geometrical features are preserved via the eigenanalysis of the structure tensor computed from the diffused image. This recursive interaction between the geometry-driven diffusion and the computation of the structure tensor that shapes the diffusion kernels yields a scale-space in which the geometrical properties of the image structure are effectively characterized. We apply our recursive anisotropic diffusion algorithm to the detection of curvilinear structure in chest CT imagery, where the fissures present curvilinear features and define the lobe boundaries. It is shown that our algorithm yields precise detection of the fissures while overcoming the subtlety of the characteristic linear features. The quantitative evaluation demonstrates the robustness and effectiveness of the proposed algorithm for fissure detection in chest CT in terms of false positive and true positive measures, and the receiver operating characteristic curves indicate the potential of our algorithm as a segmentation tool in the clinical environment. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: anisotropic diffusion, chest CT imagery, chronic respiratory disease, curvilinear structure, fissure detection, structure tensor
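Where code helps, the core building block of the approach, eigenanalysis of a smoothed structure tensor to measure local coherence, can be sketched as follows. This is a minimal NumPy illustration using plain isotropic Gaussian regularization (the conventional baseline the abstract criticizes), not the authors' recursive anisotropic scheme; the function names and smoothing scale are our assumptions:

```python
import numpy as np

def structure_tensor(image, sigma=1.0):
    """Smoothed 2-D structure tensor. The gradient products are
    regularized here with a plain (isotropic) Gaussian, i.e. the
    conventional scheme that the proposed method replaces with
    recursive anisotropic diffusion."""
    gy, gx = np.gradient(image.astype(float))
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()

    def smooth(a):
        # Separable Gaussian smoothing: rows, then columns.
        a = np.apply_along_axis(np.convolve, 0, a, k, mode="same")
        return np.apply_along_axis(np.convolve, 1, a, k, mode="same")

    return smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)

def coherence(Jxx, Jxy, Jyy):
    """Eigenvalue-based coherence (l1 - l2) / (l1 + l2): close to 1 for
    line-like (curvilinear) structure, close to 0 for flat regions."""
    root = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    l1 = 0.5 * (Jxx + Jyy + root)
    l2 = 0.5 * (Jxx + Jyy - root)
    return (l1 - l2) / (l1 + l2 + 1e-12)
```

For curvilinear structure such as fissures the coherence measure approaches 1; the proposed method replaces the isotropic smoothing here with diffusion shaped by the tensor's own eigenvectors, applied recursively.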
Procedia PDF Downloads 232
5232 Enhanced Planar Pattern Tracking for an Outdoor Augmented Reality System
Authors: L. Yu, W. K. Li, S. K. Ong, A. Y. C. Nee
Abstract:
In this paper, a scalable augmented reality framework for handheld devices is presented. The framework uses a server-client data communication structure, in which the search for tracking targets among a database of images is performed on the server side, while pixel-wise 3D tracking is performed on the client side, in this case a handheld mobile device. Image search on the server side adopts a residual-enhanced image descriptor representation that gives the framework its scalability. The tracking algorithm on the client side is based on a gravity-aligned feature descriptor, which takes advantage of the sensors on the mobile device, and an optimized intensity-based image alignment approach that ensures the accuracy of 3D tracking. Automatic content streaming is achieved by using a key-frame selection algorithm, client working-phase monitoring, and standardized rules for content communication between the server and client. A recognition accuracy test performed on a standard dataset shows that the method adopted in the presented framework outperforms the Bag-of-Words (BoW) method used in some previous systems. Experimental tests conducted on a set of video sequences demonstrated real-time performance of the tracking system, with a frame rate of 15-30 frames per second. The framework is shown to be functional in practical situations with a demonstration application on a campus walk-around.
Keywords: augmented reality framework, server-client model, vision-based tracking, image search
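The server-side image search step can be illustrated with a minimal descriptor-matching sketch: rank database images by cosine similarity of fixed-length global descriptors. The residual-enhanced descriptors of the paper would simply replace the toy vectors here; all names are illustrative:

```python
import numpy as np

def build_index(descriptors):
    """L2-normalize database image descriptors (one row per image)
    so that a dot product equals cosine similarity."""
    D = np.asarray(descriptors, dtype=float)
    return D / np.linalg.norm(D, axis=1, keepdims=True)

def search(index, query, top_k=1):
    """Server-side search sketch: return the indices and similarity
    scores of the top_k database images for a query descriptor."""
    q = np.asarray(query, dtype=float)
    q = q / np.linalg.norm(q)
    scores = index @ q
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]
```

In the paper's architecture the returned indices would identify which tracking target (and associated content) to stream back to the handheld client.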
Procedia PDF Downloads 275
5231 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)
Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton
Abstract:
Cold-start is a notoriously difficult problem that can occur in recommendation systems; it arises when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm – the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST) – is proposed, which is designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment-projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), whose computational cost grows more slowly with data size but which requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, the algorithm exploits their complementary strengths. The empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.
Keywords: cold-start learning, expectation propagation, multi-armed bandits, Thompson sampling, variational inference
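The flavour of the approach, maintaining an approximate Gaussian posterior over logistic bandit weights and Thompson-sampling from it, can be sketched as below. This is a generic ADF-style diagonal-Gaussian online update shown only to illustrate the ingredients FAB-COST combines; it is not the FAB-COST algorithm, and the class name and update rule are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlineLogisticPosterior:
    """Diagonal-Gaussian posterior over logistic-regression weights,
    updated one observation at a time (an ADF-flavoured online update,
    akin to online Laplace schemes used in Bayesian logistic bandits)."""

    def __init__(self, dim, prior_var=1.0):
        self.mean = np.zeros(dim)
        self.prec = np.full(dim, 1.0 / prior_var)  # diagonal precision

    def update(self, x, y):
        # Absorb one (context, click) pair: gradient step on the mean,
        # curvature of the logistic likelihood added to the precision.
        p = sigmoid(x @ self.mean)
        self.prec += p * (1.0 - p) * x * x
        self.mean += (y - p) * x / self.prec

    def sample(self):
        # Thompson sampling: draw a plausible weight vector and act
        # greedily with respect to it.
        return rng.normal(self.mean, 1.0 / np.sqrt(self.prec))
```

The EP-to-ADF switch in FAB-COST would replace this single fixed update with EP moment projections early on and a cheaper ADF-style update once the data volume grows.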
Procedia PDF Downloads 108
5230 Seismic Performance of Benchmark Building Installed with Semi-Active Dampers
Authors: B. R. Raut
Abstract:
The seismic performance of a 20-storey benchmark building with semi-active dampers is investigated under various earthquake ground motions. Semi-Active Variable Friction Dampers (SAVFD) and magnetorheological (MR) dampers are used in this study. A recently proposed predictive control algorithm is employed for the SAVFD, and a simple mechanical model based on a Bouc–Wen element with a clipped-optimal control algorithm is employed for the MR damper. A parametric study is carried out to ascertain the optimum parameters of the semi-active controllers, which yield the minimum performance indices of the controlled benchmark building. The effectiveness of the dampers is studied in terms of the reduction in structural responses and performance criteria. To minimize the cost of the dampers, the optimal placement of the dampers, rather than providing dampers at all floors, is also investigated. The semi-active dampers installed in the benchmark building effectively reduce the earthquake-induced responses. A smaller number of dampers at appropriate locations provides a comparable response of the benchmark building, thereby reducing the cost of the dampers significantly. The effectiveness of the two semi-active devices in mitigating seismic responses is cross-compared. The majority of the performance criteria of the MR dampers are lower than those of the SAVFD installed in the benchmark building. Thus, the performance of the MR dampers is considerably better than that of the SAVFD in reducing displacement, drift, acceleration, and base shear of mid- to high-rise buildings against seismic forces.
Keywords: benchmark building, control strategy, input excitation, MR dampers, peak response, semi-active variable friction dampers
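The clipped-optimal law used with MR dampers admits a one-line statement: command the maximum voltage only when the measured damper force must grow toward the desired (e.g. optimal-controller) force. A minimal sketch; the signature and sign convention are illustrative, not the paper's implementation:

```python
def clipped_optimal_voltage(f_desired, f_measured, v_max):
    """Clipped-optimal command for a semi-active MR damper: when the
    measured damper force and the force error point the same way, the
    damper force should grow, so apply maximum voltage; otherwise apply
    zero voltage (the damper cannot actively push, only dissipate)."""
    if f_measured * (f_desired - f_measured) > 0.0:
        return v_max
    return 0.0
```

In a simulation loop this clipping sits between the nominal controller (which outputs f_desired) and the Bouc–Wen damper model (which produces f_measured for the next step).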
Procedia PDF Downloads 285
5229 Interpretation of the Russia-Ukraine 2022 War via N-Gram Analysis
Authors: Elcin Timur Cakmak, Ayse Oguzlar
Abstract:
This study presents the results of analyzing, with bigram and trigram methods, tweets sent by Twitter users about the Russia-Ukraine war. On February 24, 2022, Russian President Vladimir Putin declared a military operation against Ukraine, and all eyes were turned to this war. Many people living in Russia and Ukraine reacted to this war and protested, and also expressed deep concern as they felt the safety of their families and their futures were at stake. Most people, especially those living in Russia and Ukraine, express their views on the war in different ways, and the most popular way to do so is through social media. Many people prefer to convey their feelings using Twitter, one of the most frequently used social media tools. Since the beginning of the war, thousands of tweets about it have been posted from many countries of the world. These tweets were extracted through the Twitter API using various scripts and analyzed with the Python programming language. The aim of the study is to find the word sequences in these tweets by the n-gram method, which is widely used in computational linguistics and natural language processing. The tweet language used in the study is English. The data set consists of data obtained from Twitter between February 24, 2022, and April 24, 2022. The tweets obtained using the #ukraine, #russia, #war, #putin, and #zelensky hashtags together were captured as raw data, and the remaining tweets were included in the analysis stage after being cleaned in the preprocessing stage. In the data analysis part, sentiment analysis is applied to characterize the messages people send about the war on Twitter; negative messages make up the majority of all the tweets, at a ratio of 63.6%. Furthermore, the most frequently used bigram and trigram word groups are found.
Regarding the results, the most frequently used word groups are “he, is”, “I, do”, and “I, am” for bigrams, and “I, do, not”, “I, am, not”, and “I, can, not” for trigrams. In the machine learning phase, the accuracy of classification is measured with the Classification and Regression Trees (CART) and Naïve Bayes (NB) algorithms, applied separately to bigrams and trigrams. For bigrams, the NB algorithm yields the highest accuracy and F-measure values, while the CART algorithm yields the highest precision and recall values. For trigrams, on the other hand, the highest accuracy, precision, and F-measure values are achieved by the CART algorithm, and the highest recall is obtained by NB.
Keywords: classification algorithms, machine learning, sentiment analysis, Twitter
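The bigram/trigram counting described above can be reproduced in a few lines of Python; this is a generic sketch over pre-cleaned text, not the study's full preprocessing pipeline:

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def top_ngrams(texts, n, k=3):
    """Most frequent n-grams across a collection of texts, after a
    simple lowercase/whitespace tokenization."""
    counts = Counter()
    for text in texts:
        counts.update(ngrams(text.lower().split(), n))
    return counts.most_common(k)
```

For example, `top_ngrams(tweets, 2)` returns the most common bigrams with their counts, matching the kind of "I, do" / "I, do, not" rankings reported above.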
Procedia PDF Downloads 73
5228 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments
Authors: Rohit Dey, Sailendra Karra
Abstract:
This paper aims at solving the problem of multi-target search in a Global Positioning System (GPS)-denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (vis-à-vis, GPS) that is shared by the entire swarm, which, in turn, limits their application in real-world scenarios. This is because robots in a swarm need to share information about their locations and the signals received from targets in order to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this issue by eliminating the dependency of the search algorithm on a predetermined global coordinate frame: the relative coordinate frames of individual robots are unified whenever the robots are within communication range, making the system more robust in real scenarios. Our algorithm assumes that all the robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows Lévy-walk random exploration until it comes within range of other robots. When two or more robots are within communication range, they share sensor information and their locations with respect to their own coordinate frames, based on which we unify their coordinate frames. They can then share information about the areas already explored, about the surroundings, and about target signals from their locations, and use it to decide their future movement according to the search algorithm.
During exploration, there can be several small groups of robots, each with its own coordinate system, but eventually all the robots are expected to come under one global coordinate frame in which they can communicate information on the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that running our modified Particle Swarm Optimization (PSO) without global information still achieves results comparable to basic PSO working with GPS. In the full paper, we plan a comparison study between different strategies for unifying the coordinate system, and their implementation on other bio-inspired algorithms, to work in GPS-denied environments.
Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems
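The frame-unification step can be illustrated in 2-D: from mutual range-and-bearing measurements, two robots can recover the rigid transform between their frames and then express shared observations in a common frame. A minimal sketch under the stated sensing assumptions; the function names and conventions are ours, not the paper's:

```python
import math

def unify_frames(r, bearing_ab, bearing_ba):
    """Relative pose of robot B's frame in robot A's frame, given that
    A sees B at (range r, bearing_ab) and B sees A at (range r,
    bearing_ba), each bearing expressed in that robot's own frame.
    The direction from B back to A is bearing_ab + pi in A's frame and
    bearing_ba in B's frame, which fixes the rotation between frames."""
    theta = (bearing_ab + math.pi) - bearing_ba          # rotation B -> A
    t = (r * math.cos(bearing_ab), r * math.sin(bearing_ab))  # B origin in A
    return theta, t

def to_frame_a(point_b, theta, t):
    """Express a point given in B's frame in A's frame."""
    x, y = point_b
    c, s = math.cos(theta), math.sin(theta)
    return (t[0] + c * x - s * y, t[1] + s * x + c * y)
```

Once `theta` and `t` are agreed on, explored-area maps and target-signal locations recorded in B's frame can be merged directly into A's frame, which is the unification the algorithm repeats across the swarm.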
Procedia PDF Downloads 137
5227 The Effect of Foot Progression Angle on Human Lower Extremity
Authors: Sungpil Ha, Ju Yong Kang, Sangbaek Park, Seung-Ju Lee, Soo-Won Chae
Abstract:
The growing number of obese patients in aging societies has led to an increase in the number of patients with knee medial osteoarthritis (OA). Artificial joint insertion is the most common treatment for knee medial OA. Surgery is effective for patients with serious arthritic symptoms, but it is costly and risky, and it is not a suitable way to prevent the disease at an early stage. Therefore, non-operative treatments such as toe-in gait have recently been proposed. Toe-in gait is a non-surgical intervention that restrains the progression of arthritis and relieves pain by reducing the knee adduction moment (KAM), facilitating a more lateral distribution of load on the knee medial cartilage. Numerous studies have measured KAM for various foot progression angles (FPA), with KAM data obtained by motion analysis. However, variations in stress at the knee cartilage cannot be directly observed or evaluated by experiments that measure KAM alone. Therefore, this study applied motion analysis at the major gait points (1st peak, mid-stance, 2nd peak) with regard to FPA, and employed the finite element (FE) method to evaluate the effects of FPA on the human lower extremity. Three types of gait analysis (toe-in, toe-out, and baseline gait) were performed with markers placed on the lower extremity. Ground reaction forces (GRF) were obtained from force plates. The forces associated with the major muscles were computed using the GRF and marker trajectory data. MRI data provided by the Visible Human Project were used to develop a human lower extremity FE model. FE analyses for the three gait simulations were performed based on the calculated muscle forces and GRF. Comparing the results of the FE analyses at the 1st peak across gait types, we observed that the maximum stress during toe-in gait was lower than for the other types, matching the trend in KAM measured through motion analysis in other papers.
This indicates that the progression of knee medial OA could be suppressed by adopting toe-in gait. This study integrated motion analysis with FE analysis. One advantage of this method is that re-modeling is not required even when the posture changes; therefore, other types of gait simulation or various motions of the lower extremity can easily be analyzed using this method.
Keywords: finite element analysis, gait analysis, human model, motion capture
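The quantity driving the comparison, the knee adduction moment, is in essence a frontal-plane cross product of the lever arm (knee center to center of pressure) with the ground reaction force. A simplified 2-D illustration; the coordinates, sign convention, and numbers are our assumptions, not the paper's musculoskeletal model:

```python
def knee_adduction_moment(grf, knee_center, cop):
    """Frontal-plane knee adduction moment surrogate: z-component of
    (lever arm from knee joint center to center of pressure) x (GRF).
    2-D frontal plane with x = medio-lateral, y = vertical; the sign
    convention depends on the limb side and is illustrative here."""
    rx = cop[0] - knee_center[0]
    ry = cop[1] - knee_center[1]
    return rx * grf[1] - ry * grf[0]
```

Moving the center of pressure laterally, closer under the knee (the mechanical effect attributed to toe-in gait), shortens the lever arm and shrinks the moment magnitude, which is the trend the FE stresses above reproduce.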
Procedia PDF Downloads 336
5226 Topology Optimization of Heat and Mass Transfer for Two Fluids under Steady State Laminar Regime: Application on Heat Exchangers
Authors: Rony Tawk, Boutros Ghannam, Maroun Nemer
Abstract:
The topology optimization technique presents a potential tool for the design and optimization of structures involved in mass and heat transfer. The method starts from an initial intermediate domain and progressively distributes the solid and the two fluids exchanging heat. The multi-objective function of the problem takes into account the minimization of total pressure loss and the maximization of heat transfer between the solid and fluid subdomains. Existing methods account for the presence of only one fluid, while the present work extends the optimization to the distribution of a solid and two different fluids. This requires separating the channels of the two fluids and ensuring a minimum solid thickness between them, which is done by adding a third objective function to the multi-objective optimization problem. This article uses a density approach in which each cell holds two local design parameters ranging from 0 to 1, where the combination of their extremes defines the presence of solid, cold fluid, or hot fluid in the cell. The finite volume method is used for the direct solver, coupled with a discrete adjoint approach for the sensitivity analysis and the method of moving asymptotes for the numerical optimization. Several examples are presented to show the ability of the method to find a trade-off between the minimization of power dissipation and the maximization of heat transfer while ensuring the separation and continuity of the channel of each fluid without crossing or mixing the fluids. The main conclusion is the possibility of finding an optimal bi-fluid domain using topology optimization, defining a fluid-to-fluid heat exchanger device.
Keywords: topology optimization, density approach, bi-fluid domain, laminar steady state regime, fluid-to-fluid heat exchanger
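The density approach can be illustrated by the per-cell material interpolation it implies: two design parameters per cell select among solid, cold fluid, and hot fluid, and the flow/thermal properties vary continuously in between so that gradients exist for the adjoint. The RAMP-style form and all coefficients below are our assumptions for illustration; the paper's exact interpolation is not given in the abstract:

```python
def material_properties(a, b, k_solid=10.0, k_fluid=1.0,
                        alpha_max=1e4, q=0.1):
    """Hypothetical two-design-variable density cell:
    a ~ solid (0) vs. fluid (1), b ~ cold (0) vs. hot (1) fluid.
    Returns (Brinkman inverse permeability, thermal conductivity,
    hot-fluid fraction). The Brinkman term penalizes flow in solid
    cells; a RAMP-like rational form keeps it differentiable in a."""
    alpha = alpha_max * (1.0 - a) / (1.0 + q * a)
    k = k_solid * (1.0 - a) + k_fluid * a
    return alpha, k, b
```

During optimization, intermediate values of `a` are progressively pushed toward 0 or 1; the third objective mentioned above additionally keeps cells with `b` near 0 and cells with `b` near 1 separated by solid (`a` near 0) cells.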
Procedia PDF Downloads 399
5225 Data and Model-based Metamodels for Prediction of Performance of Extended Hollo-Bolt Connections
Authors: M. Cabrera, W. Tizani, J. Ninic, F. Wang
Abstract:
Open-section beam to concrete-filled tubular column structures have been increasingly utilized in construction over the past few decades due to their enhanced structural performance as well as economic and architectural advantages. However, the use of this configuration is limited by the difficulty of connecting the structural members, as there is no access to the inner part of the tube to install standard bolts. Blind-bolted systems are a relatively new approach to overcome this limitation, as they only require access to one side of the tubular section to tighten the bolt. The performance of these connections in concrete-filled steel tubular sections remains uncharacterized due to the complex interactions between concrete, bolt, and steel section. In recent years, research in structural performance has moved to a more sophisticated and efficient approach consisting of machine learning algorithms that generate metamodels. This method reduces the need for developing complex and computationally expensive finite element models, optimizing the search for desirable design variables. Metamodels generated by a data-fusion approach use numerical and experimental results, combining multiple models to capture the dependency between the simulation design variables and connection performance, learning the relations between different design parameters and predicting a given output. Fully characterizing this connection will transform high-rise and multistorey construction through the introduction of design guidance for moment-resisting blind-bolted connections, which is currently unavailable. This paper presents a review of the steps taken to develop metamodels, generated by means of artificial neural network algorithms, which predict the connection stress and stiffness from the design parameters when using Extended Hollo-Bolt blind bolts.
It also considers the failure modes and mechanisms that contribute to the deformability, as well as the feasibility of achieving blind-bolted rigid connections when using the blind fastener.
Keywords: blind-bolted connections, concrete-filled tubular structures, finite element analysis, metamodeling
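The metamodeling workflow, fit a cheap surrogate to (design parameters → response) samples and then query it instead of the FE model, can be sketched with a simple radial-basis-function surrogate standing in for the paper's neural networks. All names, the kernel, and its width are illustrative assumptions:

```python
import numpy as np

class RBFMetamodel:
    """Gaussian radial-basis-function surrogate fitted to sampled
    (design parameters -> response) data; a lightweight stand-in for
    the neural-network metamodels described above, showing only the
    fit/predict shape of the workflow."""

    def __init__(self, eps=10.0):
        self.eps = eps  # kernel width (illustrative choice)

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        K = self._kernel(self.X, self.X)
        # Tiny ridge term for numerical stability of the solve.
        self.w = np.linalg.solve(K + 1e-10 * np.eye(len(K)),
                                 np.asarray(y, dtype=float))
        return self

    def predict(self, X):
        return self._kernel(np.asarray(X, dtype=float), self.X) @ self.w

    def _kernel(self, A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.eps * d2)
```

In the paper's setting, `X` would hold connection design parameters (bolt gauge, tube thickness, concrete strength, etc.) and `y` the FE- or test-derived stress or stiffness; the surrogate then makes design-space exploration cheap.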
Procedia PDF Downloads 158
5224 Experimental and Analytical Studies for the Effect of Thickness and Axial Load on Load-Bearing Capacity of Fire-Damaged Concrete Walls
Authors: Yeo Kyeong Lee, Ji Yeon Kang, Eun Mi Ryu, Hee Sun Kim, Yeong Soo Shin
Abstract:
The objective of this paper is to investigate the effects of wall thickness and of axial loading during a fire test on the load-bearing capacity of fire-damaged normal-strength concrete walls. These two factors govern the temperature distributions in concrete members and are mainly characterized through experiments. Toward this goal, three wall specimens of different thicknesses were heated for 2 h according to the ISO standard heating curve, and the temperature distributions through the thicknesses were measured using thermocouples. In addition, two wall specimens were heated for 2 h while simultaneously being subjected to a constant axial load at their top sections. The test results show that the temperature distribution during the fire test depends on both the wall thickness and the axial load. After the fire tests, the specimens were cured for one month and then load-tested. The heated specimens were compared with three unheated specimens to investigate the residual load-bearing capacities. The fire-damaged walls show only a minor difference in load-bearing capacity with respect to the axial loading, whereas a significant difference is evident with respect to the wall thickness. To validate the experimental results, finite element models were generated using the experimentally obtained material properties at elevated temperatures, and the analytical results show sound agreement with the experimental results. The analytical method, validated through the experimental results, was then applied to model fire-damaged walls 2,800 mm high, the typical story height of residential buildings in Korea, considering the buckling effect. The models for the structural analyses were generated from the deformed shapes obtained after the thermal analysis. The load-bearing capacity of the fire-damaged walls with pin supports at both ends does not significantly depend on the wall thickness, owing to the restraint provided by the pinned ends.
The difference in the load-bearing capacity of the fire-damaged walls with respect to the axial load during the fire is within approximately 5%.
Keywords: normal-strength concrete wall, wall thickness, axial-load ratio, slenderness ratio, fire test, residual strength, finite element analysis
Procedia PDF Downloads 215
5223 Markowitz and Implementation of a Multi-Objective Evolutionary Technique Applied to the Colombia Stock Exchange (2009-2015)
Authors: Feijoo E. Colomine Duran, Carlos E. Peñaloza Corredor
Abstract:
Modeling the selection of the components of a financial investment (portfolio) gives rise to a variety of problems that can be addressed with optimization techniques under evolutionary schemes. By its nature, the component-selection problem involves a dichotomous relationship between two opposing elements: the portfolio performance and the risk incurred by choosing it. Markowitz modeled this relationship as a mean (performance)–variance (risk) problem, i.e., performance must be maximized while risk is minimized. This research comprises the study and implementation of multi-objective evolutionary techniques to solve these problems, taking as experimental framework the equities market of the Colombia Stock Exchange between 2009 and 2015. Three multi-objective evolutionary algorithms were compared, namely the Nondominated Sorting Genetic Algorithm II (NSGA-II), the Strength Pareto Evolutionary Algorithm 2 (SPEA2), and Indicator-Based Selection in Multiobjective Search (IBEA), using two well-known performance measures, the hypervolume indicator and the R2 indicator; a nonparametric statistical analysis with the Wilcoxon rank-sum test was also carried out. The comparative analysis also includes an evaluation, through the Sharpe ratio, of the financial efficiency of the investment portfolios chosen by the various algorithms. It is shown that the portfolios provided by these algorithms are very well positioned among the different stock indices provided by the Colombia Stock Exchange.
Keywords: finance, optimization, portfolio, Markowitz, evolutionary algorithms
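The underlying Markowitz objectives can be stated in a few lines. As a toy stand-in for NSGA-II/SPEA2/IBEA, the sketch below samples random long-only portfolios and keeps the non-dominated (return, risk) pairs; it illustrates the bi-objective structure only and is not one of the algorithms benchmarked in the paper:

```python
import numpy as np

def portfolio_stats(weights, mean_returns, cov, rf=0.0):
    """Markowitz quantities for one portfolio: expected return,
    risk (standard deviation), and Sharpe ratio."""
    w = np.asarray(weights, dtype=float)
    ret = float(w @ mean_returns)
    risk = float(np.sqrt(w @ cov @ w))
    return ret, risk, (ret - rf) / risk

def random_pareto_front(mean_returns, cov, n=300, seed=0):
    """Sample random long-only portfolios (weights on the simplex) and
    keep the non-dominated ones under (maximize return, minimize risk);
    a crude approximation of the efficient frontier."""
    rng = np.random.default_rng(seed)
    w = rng.dirichlet(np.ones(len(mean_returns)), size=n)
    rets = w @ mean_returns
    risks = np.sqrt(np.einsum("ij,jk,ik->i", w, cov, w))
    keep = [
        i for i in range(n)
        if not any(
            rets[j] >= rets[i] and risks[j] <= risks[i]
            and (rets[j] > rets[i] or risks[j] < risks[i])
            for j in range(n)
        )
    ]
    return w[keep], rets[keep], risks[keep]
```

An evolutionary algorithm replaces the blind random sampling with selection, crossover, and mutation driven by the same dominance relation, which is why indicators such as hypervolume are natural quality measures for the resulting fronts.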
Procedia PDF Downloads 302
5222 Non–Geometric Sensitivities Using the Adjoint Method
Authors: Marcelo Hayashi, João Lima, Bruno Chieregatti, Ernani Volpe
Abstract:
The adjoint method has been used for many years as a successful tool to obtain sensitivity gradients in aerodynamic design and optimisation. This work presents an alternative approach to the continuous adjoint formulation that enables one to compute gradients of a given measure of merit with respect to control parameters other than those pertaining to geometry. The procedure is applied to the steady 2-D compressible Euler and incompressible Navier–Stokes flow equations. Finally, the results are compared with sensitivities obtained by finite differences and with theoretical values for validation.
Keywords: adjoint method, aerodynamics, sensitivity theory, non-geometric sensitivities
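The mechanics of the adjoint method are easiest to see on a linear model problem: for a merit function J(p) = cᵀu with state equation A(p)u = b, one extra solve with Aᵀ yields the gradient with respect to all parameters at once, geometric or not. A minimal discrete sketch, verified against finite differences (the flow-solver analogue replaces the linear solves with the Euler/Navier–Stokes solver and its adjoint):

```python
import numpy as np

def adjoint_gradient(A_of_p, dA_dp, b, c, p):
    """Gradient of J(p) = c^T u(p) subject to A(p) u = b via the
    adjoint: differentiating the constraint gives
    dJ/dp_k = -lam^T (dA/dp_k) u with A^T lam = c.
    The cost is one state solve plus one adjoint solve, independent of
    the number of parameters, which is the appeal of the method."""
    A = A_of_p(p)
    u = np.linalg.solve(A, b)        # state equation
    lam = np.linalg.solve(A.T, c)    # adjoint equation
    return np.array([-lam @ (dA @ u) for dA in dA_dp(p)])
```

The same structure carries over to non-geometric control parameters: only the partial derivative of the residual operator with respect to each parameter changes, not the adjoint solve itself.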
Procedia PDF Downloads 547
5221 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model
Authors: Tanu Khanuja, Harikrishnan N. Unni
Abstract:
The head is the most vulnerable part of the human body, and head impacts may cause severe, life-threatening injuries. As the in vivo brain response cannot be recorded during an injury, computational investigation of a head model can be very helpful for understanding the injury mechanism. The majority of physical damage to living tissues is caused by relative motion within the tissue due to tensile and shearing structural failures. The present finite element study focuses on investigating the intracranial pressure and stress/strain distributions resulting from impact loads at various sites of the human head. This is performed by developing a 3D model of a human head, with major segments including the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid), and skull, from patient-specific MRI (magnetic resonance imaging). Semi-automatic segmentation of the head is performed using the AMIRA software to extract the finer grooves of the brain. Maintaining accuracy requires a large number of mesh elements, which entails long computational times; therefore, mesh optimization has also been performed using tetrahedral elements. In addition, the model is validated against experimental literature. Hard tissue such as the skull is modeled as elastic, whereas soft tissue such as the brain is modeled with a viscoelastic Prony-series material model. This paper intends to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back, and temporal sites of the head. Yield stress (based on the von Mises stress criterion for tissues) and the intracranial pressure distribution due to impact at the different sites are compared, and the extent of damage to cerebral tissues is discussed in detail. The analysis shows that a back impact is more injurious to the head overall than impacts at the other sites.
The present work should help in understanding the injury mechanism of traumatic brain injury more effectively.
Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress
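The viscoelastic brain model referred to above is a Prony series, whose relaxation modulus is a long-term modulus plus a sum of decaying exponentials. A minimal sketch; the coefficients below are placeholders for illustration, not the paper's fitted brain-tissue values:

```python
import math

def prony_shear_modulus(t, g_inf, terms):
    """Prony-series relaxation modulus
    G(t) = G_inf + sum_i G_i * exp(-t / tau_i),
    the standard form of the viscoelastic material model used for soft
    tissue in FE head models. `terms` is a list of (G_i, tau_i) pairs."""
    return g_inf + sum(g_i * math.exp(-t / tau_i) for g_i, tau_i in terms)
```

G(0) is the instantaneous modulus (stiff response to a fast impact) and G(t) relaxes toward G_inf, which is why impact duration matters as much as impact magnitude in such models.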
Procedia PDF Downloads 163
5220 The Feasibility Evaluation of the Compressed Air Energy Storage System in the Porous Media Reservoir
Authors: Ming-Hong Chen
Abstract:
In this study, the mechanical and financial feasibility of a compressed air energy storage (CAES) system in a porous media reservoir in Taiwan is evaluated. By 2035, Taiwan aims to install 16.7 GW of wind power and 40 GW of photovoltaic (PV) capacity. However, renewable energy sources often generate more electricity than needed, particularly during winter. Consequently, Taiwan requires long-term, large-scale energy storage systems to ensure the security and stability of its power grid. Currently, the primary large-scale energy storage options are pumped hydro storage (PHS) and compressed air energy storage (CAES). Taiwan has not ventured into CAES-related technologies due to geological and cost constraints. However, with the imperative of achieving net-zero carbon emissions by 2050, there is a substantial need for the development of a considerable amount of renewable energy. PHS has matured, with an overall installed capacity of 4.68 GW in Taiwan. CAES, offering a similar scale and generation duration to PHS, is now under consideration. Taiwan's geology is a porous medium which, unlike a salt cavern, introduces flow-field resistance that affects gas injection and extraction. This study employs a program-based analysis model to establish the system performance analysis capabilities of CAES. A finite volume model is then used to assess the impact of the porous media, and the findings are fed back into the system performance analysis as corrections. Subsequently, the financial implications are calculated and compared with the existing literature. For Taiwan, the strategic development of CAES technology is crucial, not only for meeting energy needs but also for decentralizing energy allocation, a feature of great significance in regions lacking alternative natural resources.
Keywords: compressed-air energy storage, efficiency, porous media, financial feasibility
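A first-order feasibility number for CAES is the isothermal exergy of the stored air, an upper bound on recoverable work before accounting for the porous-media flow resistance and heat losses the study models. A sketch under ideal-gas, isothermal assumptions (formula and names are standard thermodynamics, not the paper's model):

```python
import math

def compressed_air_exergy(volume_m3, p_store, p_ambient=101325.0):
    """Ideal-gas, isothermal exergy of air stored at pressure p_store
    (Pa) in a reservoir of the given volume (m^3), relative to ambient
    pressure p_ambient:
        E = p * V * [ln(p / p0) + p0 / p - 1]   (joules).
    An upper bound on recoverable work; porous-media CAES recovers less."""
    r = p_store / p_ambient
    return p_store * volume_m3 * (math.log(r) + 1.0 / r - 1.0)
```

For example, one cubic metre of air at 10 bar holds about 1.4 MJ (roughly 0.4 kWh) of exergy, which makes clear why reservoir-scale volumes are needed for grid-scale storage.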
Procedia PDF Downloads 66
5219 Reduction of the Risk of Secondary Cancer Induction Using VMAT for Head and Neck Cancer
Authors: Jalil ur Rehman, Ramesh C, Tailor, Isa Khan, Jahanzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott
Abstract:
The purpose of this analysis is to estimate secondary cancer risks after VMAT compared to other modalities of head and neck radiotherapy (IMRT, 3DCRT). Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck phantom were acquired with a CT scanner and exported via DICOM to the treatment planning system (TPS). Treatment planning was done using four arcs (182-178 and 180-184, clockwise and anticlockwise) for volumetric modulated arc therapy (VMAT); nine fields (200, 240, 280, 320, 0, 40, 80, 120, and 160 degrees), as commonly used at the MD Anderson Cancer Center, Houston, for intensity modulated radiation therapy (IMRT); and four fields for three-dimensional conformal radiation therapy (3DCRT). A TrueBeam linear accelerator with 6 MV photon energy was used for dose delivery, and dose calculation was done with the collapsed cone (CC) convolution algorithm with a prescription dose of 6.6 Gy. Planning target volume (PTV) coverage, mean and maximal doses, DVHs, and the volumes of OARs receiving more than 2 Gy and 3.8 Gy were calculated and compared. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Quality assurance of VMAT and IMRT was performed using the ArcCHECK method with a gamma index criterion of 3%/3 mm dose difference / distance to agreement (DD/DTA). PTV coverage was found to be 90.80%, 95.80%, and 95.82% for 3DCRT, IMRT, and VMAT, respectively. VMAT delivered the lowest maximal doses to the esophagus (2.3 Gy), brain (4.0 Gy), and thyroid (2.3 Gy) of all the studied techniques. In comparison, the maximal doses for 3DCRT were higher than for VMAT for all studied OARs, and IMRT delivered maximal doses 26%, 5%, and 26% higher for the esophagus, normal brain, and thyroid, respectively, compared to VMAT. It was noted that the esophagus volume receiving more than 2 Gy was 3.6% for VMAT, 23.6% for IMRT, and up to 100% for 3DCRT. Good agreement was observed between the measured doses and those calculated with the TPS.
The average relative standard errors (RSE) of three deliveries within eight TLD capsule locations were 0.9%, 0.8% and 0.6% for 3DCRT, IMRT and VMAT, respectively. The gamma analysis for all plans met the ±5%/3 mm criteria (over 90% passed), and QA results were greater than 98%. The calculations for maximal doses and volumes of OARs suggest that the estimated risk of secondary cancer induction after VMAT is considerably lower than after IMRT and 3DCRT.
Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD
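The gamma-index comparison of measured and calculated planar doses used above can be illustrated with a minimal sketch. This is a simplified, global, 1D version of gamma analysis (real QA software such as ArcCHECK works on 2D/3D dose grids with interpolation); the dose values, grid spacing, and tolerances below are illustrative assumptions, not data from the study.

```python
import math

def gamma_pass_rate(measured, calculated, spacing_mm=1.0,
                    dose_tol=0.03, dta_mm=3.0):
    """Simplified global 1D gamma analysis (3%/3 mm by default):
    each measured point passes if the minimum gamma over all
    calculated points is <= 1; the dose tolerance is taken relative
    to the maximum measured dose."""
    d_max = max(measured)
    passed = 0
    for i, dm in enumerate(measured):
        gammas = []
        for j, dc in enumerate(calculated):
            dose_term = (dc - dm) / (dose_tol * d_max)       # dose difference term
            dist_term = (j - i) * spacing_mm / dta_mm        # distance-to-agreement term
            gammas.append(math.sqrt(dose_term ** 2 + dist_term ** 2))
        if min(gammas) <= 1.0:
            passed += 1
    return 100.0 * passed / len(measured)
```

Two identical profiles pass at 100%; a grossly disagreeing point fails its location only.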
Procedia PDF Downloads 507
5218 Designing an Exhaust Gas Energy Recovery Module Following Measurements Performed under Real Operating Conditions
Authors: Jerzy Merkisz, Pawel Fuc, Piotr Lijewski, Andrzej Ziolkowski, Pawel Czarkowski
Abstract:
The paper presents preliminary results of the development of an automotive exhaust gas energy recovery module. The aim of the performed analyses was to select the geometry of the heat exchanger that would ensure the highest possible heat transfer at minimum heat flow losses. The starting point for the analyses was a straight portion of a pipe from which the exhaust system of the tested vehicle was made. The designed heat exchanger had a cylindrical cross-section, was 300 mm long and was fitted with a diffuser and a confusor. The modeling work was performed for this geometry using the finite volume method in the Ansys CFX v12.1 and v14 software. The method consists of dividing the system into small control volumes, for which the exhaust gas velocity and pressure are calculated from the Navier-Stokes equations. Heat exchange in the system was modeled based on an enthalpy balance; temperature growth due to viscous dissipation was not taken into account. Heat transfer on the fluid/solid boundary in the wall layer with turbulent flow was modeled using an arbitrarily adopted dimensionless temperature. The boundary conditions adopted in the analyses included a convective heat transfer condition on the outer surface of the heat exchanger and the mass flow and temperature of the exhaust gas at the inlet. The exhaust gas mass flow and temperature were taken from measurements performed in actual traffic using portable PEMS analyzers. The research object was a passenger vehicle fitted with a 1.9 dm3, 85 kW diesel engine. The tests were performed in city traffic conditions.
Keywords: waste heat recovery, heat exchanger, CFD simulation, PEMS
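The control-volume enthalpy balance described above can be sketched in one dimension: the pipe is split into cells, and the heat each cell loses to the wall lowers the bulk gas temperature entering the next cell. This is a toy marching scheme, not the 3D Ansys CFX model; all numerical values (gas properties, wall temperature, heat transfer coefficient) in the usage are illustrative assumptions.

```python
def exhaust_gas_temperature_profile(t_in, t_wall, m_dot, cp, h,
                                    perimeter, length, n_cells):
    """March a steady-state enthalpy balance through n_cells control
    volumes of a pipe: each cell loses h*A*(T - T_wall) watts to the
    wall, which lowers the bulk enthalpy flow m_dot*cp*T downstream."""
    dx = length / n_cells
    area_cell = perimeter * dx              # wall area of one control volume
    temps = [t_in]
    t = t_in
    for _ in range(n_cells):
        q_cell = h * area_cell * (t - t_wall)   # heat lost in this cell [W]
        t = t - q_cell / (m_dot * cp)           # enthalpy balance update
        temps.append(t)
    return temps
```

The profile decays monotonically from the inlet temperature toward the wall temperature, as expected physically.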
Procedia PDF Downloads 574
5217 Optimizing 3D Shape Parameters of Sports Bra Pads in Motion by Finite Element Dynamic Modelling with Inverse Problem Solution
Authors: Jiazhen Chen, Yue Sun, Joanne Yip, Kit-Lun Yick
Abstract:
The design of sports bras poses a considerable challenge due to the difficulty of accurately predicting the wearing result after computer-aided design (CAD): repeated physical or virtual try-ons are needed to reach a comfortable pressure range during motion. Specifically, in the context of running, the exact support area and force exerted on the breasts remain unclear. Consequently, an effective method for designing the shape of sports bra pads is particularly difficult to obtain. This predicament hinders the successful creation and production of sports bras that cater to women's health needs. The purpose of this study is to propose an effective method to obtain the 3D shape of sports bra pads and to understand the relationship between the supporting force and the 3D shape parameters of the pads. Firstly, the static 3D shape of the sports bra pad and human motion data (running) are obtained using a 3D scanner and advanced 4D scanning technology. The 3D shape of the sports bra pad is parameterised and simplified by free-form deformation (FFD). Then the sub-models of the sports bra and the human body are constructed by segmenting and meshing them with MSC Apex software. The material coefficients of the sports bra are obtained by material testing. The Marc software is then utilised to establish a dynamic contact model between the human breast and the sports bra pad. To realise the reverse design of the sports bra pad, this contact model serves as a forward model for solving the inverse problem. Based on the forward contact model, the inverse problem for the 3D shape parameters of the sports bra pad is solved with the target bra-wearing pressure range as the boundary condition. Finally, the credibility and accuracy of the simulation are validated by comparing the pressure distributions measured experimentally with those predicted by the FE model.
On the one hand, this research allows for a more accurate understanding of the support area and force distribution on the breasts during running. On the other hand, this study can contribute to the customization of sports bra pads for different individuals, helping to obtain pads with comfortable dynamic pressure.
Keywords: sports bra design, breast motion, running, inverse problem, finite element dynamic model
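The inverse step above — adjusting a pad shape parameter until the forward contact model predicts a pressure inside a target comfort range — can be sketched with a one-parameter root search. This is a drastic simplification (the paper's forward model is a dynamic FE contact simulation with many shape parameters); the monotone forward model and all numbers here are illustrative assumptions.

```python
def solve_pad_parameter(forward_model, target_low, target_high,
                        p_min, p_max, tol=1e-6):
    """Bisection on a single pad shape parameter: forward_model maps
    the parameter to a predicted peak wearing pressure, assumed
    monotonically increasing; search for a parameter whose predicted
    pressure lands inside the target comfort range."""
    lo, hi = p_min, p_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        pressure = forward_model(mid)
        if pressure < target_low:
            lo = mid          # pad too soft/thin: increase parameter
        elif pressure > target_high:
            hi = mid          # pressure too high: decrease parameter
        else:
            return mid        # pressure within the comfort range
    return 0.5 * (lo + hi)
```

With a toy linear forward model, the returned parameter's predicted pressure falls inside the target band.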
Procedia PDF Downloads 59
5216 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing
Authors: S. Bouhouche, R. Drai, J. Bast
Abstract:
This paper is concerned with a method for evaluating the uncertainty of steel sample content measured by X-ray fluorescence (XRF). The considered method of analysis is a comparative technique based on XRF; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov chain Monte Carlo (MCMC) for uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to identify a model of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states reduce to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), from the initial state and the target distribution using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for the steel Mn (wt%), Cr (wt%), Ni (wt%) and Mo (wt%) content, respectively. A comparative study between the conventional procedure and the proposed method is given. Such approaches can be applied to construct an accurate computing procedure for measurement uncertainty.
Keywords: Kalman filter, Markov chain Monte Carlo, X-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement
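The MCMC stage described above can be sketched with a random-walk Metropolis sampler. As a stand-in for the XRF content model, the target is simply the posterior of the mean of Gaussian repeat measurements with known noise and a flat prior; the data, noise level, and tuning constants are illustrative assumptions, not the paper's model.

```python
import math
import random

def metropolis_content(data, sigma, n_steps=10000, step=0.5, seed=1):
    """Random-walk Metropolis sampler for the mean of Gaussian repeat
    measurements with known sigma and a flat prior; the mean and
    standard deviation of the chain give the content estimate and its
    uncertainty (the PDF statistics mentioned above)."""
    rng = random.Random(seed)

    def log_like(mu):
        return -sum((x - mu) ** 2 for x in data) / (2.0 * sigma ** 2)

    mu = data[0]
    chain = []
    for _ in range(n_steps):
        prop = mu + rng.gauss(0.0, step)          # symmetric proposal
        if math.log(rng.random()) < log_like(prop) - log_like(mu):
            mu = prop                             # accept the move
        chain.append(mu)
    burn = chain[n_steps // 4:]                   # discard burn-in
    mean = sum(burn) / len(burn)
    var = sum((m - mean) ** 2 for m in burn) / len(burn)
    return mean, math.sqrt(var)
```

For five repeat measurements around 1.0 wt% with sigma 0.2, the chain recovers the sample mean and a posterior spread near sigma/sqrt(n).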
Procedia PDF Downloads 283
5215 On the Study of the Electromagnetic Scattering by Large Obstacle Based on the Method of Auxiliary Sources
Authors: Hidouri Sami, Aguili Taoufik
Abstract:
We consider fast and accurate solutions of scattering problems by large perfectly electric conducting (PEC) objects, formulated as an optimization of the Method of Auxiliary Sources (MAS). We present various techniques used to reduce the total computational cost of the scattering problem. The first technique replaces the object by an array of a finite number of small PEC objects of the same shape. The second reduces the problem by considering only half of the object. These two solutions are compared to results from the reference bibliography.
Keywords: method of auxiliary sources, scattering, large object, RCS, computational resources
Procedia PDF Downloads 241
5214 Artificial Neural Network in Ultra-High Precision Grinding of Borosilicate-Crown Glass
Authors: Goodness Onwuka, Khaled Abou-El-Hossein
Abstract:
Borosilicate-crown (BK7) glass has found broad application in the optics and automotive industries, and the growing demand for nanometric surface finishes is becoming a necessity in such applications. Thus, it has become paramount to optimize the parameters influencing the surface roughness of this precision lens material. The research was carried out on a 4-axis Nanoform 250 precision lathe with an ultra-high precision grinding spindle. The experiment varied the machining parameters of feed rate, wheel speed and depth of cut at three levels in different combinations using a Box-Behnken design of experiment, and the resulting surface roughness values were measured using a Taylor Hobson Dimension XL optical profiler. An acoustic emission monitoring technique was applied at a high sampling rate to monitor the machining process, while further signal processing and feature extraction methods were implemented to generate the input to a neural network algorithm. This paper highlights the training and development of a back-propagation neural network prediction algorithm through careful selection of parameters, and the results show a better classification accuracy when compared to a previously developed response surface model with very similar machining parameters. Hence, artificial neural network algorithms provide better surface roughness prediction accuracy in the ultra-high precision grinding of BK7 glass.
Keywords: acoustic emission technique, artificial neural network, surface roughness, ultra-high precision grinding
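The back-propagation step referred to above can be sketched with a minimal one-hidden-layer network (tanh hidden units, linear output) trained by stochastic gradient descent. This is a generic toy, not the paper's trained model: the (feature, roughness) pairs, network size, and learning rate below are illustrative assumptions.

```python
import math
import random

def train_mlp(samples, n_hidden=4, lr=0.1, epochs=2000, seed=0):
    """Minimal one-hidden-layer back-propagation network fitted to
    (features, roughness) pairs; returns a prediction function."""
    rng = random.Random(seed)
    n_in = len(samples[0][0])
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
             for ws, b in zip(w1, b1)]
        return h, sum(w * hi for w, hi in zip(w2, h)) + b2

    for _ in range(epochs):
        for x, y in samples:
            h, out = forward(x)
            err = out - y                     # output error
            for j in range(n_hidden):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_h
                for i in range(n_in):
                    w1[j][i] -= lr * grad_h * x[i]
            b2 -= lr * err
    return lambda x: forward(x)[1]
```

On a small synthetic mapping from two machining features to a roughness value, the trained network reproduces the training targets closely.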
Procedia PDF Downloads 305
5213 Combination of Geological, Geophysical and Reservoir Engineering Analyses in Field Development: A Case Study
Authors: Atif Zafar, Fan Haijun
Abstract:
A sequence of different reservoir engineering methods and tools for reservoir characterization and field development is presented in this paper, using real data from the Jin Gas Field of the L-Basin of Pakistan. The basic idea behind this work is to highlight the importance of well test analysis in a broader sense (i.e., reservoir characterization and field development) rather than merely determining permeability and skin parameters. Well test analysis is commonly relied upon to some extent for reservoir characterization, but for field development planning it has become a forgotten tool, specifically for locating new development wells. This paper describes the successful implementation of well test analysis in the Jin Gas Field, where the main uncertainties were identified during the initial stage of field development, when the location of a new development well had been marked only on the basis of geological and geophysical (G&G) data. The seismic interpretation could not detect a boundary (fault, sub-seismic fault, or heterogeneity) near the main and only producing well of the Jin Gas Field, whereas the results of the well test model played a crucial role in proposing the location of the second well of the newly discovered field. The results from different well test analysis methods for the Jin Gas Field are also integrated with, and supported by, other reservoir engineering tools, i.e., the material balance method and the volumetric method. In this way, a comprehensive workflow and algorithm is obtained for integrating well test analyses with geological and geophysical analyses for reservoir characterization and field development. On this basis, it was shown that the proposed location of the new development well was not justified and should be elsewhere, excluding the southern direction.
Keywords: field development plan, reservoir characterization, reservoir engineering, well test analysis
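The material balance method mentioned above can be sketched for a volumetric dry-gas reservoir, where p/Z declines linearly with cumulative production Gp and the x-intercept of the fitted line is the original gas in place. This generic p/Z technique and the synthetic numbers in the usage are illustrative assumptions, not Jin Gas Field data.

```python
def ogip_from_pz(pressures_over_z, cum_production):
    """Straight-line p/Z material balance for a volumetric dry-gas
    reservoir: fit p/Z = a + b*Gp by least squares and return the
    Gp at which p/Z extrapolates to zero (the original gas in place)."""
    n = len(cum_production)
    mx = sum(cum_production) / n
    my = sum(pressures_over_z) / n
    slope = (sum((x - mx) * (y - my)
                 for x, y in zip(cum_production, pressures_over_z))
             / sum((x - mx) ** 2 for x in cum_production))
    intercept = my - slope * mx
    return -intercept / slope       # x-intercept of the p/Z line
```

For an exact synthetic decline p/Z = 5000 - 0.5*Gp, the intercept recovers an OGIP of 10000 (in whatever production units Gp is given).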
Procedia PDF Downloads 364
5212 Stress Concentration and Strength Prediction of Carbon/Epoxy Composites
Authors: Emre Ozaslan, Bulent Acar, Mehmet Ali Guler
Abstract:
Unidirectional composites are very popular structural materials used in the aerospace, marine, energy and automotive industries thanks to their superior material properties. However, the mechanical behavior of composite materials is more complicated than that of isotropic materials because of their anisotropic nature. A stress concentration in the structure, such as a hole, complicates the problem further. Therefore, an enormous number of tests is required to understand the mechanical behavior and strength of composites containing stress concentrations. Accurate finite element analyses and analytical models make it possible to understand the mechanical behavior and predict the strength of composites without this enormous number of tests, which cost serious time and money. In this study, unidirectional carbon/epoxy composite specimens with a central circular hole were investigated in terms of stress concentration factor and strength prediction. Composite specimens with different ratios of specimen width (W) to hole diameter (D) were tested to investigate the effect of hole size on stress concentration and strength. Also, specimens with the same width-to-hole-diameter ratio but varied sizes were tested to investigate the size effect. Finite element analysis was performed to determine the stress concentration factor for all specimen configurations. For the quasi-isotropic laminate, it was found that the stress concentration factor increased by approximately 15% as the W/D ratio decreased from 6 to 3. The point stress criterion (PSC), the inherent flaw method and progressive failure analysis were compared in terms of predicting the strength of the specimens. All methods could predict the strength of the specimens with a maximum error of 8%. PSC performed better than the other methods for high W/D ratios, whereas the inherent flaw method was successful for low W/D ratios. It was also seen that increasing the W/D ratio by a factor of 4 raises the failure strength of the composite specimen by 62.4%.
For constant W/D ratio specimens, all the strength prediction methods were more successful for smaller specimens than for larger ones. Increasing the specimen width and hole diameter together by a factor of 2 reduces the specimen failure strength by 13.2%.
Keywords: failure, strength, stress concentration, unidirectional composites
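The point stress criterion compared above can be sketched in its classical Whitney-Nuismer form with the isotropic stress distribution ahead of a circular hole in an infinite plate: failure occurs when the stress at a characteristic distance d0 from the hole edge reaches the unnotched strength. This is the isotropic, infinite-plate approximation (no finite-width correction, not the laminate-specific distribution used with FE results); the strength and dimension values in the usage are illustrative assumptions.

```python
def psc_open_hole_strength(sigma_un, radius, d0):
    """Whitney-Nuismer point stress criterion, isotropic infinite-plate
    approximation: predicted notched strength of an open-hole specimen
    with hole radius `radius`, characteristic distance `d0`, and
    unnotched strength `sigma_un`."""
    xi = radius / (radius + d0)
    # sigma_y(x)/sigma = (2 + xi^2 + 3 xi^4)/2 evaluated at x = R + d0
    return sigma_un * 2.0 / (2.0 + xi ** 2 + 3.0 * xi ** 4)
```

The sketch reproduces the hole-size effect reported above: a vanishing hole recovers the unnotched strength, and a larger hole (same d0) gives a lower predicted strength.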
Procedia PDF Downloads 156
5211 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization
Authors: Aitor Bilbao, Dragos Axinte, John Billingham
Abstract:
The inverse problem in energy beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam with particular characteristics (e.g., energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the beam vary with the dwell time, but any acceleration or deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, abrasive waterjet machining (AWJM) and pulsed laser ablation (PLA), are studied. Even though they are considered independent technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process in which the etched material depends on the feed speed of the jet at each instant during the process. PLA processes, on the other hand, are usually described as discrete systems, and the total removed material is calculated by summation of the different pulses shot during the process. The overlap of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, consecutive shots are close enough that the behaviour approaches that of a continuous process. Using this approximation, a generic continuous model can be described for both processes.
The inverse problem is usually solved for this kind of process by simply controlling dwell time in proportion to the required depth of milling at each single pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm; moreover, the calculation of the Jacobian matrix consumes less computation time than finite difference approaches. The influence of the machine dynamics on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation
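The linear dwell-time model mentioned above can be sketched in 1D: the depth at each point is the superposition of Gaussian beam footprints weighted by the dwell time at each path point, and the linear system is inverted iteratively. This is the simple linear approach the paper improves upon (not the discrete adjoint method); the Gaussian footprint, projected Landweber solver, and all numbers in the usage are illustrative assumptions.

```python
import math

def dwell_times_linear(target_depth, positions, etch_rate, beam_sigma,
                       n_iter=2000):
    """Linear-model inverse for beam dwell times in 1D: depth_i =
    sum_j A[i][j] * t_j with a Gaussian footprint matrix A, solved by
    a projected Landweber iteration (dwell times kept nonnegative)."""
    n = len(positions)
    a = [[etch_rate * math.exp(-((positions[i] - positions[j]) ** 2)
                               / (2.0 * beam_sigma ** 2))
          for j in range(n)] for i in range(n)]
    t = [0.0] * n
    step = 0.5 / max(sum(row) for row in a) ** 2   # conservative step size
    for _ in range(n_iter):
        residual = [sum(a[i][j] * t[j] for j in range(n)) - target_depth[i]
                    for i in range(n)]
        for j in range(n):
            # gradient step on 0.5*||A t - d||^2, projected onto t >= 0
            t[j] = max(0.0, t[j] - step * sum(a[i][j] * residual[i]
                                              for i in range(n)))
    return t
```

Applying the forward footprint model to the recovered dwell times reproduces a synthetic target depth profile to high accuracy.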
Procedia PDF Downloads 275
5210 Explicit Chain Homotopic Function to Compute Hochschild Homology of the Polynomial Algebra
Authors: Zuhier Altawallbeh
Abstract:
In this paper, an explicit homotopic function is constructed to compute the Hochschild homology of a finite-dimensional free k-module V. Because the polynomial algebra is fundamental in the computation of the Hochschild homology HH and the cyclic homology CH of commutative algebras, we concentrate on computing HH of the polynomial algebra by providing a certain homotopic function.
Keywords: Hochschild homology, homotopic function, free and projective modules, free resolution, exterior algebra, symmetric algebra
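For orientation, the Hochschild chain complex of a k-algebra A has chain modules C_n(A) = A^{⊗(n+1)}, and any chain homotopy used to compute HH must be compatible with the standard Hochschild boundary; for the polynomial algebra, the Hochschild-Kostant-Rosenberg theorem identifies HH_n with Kähler differentials (an exterior-algebra description consistent with the keywords above). These are standard facts added for context, not the paper's construction:

```latex
b(a_0 \otimes \cdots \otimes a_n)
  = \sum_{i=0}^{n-1} (-1)^i \, a_0 \otimes \cdots \otimes a_i a_{i+1} \otimes \cdots \otimes a_n
  + (-1)^n \, a_n a_0 \otimes a_1 \otimes \cdots \otimes a_{n-1},
\qquad
HH_n\bigl(k[x_1,\dots,x_d]\bigr) \;\cong\; \Omega^n_{k[x_1,\dots,x_d]/k}.
```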
Procedia PDF Downloads 405
5209 Finite Element Modeling of Global Ti-6Al-4V Mechanical Behavior in Relationship with Microstructural Parameters
Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vedal, Farhad Rezai-Aria, Christine Boher
Abstract:
The global mechanical behavior of materials is strongly linked to their microstructure, especially their crystallographic texture and grain morphology. These microstructural aspects determine the character of the mechanical fields (heterogeneous or homogeneous) and thus give the global behavior a degree of anisotropy that depends on the initial microstructure. For these reasons, the prediction of global material behavior in relation to the microstructure must be performed with a multi-scale approach, and multi-scale modeling in the context of crystal plasticity is widely used. In this contribution, a phenomenological elasto-viscoplastic model developed in the crystal plasticity framework, together with the finite element method, is used to investigate the effects of crystallographic texture and grain size on the global behavior of a polycrystalline equiaxed Ti-6Al-4V alloy. The constitutive equations of this model are written at the local scale for each slip system within each grain, while the strain and stress fields are investigated at the global scale via a finite element scale transition. The beta phase of the modeled Ti-6Al-4V alloy is neglected; its fraction is less than 10%. Three families of slip systems of the alpha phase are considered: the basal and prismatic families with a Burgers vector and the pyramidal family with a
Keywords: microstructural parameters, multi-scale modeling, crystal plasticity, Ti-6Al-4V alloy
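The per-slip-system constitutive equations mentioned above are driven by the resolved shear stress on each system, which under uniaxial loading follows Schmid's law. The sketch below computes it from a slip-plane normal, a slip direction, and the load axis; the vectors and stress value in the usage are illustrative assumptions, not the paper's texture data.

```python
import math

def resolved_shear_stress(sigma_axial, slip_normal, slip_direction, load_axis):
    """Schmid's law for one slip system: tau = sigma * cos(phi) * cos(lambda),
    where phi and lambda are the angles between the load axis and the
    slip-plane normal and slip direction, respectively."""
    def unit(v):
        norm = math.sqrt(sum(c * c for c in v))
        return [c / norm for c in v]
    n, d, a = unit(slip_normal), unit(slip_direction), unit(load_axis)
    cos_phi = sum(ni * ai for ni, ai in zip(n, a))   # load axis vs plane normal
    cos_lam = sum(di * ai for di, ai in zip(d, a))   # load axis vs slip direction
    return sigma_axial * cos_phi * cos_lam
```

A system oriented at 45 degrees to the load axis attains the maximum Schmid factor of 0.5, while a slip direction perpendicular to the load axis carries no resolved shear stress.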
Procedia PDF Downloads 126
5208 Insight into Enhancement of CO2 Capture by Clay Minerals
Authors: Mardin Abdalqadir, Paul Adzakro, Tannaz Pak, Sina Rezaei Gomari
Abstract:
Climate change and global warming have recently become significant concerns due to the massive emissions of greenhouse gases into the atmosphere, predominantly CO2. It is therefore necessary to find sustainable and inexpensive methods to capture greenhouse gases and protect the environment for living species. The application of naturally available and cheap carbon adsorbents such as clay minerals has attracted great interest. However, these minerals are prone to low storage capacity despite their high affinity for carbon adsorption. This paper aims to explore ways to improve the pore volume and surface area of two selected clay minerals, montmorillonite and kaolinite, by acid treatment in order to overcome their low storage capacity. Montmorillonite and kaolinite samples were treated with different sulfuric acid concentrations (0.5, 1.2 and 2.5 M) at 40 °C for 8 hours. The grain size distribution and morphology of the clay minerals before and after acid treatment were examined with a scanning electron microscope to evaluate the surface area improvement. The ImageJ software was used to find the porosity and pore volume of treated and untreated clay samples, and the structure of the clay minerals was analyzed with an X-ray diffraction machine. The results showed that the pore volume and surface area increased substantially through acid treatment, which sped up the rate of carbon dioxide adsorption. The XRD pattern of kaolinite did not change after sulfuric acid treatment, which indicates that acid treatment does not affect the structure of kaolinite. It was also found that kaolinite had a higher pore volume and porosity than montmorillonite both before and after acid treatment. For example, the pore volume of untreated kaolinite was 30.498 µm³ with a porosity of 23.49%; raising the acid concentration from 0.5 M to 2.5 M over an 8-hour reaction increased the pore volume from 30.498 µm³ to 34.73 µm³.
The pore volume of raw montmorillonite was 15.610 µm³ with a porosity of 12.7%. When the acid concentration was raised from 0.5 M to 2.5 M for the same reaction time, the pore volume increased from 15.610 µm³ to 20.538 µm³. However, montmorillonite had a higher specific surface area than kaolinite. This study concludes that clay minerals are inexpensive and available material sources for modeling realistic conditions and applying carbon capture to mitigate global warming, one of the most critical and urgent problems in the world.
Keywords: acid treatment, kaolinite, montmorillonite, pore volume, porosity, surface area
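The image-based porosity measurement described above (thresholded SEM images analysed in ImageJ) reduces to counting pore pixels as a fraction of the total. The sketch below assumes the image has already been segmented into a binary array (1 = pore, 0 = solid); the tiny example arrays are illustrative, not the study's micrographs.

```python
def porosity_from_binary(image):
    """Porosity (%) as the pore-pixel fraction of a segmented binary
    image, the area-fraction quantity ImageJ reports after
    thresholding: 1 marks pore space, 0 marks solid."""
    pore = sum(px for row in image for px in row)
    total = sum(len(row) for row in image)
    return 100.0 * pore / total
```

One pore pixel out of four gives 25% porosity; an all-pore image gives 100%.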
Procedia PDF Downloads 169
5207 Optimization by Means of Genetic Algorithm of the Equivalent Electrical Circuit Model of Different Order for Li-ion Battery Pack
Authors: V. Pizarro-Carmona, S. Castano-Solis, M. Cortés-Carmona, J. Fraile-Ardanuy, D. Jimenez-Bermejo
Abstract:
The purpose of this article is to optimize equivalent electrical circuit models (EECMs) of different orders to obtain greater precision in the modeling of Li-ion battery packs. The optimization considers circuits based on 1RC, 2RC and 3RC networks, with a dependent voltage source and a series resistor. The parameters are obtained experimentally using tests in the time domain and in the frequency domain. Due to the highly nonlinear behavior of the battery pack, a genetic algorithm (GA) was used to solve for and optimize the parameters of each considered EECM (1RC, 2RC and 3RC). The objective of the estimation is to minimize the mean square error between the impedance measured on the real battery pack and that generated by simulating the different proposed circuit models. The results have been verified by comparing the Nyquist plots of the estimated complex impedance of the pack. As a result of the optimization, the 2RC and 3RC circuit alternatives are considered viable representations of the battery behavior. These battery pack models are experimentally validated using a hardware-in-the-loop (HIL) simulation platform that reproduces the well-known New York City Cycle (NYCC) and Federal Test Procedure (FTP) driving cycles for electric vehicles. The results show that GA optimization yields EECMs with 2RC or 3RC networks that represent the dynamic behavior of a battery pack in vehicular applications with high precision.
Keywords: Li-ion battery pack modeling, EECM, GA, electric vehicle applications
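The 2RC network structure and the mean-square-error fitness minimised by the GA can be sketched directly. The impedance formula for a series resistor plus two parallel RC branches is standard circuit theory; the component values in the usage are illustrative assumptions, not the pack's fitted parameters.

```python
import math

def impedance_2rc(freq_hz, r0, r1, c1, r2, c2):
    """Complex impedance of the 2RC equivalent circuit: a series
    resistor r0 plus two parallel RC networks (r1,c1) and (r2,c2)."""
    w = 2.0 * math.pi * freq_hz
    z1 = r1 / (1 + 1j * w * r1 * c1)    # first parallel RC branch
    z2 = r2 / (1 + 1j * w * r2 * c2)    # second parallel RC branch
    return r0 + z1 + z2

def mse_cost(params, freqs, z_measured):
    """Mean squared error between measured and modelled complex
    impedance, the fitness a genetic algorithm would minimise."""
    errs = [abs(impedance_2rc(f, *params) - zm) ** 2
            for f, zm in zip(freqs, z_measured)]
    return sum(errs) / len(errs)
```

At DC the impedance tends to r0 + r1 + r2, at high frequency to r0 (the capacitors short the RC branches), and the cost is zero when evaluated at the true parameters.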
Procedia PDF Downloads 125
5206 Numerical Analysis of Laminar Reflux Condensation from Gas-Vapour Mixtures in Vertical Parallel Plate Channels
Authors: Foad Hassaninejadafarahani, Scott Ormiston
Abstract:
Reflux condensation occurs in vertical channels and tubes when there is an upward core flow of vapor (or a gas-vapor mixture) and a downward flow of the liquid film. Understanding this condensation configuration is crucial in the design of reflux condensers and distillation columns, and in loss-of-coolant safety analyses of nuclear power plant steam generators. The unique feature of this flow is the upward flow of the vapor-gas mixture (or pure vapor), which retards the liquid flow via shear at the liquid-mixture interface. The present model solves the full elliptic governing equations in both the film and the gas-vapor core flow. The computational mesh is non-orthogonal and adapts dynamically to the phase interface, thus producing a sharp and accurate interface. Shear forces and heat and mass transfer at the interface are accounted for fundamentally. This modeling is a significant step beyond current capabilities because it removes the limitations of previous reflux condensation models, which inherently cannot account for the detailed local balances of shear, mass, and heat transfer at the interface. Discretisation is based on a finite volume method and a co-located variable storage scheme, and an in-house computer code was developed to implement the numerical solution scheme. Detailed results are presented for laminar reflux condensation from steam-air mixtures flowing in vertical parallel-plate channels, including velocity and pressure profiles as well as axial variations of film thickness, Nusselt number and interface gas mass fraction.
Keywords: reflux, condensation, CFD, two-phase flow, Nusselt number
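The axial Nusselt number reported above is, in post-processing terms, just a nondimensionalised local heat transfer coefficient. A minimal sketch of that reduction is shown below; the definition (wall heat flux over bulk-to-wall temperature difference, scaled by a hydraulic diameter and fluid conductivity) is generic, and the numbers in the usage are illustrative assumptions rather than values from the study.

```python
def local_nusselt(q_wall, t_bulk, t_wall, hydraulic_diameter, k_fluid):
    """Local Nusselt number from a computed wall heat flux [W/m^2]:
    h = q_wall / (T_bulk - T_wall), then Nu = h * D_h / k_fluid."""
    h = q_wall / (t_bulk - t_wall)          # local heat transfer coefficient
    return h * hydraulic_diameter / k_fluid
```

For a wall flux of 1000 W/m², a 40 K bulk-to-wall difference, a 0.02 m hydraulic diameter and k = 0.6 W/(m K), the local Nusselt number is about 0.83.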
Procedia PDF Downloads 364
5205 High-Resolution Spatiotemporal Retrievals of Aerosol Optical Depth from Geostationary Satellite Using Sara Algorithm
Authors: Muhammad Bilal, Zhongfeng Qiu
Abstract:
Aerosols, suspended particles in the atmosphere, play an important role in the Earth's energy budget, climate change, degradation of atmospheric visibility, urban air quality, and human health. To fully understand aerosol effects, retrieval of aerosol optical properties such as aerosol optical depth (AOD) at high spatiotemporal resolution is required. Therefore, in the present study, hourly AOD observations at 500 m resolution were retrieved from the Geostationary Ocean Color Imager (GOCI) using the simplified aerosol retrieval algorithm (SARA) over the urban area of Beijing for the year 2016. The SARA requires top-of-atmosphere (TOA) reflectance, solar and sensor geometry information, and surface reflectance observations to retrieve an accurate AOD. For validation of the GOCI-retrieved AOD, AOD measurements were obtained from the Aerosol Robotic Network (AERONET) version 3 level 2.0 (cloud-screened and quality-assured) data. Errors and uncertainties were reported using the root mean square error (RMSE), relative percent mean error (RPME), and the expected error (EE = ±(0.05 + 0.15 AOD)). Results showed that the high spatiotemporal GOCI AOD observations were well correlated with the AERONET AOD measurements, with a correlation coefficient (R) of 0.92, an RMSE of 0.07, and an RPME of 5%, and 90% of the observations were within the EE. The results suggest that the SARA is robust and able to retrieve high-resolution spatiotemporal AOD observations over urban areas using a geostationary satellite.
Keywords: AERONET, AOD, SARA, GOCI, Beijing
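The validation metrics used above can be sketched directly: the RMSE against the AERONET reference and the fraction of retrievals falling inside the expected error envelope EE = ±(0.05 + 0.15 AOD). The formulas follow the definitions stated in the abstract; the two-point data set in the usage is an illustrative assumption, not the study's match-up data.

```python
import math

def aod_validation(retrieved, reference):
    """RMSE and the percentage of retrievals within the expected error
    envelope EE = +/-(0.05 + 0.15 * AOD_reference), the metrics used
    to validate satellite AOD against AERONET."""
    n = len(retrieved)
    rmse = math.sqrt(sum((r - a) ** 2
                         for r, a in zip(retrieved, reference)) / n)
    within = sum(1 for r, a in zip(retrieved, reference)
                 if abs(r - a) <= 0.05 + 0.15 * a)   # EE envelope test
    return rmse, 100.0 * within / n
```

For one perfect and one strongly biased retrieval against a reference AOD of 0.5, the EE fraction is 50% and the RMSE about 0.14.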
Procedia PDF Downloads 171
5204 Control of Base Isolated Benchmark using Combined Control Strategy with Fuzzy Algorithm Subjected to Near-Field Earthquakes
Authors: Hashem Shariatmadar, Mozhgansadat Momtazdargahi
Abstract:
The purpose of structural control against earthquakes is to dissipate the earthquake input energy and reduce the plastic deformation of structural members. Different methods of structural control, such as active, semi-active, passive and hybrid systems, can be used to reduce the structural response. In this paper two different combined control systems are used: the first comprises a base isolator and multiple tuned mass dampers (BI & MTMD), and the second combines a hybrid base isolator with multiple tuned mass dampers (HBI & MTMD), for controlling an eight-story isolated benchmark steel structure. The active control force of the hybrid isolator is estimated by a fuzzy logic algorithm. The influence of the combined systems on the response of the benchmark structure under two near-field earthquakes (Newhall and El Centro) is evaluated by nonlinear dynamic time-history analysis. Combined control systems consisting of passive or active systems installed in parallel with base-isolation bearings can significantly reduce the response quantities (relative and absolute displacement) of base-isolated structures. Therefore, in the design and control of irregular isolated structures using the proposed control systems, the structural demands (relative and absolute displacement, etc.) in each direction must be considered separately.
Keywords: base-isolated benchmark structure, multi-tuned mass dampers, hybrid isolators, near-field earthquake, fuzzy algorithm
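A common starting point for selecting tuned-mass-damper parameters like those used above is the classical Den Hartog tuning for a single TMD on an undamped structure: the optimal frequency ratio and damping ratio follow from the damper-to-structure mass ratio. This is the textbook single-TMD rule, not the paper's MTMD/fuzzy design; the mass ratio and structure frequency in the usage are illustrative assumptions.

```python
def den_hartog_tmd(mass_ratio, structure_freq_hz):
    """Classical Den Hartog tuning for a single tuned mass damper on
    an undamped primary structure: returns the optimal damper
    frequency [Hz] and optimal damper damping ratio for a given
    damper-to-structure mass ratio mu."""
    mu = mass_ratio
    f_opt = 1.0 / (1.0 + mu)                                  # frequency ratio
    zeta_opt = (3.0 * mu / (8.0 * (1.0 + mu) ** 3)) ** 0.5    # damping ratio
    return f_opt * structure_freq_hz, zeta_opt
```

For a typical 5% mass ratio on a 1 Hz structure, the damper is tuned slightly below the structural frequency with roughly 13% damping.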
Procedia PDF Downloads 304